CN116188647A - Control method, intelligent terminal and storage medium

Info

Publication number: CN116188647A
Application number: CN202310168444.5A
Authority: CN (China)
Prior art keywords: acquiring, virtual character, virtual, control method, user
Legal status: Pending (the status shown is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 李铭浩
Current Assignee: Shanghai Chuanying Information Technology Co Ltd
Original Assignee: Shanghai Chuanying Information Technology Co Ltd
Application filed by Shanghai Chuanying Information Technology Co Ltd
Priority to CN202310168444.5A
Publication of CN116188647A


Classifications

    • G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation → G06T 13/20 3D [Three Dimensional] animation → G06T 13/40 3D animation of characters, e.g. humans, animals or virtual beings
    • G06T 19/00 Manipulating 3D models or images for computer graphics → G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics → G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2012 Colour editing, changing, or manipulating; Use of colour codes
    • G06T 2219/2021 Shape modification

Abstract

The application provides a control method, an intelligent device and a storage medium. The control method includes the following steps: acquiring biometric features; acquiring or determining a virtual character according to the biometric features; and, in response to playing a target song, controlling the virtual character to sing the target song. With this method and device, a virtual character can be constructed from the user's biometric features and give a personal live concert performance, improving the user's enjoyment.

Description

Control method, intelligent terminal and storage medium
Technical Field
The application relates to the technical field of computers, in particular to a control method, an intelligent terminal and a storage medium.
Background
With the development of information technology, people are paying more and more attention to the field of intelligent information interaction. The metaverse is a virtual reality space in which users can communicate and interact with environments and other people that are determined or generated by technological means. The metaverse is a virtual world built on top of the traditional network space and constructed with the growing maturity of various digital technologies; it maps onto, yet remains independent of, the real world. In the virtual world of the metaverse, large numbers of people can gather for entertainment, work, and socializing.
Entertainment functions such as singing, dancing or live interaction in a virtual world are mainly realized through the following technologies:
Technology one: interactive live broadcast in which a virtual person answers questions posed through a bullet-screen (barrage) module.
Technology two: a singing method based on a virtual scene, in which the virtual scene and image are determined or generated from songs and personalized interaction is realized while the user sings.
Technology three: presetting a dance library for a virtual object, obtaining musical factors from an audio file, and splicing together dance movements extracted from the dance library according to those musical factors.
In the process of conceiving and implementing the present application, the inventors found that the above technical solutions have at least the following problems. In technology one, interaction happens only through question answering, the mode is single, and the avatar cannot be constructed from the user's own person, so it is of little interest. In technology two, the user can only sing, while the dancing, virtual figures and scenes are configured by the system; the user can neither create and adjust the virtual figure nor use their own voice to obtain a good singing effect, so it is also of little interest. In technology three, dance actions must be spliced from a dance action library, so the result tends to be fixed and the interest is reduced.
Therefore, a solution that enhances the user's enjoyment is needed.
The foregoing description is provided for general background information and does not necessarily constitute prior art.
Disclosure of Invention
In view of the above technical problems, the application provides a control method, an intelligent terminal and a storage medium that can construct a virtual concert scene from a user's personal characteristics and perform a personal live concert, thereby improving the user's enjoyment.
The application provides a control method, which comprises the following steps:
S10: acquiring biological characteristics;
S20: acquiring or determining a virtual character according to the biological characteristics;
S30: in response to playing the target song, the virtual character is controlled to sing the target song.
Optionally, the step S20 includes at least one of:
constructing a virtual character image of the virtual character according to the biological characteristics;
and extracting sound features according to the biological features, and combining the sound features with the virtual character image to obtain the virtual character.
Optionally, the step of constructing the virtual character image of the virtual character according to the biometric feature includes:
acquiring body shape characteristics according to the biological characteristics, and constructing the virtual human image according to the body shape characteristics; and/or,
acquiring facial features according to the biological characteristics, and constructing the virtual human image according to the facial features.
Optionally, the step S10 includes at least one of:
acquiring image data, and acquiring or confirming appearance characteristics according to the image data;
and acquiring audio data, and acquiring or confirming tone characteristics according to the audio data.
Optionally, the step of acquiring or confirming the appearance feature from the image data includes:
acquiring multi-frame images with different angles according to the image data;
and acquiring or confirming the appearance characteristic according to the multi-frame images of different angles.
Optionally, the step of acquiring or confirming the tone characteristic according to the audio data includes:
acquiring or confirming original tone characteristics according to the audio data;
and acquiring custom trimming information, and trimming the tone according to the custom trimming information to obtain trimmed tone characteristics.
Optionally, the step S30 includes:
and determining or generating dance actions and/or singing mouth shapes according to the target songs, and controlling the virtual character to sing according to the dance actions and/or the singing mouth shapes.
Optionally, the control method further comprises at least one of:
constructing a virtual character background stage according to the target song;
adding clothing elements for the virtual character according to the target song;
and making a video in the process of singing the target song by the virtual character.
The application also provides an intelligent terminal, including a memory and a processor, wherein a control program is stored in the memory and, when executed by the processor, implements the steps of any of the control methods described above.
The present application also provides a storage medium storing a computer program which, when executed by a processor, implements the steps of any of the control methods described above.
As described above, the control method of the present application includes: acquiring biometric features; acquiring or determining a virtual character according to the biometric features; and, in response to playing a target song, controlling the virtual character to sing the target song. Through this technical scheme, a virtual character can be constructed from the biometric features and can give a personal live concert performance, which greatly enriches the ways of playing in the metaverse and improves the user's enjoyment.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic hardware structure of a mobile terminal implementing various embodiments of the present application;
fig. 2 is a schematic diagram of a communication network system according to an embodiment of the present application;
fig. 3 is a flow chart of a control method shown according to the first embodiment;
fig. 4 is a schematic flow chart of determining a virtual character from a biological feature according to a control method according to a second embodiment;
fig. 5 is a schematic flow chart of the acquisition of the biological feature involved in the control method according to the third embodiment;
fig. 6 is another flow chart of the control method shown according to the fifth embodiment.
The realization, functional characteristics and advantages of the present application will be further described with reference to the embodiments, referring to the attached drawings. Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the element defined by the phrase "comprising one … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element, and furthermore, elements having the same name in different embodiments of the present application may have the same meaning or may have different meanings, a particular meaning of which is to be determined by its interpretation in this particular embodiment or by further combining the context of this particular embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms; these terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context. Furthermore, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including" specify the presence of stated features, steps, operations, elements, components, items, categories and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, categories and/or groups. The terms "or", "and/or" and "including at least one of" as used herein may be construed as inclusive, meaning any one or any combination. For example, "including at least one of A, B, C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C"; likewise, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is in some way inherently mutually exclusive.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of the steps is not strictly limited and they may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrase "if determined" or "if detected (stated condition or event)" may be interpreted as "when determined" or "in response to determination" or "when detected (stated condition or event)" or "in response to detection (stated condition or event), depending on the context.
It should be noted that step numbers such as S10 and S20 are used herein only to describe the corresponding content more clearly and concisely, and do not constitute a substantive limitation on the order; those skilled in the art may, when implementing the application, execute S20 first and then S10, and such implementations remain within the scope of protection of the present application.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" for representing elements are used only for facilitating the description of the present application, and are not of specific significance per se. Thus, "module," "component," or "unit" may be used in combination.
The intelligent terminal may be implemented in various forms. For example, the smart terminals described in the present application may include smart terminals such as cell phones, tablet computers, notebook computers, palm computers, personal digital assistants (Personal Digital Assistant, PDA), portable media players (Portable Media Player, PMP), navigation devices, wearable devices, smart bracelets, pedometers, and stationary terminals such as digital TVs, desktop computers, and the like.
The following description will be given taking a mobile terminal as an example, and those skilled in the art will understand that the configuration according to the embodiment of the present application can be applied to a fixed type terminal in addition to elements particularly used for a moving purpose.
Referring to fig. 1, which is a schematic hardware structure of a mobile terminal implementing various embodiments of the present application, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an a/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the mobile terminal structure shown in fig. 1 is not limiting of the mobile terminal and that the mobile terminal may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The following describes the components of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be used for receiving and transmitting signals during information reception or a call; specifically, downlink information from the base station is received and passed to the processor 110 for processing, and uplink data is transmitted to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), TDD-LTE (Time Division Duplexing-Long Term Evolution) and 5G, among others.
WiFi belongs to a short-distance wireless transmission technology, and a mobile terminal can help a user to send and receive e-mails, browse web pages, access streaming media and the like through the WiFi module 102, so that wireless broadband Internet access is provided for the user. Although fig. 1 shows a WiFi module 102, it is understood that it does not belong to the necessary constitution of a mobile terminal, and can be omitted entirely as required within a range that does not change the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a talk mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the mobile terminal 100. The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode, and the processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound in a phone call mode, a recording mode, a voice recognition mode and the like, and can process such sound into audio data. In the case of a phone call mode, the processed audio (voice) data may be converted into a format that can be transmitted to the mobile communication base station via the radio frequency unit 101 and output. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor and a proximity sensor, optionally, the ambient light sensor may adjust the brightness of the display panel 1061 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1061 and/or the backlight when the mobile terminal 100 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for applications of recognizing the gesture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; as for other sensors such as fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc. that may also be configured in the mobile phone, the detailed description thereof will be omitted.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Alternatively, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 1071 or thereabout by using any suitable object or accessory such as a finger, a stylus, etc.) and drive the corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Optionally, the touch detection device detects the touch azimuth of the user, detects a signal brought by touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends the touch point coordinates to the processor 110, and can receive and execute commands sent from the processor 110. Further, the touch panel 1071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 107 may include other input devices 1072 in addition to the touch panel 1071. Alternatively, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, mouse, joystick, etc., as specifically not limited herein.
Alternatively, the touch panel 1071 may overlay the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or thereabout, the touch panel 1071 is transferred to the processor 110 to determine the type of touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of touch event. Although in fig. 1, the touch panel 1071 and the display panel 1061 are two independent components for implementing the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 may be integrated with the display panel 1061 to implement the input and output functions of the mobile terminal, which is not limited herein.
The interface unit 108 serves as an interface through which at least one external device can be connected with the mobile terminal 100. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and an external device.
Memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, and alternatively, the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the handset, etc. In addition, memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, the application processor optionally handling mainly an operating system, a user interface, an application program, etc., the modem processor handling mainly wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power source 111 (e.g., a battery) for supplying power to the respective components, and preferably, the power source 111 may be logically connected to the processor 110 through a power management system, so as to perform functions of managing charging, discharging, and power consumption management through the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the mobile terminal of the present application is based will be described below.
Referring to fig. 2, fig. 2 is a schematic diagram of a communication network system provided in an embodiment of the present application. The communication network system is an LTE system of the universal mobile communication technology, and includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203 and an operator IP service 204, which are connected in communication in sequence.
Alternatively, the UE201 may be the terminal 100 described above, which is not described here again.
The E-UTRAN202 includes eNodeB2021 and other eNodeB2022, etc. Alternatively, the eNodeB2021 may connect with other enodebs 2022 over a backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide access for the UE201 to the EPC 203.
EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and so on. Optionally, MME2031 is a control node that handles signaling between UE201 and EPC203, providing bearer and connection management. HSS2032 is used to provide registers such as a home location register (not shown) and to hold user-specific information about service characteristics, data rates and the like. All user data may be sent through SGW2034, PGW2035 may provide IP address allocation and other functions for UE201, and PCRF2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem ), or other IP services, etc.
Although the LTE system is described above as an example, it should be understood by those skilled in the art that the present application is not limited to LTE systems, but may be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, 5G, and future new network systems (e.g., 6G), etc.
Based on the above-mentioned mobile terminal hardware structure and communication network system, various embodiments of the present application are presented.
First embodiment
Referring to fig. 3, fig. 3 is a flowchart of a control method according to a first embodiment, where the control method is applicable to an intelligent terminal (such as a mobile phone), and includes the following steps:
S10: acquiring biological characteristics.
In this embodiment, a pre-stored biometric feature of the user is obtained; alternatively, the biometric feature is determined, generated and output when the user collects it on site with the intelligent terminal, or the user designs the biometric feature by means of a biometric-information editing function supported by the intelligent terminal, so as to determine or generate the biometric feature. Biometric features are information related to the physiological and behavioural characteristics of a natural person or animal, including but not limited to the face, body shape, height, voice, limb movements, and the like. In this embodiment, the acquired biometric feature includes at least one of the above or similar items of information.
S20: and acquiring or determining the virtual character according to the biological characteristics.
In this embodiment, after the biometric feature is acquired, a virtual character is acquired from it, or a virtual character is constructed and determined from it. The virtual character is one of the virtual concert elements required to stage a virtual concert. Optionally, the virtual concert elements are constructed to map onto, yet remain independent of, a live concert scene in the real world, and include but are not limited to character figures, stages, lighting, audio, dance movements, audience and the like.
S30: in response to playing the target song, the virtual character is controlled to sing the target song.
In this embodiment, in response to playing a target song, the virtual character and the target song obtained according to the biological characteristics are combined, and the virtual character is controlled to sing the target song, so that a personal virtual concert is achieved.
With this scheme, the embodiment acquires biometric features, acquires or determines a virtual character from them, and, in response to playing a target song, controls the virtual character to sing the target song. The virtual character is constructed from the biometric features and performs a personal live concert in the metaverse, which greatly enriches the ways of playing in the metaverse and improves the user's enjoyment.
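For illustration only, the three steps can be chained as in the minimal Python sketch below; the data structures, function names and placeholder values are assumptions made for the sketch and are not taken from the application.

```python
from dataclasses import dataclass

@dataclass
class BiometricFeatures:
    appearance: dict   # e.g. body-shape and facial measurements (placeholder fields)
    timbre: dict       # e.g. pitch and loudness estimates (placeholder fields)

@dataclass
class VirtualCharacter:
    figure: dict       # parameters of the constructed virtual human image
    voice: dict        # sound features attached to the figure

def acquire_biometric_features() -> BiometricFeatures:
    """S10: obtain pre-stored or live-captured features (stubbed with placeholder values)."""
    return BiometricFeatures(appearance={"height_cm": 170}, timbre={"pitch_hz": 180.0})

def build_virtual_character(bio: BiometricFeatures) -> VirtualCharacter:
    """S20: build the virtual human image from the appearance, then attach the voice."""
    return VirtualCharacter(figure=dict(bio.appearance), voice=dict(bio.timbre))

def perform_song(character: VirtualCharacter, song: str) -> None:
    """S30: on playback of the target song, drive the character to sing it."""
    print(f"Character {character.figure} sings '{song}' with voice {character.voice}")

if __name__ == "__main__":
    bio = acquire_biometric_features()
    avatar = build_virtual_character(bio)
    perform_song(avatar, "target_song.mp3")
```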
Second embodiment
On the basis of any one of the above embodiments of the present application, the present embodiment further discloses a method for determining a virtual character according to a biometric feature. Referring to fig. 4, fig. 4 is a schematic flow chart of determining a virtual character according to a biological feature according to a control method according to a second embodiment. The step S20 may include at least one of the following steps of:
step S201, constructing a virtual portrait of the virtual character according to the biological characteristics;
and step S202, extracting sound features according to the biological features, and combining the sound features with the virtual character image to obtain the virtual character.
In this embodiment, a virtual human image of the virtual character is constructed from the acquired biometric features based on a reconstruction technique; optionally, the constructed virtual human image may include, but is not limited to, the facial features, body form and the like of the virtual character.
Optionally, sound features are extracted from the acquired biometric features; the sound features may include, but are not limited to, timbre, volume, speaking rate and the like. The sound features are then combined with the constructed virtual human image to obtain the virtual character.
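As a rough illustration of how such sound features might be estimated from a recording, the sketch below uses librosa as an assumed audio-analysis toolkit; the library choice, function name and file name are assumptions rather than anything specified by the application.

```python
import numpy as np
import librosa  # assumed toolkit; any audio-analysis library could play this role

def extract_voice_features(path: str) -> dict:
    """Estimate basic voice features (pitch, loudness, duration) from a recording."""
    y, sr = librosa.load(path, sr=None, mono=True)
    # Fundamental-frequency (pitch) estimate over voiced frames.
    f0, voiced_flag, voiced_probs = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"))
    # Root-mean-square energy as a rough loudness measure.
    rms = float(librosa.feature.rms(y=y).mean())
    return {
        "pitch_hz": float(np.nanmedian(f0)),
        "rms_volume": rms,
        "duration_s": len(y) / sr,
    }

# Example call; the file name is hypothetical:
# features = extract_voice_features("user_recording.wav")
```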
Optionally, in the present embodiment, the step S201 of constructing the virtual portrait of the virtual character according to the biometric feature may include:
Step S2011, obtaining body shape characteristics according to the biological characteristics, and constructing the virtual human image according to the body shape characteristics; and/or,
step S2012, obtaining facial features according to the biological features, and constructing the virtual human figure according to the facial features.
In this embodiment, body shape features are obtained from the acquired biometric features. Taking the user's body shape features as an example, they may include, but are not limited to, the torso, the limbs and the like. The virtual human image of the virtual character is then constructed from the acquired body shape features based on a reconstruction technique.
Then, facial features are acquired from the acquired biometric features. Taking the facial features of the user as an example, the facial features may include, but are not limited to, facial shapes, five sense organs, hairstyles, and the like. And constructing a virtual figure of the virtual figure based on the reconstruction technology according to the acquired facial features.
Alternatively, the embodiment may further add an appropriate virtual face to the virtual character on the basis of the virtual human figure constructed only according to the figure characteristics, so as to construct a virtual human figure of the virtual character.
Alternatively, the present embodiment may further add a trunk, limbs, and the like in a proper ratio to the virtual character on the basis of the virtual human face constructed based on only the facial features, to construct a virtual human image of the virtual character.
Optionally, the present embodiment may further include:
and acquiring user-defined shaping information of a user, and performing image adjustment on the virtual human figure according to the user-defined shaping information to obtain a shaped virtual human figure.
In this embodiment, user-defined shaping information of the user is obtained, and the virtual human image constructed in the above steps is adjusted according to the user-defined shaping information to obtain the reshaped virtual human image. Optionally, the types of user-defined shaping information may include, but are not limited to, skin tone, beautification, face shape, body build, height, leg length, arm length, eyes, nose, mouth, hairstyle, and the like.
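A minimal sketch of applying such shaping information, assuming the virtual human image is stored as a flat set of named parameters; the attribute names are illustrative, not taken from the application.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class FigureParams:
    # A few of the adjustable attributes mentioned above; the names are illustrative.
    skin_tone: str = "neutral"
    face_shape: str = "oval"
    height_cm: float = 170.0
    leg_length_cm: float = 90.0
    arm_length_cm: float = 60.0

def apply_custom_shaping(figure: FigureParams, shaping: dict) -> FigureParams:
    """Overwrite only the attributes the user chose to reshape, leaving the rest untouched."""
    valid = {k: v for k, v in shaping.items() if hasattr(figure, k)}
    return replace(figure, **valid)

base = FigureParams()
reshaped = apply_custom_shaping(base, {"height_cm": 175.0, "skin_tone": "warm"})
print(reshaped)
```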
In this embodiment, the virtual human image of the virtual character is constructed from the biometric features, sound features are extracted from the biometric features, and the sound features are combined with the virtual human image to obtain the virtual character. Constructing the virtual character from both appearance features and sound features further enriches metaverse gameplay and increases the fun.
Third embodiment
On the basis of any one of the above embodiments of the present application, the present embodiment further discloses a method for acquiring a biological feature. Referring to fig. 5, fig. 5 is a schematic flow chart of a control method according to a third embodiment for acquiring a biological feature. The acquiring of the biometric feature in step S10 may include at least one of:
step S101, acquiring image data, and acquiring or confirming appearance characteristics according to the image data;
step S102, acquiring audio data, and acquiring or confirming tone characteristics according to the audio data.
In this embodiment, image data is acquired. Optionally, the image data may be a picture or a video, and may include, but is not limited to, a whole-body image, a half-body image and a facial image. Appearance features are acquired or confirmed from the acquired image data; optionally, the appearance features may include, but are not limited to, body features and facial features. Optionally, the image data may be obtained as a picture or video uploaded from a local gallery, or as a picture or video shot on site by invoking the local camera, and so on.
Optionally, audio data is acquired; optionally, the audio data is audio data containing sound features, and timbre features are acquired or confirmed from the acquired audio data. Optionally, the audio data may be obtained as voice audio uploaded from a local audio library, or as voice audio recorded on site by invoking the microphone, and so on.
Optionally, in this embodiment, the step of acquiring or confirming the appearance feature according to the image data may include:
acquiring multi-frame images with different angles according to the image data;
and acquiring or determining the appearance characteristics according to the multi-frame images of different angles.
In this embodiment, multiple frame images of different angles are acquired according to the acquired image data, including pictures or videos. Optionally, when the acquired image data is video, extracting multi-frame images with different angles according to the acquired video; when the acquired image data is a picture, the acquired picture is rotated at different angles, and multi-frame images at different angles are extracted according to the rotated picture at different angles. Then, the appearance characteristic is acquired or determined according to the extracted multi-frame images with different angles.
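A sketch of both branches (sampling frames from a video, or rotating a single picture), assuming OpenCV as the image toolkit; the sampling strategy and rotation angles are illustrative choices, not specified by the application.

```python
import cv2  # assumed toolkit (opencv-python)

def frames_from_video(video_path: str, num_frames: int = 8):
    """Sample roughly evenly spaced frames from a video clip."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(num_frames):
        # Jump to an evenly spaced frame index and read it.
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * max(total - 1, 1) // max(num_frames - 1, 1))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames

def frames_from_picture(image_path: str, angles=(0, 90, 180, 270)):
    """Derive multiple views from a single picture by rotating it to different angles."""
    img = cv2.imread(image_path)
    if img is None:
        return []
    h, w = img.shape[:2]
    views = []
    for angle in angles:
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        views.append(cv2.warpAffine(img, m, (w, h)))
    return views
```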
Optionally, the present embodiment may further include:
in response to an instruction to invoke the local camera, providing user guidance prompts for different shooting angles;
and, in response to a shooting-completion instruction, acquiring pictures or videos taken from the different shooting angles.
In this embodiment, in response to the instruction to invoke the local camera, i.e. when shooting on site with the local camera, user guidance prompts for different shooting angles are provided at the same time. Optionally, the guidance may be given by voice prompts, and/or by text prompts, and/or by audio-video tutorials, and so on. Optionally, on-site shooting may capture pictures and/or videos; optionally, if pictures are shot on site, the number of pictures can be preset, and if a video is shot on site, its duration can be preset.
In response to the shooting-completion instruction, i.e. after the user finishes shooting according to the prompts, photos showing the different angles shot by the user on site are acquired, and/or videos showing the different angles shot by the user on site are acquired.
For example, when the user is guided by voice prompts to shoot at different angles, the user may be prompted by voice to slowly turn and nod the head so as to obtain different facial shooting angles, and/or to rotate the body to the left and right so as to obtain different body shooting angles.
Optionally, in this embodiment, the step of acquiring or confirming the tone characteristic according to the audio data may include:
acquiring or confirming original tone characteristics according to the audio data;
and acquiring custom trimming information, and trimming the tone according to the custom trimming information to obtain trimmed tone characteristics.
In this embodiment, the original timbre features are acquired or confirmed from the acquired audio data. Optionally, the original timbre features are obtained by processing the audio data with a voice-cloning technique. Custom trimming information is then acquired, and the acquired original timbre features are trimmed according to the custom trimming information to obtain the trimmed timbre features. Optionally, the types of custom trimming information may include, but are not limited to, fundamental frequency, pitch, volume, speaking rate and the like.
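As an illustration, the sketch below applies such trimming to a recording, assuming librosa and soundfile as the audio toolkits; the parameter names, file paths and values are illustrative only.

```python
import librosa           # assumed audio toolkit, not named in the application
import soundfile as sf   # assumed, used only for writing the result

def trim_timbre(path: str, out_path: str,
                pitch_steps: float = 0.0, speed: float = 1.0, gain: float = 1.0) -> None:
    """Apply user-defined pitch, speaking-rate and volume adjustments to a recording."""
    y, sr = librosa.load(path, sr=None, mono=True)
    if pitch_steps:
        y = librosa.effects.pitch_shift(y, sr=sr, n_steps=pitch_steps)  # pitch trimming
    if speed != 1.0:
        y = librosa.effects.time_stretch(y, rate=speed)                 # speaking-rate trimming
    sf.write(out_path, y * gain, sr)                                    # volume trimming

# Example call; the paths and values are hypothetical:
# trim_timbre("original_voice.wav", "trimmed_voice.wav", pitch_steps=2, speed=1.1, gain=0.9)
```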
With this scheme, the embodiment acquires image data and acquires or confirms the appearance features from it, and acquires audio data and acquires or confirms the timbre features from it. The corresponding biometric features can thus be extracted from image data and audio data, and the virtual human image constructed from them, which enriches metaverse gameplay and increases the fun.
Fourth embodiment
On the basis of any one of the embodiments of the present application, the present embodiment further discloses a method for controlling the virtual character to sing the target song. In response to playing the target song, controlling the virtual character to sing the target song may include:
step S301, dance movements and/or singing mouth shapes are determined or generated according to the target songs, and the virtual characters are controlled to sing according to the dance movements and/or the singing mouth shapes.
In this embodiment, in response to playing a target song, a dance action and/or a singing mouth shape are determined or generated according to the played target song, and the virtual character is controlled to sing the target song according to the dance action and/or the singing mouth shape in combination with the constructed virtual character and the dance action and/or the singing mouth shape.
More specifically, the target song is obtained; optionally, it may be a target song recommended by the system and/or a target song selected by the user. Song features are then extracted from the acquired target song. Optionally, the song features may include, but are not limited to, the background music, volume, tempo, rhythm, vocal parts, genre, lyrics and the like of the song. Custom song-creation information is then acquired, and the extracted song features are re-created according to the custom song-creation information to obtain the re-created song features. Optionally, the types of custom song-creation information may include, but are not limited to, tempo, volume level, lyric content, sound effects and the like. Next, a singing version of the song in the specific timbre is determined or generated by the song synthesis system by combining the timbre features and the song features.
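A sketch of how coarse song features could be read and then overridden by custom song-creation information, assuming librosa as the analysis toolkit; the field names and example values are illustrative assumptions.

```python
import numpy as np
import librosa  # assumed toolkit for the feature estimates below

def extract_song_features(song_path: str) -> dict:
    """Read coarse song features (tempo, beat count, loudness, duration) for later re-creation."""
    y, sr = librosa.load(song_path, sr=None, mono=True)
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    return {
        "tempo_bpm": float(np.atleast_1d(tempo)[0]),
        "num_beats": int(len(beat_frames)),
        "rms_volume": float(librosa.feature.rms(y=y).mean()),
        "duration_s": float(librosa.get_duration(y=y, sr=sr)),
    }

# Custom song-creation information can then override selected fields, e.g.:
# features = extract_song_features("target_song.mp3")
# features.update({"tempo_bpm": 96.0})   # the user slows the rhythm down
```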
And then, responding to the singing songs with the specific tone, determining or generating dance actions and/or singing mouth shapes according to the played singing songs, and controlling the virtual character to sing according to the dance actions and/or the singing mouth shapes.
Optionally, the dance actions and/or singing mouth shapes may be determined or generated by an animation generation system from the target song. Optionally, the animation generation system can convert audio into animation, including not only cartoon animation but also humanoid virtual-human animation, and the determined or generated animation may include mouth shapes aligned with the audio, limb movements and the like.
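A minimal sketch of the kind of data such an animation step might produce, with mouth-shape keyframes aligned to hypothetical word timings and a separate dance track; the structures and the simple timing rule are assumptions, not the application's animation generation system.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MouthShapeKeyframe:
    time_s: float   # position within the synthesized song audio
    viseme: str     # mouth shape shown at that instant

@dataclass
class DanceKeyframe:
    time_s: float
    pose: str       # named pose, e.g. drawn from a motion library

@dataclass
class PerformanceAnimation:
    mouth_track: List[MouthShapeKeyframe]
    dance_track: List[DanceKeyframe]

def align_mouth_to_lyrics(word_timings: List[Tuple[float, str]]) -> List[MouthShapeKeyframe]:
    """Turn (start_time, word) timings into a simple open/closed mouth track."""
    track = []
    for start, _word in word_timings:
        track.append(MouthShapeKeyframe(time_s=start, viseme="open"))
        track.append(MouthShapeKeyframe(time_s=start + 0.2, viseme="closed"))
    return track

# Example with hypothetical word timings taken from the synthesized song:
animation = PerformanceAnimation(
    mouth_track=align_mouth_to_lyrics([(0.0, "hello"), (0.6, "world")]),
    dance_track=[DanceKeyframe(0.0, "step_left"), DanceKeyframe(1.0, "spin")],
)
print(len(animation.mouth_track), len(animation.dance_track))
```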
Illustratively, in a metaverse song-creation competition, each contestant exercises their own creativity based on 15 songs and 10 virtual figures of different styles specified by the organizer.
First, each contestant selects a song of a given genre as the creative basis and loads it into an editor, where the contestant adjusts or adds vocal parts and fine-tunes the rhythm, lyrics and so on until the effect the contestant wants to express is achieved.
Each contestant then selects a virtual human figure as the performance basis and adjusts the biometric characteristics of the virtual human, such as the size and colour of the nose, eyes and pupils, the skin tone, face shape, cheekbone height, nose bridge, chin, hand and foot size, leg and arm length, height, bust-waist-hip measurements and so on, so that the virtual figure matches the created song.
After the created song and the virtual figure are uploaded, the system automatically generates mouth-shape information, dance-movement information and the like for the virtual human. Each contestant adjusts this information until satisfied. After the adjustment, the system uses the adjusted information to generate the animation and the song audio based on the animation generation system, the song synthesis system and so on, and fuses them into the final result, which serves as the contestant's entry.
Optionally, in this embodiment, the control method may further include at least one of:
constructing a virtual character background stage according to the target song;
adding clothing elements for the virtual character according to the target song;
and making a video in the process of singing the target song by the virtual character.
In this embodiment, a background stage matching the genre is constructed for the virtual character according to the obtained target song; optionally, the elements of the background stage may include, but are not limited to, the stage lighting, sound effects, scenery and the like.
Optionally, clothing elements matching the genre are added to the virtual character according to the obtained target song; optionally, the clothing elements may include, but are not limited to, the virtual character's clothes, shoes, accessories and the like.
Optionally, the process of singing the target song by the virtual character, namely, the process of singing the target song by the virtual character according to dance movements, singing mouth shapes, background stages, clothing elements and the like is made into a personal virtual concert video. Optionally, a social sharing channel or platform for the user is provided to support the user to perform social sharing on the singing video of the personal virtual concert through social media. Optionally, the user can use social media to realize live broadcasting of singing, video release, personal virtual IP creation or social playing methods such as PK scoring among different users through the intelligent terminal device.
Illustratively, in a metaverse social APP, after the user opens the APP, the user chooses to create a virtual human figure, and a selection button pops up asking whether to upload or shoot a video. If the user chooses to upload a video, the video is transmitted to the backend over the network; if the user chooses to shoot a video, the front camera of the mobile phone is invoked, a 10-second video recording starts, and the user is simultaneously reminded by voice to slowly turn and nod the head so as to capture multi-frame images from different angles, after which the shot video is transmitted to the backend. Based on the uploaded or shot video transmitted to the backend, the facial image of the virtual human is built using different frames from different angles; after the facial image is built, the torso and limb images are constructed to match it and transmitted to the APP side, where the user can adjust the nose, eyes, pupil size and colour, skin tone, face shape, cheekbone height, nose bridge, chin, hand and foot size, leg and arm length, height, bust-waist-hip measurements and so on in the interface. After the user finishes adjusting and confirms, the result is stored on the cloud side and the APP side.
And popping up a selection button to ask a user to upload the audio or record the audio more than or equal to 30 seconds, enabling the user to self-define and adjust the fundamental frequency, the pitch, the volume and the speech speed of the audio, repairing or deleting the unsatisfactory fragments, then confirming, transmitting the adjusted audio to the background to extract the corresponding tone, and storing the tone to the cloud end and the APP end.
A recommended song list and a search box pop up, and the user can select up to 99 songs. After the user finishes selecting, the song features of each song are read from the backend for the user to customize: the user selects different segments or the whole song to adjust the tempo and volume, or adjusts different vocal parts in the song, and can also adjust the lyric content, which is required to match the original lyrics in length and rhythm. After the adjustment is completed, the backend generates a stage background for the corresponding song and clothing for the virtual character from the adjusted lyrics. For a love song, for example, background elements such as roses, a carousel and a ferris wheel are generated, and virtual clothing elements such as a skirt, student wear, JK wear, Lolita wear, a sailor suit, sportswear, sports shoes, jeans and the like are generated.
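A sketch of the genre-to-stage and genre-to-clothing mapping described above; the dictionaries are illustrative, with the "love" entries taken from the example and the other entries purely hypothetical.

```python
# Genre-to-scene mapping; the "love" entries follow the example above,
# the "rock" entries are hypothetical placeholders.
STAGE_ELEMENTS = {
    "love": ["roses", "carousel", "ferris wheel"],
    "rock": ["spotlights", "smoke", "large speakers"],
}

COSTUME_ELEMENTS = {
    "love": ["skirt", "student wear", "JK wear", "sailor suit"],
    "rock": ["leather jacket", "jeans", "sneakers"],
}

def dress_stage_and_character(genre: str) -> dict:
    """Pick background-stage and costume elements that match the song's genre."""
    return {
        "stage": STAGE_ELEMENTS.get(genre, ["plain stage"]),
        "costume": COSTUME_ELEMENTS.get(genre, ["casual wear"]),
    }

print(dress_stage_and_character("love"))
```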
The virtual portrait specified by the user is fused with the clothing element, etc.
And taking the user-designated tone and the adjusted songs as input, synthesizing by a background song synthesizing system, and sending the synthesized songs to an animation generating system to acquire the singing mouth shape and the dance movements.
The song, the singing mouth-shape information, the dance-movement information, the virtual human's clothing information and the stage background information are transmitted to the APP side, where they are fused and rendered into an animation; the song audio is fused in, and the animated video is played.
Users can live-stream or publish the generated videos to communities through the sharing function, for example sharing them to social platforms such as YouTube, Facebook and TikTok.
With the above scheme, a user can conveniently hold a personal concert in the metaverse using a portable terminal device, can select and re-create the songs to be sung, and can complete the performance of a personal live concert using a virtual human image and timbre of their own creation. This meets social needs, greatly enriches the ways of playing in the metaverse, and improves the user's enjoyment.
Fifth embodiment
On the basis of any of the above embodiments of the present application, this embodiment further discloses a control method. Referring to fig. 6, fig. 6 is another flow chart of the control method according to the fifth embodiment. The control method may include:
step S10, biological characteristics are acquired. Alternatively, the biometric features may include an outline feature and a sound feature. Alternatively, the step S1 may include the following steps S11 and S12:
step S11, acquiring image data, and acquiring or confirming appearance features according to the image data.
Alternatively, the manner of acquiring the image data may include acquiring a picture or video uploaded through a local gallery, or acquiring a picture or video taken on site by retrieving a local camera, or the like.
Alternatively, the step of acquiring the appearance feature from the image data may include the steps of S111 and S112 of:
step S111, multi-frame images with different angles are acquired according to the image data.
Optionally, the image data may be a picture or a video. Whether the acquired image data is a video is judged: when it is a video, multi-frame images at different angles are extracted from it; when it is a picture, the picture is rotated to different angles and multi-frame images are extracted from the rotated pictures.
And step S112, acquiring or confirming the appearance characteristic according to the multi-frame images of different angles.
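The following Python sketch illustrates step S111 under the assumption that the image data is a local file and OpenCV is available; the sampling interval and rotation angles are illustrative choices, not part of the original method:

```python
# Sketch of step S111: obtain multi-frame images at different angles,
# either by sampling frames from a video or by rotating a single picture.
import cv2

def frames_from_image_data(path: str, is_video: bool, step: int = 30) -> list:
    frames = []
    if is_video:
        # Video: sample every `step`-th frame to collect views from different angles.
        cap = cv2.VideoCapture(path)
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % step == 0:
                frames.append(frame)
            index += 1
        cap.release()
    else:
        # Single picture: rotate it to synthesize views at different angles.
        image = cv2.imread(path)
        for rotation in (None, cv2.ROTATE_90_CLOCKWISE,
                         cv2.ROTATE_180, cv2.ROTATE_90_COUNTERCLOCKWISE):
            frames.append(image if rotation is None else cv2.rotate(image, rotation))
    return frames
```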
And step S12, acquiring audio data, and acquiring or confirming tone characteristics according to the audio data.
Optionally, the manner of acquiring the audio data may include acquiring voice audio uploaded from a local audio library, or acquiring voice audio recorded on site by invoking the microphone, and the like. The timbre feature is one kind of sound feature.
Optionally, the step of acquiring the tone color feature according to the audio data may include the following steps S121 and S122:
step S121, acquiring or confirming the original tone characteristic according to the audio data.
Optionally, the original tone color features are obtained by processing the audio data through a voice cloning technique.
Step S122, obtaining custom trimming information, and performing tone trimming on the original tone characteristics according to the custom trimming information to obtain trimmed tone characteristics.
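A minimal sketch of steps S121 and S122 follows; the voice-cloning model is treated as a black box (the clone_timbre interface is an assumption), and the trimming keys are purely illustrative:

```python
# Sketch of S121 (voice cloning) and S122 (custom trimming of the cloned timbre).

def extract_original_timbre(audio_data: bytes, clone_timbre) -> dict:
    # S121: obtain the original tone color features through voice cloning.
    return clone_timbre(audio_data)

def trim_timbre(original: dict, custom_trim: dict) -> dict:
    # S122: apply the user's custom trimming information on top of the
    # cloned timbre to obtain the trimmed tone color features.
    trimmed = dict(original)
    trimmed.update(custom_trim)
    return trimmed

# Usage with a stand-in cloning function (illustrative parameter names).
original = extract_original_timbre(b"...", clone_timbre=lambda audio: {"pitch": 0.0, "brightness": 0.5})
print(trim_timbre(original, {"brightness": 0.7}))  # {'pitch': 0.0, 'brightness': 0.7}
```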
And step S20, acquiring or determining the virtual character according to the biological characteristics. Alternatively, the step S20 may include the following steps S21 and S22:
Step S21, constructing a virtual portrait of the virtual character according to the biological characteristics; optionally, step S21 may include the following steps S211 to S213:
Step S211, acquiring body shape features according to the biological characteristics, and constructing the virtual human figure according to the body shape features.
Optionally, when the acquired image is a whole-body image, body features in the appearance features are acquired according to the whole-body image, and the virtual human figure is constructed according to the body features.
And step S212, obtaining facial features according to the biological features, and constructing the virtual human figure according to the facial features.
Optionally, when the acquired image is not a whole body image, acquiring facial features in appearance features according to the acquired image, constructing the virtual human face image according to the facial features, and adding limb features on the basis of the virtual human face image to construct the virtual human image.
Step S213, acquiring the user's custom shaping information, and adjusting the virtual human figure according to the custom shaping information to obtain the shaped virtual human figure.
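The branch between whole-body and non-whole-body images (steps S211 to S213) can be sketched as follows; the feature-extraction helpers are assumed placeholders, not the actual reconstruction algorithms:

```python
# Sketch of steps S211-S213: build the virtual human figure from body or
# facial features depending on the image, then apply custom shaping.

def build_virtual_figure(image, helpers, custom_shaping=None) -> dict:
    if helpers.is_whole_body(image):
        # S211: a whole-body image yields body features for the full figure.
        figure = {"body": helpers.extract_body_features(image)}
    else:
        # S212: otherwise build the face from facial features and add limb features.
        figure = {"face": helpers.extract_face_features(image),
                  "body": helpers.default_limbs()}
    # S213: apply the user's custom shaping adjustments, if any.
    if custom_shaping:
        figure.update(custom_shaping)
    return figure
```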
Step S22, extracting sound features according to the biological characteristics, and combining the sound features with the virtual character image to obtain the virtual character. In this embodiment, the trimmed tone color features are obtained by processing the acquired audio data, and the trimmed tone color features are combined with the virtual character image to obtain the virtual character.
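In code, this combination can be represented simply as pairing the shaped figure with the trimmed timbre; the data model below is an illustrative assumption:

```python
# The virtual character as the pairing of the shaped virtual figure (step S21)
# with the trimmed tone color features (step S12). Field names are illustrative.
from dataclasses import dataclass

@dataclass
class VirtualCharacter:
    figure: dict   # shaped virtual human figure
    timbre: dict   # trimmed tone color features

character = VirtualCharacter(figure={"face": "...", "body": "..."},
                             timbre={"pitch": 0.0, "brightness": 0.7})
```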
S30: in response to playing the target song, the virtual character is controlled to sing the target song. Alternatively, the target song may include a user original song and a user selected song.
If the acquired target song is a user-selected song, step S30 may include the following steps S31 to S35:
Step S31, acquiring the target song, and acquiring song features according to the target song. Optionally, the manner of acquiring the target song may include acquiring a target song recommended by the system and/or acquiring a target song selected by the user; the song features may include, but are not limited to, the song's background music, volume, tempo, rhythm, vocal parts, music style, lyrics and the like.
Step S32, acquiring the user's custom song-creation information, and re-creating the song features according to the custom song-creation information to obtain the re-created song features.
Step S33, synthesizing a singing song with the specific timbre through the song synthesis system, according to the trimmed tone color features and the re-created song features.
Step S34, determining or generating dance movements and singing mouth shapes through the animation generation system according to the singing song.
Step S35, in response to playing the singing song, controlling the virtual character to sing the singing song according to the dance movements and the singing mouth shapes.
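An end-to-end sketch of steps S31 to S35 is shown below; the song synthesis system, animation generation system, player and character interfaces are all assumptions used only to make the flow concrete:

```python
# Sketch of the user-selected-song flow (S31-S35).

def perform_selected_song(song, custom_creation: dict, character, synth, animator, player):
    # S31: read the song features (background music, volume, tempo, lyrics, ...).
    features = song.features()
    # S32: apply the user's custom re-creation to the song features.
    features.update(custom_creation)
    # S33: synthesize the singing song with the character's trimmed timbre.
    singing_song = synth.synthesize(timbre=character.timbre, song=features)
    # S34: derive dance movements and singing mouth shapes from the synthesized song.
    dance, mouth = animator.generate(singing_song)
    # S35: while the song plays, drive the virtual character with the dance and mouth-shape data.
    player.play(singing_song, on_frame=lambda t: character.animate(dance, mouth, t))
```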
If the acquired target song is a user original song, step S30 may include the following step S36:
Step S36, in response to playing the user original song, determining or generating dance movements and singing mouth shapes through the animation generation system according to the user original song, and controlling the virtual character to sing the user original song according to the dance movements and the singing mouth shapes.
Optionally, the manner of acquiring the user original song may include acquiring a user original song uploaded by the user, or acquiring the user original song recorded in real time.
Optionally, the control method may further include at least one of:
Constructing a virtual character background stage according to the target song.
Adding clothing elements to the virtual character according to the target song.
Making the process of the virtual character singing the target song into a video, that is, producing a personal virtual concert video from the determined or generated dance movements, singing mouth shapes, background stage, clothing and the like.
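The optional steps above can be sketched as a simple mapping from the song's theme to stage and costume elements, followed by recording; the mapping tables and interfaces are illustrative only (the love-song example mirrors the one given in the previous embodiment):

```python
# Illustrative sketch: decorate the virtual character and stage from the target
# song's theme, then record the performance as a personal virtual concert video.

STAGE_ELEMENTS = {"love": ["roses", "carousel", "ferris wheel"]}
COSTUME_ELEMENTS = {"love": ["dress", "JK uniform", "sailor suit"]}

def decorate_and_record(song_theme: str, character, recorder):
    stage = STAGE_ELEMENTS.get(song_theme, ["plain stage"])
    costume = COSTUME_ELEMENTS.get(song_theme, ["casual wear"])
    character.wear(costume)        # add clothing elements to the virtual character
    recorder.start(stage=stage)    # build the background stage and start recording
    # ... the singing performance (dance movements, mouth shapes) runs here ...
    return recorder.stop()         # the finished personal virtual concert video
```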
In a mobile-phone concept experience store, in order to increase customer retention rate and dwell time and bring customers a technological experience, a "sing together" smart-screen project silently captures a whole-body image of the user through an RGB-D camera above the screen, reconstructs the face, torso, limbs and other features required for the virtual character, builds a virtual human image with the customer's appearance, and initializes its clothing. The screen pops up 10 recommended songs for the customer to choose from and provides a "change batch" button, until the user confirms a song. Singing mouth shapes, dance movements, virtual clothing and accessories, the stage background and other elements are then generated according to the selected song's style, music, melody, lyrics and the like.
After the required information is determined, the virtual character is fused with the clothing, accessories and stage, the song starts to play, and the singing mouth-shape and dance-movement information is combined with the virtual character image to generate an animation in real time. The user sings into a handheld microphone, the user's singing voice and the animation are played together on the large screen, and a score is given after the user finishes singing.
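The text does not specify how the score is computed; the sketch below assumes a simple pitch-contour comparison between the user's singing and the reference song, purely as an illustration of the scoring step:

```python
# Assumed scoring sketch: compare the user's pitch contour with the reference
# song's contour frame by frame (both pre-extracted as lists of Hz values).

def score_performance(user_pitch, reference_pitch) -> int:
    if not reference_pitch:
        return 0
    hits = 0
    for user_hz, ref_hz in zip(user_pitch, reference_pitch):
        # Count a frame as a hit when the user is within roughly a semitone (~6%).
        if ref_hz > 0 and abs(user_hz - ref_hz) / ref_hz < 0.06:
            hits += 1
    return round(100 * hits / len(reference_pitch))

print(score_performance([220.0, 247.0, 262.0], [220.0, 246.9, 261.6]))  # 100
```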
This embodiment adopts the above scheme, specifically: acquiring biological characteristics; acquiring or determining a virtual character according to the biological characteristics; controlling the virtual character to sing the target song in response to playing the target song; constructing a virtual character background stage according to the target song; adding clothing elements to the virtual character according to the target song; and making the process of the virtual character singing the target song into a video. A virtual character can thus be constructed from biological characteristics and give a personal live-concert performance in the metaverse, which greatly enriches the ways of playing in the metaverse, improves the user's fun experience, and further increases daily active users and retention.
The embodiment of the application also provides an intelligent terminal, which comprises a memory and a processor, wherein a control program is stored in the memory, and the control program is executed by the processor to realize the steps of the control method in any embodiment.
The embodiment of the application also provides a storage medium, and a control program is stored on the storage medium, and when the control program is executed by a processor, the steps of the control method in any one of the embodiments are implemented.
The embodiments of the intelligent terminal and the storage medium provided in the present application may include all technical features of any one of the embodiments of the control method, and the expansion and explanation contents of the description are substantially the same as those of each embodiment of the method, which are not repeated herein.
The present embodiments also provide a computer program product comprising computer program code which, when run on a computer, causes the computer to perform the method in the various possible implementations as above.
The embodiments also provide a chip including a memory for storing a computer program and a processor for calling and running the computer program from the memory, so that a device on which the chip is mounted performs the method in the above possible embodiments.
It can be understood that the above scenario is merely an example, and does not constitute a limitation on the application scenario of the technical solution provided in the embodiments of the present application, and the technical solution of the present application may also be applied to other scenarios. For example, as one of ordinary skill in the art can know, with the evolution of the system architecture and the appearance of new service scenarios, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
The foregoing embodiment numbers of the present application are for description only and do not represent the relative merits of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device of the embodiment of the application can be combined, divided and pruned according to actual needs.
In this application, the same or similar term concepts, technical solutions and/or application scenario descriptions are generally described in detail only when they first appear and, for brevity, are not repeated afterwards; when understanding the technical solutions of the present application, reference may be made to the earlier related detailed descriptions of the same or similar term concepts, technical solutions and/or application scenario descriptions.
In this application, the descriptions of the embodiments are focused on, and the details or descriptions of one embodiment may be found in the related descriptions of other embodiments.
The technical features of the technical solutions of the present application may be arbitrarily combined, and for brevity of description, all possible combinations of the technical features in the above embodiments are not described, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the present application.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as above, including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a controlled terminal, or a network device, etc.) to perform the method of each embodiment of the present application.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable devices. The computer instructions may be stored in a storage medium or transmitted from one storage medium to another storage medium, for example, from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line), or wireless (e.g., infrared, wireless, microwave, etc.) means. The storage media may be any available media that can be accessed by a computer or a data storage device such as a server, data center, or the like that contains an integration of one or more available media. Usable media may be magnetic media (e.g., floppy disks, storage disks, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid State Disk (SSD)), among others.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the claims, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application, or direct or indirect application in other related technical fields are included in the scope of the claims of the present application.

Claims (10)

1. A control method, characterized by comprising the steps of:
s10: acquiring biological characteristics;
s20: acquiring or determining a virtual character according to the biological characteristics;
s30: in response to playing the target song, the virtual character is controlled to sing the target song.
2. The control method according to claim 1, characterized in that step S20 includes at least one of:
constructing a virtual character image of the virtual character according to the biological characteristics;
and extracting sound features according to the biological features, and combining the sound features with the virtual character image to obtain the virtual character.
3. The control method according to claim 2, wherein the step of constructing a virtual portrait of the virtual character from the biometric feature includes:
acquiring body shape characteristics according to the biological characteristics, and constructing the virtual human image according to the body shape characteristics; and/or,
And obtaining facial features according to the biological features, and constructing the virtual human image according to the facial features.
4. A control method according to any one of claims 1 to 3, characterized in that step S10 includes at least one of:
acquiring image data, and acquiring or confirming appearance characteristics according to the image data;
and acquiring audio data, and acquiring or confirming tone characteristics according to the audio data.
5. The control method according to claim 4, wherein the step of acquiring or confirming the appearance characteristic from the image data includes:
acquiring multi-frame images with different angles according to the image data;
and acquiring or confirming the appearance characteristic according to the multi-frame images of different angles.
6. The control method according to claim 4, wherein the step of acquiring or confirming tone characteristics from the audio data includes:
acquiring or confirming original tone characteristics according to the audio data;
and acquiring custom trimming information, and performing tone trimming on the original tone characteristics according to the custom trimming information to obtain trimmed tone characteristics.
7. A control method according to any one of claims 1 to 3, characterized in that step S30 includes:
And determining or generating dance actions and/or singing mouth shapes according to the target songs, and controlling the virtual character to sing according to the dance actions and/or the singing mouth shapes.
8. A control method according to any one of claims 1 to 3, characterized in that the control method further comprises at least one of:
constructing a virtual character background stage according to the target song;
adding clothing elements for the virtual character according to the target song;
and making a video in the process of singing the target song by the virtual character.
9. An intelligent terminal, characterized by comprising a memory and a processor, wherein the memory stores a control program, and the control program when executed by the processor realizes the steps of the control method according to any one of claims 1 to 8.
10. A storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the control method according to any one of claims 1 to 8.
CN202310168444.5A 2023-02-24 2023-02-24 Control method, intelligent terminal and storage medium Pending CN116188647A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310168444.5A CN116188647A (en) 2023-02-24 2023-02-24 Control method, intelligent terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310168444.5A CN116188647A (en) 2023-02-24 2023-02-24 Control method, intelligent terminal and storage medium

Publications (1)

Publication Number Publication Date
CN116188647A true CN116188647A (en) 2023-05-30

Family

ID=86436295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310168444.5A Pending CN116188647A (en) 2023-02-24 2023-02-24 Control method, intelligent terminal and storage medium

Country Status (1)

Country Link
CN (1) CN116188647A (en)

Similar Documents

Publication Publication Date Title
JP7408048B2 (en) Anime character driving method and related device based on artificial intelligence
US20210029305A1 (en) Method and apparatus for adding a video special effect, terminal device and storage medium
CN109599079B (en) Music generation method and device
KR101189053B1 (en) Method For Video Call Based on an Avatar And System, Apparatus thereof
CN110119815A (en) Model training method, device, storage medium and equipment
CN105117102B (en) Audio interface display methods and device
CN106937039A (en) A kind of imaging method based on dual camera, mobile terminal and storage medium
US20230419582A1 (en) Virtual object display method and apparatus, electronic device, and medium
WO2023279960A1 (en) Action processing method and apparatus for virtual object, and storage medium
CN110691279A (en) Virtual live broadcast method and device, electronic equipment and storage medium
CN109819167B (en) Image processing method and device and mobile terminal
CN109391842B (en) Dubbing method and mobile terminal
WO2022079933A1 (en) Communication supporting program, communication supporting method, communication supporting system, terminal device, and nonverbal expression program
CN114155322A (en) Scene picture display control method and device and computer storage medium
CN108986026A (en) A kind of picture joining method, terminal and computer readable storage medium
CN110019919B (en) Method and device for generating rhyme-rhyme lyrics
CN108198162A (en) Photo processing method, mobile terminal, server, system, storage medium
CN110309327A (en) Audio generation method, device and the generating means for audio
CN113420177A (en) Audio data processing method and device, computer equipment and storage medium
CN108197206A (en) Expression packet generation method, mobile terminal and computer readable storage medium
CN110808019A (en) Song generation method and electronic equipment
CN115631270A (en) Live broadcast method and device of virtual role, computer storage medium and terminal
CN106823374A (en) Talking Avatar hands based on android system swim the construction method of software
CN111915744A (en) Interaction method, terminal and storage medium for augmented reality image
US20230067387A1 (en) Method for music generation, electronic device, storage medium cross reference to related applications

Legal Events

Date Code Title Description
PB01 Publication