CN112233660A - User portrait expansion method, device, controller and user portrait acquisition system - Google Patents

User portrait expansion method, device, controller and user portrait acquisition system

Info

Publication number
CN112233660A
Authority
CN
China
Prior art keywords
user
information
voice
module
voiceprint information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011098447.9A
Other languages
Chinese (zh)
Inventor
孙仁财
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Huanwang Technology Co Ltd
Original Assignee
Guangdong Huanwang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Huanwang Technology Co Ltd filed Critical Guangdong Huanwang Technology Co Ltd
Priority to CN202011098447.9A
Publication of CN112233660A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/02 - Feature extraction for speech recognition; Selection of recognition unit
    • G10L 15/08 - Speech classification or search
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/03 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L 25/27 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention relates to a user portrait expansion method, device, controller and user portrait acquisition system. The user portrait expansion method comprises: acquiring initial voice information input by a user; carrying out audio processing on the initial voice information to extract voiceprint information of the user; determining a basic attribute feature tag of the user according to the voiceprint information; and extending the basic attribute feature tag of the user into a user attribute list of the user portrait. The invention provides a user portrait expansion method based on voiceprint features, which adds a channel for recording additional user data to the user portrait of a smart television service; through this channel, feature tags related to the user's basic attributes, such as gender, age group and place of birth, can be obtained directly by analysis.

Description

User portrait expansion method, device, controller and user portrait acquisition system
Technical Field
The invention relates to the technical field of computer processing, in particular to a user portrait expansion method, a user portrait expansion device, a controller and a user portrait acquisition system.
Background
In the current smart television market, a user portrait is generally built from feature tags generated by a user's browsing, purchasing, viewing and similar actions. These feature tags fall into two categories: user preference feature tags and user basic attribute feature tags. The user preference feature tags, derived from analysis of delivered program content, can be obtained through statistical analysis and similar means, whereas the data sources and tags for the user's basic attributes (gender, age group, place of birth and the like) are mostly predictions and estimates.
Existing methods that generate viewing-preference feature tags from program content lack basic attribute feature tag data derived from the user's voice input. In the prior art, feature prediction data for the user's basic attributes are obtained from the user's feedback on delivered content, so the prediction results have low accuracy and a limited tag dimension.
Disclosure of Invention
Accordingly, the present invention is directed to overcoming the deficiencies of the prior art by providing a user portrait expansion method, device, controller and user portrait acquisition system.
To achieve this purpose, the invention adopts the following technical scheme. A user portrait expansion method comprises:
acquiring initial voice information input by a user;
carrying out audio processing on the initial voice information to extract voiceprint information of the user;
determining a basic attribute feature tag of the user according to the voiceprint information;
extending the basic attribute feature tag of the user into a user attribute list of the user portrait.
Optionally, the initial voice information is entered by the user according to specific input vocabulary prompted by the client.
Optionally, the voiceprint information includes:
language, timbre, pitch, rhythm, and/or sound quality.
Optionally, the determining the basic attribute feature tag of the user according to the voiceprint information includes:
inputting the voiceprint information into a preset model to determine a basic attribute feature label of the user;
the preset model can reflect the corresponding relation between the voiceprint information and the user basic attribute feature label.
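For illustration only, the following Python sketch treats the preset model as a simple function from voiceprint information to basic attribute feature tags. The feature names, thresholds and label values are assumptions made for this sketch; in the invention the correspondence is learned in advance from big data rather than written by hand.

    # A minimal, hypothetical stand-in for the "preset model": it maps extracted
    # voiceprint features to basic attribute feature tags. The thresholds and
    # labels below are illustrative assumptions, not the trained model itself.
    from dataclasses import dataclass

    @dataclass
    class VoiceprintInfo:
        language: str       # e.g. "mandarin" or a dialect name
        pitch_hz: float     # average fundamental frequency
        speech_rate: float  # syllables per second

    def preset_model(vp: VoiceprintInfo) -> dict:
        """Return basic attribute feature tags inferred from voiceprint information."""
        tags = {}
        # Higher average pitch is used here as a rough cue for a female voice.
        tags["gender"] = "female" if vp.pitch_hz > 165.0 else "male"
        # A slower speech rate is used as a rough cue for an older age group.
        tags["age_group"] = "senior" if vp.speech_rate < 3.0 else "youth/adult"
        # A recognized dialect hints at the speaker's place of birth.
        tags["place_of_birth"] = vp.language if vp.language != "mandarin" else "unknown"
        return tags

    # Example: voiceprint information extracted from the user's initial voice input.
    print(preset_model(VoiceprintInfo(language="cantonese", pitch_hz=210.0, speech_rate=4.2)))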
Optionally, the user portrait expansion method further includes:
collecting voice operation information of a user using a television voice service after inputting initial voice information;
and analyzing the voice operation information to update the voiceprint information of the user, and calibrating the basic attribute feature tag of the user according to the updated voiceprint information.
Optionally, the user portrait expansion method further includes:
performing recognition processing on the voice operation information to recognize operation behaviors of a user on a television;
and updating the user preference feature tag according to the operation behavior of the user on the television.
The invention also provides a user portrait expansion device, comprising:
the acquisition module is used for acquiring initial voice information input by a user;
the processing module is used for carrying out audio processing on the initial voice information so as to extract the voiceprint information of the user;
the determining module is used for determining the basic attribute feature tag of the user according to the voiceprint information;
and the expansion module is used for expanding the basic attribute feature tag of the user into a user attribute list of the user portrait.
Optionally, the acquisition module is further configured to acquire voice operation information input by the user after the user inputs the initial voice information;
the user portrait expansion device further comprises:
the calibration module is used for calibrating the basic attribute feature tag of the user according to the updated voiceprint information, the updated voiceprint information being obtained by performing audio processing on the voice operation information through the processing module; and/or
the voice recognition module is used for carrying out voice content recognition on the voice operation information so as to recognize the operation behavior of the user on the television; and the updating module is used for updating the user preference feature tag according to the operation behavior executed by the user on the television.
The invention also provides a controller for implementing any one of the user portrait expansion methods described above.
In addition, the present invention also provides a user portrait acquisition system, comprising the user portrait expansion device described above.
The technical scheme adopted by the invention is a user portrait expansion method comprising: acquiring initial voice information input by a user; carrying out audio processing on the initial voice information to extract voiceprint information of the user; determining a basic attribute feature tag of the user according to the voiceprint information; and extending the basic attribute feature tag of the user into a user attribute list of the user portrait. The invention provides a user portrait expansion method based on voiceprint features, which adds a channel for recording additional user data to the user portrait of a smart television service; through this channel, feature tags related to the user's basic attributes, such as gender, age group and place of birth, can be obtained directly by analysis.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a user portrait expansion method according to the first embodiment of the present invention;
FIG. 2 is a flow chart of a user portrait expansion method according to the second embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a user portrait expansion device according to the first embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a user portrait expansion device according to the second embodiment of the present invention.
In the figures: 1. an acquisition module; 2. a processing module; 3. a determination module; 4. an expansion module; 5. a calibration module; 6. a voice recognition module; 7. an updating module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
FIG. 1 is a flow chart illustrating a user portrait expansion method according to an embodiment of the present invention.
As shown in FIG. 1, the method for extending a user portrait according to this embodiment includes:
S11: acquiring initial voice information input by a user;
Further, the initial voice information is input by the user according to specific input vocabulary prompted by the client.
S12: carrying out audio processing on the initial voice information to extract voiceprint information of the user;
Further, the voiceprint information includes:
language, timbre, pitch, rhythm, and/or sound quality.
S13: determining a basic attribute feature tag of the user according to the voiceprint information;
Further, determining the basic attribute feature tag of the user according to the voiceprint information includes:
inputting the voiceprint information into a preset model to determine a basic attribute feature label of the user;
the preset model can reflect the corresponding relation between the voiceprint information and the user basic attribute feature label.
S15: collecting voice operation information of a user using a television voice service after inputting initial voice information;
S16: analyzing and processing the voice operation information to update the voiceprint information of the user, and calibrating the basic attribute feature tag of the user according to the updated voiceprint information;
S14: extending the basic attribute feature tag of the user into the user attribute list of the user portrait.
When the user portrait expansion method is actually executed, the user first starts a corresponding system, for example a system formed by binding a remote controller or a mobile phone to a television, where both the remote controller and the mobile phone are provided with a voice acquisition module. A screen on the client (which may be the television, the remote controller or the mobile phone) prompts the user with vocabulary, characters or numbers to be read aloud, and the recorded input is stored as the initial voice information. The initial voice information is subjected to audio processing to extract the user's voiceprint information, including: language (dialect or Mandarin), pitch, rhythm, and/or sound quality. The voiceprint information is input into a preset model to determine the basic attribute feature tag of the user; the preset model reflects the correspondence between voiceprint information and user basic attribute feature tags, and is obtained in advance through learning and training on big data. Through this process, the user's place of birth, age group and other information can be preliminarily judged. When the user subsequently uses the television, the television can be controlled through the television voice service; for example, when the user wants to switch channels or adjust the volume, the corresponding command is issued by voice. The voiceprint information is then re-extracted from the combination of the initial voice information and the subsequent voice operation information, so as to calibrate the basic attribute feature tag of the user.
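The workflow above can be summarized, purely as a sketch, in the following Python outline. The audio capture, voiceprint extraction and preset model are replaced by placeholder functions whose names and return values are assumptions made for illustration; they are not the concrete implementation of the invention.

    # Hedged end-to-end sketch: enrollment with initial voice input, then later
    # calibration when the user issues voice commands to the television.
    def prompt_and_capture(prompt_text):
        """Stand-in for the client prompting specific vocabulary and recording it."""
        print(f"Please read aloud: {prompt_text}")
        return b"<raw-audio>"  # placeholder for the recorded waveform

    def extract_voiceprint(audio):
        """Stand-in for the audio processing that yields voiceprint information."""
        return {"language": "cantonese", "pitch_hz": 210.0, "speech_rate": 4.2}

    def preset_model(voiceprint):
        """Stand-in for the pre-trained model (see the earlier rule-based sketch)."""
        return {"gender": "female", "age_group": "youth/adult", "place_of_birth": "unknown"}

    def extend_user_attributes(profile, tags):
        """Append the basic attribute feature tags to the user portrait's attribute list."""
        profile.setdefault("attribute_list", []).append(tags)

    # Enrollment: the initial voice input yields the first basic attribute tags.
    user_profile = {}
    initial_audio = prompt_and_capture("the digits 1 2 3 4 5")
    extend_user_attributes(user_profile, preset_model(extract_voiceprint(initial_audio)))

    # Calibration: later voice commands (e.g. "switch the channel") are combined with
    # the initial input, the voiceprint is re-extracted, and the tags are refreshed.
    command_audio = b"<voice-command-audio>"
    user_profile["attribute_list"][-1] = preset_model(extract_voiceprint(initial_audio + command_audio))
    print(user_profile)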
The user portrait expansion method of this embodiment determines feature tags of the user based on voiceprint features. The method provides an additional channel for recording user data for the user portrait of a smart television service, and through this channel, feature tags related to the user's basic attributes, such as gender, age group and place of birth, can be obtained directly by analysis.
FIG. 2 is a flowchart illustrating a user portrait expansion method according to a second embodiment of the present invention.
As shown in FIG. 2, the method for extending a user portrait according to this embodiment includes:
S21: acquiring initial voice information input by a user;
S22: carrying out audio processing on the initial voice information to extract voiceprint information of the user;
S23: determining a basic attribute feature tag of the user according to the voiceprint information;
S24: extending the basic attribute feature tag of the user to a user attribute list of the user portrait;
S25: collecting voice operation information of the user using a television voice service after inputting the initial voice information;
S26: analyzing and processing the voice operation information to update the voiceprint information of the user, and calibrating the basic attribute feature tag of the user according to the updated voiceprint information;
S27: performing recognition processing on the voice operation information to recognize operation behaviors of the user on the television;
S28: updating the user preference feature tag according to the operation behavior of the user on the television.
On the basis of the first embodiment, the operation behavior of the user on the television can be identified according to the voice operation information recorded by the user in the subsequent use process, so as to determine the preference of the user and update the user preference feature tag.
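As a minimal sketch of this second embodiment, the code below assumes the voice operation information has already been converted to text by speech recognition; it maps a few example commands to operation behaviors on the television and accumulates them into user preference feature tags. The command phrases and preference categories are illustrative assumptions, not a prescribed mapping.

    from collections import Counter
    from typing import Optional

    def behavior_from_command(command_text: str) -> Optional[str]:
        """Map recognized voice-command text to an operation behavior on the television."""
        if "sports channel" in command_text:
            return "watch_sports"
        if "volume" in command_text:
            return "adjust_volume"
        if "cartoon" in command_text:
            return "watch_children_content"
        return None

    def update_preference_tags(preferences: Counter, command_text: str) -> None:
        """Increment the preference feature tag associated with the recognized behavior."""
        behavior = behavior_from_command(command_text)
        if behavior is not None:
            preferences[behavior] += 1

    prefs = Counter()
    for cmd in ["switch to the sports channel", "turn the volume up", "switch to the sports channel"]:
        update_preference_tags(prefs, cmd)
    print(prefs.most_common())  # e.g. [('watch_sports', 2), ('adjust_volume', 1)]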
FIG. 3 is a schematic structural diagram of a user portrait expansion device according to the first embodiment of the present invention.
As shown in FIG. 3, the user portrait expansion device of this embodiment includes:
the acquisition module 1 is used for acquiring initial voice information input by a user;
the processing module 2 is used for performing audio processing on the initial voice information to extract voiceprint information of the user;
the determining module 3 is used for determining the basic attribute feature tag of the user according to the voiceprint information;
and the expansion module 4 is used for expanding the basic attribute feature tag of the user into a user attribute list of the user portrait.
Please refer to FIG. 3, which is a schematic structural diagram of the user portrait expansion device of this embodiment.
The user portrait expansion device of this embodiment determines feature tags of the user based on voiceprint features. The device provides an additional channel for recording user data for the user portrait of a smart television service, and through this channel, feature tags related to the user's basic attributes, such as gender, age group and place of birth, can be obtained directly by analysis.
FIG. 4 is a schematic structural diagram of a user portrait expansion device according to a second embodiment of the present invention.
As shown in FIG. 4, on the basis of the device of the first embodiment, the acquisition module 1 is further configured to acquire voice operation information input by the user after the initial voice information;
the user portrait expansion device further comprises:
the calibration module 5 is used for calibrating the basic attribute feature label of the user according to the updated voiceprint information; the updated voiceprint information is obtained by performing audio processing on the voice operation information through the processing module 2;
Further, the user portrait expansion device may further include:
the voice recognition module 6 is used for performing voice content recognition on the voice operation information so as to recognize the operation behavior of the user on the television; and the updating module 7 is used for updating the user preference feature tag according to the operation behavior executed by the user on the television.
The working principle of the user portrait expansion device of this embodiment is the same as that of the user portrait expansion method of FIG. 2 and will not be described again here.
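Purely as an illustration of how the modules named above might be wired together, the following structural sketch composes them into one device object. The class, method and argument names are assumptions made for this sketch and do not correspond to a concrete implementation of the invention.

    class UserPortraitExpansionDevice:
        """Composes the acquisition, processing, determining and expansion modules,
        plus the optional calibration, voice recognition and updating modules."""

        def __init__(self, acquisition, processing, determining, expansion,
                     calibration=None, recognition=None, updating=None):
            self.acquisition = acquisition  # module 1: collects voice input
            self.processing = processing    # module 2: audio processing -> voiceprint
            self.determining = determining  # module 3: voiceprint -> basic attribute tags
            self.expansion = expansion      # module 4: tags -> user attribute list
            self.calibration = calibration  # module 5: recalibrates basic attribute tags
            self.recognition = recognition  # module 6: recognizes TV operation behavior
            self.updating = updating        # module 7: updates preference tags

        def enroll(self, profile):
            """Initial voice input: extract voiceprint, determine tags, extend the portrait."""
            tags = self.determining(self.processing(self.acquisition()))
            self.expansion(profile, tags)

        def on_voice_operation(self, profile, operation_audio):
            """Subsequent voice commands: calibrate attribute tags and/or update preferences."""
            if self.calibration:
                self.calibration(profile, self.processing(operation_audio))
            if self.recognition and self.updating:
                self.updating(profile, self.recognition(operation_audio))

    # Minimal wiring with placeholder callables, just to show the data flow.
    device = UserPortraitExpansionDevice(
        acquisition=lambda: b"<initial-voice-audio>",
        processing=lambda audio: {"pitch_hz": 120.0},
        determining=lambda voiceprint: {"gender": "male"},
        expansion=lambda profile, tags: profile.setdefault("attribute_list", []).append(tags),
    )
    profile = {}
    device.enroll(profile)
    print(profile)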
The present invention also provides a controller for implementing the user portrait expansion method of FIG. 1 or FIG. 2.
In addition, the present invention also provides an embodiment of a user portrait acquisition system, comprising:
the user portrait expansion device described in FIG. 3 or FIG. 4.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that the terms "first," "second," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present invention, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A user portrait expansion method, comprising:
acquiring initial voice information input by a user;
carrying out audio processing on the initial voice information to extract voiceprint information of the user;
determining a basic attribute feature tag of the user according to the voiceprint information;
extending the basic attribute feature tag of the user into a user attribute list of the user portrait.
2. The user portrait expansion method of claim 1, wherein the initial voice information is entered by the user according to specific input vocabulary prompted by a client.
3. The user portrait expansion method of claim 1, wherein the voiceprint information comprises:
language, timbre, pitch, rhythm, and/or sound quality.
4. The user portrait expansion method of claim 1, wherein determining the basic attribute feature tag of the user according to the voiceprint information comprises:
inputting the voiceprint information into a preset model to determine a basic attribute feature label of the user;
the preset model can reflect the corresponding relation between the voiceprint information and the user basic attribute feature label.
5. The user portrait expansion method according to any one of claims 1 to 4, further comprising:
collecting voice operation information of a user using a television voice service after inputting initial voice information;
and analyzing the voice operation information to update the voiceprint information of the user, and calibrating the basic attribute feature tag of the user according to the updated voiceprint information.
6. The method of claim 5, further comprising:
performing recognition processing on the voice operation information to recognize operation behaviors of a user on a television;
and updating the user preference feature tag according to the operation behavior of the user on the television.
7. A user portrait expansion device, comprising:
the acquisition module is used for acquiring initial voice information input by a user;
the processing module is used for carrying out audio processing on the initial voice information so as to extract the voiceprint information of the user;
the determining module is used for determining the basic attribute feature tag of the user according to the voiceprint information;
and the expansion module is used for expanding the basic attribute feature tag of the user into a user attribute list of the user portrait.
8. The user portrait expansion device of claim 7, wherein the acquisition module is further configured to acquire voice operation information input by the user after the initial voice information,
further comprising:
the calibration module is used for calibrating the basic attribute feature tag of the user according to the updated voiceprint information, the updated voiceprint information being obtained by performing audio processing on the voice operation information through the processing module; and/or
the voice recognition module is used for carrying out voice content recognition on the voice operation information so as to recognize the operation behavior of the user on the television; and the updating module is used for updating the user preference feature tag according to the operation behavior executed by the user on the television.
9. A controller configured to implement the user portrait expansion method of any one of claims 1 to 6.
10. A user portrait acquisition system, comprising:
the user portrait expansion device of claim 7 or 8.
CN202011098447.9A 2020-10-14 2020-10-14 User portrait expansion method, device, controller and user portrait acquisition system Pending CN112233660A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011098447.9A CN112233660A (en) 2020-10-14 2020-10-14 User portrait expansion method, device, controller and user portrait acquisition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011098447.9A CN112233660A (en) 2020-10-14 2020-10-14 User portrait expansion method, device, controller and user portrait acquisition system

Publications (1)

Publication Number Publication Date
CN112233660A 2021-01-15

Family

ID=74113645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011098447.9A Pending CN112233660A (en) 2020-10-14 2020-10-14 User portrait expansion method, device, controller and user portrait acquisition system

Country Status (1)

Country Link
CN (1) CN112233660A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170116991A1 (en) * 2015-10-22 2017-04-27 Avaya Inc. Source-based automatic speech recognition
CN107360465A (en) * 2017-08-22 2017-11-17 四川长虹电器股份有限公司 A kind of method that Intelligent television terminal is drawn a portrait using vocal print generation user
US20190147862A1 (en) * 2017-11-16 2019-05-16 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for providing voice service
CN109145204A (en) * 2018-07-27 2019-01-04 苏州思必驰信息科技有限公司 The generation of portrait label and application method and system
CN111755015A (en) * 2019-03-26 2020-10-09 北京君林科技股份有限公司 User portrait construction method and device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114722252A (en) * 2022-03-18 2022-07-08 深圳市小满科技有限公司 Foreign trade user classification method based on user portrait and related equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination