CN110675874A - Method for realizing interaction between virtual character and UI (user interface) based on intelligent sound box - Google Patents


Info

Publication number
CN110675874A
CN110675874A (application number CN201910935733.7A)
Authority
CN
China
Prior art keywords
sound box
user
virtual character
virtual
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910935733.7A
Other languages
Chinese (zh)
Inventor
刘海
张斌
梁嘉豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Obersi Intelligent Technology Co Ltd
Original Assignee
Shenzhen Obersi Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Obersi Intelligent Technology Co Ltd filed Critical Shenzhen Obersi Intelligent Technology Co Ltd
Priority to CN201910935733.7A priority Critical patent/CN110675874A/en
Publication of CN110675874A publication Critical patent/CN110675874A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/26: Speech to text systems
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48: Speech or voice analysis techniques specially adapted for particular use
    • G10L25/51: Speech or voice analysis techniques specially adapted for comparison or discrimination
    • G10L25/63: Speech or voice analysis techniques for estimating an emotional state

Abstract

The invention discloses a method, based on an intelligent sound box (smart speaker), for realizing interaction between a virtual character and a UI (user interface), comprising the following steps: the smart speaker sends real-time state information of the virtual character to a user terminal through its wireless communication device; after receiving the virtual character state information, the user terminal performs at least one operation on the virtual character through the UI, such as setting its running mode or character attributes. The advantages of the invention are that system functions no longer depend on a handheld controller and are not limited by the number of physical keys; operation is simple, and the running of the system is controlled by the user's voice. In addition, the user's emotion and action information is extracted from the user's voice information and played back synchronously by the voice playback module and the display module, so that the user can communicate and express emotions through the UI interface of the user terminal in the real environment. This realizes genuine emotional communication and further improves the user's experience in the virtual environment.

Description

Method for realizing interaction between virtual character and UI (user interface) based on intelligent sound box
Technical Field
The invention relates to the technical field of voice control of virtual characters, in particular to a method for realizing interaction between a virtual character and a UI (user interface) based on an intelligent sound box.
Background
With the continuous development of intelligent terminal devices, more and more devices support voice interaction, and voice interaction is used increasingly in users' daily lives; product designs that improve usability have therefore attracted growing attention.
Currently, common voice interaction processes include the following. In the first mode, the user presses a control button or the home key on the terminal device to start the voice interaction process and then speaks the desired operation; the terminal device collects the user's voice data to realize voice interaction. In the second mode, a fixed wake-up word starts the voice interaction process: the user must first know the wake-up word used by the terminal device and speak it; the terminal device starts the voice interaction process upon detecting the fixed wake-up word, and then collects the user's voice data for interaction. For example, the user says the wake-up word "Xiaodu" to wake up the voice interaction function of a mobile phone. In the third mode, "oneshot" technology is used: voice interaction is started with a wake-up word followed immediately by the desired action. That is, the user speaks the fixed wake-up word and, in the same utterance, the content the terminal device is expected to execute; the terminal device starts the voice interaction process upon detecting the wake-up word and directly performs voice interaction according to the collected content. For example, the user says "Xiaodu, what is the weather today" to interact with the mobile phone by voice.
In the above voice interaction schemes, the user must either physically wake the device with a control button or home key, or speak a fixed wake-up word to make the terminal device start its voice interaction function, and wake-word triggering has a certain error rate. As a result, current voice interaction procedures are cumbersome and have a low success rate, so users invoke voice interaction infrequently.
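As a rough illustration of the "oneshot" mode described above, the following Python sketch splits a single utterance into a wake-up word and a command. The wake word string and the parsing rule are hypothetical, not taken from any particular device:

```python
# Minimal sketch of "oneshot" utterance handling: a fixed wake-up word
# followed, in the same utterance, by the command to execute.
# The wake word "xiaodu" and the transcripts are illustrative only.
WAKE_WORD = "xiaodu"

def parse_oneshot(transcript: str):
    """Return the command portion if the utterance starts with the wake
    word, otherwise None (the device should stay asleep)."""
    text = transcript.strip().lower()
    if not text.startswith(WAKE_WORD):
        return None
    # Drop the wake word plus any separating spaces or commas.
    command = text[len(WAKE_WORD):].lstrip(" ,，")
    return command or None

print(parse_oneshot("Xiaodu, what is the weather today"))  # command extracted
print(parse_oneshot("what is the weather today"))          # no wake word
```

Because the wake word and the command arrive in one utterance, no separate confirmation round-trip is needed, which is what distinguishes this mode from the plain wake-word scheme.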
Disclosure of Invention
The embodiment of the invention provides a method for realizing interaction between a virtual character and a UI (user interface) based on an intelligent sound box, which controls the corresponding instruction operations through the virtual character, thereby avoiding problems such as complicated operation and functions limited by the number of keys that arise from dependence on keys and sensing devices in a virtual environment.
The embodiment of the invention provides a method for realizing interaction between a virtual character and a UI (user interface) based on an intelligent sound box, which comprises the following steps:
monitoring and identifying a task command of the user within a preset monitoring coverage range;
calling the corresponding task animation of the virtual character and controlling the virtual character to play the corresponding task animation;
controlling the virtual character to execute interaction with a UI (user interface) through a task instruction;
and the intelligent sound box receiving the task instruction and controlling the virtual character to execute the task.
In the method for realizing interaction between the virtual character and the UI based on the intelligent sound box, the task animation comprises at least one of a character animation, a UI animation, and a special-effect animation.
In the method for realizing interaction between the virtual character and the UI based on the intelligent sound box, at least one of the user terminal and the intelligent sound box continuously monitors the user's voice information within a preset monitoring coverage range through a microphone, and controls the virtual character to implement the corresponding mode transformation.
In the method for realizing interaction between the virtual character and the UI based on the intelligent sound box, the mode transformation comprises the following steps:
continuously monitoring the user's voice information within the preset monitoring coverage range;
converting the voice information into text information;
converting the text information into user intention information through a speech engine;
and matching the user intention information against the preset mode types, selecting the mode type closest to the intention information, and outputting a virtual character mode change command when the matching degree is higher than a preset threshold.
In the method for realizing interaction between the virtual character and the UI based on the intelligent sound box, controlling the virtual character to execute interaction with the UI interface through the task instruction comprises the following steps:
calling the corresponding animation of the virtual character and the corresponding UI interface according to the task instruction;
controlling the virtual character to perform the operation action of the task instruction on the corresponding UI (user interface);
and after the virtual character finishes the operation action, starting the function of the intelligent sound box corresponding to the operation action.
In the method for realizing interaction between the virtual character and the UI based on the intelligent sound box, the operation actions performed by the virtual character on the UI interface include at least one of clicking, dragging, sliding, zooming, and pushing or pulling.
In the method for realizing interaction between the virtual character and the UI of the intelligent sound box, the intelligent sound box acquires voice information through at least six microphones.
In the method for realizing interaction between the virtual character of the intelligent sound box and the UI, the communication interaction mode between the intelligent sound box and the user terminal is Ethernet communication interaction.
The advantages of the invention are that system functions no longer depend on a handheld controller and are not limited by the number of keys; operation is simple, and the running of the system is controlled by the user's voice. In addition, the user's emotion and action information is extracted from the user's voice information, and the corresponding task is completed by the virtual character, so that the user can communicate and express emotions through the UI interface of the user terminal in the real environment. This realizes genuine emotional communication and further improves the user's experience in the virtual environment.
Drawings
To illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for implementing interaction between a virtual character and a UI based on a smart sound box according to an embodiment of the present invention;
Fig. 2 is a flowchart of controlling the virtual character to implement the corresponding mode transformation, in the method for implementing interaction between a virtual character and a UI based on a smart speaker according to an embodiment of the present invention.
Detailed Description
To make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments derived by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The embodiments are described in detail below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Please refer to fig. 1, which is a flowchart illustrating a method for implementing interaction between a virtual character and a UI based on a smart speaker according to an embodiment of the present invention, the method includes:
monitoring and identifying a task command of the user within a preset monitoring coverage range;
calling the corresponding task animation of the virtual character and controlling the virtual character to play the corresponding task animation;
controlling the virtual character to execute interaction with a UI (user interface) through a task instruction;
and the intelligent sound box receiving the task instruction and controlling the virtual character to execute the task.
In the method for realizing interaction between the virtual character and the UI based on the intelligent sound box, the task animation comprises at least one of a character animation, a UI animation, and a special-effect animation.
In the method for realizing interaction between the virtual character and the UI based on the intelligent sound box, at least one of the user terminal and the intelligent sound box continuously monitors the user's voice information within a preset monitoring coverage range through a microphone, and controls the virtual character to implement the corresponding mode transformation.
In the method for realizing interaction between the virtual character and the UI based on the intelligent sound box, the mode transformation comprises the following steps:
continuously monitoring the user's voice information within the preset monitoring coverage range;
converting the voice information into text information;
converting the text information into user intention information through a speech engine;
and matching the user intention information against the preset mode types, selecting the mode type closest to the intention information, and outputting a virtual character mode change command when the matching degree is higher than a preset threshold.
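A minimal sketch of the monitoring-to-matching steps above, assuming string similarity as the matching measure. The preset mode names, the use of `difflib` for similarity, and the 0.6 threshold are illustrative assumptions; in the invention itself, the text is converted to intention information by a speech engine before matching:

```python
from difflib import SequenceMatcher

# Illustrative preset mode types and threshold; the real system would
# match intent information produced by a speech engine, not raw strings.
PRESET_MODES = ["dance mode", "story mode", "music mode", "sleep mode"]
MATCH_THRESHOLD = 0.6  # assumed preset threshold

def select_mode(intent_text: str):
    """Select the preset mode closest to the intent text; output a
    mode-change command only when the matching degree exceeds the
    threshold, otherwise issue nothing."""
    scored = [(SequenceMatcher(None, intent_text.lower(), mode).ratio(), mode)
              for mode in PRESET_MODES]
    best_score, best_mode = max(scored)
    if best_score > MATCH_THRESHOLD:
        return {"command": "change_mode", "mode": best_mode}
    return None  # below threshold: no mode-change command is issued

print(select_mode("switch to music mode"))
print(select_mode("zzzz"))
```

Gating on a threshold, rather than always taking the best match, is what prevents unrelated utterances from accidentally changing the virtual character's mode.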
In the method for realizing interaction between the virtual character and the UI based on the intelligent sound box, controlling the virtual character to execute interaction with the UI interface through the task instruction comprises the following steps:
calling the corresponding animation of the virtual character and the corresponding UI interface according to the task instruction;
controlling the virtual character to perform the operation action of the task instruction on the corresponding UI (user interface);
and after the virtual character finishes the operation action, starting the function of the intelligent sound box corresponding to the operation action.
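The three steps above can be sketched as a simple dispatcher. The instruction names, animation names, UI actions, and speaker functions in the table are hypothetical placeholders, not the invention's actual mappings:

```python
# Hypothetical mapping: task instruction -> (character animation,
# UI operation action, smart-speaker function). Placeholder names only.
TASKS = {
    "play_music": ("walk_to_player_anim", "click play button", "start_music_playback"),
    "volume_up": ("reach_up_anim", "slide volume bar", "increase_volume"),
}

def execute_task(instruction: str):
    """Call the corresponding animation and UI interface, have the
    virtual character perform the operation action, then start the
    speaker function after the action finishes."""
    if instruction not in TASKS:
        return None
    animation, ui_action, function = TASKS[instruction]
    steps = [
        f"play animation: {animation}",        # step 1: character animation
        f"perform UI action: {ui_action}",     # step 2: click/drag/slide on the UI
        f"start speaker function: {function}", # step 3: only after the action ends
    ]
    return steps

for step in execute_task("play_music"):
    print(step)
```

Ordering the speaker function strictly after the animated UI action is the design point here: the on-screen character visibly "performs" the operation before the speaker responds.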
In the method for realizing interaction between the virtual character and the UI based on the intelligent sound box, the operation actions performed by the virtual character on the UI interface include at least one of clicking, dragging, sliding, zooming, and pushing or pulling.
In the method for realizing interaction between the virtual character and the UI of the intelligent sound box, the intelligent sound box acquires voice information through at least six microphones.
In the method for realizing interaction between the virtual character of the intelligent sound box and the UI, the communication interaction mode between the intelligent sound box and the user terminal is Ethernet communication interaction.
The speech recognition technology employed by the present invention can be classified according to the constraints a recognition system places on the input speech, mainly along the following three dimensions:
First, considering the dependence of the recognition system on the speaker.
Recognition systems can be classified into 3 types: (1) speaker-dependent speech recognition system: recognizes only the voice of a specific speaker; (2) speaker-independent speech recognition system: recognition does not depend on the speaker, and the system is usually trained on large speech databases from many different speakers; (3) multi-speaker recognition system: able to recognize the speech of a group of people, or a group-specific recognition system that only requires training on the speech of the group to be recognized.
Second, considering the speaking style.
Recognition systems can also be classified into 3 types: (1) isolated-word recognition system: requires a pause after each word is spoken; (2) connected-word recognition system: requires each word to be pronounced clearly, though some co-articulation begins to appear; (3) continuous speech recognition system: the input is natural, fluent continuous speech, in which extensive co-articulation and pronunciation changes may occur.
Third, considering the vocabulary size of the recognition system.
Recognition systems can also be classified into 3 types: (1) small-vocabulary speech recognition systems, usually covering tens of words; (2) medium-vocabulary speech recognition systems, usually covering hundreds to thousands of words; (3) large-vocabulary speech recognition systems, usually covering thousands to tens of thousands of words. As the computing power of computers and digital signal processors and the accuracy of recognition systems improve, the classification by vocabulary size keeps shifting: today's medium-vocabulary systems may be tomorrow's small-vocabulary systems. These different constraints also determine the difficulty of a speech recognition system.
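The vocabulary-size classification above can be expressed as a small helper; the exact cut-off values (100 and 3000 words) are assumptions chosen to separate the overlapping ranges given in the text:

```python
def classify_by_vocabulary(num_words: int) -> str:
    """Classify a speech recognition system by vocabulary size.
    Cut-offs are illustrative, matching the text only loosely:
    tens of words -> small; hundreds to thousands -> medium;
    thousands to tens of thousands -> large."""
    if num_words < 100:        # "tens of words"
        return "small vocabulary"
    if num_words < 3000:       # "hundreds to thousands of words"
        return "medium vocabulary"
    return "large vocabulary"  # "thousands to tens of thousands of words"

print(classify_by_vocabulary(50))
print(classify_by_vocabulary(800))
print(classify_by_vocabulary(20000))
```

As the text notes, such boundaries drift over time as hardware and recognition accuracy improve, so any fixed cut-offs are snapshots rather than stable definitions.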
The advantages of the invention are that system functions no longer depend on a handheld controller and are not limited by the number of keys; operation is simple, and the running of the system is controlled by the user's voice. In addition, the user's emotion and action information is extracted from the user's voice information, and the corresponding task is completed by the virtual character, so that the user can communicate and express emotions through the UI interface of the user terminal in the real environment. This realizes genuine emotional communication and further improves the user's experience in the virtual environment.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus can be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above embodiments of the present invention are described in detail, and the principle and the implementation of the present invention are explained by applying specific embodiments, and the above description of the embodiments is only used to help understanding the method of the present invention and the core idea thereof; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (8)

1. A method for realizing interaction between a virtual character and a UI (user interface) based on an intelligent sound box, characterized by comprising the following steps:
monitoring and identifying a task command of the user within a preset monitoring coverage range;
calling the corresponding task animation of the virtual character and controlling the virtual character to play the corresponding task animation;
controlling the virtual character to execute interaction with a UI (user interface) through a task instruction;
and the intelligent sound box receiving the task instruction and controlling the virtual character to execute the task.
2. The method of claim 1, wherein the task animation comprises at least one of a character animation, a UI animation, and a special-effect animation.
3. The method for realizing interaction between a virtual character and a UI based on a smart sound box as claimed in claim 1, wherein at least one of the user terminal and the smart sound box continuously monitors the voice information of the user within a preset monitoring coverage range through a microphone and controls the virtual character to implement the corresponding mode transformation.
4. The method for realizing interaction between a virtual character and a UI based on a smart sound box as claimed in claim 2, wherein continuously monitoring the voice information of the user within the preset monitoring coverage range comprises:
converting the voice information into text information;
converting the text information into user intention information through a speech engine;
and matching the user intention information against the preset mode types, selecting the mode type closest to the intention information, and outputting a virtual character mode change command when the matching degree is higher than a preset threshold.
5. The method for realizing interaction between a virtual character and a UI based on a smart sound box as claimed in claim 2, wherein controlling the virtual character to execute interaction with the UI interface through the task instruction comprises the following steps:
calling the corresponding animation of the virtual character and the corresponding UI interface according to the task instruction;
controlling the virtual character to perform the operation action of the task instruction on the corresponding UI (user interface);
and after the virtual character finishes the operation action, starting the function of the smart sound box corresponding to the operation action.
6. The method for realizing interaction between a virtual character and a UI based on a smart sound box as claimed in claim 5, wherein the operation actions performed by the virtual character on the UI interface include at least one of clicking, dragging, sliding, zooming, and pushing or pulling.
7. The method for realizing interaction between a virtual character and a UI based on a smart sound box as claimed in claim 1, wherein the smart sound box collects voice information through at least six microphones.
8. The method for realizing interaction between a virtual character and a UI based on a smart sound box according to claim 1, wherein the communication interaction mode between the smart sound box and the user terminal is Ethernet communication interaction.
CN201910935733.7A 2019-09-29 2019-09-29 Method for realizing interaction between virtual character and UI (user interface) based on intelligent sound box Pending CN110675874A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910935733.7A CN110675874A (en) 2019-09-29 2019-09-29 Method for realizing interaction between virtual character and UI (user interface) based on intelligent sound box

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910935733.7A CN110675874A (en) 2019-09-29 2019-09-29 Method for realizing interaction between virtual character and UI (user interface) based on intelligent sound box

Publications (1)

Publication Number Publication Date
CN110675874A true CN110675874A (en) 2020-01-10

Family

ID=69080471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910935733.7A Pending CN110675874A (en) 2019-09-29 2019-09-29 Method for realizing interaction between virtual character and UI (user interface) based on intelligent sound box

Country Status (1)

Country Link
CN (1) CN110675874A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111443803A (en) * 2020-03-26 2020-07-24 捷开通讯(深圳)有限公司 Mode switching method, device, storage medium and mobile terminal

Citations (8)

Publication number Priority date Publication date Assignee Title
CN103078995A (en) * 2012-12-18 2013-05-01 苏州思必驰信息科技有限公司 Customizable individualized response method and system used in mobile terminal
EP2699015A1 (en) * 2012-08-14 2014-02-19 Kentec Inc. Television device and method for displaying virtual on-screen interactive moderator
CN105739808A (en) * 2014-12-08 2016-07-06 阿里巴巴集团控股有限公司 Display method and apparatus for cursor movement on terminal device
CN106486126A (en) * 2016-12-19 2017-03-08 北京云知声信息技术有限公司 Speech recognition error correction method and device
CN108491147A (en) * 2018-04-16 2018-09-04 青岛海信移动通信技术股份有限公司 A kind of man-machine interaction method and mobile terminal based on virtual portrait
CN109213470A (en) * 2018-09-11 2019-01-15 昆明理工大学 A kind of cursor control method based on speech recognition
CN109920410A (en) * 2017-12-11 2019-06-21 现代自动车株式会社 The device and method for determining the reliability recommended for the environment based on vehicle
CN110136718A (en) * 2019-05-31 2019-08-16 深圳市语芯维电子有限公司 The method and apparatus of voice control

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
EP2699015A1 (en) * 2012-08-14 2014-02-19 Kentec Inc. Television device and method for displaying virtual on-screen interactive moderator
CN103078995A (en) * 2012-12-18 2013-05-01 苏州思必驰信息科技有限公司 Customizable individualized response method and system used in mobile terminal
CN105739808A (en) * 2014-12-08 2016-07-06 阿里巴巴集团控股有限公司 Display method and apparatus for cursor movement on terminal device
CN106486126A (en) * 2016-12-19 2017-03-08 北京云知声信息技术有限公司 Speech recognition error correction method and device
CN109920410A (en) * 2017-12-11 2019-06-21 现代自动车株式会社 The device and method for determining the reliability recommended for the environment based on vehicle
CN108491147A (en) * 2018-04-16 2018-09-04 青岛海信移动通信技术股份有限公司 A kind of man-machine interaction method and mobile terminal based on virtual portrait
CN109213470A (en) * 2018-09-11 2019-01-15 昆明理工大学 A kind of cursor control method based on speech recognition
CN110136718A (en) * 2019-05-31 2019-08-16 深圳市语芯维电子有限公司 The method and apparatus of voice control

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN111443803A (en) * 2020-03-26 2020-07-24 捷开通讯(深圳)有限公司 Mode switching method, device, storage medium and mobile terminal
CN111443803B (en) * 2020-03-26 2023-10-03 捷开通讯(深圳)有限公司 Mode switching method and device, storage medium and mobile terminal

Similar Documents

Publication Publication Date Title
CN110060678B (en) Virtual role control method based on intelligent device and intelligent device
CN108470034B (en) A kind of smart machine service providing method and system
CN110265040B (en) Voiceprint model training method and device, storage medium and electronic equipment
CN110808034A (en) Voice conversion method, device, storage medium and electronic equipment
CN110838286A (en) Model training method, language identification method, device and equipment
CN112099628A (en) VR interaction method and device based on artificial intelligence, computer equipment and medium
CN105244042B (en) A kind of speech emotional interactive device and method based on finite-state automata
CN105551498A (en) Voice recognition method and device
CN109005480A (en) Information processing method and related product
CN110248021A (en) A kind of smart machine method for controlling volume and system
CN110570857B (en) Voice wake-up method and device, electronic equipment and storage medium
KR20190005103A (en) Electronic device-awakening method and apparatus, device and computer-readable storage medium
CN112735418A (en) Voice interaction processing method and device, terminal and storage medium
CN111968641B (en) Voice assistant awakening control method and device, storage medium and electronic equipment
CN110675874A (en) Method for realizing interaction between virtual character and UI (user interface) based on intelligent sound box
CN107783650A (en) A kind of man-machine interaction method and device based on virtual robot
CN114391165A (en) Voice information processing method, device, equipment and storage medium
CN106683668A (en) Method of awakening control of intelligent device and system
CN115798459A (en) Audio processing method and device, storage medium and electronic equipment
CN109948155A (en) A kind of selection method and device, terminal device of more intentions
CN114708849A (en) Voice processing method and device, computer equipment and computer readable storage medium
CN114999496A (en) Audio transmission method, control equipment and terminal equipment
CN109725798A (en) The switching method and relevant apparatus of Autonomous role
CN113593582A (en) Control method and device of intelligent device, storage medium and electronic device
CN112233665A (en) Model training method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Wang Man

Inventor after: Wang Maolin

Inventor after: Zhang Bin

Inventor before: Liu Hai

Inventor before: Zhang Bin

Inventor before: Liang Jiahao

RJ01 Rejection of invention patent application after publication

Application publication date: 20200110