CN114187915A - Interaction method - Google Patents

Interaction method

Info

Publication number
CN114187915A
CN114187915A
Authority
CN
China
Prior art keywords
voice
information
waveform
storage unit
signal waveform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111541981.7A
Other languages
Chinese (zh)
Inventor
朱俊杰
缪文南
姚泽彬
陈少武
薛浩鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou City University of Technology
Original Assignee
Guangzhou City University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou City University of Technology
Priority to CN202111541981.7A
Publication of CN114187915A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 — Speech recognition
    • G10L15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L17/00 — Speaker identification or verification techniques
    • G10L17/22 — Interactive procedures; Man-machine interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides an interaction method. After a control chip recognizes a wake-up signal, a voice recognition unit recognizes voice input information, which improves the accuracy of that recognition. A voiceprint recognition unit then recognizes voiceprint information, and the control chip identifies, from a plurality of first storage units, the first storage unit matched with the current voiceprint information. According to the voice input information, the corresponding first voice output information and first interaction information in the current first storage unit are output. Different users are distinguished by their voiceprint information, so different interaction effects are produced for different users; the same user can also input different voice input information and receive the corresponding voice output information and interaction information, likewise producing different interaction effects.

Description

Interaction method
Technical Field
The invention relates to the technical field of data processing, in particular to an interaction method.
Background
An electronic pet is an electronic toy built from electronic components (hardware and software); the term currently refers to a specific program running on a computer or an Internet server, and the concept is now widely applied to electronic devices with display screens, such as dedicated pet machines, mobile phones and computers. An electronic pet exhibits life-like characteristics similar to those of a real pet.
The patent document with Chinese application number 201610414739.6, published on 2017.9.26, discloses an intelligent electronic pet voice interaction system and method. The system comprises a starting module, a voice control module, a local processing module, a cloud storage module and a local storage module. The starting module is used for setting a growth task; the voice control module controls the input, output and recognition of voice; the local processing module calls data in the local storage module to judge the user's voice information and executes the corresponding operation according to the judgment result; the cloud storage module stores data in the cloud; the local storage module stores data locally.
That system judges the voice information and executes the corresponding operation, but it can only judge the waveform of the voice information and output corresponding voice information according to that waveform. It cannot distinguish different users from their voice input, and therefore cannot produce different interaction effects for different users.
Disclosure of Invention
The invention provides an interaction method for generating different interaction effects with different users.
In order to achieve this purpose, the technical scheme of the invention is as follows. An interaction method in which the interaction device comprises a control chip, a voice recognition module, a voice output module and a display device; the voice recognition module comprises a voice wake-up unit, a voice recognition unit and a voiceprint recognition unit; the voice wake-up unit is used for recognizing a wake-up signal and converting the wake-up signal into a wake-up waveform; the voice recognition unit is used for recognizing voice input information and converting the voice input information into a voice waveform; the voiceprint recognition unit is used for recognizing voiceprint information.
The control chip is provided with a voiceprint storage unit, a second storage unit and one or more first storage units, each first storage unit corresponding to one piece of voiceprint information. The second storage unit stores a second signal waveform, and the control chip is used for comparing the wake-up waveform with the second signal waveform and starting the voice recognition unit and the voiceprint recognition unit according to the comparison result. The first storage unit stores a first signal waveform, first voice output information and first interaction information, the first signal waveform being arranged in correspondence with the first voice output information and the first interaction information. The control chip is used for comparing the voice waveform with the first signal waveform and, according to the comparison result, controlling the voice output module to output the first voice output information of the current first storage unit and controlling the display device to output the first interaction information of the current first storage unit.
The control chip also comprises a third storage unit. The third storage unit stores a third signal waveform, third voice output information and third interaction information. The control chip is used for comparing the voice waveform with the third signal waveform and, according to the comparison result, controlling the voice output module to output the third voice output information and the display device to output the third interaction information.
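For illustration only, the storage arrangement described above can be sketched as simple data structures. The following Python sketch is not part of the disclosure; all names (FirstStorageUnit, InteractionDevice, enroll_user and so on) are assumptions introduced here, and the enroll_user function corresponds to steps (1)-(2) of the method below.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class FirstStorageUnit:
    # One unit per enrolled voiceprint: each first signal waveform is paired
    # with its first voice output information and first interaction information.
    entries: List[Tuple[list, str, str]] = field(default_factory=list)

@dataclass
class InteractionDevice:
    second_signal_waveform: list   # wake-up reference waveform (second storage unit)
    third_signal_waveform: list    # fallback reference waveform (third storage unit)
    third_voice_output: str
    third_interaction: str
    voiceprint_store: set = field(default_factory=set)               # voiceprint storage unit
    first_units: Dict[str, FirstStorageUnit] = field(default_factory=dict)

def enroll_user(device, voiceprint_id, waveform, voice_output, interaction):
    # Steps (1)-(2): pre-record a voiceprint and a first signal waveform together
    # with its paired voice output information and interaction information.
    device.voiceprint_store.add(voiceprint_id)
    unit = device.first_units.setdefault(voiceprint_id, FirstStorageUnit())
    unit.entries.append((waveform, voice_output, interaction))
    return unit
```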
The interaction method comprises the following steps:
(1) Pre-record voiceprint information, with each piece of voiceprint information corresponding to one first storage unit.
(2) Pre-record a first signal waveform in the first storage unit, and record the voice output information and interaction information corresponding to that first signal waveform.
(3) The voice wake-up unit receives voice information in real time and converts the received voice information into a wake-up waveform; the control chip compares the current wake-up waveform with the second signal waveform. If the current wake-up waveform is similar to the second signal waveform, proceed to step (4); if not, repeat step (3).
(4) The control chip judges that the current wake-up waveform is a wake-up signal, and controls the voice recognition module to start.
(4.1) The voice recognition module waits for voice information; when voice information is received, proceed to step (5).
(5) The voiceprint recognition unit recognizes the voiceprint information in the current voice information, and the voice recognition unit recognizes the voice input information in the current voice information and converts it into a voice waveform. The control chip then checks whether the currently recognized voiceprint information exists in the voiceprint storage unit; if so, proceed to step (6).
(6) Identify the first storage unit matched with the voiceprint information, and the control chip compares the current voice waveform with the first signal waveform in that first storage unit. If the current voice waveform is similar to the first signal waveform, proceed to step (7); if not, return to step (4.1).
(7) The voice output module outputs the first voice output information corresponding to the first signal waveform, and the display device outputs the first interaction information corresponding to the first signal waveform.
With this method, the voice recognition unit recognizes voice input information only after the control chip has recognized the wake-up signal, which keeps the recognition accuracy high. After the voiceprint information in the voice information is recognized, the control chip identifies, among the plurality of first storage units, the first storage unit matched with the current voiceprint information; the voice input information is then recognized and converted into a voice waveform, and the first voice output information and first interaction information corresponding to the current voice waveform in the current first storage unit are output. Different users are thus distinguished by their voiceprint information, and the information in the first storage unit corresponding to each user is output according to that user's voice input, producing different interaction effects for different users. The same user can also input different voice input information and obtain different interaction effects. In addition, if the current voice waveform is not similar to the first signal waveform, the voice information can be input again, which improves the reliability of the recognition result.
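As a reading aid, the flow of steps (3)-(7) might be organized as in the following sketch. It is only an assumption about one possible implementation: helper functions such as capture_audio, extract_voiceprint, to_waveform, waveform_similar, play_voice and show_interaction are hypothetical and are not defined by this disclosure (a possible waveform_similar is sketched later in the detailed description).

```python
def interaction_loop(device):
    # Illustrative sketch of steps (3)-(7); every helper used here is hypothetical.
    while True:
        # Step (3): receive voice information and compare the wake-up waveform
        # with the second signal waveform; repeat until they are similar.
        wake_waveform = to_waveform(capture_audio())
        if not waveform_similar(wake_waveform, device.second_signal_waveform):
            continue

        # Step (4): the wake-up waveform is judged to be a wake-up signal,
        # so the voice recognition module is started.
        while True:
            # Step (4.1): wait for the next piece of voice information.
            voice_info = capture_audio()

            # Step (5): recognize the voiceprint and the voice input waveform.
            voiceprint = extract_voiceprint(voice_info)
            voice_waveform = to_waveform(voice_info)
            if voiceprint not in device.voiceprint_store:
                handle_unknown_user(device, voice_waveform)   # steps (8)-(9), see below
                break

            # Step (6): compare against the first signal waveforms in the matched unit.
            unit = device.first_units[voiceprint]
            match = next((e for e in unit.entries
                          if waveform_similar(voice_waveform, e[0])), None)
            if match is None:
                continue   # not similar: back to step (4.1)

            # Step (7): output the paired first voice output and interaction information.
            _, voice_output, interaction = match
            play_voice(voice_output)
            show_interaction(interaction)
            break
```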
Further, step (5) also includes: the control chip checks whether the currently recognized voiceprint information exists in the voiceprint storage unit, and if it does not, proceeds to step (8).
(8) The control chip compares the current voice waveform with the third signal waveform; if the current voice waveform is similar to the third signal waveform, proceed to step (9).
(9) The voice output module outputs the third voice output information corresponding to the third signal waveform, and the display device outputs the third interaction information corresponding to the third signal waveform.
In this way, an interaction effect is still produced when a user whose voiceprint information is not stored in the voiceprint storage unit interacts with the interaction device.
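A matching sketch of the fallback path in steps (8)-(9), under the same hypothetical helper names as above:

```python
def handle_unknown_user(device, voice_waveform):
    # Steps (8)-(9): the voiceprint is not in the voiceprint storage unit, so the
    # current voice waveform is compared with the third signal waveform instead.
    if waveform_similar(voice_waveform, device.third_signal_waveform):
        play_voice(device.third_voice_output)        # third voice output information
        show_interaction(device.third_interaction)   # third interaction information
```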
Drawings
FIG. 1 is a schematic diagram of an interactive device using the present invention.
FIG. 2 is a flow chart of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
As shown in figures 1-2, in this interaction method the interaction device comprises a control chip 1, a voice recognition module, a voice output module 2 and a display device 3; the voice recognition module comprises a voice wake-up unit 41, a voice recognition unit 42 and a voiceprint recognition unit 43. The voice wake-up unit 41 is configured to recognize a wake-up signal and convert the wake-up signal into a wake-up waveform; the voice recognition unit 42 is used for recognizing voice input information and converting it into a voice waveform; the voiceprint recognition unit 43 is used for recognizing voiceprint information.
The control chip is provided with a voiceprint storage unit, a second storage unit and one or more first storage units, each first storage unit corresponding to one piece of voiceprint information. The second storage unit stores a second signal waveform, and the control chip is used for comparing the wake-up waveform with the second signal waveform and starting the voice recognition unit and the voiceprint recognition unit according to the comparison result. The first storage unit stores a first signal waveform, first voice output information and first interaction information, the first signal waveform being arranged in correspondence with the first voice output information and the first interaction information. The control chip is used for comparing the voice waveform with the first signal waveform and, according to the comparison result, controlling the voice output module to output the first voice output information of the current first storage unit and controlling the display device to output the first interaction information of the current first storage unit.
The control chip also comprises a third storage unit. The third storage unit stores a third signal waveform, third voice output information and third interaction information. The control chip is used for comparing the voice waveform with the third signal waveform and, according to the comparison result, controlling the voice output module to output the third voice output information and the display device to output the third interaction information.
The interaction method comprises the following steps:
(1) Pre-record voiceprint information, with each piece of voiceprint information corresponding to one first storage unit.
(2) Pre-record a first signal waveform in the first storage unit, and record the voice output information and interaction information corresponding to that first signal waveform.
(3) The voice wake-up unit receives voice information in real time and converts the received voice information into a wake-up waveform; the control chip compares the current wake-up waveform with the second signal waveform. If the current wake-up waveform is similar to the second signal waveform, proceed to step (4); if not, repeat step (3).
(4) The control chip judges that the current wake-up waveform is a wake-up signal, and controls the voice recognition module to start.
(4.1) The voice recognition module waits for voice information; when voice information is received, proceed to step (5).
(5) The voiceprint recognition unit recognizes the voiceprint information in the current voice information, and the voice recognition unit recognizes the voice input information in the current voice information and converts it into a voice waveform. The control chip then checks whether the currently recognized voiceprint information exists in the voiceprint storage unit; if so, proceed to step (6).
(6) Identify the first storage unit matched with the voiceprint information, and the control chip compares the current voice waveform with the first signal waveform in that first storage unit. If the current voice waveform is similar to the first signal waveform, proceed to step (7); if not, return to step (4.1).
(7) The voice output module outputs the first voice output information corresponding to the first signal waveform, and the display device outputs the first interaction information corresponding to the first signal waveform.
With this method, the voice recognition unit recognizes voice input information only after the control chip has recognized the wake-up signal, which keeps the recognition accuracy high. After the voiceprint information in the voice information is recognized, the control chip identifies, among the plurality of first storage units, the first storage unit matched with the current voiceprint information; the voice input information is then recognized and converted into a voice waveform, and the first voice output information and first interaction information corresponding to the current voice waveform in the current first storage unit are output. Different users are thus distinguished by their voiceprint information, and the information in the first storage unit corresponding to each user is output according to that user's voice input, producing different interaction effects for different users. The same user can also input different voice input information and obtain different interaction effects. In addition, if the current voice waveform is not similar to the first signal waveform, the voice information can be input again, which improves the reliability of the recognition result.
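The disclosure repeatedly relies on judging whether two waveforms are "similar" but does not specify how that judgment is made. Purely as an assumption, a conventional choice such as a thresholded normalized cross-correlation could serve; the sketch below illustrates that choice and is not the claimed comparison method.

```python
import numpy as np

def waveform_similar(a, b, threshold=0.8):
    # Hypothetical similarity test: the peak of the normalized cross-correlation
    # between the two waveforms must reach the threshold. The threshold value
    # and the use of cross-correlation are assumptions, not part of the patent.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    corr = np.correlate(a, b, mode="full") / min(len(a), len(b))
    return float(corr.max()) >= threshold
```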
Further, step (5) also includes: the control chip checks whether the currently recognized voiceprint information exists in the voiceprint storage unit, and if it does not, proceeds to step (8).
(8) The control chip compares the current voice waveform with the third signal waveform; if the current voice waveform is similar to the third signal waveform, proceed to step (9).
(9) The voice output module outputs the third voice output information corresponding to the third signal waveform, and the display device outputs the third interaction information corresponding to the third signal waveform. In this way, an interaction effect is still produced when a user whose voiceprint information is not stored in the voiceprint storage unit interacts with the interaction device.
Step (1) further comprises presetting a counting threshold. Step (6) further comprises incrementing a count each time the current voice waveform is not similar to the first signal waveform before returning to step (4.1); if the count value reaches the counting threshold, proceed to step (10).
(10) Send a signal prompting the addition of a new first signal waveform.
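The counting-threshold refinement of steps (1), (6), (4.1) and (10) could be folded into the matching step as in the following sketch; the threshold value of 3 and the prompt_to_record_new_waveform helper are assumptions, and the other helper names are the same hypothetical ones used above.

```python
def match_with_count(device, unit, counting_threshold=3):
    # Sketch of the refinement: each time the current voice waveform is not
    # similar to any first signal waveform, increment a count before returning
    # to step (4.1); once the count reaches the preset threshold, step (10)
    # prompts the user to add a new first signal waveform.
    count = 0
    while True:
        voice_waveform = to_waveform(capture_audio())            # step (4.1)
        match = next((e for e in unit.entries
                      if waveform_similar(voice_waveform, e[0])), None)
        if match is not None:
            return match                                         # proceed to step (7)
        count += 1
        if count >= counting_threshold:
            prompt_to_record_new_waveform(unit)                  # step (10)
            return None
```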

Claims (3)

1. An interaction method, characterized in that: the interaction device comprises a control chip, a voice recognition module, a voice output module and a display device; the voice recognition module comprises a voice wake-up unit, a voice recognition unit and a voiceprint recognition unit; the voice wake-up unit is used for recognizing a wake-up signal and converting the wake-up signal into a wake-up waveform; the voice recognition unit is used for recognizing voice input information and converting the voice input information into a voice waveform; the voiceprint recognition unit is used for recognizing voiceprint information;
the control chip is provided with a voiceprint storage unit, a second storage unit and one or more first storage units, each first storage unit corresponding to one piece of voiceprint information; the second storage unit stores a second signal waveform, and the control chip is used for comparing the wake-up waveform with the second signal waveform and starting the voice recognition unit and the voiceprint recognition unit according to the comparison result; the first storage unit stores a first signal waveform, first voice output information and first interaction information, the first signal waveform being arranged in correspondence with the first voice output information and the first interaction information; the control chip is used for comparing the voice waveform with the first signal waveform and, according to the comparison result, controlling the voice output module to output the first voice output information of the current first storage unit and controlling the display device to output the first interaction information of the current first storage unit;
the control chip also comprises a third storage unit; the third storage unit stores a third signal waveform, third voice output information and third interaction information; the control chip is used for comparing the voice waveform with the third signal waveform, controlling the voice output module to output third voice output information and the display device to output third interaction information according to the comparison result;
the interaction method comprises the following steps:
(1) pre-recording voiceprint information, one piece of voiceprint information corresponding to one first storage unit;
(2) pre-recording a first signal waveform in the first storage unit, and recording the voice output information and interaction information corresponding to that first signal waveform;
(3) the voice wake-up unit receives voice information in real time and converts the received voice information into a wake-up waveform; the control chip compares the current wake-up waveform with the second signal waveform; if the current wake-up waveform is similar to the second signal waveform, performing step (4); if the current wake-up waveform is not similar to the second signal waveform, repeating step (3);
(4) the control chip judges that the current wake-up waveform is a wake-up signal and starts the voice recognition module;
(4.1) the voice recognition module waits for voice information; when voice information is received, proceeding to step (5);
(5) the voiceprint recognition unit recognizes the voiceprint information in the current voice information, and the voice recognition unit recognizes the voice input information in the current voice information and converts it into a voice waveform; the control chip checks whether the currently recognized voiceprint information exists in the voiceprint storage unit, and if so, proceeds to step (6);
(6) the first storage unit matched with the voiceprint information is identified, and the control chip compares the current voice waveform with the first signal waveform in the current first storage unit; if the current voice waveform is similar to the first signal waveform, performing step (7); if the current voice waveform is not similar to the first signal waveform, returning to step (4.1);
(7) the voice output module outputs the first voice output information corresponding to the first signal waveform, and the display device outputs the first interaction information corresponding to the first signal waveform.
2. An interaction method as claimed in claim 1, characterized in that: step (5) further comprises the control chip checking whether the currently recognized voiceprint information exists in the voiceprint storage unit, and if it does not, proceeding to step (8);
(8) the control chip compares the current voice waveform with the third signal waveform; if the current voice waveform is similar to the third signal waveform, performing step (9);
(9) the voice output module outputs the third voice output information corresponding to the third signal waveform, and the display device outputs the third interaction information corresponding to the third signal waveform.
3. An interaction method as claimed in claim 1, characterized in that: step (1) further comprises presetting a counting threshold; step (6) further comprises incrementing a count each time the current voice waveform is not similar to the first signal waveform before returning to step (4.1); if the count value reaches the counting threshold, performing step (10);
(10) sending a signal prompting the addition of a new first signal waveform.
CN202111541981.7A 2021-12-16 2021-12-16 Interaction method Pending CN114187915A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111541981.7A CN114187915A (en) 2021-12-16 2021-12-16 Interaction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111541981.7A CN114187915A (en) 2021-12-16 2021-12-16 Interaction method

Publications (1)

Publication Number Publication Date
CN114187915A (en) 2022-03-15

Family

ID=80605279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111541981.7A Pending CN114187915A (en) 2021-12-16 2021-12-16 Interaction method

Country Status (1)

Country Link
CN (1) CN114187915A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023109415A1 (en) * 2021-12-16 2023-06-22 广州城市理工学院 Holographic interactive system


Similar Documents

Publication Publication Date Title
US10733978B2 (en) Operating method for voice function and electronic device supporting the same
CN104254884B (en) Low-power integrated-circuit for analyzing digitized audio stream
CN110689889B (en) Man-machine interaction method and device, electronic equipment and storage medium
CN108986826A (en) Automatically generate method, electronic device and the readable storage medium storing program for executing of minutes
CN108831477B (en) Voice recognition method, device, equipment and storage medium
TW200832239A (en) Method for character recognition
CN103559880B (en) Voice entry system and method
CN110534109B (en) Voice recognition method and device, electronic equipment and storage medium
CN110047481A (en) Method for voice recognition and device
CN111326154B (en) Voice interaction method and device, storage medium and electronic equipment
CN111161726B (en) Intelligent voice interaction method, device, medium and system
CN110544468B (en) Application awakening method and device, storage medium and electronic equipment
CN109032345A (en) Apparatus control method, device, equipment, server-side and storage medium
CN109785834B (en) Voice data sample acquisition system and method based on verification code
WO2021128846A1 (en) Electronic file control method and apparatus, and computer device and storage medium
US20220269724A1 (en) Audio playing method, electronic device, and storage medium
CN114187915A (en) Interaction method
WO2020024415A1 (en) Voiceprint recognition processing method and apparatus, electronic device and storage medium
CN114582333A (en) Voice recognition method and device, electronic equipment and storage medium
CN112581967B (en) Voiceprint retrieval method, front-end back-end server and back-end server
CN111506183A (en) Intelligent terminal and user interaction method
CN107885482A (en) Audio frequency playing method, device, storage medium and electronic equipment
CN108989551B (en) Position prompting method and device, storage medium and electronic equipment
US9894193B2 (en) Electronic device and voice controlling method
CN111625636B (en) Method, device, equipment and medium for rejecting man-machine conversation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination