CN108806686B - Starting control method of voice question searching application and family education equipment - Google Patents


Info

Publication number
CN108806686B
Authority
CN
China
Prior art keywords
voice
family education
question
search
education equipment
Prior art date
Legal status
Active
Application number
CN201810747125.9A
Other languages
Chinese (zh)
Other versions
CN108806686A (en
Inventor
徐杨
Current Assignee
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201810747125.9A priority Critical patent/CN108806686B/en
Publication of CN108806686A publication Critical patent/CN108806686A/en
Application granted granted Critical
Publication of CN108806686B publication Critical patent/CN108806686B/en


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 17/22 Speaker identification or verification: interactive procedures; man-machine interfaces
    • G10L 25/54 Speech or voice analysis specially adapted for comparison or discrimination, for retrieval
    • G10L 25/63 Speech or voice analysis specially adapted for comparison or discrimination, for estimating an emotional state
    • G10L 2015/223 Execution procedure of a spoken command
    • G10L 2015/225 Feedback of the input speech

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Psychiatry (AREA)
  • Hospice & Palliative Care (AREA)
  • General Health & Medical Sciences (AREA)
  • Child & Adolescent Psychology (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

A start control method for a voice question searching application, and a family education device, are provided. The method comprises the following steps: the family education device listens for voice signals from its surroundings; the family education device detects whether a voice signal contains the wake-up word for waking up the voice question searching application of the family education device; if it does, the family education device identifies, according to the voice signal, the gender of the external user who uttered it; the family education device obtains the instant emotion type of the external user; the family education device then displays the voice question searching interface of the voice question searching application and shows on that interface a virtual animal matched to the user's gender; the virtual animal outputs, in a tone matched to the instant emotion type, a response voice corresponding to the question-searching voice uttered by the external user. Embodiments of the invention help stimulate the interest of primary and secondary school students in searching questions.

Description

Starting control method of voice question searching application and family education equipment
Technical Field
The invention relates to the technical field of family education devices, and in particular to a start control method for a voice question searching application, and a family education device.
Background
At present, more and more primary and secondary school students use family education devices (such as home tutoring machines) to assist their learning. In practical use, a student can start the voice question searching application in the family education device, search for a question by voice, and answer the retrieved question to assist learning. In practice, however, after the voice question searching application is started, the family education device usually shows a rather monotonous voice question searching interface, which does little to stimulate students' interest in searching questions.
Disclosure of Invention
Embodiments of the invention disclose a start control method for a voice question searching application, and a family education device, which help stimulate the interest of primary and secondary school students in searching questions.
A first aspect of the embodiments of the invention discloses a start control method for a voice question searching application, comprising the following steps:
the family education device listens for voice signals from its surroundings;
the family education device detects whether a voice signal contains the wake-up word for waking up the voice question searching application of the family education device;
if the voice signal contains the wake-up word, the family education device identifies, according to the voice signal, the gender of the external user who uttered it;
the family education device obtains the instant emotion type of the external user;
the family education device displays the voice question searching interface of the voice question searching application and shows on that interface a virtual animal matched to the user's gender; the virtual animal outputs, in a tone matched to the instant emotion type, a response voice corresponding to the question-searching voice uttered by the external user.
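The steps above can be sketched end to end in Python. Every value in this sketch is an illustrative assumption, not taken from the patent: the wake-up word, the 165 Hz pitch cut-off, and the gender-to-animal and emotion-to-tone mappings.

```python
def start_control(transcript: str, pitch_hz: float, emotion_type: str):
    """Illustrative sketch of the start-control flow described above.

    transcript: ASR text of the monitored voice signal (assumed input).
    pitch_hz: fundamental frequency of the signal (assumed feature).
    emotion_type: "low" or "high", however it was obtained.
    """
    wake_word = "xiaobu"  # hypothetical wake-up word
    if wake_word not in transcript.lower():
        return None  # no wake-up word: do not start the application
    # Gender from pitch; the 165 Hz cut-off is an assumption.
    gender = "male" if pitch_hz < 165.0 else "female"
    # Avatar the gender typically prefers, and a tone intended to lift
    # the user's mood (both mappings are assumptions).
    animal = {"male": "dog", "female": "cat"}[gender]
    tone = {"low": "happy", "high": "calm"}[emotion_type]
    return {"animal": animal, "tone": tone}
```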
As an optional implementation, in the first aspect of the embodiments of the invention, the identifying, by the family education device, of the gender of the external user who uttered the voice signal includes:
the family education device extracts the sound features of the voice signal;
and the family education device identifies, according to the sound features of the voice signal, the gender of the external user who uttered it.
As an optional implementation, in the first aspect of the embodiments of the invention, the obtaining, by the family education device, of the instant emotion type of the external user includes:
the family education device identifies the instant emotion type of the external user according to the intonation of the voice signal;
or, the obtaining, by the family education device, of the instant emotion type of the external user includes:
the family education device detects whether a wearable device is currently connected to it wirelessly;
if a wearable device is currently connected, the family education device acquires the sound features of the wearer of the wearable device;
the family education device verifies whether the sound features of the wearer of the wearable device match the sound features of the voice signal;
if they match, the family education device notifies the wearable device to acquire the wearer's current heart rate data and current blood pressure data, and the wearable device determines the wearer's instant emotion type from those data;
and the family education device receives the wearer's instant emotion type sent by the wearable device and takes it as the instant emotion type of the external user.
As an optional implementation manner, in the first aspect of this embodiment of the present invention, the method further includes:
the family education device listens for the question-searching voice uttered by the external user;
the family education device obtains the search result corresponding to the question-searching voice;
the family education device outputs the search result corresponding to the question-searching voice to the voice question searching interface for display;
and the family education device controls the virtual animal to output, in the tone matched to the instant emotion type, a response voice corresponding to the question-searching voice uttered by the external user.
As an optional implementation manner, in the first aspect of this embodiment of the present invention, the method further includes:
the family education device queries whether it stores student personal attribute information corresponding to the sound features of the voice signal; the student personal attribute information comprises at least the student's identity information and a target timetable for the student, and the target timetable comprises at least the different subjects the student studies and the teacher terminal identifier corresponding to each subject;
if such student personal attribute information is stored, the family education device queries the target timetable, according to the search result, for the target subject corresponding to the search result, and then queries the target timetable for the target teacher terminal identifier corresponding to that subject;
the family education device converts the question-searching voice into question text and reports the student identity information, the question text, and the target teacher terminal identifier to a cloud platform, so that the cloud platform, according to the target teacher terminal identifier, sends the student identity information and the question text to the teacher terminal to which that identifier belongs.
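The timetable lookup and cloud report above can be sketched as follows. The field names and the timetable layout are assumptions for illustration; the patent only specifies which pieces of information are reported.

```python
def build_cloud_report(student_info: dict, question_text: str, subject: str) -> dict:
    """Assemble the payload the device reports to the cloud platform.

    student_info is a hypothetical record shaped like:
      {"student_id": ..., "timetable": {subject: teacher_terminal_id, ...}}
    """
    # Look up the teacher terminal identifier for the question's subject.
    teacher_terminal_id = student_info["timetable"][subject]
    return {
        "student_id": student_info["student_id"],
        "question_text": question_text,
        "teacher_terminal_id": teacher_terminal_id,
    }
```

The cloud platform would then route `question_text` and `student_id` to the terminal named by `teacher_terminal_id`.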
A second aspect of an embodiment of the present invention discloses a family education device, including:
the monitoring unit, configured to listen for voice signals from the surroundings;
the detection unit, configured to detect whether a voice signal contains the wake-up word for waking up the voice question searching application of the family education device;
the recognition unit, configured to identify, according to the voice signal, the gender of the external user who uttered it when the detection unit detects that the voice signal contains the wake-up word;
the first acquisition unit, configured to obtain the instant emotion type of the external user;
the control unit, configured to display the voice question searching interface of the voice question searching application and show on that interface a virtual animal matched to the user's gender; the virtual animal outputs, in a tone matched to the instant emotion type, a response voice corresponding to the question-searching voice uttered by the external user.
As an optional implementation, in the second aspect of the embodiments of the invention, the recognition unit is specifically configured to extract the sound features of the voice signal when the detection unit detects that the voice signal contains the wake-up word for waking up the voice question searching application of the family education device, and to identify, according to those sound features, the gender of the external user who uttered the voice signal.
As an optional implementation, in the second aspect of the embodiments of the invention, the first acquisition unit is specifically configured to identify the instant emotion type of the external user according to the intonation of the voice signal;
alternatively, the first acquiring unit includes:
the detection subunit is used for detecting whether the family education equipment is wirelessly connected with wearable equipment currently;
the acquisition subunit is used for acquiring the sound characteristics of the wearer of the wearable device when the detection subunit detects that the family education device is wirelessly connected with the wearable device currently;
the verification subunit, configured to verify whether the sound features of the wearer of the wearable device match the sound features of the voice signal;
the interaction subunit, configured to, when the verification subunit finds a match, notify the wearable device to acquire the wearer's current heart rate data and current blood pressure data, the wearable device then determining the wearer's instant emotion type from those data;
the interaction subunit being further configured to receive the wearer's instant emotion type sent by the wearable device and take it as the instant emotion type of the external user.
As an alternative implementation, in the second aspect of the embodiment of the present invention:
the monitoring unit is further configured to listen for the question-searching voice uttered by the external user;
the family education device further includes a second acquisition unit, wherein:
the second obtaining unit is used for obtaining a question searching result corresponding to the question searching voice;
the control unit is further configured to output the search result corresponding to the question-searching voice to the voice question searching interface for display, and to control the virtual animal to output, in the tone matched to the instant emotion type, a response voice corresponding to the question-searching voice uttered by the external user.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the family education device further includes:
the query unit, configured to query whether student personal attribute information corresponding to the sound features of the voice signal is stored; the student personal attribute information comprises at least the student's identity information and a target timetable for the student, the target timetable comprising at least the different subjects the student studies and the teacher terminal identifier corresponding to each subject; and, if such information is stored, to query the target timetable, according to the search result, for the target subject corresponding to the search result, and then for the target teacher terminal identifier corresponding to that subject;
and the conversion unit, configured to convert the question-searching voice into question text and report the student identity information, the question text, and the target teacher terminal identifier to a cloud platform, so that the cloud platform, according to the target teacher terminal identifier, sends the student identity information and the question text to the teacher terminal to which that identifier belongs.
A third aspect of an embodiment of the present invention discloses a family education apparatus, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the starting control method of the voice question searching application disclosed by the first aspect of the embodiment of the invention.
A fourth aspect of the present invention discloses a computer-readable storage medium storing a computer program, where the computer program enables a computer to execute the method for controlling the start of the speech question searching application disclosed in the first aspect of the present invention.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in embodiments of the invention, when the family education device detects that an externally uttered voice signal contains the wake-up word for waking up its voice question searching application, the device identifies, according to the voice signal, the gender of the external user who uttered it; the device then obtains the user's instant emotion type, displays the voice question searching interface of the application, and shows on that interface a virtual animal matched to the user's gender, the virtual animal outputting, in a tone matched to the instant emotion type, a response voice corresponding to the user's question-searching voice. By implementing the embodiments, the family education device can therefore load, on the displayed voice question searching interface, a virtual animal matched to the external user's gender, which answers the user's question-searching voice in a tone matched to the user's instant emotion, helping stimulate the interest of primary and secondary school students in searching questions.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a method for controlling the start of a speech question search application according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating another method for controlling the start of a speech question searching application according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a family education device according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another family education device disclosed in the embodiment of the present invention;
FIG. 5 is a schematic structural diagram of another family education device disclosed in the embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a starting control method of a voice question searching application and family education equipment, which are beneficial to exciting the interest of primary and secondary school students in searching questions. The following are detailed below.
Example one
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for controlling the start of a voice topic search application according to an embodiment of the present invention. As shown in fig. 1, the method for controlling the start of the speech question searching application may include the following steps:
101. The family education device listens for voice signals from its surroundings.
In the embodiment of the invention, after the family education device is powered on, its voice listening function can be enabled, and through that function the device listens in real time for voice signals from its surroundings.
Optionally, in the embodiment of the invention, after the family education device is powered on, it may detect whether it has received a first gesture track, input by an external user on its display screen, for enabling the voice listening function; if the first track is received, the voice listening function is enabled. Correspondingly, the device may also detect whether it has received a second gesture track, input on the display screen, for disabling the voice listening function; if the second track is received, the voice listening function is disabled. Keeping the listening function off when it is not needed reduces the device's power consumption.
102. The family education device detects whether the voice signal contains the wake-up word for waking up its voice question searching application; if the voice signal contains the wake-up word, steps 103 to 105 are executed; if it does not, the process ends.
For example, the family education device may detect whether the voice signal contains the wake-up word "Xiaobu" (小布) for waking up its voice question searching application; if the voice signal contains "Xiaobu", step 103 is executed; if the voice signal does not contain "Xiaobu", the process ends.
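The wake-up-word check can be sketched as a case-insensitive match on an ASR transcript. This is a simplification for illustration: real devices typically run keyword spotting on the raw audio, and the wake-up word here is a hypothetical stand-in.

```python
def contains_wake_word(transcript: str, wake_word: str = "xiaobu") -> bool:
    """Return True if the recognized text contains the wake-up word.

    Case-insensitive substring match; an illustrative simplification of
    on-device keyword spotting.
    """
    return wake_word.lower() in transcript.lower()
```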
103. And the family education equipment identifies the user gender of the external user sending the voice signal according to the voice signal.
In the embodiment of the invention, the family education device can extract the sound features of the voice signal, and from those features identify the gender of the external user who uttered it.
For example, the family education device can extract the pitch (a sound feature) of the voice signal and identify the speaker's gender from it. Male vocal cords are longer, wider, and thicker, so they vibrate at a lower frequency and produce a lower pitch; female vocal cords are shorter, thinner, and narrower, so they vibrate at a higher frequency and produce a higher pitch. Therefore, if the pitch of the voice signal is low, the family education device can determine that the external user who uttered it is male; if the pitch is high, that the user is female.
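The pitch heuristic just described can be sketched as a single threshold. The 165 Hz cut-off is an illustrative value drawn from typical adult speaking ranges (roughly 85-180 Hz for men, 165-255 Hz for women); the patent does not state a numeric threshold.

```python
def classify_gender_by_pitch(f0_hz: float, cutoff_hz: float = 165.0) -> str:
    """Classify speaker gender from fundamental frequency (pitch).

    Lower pitch maps to male and higher pitch to female, per the
    vocal-cord reasoning above; the cut-off value is an assumption.
    """
    return "male" if f0_hz < cutoff_hz else "female"
```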
104. And the family education equipment acquires the instant emotion type of the external user.
As an optional implementation, in the embodiment of the invention, the family education device may identify the instant emotion type of the external user according to the intonation of the voice signal; the instant emotion types of external users may include a low emotion type and a high emotion type. For example, when the intonation of the voice signal is high-pitched, the family education device can recognize the external user's instant emotion type as the high emotion type; when the intonation is low, as the low emotion type.
As another optional implementation, in the embodiment of the invention, the obtaining, by the family education device, of the instant emotion type of the external user includes:
the family education device detects whether a wearable device is currently connected to it wirelessly;
if a wearable device is currently connected, the family education device acquires the sound features of the wearer of the wearable device;
the family education device verifies whether the sound features of the wearer match the sound features of the voice signal;
if they match, the family education device can notify the wearable device to acquire the wearer's current heart rate data and current blood pressure data, and the wearable device determines the wearer's instant emotion type from those data: when the current heart rate data exceeds a specified heart rate threshold and the current blood pressure data exceeds a specified blood pressure threshold, the wearable device identifies the wearer's instant emotion type as the high emotion type; when the current heart rate data does not exceed the heart rate threshold and the current blood pressure data does not exceed the blood pressure threshold, as the low emotion type;
and the family education device receives the wearer's instant emotion type sent by the wearable device and takes it as the instant emotion type of the external user.
By implementing this embodiment, the wearable device worn by the external user can accurately identify the user's instant emotion type and send it to the family education device, so that the family education device obtains an accurate instant emotion type for the external user.
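The threshold rule the wearable device applies can be sketched as below. The numeric thresholds are illustrative, since the patent only speaks of "specified" heart rate and blood pressure thresholds, and it leaves mixed readings (one above, one below) undefined.

```python
def classify_emotion(heart_rate_bpm: float, systolic_mmhg: float,
                     hr_threshold: float = 100.0, bp_threshold: float = 130.0) -> str:
    """Map wearable readings to the two instant emotion types above.

    Threshold values are assumptions for illustration.
    """
    if heart_rate_bpm > hr_threshold and systolic_mmhg > bp_threshold:
        return "high"      # both readings elevated: high emotion type
    if heart_rate_bpm <= hr_threshold and systolic_mmhg <= bp_threshold:
        return "low"       # both readings at or below threshold: low emotion type
    return "undefined"     # mixed readings: not covered by the patent
```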
105. The family education device displays the voice question searching interface of the voice question searching application and shows on that interface a virtual animal matched to the user's gender; the virtual animal outputs, in a tone matched to the instant emotion type, a response voice corresponding to the question-searching voice uttered by the external user.
For example, the virtual animal matching the user's gender may be a virtual animal generally preferred by users of that gender. For example, when the user's gender is male, the matching virtual animal may be a virtual dog, a virtual tortoise, and the like; when the user's gender is female, it may be a virtual cat, a virtual bird, and the like.
It should be noted that the virtual animal matching the user's gender may also be any virtual animal in a set of virtual animals pre-configured by the family education device for that gender; the embodiment of the invention is not limited in this respect.
In the embodiment of the invention, the tone matched with the instant emotion type has the main function of improving the instant emotion of an external user. For example, when the instant emotion type is a low emotion type, the virtual animal may be configured to output a response voice corresponding to a search question voice uttered by the external user according to a happy tone matched with the low emotion type; or, when the instant emotion type is an emotional upsurge type, the virtual animal may be configured to output a response voice corresponding to the search question voice sent by the external user according to a peace intonation matched with the emotional upsurge type.
In the embodiment of the present invention, the response voice may be an output voice obtained by performing text-to-speech conversion on the search-question result corresponding to the search-question voice uttered by the external user. For example, when the search-question voice uttered by the external user is "what is the radical of the character for 'wear'", the search-question result may be "the radical of the character for 'wear' is 'ten'", and accordingly the response voice can be the output voice obtained by performing text-to-speech conversion on that search-question result. For another example, when the search-question voice uttered by the external user is "recommend good sentences describing spring; I want to write a composition", the search-question result may be "fine rain like silk, spring rain like oil", and accordingly the response voice may be the output voice obtained by performing text-to-speech conversion on that search-question result.
In this embodiment of the present invention, the response voice may also be a learning-encouragement voice for the search-question result (e.g., a knowledge-point video) corresponding to the search-question voice uttered by the external user. For example, when the search-question voice uttered by the external user is "I want to watch videos related to rational numbers", the search-question result may be videos related to rational numbers, and accordingly the response voice may be a learning-encouragement voice for those videos, such as "I have found the videos related to rational numbers for you; keep up the good work in your studies".
Therefore, by implementing the method described in fig. 1, the family education device can load the virtual animal matching the user gender of the external user on the displayed voice question searching interface, so that the virtual animal can output the response voice corresponding to the question searching voice sent by the external user according to the tone matching the instant emotion type of the external user, thereby being beneficial to arousing the interest of primary and secondary school students in searching questions.
Example two
Referring to fig. 2, fig. 2 is a flowchart illustrating another method for controlling the start of a speech question searching application according to an embodiment of the present invention. As shown in fig. 2, the method for controlling the start of the speech question searching application may include the following steps:
201. The family education equipment listens for a voice signal sent from the outside.
202. The family education equipment detects whether the voice signal contains a wake-up word for waking up the voice question searching application of the family education equipment; if the voice signal contains such a wake-up word, steps 203 to 210 are executed; if the voice signal does not contain such a wake-up word, the process ends.
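Step 202 can be sketched as a simple containment check on the text recognized from the listened-to voice signal. The wake-word list below is a hypothetical example (the patent does not name the actual wake-up words), and speech-to-text recognition is assumed to happen upstream:

```python
# Hedged sketch of the wake-word check in step 202.
# WAKE_WORDS is an assumed placeholder list, not the patent's actual words.
WAKE_WORDS = {"hello tutor", "search question"}

def contains_wake_word(recognized_text: str) -> bool:
    """Return True if the recognized text contains any configured wake-up word."""
    text = recognized_text.lower()
    return any(word in text for word in WAKE_WORDS)
```

If the check returns True, the flow proceeds to gender recognition (step 203); otherwise the device keeps listening.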
203. And the family education equipment identifies the user gender of the external user sending the voice signal according to the voice signal.
In the embodiment of the invention, the family education equipment can extract the sound characteristics of the voice signal according to the voice signal; and the family education equipment can identify the user gender of the external user who sends the voice signal according to the voice characteristics of the voice signal.
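One very rough way to realize the gender recognition described above is to estimate an average pitch from the raw samples and compare it to a boundary frequency. Both the zero-crossing pitch estimator and the 165 Hz boundary below are assumptions for illustration; the patent does not specify the actual classifier:

```python
# Hedged sketch of pitch-based gender recognition (steps 203/recognition).
# The zero-crossing estimator and the boundary frequency are assumptions.

def average_pitch_hz(samples, sample_rate):
    """Crude fundamental-frequency estimate: count sign changes in the signal."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0))
    duration = len(samples) / sample_rate
    return crossings / (2.0 * duration)  # two crossings per cycle

def infer_user_gender(samples, sample_rate, boundary_hz=165.0):
    """Map estimated pitch to a user gender (boundary_hz is an assumed cut-off)."""
    return "female" if average_pitch_hz(samples, sample_rate) > boundary_hz else "male"
```

A production system would use a trained voiceprint model rather than a single pitch value; this only illustrates the "sound feature, then gender" pipeline.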
204. The family education equipment detects whether the wearable equipment is wirelessly connected at present, and if yes, the sound characteristics of a wearer of the wearable equipment are obtained; verifying whether the sound characteristics of the wearer of the wearable device are matched with the sound characteristics of the voice signals or not, if so, informing the wearable device to acquire the current heart rate data of the wearer of the wearable device and the current blood pressure data of the wearer, and determining the instant emotion type of the wearer by the wearable device according to the current heart rate data of the wearer and the current blood pressure data of the wearer.
205. The family education device receives the instant emotion type of the wearer sent by the wearable device, and the instant emotion type serves as the instant emotion type of the external user.
206. The family education equipment displays a voice question searching interface of the voice question searching application, and outputs a virtual animal matched with the gender of the user on the voice question searching interface; the virtual animal is used for outputting response voice corresponding to the search question voice sent by the external user according to the tone matched with the instant emotion type.
207. The family education equipment monitors the question searching voice sent by the external user and obtains a question searching result corresponding to the question searching voice.
208. And the family education equipment outputs the question searching result corresponding to the question searching voice to a voice question searching interface for displaying.
209. And the family education equipment controls the virtual animal to output a response voice corresponding to the search question voice sent by the external user according to the tone matched with the instant emotion type.
210. The family education equipment inquires whether student personal attribute information corresponding to the sound features of the voice signals is stored or not; the student personal attribute information at least comprises student identity information and a target curriculum schedule corresponding to the student, and the target curriculum schedule at least comprises a plurality of different learning subjects learned by the student and teacher terminal identifications corresponding to each learning subject; if the personal attribute information of the student corresponding to the sound feature of the voice signal is stored, go to step 211; if the personal attribute information of the student corresponding to the sound feature of the voice signal is not stored, the flow is ended.
The teacher terminal identification can be a mobile phone number of the teacher terminal or account information of a teaching application installed on the teacher terminal.
211. The family education equipment queries a target learning subject corresponding to the search question result from a target curriculum schedule according to the search question result; and inquiring the target teacher terminal identification corresponding to the target learning subject from the target curriculum schedule.
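Steps 210 and 211 amount to a lookup from the search-question result to a learning subject, and from the subject to a teacher terminal identification stored in the target curriculum schedule. The schedule structure and the keyword matching below are assumptions, sketched to show the flow:

```python
# Hypothetical sketch of steps 210-211: subject and teacher-terminal lookup.
# The schedule contents, keywords, and identifiers are illustrative assumptions.
TARGET_SCHEDULE = {
    "chinese": {"teacher_terminal_id": "teacher-chinese-01",
                "keywords": ["radical", "character", "composition", "sentence"]},
    "math": {"teacher_terminal_id": "teacher-math-01",
             "keywords": ["rational number", "equation", "fraction"]},
}

def find_subject_and_teacher(search_result: str):
    """Query the target learning subject and its teacher terminal identification."""
    text = search_result.lower()
    for subject, entry in TARGET_SCHEDULE.items():
        if any(keyword in text for keyword in entry["keywords"]):
            return subject, entry["teacher_terminal_id"]
    return None, None  # no matching subject in the schedule
```

With the teacher terminal identification in hand, step 212 reports the student identity information and search-question words to the cloud platform.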
212. The home education equipment converts the search question voice into search question words, and reports the student identity information, the search question words and the target teacher terminal identification to the cloud platform, so that the cloud platform sends the student identity information and the search question words to the teacher terminal to which the target teacher terminal identification belongs according to the target teacher terminal identification.
In the embodiment of the present invention, by implementing the above steps 210 to 212, the teacher can discover, from the collected search-question words issued by students in the subject (e.g., Chinese) taught by that teacher, the problems those students encounter in that subject, so that the teacher can explain those problems to the students in a targeted manner, thereby helping to improve the students' learning efficiency.
As an optional implementation manner, in the embodiment of the present invention, after receiving the student identity information and the search-question words sent by the cloud platform, the teacher terminal to which the target teacher terminal identification belongs may prompt the teacher to award a virtual gift to the student to whom the student identity information belongs, in order to encourage the student to learn; the teacher terminal may send the virtual gift to the cloud platform, and the cloud platform may push the virtual gift to the family education device, thereby enabling the teacher to encourage the student and helping to improve the student's interest and motivation in learning.
As an optional implementation manner, in the embodiment of the present invention, the cloud platform may detect whether the total number of virtual gifts pushed to the family education device exceeds a specified number; if so, the cloud platform may determine a not-yet-activated user function of the voice question searching application (for example, an animation function performed by the virtual animal) corresponding to that total number, and activate that user function for the family education device. This enriches the available user functions of the voice question searching application and helps to improve the student's interest and motivation in learning.
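The gift-threshold rule can be sketched as a tier table mapping gift totals to unlockable functions. The tier values and function names below are illustrative assumptions; the patent only requires "exceeds a specified number":

```python
# Hedged sketch of the cloud platform's gift-threshold activation rule.
# Thresholds and function names are assumed placeholders.
GIFT_TIERS = [
    (10, "virtual_animal_animation"),  # unlocked once total gifts exceed 10
    (25, "extra_voice_tones"),         # unlocked once total gifts exceed 25
]

def functions_to_activate(total_gifts: int, already_active: set) -> list:
    """Return user functions whose gift threshold is exceeded and not yet active."""
    return [name for threshold, name in GIFT_TIERS
            if total_gifts > threshold and name not in already_active]
```

The cloud platform would call this each time a new gift is pushed and activate the returned functions on the family education device.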
As an optional implementation manner, in the embodiment of the present invention, the personal attribute information of the student may further include a guardian terminal identifier corresponding to the student. Accordingly, the method depicted in fig. 2 may further include:
The family education equipment reports the student identity information, the search-question words, and the guardian terminal identification to the cloud platform, so that the cloud platform sends the student identity information and the search-question words, according to the guardian terminal identification, to the guardian terminal to which that identification belongs. The guardian can then discover, from the collected search-question words issued by the student under his or her guardianship, the problems that student encounters in learning, and can further explain those problems to the student in a targeted manner, thereby helping to improve the student's learning efficiency.
As an alternative embodiment, in the method described in fig. 2, the family education device may further perform the following operations:
The family education equipment notifies the wearable device to turn on its recording microphone to monitor the ambient sound source. The wearable device can verify whether the monitored ambient sound source matches a child-crying sound source in the database; if it matches, the wearable device plays target music, which is used to attract and divert the student's attention and to relieve the student's emotions when the student encounters difficulty in learning.
Because the ambient sound source monitored by the recording microphone contains background noise, the wearable device can first perform discrete sampling and quantization on the noisy ambient sound source to obtain a data frame, construct a wavelet neural network based on the Morlet wavelet function, construct a particle-swarm fitness function over the parameters of the wavelet neural network, obtain the optimal parameters of the wavelet neural network through a particle swarm optimization algorithm, and input the data frame into the wavelet neural network for filtering, thereby removing the noise and extracting a voice signal. Further, the wearable device can check whether the voiceprint features of the extracted voice signal match the voiceprint features of the child-crying sound sources in the database; if they match, the wearable device plays the target music, which is used to attract and divert the student's attention and to relieve the student's emotions when the student encounters difficulty in learning. Implementing this embodiment can improve adaptability to the noise characteristics of different ambient sound sources.
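The full wavelet-neural-network-plus-particle-swarm pipeline is beyond a short sketch. The following shows only its core building block, stated plainly as a substitution: filtering a sampled data frame by convolving it with a Morlet wavelet kernel (the wavelet acts as a band-pass filter; no neural network or swarm optimization is implemented here, and the width and scale parameters are assumed values):

```python
import math

def morlet_kernel(width: int, scale: float):
    """Real part of a Morlet wavelet, exp(-t^2/2)*cos(5t), sampled and normalized."""
    ts = [(i - width) / scale for i in range(2 * width + 1)]
    k = [math.exp(-t * t / 2.0) * math.cos(5.0 * t) for t in ts]
    s = sum(abs(v) for v in k)
    return [v / s for v in k]

def filter_frame(frame, kernel):
    """Same-length convolution of a data frame with the (symmetric) wavelet kernel."""
    half = len(kernel) // 2
    out = []
    for i in range(len(frame)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(frame):
                acc += frame[idx] * kv
        out.append(acc)
    return out
```

In the patented scheme, the wavelet would instead serve as the activation function of a wavelet neural network whose parameters (scales, shifts, weights) are tuned by the particle swarm fitness function.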
The process by which the wearable device checks whether the voiceprint features of the extracted voice signal match the voiceprint features of the child-crying sound source in the database includes:
the wearable device carries out preprocessing on the extracted voice signals, wherein the preprocessing comprises pre-emphasis, framing and windowing processing;
the wearable device extracts the voiceprint features MFCC, LPCC, ΔMFCC, ΔLPCC, the first-order difference of energy, and GFCC from the preprocessed voice signal to jointly form a first multi-dimensional feature vector, where MFCC is the Mel-frequency cepstral coefficient, LPCC is the linear prediction cepstral coefficient, ΔMFCC is the first-order difference of MFCC, ΔLPCC is the first-order difference of LPCC, and GFCC is the Gammatone filter cepstral coefficient;
and the wearable device judges whether the first multi-dimensional feature vector completely matches the second multi-dimensional feature vector corresponding to the voiceprint features of the child-crying sound source in the database; if they completely match, it is verified that the voiceprint features of the extracted voice signal match the voiceprint features of the child-crying sound source in the database.
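The matching step above can be sketched as follows. The actual MFCC/LPCC/GFCC extraction is omitted (each would come from a signal-processing stage), and "complete match" is read here as element-wise equality within a small tolerance, which is an assumption:

```python
# Hedged sketch of the multi-dimensional feature-vector match.
# Feature extraction is omitted; the tolerance-based equality is an assumption.

def build_feature_vector(mfcc, lpcc, d_mfcc, d_lpcc, d_energy, gfcc):
    """Concatenate the six voiceprint features into one multi-dimensional vector."""
    return list(mfcc) + list(lpcc) + list(d_mfcc) + list(d_lpcc) + list(d_energy) + list(gfcc)

def vectors_match(first, second, tol=1e-6):
    """Check whether two feature vectors 'completely match' (same length, near-equal)."""
    return len(first) == len(second) and all(abs(a - b) <= tol for a, b in zip(first, second))
```

A deployed system would more likely use a statistical model (e.g., a distance threshold or a trained classifier) than exact equality, since two recordings never produce identical features.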
In the embodiment of the invention, the target music can be music preset in the wearable device for attracting and diverting the student's attention and relieving the student's emotions when the student encounters difficulty in learning; alternatively, the target music can be music acquired by the wearable device from the cloud for the same purpose, and the embodiment of the present invention is not limited in this respect.
Therefore, by implementing the method described in fig. 2, the family education device can load the virtual animal matching the user gender of the external user on the displayed voice question searching interface, so that the virtual animal can output the response voice corresponding to the question searching voice sent by the external user according to the tone matching the instant emotion type of the external user, thereby being beneficial to arousing the interest of primary and secondary school students in searching questions.
In addition, implementing the method described in fig. 2 enables the teacher to discover, from the collected search-question words issued by students in the subject (e.g., Chinese) taught by that teacher, the problems those students encounter in that subject, so that the teacher can further explain those problems to the students in a targeted manner, thereby helping to improve the students' learning efficiency.
In addition, the method described in fig. 2 can be implemented to relieve the students' emotions when they encounter difficulties in learning.
EXAMPLE III
Referring to fig. 3, fig. 3 is a schematic structural diagram of a family education device according to an embodiment of the present invention. As shown in fig. 3, the family education device may include:
the monitoring unit 301 is configured to monitor a voice signal sent by the outside;
a detecting unit 302, configured to detect whether the voice signal contains a wake-up word for waking up a voice question search application of the family education device;
a recognition unit 303, configured to recognize, according to the voice signal, a user gender of an external user who sends the voice signal when the detection unit 302 detects that the voice signal includes a wake-up word for waking up a voice search application of the family education device;
a first obtaining unit 304, configured to obtain an instant emotion type of an external user;
a control unit 305 for displaying a voice question searching interface of the voice question searching application and outputting a virtual animal matched with the gender of the user on the voice question searching interface; the virtual animal is used for outputting response voice corresponding to the search question voice sent by the external user according to the tone matched with the instant emotion type.
In the embodiment of the present invention, after the family education device is powered on, the voice interception function of the family education device may be started, and the interception unit 301 may intercept the voice signal sent from the outside in real time through the started voice interception function of the family education device.
Optionally, in the embodiment of the present invention, after the family education equipment is powered on, it may detect whether a first trajectory, input by an external user on the display screen of the family education equipment for starting the voice interception function, is received; if the first trajectory is received, the voice interception function of the family education equipment is started. Correspondingly, it may also detect whether a second trajectory, input by an external user on the display screen for closing the voice interception function, is received; if the second trajectory is received, the voice interception function may be closed. Starting the interception function only on demand and closing it when it is not needed can reduce the power consumption of the family education equipment.
In this embodiment of the present invention, the recognition unit 303 is specifically configured to, when the detection unit 302 detects that the voice signal includes a wake-up word for waking up a voice search application of the family education device, extract a sound feature of the voice signal according to the voice signal; and identifying the user gender of the external user sending the voice signal according to the voice characteristics of the voice signal. For example, the recognition unit 303 may extract a tone (belonging to a sound feature) of the voice signal according to the voice signal, and the recognition unit 303 may recognize the user gender of the external user who utters the voice signal according to the tone of the voice signal.
As an optional implementation manner, in the embodiment of the present invention, the first obtaining unit 304 may identify an instant emotion type of an external user according to a tone of the voice signal; the instant emotion types of the external users can include a low emotion type and a high emotion type. For example, when the intonation of the voice signal is a high pitch, the first obtaining unit 304 may recognize that the instant emotion type of the external user is an emotional upsurge type; when the intonation of the voice signal is a deep intonation, the first obtaining unit 304 may recognize that the immediate emotion type of the external user is a low emotion type.
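The intonation-based alternative described above can be sketched as a one-threshold rule on a mean-pitch value. The cut-off frequency is an assumed placeholder; the embodiment only distinguishes "high pitch" from "deep intonation":

```python
# Illustrative sketch of unit 304's intonation-based emotion recognition.
# The cut-off frequency is an assumed boundary between "deep" and "high" intonation.
HIGH_PITCH_CUTOFF_HZ = 250.0

def emotion_from_intonation(mean_pitch_hz: float) -> str:
    """Map the voice signal's mean pitch to an instant emotion type."""
    return "emotional_upsurge" if mean_pitch_hz >= HIGH_PITCH_CUTOFF_HZ else "low_emotion"
```

This local estimate is the fallback when no wearable device is wirelessly connected; otherwise the heart-rate/blood-pressure path described next is used.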
As another alternative implementation, as shown in fig. 4, the first obtaining unit 304 may include:
a detecting subunit 3041, configured to detect whether a wearable device is currently wirelessly connected to the home education device;
an obtaining subunit 3042, configured to obtain a sound feature of a wearer of the wearable device when the detecting subunit 3041 detects that the wearable device is wirelessly connected to the home education device currently;
a verification subunit 3043 configured to verify whether the sound characteristics of the wearer of the wearable device match the sound characteristics of the voice signal;
an interaction subunit 3044, configured to notify the wearable device to obtain current heart rate data of a wearer of the wearable device and current blood pressure data of the wearer when the verification result of the verification subunit 3043 is a match, and determine an instant emotion type of the wearer by the wearable device according to the current heart rate data of the wearer and the current blood pressure data of the wearer; when the current heart rate data exceeds a specified heart rate threshold value and the current blood pressure data exceeds a specified blood pressure threshold value, the wearable device identifies that the instant emotion type of the wearer is an emotional upsurge type; or when the current heart rate data does not exceed a specified heart rate threshold value and the current blood pressure data does not exceed a specified blood pressure threshold value, identifying the instant emotion type of the wearer as a low emotion type;
and the interaction subunit 3044 is further configured to receive the instant emotion type of the wearer sent by the wearable device as the instant emotion type of the external user.
By implementing the above embodiment, the wearable device worn by the external user can accurately identify the external user's instant emotion type and send it to the family education device, so that the family education device can obtain an accurate instant emotion type for the external user.
Therefore, the family education device described in fig. 3 can load the virtual animal matching the user gender of the external user on the displayed voice question searching interface, so that the virtual animal can output the response voice corresponding to the question searching voice sent by the external user according to the tone matching the instant emotion type of the external user, thereby being beneficial to arousing the interest of the primary and secondary school students in searching the questions.
Example four
Another family education device disclosed in the embodiment of the present invention includes, in addition to all the components of the family education device shown in fig. 3, a second obtaining unit, a querying unit, and a converting unit, wherein:
the monitoring unit 301 is further configured to monitor a question searching voice sent by an external user;
the second acquisition unit is used for acquiring a question searching result corresponding to the question searching voice;
the control unit 305 is further configured to output a question searching result corresponding to the question searching voice to a voice question searching interface for displaying; and controlling the virtual animal to output a response voice corresponding to the search question voice sent by the external user according to the tone matched with the instant emotion type.
The query unit is used for querying whether student personal attribute information corresponding to the sound features of the voice signals is stored or not; the student personal attribute information at least comprises student identity information and a target curriculum schedule corresponding to the student, and the target curriculum schedule at least comprises a plurality of different learning subjects learned by the student and teacher terminal identifications corresponding to the learning subjects; and if the personal attribute information of the student corresponding to the voice feature of the voice signal is stored, inquiring a target learning subject corresponding to the search result from the target curriculum schedule by taking the search result as a basis; inquiring a target teacher terminal identification corresponding to the target learning subject from the target curriculum schedule;
The conversion unit is used for converting the search-question voice into search-question words and reporting the student identity information, the search-question words, and the target teacher terminal identification to the cloud platform, so that the cloud platform sends the student identity information and the search-question words, according to the target teacher terminal identification, to the teacher terminal to which that identification belongs. The teacher can then discover, from the collected search-question words issued by students in the subject (e.g., Chinese) taught by that teacher, the problems those students encounter, and can further explain those problems to the students in a targeted manner, thereby helping to improve the students' learning efficiency.
As an optional implementation manner, in the embodiment of the present invention, the student personal attribute information may further include a guardian terminal identification corresponding to the student. Correspondingly, the conversion unit can also report the student identity information, the search-question words, and the guardian terminal identification to the cloud platform, so that the cloud platform sends the student identity information and the search-question words, according to the guardian terminal identification, to the guardian terminal to which that identification belongs. The guardian can then discover, from the collected search-question words issued by the student under his or her guardianship, the problems that student encounters in learning, and can further explain those problems to the student in a targeted manner, thereby helping to improve the student's learning efficiency.
As an optional implementation manner, in the embodiment of the present invention, the control unit 305 may further perform the following operations:
The control unit 305 notifies the wearable device to turn on its recording microphone to monitor the ambient sound source. The wearable device can verify whether the monitored ambient sound source matches a child-crying sound source in the database; if it matches, the wearable device plays target music, which is used to attract and divert the student's attention and to relieve the student's emotions when the student encounters difficulty in learning.
Because the ambient sound source monitored by the recording microphone contains background noise, the wearable device can first perform discrete sampling and quantization on the noisy ambient sound source to obtain a data frame, construct a wavelet neural network based on the Morlet wavelet function, construct a particle-swarm fitness function over the parameters of the wavelet neural network, obtain the optimal parameters of the wavelet neural network through a particle swarm optimization algorithm, and input the data frame into the wavelet neural network for filtering, thereby removing the noise and extracting a voice signal. Further, the wearable device can check whether the voiceprint features of the extracted voice signal match the voiceprint features of the child-crying sound sources in the database; if they match, the wearable device plays the target music, which is used to attract and divert the student's attention and to relieve the student's emotions when the student encounters difficulty in learning. Implementing this embodiment can improve adaptability to the noise characteristics of different ambient sound sources.
The process by which the wearable device checks whether the voiceprint features of the extracted voice signal match the voiceprint features of the child-crying sound source in the database includes:
the wearable device carries out preprocessing on the extracted voice signals, wherein the preprocessing comprises pre-emphasis, framing and windowing processing;
the wearable device extracts the voiceprint features MFCC, LPCC, ΔMFCC, ΔLPCC, the first-order difference of energy, and GFCC from the preprocessed voice signal to jointly form a first multi-dimensional feature vector, where MFCC is the Mel-frequency cepstral coefficient, LPCC is the linear prediction cepstral coefficient, ΔMFCC is the first-order difference of MFCC, ΔLPCC is the first-order difference of LPCC, and GFCC is the Gammatone filter cepstral coefficient;
and the wearable device judges whether the first multi-dimensional feature vector completely matches the second multi-dimensional feature vector corresponding to the voiceprint features of the child-crying sound source in the database; if they completely match, it is verified that the voiceprint features of the extracted voice signal match the voiceprint features of the child-crying sound source in the database.
In the embodiment of the invention, the target music can be music preset in the wearable device for attracting and diverting the student's attention and relieving the student's emotions when the student encounters difficulty in learning; alternatively, the target music can be music acquired by the wearable device from the cloud for the same purpose, and the embodiment of the present invention is not limited in this respect.
Therefore, the family education device described in fig. 4 can load the virtual animal matching the user gender of the external user on the displayed voice question searching interface, so that the virtual animal can output the response voice corresponding to the question searching voice sent by the external user according to the tone matching the instant emotion type of the external user, thereby being beneficial to arousing the interest of the primary and secondary school students in searching the questions.
In addition, the family education device described in fig. 4 enables the teacher to discover, from the collected search-question words issued by students in the subject (e.g., Chinese) taught by that teacher, the problems those students encounter in that subject, so that the teacher can further explain those problems to the students in a targeted manner, thereby helping to improve the students' learning efficiency.
In addition, the family education device described in fig. 4 can relieve the students' emotions when they encounter difficulties in learning.
EXAMPLE five
Referring to fig. 5, fig. 5 is a schematic structural diagram of another family education device disclosed in the embodiment of the present invention. As shown in fig. 5, the family education device may include:
a memory 501 in which executable program code is stored;
a processor 502 coupled to the memory 501;
wherein the processor 502 invokes the executable program code stored in the memory 501 to perform the method described in fig. 1 or fig. 2.
The family education device described in fig. 5 can load, on the displayed voice question searching interface, a virtual animal matching the gender of the external user, so that the virtual animal outputs a response voice corresponding to the question searching voice uttered by the external user in a tone matching the user's instant emotion type, which helps arouse the interest of primary and secondary school students in searching questions.
In addition, the family education device described in fig. 5 enables a teacher to discover, from the collected question-search words that some students uttered for the subject the teacher teaches (e.g. Chinese language), the problems those students encounter in that subject, so that the teacher can further explain these problems to the students in a targeted way, which helps improve the students' learning efficiency.
In addition, the family education device described in fig. 5 can relieve the students' negative emotions when they encounter difficulties in learning.
An embodiment of the present invention discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the method described in fig. 1 or fig. 2.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disk memory, magnetic tape memory, or any other computer-readable medium that can be used to carry or store data.
The start control method of the voice question searching application and the family education device disclosed by the embodiments of the present invention have been introduced in detail above. Specific examples are applied herein to explain the principles and implementations of the present invention, and the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, for those skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
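The overall start-control flow described in the embodiments above can be sketched as follows. All function bodies are stand-in placeholders, and the wake word, avatar names, and tones are illustrative assumptions; the patent discloses no concrete algorithms for these steps.

```python
# Hedged sketch of the start-control flow: wake-word detection ->
# gender recognition -> instant-emotion acquisition -> loading a
# gender-matched virtual animal that answers in an emotion-matched tone.

WAKE_WORD = "hey tutor"  # assumed wake-up word, not from the patent

def recognize_gender(voice_signal):
    # Stub: a real device would classify pitch/formant (sound) features.
    return "female"

def recognize_emotion(voice_signal):
    # Stub: tone analysis, or heart-rate/blood-pressure data from a
    # paired wearable device, as in the embodiments.
    return "anxious"

def start_voice_search(voice_signal):
    """Return the interface configuration, or None when no wake word."""
    if WAKE_WORD not in voice_signal:
        return None  # the voice question searching app stays closed
    gender = recognize_gender(voice_signal)
    emotion = recognize_emotion(voice_signal)
    avatar = "cartoon kitten" if gender == "female" else "cartoon puppy"
    tone = "soothing" if emotion == "anxious" else "lively"
    return {"avatar": avatar, "response_tone": tone}

print(start_voice_search("hey tutor, search this question"))
# prints {'avatar': 'cartoon kitten', 'response_tone': 'soothing'}
```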

Claims (10)

1. A starting control method of a voice question searching application is characterized by comprising the following steps:
the family education equipment monitors a voice signal sent from the outside;
the family education equipment detects whether the voice signal contains a wake-up word for waking up a voice search application of the family education equipment;
if the voice signal contains a wake-up word for waking up the voice question searching application of the family education equipment, the family education equipment identifies, according to the voice signal, the user gender of the external user who uttered the voice signal;
the family education equipment acquires the instant emotion type of the outside user;
the family education equipment displays a voice question searching interface of the voice question searching application, and outputs a virtual animal matched with the gender of the user on the voice question searching interface; and the virtual animal is used for outputting a response voice corresponding to the search question voice sent by the external user according to the tone matched with the instant emotion type.
2. The start control method according to claim 1, wherein the family education device recognizes the user gender of the external user who has uttered the voice signal based on the voice signal, including:
the family education equipment extracts sound features of the voice signal from the voice signal;
and the family education equipment identifies, according to the sound features of the voice signal, the user gender of the external user who uttered the voice signal.
3. The start control method according to claim 2, wherein the obtaining of the immediate emotion type of the external user by the family education device comprises:
the family education equipment identifies the instant emotion type of the external user according to the tone of the voice signal;
or, the family education device obtains the instant emotion type of the external user, including:
the family education equipment detects whether a wearable device is wirelessly connected at present;
if the wearable equipment is wirelessly connected with the family education equipment currently, the family education equipment acquires the sound characteristics of a wearer of the wearable equipment;
the family education device verifying whether the sound features of the wearer of the wearable device match the sound features of the voice signal;
if the sound features of the wearer of the wearable device match the sound features of the voice signal, the family education device notifies the wearable device to acquire the current heart rate data and the current blood pressure data of the wearer, and the wearable device determines the instant emotion type of the wearer according to the current heart rate data and the current blood pressure data of the wearer;
and the family education equipment receives the instant emotion type of the wearer sent by the wearable equipment as the instant emotion type of the external user.
4. The startup control method according to claim 3, characterized in that the method further comprises:
the family education equipment monitors the search question voice sent by an external user;
the family education equipment acquires a question searching result corresponding to the question searching voice;
the family education equipment outputs a question searching result corresponding to the question searching voice to the voice question searching interface for displaying;
and the family education equipment controls the virtual animal to output a response voice corresponding to the search question voice sent by the external user according to the tone matched with the instant emotion type.
5. The startup control method according to claim 4, characterized in that the method further comprises:
the family education equipment inquires whether student personal attribute information corresponding to the sound features of the voice signals is stored or not; the student personal attribute information at least comprises student identity information and a target curriculum schedule corresponding to the students, and the target curriculum schedule at least comprises a plurality of different learning subjects learned by the students and teacher terminal identifications corresponding to the learning subjects;
if the personal attribute information of the students corresponding to the voice features of the voice signals is stored, the family education equipment queries a target learning subject corresponding to the search result from the target curriculum schedule according to the search result; inquiring a target teacher terminal identification corresponding to the target learning subject from the target curriculum schedule;
the family education equipment converts the search question voice into search question words and reports the student identity information, the search question words and the target teacher terminal identification to a cloud platform, so that the cloud platform sends the student identity information and the search question words to a teacher terminal to which the target teacher terminal identification belongs according to the target teacher terminal identification.
6. A family education device, comprising:
the monitoring unit is used for monitoring voice signals sent by the outside;
the detection unit is used for detecting whether the voice signal contains a wake-up word for waking up the voice search topic application of the family education equipment;
the recognition unit is used for recognizing, according to the voice signal, the user gender of the external user who uttered the voice signal when the detection unit detects that the voice signal contains a wake-up word for waking up the voice question searching application of the family education equipment;
the first acquisition unit is used for acquiring the instant emotion type of the external user;
the control unit is used for displaying a voice question searching interface of the voice question searching application and outputting a virtual animal matched with the gender of the user on the voice question searching interface; and the virtual animal is used for outputting a response voice corresponding to the search question voice sent by the external user according to the tone matched with the instant emotion type.
7. The family education device according to claim 6, wherein the recognition unit is specifically configured to extract a sound feature of the voice signal according to the voice signal when the detection unit detects that the voice signal contains a wake-up word for waking up a voice search application of the family education device; and identifying the user gender of the external user starting the voice signal according to the voice characteristics of the voice signal.
8. The family education device according to claim 7, wherein the first obtaining unit is specifically configured to identify an instant emotion type of the external user according to a tone of the voice signal;
alternatively, the first acquiring unit includes:
the detection subunit is used for detecting whether the family education equipment is wirelessly connected with wearable equipment currently;
the acquisition subunit is used for acquiring the sound characteristics of the wearer of the wearable device when the detection subunit detects that the family education device is wirelessly connected with the wearable device currently;
a verification subunit for verifying whether the sound features of the wearer of the wearable device match the sound features of the speech signal;
the interaction subunit is used for notifying the wearable device to acquire the current heart rate data and the current blood pressure data of the wearer when the verification result of the verification subunit is a match, the wearable device determining the instant emotion type of the wearer according to the current heart rate data and the current blood pressure data of the wearer;
the interaction subunit is further configured to receive the instant emotion type of the wearer sent by the wearable device, as an instant emotion type of an external user.
9. The family education device of claim 8, wherein:
the monitoring unit is also used for monitoring the question searching voice sent by the external user;
the family education device further includes a second acquisition unit, wherein:
the second obtaining unit is used for obtaining a question searching result corresponding to the question searching voice;
the control unit is also used for outputting a question searching result corresponding to the question searching voice to the voice question searching interface for displaying; and controlling the virtual animal to output a response voice corresponding to the search question voice sent by the external user according to the tone matched with the instant emotion type.
10. The family education device of claim 9, further comprising:
the query unit is used for querying whether student personal attribute information corresponding to the sound features of the voice signals is stored or not; the student personal attribute information at least comprises student identity information and a target curriculum schedule corresponding to the students, and the target curriculum schedule at least comprises a plurality of different learning subjects learned by the students and teacher terminal identifications corresponding to the learning subjects; and if the personal attribute information of the students corresponding to the voice characteristics of the voice signals is stored, inquiring a target learning subject corresponding to the search result from the target curriculum schedule by taking the search result as a basis; inquiring a target teacher terminal identification corresponding to the target learning subject from the target curriculum schedule;
and the conversion unit is used for converting the search question voice into search question words and reporting the student identity information, the search question words and the target teacher terminal identification to a cloud platform, so that the cloud platform sends the student identity information and the search question words to a teacher terminal to which the target teacher terminal identification belongs according to the target teacher terminal identification.
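The reporting path in claims 5 and 10 can be sketched as follows: look up the teacher terminal identification for the target learning subject in the student's target curriculum schedule, then assemble the report of student identity, question-search words, and terminal ID for the cloud platform. The schedule layout and all field names are illustrative assumptions.

```python
# Hedged sketch of the claim 5/10 reporting path. The cloud platform is
# assumed to forward student identity and search words to the teacher
# terminal identified by the reported terminal ID.

target_curriculum_schedule = {
    "mathematics": "teacher-terminal-021",
    "chinese": "teacher-terminal-007",
}

def build_cloud_report(student_id, search_words, target_subject):
    """Build the payload reported to the cloud platform, or None when
    the subject is absent from the student's curriculum schedule."""
    terminal_id = target_curriculum_schedule.get(target_subject)
    if terminal_id is None:
        return None
    return {"student_identity": student_id,
            "search_words": search_words,
            "teacher_terminal_id": terminal_id}

print(build_cloud_report("student-001", "explain this ancient poem", "chinese"))
```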
CN201810747125.9A 2018-07-09 2018-07-09 Starting control method of voice question searching application and family education equipment Active CN108806686B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810747125.9A CN108806686B (en) 2018-07-09 2018-07-09 Starting control method of voice question searching application and family education equipment

Publications (2)

Publication Number Publication Date
CN108806686A CN108806686A (en) 2018-11-13
CN108806686B true CN108806686B (en) 2020-07-28

Family

ID=64075875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810747125.9A Active CN108806686B (en) 2018-07-09 2018-07-09 Starting control method of voice question searching application and family education equipment

Country Status (1)

Country Link
CN (1) CN108806686B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109616109B (en) * 2018-12-04 2020-05-19 北京蓦然认知科技有限公司 Voice awakening method, device and system
CN109637286A (en) * 2019-01-16 2019-04-16 广东小天才科技有限公司 Spoken language training method based on image recognition and family education equipment
CN110246519A (en) * 2019-07-25 2019-09-17 深圳智慧林网络科技有限公司 Emotion identification method, equipment and computer readable storage medium
CN111651102B (en) * 2020-04-30 2021-09-17 北京大米科技有限公司 Online teaching interaction method and device, storage medium and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0512023A (en) * 1991-07-04 1993-01-22 Omron Corp Emotion recognizing device
CN105247609B (en) * 2013-05-31 2019-04-12 雅马哈株式会社 The method and device responded to language is synthesized using speech
CN106548773B (en) * 2016-11-04 2020-06-23 百度在线网络技术(北京)有限公司 Child user searching method and device based on artificial intelligence
CN106886582A (en) * 2017-02-07 2017-06-23 广东小天才科技有限公司 Method and system for embedding learning assistant in terminal equipment
CN107728780B (en) * 2017-09-18 2021-04-27 北京光年无限科技有限公司 Human-computer interaction method and device based on virtual robot

Similar Documents

Publication Publication Date Title
CN108806686B (en) Starting control method of voice question searching application and family education equipment
US10516938B2 (en) System and method for assessing speaker spatial orientation
CN108288467B (en) Voice recognition method and device and voice recognition engine
US11837249B2 (en) Visually presenting auditory information
CN109036395A (en) Personalized speaker control method, system, intelligent sound box and storage medium
CN111833853A (en) Voice processing method and device, electronic equipment and computer readable storage medium
KR102314213B1 (en) System and Method for detecting MCI based in AI
CN108320734A (en) Audio signal processing method and device, storage medium, electronic equipment
CN113035232B (en) Psychological state prediction system, method and device based on voice recognition
CN108766431B (en) Automatic awakening method based on voice recognition and electronic equipment
CN110085220A (en) Intelligent interaction device
CN112102850A (en) Processing method, device and medium for emotion recognition and electronic equipment
CN114121006A (en) Image output method, device, equipment and storage medium of virtual character
KR102444012B1 (en) Device, method and program for speech impairment evaluation
CN111696559A (en) Providing emotion management assistance
JP2019124952A (en) Information processing device, information processing method, and program
CN114708869A (en) Voice interaction method and device and electric appliance
CN109841221A (en) Parameter adjusting method, device and body-building equipment based on speech recognition
Qadri et al. A critical insight into multi-languages speech emotion databases
CN117198338B (en) Interphone voiceprint recognition method and system based on artificial intelligence
JP2006230548A (en) Physical condition judging device and its program
JP2013088552A (en) Pronunciation training device
CN111128127A (en) Voice recognition processing method and device
CN108984229B (en) Application program starting control method and family education equipment
CN108648545A (en) New word reviewing method applied to family education equipment and family education equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant