CN109949809B - Voice control method and terminal equipment - Google Patents

Voice control method and terminal equipment

Info

Publication number
CN109949809B
Authority
CN
China
Prior art keywords
voice
display
target
different
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910237971.0A
Other languages
Chinese (zh)
Other versions
CN109949809A (en)
Inventor
谢晓桦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201910237971.0A
Publication of CN109949809A
Application granted
Publication of CN109949809B
Active legal status
Anticipated expiration legal status

Abstract

The invention provides a voice control method and terminal equipment, wherein the method comprises the following steps: determining a target voice mode under the condition that a voice control function of the terminal equipment is started, wherein different voice modes comprise different voice response parameters; and responding to the voice input by the user in the target voice mode. Therefore, the voice input by the user can be responded to in different voice modes; since different voice modes comprise different voice response parameters, the response modes of the voice control function are enriched, and different requirements of different users or different scenes can be met.

Description

Voice control method and terminal equipment
Technical Field
The present invention relates to the field of voice control technologies, and in particular, to a voice control method and a terminal device.
Background
With the continuous development of electronic technology, voice control functions are used more and more widely. A voice control function enables a terminal device to answer questions or perform interactive operations according to the voice input by a user, which provides great convenience for the user.
In the prior art, the voice control function responds only to the voice content; for the same voice content input by different people or by users in different scenes, the reply content or the executed operation of the voice control function is the same, which cannot meet the different requirements of different people or different scenes.
Therefore, in the prior art, the response mode of the voice control function is limited to a single form and cannot meet the personalized requirements of the user.
Disclosure of Invention
The embodiment of the invention provides a voice control method and terminal equipment, aiming to solve the problem that, in the prior art, the response mode of the voice control function is limited to a single form and cannot meet the personalized requirements of the user.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a voice control method, which is applied to a terminal device, and the method includes:
determining a target voice mode under the condition that a voice control function of the terminal equipment is started, wherein different voice modes comprise different voice response parameters;
responding to the voice input by the user in the target voice mode.
In a second aspect, an embodiment of the present invention provides a terminal device, where the terminal device includes:
the determining module is used for determining a target voice mode under the condition that the voice control function of the terminal equipment is started, wherein different voice modes comprise different voice response parameters;
and the response module is used for responding the voice input by the user in the target voice mode.
In a third aspect, an embodiment of the present invention provides another terminal device, which includes a processor, a memory, and a computer program stored in the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the voice control method.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps of the above-mentioned voice control method.
In the embodiment of the invention, the voice control method determines a target voice mode under the condition that the voice control function of the terminal equipment is started, wherein different voice modes comprise different voice response parameters, and responds to the voice input by the user in the target voice mode. Therefore, the voice input by the user can be responded to in different voice modes; since different voice modes comprise different voice response parameters, the response modes of the voice control function are enriched, and different requirements of different users or different scenes can be met.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the description of the embodiments of the present invention will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
FIG. 1 is a flow chart of a voice control method provided by an embodiment of the present invention;
FIG. 2 is a second flowchart of a voice control method according to an embodiment of the present invention;
FIG. 3 is a third flowchart of a voice control method according to an embodiment of the present invention;
fig. 4 is one of the structural diagrams of the terminal device provided in the embodiment of the present invention;
fig. 5 is a second structural diagram of a terminal device according to the embodiment of the present invention;
fig. 6 is a third structural diagram of a terminal device according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a voice control method provided in an embodiment of the present invention, where the voice control method is applied to a terminal device, and as shown in fig. 1, the method includes the following steps:
step 101, determining a target voice mode under the condition that a voice control function of the terminal device is started, wherein different voice modes comprise different voice response parameters.
In this step, the terminal device determines a target voice mode when the voice control function is turned on. The terminal device may determine the target voice mode when the voice control function is started; or may determine the target voice mode each time a voice is received while the voice control function is in the on state; or may determine the target voice mode at preset time intervals while the voice control function is in the on state.
The terminal device may determine a target voice mode according to an input of a user, and specifically, the terminal device may receive a selection operation of the user and determine that a voice mode selected by the user is the target voice mode; the input operation of the user can be received, and the voice mode input by the user is determined to be the target voice mode; the terminal equipment can also receive voice input by a user and determine a target voice mode according to voice characteristic parameters (such as tone and/or voiceprint) of the voice input by the user; the terminal device may further obtain biometric information of the user, and determine the target voice mode according to the biometric information of the user, where the biometric information may include at least one of: face feature information, iris feature information, fingerprint feature information.
The terminal device may also determine a target voice mode according to the display states of its display screens. Specifically, when the terminal device includes at least two display screens, the terminal device acquires the display states of the at least two display screens when the voice control function is turned on, and determines the target voice mode according to the display states of the at least two display screens, where the display states include a lighting state or a screen-off state, and different display states correspond to different voice modes.
The voice response parameters may include at least one of: voice recognition database, content collection, sensitive words, voice broadcast parameters, application program restriction information.
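For illustration only, the following is a minimal Kotlin sketch of how such voice response parameters might be grouped into a voice mode; all class names, field names and example values are hypothetical and are not taken from this disclosure.

```kotlin
// Illustrative sketch only; the fields mirror the voice response parameters listed
// above (recognition database, content set, sensitive words, broadcast parameters,
// application program restriction information), but the names are hypothetical.
data class BroadcastParams(
    val volume: Int,        // broadcast volume, e.g. 0..100
    val speechRate: Float,  // broadcast speed, e.g. 0.7 = slow, 1.0 = normal
    val tone: String        // broadcast tone, e.g. "serious" or "playful"
)

data class VoiceMode(
    val name: String,
    val recognitionDatabase: String,      // which voice recognition database to use
    val contentSet: String,               // which content set supplies reply content
    val sensitiveWords: Set<String>,      // sensitive words to filter from reply content
    val broadcastParams: BroadcastParams, // voice broadcast parameters
    val restrictedApps: Set<String>       // applications that may not be opened by voice
)
```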
And 102, responding to the voice input by the user in the target voice mode.
After determining the target voice mode, the terminal device responds to the voice input by the user in the target voice mode.
When the voice response parameter includes a voice recognition database, the voice response parameter of the target voice mode includes the target voice recognition database, and the step of the terminal device responding to the voice input by the user in the target voice mode specifically includes: and performing voice recognition on the voice input by the user according to the target voice recognition database.
When the voice response parameter includes a content set, the voice response parameter of the target voice mode includes a target content set, and the responding to the voice input by the user in the target voice mode by the terminal device specifically includes: and acquiring the reply content in the target content set according to the voice input by the user.
When the voice response parameter includes a sensitive word, the voice response parameter of the target voice mode includes a target sensitive word, and the step of the terminal device responding to the voice input by the user in the target voice mode specifically includes: and filtering the target sensitive words in the reply content corresponding to the voice input by the user, and outputting the filtered reply content.
When the voice response parameter includes a voice broadcast parameter, the voice response parameter of the target voice mode includes a target voice broadcast parameter, and the step of the terminal device responding to the voice input by the user in the target voice mode specifically includes: broadcasting the reply content corresponding to the voice input by the user according to the target voice broadcast parameter.
When the voice response parameter includes the application program restriction information, the voice response parameter of the target voice mode includes the target application program restriction information, and the terminal device responding to the voice input by the user in the target voice mode specifically includes: and limiting the user to use the target application program through voice input according to the target application program limiting information.
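Continuing the sketch above, the following shows how the per-parameter behaviors of step 102 might be combined in one response path; recognize, lookupReply and speak are trivial placeholders standing in for the terminal device's own recognition, dialogue and broadcast components, not real APIs.

```kotlin
// Illustrative sketch only; the three helpers are placeholders, not device APIs.
fun recognize(audio: ByteArray, database: String): String = ""   // placeholder recognizer
fun lookupReply(text: String, contentSet: String): String = ""   // placeholder content lookup
fun speak(text: String, params: BroadcastParams) = println("[${params.tone}] $text")

fun respond(mode: VoiceMode, audio: ByteArray) {
    // 1. Voice recognition using the target voice recognition database.
    val text = recognize(audio, mode.recognitionDatabase)

    // 2. Restrict use of target applications through voice input.
    val blocked = mode.restrictedApps.firstOrNull { text.contains(it, ignoreCase = true) }
    if (blocked != null) {
        speak("Opening $blocked by voice is not allowed in this voice mode.", mode.broadcastParams)
        return
    }

    // 3. Obtain reply content from the target content set.
    var reply = lookupReply(text, mode.contentSet)

    // 4. Filter the target sensitive words out of the reply content.
    for (word in mode.sensitiveWords) reply = reply.replace(word, "***", ignoreCase = true)

    // 5. Broadcast the filtered reply content with the target voice broadcast parameters.
    speak(reply, mode.broadcastParams)
}
```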
In this embodiment of the present invention, the terminal device may be a mobile terminal device, for example: a mobile phone, a tablet personal computer (Tablet Personal Computer), a laptop computer (Laptop Computer), a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device (Wearable Device), a digital camera, and the like; the terminal device may also be a fixed terminal device, such as a computer.
In this embodiment, the voice control method determines a target voice mode when the voice control function of the terminal device is turned on, where different voice modes include different voice response parameters, and responds to the voice input by the user in the target voice mode. Therefore, the voice input by the user can be responded to in different voice modes; since different voice modes include different voice response parameters, the response modes of the voice control function are enriched, and different requirements of different users or different scenes can be met.
Optionally, the voice response parameters include at least one of:
voice recognition database, content collection, sensitive words, voice broadcast parameters, application program restriction information.
In this embodiment, the voice response parameter includes at least one of the foregoing parameters. It can be understood that, when the voice response parameter includes a voice recognition database, the terminal device may invoke different voice recognition databases according to the specific situation to perform voice recognition on the voice input by the user, so that the success rate of voice recognition can be effectively improved. When the voice response parameter includes a content set, the terminal device can acquire the reply content corresponding to the voice input by the user from different content sets, so that different reply contents can be obtained for the same voice input by different groups of people or in different scenes, which can better meet the personalized requirements of users.
When the voice response parameter includes a sensitive word, the terminal device may filter different sensitive words from the reply content, so that appropriate reply content can be output for each group of people. When the voice response parameter includes a voice broadcast parameter, the terminal device can broadcast the reply content according to different voice broadcast parameters, adapting to the different requirements of different groups of people or different scenes. When the voice response parameter includes application program restriction information, the terminal device may, for different groups of people or different scenes, restrict the use of different application programs through voice input.
Optionally, the responding to the voice input by the user in the target voice mode includes at least one of:
performing voice recognition on voice input by a user according to the target voice recognition database;
acquiring reply content in the target content set according to the voice input by the user;
filtering target sensitive words in the reply content corresponding to the voice input by the user, and outputting the filtered reply content;
broadcasting the reply content corresponding to the voice input by the user according to the target voice broadcasting parameter;
and limiting the user to use the target application program through voice input according to the target application program limiting information.
In this embodiment, when the voice response parameter includes a voice recognition database, the voice response parameter of the target voice mode includes a target voice recognition database, and the terminal device performs voice recognition on the voice input by the user according to the target voice recognition database. It can be understood that the types of languages used may be different for people in different regions, and in this embodiment, for people in different regions, the terminal device performs speech recognition according to different speech recognition databases, so that the success rate of speech recognition can be effectively improved.
When the voice response parameter comprises a content set, the voice response parameter of the target voice mode comprises a target content set, and the terminal device acquires the reply content in the target content set according to the voice input by the user. It can be understood that the contents that different groups of people focus on may be different; for example, children may prefer animation, while elderly people may pay more attention to news. In this embodiment, the terminal device obtains the reply content from different content sets for different groups of people, so that the voice input by the user can be replied to more accurately.
When the voice response parameter comprises a sensitive word, the voice response parameter of the target voice mode comprises a target sensitive word, and the terminal device filters the target sensitive word from the reply content corresponding to the voice input by the user and outputs the filtered reply content. It can be understood that, for different people or different scenes, different words may need to be masked; for example, violence-related words are not suitable for children, so the target sensitive words may include at least one violence-related word, and when the obtained reply content includes a target sensitive word, the terminal device filters the target sensitive word from the reply content, so that violence-related words can be masked from children.
When the voice response parameters comprise voice broadcast parameters, the voice response parameters of the target voice mode comprise target voice broadcast parameters, and the terminal device broadcasts the reply content corresponding to the voice input by the user according to the target voice broadcast parameters. The voice broadcast parameters comprise one or more of volume, speech speed and tone. For example, elderly users may react more slowly and have poorer hearing, so the target voice broadcast parameters in the target voice mode corresponding to the elderly may include a slow speech speed and a large volume. In this way, when an elderly user uses the voice control function of the terminal device, the terminal device broadcasts the reply content at a slow speech speed and a large volume, which provides convenience for the elderly.
When the voice response parameter comprises application program restriction information, the target voice mode comprises target application program restriction information, and the terminal device restricts the user from using the target application program through voice input according to the target application program restriction information. For example, to prevent children from becoming addicted to games, parents may set game software on the terminal device as a target application program, so that when a child uses the voice control function of the terminal device, the terminal device can restrict the child from using the target application program through voice input.
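Purely as an illustration of the examples above, an elderly-oriented mode and a child-oriented mode might be configured as follows using the VoiceMode sketch introduced earlier; every concrete value, database name and application name is a hypothetical example.

```kotlin
// Illustrative sketch only; values are hypothetical examples of the scenarios above.
val elderlyMode = VoiceMode(
    name = "elderly",
    recognitionDatabase = "regional-dialect-db",   // hypothetical database name
    contentSet = "news",                           // elderly users may pay more attention to news
    sensitiveWords = emptySet(),
    broadcastParams = BroadcastParams(volume = 90, speechRate = 0.7f, tone = "serious"),
    restrictedApps = emptySet()
)

val childMode = VoiceMode(
    name = "child",
    recognitionDatabase = "standard-db",           // hypothetical database name
    contentSet = "animation",                      // children may prefer animation content
    sensitiveWords = setOf("violence-word"),       // placeholder for violence-related words
    broadcastParams = BroadcastParams(volume = 60, speechRate = 1.0f, tone = "playful"),
    restrictedApps = setOf("game-app")             // e.g. game software set as a target application by parents
)
```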
Referring to fig. 2, fig. 2 is a second flowchart of a voice control method provided in an embodiment of the present invention, where the method is applied to a terminal device, and a main difference between the present embodiment and the previous embodiment is that the present embodiment specifically determines a target voice mode according to an input of a user, as shown in fig. 2, the method includes the following steps:
step 201, receiving a first input of a user under the condition that the voice control function of the terminal device is started.
In this step, the terminal device receives a first input of a user when the voice control function is turned on. The first input may be a selection operation, an input operation of entering a voice mode, a voice input, or a biometric information input.
Step 202, determining a target group to which the user belongs according to the first input.
In this step, the terminal device determines a target group to which the user belongs according to the first input. For example, when the first input is a voice input, the terminal device can determine the target group to which the user belongs according to the voice characteristic parameters (such as tone and/or voiceprint) of the voice input. The groups may be divided according to age, for example into a child group, a young group, a middle-aged group and an elderly group. The groups may also be divided according to regions, which is not specifically limited in this embodiment.
Step 203, determining the voice mode corresponding to the target group as a target voice mode, wherein different groups correspond to different voice modes, and the different voice modes include different voice response parameters.
In this step, the terminal device determines the voice mode corresponding to the target group as a target voice mode, where different groups correspond to different voice modes, and the different voice modes include different voice response parameters. Therefore, the terminal equipment can provide different voice modes for different users according to the group to which the user belongs, and can meet the individual requirements of the users.
And step 204, responding to the voice input by the user in the target voice mode.
Step 204 is the same as step 102 shown in fig. 1 of the present invention, and is not described herein again.
In this embodiment, the voice control method receives a first input of a user when a voice control function of the terminal device is turned on; determining a target group to which the user belongs according to the first input; determining the voice modes corresponding to the target groups as target voice modes, wherein different groups correspond to different voice modes, and the different voice modes comprise different voice response parameters; responding to the voice input by the user in the target voice mode. Therefore, the target voice mode is determined according to the input of the user, and different requirements of different users or different scenes can be met.
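The flow of this embodiment (receive a first input, determine the target group to which the user belongs, and take the voice mode corresponding to that group as the target voice mode) could be sketched as follows; the pitch thresholds and group labels are hypothetical stand-ins for the voice characteristic parameters mentioned above, not values from this disclosure.

```kotlin
// Illustrative sketch only; the pitch heuristic is a hypothetical example of
// determining a target group from a voice characteristic parameter.
enum class UserGroup { CHILD, YOUNG, MIDDLE_AGED, ELDERLY }

fun groupFromVoicePitch(pitchHz: Float): UserGroup = when {
    pitchHz > 300f -> UserGroup.CHILD
    pitchHz > 200f -> UserGroup.YOUNG
    pitchHz > 150f -> UserGroup.MIDDLE_AGED
    else           -> UserGroup.ELDERLY
}

// Different groups correspond to different voice modes; the mapping is configuration.
fun targetModeFor(group: UserGroup, modesByGroup: Map<UserGroup, VoiceMode>): VoiceMode =
    modesByGroup.getValue(group)
```

A group-to-mode table such as mapOf(UserGroup.CHILD to childMode, UserGroup.ELDERLY to elderlyMode) would then supply the target voice mode for the determined group.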
Referring to fig. 3, fig. 3 is a third flowchart of a voice control method provided in an embodiment of the present invention, where the method is applied to a terminal device, and the main difference between the present embodiment and the previous embodiment is that the terminal device in the present embodiment includes at least two display screens, and determines a target voice mode according to display states of the display screens, as shown in fig. 3, the method includes the following steps:
step 301, under the condition that the voice control function of the terminal device is turned on, obtaining the display states of the at least two display screens, wherein the display states include a lighting state or a screen-off state.
In this step, the terminal device obtains the display states of the at least two display screens when the voice control function is turned on, where the display states include a lighting state or a screen-off state. The terminal device may acquire the display states of the at least two display screens when the voice control function is started; or may acquire the display states of the at least two display screens at preset time intervals while the voice control function is in the on state; or may acquire the display states of the at least two display screens when a screen-on or screen-off operation performed by the user on at least one of the at least two display screens is received.
Step 302, determining a target voice mode according to the display states of the at least two display screens, wherein different display states correspond to different voice modes.
In this step, the terminal device determines a target voice mode according to the display states of the at least two display screens, wherein different display states correspond to different voice modes.
For example, the terminal device includes a first display screen and a second display screen, and it is assumed that a first display state corresponds to a first voice mode, a second display state corresponds to a second voice mode, and a third display state corresponds to a third voice mode, where the first display state is a display state in which the first display screen is turned on and the second display screen is turned off, the second display state is a display state in which the first display screen is turned off and the second display screen is turned on, and the third display state is a display state in which both the first display screen and the second display screen are turned on. If it is obtained that the first display screen is turned on and the second display screen is turned off, the terminal device can determine that the first voice mode is the target voice mode.
In the embodiment of the present invention, different tones may be set for the first voice mode and the second voice mode in advance; for example, a first tone (e.g., a serious tone) may be set for the first voice mode, and a second tone (e.g., a playful tone) may be set for the second voice mode, so that the reply content can be broadcast in different tones according to the display screen being used by the user, and different requirements of the user in different scenes can be met.
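As a minimal sketch of the dual-screen example above (hypothetical names; the three voice modes are represented only by labels):

```kotlin
// Illustrative sketch only; maps the display states of the two display screens
// described above to the corresponding voice mode labels.
data class DualScreenState(val firstScreenLit: Boolean, val secondScreenLit: Boolean)

fun targetModeLabel(state: DualScreenState): String = when {
    state.firstScreenLit && !state.secondScreenLit -> "first voice mode"   // e.g. serious tone
    !state.firstScreenLit && state.secondScreenLit -> "second voice mode"  // e.g. playful tone
    state.firstScreenLit && state.secondScreenLit  -> "third voice mode"
    else                                           -> "default voice mode" // both screens off (not specified above)
}
```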
And step 303, responding to the voice input by the user in the target voice mode.
This step 303 is the same as step 102 shown in fig. 1 of the present invention, and is not described herein again.
In this embodiment, the voice control method obtains the display states of the at least two display screens when the voice control function of the terminal device is turned on, where the display states include a lighting state or a screen-off state; determines a target voice mode according to the display states of the at least two display screens, wherein different display states correspond to different voice modes; and responds to the voice input by the user in the target voice mode. Therefore, the target voice mode is determined according to the display states of the display screens, different display states correspond to different voice modes, the response modes of the voice control function are enriched, and different requirements of different users or different scenes can be met.
Referring to fig. 4, fig. 4 is one of the structural diagrams of the terminal device according to the embodiment of the present invention, which can implement the details of the voice control method in the foregoing embodiment and achieve the same effect. As shown in fig. 4, the terminal device 400 includes a determining module 401 and a responding module 402, the determining module 401 and the responding module 402 are connected, wherein:
a determining module 401, configured to determine a target voice mode when a voice control function of the terminal device is turned on, where different voice modes include different voice response parameters;
a response module 402, configured to respond to the voice input by the user in the target voice mode.
Optionally, the voice response parameters include at least one of:
voice recognition database, content collection, sensitive words, voice broadcast parameters, application program restriction information.
Optionally, the response module 402 is specifically configured to implement at least one of the following:
performing voice recognition on voice input by a user according to the target voice recognition database;
acquiring reply content in the target content set according to the voice input by the user;
filtering target sensitive words in the reply content corresponding to the voice input by the user, and outputting the filtered reply content;
broadcasting the reply content corresponding to the voice input by the user according to the target voice broadcasting parameter;
and limiting the user to use the target application program through voice input according to the target application program limiting information.
Optionally, referring to fig. 5, fig. 5 is a second schematic structural diagram of a terminal device according to an embodiment of the present invention, and as shown in fig. 5, the determining module 401 includes:
a receiving unit 4011 configured to receive a first input of a user;
a first determining unit 4012, configured to determine, according to the first input, a target group to which the user belongs;
a second determining unit 4013, configured to determine the voice mode corresponding to the target group as a target voice mode, where different groups correspond to different voice modes.
Optionally, the terminal device includes at least two display screens, referring to fig. 6, fig. 6 is a third schematic structural diagram of the terminal device provided in the embodiment of the present invention, and as shown in fig. 6, the determining module 401 includes:
an obtaining unit 4014, configured to obtain display states of the at least two display screens, where the display states include a lighting state or a screen-off state;
a third determining unit 4015, configured to determine a target voice mode according to the display states of the at least two display screens, where different display states correspond to different voice modes.
In this embodiment, the terminal device determines a target voice mode when the voice control function of the terminal device is turned on, where different voice modes include different voice response parameters, and responds to the voice input by the user in the target voice mode. Therefore, the voice input by the user can be responded to in different voice modes; since different voice modes include different voice response parameters, the response modes of the voice control function are enriched, and different requirements of different users or different scenes can be met.
Fig. 7 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention, as shown in fig. 7, the terminal device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 7 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
Wherein, the processor 710 is configured to:
determining a target voice mode under the condition that a voice control function of the terminal equipment is started, wherein different voice modes comprise different voice response parameters;
responding to the voice input by the user in the target voice mode.
Optionally, the voice response parameters include at least one of:
voice recognition database, content collection, sensitive words, voice broadcast parameters, application program restriction information.
Optionally, the responding to the voice input by the user in the target voice mode performed by the processor 710 includes at least one of:
performing voice recognition on voice input by a user according to the target voice recognition database;
acquiring reply content in the target content set according to the voice input by the user;
filtering target sensitive words in the reply content corresponding to the voice input by the user, and outputting the filtered reply content;
broadcasting the reply content corresponding to the voice input by the user according to the target voice broadcasting parameter;
and limiting the user to use the target application program through voice input according to the target application program limiting information.
Optionally, the determining the target voice mode performed by the processor 710 includes:
receiving a first input of a user;
determining a target group to which the user belongs according to the first input;
and determining the voice mode corresponding to the target group as a target voice mode, wherein different groups correspond to different voice modes.
Optionally, the terminal device includes at least two display screens, and the determining the target voice mode performed by the processor 710 includes:
acquiring display states of the at least two display screens, wherein the display states comprise a lighting state or a screen-off state;
and determining a target voice mode according to the display states of the at least two display screens, wherein different display states correspond to different voice modes.
In the embodiment of the invention, the terminal equipment determines a target voice mode under the condition that the voice control function of the terminal equipment is started, wherein different voice modes comprise different voice response parameters, and responds to the voice input by the user in the target voice mode. Therefore, the voice input by the user can be responded to in different voice modes; since different voice modes comprise different voice response parameters, the response modes of the voice control function are enriched, and different requirements of different users or different scenes can be met.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 701 may be used for receiving and sending signals during a message transmission and reception process or a call process; specifically, it receives downlink data from a base station and sends the received downlink data to the processor 710 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with a network and other devices through a wireless communication system.
The terminal device provides the user with wireless broadband internet access through the network module 702, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702 or stored in the memory 709 into an audio signal and output as sound. Also, the audio output unit 703 may also provide audio output related to a specific function performed by the terminal device 700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the graphics processor 7041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 706. The image frames processed by the graphics processor 7041 may be stored in the memory 709 (or other storage medium) or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 may receive sound and may be capable of processing such sound into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 701.
The terminal device 700 further comprises at least one sensor 705, such as light sensors, motion sensors and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the luminance of the display panel 7061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 7061 and/or a backlight when the terminal device 700 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 705 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 706 is used to display information input by the user or information provided to the user. The Display unit 706 may include a Display panel 7061, and the Display panel 7061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 707 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 7071 (e.g., operations by a user on or near the touch panel 7071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 7071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 710, receives a command from the processor 710, and executes the command. In addition, the touch panel 7071 can be implemented by various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 707 may include other input devices 7072 in addition to the touch panel 7071. In particular, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 7071 may be overlaid on the display panel 7061, and when the touch panel 7071 detects a touch operation on or near the touch panel 7071, the touch operation is transmitted to the processor 710 to determine the type of the touch event, and then the processor 710 provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although in fig. 7, the touch panel 7071 and the display panel 7061 are implemented as two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the terminal device, which is not limited herein.
The interface unit 708 is an interface for connecting an external device to the terminal apparatus 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 700 or may be used to transmit data between the terminal apparatus 700 and the external device.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone, and the like. Further, the memory 709 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
The processor 710 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 709 and calling data stored in the memory 709, thereby performing overall monitoring of the terminal device. Processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The terminal device 700 may further include a power supply 711 (e.g., a battery) for supplying power to various components, and preferably, the power supply 711 may be logically connected to the processor 710 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 700 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a terminal device, which includes a processor 710, a memory 709, and a computer program stored in the memory 709 and capable of running on the processor 710, where the computer program is executed by the processor 710 to implement each process of the foregoing voice control method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the foregoing voice control method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A voice control method is applied to terminal equipment, and is characterized by comprising the following steps:
determining a target voice mode under the condition that a voice control function of the terminal equipment is started, wherein different voice modes comprise different voice response parameters;
responding to the voice input by the user in the target voice mode;
the terminal device comprises at least two display screens, and the determining of the target voice mode comprises the following steps:
acquiring display states of the at least two display screens, wherein the display states comprise a lighting state or a screen-off state;
determining a target voice mode according to the display states of the at least two display screens, wherein different display states correspond to different voice modes;
the different display states correspond to different voice modes, and the different display states comprise:
the display device comprises a first display state, a second display state and a third display state, wherein the first display state corresponds to a first voice mode, the second display state corresponds to a second voice mode, the third display state corresponds to a third voice mode, the first display state is a display state that a first display screen is lightened, a second display screen is lightened, the second display state is a display state that the first display screen is lightened, the second display screen is lightened, and the third display state is a display state that the first display screen and the second display screen are lightened.
2. The voice-control method of claim 1, wherein the voice response parameters include at least one of:
voice recognition database, content collection, sensitive words, voice broadcast parameters, application program restriction information.
3. The voice control method of claim 2, wherein the responding to the user-input voice in the target voice mode comprises at least one of:
performing voice recognition on voice input by a user according to the target voice recognition database;
acquiring reply content in the target content set according to the voice input by the user;
filtering target sensitive words in the reply content corresponding to the voice input by the user, and outputting the filtered reply content;
broadcasting the reply content corresponding to the voice input by the user according to the target voice broadcasting parameter;
and limiting the user to use the target application program through voice input according to the target application program limiting information.
4. The voice control method of any one of claims 1 to 3, wherein the determining a target voice mode comprises:
receiving a first input of a user;
determining a target group to which the user belongs according to the first input;
and determining the voice mode corresponding to the target group as a target voice mode, wherein different groups correspond to different voice modes.
5. A terminal device, characterized in that the terminal device comprises:
the determining module is used for determining a target voice mode under the condition that the voice control function of the terminal equipment is started, wherein different voice modes comprise different voice response parameters;
the response module is used for responding to the voice input by the user in the target voice mode;
the terminal device comprises at least two display screens, and the determining module comprises:
the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring the display states of the at least two display screens, and the display states comprise a lighting state or a screen-off state;
the third determining unit is used for determining a target voice mode according to the display states of the at least two display screens, wherein different display states correspond to different voice modes;
the different display states correspond to different voice modes, and the different display states comprise:
the display device comprises a first display state, a second display state and a third display state, wherein the first display state corresponds to a first voice mode, the second display state corresponds to a second voice mode, the third display state corresponds to a third voice mode, the first display state is a display state that a first display screen is lightened, a second display screen is lightened, the second display state is a display state that the first display screen is lightened, the second display screen is lightened, and the third display state is a display state that the first display screen and the second display screen are lightened.
6. The terminal device of claim 5, wherein the voice response parameters include at least one of:
voice recognition database, content collection, sensitive words, voice broadcast parameters, application program restriction information.
7. The terminal device of claim 6, wherein the response module is specifically configured to implement at least one of:
performing voice recognition on voice input by a user according to the target voice recognition database;
acquiring reply content in the target content set according to the voice input by the user;
filtering target sensitive words in the reply content corresponding to the voice input by the user, and outputting the filtered reply content;
broadcasting the reply content corresponding to the voice input by the user according to the target voice broadcasting parameter;
and limiting the user to use the target application program through voice input according to the target application program limiting information.
8. The terminal device of any of claims 5 to 7, wherein the determining module comprises:
a receiving unit, configured to receive a first input of a user;
a first determining unit, configured to determine, according to the first input, a target group to which the user belongs;
and a second determining unit, configured to determine the voice mode corresponding to the target group as the target voice mode, wherein different groups correspond to different voice modes.
9. A terminal device, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the voice control method according to any one of claims 1 to 4.
CN201910237971.0A 2019-03-27 2019-03-27 Voice control method and terminal equipment Active CN109949809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910237971.0A CN109949809B (en) 2019-03-27 2019-03-27 Voice control method and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910237971.0A CN109949809B (en) 2019-03-27 2019-03-27 Voice control method and terminal equipment

Publications (2)

Publication Number Publication Date
CN109949809A CN109949809A (en) 2019-06-28
CN109949809B true CN109949809B (en) 2021-07-06

Family

ID=67012049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910237971.0A Active CN109949809B (en) 2019-03-27 2019-03-27 Voice control method and terminal equipment

Country Status (1)

Country Link
CN (1) CN109949809B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110853634B (en) * 2019-09-30 2023-03-10 珠海格力节能环保制冷技术研究中心有限公司 Multi-modal voice interaction feedback response control method, computer readable storage medium and air conditioner
CN112165552A (en) * 2020-09-27 2021-01-01 广州三星通信技术研究有限公司 Method for controlling voice assistant and electronic device using same
CN115480888A (en) * 2021-06-16 2022-12-16 上海博泰悦臻网络技术服务有限公司 Voice control method, device, system, electronic equipment, storage medium and product

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130059278A (en) * 2011-11-28 2013-06-05 (주)수풀콜매니저 An interactive ars voice menu connecting system and a connecting method thereof
CN105488749A (en) * 2015-11-30 2016-04-13 淮阴工学院 Aged people and children oriented accompanying system and interactive mode
CN107454260A (en) * 2017-08-03 2017-12-08 深圳天珑无线科技有限公司 Terminal enters control method, electric terminal and the storage medium of certain scenarios pattern
CN108012169B (en) * 2017-11-30 2019-02-01 百度在线网络技术(北京)有限公司 A kind of interactive voice throws screen method, apparatus and server
CN108668024B (en) * 2018-05-07 2021-01-08 维沃移动通信有限公司 Voice processing method and terminal
CN108900612A (en) * 2018-06-29 2018-11-27 百度在线网络技术(北京)有限公司 Method and apparatus for pushed information
CN109243444B (en) * 2018-09-30 2021-06-01 百度在线网络技术(北京)有限公司 Voice interaction method, device and computer-readable storage medium

Also Published As

Publication number Publication date
CN109949809A (en) 2019-06-28

Similar Documents

Publication Publication Date Title
US20210034223A1 (en) Method for display control and mobile terminal
CN109343759B (en) Screen-turning display control method and terminal
CN108874352B (en) Information display method and mobile terminal
CN109379484B (en) Information processing method and terminal
CN108196815B (en) Method for adjusting call sound and mobile terminal
CN108391008B (en) Message reminding method and mobile terminal
CN107734170B (en) Notification message processing method, mobile terminal and wearable device
CN109710349B (en) Screen capturing method and mobile terminal
CN111010608B (en) Video playing method and electronic equipment
CN107808107B (en) Application message display method and mobile terminal
CN109949809B (en) Voice control method and terminal equipment
CN111402866A (en) Semantic recognition method and device and electronic equipment
CN111093137B (en) Volume control method, volume control equipment and computer readable storage medium
CN110457716B (en) Voice output method and mobile terminal
CN109982273B (en) Information reply method and mobile terminal
CN111522477A (en) Application starting method and electronic equipment
CN110795188A (en) Message interaction method and electronic equipment
CN111402157B (en) Image processing method and electronic equipment
CN110913070B (en) Call method and terminal equipment
CN110740214B (en) Prompting method, terminal and computer readable storage medium
CN110213439B (en) Message processing method and terminal
CN109543193B (en) Translation method, translation device and terminal equipment
CN108418961B (en) Audio playing method and mobile terminal
CN108089799B (en) Control method of screen edge control and mobile terminal
CN111416955B (en) Video call method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant