CN116665664A - Voice interaction method, terminal equipment and storage medium - Google Patents

Voice interaction method, terminal equipment and storage medium

Info

Publication number
CN116665664A
CN116665664A (application CN202310438750.6A)
Authority
CN
China
Prior art keywords
information
voice
mobile terminal
voice information
interaction method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310438750.6A
Other languages
Chinese (zh)
Inventor
黄育雄
吴海全
曹磊
何桂晓
郭世文
曾添福
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhaoqing Deqing Guanxu Electronics Co ltd
Shenzhen Grandsun Electronics Co Ltd
Original Assignee
Zhaoqing Deqing Guanxu Electronics Co ltd
Shenzhen Grandsun Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhaoqing Deqing Guanxu Electronics Co ltd, Shenzhen Grandsun Electronics Co Ltd filed Critical Zhaoqing Deqing Guanxu Electronics Co ltd
Priority to CN202310438750.6A priority Critical patent/CN116665664A/en
Publication of CN116665664A publication Critical patent/CN116665664A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 15/26: Speech to text systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/52: User-to-user messaging for supporting social networking services
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/70: Reducing energy consumption in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The application belongs to the field of communication technologies and provides a voice interaction method, a terminal device, and a storage medium. The method comprises the following steps: when communication software is running on a mobile terminal, the latest message of the communication software is acquired immediately, then graded and converted into first voice information; the first voice information is sent to a wearable device in communication connection with the mobile terminal; if the mobile terminal receives second voice information that the user sent by operating the wearable device hands-free in response to the first voice information, the second voice information is converted into text information; inquiry information is then generated from the second voice information and sent to the wearable device; and if the mobile terminal receives confirmation information that the user sent by operating the wearable device hands-free in response to the inquiry information, whether to reply to the latest message is judged according to the confirmation information. The application enables new messages to be received and replied to in time and effectively relieves user anxiety.

Description

Voice interaction method, terminal equipment and storage medium
Technical Field
The present application belongs to the field of communication technologies, and in particular, to a voice interaction method, a terminal device, and a storage medium.
Background
Technological development and the Internet have brought unprecedented convenience to people's lives, and the mobile phone, as a communication tool, is now extremely widespread. While mobile phones bring convenience, they also pose safety hazards for users who rely on them excessively for social contact. For example, in order to view or reply to messages in time, some users check their phones while waiting at traffic lights, and others even check them while driving; such behavior puts users in danger at any moment and can even cause traffic accidents.
Disclosure of Invention
In view of the above, the embodiments of the present application provide a voice interaction method, a terminal device, and a storage medium, so as to solve the problem in the prior art that messages cannot be viewed and replied to in time.
A first aspect of an embodiment of the present application provides a voice interaction method, applied to a mobile terminal, where the voice interaction method includes:
when running communication software, immediately acquiring the latest information of the communication software;
carrying out grading processing on the latest information and converting the latest information into first voice information;
transmitting the first voice information to a wearable device in communication connection with the mobile terminal;
if second voice information sent by the user through the wearable device in response to the first voice information is received, converting the second voice information into text information;
generating inquiry information according to the second voice information, and sending the inquiry information to the wearable equipment;
and if confirmation information sent by the user through the wearable device in response to the inquiry information is received, judging whether to use the text information as a reply to the latest message according to the confirmation information.
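The mobile-terminal steps above can be sketched in a few lines of Python. This is only an illustrative outline under assumed interfaces: the `wearable` object, `speech_to_text` callable, and message dictionary fields are hypothetical placeholders, not an API specified by the patent.

```python
# Hypothetical sketch of the mobile-terminal side (first aspect).
# All names (wearable, speech_to_text, message fields) are illustrative.

def handle_latest_message(message, wearable, speech_to_text):
    """Steps S101-S106: grade the message, confirm via the wearable, reply."""
    # S102: grading - assemble source, sender and content into one utterance
    first_voice = f"{message['source']} message from {message['sender']}: {message['content']}"
    # S103: send the first voice information to the wearable device
    wearable.play(first_voice)
    # S104: receive the user's spoken reply and convert it to text
    second_voice = wearable.listen()
    if second_voice is None:
        return None                      # user did not reply
    text = speech_to_text(second_voice)
    # S105: generate inquiry information for confirmation
    inquiry = f"Reply to {message['sender']} with '{text}'. Confirm or cancel?"
    wearable.play(inquiry)
    # S106: only use the text as the reply if the user confirms
    confirmation = wearable.listen()
    return text if confirmation == "confirm" else None
```

A stub wearable with canned voice replies is enough to exercise the whole round trip without real audio hardware.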
A second aspect of an embodiment of the present application provides a voice interaction method, which is applied to a wearable device, where the voice interaction method includes:
if the first voice information sent by the mobile terminal is received, immediately broadcasting the first voice information;
if the reply voice of the user is monitored, second voice information of the user is recorded, wherein the second voice information is reply information made by the user according to the latest message corresponding to the first voice information;
transmitting the second voice information to the mobile terminal;
if the inquiry information sent by the mobile terminal is received, the inquiry information is immediately broadcast;
if the confirmation voice of the user is monitored, the confirmation information of the user is recorded, and the confirmation information is sent to the mobile terminal, so that the mobile terminal judges whether to reply the latest message according to the confirmation information.
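The wearable-device steps above can likewise be sketched as one round trip. The `link` (transport to the mobile terminal) and `audio` (announce/record) primitives below are hypothetical stubs, assumed only for illustration.

```python
# Illustrative sketch of the wearable-device side (second aspect,
# steps S201-S205). Transport and audio primitives are hypothetical.

def wearable_round_trip(link, audio):
    """Announce an incoming message, relay the user's reply and confirmation."""
    # S201: broadcast the first voice information as soon as it arrives
    first_voice = link.receive()
    audio.announce(first_voice)
    # S202-S203: record the user's reply voice and send it to the terminal
    reply = audio.record()
    if reply is None:
        return False                     # no reply voice was heard
    link.send(reply)
    # S204: broadcast the inquiry information returned by the terminal
    audio.announce(link.receive())
    # S205: record and forward the confirmation voice
    confirmation = audio.record()
    link.send(confirmation)
    return confirmation == "confirm"
```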
A third aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where when the terminal device is a mobile terminal, the processor executes the computer program to implement the steps of the voice interaction method according to the first aspect of the embodiments of the present application;
when the terminal device is a wearable device, the processor executes the computer program to implement the steps of the voice interaction method according to the second aspect of the embodiment of the present application.
A fourth aspect of the embodiments of the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the voice interaction method according to the first or second aspect of the embodiments of the present application.
According to the voice interaction method provided by the first aspect of the embodiments of the present application, when the mobile terminal runs communication software, it immediately acquires the latest message of that software, grades it, and converts it into first voice information, so that the first voice information includes the source, sender, and content of the latest message. The first voice information is sent to the wearable device in communication connection with the mobile terminal, so that the user can operate the wearable device hands-free in response to the first voice information and send second voice information to the mobile terminal. If the mobile terminal receives the second voice information, it converts it into text information, then generates inquiry information from the second voice information and sends it to the wearable device, so that the user can operate the wearable device hands-free in response to the inquiry information and send confirmation information. If the terminal device receives the confirmation information, it judges according to the confirmation information whether to use the text information as a reply to the latest message. In this way the user's need to receive and reply to new messages in time can be met, accidents can be effectively avoided when the user is in a dangerous situation, and the user's safety of life and property can be ensured.
According to the voice interaction method provided by the second aspect of the embodiments of the present application, the wearable device receives the first voice information sent by the mobile terminal and immediately broadcasts it. When the wearable device, in its monitoring state, detects that the user has made a reply voice to the latest message corresponding to the first voice information, it records the user's second voice information and sends it to the mobile terminal by wireless communication. If the wearable device receives inquiry information sent by the mobile terminal, it immediately broadcasts the inquiry information; when it detects the user's confirmation voice, it records the confirmation information and sends it to the mobile terminal, so that the mobile terminal judges according to the confirmation information whether to reply to the latest message. In this way the user's need to receive and reply to new messages in time can be met, the user's anxiety can be relieved, accidents can be effectively avoided when the user is in a dangerous situation, and the user's safety of life and property can be ensured.
It will be appreciated that the advantages of the third and fourth aspects can be found in the relevant descriptions of the first and second aspects, and are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a first voice interaction method according to an embodiment of the present application;
FIG. 2 is a flowchart of a second voice interaction method according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a first voice interaction system according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a second voice interaction system according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims are used only to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The popularization of instant messaging tools allows people to communicate with the outside world at every moment; whether eating, resting, or even driving, people race to check and reply to messages. If messages cannot be checked in time, symptoms such as restlessness and anxiety may appear, and missing important messages may cause serious losses; yet checking messages while distracted during driving easily leads to traffic accidents. Therefore, in order to enable people to receive messages safely while driving, a voice interaction method is provided: when the mobile terminal runs communication software, it immediately acquires the latest message of that software, grades it, and converts it into first voice information, so that the first voice information includes the source, sender, and content of the latest message. The first voice information is sent to the wearable device in communication connection with the mobile terminal, so that the user can operate the wearable device hands-free in response to the first voice information and send second voice information to the mobile terminal. If the mobile terminal receives the second voice information, it converts it into text information, then generates inquiry information from the second voice information and sends it to the wearable device, so that the user can operate the wearable device hands-free in response to the inquiry information and send confirmation information. If the terminal device receives the confirmation information, it judges according to the confirmation information whether to use the text information as a reply to the latest message.
The method and the device can ensure the driving safety of the user and timely check and reply the message, thereby effectively relieving the anxiety of the user and greatly ensuring the life and property safety of the user.
As shown in fig. 1, the first voice interaction method provided by the embodiment of the present application is applied to a mobile terminal, and includes the following steps S101 to S106:
step S101, when the communication software is operated, the latest information of the communication software is obtained immediately, and the step S102 is entered.
In application, the acquisition of the latest message of the communication software may be performed by an independent processing center in the mobile terminal, which obtains new messages instantly by interfacing with the communication software; the latest message may also be obtained in real time by the communication software itself; or it may be obtained through any external device connected to the mobile terminal that is capable of acquiring messages.
In application, the user may configure the latest messages to be acquired in real time, or set a period in advance so that they are acquired periodically. The user may configure all latest messages to be acquired, or acquire only the latest messages of one or several specific contacts as required; in the latter case, messages sent by anyone other than the specific contacts are either not processed, or a preset message may be used as the reply to them.
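The per-contact filtering just described can be illustrated with a small helper. This is a hypothetical sketch; the function name, the action tuples, and the watched-contact set are placeholders introduced only for illustration.

```python
# Hypothetical illustration of per-contact filtering: only messages from
# chosen contacts are processed into first voice information; others may
# optionally receive a preset auto-reply, or be left untouched.

def filter_latest_message(message, watched_contacts, preset_reply=None):
    """Return the action to take for an incoming latest message."""
    if message["sender"] in watched_contacts:
        return ("process", message)       # convert to first voice information
    if preset_reply is not None:
        return ("auto_reply", preset_reply)
    return ("ignore", None)               # message is not processed
```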
In one embodiment, the communication software is at least one of WeChat, QQ, WeChat Work (Enterprise WeChat), DingTalk, and SMS.
In application, the communication software includes not only WeChat, QQ, WeChat Work, DingTalk, and SMS, but also KakaoTalk, BlackBerry Messenger (BBM), LINE, Kik Messenger, Viber, Skype, Facebook Messenger, WhatsApp, and other communication software; the above are merely examples and are not limiting.
In application, the communication software may be one or more of the above, which is not limited here, and the user may select the communication software from which to obtain the latest messages according to actual requirements.
Step S102, grading the latest information and converting the latest information into first voice information, and entering step S103.
In the application, after the latest message is acquired, the message is subjected to grading processing, and the processed content is converted into the first voice information.
In application, the source of the latest message may be different contacts or different groups of the same communication software, or the same or different contacts or groups of different communication software; the latest messages are therefore graded so that the first voice information contains, as far as possible, all the information the user wants to obtain.
In one embodiment, step S102 includes:
acquiring the software source, the name of a sender and the message content of the latest message;
the software source, the sender name, and the message content are converted into first voice information.
In application, the grading comprises first-stage, second-stage, and third-stage processing. The first stage obtains the software source of the latest message; the source may be any of several kinds of communication software, such as SMS, WeChat, or QQ. The second stage obtains the sender's name: if the latest message is a group message, the group name and the sender's remark name within the group are obtained; if the latest message is a private message, the remark name the user has set for that person is obtained; if the source is an SMS message, the sender's phone number (if not remarked) or the remarked contact name is obtained. The third stage obtains the content of the latest message: if the content is text, the text is obtained directly; if it is a voice message, the voice is converted into text; if it is a Word, PDF, or other file, only the file name is obtained; if it is a picture, the message content is replaced with "a picture" or other content set by the user.
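The three grading stages above can be sketched as one function. This is an illustrative assumption, not the patent's implementation: the message field names and the `transcribe` placeholder are invented for the example.

```python
# Sketch of the three-stage grading: stage 1 keeps the software source,
# stage 2 resolves the sender name, stage 3 normalises the content by type.
# All field names are hypothetical.

def transcribe(audio):
    # placeholder for a real speech-to-text step on a voice message
    return f"(transcribed) {audio}"

def grade_message(msg):
    source = msg["app"]                            # stage 1: software source
    if msg.get("group"):                           # stage 2: sender name
        sender = f"{msg['sender']} in group {msg['group']}"
    else:
        sender = msg["sender"]
    kind = msg.get("type", "text")                 # stage 3: message content
    if kind == "text":
        content = msg["content"]
    elif kind == "voice":
        content = transcribe(msg["content"])
    elif kind == "file":
        content = f"file: {msg['filename']}"       # only the file name
    else:                                          # pictures and the rest
        content = "a picture"
    return f"{source} message from {sender}: {content}"
```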
Step S103, the first voice information is sent to the wearable device communicatively connected to the mobile terminal, and the step S104 is entered.
In the application, after the first voice information is acquired, the first voice information is required to be sent to a wearable device in communication connection with the mobile terminal in a wireless communication mode, so that the reply of the user can be acquired in time.
In one embodiment, prior to step S103, comprising: and if the broadcast data sent by the wearable equipment is received, establishing wireless communication connection with the wearable equipment.
In application, before sending the first voice information to the wearable device, the wearable device needs to establish a wireless communication connection with the mobile terminal, so that the mobile terminal can directionally send the first voice information and other information to the wearable device.
In application, the above wireless communication connection includes a Bluetooth connection, a Wi-Fi connection, and other wireless communication connections; these are merely examples, and the connection is not limited thereto.
In one embodiment, step S103 includes: transmitting the first voice information to the wearable device through a Bluetooth protocol;
the Bluetooth protocol is one of an A2DP protocol, an AVRCP protocol, an HSP protocol and an HFP protocol.
In application, the wearable device may be one of a Bluetooth headset, a smart watch, a smart bracelet, and other smart wearable devices. The Bluetooth headset may be a Bluetooth Low Energy (BLE) headset containing any of a BLE 4.0, BLE 4.2, BLE 5.0, or BLE 5.2 module, or a Classic Bluetooth headset with a module of Bluetooth 4.0 or below.
In application, Bluetooth protocols include the Advanced Audio Distribution Profile (A2DP), the Audio/Video Remote Control Profile (AVRCP), the Headset Profile (HSP), and the Hands-Free Profile (HFP). Audio carried over A2DP may use at least one of the Sub-Band Coding (SBC) codec, the Advanced Audio Coding (AAC) codec, the Low-Latency Hi-Definition Audio Codec (LHDC), and other codecs. The above Bluetooth protocols are merely exemplary; other Bluetooth protocols not illustrated may be used in practice, and the application is not limited thereto.
Step S104, if second voice information sent by the user through the wearable device in response to the first voice information is received, the second voice information is converted into text information, and step S105 is entered.
In the application, after the wearable device records the second voice information, the second voice information is sent to the mobile terminal through a certain Bluetooth protocol, and the mobile terminal immediately converts the second voice information into text information after receiving the second voice information.
Step S105, generating query information according to the second voice information, and sending the query information to the wearable device, and then entering step S106.
In the application, in order to ensure the accuracy of the reply, the mobile terminal generates inquiry information according to the second voice information, sends the inquiry information to the wearable device through a certain Bluetooth protocol, acquires the confirmation information of the user through the wearable device, and then judges whether to reply the latest message according to the confirmation information. The method and the device can improve the accuracy of the mobile terminal for replying the information, avoid embarrassing or even unsightly situation of the user caused by the reply error, and improve the user experience.
Step S106, if confirmation information sent by the user through the wearable device in response to the inquiry information is received, whether to use the text information as a reply to the latest message is judged according to the confirmation information.
In application, the content of the inquiry information may be set to a certain format according to the user's preference, or generated using a format built into the mobile terminal; the format is not limited here. For example, with the format "reply + communication software name + contact name/group name + message content + confirm or cancel", the inquiry information may be: "Reply to WeChat contact Zhang San with the message 'Are you free today?'. Confirm or cancel?" When the wearable device receives the inquiry information, it immediately broadcasts its content; the user replies by voice with "confirm" or "cancel", and the wearable device sends the voice reply to the mobile terminal as confirmation information. If the confirmation information is "confirm", the mobile terminal replies with the message content to the corresponding contact; if it is "cancel", the mobile terminal cancels the reply to the latest message.
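The inquiry format and the confirm/cancel branch described above can be sketched as follows. The function names and exact wording are hypothetical, illustrating only the "reply + software + contact + content + confirm or cancel" pattern.

```python
# Hypothetical sketch of building the inquiry information and acting on
# the user's confirmation. Names and phrasing are placeholders.

def build_inquiry(app, contact, text):
    """Format: reply + software name + contact name + message content + confirm/cancel."""
    return (f"Reply to {app} contact {contact} with the message "
            f"'{text}'. Confirm or cancel?")

def handle_confirmation(confirmation, send_reply, cancel):
    """Send the pending reply only on 'confirm'; otherwise drop it."""
    if confirmation == "confirm":
        return send_reply()               # reply the content to the contact
    return cancel()                       # cancel the reply to the message
```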
The first voice interaction method provided by this embodiment enables a user to receive and reply to new messages in time even while driving, avoids accidents caused by distraction, and greatly protects the user's life and property. The method also allows messages to be received and replied to on occasions where it is inconvenient or impossible to touch the mobile phone, bringing safety and convenience to the user.
As shown in fig. 2, the second voice interaction method provided by the embodiment of the present application is applied to a wearable device, and includes the following steps S201 to S205:
step S201, if the first voice information sent by the mobile terminal is received, the first voice information is immediately broadcasted, and step S202 is performed.
In application, after the wearable device receives the first voice information, it broadcasts the first voice information through its voice broadcasting module at the first opportunity, and the user may choose to stop the broadcast at any time while it is playing.
In one embodiment, prior to step S201, comprising: and sending broadcast data to the mobile terminal, and establishing wireless communication connection with the mobile terminal.
In application, the information interaction between the mobile terminal and the wearable device is performed by means of a wireless communication connection. The wireless communication is not limited to Bluetooth and may include other modes, such as a wireless local area network (WLAN), Zigbee, a mobile communication network, a global navigation satellite system (GNSS), frequency modulation (FM), Near Field Communication (NFC), infrared (IR), and the like.
In one embodiment, the wearable device is a headset or a telephone watch.
In application, the wearable device may also be any wearable device other than an earphone or telephone watch that is capable of implementing the voice interaction method.
Step S202, if the reply voice of the user is monitored, the second voice information of the user is recorded, and step S203 is performed.
In application, the wearable device can record the voice of the user through the voice recording module, and when the surrounding environment has noise, the voice recording module can eliminate redundant noise according to the frequency spectrum characteristics of the voice, and only the voice is reserved.
In one embodiment, the second voice information is reply information made by the user according to the latest message corresponding to the first voice information.
In the application, the user can wake up the wearable device in a voice wake-up mode, record voice information through the wearable device, then send the voice information to the mobile terminal, and actively send a message to a contact person of certain communication software through the mobile terminal.
In the application, if no latest message is received and no message is required to be actively sent to other people, the wearable device can temporarily disconnect the connection with the mobile terminal, and reestablish the wireless communication connection with the mobile terminal in a voice awakening mode when the information is required to be sent; or the user may set the start time and duration of establishing the connection.
Step S203, the second voice information is sent to the mobile terminal, and step S204 is entered.
In the application, after the second voice information is recorded, the wearable device sends the second voice information to the mobile terminal in a wireless communication mode.
Step S204, if the inquiry information sent by the mobile terminal is received, the inquiry information is immediately broadcast, and step S205 is entered.
In application, the wearable device instantly broadcasts the received inquiry information through the voice broadcasting module. During the broadcast, the user can pause at any time and choose to rebroadcast or skip the broadcast when appropriate.
Step S205, if the confirmation voice of the user is monitored, the confirmation information of the user is recorded, and the confirmation information is sent to the mobile terminal, so that the mobile terminal judges whether to reply to the latest message according to the confirmation information.
In the application, the user's confirmation information prevents an erroneous reply from being sent, avoiding unnecessary trouble or misunderstanding between the user and the contact.
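The wearable-side flow of steps S202 to S205 can be sketched in Python as below. This is only an illustrative outline: the class and function names (`MockTerminal`, `wearable_reply_flow`, and so on) are assumptions made for the sketch, not part of the application; a real device would use actual recording, broadcasting, and wireless transmission in place of the stubs.

```python
class MockTerminal:
    """Stands in for the mobile terminal on the other end of the wireless link."""
    def __init__(self):
        self.received = []

    def send(self, data):
        # In the application this would be a wireless (e.g. Bluetooth) transfer.
        self.received.append(data)

    def receive_query(self):
        # The terminal generates query information from the second voice info.
        return "Reply 'OK, see you at 8' to Alice?"


def wearable_reply_flow(terminal, reply_voice, confirm_voice):
    """Run steps S202-S205 once for a single incoming message."""
    if reply_voice is None:            # S202: no reply voice detected
        return None
    terminal.send(reply_voice)         # S203: send second voice information
    query = terminal.receive_query()   # S204: receive and broadcast the query
    if confirm_voice is not None:      # S205: record and forward confirmation
        terminal.send(confirm_voice)
    return query                       # the text that would be broadcast


terminal = MockTerminal()
query = wearable_reply_flow(terminal, "OK, see you at 8", "yes")
```

The confirmation round-trip in step S205 is what lets the mobile terminal decide whether to actually post the reply, matching the error-prevention rationale above.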
The second voice interaction method provided by the embodiment of the application enables the user, while driving, not only to learn of a contact's message in a timely manner but also to reply to the contact by voice. This prevents the user from missing important messages and the losses that missing them could cause, and can also relieve the anxiety of some users, so the voice interaction method has broad application prospects.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The embodiment of the application provides a first voice interaction system, which is used for executing the steps in the embodiment of the first voice interaction method. The first voice interaction system may be a virtual appliance in the mobile terminal executed by a processor of the mobile terminal, or may be the mobile terminal itself.
As shown in fig. 3, a first voice interaction system 300 includes: an information acquisition module 301, an information processing module 302, an information transmission module 303, and an information reply module 304;
an information obtaining module 301, configured to obtain the latest message of the communication software;
an information processing module 302, configured to convert the latest message into first voice information when the latest message is received, to convert the second voice information into text information upon receiving the second voice information sent by the user operating the wearable device in response to the first voice information, and to generate query information according to the second voice information;
an information sending module 303, configured to send the first voice information and the query information to a wearable device;
and an information reply module 304, configured to reply to the latest message according to the received second voice information sent by the user operating the wearable device in response to the first voice information and the received confirmation information sent by the user operating the wearable device in response to the query information.
In application, the information acquisition module, the information processing module, the information sending module, and the information reply module in the first voice interaction system may be implemented by different logic circuits integrated in one processor, or by a plurality of distributed processors.
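As a rough illustration of how the four modules of the first voice interaction system 300 cooperate, the sketch below maps each module to a method on a single class. All names, and the placeholder text-to-speech and speech-to-text conversions, are assumptions for illustration only; a real implementation would call actual TTS/ASR engines and the communication software's messaging interface.

```python
class FirstVoiceInteractionSystem:
    """Illustrative mobile-terminal side: modules 301-304 as methods."""

    def on_latest_message(self, message, sender, source):
        # 301 (acquire) + 302 (process): combine the hierarchical fields
        # (software source, sender name, message content) into speech.
        return self.to_speech(f"{source}: {sender}: {message}")

    def on_second_voice(self, second_voice):
        # 302: convert the user's reply speech to text, then build the
        # query information that will be sent back for confirmation.
        text = self.to_text(second_voice)
        return text, f"Send the reply '{text}'?"

    def on_confirmation(self, text, confirmed):
        # 304: reply to the latest message only if the user confirmed.
        return text if confirmed else None

    # Placeholder codecs; a real system would invoke TTS/ASR engines.
    def to_speech(self, text):
        return ("speech", text)

    def to_text(self, speech):
        return speech[1]


system = FirstVoiceInteractionSystem()
first_voice = system.on_latest_message("see you at 8?", "Alice", "WeChat")
text, query = system.on_second_voice(("speech", "OK, see you at 8"))
reply = system.on_confirmation(text, confirmed=True)
```

Module 303 (sending) is omitted from the sketch: it would simply carry `first_voice` and `query` over the wireless link to the wearable device.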
The embodiment of the application provides a second voice interaction system, which is used for executing the steps in the embodiment of the second voice interaction method. The second voice interaction system may be a virtual device in the wearable device, executed by a processor of the wearable device, or may be the wearable device itself.
As shown in fig. 4, the second voice interaction system 400 includes: an information receiving module 401, a voice broadcasting module 402, a voice recording module 403 and an information sending module 404;
an information receiving module 401, configured to receive first voice information and query information sent by a mobile terminal;
a voice broadcasting module 402, configured to broadcast the first voice information and the query information;
a voice recording module 403, configured to record second voice information and confirmation information of the user;
and an information sending module 404, configured to send the second voice information and the confirmation information to the mobile terminal.
In application, the information receiving module, the voice broadcasting module, the voice recording module, and the information sending module in the second voice interaction system may be implemented by different logic circuits integrated in one processor, or by a plurality of distributed processors.
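The module division of the second voice interaction system 400 can likewise be sketched as one facade over four interchangeable parts, which also reflects the point above that the modules may be realized by separate circuits or processors. The names and the lambda stubs below are illustrative assumptions, not the application's actual implementation.

```python
class SecondVoiceInteractionSystem:
    """Illustrative wearable-device side: modules 401-404 injected as callables."""

    def __init__(self, receiver, speaker, recorder, sender):
        self.receiver = receiver   # 401: receives info from the mobile terminal
        self.speaker = speaker     # 402: broadcasts first voice/query info
        self.recorder = recorder   # 403: records the user's voice
        self.sender = sender       # 404: sends recordings back to the terminal

    def handle_incoming(self):
        info = self.receiver()     # first voice information or query information
        self.speaker(info)         # broadcast it immediately
        voice = self.recorder()    # second voice info or confirmation info
        if voice is not None:      # only forward if the user actually spoke
            self.sender(voice)
        return voice


log = []
device = SecondVoiceInteractionSystem(
    receiver=lambda: "first voice information",
    speaker=log.append,
    recorder=lambda: "second voice information",
    sender=log.append,
)
result = device.handle_incoming()
```

Because each module is injected, an integrated single-processor build and a distributed multi-processor build differ only in what callables are passed to the constructor.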
As shown in fig. 5, an embodiment of the present application further provides a terminal device 500, including: at least one processor 501 (only one is shown in fig. 5), a memory 502, and a computer program 503 stored in the memory 502 and executable on the at least one processor 501. When the terminal device is a mobile terminal, the processor 501 implements the steps of the first voice interaction method when executing the computer program 503;
when the terminal device is a wearable device, the processor 501 implements the steps of the second voice interaction method when executing the computer program 503.
In application, the terminal device may include, but is not limited to, a memory and a processor. Those skilled in the art will appreciate that fig. 5 is merely an example of a terminal device and does not constitute a limitation: the terminal device may include more or fewer components than shown, combine certain components, or use different components, and may, for example, also include input and output devices, network access devices, and the like. The input and output devices may include cameras, audio acquisition/playback devices, display devices, keyboards, keys, etc. The network access device may include a communication module for communicating with other devices.
In application, the processor may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
In application, the memory may in some embodiments be an internal storage unit of the terminal device, such as a hard disk or an internal memory of the terminal device. In other embodiments, the memory may also be an external storage device of the terminal device, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) provided on the terminal device. Further, the memory may include both an internal storage unit and an external storage device of the terminal device. The memory is used to store the operating system, application programs, the boot loader (Boot Loader), data, and other programs, such as the program code of a computer program. The memory may also be used to temporarily store data that has been output or is to be output.
In application, the terminal device may further include any communication module capable of wired or wireless communication, directly or indirectly, with other devices. For example, the communication module may provide a communication solution applied to a network device, including a universal serial bus (Universal Serial Bus, USB) interface, a wired local area network (Local Area Network, LAN), a wireless local area network (Wireless Local Area Network, WLAN) (for example, a Wi-Fi network), Bluetooth, ZigBee, a mobile communication network, a global navigation satellite system (Global Navigation Satellite System, GNSS), frequency modulation (Frequency Modulation, FM), near-field communication (Near Field Communication, NFC), infrared (Infrared, IR), and the like. The communication module may include a single antenna or an antenna array of multiple antenna elements.
It should be noted that, because the content of the information interaction and the execution process between the above devices/units is based on the same concept as the method embodiments of the present application, reference may be made to the method embodiments for their specific functions and technical effects, which are not repeated here.
It will be apparent to those skilled in the art that the above division of functional units is merely illustrative for convenience and brevity of description. In practical applications, the above functions may be allocated to different functional units as needed, i.e., the internal structure of the apparatus may be divided into different functional units to perform all or part of the functions described above. The functional units in the embodiment may be integrated in one processing unit, may exist alone physically, or two or more units may be integrated in one unit; the integrated units may be implemented in the form of hardware or of software functional units. In addition, the specific names of the functional units are only for distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working process of the units in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
The embodiment of the application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the first or second voice interaction method.
The embodiments of the present application provide a computer program product which, when run on a terminal device, enables the terminal device to carry out the steps of the first or second voice interaction method described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to an apparatus/terminal device, a recording medium, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not detailed or recorded in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed terminal device and method may be implemented in other manners. For example, the above-described embodiments of the terminal device are merely illustrative: the division of modules is merely a logical functional division, and there may be other divisions in actual implementation; for instance, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or modules, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the protection scope of the present application.

Claims (10)

1. A voice interaction method, which is applied to a mobile terminal, the voice interaction method comprising:
when communication software is running, acquiring the latest message of the communication software in real time;
performing hierarchical processing on the latest message and converting it into first voice information;
transmitting the first voice information to a wearable device in communication connection with the mobile terminal;
if second voice information, sent by the user in response to the first voice information without manually operating the wearable device, is received, converting the second voice information into text information;
generating query information according to the second voice information, and sending the query information to the wearable device;
and if confirmation information, sent by the user in response to the query information without manually operating the wearable device, is received, determining whether to use the text information as a reply to the latest message according to the confirmation information.
2. The voice interaction method of claim 1, wherein the communication software comprises at least one of WeChat, QQ, WeChat Work (Enterprise WeChat), DingTalk, and SMS.
3. The voice interaction method of claim 1, wherein before sending the first voice information to the wearable device communicatively connected to the mobile terminal, the method comprises:
and if the broadcast data sent by the wearable equipment is received, establishing wireless communication connection with the wearable equipment.
4. The voice interaction method of claim 1, wherein the step of performing hierarchical processing on the latest message and converting it into the first voice information comprises:
acquiring the software source, the name of a sender and the message content of the latest message;
the software source, the sender name, and the message content are converted into first voice information.
5. The voice interaction method of claim 1, wherein sending the first voice information to the wearable device communicatively connected to the mobile terminal comprises:
transmitting the first voice information to the wearable device through a Bluetooth protocol;
the Bluetooth protocol is one of an A2DP protocol, an AVRCP protocol, an HSP protocol and an HFP protocol.
6. A voice interaction method, characterized by being applied to a wearable device, the voice interaction method comprising:
if the first voice information sent by the mobile terminal is received, immediately broadcasting the first voice information;
if the reply voice of the user is monitored, second voice information of the user is recorded, wherein the second voice information is reply information made by the user according to the latest message corresponding to the first voice information;
transmitting the second voice information to the mobile terminal;
if the query information sent by the mobile terminal is received, the query information is immediately broadcast;
if the confirmation voice of the user is monitored, the confirmation information of the user is recorded and sent to the mobile terminal, so that the mobile terminal determines whether to reply to the latest message according to the confirmation information.
7. The voice interaction method of claim 6, wherein the wearable device is a headset or a telephone watch.
8. The voice interaction method of claim 6, wherein before immediately broadcasting the first voice information if the first voice information sent by the mobile terminal is received, the method comprises:
and sending broadcast data to the mobile terminal, and establishing wireless communication connection with the mobile terminal.
9. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the terminal device is a mobile terminal, the processor implements the steps of the voice interaction method according to any one of claims 1 to 5 when executing the computer program;
and when the terminal device is a wearable device, the processor implements the steps of the voice interaction method according to any one of claims 6 to 8 when executing the computer program.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the voice interaction method of any one of claims 1 to 5 or claims 6 to 8.
CN202310438750.6A 2023-04-20 2023-04-20 Voice interaction method, terminal equipment and storage medium Pending CN116665664A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310438750.6A CN116665664A (en) 2023-04-20 2023-04-20 Voice interaction method, terminal equipment and storage medium

Publications (1)

Publication Number: CN116665664A; Publication Date: 2023-08-29

Family ID: 87717958

Family Applications (1): CN202310438750.6A (pending), CN116665664A (en)

Country Status (1): CN, CN116665664A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117273867A (en) * 2023-11-16 2023-12-22 浙江口碑网络技术有限公司 Information processing system, method, apparatus, electronic device, and computer storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination