CN113393838A - Voice processing method and device, computer-readable storage medium and computer equipment

Info

Publication number
CN113393838A
CN113393838A
Authority
CN
China
Prior art keywords
preset
awakening
recognition result
state
voice
Prior art date
Legal status
Pending
Application number
CN202110734114.9A
Other languages
Chinese (zh)
Inventor
鲁勇
崔潇潇
Current Assignee
Beijing Intengine Technology Co Ltd
Original Assignee
Beijing Intengine Technology Co Ltd
Application filed by Beijing Intengine Technology Co Ltd
Priority to CN202110734114.9A
Publication of CN113393838A

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/08 - Speech classification or search
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/088 - Word spotting
    • G10L2015/223 - Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Selective Calling Equipment (AREA)

Abstract

The embodiments of the invention disclose a voice processing method and apparatus, a computer-readable storage medium and a computer device. The method includes: acquiring voice information; performing voice recognition on the voice information to obtain a recognition result; entering a wake-up state when an entry matching a preset wake-up word exists in the recognition result; extracting a preset command word from the recognition result in the wake-up state; and generating a control instruction according to the preset command word and controlling the smart home system according to the control instruction. In this way, the wake-up word and the command word can be spoken in a single utterance, the control device of the smart home system is woken up and controlled quickly, the efficiency of voice processing is improved, and the control efficiency of the smart home system is improved.

Description

Voice processing method and device, computer readable storage medium and computer equipment
Technical Field
The present invention relates to the field of speech processing technologies, and in particular, to a speech processing method and apparatus, a computer-readable storage medium, and a computer device.
Background
In recent years, with the continuous development of Internet and Internet-of-Things technologies, smart homes have become a development trend; a smart home can greatly improve the safety, convenience and comfort of home life.
Voice recognition technology is widely used in smart homes: it converts digitized speech into text that a computer can process, so that the control module of the smart home can 'understand' a control command much like a person does. Contactless control of the smart home can thus be achieved, further improving the convenience of smart home control.
At present, however, voice control of a smart home is carried out in a dialogue mode, so voice processing is inefficient and the user experience suffers.
Disclosure of Invention
The embodiment of the application provides a voice processing method and device, a computer-readable storage medium and computer equipment.
A first aspect of the present application provides a speech processing method, including:
acquiring voice information;
performing voice recognition on the voice information to obtain a recognition result;
entering a wake-up state when an entry matching a preset wake-up word exists in the recognition result;
extracting a preset command word from the recognition result in the wake-up state;
and generating a control instruction according to the preset command word, and controlling the smart home system according to the control instruction.
Accordingly, a second aspect of the present application provides a speech processing apparatus, comprising:
an acquisition unit, configured to acquire voice information;
a recognition unit, configured to perform voice recognition on the voice information to obtain a recognition result;
a wake-up unit, configured to enter a wake-up state when an entry matching a preset wake-up word exists in the recognition result;
an extraction unit, configured to extract a preset command word from the recognition result in the wake-up state;
and a control unit, configured to generate a control instruction according to the preset command word and control the smart home system according to the control instruction.
In some embodiments, the wake-up unit includes:
the segmentation subunit, configured to segment the recognition result into a plurality of entries;
and the wake-up subunit, configured to enter a wake-up state when an entry matching a preset wake-up word exists in the plurality of entries.
In some embodiments, the segmentation subunit includes:
the acquisition module, configured to acquire an entry length of the preset wake-up word;
and the first segmentation module, configured to segment the recognition result according to the entry length to obtain the plurality of entries.
In some embodiments, the wake-up unit includes:
the obtaining subunit, configured to obtain a current operating state when an entry matching the preset wake-up word exists in the recognition result;
and the switching subunit, configured to switch the operating state to the wake-up state when the current operating state is a standby state.
In some embodiments, the apparatus further includes:
the switching unit, configured to switch the operating state to the standby state when no recognition result matching a preset command word is received within a preset time period.
In some embodiments, the extraction unit includes:
the determining subunit, configured to determine a target entry as a preset command word when it is detected that the target entry matching a command word in a preset command word set exists in the recognition result;
and the extraction subunit, configured to extract the preset command word.
In some embodiments, the determining subunit includes:
the second segmentation module, configured to segment the recognition result into a plurality of entries;
the matching module, configured to match the plurality of entries against the command words in a preset command word set respectively;
and the determining module, configured to determine a target entry as the preset command word when the target entry matching a command word in the preset command word set exists.
The third aspect of the present application further provides a computer-readable storage medium, which stores a plurality of instructions adapted to be loaded by a processor to perform the steps of the speech processing method provided in the first aspect of the present application.
A fourth aspect of the present application provides a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the speech processing method provided in the first aspect of the present application when executing the computer program.
A fifth aspect of the present application provides a computer program product or computer program comprising computer instructions stored in a storage medium. The processor of the computer device reads the computer instructions from the storage medium, and the processor executes the computer instructions to make the computer device execute the steps of the voice processing method provided by the first aspect.
The voice processing method provided by the embodiments of the present application acquires voice information; performs voice recognition on the voice information to obtain a recognition result; enters a wake-up state when an entry matching a preset wake-up word exists in the recognition result; extracts a preset command word from the recognition result in the wake-up state; and generates a control instruction according to the preset command word and controls the smart home system according to the control instruction. In this way, the wake-up word and the command word can be spoken in a single utterance, the control device of the smart home system is woken up and the smart home system is controlled, the efficiency of voice processing is improved, and the control efficiency of the smart home system is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a schematic diagram of a scenario of speech processing provided herein;
FIG. 2 is a schematic flow chart of a speech processing method provided herein;
FIG. 3 is a schematic diagram of a speech processing apparatus provided in the present application;
fig. 4 is a schematic structural diagram of a terminal provided in the present application.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiments of the invention provide a voice processing method and apparatus, a computer-readable storage medium and a computer device. The voice processing method can be used in a voice processing apparatus, and the voice processing apparatus may be integrated in a computer device, which may be a terminal or a server. The terminal may be a mobile phone, a tablet computer, a notebook computer, a smart television, a wearable smart device, a personal computer (PC), and the like. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery network (CDN) acceleration, big data and artificial intelligence platforms.
Please refer to fig. 1, which is a schematic view of a speech processing scenario provided in the present application; as shown in the figure, the computer device acquires voice information, and then performs voice recognition on the acquired voice information to obtain a recognition result, specifically, the recognition result may be represented in a text form. And then, detecting whether an entry matched with the preset awakening word exists in the recognition result, and when the entry matched with the preset awakening word exists in the recognition result, enabling the control device of the intelligent home system to enter an awakening state. And then, in the awakening state, continuously extracting preset command words from the recognition result, generating a control instruction according to the preset command words, and finally controlling the intelligent home system according to the control instruction.
It should be noted that the scene schematic diagram of the speech processing shown in fig. 1 is only an example, and the speech processing scene described in the embodiment of the present application is for more clearly illustrating the technical solution of the present application, and does not constitute a limitation on the technical solution provided by the present application. As can be appreciated by those skilled in the art, with the evolution of speech processing and the emergence of new service scenarios, the technical solutions provided in the present application are also applicable to similar technical problems.
Based on the above-described implementation scenarios, detailed descriptions will be given below.
Embodiments of the present application will be described from the perspective of a voice processing apparatus, which may be integrated in a computer device; the computer device may be a terminal or a server. Fig. 2 is a schematic flowchart of the voice processing method provided by the present application, and the method includes:
Step 101: acquiring voice information.
The voice processing method can be applied to an artificial intelligence system equipped with a voice recognition module and a voice control module. Such artificial intelligence systems include, but are not limited to, Bluetooth speakers and voice-controlled smart home systems. In the embodiments of the present application, a voice-controlled smart home system is taken as an example for detailed description.
In the above smart home system, a control device may be configured for the smart home system, and any electrical device in the smart home system may be controlled by the control device. For example, the control device may turn lights on or off, open or close a motorized curtain, turn the television on or off, adjust the temperature of a refrigerator or an air conditioner, and so on. The control device may take the form of a terminal, such as a smartphone, or may be integrated in one or more of the electrical devices in the smart home, such as a smart television or a smart speaker. When the control device is a smartphone, the user can control the smart home system through a control page displayed on the smartphone, or by inputting voice information into the smartphone. When the control device is integrated in an electrical device in the smart home, voice information can be sent to the control device to control the smart home system.
At present, to control the smart home system by sending voice information to the control device, the user generally first needs to send voice information containing a wake-up word to the control device to wake up the smart home system. After the control device of the smart home system has been woken up and has given voice feedback indicating that it has entered the wake-up state, the user sends voice information containing a command word to the control device; after receiving it, the control device extracts the command word and controls the smart home system according to the command word.
Specifically, in actual use, to improve the user experience, the control device of the smart home system is usually given the identity of a virtual character, for example 'Xiaobai'. When a user wants to control the smart home system through Xiaobai, the user generally first says 'Xiaobai, hello', or simply 'Xiaobai', where 'Xiaobai' is the wake-up word corresponding to the control device of the smart home system. When the control device receives this voice information, it performs voice recognition on it, and when the voice information contains the wake-up word 'Xiaobai', the control device switches its operating mode to the wake-up mode and gives feedback, such as 'hello'. The control device then continues to receive the user's voice information, for example 'turn on the television'. After receiving this voice information, the control device recognizes it and extracts the command word 'turn on the television' from the recognition result. The state of the television is then controlled according to the command word, so that the television is turned on. In some cases, the control device may also give voice feedback before turning on the television, such as 'OK' or 'OK, I am turning on the television for you'.
In the above process, the user must first input wake-up voice information to the control device of the smart home system to wake it up, and only after the control device gives feedback can the user send command voice information to it to perform voice control of the smart home system. This process is inefficient.
In order to solve the problem of low efficiency of the voice control intelligent home system, the application provides a voice processing method. The following describes the speech processing method provided by the present application in detail.
Firstly, when the control device of the smart home system has not acquired any voice information, it remains in a standby state. In the standby state the control device is not dormant but in a low-power state, in which it keeps listening for voice information at low power so that voice input can be detected in time. Once incoming voice information is detected, the control device switches to continuous acquisition so that the complete voice information can be captured.
Step 102: performing voice recognition on the voice information to obtain a recognition result.
The control device of the smart home system includes at least the following modules: a voice information acquisition module, which acquires the user's voice information, converts it into an electrical signal and outputs it as the input of the voice recognition processing module; a voice recognition processing module, which recognizes the input signal, that is, the electrical signal converted from the voice information, and performs corresponding processing according to the recognition result; and a control module, which controls the electrical devices in the smart home system.
After the voice information acquisition module acquires the voice information, the voice information is converted into a corresponding electric signal and is output to the voice recognition processing module. And the voice recognition processing module recognizes the electric signal after receiving the electric signal sent by the voice information acquisition module to obtain a recognition result.
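The cooperation of these modules can be pictured with a minimal Python sketch. All class and method names below are assumptions made for illustration, not an interface defined by the present application:

class VoiceAcquisitionModule:
    """Stand-in for the voice information acquisition module."""
    def capture(self):
        # A real device would read microphone samples here and return them
        # as a digital signal (for example, PCM frames).
        raise NotImplementedError

class SpeechRecognitionModule:
    """Stand-in for the voice recognition processing module."""
    def transcribe(self, signal) -> str:
        # Any ASR engine could be plugged in here; it returns the
        # recognition result as text.
        raise NotImplementedError

class ControlDevice:
    """Illustrative control device wiring the modules together."""
    def __init__(self, acquisition, recognizer, home_controller):
        self.acquisition = acquisition          # voice information acquisition module
        self.recognizer = recognizer            # voice recognition processing module
        self.home_controller = home_controller  # control module for the appliances

    def process_once(self) -> str:
        signal = self.acquisition.capture()
        recognition_result = self.recognizer.transcribe(signal)
        return recognition_result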
Step 103: entering a wake-up state when an entry matching the preset wake-up word exists in the recognition result.
After the acquired voice information has been recognized and a recognition result obtained, whether a wake-up word exists in the recognition result is detected; for example, when the wake-up word is 'Xiaobai', whether an entry such as 'Xiaobai' exists in the recognition result is detected. When the entry 'Xiaobai' exists in the recognition result, the operating state of the control device of the smart home system is switched to the wake-up state. In the wake-up state, the control device acquires the user's voice information at any time and recognizes the acquired voice information. If the wake-up entry 'Xiaobai' does not exist in the recognition result, the control device remains in the standby state and continues to acquire voice information in a low-power mode, avoiding wasted energy. Unlike the related art, in the present application, after the preset wake-up word is detected in the recognition result and the control device is switched to the wake-up state, the control device does not give voice feedback, but continues to perform further command word detection on the recognition result.
In some embodiments, in the wake-up state, in order to improve the accuracy of the recognition result of the acquired voice information, enhancement processing may be performed on the voice information after it is acquired, and the enhanced voice information is then recognized to obtain a more accurate recognition result.
In some embodiments, entering an awake state when there is an entry matching a preset awake word in the recognition result includes:
1. segmenting the recognition result into a plurality of entries;
2. and when the entries matched with the preset awakening words exist in the plurality of entries, entering an awakening state.
In the embodiments of the present application, to detect whether the preset wake-up word exists in the recognition result, the recognition result may be segmented to obtain a plurality of entries. The segmented entries are then matched one by one against the preset wake-up word, and when an entry matching the preset wake-up word exists among the plurality of entries, the control device of the smart home system is controlled to enter the wake-up state.
When matching the segmented entries against the preset wake-up word one by one, the text similarity between each segmented entry and the preset wake-up word may be calculated; when there is an entry whose text similarity with the preset wake-up word reaches a preset threshold, the control device of the smart home system is controlled to enter the wake-up state.
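A minimal sketch of this entry-by-entry matching is given below, assuming a character-level similarity measure from Python's standard library as a stand-in for whatever text similarity is actually used; the example entries and the threshold value are hypothetical:

from difflib import SequenceMatcher

def matches_wake_word(entries, wake_word, threshold=0.8):
    # Return True if any segmented entry reaches the similarity threshold
    # with the preset wake-up word.
    return any(
        SequenceMatcher(None, entry, wake_word).ratio() >= threshold
        for entry in entries
    )

# Hypothetical usage: matches_wake_word(['小白', '白开', '开灯'], '小白') -> True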
In some embodiments, segmenting the recognition result into a plurality of terms includes:
1.1, obtaining the entry length of a preset awakening word;
1.2, segmenting the recognition result according to the length of the entries to obtain a plurality of entries.
The entry length may be a number of Chinese characters or a number of characters in another language. For example, when the wake-up word is 'Xiaobai', the entry length of the preset wake-up word is two Chinese characters. After the entry length of the preset wake-up word is obtained, the recognition result is segmented according to the entry length to obtain a plurality of entries, each with the same entry length as the preset wake-up word. Note that segmenting the recognition result according to the entry length does not mean mechanically cutting it into consecutive, non-overlapping pieces of that length; instead, a sliding window of the entry length is applied, so that overlapping entries are obtained. In this way, the preset wake-up word can be recognized no matter where it appears in the recognition result, whether at the beginning, in the middle or at the end.
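Such a sliding-window segmentation can be sketched as follows; the function name and the example string are illustrative only:

def segment_by_entry_length(recognition_result: str, entry_length: int):
    # Slide a window of the wake-up word's entry length over the recognized
    # text, producing overlapping entries so the wake-up word is caught at
    # any position.
    if len(recognition_result) <= entry_length:
        return [recognition_result]
    return [
        recognition_result[i:i + entry_length]
        for i in range(len(recognition_result) - entry_length + 1)
    ]

# A two-character window over a six-character result yields five overlapping
# entries: segment_by_entry_length('abcdef', 2) -> ['ab', 'bc', 'cd', 'de', 'ef']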
In some embodiments, when there is an entry matching a preset wake word in the recognition result, entering a wake state includes:
A. when the entry matched with the preset awakening word exists in the recognition result, acquiring the current running state;
B. and when the current running state is the standby state, switching the running state into the awakening state.
In the embodiments of the present application, voice information may be acquired either in the standby state or in the wake-up state. Therefore, when an entry matching the preset wake-up word is detected in the recognition result, the current operating state of the control device of the smart home system is obtained; if the current operating state is the standby state, the operating state of the control device is switched to the wake-up state. Otherwise, if the control device is already in the wake-up state, its operating state is left unchanged.
Alternatively, in some embodiments, when the voice information is acquired, the operating state of the control device of the smart home system is obtained first; if the current operating state is the wake-up state, there is no need to detect whether the preset wake-up word exists in the recognition result, and if the current operating state is the standby state, whether the preset wake-up word exists in the recognition result is detected.
In some embodiments, the speech processing method provided by the present application further includes:
and when the voice information is not received in the preset time period, switching the running state into the standby state.
When the preset wake-up word is detected in the voice information, the control device of the smart home system is woken up and enters the wake-up state. In the wake-up state, the control device continuously acquires voice information and performs voice recognition on it, so as to extract command words and control the smart home system accordingly. Because the control device continuously acquires and recognizes voice in the wake-up state, it is in a high-power-consumption state. However, the user does not need voice control of the smart home at all times; typically, the user controls the smart home system during a few periods such as arriving home, going to bed and getting up in the morning, while control instructions at other times are sporadic. Therefore, while acquiring voice information in the wake-up state, the control device also tracks how long no voice information has been acquired, and when it detects that no voice information has been acquired for a period of preset length, its operating state is switched back to the standby state.
In some embodiments, if, in the wake-up state, no command word is found in the recognition result after the control device of the smart home system recognizes the acquired voice information, the duration is accumulated; when the accumulated duration reaches a preset time period, the operating state of the control device is switched to the standby state. That is, if no command word is detected within the preset time period, the operating state of the control device of the smart home system is switched to the standby state.
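The fallback to the standby state can be sketched as a small timer; the class name, the timeout value and the use of seconds are assumptions made for illustration:

import time

class WakeStateTimer:
    # Switch back to standby when no command word has been detected for a
    # preset period.
    def __init__(self, timeout_seconds=30.0):
        self.timeout_seconds = timeout_seconds
        self.state = 'standby'
        self._last_activity = time.monotonic()

    def on_wake_word(self):
        self.state = 'awake'
        self._last_activity = time.monotonic()

    def on_command_word(self):
        # A recognized command word resets the accumulated idle duration.
        self._last_activity = time.monotonic()

    def tick(self):
        # Called periodically; fall back to standby after the idle timeout.
        idle = time.monotonic() - self._last_activity
        if self.state == 'awake' and idle >= self.timeout_seconds:
            self.state = 'standby'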
Step 104: extracting the preset command word from the recognition result in the wake-up state.
After the operating state of the control device of the smart home system has been switched to the wake-up state, whether a preset command word exists in the recognition result can be further detected. There may be one or more preset command words; in general, a plurality of command words are preset, and different electrical devices may have different command words. For example, the command words for controlling a motorized curtain may be 'open the curtain' and 'close the curtain', and the command words for controlling a television may be 'turn on the television', 'turn off the television', 'turn up the television volume' or 'switch the television channel'.
In general, there are many preset command words, and detecting whether a preset command word exists in the recognition result requires a relatively large amount of computation, so the detection process consumes more energy and needs to be executed in the wake-up state. When a preset command word is detected in the recognition result in the wake-up state, the preset command word is extracted.
In some embodiments, extracting the preset command word in the recognition result in the wake state includes:
1. when detecting that a target entry matched with a command word in a preset command word set exists in the recognition result, determining the target entry as the preset command word;
2. and extracting preset command words.
In the embodiments of the present application, a preset command word set may be configured for the control device of the smart home system. The recognition result is then matched one by one against each command word in the preset command word set to obtain one or more target entries matching command words in the set, and the matched target entries are determined as preset command words.
In some embodiments, when it is detected that a target entry matching a command word in the preset command word set exists in the recognition result, determining the target entry as the preset command word includes:
1.1, dividing the recognition result into a plurality of entries;
1.2, matching the multiple entries with command words in a preset command word set respectively;
1.3, when a target entry matched with a command word in the preset command word set exists, determining the target entry as the preset command word.
In the embodiments of the present application, the recognition result may also be segmented, and the segmented entries are then matched against the command words in the preset command word set, so that the matching target entries are determined as preset command words. Here, the segmentation is not performed with a single fixed length, but with multiple lengths. For example, when the recognition result is a Chinese text, it may be segmented with a length of one Chinese character, with a length of two Chinese characters, with lengths of more Chinese characters, and so on, to obtain an entry set consisting of a plurality of entries. The entry set is then matched against the preset command word set.
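A minimal sketch of this multi-length segmentation and matching is shown below; the command word set and the example recognition result are hypothetical:

def extract_command_words(recognition_result, command_word_set, max_length=8):
    # Segment the recognition result with several window lengths and keep
    # the entries that exactly match a command word in the preset set.
    found = []
    for length in range(1, min(max_length, len(recognition_result)) + 1):
        for i in range(len(recognition_result) - length + 1):
            entry = recognition_result[i:i + length]
            if entry in command_word_set and entry not in found:
                found.append(entry)
    return found

commands = {'打开电视', '关闭电视', '打开窗帘', '关闭窗帘'}
print(extract_command_words('小白打开电视', commands))  # -> ['打开电视']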
In some embodiments, in the wake mode, speech recognition may be performed on the acquired speech information again to obtain a new recognition result.
The voice recognition of the acquired voice information in the wake-up mode may include:
A. carrying out voice enhancement processing on the acquired voice information to obtain voice information after the voice enhancement processing;
B. and identifying the voice information after the voice enhancement processing to obtain a new identification result.
The control device of the smart home system is still in the standby state when the voice information is first acquired, and in the standby state it is in a low-power state, so its capability for voice acquisition and processing is relatively weak. Because the wake-up word is generally a single, relatively fixed word, the recognition result of the voice information acquired in the standby state is sufficient for detecting the wake-up word. Detecting command words in the recognition result is more difficult: on the one hand, the voice information may contain several command words; on the other hand, the number of command words is large, different electrical devices may correspond to different command words, and the same electrical device may have several different command words. Therefore, for command word detection, relatively clear voice information needs to be collected and more effective voice processing needs to be performed. Accordingly, in the present application, after the control device of the smart home system switches to the wake-up state, the acquired voice information is enhanced and recognized again; specifically, enhancement processing is performed on the acquired voice information, and the enhanced voice information is then recognized, so as to obtain a more accurate recognition result.
Voice enhancement processing extracts the useful voice signal from a noisy background and suppresses or reduces noise interference. That is, the original voice can be extracted from the noisy voice as cleanly as possible, thereby reducing the interference of noise with the voice recognition process. Specifically, voice enhancement methods include, but are not limited to, noise cancellation methods, harmonic enhancement methods, and the like.
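As one illustration, a very small spectral-subtraction sketch is given below. Spectral subtraction is only one common noise-reduction approach and is not necessarily the method used by the present application; the frame sizes, the number of noise frames and the spectral floor factor are assumptions:

import numpy as np

def spectral_subtraction(noisy, frame_len=512, hop=256, noise_frames=10):
    # Assumes `noisy` is a 1-D float array longer than `frame_len` and that
    # the first `noise_frames` frames contain noise only.
    window = np.hanning(frame_len)
    n_frames = 1 + (len(noisy) - frame_len) // hop
    spectra = np.array([
        np.fft.rfft(window * noisy[i * hop:i * hop + frame_len])
        for i in range(n_frames)
    ])
    mag, phase = np.abs(spectra), np.angle(spectra)
    # Estimate the noise floor from the leading frames and subtract it,
    # keeping a small spectral floor to avoid negative magnitudes.
    noise_mag = mag[:noise_frames].mean(axis=0)
    clean_mag = np.maximum(mag - noise_mag, 0.05 * noise_mag)
    # Overlap-add resynthesis using the original phase.
    out = np.zeros(n_frames * hop + frame_len)
    for i, spec in enumerate(clean_mag * np.exp(1j * phase)):
        out[i * hop:i * hop + frame_len] += np.fft.irfft(spec, n=frame_len) * window
    return out[:len(noisy)]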
In some embodiments, extracting the preset command word in the recognition result in the wake state includes:
a. acquiring position information of a preset awakening word in voice information in an awakening state;
b. dividing the voice information into a first voice segment and a second voice segment according to the position information, wherein the first voice segment is a voice segment corresponding to a preset awakening word;
c. and extracting preset command words from the second voice segment.
In the voice processing method provided by the present application, the acquired voice information may contain both the wake-up word and a command word. After the wake-up word is determined from the recognition result, the voice segment corresponding to the wake-up word can be determined. The voice segment corresponding to the wake-up word is then removed, and only the voice segments other than the wake-up word are kept. These remaining segments are enhanced and then recognized, so that a more accurate recognition result for the command words is obtained.
In particular, the wake-up word may be at the head of the voice information, such as 'Xiaobai, turn on the television'; at the tail of the voice information, such as 'turn on the television, Xiaobai'; or in the middle of the voice information, such as 'close the curtain, Xiaobai, and turn on the television'. For any voice information, after the position information of the wake-up word 'Xiaobai' in the voice information is determined, the voice information can be divided into a first voice segment corresponding to 'Xiaobai' and a second voice segment corresponding to the voice information other than 'Xiaobai'. It can be understood that the preset command word must be in the second voice segment and not in the first voice segment. Therefore, the preset command word can be extracted directly from the second voice segment, eliminating interference from the wake-up word and further improving the efficiency of extracting the preset command word.
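Splitting the audio around the wake-up word can be sketched as follows, assuming the recognizer provides start and end sample indices for the wake-up word; the function and parameter names are illustrative:

import numpy as np

def split_around_wake_word(samples, wake_start, wake_end):
    # Divide the audio into the first voice segment (the wake-up word) and
    # the second voice segment (everything else), from which the preset
    # command word is then extracted.
    first_segment = samples[wake_start:wake_end]
    second_segment = np.concatenate([samples[:wake_start], samples[wake_end:]])
    return first_segment, second_segment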
Step 105: generating a control instruction according to the preset command word, and controlling the smart home system according to the control instruction.
After the preset command word is extracted from the recognition result in the wake-up state, a control instruction is generated according to the preset command word. The control instruction includes the controlled object and the specific control operation for the controlled object. The controlled object may be one or more electrical devices, determined by the number and content of the preset command words.
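A control instruction of this form, together with a hypothetical mapping from preset command words to instructions, can be sketched as follows; the field names, command words and device names are assumptions:

from dataclasses import dataclass

@dataclass
class ControlInstruction:
    controlled_object: str   # e.g. 'television', 'curtain'
    operation: str           # e.g. 'turn_on', 'turn_off'

# Hypothetical mapping from preset command words to control instructions.
COMMAND_TABLE = {
    '打开电视': ControlInstruction('television', 'turn_on'),
    '关闭电视': ControlInstruction('television', 'turn_off'),
    '打开窗帘': ControlInstruction('curtain', 'open'),
    '关闭窗帘': ControlInstruction('curtain', 'close'),
}

def build_instructions(command_words):
    # One command word targets one device; several words target several.
    return [COMMAND_TABLE[w] for w in command_words if w in COMMAND_TABLE]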
Thus, in the embodiments of the present application, when voice information is acquired, wake-up word and command word recognition is performed directly on the voice information. If the wake-up word is recognized, the control device of the smart home system is woken up, and the smart home system is further controlled according to the recognized command word; if the wake-up word is not recognized, the control device is not woken up and the smart home system is not controlled. If the wake-up word is recognized but no command word is recognized, the control device of the smart home system is woken up and continues voice acquisition and command word recognition. Compared with the related art, recognition does not have to stop after the wake-up word is recognized, and no wake-up feedback needs to be given, so the efficiency of voice processing can be improved.
As can be seen from the above description, the voice processing method provided by the embodiments of the present application acquires voice information; performs voice recognition on the voice information to obtain a recognition result; enters a wake-up state when an entry matching a preset wake-up word exists in the recognition result; extracts a preset command word from the recognition result in the wake-up state; and generates a control instruction according to the preset command word and controls the smart home system according to the control instruction. In this way, the wake-up word and the command word can be spoken in a single utterance, the smart home control module is woken up and the smart home system is controlled, the efficiency of voice processing is improved, and the control efficiency of the smart home system is improved.
In order to better implement the above method, an embodiment of the present invention further provides a voice processing apparatus, which may be integrated in a terminal.
For example, fig. 3 is a schematic structural diagram of a voice processing apparatus provided by an embodiment of the present application. The voice processing apparatus may include an acquisition unit 201, a recognition unit 202, a wake-up unit 203, an extraction unit 204 and a control unit 205, as follows:
an acquisition unit 201, configured to acquire voice information;
a recognition unit 202, configured to perform voice recognition on the voice information to obtain a recognition result;
a wake-up unit 203, configured to enter a wake-up state when an entry matching a preset wake-up word exists in the recognition result;
an extraction unit 204, configured to extract a preset command word from the recognition result in the wake-up state;
and a control unit 205, configured to generate a control instruction according to the preset command word and control the smart home system according to the control instruction.
In some embodiments, the wake-up unit includes:
the segmentation subunit, configured to segment the recognition result into a plurality of entries;
and the wake-up subunit, configured to enter a wake-up state when an entry matching the preset wake-up word exists in the plurality of entries.
In some embodiments, the segmentation subunit includes:
the acquisition module, configured to acquire an entry length of the preset wake-up word;
and the first segmentation module, configured to segment the recognition result according to the entry length to obtain the plurality of entries.
In some embodiments, the wake-up unit includes:
the obtaining subunit, configured to obtain a current operating state when an entry matching the preset wake-up word exists in the recognition result;
and the switching subunit, configured to switch the operating state to the wake-up state when the current operating state is a standby state.
In some embodiments, the voice processing apparatus provided by the embodiments of the present application further includes:
the switching unit, configured to switch the operating state to the standby state when no voice information is received within a preset time period.
In some embodiments, the extraction unit includes:
the determining subunit, configured to determine a target entry as a preset command word when it is detected that the target entry matching a command word in a preset command word set exists in the recognition result;
and the extraction subunit, configured to extract the preset command word.
In some embodiments, the determining subunit includes:
the second segmentation module, configured to segment the recognition result into a plurality of entries;
the matching module, configured to match the plurality of entries against the command words in a preset command word set respectively;
and the determining module, configured to determine a target entry as the preset command word when the target entry matching a command word in the preset command word set exists.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
As can be seen from the above description, in the voice processing apparatus provided by the embodiments of the present application, the acquisition unit 201 acquires voice information; the recognition unit 202 performs voice recognition on the voice information to obtain a recognition result; the wake-up unit 203 controls entry into the wake-up state when an entry matching the preset wake-up word exists in the recognition result; the extraction unit 204 extracts the preset command word from the recognition result in the wake-up state; and the control unit 205 generates a control instruction according to the preset command word and controls the smart home system according to the control instruction. In this way, the wake-up word and the command word can be spoken in a single utterance, the control device of the smart home system is woken up and the smart home system is controlled, the efficiency of voice processing is improved, and the control efficiency of the smart home system is improved.
An embodiment of the present application also provides a computer device, which may be a terminal, as shown in fig. 4, where the terminal may include a Radio Frequency (RF) circuit 301, a memory 302 including one or more computer-readable storage media, an input unit 303, a display unit 304, a sensor 305, an audio circuit 306, a Wireless Fidelity (WiFi) module 307, a processor 308 including one or more processing cores, and a power supply 309. Those skilled in the art will appreciate that the terminal configuration shown in fig. 4 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 301 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, for receiving downlink information from a base station and then processing the received downlink information by one or more processors 308; in addition, data relating to uplink is transmitted to the base station. In general, the RF circuit 301 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 301 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The memory 302 may be used to store software programs and modules, and the processor 308 executes various functional applications and information interactions by executing the software programs and modules stored in the memory 302. The memory 302 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal, etc. Further, the memory 302 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 302 may also include a memory controller to provide the processor 308 and the input unit 303 access to the memory 302.
The input unit 303 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, in one particular embodiment, the input unit 303 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user (e.g., operations by a user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment) thereon or nearby, and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 308, and can receive and execute commands sent by the processor 308. In addition, touch sensitive surfaces may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 303 may include other input devices in addition to the touch-sensitive surface. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 304 may be used to display information input by or provided to the user and various graphical user interfaces of the terminal, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 304 may include a Display panel, and optionally, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay the display panel, and when a touch operation is detected on or near the touch-sensitive surface, the touch operation is transmitted to the processor 308 to determine the type of touch event, and the processor 308 then provides a corresponding visual output on the display panel according to the type of touch event. Although in FIG. 4 the touch-sensitive surface and the display panel are shown as two separate components to implement input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to implement input and output functions.
The terminal may also include at least one sensor 305, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel according to the brightness of ambient light, and a proximity sensor that may turn off the display panel and/or the backlight when the terminal is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal, detailed description is omitted here.
Audio circuitry 306, a speaker, and a microphone may provide an audio interface between the user and the terminal. The audio circuit 306 may transmit the electrical signal converted from the received audio data to a speaker, and convert the electrical signal into a sound signal for output; on the other hand, the microphone converts the collected sound signal into an electric signal, which is received by the audio circuit 306 and converted into audio data, which is then processed by the audio data output processor 308, and then transmitted to, for example, another terminal via the RF circuit 301, or the audio data is output to the memory 302 for further processing. The audio circuitry 306 may also include an earbud jack to provide peripheral headset communication with the terminal.
WiFi is a short-range wireless transmission technology. Through the WiFi module 307, the terminal can help the user send and receive e-mails, browse web pages, access streaming media and so on, providing the user with wireless broadband Internet access. Although fig. 4 shows the WiFi module 307, it is understood that it is not an essential part of the terminal and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 308 is a control center of the terminal, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 302 and calling data stored in the memory 302, thereby performing overall monitoring of the mobile phone. Optionally, processor 308 may include one or more processing cores; preferably, the processor 308 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 308.
The terminal also includes a power supply 309 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 308 via a power management system to manage charging, discharging, and power consumption management functions via the power management system. The power supply 309 may also include one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and any like components.
Although not shown, the terminal may further include a camera, a bluetooth module, and the like, which will not be described herein. Specifically, in this embodiment, the processor 308 in the terminal loads the executable file corresponding to the process of one or more application programs into the memory 302 according to the following instructions, and the processor 308 runs the application programs stored in the memory 302, thereby implementing various functions:
acquiring voice information; performing voice recognition on the voice information to obtain a recognition result; entering a wake-up state when an entry matching a preset wake-up word exists in the recognition result; extracting a preset command word from the recognition result in the wake-up state; and generating a control instruction according to the preset command word, and controlling the smart home system according to the control instruction.
It should be noted that the computer device provided in the embodiment of the present application and the method in the foregoing embodiment belong to the same concept, and specific implementation of the above operations may refer to the foregoing embodiment, which is not described herein again.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present invention provide a computer-readable storage medium having stored therein a plurality of instructions, which can be loaded by a processor to perform the steps of any of the methods provided by the embodiments of the present invention. For example, the instructions may perform the steps of:
acquiring voice information; performing voice recognition on the voice information to obtain a recognition result; entering a wake-up state when an entry matching a preset wake-up word exists in the recognition result; extracting a preset command word from the recognition result in the wake-up state; and generating a control instruction according to the preset command word, and controlling the smart home system according to the control instruction.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the computer-readable storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the computer-readable storage medium can execute the steps in any method provided by the embodiment of the present invention, the beneficial effects that can be achieved by any method provided by the embodiment of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
According to an aspect of the present application, a computer program product or a computer program is provided, which includes computer instructions stored in a storage medium. A processor of the computer device reads the computer instructions from the storage medium and executes them, so that the computer device performs the method provided in the various optional implementations of fig. 2 described above.
The speech processing method and apparatus, computer-readable storage medium, and computer device provided in the embodiments of the present invention are described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A method of speech processing, the method comprising:
acquiring voice information;
carrying out voice recognition on the voice information to obtain a recognition result;
entering an awakening state when an entry matched with a preset awakening word exists in the recognition result;
extracting preset command words in the recognition result in the awakening state;
and generating a control instruction according to the preset command word, and controlling the intelligent home system according to the control instruction.
2. The method according to claim 1, wherein the entering an awakening state when an entry matched with a preset awakening word exists in the recognition result comprises:
segmenting the recognition result into a plurality of entries;
and when an entry matched with the preset awakening word exists in the plurality of entries, entering the awakening state.
3. The method of claim 2, wherein the segmenting the recognition result into a plurality of entries comprises:
acquiring an entry length of the preset awakening word;
and segmenting the recognition result according to the entry length to obtain the plurality of entries.
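One possible reading of claims 2 and 3, for illustration only, is a fixed-length sliding window whose width equals the entry length of the preset awakening word; the character-level segmentation below is an assumption of this sketch, not a limitation of the claims:

    def split_into_entries(recognition_result: str, wake_word: str):
        # Slide a window of the awakening word's entry length over the
        # recognition result and return every candidate entry.
        n = len(wake_word)
        if n == 0 or len(recognition_result) < n:
            return []
        return [recognition_result[i:i + n]
                for i in range(len(recognition_result) - n + 1)]

    def wake_word_present(recognition_result: str, wake_word: str) -> bool:
        # Claim 2: enter the awakening state when any entry matches the wake word.
        return wake_word in split_into_entries(recognition_result, wake_word)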
4. The method according to claim 1, wherein the entering an awakening state when an entry matched with a preset awakening word exists in the recognition result comprises:
when the entry matched with the preset awakening word exists in the recognition result, acquiring a current running state;
and when the current running state is a standby state, switching the running state to the awakening state.
5. The method of claim 4, further comprising:
and when no recognition result matched with a preset command word is received within a preset time period, switching the running state to the standby state.
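Claims 4 and 5 together describe a two-state machine: the running state switches from standby to awakening on a wake-word match and falls back to standby when no matching command word arrives within the preset time period. A minimal sketch, assuming a 10-second timeout and a monotonic clock (both assumptions of this illustration):

    import time

    class WakeStateMachine:
        STANDBY, AWAKENING = "standby", "awakening"

        def __init__(self, timeout_s: float = 10.0):  # preset time period (assumed value)
            self.state = self.STANDBY
            self.timeout_s = timeout_s
            self._awake_since = None

        def on_wake_word(self):
            # Claim 4: check the current running state before switching.
            if self.state == self.STANDBY:
                self.state = self.AWAKENING
                self._awake_since = time.monotonic()

        def on_command_word(self):
            # A matching command word keeps the device awake.
            if self.state == self.AWAKENING:
                self._awake_since = time.monotonic()

        def tick(self):
            # Claim 5: no matching command word within the preset period,
            # so switch the running state back to standby.
            if (self.state == self.AWAKENING
                    and time.monotonic() - self._awake_since > self.timeout_s):
                self.state = self.STANDBY
                self._awake_since = None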
6. The method according to claim 1, wherein the extracting preset command words in the recognition result in the awakening state comprises:
when it is detected that a target entry matched with a command word in a preset command word set exists in the recognition result, determining the target entry as a preset command word;
and extracting the preset command word.
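For illustration, the matching in claim 6 can be sketched as a lookup of each entry of the recognition result against a preset command word set; the example command words below are hypothetical:

    PRESET_COMMAND_WORDS = {"turn on the light", "turn off the light", "open the curtain"}

    def extract_preset_command_words(entries, command_set=PRESET_COMMAND_WORDS):
        # Every target entry that matches a word in the preset command word
        # set is determined to be a preset command word and extracted.
        return [entry for entry in entries if entry in command_set]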
7. The method according to claim 1, wherein the extracting preset command words in the recognition result in the awakening state comprises:
acquiring the position information of the preset awakening word in the voice information in the awakening state;
dividing the voice information into a first voice segment and a second voice segment according to the position information, wherein the first voice segment is a voice segment corresponding to the preset awakening word;
and extracting preset command words from the second voice segment.
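Claim 7 restricts command-word extraction to the portion of the input that follows the preset awakening word. The sketch below operates on the text-level recognition result and treats the position information as a character offset; in practice it could equally be a time offset into the audio (an assumption of this illustration):

    def split_by_wake_word(recognition_result: str, wake_word: str):
        # Locate the awakening word and divide the input into a first segment
        # (the awakening word itself) and a second segment (everything after it).
        pos = recognition_result.find(wake_word)
        if pos < 0:
            return None, None
        first_segment = recognition_result[pos:pos + len(wake_word)]
        second_segment = recognition_result[pos + len(wake_word):]
        return first_segment, second_segment

    def command_from_second_segment(recognition_result, wake_word, command_set):
        # Only the second segment is searched for a preset command word.
        _, second = split_by_wake_word(recognition_result, wake_word)
        if not second:
            return None
        return next((cmd for cmd in command_set if cmd in second), None)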
8. A speech processing apparatus, characterized in that the apparatus comprises:
an acquisition unit configured to acquire voice information;
the recognition unit is used for carrying out voice recognition on the voice information to obtain a recognition result;
the awakening unit is used for entering an awakening state when an entry matched with a preset awakening word exists in the recognition result;
the extraction unit is used for extracting preset command words from the recognition result in the awakening state;
and the control unit is used for generating a control instruction according to the preset command word and controlling the intelligent home system according to the control instruction.
9. A computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the speech processing method according to any one of claims 1 to 7.
10. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the speech processing method according to any one of claims 1 to 7 when executing the computer program.
CN202110734114.9A 2021-06-30 2021-06-30 Voice processing method and device, computer readable storage medium and computer equipment Pending CN113393838A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110734114.9A CN113393838A (en) 2021-06-30 2021-06-30 Voice processing method and device, computer readable storage medium and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110734114.9A CN113393838A (en) 2021-06-30 2021-06-30 Voice processing method and device, computer readable storage medium and computer equipment

Publications (1)

Publication Number Publication Date
CN113393838A true CN113393838A (en) 2021-09-14

Family

ID=77624779

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110734114.9A Pending CN113393838A (en) 2021-06-30 2021-06-30 Voice processing method and device, computer readable storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN113393838A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104538030A (en) * 2014-12-11 2015-04-22 科大讯飞股份有限公司 Control system and method for controlling household appliances through voice
CN106448664A (en) * 2016-10-28 2017-02-22 魏朝正 System and method for controlling intelligent home equipment by voice
US20200152177A1 (en) * 2017-07-19 2020-05-14 Tencent Technology (Shenzhen) Company Limited Speech recognition method and apparatus, and storage medium
CN110797015A (en) * 2018-12-17 2020-02-14 北京嘀嘀无限科技发展有限公司 Voice wake-up method and device, electronic equipment and storage medium
CN109493849A (en) * 2018-12-29 2019-03-19 联想(北京)有限公司 Voice awakening method, device and electronic equipment
CN110288989A (en) * 2019-06-03 2019-09-27 安徽兴博远实信息科技有限公司 Voice interactive method and system
CN110334348A (en) * 2019-06-28 2019-10-15 珍岛信息技术(上海)股份有限公司 A kind of text method of calibration based in plain text
CN112487132A (en) * 2019-09-12 2021-03-12 北京国双科技有限公司 Keyword determination method and related equipment
CN112487181A (en) * 2019-09-12 2021-03-12 北京国双科技有限公司 Keyword determination method and related equipment
CN111128201A (en) * 2019-12-31 2020-05-08 百度在线网络技术(北京)有限公司 Interaction method, device, system, electronic equipment and storage medium
CN111599371A (en) * 2020-05-19 2020-08-28 苏州奇梦者网络科技有限公司 Voice adding method, system, device and storage medium
CN112382281A (en) * 2020-11-05 2021-02-19 北京百度网讯科技有限公司 Voice recognition method and device, electronic equipment and readable storage medium
CN112686041A (en) * 2021-01-06 2021-04-20 北京猿力未来科技有限公司 Pinyin marking method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116069818A (en) * 2023-01-05 2023-05-05 广州市华势信息科技有限公司 Application processing method and system based on zero code development
CN116069818B (en) * 2023-01-05 2023-09-12 广州市华势信息科技有限公司 Application processing method and system based on zero code development
CN116582382A (en) * 2023-07-11 2023-08-11 北京探境科技有限公司 Intelligent device control method and device, storage medium and electronic device
CN116582382B (en) * 2023-07-11 2023-09-29 北京探境科技有限公司 Intelligent device control method and device, storage medium and electronic device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210914)