CN115985309A - Voice recognition method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115985309A
CN115985309A
Authority
CN
China
Prior art keywords
voice recognition, voice, intention, weight value, current weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211530677.7A
Other languages
Chinese (zh)
Inventor
周力为
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pateo Connect and Technology Shanghai Corp
Original Assignee
Pateo Connect and Technology Shanghai Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pateo Connect and Technology Shanghai Corp filed Critical Pateo Connect and Technology Shanghai Corp
Priority to CN202211530677.7A
Publication of CN115985309A
Pending legal-status Critical Current

Landscapes

  • Navigation (AREA)

Abstract

An embodiment of the invention provides a voice recognition method, a voice recognition device, an electronic device, and a storage medium. The method includes: in response to a voice instruction directed at a voice recognition terminal, inputting the voice instruction into each of a plurality of voice engines to obtain a voice recognition intention output by each voice engine; acquiring historical use information and a current weight value for each voice recognition intention; screening a target voice recognition intention from the voice recognition intentions output by the voice engines according to at least one of the historical use information and the current weight value; and executing an application function corresponding to the target voice recognition intention.

Description

Voice recognition method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of semantic recognition technologies, and in particular, to a speech recognition method, a speech recognition apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development and maturation of voice recognition technology, its application scenarios have grown increasingly rich: smart speakers, voice robots, in-vehicle voice assistants, and the like are now woven into many aspects of daily life and provide real convenience. However, despite this rapid progress, inaccurate recognition of user intention remains a problem.
Disclosure of Invention
The embodiment of the invention provides a voice recognition method, a voice recognition device, electronic equipment and a computer-readable storage medium, which are used for solving or partially solving the problem that the user intention recognition is inaccurate in voice recognition.
The embodiment of the invention discloses a voice recognition method, which is applied to a voice recognition terminal, wherein the voice recognition terminal comprises a plurality of voice engines, and the method comprises the following steps:
responding to a voice instruction aiming at the voice recognition terminal, and respectively inputting the voice instruction into each voice engine to obtain a voice recognition intention output by each voice engine;
acquiring historical use information and a current weight value of each voice recognition intention;
screening out target voice recognition intentions from the voice recognition intentions output by the voice engines according to at least one of the historical use information and the current weight value;
and executing an application function corresponding to the target voice recognition intention.
Optionally, the historical usage information includes historical usage times, and the screening of the target voice recognition intention from the voice recognition intentions output by the respective voice engines according to at least one of the historical usage information and the current weight value includes:
taking the voice recognition intention with the highest current weight value as a target voice recognition intention corresponding to the voice instruction;
and when the number of the voice recognition intentions with the highest current weight value is greater than or equal to 2, taking the voice recognition intention with the highest current weight value and the largest historical use number as the target voice recognition intention corresponding to the voice instruction.
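The weight-first selection rule above can be sketched as follows. This is a minimal illustration, assuming intentions arrive as plain dicts with hypothetical `name`, `weight`, and `uses` fields; the patent does not fix a data schema:

```python
def select_target_intent(intents):
    """Pick the intent with the highest current weight; break ties
    by the largest historical usage count.

    `intents`: list of dicts with 'name', 'weight', 'uses'
    (hypothetical field names used for illustration only).
    """
    top_weight = max(i["weight"] for i in intents)
    candidates = [i for i in intents if i["weight"] == top_weight]
    if len(candidates) == 1:
        return candidates[0]
    # Two or more intents share the highest weight: fall back to usage count.
    return max(candidates, key=lambda i: i["uses"])
```

For example, if navigation and music share the top weight but navigation has been used more often, navigation is selected.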
Optionally, the historical usage information includes historical usage times, and the screening of the target voice recognition intention from the voice recognition intentions output by the respective voice engines according to at least one of the historical usage information and the current weight value includes:
taking the voice recognition intention with the largest number of historical uses as the target voice recognition intention corresponding to the voice instruction;
and when the number of voice recognition intentions with the largest number of historical uses is greater than or equal to 2, taking the voice recognition intention with the largest number of historical uses and the highest current weight value as the target voice recognition intention corresponding to the voice instruction.
Optionally, the method further comprises:
in response to a use instruction for any one of the voice recognition intents, determining a first voice recognition intention to be used, and updating a current weight value of the first voice recognition intention by adopting a preset weight increment value to generate a target weight value of the first voice recognition intention.
Optionally, the method further comprises:
in response to any of the speech recognition intents not being used, determining a second speech recognition intention that is not used, and obtaining a cumulative duration that the second speech recognition intention is not used;
and under the condition that the accumulated time length is greater than or equal to a preset time length, updating the current weight value of the second voice recognition intention according to the accumulated time length, the preset time length and a preset decrement value to generate a target weight value of the second voice recognition intention.
Optionally, the updating the current weight value of the second voice recognition intention according to the accumulated time length, the preset time length and a preset decrement value to generate a target weight value of the second voice recognition intention includes:
calculating a multiple for the preset decrement value by adopting the accumulated time length and the preset time length;
and calculating a decrement value for the current weight value by using the multiple and the preset decrement value, updating the current weight value based on the decrement value, and generating a target weight value of the second voice recognition intention.
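The decay computation can be illustrated numerically. This is a sketch assuming whole multiples of the preset duration T, consistent with the w − nΔ example given later in the description; the function and parameter names are hypothetical:

```python
def decayed_weight(current_weight, accumulated, preset_duration, preset_decrement):
    """Decay the weight of an intent that has gone unused.

    The multiple n is how many whole preset durations T fit into the
    accumulated unused time; the weight is reduced by n times the
    preset decrement value (w - n*delta).
    """
    if accumulated < preset_duration:
        return current_weight  # threshold not reached; no decay
    n = accumulated // preset_duration
    return current_weight - n * preset_decrement
```

For instance, with T = 7 days, Δ = 1, and 21 unused days, a weight of 10 decays to 10 − 3·1 = 7.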
Optionally, the voice recognition terminal is a vehicle-mounted terminal, and the voice recognition intention is used to provide a vehicle-mounted function of the vehicle.
The embodiment of the invention also discloses a voice recognition device, which is applied to a voice recognition terminal, wherein the voice recognition terminal comprises a plurality of voice engines, and the device comprises:
the intention output module is used for responding to a voice instruction aiming at the voice recognition terminal, inputting the voice instruction into each voice engine respectively and obtaining the voice recognition intention output by each voice engine;
the intention attribute acquisition module is used for acquiring historical use information and a current weight value of each voice recognition intention;
the intention screening module is used for screening target voice recognition intents from the voice recognition intents output by the voice engines according to at least one of the historical use information and the current weight value;
and the function execution module is used for executing the application function corresponding to the target voice recognition intention.
Optionally, the historical usage information includes historical usage times, and the intention filtering module is specifically configured to:
taking the voice recognition intention with the highest current weight value as a target voice recognition intention corresponding to the voice instruction;
and when the number of the voice recognition intentions with the highest current weight value is greater than or equal to 2, taking the voice recognition intention with the highest current weight value and the largest historical use times as the target voice recognition intention corresponding to the voice instruction.
Optionally, the historical usage information includes historical usage times, and the intention filtering module is specifically configured to:
taking the voice recognition intention with the largest historical use times as a target recognition intention corresponding to the voice instruction;
and when the number of the voice recognition intentions with the largest historical use times is greater than or equal to 2, taking the voice recognition intention with the highest historical use times and the highest current weight value as the target voice recognition intention corresponding to the voice instruction.
Optionally, the method further comprises:
and the weight value adjusting module is used for responding to a use instruction aiming at any voice recognition intention, determining a first voice recognition intention to be used, updating the current weight value of the first voice recognition intention by adopting a preset weight increment value, and generating a target weight value of the first voice recognition intention.
Optionally, the method further comprises:
the accumulated time length acquisition module is used for responding to that any voice recognition intention is not used, determining a second voice recognition intention which is not used, and acquiring the accumulated time length for which the second voice recognition intention is not used;
and the decrement adjusting module is used for updating the current weight value of the second voice recognition intention according to the accumulated time length, the preset time length and a preset decrement value under the condition that the accumulated time length is greater than or equal to a preset time length, so as to generate a target weight value of the second voice recognition intention.
Optionally, the decrement adjusting module is specifically configured to:
calculating a multiple for the preset decrement value by using the accumulated time length and the preset time length;
and calculating a decrement value for the current weight value by using the multiple and the preset decrement value, updating the current weight value based on the decrement value, and generating a target weight value of the second voice recognition intention.
Optionally, the voice recognition terminal is a vehicle-mounted terminal, and the voice recognition intention is used to provide a vehicle-mounted function of the vehicle.
The embodiment of the invention also discloses electronic equipment which comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory finish mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the method according to the embodiment of the present invention when executing the program stored in the memory.
Also disclosed is a computer-readable storage medium having instructions stored thereon which, when executed by one or more processors, cause the processors to perform the method described in the embodiments of the present invention.
The embodiment of the invention has the following advantages:
In the embodiment of the invention, for a voice recognition terminal comprising a plurality of voice engines, the terminal can respond to a voice instruction by inputting that instruction into each voice engine and obtaining the voice recognition intention output by each engine. It can then obtain the historical use information and current weight value of each voice recognition intention, screen out a target voice recognition intention from those output by the engines according to at least one of the historical use information and the current weight value, and execute the application function corresponding to the target intention. On the one hand, performing voice recognition with a plurality of voice engines effectively ensures recognition efficiency. On the other hand, screening the engines' outputs by historical use information and current weight values yields the target voice recognition intention that best matches the user's voice instruction, whose corresponding application function is then executed. This not only extends the functionality of the voice engines but also improves the accuracy of user intention recognition and the user experience.
Drawings
FIG. 1 is a flow chart of the steps of a speech recognition method provided in an embodiment of the present invention;
FIG. 2 is a system block diagram of vehicle-mounted speech processing provided in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a system architecture provided in an embodiment of the invention;
FIG. 4 is a flow diagram of intent processing provided in an embodiment of the present invention;
FIG. 5 is a flow diagram of intent processing provided in an embodiment of the present invention;
FIG. 6 is a flow diagram of speech recognition provided in an embodiment of the present invention;
FIG. 7 is a schematic illustration of a user scenario provided in an embodiment of the present invention;
fig. 8 is a block diagram of a speech recognition apparatus provided in an embodiment of the present invention;
fig. 9 is a block diagram of an electronic device provided in an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As an example, with the development of voice recognition technology and in-vehicle terminals, in-vehicle voice has been widely adopted: a driver can input a voice command to make the in-vehicle terminal perform a corresponding function. An in-vehicle terminal may integrate several voice recognition systems at once, including a first voice assistant, a second voice assistant, and so on. When the terminal system is controlled by voice, some voice skills are supported only by the first voice assistant and others only by the second. In this case, because different voice assistants coexist and differ in their areas of strength, the voice instructions a user issues in the vehicle are prone to being mis-recognized, leading to errors or inaccuracies in user intention recognition.
In view of the above, one of the core points of the present invention is as follows. For a voice recognition terminal including a plurality of voice engines, in response to a voice instruction directed at the terminal, the instruction is input into each voice engine and the voice recognition intention output by each engine is obtained. The historical use information and current weight value of each intention are then obtained, a target voice recognition intention is selected from the engines' outputs according to at least one of the historical use information and the current weight value, and the application function corresponding to the target intention is executed. On the one hand, performing voice recognition with multiple engines effectively ensures recognition efficiency; on the other hand, filtering the engines' outputs by historical use information and current weight values yields the target intention that best matches the user's instruction. This not only extends the functionality of the voice engines but also improves the accuracy of user intention recognition and the user experience.
Referring to fig. 1, a flowchart illustrating steps of a speech recognition method provided in an embodiment of the present invention is shown, and is applied to a speech recognition terminal, where the speech recognition terminal includes a plurality of speech engines, and specifically includes the following steps:
step 101, responding to a voice instruction aiming at the voice recognition terminal, respectively inputting the voice instruction into each voice engine, and obtaining a voice recognition intention output by each voice engine;
Optionally, the embodiment of the present invention may be applied to a voice recognition terminal on which several different voice engines are configured, each suited to recognizing different content. For example, assuming the voice recognition terminal is the vehicle-mounted terminal of a vehicle, a voice engine (1), a voice engine (2), a voice engine (3), and so on may be configured in it, where voice engine (1) may be adept at recognizing navigation instructions, voice engine (2) at recognizing instructions that control the vehicle's hardware devices, and voice engine (3) at recognizing instructions that control the vehicle's entertainment functions.
This is not to say that a given engine cannot recognize content from other fields, only that it recognizes the voice recognition intentions of certain fields more quickly and accurately. Accordingly, based on the several different voice engines provided in the in-vehicle terminal, when the in-vehicle user inputs a voice command, the command can be fed to every engine; each engine performs speech recognition, semantic analysis, intention generation, dialog management, and the like, and outputs a corresponding voice recognition intention.
The voice recognition intention may represent what the user currently wants the terminal to do in response to the voice instruction, for example: controlling the in-vehicle terminal to navigate (hereinafter, navigation), to play music (hereinafter, music), to play the radio (hereinafter, radio), to make a voice call (hereinafter, voice call), to sing karaoke (hereinafter, singing), to order tickets (hereinafter, ticket ordering), to shop (hereinafter, shopping), or to report the weather (hereinafter, weather forecast).
In an example, referring to fig. 2, a system structure diagram of vehicle-mounted voice processing provided in the embodiment of the present invention is shown. The top layer is the application layer: an application that needs to support voice control registers a corresponding intention with the intention processing module. For example, a navigation application registers the navigation intention; after an intention is parsed from speech, the intention processing module notifies the application that registered it, and the application executes the corresponding action. The second layer is the audio acquisition module, which records through the system recorder, performs signal processing such as noise reduction and echo cancellation on the recorded audio data, and sends the processed audio to the voice engine layer below. That layer contains a plurality of voice engines; after each engine acquires the voice signal, its internal modules perform speech recognition, semantic understanding, dialogue management, and intention generation, and finally produce an intention. The intention is then passed down to the intention processing module, which accepts or rejects the intentions generated by the different voice engines according to their weights and return times, selects the most appropriate intention, and returns it to the application, which executes the corresponding operation, such as outputting a navigation route.
As shown in fig. 3, the data flow of the foregoing process may be as follows: the recording module S1 acquires the voice signal and transmits it to each voice engine module S2; each voice engine recognizes a corresponding voice recognition intention and passes it to the intention processing module S3; the intention processing module accepts or rejects the intentions generated by the different engines according to their weights and return times, and returns the most suitable intention to the application's intention receiving module S4, which executes the corresponding application operation. Voice recognition is thus performed with multiple voice engines, effectively ensuring recognition efficiency.
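The S1–S4 data flow of fig. 3 can be sketched end to end. The module names here mirror the figure's labels, but the engines and handlers are stand-in callables for illustration, not real recognition engines:

```python
def recording_module(raw_audio):
    """S1: noise reduction and echo cancellation would happen here;
    this sketch simply passes the audio through."""
    return raw_audio

def run_pipeline(raw_audio, engines, arbiter, app_handlers):
    """Drive one voice command through the S1-S4 chain."""
    audio = recording_module(raw_audio)               # S1: capture + signal processing
    intents = [engine(audio) for engine in engines]   # S2: every engine sees the command
    target = arbiter(intents)                         # S3: weight/return-time arbitration
    return app_handlers[target["name"]](target)       # S4: registered app executes it
```

A toy run: two engines propose intents, the arbiter keeps the higher-weighted one, and the navigation handler fires.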
step 102, acquiring historical use information and a current weight value of each voice recognition intention;
After each voice engine recognizes the voice instruction input by the user and produces a corresponding voice recognition intention, the vehicle-mounted terminal can further obtain the historical use information and current weight value of each intention. The historical use information may indicate how frequently each voice recognition intention has been used over the vehicle's history of use; the current weight value may represent the usage priority of the intention at the current moment. By screening the intentions with their historical use information and current weight values, the target voice recognition intention that best matches the user's voice instruction is obtained, which ensures the accuracy of voice recognition.
Optionally, the weight values of the different voice recognition intentions may be adjusted dynamically according to how the user actually uses in-vehicle voice. Specifically, when the vehicle leaves the factory, the weight values of the intentions may all be the same (or may differ). For a vehicle-mounted terminal the usage scenarios are relatively fixed: common intentions involve navigation, music, radio, vehicle control, telephone, and the like; video playing, schedule viewing, and karaoke occur less often; and ticket booking, shopping, and chatting occur least of all. Initial weights may therefore be defined by scenario, for example: w(navigation) = w(music) = w(radio) = w(vehicle control) = w(telephone) > w(video) = w(schedule) = w(karaoke) > w(ticket booking) = w(shopping) = w(chatting). After the cloud has collected a certain amount of user data, the weights may be re-ordered by actual usage frequency, for example: w(navigation) > w(music) > w(vehicle control) > w(radio) > w(telephone) > w(video) > w(schedule) > w(karaoke) > w(ticket booking) > w(chatting) > w(shopping). Thus, as the user keeps using the vehicle's voice functions, dynamically adjusting the weight values of the relevant voice recognition intentions ensures that the vehicle-mounted terminal responds accurately and quickly.
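The tiered initial weighting can be written down as a configuration sketch. The concrete numbers are illustrative assumptions only; the description fixes an ordering of tiers, not values:

```python
# Illustrative initial weights: common in-vehicle intents highest,
# occasional ones in the middle, rare ones lowest. Values are
# placeholders chosen only to reproduce the stated ordering.
INITIAL_WEIGHTS = {
    "navigation": 3, "music": 3, "radio": 3, "vehicle_control": 3, "phone": 3,
    "video": 2, "schedule": 2, "karaoke": 2,
    "ticket_booking": 1, "shopping": 1, "chat": 1,
}
```

After the cloud collects enough usage data, these values would be re-estimated per user rather than kept at factory defaults.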
In a specific implementation, a voice recognition terminal (such as a vehicle-mounted terminal) can respond to a use instruction aiming at any voice recognition intention, determine a first voice recognition intention to be used, update a current weight value of the first voice recognition intention by adopting a preset weight increment value, and generate a target weight value of the first voice recognition intention; and determining a second unused voice recognition intention in response to the fact that any voice recognition intention is unused, obtaining the accumulated time length that the second voice recognition intention is unused, and updating the current weight value of the second voice recognition intention according to the accumulated time length, the preset time length and the preset decrement value under the condition that the accumulated time length is greater than or equal to the preset time length to generate a target weight value of the second voice recognition intention.
For a voice recognition intention with low usage frequency, the vehicle-mounted terminal can use the accumulated duration and the preset duration to calculate a multiple for the preset decrement value, then use that multiple and the preset decrement value to calculate the decrement applied to the current weight value, update the current weight value accordingly, and generate the target weight value of the second voice recognition intention. Optionally, the preset duration may be a duration threshold for judging that a voice recognition intention is unused: when the accumulated duration is greater than or equal to the preset duration, the intention may be judged unused and its weight value decremented.
In an example, each time the user operates a certain voice recognition intention, whether through voice control or manual control, the in-vehicle terminal may increment the current weight value of that intention. For instance, if the user operates navigation 5 times, the adjusted weight value of navigation may be w + 5a, where w is the weight value before adjustment and a is the increment contributed by a single operation. The adjustment may be periodic, such as one cumulative adjustment of the weight values every day, week, or month, or it may be real-time, i.e., the corresponding intention's weight is adjusted (e.g., to w + a) each time the user uses it. Correspondingly, for the decrement adjustment, if a certain intention has not been operated within a time T, a corresponding decrement value Δ is applied; for example, if the weather forecast has not been operated within n multiples of the time T, its weight value may be adjusted to w − nΔ. Thus, for voice recognition intentions frequently used by the user, the in-vehicle terminal increases the corresponding weight value, and for those seldom used, it decreases the weight value; by dynamically adjusting these weights, the in-vehicle terminal can respond accurately and quickly when the user uses in-vehicle voice.
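The periodic increment/decrement rule (w + 5a after five uses; w − nΔ after n unused periods) can be captured in one small function; `a` and `delta` stand for the constants named in the text, and the function name is an assumption:

```python
def adjust_weight(w, used_times=0, unused_periods=0, a=1, delta=1):
    """One periodic weight adjustment for a voice recognition intent.

    w              -- weight before adjustment
    used_times     -- operations of this intent in the period (+a each)
    unused_periods -- elapsed unused durations T (-delta each)
    """
    return w + used_times * a - unused_periods * delta
```

So five navigation operations turn w = 10 into 15, while three unused periods of the weather-forecast intent turn w = 10 into 7.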
step 103, screening out a target voice recognition intention from the voice recognition intentions output by the voice engines according to at least one of the historical use information and the current weight value;
In the embodiment of the invention, the voice recognition terminal can screen, from the voice recognition intentions output by the engines, the target intention that best matches the voice instruction according to at least one of each intention's historical use information and current weight value. Executing the application function corresponding to that target intention effectively extends the functionality of the voice engines, reduces the voice recognition error rate, and improves the user experience.
In an optional embodiment, the historical use information may be the historical use times of each voice recognition intention. After obtaining the current weight value corresponding to each voice recognition intention, the voice recognition terminal may take the voice recognition intention with the highest current weight value as the target voice recognition intention corresponding to the voice instruction. If the number of voice recognition intentions sharing the highest current weight value is greater than or equal to 2, the vehicle-mounted terminal may further combine the historical use times corresponding to each such voice recognition intention and take the one with the highest current weight value and the largest historical use times as the target voice recognition intention corresponding to the voice instruction. For example, referring to fig. 4, which is a schematic flow chart of intention processing provided in the embodiment of the present invention, after the different voice engines respectively output their corresponding voice recognition intentions, the vehicle-mounted terminal may first obtain the weight value corresponding to each voice recognition intention, compare the weight values, and screen out the voice recognition intention with the largest weight value. If only 1 voice recognition intention has the largest weight value, it is directly taken as the target voice recognition intention and returned; if the number of voice recognition intentions with the largest weight value is greater than or equal to 2, the vehicle-mounted terminal may obtain the historical use times corresponding to these voice recognition intentions and take the one with the larger historical use times as the target voice recognition intention; if the historical use times are also the same, the voice recognition intention corresponding to the awakened voice engine is returned. In this way, the target voice recognition intention that best matches the voice instruction input by the user is obtained by screening with the historical use information and the current weight value of each voice recognition intention, and the application function corresponding to the target voice recognition intention is then executed.
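The weight-first screening described above can be sketched as follows. This is a minimal Python illustration, not part of the disclosure; the class and field names (`RecognizedIntent`, `weight`, `usage_count`, `awakened`) are assumptions.

```python
from dataclasses import dataclass

@dataclass
class RecognizedIntent:
    engine: str             # name of the voice engine that produced the intent
    intent: str             # recognized intent label
    weight: float           # current weight value of the intent
    usage_count: int        # historical number of times the intent was used
    awakened: bool = False  # whether this engine was woken by the wake-up word

def select_by_weight_then_usage(intents):
    """Pick the intent with the highest current weight; break ties by
    historical use count, then by whether the engine was the awakened one."""
    top_weight = max(i.weight for i in intents)
    candidates = [i for i in intents if i.weight == top_weight]
    if len(candidates) == 1:
        return candidates[0]
    top_usage = max(i.usage_count for i in candidates)
    candidates = [i for i in candidates if i.usage_count == top_usage]
    if len(candidates) == 1:
        return candidates[0]
    # Same weight and same use count: prefer the awakened engine's intent.
    return next((i for i in candidates if i.awakened), candidates[0])
```

The tie-breaking order mirrors the flow of fig. 4: weight first, use count second, awakened engine last.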
In another alternative embodiment, the voice recognition terminal may take the voice recognition intention with the largest historical use times as the target voice recognition intention corresponding to the voice instruction. If the number of voice recognition intentions sharing the largest historical use times is greater than or equal to 2, the vehicle-mounted terminal may further combine the current weight value corresponding to each such voice recognition intention and take the one with the largest historical use times and the highest current weight value as the target voice recognition intention corresponding to the voice instruction. For example, referring to fig. 5, which is a schematic flow chart of intention processing provided in the embodiment of the present invention, after the different voice engines respectively output their corresponding voice recognition intentions, the vehicle-mounted terminal may first obtain the historical use times corresponding to each voice recognition intention, compare them, and screen out the voice recognition intention with the largest historical use times. If only 1 voice recognition intention has the largest historical use times, it is directly taken as the target voice recognition intention and returned; if the number of voice recognition intentions with the largest historical use times is greater than or equal to 2, the vehicle-mounted terminal may obtain the weight values corresponding to these voice recognition intentions and take the one with the higher weight value as the target voice recognition intention; if the weight values are also the same, the voice recognition intention corresponding to the awakened voice engine is returned. In this way, the target voice recognition intention that best matches the voice instruction input by the user is obtained by screening with the historical use information and the current weight value of each voice recognition intention, the application function corresponding to the target voice recognition intention is then executed, the functionality of the voice engines is effectively expanded, the voice recognition error rate is reduced, and the user experience is improved.
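The usage-count-first variant can likewise be sketched. Again this is a minimal illustration with assumed names, mirroring fig. 5: use count first, weight second, awakened engine last.

```python
from dataclasses import dataclass

@dataclass
class RecognizedIntent:
    engine: str
    intent: str
    weight: float
    usage_count: int
    awakened: bool = False  # whether this engine was woken by the wake-up word

def select_by_usage_then_weight(intents):
    """Pick the intent with the largest historical use count; break ties by
    current weight, then by whether the engine was the awakened one."""
    top_usage = max(i.usage_count for i in intents)
    candidates = [i for i in intents if i.usage_count == top_usage]
    if len(candidates) > 1:
        top_weight = max(i.weight for i in candidates)
        candidates = [i for i in candidates if i.weight == top_weight]
    if len(candidates) > 1:
        return next((i for i in candidates if i.awakened), candidates[0])
    return candidates[0]
```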
It should be noted that the embodiments of the present invention include but are not limited to the above examples. It can be understood that, guided by the idea of the embodiments of the present invention, a person skilled in the art may also configure the method according to actual requirements, and the present invention is not limited in this respect.
Step 104: executing an application function corresponding to the target voice recognition intention.
After the target voice recognition intention is screened out from the different voice recognition intentions, the voice recognition terminal can execute the application function corresponding to the target voice recognition intention, such as navigation, music, radio stations, voice calls, singing, ticket booking, shopping, voice conversations, weather forecasts, and the like. On the one hand, voice recognition is performed based on a plurality of voice engines, which effectively ensures the efficiency of voice recognition; on the other hand, among the voice recognition intentions output by the voice engines, the target voice recognition intention that best matches the voice instruction input by the user is obtained by screening with the historical use information and the current weight value of each voice recognition intention, and the corresponding application function is then executed, which effectively expands the functionality of the voice engines, reduces the voice recognition error rate, and improves the user experience.
In one example, referring to fig. 6, which illustrates a flow diagram of voice recognition provided in an embodiment of the present invention, when a user inputs a voice instruction, the vehicle-mounted terminal may acquire the corresponding voice data through a microphone and send the voice data to each voice engine (1, 2, ..., n, etc.). Each voice engine may perform intent analysis and output a corresponding voice recognition intention, after which the vehicle-mounted terminal may perform intention processing on the voice recognition intentions, screen out the target voice recognition intention, and then execute it. In addition, referring to fig. 7, which is a schematic diagram of a user scenario provided in the embodiment of the present invention, based on the above processing of voice recognition intentions, suppose the user issues the voice instruction "ancient poems": voice engine A does not support poem broadcasting and returns an encyclopedia explanation of what ancient poems are, while voice engine B returns the content of an ancient poem and recites it; according to the weights set for the different intentions, the result of voice engine B is selected for processing. The user issues the voice instruction "today": voice engine A returns the song "Today" and voice engine B returns the dialog intention "Today is Wednesday, April 2, 2022"; in this case the result of A is selected for processing. The user issues the voice instruction "navigate to Xinjiekou via Xuanwu Gate with the fewest red lights": voice engine A returns a search result within 1 s, voice engine B returns no result within 1 s, and the result of voice engine A is selected for processing.
The user sends out a voice instruction: i want to listen to the comment, speech engine a and speech engine B all return the intention to play the song within a specified time, selecting which to use based on the wake-up word or the user's historical frequency of use. The user sends out a voice instruction: if the temperature is too low, speech engine A recognizes that: the temperature is too low and speech engine B recognizes that: if the number of the results is too low, the chatting is returned due to recognition errors, and the result with specific intention is preferentially selected for processing.
In the embodiment of the invention, for a voice recognition terminal comprising a plurality of voice engines, in a related voice application scenario, the voice recognition terminal can respond to a voice instruction directed at the voice recognition terminal, input the voice instruction into each voice engine respectively, and obtain the voice recognition intention output by each voice engine. It can then obtain the historical use information and the current weight value of each voice recognition intention, screen out the target voice recognition intention from the voice recognition intentions output by the voice engines according to at least one of the historical use information and the current weight value, and execute the application function corresponding to the target voice recognition intention. On the one hand, voice recognition is performed based on a plurality of voice engines, which effectively ensures the efficiency of voice recognition; on the other hand, among the voice recognition intentions output by the voice engines, the target voice recognition intention that best matches the voice instruction input by the user is obtained by screening with the historical use information and the current weight value of each voice recognition intention, and the corresponding application function is then executed, which not only expands the functionality of the voice engines but also improves the accuracy of user intention recognition, thereby improving the user experience.
It should be noted that, for simplicity of description, the method embodiments are described as a series or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and that the acts involved are not necessarily required by the present invention.
Referring to fig. 8, a block diagram of a structure of a speech recognition apparatus provided in an embodiment of the present invention is shown, and is applied to a speech recognition terminal, where the speech recognition terminal includes a plurality of speech engines, and may specifically include the following modules:
an intention output module 801, configured to respond to a voice instruction for the voice recognition terminal, and input the voice instruction into each of the voice engines respectively, so as to obtain a voice recognition intention output by each of the voice engines;
an intention attribute obtaining module 802, configured to obtain historical usage information and a current weight value of each voice recognition intention;
an intention screening module 803, configured to screen out a target speech recognition intention from the speech recognition intents output by each of the speech engines according to at least one of the historical usage information and the current weight value;
a function executing module 804, configured to execute an application function corresponding to the target speech recognition intention.
In an optional embodiment, the historical usage information includes historical usage times, and the intention filtering module 803 is specifically configured to:
taking the voice recognition intention with the highest current weight value as a target voice recognition intention corresponding to the voice instruction;
and when the number of the voice recognition intentions with the highest current weight value is greater than or equal to 2, taking the voice recognition intention with the highest current weight value and the largest historical use number as the target voice recognition intention corresponding to the voice instruction.
In an optional embodiment, the historical usage information includes historical usage times, and the intention filtering module 803 is specifically configured to:
taking the voice recognition intention with the largest historical use times as the target voice recognition intention corresponding to the voice instruction;
and when the number of the voice recognition intentions with the largest historical use times is greater than or equal to 2, taking the voice recognition intention with the largest historical use times and the highest current weight value as the target voice recognition intention corresponding to the voice instruction.
In an optional embodiment, further comprising:
and the weight value adjusting module is used for responding to a use instruction aiming at any voice recognition intention, determining a first voice recognition intention to be used, updating the current weight value of the first voice recognition intention by adopting a preset weight increment value, and generating a target weight value of the first voice recognition intention.
In an alternative embodiment, further comprising:
the accumulated duration acquisition module is used for, in response to any voice recognition intention not being used, determining a second voice recognition intention which is not used, and acquiring the accumulated duration for which the second voice recognition intention has not been used;
and the decrement adjusting module is used for updating the current weight value of the second voice recognition intention according to the accumulated duration, the preset duration and a preset decrement value under the condition that the accumulated duration is greater than or equal to a preset duration, and generating the target weight value of the second voice recognition intention.
In an optional embodiment, the decrement adjustment module is specifically configured to:
calculating a multiple for the preset decrement value by using the accumulated time length and the preset time length;
and calculating a decrement value for the current weight value by using the multiple and the preset decrement value, updating the current weight value based on the decrement value, and generating the target weight value of the second voice recognition intention.
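One possible reading of this decrement rule — the weight drops by one preset decrement for each full preset duration the intention goes unused — can be sketched as follows. The concrete duration and decrement values, and the clamping at zero, are assumptions, since the text does not fix them.

```python
PRESET_DURATION_S = 7 * 24 * 3600   # assumed preset duration: one week
PRESET_DECREMENT = 0.05             # assumed preset decrement per elapsed period

def decayed_weight(current_weight, unused_seconds):
    """Return the target weight after decay: no change below the preset
    duration, otherwise subtract the decrement once per full period elapsed."""
    if unused_seconds < PRESET_DURATION_S:
        return current_weight  # accumulated duration below threshold: no decay
    multiple = unused_seconds // PRESET_DURATION_S  # full periods elapsed
    return max(0.0, current_weight - multiple * PRESET_DECREMENT)
```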
In an alternative embodiment, the voice recognition terminal is a vehicle-mounted terminal, and the voice recognition intention is used for providing an in-vehicle function for a vehicle.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
In addition, an embodiment of the present invention further provides an electronic device, including: a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the processes of the above voice recognition method embodiment and can achieve the same technical effects; to avoid repetition, details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the embodiment of the speech recognition method, and can achieve the same technical effects, and in order to avoid repetition, the computer program is not described herein again. The computer-readable storage medium may be a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, a processor 910, and a power supply 911. It will be understood by those skilled in the art that the electronic device configurations involved in the embodiments of the present invention are not intended to be limiting, and that an electronic device may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 901 may be used for receiving and sending signals during information transceiving or a call, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 910; in addition, the uplink data is transmitted to the base station. Generally, the radio frequency unit 901 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 901 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 902, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 903 may convert audio data received by the radio frequency unit 901 or the network module 902 or stored in the memory 909 into an audio signal and output as sound. Also, the audio output unit 903 may provide audio output related to a specific function performed by the electronic device 900 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 903 includes a speaker, a buzzer, a receiver, and the like.
The input unit 904 is used to receive audio or video signals. The input unit 904 may include a Graphics Processing Unit (GPU) 9041 and a microphone 9042, and the graphics processor 9041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 906. The image frames processed by the graphics processor 9041 may be stored in the memory 909 (or other storage medium) or transmitted via the radio frequency unit 901 or the network module 902. The microphone 9042 can receive sounds and process them into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 901 and output.
The electronic device 900 also includes at least one sensor 905, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, where the ambient light sensor may adjust the brightness of the display panel 9061 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 9061 and/or the backlight when the electronic device 900 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensor 905 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not further described herein.
The display unit 906 is used to display information input by the user or information provided to the user. The Display unit 906 may include a Display panel 9061, and the Display panel 9061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 907 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 907 includes a touch panel 9071 and other input devices 9072. The touch panel 9071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 9071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 9071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 910, and receives and executes commands from the processor 910. In addition, the touch panel 9071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 907 may include other input devices 9072 in addition to the touch panel 9071. In particular, the other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a track ball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 9071 may be overlaid on the display panel 9061, and when the touch panel 9071 detects a touch operation on or near it, the touch operation is transmitted to the processor 910 to determine the type of the touch event, and then the processor 910 provides a corresponding visual output on the display panel 9061 according to the type of the touch event. It is understood that in one embodiment, the touch panel 9071 and the display panel 9061 are implemented as two independent components to realize the input and output functions of the electronic device, but in some embodiments, the touch panel 9071 and the display panel 9061 may be integrated to realize the input and output functions of the electronic device, which is not limited herein.
The interface unit 908 is an interface for connecting an external device to the electronic apparatus 900. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 908 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic apparatus 900 or may be used to transmit data between the electronic apparatus 900 and external devices.
The memory 909 may be used to store software programs as well as various data. The memory 909 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 909 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 910 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 909 and calling data stored in the memory 909, thereby performing overall monitoring of the electronic device. Processor 910 may include one or more processing units; preferably, the processor 910 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 910.
The electronic device 900 may further include a power supply 911 (e.g., a battery) for supplying power to the various components, and preferably, the power supply 911 is logically connected to the processor 910 through a power management system, so as to manage charging, discharging, and power consumption management functions through the power management system.
In addition, the electronic device 900 includes some functional modules that are not shown, and are not described in detail here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present invention or portions thereof contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or a part thereof which substantially contributes to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk or an optical disk, and various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A speech recognition method applied to a speech recognition terminal, wherein the speech recognition terminal includes a plurality of speech engines, the method comprising:
responding to a voice instruction aiming at the voice recognition terminal, and respectively inputting the voice instruction into each voice engine to obtain a voice recognition intention output by each voice engine;
obtaining historical use information and a current weight value of each voice recognition intention;
screening out target voice recognition intentions from the voice recognition intentions output by the voice engines according to at least one of the historical use information and the current weight value;
and executing an application function corresponding to the target voice recognition intention.
2. The method of claim 1, wherein the historical usage information comprises historical usage times, and wherein the filtering out target voice recognition intents from the voice recognition intents output by the respective voice engines according to at least one of the historical usage information and the current weight value comprises:
taking the voice recognition intention with the highest current weight value as a target voice recognition intention corresponding to the voice instruction;
and when the number of the voice recognition intentions with the highest current weight value is greater than or equal to 2, taking the voice recognition intention with the highest current weight value and the largest historical use times as the target voice recognition intention corresponding to the voice instruction.
3. The method of claim 1, wherein the historical usage information comprises historical usage times, and wherein the filtering out target voice recognition intents from the voice recognition intents output by the respective voice engines according to at least one of the historical usage information and the current weight value comprises:
taking the voice recognition intention with the largest historical use times as the target voice recognition intention corresponding to the voice instruction;
and when the number of the voice recognition intentions with the largest historical use times is greater than or equal to 2, taking the voice recognition intention with the largest historical use times and the highest current weight value as the target voice recognition intention corresponding to the voice instruction.
4. The method of claim 1, further comprising:
in response to a use instruction for any one of the voice recognition intents, determining a first voice recognition intention to be used, and updating a current weight value of the first voice recognition intention by adopting a preset weight increment value to generate a target weight value of the first voice recognition intention.
5. The method of claim 1 or 4, further comprising:
in response to any one of the voice recognition intentions not being used, determining the second voice recognition intention that is not used, and acquiring the accumulated duration for which the second voice recognition intention has not been used; and
when the accumulated duration is greater than or equal to a preset duration, updating the current weight value of the second voice recognition intention according to the accumulated duration, the preset duration and a preset decrement value, to generate a target weight value of the second voice recognition intention.
6. The method of claim 5, wherein the updating of the current weight value of the second voice recognition intention according to the accumulated duration, the preset duration and the preset decrement value to generate the target weight value of the second voice recognition intention comprises:
calculating a multiple from the accumulated duration and the preset duration; and
calculating a decrement value for the current weight value from the multiple and the preset decrement value, updating the current weight value based on the decrement value, and generating the target weight value of the second voice recognition intention.
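Claims 5 and 6 together describe a stepped decay. A minimal sketch of one plausible reading: the multiple is the number of full preset-duration periods the intention has sat unused, and the decrement is that multiple times the preset decrement value. The concrete period and step sizes are assumptions, not from the patent:

```python
# Sketch of the claim-5/6 decay: once an intention has been unused for at
# least the preset duration, scale the preset decrement by the number of
# elapsed periods and subtract it from the current weight.
PRESET_DURATION = 7 * 24 * 3600   # preset duration, e.g. one week in seconds (assumed)
PRESET_DECREMENT = 0.05           # preset decrement value (assumed)

def decay_weight(current_weight, idle_seconds,
                 period=PRESET_DURATION, step=PRESET_DECREMENT):
    if idle_seconds < period:
        return current_weight          # not idle long enough: weight unchanged
    multiple = idle_seconds // period  # full periods elapsed without use
    decrement = multiple * step        # total decrement for the current weight
    return max(0.0, current_weight - decrement)  # clamp at zero

print(round(decay_weight(0.8, 15 * 24 * 3600), 2))  # two full weeks idle -> 0.7
```

The floor division makes the decay stepwise rather than continuous: the weight only drops when another whole preset duration has elapsed, matching the "multiple" language of claim 6.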
7. The method of claim 1, wherein the voice recognition terminal is a vehicle-mounted terminal, and wherein the voice recognition intention corresponds to a vehicle-mounted function of a vehicle.
8. A voice recognition apparatus, applied to a voice recognition terminal comprising a plurality of voice engines, the apparatus comprising:
an intention output module configured to, in response to a voice instruction for the voice recognition terminal, input the voice instruction into each voice engine respectively to obtain the voice recognition intention output by each voice engine;
an intention attribute acquisition module configured to acquire historical usage information and a current weight value of each voice recognition intention;
an intention screening module configured to screen out the target voice recognition intention from the voice recognition intentions output by the respective voice engines according to at least one of the historical usage information and the current weight value; and
a function execution module configured to execute an application function corresponding to the target voice recognition intention.
9. An electronic device comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another via the communication bus;
the memory is configured to store a computer program; and
the processor is configured to implement the method of any one of claims 1-7 when executing the program stored in the memory.
10. A computer-readable storage medium having instructions stored thereon which, when executed by one or more processors, cause the one or more processors to perform the method of any one of claims 1-7.
CN202211530677.7A 2022-12-01 2022-12-01 Voice recognition method and device, electronic equipment and storage medium Pending CN115985309A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211530677.7A CN115985309A (en) 2022-12-01 2022-12-01 Voice recognition method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115985309A true CN115985309A (en) 2023-04-18

Family

ID=85963746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211530677.7A Pending CN115985309A (en) 2022-12-01 2022-12-01 Voice recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115985309A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116229957A (en) * 2023-05-08 2023-06-06 江铃汽车股份有限公司 Multi-voice information fusion method, system and equipment for automobile cabin system and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Country or region after: China
Address after: Room 3701, No. 866 East Changzhi Road, Hongkou District, Shanghai, 200080
Applicant after: Botai vehicle networking technology (Shanghai) Co.,Ltd.
Address before: 201800 208, building 4, No. 1411, Yecheng Road, Jiading Industrial Zone, Jiading District, Shanghai
Applicant before: Botai vehicle networking technology (Shanghai) Co.,Ltd.
Country or region before: China