CN111292744B - Speech instruction recognition method, system and computer readable storage medium - Google Patents


Info

Publication number
CN111292744B
Authority
CN
China
Prior art keywords
voice
message
instruction
intelligent terminal
recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010074215.3A
Other languages
Chinese (zh)
Other versions
CN111292744A (en)
Inventor
陈乙银
塞力克·斯兰穆
郑斌
胡泰东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Grey Shark Technology Co ltd
Original Assignee
Shenzhen Grey Shark Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Grey Shark Technology Co ltd filed Critical Shenzhen Grey Shark Technology Co ltd
Priority to CN202010074215.3A
Publication of CN111292744A
Application granted
Publication of CN111292744B
Legal status: Active

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/424 Processing input control signals of video game devices by mapping the input signals into game commands, involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 Training
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10 Features of games characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1081 Input via voice recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Telephonic Communication Services (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a voice instruction recognition method, a system and a computer readable storage medium. The voice instruction recognition method comprises the following steps: starting a voice instruction recognition script in an intelligent terminal to load a voice model in the voice instruction recognition script; when an application program in the intelligent terminal is started, enabling the audio acquisition device of the intelligent terminal; collecting instruction audio input to the audio acquisition device, and recognizing the instruction audio as an instruction message; comparing the instruction message with the recognition result messages in the voice model, and matching the instruction message with the voice model when the instruction message matches a recognition result message or their similarity is greater than a similarity threshold; and executing the execution operation preset in the matched voice model, and controlling the application program based on the execution operation. With this technical scheme, training of the voice model and configuration of preset instructions reduce both the working time of voice semantic recognition and the power consumption of voice operation.

Description

Speech instruction recognition method, system and computer readable storage medium
Technical Field
The present invention relates to the field of speech control, and in particular, to a method and system for recognizing speech instructions, and a computer readable storage medium.
Background
With the rapid popularization of intelligent terminals, tablet computers and notebook computers, people increasingly depend on these devices. To use such a device, a user typically inputs instructions through its touch screen, for example by single-clicking, double-clicking or long-pressing an operation button displayed on the touch screen, thereby issuing an operation instruction to the device.
To enrich the ways users can issue instructions to devices, many device manufacturers have developed voice-operated functions: the voice uttered by the user to the device is recognized and parsed into an operation on the device, and the corresponding operation is then executed.
In the prior art, voice input is converted into a voice command through voice recognition, and the voice command is then mapped to a game command within the game. In concrete implementations, this requires either packaging the voice acquisition and recognition module together with the voice control command set into an SDK that is deeply integrated into the game module, or modifying the input driver in the terminal device at high cost; both demand deep cooperative development between the game manufacturer and the device manufacturer. Such approaches have poor compatibility, must be adapted for every game instruction, and do not consider the power consumption of voice recognition. In addition, if the voice recognition process is slow or stalls, the user's instruction input is affected.
Therefore, a novel voice instruction recognition method is needed that can train a model for low-power scene control, reduce the voice recognition and command conversion workload during voice instruction recognition, and improve the battery endurance of the intelligent terminal.
Disclosure of Invention
To overcome the above technical defects, the invention aims to provide a voice instruction recognition method, a voice instruction recognition system and a computer readable storage medium, which reduce the working time of voice semantic recognition and the power consumption of voice operation through training of a voice model and configuration of preset instructions.
The invention discloses a voice instruction recognition method, which comprises the following steps:
starting a voice instruction recognition script in an intelligent terminal to load a voice model in the voice instruction recognition script;
when an application program in the intelligent terminal is started, enabling the audio acquisition device of the intelligent terminal;
collecting instruction audio input to the audio acquisition device, and recognizing the instruction audio as an instruction message;
comparing the instruction message with the recognition result messages in the voice model, and matching the instruction message with the voice model when the instruction message matches a recognition result message or their similarity is greater than a similarity threshold;
and executing the execution operation preset in the matched voice model, and controlling the application program based on the execution operation.
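The claimed steps can be sketched as follows (a minimal Python illustration, not part of the original disclosure; the function names, the dictionary representation of the voice model, and the threshold value are all assumptions, and `difflib` stands in for whatever similarity measure an implementation actually uses):

```python
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.8  # illustrative; the disclosure does not fix a value

def similarity(a: str, b: str) -> float:
    # Crude string similarity standing in for the unspecified measure.
    return SequenceMatcher(None, a, b).ratio()

def recognize_instruction(instruction_message: str, voice_model: dict):
    # Compare the instruction message with each recognition result message;
    # on a match (identity, or similarity above the threshold) return the
    # execution operation preset in the voice model, else None.
    for result_message, operation in voice_model.items():
        if (instruction_message == result_message
                or similarity(instruction_message, result_message) > SIMILARITY_THRESHOLD):
            return operation
    return None

# Voice model as trained: recognition result message -> preset execution operation.
model = {"attack": "tap_attack_button", "retreat": "tap_retreat_button"}
```

Under this sketch a slightly misrecognized message such as "atack" still matches "attack" through the similarity branch, which is the point of the threshold clause in the claim.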
Preferably, the step of starting the voice instruction recognition script in the intelligent terminal to load the voice model in the voice instruction recognition script comprises:
starting the voice instruction recognition script in the intelligent terminal, and judging whether a voice model exists in the voice instruction recognition script;
when no voice model exists, forming a prompt interface in the intelligent terminal and displaying information that the voice message receiving function is activated;
receiving at least one externally formed voice message;
recognizing each voice message to form at least one recognition result message, and displaying the recognition result messages on a mapping interface;
displaying the operation units of the target application programs on the mapping interface as well;
and associating each recognition result message with one or more operation units, forming a configuration relation, and storing it.
Preferably, the step of recognizing each voice message to form at least one recognition result message and displaying the recognition result messages on a mapping interface comprises:
parsing the voice message and converting it into a text message;
extracting keywords from the text message;
and saving the keywords as at least one recognition result message, and sending the recognition result message to a server side to generate a voice model at the server side.
Preferably, the step of extracting keywords from the text message comprises:
acquiring a target application program and the common expressions of that target application program;
comparing the text message with the common expressions, and extracting the content of the text message that matches a common expression or whose similarity to one is higher than a preset threshold;
and storing the content as a keyword, or modifying the content into the common expression with the closest similarity and storing that as the keyword.
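A minimal Python sketch of this keyword step (not from the disclosure; the threshold value and the use of `difflib.get_close_matches` as the similarity mechanism are assumptions):

```python
from difflib import get_close_matches

PRESET_THRESHOLD = 0.6  # illustrative preset threshold

def extract_keyword(text_message: str, common_terms: list):
    # Keep the text as the keyword when it equals a common expression;
    # otherwise replace it with the closest common expression above the
    # threshold, or report that no keyword was found.
    if text_message in common_terms:
        return text_message
    close = get_close_matches(text_message, common_terms, n=1, cutoff=PRESET_THRESHOLD)
    return close[0] if close else None
```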
Preferably, the step of associating each recognition result message with one or more operation units and storing after forming the configuration relation comprises:
receiving an external operation performed on the mapping interface, and moving the positions of the operation units on the mapping interface according to the external operation;
when any operation unit is moved to the position corresponding to a recognition result message, associating that recognition result message with the operation unit;
and storing the association relation between each operation unit and its recognition result message as the configuration relation of the voice model.
Preferably, after the step of associating each recognition result message with one or more operation units to form a configuration relation and storing it, the method further comprises:
naming the configuration relation, and downloading the voice model from the server;
modifying the name of the voice model to the name of the configuration relation, and storing the configuration relation into the voice model;
and saving the voice model to a database.
Preferably, the step of executing the execution operation preset in the matched voice model and controlling the application program based on the execution operation comprises:
constructing, according to the execution operation preset in the voice model, a touch event for the display unit of the intelligent terminal, and sending the touch event to the control unit of the intelligent terminal;
and, based on the injection scheme of the intelligent terminal's installed system, having the control unit generate the touch input and bring it into effect, thereby forming the execution operation that controls the application program.
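On an Android-style terminal, one possible injection scheme is the platform's `input` shell tool; the disclosure does not name a concrete mechanism, so the Python sketch below only builds the command, and the coordinate-based representation of the execution operation is an assumption:

```python
def build_touch_command(operation: dict) -> list:
    # Translate a preset execution operation (here assumed to carry the
    # screen coordinates of the mapped operation unit) into an injectable
    # tap command for the terminal's input subsystem.
    x, y = operation["x"], operation["y"]
    return ["adb", "shell", "input", "tap", str(x), str(y)]

# An implementation would then run the command, e.g. with subprocess.run(cmd).
```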
Preferably, the step of constructing a touch event for the display unit of the intelligent terminal according to the execution operation preset in the voice model and sending the touch event to the control unit of the intelligent terminal further comprises:
displaying a prompt sign on the display unit of the intelligent terminal as a notification signal that the touch event has been constructed successfully.
The invention also discloses a voice instruction recognition system, which comprises:
a script module, arranged in the intelligent terminal, which, when activated, starts the voice instruction recognition script arranged within it;
a loading module, for loading the voice model in the voice instruction recognition script of the script module;
an audio acquisition device, for collecting instruction audio when an application program in the intelligent terminal is started;
and a control unit, for recognizing the instruction audio as an instruction message, comparing the instruction message with the recognition result messages in the voice model, matching the instruction message with the voice model when the instruction message matches a recognition result message or their similarity is greater than a similarity threshold, and executing the execution operation preset in the matched voice model so as to control the application program based on the execution operation.
The invention also discloses a computer readable storage medium on which a computer program is stored; when executed by a processor, the program implements the following steps:
starting a voice instruction recognition script in an intelligent terminal to load a voice model in the voice instruction recognition script;
when an application program in the intelligent terminal is started, enabling the audio acquisition device of the intelligent terminal;
collecting instruction audio input to the audio acquisition device, and recognizing the instruction audio as an instruction message;
comparing the instruction message with the recognition result messages in the voice model, and matching the instruction message with the voice model when the instruction message matches a recognition result message or their similarity is greater than a similarity threshold;
and executing the execution operation preset in the matched voice model, and controlling the application program based on the execution operation.
Compared with the prior art, adopting the above technical scheme yields the following beneficial effects:
1. the trained model supports multiple applications in the same scene, or multiple applications in different scenes;
2. the mapping mode is more direct, making it convenient for a user to associate the trained voice model with operation instructions;
3. when the voice model is used, recognition power consumption and time are reduced, effectively accelerating the conversion of voice into operations;
4. after a voice instruction is recognized, the mapped control instruction does not need to be parsed; with the preconfigured voice model, the time spent on parsing is shifted to an earlier stage, speeding up the conversion of voice instructions into control instructions;
5. sharing of the voice model improves the reusability of the voice instruction recognition system.
Drawings
FIG. 1 is a flow chart of a voice command recognition method according to a preferred embodiment of the invention;
FIG. 2 is a flow chart illustrating a voice command recognition method according to a further preferred embodiment of the present invention;
FIG. 3 is a flow chart of a voice command recognition method according to a still further preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of a voice command recognition system according to a preferred embodiment of the present invention.
Detailed Description
Advantages of the invention are further illustrated in the following description, taken in conjunction with the accompanying drawings and detailed description.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various information, the information should not be limited by these terms; the terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when", "upon" or "in response to a determination", depending on the context.
In the description of the present invention, it should be understood that the terms "longitudinal," "transverse," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientation or positional relationships shown in the drawings, merely to facilitate describing the present invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present invention.
In the description of the present invention, unless otherwise specified and defined, the terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may, for example, be mechanical or electrical, and two elements may communicate with each other directly or indirectly through intermediaries; those skilled in the art will understand the specific meaning of these terms according to the circumstances.
In the following description, suffixes such as "module", "component", or "unit" for representing elements are used only for facilitating the description of the present invention, and are not of specific significance per se. Thus, "module" and "component" may be used in combination.
Referring to fig. 1, a flow chart of a voice command recognition method according to a preferred embodiment of the invention is shown, in which the voice command recognition method includes the following steps:
s100: in an intelligent terminal, a voice command recognition script is started to load a voice model in the voice command recognition script
The intelligent terminal serves as the device that receives the voice instruction and converts it into a control instruction for the intelligent terminal. A voice instruction recognition script is stored in the intelligent terminal, for example an API based on natural-language or non-natural-language processing, and a voice model is stored in the voice instruction recognition script (or, when none is stored, a new voice model is created). When loading the voice model, the user can select the correspondence of the voice model. For example, when the user needs to perform voice control over a certain target application program, such as "Honor of Kings", "PUBG Mobile" or "Tencent Video", the voice model dedicated to that application program can be selected; that is, a voice model can serve one application in one application scene of the intelligent terminal. When the same voice instruction, such as "return" or "enter the settings interface", is to be executed across different application programs, a universal voice model, or any voice model with unified voice instruction conversion, can be selected; that is, the voice model supports multiple applications in the same scene or multiple applications in different scenes.
S200: when an application program in the intelligent terminal is started, enabling the audio acquisition equipment of the intelligent terminal
An application program in the intelligent terminal is started by the user's operation. After the application program is started, an audio acquisition device of the intelligent terminal, such as a microphone or an earphone device connected to the intelligent terminal by wire or wirelessly, can be invoked based on the permission settings of the application program.
S300: collecting instruction audio input to the audio collecting equipment and identifying the instruction audio to instruction information
The user inputs instruction audio to the intelligent terminal, for example by speaking to the intelligent terminal or into an earphone device connected to it by wire or wirelessly, and the instruction audio is collected by the audio acquisition device. It can be understood that the audio acquisition device may listen silently the whole time the application program runs, collecting instruction audio whenever the user speaks; alternatively, a key or gesture for starting the collection function (such as double-tapping or triple-tapping the intelligent terminal) may be preset, and collection of instruction audio is enabled when that preset key or gesture is activated. After the instruction audio is collected, the audio acquisition device forwards it to a control unit (such as a CPU), and the control unit converts the audio-signal instruction audio into an instruction message in text or digital form; various conversion methods exist and are not repeated here.
S400: comparing the instruction message with the recognition result message in the voice model, and matching the instruction message with the voice model when the instruction message is matched with the recognition result message or the similarity is larger than a similarity threshold value
The control unit compares the converted instruction message with the recognition result messages in the voice model. It can be understood that, during training, the voice model has already converted the preset voice instructions into recognition result messages and established the mapping relation between each recognition result message and its corresponding operation. Therefore, after the comparison, it is judged whether the instruction message matches a recognition result message, for example: the two are identical; the instruction message contains the recognition result message; the instruction message is contained in the recognition result message; the two express the same meaning; or part of the instruction message overlaps part of the recognition result message. Even when the instruction message differs literally from the recognition result message, identical or approximate semantics may still give the two a certain similarity; if that similarity is greater than the similarity threshold, the instruction message is determined to match the recognition result message in the voice model.
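The matching conditions enumerated above can be collected into a single predicate (a Python sketch, not from the disclosure; semantic equivalence is folded into the similarity score, which a real implementation would compute separately):

```python
def messages_match(instruction: str, result: str,
                   sim: float, threshold: float) -> bool:
    # Identity, containment in either direction, or similarity above the
    # threshold, as enumerated in the description of step S400.
    return (instruction == result
            or result in instruction
            or instruction in result
            or sim > threshold)
```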
S500: executing preset execution operation in the matched voice model, and controlling the application program based on the execution operation
After the instruction message is successfully matched with the voice model, the application program is controlled based on the execution operation preset in the voice model and mapped to the recognition result message. That is, in the voice instruction recognition method of this embodiment, the mapping configuration relation between instruction audio and operation instruction is: instruction audio → instruction message → recognition result message in the voice model → execution operation → operation control of the application program. It can be understood that the execution operation may be a specific operation within the application program, such as fast-forwarding streaming media in "Tencent Video", or the "return to city" control of the operated hero in "Honor of Kings".
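Once the voice model is trained, the mapping chain above reduces to lookups; a Python sketch with stand-in callables (`asr` for the audio-to-message conversion, `execute` for the operation control, both illustrative assumptions):

```python
def run_chain(audio: bytes, asr, voice_model: dict, execute) -> bool:
    # instruction audio -> instruction message -> recognition result message
    # -> preset execution operation -> operation control of the application.
    instruction_message = asr(audio)
    operation = voice_model.get(instruction_message)  # matching simplified to exact lookup
    if operation is None:
        return False
    execute(operation)
    return True
```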
With this configuration, the voice model completes the parsing and mapping between voice message and execution operation in advance, during training; thus, during voice instruction recognition, the parsing of the execution operation corresponding to the voice message can be skipped entirely, simplifying the flow of the voice recognition operation.
Referring to fig. 2, in a preferred embodiment, in an intelligent terminal, the step S100 of starting the voice command recognition script to load the voice model in the voice command recognition script includes:
S110: starting a voice command recognition script in the intelligent terminal, and judging whether the voice command recognition script has a voice model or not
After the voice command recognition script is started, whether the voice model is available to be loaded in the script is judged.
S120: when the voice model is not provided, a prompt interface is formed in the intelligent terminal, and information for activating the voice message receiving function is displayed
Without a voice model, training is required to form one. The voice model training method can be carried out at the server side or in the intelligent terminal; during training, the progress is displayed through an interactive medium such as a display screen carried by the server side, a display screen connected to the server side, or the display screen of the intelligent terminal. When the voice model training method is started, a prompt interface is formed and displayed on the display screen, showing the information that the voice message receiving function is activated. The user who needs to form a voice model is thereby informed that voice messages can be sent to the server side, the intelligent terminal, or a voice-receiving device connected to them, such as a microphone, so as to start voice recognition and model establishment.
S130: receiving externally formed at least one voice message
The displayed prompt interface guides the user into the model training interface and prompts the user to send voice messages to the device. On the model training interface, the user may send at least one voice message to the device (e.g., the server side, the intelligent terminal, or a voice-receiving device connected to them) according to its guidance: for example, operation instruction messages in pure Chinese, such as "attack", "defend", "return to city", "settings" and "retreat"; operation instruction messages in a foreign language, such as "attack", "security", "back" and "done"; or numeric operation instruction messages, such as "666", "333" and "886".
S140: identifying each voice message to form at least one identification result message, and displaying the identification result message on a mapping interface
After the voice messages are received, voice recognition is performed on each one to form at least one recognition result message. It can be understood that the formed recognition result message may correspond to the received voice message in full (e.g., the user inputs the voice message "all attack" and the recognition result message is "all attack") or in part (e.g., the recognition result message is "attack"). The recognition result message is displayed on the display screen of the device, specifically on the mapping interface, so that the user can check the device's recognition of the voice message and confirm its accuracy. When the accuracy of the recognition result message is high enough (greater than a set threshold, or confirmed by the user), the next step can be executed; when it is insufficient (less than the set threshold, or not confirmed by the user), the user may be asked to re-input the voice message, or the voice message may be re-recognized, until the accuracy of the recognition result message is sufficiently high.
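The confirm-or-retry behaviour described here can be sketched as a bounded loop (Python; the callables and the retry limit are illustrative stand-ins for components the disclosure leaves unspecified):

```python
def confirm_recognition(receive_voice, recognize, accuracy_of,
                        threshold: float, max_rounds: int = 3):
    # Re-request the voice message until the recognition result message is
    # accurate enough (above the set threshold / confirmed), or give up.
    for _ in range(max_rounds):
        voice_message = receive_voice()
        result_message = recognize(voice_message)
        if accuracy_of(result_message) >= threshold:
            return result_message
    return None
```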
S150: operation unit for displaying target application program on mapping interface
In addition to the recognition result message, at least one target application program is displayed on the mapping interface. The target application programs are application programs that can use the voice model and execute corresponding operations according to it, such as a game application that executes operations according to the voice message, or a media application that performs streaming-media control according to the voice message. On the mapping interface, each target application program may be represented by a unique, easily identifiable operation unit, such as a name or an icon. That is, the mapping interface displays both the recognition result messages and the operation units corresponding to the target application programs, so that the user can readily see the usage scenarios to which each recognition result message may correspond.
S160: associating each identification result message with one or more operation units to form configuration relationship and storing
The user can input a control instruction on the mapping interface to associate each recognition result message with one or more operation units. This forms a mapping relation between the recognition result message and the operation units, which extends to a mapping relation between the recognition result message and the target application program, and further to a mapping relation between the voice message and a specific operation within the target application program; this mapping relation is stored as the configuration relation. For example, if the recognition result message is "attack", then according to the user's mapping operation the "attack" message is associated with game applications such as "Honor of Kings", "Call of Duty" or "Onmyoji", so that, within the formed voice model, the recognized "attack" corresponds to a specific operation of the target application program. Specifically, the recognition result message may be associated with a particular icon of the application program on the mapping interface according to the user's mapping operation, so that the original "attack" voice message is converted into a press of the attack icon in game applications such as "Honor of Kings", "Call of Duty" or "Onmyoji".
Through this configuration, the trained model supports multiple applications in the same scenario, or multiple applications across different scenarios, so that one voice message can be used in several application programs, saving the storage space occupied by the voice model; moreover, the user maps the voice message to the operation unit more directly.
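The configuration relation described above — one recognition result message associated with operation units in several target applications — can be sketched as a simple in-memory mapping. This is an illustrative sketch only; the structure, function names and application names are assumptions, not taken from the patent:

```python
# Hypothetical in-memory configuration relation: one recognition result
# message can map to operation units in several target applications.
config_relation = {}

def associate(result_message, app_name, operation_unit):
    """Associate a recognition result message with an operation unit."""
    config_relation.setdefault(result_message, []).append(
        {"app": app_name, "unit": operation_unit}
    )

def units_for(result_message):
    """Return every operation unit mapped to this recognition result."""
    return config_relation.get(result_message, [])

# One voice message ("attack") serves several game applications at once.
associate("attack", "Honor of Kings", "attack_button")
associate("attack", "Onmyoji", "attack_button")
```

Because the same key ("attack") fans out to several applications, a single stored recognition result serves every configured scenario, which is the space saving the paragraph above describes.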
In a further preferred embodiment, the step S140 of recognizing each voice message to form at least one recognition result message and displaying the recognition result message on a mapping interface includes:
s141: parsing voice message and converting voice message into text message
After the voice message is received, the voice message in the form of a voice signal is converted into a text message by a speech recognition module. The speech recognition module used in this embodiment may be a conventional speech-to-text component, such as an existing APK.
S142: extracting keywords in the text message;
Keywords are extracted from the converted text message. The extracted keywords may be the entire text message (for example, when the number of words in the text message is small), or the text message with noise removed, or the text message with words unrelated to the operation instruction removed, as described above.
S143: storing the keyword as at least one recognition result message, and sending the recognition result message to a server side to generate a voice model at the server side
The obtained keywords are stored as at least one recognition result message. When the device receiving the voice message is the intelligent terminal, the intelligent terminal may send the recognition result message to a server, where the recognition result message is stored and then converted into a common voice model.
More specifically, the step S142 of extracting keywords in the text message includes:
s142-1: acquiring common expressions of a target application program and the target application program;
Part or all of the application programs installed in the user's intelligent terminal are selected as target application programs according to the user's selection operation. After a target application program is selected, the common terms used in that application are acquired. Taking "Honor of Kings" as an example: once it is determined to be a target application, its common phrases can be fetched from the network as common terms, such as "push in one wave", "jungle", "return to base", "retreat", etc.; user-specific phrases can be customized according to the user's configuration, such as "follow me" or "no retreat"; and the interfaces of "Honor of Kings" can be recognized so that the text displayed in them is converted into common terms, e.g., text shown directly in the target application's interface such as "mall", "settings", "hero", etc. Taking "Tencent Video" as an example: once it is determined to be a target application, its common phrases can be fetched from the network as common terms, such as "exit", "recommend", "increase volume", etc.; user-specific phrases can be customized according to the user's configuration, such as "fast forward 15 seconds", "rewind 30 seconds", "next episode", etc.; and each interface of "Tencent Video" can be recognized so that the text displayed in it is converted into common terms, e.g., text shown directly in the interface such as "daily recommendation", "movies", "variety shows", "sports", etc.
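The three sources of common terms described above (network defaults, user-defined phrases, and text recognized from the application's interfaces) can be merged with a plain set union. A minimal sketch, with all term lists as illustrative placeholders:

```python
# Illustrative assembly of the common-term vocabulary for one target
# application; every term list here is a placeholder, not the patent's data.
def build_common_terms(network_defaults, user_custom, interface_texts):
    """Merge default phrases, user-defined phrases, and UI strings."""
    return set(network_defaults) | set(user_custom) | set(interface_texts)

terms = build_common_terms(
    network_defaults=["attack", "retreat", "return to base"],
    user_custom=["fast forward 15 seconds"],
    interface_texts=["mall", "settings", "hero"],
)
```

A set keeps the vocabulary free of duplicates when the same phrase arrives from more than one source.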
S142-2: comparing the text message with the common term, and extracting the content of the text message, which is matched with the common term or has the similarity higher than a preset threshold value;
After the common terms are obtained, the text message produced by recognition is compared with them; the comparison may fall into the following cases:
1) Text message and common term complete matching
Taking the common terms "attack" and "return to base" as an example: when the text message converted from the voice message is exactly "attack" or "return to base", this indicates, on the one hand, that the voice the user input to the terminal was "attack" or "return to base"; on the other hand, since the text message completely matches the common term, the text message is retained in full.
2) The text message contains an entire common term
Taking the common terms "attack" and "return to base" as an example: when the text message converted from the voice message is "I want to attack", "attack the opponent", "I want to return to base" or similar, this indicates, on the one hand, the voice the user input to the terminal; on the other hand, the text message contains an entire common term. In this case the text message is not retained in full; instead, the common term it contains is extracted as the recognition result message, to save the storage space of the voice model.
3) The text message is contained within a common term
Taking the common terms "fast forward 15 seconds", "rewind 30 seconds" and "play music to set the mood" as examples: when the text message converted from the voice message is "rewind", "fast forward" or "play music", this indicates, on the one hand, the voice the user input to the terminal; on the other hand, the text message is part of a common term. In this case the text message may either be retained in full (only "rewind", "fast forward" or "play music" is kept), or automatically mapped to a common term according to the degree of inclusion; for example, when the text message is "rewind", it is mapped to the closest common term, "rewind 30 seconds".
4) The text message partially overlaps a common term
Taking the common terms "fast forward 15 seconds", "rewind 30 seconds" and "play music to set the mood" as examples: when the text message converted from the voice message is "I want to fast forward", "I want to rewind" or "I want to play music", this indicates, on the one hand, the voice the user input to the terminal; on the other hand, part of the text message overlaps part of a common term. In this case only the overlapping portion is retained, i.e., "fast forward", "rewind" or "play music".
5) The similarity between the text message and a common term is higher than a threshold
Taking the common terms "fast forward 15 seconds", "rewind 30 seconds" and "play music to set the mood" as examples: when the text message converted from the voice message is "I want to go forward", "I want to look back" or "I want to play a song", this indicates, on the one hand, the voice the user input to the terminal; on the other hand, the text message barely overlaps, or does not overlap, any common term, yet the control instruction it expresses is in fact the same as that expressed by a common term. In this case, step S142-2 not only recognizes the text message but also recognizes its meaning and compares that meaning with the meaning of each common term. When the meanings match, the text message is considered to have a certain similarity to the common term, and when that similarity is greater than a preset threshold, either the whole text message or the whole common term may be taken as the keyword.
S142-3: storing content as keywords or modifying content to the commonly used term with closest similarity as keywords
In each of the above cases, the extracted content is finally saved as a keyword, or the content is modified using the closest common term as the standard. In cases 4) and 5) above, for example, it is preferable to take the common term as the standard, which simplifies the extraction and meaning-recognition of the text message: analysis results already prepared for the existing common terms can be reused, simplifying the formation of the voice model.
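The matching cases 1)–5) above can be approximated in a few lines. The sketch below uses exact match, containment in both directions, and a character-level similarity ratio as a stand-in for the meaning comparison of case 5); the patent does not specify a similarity measure, so `difflib` and the threshold value are assumptions:

```python
import difflib

def extract_keyword(text, common_terms, threshold=0.6):
    """Reduce a recognized text message to a keyword following the
    matching cases: exact match, containment in either direction,
    then a similarity fallback. Threshold is illustrative."""
    if text in common_terms:              # case 1: exact match, keep in full
        return text
    for term in common_terms:
        if term in text:                  # case 2: term inside the text
            return term                   # extract only the common term
    for term in common_terms:
        if text in term:                  # case 3: text inside a term
            return term                   # map to the closest common term
    # cases 4/5: partial overlap or same meaning, approximated here with a
    # character-level ratio instead of real meaning recognition
    best = max(common_terms,
               key=lambda t: difflib.SequenceMatcher(None, text, t).ratio())
    ratio = difflib.SequenceMatcher(None, text, best).ratio()
    return best if ratio >= threshold else None
```

Ordering the checks from strictest to loosest means a cheap exact match short-circuits the similarity scan.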
In a preferred embodiment, the step S150 of displaying the operation unit of the target application program on the mapping interface further includes:
s151: acquiring the type and key frame of a target application program;
The list of application programs installed in the intelligent terminal is acquired, and for the applications designated by the user (or all applications) that can serve as target application programs, the type of each application is determined, such as game, media, social, reading, news, etc. For each target application, at least one key frame during its activation and running is also acquired, e.g., the display frame of the launch interface, the display frame when entering the operation interface, the display frame of the most commonly used interface, etc.
S152: extracting part or all operation units operating on target application program in key frame
After the key frames are acquired, part or all of the operation units corresponding to operations on the target application program are extracted. For example, in a given key frame, the operation units may be an attack key, a defense key or a skill key that is always displayed in the foreground, or a direction key, an instruction key or a guide key that is displayed only after the user touches the display screen.
In another preferred embodiment, the step S160 of associating each identification result message with one or more operation units to form a configuration relationship and then storing the configuration relationship includes:
s161: receiving external operation executed on the mapping interface, and moving the position of the operation unit on the mapping interface according to the external operation;
The operation units are displayed on the mapping interface to inform the user which operations within the target application will be mapped into the voice model. After identifying these operation units, the user applies an external operation — such as a long press, click or double click — to an operation unit on the display screen. As the user's contact part (a finger, stylus, etc.) moves across the display screen, the operation unit moves with it, changing the operation unit's position within the mapping interface.
S162: when any operation unit moves to a position corresponding to a recognition result message, the recognition result message is associated with the operation unit;
The mapping interface also displays the recognition result messages, and a blank area may be arranged beside each recognition result message for establishing the mapping relation. For example, if one or more operation units are moved into the blank area and held there for a certain time, the operation unit is associated with that recognition result message. Thus, when any operation unit is moved by the user's operation to the position corresponding to a recognition result message, and the user's contact part then leaves the display screen, the final position of the operation unit is determined; when that final position corresponds to the recognition result message, the recognition result message is associated with the operation unit.
S163: and storing the association relation between each operation unit and the recognition result message as the configuration relation of the voice model.
After a recognition result message is associated with operation units, the association relation between each operation unit and the recognition result message is saved. If there is a further keyword, or a recognition result message corresponding to a further keyword, the operation units can be configured again in the same way.
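The drag-and-drop association of steps S161–S163 reduces to a hit test of the operation unit's final position against the blank area beside each recognition result message. A minimal sketch; the rectangle layout, coordinates and names are illustrative assumptions:

```python
# Rectangles are (x, y, width, height); all coordinates are illustrative.
def inside(point, rect):
    """Point-in-rectangle hit test."""
    x, y = point
    rx, ry, rw, rh = rect
    return rx <= x <= rx + rw and ry <= y <= ry + rh

def associate_by_drop(drop_point, drop_zones, associations):
    """If the unit was released inside a result message's blank area,
    record the association (the configuration relation of the model)."""
    for message, rect in drop_zones.items():
        if inside(drop_point, rect):
            associations.setdefault(message, []).append("operation_unit")
            return message
    return None  # released outside every blank area: no association

# Blank areas beside the "attack" and "retreat" recognition results.
zones = {"attack": (0, 0, 100, 40), "retreat": (0, 50, 100, 40)}
links = {}
```

Returning the matched message lets the interface highlight which recognition result the operation unit was just attached to.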
In a preferred embodiment, after the step S160 of associating each of the identification result messages with one or more operation units to form a configuration relationship and then storing the configuration relationship, the method further includes the following steps:
s170: naming the configuration relation and downloading the voice model from the server;
According to the user's operation, each stored configuration relation is named — for example after the target application program and the purpose of the voice model, such as "Honor of Kings", "Call of Duty" or "heal" — or several configurations are packaged and stored together, in which case the name is simply that of the target application program. The native voice model is then downloaded from the server.
S180: modifying the name of the voice model into the name of the configuration relation, and storing the configuration relation into the voice model;
After the native voice model is received, its name is modified to the name of the configuration relation, and the configuration relation is saved into the voice model. Finally, the voice model is saved to a database, and the configuration interface or mapping interface is closed.
Referring to fig. 3, in a preferred embodiment, the step S500 of executing the execution operation preset in the matched speech model and controlling the application program based on the execution operation includes:
S510: according to preset execution operation in the voice model, constructing a touch event aiming at a display unit of the intelligent terminal, and sending the touch event to a control unit of the intelligent terminal;
A recognition result message in the matched voice model is determined; according to the preset execution operation corresponding to that recognition result message, the coordinate position on the display screen of the intelligent terminal at which the execution operation is required is determined, and a touch event for the display unit is constructed based on that coordinate position. Preferably, the construction of the touch event can be shown to the user interactively — for example, a prompt symbol such as a ripple animation effect is displayed on the display unit of the intelligent terminal, or a prompt sound is played — serving as a notification that the touch event was constructed successfully and informing the user that the voice instruction has been recognized and mapped to a touch event.
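Step S510 can be sketched as a lookup from the matched recognition result message to stored screen coordinates, producing a touch event for the control unit. The coordinates, names and event structure here are illustrative assumptions, not the patent's actual data format:

```python
# Hypothetical preset execution operations: recognition result message
# mapped to the screen coordinates of its operation unit (example values).
EXEC_OPERATIONS = {"attack": (540, 1720), "retreat": (120, 1650)}

def build_touch_event(result_message):
    """Construct a tap-style touch event for the display unit, or None
    when the message has no preset execution operation."""
    if result_message not in EXEC_OPERATIONS:
        return None
    x, y = EXEC_OPERATIONS[result_message]
    return {"type": "tap", "x": x, "y": y}
```

A successful return value is the point at which the embodiment would also trigger the ripple animation or prompt sound as the construction notification.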
S520: based on the injection scheme of the system installed on the intelligent terminal, the control unit generates the touch control and puts it into effect, forming the execution operation that controls the application program
The touch event (the user's own touches, together with mapped touches from a handle connected to the intelligent terminal by wire or wirelessly) is merged into an updated touch event, so that the multi-touch experience of the intelligent terminal is preserved. Then, using the injection scheme of the system installed on the intelligent terminal — for example, an Android input-injection scheme — the control unit generates a touch control for the operation unit in the application program, finally forming the execution operation that controls the application program.
Referring to FIG. 4, a voice instruction recognition system is shown. It includes a script module disposed within an intelligent terminal which, when activated, starts a voice instruction recognition script disposed within the script module; a loading module, which loads the voice model in the voice instruction recognition script of the script module; an audio acquisition device, which acquires instruction audio when an application program in the intelligent terminal is started; and a control unit, which recognizes the instruction audio into an instruction message, compares the instruction message with the recognition result messages in the voice model, matches the instruction message with the voice model when the instruction message matches a recognition result message or the similarity is greater than a similarity threshold, and executes the execution operation preset in the matched voice model so as to control the application program based on that execution operation.
When the loading module has no voice model, the voice instruction recognition system further comprises a voice model training module, which includes: a prompt unit, which forms a prompt interface and displays information for activating the voice message receiving function; a receiving unit, which receives at least one externally formed voice message; a recognition unit, which recognizes each voice message to form at least one recognition result message and displays the recognition result message on a mapping interface; an interaction unit, which forms the mapping interface and displays the operation units of the target application program on it; and an association unit, which associates each recognition result message with one or more operation units to form a configuration relation and then stores it. In a preferred embodiment, the association unit comprises: a moving element, connected to the interaction unit, which receives an external operation executed on the mapping interface and moves the position of an operation element on the mapping interface according to the external operation; a display element, which highlights the moving operation element; an association element, which associates a recognition result message with an operation element when that operation element moves to the position corresponding to the recognition result message; and a storage element, which stores the association relation between each operation element and the recognition result message as the configuration relation of the voice model.
In one embodiment, a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of: starting a voice command recognition script in an intelligent terminal to load a voice model in the voice command recognition script; when an application program in the intelligent terminal is started, starting audio acquisition equipment of the intelligent terminal; collecting instruction audio input to the audio collecting equipment, and identifying the instruction audio to an instruction message; comparing the instruction message with the recognition result message in the voice model, and matching the instruction message with the voice model when the instruction message is matched with the recognition result message or the similarity is greater than a similarity threshold; and executing the preset execution operation in the matched voice model, and controlling the application program based on the execution operation.
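The comparison step performed by this storage medium — match the instruction message against the recognition result messages, falling back to a similarity threshold — can be sketched as follows. The model contents, the threshold value and the `difflib`-based similarity are illustrative assumptions, since the patent does not fix a similarity measure:

```python
import difflib

# Hypothetical loaded voice model: recognition result message mapped to a
# preset execution operation (names are illustrative).
VOICE_MODEL = {"attack": "tap_attack_icon", "retreat": "tap_retreat_icon"}

def match_instruction(instruction, model, threshold=0.8):
    """Return the preset execution operation for an instruction message:
    exact match first, then a similarity comparison against each
    recognition result message in the model."""
    if instruction in model:                     # direct match
        return model[instruction]
    best = max(model, key=lambda m: difflib.SequenceMatcher(
        None, instruction, m).ratio())
    if difflib.SequenceMatcher(None, instruction, best).ratio() > threshold:
        return model[best]                       # similarity above threshold
    return None                                  # no match: do nothing
```

The similarity fallback is what lets a slightly mis-recognized instruction still trigger the intended execution operation.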
The intelligent terminal may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, it is assumed that the terminal is an intelligent terminal. However, those skilled in the art will understand that, apart from elements used particularly for mobile purposes, the configuration according to the embodiments of the present invention can also be applied to fixed-type terminals.
It should be noted that the embodiments of the present invention are preferred and not limiting in any way. Any person skilled in the art may use the technical content disclosed above to change or modify it into an equivalent effective embodiment without departing from the technical scope of the present invention; any modification or equivalent change of the above embodiments made according to the technical substance of the present invention still falls within the scope of the present invention.

Claims (8)

1. A method of speech instruction recognition comprising the steps of:
starting a voice command recognition script in an intelligent terminal to load a voice model in the voice command recognition script;
when an application program in the intelligent terminal is started, starting audio acquisition equipment of the intelligent terminal;
collecting instruction audio input to the audio collection equipment, and recognizing the instruction audio into an instruction message;
comparing the instruction message with the recognition result message in the voice model, and matching the instruction message with the voice model when the instruction message is matched with the recognition result message or the similarity is greater than a similarity threshold;
executing a preset execution operation in the matched voice model, and controlling the application program based on the execution operation, wherein the step of starting the voice instruction recognition script in an intelligent terminal to load the voice model in the voice instruction recognition script comprises the following steps:
Starting a voice command recognition script in the intelligent terminal, and judging whether a voice model exists in the voice command recognition script or not; when the voice model is not provided, a prompt interface is formed in the intelligent terminal, and information for activating the voice message receiving function is displayed;
receiving at least one externally formed voice message;
identifying each voice message to form at least one identification result message, and displaying the identification result message on a mapping interface;
an operation unit of the target application program is also displayed on the mapping interface;
associating each identification result message with one or more operation units, forming a configuration relation, storing,
and
Associating each identification result message with one or more operation units, and storing after forming a configuration relation, wherein the step of storing comprises the following steps: receiving external operation executed on a mapping interface, and moving the position of the operation unit on the mapping interface according to the external operation;
when any operation unit moves to a position corresponding to a recognition result message, the recognition result message is associated with the operation unit;
and storing the association relation between each operation unit and the recognition result message as the configuration relation of the voice model.
2. The voice command recognition method of claim 1, wherein,
the step of identifying each voice message to form at least one identification result message and displaying the identification result message on a mapping interface comprises the following steps:
analyzing the voice message and converting the voice message into a text message;
extracting keywords in the text message;
and storing the keyword as at least one recognition result message, and sending the recognition result message to a server side to generate a voice model at the server side.
3. The voice command recognition method of claim 2, wherein,
the step of extracting the keywords in the text message comprises the following steps:
acquiring common expressions of a target application program and the target application program;
comparing the text message with the common term, and extracting the content of the text message, which is matched with the common term or has similarity higher than a preset threshold;
and storing the content as a keyword or modifying the content to a common term with closest similarity as the keyword.
4. The voice command recognition method of claim 1, wherein,
and after the step of storing after the configuration relation is formed, the method further comprises the following steps of:
Naming the configuration relation and downloading the voice model from a server;
modifying the name of the voice model as the name of the configuration relation, and storing the configuration relation into the voice model;
and saving the voice model to a database.
5. The voice command recognition method of claim 1, wherein,
executing the preset execution operation in the matched voice model, and controlling the application program based on the execution operation comprises the following steps:
according to preset execution operation in the voice model, constructing a touch event aiming at a display unit of the intelligent terminal, and sending the touch event to a control unit of the intelligent terminal;
based on the injection scheme of the installation system of the intelligent terminal, the control unit generates touch control and takes effect to form an execution operation control application program.
6. The voice command recognition method of claim 5, wherein,
the steps of constructing a touch event for a display unit of the intelligent terminal according to the preset execution operation in the voice model and sending the touch event to a control unit of the intelligent terminal further comprise:
and displaying a prompt sign on a display unit of the intelligent terminal to be used as a notification signal for successful construction of the touch event.
7. A voice command recognition system, comprising:
the script module is arranged in the intelligent terminal, and when the script module is activated, a voice instruction recognition script arranged in the script module is started;
the loading module is used for loading the voice model in the voice instruction recognition script in the script module;
the audio acquisition equipment is used for acquiring instruction audio when an application program in the intelligent terminal is started;
the control unit is used for recognizing the instruction audio into an instruction message, comparing the instruction message with the identification result message in the voice model, matching the instruction message with the voice model when the instruction message matches the identification result message or the similarity is greater than a similarity threshold, and executing the preset execution operation in the matched voice model so as to control the application program based on the execution operation, wherein
Starting a voice command recognition script in the intelligent terminal, and judging whether a voice model exists in the voice command recognition script or not; when the voice model is not provided, a prompt interface is formed in the intelligent terminal, and information for activating the voice message receiving function is displayed;
receiving at least one externally formed voice message;
Identifying each voice message to form at least one identification result message, and displaying the identification result message on a mapping interface;
an operation unit of the target application program is also displayed on the mapping interface;
associating each identification result message with one or more operation units to form a configuration relation and then storing the configuration relation, wherein the method comprises the following steps:
associating each identification result message with one or more operation units, and storing after forming a configuration relation, wherein the step of storing comprises the following steps: receiving external operation executed on a mapping interface, and moving the position of the operation unit on the mapping interface according to the external operation;
when any operation unit moves to a position corresponding to a recognition result message, the recognition result message is associated with the operation unit;
and storing the association relation between each operation unit and the recognition result message as the configuration relation of the voice model.
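The configuration steps above (dragging an operation unit onto the screen region of a recognition result message and storing the resulting associations as the voice model's configuration relation) can be sketched as follows. This is an illustrative sketch only; the class, method names, and region representation are assumptions, not taken from the patent.

```python
# Sketch of the drag-to-associate configuration step: each recognition
# result message occupies a rectangular region on the mapping interface;
# dropping an operation unit inside a region associates the unit with
# that message, and the full set of associations is stored as the voice
# model's configuration relation. All names here are hypothetical.

class VoiceModelConfigurator:
    def __init__(self, result_regions):
        # result_regions: {recognition_result_message: (x0, y0, x1, y1)}
        self.result_regions = result_regions
        self.associations = {}          # message -> list of operation units

    def drop_operation_unit(self, unit, x, y):
        """Called when the user releases an operation unit at (x, y)."""
        for message, (x0, y0, x1, y1) in self.result_regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                self.associations.setdefault(message, []).append(unit)
                return message          # associated with this message
        return None                     # dropped outside every region

    def save_configuration(self):
        """Return the associations as the stored configuration relation."""
        return {msg: list(units) for msg, units in self.associations.items()}
```

For example, dropping a hypothetical `"fire_button"` unit inside the region of the recognition result message `"attack"` would add the pair to the configuration relation.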
8. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, implements the steps of:
starting a voice instruction recognition script in an intelligent terminal to load a voice model in the voice instruction recognition script;
when an application program in the intelligent terminal is started, starting an audio acquisition device of the intelligent terminal;
collecting instruction audio input to the audio acquisition device, and recognizing the instruction audio into an instruction message;
comparing the instruction message with the recognition result message in the voice model, and matching the instruction message with the voice model when the instruction message matches the recognition result message or the similarity is greater than a similarity threshold;
executing the preset execution operation in the matched voice model, and controlling the application program based on the execution operation, wherein the step of starting the voice instruction recognition script in the intelligent terminal to load the voice model comprises:
starting the voice instruction recognition script in the intelligent terminal, and determining whether a voice model exists in the voice instruction recognition script; when no voice model exists, displaying a prompt interface on the intelligent terminal with information for activating the voice message receiving function;
receiving at least one externally formed voice message;
recognizing each voice message to form at least one recognition result message, and displaying the recognition result messages on a mapping interface;
operation units of the target application program are also displayed on the mapping interface;
associating each recognition result message with one or more operation units, and storing after forming a configuration relation, wherein this step comprises:
receiving an external operation performed on the mapping interface, and moving the position of the operation unit on the mapping interface according to the external operation;
when any operation unit moves to a position corresponding to a recognition result message, the recognition result message is associated with the operation unit;
and storing the association relation between each operation unit and the recognition result message as the configuration relation of the voice model.
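The runtime matching step recited above (comparing the recognized instruction message with each voice model's recognition result message, and matching on either an exact match or a similarity above a threshold) can be sketched as follows. The function name, data layout, and the use of `difflib.SequenceMatcher` as the similarity measure are assumptions for illustration; the patent does not specify a particular similarity metric.

```python
# Sketch of the claim's matching-and-execution step: the instruction
# message recognized from the collected audio is compared against each
# voice model's recognition result message; on an exact match, or when
# the similarity score exceeds the threshold, the model's preset
# execution operation is invoked to control the application program.
# SequenceMatcher is just one possible similarity measure.
from difflib import SequenceMatcher

def match_and_execute(instruction_msg, voice_models, threshold=0.8):
    """voice_models: list of dicts with 'result_message' and 'operation'."""
    for model in voice_models:
        result_msg = model["result_message"]
        similarity = SequenceMatcher(None, instruction_msg, result_msg).ratio()
        if instruction_msg == result_msg or similarity > threshold:
            model["operation"]()        # execute the preset operation
            return model
    return None                         # no voice model matched
```

A usage example: a model whose `result_message` is `"open map"` would fire both on the exact utterance and on a near-match such as `"open maps"`, while an unrelated instruction falls through to `None`.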
CN202010074215.3A 2020-01-22 2020-01-22 Speech instruction recognition method, system and computer readable storage medium Active CN111292744B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010074215.3A CN111292744B (en) 2020-01-22 2020-01-22 Speech instruction recognition method, system and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010074215.3A CN111292744B (en) 2020-01-22 2020-01-22 Speech instruction recognition method, system and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111292744A CN111292744A (en) 2020-06-16
CN111292744B true CN111292744B (en) 2023-04-28

Family

ID=71021309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010074215.3A Active CN111292744B (en) 2020-01-22 2020-01-22 Speech instruction recognition method, system and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111292744B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113934146B (en) * 2020-06-29 2024-07-23 阿里巴巴集团控股有限公司 Method and device for controlling Internet of things equipment and electronic equipment
CN112732379B (en) * 2020-12-30 2023-12-15 智道网联科技(北京)有限公司 Method for running application program on intelligent terminal, terminal and storage medium
CN112767916B (en) * 2021-02-05 2024-03-01 百度在线网络技术(北京)有限公司 Voice interaction method, device, equipment, medium and product of intelligent voice equipment
CN114594923A (en) * 2022-02-16 2022-06-07 北京梧桐车联科技有限责任公司 Control method, device and equipment of vehicle-mounted terminal and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010063475A (en) * 2008-09-08 2010-03-25 Weistech Technology Co Ltd Device and method for controlling voice command game
CN101377797A (en) * 2008-09-28 2009-03-04 腾讯科技(深圳)有限公司 Method for controlling game system by voice
CN105204838A (en) * 2014-06-26 2015-12-30 金德奎 Method for concretely controlling on application program by means of mobile phone voice control software
CN106297784A (en) * 2016-08-05 2017-01-04 Method and system for quick-response voice recognition on an intelligent terminal

Also Published As

Publication number Publication date
CN111292744A (en) 2020-06-16

Similar Documents

Publication Publication Date Title
CN111292744B (en) Speech instruction recognition method, system and computer readable storage medium
CN108121490B (en) Electronic device, method and server for processing multi-mode input
CN110998720B (en) Voice data processing method and electronic device supporting the same
US10440167B2 (en) Electronic device and method of executing function of electronic device
CN109243432B (en) Voice processing method and electronic device supporting the same
KR102414122B1 (en) Electronic device for processing user utterance and method for operation thereof
US20160328205A1 (en) Method and Apparatus for Voice Operation of Mobile Applications Having Unnamed View Elements
CN110462647B (en) Electronic device and method for executing functions of electronic device
CN109309751B (en) Voice recording method, electronic device and storage medium
US11150870B2 (en) Method for providing natural language expression and electronic device supporting same
CN110457105B (en) Interface operation method, device, equipment and storage medium
EP3364661A2 (en) Electronic device and method for controlling the same
CN110457214B (en) Application testing method and device and electronic equipment
CN109086276B (en) Data translation method, device, terminal and storage medium
CN113655938A (en) Interaction method, device, equipment and medium for intelligent cockpit
US10685650B2 (en) Mobile terminal and method of controlling the same
KR20200106703A (en) Apparatus and method for providing information based on user selection
CN110570846B (en) Voice control method and device and mobile phone
KR20210001082A (en) Electornic device for processing user utterance and method for operating thereof
CN108322770B (en) Video program identification method, related device, equipment and system
KR102369309B1 (en) Electronic device for performing an operation for an user input after parital landing
CN112732379B (en) Method for running application program on intelligent terminal, terminal and storage medium
KR102396147B1 (en) Electronic device for performing an operation using voice commands and the method of the same
CN111326145B (en) Speech model training method, system and computer readable storage medium
US20220270604A1 (en) Electronic device and operation method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230315

Address after: 518055 1501, Building 1, Chongwen Park, Nanshan Zhiyuan, No. 3370, Liuxian Avenue, Fuguang Community, Taoyuan Street, Nanshan District, Shenzhen, Guangdong Province

Applicant after: Shenzhen Grey Shark Technology Co.,Ltd.

Address before: 210022 Room 601, block a, Chuangzhi building, 17 Xinghuo Road, Jiangbei new district, Nanjing City, Jiangsu Province

Applicant before: Nanjing Thunder Shark Information Technology Co.,Ltd.

GR01 Patent grant