CN112216278A - Speech recognition system, instruction generation system and speech recognition method thereof


Info

Publication number
CN112216278A
CN112216278A CN202011026628.0A
Authority
CN
China
Prior art keywords
module
user interface
speech recognition
interface
current user
Prior art date
Legal status
Pending
Application number
CN202011026628.0A
Other languages
Chinese (zh)
Inventor
张国峰
洪士升
汪青
Current Assignee
Via Technologies Inc
Original Assignee
Via Technologies Inc
Priority date
Filing date
Publication date
Application filed by Via Technologies Inc
Priority to CN202011026628.0A
Priority to TW109136554A (TWI780502B)
Publication of CN112216278A
Legal status: Pending



Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 15/08 Speech classification or search
    • G10L 15/18 Speech classification or search using natural language modelling
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 Speech or voice analysis techniques specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques specially adapted for particular use for comparison or discrimination
    • G10L 25/54 Speech or voice analysis techniques specially adapted for particular use for comparison or discrimination for retrieval
    • G10L 2015/223 Execution procedure of a spoken command

Abstract

The invention provides a speech recognition system, an instruction generation system and a speech recognition method thereof. The speech recognition system is adapted to communicate with an application system that receives a speech input. The speech recognition system includes a speech recognition module, a natural language understanding system and an instruction generation system. The speech recognition module receives the speech input provided by the application system and recognizes it to generate speech information. The natural language understanding system, coupled to the speech recognition module, understands the speech information to produce a semantic analysis result. The instruction generation system, coupled to the natural language understanding system, uses the semantic analysis result to compare against the selection items in the interface content of the current user interface, and outputs a control instruction to the application system according to the comparison result. Thus, in addition to providing a convenient speech recognition function, the system resources required for speech recognition in the application system can be reduced.

Description

Speech recognition system, instruction generation system and speech recognition method thereof
Technical Field
The present invention relates to speech recognition technology, and more particularly, to a speech recognition system, an instruction generation system and a speech recognition method thereof.
Background
With the evolution of speech recognition technology, various application systems have begun to be equipped with speech recognition functions in an attempt to improve their operational convenience. In particular, for a Virtual Reality (VR) system or an Augmented Reality (AR) system, providing a speech recognition function can greatly improve operational convenience and user experience. However, since speech recognition usually consumes considerable system resources, it increases the system construction cost and may even affect the system's operating speed.
Another problem is that conventional speech recognition is implemented by compiling an Automatic Speech Recognition grammar (ASR grammar): a developer must compile all the descriptions and contents of the speech selections into the ASR grammar, and only text that completely conforms to the ASR grammar can be matched. That is, the conventional approach requires a large amount of work from system developers and is inflexible in use. Moreover, implementing a speech recognition function in a virtual reality or augmented reality system may require modifying the internal settings of the application system, which increases the complexity and cost of system setup.
Disclosure of Invention
The invention provides a speech recognition system, an instruction generation system and a speech recognition method thereof, which can provide a convenient speech recognition function.
The speech recognition system of the present invention is adapted to communicate with an application system, which is configured to receive a speech input. The speech recognition system includes a speech recognition module, a natural language understanding system and an instruction generation system. The speech recognition module is configured to receive the speech input provided by the application system and recognize it to generate speech information. The natural language understanding system is coupled to the speech recognition module and is configured to understand the speech information to generate a semantic analysis result. The instruction generation system is coupled to the natural language understanding system and is configured to compare, using the semantic analysis result, the selection items in the interface content of a current user interface and to output a control instruction to the application system according to the comparison result.
In an embodiment of the invention, the instruction generation system includes a comparison module and an instruction confirmation module. The comparison module is configured to receive the semantic analysis result and to compare the selection items in the interface content of the current user interface using the semantic analysis result to generate a comparison result. The instruction confirmation module is coupled to the comparison module and is configured to convert the comparison result according to an instruction format and output the control instruction.
In an embodiment of the invention, the application system is configured to display the current user interface. When the application system receives the control instruction output by the speech recognition system, the application system selects a selection item in the interface content of the current user interface according to the control instruction, so as to switch to display a next user interface or execute a specific operation.
In an embodiment of the invention, the natural language understanding system includes a natural language processor, a knowledge-assisted understanding module, a retrieval system, and an analysis result output module. The natural language processor is coupled to the speech recognition module and is configured to receive the speech information to generate possible intention grammar data. The knowledge-assisted understanding module is coupled to the natural language processor and uses stored intention data together with the possible intention grammar data. The retrieval system is coupled to the knowledge-assisted understanding module and is configured to receive keywords of the possible intention grammar data provided by the knowledge-assisted understanding module and to return a response result to the knowledge-assisted understanding module according to the keywords, so that the knowledge-assisted understanding module generates determined intention grammar data according to the response result. The analysis result output module is coupled to the knowledge-assisted understanding module and the instruction generation system and is configured to output the semantic analysis result according to the determined intention grammar data.
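The data flow through these four components can be sketched as follows. Every function below is an illustrative stand-in invented for this sketch; the patent specifies the architecture, not the implementations.

```python
def natural_language_processor(speech_information: str) -> list:
    """Stand-in: produce possible intention grammar data from speech information."""
    return [{"intent": "select", "keyword": speech_information.lower()}]

def retrieval_system(keyword: str) -> str:
    """Stand-in for the retrieval system: return a response result for a keyword."""
    known = {"role": "role selection"}  # hypothetical knowledge base
    return known.get(keyword, "")

def knowledge_assisted_understanding(possible_intents: list) -> dict:
    """Query the retrieval system with each candidate's keyword and keep the
    first candidate that receives a response (the determined intention)."""
    for candidate in possible_intents:
        response = retrieval_system(candidate["keyword"])
        if response:
            return {**candidate, "resolved": response}
    return {}

def analysis_result_output(determined: dict) -> dict:
    """Emit the semantic analysis result from the determined intention grammar data."""
    return {"semantic_analysis_result": determined.get("resolved")}

result = analysis_result_output(
    knowledge_assisted_understanding(natural_language_processor("Role")))
```

Under these assumptions, an utterance whose keyword the retrieval system can answer yields a semantic analysis result; otherwise the result is empty.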
The instruction generation system of the present invention is adapted to communicate with an application system configured to receive a speech input. The instruction generation system includes a comparison module and an instruction confirmation module. The comparison module is configured to receive a semantic analysis result corresponding to the speech input and to compare the selection items in the interface content of the current user interface using the semantic analysis result to generate a comparison result. The instruction confirmation module is coupled to the comparison module to convert the comparison result according to an instruction format and output a control instruction to the application system.
The speech recognition method of the present invention is suitable for a speech recognition system. The speech recognition system communicates with an application system, and the application system is configured to receive a speech input. The speech recognition method includes the following steps. A speech input provided by the application system is received. The speech input is recognized to generate speech information. The speech information is understood to produce a semantic analysis result. The semantic analysis result is compared against the selection items in the interface content of the current user interface, and a control instruction is output to the application system according to the comparison result.
The speech recognition method of the present invention is also suitable for an instruction generation system. The instruction generation system is adapted to communicate with an application system, and the application system is configured to receive a speech input. The speech recognition method includes the following steps. A semantic analysis result corresponding to the speech input is received. The semantic analysis result is used to compare the selection items in the interface content of the current user interface to generate a comparison result. The comparison result is converted according to an instruction format, and a control instruction is output to the application system.
Based on the above, the speech recognition system, the instruction generation system and the speech recognition method thereof of the present invention can recognize the speech input provided by the application system and return the corresponding control instruction to the application system, so that the application system can execute the corresponding operation according to the control instruction. Therefore, the speech recognition system, the instruction generation system and the speech recognition method thereof not only provide a convenient speech recognition function, but also reduce the system resources required for speech recognition in the application system.
In order to make the aforementioned and other features and advantages of the invention more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
FIG. 1 is a schematic diagram of a speech recognition system according to an embodiment of the present invention.
FIG. 2 is a flow diagram of a speech recognition method according to an embodiment of the present invention.
FIG. 3 is a diagram of an instruction generation system according to an embodiment of the invention.
FIG. 4 is a flow diagram of a speech recognition method according to another embodiment of the invention.
FIG. 5 is a schematic diagram of a user interface of an application system according to an embodiment of the invention.
FIG. 6 is a schematic diagram of a natural speech understanding system according to an embodiment of the present invention.
Wherein the symbols in the drawings are briefly described as follows:
100: a speech recognition system; 101: speech input; 102: speech information; 103: semantic analysis result; 104: interface content; 105: a control instruction; 110: a speech recognition module; 120, 620: a natural language understanding system; 130: an instruction generation system; 131: a comparison module; 132: an instruction confirmation module; 133: a temporary storage device; 134: an access module; 135: an item acquisition module; 140: a storage device; 200: an application system; 210: a voice receiving module; 220: an instruction execution module; 301: an interface number; 302, 307: instruction formats; 303: interface content; 304: a selection item; 305: a comparison result; 306: a control instruction; 511, 521, 531: interface names; 512-514, 522-524, 532-534: selection items; 603: possible intention grammar data; 604: a keyword; 605: a response result; 606: determined intention grammar data; 621: a natural language processor; 622: a knowledge-assisted understanding module; 623: intention data; 624: a retrieval system; 625: a structured database; 626: a search engine; 627: an indication data storage device; 628: a retrieval interface unit; 629: an analysis result output module; S210-S240, S410-S450: steps.
Detailed Description
In order that the present disclosure may be more readily understood, the following embodiments are given as examples by which the invention may actually be practiced. Further, wherever possible, the same reference numbers are used in the drawings and the description to refer to the same or like parts.
FIG. 1 is a schematic diagram of a speech recognition system according to an embodiment of the present invention. Referring to fig. 1, the speech recognition system 100 is adapted to communicate with the application system 200, and the two may communicate in a wired or wireless manner. In this embodiment, the application system 200 includes a voice receiving module 210 and an instruction execution module 220. The application system 200 receives the speech input 101 provided by the user through the voice receiving module 210 and transmits the speech input 101 to the speech recognition system 100. The speech recognition system 100 can perform speech recognition on the speech input 101 provided by the application system 200 to generate a corresponding instruction, and then transmits the instruction back to the instruction execution module 220 of the application system 200, so that the instruction execution module 220 executes the relevant operation of the application system 200. In other words, the speech recognition system 100 of the present embodiment can be paired with any application system to provide a speech recognition function.
In this embodiment, the application system 200 may be, for example, a game program or an application program running or loaded on a Virtual Reality (VR) device or an Augmented Reality (AR) device, and the user may control related operations in the game program in a voice manner. The virtual reality device or augmented reality device may include hardware circuits such as a processing circuit, a memory, and a voice sensing device, so that the processing circuit can execute or access the relevant modules or programs in the memory, thereby at least realizing the voice receiving function, the instruction executing function, and the application program executing function of the present invention. In the present embodiment, the speech recognition system 100 can be implemented in a cloud server or a local host device, for example, to provide speech related recognition and processing functions. The speech recognition system 100 may also include another processing circuit and another memory, so that the speech recognition function of the present invention can be realized at least by the other processing circuit executing or accessing the relevant module or program in the other memory.
In this embodiment, the speech recognition system 100 includes a speech recognition module 110, a natural language understanding system 120, an instruction generation system 130, and a storage device 140. The speech recognition module 110 is coupled to the voice receiving module 210 of the application system 200 and to the natural language understanding system 120. The instruction generation system 130 is coupled to the natural language understanding system 120 and the storage device 140. FIG. 2 is a flow diagram of a speech recognition method according to an embodiment of the present invention. In conjunction with the speech recognition method of fig. 2, the speech recognition system 100 of fig. 1 can perform steps S210 to S240 of fig. 2 to implement the speech recognition function. In step S210, the speech recognition module 110 of the speech recognition system 100 receives the speech input 101 provided by the application system 200. In step S220, the speech recognition module 110 recognizes the speech input 101 to generate the speech information 102. In this embodiment, the speech recognition module 110 can convert the signal of the speech input 101 into speech information 102 (or data) that a computer can process and analyze.
In step S230, the natural language understanding system 120 receives the speech information 102 output by the speech recognition module 110 and understands the speech information 102 to generate the semantic analysis result 103. In step S240, the instruction generating system 130 receives the semantic analysis results 103 output by the natural language understanding system 120, and compares the semantic analysis results 103 to output the control instruction 105 to the application system 200. In this embodiment, the storage device 140 can provide the interface content 104 of the user interface displayed by the current application system 200 to the instruction generating system 130, so that the instruction generating system 130 can compare the semantic analysis result 103 with the interface content 104 of the current user interface to generate the control instruction 105 to the instruction execution module 220 of the application system 200. In the embodiment, the instruction execution module 220 can cause the application system 200 to display a next user interface or perform a specific operation according to the control instruction 105. The storage device 140 may be any type of memory in a server or a computer system, such as a Dynamic Random Access Memory (DRAM), a Static Random Access Memory (SRAM), a Flash memory (Flash memory), a Read Only Memory (ROM), etc., which is not limited in this respect and can be selected by those skilled in the art according to actual requirements.
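Steps S210 to S240 can be sketched as a small pipeline. The function bodies below are illustrative stand-ins invented for this sketch (the patent describes the modules, not their implementations), and the sample interface content is likewise an assumption.

```python
def recognize(speech_input: bytes) -> str:
    """S210/S220: stand-in for the speech recognition module 110, which
    converts the speech signal into computer-analyzable speech information."""
    return speech_input.decode("utf-8")

def understand(speech_information: str) -> dict:
    """S230: stand-in for the natural language understanding system 120,
    which normalizes the text into a semantic analysis result."""
    return {"keyword": speech_information.strip().lower()}

def compare(semantic_result: dict, interface_content: list) -> dict:
    """S240: stand-in for the instruction generation system 130, which
    compares the result against the current interface's selection items."""
    for item in interface_content:
        if semantic_result["keyword"] in item["aliases"]:
            return {"control_instruction": item["id"]}
    return {"control_instruction": None}

# Hypothetical interface content 104 for the current user interface.
current_interface = [
    {"id": 0, "title": "instructions for use", "aliases": {"use", "first"}},
    {"id": 1, "title": "role selection", "aliases": {"role", "second"}},
]

result = compare(understand(recognize(b"Role")), current_interface)
```

Under these assumptions, speaking an alias of the second item yields a control instruction carrying item number 1, which the application system can then act on.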
In the present embodiment, the natural language understanding system 120 may, for example, convert the speech information 102 into text information (Text Information) and normalize the text information to generate the semantic analysis result 103 carrying an intention object. The instruction generation system 130 can then generate the control instruction 105 corresponding to the intention object and provide it to the application system 200, so that the application system 200 can execute the control instruction 105 to display the next user interface or perform a specific operation. Therefore, the speech recognition method of this embodiment allows the application system 200 to offer speech recognition without spending additional system resources of its own, effectively saving the system resources the application system 200 would otherwise need to recognize the user's speech input.
It is noted that the semantic analysis results 103 output by the natural language understanding system 120 may include one or more possible semantic data, and the semantic data may include keywords and intention data. In other words, the user can express the selection intention by spoken language, such as the full name, short name, or alias of the selected item, and the corresponding control command can be generated by the voice recognition system 100 of the embodiment without reciting the complete specific name. In this regard, the manner in which the natural language understanding system 120 generates the semantic analysis results 103 will be illustrated below with respect to the embodiment of FIG. 6.
FIG. 3 is a diagram of an instruction generation system according to an embodiment of the invention. FIG. 4 is a flow diagram of a speech recognition method according to another embodiment of the invention. Referring to fig. 1, fig. 3 and fig. 4, the instruction generation system 130 of fig. 1 serves as an interface (Interface) to the application system, and a user can edit the instruction generation system 130 so that the speech recognition system 100 suits the corresponding application system 200. The instruction generation system 130 may include a system architecture as shown in FIG. 3. In this embodiment, the instruction generation system 130 includes a comparison module 131, an instruction confirmation module 132, a temporary storage device 133, an access module 134, and an item acquisition module 135. The comparison module 131 is coupled to the instruction confirmation module 132 and the item acquisition module 135. The temporary storage device 133 is coupled to the instruction confirmation module 132 and the access module 134. The access module 134 is coupled to the item acquisition module 135 and the storage device 140. In this embodiment, the temporary storage device 133 may be, for example, a Dynamic Random Access Memory (DRAM), a Static Random Access Memory (SRAM), a Flash memory, a Read Only Memory (ROM), etc., which is not limited in this disclosure and can be selected by those skilled in the art according to actual requirements.
In conjunction with the speech recognition method of FIG. 4, the command generation system 130 of FIG. 3 can perform steps S410-S450 of FIG. 4 to implement speech recognition and command generation functions. In step S410, the temporary storage device 133 receives the interface number 301 of the current user interface provided by the application system 200. In step S420, the access module 134 generates the interface content 303 of the current user interface according to the interface number 301. In the present embodiment, the accessing module 134 can access the interface data pre-loaded in the storage device 140 according to the interface number 301 to obtain the interface content 303 of the current user interface displayed by the application system 200.
In step S430, the item acquisition module 135 receives the interface content 303 of the current user interface provided by the access module 134 and extracts the selection item 304 from it, outputting the selection item 304 to the comparison module 131. In step S440, the comparison module 131 compares the selection item 304 of the current user interface with the semantic analysis result 103 to generate a comparison result 305. It is noted that the selection item 304 may include an item name and a plurality of reference keywords corresponding to the item name. That is, the item acquisition module 135 may extract from the interface content 303 of the current user interface the item name of each selection item 304 and the reference keywords corresponding to that item name. The comparison module 131 may then check whether the semantic analysis result 103 matches one of the item name and the reference keywords to generate the comparison result 305. In other words, as long as the semantic analysis result 103, produced after the natural language understanding system 120 understands the speech input spoken by the user, matches the item name or one of the reference keywords, the comparison module 131 may output, for example, a comparison result 305 carrying the corresponding item number. The reference keywords may be, for example, short names or aliases of the item name.
In step S450, the instruction confirmation module 132 converts the comparison result 305 according to the instruction format 307 and outputs, for example, a control instruction 306 carrying the corresponding item number. In this embodiment, the instruction format 307 refers to the instruction format that can be received by the application system 200, and the instruction confirmation module 132 outputs the control instruction 306 to the application system 200 through the temporary storage device 133. Therefore, when the application system 200 displays the current user interface and receives the control instruction 306 output by the speech recognition system 100, the application system 200 can select the selected item in the interface content 303 of the current user interface according to the control instruction 306, and then switch to display the next user interface or perform a specific operation according to the obtained item number. Accordingly, the speech recognition method of this embodiment enables the instruction generation system 130 to effectively recognize the user's speech input and generate the corresponding control instruction.
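The conversion in step S450 amounts to rendering a matched item number in whatever format the application system accepts. A minimal sketch follows; the format string and field name are invented for illustration, since the patent leaves the concrete instruction format to the application system.

```python
def confirm_instruction(comparison_result: dict, instruction_format: str) -> str:
    """Convert a comparison result (e.g. a matched item number) into a
    control instruction string that follows the application system's format."""
    return instruction_format.format(**comparison_result)

# Hypothetical instruction format 307 preloaded for this application system.
instruction_format = "SELECT item_id={item_id}"
control_instruction = confirm_instruction({"item_id": 1}, instruction_format)
```

A different application system would supply a different format string during editing, without any change to the confirmation step itself.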
In addition, since the speech recognition system 100 can be applied to various application systems, the user only needs to perform relevant editing on the speech recognition system 100, and does not need to change the application system. For example, the speech recognition system 100 may first operate in an editing mode (or be edited by a Software Development Kit (SDK) of the speech recognition system) to write the interface content 104 and the instruction format 302 of the user interface displayed by the application system 200 into the storage device 140 through the temporary storage device 133 and the access module 134 in advance. Therefore, when the speech recognition system 100 operates in the working mode, the instruction generating system 130 can receive the interface number 301 of the current user interface through the temporary storage device 133, and the access module 134 can read the storage device 140 according to the interface number 301 to obtain the corresponding interface content 303 of the current user interface. Instruction validation module 132 may obtain instruction format 307 via access module 134. That is, the speech recognition system 100 of the present embodiment can be adapted to be used with various application systems to provide effective speech recognition and speech selection functions.
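The edit-then-work flow above can be sketched with a dictionary standing in for the storage device 140. The function names and field layout are assumptions made for this sketch, not part of the patent.

```python
storage_device: dict = {}  # interface number -> preloaded interface data

def editing_mode_write(interface_number: int, interface_content: list,
                       instruction_format: str) -> None:
    """Editing mode: preload a user interface's content and the application
    system's instruction format, keyed by the interface number."""
    storage_device[interface_number] = {
        "content": interface_content,
        "format": instruction_format,
    }

def working_mode_read(interface_number: int) -> dict:
    """Working mode: the access module reads the storage device by the
    interface number received from the application system."""
    return storage_device[interface_number]

editing_mode_write(301, [{"id": 1, "title": "role selection"}], "SELECT {id}")
entry = working_mode_read(301)
```

Keying the store by interface number is what lets the same speech recognition system serve many application systems: only the preloaded data differs.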
FIG. 5 is a schematic diagram of a user interface of an application system according to an embodiment of the invention. FIG. 5 is an example of a user interface that may be displayed by the application system 200 of FIG. 1. Referring to fig. 1, 3, and 5, the application system 200 may, for example, execute a virtual reality game program. Taking a game program as an example, a game developer may first establish one or more corresponding data sets for each user interface that may be displayed in the game, where each user interface may include one or more item names. For each item name, the game developer may establish a data set including an item number, the column number and row number of the item name on the interface, and a plurality of reference keywords corresponding to the item name. Therefore, when the speech recognition system 100 is connected to the game program, the game program can input the created data sets into the temporary storage device 133 of the instruction generation system 130. The access module 134 can then read the temporary storage device 133 and store the data sets into the storage device 140 of the speech recognition system 100.
Next, assume that the application system 200 first displays the user interface 510 of FIG. 5. The interface content of the user interface 510 includes an interface name 511 (home page) and a plurality of selection items 512-514. The interface content 303 that the access module 134 reads from the storage device 140 may include, for example, the data of the selection items 512-514. It is noted that the item acquisition module 135 may extract the item names of the selection items 512-514 and the reference keywords corresponding to those item names from the interface content of the user interface 510, and output them to the comparison module 131 for comparison. In this example, the comparison module 131 may obtain the following data content from the item acquisition module 135:
{ id = 0; column = 0; line = 0; title = "instructions for use"; alias = "use", "first", "third from last", "one", "operation", … }
{ id = 1; column = 0; line = 1; title = "role selection"; alias = "role", "second", "second from last", "character", … }
{ id = 2; column = 0; line = 2; title = "checkpoint selection"; alias = "checkpoint", "third", "last", "battle", "fight", … }
Here "id" is the item number, "column" is the column number, "line" is the row number, "title" is the item name, and "alias" lists the reference keywords. In this case, when the user wants to select the selection item 513, the user can speak, for example, the full name "role selection", a short name such as "role", or a positional expression such as "second" or "second from last" corresponding to the selection item 513, so that the comparison module 131 can match the selection item 513 and output the corresponding comparison result 305 to the instruction confirmation module 132, which generates the corresponding control instruction 306 to the instruction execution module 220 of the application system 200. The application system 200 may then switch to display the next user interface 520.
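Matching against such records can be sketched as follows, mirroring the id/column/line/title/alias layout. The alias sets are abridged assumptions, and the resolver itself is an illustrative stand-in for the comparison module.

```python
# Records mirroring the id/column/line/title/alias fields described above
# (alias sets abridged; the exact contents are assumptions for this sketch).
records = [
    {"id": 0, "column": 0, "line": 0, "title": "instructions for use",
     "alias": {"use", "first", "third from last"}},
    {"id": 1, "column": 0, "line": 1, "title": "role selection",
     "alias": {"role", "second", "second from last"}},
    {"id": 2, "column": 0, "line": 2, "title": "checkpoint selection",
     "alias": {"checkpoint", "third", "last", "battle"}},
]

def resolve(utterance: str):
    """Return the item number of the first record whose title or alias
    matches the utterance (the comparison result), or None when no
    record matches."""
    text = utterance.strip().lower()
    for record in records:
        if text == record["title"] or text in record["alias"]:
            return record["id"]
    return None
```

A full name, a short name, or a positional expression all resolve to the same item number, which is the behavior the reference-keyword lists are meant to provide.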
Then, when the application system 200 displays the user interface 520, the interface content of the user interface 520 includes the interface name 521 (role selection) and a plurality of selection items 522-524. In this example, the comparison module 131 may obtain the following data content from the item retrieving module 135:
{ id = 3; column = 0; line = 0; title = "Zhao Yun"; alias = "Zhao Zilong", "first", "third from last", "one", "Zhao", "Zilong", … }
{ id = 4; column = 0; line = 1; title = "Guan Yu"; alias = "Guan Yunchang", "second", "penultimate", "Guan", "Yunchang", … }
{ id = 5; column = 0; line = 2; title = "Cao Cao"; alias = "Cao Mengde", "third", "last", "three", "Cao", "Mengde", … }
In this case, the user may say, for example, the full name "Zhao Yun", the abbreviation "Zhao", the alias "Zhao Zilong", or an item number such as "first" or "third from last" corresponding to the selection item 522, so that the comparison module 131 can match the selection item 522 selected by the user and output the corresponding comparison result 305 to the instruction confirmation module 132, which generates the corresponding control instruction 306 to the instruction execution module 220 of the application system 200. The application system 200 may then switch to display the next user interface 530.
Next, when the application system 200 displays the user interface 530, the interface content of the user interface 530 includes an interface name 531 (weapon selection) and a plurality of selection items 532-534. In this example, the comparison module 131 may obtain the following data content from the item retrieving module 135:
{ id = 6; column = 0; line = 0; title = "Qinghong sword"; alias = "sword", "rainbow", "first", "third from last", "one", … }
{ id = 7; column = 0; line = 1; title = "long spear"; alias = "spear", "second", "penultimate", "two", … }
{ id = 8; column = 0; line = 2; title = "broadsword"; alias = "blade", "third", "last", "three", … }
In this case, the user may say, for example, the full name "Qinghong sword", the abbreviation "Qinghong", or the item number "first" corresponding to the selection item 532, so that the comparison module 131 can match the selection item 532 selected by the user and output the corresponding comparison result 305 to the instruction confirmation module 132, which generates the corresponding control instruction 306 to the instruction execution module 220 of the application system 200. The application system 200 may then perform the specific operation associated with continuing the game program.
However, the voice input provided by the user is not limited to the full-name, abbreviation, alias, or item-number forms described above. In an embodiment, the comparison module 131 may also directly extract item-number information (which may include forward or reverse item-number information) about the selection items of the current user interface from the semantic analysis result 103, directly extract row numbers or column numbers of those selection items from the semantic analysis result 103, or directly perform pinyin matching according to the semantic analysis result 103 to find a selection item whose item name matches at its beginning, its end, or any substring. Furthermore, the comparison module 131 may output a comparison result 305 corresponding to a plurality of successfully matched selection items, so that the application system 200 may execute a plurality of control instructions simultaneously or sequentially.
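The additional matching strategies above — forward and reverse item-number matching, and partial matching on the beginning, end, or any substring of an item name — can be sketched as follows. The function names and ordinal word lists are illustrative assumptions; pinyin matching would apply the same substring logic to romanized item names (e.g., via a pinyin conversion library).

```python
# Sketch of ordinal and partial-name matching over the item names of
# the current user interface (home-page items used as sample data).

TITLES = ["instructions for use", "role selection", "checkpoint selection"]

ORDINALS = {"first": 0, "second": 1, "third": 2}
REVERSE_ORDINALS = {"last": -1, "penultimate": -2, "third from last": -3}

def match_by_ordinal(utterance, titles):
    """Resolve a spoken ordinal (forward or reverse) to an item name."""
    if utterance in ORDINALS and ORDINALS[utterance] < len(titles):
        return titles[ORDINALS[utterance]]
    if utterance in REVERSE_ORDINALS and abs(REVERSE_ORDINALS[utterance]) <= len(titles):
        return titles[REVERSE_ORDINALS[utterance]]
    return None

def match_by_partial_name(utterance, titles):
    """Match the utterance against the head, tail, or any substring of a title."""
    u = utterance.lower()
    for t in titles:
        low = t.lower()
        if low.startswith(u) or low.endswith(u) or u in low:
            return t
    return None
```

Here "penultimate" and "second" both resolve to the second item, and a fragment such as "check" resolves to "checkpoint selection" by substring matching.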
FIG. 6 is a schematic diagram of a natural language understanding system according to an embodiment of the present invention. It should be noted that, in some embodiments, the natural language understanding system of the present invention may, for example, adopt the architecture of the natural language understanding system in Chinese invention patent CN103761242B, but the present invention is not limited thereto. In other embodiments, the natural language understanding system of the present invention may also adopt other system architectures capable of generating the semantic analysis results of the embodiments of the present invention. Referring to FIG. 1 and FIG. 6, the natural language understanding system 620 of FIG. 6 is an exemplary embodiment of the natural language understanding system 120 of FIG. 1, but the natural language understanding system of the present invention is not limited thereto. In this embodiment, the natural language understanding system 620 includes a natural language processor 621, a knowledge-aided understanding module 622, a retrieval system 624, and an analysis result output module 629. The knowledge-aided understanding module 622 is coupled to the natural language processor 621 and the retrieval system 624. The knowledge-aided understanding module 622 includes intention data 623. The retrieval system 624 includes a structured database 625, a search engine 626, an indication data storage device 627, and a retrieval interface unit 628, wherein the search engine 626 is coupled to the structured database 625, the indication data storage device 627, and the retrieval interface unit 628.
In the present embodiment, and with reference to Table 1 below, when the natural language understanding system 620 receives the speech information 102 provided by the speech recognition module 110 of FIG. 1 (e.g., the user vocally inputs "I am Zilong" while the user interface 520 of FIG. 5 is displayed), the natural language processor 621 may analyze the speech information 102 to generate possible intention grammar data 603. The natural language processor 621 may send the possible intention grammar data 603 to the knowledge-aided understanding module 622, wherein the possible intention grammar data 603 includes a keyword 604 and intention data 623. In this regard, since the keyword 604 (e.g., "Zilong") may belong to different domains, such as role selection (<roleselect>) and movie (<readfile>), one piece of speech information 102 may be analyzed into a plurality of possible intention grammar data 603 (e.g., "<roleselect>, <rolename>=Zilong" or "<readfile>, <filename>=Zilong"), and further analysis by the knowledge-aided understanding module 622 is required to confirm the intention of the user. In this embodiment, the knowledge-aided understanding module 622 can retrieve the keyword 604 (e.g., "Zilong") in the possible intention grammar data 603 and send it to the retrieval interface unit 628 of the retrieval system 624, so as to search the structured database 625 through the search engine 626 and determine whether a character name or movie name "Zilong" exists. Also, the natural language processor 621 stores the intention data 623 inside the knowledge-aided understanding module 622.
TABLE 1
In other words, in the present embodiment, the natural language understanding system 620 first retrieves the keyword 604 in the possible intention grammar data 603, determines the domain attribute of the keyword 604 according to the full-text search result of the structured database 625, and then further analyzes and confirms the user's specific intention. The user can thus express an intention or information in a spoken manner without having to memorize specific terms, such as the fixed word lists of the prior art.
In the present embodiment, the structured database 625 in the retrieval system 624 may, for example, store a plurality of records. The search engine 626 in the retrieval system 624 performs a full-text search on the structured database 625 according to the keyword 604 to confirm the user's intention, and then returns the response result 605 of the full-text search to the knowledge-aided understanding module 622 (assuming that the structured database 625 stores a record whose title field contains "rolenameguid: Zhao Zilong" and no record whose title field stores "filmnameguid: Zhao Zilong", the response result 605 is "rolenameguid").
In this embodiment, the retrieval interface unit 628 can obtain indication data from the indication data storage device 627 through the search engine 626, and the retrieval interface unit 628 sequentially outputs the indication data of the records fully matching and partially matching the keyword 604 as the response result 605 to the knowledge-aided understanding module 622, wherein fully matching records have higher priority than partially matching records. Then, the knowledge-aided understanding module 622 can compare the stored intention data 623 with the response result 605, and send the determined intention grammar data 606 (e.g., after comparing the response result 605 with the possible intention grammar data 603, it is determined that the intention of the user should be "<roleselect>, <rolename>=Zhao Zilong") to the analysis result output module 629.
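The ordering rule described above — indication data of fully matching records is output before that of partially matching records — can be sketched as follows. The record contents and the helper name are illustrative assumptions, not the patent's actual data.

```python
# Sketch of response-result ordering: full keyword matches take
# priority over partial matches.

RECORDS = [
    {"title": "rolenameguid: Zilong", "indication": "rolenameguid"},
    {"title": "filmnameguid: Zilong the Brave", "indication": "filmnameguid"},
]

def ordered_matches(keyword, records):
    """Return indication data, full matches first, then partial matches."""
    full = [r for r in records if r["title"].split(": ", 1)[-1] == keyword]
    partial = [r for r in records
               if keyword in r["title"] and r not in full]
    return [r["indication"] for r in full + partial]
```

With the sample records, the keyword "Zilong" fully matches the first record and only partially matches the second, so "rolenameguid" is output before "filmnameguid".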
However, in another embodiment of the present invention, and with reference to Table 2 below, each record stored in the structured database 625 may further include information such as a popularity field, a likes field, or a dislikes field. In this regard, assume that the possible intention grammar data 603 includes two pieces of data (e.g., "<roleselect>, <rolename>=Zhao Zilong" and "<roleselect>, <rolename>=Zilong"). After the search engine 626 of the retrieval system 624 performs the full-text search, if two records match the search (assuming that the structured database 625 stores two records whose title fields contain "rolenameguid: Zhao Zilong" and "rolenameguid: Zilong", respectively), the search engine 626 can further examine the popularity field, the likes field, and the dislikes field of the two records. In this regard, the search engine 626 may determine the semantic analysis result 103 according to the value of the popularity field (e.g., if the popularity value (8) corresponding to "Zhao Zilong" is higher and the popularity value (2) corresponding to "Zilong" is lower, the search engine 626 takes "Zhao Zilong" as the semantic analysis result 103). Alternatively, the search engine 626 may determine the semantic analysis result 103 according to the value of the likes field (e.g., if the likes value (20) corresponding to "Zhao Zilong" is higher and the likes value (5) corresponding to "Zilong" is lower, the search engine 626 takes "Zhao Zilong" as the semantic analysis result 103). Alternatively, the search engine 626 may determine the semantic analysis result 103 according to the value of the dislikes field (e.g., if the dislikes value (1) corresponding to "Zhao Zilong" is lower and the dislikes value (20) corresponding to "Zilong" is higher, the search engine 626 takes "Zhao Zilong" as the semantic analysis result 103). In yet another embodiment, the search engine 626 may refer to any combination of the popularity field, the likes field, and the dislikes field rather than a single criterion (for example, if the popularity values of "Zhao Zilong" and "Zilong" are the same, the search engine 626 further compares the likes values, or adds the popularity and likes values together for comparison).
TABLE 2
Record | Title field | Content field | Popularity field | Likes field | Dislikes field
1 | rolenameguid: Zhao Zilong | role selection | 8 | 20 | 1
2 | rolenameguid: Zilong | role selection | 2 | 5 | 20
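The tie-breaking logic described above can be sketched using the field values of Table 2. Combining popularity first and then net likes (likes minus dislikes) is only one of the several options the text allows, and is an assumption here.

```python
# Sketch of record selection by popularity, likes, and dislikes fields.

RECORDS = [
    {"title": "Zhao Zilong", "popularity": 8, "likes": 20, "dislikes": 1},
    {"title": "Zilong",      "popularity": 2, "likes": 5,  "dislikes": 20},
]

def pick_record(records):
    """Prefer higher popularity; break ties by likes minus dislikes."""
    return max(records, key=lambda r: (r["popularity"], r["likes"] - r["dislikes"]))
```

With the Table 2 values, "Zhao Zilong" has the higher popularity (8 vs. 2), so it is selected as the semantic analysis result.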
Therefore, the analysis result output module 629 may output the semantic analysis result 103 carrying a specific intention object according to the determined intention grammar data 606. For example, according to the received determined intention grammar data 606 "<roleselect>, <rolename>=Zhao Zilong", the analysis result output module 629 confirms that the user wants to select Zhao Yun, and outputs the semantic analysis result 103 of "Zhao Yun" to the instruction generating system 105. Since the natural language understanding system 620 can distinguish complete matches from partial matches after the full-text search of the keyword 604 and output an appropriate semantic analysis result 103, in some embodiments of the present invention the user can provide voice input in a more spoken or flexibly varying form, and a voice recognition system having the natural language understanding system 620 of the present embodiment can still effectively and accurately feed back the corresponding control instruction to the application system, thereby providing an effective voice selection function.
In summary, the voice recognition system, the instruction generation system, and the voice recognition method thereof of the present invention provide the voice recognition function through a separate system external to the application system, and return the corresponding control instruction to the application system. Moreover, they can perform effective voice recognition on the spoken voice input provided by the user. Therefore, the voice recognition system, the instruction generation system, and the voice recognition method thereof can effectively reduce the system resources required for voice recognition in the application system, and can realize a convenient and flexible voice selection function.
The above description is only for the preferred embodiment of the present invention, and it is not intended to limit the scope of the present invention, and any person skilled in the art can make further modifications and variations without departing from the spirit and scope of the present invention, therefore, the scope of the present invention should be determined by the claims of the present application.

Claims (38)

1. A speech recognition system adapted to communicate with an application system, wherein the application system is configured to receive a voice input, and the speech recognition system comprises:
a voice recognition module for receiving the voice input provided by the application system and recognizing the voice input to generate voice information;
a natural speech understanding system coupled to the speech recognition module for understanding the speech information to generate a semantic analysis result; and
an instruction generating system coupled to the natural speech understanding system and configured to compare a selection item in the interface content of a current user interface with the semantic analysis result, and to output a control instruction to the application system according to the comparison result.
2. The speech recognition system of claim 1, wherein the instruction generation system comprises:
a comparison module for receiving the semantic analysis result and comparing the selected item in the interface content of the current user interface with the semantic analysis result to generate a comparison result; and
a command confirmation module coupled to the comparison module for converting the comparison result according to a command format and outputting the control command.
3. The speech recognition system of claim 2, wherein the interface content of the current user interface includes an item name of the selected item, an item label corresponding to the item name, and a plurality of reference keywords, and the comparison module compares whether the semantic analysis result matches one of the item name and the reference keywords to generate the comparison result.
4. The speech recognition system of claim 3, wherein the instruction generation system further comprises:
a temporary storage device for receiving an interface number of the current user interface provided by the application system;
an access module coupled to the temporary storage device for generating the interface content of the current user interface according to the interface number; and
an item obtaining module, coupled to the access module and the comparison module, for obtaining the item name of the selected item, the item label corresponding to the item name, and the reference keywords from the interface content of the current user interface, so as to output the item name of the selected item, the item label corresponding to the item name, and the reference keywords to the comparison module.
5. The speech recognition system of claim 4, wherein the temporary storage device is further coupled to the command confirmation module, and the command confirmation module outputs the control command to the application system through the temporary storage device.
6. The speech recognition system of claim 4, further comprising:
a storage device coupled to the access module of the instruction generating system, wherein the access module is configured to access the storage device to obtain the interface content of the current user interface.
7. The speech recognition system of claim 6, wherein the interface contents of the current user interface are written into the storage device in advance through the temporary storage device and the access module.
8. The speech recognition system of claim 6, wherein the command format is pre-written to the storage device through the temporary storage device and the access module, and the command confirmation module obtains the command format through the access module.
9. The speech recognition system of claim 4, wherein the application system is configured to display the current user interface, and when the application system receives the control command outputted from the speech recognition system, the application system selects the selection item in the interface content of the current user interface according to the control command, so as to switch to display a next user interface or perform a specific operation.
10. The speech recognition system of claim 9, wherein when the application system displays the next user interface instead, the application system outputs a next interface number of the next user interface to the command generating system, so that the access module of the command generating system obtains a next interface content of the next user interface through a storage device and provides the next interface content to the item obtaining module.
11. The speech recognition system of claim 1, wherein the natural speech understanding system comprises:
a natural language processor coupled to the speech recognition module and configured to receive the speech information to generate possible intention grammar data;
a knowledge-aided understanding module coupled to the natural language processor and configured to store intention data of the possible intention grammar data;
a retrieval system, coupled to the knowledge-aided understanding module, for receiving the keyword of the possible intention grammar data provided by the knowledge-aided understanding module, and generating a response result to the knowledge-aided understanding module according to the keyword, so that the knowledge-aided understanding module generates determined intention grammar data according to the response result; and
an analysis result output module coupled to the knowledge-aided understanding module and the instruction generating system for outputting the semantic analysis result according to the determined intention grammar data.
12. The speech recognition system of claim 11, wherein the retrieval system comprises:
a retrieval interface unit coupled to the knowledge-aided understanding module;
a search engine coupled to the retrieval interface unit;
an indication data storage device coupled to the search engine; and
a structured database storing a plurality of records, wherein the retrieval interface unit searches the plurality of records in the structured database through the search engine according to the keyword, and obtains corresponding indication data from the indication data storage device as the response result according to a search result.
13. The speech recognition system of claim 12, wherein the search engine performs a full text search on the plurality of records, and determines the domain attribute of the keyword according to the full text search result.
14. The speech recognition system of claim 13, wherein each of the plurality of records includes at least one of a popularity field, a likes field, and a dislikes field, and the search engine compares the value of the at least one of the popularity field, the likes field, and the dislikes field of each of the plurality of records to determine the search result.
15. The speech recognition system of claim 12, wherein the knowledge-aided understanding module compares the response result with the intention data to generate the determined intention grammar data.
16. An instruction generating system adapted to communicate with an application system and the application system is configured to receive a voice input, wherein the instruction generating system comprises:
a comparison module for receiving a semantic analysis result corresponding to the voice input and comparing a selection item in the interface content of the current user interface with the semantic analysis result to generate a comparison result; and
a command confirmation module coupled to the comparison module for converting the comparison result according to a command format and outputting a control command to the application system.
17. The instruction generating system of claim 16, wherein the interface content of the current user interface includes an item name of the selected item, an item label corresponding to the item name, and a plurality of reference keywords, and the comparison module compares whether the semantic analysis result matches one of the item name and the reference keywords to generate the comparison result.
18. The instruction generating system of claim 16, further comprising:
a temporary storage device for receiving an interface number of the current user interface provided by the application system;
an access module coupled to the temporary storage device for generating the interface content of the current user interface according to the interface number; and
an item obtaining module, coupled to the accessing module and the comparing module, for obtaining an item name of the selected item, an item label corresponding to the item name, and a plurality of reference keywords from the interface content of the current user interface, so as to output the item name of the selected item, the item label corresponding to the item name, and the plurality of reference keywords to the comparing module.
19. The instruction generating system of claim 18, wherein the temporary storage device is further coupled to the instruction confirmation module, and the instruction confirmation module outputs the control instruction to the application system through the temporary storage device.
20. The instruction generating system of claim 18, further comprising a storage device coupled to the access module of the instruction generating system, wherein the access module is configured to access the storage device to obtain the interface content of the current user interface.
21. The instruction generating system of claim 20, wherein the interface content of the current user interface is pre-written to the storage device through the temporary storage device and the access module.
22. The instruction generating system of claim 20, wherein the instruction format is pre-written to the storage device through the temporary storage device and the access module, and the instruction confirmation module obtains the instruction format through the access module.
23. The instruction generating system of claim 18, wherein the application system is configured to display the current user interface, and when the application system receives the control command outputted from the instruction generating system, the application system selects the selection item in the interface content of the current user interface according to the control command, so as to switch to display a next user interface or perform a specific operation.
24. The instruction generating system of claim 23, wherein when the application system switches to display the next user interface, the application system outputs a next interface number of the next user interface to the instruction generating system, so that the access module of the instruction generating system can access a next interface content of the next user interface through the storage device and provide the next interface content to the item obtaining module.
25. A speech recognition method adapted for a speech recognition system, wherein the speech recognition system is in communication with an application system, and wherein the application system is configured to receive speech input, wherein the speech recognition method comprises:
receiving the speech input provided by the application system;
recognizing the voice input to generate voice information;
understanding the speech information to generate a semantic analysis result; and
comparing a selection item in the interface content of a current user interface with the semantic analysis result, and outputting a control instruction to the application system according to the comparison result.
26. The speech recognition method of claim 25, wherein outputting the control command to the application system comprises:
converting the comparison result according to an instruction format, and outputting the control instruction.
27. The speech recognition method of claim 26 wherein the interface content of the current user interface includes an item name of the selected item and an item label corresponding to the item name and a plurality of reference keywords, and the step of comparing the selected item of the current user interface with the semantic analysis result to generate the comparison result comprises:
comparing whether the semantic analysis result matches one of the item name and the plurality of reference keywords to generate the comparison result.
28. The speech recognition method of claim 27, further comprising:
receiving an interface number of the current user interface provided by the application system;
generating the interface content of the current user interface according to the interface number; and
the item name of the selected item, the item label corresponding to the item name, and the reference keywords are obtained from the interface content of the current user interface.
29. The speech recognition method of claim 28, wherein the application system is configured to display the current user interface, and when the application system receives the control command outputted from the speech recognition system, the application system selects the selection item in the interface content of the current user interface according to the control command, so as to switch to display a next user interface or perform a specific operation.
30. The speech recognition method of claim 25 wherein the step of interpreting the speech information to generate the semantic analysis result comprises:
generating possible intention grammar data according to the voice information;
generating a response result according to the keyword of the possible intention grammar data;
generating determined intention grammar data according to the response result; and
and outputting the semantic analysis result according to the determined intention grammar data.
31. The speech recognition method of claim 30 wherein the step of generating the response result according to the keyword comprises:
searching a plurality of records according to the keyword, and acquiring corresponding indication data as the response result according to the searching result.
32. The speech recognition method of claim 31, wherein searching the plurality of records according to the keyword comprises:
and performing full-text retrieval on the plurality of records, and judging the domain attribute of the keyword according to a full-text retrieval result.
33. The speech recognition method of claim 32, wherein searching the plurality of records according to the keyword further comprises:
comparing the value of at least one of a popularity field, a likes field, and a dislikes field of each of the records to determine the search result.
34. The speech recognition method of claim 31 wherein the step of outputting the semantic analysis results according to the determined intention grammar data comprises:
comparing the response result with the intention data of the possible intention grammar data to generate the determined intention grammar data.
35. A speech recognition method adapted for an instruction generation system, wherein the instruction generation system is adapted to communicate with an application system, and wherein the application system is configured to receive speech input, wherein the speech recognition method comprises:
receiving a semantic analysis result corresponding to the voice input;
comparing the selected items in the interface content of the current user interface by using the semantic analysis result to generate a comparison result; and
converting the comparison result according to an instruction format, and outputting a control instruction to the application system.
36. The speech recognition method of claim 35, wherein the interface content of the current user interface includes an item name of the selected item and a plurality of reference keywords corresponding to the item name, and the step of comparing the selected item of the current user interface with the semantic analysis result to generate the comparison result comprises:
comparing whether the semantic analysis result matches one of the item name and the plurality of reference keywords to generate the comparison result.
37. The speech recognition method of claim 35, further comprising:
receiving an interface number of the current user interface provided by the application system;
generating the interface content of the current user interface according to the interface number; and
the selection item is obtained from the interface content of the current user interface.
38. The speech recognition method of claim 37, wherein the application system is configured to display the current user interface, and when the application system receives the outputted control instruction, the application system selects the selected item in the interface content of the current user interface according to the control instruction, so as to switch to display a next user interface or perform a specific operation.
CN202011026628.0A 2020-09-25 2020-09-25 Speech recognition system, instruction generation system and speech recognition method thereof Pending CN112216278A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011026628.0A CN112216278A (en) 2020-09-25 2020-09-25 Speech recognition system, instruction generation system and speech recognition method thereof
TW109136554A TWI780502B (en) 2020-09-25 2020-10-21 Speech recognition system, command generation system, and speech recognition method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011026628.0A CN112216278A (en) 2020-09-25 2020-09-25 Speech recognition system, instruction generation system and speech recognition method thereof

Publications (1)

Publication Number Publication Date
CN112216278A true CN112216278A (en) 2021-01-12

Family

ID=74051202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011026628.0A Pending CN112216278A (en) 2020-09-25 2020-09-25 Speech recognition system, instruction generation system and speech recognition method thereof

Country Status (2)

Country Link
CN (1) CN112216278A (en)
TW (1) TWI780502B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080059191A1 (en) * 2006-09-04 2008-03-06 Fortemedia, Inc. Method, system and apparatus for improved voice recognition
CN104615052A (en) * 2015-01-15 2015-05-13 深圳乐投卡尔科技有限公司 Android vehicle navigation global voice control device and Android vehicle navigation global voice control method
US20180133900A1 (en) * 2016-11-15 2018-05-17 JIBO, Inc. Embodied dialog and embodied speech authoring tools for use with an expressive social robot
CN108877796A (en) * 2018-06-14 2018-11-23 合肥品冠慧享家智能家居科技有限责任公司 The method and apparatus of voice control smart machine terminal operation
CN109830239A (en) * 2017-11-21 2019-05-31 群光电子股份有限公司 Voice processing apparatus, voice recognition input systems and voice recognition input method
CN110232919A (en) * 2019-06-19 2019-09-13 北京智合大方科技有限公司 Real-time voice stream extracts and speech recognition system and method
CN110895931A (en) * 2019-10-17 2020-03-20 苏州意能通信息技术有限公司 VR (virtual reality) interaction system and method based on voice recognition
US20200234700A1 (en) * 2017-07-14 2020-07-23 Cognigy Gmbh Method for conducting dialog between human and computer

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10387899B2 (en) * 2016-10-26 2019-08-20 New Relic, Inc. Systems and methods for monitoring and analyzing computer and network activity
TWI690811B (en) * 2019-03-26 2020-04-11 中華電信股份有限公司 Intelligent Online Customer Service Convergence Core System


Also Published As

Publication number Publication date
TW202213086A (en) 2022-04-01
TWI780502B (en) 2022-10-11

Similar Documents

Publication Publication Date Title
US9477656B1 (en) Cross-lingual indexing and information retrieval
RU2643467C1 (en) Comparison of layout similar documents
US5099426A (en) Method for use of morphological information to cross reference keywords used for information retrieval
JP3272288B2 (en) Machine translation device and machine translation method
US7634720B2 (en) System and method for providing context to an input method
JP3152871B2 (en) Dictionary search apparatus and method for performing a search using a lattice as a key
KR101522049B1 (en) Coreference resolution in an ambiguity-sensitive natural language processing system
US20060195435A1 (en) System and method for providing query assistance
US20030004941A1 (en) Method, terminal and computer program for keyword searching
JP2010198644A (en) Blinking annotation callout highlighting cross language search result
EP2162833A1 (en) A method, system and computer program for intelligent text annotation
US10642589B2 (en) Extensibility in a database system
KR20210097347A (en) Method for image searching based on artificial intelligence and apparatus for the same
US9129016B1 (en) Methods and apparatus for providing query parameters to a search engine
US5899989A (en) On-demand interface device
CN113419721B (en) Web-based expression editing method, device, equipment and storage medium
US8433729B2 (en) Method and system for automatically generating a communication interface
JP3163141B2 (en) Relational database processing device and processing method
CN112216278A (en) Speech recognition system, instruction generation system and speech recognition method thereof
KR20020052142A (en) Converting method for converting documents between different locales
JP3714723B2 (en) Document display system
JP4283038B2 (en) Document registration device, document search device, program, and storage medium
CN116991969B (en) Method, system, electronic device and storage medium for retrieving configurable grammar relationship
Henrich et al. LISGrammarChecker: Language Independent Statistical Grammar Checking
JPH0635971A (en) Document retrieving device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination