CN116705019A - Flow automation system based on man-machine interaction - Google Patents
- Publication number
- CN116705019A (application number CN202310588346.7A)
- Authority
- CN
- China
- Prior art keywords
- information
- unit
- text information
- signal
- interactive user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3343—Query execution using phonetics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/54—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for retrieval
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/225—Feedback of the input speech
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Economics (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Signal Processing (AREA)
- Public Health (AREA)
- Water Supply & Treatment (AREA)
- Human Resources & Organizations (AREA)
- Marketing (AREA)
- Primary Health Care (AREA)
- Strategic Management (AREA)
- Tourism & Hospitality (AREA)
- General Business, Economics & Management (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application discloses a process automation system based on man-machine interaction, comprising an information acquisition unit, a voice recognition unit, an information processing unit and a result output unit. The system relates to the technical field of man-machine interaction automation and addresses the technical problem that some questions cannot be answered, which degrades the experience of subsequent interactive users.
Description
Technical Field
The application relates to the technical field of man-machine interaction automation, in particular to a process automation system based on man-machine interaction.
Background
Language is an important means of communication in human society and one of the most effective. With the rapid advance of smart grid construction, speech recognition, a front-end technology with distinctly intelligent characteristics, will certainly see wide application and research in grid scenarios. In power systems, speech-related technologies such as voice alarms and speech synthesis have been applied for a long time, yet the application of speech recognition technology still clearly lags behind its development in other mature industries.
Existing man-machine interaction process automation systems make it convenient for interactive users to submit queries. In use, however, some questions posed by an interactive user cannot be answered, and when subsequent interactive users consult the same question it remains unanswered, which reduces the interactive users' experience.
Disclosure of Invention
To address the defects of the prior art, the application provides a process automation system based on man-machine interaction, which solves the problems that some questions cannot be answered and that the subsequent user experience is degraded.
In order to achieve the above purpose, the application is realized by the following technical scheme: a process automation system based on human-machine interaction, comprising:
the information acquisition unit is used for acquiring basic information of a target object, wherein the target object is an interactive user, the basic information comprises audio information, and the acquired basic information of the target object is transmitted to the voice recognition unit;
the voice recognition unit is used for recognizing the acquired audio information to generate corresponding text information and transmitting the text information to the information confirmation unit;
the information confirming unit is used for confirming the acquired text information, generating a confirming signal and an unacknowledged signal, transmitting the text information corresponding to the confirming signal to the information processing unit, and reversely transmitting the text information corresponding to the unacknowledged signal to the voice recognition unit;
the information processing unit is used for identifying the acquired text information, generating corresponding result information, displaying the result information to an interactive user to correspondingly generate a solution signal and an unresolved signal, transmitting the solution signal to the result output unit, and transmitting the unresolved signal to the manual switching unit;
the manual switching unit is used for assigning the acquired text information to personnel;
the result output unit is used for acquiring result information corresponding to the solution signal, displaying the result information to a corresponding interactive user through the display equipment, and transmitting the result information to the user feedback unit;
and the user feedback unit is used for evaluating the acquired result information, generating an evaluation result and transmitting the evaluation result to the information storage unit.
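The unit chain above can be sketched as a simple pipeline. Every callable in the sketch below is an illustrative stand-in for one of the units; none of the function names come from the patent itself:

```python
def run_pipeline(acquire_audio, recognize, confirm, process, manual_assign, show, feedback):
    """End-to-end sketch of the unit chain; all callables are assumed stand-ins."""
    audio = acquire_audio()                   # information acquisition unit
    text = recognize(audio)                   # voice recognition unit
    while not confirm(text):                  # information confirmation unit
        audio = acquire_audio()               # unconfirmed: re-run recognition
        text = recognize(audio)
    resolved, result = process(text)          # information processing unit
    if not resolved:
        result = manual_assign(text)          # manual switching unit
    show(result)                              # result output unit
    return feedback(result)                   # user feedback unit
```

A caller would plug in real recognition and matching functions for the stand-ins; the control flow (confirm loop, solution versus manual handoff, feedback at the end) mirrors the unit descriptions above.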
As a further aspect of the application: the information confirming unit confirms the text information specifically as follows:
s1: the corresponding text information is acquired and displayed to the corresponding interactive user on a screen with a man-machine touch function; the screen shows two touch keys, "correct" and "incorrect", and the interactive user selects one according to the displayed text;
s2: if the interactive user confirms that the text information is correct, the system automatically generates a confirmation signal and transmits the corresponding text information to the information processing unit;
s3: if the interactive user confirms that the text information is incorrect, the system generates an error signal and presents two options, "re-identification" and "manual input modification", then acquires the user's selection. If "re-identification" is selected, the system generates a secondary identification signal and transmits it back to the voice recognition unit, which prompts the user to speak again, reacquires the audio information and generates new text information; s1 is then repeated until the interactive user confirms the text information is correct. If "manual input modification" is selected, the system generates a modification signal and displays a corresponding prompt to the interactive user.
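The confirmation flow in s1 to s3 amounts to a small loop. The sketch below is a minimal illustration under assumed helper callables (`recognize`, `ask_user`, `prompt_reinput` are hypothetical, as is the `max_rounds` cap, which the patent does not specify):

```python
def confirm_text(recognize, ask_user, prompt_reinput, max_rounds=3):
    """Sketch of the s1-s3 confirmation loop.

    recognize()      -> str: runs speech recognition (assumed helper)
    ask_user(text)   -> "correct", "re-identify", or "manual" (touch-key choice)
    prompt_reinput() -> None: asks the user to speak again (assumed helper)
    """
    text = recognize()
    for _ in range(max_rounds):
        choice = ask_user(text)              # s1: display text, get touch-key choice
        if choice == "correct":              # s2: confirmation signal
            return ("confirmation", text)
        if choice == "re-identify":          # s3: secondary identification signal
            prompt_reinput()
            text = recognize()               # reacquire audio, regenerate text
        elif choice == "manual":             # s3: modification signal
            return ("modification", None)
    return ("modification", None)            # give up and fall back to manual entry
```

The round cap is a design choice of the sketch, not the patent: without it, a persistently misrecognized user could loop forever instead of being routed to manual input.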
As a further aspect of the application: the specific way of the information processing unit identifying the text information and generating the signal is as follows:
w1: the text information is acquired and matched against the stored information data; when an entry in the information data matches the text information, the system generates a solution signal and displays the matched result to the interactive user;
w2: when no entry in the information data matches the text information, the system generates an unresolved signal, transmits it to the manual switching unit, and then transmits the text information to the information storage unit.
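A minimal sketch of the w1/w2 matching step, assuming the stored information data is a simple question-to-answer dictionary (an assumption; the patent does not specify the matching algorithm or data structure):

```python
def process_text(text, info_data):
    """w1/w2: match recognized text against stored information data.

    info_data: dict mapping known question text -> answer (assumed structure).
    Returns ("solution", answer) or ("unresolved", text).
    """
    key = text.strip().lower()               # naive normalization for lookup
    if key in info_data:                     # w1: match found -> solution signal
        return ("solution", info_data[key])
    return ("unresolved", text)              # w2: no match -> manual switching unit
```

A production system would use fuzzy or semantic matching rather than exact dictionary lookup; the exact-match dictionary here only illustrates the solution/unresolved branching.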
As a further aspect of the application: the specific modes of the user feedback unit obtaining the result information and generating the evaluation result are as follows:
If the interactive user rates the result as satisfactory, the system performs no further processing; if the user rates it as unsatisfactory, the system marks that user as unsatisfied and acquires the text information corresponding to the interactive user.
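The feedback rule above can be sketched as follows; the list-based log of unsatisfied questions is an illustrative assumption:

```python
def handle_feedback(rating, user_text, unsatisfied_log):
    """User feedback unit: keep the question text only for unsatisfied ratings.

    rating: "satisfactory" or "unsatisfactory"
    unsatisfied_log: mutable list standing in for the information storage unit.
    """
    if rating == "satisfactory":
        return None                        # no further processing
    unsatisfied_log.append(user_text)      # mark as unsatisfied, retain the question
    return user_text
```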
As a further aspect of the application: and the information storage unit is used for classifying the acquired text information of the unsatisfied user to obtain classified information and transmitting the classified information to the information output unit.
Advantageous effects
The application provides a process automation system based on man-machine interaction. Compared with the prior art, the method has the following beneficial effects:
according to the application, the audio of the interactive user is identified by adopting a man-machine interaction mode, the identified information is displayed to the interactive user, the interactive user further confirms, the interactive user selects the problem according to the situation of identification errors, so that the accuracy of identification in the man-machine interaction process is improved, the problem which cannot be identified is solved by a manual answer mode, meanwhile, the problem which cannot be answered is stored and classified, and the operator carries out subsequent answer filling, so that the automatic solution to the problem of the interactive user is realized, the manual labor force is reduced, and the problem answer range of the man-machine interaction is enriched by the storage and subsequent filling of the problem, so that the subsequent use experience of the interactive user is improved.
Drawings
FIG. 1 is a block diagram of a system of the present application.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present application with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the scope of the application.
Referring to fig. 1, the present application provides a process automation system based on man-machine interaction, comprising:
The information acquisition unit is used for acquiring basic information of a target object, where the target object is an interactive user and the basic information comprises audio information, specifically Mandarin speech; the acquired basic information of the target object is transmitted to the voice recognition unit.
The voice recognition unit is used for recognizing the acquired audio information to generate corresponding text information; this technology is prior art and is not repeated here. The text information is transmitted to the information confirmation unit.
The information confirming unit is used for confirming the acquired text information and generating a confirming signal and an unacknowledged signal, transmitting the text information corresponding to the confirming signal to the information processing unit, and reversely transmitting the text information corresponding to the unacknowledged signal to the voice recognition unit, wherein the specific confirming mode is as follows:
s1: the corresponding text information is acquired and displayed to the corresponding interactive user on a screen with a man-machine touch function; the screen shows two touch keys, "correct" and "incorrect", and the interactive user selects one according to the displayed text;
s2: if the interactive user confirms that the text information is correct, the system automatically generates a confirmation signal and transmits the corresponding text information to the information processing unit;
s3: if the interactive user confirms that the text information is incorrect, the system generates an error signal and presents two options, "re-identification" and "manual input modification", then acquires the user's selection. If "re-identification" is selected, the system generates a secondary identification signal and transmits it back to the voice recognition unit, which prompts the user to speak again, reacquires the audio information and generates new text information; s1 is then repeated until the interactive user confirms the text information is correct. If "manual input modification" is selected, the system generates a modification signal and displays a corresponding prompt to the interactive user.
The information storage unit stores various information data and is equivalent to a database holding different entries. The information data is transmitted to the information processing unit, and the information storage unit is bidirectionally electrically connected with the information processing unit.
The information processing unit is used for identifying the acquired text information and generating corresponding result information, then displaying the result information to an interactive user to correspondingly generate a solution signal and an unresolved signal, transmitting the solution signal to the result output unit and transmitting the unresolved signal to the manual switching unit, wherein the specific signal generation mode is as follows:
w1: the text information is acquired and matched against the stored information data; when an entry in the information data matches the text information, the system generates a solution signal and displays the matched result to the interactive user;
w2: when no entry in the information data matches the text information, the system generates an unresolved signal, transmits it to the manual switching unit, and then transmits the text information to the information storage unit.
The manual switching unit is used for assigning the acquired text information to personnel; the system assigns personnel at random.
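Random assignment to personnel, as described, might look like the following sketch; the agent list and the returned record structure are hypothetical:

```python
import random

def assign_to_agent(text, agents, rng=random):
    """Manual switching unit: randomly assign an unresolved question to an agent.

    agents: list of agent identifiers (hypothetical); rng allows injecting a
    seeded random.Random for reproducible tests.
    """
    if not agents:
        raise ValueError("no agents available")
    agent = rng.choice(agents)
    return {"agent": agent, "question": text}
```

Pure random assignment ignores agent load; a real system would likely balance by queue length, but the patent only states that distribution is random.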
The result output unit is used for acquiring result information corresponding to the solution signal, displaying the result information to a corresponding interactive user through the display equipment, and transmitting the result information to the user feedback unit.
The user feedback unit is used for evaluating the acquired result information, generating an evaluation result and transmitting it to the information storage unit. The specific generation mode is as follows: if the interactive user rates the result as satisfactory, the system performs no further processing; if the user rates it as unsatisfactory, the system marks that user as unsatisfied, acquires the text information corresponding to the interactive user and transmits it to the information storage unit.
The information storage unit is used for classifying the acquired text information of unsatisfied users to obtain classification information and transmitting it to the information output unit; subsequent operators answer the corresponding questions based on the displayed classification information.
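Classification of unanswered questions could be sketched with simple keyword buckets; the category names and keyword lists below are illustrative assumptions, since the patent does not specify a classification method:

```python
def classify(text, categories):
    """Information storage unit: bucket an unanswered question by keyword.

    categories: dict mapping category name -> list of keywords (assumed schema).
    Returns the first matching category name, or "other" if none matches.
    """
    lowered = text.lower()
    for name, keywords in categories.items():
        if any(k in lowered for k in keywords):
            return name
    return "other"
```

Operators would then review each bucket and fill in answers, enriching the information data consulted in step w1.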
Content not described in detail in this specification is well known in the prior art.
The above embodiments are only for illustrating the technical method of the present application and not for limiting the same, and it should be understood by those skilled in the art that the technical method of the present application may be modified or substituted without departing from the spirit and scope of the technical method of the present application.
Claims (5)
1. A process automation system based on human-machine interaction, comprising:
the information acquisition unit is used for acquiring basic information of a target object, wherein the target object is an interactive user, the basic information comprises audio information, and the acquired basic information of the target object is transmitted to the voice recognition unit;
the voice recognition unit is used for recognizing the acquired audio information to generate corresponding text information and transmitting the text information to the information confirmation unit;
the information confirming unit is used for confirming the acquired text information, generating a confirming signal and an unacknowledged signal, transmitting the text information corresponding to the confirming signal to the information processing unit, and reversely transmitting the text information corresponding to the unacknowledged signal to the voice recognition unit;
the information processing unit is used for identifying the acquired text information, generating corresponding result information, displaying the result information to an interactive user to correspondingly generate a solution signal and an unresolved signal, transmitting the solution signal to the result output unit, and transmitting the unresolved signal to the manual switching unit;
the manual switching unit is used for assigning the acquired text information to personnel;
the result output unit is used for acquiring result information corresponding to the solution signal, displaying the result information to a corresponding interactive user through the display equipment, and transmitting the result information to the user feedback unit;
and the user feedback unit is used for evaluating the acquired result information, generating an evaluation result and transmitting the evaluation result to the information storage unit.
2. The process automation system based on man-machine interaction according to claim 1, wherein the information confirmation unit confirms the text information in the following manner:
s1: acquiring corresponding text information and displaying it to the corresponding interactive user, the interactive user confirming the acquired text information;
s2: if the interactive user confirms that the text information is correct, the system automatically generates a confirmation signal and transmits the corresponding text information to the information processing unit;
s3: if the interactive user confirms that the text information is incorrect, the system generates an error signal and presents a "re-identification" option and a "manual input modification" option, then acquires the user's selection; if "re-identification" is selected, the system generates a secondary identification signal and transmits it back to the voice recognition unit, which reacquires the audio information; if "manual input modification" is selected, the system generates a modification signal and displays a corresponding prompt to the interactive user.
3. The process automation system based on man-machine interaction according to claim 1, wherein the specific way of the information processing unit identifying text information and generating signals is as follows:
w1: the text information is acquired and matched against the stored information data; when an entry in the information data matches the text information, the system generates a solution signal and displays the matched result to the interactive user;
w2: when no entry in the information data matches the text information, the system generates an unresolved signal, transmits it to the manual switching unit, and then transmits the text information to the information storage unit.
4. The process automation system based on man-machine interaction according to claim 1, wherein the specific way for the user feedback unit to obtain the result information and generate the evaluation result is:
if the interactive user rates the result as satisfactory, the system performs no further processing; if the user rates it as unsatisfactory, the system marks that user as unsatisfied and acquires the text information corresponding to the interactive user.
5. The process automation system based on man-machine interaction according to claim 1, wherein the information storage unit is configured to classify the acquired text information of the unsatisfied user to obtain classification information, and transmit the classification information to the information output unit.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310588346.7A CN116705019A (en) | 2023-05-24 | 2023-05-24 | Flow automation system based on man-machine interaction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310588346.7A CN116705019A (en) | 2023-05-24 | 2023-05-24 | Flow automation system based on man-machine interaction |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116705019A true CN116705019A (en) | 2023-09-05 |
Family
ID=87836559
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310588346.7A Pending CN116705019A (en) | 2023-05-24 | 2023-05-24 | Flow automation system based on man-machine interaction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116705019A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117891351A (en) * | 2024-03-14 | 2024-04-16 | 北京太一云科技有限公司 | Virtual-real interaction method and system for universe cross-screen |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106875941B (en) | Voice semantic recognition method of service robot | |
US11494161B2 (en) | Coding system and coding method using voice recognition | |
CN109840318B (en) | Filling method and system for form item | |
CN111145510B (en) | Alarm receiving processing method, device and equipment | |
US6651045B1 (en) | Intelligent human/computer interface system | |
CN116705019A (en) | Flow automation system based on man-machine interaction | |
CN111858877A (en) | Multi-type question intelligent question answering method, system, equipment and readable storage medium | |
CN111179928A (en) | Intelligent control method for power transformation and distribution station based on voice interaction | |
CN204229230U (en) | For the Intelligent Mobile Robot of automatic meter reading | |
CN110288995B (en) | Interaction method and device based on voice recognition, storage medium and electronic equipment | |
CN111179935B (en) | Voice quality inspection method and device | |
JPH02277196A (en) | Man-machine system | |
CN105468468A (en) | Data error correction method and apparatus facing question answering system | |
CN115512696A (en) | Simulation training method and vehicle | |
CN109961789B (en) | Service equipment based on video and voice interaction | |
CN115019411A (en) | Routing inspection system and method based on voice interaction | |
CN111695763B (en) | Scheduling system and method based on voice question and answer | |
KR101248323B1 (en) | Method and Apparatus for providing Ubiquitous Smart Parenting and Customized Education Service, and Recording medium thereof | |
CN113591463A (en) | Intention recognition method and device, electronic equipment and storage medium | |
CN117456995A (en) | Interactive method and system of pension service robot | |
CN112069833A (en) | Log analysis method, log analysis device and electronic equipment | |
CN115688758A (en) | Statement intention identification method and device and storage medium | |
CN115582637A (en) | Automatic detection system for laser cutting missing process | |
CN114116972A (en) | Processing system of transformer knowledge intelligent question-answer model based on BilSTM | |
CN112086091A (en) | Intelligent endowment service system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||