CN108145714B - Distributed control system of service type robot - Google Patents


Info

Publication number
CN108145714B
Authority
CN
China
Prior art keywords
service
voice
interface
application
cloud
Prior art date
Legal status
Active
Application number
CN201810014623.2A
Other languages
Chinese (zh)
Other versions
CN108145714A (en)
Inventor
唐志福
方继勇
李玺华
Current Assignee
Ningbo GQY Video & Telecom Joint Stock Co., Ltd.
Original Assignee
Ningbo GQY Video & Telecom Joint Stock Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Ningbo GQY Video & Telecom Joint Stock Co., Ltd.
Priority to CN201810014623.2A
Publication of CN108145714A
Application granted
Publication of CN108145714B
Status: Active

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 - Programme-controlled manipulators
    • B25J 9/16 - Programme controls
    • B25J 9/1602 - Programme controls characterised by the control system, structure, architecture
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 11/00 - Manipulators not otherwise provided for
    • B25J 11/008 - Manipulators for service tasks

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a distributed control system for a service robot, comprising: an upper computer subsystem, which provides the software functions of the service robot, presents the service content the robot offers to the outside, and acts as a proxy for the cloud service group; a lower computer subsystem, which communicates with the upper computer subsystem through a CAN bus, receives behavior instructions issued by the upper computer subsystem, controls the service robot to execute the corresponding actions, and feeds back the execution results and the state of each node of the service robot to the upper computer subsystem; and a cloud service group, which communicates wirelessly with the upper computer subsystem and provides remote maintenance of the service robot as well as various service clouds offered by third-party enterprises. The invention integrates and fuses various basic intelligent applications, adopts distributed control, and offers good extensibility, modularity and maintainability.

Description

Distributed control system of service type robot
Technical Field
The invention relates to the field of robots, in particular to a distributed control system of a service type robot.
Background
In recent years, basic artificial intelligence algorithms have advanced rapidly, and many intelligent applications can now largely meet people's everyday needs. At the same time, faster network bandwidth and greater processor computing power allow many complex computations to be moved to the cloud, and basic applications such as speech recognition, image recognition and natural language processing have flourished. With the development of these basic applications, the intelligence of service robots has improved considerably. This in turn challenges the robot's software control system, which must be designed for flexibility, easy extension and good maintainability. Existing software control systems, however, tend to be overly simple and centralized.
Disclosure of Invention
The invention aims to provide a distributed control system for a service robot that integrates and fuses various basic intelligent applications, adopts distributed control, and offers good extensibility, modularity and maintainability.
The technical scheme provided by the invention is as follows:
a distributed control system of a service robot comprises an upper computer subsystem, a lower computer subsystem and a cloud service group; the upper computer subsystem is used for providing software functions of the service type robot, presenting external service contents of the service type robot and proxy services of the cloud service group; the lower computer subsystem is communicated with the upper computer subsystem through a CAN bus and is used for receiving a behavior instruction sent by the upper computer subsystem and controlling the service robot to execute corresponding actions; feeding back the execution result of the action and the state of each node of the service type robot to the upper computer subsystem; the cloud service group is in wireless communication with the upper computer subsystem, and provides remote maintenance of the service robot and various service clouds provided by third-party enterprises.
In this solution, the upper computer subsystem acts as the brain of the service robot and integrates and fuses various basic intelligent applications; the lower computer subsystem acts as the trunk and limbs of the service robot and cooperates with the upper computer subsystem to carry out the various actions; and when a customer's scenario requirement exceeds the capabilities of the applications integrated on the robot, the service robot can turn to the cloud service group for help. The solution adopts distributed control and a modular hardware structure, and is easy to extend and maintain.
Further, the upper computer subsystem comprises an interface main application, a plurality of interface sub-applications, a module layer, a core layer, a monitoring layer and a behavior scheduler. The module layer includes a voice module, a peripheral module, an action module and a scene module. The core layer includes a voice service, a knowledge service, a peripheral service, a face agent service, a navigation agent service, a configuration database, a knowledge base, a maintenance service, a face recognition service and a navigation service. The lower computer subsystem comprises a plurality of MCUs, a plurality of motors and a drive group. The cloud service group comprises a maintenance center and a service cloud; the maintenance center includes a maintenance program and a knowledge base mirror, and the service cloud includes a scene cloud, a chat cloud, a location service cloud, an entertainment cloud, a voice recognition cloud and a banking private cloud.
In this solution, the structures of the upper computer subsystem, the lower computer subsystem and the cloud service group are refined.
Further, the interface main application collects customer input from the main interface to obtain the customer scenario requirement. When the customer scenario needs a voice prompt, the interface main application forwards it to the voice service through the voice module, and the voice service provides the corresponding voice broadcast function. When the customer scenario needs a peripheral service, the interface main application sends the corresponding peripheral service request to the peripheral service through the peripheral module. When the customer scenario needs a behavior service, the interface main application sends the corresponding behavior service request to the behavior scheduler through the action module. When the customer scenario needs a cloud service, the interface main application sends the corresponding cloud service request to the cloud service group through the scene module. When the customer scenario needs a navigation service, the interface main application sends the corresponding navigation service request to the behavior scheduler through the action module.
In this solution, the processing of customer requirements entered through the interface main application is defined.
Further, the system also provides the following: when a customer scenario requirement needs to be handled by an interface sub-application, the interface main application calls the corresponding interface sub-application, which processes the requirement further. When the customer scenario needs a voice prompt, the interface sub-application forwards it to the voice service through the voice module, and the voice service provides the corresponding voice broadcast function. When the customer scenario needs a peripheral service, the interface sub-application sends the corresponding peripheral service request to the peripheral service through the peripheral module. When the customer scenario needs a behavior service, the interface sub-application sends the corresponding behavior service request to the behavior scheduler through the action module. When the customer scenario needs a cloud service, the interface sub-application sends the corresponding cloud service request to the cloud service group through the scene module. When the customer scenario needs a navigation service, the interface sub-application sends the corresponding navigation service request to the behavior scheduler through the action module. When the interface sub-application has completed the customer scenario requirement, it ends its service and returns to the interface main application.
In this solution, the processing of customer requirements entered through an interface sub-application is defined.
Further, when sound is detected, the voice service requests voice recognition from the voice recognition cloud for the collected audio signal to obtain the recognition text; the voice service then requests service from the knowledge service for the recognition text to obtain related knowledge; and from the recognition text and the related knowledge, the voice service obtains the customer scenario requirement, which is forwarded by the voice service and displayed on the screen by the interface main application or the interface sub-application. When the customer scenario needs a voice prompt, the voice service provides the corresponding voice broadcast function. When the customer scenario needs a peripheral service, the interface main application or the interface sub-application, via forwarding by the voice service, sends the corresponding peripheral service request to the peripheral service through the peripheral module. When the customer scenario needs a behavior service, the voice service sends the corresponding behavior service request to the behavior scheduler. When the customer scenario needs a cloud service, the interface main application or the interface sub-application, via forwarding by the voice service, sends the corresponding cloud service request to the cloud service group through the scene module. When the customer scenario needs a navigation service, the voice service sends the corresponding navigation service request to the behavior scheduler.
In this solution, the processing of customer requirements entered by voice is defined.
Further, when face information is detected, the face recognition service performs face recognition to obtain facial feature information and judges whether it matches facial feature information stored in the database. When a match is found, the face recognition service retrieves the customer information from the database and sends it to the face agent service. When the greeting needs to be displayed on the screen, the face agent service forwards the customer information and the greeting to the interface main application or the interface sub-application, which displays them on the screen; when the greeting needs to be broadcast by voice, the face agent service forwards the customer information and the greeting to the voice service, which broadcasts them.
In this solution, a method for obtaining customer information through face recognition is provided.
Further, when the behavior service request is received, the behavior scheduler judges whether the behavior service request is allowed according to the current state of the service type robot; and when the behavior service request is allowed and the lower computer subsystem is required to execute the action, the behavior scheduler performs action decomposition and sends corresponding action requirements to each MCU.
In this solution, the handling of behavior service requests is completed, ensuring that the service robot's behaviors are carried out.
Further, when a cloud service request is received, the cloud service group feeds back response information to the interface main application or the interface sub-application; and when the response information needs to be played in a voice mode, forwarding the response information to the voice service through the interface main application or the interface sub application, and providing a voice broadcasting function of the response information by the voice service.
In this solution, the handling of cloud service requests is completed, ensuring that the service robot's cloud services are delivered.
Further, when a navigation service request is received, the behavior scheduler issues a navigation instruction to the navigation service, and the navigation service outputs speed information to the lower computer subsystem to guide the robot's movement. When the service robot encounters an obstacle during navigation, a navigation exception is reported to the navigation agent service; when the service robot arrives at the designated place, navigation success is reported to the navigation agent service. When the navigation state needs to be displayed, the navigation agent service displays it on the screen through the interface main application or the interface sub-application; when the navigation state needs to be played, the navigation agent service broadcasts it through the voice service.
In this solution, the handling of navigation service requests is completed, ensuring that the service robot's navigation services are delivered.
Further, the maintenance center periodically modifies and updates the knowledge base mirror image and pushes the knowledge base mirror image to the upper computer subsystem so as to update the knowledge base.
In this solution, continuously updating the knowledge base makes the service robot increasingly intelligent.
The distributed control system of the service type robot provided by the invention can bring at least one of the following beneficial effects:
1. the invention integrates and fuses various intelligent basic applications, adopts distributed control, and has good expansibility, modularization and easy maintenance.
2. The invention can automatically update the knowledge base, making the service robot increasingly intelligent.
Drawings
The above features, technical features, advantages and modes of realisation of a distributed control system for a service robot will be further described in the following, in a clearly understandable manner, with reference to the accompanying drawings, which illustrate preferred embodiments.
FIG. 1 is a schematic structural diagram of an embodiment of a distributed control system of a service robot according to the present invention;
FIG. 2 is a schematic structural diagram of another embodiment of a service robot in the distributed control system of the service robot according to the present invention;
FIG. 3 is a schematic structural diagram of another embodiment of a cloud service group in the distributed control system of the service robot according to the present invention.
The reference numbers illustrate:
1000. service robot; 1100. upper computer subsystem; 1110. interface main application; 1120. interface sub-application; 1130. core layer; 1131. face agent service; 1132. peripheral service; 1133. voice service; 1134. knowledge service; 1135. navigation agent service; 1136. maintenance service; 1137. configuration database; 1138. knowledge base; 1139. face recognition service; 1140. navigation service; 1150. monitoring layer; 1160. module layer; 1161. voice module; 1162. peripheral module; 1163. action module; 1164. scene module; 1170. behavior scheduler; 1200. lower computer subsystem; 1210. MCU; 1220. motor; 1230. drive group; 2000. cloud service group; 2100. service cloud; 2110. scene cloud; 2120. location service cloud; 2130. entertainment cloud; 2140. chat cloud; 2150. banking private cloud; 2160. voice recognition cloud; 2200. maintenance center; 2210. maintenance program; 2220. knowledge base mirror.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure as a product. In addition, in order to make the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically illustrated or only labeled. In this document, "one" means not only "only one" but also a case of "more than one".
In an embodiment of the present invention, embodiment 1, as shown in fig. 1, a distributed control system for a service robot includes an upper computer subsystem 1100, a lower computer subsystem 1200, and a cloud service group 2000;
the upper computer subsystem 1100 is configured to provide software functions of the service robot 1000, present external service contents of the service robot 1000, and provide proxy services of the cloud service group 2000;
the lower computer subsystem 1200 communicates with the upper computer subsystem 1100 through a CAN bus and is configured to receive the behavior instructions issued by the upper computer subsystem 1100, control the service robot 1000 to execute the corresponding actions, and feed back the execution results and the state of each node of the service robot 1000 to the upper computer subsystem 1100;
the cloud service group 2000 wirelessly communicates with the upper computer subsystem 1100, and provides remote maintenance of the service robot 1000 and various service clouds 2100 provided by third-party enterprises.
Specifically, the upper computer subsystem acts as the brain of the service robot: it integrates and fuses various basic intelligent applications, including applications developed by the robot developer and applications developed by third-party enterprises, and provides all of the robot's software functions and the service content presented to the outside. The lower computer subsystem acts as the trunk and limbs of the service robot: it cooperates with the upper computer subsystem to execute the robot's actions and feeds back the execution results and the state of each low-level node. For example, when the upper computer subsystem asks the service robot to greet a customer with a smile, actions such as smiling and speaking are carried out by the lower computer subsystem. The lower computer subsystem and the upper computer subsystem communicate over a CAN bus. When a customer requirement exceeds the capabilities of the applications integrated on the robot, the service robot can also turn to the cloud service group for help; the upper computer subsystem communicates with the cloud service group through a wireless connection to the Internet. The cloud service group not only provides remote maintenance of the service robot, but also gives access to various service clouds that are provided by third-party enterprises and not integrated on the robot.
The scheme adopts distributed control and a modular structure, provides an interface for accessing an application or service cloud developed by a third-party enterprise, and is easy to expand and maintain.
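As a rough illustration of the split between the two subsystems, the following Python sketch (using the python-can library) shows how the upper computer side might push one decomposed action requirement over the CAN bus to an MCU node and read back a status frame. The arbitration IDs, payload layout and SocketCAN channel are assumptions made for illustration; the patent does not specify a CAN frame format.

import can  # python-can

ACTION_FRAME_ID = 0x101   # hypothetical ID for "action requirement" frames
STATUS_FRAME_ID = 0x201   # hypothetical ID for MCU status feedback

def send_action(bus: can.BusABC, joint_id: int, target_angle: int) -> None:
    """Upper computer side: issue one decomposed action requirement to an MCU."""
    payload = bytes([joint_id, (target_angle >> 8) & 0xFF, target_angle & 0xFF])
    bus.send(can.Message(arbitration_id=ACTION_FRAME_ID, data=payload,
                         is_extended_id=False))

def poll_status(bus: can.BusABC, timeout: float = 0.5):
    """Upper computer side: read one execution-result/status frame, if any."""
    msg = bus.recv(timeout)
    if msg is not None and msg.arbitration_id == STATUS_FRAME_ID:
        return {"joint_id": msg.data[0], "ok": bool(msg.data[1])}
    return None

if __name__ == "__main__":
    # 'can0'/'socketcan' assumes a Linux SocketCAN interface; adjust per hardware.
    with can.interface.Bus(channel="can0", interface="socketcan") as bus:
        send_action(bus, joint_id=3, target_angle=900)  # e.g. 90.0 degrees * 10
        print(poll_status(bus))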
In another embodiment of the present invention, as shown in fig. 2 and 3, embodiment 2 is a distributed control system for a service robot, including an upper computer subsystem 1100, a lower computer subsystem 1200 and a cloud service group 2000;
the upper computer subsystem 1100 is configured to provide software functions of the service robot 1000, present external service contents of the service robot 1000, and provide proxy services of the cloud service group 2000;
the lower computer subsystem 1200 communicates with the upper computer subsystem 1100 through a CAN bus and is configured to receive the behavior instructions issued by the upper computer subsystem 1100, control the service robot 1000 to execute the corresponding actions, and feed back the execution results and the state of each node of the service robot 1000 to the upper computer subsystem 1100;
the cloud service group 2000, which communicates with the upper computer subsystem 1100 wirelessly, provides remote maintenance of the service robot 1000 and various service clouds 2100 provided by third-party enterprises;
the upper computer subsystem 1100 comprises an interface main application 1110, a plurality of interface sub-applications 1120, a module layer 1160, a core layer 1130, a monitoring layer 1150 and a behavior scheduler 1170; the module layer 1160 includes: a voice module 1161, a peripheral module 1162, an action module 1163 and a scene module 1164; the core layer 1130 includes: voice service 1133, knowledge service 1134, peripheral service 1132, face agent service 1131, navigation agent service 1135, configuration database 1137, knowledge base 1138, maintenance service 1136, face recognition service 1139, navigation service 1140;
the lower computer subsystem 1200 includes a plurality of MCUs 1210, a plurality of motors 1220, and a driving group 1230;
the cloud service group 2000 comprises a maintenance center 2200 and a service cloud 2100; the maintenance center 2200 includes: maintenance program 2210 and repository mirror 2220; the service cloud 2100 includes: scene cloud 2110, chat cloud 2140, location services cloud 2120, entertainment cloud 2130, voice recognition cloud 2160, and banking private cloud 2150.
Specifically, compared with the previous embodiment, this embodiment refines the upper computer subsystem, the lower computer subsystem and the cloud service group.
The upper computer subsystem adopts the classic MVC (Model-View-Controller) software architecture: it organizes code by separating business logic, data and interface display, and gathers the business logic into components, so that the business logic does not have to be rewritten when the interface and user interaction are improved or customized. The behavior scheduler belongs to the controller layer in MVC, the core layer is the model layer, and the interface main application and the interface sub-applications form the view layer. The upper computer subsystem runs on three operating systems (Windows, Android and Linux), which form an Ethernet local area network through a router. The Windows and/or Android operating system is responsible for human-machine interaction and runs the various interactive basic intelligent applications and basic application proxy services; the Linux operating system is responsible for behavior coordination and scheduling and integrates a large number of complex algorithms. The Linux system is also responsible for communicating with the lower computer subsystem.
The upper computer subsystem comprises the interface main application, a plurality of interface sub-applications, the core layer, the module layer, the monitoring layer and the behavior scheduler. The interface main application and the interface sub-applications run on Windows and/or Android, are responsible for human-machine interaction through the interface, and run the various interactive basic applications, for example collecting requirements entered through the interface and displaying results on it. The interface main application is provided by the robot developer, while an interface sub-application is generally provided by a third-party enterprise; industry-specific applications, such as a transfer sub-application for banking business or a VIP (very important person) recognition sub-application based on face recognition, are integrated into the upper computer subsystem in this way. The interface main application and each interface sub-application have their own interfaces, and the main interface, i.e. the interface of the interface main application, is presented by default. The interface main application normally runs in the foreground; only when it wakes up an interface sub-application does the display switch to that sub-application's interface, at which point the sub-application runs in the foreground and the main application runs in the background. When the interface sub-application finishes, control returns to the interface main application, which runs in the foreground again. The relationship between the interface sub-applications and the interface main application reflects the cooperation between the robot developer and third-party enterprises.
The module layer further includes the voice module, the peripheral module, the action module and the scene module, which are shared by the interface main application and the interface sub-applications. The peripheral module proxies the peripheral service for the interface main application or an interface sub-application. The action module proxies the behavior service and the navigation service. The voice module proxies the voice service; for example, when a customer scenario requires a voice prompt, the interface main application or interface sub-application accesses the core layer's voice service through the voice module, and the voice service provides the voice broadcast function. The scene module proxies cloud services, providing the interface through which the interface main application or an interface sub-application accesses the service clouds of the cloud service group.
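As a rough sketch of this proxy role, the following Python fragment shows a scene module forwarding a cloud service request on behalf of an interface application over HTTP. The endpoint URL, payload fields and the use of HTTP are assumptions made for illustration; the patent only states that the scene module provides the interface for accessing the service clouds of the cloud service group.

import requests

CLOUD_BASE_URL = "https://cloud.example.com/api"  # placeholder, not from the patent

class SceneModule:
    """Proxies cloud service requests for the interface main/sub applications."""

    def request_cloud_service(self, service: str, query: dict, timeout: float = 5.0) -> dict:
        resp = requests.post(f"{CLOUD_BASE_URL}/{service}", json=query, timeout=timeout)
        resp.raise_for_status()
        return resp.json()

# Usage: the interface main application asks the banking private cloud for products.
# scene = SceneModule()
# answer = scene.request_cloud_service("banking", {"intent": "list_financial_products"})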
The core layer runs on Windows and/or Android, handles the non-display applications and provides the core services: the voice service, the knowledge service, the peripheral service, the face agent service, the navigation agent service, the configuration database, the knowledge base, the maintenance service, the face recognition service and the navigation service. The voice service provides voice recognition, semantic understanding and speech synthesis; during voice recognition it may call on the voice recognition cloud for assistance. The knowledge service assists with semantic understanding, including understanding interactive commands and business semantics; during semantic understanding the knowledge service supplies related knowledge, such as the category of the recognition result, commands that operate the service robot (for example dancing, singing and navigating), chat, banking business and the like, and semantic understanding decides the next action based on what the knowledge service returns. The peripheral service encapsulates the hardware drivers and offers consistent peripheral services to the interface main application, the interface sub-applications and the voice service, such as printing a transaction result, fingerprint recognition and ID-card reading. The face agent service handles scenario adaptation and policy for face recognition results, and the navigation agent service does the same for navigation results. The configuration database holds configuration data for the deployment site. The knowledge base stores the knowledge queried by the knowledge service and also collects and aggregates the robot's interactive semantic data. The maintenance service periodically sends the collected data and statistics to the maintenance center. The face recognition service uses the camera to capture photos, recognizes faces, extracts feature values, builds the face library, and returns recognition results to the face agent service. The navigation service is based on ROS (the Robot Operating System) and provides mapping and navigation functions.
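The following Python sketch illustrates the kind of lookup the knowledge service might perform during semantic understanding. The knowledge-base entries, keywords and categories are illustrative assumptions; the patent only states that the knowledge service returns the result category (robot command, chat, banking business and so on) together with related knowledge.

KNOWLEDGE_BASE = {
    "dance":             {"category": "robot_command", "action": "dance"},
    "financial product": {"category": "banking", "route": "banking_private_cloud"},
}

def query_knowledge(recognized_text: str) -> dict:
    """Return the best-matching knowledge entry for a recognized utterance."""
    text = recognized_text.lower()
    for keyword, entry in KNOWLEDGE_BASE.items():
        if keyword in text:
            return {"matched": keyword, **entry}
    return {"category": "chat", "route": "chat_cloud"}  # fall back to the chat cloud

# e.g. query_knowledge("please dance") -> {'matched': 'dance', 'category': 'robot_command', 'action': 'dance'}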
The monitoring layer adopts a Windows or Android operating system and is used for monitoring and diagnosing the running conditions of the application layer and the service of the core layer and providing a diagnostic tool and a running log of the service robot.
The behavior scheduler runs on the Linux operating system and handles behavior scheduling for the service robot; the applications running on it can be modified or configured to suit a particular service setting. The behavior scheduler is the robot's behavior control and coordination center and the aggregation point for each module's operating state and for sensor states: it receives behavior service requests, controls and coordinates the robot's behavior according to the robot's current state, feeds back the working state of each module and the sensor states, and communicates with the lower computer subsystem over the CAN bus.
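The admission check and action decomposition performed by the behavior scheduler could look roughly like the Python sketch below. The robot states, the joint breakdown of the dance behavior and the MCU addressing are illustrative assumptions.

from enum import Enum, auto

class RobotState(Enum):
    IDLE = auto()
    NAVIGATING = auto()
    PERFORMING = auto()

DANCE_SEQUENCE = [  # hypothetical decomposition of one behavior into joint moves
    {"mcu": 1, "joint": "left_arm",  "angle": 45},
    {"mcu": 2, "joint": "right_arm", "angle": 45},
    {"mcu": 3, "joint": "head",      "angle": 10},
]

class BehaviorScheduler:
    def __init__(self, send_to_mcu):
        self.state = RobotState.IDLE
        self.send_to_mcu = send_to_mcu   # e.g. a wrapper around the CAN bus

    def handle_request(self, behavior: str) -> bool:
        """Allow the request only if the current state permits it, then dispatch."""
        if self.state is not RobotState.IDLE:
            return False                 # reject: the robot is busy
        if behavior == "dance":
            self.state = RobotState.PERFORMING
            for step in DANCE_SEQUENCE:  # action decomposition, one requirement per MCU
                self.send_to_mcu(step)
            self.state = RobotState.IDLE
            return True
        return False

# Usage: BehaviorScheduler(send_to_mcu=print).handle_request("dance")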
The lower computer subsystem comprises a plurality of MCUs (microcontroller units), motors and drive groups. The MCUs receive the behavior instructions issued by the behavior scheduler and control the motors and drive groups to execute the corresponding actions; they also feed the execution results and the state of each joint node of the service robot back to the upper computer subsystem.
Data are exchanged among the communication bodies of the upper computer subsystem in JSON (JavaScript Object Notation) format, and service requests and service responses are carried in this format.
Table 1 - Example communication body IDs:

Communication body ID    Communication body name
0x1                      Behavior scheduler
0x2                      Voice service
0x3                      Interface main application
0x4                      Interface sub-application
0x5                      Knowledge service
0x6                      Navigation agent service
0x7                      Peripheral service
0x8                      Face agent service
Example task command IDs and parameters:

Task ID    Task name    Parameter(s)
0x56       Navigate     Navigation point
0x57       Speak        None
0x58       Dance        None
0x59       Sing         None
JSON packet example for communication body exchange:
navigation service request and response:
{"from": 0x2, "cmd": 0x56, "subcmd": 0x1}
{"from": 0x1, "cmd": 0x56, "subcmd": 0x1, "resp": 0x1}
The exchange above shows navigation triggered by voice during human-machine interaction: the voice service sends the navigation request, and the behavior scheduler, on receiving it, issues the navigation instruction to the navigation service.
Dance service request and response:
{"from": 0x3, "cmd": 0x58, "subcmd": 0x1}
{"from": 0x1, "cmd": 0x58, "subcmd": 0x1, "resp": 0x1}
The exchange above shows dancing triggered through the interface during human-machine interaction: the interface main application sends the dance request, and the behavior scheduler, on receiving it, decomposes the action and issues the corresponding instructions to each MCU.
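As a minimal illustration of this exchange, the Python sketch below builds and answers such packets using the IDs from Table 1 and the task IDs above. The helper names are hypothetical, and note that standard JSON serializes the hexadecimal IDs shown above as decimal numbers.

import json

BODY = {"behavior_scheduler": 0x1, "voice_service": 0x2, "interface_main": 0x3}
TASK = {"navigate": 0x56, "speak": 0x57, "dance": 0x58, "sing": 0x59}

def make_request(sender: int, task: int, subcmd: int = 0x1) -> str:
    return json.dumps({"from": sender, "cmd": task, "subcmd": subcmd})

def make_response(request_json: str, responder: int, resp: int = 0x1) -> str:
    req = json.loads(request_json)
    return json.dumps({"from": responder, "cmd": req["cmd"],
                       "subcmd": req["subcmd"], "resp": resp})

# Voice-triggered navigation, as in the first example above:
req = make_request(BODY["voice_service"], TASK["navigate"])
ack = make_response(req, BODY["behavior_scheduler"])
print(req)  # {"from": 2, "cmd": 86, "subcmd": 1}
print(ack)  # {"from": 1, "cmd": 86, "subcmd": 1, "resp": 1}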
The cloud service group comprises the maintenance center and the service cloud; the service cloud includes the scene cloud, the chat cloud, the location service cloud, the entertainment cloud, the voice recognition cloud and the banking private cloud. The maintenance center includes the maintenance program and the knowledge base mirror. The maintenance program periodically receives the collected data and statistics sent by the maintenance service, for example customer requirements and their classification, the topics customers ask about most, the questions for which the knowledge service could not find an answer, and logs of program exceptions. It also provides big-data analysis and data-mining functions, such as analyzing which bank products attract the most interest, which issues VIP customers follow up on, and which non-VIP customers are most likely to become major customers; such analysis results are very useful to the bank and are fed back into the knowledge base. The knowledge base mirror is periodically modified and updated and pushed to the remote robot. In this way the maintenance center improves the knowledge service and pushes the improvements to the robot, and software problems and unanticipated scenario problems reported by the robot can be addressed.
In another embodiment of the present invention, embodiment 3, as shown in fig. 2 and 3, the distributed control system of the service robot is the same as that of embodiment 2 and further comprises:
the interface main application collects customer input information from the main interface to obtain customer scene requirements;
when the customer scene needs voice prompt, the interface main application forwards it to the voice service through the voice module, and the voice service provides the corresponding voice broadcasting function;
when the customer scene needs behavior service, the interface main application sends a corresponding behavior service request to the behavior scheduler through the action module;
when the behavior service request is received, the behavior scheduler judges whether the behavior service request is allowed or not according to the current state of the service type robot;
when the behavior service request is allowed and the lower computer subsystem is required to execute the action, the behavior scheduler performs action decomposition and sends corresponding action requirements to each MCU;
when the behavior service is finished, the lower computer subsystem feeds back an execution result to the behavior scheduler, and the behavior scheduler forwards the execution result to the interface main application;
the interface main application displays an action execution result on a screen;
the interface master application collecting the next customer scenario requirement from the master interface;
when the client scene needs navigation service, the interface main application sends a corresponding navigation service request to the behavior scheduler through the action module;
when the navigation service request is received, the behavior dispatcher issues a navigation instruction to the navigation service;
the navigation service outputs speed information to the lower computer subsystem to guide the movement of the robot;
when the service robot encounters an obstacle during navigation, a navigation exception is reported to the navigation agent service;
when the service robot arrives at a designated place, reporting navigation success to the navigation agent service;
when the navigation state needs to be displayed, the navigation agent service displays the navigation state on a screen through the interface main application;
when the navigation state needs to be played, the navigation agent service broadcasts the navigation state through the voice service;
the interface master application collecting the next customer scenario requirement from the master interface;
when the client scene needs peripheral service, the interface main application sends a corresponding peripheral service request to the peripheral service through the peripheral module;
the interface master application collecting the next customer scenario requirement from the master interface;
when the customer scene needs cloud service, the interface master application sends a corresponding cloud service request to the cloud service group through the scene module;
when a cloud service request is received, the cloud service group feeds back response information to the interface main application or the interface sub-application;
when the response information needs to be displayed on a screen, displaying the response information on the screen through the interface main application;
and when the response information needs voice playing, forwarding the response information to the voice service through the interface main application, and providing a voice broadcasting function of the response information by the voice service.
Specifically, as an example, the service robot starts by default with the interface of the interface main application, i.e. the main interface, and a customer asks the service robot to dance via the touch screen. The interface main application collects this input and obtains the customer scenario requirement 'please dance'. Assuming this scenario calls for both display and voice broadcast, 'please dance' is shown on the screen by the interface main application, the request is forwarded to the voice service through the voice module of the interface main application, and the voice service provides the corresponding broadcast. The interface main application determines that the scenario requires a behavior service, so it sends the corresponding behavior service request to the behavior scheduler. After receiving the request, the behavior scheduler judges, from the current state of the service robot, whether the request can be allowed; if the robot is currently idle, the behavior scheduler decomposes the dance action and notifies the corresponding MCUs to perform the corresponding movements. When the behavior service finishes, the behavior scheduler feeds the execution result back to the interface main application, and 'dance finished' is shown on the screen.
The customer then enters the next requirement, 'please take me to the ATM', via the touch screen. The interface main application collects this input and obtains the customer scenario requirement of going to the ATM. Assuming this scenario does not need a voice broadcast, the interface main application recognizes that it requires a navigation service and sends the corresponding navigation service request to the behavior scheduler through the action module. The behavior scheduler issues the navigation instruction to the navigation service, and the navigation service outputs speed information to the lower computer subsystem to guide the robot's movement. When the service robot reaches the ATM, navigation success is reported to the navigation agent service, which displays 'navigation successful' on the screen and broadcasts it through the voice service.
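Since the navigation service is ROS-based, its speed output toward the lower computer subsystem might resemble the following Python (rospy) sketch. The topic name, message type, publish rate and stopping conditions are assumptions; the patent only states that the navigation service outputs speed information and reports success or an exception to the navigation agent service.

import rospy
from geometry_msgs.msg import Twist

def goal_reached() -> bool:       # placeholder: would query the planner/odometry
    return False

def obstacle_detected() -> bool:  # placeholder: would query laser/sonar sensors
    return False

def drive_to_goal(report):
    rospy.init_node("navigation_service")
    cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)  # assumed topic
    rate = rospy.Rate(10)  # 10 Hz
    try:
        while not rospy.is_shutdown() and not goal_reached():
            if obstacle_detected():
                report("navigation_exception")    # -> navigation agent service
                return
            msg = Twist()
            msg.linear.x = 0.3                    # forward speed in m/s (example)
            cmd_pub.publish(msg)
            rate.sleep()
        report("navigation_success")              # arrived at the designated place
    finally:
        cmd_pub.publish(Twist())                  # always stop with zero velocity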
The customer then enters 'please print the passbook' via the touch screen. The interface main application collects this input and determines that the scenario requires a peripheral service, so it sends a print request to the peripheral service through the peripheral module, and the peripheral service provides the printing function.
The customer then enters 'please list the financial products' via the touch screen. The interface main application collects this input and determines that the scenario requires a cloud service, so it requests the cloud service from the banking private cloud of the cloud service group through the scene module. On receiving the cloud service request, the cloud service group feeds response information back to the interface main application, which displays it on the screen, i.e. shows the corresponding financial products and their returns on the main interface. If the response also needs to be played by voice, the interface main application forwards it to the voice service, which provides the voice broadcast of the response.
In another embodiment of the present invention, embodiment 4, as shown in fig. 2 and 3, the distributed control system of the service robot is the same as that of embodiment 2 and further comprises:
the interface main application collects customer input information from the main interface to obtain customer scene requirements;
when the customer scene requirements need interface sub-application processing, the corresponding interface sub-application is called through the interface main application, and the customer scene requirements are further processed through the interface sub-application;
when the customer scene needs voice prompt, the voice module of the interface sub-application forwards the voice prompt to a voice service, and the voice service provides a corresponding voice broadcasting function;
when the client scene needs peripheral service, the interface sub-application sends a corresponding peripheral service request to the peripheral service through the peripheral module;
when the customer scene needs behavior service, the interface sub-application sends a corresponding behavior service request to the behavior scheduler through the action module;
when the customer scene needs cloud service, the interface sub-application sends a corresponding cloud service request to the cloud service group through the scene module;
when the client scene needs navigation service, the interface sub-application sends a corresponding navigation service request to the behavior scheduler through the action module;
and when the interface sub-application finishes the requirement of the customer scene, the interface sub-application finishes the service and returns to the interface of the interface main application.
Specifically, the service robot starts by default with the interface of the interface main application, and a customer requests a transfer service via the touch screen. The interface main application collects this input and obtains the customer scenario requirement of a transfer service. The interface main application judges that this requirement should be handled by the transfer sub-application, so it calls the transfer sub-application, the display enters the transfer sub-application's interface, and the transfer sub-application processes the requirement further.
The transfer sub-application can then ask the customer for the transfer amount, the payee's account number and so on; the interface sub-application collects the customer's input from the interface, obtains the next customer scenario requirement and displays it on the screen. It can further ask whether the customer needs a printout; when the customer does, the interface sub-application collects this requirement and judges that it needs a peripheral service, so it sends a print request to the peripheral service through the peripheral module, and the peripheral service provides the printing function.
If the customer has further requirements, such as a behavior service, a cloud service or a navigation service, the execution process is similar to the action of the interface main application, and therefore the description is not repeated here.
And when the interface sub-application finishes the requirements of the client scene, the interface sub-application finishes the service and returns to the interface main application.
In another embodiment of the present invention, embodiment 5, as shown in fig. 2 and 3, the distributed control system of the service robot is the same as that of embodiment 2 and further comprises:
when sound is detected, the voice service requests voice recognition from the voice recognition cloud for the collected audio signal to obtain the recognition text;
the voice service requests service from the knowledge service for the recognition text to obtain related knowledge;
according to the identification text and the related knowledge, the voice service obtains the customer scene requirement;
the customer scene requirements are forwarded through the voice service and displayed on a screen through the interface main application or the interface sub application;
when the customer scene needs voice prompt, the voice service provides a corresponding voice broadcasting function;
when the customer scene needs cloud service, the interface main application or the interface sub application sends a corresponding cloud service request to the cloud service group through the scene module by forwarding of the voice service;
when a cloud service request is received, the cloud service group feeds back response information to the interface main application or the interface sub-application;
when the response information needs to be displayed on a screen, displaying the response information on the screen through the interface main application or the interface sub application;
when the response information needs to be played in a voice mode, the response information is forwarded to the voice service through the interface main application or the interface sub application, and the voice service provides a voice broadcasting function of the response information;
when sound is detected, the voice service requests voice recognition from the voice recognition cloud for the collected audio signal to obtain the recognition text;
the voice service requests service from the knowledge service for the recognition text to obtain related knowledge;
according to the identification text and the related knowledge, the voice service obtains the next customer scene requirement;
when the customer scene needs behavior service, the interface main application or the interface sub application sends a corresponding behavior service request to the behavior scheduler through the action module;
when the behavior service request is received, the behavior scheduler judges whether the behavior service request is allowed or not according to the current state of the service type robot;
when the behavior service request is allowed and the lower computer subsystem is required to execute the action, the behavior scheduler performs action decomposition and sends corresponding action requirements to each MCU;
when the behavior service is finished, the lower computer subsystem feeds back an execution result to the behavior scheduler, and the behavior scheduler forwards the execution result to the interface main application or the interface sub application;
and the interface main application or the interface sub application displays an action execution result on a screen.
Specifically, the difference between this embodiment and the previous embodiment is that the customer scenario requirements are input by voice.
In an example, a customer asks by voice 'what financial products are available'. The voice service requests voice recognition from the voice recognition cloud of the cloud service group for the collected audio signal and obtains the recognition text 'what financial products are available'. The voice service then requests service from the knowledge service for this text, and the knowledge service returns a preliminary semantic understanding result, such as the question category, possible answers and behavior category, indicating that the question relates to banking business. From the recognition text and the related knowledge, the voice service obtains the customer scenario requirement 'what financial products are available, related to banking business'.
The voice service forwards this requirement to either the interface main application or the interface sub-application, depending on which application is currently running in the foreground; here, it is assumed that the interface master application is currently running, so the voice service forwards this requirement to the interface master application. And the interface main application requests cloud service from the banking business private cloud of the cloud service group through the scene module. When the cloud service request is received, the cloud service group feeds back response information to the interface master application. The interface main application displays the response information on a screen, namely displays the corresponding financial products and the profits on the main interface.
Next, the customer asks the service robot by voice to 'please dance'. The voice service requests voice recognition from the voice recognition cloud of the cloud service group for the collected audio signal and obtains the recognition text 'please dance'. The voice service requests service from the knowledge service for this text, and the knowledge service returns a preliminary semantic understanding result; from the recognition text and the related knowledge, the voice service obtains the customer scenario requirement 'please dance'. This scenario needs a behavior service, so the voice service sends the corresponding behavior service request to the behavior scheduler. On receiving the request, the behavior scheduler judges from the robot's current state that the request can be allowed, decomposes the dance action and notifies the corresponding MCUs to perform the corresponding movements. When the behavior service finishes, the behavior scheduler feeds the execution result back to the interface main application, and 'dance finished' is shown on the screen.
Next, the customer may also make other customer scenario requirements by voice, such as requiring a peripheral service, a cloud service, or a navigation service, and the difference is that the customer scenario requirements are obtained by voice recognition and semantic understanding, and after the customer scenario requirements are obtained, the execution process is similar to the action of the interface main application, so that the details are not repeated here.
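A compact Python sketch of the routing logic in this embodiment is given below: the recognition text and the category returned by the knowledge service decide whether the requirement goes to the behavior scheduler or to a cloud via the scene module. The recognize_audio helper stands in for the voice recognition cloud call, whose API the patent does not specify; the other objects mirror the earlier sketches.

def recognize_audio(audio_bytes: bytes) -> str:
    return "please dance"  # placeholder for the voice recognition cloud result

def handle_utterance(audio_bytes: bytes, knowledge, scheduler, scene_module) -> None:
    text = recognize_audio(audio_bytes)
    knowledge_result = knowledge(text)            # e.g. query_knowledge() above
    category = knowledge_result.get("category")
    if category == "robot_command":
        scheduler.handle_request(knowledge_result["action"])        # behavior service
    elif category == "banking":
        scene_module.request_cloud_service("banking", {"q": text})  # banking private cloud
    else:
        scene_module.request_cloud_service("chat", {"q": text})     # chat cloud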
In another embodiment of the present invention, embodiment 6, as shown in fig. 2 and 3, the distributed control system of the service robot is the same as that of embodiment 2 and further comprises:
when the face information is detected, the face recognition service carries out face recognition to obtain face characteristic information;
the face recognition service judges whether the face feature information is matched with face feature information of a database;
when the face feature information of the database is matched, the face recognition service acquires client information from the database and sends the client information to the face agent service;
when the greeting needs to be displayed on the screen, the face agent service forwards the customer information and the greeting to the interface main application or the interface sub-application, which displays them on the screen;
when the greeting needs to be broadcast by voice, the face agent service forwards the customer information and the greeting to the voice service, which broadcasts them.
Specifically, in comparison with the previous embodiment, the present embodiment identifies the client by using the face recognition service.
In this example, when a customer stands in front of the service robot, a sensor detects that someone is present and the camera captures the customer's face. The face recognition service computes the customer's facial feature information and checks whether it matches feature information stored in the database; if it does, the customer information, such as the name and title entered previously, is retrieved from the database. The face recognition service passes this customer information to the face agent service. Based on the current usage scenario (a person has just stepped in front of the robot), the face agent service requests a voice broadcast from the voice service and requests display from the interface main application or an interface sub-application, whichever is running in the foreground, so that the service robot greets the customer and shows the customer's name and the greeting on the screen.
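The matching step in this embodiment might be sketched in Python as follows. The feature vector length, distance metric and threshold are illustrative assumptions; a real face library would be built by the face recognition service when customers are enrolled.

import math

FACE_DB = [  # (feature vector, customer record) pairs built at enrollment time
    ([0.12, 0.80, 0.33], {"name": "Zhang", "title": "VIP customer"}),
]
MATCH_THRESHOLD = 0.35  # assumed; a real system would tune this empirically

def match_face(features, face_db=FACE_DB):
    """Return the customer record whose stored features are closest, if close enough."""
    best, best_dist = None, float("inf")
    for stored, record in face_db:
        dist = math.dist(features, stored)   # Euclidean distance (Python 3.8+)
        if dist < best_dist:
            best, best_dist = record, dist
    return best if best_dist <= MATCH_THRESHOLD else None

def on_face_detected(features, face_agent_service):
    customer = match_face(features)
    if customer is not None:
        face_agent_service(customer)  # the agent decides whether to greet by voice and/or on screen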
In another embodiment of the present invention, embodiment 7, as shown in fig. 2 and 3, the distributed control system of the service robot is the same as that of embodiment 2 and further comprises:
and the maintenance center periodically modifies and updates the knowledge base mirror image and pushes the knowledge base mirror image to the upper computer subsystem so as to update the knowledge base.
Specifically, the maintenance center receives the run logs recorded by the service robot during operation and sent by the maintenance service, keeps track of updates to the related knowledge, periodically modifies and updates the knowledge base mirror according to the exceptions and knowledge updates observed in operation, and pushes the mirror to the service robot to update its knowledge base. Continuously updating the knowledge base in this way makes the service robot increasingly intelligent.
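A minimal Python sketch of this periodic update cycle, under the assumption of a file-based mirror and a daily interval (neither of which the patent specifies), is given below.

import shutil
import time

MIRROR_PATH = "/srv/maintenance/knowledge_mirror.db"   # hypothetical paths
ROBOT_KB_PATH = "/opt/robot/knowledge_base.db"
UPDATE_INTERVAL_S = 24 * 3600                          # e.g. once a day

def rebuild_mirror(run_logs):
    """Fold the collected run logs and statistics into the mirror (details omitted)."""
    pass

def push_mirror_to_robot():
    # A real deployment would push over the wireless link (e.g. rsync or HTTPS);
    # a local file copy stands in for that transfer here.
    shutil.copyfile(MIRROR_PATH, ROBOT_KB_PATH)

def maintenance_loop(fetch_run_logs):
    while True:
        rebuild_mirror(fetch_run_logs())
        push_mirror_to_robot()
        time.sleep(UPDATE_INTERVAL_S)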
It should be noted that the above embodiments can be freely combined as needed. The foregoing is only a preferred embodiment of the present invention; for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and such modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A distributed control system of a service robot is characterized by comprising an upper computer subsystem, a lower computer subsystem and a cloud service group;
the upper computer subsystem is used for providing software functions of the service type robot, presenting external service contents of the service type robot and providing proxy services of the cloud service group;
the lower computer subsystem is communicated with the upper computer subsystem through a CAN bus and is used for receiving a behavior instruction sent by the upper computer subsystem and controlling the service robot to execute corresponding actions; feeding back the execution result of the action and the state of each node of the service type robot to the upper computer subsystem;
the cloud service group is in wireless communication with the upper computer subsystem, and provides remote maintenance of the service robot and various service clouds provided by third-party enterprises;
the upper computer subsystem adopts an MVC software architecture and comprises an interface main application, a plurality of interface sub-applications, a module layer, a core layer and a behavior scheduler; the module layer comprises a voice module, a peripheral module, an action module and a scene module; the core layer provides voice service, peripheral service and navigation service; the behavior scheduler is communicated with the lower computer subsystem through a CAN bus to provide behavior service;
the interface main application and the interface sub-applications are responsible for human-machine interaction through the interface, and the interface sub-applications comprise applications provided by third-party enterprises;
the interface main application or the interface sub-application applies for the voice service through the voice module;
the interface main application or the interface sub-application applies for the peripheral service through the peripheral module;
the interface main application or the interface sub-application applies for cloud service from the cloud service group through the scene module;
the interface main application or the interface sub-application applies for the navigation service through the action module;
and the interface main application or the interface sub application applies for behavior service from the behavior scheduler through the action module.
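For readers who want a concrete picture of the layering in claim 1, the following Python sketch (not part of the claims; all names are invented for illustration) shows an interface application that never calls a service directly: every request passes through a module-layer adapter that forwards it to the voice service, the behavior scheduler, or the cloud service group.

class Module:
    """Module-layer adapter: forwards requests from interface apps to a core-layer backend."""
    def __init__(self, backend):
        self.backend = backend

    def request(self, **kwargs):
        return self.backend(**kwargs)

class InterfaceApp:
    def __init__(self, voice_mod, peripheral_mod, action_mod, scene_mod):
        self.voice_mod = voice_mod
        self.peripheral_mod = peripheral_mod
        self.action_mod = action_mod
        self.scene_mod = scene_mod

    def greet_and_guide(self):
        self.voice_mod.request(text="Welcome, how can I help you?")       # voice service
        self.scene_mod.request(service="scene_cloud", query="lobby map")  # cloud service group
        self.action_mod.request(behavior="nod_head")                      # behavior scheduler

# Wire the layers together with simple stand-ins for the backends behind each module.
app = InterfaceApp(
    voice_mod=Module(lambda text: print("[voice service]", text)),
    peripheral_mod=Module(lambda **kw: print("[peripheral service]", kw)),
    action_mod=Module(lambda **kw: print("[behavior scheduler]", kw)),
    scene_mod=Module(lambda **kw: print("[cloud service group]", kw)),
)
app.greet_and_guide()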
2. The distributed control system of a service robot according to claim 1, characterized in that:
the upper computer subsystem also comprises a monitoring layer; the core layer further includes: a knowledge service, a face agent service, a navigation agent service, a configuration database, a knowledge base, a maintenance service and a face recognition service;
the lower computer subsystem comprises a plurality of MCUs, a plurality of motors and a drive group;
the cloud service group comprises a maintenance center and a service cloud; the maintenance center includes: a maintenance program and a knowledge base image; the service cloud includes: a scene cloud, a chatting cloud, a location service cloud, an entertainment cloud, a voice recognition cloud and a banking private cloud.
3. The distributed control system of a service robot according to claim 2, wherein:
the interface main application collects customer input information from the main interface to obtain customer scene requirements;
when the customer scene needs a voice prompt, the interface main application forwards the request to the voice service through the voice module, and the voice service provides the corresponding voice broadcast function;
when the customer scene needs a peripheral service, the interface main application sends a corresponding peripheral service request to the peripheral service through the peripheral module;
when the customer scene needs a behavior service, the interface main application sends a corresponding behavior service request to the behavior scheduler through the action module;
when the customer scene needs a cloud service, the interface main application sends a corresponding cloud service request to the cloud service group through the scene module;
and when the customer scene needs a navigation service, the interface main application sends a corresponding navigation service request to the behavior scheduler through the action module.
4. The distributed control system of a service robot according to claim 3, wherein:
when the customer scene requirements need to be processed by an interface sub-application, the interface main application calls the corresponding interface sub-application, and the interface sub-application further processes the customer scene requirements;
when the customer scene needs a voice prompt, the interface sub-application forwards the request to the voice service through the voice module, and the voice service provides the corresponding voice broadcast function;
when the customer scene needs a peripheral service, the interface sub-application sends a corresponding peripheral service request to the peripheral service through the peripheral module;
when the customer scene needs a behavior service, the interface sub-application sends a corresponding behavior service request to the behavior scheduler through the action module;
when the customer scene needs a cloud service, the interface sub-application sends a corresponding cloud service request to the cloud service group through the scene module;
when the customer scene needs a navigation service, the interface sub-application sends a corresponding navigation service request to the behavior scheduler through the action module;
and when the interface sub-application has fulfilled the customer scene requirements, it ends its service and returns to the interface main application.
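The delegation described in claims 3 and 4 can be sketched as follows; the class names and the example banking sub-application are hypothetical and serve only to show the main application handing a customer scene to a sub-application and regaining control when it finishes.

class SubApplication:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler

    def run(self, requirement):
        self.handler(requirement)   # the sub-application drives the modules itself
        return "done"               # finishing returns control to the main application

class MainApplication:
    def __init__(self):
        self.sub_apps = {}

    def register(self, scene, sub_app):
        self.sub_apps[scene] = sub_app   # e.g. a third-party banking sub-application

    def handle(self, requirement):
        sub_app = self.sub_apps.get(requirement["scene"])
        if sub_app is None:
            print("[main app] handling", requirement)
        else:
            print("[main app] delegating to", sub_app.name)
            sub_app.run(requirement)
            print("[main app] back in control")

main = MainApplication()
main.register("banking", SubApplication("banking", lambda r: print("[banking sub-app]", r)))
main.handle({"scene": "banking", "task": "open an account"})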
5. The distributed control system of a service robot according to claim 2, characterized in that:
when sound is detected, the voice service sends the collected audio signal to the voice recognition cloud for voice recognition to obtain recognized text;
the voice service submits the recognized text to the knowledge service to obtain related knowledge;
according to the recognized text and the related knowledge, the voice service determines the customer scene requirements;
the customer scene requirements are forwarded by the voice service to the interface main application or the interface sub-application and displayed on the screen;
when the customer scene needs a voice prompt, the voice service provides the corresponding voice broadcast function;
when the customer scene needs a peripheral service, the voice service forwards the request, and the interface main application or the interface sub-application sends a corresponding peripheral service request to the peripheral service through the peripheral module;
when the customer scene needs a behavior service, the voice service sends a corresponding behavior service request to the behavior scheduler;
when the customer scene needs a cloud service, the voice service forwards the request, and the interface main application or the interface sub-application sends a corresponding cloud service request to the cloud service group through the scene module;
and when the customer scene needs a navigation service, the voice service sends a corresponding navigation service request to the behavior scheduler.
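A minimal sketch of the voice pipeline in claim 5, under the assumption of a toy recognition cloud and knowledge service: recognized text and related knowledge yield a customer scene requirement, which is then shown on screen, broadcast, and, if it calls for movement, forwarded to the behavior scheduler. All functions and the dispatch rule are illustrative assumptions.

def recognize_speech(audio_bytes):
    """Stand-in for the voice recognition cloud: audio in, recognized text out."""
    return "please take me to the VIP counter"

def query_knowledge(text):
    """Stand-in for the knowledge service: return related knowledge for the recognized text."""
    return {"intent": "navigate", "target": "VIP counter"} if "take me" in text else {"intent": "chat"}

def handle_audio(audio_bytes, show_on_screen, broadcast, send_navigation_request):
    text = recognize_speech(audio_bytes)
    knowledge = query_knowledge(text)
    show_on_screen(text)                                   # forwarded to the foreground interface app
    if knowledge["intent"] == "navigate":
        broadcast(f"Taking you to the {knowledge['target']}.")
        send_navigation_request(knowledge["target"])       # goes to the behavior scheduler

handle_audio(
    b"",  # pretend audio frame
    show_on_screen=lambda t: print("[screen]", t),
    broadcast=lambda t: print("[voice]", t),
    send_navigation_request=lambda target: print("[behavior scheduler] navigate ->", target),
)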
6. The distributed control system of a service robot according to claim 2, wherein:
when face information is detected, the face recognition service performs face recognition to obtain face feature information;
the face recognition service judges whether the face feature information matches the face feature information in the database;
when it matches the face feature information in the database, the face recognition service acquires the customer information from the database and sends the customer information to the face agent service;
when the greeting information needs to be displayed on the screen, the face agent service forwards the customer information and the greeting information to the interface main application or the interface sub-application, which displays them on the screen;
when the greeting information needs to be broadcast by voice, the face agent service forwards the customer information and the greeting information to the voice service, which broadcasts them.
7. The distributed control system of a service robot according to claim 3 or 4, characterized in that:
when the behavior service request is received, the behavior scheduler judges whether the behavior service request is allowed or not according to the current state of the service type robot;
and when the behavior service request is allowed and the lower computer subsystem is required to execute the action, the behavior scheduler performs action decomposition and sends corresponding action requirements to each MCU.
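The admission check and action decomposition of claim 7 might look roughly like the sketch below; the robot state flags, the decomposition table and the CAN identifiers are invented for illustration and are not taken from the patent.

CURRENT_STATE = {"moving": False, "arms_busy": False}

# Hypothetical decomposition table: behavior -> list of (CAN arbitration id, data bytes)
DECOMPOSITION = {
    "wave_hand": [(0x101, [0x01, 0x2D]),   # shoulder MCU: raise arm 45 degrees
                  (0x102, [0x02, 0x0A])],  # wrist MCU: oscillate 10 times
}

def allow(behavior):
    """Reject behaviors that conflict with the robot's current state."""
    if behavior == "wave_hand" and CURRENT_STATE["arms_busy"]:
        return False
    return True

def schedule(behavior, send_frame):
    if not allow(behavior):
        print("behavior rejected:", behavior)
        return
    for arbitration_id, data in DECOMPOSITION[behavior]:
        send_frame(arbitration_id, data)   # one action requirement per MCU on the CAN bus

schedule("wave_hand", send_frame=lambda cid, data: print(f"CAN 0x{cid:03X} <- {bytes(data).hex()}"))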
8. The distributed control system of a service robot according to claim 3 or 4, characterized in that:
when a cloud service request is received, the cloud service group feeds back response information to the interface main application or the interface sub-application;
and when the response information needs to be played by voice, the interface main application or the interface sub-application forwards the response information to the voice service, and the voice service provides the voice broadcast function for the response information.
9. The distributed control system of a service robot according to claim 3 or 4, characterized in that:
when the navigation service request is received, the behavior scheduler issues a navigation instruction to the navigation service;
the navigation service outputs speed information to the lower computer subsystem to guide the movement of the robot;
when the service robot encounters an obstacle during navigation, a navigation anomaly is reported to the navigation agent service;
when the service robot arrives at the designated place, navigation success is reported to the navigation agent service;
when the navigation state needs to be displayed, the navigation agent service displays the navigation state on a screen through the interface main application or the interface sub application;
and when the navigation state needs to be played, the navigation agent service broadcasts the navigation state through the voice service.
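The reporting path of claim 9 is illustrated by the following sketch, in which the navigation service emits speed commands and reports an anomaly or success to the navigation agent service, which in turn shows and broadcasts the state; the waypoint model and the callbacks are assumptions for illustration only.

def navigate(waypoints, send_speed, report):
    for wp in waypoints:
        if wp.get("obstacle"):
            report("navigation anomaly: obstacle ahead")   # to the navigation agent service
            send_speed(0.0, 0.0)
            return
        send_speed(wp["v"], wp["w"])   # linear / angular speed to the lower computer subsystem
    send_speed(0.0, 0.0)
    report("navigation success: destination reached")

def navigation_agent(state, show_on_screen, broadcast):
    show_on_screen(state)   # via the interface main application or a sub-application
    broadcast(state)        # via the voice service

navigate(
    [{"v": 0.3, "w": 0.0}, {"v": 0.3, "w": 0.2}],
    send_speed=lambda v, w: print(f"[lower computer] v={v} w={w}"),
    report=lambda s: navigation_agent(s, print, lambda t: print("[voice]", t)),
)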
10. The distributed control system of a service robot according to claim 2, wherein:
and the maintenance center periodically modifies and updates the knowledge base image and pushes the knowledge base image to the upper computer subsystem so as to update the knowledge base.
CN201810014623.2A 2018-01-08 2018-01-08 Distributed control system of service type robot Active CN108145714B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810014623.2A CN108145714B (en) 2018-01-08 2018-01-08 Distributed control system of service type robot

Publications (2)

Publication Number Publication Date
CN108145714A CN108145714A (en) 2018-06-12
CN108145714B true CN108145714B (en) 2020-05-19

Family

ID=62461172

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810014623.2A Active CN108145714B (en) 2018-01-08 2018-01-08 Distributed control system of service type robot

Country Status (1)

Country Link
CN (1) CN108145714B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109605373A (en) * 2018-12-21 2019-04-12 重庆大学 Voice interactive method based on robot
CN110134081B (en) * 2019-04-08 2020-09-04 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Control system based on robot capability model
CN111027699B (en) * 2019-11-15 2022-02-11 广东电网有限责任公司 Remote maintenance method for expert knowledge base of intelligent warning system of transformer substation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015157353A1 (en) * 2014-04-10 2015-10-15 Smartvue Corporation Systems and methods for automated cloud-based analytics for security and/or surveillance
CN106597881A (en) * 2016-11-03 2017-04-26 深圳量旌科技有限公司 Cloud service robot based on distributed decision-making algorithm
CN107009343A (en) * 2017-05-03 2017-08-04 山东大学 A kind of banking assistant robot based on many biometric informations
CN107526298A (en) * 2016-06-21 2017-12-29 广州零号软件科技有限公司 Smart home service robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160031081A1 (en) * 2014-08-01 2016-02-04 Brian David Johnson Systems and methods for the modular configuration of robots

Also Published As

Publication number Publication date
CN108145714A (en) 2018-06-12

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant