CN113488040A - Method, device and medium for realizing functions of museum robot - Google Patents

Method, device and medium for realizing functions of museum robot

Info

Publication number
CN113488040A
Authority
CN
China
Prior art keywords
voice
module
exhibit
voice instruction
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110717051.6A
Other languages
Chinese (zh)
Inventor
李志芸
尹青山
王建华
高明
Current Assignee
Shandong New Generation Information Industry Technology Research Institute Co Ltd
Original Assignee
Shandong New Generation Information Industry Technology Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Shandong New Generation Information Industry Technology Research Institute Co Ltd filed Critical Shandong New Generation Information Industry Technology Research Institute Co Ltd
Priority to CN202110717051.6A priority Critical patent/CN113488040A/en
Publication of CN113488040A publication Critical patent/CN113488040A/en

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 — Speech recognition
    • G10L 15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B25 — HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J — MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 11/00 — Manipulators not otherwise provided for
    • B25J 11/0005 — Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • G — PHYSICS
    • G09 — EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09F — DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F 25/00 — Audible advertising
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 — Speech recognition
    • G10L 15/08 — Speech classification or search
    • G10L 15/18 — Speech classification or search using natural language modelling
    • G10L 15/1815 — Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 — Speech recognition
    • G10L 15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 — Execution procedure of a spoken command

Abstract

The embodiments of this specification disclose a method, device, and medium for implementing the functions of a museum robot. The method is applied to a robot comprising a voice module and a decision engine module, and includes the following steps: the voice module receives an externally input voice instruction, recognizes it, and determines its category, where the categories of voice instruction include dialogue broadcast, exhibit explanation, museum navigation, and exhibit display; the voice module sends the recognized instruction to the decision engine module; and the decision engine module invokes the corresponding functional module according to the category of the instruction, so that the functional module completes the corresponding operation. The museum robot can thus provide voice dialogue, exhibit explanation, museum navigation, and exhibit display, offering users a more comprehensive service.

Description

Method, device and medium for realizing functions of museum robot
Technical Field
The present disclosure relates to the field of robotics, and in particular to a method, a device, and a medium for implementing the functions of a museum robot.
Background
Museums are a fast-growing branch of today's tourism industry, combining education, cultural-relic preservation, and social-science research. To give visitors a better experience, a guide usually needs to introduce the history of the exhibits. With the arrival of the intelligent era, intelligent robots are gradually replacing human labor in providing this service.
Existing museum robots have limited functionality: most can only provide guidance services, or are placed beside an exhibit and loop a simple recorded explanation, and therefore cannot provide comprehensive service to users.
Disclosure of Invention
One or more embodiments of this specification provide a method, a device, and a medium for implementing the functions of a museum robot, in order to solve the following technical problem: existing museum robots have a single function, can only perform simple explanation work, and cannot provide comprehensive service to users.
One or more embodiments of the present disclosure adopt the following technical solutions:
One or more embodiments of this specification provide a method for implementing the functions of a museum robot. The method is applied to a robot comprising a voice module and a decision engine module, and includes: the voice module receives an externally input voice instruction, recognizes it, and determines its category, where the categories include dialogue broadcast, exhibit explanation, museum navigation, and exhibit display; the voice module sends the recognized instruction to the decision engine module; and the decision engine module invokes the corresponding functional module according to the category of the instruction, so that the functional module completes the corresponding operation.
Further, before the voice module of the robot receives the externally input voice instruction, the method further includes: the voice recognition module converts an input wake-up command word into a wake-up instruction and sends it to the voice module, and the voice module wakes up the museum robot.
Further, the robot also comprises a navigation module. When the voice instruction refers to visiting a plurality of exhibits, the category of the instruction is the museum navigation category, and invoking the corresponding functional module then specifically comprises: the decision engine module invokes the navigation module according to the museum navigation category, so that the navigation module determines the touring priority of the exhibits from the order in which they appear in the voice instruction and determines the tour route from that priority.
Further, before the tour route is determined from the touring priorities, the method further comprises: determining the number of visitors at each of the plurality of exhibits. Determining the tour route then specifically comprises: determining the route from both the number of visitors at each exhibit and the touring priorities.
Further, after the tour route has been determined from the visitor counts and the touring priorities, the method further comprises: if the change in the number of visitors at an exhibit not yet toured exceeds a preset threshold, re-determining the tour route from the latest visitor counts at the exhibits not yet toured.
Further, the robot also comprises a front-end module. When the voice instruction asks to display one or more exhibits, the category of the instruction is the exhibit display category, and invoking the corresponding functional module then specifically comprises: the decision engine module receives the exhibit display instruction and determines the name of the exhibit to be displayed from it; it then retrieves the corresponding exhibit information from an exhibit library by name and sends that information to the front-end module, which completes the exhibit display function.
Further, the decision engine module of the robot comprises an infrared sensing module, and the voice module comprises a voice broadcast module. When the voice instruction asks for one or more exhibits to be explained, the category of the instruction is the exhibit explanation category, and invoking the corresponding functional module then specifically comprises: the decision engine module receives the exhibit explanation instruction and determines the explanation content from the name of the exhibit to be explained; the infrared sensing module determines the explanation volume according to whether the number of people present exceeds a preset threshold; and the decision engine module generates a control instruction from the explanation content and volume and sends it to the voice broadcast module, which completes the exhibit explanation function.
Further, recognizing the voice instruction and determining its category specifically comprises: the voice module converts the voice instruction into text data and extracts semantic features from the text; semantic features corresponding to the different instruction categories are preset and expanded over the same range to obtain expanded semantic features for each category; and the features extracted from the text are matched against the preset expanded features to determine the category of the voice instruction.
One or more embodiments of the present specification provide a function implementation device of a museum robot, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, enabling the at least one processor to: receive, via a voice module, an externally input voice instruction, recognize it, and determine its category, where the categories include dialogue broadcast, exhibit explanation, museum navigation, and exhibit display; send the recognized instruction to a decision engine module; and, via the decision engine module, invoke the corresponding functional module according to the category of the instruction, so that the functional module completes the corresponding operation.
One or more embodiments of this specification provide a non-transitory computer storage medium storing computer-executable instructions configured to: receive, via a voice module, an externally input voice instruction, recognize it, and determine its category, where the categories include dialogue broadcast, exhibit explanation, museum navigation, and exhibit display; send the recognized instruction to a decision engine module; and, via the decision engine module, invoke the corresponding functional module according to the category of the instruction, so that the functional module completes the corresponding operation.
The technical solutions adopted in the embodiments of this specification achieve at least the following beneficial effects: the user's speech is recognized by the voice module of the museum robot, the recognized instruction is sent to the decision engine module, and the decision engine module invokes the relevant modules to complete the corresponding operations. The museum robot can thereby provide voice dialogue, exhibit explanation, museum navigation, and exhibit display, offering users a more comprehensive service.
Drawings
To describe the technical solutions in the embodiments of this specification or in the prior art more clearly, the drawings required by the description are briefly introduced below. The drawings described here show only some of the embodiments in this specification; those skilled in the art can derive other drawings from them without creative effort. In the drawings:
fig. 1 is a schematic flow chart of a method for implementing a function of a museum robot according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of another method for implementing a function of a museum robot according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a function implementation device of a museum robot according to an embodiment of the present disclosure.
Detailed Description
To help those skilled in the art better understand the technical solutions in this specification, these solutions are described below clearly and completely with reference to the drawings of the embodiments. The described embodiments are only a part, not all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of this specification.
Museums are a fast-growing branch of today's tourism industry, combining education, cultural-relic preservation, and social-science research. To give visitors a better experience, a guide usually needs to introduce the history of the exhibits; with the arrival of the intelligent era, intelligent robots are gradually replacing human labor in providing this service. The rapid development of computer technology has greatly advanced society, and robotics, as a typical representative of that progress, fuses information, communication, and other technologies into a product of very high technical content. The museum robot is a service-type robot.
Existing museum robots have limited functionality: most can only provide guidance services, or are placed beside an exhibit and loop a simple recorded explanation, and therefore cannot provide comprehensive service to users.
An embodiment of this specification provides a museum robot comprising a voice module, a front-end module, a navigation module, and a decision engine module. The voice module, front-end module, and navigation module all communicate with the decision engine module; the front-end module communicates with the decision engine module over MQTT, while the other modules communicate over HTTP.
In an embodiment of this specification, the voice module is mainly responsible for voice wake-up, speech recognition, intention recognition, and speech synthesis: speech is converted into text by speech recognition, the intention recognition module performs intention-related service processing, and the text to be broadcast is converted back into speech by speech synthesis. The navigation module is mainly responsible for the robot's navigation, path planning, and obstacle avoidance, and supports map construction, adding stop points, and adding forbidden lines. The front-end module handles interactive display at the front end: it can view and operate the robot's settings and state, display interactive pages, play videos, and so on. The decision engine module is the brain of the robot: it is mainly responsible for the robot's business processes, receives the results of voice-intention recognition, invokes the relevant skills according to the intention, and implements voice broadcasting, navigation planning, video playing, and task suspension and resumption.
It should be noted that the decision engine module is implemented mainly in the Go language. Go has many natural advantages, such as simple deployment, good concurrency support, clean language design, and good execution performance. Robot explanation involves a large amount of multi-task scheduling and processing, and Go's concurrency model fits this well: goroutines and channels make high-concurrency server software easy to write, and in many cases locking mechanisms, and the problems they cause, need not be considered at all. A single Go application can also make effective use of multiple CPU cores and performs well under parallel execution. Go compiles to a static executable with no external dependencies beyond a minimum glibc version, which makes deployment exceptionally convenient: the target machine needs only a basic system and the necessary management and monitoring tools, with no package or library dependencies to manage, greatly reducing the maintenance burden. Deployment amounts to copying a single file.
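The goroutine-and-channel pattern described above can be sketched as follows. This is an illustrative example of Go's concurrency primitives, not the patent's actual scheduler; the task names and worker count are invented:

```go
package main

import (
	"fmt"
	"sync"
)

// runTasks fans a list of task names out to two worker goroutines and
// collects one result string per task. The channel serializes access to
// the shared work queue, so no explicit locking is required.
func runTasks(names []string) []string {
	tasks := make(chan string)
	results := make(chan string, len(names))
	var wg sync.WaitGroup

	for i := 1; i <= 2; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for t := range tasks {
				results <- fmt.Sprintf("worker %d handled %q", id, t)
			}
		}(i)
	}
	for _, n := range names {
		tasks <- n
	}
	close(tasks)
	wg.Wait()
	close(results)

	var out []string
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	for _, r := range runTasks([]string{"broadcast", "navigate", "display", "explain"}) {
		fmt.Println(r)
	}
}
```

Because the workers run on separate goroutines, the runtime can spread them across CPU cores automatically, which is the property the text credits for the robot's multi-task handling.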
An embodiment of the present specification provides a method for implementing a function of a museum robot, which is applied to a museum robot, and fig. 1 is a schematic flow diagram of the method for implementing a function of a museum robot provided by the embodiment of the present specification, as shown in fig. 1, the method includes:
Step S101: the voice module receives an externally input voice instruction, recognizes it, determines its category, and sends the recognized instruction to the decision engine module.
The categories of voice instruction may include: dialogue broadcast, exhibit explanation, museum navigation, and exhibit display.
Specifically, before step S101, the method further comprises: the voice recognition module converts an input wake-up command word into a wake-up instruction and sends it to the voice module, and the voice module wakes up the museum robot.
In an embodiment of this specification, the user speaks a wake-up command word to the museum robot, for example "hello"; it could also be "bob pabbo", and this specification does not limit the wake-up command word. The voice module converts the received sound signal into text; when the text is confirmed to be a wake-up command word, a wake-up control instruction is generated and sent to the voice wake-up module, which wakes up the museum robot.
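The wake-word confirmation step might be sketched as below. The set of wake words and the case/whitespace normalization are assumptions for illustration; the specification leaves the wake word unconstrained:

```go
package main

import (
	"fmt"
	"strings"
)

// wakeWords is a hypothetical set of configured wake-up command words.
var wakeWords = map[string]bool{"hello": true, "bob pabbo": true}

// isWakeWord reports whether recognized text matches a wake-up command
// word, after normalizing case and surrounding whitespace.
func isWakeWord(text string) bool {
	return wakeWords[strings.ToLower(strings.TrimSpace(text))]
}

func main() {
	fmt.Println(isWakeWord("Hello"))   // matches a configured wake word
	fmt.Println(isWakeWord("goodbye")) // does not match
}
```

When the check succeeds, the voice module would then emit the wake-up control instruction described in the text.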
In one embodiment of this specification, once the museum robot has been woken up, the user can hold a voice conversation with it. The user speaks a command; the voice module converts it into a voice instruction and recognizes the instruction to determine its category. The categories of voice instruction are: dialogue broadcast, exhibit explanation, museum navigation, and exhibit display.
Specifically, in step S101, recognizing the voice instruction and determining its category comprises: the voice module converts the voice instruction into text data and extracts semantic features from the text; semantic features corresponding to the different instruction categories are preset and expanded over the same range to obtain expanded semantic features for each category; and the features extracted from the text are matched against the preset expanded features to determine the category of the voice instruction.
In one embodiment of the present specification, the voice module converts the voice command into text data, and performs semantic feature extraction on the text data, for example: the text data is 'I want to go to the A exhibition hall', and the extracted semantic features are 'go' and 'A exhibition hall'.
In one embodiment of this specification, semantic features corresponding to the different categories of voice instruction are preset. Semantic features for the exhibit explanation category might be "want to know", "understand", "exhibit", and "history"; for the museum navigation category, "go", "exhibition hall", "location", and "where"; and for the exhibit display category, "want", "see", "picture", and "exhibit". Note that these features are only examples, and the embodiments of this specification do not limit them. The semantic features are then expanded over the same range to obtain the expanded features for each category; for example, "go" can be expanded into "go to" and the like. The features extracted from the text data are matched against the preset expanded features to determine the category of the voice instruction.
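The feature-matching classification described above might look roughly like this in Go. The feature lists, the substring-matching strategy, and the dialogue-broadcast fallback are illustrative assumptions, not the patent's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// categoryFeatures maps each instruction category to preset (expanded)
// semantic features; the categories follow the text, but these exact
// feature phrases are invented for the example.
var categoryFeatures = map[string][]string{
	"exhibit explanation": {"understand", "exhibit history", "tell me about"},
	"museum navigation":   {"go to", "exhibition hall", "where"},
	"exhibit display":     {"see", "picture", "show"},
}

// classify extracts features by substring matching and returns the
// category with the most matched features, falling back to
// "dialogue broadcast" when nothing matches.
func classify(text string) string {
	text = strings.ToLower(text)
	best, bestHits := "dialogue broadcast", 0
	for cat, feats := range categoryFeatures {
		hits := 0
		for _, f := range feats {
			if strings.Contains(text, f) {
				hits++
			}
		}
		if hits > bestHits {
			best, bestHits = cat, hits
		}
	}
	return best
}

func main() {
	fmt.Println(classify("I want to go to the A exhibition hall"))
}
```

On the running example from the text, "go to" and "exhibition hall" both match, so the instruction is classified as museum navigation.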
In one embodiment of the present description, after determining the category corresponding to the voice command, the voice command is sent to the decision engine module. It should be noted that the decision engine module establishes a communication connection with the voice module through HTTP.
Step S102: the decision engine module invokes the corresponding functional module according to the category of the voice instruction, so that the functional module completes the corresponding operation.
In an embodiment of this specification, after receiving the voice instruction and its category, the decision engine module forwards the instruction to the corresponding functional module, which implements the function. The museum robot further comprises a front-end module and a navigation module; the decision engine module communicates with the front-end module over MQTT and with the navigation module over HTTP.
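The decision engine's category-based dispatch in step S102 can be sketched as a lookup table from category to functional module. The handler bodies below are stubs, and the map-based design is an assumption about how such routing could be organized:

```go
package main

import "fmt"

// handler represents a functional module invoked by the decision engine;
// the module names mirror the text, but the handlers are placeholders.
type handler func(instruction string) string

var modules = map[string]handler{
	"dialogue broadcast":  func(s string) string { return "voice module broadcasts reply to: " + s },
	"museum navigation":   func(s string) string { return "navigation module plans route for: " + s },
	"exhibit display":     func(s string) string { return "front-end module displays: " + s },
	"exhibit explanation": func(s string) string { return "voice module explains: " + s },
}

// dispatch routes a recognized instruction to the module registered for
// its category, returning an error for unknown categories.
func dispatch(category, instruction string) (string, error) {
	h, ok := modules[category]
	if !ok {
		return "", fmt.Errorf("no module for category %q", category)
	}
	return h(instruction), nil
}

func main() {
	out, _ := dispatch("museum navigation", "visit exhibits A, B and C")
	fmt.Println(out)
}
```

In the real system each handler would issue an MQTT or HTTP call to the module, as described above, rather than return a string.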
Specifically, when the voice instruction refers to visiting a plurality of exhibits, its category is the museum navigation category, and step S102 comprises: the decision engine module invokes the navigation module, which determines the touring priority of the exhibits from the order in which they appear in the voice instruction and determines the tour route from that priority. Before the route is determined, the number of visitors at each of the exhibits is determined, and the route is then computed from both the visitor counts and the touring priorities. If the change in the number of visitors at an exhibit not yet toured exceeds a preset threshold, the route is re-determined from the latest visitor counts at the exhibits not yet toured.
In an embodiment of this specification, when the voice instruction refers to visiting several exhibits, or exhibits of a certain category, the instruction belongs to the museum navigation category. The decision engine module invokes the navigation module, which determines the touring priority from the order in which the exhibits were mentioned and then determines a route. For example, if the user says "I want to see exhibit A, exhibit B, and exhibit C", the input order is "A, B, C", so the touring priority is "A, B, C". The input order is taken to reflect the user's degree of preference for each exhibit; any other order that expresses preference may be used instead, and the user may also specify the order explicitly.
In an embodiment of this specification, before the tour route is generated, the flow of people at each exhibit can be taken into account, so that the user avoids viewing an exhibit at its peak traffic and gets a better touring experience. Specifically, the current number of visitors at each requested exhibit is obtained in advance, and the route is determined from both the visitor counts and the touring priorities. For example, suppose the priority order is "exhibit A, exhibit B, exhibit C", but exhibit A is currently crowded, so that guiding the user there now would hurt the viewing experience, while exhibit B currently has few visitors; the route can then be set to "exhibit B, exhibit A, exhibit C". Generating the route from both touring priority and visitor flow lets the user view the preferred exhibits while avoiding peaks, which gives a better touring experience.
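One way to combine touring priority with visitor counts is a simple heuristic that defers crowded exhibits while keeping priority order within each group. The patent does not specify the exact reordering rule, so the threshold and the heuristic below (crowded exhibits moved to the end of the route) are assumptions:

```go
package main

import (
	"fmt"
	"sort"
)

// Exhibit pairs a requested exhibit with its touring priority (its
// position in the voice instruction) and its current visitor count.
type Exhibit struct {
	Name     string
	Priority int // lower value = mentioned earlier = preferred first
	Visitors int
}

// planRoute orders exhibits so that uncrowded ones (at or below the
// threshold) come first in priority order, followed by crowded ones,
// also in priority order.
func planRoute(exhibits []Exhibit, threshold int) []string {
	sorted := append([]Exhibit(nil), exhibits...)
	sort.SliceStable(sorted, func(i, j int) bool {
		ci, cj := sorted[i].Visitors > threshold, sorted[j].Visitors > threshold
		if ci != cj {
			return !ci // uncrowded before crowded
		}
		return sorted[i].Priority < sorted[j].Priority
	})
	route := make([]string, len(sorted))
	for i, e := range sorted {
		route[i] = e.Name
	}
	return route
}

func main() {
	route := planRoute([]Exhibit{
		{"A", 1, 25}, // crowded, so deferred despite highest priority
		{"B", 2, 4},
		{"C", 3, 7},
	}, 10)
	fmt.Println(route)
}
```

The same function can be re-run with fresh visitor counts for the untoured exhibits to implement the route re-adjustment described below.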
In one embodiment of this specification, after the tour route has been determined, it can be adjusted according to the number of visitors at the exhibits not yet toured. If the number of people at an untoured exhibit exceeds a preset threshold, for example ten, that exhibit can be moved later in the touring order, and untoured exhibits with at most ten people can be moved earlier, producing a new route. Timely route adjustment further improves the touring experience, reduces the user's waiting time, and avoids peak crowds.
Specifically, when the voice instruction asks to display one or more exhibits, its category is the exhibit display category, and step S102 comprises: the decision engine module receives the exhibit display instruction and determines the name of the exhibit to be displayed from it; it then retrieves the corresponding exhibit information from the exhibit library by name and sends that information to the front-end module, which completes the exhibit display function.
In one embodiment of this specification, when the voice instruction asks to display one or more exhibits, the corresponding category is the exhibit display category. The decision engine module receives the exhibit display instruction and determines the names of the exhibits to be displayed from it; if several exhibits are to be displayed, their names are determined in the order the user mentioned them. An exhibit library is preset that contains the exhibit information for every exhibit in the museum; the information for the named exhibits is retrieved from the library and sent to the front-end module, which displays it, completing the exhibit display function. The exhibit information may include one or more of the following: the exhibit name, a picture of the exhibit, a three-dimensional image of the exhibit, the exhibit's era, and the exhibit's historical story.
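The exhibit-library lookup might be modeled as below. The struct fields mirror the information listed above, but the field layout, the library contents, and the example entry are invented for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// ExhibitInfo holds the kinds of exhibit information the text lists.
type ExhibitInfo struct {
	Name    string
	Picture string // e.g. a path or URL to the exhibit picture
	Era     string
	Story   string
}

// exhibitLibrary stands in for the preset exhibit library; the entry is
// invented for the example.
var exhibitLibrary = map[string]ExhibitInfo{
	"bronze ding": {
		Name:    "bronze ding",
		Picture: "ding.png",
		Era:     "Shang dynasty",
		Story:   "A ritual cooking vessel.",
	},
}

// lookupExhibit retrieves the information the decision engine would send
// to the front-end module for display.
func lookupExhibit(name string) (ExhibitInfo, error) {
	info, ok := exhibitLibrary[name]
	if !ok {
		return ExhibitInfo{}, errors.New("exhibit not in library: " + name)
	}
	return info, nil
}

func main() {
	info, err := lookupExhibit("bronze ding")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("displaying %s (%s): %s\n", info.Name, info.Era, info.Story)
}
```

For a multi-exhibit request, the decision engine would call this lookup once per name, in the order the user mentioned them.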
Specifically, when the voice instruction asks for one or more exhibits to be explained, its category is the exhibit explanation category, and step S102 comprises: the decision engine module receives the exhibit explanation instruction and determines the explanation content from the name of the exhibit to be explained; the infrared sensing module determines the explanation volume according to whether the number of people present exceeds a preset threshold; and the decision engine module generates a control instruction from the explanation content and volume and sends it to the voice broadcast module, which completes the exhibit explanation function.
In one embodiment of the present specification, when the voice instruction is to explain one or more exhibits, the category of the voice instruction is the exhibit explanation category. Generally, a user who needs an exhibit explained is in one of two scenarios: standing in front of an exhibit, or in the middle of a tour. A user standing in front of an exhibit typically asks the robot to explain at least one exhibit of interest, while a user in the middle of a tour typically asks the robot to explain the exhibit at the robot's current position. The embodiments of the present specification do not limit the application scenario. The decision engine module receives the exhibit explanation voice instruction, which contains the names of the exhibits to be explained; if the user needs several exhibits explained, the explanation order may follow the user's input order or an order the user specifies. The explanation content is then determined from the name of the exhibit to be explained, and may include the exhibit's name, age, historical story, and so on.
In an embodiment of the present specification, the decision engine module further includes an infrared sensing module, which determines whether the number of people in the user's current environment exceeds a preset threshold and sets the device volume for the explanation according to that count. For example, if the infrared sensing module detects 20 people in the current environment, exceeding the preset threshold of 5, the device volume is set to 6; the device volume levels run from 1 (lowest) to 10 (highest). The explanation volume and explanation content are sent to the voice module, which adjusts the volume and broadcasts the content to complete the exhibit explanation function. Setting the explanation volume by headcount avoids the poor user experience of an explanation that is too quiet for a large crowd.
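The volume selection above reduces to a threshold comparison; the threshold of 5 and the raised level of 6 follow the example in the text, while the normal level of 3 is an assumption since the patent does not state it:

```python
def explanation_volume(people_count, threshold=5, normal=3, raised=6):
    """Pick a device volume on the 1-10 scale: use the raised level when
    the detected crowd exceeds the threshold, otherwise the normal level.
    (normal=3 is an assumed default; threshold=5 and raised=6 come from
    the example in the description.)"""
    return raised if people_count > threshold else normal
```

With 20 people detected, this returns volume 6, matching the worked example in the paragraph above.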
Fig. 2 is a schematic flowchart of another method for implementing the functions of a museum robot according to an embodiment of the present specification. As shown in fig. 2, the method includes:
the user inputs a wake-up command word to wake the robot; once the robot is woken, the user can hold a voice conversation with it. The robot's voice module recognizes the speech and parses the intent. If the recognized intent is in the voice module's preset intent library, the voice module realizes the corresponding function itself; if not, it sends the recognized intent to the decision engine module of the museum robot.
The decision engine module of the museum robot receives the intent recognized by the voice module and, according to that intent, calls the skill library and controls the corresponding module to realize the corresponding function. The skill library includes the following skill categories: a voice category, a navigation category, and a display category.
When the skill category the decision engine module of the museum robot retrieves from the skill library for the recognized intent is the voice category, it controls the voice module to realize the voice function; when it is the navigation category, it controls the navigation module to realize the navigation function; and when it is the display category, it controls the front-end module to realize the display function.
When the decision engine module of the museum robot receives the intent recognized by the voice module and determines that the intent is not in the decision engine module's preset intent library, a voice prompt is issued through the voice module, for example playing "I don't know". The robot then continues to wait for the next intent input.
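The two-stage flow of Fig. 2 can be sketched as follows; the intent names and the contents of the intent library and skill library are invented for illustration and are not taken from the patent:

```python
VOICE_INTENTS = {"chat"}          # intents the voice module handles itself (assumed)
SKILL_LIBRARY = {                 # decision-engine mapping: intent -> skill category (assumed)
    "explain_exhibit": "voice",
    "plan_tour": "navigation",
    "show_exhibit": "display",
}

def dispatch(intent):
    """Stage 1: the voice module handles intents it knows.
    Stage 2: the decision engine maps the rest to a skill category;
    unknown intents trigger a voice prompt, then the robot waits again."""
    if intent in VOICE_INTENTS:
        return f"voice module handles: {intent}"
    category = SKILL_LIBRARY.get(intent)
    if category is None:
        return "voice prompt: I don't know"
    return f"{category} module handles: {intent}"
```

For example, a tour-planning intent is routed to the navigation module, while an unrecognized intent falls through to the "I don't know" prompt.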
An embodiment of the present specification further provides a device for implementing the functions of a museum robot, as shown in fig. 3, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to: receive, through a voice module, a voice instruction input from the outside, recognize the voice instruction, and determine the category of the voice instruction, the categories of voice instructions including: a dialogue broadcast category, an exhibit explanation category, a museum navigation category and an exhibit display category; send the recognized voice instruction to a decision engine module; and call, through the decision engine module, the corresponding functional module according to the category of the voice instruction, so that the functional module completes the corresponding operation according to the voice instruction.
One or more embodiments of the present specification provide a non-volatile computer storage medium storing computer-executable instructions configured to: receive, through a voice module, a voice instruction input from the outside, recognize the voice instruction, and determine the category of the voice instruction, the categories of voice instructions including: a dialogue broadcast category, an exhibit explanation category, a museum navigation category and an exhibit display category; send the recognized voice instruction to a decision engine module; and call, through the decision engine module, the corresponding functional module according to the category of the voice instruction, so that the corresponding functional module completes the corresponding operation according to the voice instruction.
In at least one embodiment provided by this specification, whatever voice the user inputs can be recognized by the museum robot's voice module, which sends the recognized instruction to the robot's decision engine module so that the decision engine module can call the other relevant modules to complete the corresponding operations. The museum robot can thus implement voice conversation, exhibit explanation, in-museum navigation, and exhibit display, providing more comprehensive services to the user.
The embodiments in this specification are described in a progressive manner; for identical or similar parts the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the embodiments of the apparatus, the device, and the non-volatile computer storage medium are substantially similar to the method embodiments, so their description is brief; for relevant details, refer to the corresponding parts of the method embodiments.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The above description is merely one or more embodiments of the present disclosure and is not intended to limit the present disclosure. Various modifications and alterations to one or more embodiments of the present description will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of one or more embodiments of the present specification should be included in the scope of the claims of the present specification.

Claims (10)

1. A method for implementing the functions of a museum robot, applied to a robot comprising a voice module and a decision engine module, the method comprising:
the voice module receives a voice instruction input from the outside, identifies the voice instruction and determines the category of the voice instruction, wherein the category of the voice instruction comprises: the system comprises a dialogue broadcast class, an exhibit explanation class, a museum navigation class and an exhibit display class;
sending the recognized voice instruction to the decision engine module;
and the decision engine module calls the corresponding functional module according to the type of the voice instruction so that the corresponding functional module can complete the corresponding operation according to the voice instruction.
2. The method of claim 1, wherein before the voice module of the robot receives the voice command inputted from the outside, the method further comprises:
the voice recognition module converts the input awakening command words into awakening instructions and sends the awakening instructions to the voice module, and the voice module awakens the museum robot.
3. The method as claimed in claim 1, wherein the robot further comprises a navigation module, and when the voice command is to visit a plurality of exhibits, the category of the voice command is a museum navigation category;
the decision engine module calls the corresponding functional module according to the category of the voice instruction so that the corresponding functional module completes the corresponding operation according to the voice instruction, and the method specifically comprises the following steps:
and the decision-making calls the navigation module according to the museum navigation class so that the navigation module determines the priority of the plurality of exhibit tours according to the input sequence of the plurality of exhibits in the voice command and determines the tour route according to the priority of the plurality of exhibit tours.
4. The method of claim 3, wherein before determining the tour route according to the priorities of the plurality of exhibit tours, the method further comprises:
determining a number of visitors to each exhibit of the plurality of exhibits;
the determining of the tour route according to the tour priorities of the plurality of exhibits specifically comprises:
determining the tour route according to the number of visitors of each exhibit in the plurality of exhibits and the tour priorities of the plurality of exhibits.
5. The method of claim 4, wherein after determining the tour route according to the number of visitors of each exhibit in the plurality of exhibits and the tour priorities of the plurality of exhibits, the method further comprises:
and if the variation condition of the number of the persons who visit the non-touring exhibit exceeds a preset threshold value, re-determining the route of the tour according to the latest number of the persons who visit the non-touring exhibit.
6. The method of claim 1, wherein the robot further comprises a front-end module, and when the voice command is to display one or more exhibits, the category of the voice command is an exhibit display category;
the decision engine module calls the corresponding functional module according to the category of the voice instruction so that the corresponding functional module completes the corresponding operation according to the voice instruction, and the method specifically comprises the following steps:
the decision engine module receives the exhibit display instruction and determines the name of the exhibit to be displayed according to the exhibit display instruction;
and calling corresponding exhibit information in an exhibit library according to the exhibit name, and sending the exhibit information to the front-end module so that the front-end module can complete the exhibit display function.
7. The method of claim 1, wherein the decision engine module of the robot comprises an infrared sensing module, and the voice module comprises a voice broadcast module; when the voice instruction is to explain one or more exhibits, the category of the voice instruction is the exhibit explanation category;
the decision engine module calls the corresponding functional module according to the category of the voice instruction so that the corresponding functional module completes the corresponding operation according to the voice instruction, and the method specifically comprises the following steps:
the decision engine module receives a voice instruction of an exhibit explanation class and determines explanation content according to an exhibit name of an exhibit to be explained in the voice instruction;
the infrared sensing module determines the explanation volume according to whether the number of people at the scene exceeds a preset threshold;
and the decision engine module generates a control instruction according to the explanation content and the explanation volume and sends the control instruction to the voice broadcasting module to complete the explanation function of the exhibit.
8. The method for realizing the function of the museum robot according to claim 1, wherein the recognizing the voice command and determining the category of the voice command specifically comprises:
the voice module converts the voice instruction into text data and performs semantic feature extraction on the text data;
presetting semantic features corresponding to each category of voice instruction, and expanding the semantic features of the same scope to obtain the expanded semantic features of each category of voice instruction;
and matching the semantic features extracted from the text data with preset expanded semantic features, and determining the category corresponding to the voice command.
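The matching step of claim 8 can be sketched as a simple overlap score between extracted features and each category's expanded feature set; the set-intersection metric is an assumption, since the claim does not specify how matching is performed:

```python
def classify_instruction(features, expanded_features):
    """Return the category whose expanded feature set overlaps most with
    the features extracted from the text data, or None if nothing matches.
    (Overlap counting is an assumed matching rule, not the claimed one.)"""
    best, best_score = None, 0
    for category, feats in expanded_features.items():
        score = len(features & feats)   # set intersection size
        if score > best_score:
            best, best_score = category, score
    return best
```

For example, features extracted from "plan a tour to visit the bronze hall" would overlap the navigation category's feature set and select it.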
9. A function implementing device of a museum robot comprises:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to: receive, through a voice module, a voice instruction input from the outside, recognize the voice instruction, and determine the category of the voice instruction, the categories of voice instructions comprising: a dialogue broadcast category, an exhibit explanation category, a museum navigation category and an exhibit display category; send the recognized voice instruction to a decision engine module; and call, through the decision engine module, the corresponding functional module according to the category of the voice instruction, so that the corresponding functional module completes the corresponding operation according to the voice instruction.
10. A non-transitory computer storage medium storing computer-executable instructions configured to: receive, through a voice module, a voice instruction input from the outside, recognize the voice instruction, and determine the category of the voice instruction, the categories of voice instructions comprising: a dialogue broadcast category, an exhibit explanation category, a museum navigation category and an exhibit display category; send the recognized voice instruction to a decision engine module; and call, through the decision engine module, the corresponding functional module according to the category of the voice instruction, so that the corresponding functional module completes the corresponding operation according to the voice instruction.
CN202110717051.6A 2021-06-28 2021-06-28 Method, device and medium for realizing functions of museum robot Pending CN113488040A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110717051.6A CN113488040A (en) 2021-06-28 2021-06-28 Method, device and medium for realizing functions of museum robot

Publications (1)

Publication Number Publication Date
CN113488040A true CN113488040A (en) 2021-10-08


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114020225A (en) * 2021-12-30 2022-02-08 山东新一代信息产业技术研究院有限公司 Double-screen interaction system and interaction method for explaining robot

Citations (11)

Publication number Priority date Publication date Assignee Title
CN105222783A (en) * 2015-09-07 2016-01-06 广东欧珀移动通信有限公司 A kind of exhibition room air navigation aid and mobile terminal
CN107122846A (en) * 2017-03-27 2017-09-01 中国农业大学 A kind of scenic spot guidance method, service end, client and system
CN109227536A (en) * 2018-08-20 2019-01-18 南京邮电大学 Intelligent greeting explains machine person speech interaction control system and control method
CN109571499A (en) * 2018-12-25 2019-04-05 广州天高软件科技有限公司 A kind of intelligent navigation leads robot and its implementation
CN109598547A (en) * 2018-11-30 2019-04-09 安徽振伟展览展示有限公司 A kind of large size exhibition room museum intelligent guidance system
CN109887503A (en) * 2019-01-20 2019-06-14 北京联合大学 A kind of man-machine interaction method of intellect service robot
CN110703665A (en) * 2019-11-06 2020-01-17 青岛滨海学院 Indoor interpretation robot for museum and working method
CN111639818A (en) * 2020-06-05 2020-09-08 上海商汤智能科技有限公司 Route planning method and device, computer equipment and storage medium
CN112418145A (en) * 2020-12-04 2021-02-26 南京岁卞智能设备有限公司 Intelligent guide system for large exhibition hall based on machine vision and big data analysis
CN112419944A (en) * 2020-11-20 2021-02-26 关键 Museum exhibition system
CN112905675A (en) * 2021-03-19 2021-06-04 中网道科技集团股份有限公司 Intelligent navigation method, device, equipment and medium for commercial complex




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination