CN113134835B - Robot explanation method and device, intelligent equipment and storage medium


Info

Publication number
CN113134835B
CN113134835B (application number CN202110360072.7A)
Authority
CN
China
Prior art keywords
target area
area
preset
robot
explanation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110360072.7A
Other languages
Chinese (zh)
Other versions
CN113134835A (en)
Inventor
顾震江
梁朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Uditech Co Ltd
Original Assignee
Uditech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Uditech Co Ltd filed Critical Uditech Co Ltd
Priority to CN202110360072.7A
Publication of CN113134835A
Application granted
Publication of CN113134835B
Legal status: Active

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention is applicable to the technical field of robots and provides a robot explanation method and device, an intelligent device, and a storage medium. The robot explanation method is applied to a robot and comprises the following steps: acquiring scene information of a preset area, wherein the preset area is an area divided in advance within a scene; determining a target area according to the scene information, wherein the target area is a preset area that meets an explanation triggering condition; and moving to the target area and playing the commentary material associated with the target area in the target area. The invention realizes intelligent explanation with a robot, which not only reduces labor cost but also keeps the explanation level stable; the intelligence of the explanation improves its effectiveness and enhances the user experience.

Description

Robot explanation method and device, intelligent equipment and storage medium
Technical Field
The invention relates to the technical field of robots, and in particular to a robot explanation method, a robot explanation device, an intelligent device, and a storage medium.
Background
At present, in public places such as product exhibition halls, museums, science and technology museums, and even large hotels, commentators need to convey information to visitors on site, informing them of on-site precautions or explaining the exhibits.
However, training commentators requires a great deal of manpower, material resources, and time, so the cost is high. Commentary is also repetitive work: when visitors are numerous or commentators are in short supply, the commentators come under great pressure, the level of commentary is difficult to keep stable, the commentary loses its effect, and the visitor experience suffers.
In summary, the prior art has the problems of high cost, difficulty in maintaining a stable explanation level, and poor explanation effectiveness.
Disclosure of Invention
The embodiments of the invention provide a robot explanation method, a robot explanation device, an intelligent device, and a storage medium, which can solve the prior-art problems of high cost, difficulty in maintaining a stable explanation level, and poor explanation effectiveness.
In a first aspect, an embodiment of the present invention provides a robot explanation method, applied to a robot, including:
acquiring scene information of a preset area, wherein the preset area is an area divided in advance in a scene;
determining a target area according to the scene information, wherein the target area is a preset area meeting the explanation triggering condition;
and moving to the target area, and playing the commentary material related to the target area in the target area.
In a possible implementation manner of the first aspect, the determining the target area according to the context information includes:
if the number of people in the preset area is greater than or equal to a preset number of people threshold, determining the preset area as a target area;
or if the personnel density in the preset area is greater than or equal to a preset personnel density threshold value, determining that the preset area is a target area;
or if the environmental information in the preset area reaches a preset environmental reminding threshold value, determining that the preset area is a target area.
In a possible implementation manner of the first aspect, the determining a target area according to the scene information includes:
if more than one preset area meeting the explanation triggering conditions exists, the robot calculates the distance between each preset area and the robot;
and determining one preset area which is closest to the robot in all the preset areas meeting the explanation triggering conditions as a target area.
In a possible implementation manner of the first aspect, the moving to the target area and playing the narration material associated with the target area in the target area include:
when the robot moves to the specified explanation area in the target area, stopping moving, and starting playing the explanation material associated with the target area;
or when the robot moves into the target area, playing the narration material associated with the target area is started, and when the robot reaches the specified narration area in the target area, the robot stops moving.
In a possible implementation manner of the first aspect, the moving to the target area and playing the narration material associated with the target area in the target area include:
and adjusting the light and/or the playing volume in the target area according to the commentary material.
In a possible implementation manner of the first aspect, the moving to the target area and playing the narration material associated with the target area in the target area includes:
acquiring the environmental volume in the target area;
and adjusting the volume of the commentary material related to the target area according to the environment volume.
In a possible implementation manner of the first aspect, the playing the narration material associated with the target area in the target area includes:
monitoring scene information in the target area in real time in the process of playing the commentary material;
determining whether the target area meets the comment triggering condition or not according to the scene information monitored in real time;
and if the target area does not meet the comment triggering condition, stopping comment.
In a second aspect, an embodiment of the present invention provides a robot explanation apparatus, applied to a robot, including:
the scene information acquiring unit is used for acquiring scene information of a preset area, wherein the preset area is an area which is divided in advance in a scene;
a target area determining unit, configured to determine a target area according to the scene information, where the target area is a preset area that meets an explanation triggering condition;
and the scene explanation unit is used for moving to the target area and playing the explanation material related to the target area in the target area.
In a third aspect, an embodiment of the present invention provides an intelligent device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the robot explanation method according to the first aspect is implemented.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium which stores a computer program; when executed by a processor, the computer program implements the robot explanation method according to the first aspect.
In a fifth aspect, an embodiment of the present invention provides a computer program product which, when run on a smart device, causes the smart device to perform the robot explanation method according to the first aspect.
In the embodiment of the invention, the robot acquires scene information of a preset area, wherein the preset area is an area divided in advance within a scene; it then determines a target area according to the scene information, the target area being a preset area that meets an explanation triggering condition; finally it moves to the target area and plays the commentary material associated with the target area there. This scheme uses a robot in place of manual labor to convey information to visitors, which not only reduces labor cost but also keeps the explanation level stable; the intelligence of the explanation improves its effectiveness and enhances the user experience.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating an implementation of the robot explanation method provided by an embodiment of the present invention;
fig. 2 is a flowchart illustrating a specific implementation of step S20 of the robot explanation method provided by an embodiment of the present invention;
fig. 3 is a flowchart illustrating a specific implementation of step S30 of the robot explanation method provided by an embodiment of the present invention;
fig. 4 is a flowchart illustrating another specific implementation of step S30 of the robot explanation method provided by an embodiment of the present invention;
fig. 5 is a block diagram illustrating the structure of a robot explanation apparatus provided by an embodiment of the present invention;
fig. 6 is a schematic diagram of an intelligent device provided by an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [described condition or event]", or "in response to detecting [described condition or event]".
Furthermore, in the description of the present invention and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present invention. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather mean "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In public places such as large hotels, museums, and exhibition halls, information such as introductions and commentary generally needs to be conveyed to visitors to inform them of notices or related information. The robot explanation method provided by the embodiments of the invention can be applied to robots. When a visitor enters a preset area, the robot conveys information to the visitor.
Fig. 1 shows the implementation flow of the robot explanation method provided by an embodiment of the present invention; the method flow includes steps S10 to S30. The specific implementation principle of each step is as follows:
s10, scene information of a preset area is obtained, and the preset area is an area which is divided in advance in a scene.
The scene refers to an application scene of the robot, and specifically, the scene may be any one of a hotel, a museum, a science and technology museum, and a product exhibition hall. The scene information includes at least one of the number of persons, the density of persons, and the environmental information.
In one embodiment, before the step S10, a scene in which the robot is located is divided into a plurality of preset areas according to a spatial area plan. Illustratively, taking a hotel application scenario as an example, in the same hotel, the following can be planned: elevator waiting area, viewing area, tea room area, smoking area, public area, guest room area, restaurant area, toilet area. The robot acquires scene information of each preset area.
In one embodiment, the robot is in communication connection with the internet of things equipment arranged in each preset area through the internet of things technology, and the robot acquires scene information in the preset area through the internet of things equipment. The Internet of things equipment can be a camera, a temperature sensor, an air pressure sensor, an air quality sensor and the like.
In a possible implementation manner, the internet of things devices set in different preset areas may be the same or different. Specifically, the corresponding internet of things device may be set in a targeted manner according to the characteristics of the preset area. In this case, the types of scene information of different preset areas acquired by the robot may be different.
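For illustration, the scene-information acquisition described above can be sketched in Python as follows. This is a minimal sketch, not part of the patent: the hub object, its devices_in_area call, the device kind field, and the SceneInfo record are all assumed names.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SceneInfo:
    area_id: str
    people_count: Optional[int] = None       # from a camera, if the area has one
    people_density: Optional[float] = None   # persons per square metre
    environment: dict = field(default_factory=dict)  # temperature, air quality, ...

def read_scene_info(hub, area_id: str) -> SceneInfo:
    # Poll whatever IoT devices this preset area happens to have; since
    # different areas may carry different device sets, every field is optional.
    info = SceneInfo(area_id=area_id)
    for device in hub.devices_in_area(area_id):  # hypothetical hub API
        reading = device.read()
        if device.kind == "camera":
            info.people_count = reading.get("people_count")
            info.people_density = reading.get("people_density")
        else:  # temperature / air-pressure / air-quality sensors, etc.
            info.environment[device.kind] = reading
    return info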
And S20, determining a target area according to the scene information, wherein the target area is a preset area meeting the comment triggering condition.
In the embodiment of the present invention, each preset area is associated with a corresponding explanation triggering condition, and the explanation triggering condition is used to determine whether the preset area needs the robot to provide commentary and introduction.
In one possible embodiment, the explanation triggering condition is set in advance according to the characteristics of each preset area; in other words, different preset areas may have different explanation triggering conditions. Specifically, each preset area has an area number used to identify it. In the embodiment of the present invention, a correspondence between area numbers and explanation triggering conditions is pre-established. The robot acquires the number of a preset area and, according to the acquired scene information of that area, determines whether the area meets the explanation triggering condition corresponding to its area number. If so, the preset area is determined to be the target area.
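The correspondence between area numbers and explanation triggering conditions described above could be stored as a simple lookup table. The sketch below assumes illustrative area names and threshold values that are not taken from the patent.

# Per-area trigger-condition table (all names and numbers are assumptions).
TRIGGER_CONDITIONS = {
    "elevator_waiting_area": {"min_people": 5},
    "viewing_area":          {"min_density": 0.8},          # persons / m^2
    "smoking_area":          {"env_alert": {"air_quality": 100.0}},
}

def condition_for(area_id: str) -> dict:
    # Unknown areas get an empty condition, i.e. they never trigger.
    return TRIGGER_CONDITIONS.get(area_id, {})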
As a possible embodiment of the present invention, the step S20: determining a target area according to the scene information, specifically comprising:
and if the number of the personnel in the preset area is greater than or equal to a preset personnel number threshold value, determining the preset area as a target area.
In some embodiments, the robot is in communication connection with a camera in a preset area; the robot acquires image information of the preset area and determines the number of people in the area from the image information. The robot then judges whether the number of people in the preset area is greater than or equal to the preset people-number threshold. If yes, it determines that the preset area triggers the robot explanation and determines the preset area as a target area.
Or if the personnel density in the preset area is greater than or equal to a preset personnel density threshold value, determining that the preset area is a target area.
In some embodiments, the robot is in communication connection with a camera in a preset area; the robot acquires image information of the preset area and determines the density of people in the area from the image information. The robot then judges whether the density of people in the preset area is greater than or equal to the preset people-density threshold. If yes, it determines that the preset area triggers the robot explanation and determines the preset area as a target area.
Or if the environmental information in the preset area reaches a preset environmental reminding threshold value, determining that the preset area is a target area.
In some embodiments, the robot is in communication connection with a sensor in a preset area, and the robot acquires environmental information in the preset area through the sensor and judges whether the environmental information in the preset area reaches a preset environmental reminding threshold value. If yes, determining that the preset area triggers the robot explanation, and determining the preset area as a target area.
In some embodiments, the robot determines the target area by tour inspection. In this embodiment, the robot is equipped with a sensor, such as an image sensor or a laser radar sensor, and detects environmental information in a preset area through the sensor mounted on the robot during a tour process, and determines whether the environmental information in the preset area reaches a preset environmental alert threshold. If yes, determining that the preset area triggers the robot explanation, and determining the preset area as a target area.
Illustratively, the robot is provided with a camera that photographs the surroundings of the robot as image information. In the embodiment of the invention, the robot acquires the image information shot by the camera in the tour or inspection process, extracts the depth information of the image information and determines the environmental information of the preset area.
Illustratively, the robot acquires laser scanning information scanned by a laser radar sensor, and determines environmental information according to the laser scanning information. In the embodiment of the invention, the laser radar of the robot is used for scanning to determine the environmental information of the preset area in the tour process of the robot.
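Combining the three alternative tests above, a hedged sketch of the trigger check (continuing the SceneInfo and TRIGGER_CONDITIONS sketches from earlier, which are assumptions rather than patent-defined structures) might look like this:

def meets_trigger(info: SceneInfo, cond: dict) -> bool:
    # Test 1: number of people >= preset people-number threshold.
    if info.people_count is not None and "min_people" in cond:
        if info.people_count >= cond["min_people"]:
            return True
    # Test 2: people density >= preset density threshold.
    if info.people_density is not None and "min_density" in cond:
        if info.people_density >= cond["min_density"]:
            return True
    # Test 3: any environmental reading reaches its reminding threshold.
    for key, limit in cond.get("env_alert", {}).items():
        if info.environment.get(key, float("-inf")) >= limit:
            return True
    return False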
As a possible implementation manner of the present invention, fig. 2 shows the specific implementation process of step S20 of the robot explanation method provided in the embodiment of the present invention, namely determining the target area according to the scene information, detailed as follows:
and S201, if more than one preset area satisfying the explanation triggering conditions is met, the robot calculates the distance between each preset area and the robot.
And S202, determining one preset area which is closest to the robot in all the preset areas meeting the explanation triggering conditions as a target area.
In the embodiment of the invention, if at least two preset areas are detected to meet the explanation triggering condition at the same detection time, then according to the shortest-distance principle the nearest preset area is determined as the target area, and the robot moves to that preset area for explanation.
In some embodiments, if there is more than one preset area satisfying the comment triggering condition, the preset areas satisfying the comment triggering condition are all determined as target areas, and the distance between each target area and the robot is calculated. And the target areas are prioritized according to the distance from near to far, the robot sequentially moves to the target areas according to the sequencing result to perform explanation playing, and the robot preferentially moves to the target area closest to the robot.
Specifically, when there are two target areas, a first target area and a second target area are determined according to the prioritization result, where the first target area is the candidate area closest to the robot. When the robot finishes playing the commentary in the first target area, it moves to the second target area to play the commentary there.
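The nearest-first prioritization of steps S201 and S202 can be sketched as below; straight-line distance is an assumption made for brevity, whereas a deployed robot would more likely rank areas by path length on its map.

import math

def prioritize_targets(robot_pos, triggered_areas):
    # Sort the preset areas that satisfy the triggering condition by
    # distance to the robot, nearest first; element 0 is the first target.
    def dist(area):
        ax, ay = area["position"]   # assumed (x, y) coordinates per area
        rx, ry = robot_pos
        return math.hypot(ax - rx, ay - ry)
    return sorted(triggered_areas, key=dist)

With two triggered areas, the first and second elements of the returned list play the roles of the first and second target areas described above.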
In a possible implementation manner, after the robot determines the target area, the robot performs face recognition on the people in the target area, determines whether the people in the target area are workers according to the face recognition result, and cancels the comment playing if the people in the target area are all workers.
And S30, moving to the target area, and playing the commentary material related to the target area in the target area.
In the embodiment of the present invention, when the narration material associated with the target area has finished playing and the number of people in the target area is smaller than the preset people-number threshold, or the density of people in the target area is smaller than the preset people-density threshold, the robot's narration in the target area is complete.
In some embodiments, the narration material is stored in the robot body or on a cloud server. Each narration material is associated one-to-one with a preset area. The narration materials include multimedia data such as slides (PPT), text, audio, and video. In the embodiment of the invention, the narration material includes information such as the air quality, temperature, and humidity associated with the preset area, or an introduction to the exhibits located in the preset area.
In some embodiments, the number of people or the density of people in the target area is obtained in real time or periodically. If the number of people in the preset area is still greater than or equal to the preset people-number threshold and the density of people is still greater than or equal to the preset density threshold, but the robot has already played the explanation material in the target area a preset number of times and another target area awaits explanation, the robot completes the explanation in the current target area, leaves it, and goes to the next target area to play the explanation there.
As a possible embodiment of the present invention, the step S30: moving to the target area, and playing the narration material associated with the target area in the target area, specifically including:
and when the robot moves to the specified explanation area in the target area, stopping moving and starting playing the explanation material associated with the target area.
In the embodiment of the present invention, each preset area is provided with a designated explanation area, which is a dedicated area divided in advance for the robot's explanation playback. When the robot moves to the specified explanation area in the target area, it stops moving and simultaneously triggers playback of the explanation material associated with the target area.
The step S30: moving to the target area, and playing the commentary material associated with the target area in the target area, specifically including:
and when the robot moves into the target area, playing the narration material associated with the target area, and when the robot reaches the specified narration area in the target area, stopping moving.
In the embodiment of the invention, once the robot enters the target area, it starts to play the narration material associated with the target area while continuing to move toward the specified explanation area; when it reaches the specified explanation area in the target area, it stops moving. The robot does not stop playing the narration material while it is moving.
In the embodiment of the invention, each preset area is provided with at least one appointed explanation area, and the walking paths among all the explanation areas form a topological map. The robot moves and walks along the topological map.
In some embodiments, the robot obtains the position of the target area and automatically navigates to the target area according to the topological map and the position of the target area.
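As a sketch of navigation along the topological map of designated explanation areas, the adjacency list and breadth-first route below are illustrative assumptions; the patent only requires that the robot walk along the topological map to the target area's position.

from collections import deque

# Topological map: nodes are designated explanation areas, edges are
# walkable paths between them (all names are assumptions).
TOPOLOGY = {
    "lobby": ["elevator_waiting_area", "tea_room_area"],
    "elevator_waiting_area": ["lobby", "viewing_area"],
    "tea_room_area": ["lobby"],
    "viewing_area": ["elevator_waiting_area"],
}

def plan_route(start: str, goal: str) -> list:
    # Breadth-first search gives the route with the fewest hops.
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in TOPOLOGY.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return []  # goal unreachable from start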
As one possible embodiment of the present invention, the step S30: moving to the target area, and playing the narration material associated with the target area in the target area, specifically including:
and adjusting the light and/or the playing volume in the target area according to the commentary material.
In the embodiment of the invention, the content of the commentary material is analyzed, and the light and/or the playing volume in the target area are adjusted according to the analysis result, thereby enhancing the commentary atmosphere, improving the commentary effect, and giving the people on the scene a better experience.
In one embodiment, the robot may carry test equipment for oxygen content detection, air composition detection, virus detection, etc., to provide a safe environment for visitors.
As one possible embodiment of the present invention, as shown in fig. 3, the step S30: moving to the target area, and playing the narration material associated with the target area in the target area, specifically including:
s301, obtaining the environmental volume in the target area.
S302, according to the environment volume, adjusting the volume of the commentary material related to the target area.
In the embodiment of the invention, the robot detects the ambient volume in the target area in real time while playing the commentary there, and automatically adjusts its playback volume according to the detected ambient sound so that the explanation remains effective. In one embodiment, the ambient volume is the average volume of the current environment; for example, if the measured average volume is 30 dB, the robot adjusts its playback volume to a decibel value equal to or greater than that average.
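A minimal sketch of the volume adjustment: the embodiment only requires playback at or above the average ambient volume, so the 6 dB margin and the clamping bounds below are illustrative assumptions.

def playback_volume_db(ambient_db, margin_db=6.0, floor_db=40.0, ceiling_db=85.0):
    # Keep narration a few dB above the measured ambient level, clamped
    # to a comfortable range.
    return min(max(ambient_db + margin_db, floor_db), ceiling_db)

# Example from the text: ambient 30 dB -> playback at 40 dB here,
# which is "equal to or greater than the average volume".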
As a possible implementation manner of the present invention, as shown in fig. 4, the playing the narrative material associated with the target area in the target area specifically includes:
and S311, monitoring scene information in the target area in real time in the process of playing the commentary material.
S312, determining whether the target area meets the comment triggering condition according to the scene information monitored in real time.
And S313, if the target area does not meet the comment triggering condition, stopping comment.
In the embodiment of the invention, during commentary playback the robot monitors the scene information in the target area in real time and determines, according to the scene information monitored in real time, whether the target area still meets the explanation triggering condition; once it determines that the target area no longer meets the condition, it stops the commentary and moves to the next target area or to a preset waiting area. It is to be understood that stopping the commentary includes stopping immediately, and also includes stopping after the current commentary material finishes playing.
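Putting the monitoring loop together (reusing the read_scene_info and meets_trigger sketches above; the robot facade and its methods are assumed names, not an API defined by the patent):

import time

def narrate(robot, area, cond, poll_s=2.0):
    robot.start_playback(area.material)
    while robot.is_playing():
        # Re-check the triggering condition while the material plays.
        info = read_scene_info(robot.hub, area.area_id)
        if not meets_trigger(info, cond):
            # Stop immediately, or let the current material finish first,
            # as the embodiment allows either behaviour.
            robot.stop_playback()
            break
        time.sleep(poll_s)
    robot.go_to_next_target_or_waiting_area()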
As can be seen from the above, in the embodiment of the present invention the robot acquires scene information of a preset area (an area pre-divided within a scene), determines a target area according to the scene information (a preset area meeting the explanation triggering condition), finally moves to the target area, and plays the commentary material associated with the target area there. This scheme uses a robot in place of manual labor to convey information to visitors, which not only reduces labor cost but also keeps the explanation level stable; the intelligence of the explanation improves its effectiveness and enhances the user experience.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Corresponding to the robot explanation method described in the above embodiments, fig. 5 shows a block diagram of a robot explanation apparatus provided in an embodiment of the present invention, which is applied to a robot, and only a part related to the embodiment of the present invention is shown for convenience of explanation.
Referring to fig. 5, the robot explanation apparatus includes: a scene information acquisition unit 51, a target area determination unit 52, and a scene interpretation unit 53, wherein:
a scene information obtaining unit 51, configured to obtain scene information of a preset region, where the preset region is a region divided in advance in a scene;
a target area determining unit 52, configured to determine a target area according to the scene information, where the target area is a preset area that meets an explanation triggering condition;
the scene commentary unit 53 is configured to move to the target area, and play the commentary material associated with the target area in the target area.
In a possible implementation, the context information includes at least one of a number of people, a density of people, and environment information, and the target area determining unit 52 includes:
the first determining module is used for determining the preset area as a target area if the number of the personnel in the preset area is greater than or equal to a preset personnel number threshold value;
the second determining module is used for determining the preset area as a target area if the personnel density in the preset area is greater than or equal to a preset personnel density threshold value;
and the third determining module is used for determining the preset area as the target area if the environmental information in the preset area reaches a preset environmental reminding threshold value.
In a possible implementation, the target area determination unit 52 includes:
the distance calculation module is used for calculating the distance between each preset area and the robot if more than one preset area satisfies the explanation triggering condition;
and the fourth determining module is used for determining one preset area which is closest to the robot in all the preset areas meeting the explanation triggering conditions as a target area.
In one possible implementation, the scene narration unit 53 includes:
the first explanation module is used for stopping moving when the robot moves to the specified explanation area in the target area and starting playing explanation materials related to the target area;
and the second comment module is used for starting playing comment materials related to the target area when the robot moves into the target area, and stopping moving when the robot reaches a specified comment area in the target area.
In one possible implementation, the scene interpretation unit 53 includes:
and the playing effect adjusting module is used for adjusting the light and/or the playing volume in the target area according to the commentary material.
In one possible implementation, the scene narration unit 53 includes:
the environment volume acquisition module is used for acquiring the environment volume in the target area;
and the volume adjusting module is used for adjusting the volume of the commentary material associated with the target area according to the environment volume.
In one possible implementation, the scene narration unit 53 includes:
the information monitoring module is used for monitoring scene information in the target area in real time in the process of playing the commentary material;
the playing control module is used for determining whether the target area meets the comment triggering condition or not according to the scene information monitored in real time; and if the target area does not meet the comment triggering condition, stopping comment.
As can be seen from the above, in the embodiment of the present invention the robot acquires scene information of a preset area (an area pre-divided within a scene), determines a target area according to the scene information (a preset area meeting the explanation triggering condition), finally moves to the target area, and plays the commentary material associated with the target area there. This scheme uses a robot in place of manual labor to convey information to visitors, which not only reduces labor cost but also keeps the explanation level stable; the intelligence of the explanation improves its effectiveness and enhances the user experience.
It should be noted that, because the contents of information interaction, execution process, and the like between the above-mentioned apparatuses/units are based on the same concept as the method embodiment of the present invention, specific functions and technical effects thereof can be referred to specifically in the method embodiment section, and are not described herein again.
An embodiment of the present invention further provides an intelligent device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the steps of any one of the robot explanation methods shown in fig. 1 to 4.
An embodiment of the present invention further provides a computer-readable storage medium which stores a computer program; when the computer program is executed by a processor, it implements the steps of any one of the robot explanation methods shown in fig. 1 to 4.
An embodiment of the present invention further provides a computer program product which, when run on a server, causes the server to perform the steps of any one of the robot explanation methods shown in fig. 1 to 4.
Fig. 6 is a schematic diagram of an intelligent device according to an embodiment of the present invention. As shown in fig. 6, the smart device 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62 stored in the memory 61 and executable on the processor 60. The processor 60, when executing the computer program 62, implements the steps in the robot explanation method embodiments described above, such as steps S10 to S30 shown in fig. 1. Alternatively, the processor 60, when executing the computer program 62, implements the functions of the modules/units in the apparatus embodiments described above, such as the functions of the units 51 to 53 shown in fig. 5.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to implement the present invention. The one or more modules/units may be a series of computer-readable instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 62 in the smart device 6.
The smart device 6 may be a smart robot. The smart device 6 may include, but is not limited to, a processor 60 and a memory 61. Those skilled in the art will appreciate that fig. 6 is merely an example of the smart device 6 and does not constitute a limitation of the smart device 6, which may include more or fewer components than those shown, combine certain components, or use different components; for example, the smart device 6 may also include input-output devices, network access devices, buses, etc.
The processor 60 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the smart device 6, such as a hard disk or memory of the smart device 6. The memory 61 may also be an external storage device of the smart device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), etc., provided on the smart device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the smart device 6. The memory 61 is used for storing the computer program and other programs and data required by the smart device. The memory 61 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, for the information interaction, execution process, and other contents between the above devices/units, the specific functions and technical effects of the embodiments of the method of the present invention based on the same concept can be referred to the section of the embodiments of the method, and are not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the present invention. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the methods in the embodiments of the present invention may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed, instructs the related hardware to implement the steps of the method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to an apparatus/terminal device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard drive, or a magnetic or optical disk. In certain jurisdictions, computer-readable media may not include electrical carrier signals or telecommunications signals, in accordance with legislation and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (8)

1. A robot explanation method is applied to a robot and is characterized by comprising the following steps:
dividing a scene where the robot is located into a plurality of preset areas according to space area planning;
acquiring scene information of a preset area, wherein the preset area is an area divided in advance in a scene;
determining a target area according to the scene information, wherein the target area is a preset area meeting the explanation triggering condition;
moving to the target area, and playing the commentary material associated with the target area in the target area, including: monitoring scene information in the target area in real time in the process of playing the commentary material; determining whether the target area meets the explanation triggering condition according to the scene information monitored in real time; if the target area does not meet the explanation triggering condition, stopping the explanation and moving to the next target area or going to a preset waiting area; the robot carries test equipment for oxygen content detection, air component detection or virus detection, and the commentary material comprises the air quality, temperature and humidity associated with the target area or introduction information of related exhibits located in the target area;
the determining a target area according to the scene information includes:
setting corresponding comment triggering conditions in advance according to the characteristics of each preset area, and determining whether the preset area meets the corresponding comment triggering conditions or not according to the acquired scene information of the preset area;
if more than one preset area meeting the explanation triggering conditions exists, the robot calculates the distance between each preset area and the robot;
and determining one preset area which is closest to the robot in all the preset areas meeting the explanation triggering conditions as a target area.
2. The robot explanation method of claim 1, wherein said scene information comprises at least one of a number of people, a density of people, and environmental information, and said determining a target area based on said scene information comprises:
if the number of people in the preset area is larger than or equal to a preset number of people threshold, determining the preset area as a target area;
or if the personnel density in the preset area is greater than or equal to a preset personnel density threshold value, determining the preset area as a target area;
or if the environmental information in the preset area reaches a preset environmental reminding threshold value, determining that the preset area is a target area.
3. The robot explanation method of claim 1, wherein the moving to the target area and playing commentary material associated with the target area at the target area comprises:
when the robot moves to the specified explanation area in the target area, stopping moving, and starting playing the explanation material associated with the target area;
or when the robot moves into the target area, playing the narration material associated with the target area is started, and when the robot reaches the specified narration area in the target area, the robot stops moving.
4. The robot explanation method of claim 1, wherein the moving to the target area and playing commentary material associated with the target area at the target area comprises:
and adjusting the light and/or playing volume in the target area according to the commentary material.
5. The robot explanation method of claim 1, wherein the moving to the target area and playing commentary material associated with the target area at the target area comprises:
acquiring the environmental volume in the target area;
and adjusting the volume of the commentary material related to the target area according to the environment volume.
6. A robot explanation device is characterized by being applied to a robot and comprising:
the scene information acquiring unit is used for acquiring scene information of a preset area, wherein the preset area is an area divided in advance in a scene;
a target area determining unit, configured to determine a target area according to the scene information, where the target area is a preset area that meets an explanation trigger condition;
the scene explanation unit is used for moving to the target area and playing explanation materials related to the target area in the target area;
the scene interpretation unit includes:
the information monitoring module is used for monitoring scene information in the target area in real time in the process of playing the commentary material;
the playing control module is used for determining whether the target area meets the comment triggering condition or not according to the scene information monitored in real time; if the target area does not meet the explanation triggering condition, stopping explanation, and moving to the next target area or moving to a preset waiting area; the robot carries test equipment for oxygen content detection, air component detection or virus detection, and the explanation material comprises air quality, temperature and humidity related to a target area or introduction information of related exhibits located in the target area;
the robot comment device is also used for setting corresponding comment triggering conditions in advance according to the characteristics of each preset area and determining whether the preset area meets the corresponding comment triggering conditions or not according to the acquired scene information of the preset area;
the target region determination unit includes:
the distance calculation module is used for calculating the distance between each preset area and the robot if more than one preset area satisfies the explanation triggering condition;
a fourth determining unit, configured to determine, as the target area, the preset area closest to the robot among all preset areas satisfying the explanation triggering condition;
the robotic commentary apparatus is further for:
and dividing the scene where the robot is located into a plurality of preset areas according to the space area planning.
7. A smart device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the robot explanation method of any one of claims 1 to 5.
8. A computer-readable storage medium in which a computer program is stored which, when executed by a processor, implements the robot explanation method according to any one of claims 1 to 5.
CN202110360072.7A 2021-04-02 2021-04-02 Robot explanation method and device, intelligent equipment and storage medium Active CN113134835B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110360072.7A CN113134835B (en) 2021-04-02 2021-04-02 Robot explanation method and device, intelligent equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110360072.7A CN113134835B (en) 2021-04-02 2021-04-02 Robot explanation method and device, intelligent equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113134835A CN113134835A (en) 2021-07-20
CN113134835B (en) 2023-01-20

Family

ID=76810415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110360072.7A Active CN113134835B (en) 2021-04-02 2021-04-02 Robot explanation method and device, intelligent equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113134835B (en)

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008084135A (en) * 2006-09-28 2008-04-10 Toshiba Corp Movement control method, mobile robot and movement control program
TWI388956B (en) * 2009-05-20 2013-03-11 Univ Nat Taiwan Science Tech Mobile robot, method for planning paths of manipulating target objects thereof
CN103699126B (en) * 2013-12-23 2016-09-28 中国矿业大学 The guidance method of intelligent guide robot
CN107224252A (en) * 2017-07-21 2017-10-03 长沙稻冰工程技术有限公司 Cleaning systems control method, cleaning systems and computer-readable recording medium
CN110225141B (en) * 2019-06-28 2022-04-22 北京金山安全软件有限公司 Content pushing method and device and electronic equipment
CN110703665A (en) * 2019-11-06 2020-01-17 青岛滨海学院 Indoor interpretation robot for museum and working method
CN112288945A (en) * 2020-09-02 2021-01-29 苏州穿山甲机器人股份有限公司 Method for improving efficacy of mobile vending robot
CN112565396B (en) * 2020-12-02 2023-04-07 深圳优地科技有限公司 Information pushing method and device, robot and storage medium
CN112418145B (en) * 2020-12-04 2021-07-16 北京矩阵志诚科技有限公司 Intelligent guide system for large exhibition hall based on machine vision and big data analysis

Also Published As

Publication number Publication date
CN113134835A (en) 2021-07-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant