CN112238458B - Robot management device, robot management method, and robot management system - Google Patents

Robot management device, robot management method, and robot management system

Info

Publication number
CN112238458B
CN112238458B (application CN202010642902.0A)
Authority
CN
China
Prior art keywords
visitor
robot
unit
facility
movement instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010642902.0A
Other languages
Chinese (zh)
Other versions
CN112238458A (en)
Inventor
坂口浩之
塚本智宏
栁田梦
新井敬一郎
达富由树
Current Assignee
Honda Motor Co Ltd
Original Assignee
Honda Motor Co Ltd
Priority date
Filing date
Publication date
Application filed by Honda Motor Co Ltd
Publication of CN112238458A
Application granted
Publication of CN112238458B
Legal status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/008 Manipulators for service tasks
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The robot management device (4) of the present invention dispatches a self-traveling guidance robot (3) that serves visitors to an exhibition hall of an automobile sales shop. The robot management device includes: an in-facility image acquisition unit (451) that acquires in-exhibition-hall images captured by a plurality of imaging units provided in the exhibition hall; a visitor specification unit (454) that identifies visitors to the exhibition hall from the in-exhibition-hall images acquired by the in-facility image acquisition unit (451); and a robot movement instruction unit (457) that outputs a movement instruction to the guidance robot (3) so that the guidance robot (3) moves to the vicinity of a visitor. When the visitor specification unit (454) determines that a visitor cannot be identified from the in-exhibition-hall images, the robot movement instruction unit (457) outputs a movement instruction to the guidance robot (3) so that the guidance robot (3) moves to the vicinity of the visitor who could not be identified.

Description

Robot management device, robot management method, and robot management system
Technical Field
The present invention relates to a robot management device, a robot management method, and a robot management system.
Background
Recently, as one means of addressing labor shortages, various techniques for robot control systems using guidance robots have been proposed (see, for example, patent document 1). The robot control system described in patent document 1 enables a guidance robot (communication robot) disposed in a facility to receive visitors appropriately by working in combination with sensors provided in the facility.
However, when a visitor is identified using sensors installed in a facility, identification may be difficult depending on the visitor's position and posture. For example, when a visitor walks while talking with a companion, the walking direction and the face orientation may differ, making it difficult to identify the visitor. When a visitor is difficult to identify, efficient reception using the guidance robot is also difficult.
Prior art literature
Patent document 1: Japanese Patent No. 6142306 (JP 6142306 B).
Disclosure of Invention
The present invention provides a robot management device that manages a self-traveling guidance robot serving visitors to a facility. The robot management device includes: an in-facility image acquisition unit that acquires an in-facility image captured by an imaging unit provided in the facility; a visitor specification unit that identifies a visitor to the facility from the in-facility image acquired by the in-facility image acquisition unit; and a robot movement instruction unit that outputs a movement instruction to the guidance robot so that the guidance robot moves to the vicinity of the visitor. When the visitor specification unit determines that a visitor cannot be identified from the in-facility image, the robot movement instruction unit outputs a movement instruction to the guidance robot so that the guidance robot moves to the vicinity of the visitor who could not be identified.
Another aspect of the present invention is a robot management method for managing a self-traveling guidance robot that serves visitors to a facility. The robot management method includes: an in-facility image acquisition step of acquiring an in-facility image captured by an imaging unit provided in the facility; a visitor specification step of identifying a visitor to the facility from the in-facility image acquired in the in-facility image acquisition step; and a robot movement instruction step of outputting a movement instruction to the guidance robot so that the guidance robot moves to the vicinity of the visitor. In the robot movement instruction step, when it is determined in the visitor specification step that a visitor cannot be identified from the in-facility image, a movement instruction is output to the guidance robot so that the guidance robot moves to the vicinity of the visitor who could not be identified.
A robot management system according to still another aspect of the present invention includes: the above robot management device; an imaging unit that is provided in a facility and is capable of communicating with the robot management device; and a guidance robot that is disposed in the facility, is capable of traveling by itself, and is capable of communicating with the robot management device.
Drawings
The objects, features, and advantages of the present invention will become more apparent from the following description of embodiments taken in conjunction with the accompanying drawings.
Fig. 1 is a schematic configuration diagram of a robot management system using a server device constituting a robot management device according to an embodiment of the present invention.
Fig. 2 is a perspective view schematically showing a guidance robot constituting the robot management system shown in fig. 1.
Fig. 3 is a block diagram showing a main part structure of the robot management system shown in fig. 1.
Fig. 4 is a block diagram showing a main part structure of the server apparatus shown in fig. 3.
Fig. 5 is a flowchart showing an example of the guidance robot dispatch process executed by the arithmetic unit of the server device of fig. 3.
Fig. 6 is a flowchart showing an example of the visitor specification process executed by the arithmetic unit of the server device of fig. 3.
Detailed Description
An embodiment of the present invention will be described below with reference to figs. 1 to 6. The robot management device according to the embodiment constitutes part of a robot management system that dispatches a guidance robot to a visitor to a facility and provides services such as simple guidance to the visitor (hereinafter referred to as a robot dispatch guidance service).
Examples of businesses that may provide the robot dispatch guidance service include sales shops that retail various products, as well as museums, art galleries and other art-related facilities, science museums, memorial halls, exhibitions, and seminars. Examples of sales shops that retail various products include department stores, supermarkets, and specialty shops; specialty shops include various dedicated stores such as automobile sales shops. In the following embodiment, an example will be described in which the robot management device is configured as a server device installed at an automobile sales shop, and a guidance robot disposed in the exhibition hall of the automobile sales shop is dispatched to visitors to the exhibition hall.
Fig. 1 is a schematic configuration diagram of a robot management system 100 including a server device 4 that constitutes a robot management device according to an embodiment of the present invention. As shown in fig. 1, the robot management system 100 of the present embodiment includes imaging units 11, a guidance robot 3, and the server device 4, with the robot management device mainly constituted by the server device 4. In the robot management system 100, a guidance robot 3 capable of traveling by itself is disposed in an exhibition hall 1, a facility of an automobile sales shop in which vehicles 2 are exhibited. A plurality of imaging units 11 provided on the ceiling 10 of the exhibition hall 1 each photograph visitors to the exhibition hall 1, and the server device 4 owned by the automobile sales shop identifies each visitor from the captured images of the exhibition hall 1 (hereinafter referred to as in-exhibition-hall images or in-facility images).
As shown in fig. 1, when a friend group 101 of three visitors A, B, and C and a parent-child group 102 of two visitors D and E come to the exhibition hall 1, the server device 4 identifies visitors A to C from the in-exhibition-hall image of the friend group 101 captured by the imaging units 11, and identifies visitors D and E from the in-exhibition-hall image of the parent-child group 102. The server device 4 extracts face images of visitors A to E from these in-exhibition-hall images and identifies the visitors A to E based on the face images.
If any of visitors A to E cannot be identified from the in-exhibition-hall images, the server device 4 dispatches the guidance robot 3 to the visitor who could not be identified. For example, when visitor E (e.g., a child) of the parent-child group 102 cannot be identified, the server device 4 instructs the guidance robot 3 to move so that the guidance robot 3 is dispatched to visitor E. Because the guidance robot 3 moves to the vicinity of visitor E, visitor E is likely to notice the guidance robot 3 and turn toward it, changing position and posture.
Here, the guidance robot 3 will be briefly described with reference to fig. 2. Fig. 2 is a perspective view schematically showing the guidance robot 3 constituting part of the robot management system 100 shown in fig. 1. As shown in fig. 2, the guidance robot 3 is formed in a substantially gourd shape, with a head 301 above a narrow middle portion 302 and a trunk 303 below it. The head 301 is slightly larger than the trunk 303, giving the robot, which consists essentially of only a head and a trunk, a friendly and approachable overall appearance. The height of the guidance robot 3 is about 110 cm, and it has no hands or feet.
The guidance robot 3 can move in any direction through 0 to 360 degrees, including forward, backward, and obliquely, by means of a traveling device 304 provided at the lower end of the trunk 303; the specific structure of the traveling device 304 is not described here. Because the head 301 is slightly larger than the trunk 303 and the robot has no hands or feet, a child, for example, finds it easy to hug and communicate with the robot. The guidance robot 3 can also sway in the front-rear and left-right directions by means of the traveling device 304, and this swaying makes it easier for a visitor to notice the robot's approach. In this way, the guidance robot 3 can perform actions that facilitate communication with visitors.
A substantially elliptical face 305 is provided on the head 301 of the guidance robot 3, and the face 305 can display the robot's expression, simple text and images, and the like. A pair of virtual eyes 306 symbolizing eyes are displayed on the face 305, and various expressions can be produced through the pair of virtual eyes 306; for example, expressions such as happiness and enjoyment can be shown by changing the shape of the virtual eyes 306. A virtual mouth 307 symbolizing a mouth is also displayed on the face 305, and changing the shapes of the virtual eyes 306 and the virtual mouth 307 together makes changes in expression easy to understand.
The guidance robot 3 is configured so that the pair of virtual eyes 306 can change position within the face 305, which allows the robot to express a movement of its line of sight. By shifting the positions of the virtual eyes 306 in front of a visitor, the guidance robot 3 draws the visitor's gaze in the direction it appears to be looking. Performing a turning motion at the same time makes it even easier to attract the visitor's gaze.
By dispatching the guidance robot 3 configured as described above to, for example, the unidentified visitor E, the robot can perform an action (an attraction action) that causes visitor E to change position and posture so that the imaging units 11 can photograph visitor E's face, or the robot can photograph visitor E's face directly. As a result, visitor E can be identified, enabling efficient and smooth reception using the guidance robot. To realize such a robot dispatch guidance service at the automobile sales shop satisfactorily, the robot management system 100 with the server device 4 of the present embodiment is configured as follows.
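As a minimal illustration of the dispatch decision just described, the following Python sketch selects, from a set of detected people, the positions of those whose faces could not be recognized; these become the targets of movement instructions. The `DetectedPerson` type and its fields are hypothetical and are not part of the disclosed system.

```python
from dataclasses import dataclass


@dataclass
class DetectedPerson:
    """One person extracted from an in-exhibition-hall image (hypothetical type)."""
    position: tuple          # (x, y) floor coordinates in the exhibition hall
    face_recognized: bool    # whether the visitor could be identified


def dispatch_targets(detections):
    """Return positions of visitors whose face recognition failed;
    these are the people a guidance robot should be dispatched to."""
    return [d.position for d in detections if not d.face_recognized]
```

A single robot would then be routed to these positions in turn, rather than one robot per target, since the embodiment prefers a small number of robots.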
Fig. 3 is a block diagram showing the main configuration of the robot management system 100 shown in fig. 1. As shown in fig. 3, the imaging units 11, the guidance robot 3, and the server device 4 are connected to a communication network 5 such as a wireless communication network, the internet, or a telephone network. For convenience, only one imaging unit 11 is shown in fig. 3, but as shown in fig. 1 there are actually a plurality of imaging units 11. Likewise, only one guidance robot 3 is shown in fig. 3, but a plurality of guidance robots 3 may be provided. However, if a large number of guidance robots 3 were disposed in the exhibition hall 1, they might give visitors a feeling of oppression and would increase costs for the automobile sales shop, so a small number of guidance robots 3 is preferable.
As shown in fig. 3, each imaging unit 11 includes a communication unit 111, a camera unit 112, a sensor unit 113, a storage unit 114, and an arithmetic unit 115. The communication unit 111 can communicate wirelessly with the server device 4 and the guidance robot 3 via the communication network 5. The camera unit 112 is a camera with an imaging element such as a CCD or CMOS sensor and can photograph visitors to the exhibition hall 1. The sensor unit 113 is a sensor such as a motion sensor or human-presence sensor and can detect the position and movement of a visitor in the exhibition hall 1. A plurality of camera units 112 and sensor units 113 are arranged on the ceiling 10 of the exhibition hall 1 so that a visitor can be photographed and detected at any position in the exhibition hall 1.
The storage unit 114 has a volatile or nonvolatile memory, not shown. The storage unit 114 stores various data, various programs executed by the arithmetic unit 115, and the like. For example, the storage unit 114 temporarily stores the intra-exhibition image captured by the camera unit 112 and the position information of the visitor detected by the sensor unit 113.
The arithmetic unit 115 has a CPU and executes predetermined processing based on signals received from outside the imaging unit 11 through the communication unit 111, various programs stored in the storage unit 114, and the like, and outputs predetermined control signals to the communication unit 111, the camera unit 112, the sensor unit 113, and the storage unit 114.
For example, based on a signal received from the server device 4 via the communication unit 111 and a predetermined program, the arithmetic unit 115 outputs a control signal for capturing an image of the exhibition hall 1 to the camera unit 112 and a control signal for detecting the visitor's position to the sensor unit 113. The arithmetic unit 115 then outputs a control signal for transmitting the captured in-exhibition-hall image and the detected position information to the server device 4 via the communication unit 111. Through this processing, the interior of the exhibition hall 1 is photographed, the position of the visitor is detected, and the in-exhibition-hall image and the visitor's position information are transmitted to the server device 4.
As shown in fig. 3, the guidance robot 3 has, as functional components, a communication unit 31, an input unit 32, an output unit 33, a camera unit 34, a traveling unit 35, a sensor unit 36, a storage unit 37, and an arithmetic unit 38. The communication unit 31 can communicate wirelessly with the server device 4 and the imaging units 11 via the communication network 5. The input unit 32 includes various switch buttons (not shown) operated during maintenance and the like, as well as a microphone (not shown) capable of picking up a visitor's voice.
The output unit 33 includes a speaker (not shown) capable of outputting sound and a display unit 331 capable of displaying an image. The display unit 331 forms the face 305 of the guidance robot 3, and the pair of virtual eyes 306, the character image, and the like are displayed on the display unit 331. The display unit 331 may be configured to be capable of displaying a pair of virtual eyes 306, a character image, and the like, and may be configured to include a liquid crystal panel, a projector, a screen, and the like.
The camera unit 34 is a camera with an imaging element such as a CCD or CMOS sensor and can photograph visitors to the exhibition hall 1. The camera unit 34 is provided, for example, on the head 301 of the guidance robot 3, which makes it easy to photograph a visitor's face. From the viewpoint of capturing the visitor's face, the camera unit 34 is preferably provided near the pair of virtual eyes 306 of the guidance robot 3.
The traveling unit 35 is constituted by the traveling device 304 that enables the guidance robot 3 to travel by itself. The traveling unit 35 includes a battery and a motor, and travels by driving the motor with power from the battery; it can be constructed using known electric drive techniques. The sensor unit 36 includes sensors that detect the traveling and stopped states of the guidance robot 3, such as a travel speed sensor, an acceleration sensor, and a gyro sensor, as well as sensors that detect the state around the guidance robot 3, such as an obstacle sensor, a human-presence sensor, and a moving-object sensor.
The storage unit 37 has a volatile or nonvolatile memory, not shown. The storage unit 37 stores various data, various programs executed by the arithmetic unit 38, and the like. The storage unit 37 also temporarily stores data related to the reception of visitors; for example, requests made by a visitor to the guidance robot 3, explanations given to the visitor by the guidance robot 3, and the like are temporarily stored.
The storage unit 37 stores, as functional components of its memory, an exhibition hall database (exhibition hall D/B) 371 and a communication database (communication D/B) 372. The exhibition hall database 371 stores data on the layout of the exhibited vehicles 2, display stands, and the like arranged in the exhibition hall 1, and is referenced when the guidance robot 3 moves within the exhibition hall. The communication database 372 stores data used in the speech recognition processing, speech output processing, and the like executed when the guidance robot 3 communicates with a visitor, and is referenced during such communication.
The arithmetic unit 38 has a CPU and executes predetermined processing based on signals received from outside the guidance robot 3 through the communication unit 31, signals input through the input unit 32, signals detected by the sensor unit 36, and the various programs and data stored in the storage unit 37, and outputs predetermined control signals to the communication unit 31, the output unit 33, the camera unit 34, the traveling unit 35, and the storage unit 37.
More specifically, the arithmetic unit 38 outputs control signals to the traveling unit 35 and the storage unit 37 based on signals received from the server device 4 via the communication unit 31 and signals detected by the sensor unit 36; the guidance robot 3 is thereby dispatched to the visitor. The arithmetic unit 38 also outputs control signals to the camera unit 34 and the communication unit 31 based on signals received from the server device 4, whereby the visitor's face is photographed and the captured face image is transmitted to the server device 4.
The arithmetic unit 38 also outputs a control signal to the output unit 33 (display unit 331) based on signals received from the server device 4 via the communication unit 31, thereby changing the expression of the guidance robot 3 and the line of sight of the pair of virtual eyes 306. The arithmetic unit 38 further outputs control signals to the output unit 33 and the storage unit 37 based on signals input through the input unit 32, enabling the guidance robot 3 to communicate with the visitor.
Fig. 4 is a block diagram showing the main configuration of the server device 4 shown in fig. 3. As shown in fig. 4, the server device 4 includes a communication unit 41, an input unit 42, an output unit 43, a storage unit 44, and an arithmetic unit 45. The server device 4 may also be configured using virtual server functions in the cloud, or with its functions distributed across multiple machines.
The communication unit 41 is configured to be capable of wirelessly communicating with the imaging unit 11 and the guidance robot 3 via the communication network 5 (see fig. 3). The input unit 42 includes various switches operable by a user, such as a touch panel and a keyboard, and a microphone capable of inputting sound. The user referred to herein is a clerk of an automobile sales shop. The output unit 43 includes, for example, a monitor for displaying characters and images, a speaker for outputting sound, and the like.
The storage unit 44 has a volatile or nonvolatile memory, not shown. The storage unit 44 stores various data, various programs executed by the arithmetic unit 45, and the like. The storage unit 44 stores, as functional components of its memory, a guidance robot database (guidance robot D/B) 441, an exhibition hall database (exhibition hall D/B) 442, and a visitor database (visitor D/B) 443.
The guidance robot database 441 stores basic information and maintenance information on the guidance robots 3 used in the robot dispatch guidance service, such as each robot's ID. The exhibition hall database 442 stores data on the layout of the exhibited vehicles 2, display stands, and the like arranged in the exhibition hall 1. The exhibition hall database 442 has the same content as the exhibition hall database 371 stored in the storage unit 37 of each guidance robot 3; accordingly, the robot management system 100 may be configured with only one of the exhibition hall database 442 and the exhibition hall database 371.
The visitor database 443 stores information on visitors to the exhibition hall 1 (hereinafter referred to as visitor data). The visitor data includes a face image and a visiting record in addition to basic information such as the visitor's address, name, age, occupation, and sex. The visiting record includes the contents of interviews with the visitor as well as the contents of any prior negotiations and the like.
When a visitor visits as part of a group (hereinafter referred to as a visitor group), the visitor database 443 stores the visitor's data in association with information identifying that visitor group (hereinafter referred to as visitor group information). For example, when a visitor comes with family, the visitor's data is stored in association with information identifying the family; when a visitor comes with friends in a group of several people, the visitor's data is stored in association with information identifying that group.
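The association between visitor data and visitor group information described above could be modeled, purely for illustration, by a record such as the following; all field names are hypothetical and not specified in the patent.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class VisitorRecord:
    """Illustrative shape of one entry in the visitor database 443."""
    visitor_id: str
    name: str = ""                      # basic info: address, name, age, occupation, sex, ...
    age: Optional[int] = None
    face_image: Optional[bytes] = None  # stored face image used for matching
    visit_history: list = field(default_factory=list)  # interviews, prior negotiations
    group_id: Optional[str] = None      # visitor group information, if visiting in a group
```

A lookup by `group_id` would then recover all members of one visitor group.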
The arithmetic unit 45 has a CPU and executes predetermined processing based on signals input through the input unit 42, signals received from outside the server device 4 through the communication unit 41, and the various programs and data stored in the storage unit 44, and outputs control signals to the communication unit 41, the output unit 43, and the storage unit 44.
As shown in fig. 4, the arithmetic unit 45 has, as functional components, an in-facility image acquisition unit 451, a robot image acquisition unit 452, a robot visual line instruction unit 453, a visitor specification unit 454, a visitor group determination unit 455, a group emotion determination unit 456, and a robot movement instruction unit 457.
The in-facility image acquisition unit 451 acquires the in-exhibition-hall images captured by the plurality of imaging units 11 provided in the exhibition hall 1. Specifically, it receives, through the communication unit 41, image data (still image data and moving image data) of the interior of the exhibition hall 1 (the space in which the vehicles 2 are exhibited) captured by the imaging units 11. More specifically, the in-facility image acquisition unit 451 outputs a signal instructing the imaging units 11, via the communication unit 41, to capture in-exhibition-hall images, and then receives the resulting image data through the communication unit 41.
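The request-then-receive pattern of the in-facility image acquisition unit 451 might be sketched as follows. `ImagingUnit` and its methods are hypothetical stand-ins for the capture command and image transfer carried over the communication network 5.

```python
class ImagingUnit:
    """Minimal stand-in for one ceiling-mounted imaging unit 11 (hypothetical API)."""

    def __init__(self, unit_id, frame):
        self.unit_id = unit_id
        self._frame = frame

    def request_capture(self):
        # In the real system this would send a capture control signal
        # over the communication network; here it is a no-op.
        pass

    def fetch_image(self):
        # In the real system this would receive image data over the network.
        return self._frame


def acquire_facility_images(imaging_units):
    """Ask every imaging unit for its latest in-facility image and collect
    the results keyed by unit ID."""
    images = {}
    for unit in imaging_units:
        unit.request_capture()
        images[unit.unit_id] = unit.fetch_image()
    return images
```

The collected images would then be passed to the visitor specification unit 454 for person and face extraction.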
The robot image acquisition unit 452 acquires images, including visitors' face images, captured by the guidance robot 3 disposed in the exhibition hall 1. Specifically, it receives, through the communication unit 41, image data (still image data and moving image data) including a visitor's face image photographed by the guidance robot 3. More specifically, the robot image acquisition unit 452 outputs a signal instructing the guidance robot 3, via the communication unit 41, to photograph the visitor's face, and then receives the resulting image data through the communication unit 41.
The robot visual line instruction unit 453 directs the line of sight of the pair of virtual eyes 306 of the guidance robot 3. Specifically, it outputs a control signal (visual line control signal) for controlling the position and movement of the pair of virtual eyes 306 to the guidance robot 3 via the communication unit 41.
When this control signal is received through the communication unit 31, the guidance robot 3 controls the display unit 331 accordingly and changes the positions and movement of the pair of virtual eyes 306; that is, it moves its line of sight. When the guidance robot 3 moves its line of sight, the visitor's gaze is drawn to it, and the visitor looks in the direction of the robot's gaze. The guidance robot 3 can thus attract the visitor's gaze and cause the visitor's position or posture to change. For example, by directing the robot's line of sight toward an imaging unit 11, the visitor's gaze can also be directed toward the imaging unit 11, which enables the imaging unit 11 to photograph the visitor's face.
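Directing the robot's line of sight toward a chosen target, such as an imaging unit 11, reduces to computing the direction from the robot to that target. The following is a minimal sketch with hypothetical floor coordinates; the actual visual line control signal is not specified in the patent.

```python
import math


def gaze_angle_toward(robot_pos, target_pos):
    """Angle (radians, measured from the x-axis) that the virtual eyes
    should look to point at a target, e.g. an imaging unit, so the
    visitor's gaze is drawn the same way."""
    dx = target_pos[0] - robot_pos[0]
    dy = target_pos[1] - robot_pos[1]
    return math.atan2(dy, dx)
```

The resulting angle would be translated into positions of the virtual eyes 306 on the display unit 331.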
The visitor specification unit 454 identifies visitors to the exhibition hall 1 from the in-exhibition-hall images acquired by the in-facility image acquisition unit 451. More specifically, the visitor specification unit 454 extracts a person from an in-exhibition-hall image and extracts (recognizes) a face image from the extracted person. It then searches the visitor data stored in the visitor database 443 for visitor data whose face image matches the extracted face image, thereby identifying the visitor. When no stored visitor data matches the extracted face image, the extracted person's face image is stored in the visitor database 443 as visitor data for a new visitor.
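The matching step of the visitor specification unit 454 can be sketched as a nearest-neighbor search over stored face features, registering a new visitor when no match clears a threshold. The embedding representation, the cosine measure, and the threshold value below are illustrative assumptions, not details disclosed in the patent.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.hypot(*a) * math.hypot(*b)
    return dot / norm if norm else 0.0


def identify_visitor(face_embedding, visitor_db, threshold=0.6):
    """Match an extracted face feature against stored visitor data.
    Returns (visitor_id, known). When no stored entry clears the
    threshold, a new visitor record is registered instead."""
    best_id, best_score = None, 0.0
    for visitor_id, stored in visitor_db.items():
        score = cosine_similarity(face_embedding, stored)
        if score > best_score:
            best_id, best_score = visitor_id, score
    if best_id is not None and best_score >= threshold:
        return best_id, True
    new_id = "visitor-%d" % (len(visitor_db) + 1)
    visitor_db[new_id] = list(face_embedding)  # register as a new visitor
    return new_id, False
```

A production system would use a dedicated face-recognition model to produce the embeddings; only the database lookup logic is sketched here.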
The visitor specification unit 454 outputs a control signal to the robot movement instruction unit 457 when it determines that the visitor cannot be specified because the face image of the person extracted from the intra-exhibition image cannot be recognized. When the control signal is input, the robot movement instruction unit 457 instructs the guidance robot 3 to move so that it is dispatched to the person; specifically, it outputs a signal instructing movement (hereinafter referred to as a movement instruction) to the guidance robot 3 via the communication unit 41. The robot image acquisition unit 452 then outputs a signal instructing the guidance robot 3, via the communication unit 41, to capture a face image of the person. The image including the person's face captured by the guidance robot 3 is input to the robot image acquisition unit 452 through the communication unit 41, and the visitor specification unit 454 specifies the visitor from this face image in the same manner as described above.
The visitor group determination unit 455 determines visitor groups visiting the exhibition hall 1 from the intra-hall image acquired by the intra-facility image acquisition unit 451. More specifically, the visitor group determination unit 455 extracts a plurality of visitors from the intra-exhibition image and determines whether those visitors constitute a group based on their mutual distance, face orientation, and the like. For example, a plurality of visitors who remain within a certain distance of one another for longer than a prescribed time are determined to be a visitor group, as are, for example, a plurality of visitors talking to one another face to face. At this time, the visitor group determination unit 455 also determines the number of visitors constituting the visitor group. When only one visitor is extracted from the intra-exhibition image, that visitor alone may be determined to constitute a group; in this case, the visitor group determination unit 455 determines the number of visitors constituting the visitor group to be one.
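One way to realize the proximity-over-time heuristic is sketched below. The function name, the 2 m distance threshold, and the 30-frame minimum are illustrative assumptions, and the face-orientation cue is omitted for brevity.

```python
def infer_groups(tracks, dist_threshold=2.0, min_frames=30):
    """Cluster visitor tracks into candidate visitor groups: visitors that
    stay within dist_threshold of each other for at least min_frames are
    linked, and linked visitors are merged into one group.
    tracks: {visitor_id: [(x, y), ...]} per-frame floor positions."""
    ids = list(tracks)
    n_frames = min(len(p) for p in tracks.values())
    linked = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            # Count the frames in which this pair is close together.
            close = sum(
                1
                for f in range(n_frames)
                if ((tracks[a][f][0] - tracks[b][f][0]) ** 2
                    + (tracks[a][f][1] - tracks[b][f][1]) ** 2) ** 0.5
                <= dist_threshold
            )
            if close >= min_frames:
                linked.append((a, b))
    # Merge linked pairs into groups (simple union-find).
    parent = {v: v for v in ids}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for a, b in linked:
        parent[find(a)] = find(b)
    groups = {}
    for v in ids:
        groups.setdefault(find(v), []).append(v)
    return list(groups.values())
```

A lone visitor falls out of this naturally as a group of one, matching the single-visitor case described above.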
The visitor group determination unit 455 also determines whether a plurality of visitors specified by the visitor specification unit 454 constitute a visitor group, and identifies that group. More specifically, the visitor group determination unit 455 extracts, from the visitor data stored in the visitor database 443, the visitor group information associated with each visitor specified by the visitor specification unit 454, and determines from the extracted information whether each visitor belongs to a visitor group.
The group emotion determining section 456 determines a group emotion, that is, the emotion of the visitor group as a whole, by estimating the emotions of the plurality of visitors constituting the visitor group determined by the visitor group determination unit 455 based on the intra-exhibition image. The group emotion determining section 456 determines, for example, whether the visitor group is in a good mood (for example, a group that looks happy) or in a bad mood (for example, a group that looks irritated or on edge).
The group emotion determining section 456 determines the group emotion based on the proportions of the estimated visitor emotions. For example, the group emotion determining section 456 determines a group with a high proportion of visitors in a good mood (a low proportion of visitors in a bad mood) to be a group in a good mood, and a group with a high proportion of visitors in a bad mood (a low proportion of visitors in a good mood) to be a group in a bad mood. That is, the emotion proportion in the present embodiment means the proportion of each emotion (bad mood, good mood, etc.) within the group.
As shown in Fig. 1, for example, when a friend group 101 consisting of three visitors A, B, and C and a parent-child group 102 consisting of two visitors D and E visit the exhibition hall 1, the group emotion determining unit 456 determines the group emotions of the friend group 101 and the parent-child group 102 from the intra-exhibition-hall images captured by the plurality of imaging units 11. The group emotion determining unit 456 estimates the emotions of visitors A to C constituting the friend group 101 and, when for example the emotions of visitors A and B are estimated to be good while that of visitor C is not, determines the group emotion of the friend group 101 to be "good". Similarly, the group emotion determining section 456 estimates the emotions of visitors D and E constituting the parent-child group 102 and, when for example the emotion of visitor D (for example, a parent) is estimated to be good while that of visitor E (for example, a child) is bad, determines the group emotion of the parent-child group 102 to be "bad".
The group emotion is determined to be "good" when all the visitors constituting the visitor group are in a good mood, "slightly good" when relatively many visitors are in a good mood, "normal" when the numbers of visitors in a good mood and in a bad mood are about the same, "slightly bad" when relatively many visitors are in a bad mood, and "bad" when all the visitors are in a bad mood. When a visitor is a child, the group emotion may be determined with extra weight given to the child's emotion. Alternatively, a visitor group containing even one visitor in a bad mood may be determined to be in a "bad mood".
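A possible aggregation of per-visitor moods into a group emotion, following the proportion-based rules described above; the function name and the five label strings are illustrative assumptions.

```python
def group_emotion(moods):
    """Aggregate per-visitor moods ("good"/"bad") into one group emotion:
    unanimous moods map to the extremes, majorities to the "slightly"
    levels, and an even split to "normal"."""
    good = sum(1 for m in moods if m == "good")
    bad = len(moods) - good
    if bad == 0:
        return "good"
    if good == 0:
        return "bad"
    if good > bad:
        return "slightly good"
    if bad > good:
        return "slightly bad"
    return "normal"
```

The stricter variant mentioned above (any bad mood makes the group "bad") would simply replace the majority branches with a check on `bad > 0`.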
The emotion of each visitor can be estimated from the visitor's facial expression, behavior, and the like. Therefore, the group emotion determining section 456 estimates the visitor's emotion based on the visitor's face image, images representing the visitor's movements, and the like extracted from the intra-exhibition image. For example, when a visitor has an angry expression, or restlessly looks around, the group emotion determining section 456 estimates that the visitor is in a bad mood. Conversely, it estimates that the emotion is good when, for example, the visitor has a happy expression or the visitors are chatting animatedly with one another. The storage unit 44 stores images representing a plurality of predetermined actions (leg trembling, lively conversation, and so on), and the group emotion determining section 456 compares the intra-exhibition image against these action images to extract images representing the visitors' movements from the intra-exhibition image.
The group emotion determining section 456 estimates emotion of a visitor in the visitor group based on the robot-captured image acquired by the robot image acquiring section 452, and updates the group emotion of the visitor group. For example, when the guidance robot 3 is dispatched to the visitor group and the guidance robot 3 photographs the visitor for reasons such as failure to identify the visitor based on the intra-exhibition image, the group emotion determining unit 456 estimates the emotion of the visitor based on the face image of the visitor photographed by the guidance robot 3 and updates the group emotion based on the estimated emotion.
The robot movement instruction unit 457 instructs the guidance robot 3 to move based on the group emotion determined by the group emotion determination unit 456 so as to dispatch the guidance robot 3 to the visitor group. Specifically, the robot movement instruction unit 457 outputs an instruction to move to the vicinity of the visitor group to the guidance robot 3 via the communication unit 41.
At this time, the robot movement instruction unit 457 gives priority to the visitor group whose group emotion determined by the group emotion determination unit 456 is relatively poor, and instructs movement to the vicinity of that group. For example, if there are a visitor group whose group emotion is "slightly bad" and a visitor group whose group emotion is "bad", the guidance robot 3 is instructed to move first to the vicinity of the visitor group whose group emotion is "bad".
When there are a plurality of visitor groups with the same group emotion determined by the group emotion determining unit 456, the robot movement instruction unit 457 instructs the guidance robot 3 to move first to the vicinity of the visitor group with fewer members. For example, when two visitor groups are both determined to have the group emotion "bad", one consisting of three persons and the other of five, the robot movement instruction unit 457 prioritizes the group of three and instructs the guidance robot 3 to move to its vicinity.
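The dispatch ordering (worse group emotion first, then fewer members as the tie-breaker) can be sketched as a sort with a compound key; the numeric ranks and label names are illustrative assumptions.

```python
# Lower rank = dispatched sooner; the ranks are illustrative.
EMOTION_RANK = {"bad": 0, "slightly bad": 1, "normal": 2,
                "slightly good": 3, "good": 4}

def dispatch_order(groups):
    """groups: list of (group_id, emotion, member_count).
    Returns the groups sorted into dispatch order: worst emotion first,
    and among equal emotions, the smaller group first."""
    return sorted(groups, key=lambda g: (EMOTION_RANK[g[1]], g[2]))
```

Because Python's `sorted` compares tuples lexicographically, the member count only decides between groups whose emotion rank is equal, exactly as described above.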
Further, when the visitor specification unit 454 determines that a visitor cannot be specified based on the intra-exhibition image, the robot movement instruction unit 457 instructs the guidance robot 3 to move so that it is dispatched to the vicinity of that visitor. For example, when a signal indicating that the visitor cannot be specified is input from the visitor specification unit 454, the robot movement instruction unit 457 instructs the guidance robot 3 to move to the vicinity of the person that the visitor specification unit 454 extracted from the intra-hall image.
At this time, when the person extracted from the intra-exhibition image by the visitor specification unit 454 belongs to a visitor group, the robot movement instruction unit 457 instructs the guidance robot 3 to move to the vicinity of that visitor group. The robot movement instruction unit 457 then instructs the guidance robot 3 to perform an operation that induces a change in the position or posture of the unidentifiable visitor so that the visitor can be identified from the intra-hall image acquired by the intra-facility image acquisition unit 451. More specifically, the robot movement instruction unit 457 instructs the guidance robot 3 to operate so that the unidentifiable visitor assumes a position or posture facing the imaging unit 11.
For example, the robot movement instruction unit 457 instructs the guidance robot 3 to move to the vicinity of the unidentifiable visitor while keeping the imaging unit 11 behind the robot. That is, the robot movement instruction unit 457 instructs the guidance robot 3 to move along a route that, in plan view, connects the imaging unit 11 and the unidentifiable visitor. When that visitor notices the approaching guidance robot 3 and looks in its direction, the visitor's face can be photographed by the imaging unit 11 located behind the robot. In this case, the guidance robot 3 may also be made to perform an operation likely to attract the visitor's attention, thereby inducing a change in the visitor's position or posture.
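The route that keeps the imaging unit 11 behind the robot reduces to a simple waypoint calculation: place the robot on the camera-visitor line, a short standoff in front of the visitor on the camera side, so that a visitor who turns toward the robot also faces the camera. The function name and the 1.5 m standoff are illustrative assumptions.

```python
def approach_point(camera_xy, visitor_xy, standoff=1.5):
    """Waypoint on the line from the imaging unit through the visitor,
    `standoff` metres short of the visitor on the camera side; a robot
    standing there has the camera directly behind it as seen from the
    visitor."""
    cx, cy = camera_xy
    vx, vy = visitor_xy
    dx, dy = vx - cx, vy - cy
    norm = (dx * dx + dy * dy) ** 0.5
    if norm == 0:
        return visitor_xy          # degenerate: visitor at the camera
    ux, uy = dx / norm, dy / norm  # unit vector camera -> visitor
    return (vx - ux * standoff, vy - uy * standoff)
```

For a camera at the origin and a visitor at (10, 0), the waypoint lies at (8.5, 0): between the two, 1.5 m from the visitor.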
It is preferable that the robot visual line instruction unit 453 instruct the line-of-sight direction of the pair of virtual eyes 306 of the guidance robot 3 so as to attract the visitor's gaze and thereby induce a change in the visitor's position or posture. Specifically, it is preferable that moving the line of sight of the pair of virtual eyes 306 toward the imaging unit 11 draw the visitor's gaze toward the imaging unit 11. At this time, the robot movement instruction unit 457 may output to the guidance robot 3 a control signal instructing a rotating operation, a swinging operation, or the like in conjunction with the line-of-sight movement; performing such an operation while moving the line of sight makes it easier to attract the visitor's gaze. The rotating operation is a circling (orbiting) motion about the vertical axis, with a predetermined distance (for example, 1 m) around the visitor as its radius.
Fig. 5 is a flowchart showing an example of the guidance robot dispatch process executed by the computing unit 45 of the server apparatus 4 in Fig. 3. The guidance robot dispatch process shown in Fig. 5 is started when, for example, the exhibition hall 1 opens, and runs until it closes. When the guidance robot dispatch process starts, intra-exhibition-hall images captured by the plurality of imaging units 11 provided in the exhibition hall 1 are acquired by the processing of the intra-facility image acquisition unit 451 (intra-facility image acquisition step). The intra-facility image acquisition step is repeated at a predetermined interval (for example, 10 ms) while the guidance robot dispatch process is running.
As shown in fig. 5, first, in step S1, a visitor determination process (visitor determination step) of determining a visitor visiting the exhibition hall 1 from the intra-exhibition hall image acquired in the intra-facility image acquisition step is performed.
Fig. 6 is a flowchart showing an example of the visitor specification processing executed by the arithmetic unit 45 of the server apparatus 4 in fig. 3. As shown in fig. 6, the visitor determination process first extracts, in step S10, the person visiting the facility from the intra-facility image acquired in the intra-facility image acquisition step by the process at the visitor determination section 454. Next, in step S11, it is determined whether or not the face image of the extracted person can be extracted (can be recognized) by the processing in the visitor determination section 454. If step S11 is negative (S11: no), the flow proceeds to step S12, and the robot movement instruction unit 457 instructs the guidance robot 3 to move so that the guidance robot 3 is dispatched to the vicinity of the person determined to be unable to be extracted (robot movement instruction step).
Next, in step S13, it is determined by the processing in the visitor determination unit 454 whether or not a face image of the person determined to be unextractable can be acquired. If step S13 is negative (no in S13), the flow proceeds to step S14, and the processing in the robot visual line instruction unit 453 instructs the line-of-sight direction of the pair of virtual eyes 306 of the guidance robot 3 so as to attract the visitor's gaze and induce a change in the visitor's position or posture (robot visual line instruction step).
Next, in step S15, it is determined whether or not a face image of the person determined to be unable to be extracted can be acquired by the processing in the visitor determination unit 454. If step S15 is negative (S15: no), the flow proceeds to step S16, and the robot image acquisition unit 452 acquires a face image of a person determined to be unable to be extracted by causing the guidance robot 3 to capture a face (robot image acquisition step).
On the other hand, when each of step S11, step S13, and step S15 is affirmative (yes in step S11, step S13, and step S15), or when the face image of the person determined to be unable to be extracted (unable to be specified) is acquired in step S16, the face image of the visitor is extracted (identified) by the processing in the visitor specifying unit 454 in step S17. Next, in step S18, the visitor is determined by the processing in the visitor determination unit 454.
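The escalation in Fig. 6 (steps S10 to S18) amounts to a chain of fallbacks, each tried only when the cheaper one fails. It is sketched here with hypothetical callbacks standing in for the units described above; the function names are assumptions for the example.

```python
def determine_visitor(extract_face_from_hall, dispatch_robot,
                      direct_robot_gaze, capture_with_robot, identify):
    """Fig. 6 escalation: try the fixed imaging units, then dispatch the
    guidance robot, then use its gaze to turn the visitor, and only as a
    last resort photograph with the robot itself (S10-S16); finally the
    obtained face image is identified (S17-S18)."""
    face = extract_face_from_hall()      # S10-S11: fixed imaging units 11
    if face is None:
        dispatch_robot()                 # S12: send guidance robot 3 over
        face = extract_face_from_hall()  # S13: retry with fixed cameras
    if face is None:
        direct_robot_gaze()              # S14: attract the visitor's gaze
        face = extract_face_from_hall()  # S15: retry with fixed cameras
    if face is None:
        face = capture_with_robot()      # S16: robot's own camera
    return identify(face)                # S17-S18: match against database
```

Ordering the stages by cost means the robot is only dispatched, and its camera only used, when the fixed cameras genuinely cannot capture the face.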
When the visitor determination process (visitor determination step) is executed, next, in step S2, the processing of the visitor group determination section 455 examines the visitors determined in the visitor determination step. Next, in step S3, it is determined whether the determined visitors constitute a visitor group. When step S3 is negative (no in step S3), the process returns to step S1. On the other hand, when step S3 is affirmative (yes in step S3), the process proceeds to step S4, and the visitor group is determined. Steps S2 to S4 are hereinafter referred to as the visitor group determination process (visitor group determination step).
When the visitor group determination process is executed, next, in step S5, the emotion of each visitor constituting the visitor group is estimated by the processing of the group emotion determining section 456 (group emotion determination step). Next, in step S6, the group emotion of the visitor group is determined from the estimated visitor emotions by the processing of the group emotion determining section 456 (group emotion determination step). Next, in step S7, the robot movement instruction unit 457 instructs the guidance robot 3 to move in accordance with the group emotion of the visitor group, so that the guidance robot 3 is dispatched to the vicinity of the visitor group (robot movement instruction step).
Next, in step S8, it is determined whether or not the closing time of the exhibition hall 1 has been reached. If step S8 is negative (no in step S8), the routine returns to step S1, and steps S1 to S8 are repeated. On the other hand, when step S8 is affirmative (yes in step S8), the process ends.
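The overall dispatch loop of Fig. 5 (steps S1 to S8) can likewise be sketched with hypothetical callbacks; the function names and the dictionary-based group representation are assumptions for the example.

```python
def guidance_robot_dispatch_loop(hall_is_open, determine_visitors,
                                 determine_groups, estimate_group_emotion,
                                 dispatch_to):
    """Main loop of Fig. 5: runs from opening until closing (S8),
    determining visitors (S1), grouping them (S2-S4), estimating each
    group's emotion (S5-S6), and dispatching the robot (S7)."""
    while hall_is_open():                                    # S8
        visitors = determine_visitors()                      # S1
        groups = determine_groups(visitors)                  # S2-S4
        if not groups:
            continue                                         # S3: no group yet
        for group in groups:
            group["emotion"] = estimate_group_emotion(group) # S5-S6
        dispatch_to(groups)                                  # S7
```

In the server apparatus 4 the callbacks would correspond to the visitor specification unit 454, the visitor group determination unit 455, the group emotion determining section 456, and the robot movement instruction unit 457.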
The present embodiment can provide the following effects.
(1) The server device 4 for managing a guidance robot 3 capable of traveling by itself, which serves visitors visiting an exhibition hall 1 of an automobile sales shop, includes: an in-facility image acquisition unit 451 that acquires intra-exhibition-hall images captured by a plurality of imaging units 11 provided in the exhibition hall 1; a visitor specification unit 454 that specifies a visitor visiting the exhibition hall 1 based on the intra-exhibition-hall image acquired by the in-facility image acquisition unit 451; and a robot movement instruction unit 457 that outputs a movement instruction to the guidance robot 3 so that the guidance robot 3 moves to the vicinity of the visitor. When the visitor specification unit 454 determines that the visitor cannot be specified based on the intra-exhibition image, the robot movement instruction unit 457 outputs a movement instruction to the guidance robot 3 so that the guidance robot 3 moves to the vicinity of the visitor determined to be unspecifiable.
With this configuration, the guidance robot 3 is dispatched to the vicinity of the visitor determined to be unspecifiable. When the guidance robot 3 moves toward the visitor, the visitor's attention turns to it, and the visitor's position or posture can be made to change so that the visitor's face can be photographed by the imaging unit 11. Further, by moving to the vicinity of the visitor, the guidance robot 3 can directly photograph the visitor's face. As a result, the imaging units 11 and the guidance robot 3 can cooperate appropriately, and visitors visiting the exhibition hall 1 of the automobile sales shop can be received efficiently.
The guidance robot 3, which is dispatched to the vicinity of the visitor with poor mood, can communicate with the visitor, and can improve the mood of the visitor or suppress the mood deterioration. For example, by sending the guidance robot 3 to the vicinity of a visitor whose mood is deteriorated due to waiting for a long time, it is possible to improve the mood of such a visitor or to suppress further deterioration of the mood. This allows the subsequent negotiations and the like by sales personnel (e.g., sales staff) at the automobile sales shop to be smoothly carried out.
Also, the guidance robot 3 can ask a visitor who has been waiting a long time, or a visitor group predicted to face a long wait, about their main requests in advance, or give them a simple introduction. This allows subsequent reception by sales personnel and the like to proceed efficiently.
In this way, by using the server device 4 according to the present embodiment, for example, efficient and smooth reception using the guidance robot 3 can be performed.
(2) The server apparatus 4 further has a visitor group determination unit 455, and the visitor group determination unit 455 determines a visitor group visiting the exhibition hall 1 based on the intra-exhibition-hall image acquired by the intra-facility image acquisition unit 451. When the visitor determined to be unspecifiable by the visitor specification unit 454 belongs to a visitor group determined by the visitor group determination unit 455, the robot movement instruction unit 457 outputs a movement instruction to the guidance robot 3 so that the guidance robot 3 moves to the vicinity of that visitor group. For example, when visitors visit the exhibition hall 1 in groups, it is sometimes difficult to recognize their faces while they are talking face to face; by dispatching the guidance robot 3 to the vicinity of the group, the faces of all visitors in the group can be recognized.
(3) The robot movement instruction unit 457 also outputs a control signal to the guidance robot 3 so that the guidance robot 3 performs an operation that induces the unspecifiable visitor to change position or posture so that the visitor can be identified from the intra-exhibition image acquired by the intra-facility image acquisition unit 451. For example, the robot movement instruction unit 457 outputs a movement instruction to the guidance robot 3 so that the unspecifiable visitor comes to face the imaging unit 11 in position or posture. This makes it possible to identify the face of the visitor who could not previously be identified.
(4) The guiding robot 3 has a pair of virtual eyes 306 capable of moving the line of sight, and the server device 4 further has a robot line of sight instruction unit 453, and the robot line of sight instruction unit 453 outputs a line of sight control signal for controlling the line of sight direction of the pair of virtual eyes 306 of the guiding robot 3 to the guiding robot 3. The robot visual line instruction unit 453 outputs a visual line control signal to the guidance robot 3 that moves to the vicinity of the visitor in response to the movement instruction of the robot movement instruction unit 457, so as to cause the visitor to change in position or posture. This makes it possible to easily identify the face of the visitor determined to be unable to identify the face.
(5) A robot management method for managing a guidance robot 3 capable of traveling by itself, which serves a visitor visiting an exhibition hall 1 of an automobile sales shop. The robot management method includes the following steps performed by the server apparatus 4: an in-facility image acquisition step of acquiring in-exhibition hall images captured by a plurality of capturing units 11 provided in the exhibition hall 1; a visitor determination step of determining a visitor visiting the exhibition hall based on the intra-exhibition hall image acquired in the intra-facility image acquisition step; and a robot movement instruction step of outputting a movement instruction to the guidance robot 3 so that the guidance robot 3 moves to the vicinity of the visitor. In the robot movement instruction step, a movement instruction is output to the guidance robot 3 so that, when it is determined in the visitor determination step that the visitor cannot be determined based on the intra-exhibition image, the guidance robot 3 moves to the vicinity of the visitor determined to be unable to be determined.
With this method, as in the server device 4 described above, it is possible to appropriately perform efficient reception using the guidance robot 3 for a visitor visiting the exhibition hall 1 of the automobile sales shop. For example, efficient and smooth reception using the guidance robot 3 can be performed.
(6) The robot management system 100 includes: the server device 4; a plurality of photographing units 11 which can communicate with the server device 4 and are installed in the exhibition hall 1; and a guidance robot 3 which is disposed in the exhibition hall 1, can travel by itself, and can communicate with the server device 4. This makes it possible to properly receive a visitor visiting the exhibition hall 1 of the automobile sales shop with high efficiency using the guidance robot 3. For example, efficient and smooth reception using the guidance robot 3 can be performed.
In the above embodiment, the computing unit 45 of the server apparatus 4 is configured by including the in-facility image acquisition unit 451, the robot image acquisition unit 452, the robot line-of-sight indication unit 453, the visitor determination unit 454, the visitor group determination unit 455, the group emotion determination unit 456, and the robot movement indication unit 457, but is not limited thereto. The calculation unit 45 may include an in-facility image acquisition unit 451, a visitor determination unit 454, and a robot movement instruction unit 457. The computing unit 45 preferably includes a visitor group determination unit 455, and more preferably includes a robot visual line instruction unit 453.
In the above embodiment, the guidance robot dispatch process executed by the computing unit 45 of the server apparatus 4 includes an in-facility image acquisition step, a visitor determination step, a visitor group determination step, a group emotion determination step, and a robot movement instruction step, but is not limited thereto. The guidance robot dispatch process may include only the in-facility image acquisition step, the visitor determination step, and the robot movement instruction step.
The invention enables efficient reception, using the guidance robot, of visitors visiting a facility.
While the invention has been described in connection with preferred embodiments, it will be understood by those skilled in the art that various modifications and changes can be made without departing from the scope of the disclosure of the following claims.

Claims (7)

1. A robot management device (4) for managing a guidance robot (3) capable of traveling by itself, which serves a visitor to a visiting facility, comprising:
an in-facility image acquisition unit (451) that acquires an in-facility image captured by a capturing unit (11) provided in the facility;
a visitor determination unit (454) that determines a visitor visiting the facility based on the in-facility image acquired by the in-facility image acquisition unit (451);
a visitor group determination unit (455) that determines a visitor group that accesses the facility based on the in-facility image acquired by the in-facility image acquisition unit (451); and
a robot movement instruction unit (457) that outputs a movement instruction to the guiding robot (3) so that the guiding robot (3) moves to the vicinity of the visitor,
wherein, when the visitor specification unit (454) determines that the visitor cannot be specified based on the in-facility image, the robot movement instruction unit (457) outputs a movement instruction to the guidance robot (3) so that the guidance robot (3) moves to the vicinity of the visitor determined to be unspecifiable, and when the visitor determined to be unspecifiable by the visitor specification unit (454) belongs to a visitor group determined by the visitor group determination unit (455), the robot movement instruction unit (457) outputs a movement instruction to the guidance robot (3) so that the guidance robot (3) moves to the vicinity of the visitor group to which that visitor belongs.
2. The robot managing device according to claim 1, wherein,
the robot movement instruction unit (457) also outputs a control signal to the guidance robot (3) so that the guidance robot (3) performs an operation that causes the visitor determined to be unspecifiable by the visitor specification unit (454) to change position or posture so that the visitor can be specified based on the in-facility image.
3. The robot managing device according to claim 1 or 2, wherein,
the robot movement instruction unit (457) outputs a movement instruction to the guidance robot (3) so that the guidance robot (3) moves along a route that, in a plan view, connects the imaging unit (11) and the visitor determined to be unspecifiable by the visitor specification unit (454).
4. The robot managing device according to claim 1 or 2, wherein,
the guiding robot (3) has a virtual eye (306) capable of moving a line of sight,
the robot management device (4) further comprises a robot visual line instruction unit (453), wherein the robot visual line instruction unit (453) outputs a visual line control signal for controlling the visual line direction of the virtual eyes (306) to the guiding robot (3),
the robot visual line instruction unit (453) outputs the visual line control signal to the guiding robot (3) that has moved to the vicinity of the visitor by the movement instruction of the robot movement instruction unit (457) so as to cause the position or posture of the visitor to change.
5. The robot managing device (4) according to claim 4, characterized in that,
the robot visual line instruction unit (453) outputs the control signal to the guiding robot (3) so that the visual line of the virtual eye (306) of the guiding robot (3) in the vicinity of the visitor is directed to the imaging unit (11) by the movement instruction of the robot movement instruction unit (457).
6. A robot management method for managing a guidance robot (3) capable of traveling by itself, which serves a visitor to a visiting facility, comprising:
an in-facility image acquisition step of acquiring an in-facility image captured by a capturing unit (11) provided in the facility;
a visitor determination step of determining a visitor visiting the facility based on the intra-facility image acquired in the intra-facility image acquisition step;
a visitor group determination step of determining a visitor group visiting the facility based on the intra-facility image acquired by the intra-facility image acquisition step; and
a robot movement instruction step of outputting a movement instruction to the guiding robot (3) so that the guiding robot (3) moves to the vicinity of the visitor,
wherein, in the robot movement instruction step, when it is determined in the visitor determination step that the visitor cannot be determined based on the in-facility image, a movement instruction is output to the guidance robot (3) so that the guidance robot (3) moves to the vicinity of the visitor determined to be undeterminable, and when the visitor determined to be undeterminable in the visitor determination step belongs to a visitor group determined in the visitor group determination step, a movement instruction is output to the guidance robot (3) so that the guidance robot (3) moves to the vicinity of the visitor group to which that visitor belongs.
7. A robot management system comprising:
the robot management device (4) according to any one of claims 1 to 5;
an imaging unit (11) provided in a facility and capable of communicating with the robot management device (4); and
a guidance robot (3) which is disposed in the facility, serves a visitor visiting the facility, is capable of traveling by itself, and is capable of communicating with the robot management device (4).
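The three-component architecture of claim 7 — imaging unit, management device, and guidance robot in mutual communication — can be sketched as a minimal control loop. All names and the dictionary payload below are stand-ins assumed for illustration only.

```python
class ImagingUnit:
    """Stand-in for the in-facility camera (11); returns one analyzed frame."""
    def capture(self):
        return {"frame_id": 1,
                "detections": [{"visitor_id": 2, "identified": False,
                                "position": (3.0, 4.0)}]}

class GuidanceRobot:
    """Stand-in for the self-traveling guidance robot (3)."""
    def __init__(self):
        self.target = None

    def receive_movement_instruction(self, position):
        self.target = position          # a real robot would plan a path here

class RobotManagementDevice:
    """Stand-in for the management device (4): links camera and robot."""
    def __init__(self, camera, robot):
        self.camera, self.robot = camera, robot

    def step(self):
        frame = self.camera.capture()
        for det in frame["detections"]:
            if not det["identified"]:   # claim-6 condition: identification failed
                self.robot.receive_movement_instruction(det["position"])
```

After one `step()`, the robot's target is the unidentified visitor's position, mirroring how the management device mediates between the imaging unit and the robot.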
CN202010642902.0A 2019-07-17 2020-07-06 Robot management device, robot management method, and robot management system Active CN112238458B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019132084A JP7273638B2 (en) 2019-07-17 2019-07-17 ROBOT MANAGEMENT DEVICE, ROBOT MANAGEMENT METHOD AND ROBOT MANAGEMENT SYSTEM
JP2019-132084 2019-07-17

Publications (2)

Publication Number Publication Date
CN112238458A CN112238458A (en) 2021-01-19
CN112238458B true CN112238458B (en) 2024-03-15

Family

ID=74170809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010642902.0A Active CN112238458B (en) 2019-07-17 2020-07-06 Robot management device, robot management method, and robot management system

Country Status (2)

Country Link
JP (1) JP7273638B2 (en)
CN (1) CN112238458B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113561195A * 2021-07-20 2021-10-29 Qijiu Horticultural Technology (Beijing) Co., Ltd. Robot guide exhibition hall internet of things system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005193351A (en) * 2004-01-09 2005-07-21 Honda Motor Co Ltd Face image acquiring method and its system
JP2017222021A * 2016-06-14 2017-12-21 Glory Ltd. Shop reception system
KR20180109124A (en) * 2017-03-27 2018-10-08 (주)로직아이텍 Convenient shopping service methods and systems using robots in offline stores
CN109348170A (en) * 2018-09-21 2019-02-15 北京大学(天津滨海)新代信息技术研究院 Video monitoring method, device and video monitoring equipment
CN109416814A * 2016-06-22 2019-03-01 Laurel Precision Machines Co., Ltd. Window receiving system and service robot
CN109465818A (en) * 2017-09-08 2019-03-15 株式会社日立大厦系统 Robot management system and Method of Commodity Recommendation

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4267556B2 * 2004-10-29 2009-05-27 Advanced Telecommunications Research Institute International (ATR) Eyeball control device, eyeball control method, and eyeball control program
JP6203696B2 * 2014-09-30 2017-09-27 Fuji Soft Inc. Robot
JP6972526B2 * 2016-09-27 2021-11-24 Dai Nippon Printing Co., Ltd. Content providing device, content providing method, and program
JP2018051669A * 2016-09-28 2018-04-05 Dai Nippon Printing Co., Ltd. Robot system and program
JP2018098545A * 2016-12-08 2018-06-21 Casio Computer Co., Ltd. Robot, operation control system, operation control method, and program


Also Published As

Publication number Publication date
JP2021018482A (en) 2021-02-15
CN112238458A (en) 2021-01-19
JP7273638B2 (en) 2023-05-15

Similar Documents

Publication Publication Date Title
US9796093B2 (en) Customer service robot and related systems and methods
US20220185625A1 (en) Camera-based sensing devices for performing offline machine learning inference and computer vision
US9922236B2 (en) Wearable eyeglasses for providing social and environmental awareness
US11307593B2 (en) Artificial intelligence device for guiding arrangement location of air cleaning device and operating method thereof
US11330951B2 (en) Robot cleaner and method of operating the same
US20200008639A1 (en) Artificial intelligence monitoring device and method of operating the same
KR20190100085A (en) Robor being capable of detecting danger situation using artificial intelligence and operating method thereof
JP2017529521A (en) Wearable earpieces that provide social and environmental awareness
US11455835B2 (en) Information processing apparatus and information processing method
US20200394405A1 (en) Information processing apparatus and information processing method
US10872438B2 (en) Artificial intelligence device capable of being controlled according to user's gaze and method of operating the same
CN112238458B (en) Robot management device, robot management method, and robot management system
JP6609588B2 (en) Autonomous mobility system and autonomous mobility control method
KR20190096854A (en) Artificial intelligence server for controlling a plurality of robots using artificial intelligence
KR20190094311A (en) Artificial intelligence robot and operating method thereof
KR20180074404A (en) Robot for airport and method thereof
CN112238454B (en) Robot management device, robot management method, and robot management system
US20220297308A1 (en) Control device, control method, and control system
US20220300012A1 (en) Control device, control method, and control system
US20220300982A1 (en) Customer service system, server, control method, and storage medium
CN113924461A (en) Method for guiding a target to a target person, electronic device of a target person and electronic device of a receiver, and motor vehicle
JP6739017B1 (en) Tourism support device, robot equipped with the device, tourism support system, and tourism support method
JP2021071967A (en) Response support system, method, and program
JP2020106968A (en) Vehicle-mounted information processing device, program, and control method
US20220301029A1 (en) Information delivery system, information delivery method, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant