CN112238454A - Robot management device, robot management method, and robot management system


Info

Publication number
CN112238454A
CN112238454A (Application No. CN202010642903.5A)
Authority
CN
China
Prior art keywords
group
visitor
robot
emotion
unit
Prior art date
Legal status
Granted
Application number
CN202010642903.5A
Other languages
Chinese (zh)
Other versions
CN112238454B (en)
Inventor
坂口浩之
塚本智宏
栁田梦
新井敬一郎
达富由树
Current Assignee
Honda Motor Co Ltd
Original Assignee
Honda Motor Co Ltd
Priority date
Filing date
Publication date
Application filed by Honda Motor Co Ltd filed Critical Honda Motor Co Ltd
Publication of CN112238454A
Application granted granted Critical
Publication of CN112238454B
Legal status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1679: Programme controls characterised by the tasks executed
    • B25J 9/1689: Teleoperation
    • B25J 9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697: Vision controlled systems
    • B25J 11/00: Manipulators not otherwise provided for
    • B25J 11/0005: Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172: Classification, e.g. identification
    • G06V 40/174: Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Manipulator (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A server device (4) manages a guidance robot that serves visitor groups visiting an exhibition hall. The server device (4) includes: an in-facility image acquisition unit (451) that acquires in-exhibition-hall images captured by imaging units provided on the ceiling of the exhibition hall; a visitor group determination unit (455) that determines a visitor group visiting the exhibition hall from the in-exhibition-hall images; a group emotion determination unit (456) that estimates the emotion of each visitor in the determined visitor group on the basis of the in-exhibition-hall images and determines the group emotion of the visitor group; and a robot movement instruction unit (457) that outputs a movement command to the guidance robot, based on the determined group emotion, so that the guidance robot moves to the vicinity of the visitor group.

Description

Robot management device, robot management method, and robot management system
Technical Field
The present invention relates to a robot management device, a robot management method, and a robot management system.
Background
In recent years, as one means of addressing labor shortages, various techniques have been proposed for robot control systems that use guide robots (see, for example, patent document 1). In the robot control system described in patent document 1, a guidance robot (communication robot) introduces exhibits to visitors and guides them around the exhibition venue, for example.
However, among the visitors who come to a facility, some wish to communicate with the facility staff and some do not. Visitors also often come to a facility in groups. A facility therefore wants to receive these persons (groups) appropriately, and a technique is desired that enables a facility to receive visitor groups appropriately using a limited number of guidance robots.
Prior art documents
Patent document 1: Japanese Patent No. 6142306 (JP 6142306 B).
Disclosure of Invention
One aspect of the present invention is a robot management device for a self-traveling guidance robot that is dispatched to serve a visitor group visiting a facility. The robot management device includes: an in-facility image acquisition unit that acquires an in-facility image captured by an imaging unit provided in the facility; a visitor group determination unit that determines a visitor group visiting the facility based on the in-facility image acquired by the in-facility image acquisition unit; a group emotion determination unit that estimates, based on the in-facility image, the emotions of the one or more visitors forming the visitor group determined by the visitor group determination unit, and determines the group emotion of the visitor group; and a robot movement instruction unit that outputs a movement command to the guidance robot, based on the group emotion determined by the group emotion determination unit, so that the guidance robot moves to the vicinity of the visitor group.
Another aspect of the present invention is a robot management method for managing a self-traveling guidance robot that serves a visitor group visiting a facility. The robot management method includes: an in-facility image acquisition step of acquiring an in-facility image captured by an imaging unit provided in the facility; a visitor group determination step of determining a visitor group visiting the facility from the in-facility image acquired in the in-facility image acquisition step; a group emotion determination step of estimating, based on the in-facility image, the emotions of the one or more visitors constituting the visitor group determined in the visitor group determination step, and determining the group emotion of the visitor group; and a robot movement instruction step of outputting a movement command to the guidance robot, based on the group emotion determined in the group emotion determination step, so that the guidance robot moves to the vicinity of the visitor group.
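These steps can be pictured as a single control pass. The following is a minimal, hypothetical Python sketch of the method; all names, the half-or-more threshold, and the tie-breaking rule are illustrative assumptions, not details from the specification:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VisitorGroup:
    members: List[str]                                  # visitor identifiers
    emotions: List[str] = field(default_factory=list)   # per-visitor: "good"/"bad"

    def group_emotion(self) -> str:
        """Stand-in for the group emotion determination step: a group in which
        bad-mood visitors reach half or more is treated as "bad mood"."""
        bad = self.emotions.count("bad")
        return "bad mood" if 2 * bad >= len(self.emotions) else "good mood"

def pick_dispatch_target(groups: List[VisitorGroup]) -> Optional[VisitorGroup]:
    """Stand-in for the robot movement instruction step: prefer the group in
    the worst mood; among those, the smallest group."""
    bad_groups = [g for g in groups if g.group_emotion() == "bad mood"]
    return min(bad_groups, key=lambda g: len(g.members), default=None)

# Mirrors the worked example later in the text: friends A-C (two in a good
# mood) and a parent-child pair D-E (the child in a bad mood).
friends = VisitorGroup(["A", "B", "C"], ["good", "good", "bad"])
parent_child = VisitorGroup(["D", "E"], ["good", "bad"])
print(pick_dispatch_target([friends, parent_child]).members)  # ['D', 'E']
```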
A robot management system according to still another aspect of the present invention includes: the above-described robot management device; an imaging unit that is provided in a facility and can communicate with the robot management device; and a guidance robot that is disposed in the facility, serves a visitor group visiting the facility, can travel by itself, and can communicate with the robot management device.
Drawings
The objects, features, and advantages of the present invention are further clarified by the following description of an embodiment with reference to the accompanying drawings.
Fig. 1 is a schematic configuration diagram of a robot management system using a server device constituting a robot management device according to an embodiment of the present invention.
Fig. 2 is a perspective view schematically showing a guidance robot constituting the robot management system shown in fig. 1.
Fig. 3 is a block diagram showing the main configuration of the robot management system shown in fig. 1.
Fig. 4 is a block diagram showing the main configuration of the server device shown in fig. 3.
Fig. 5 is a flowchart showing an example of the guidance robot dispatch process executed by the arithmetic unit of the server device in fig. 3.
Fig. 6 is a flowchart showing an example of the visitor determination process executed by the arithmetic unit of the server device in fig. 3.
Detailed Description
An embodiment of the present invention will be described below with reference to figs. 1 to 6. A robot management device according to an embodiment of the present invention constitutes part of a robot management system that dispatches a guidance robot to visitors visiting a facility and provides them with services such as simple guidance (hereinafter referred to as the robot dispatch guidance service).
Examples of enterprises that provide the robot dispatch guidance service include sales shops that sell various products, art-related facilities such as art galleries and museums, science halls, memorial halls, exhibitions, seminars, and the like. Examples of the sales shops that sell various kinds of commodities include department stores, supermarkets, and specialty shops. The specialty shops include various specialty stores and car dealerships. In the following embodiment, an example will be described in which the robot management device is configured by a server device, the server device is installed in a car sales shop, and a guidance robot arranged in the exhibition hall of the car sales shop is dispatched to visitors visiting the exhibition hall.
Fig. 1 is a schematic configuration diagram of a robot management system 100 including a server device 4 that constitutes a robot management device according to an embodiment of the present invention. As shown in fig. 1, the robot management system 100 of the present embodiment includes imaging units 11, a guidance robot 3, and the server device 4. The robot management device of the present embodiment is mainly configured by the server device 4. In the robot management system 100, the self-traveling guidance robot 3 is arranged in the exhibition hall 1, a facility of a car sales shop in which display vehicles 2 are exhibited. The server device 4 owned by the car sales shop identifies visitor groups visiting the exhibition hall 1 from images of the exhibition hall 1 (hereinafter referred to as in-exhibition-hall images or in-facility images) captured by the plurality of imaging units 11 provided on the ceiling 10 of the exhibition hall 1. More specifically, the server device 4 identifies each visitor from the face image of each visitor appearing in the in-exhibition-hall images, and identifies the visitor group formed by those visitors.
The server device 4 also determines, from the in-exhibition-hall images, the group emotion, that is, the emotion of the determined visitor group as a whole. More specifically, the emotions of the identified visitors are estimated from the in-exhibition-hall images, and the group emotion is determined from the estimated emotions. The guidance robot 3 then moves toward a visitor group according to the determined group emotion; it is preferentially dispatched to the visitor group that is in a relatively bad mood.
As shown in fig. 1, when a friend group 101 consisting of three visitors A, B, and C and a parent-child group 102 consisting of two visitors D and E come to the exhibition hall 1, the server device 4 determines the group emotions of the friend group 101 and the parent-child group 102 from the in-exhibition-hall images captured by the plurality of imaging units 11.
The server device 4 estimates the emotions of the visitors A to C constituting the friend group 101 and determines the group emotion of the friend group 101 to be "good mood" when, for example, the moods of visitors A and B are good and the mood of visitor C is bad. Likewise, the server device 4 estimates the emotions of the visitors D and E constituting the parent-child group 102 and determines the group emotion of the parent-child group 102 to be "bad mood" when, for example, the mood of visitor D (for example, a parent) is good and the mood of visitor E (for example, a child) is bad. The emotions of the visitors can be estimated from, among other things, the visitors' behavior.
The server device 4 instructs the guidance robot 3 to move to the position of the parent-child group 102 so that the guidance robot 3 is dispatched to the parent-child group 102, the visitor group in a relatively bad mood. The guidance robot 3 that has moved to the position of the parent-child group 102 provides a service of communicating with the visitors D and E constituting the group. For example, the guidance robot 3 communicates with the parent-child group 102 by becoming a play partner for visitor E, asking visitor D about the purpose of the visit, giving simple introductions, and the like.
Here, the guide robot 3 will be described briefly with reference to fig. 2. Fig. 2 is a perspective view schematically showing the guide robot 3 that constitutes part of the robot management system 100 shown in fig. 1. As shown in fig. 2, the guide robot 3 has a roughly gourd-shaped form, with a head 301 above a narrow central portion 302 and a trunk 303 below it. The guide robot 3 is thus formed of two parts, head and trunk, with the head 301 slightly larger than the trunk 303, giving the whole a friendly, approachable appearance. The height of the guide robot 3 is about 110 cm. The guide robot 3 has no hands or feet.
The guide robot 3 can move in any direction through 0 to 360 degrees, forward, backward, diagonally, and so on, by means of a traveling device 304 provided at the lower end of the trunk 303. A description of the specific configuration of the traveling device 304 is omitted here. Because the head 301 is slightly larger than the trunk 303 and the robot has no hands or feet, a child, for example, readily feels at ease with the guide robot and can easily communicate with it. The guide robot 3 can also be swung back and forth and side to side by the traveling device 304, and this swinging makes it easy for visitors to notice the approach of the guide robot 3. In this way, the guide robot 3 can perform movements that make communication with visitors easy.
The head 301 of the guide robot 3 has a roughly elliptical face 305 that is long in the horizontal direction, and the face 305 can display the guide robot 3's expression, simple character images, and the like. A pair of virtual eyes 306 is displayed on the face 305, and the face 305 can express various expressions through the pair of virtual eyes 306. For example, by changing the shape of the pair of virtual eyes 306, expressions such as joy, anger, and sadness can be shown. A virtual mouth 307 is also displayed on the face 305, and changing the shapes of the pair of virtual eyes 306 and the virtual mouth 307 makes the changes in expression easy to understand.
The guide robot 3 is configured so that the pair of virtual eyes 306 can move to different positions within the face 305. By changing the positions of the pair of virtual eyes 306 within the face 305, the guide robot 3 expresses the action of shifting its gaze. By performing this gaze-shifting action in front of a visitor, the guide robot 3 attracts the visitor's own gaze. In doing so, the guide robot 3 draws the visitor's gaze even more easily by performing a rotating motion, a swinging motion, or the like together with the gaze shift.
The guide robot 3 configured as described above is dispatched to, for example, the parent-child group 102. The dispatched guide robot 3 becomes a play partner for visitor E, asks visitor D about the purpose of the visit, and gives simple introductions. The guide robot 3 is also dispatched to the friend group 101 so that the visitors A to C can be identified more easily. For example, when the face image of one of the visitors A to C of the friend group 101 cannot be extracted and that visitor cannot be identified, the guide robot 3 is dispatched to the friend group 101 to prompt the visitors A to C to change their positions and postures so that the imaging units 11 can easily capture their faces. To realize the robot dispatch guidance service provided at the car sales shop satisfactorily, the robot management system 100 of the present embodiment includes the server device 4 described below.
Fig. 3 is a block diagram showing the main configuration of the robot management system 100 shown in fig. 1. As shown in fig. 3, the imaging unit 11, the guidance robot 3, and the server device 4 are connected to a communication network 5 such as a wireless communication network, the internet, or a telephone network. For convenience, fig. 3 shows only one imaging unit 11, but in fact, as shown in fig. 1, there are a plurality of imaging units 11. Similarly, only one guidance robot 3 is shown in fig. 3, but a plurality of guidance robots 3 may be provided. However, when a large number of guidance robots 3 are arranged in the exhibition hall 1, visitors may feel crowded and the cost to the car sales shop increases, so a small number of guidance robots 3 is preferable.
As shown in fig. 3, the imaging unit 11 includes a communication unit 111, a camera unit 112, a sensor unit 113, a storage unit 114, and a calculation unit 115. The communication unit 111 can perform wireless communication with the server device 4 and the guidance robot 3 via the communication network 5. The camera unit 112 is a camera having an image pickup device such as a CCD or CMOS and can photograph visitors who have come to the exhibition hall 1. The sensor unit 113 is a sensor such as a moving-body detection sensor or a human detection sensor and can detect the position and movement, within the exhibition hall, of visitors who have come to the exhibition hall 1. A plurality of camera units 112 and sensor units 113 are arranged on the ceiling 10 of the exhibition hall 1 so that visitors can be photographed and detected at any position in the exhibition hall 1.
The storage unit 114 includes a volatile or nonvolatile memory not shown. The storage unit 114 stores various data, various programs executed by the arithmetic unit 115, and the like. For example, the storage unit 114 temporarily stores the image of the inside of the exhibition hall captured by the camera unit 112 and the positional information of the visitor detected by the sensor unit 113.
The calculation unit 115 includes a CPU, executes predetermined processing based on signals received from outside the imaging unit 11 through the communication unit 111 and the various programs stored in the storage unit 114, and outputs predetermined control signals to the communication unit 111, the camera unit 112, the sensor unit 113, and the storage unit 114.
For example, based on a signal received from the server device 4 through the communication unit 111 and a predetermined program, the calculation unit 115 outputs a control signal for imaging the exhibition hall 1 to the camera unit 112 and a control signal for detecting the positions of visitors to the sensor unit 113. The calculation unit 115 then outputs a control signal for transmitting the captured in-exhibition-hall images and the detected visitor position information to the server device 4 through the communication unit 111. Through this processing by the calculation unit 115, the interior of the exhibition hall 1 is imaged, the positions of visitors are detected, and the in-exhibition-hall images and visitor position information are transmitted to the server device 4.
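As a rough illustration of this capture-and-report behavior, the periodic job of one imaging unit might look like the Python sketch below; the object interfaces and the period are invented for illustration, since the patent specifies none of them:

```python
import time

def imaging_unit_loop(camera, sensor, server_link, period_s: float = 0.5):
    """Hypothetical periodic job of one imaging unit 11: capture the hall
    (camera unit 112), read visitor positions (sensor unit 113), and send
    both to the server device 4 over the communication unit 111."""
    while True:
        frame = camera.capture()                 # in-exhibition-hall image
        positions = sensor.read_positions()      # visitor position information
        server_link.send({"image": frame, "visitor_positions": positions})
        time.sleep(period_s)
```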
As shown in fig. 3, the guidance robot 3 has a functional configuration including a communication unit 31, an input unit 32, an output unit 33, a camera unit 34, a traveling unit 35, a sensor unit 36, a storage unit 37, and a calculation unit 38. The communication unit 31 is configured to be capable of performing wireless communication with the server apparatus 4 and the imaging unit 11 via the communication network 5. The input unit 32 includes various switch buttons (not shown) operated during maintenance or the like, a microphone (not shown) capable of inputting voice of a visitor or the like, and the like.
The output unit 33 includes a speaker (not shown) capable of outputting sound and a display unit 331 capable of displaying an image. The display unit 331 constitutes a face 305 for guiding the robot 3, and the pair of virtual eyes 306, character images, and the like are displayed on the display unit 331. The display unit 331 may be configured to be capable of displaying the pair of virtual eyes 306, character images, and the like, and may include a liquid crystal panel, a projector, a screen, and the like.
The camera unit 34 is a camera having an image pickup device such as a CCD or a CMOS, and can photograph a visitor who has arrived at the exhibition hall 1. The camera unit 34 is provided in, for example, a head 301 of the guide robot 3. By providing the camera unit 34 in the head 301, it becomes easy to photograph the face of the visitor. From the viewpoint of imaging the face of the visitor, it is preferable to provide the camera unit 34 in the vicinity of the pair of virtual eyes 306 of the guide robot 3.
The traveling unit 35 is constituted by the traveling device 304 by which the guide robot 3 travels by itself. The traveling unit 35 includes a battery and a motor and travels by driving the motor with electric power from the battery. The traveling unit 35 can be configured using known electric drive technology. The sensor unit 36 includes various sensors: sensors that detect the traveling and stopping states of the guidance robot 3, such as a traveling speed sensor, an acceleration sensor, and a gyro sensor, and sensors that detect the state around the guidance robot 3, such as an obstacle sensor, a human detection sensor, and a moving-body detection sensor.
The storage unit 37 includes a volatile or nonvolatile memory (not shown). The storage unit 37 stores various data, various programs executed by the arithmetic unit 38, and the like. The storage unit 37 also temporarily stores data related to the content of the reception of visitors, for example the purpose of the visit that the guidance robot 3 has asked a visitor about and the explanations the guidance robot 3 has given to the visitor.
As functional configurations of the memory, the storage unit 37 stores an exhibition hall database (exhibition hall D/B) 371 and a communication database (communication D/B) 372. The exhibition hall database 371 stores data corresponding to the arrangement of the display vehicles 2, display stands, and the like placed in the exhibition hall 1, and is referenced when the guide robot 3 moves within the exhibition hall. The communication database 372 stores data related to the voice recognition processing and voice output processing executed when the guide robot 3 communicates with visitors, and is referenced when the guide robot 3 communicates with a visitor.
The arithmetic unit 38 includes a CPU, executes predetermined processing based on signals received from outside the guidance robot 3 through the communication unit 31, signals input through the input unit 32, signals detected by the sensor unit 36, and the various programs and data stored in the storage unit 37, and outputs predetermined control signals to the communication unit 31, the output unit 33, the camera unit 34, the traveling unit 35, and the storage unit 37.
More specifically, the computing unit 38 outputs a control signal to the traveling unit 35 and the storage unit 37 based on a signal received from the server device 4 via the communication unit 31 and a signal detected by the sensor unit 36. Thereby, the guiding robot 3 is dispatched to the visitor. The arithmetic unit 38 also outputs a control signal to the camera unit 34 and the communication unit 31 based on a signal received from the server apparatus 4 via the communication unit 31. Thereby, the face of the visitor is photographed, and the photographed face image is transmitted to the server apparatus 4.
The computing unit 38 also outputs a control signal to the output unit 33 (display unit 331) based on a signal received from the server apparatus 4 via the communication unit 31. This changes the expression of the guide robot 3, and changes the line of sight of the pair of virtual eyes 306. The computing unit 38 outputs a control signal to the output unit 33 and the storage unit 37 based on the signal input through the input unit 32. Thereby, the guiding robot 3 can communicate with the visitor.
Fig. 4 is a block diagram showing the main configuration of the server device 4 shown in fig. 3. As shown in fig. 4, the server device 4 includes a communication unit 41, an input unit 42, an output unit 43, a storage unit 44, and a calculation unit 45. The server device 4 may be configured using a virtual server function on the cloud, or may be configured with its functions distributed.
The communication unit 41 can perform wireless communication with the imaging units 11 and the guidance robot 3 via the communication network 5 (see fig. 3). The input unit 42 includes various switches that the user can operate, such as a touch panel and a keyboard, and a microphone capable of voice input; here, the user is a staff member of the car sales shop. The output unit 43 includes, for example, a monitor capable of displaying characters and images and a speaker capable of outputting sound.
The storage unit 44 includes a volatile or nonvolatile memory (not shown). The storage unit 44 stores various data, various programs executed by the arithmetic unit 45, and the like. As functional configurations of the memory, the storage unit 44 includes a guidance robot database (guidance robot D/B) 441, an exhibition hall database (exhibition hall D/B) 442, and a visitor database (visitor D/B) 443.
The guidance robot database 441 stores basic information and maintenance information on the guidance robots 3, such as the robot IDs of the guidance robots 3 used in the robot dispatch guidance service. The exhibition hall database 442 stores data on the arrangement of the display vehicles 2, display stands, and the like placed in the exhibition hall 1. The exhibition hall database 442 has the same configuration as the exhibition hall database 371 stored in the storage unit 37 of each guidance robot 3; the robot management system 100 may therefore be configured to include only one of the exhibition hall database 442 and the exhibition hall database 371.
The visitor database 443 stores information on visitors who visit the exhibition hall 1 (hereinafter referred to as visitor data). The visitor data includes information such as the visitor's face image and visit history in addition to the visitor's basic information such as address, name, age, occupation, and sex. The visit history includes, in addition to the content of business discussions with the visitor, the content of casual conversation before those discussions.
When visitors come as a group (hereinafter referred to as a visitor group), the visitor database 443 stores the visitor data of each visitor in association with information on the visitor group (hereinafter referred to as visitor group information). The visitor group information is information that can identify the visitor group. For example, when a visitor comes together with family, the visitor data of that visitor is stored in association with information that can identify the family. Likewise, when a visitor comes together with friends as a group, the visitor data of that visitor is stored in association with information that can identify the group.
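The stored fields described above could be modeled as follows. This is purely an illustrative schema: the patent names the kinds of information held but prescribes no data format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VisitorRecord:
    """Illustrative shape of one entry in the visitor database 443."""
    visitor_id: str
    name: Optional[str] = None
    address: Optional[str] = None
    age: Optional[int] = None
    occupation: Optional[str] = None
    sex: Optional[str] = None
    face_image: Optional[bytes] = None                       # reference image for matching
    visit_history: List[str] = field(default_factory=list)   # discussions, small talk
    group_id: Optional[str] = None                           # visitor group information
```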
The arithmetic unit 45 includes a CPU, executes predetermined processing based on signals input through the input unit 42, signals received from outside the server device 4 through the communication unit 41, and the various programs and data stored in the storage unit 44, and outputs control signals to the communication unit 41, the output unit 43, and the storage unit 44.
As shown in fig. 4, the arithmetic unit 45 has a functional configuration including an in-facility image acquisition unit 451, a robot image acquisition unit 452, a robot gaze instruction unit 453, a visitor determination unit 454, a visitor group determination unit 455, a group emotion determination unit 456, and a robot movement instruction unit 457.
The in-facility image acquisition unit 451 acquires in-exhibition-hall images captured by the plurality of imaging units 11 provided in the exhibition hall 1. Specifically, the in-facility image acquiring unit 451 inputs image data (including still image data and moving image data) including images of the inside of the exhibition hall 1 (the space in which the exhibition vehicle 2 is exhibited) captured by the plurality of imaging units 11 through the communication unit 41. More specifically, the in-facility image acquisition unit 451 outputs a signal instructing the plurality of imaging units 11 to image the images in the exhibition hall via the communication unit 41, and inputs image data including the images in the exhibition hall captured by the plurality of imaging units 11 via the communication unit 41.
The robot image acquisition unit 452 acquires an image including a face image of the visitor captured by the guidance robot 3 installed in the exhibition hall 1. Specifically, the robot image acquiring unit 452 inputs image data (including still image data and moving image data) including a face image of the visitor captured by the guidance robot 3 through the communication unit 41. More specifically, the robot image acquiring unit 452 outputs a signal instructing the guiding robot 3 to capture a face image of the visitor through the communication unit 41, and inputs image data including the face image of the visitor captured by the guiding robot 3 through the communication unit 41.
The robot gaze instruction unit 453 instructs the gaze direction of the pair of virtual eyes 306 of the guide robot 3. Specifically, the robot gaze instruction unit 453 outputs, through the communication unit 41, a control signal for controlling the positions and movements of the pair of virtual eyes 306 of the guidance robot 3.
When the control signal is input from the robot gaze instruction unit 453 through the communication unit 31, the guide robot 3 controls the display unit 331 based on the input control signal to change the positions and movements of the pair of virtual eyes 306, that is, to shift its gaze. When the guide robot 3 shifts its gaze, the visitor's attention is drawn to the gaze of the guide robot 3, and the visitor looks in the guide robot 3's gaze direction. In this way, the guide robot 3 shifts its gaze to attract the visitor's own gaze and prompt the visitor to change position or posture. For example, by directing the gaze of the guide robot 3 toward an imaging unit 11, the visitor's gaze can be led toward that imaging unit 11, which enables the imaging unit 11 to capture the visitor's face.
The visitor determination unit 454 identifies visitors in the exhibition hall 1 from the in-exhibition-hall images acquired by the in-facility image acquisition unit 451. More specifically, the visitor determination unit 454 extracts a person from the in-exhibition-hall image and extracts (recognizes) the person's face image. It then searches the visitor data stored in the visitor database 443 for visitor data having a face image that matches the extracted face image, thereby identifying the visitor. When there is no visitor data with a matching face image, the extracted face image is stored in the visitor database 443 as the visitor data of a new visitor.
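A minimal sketch of this identify-or-register behavior follows, assuming face images are compared as embedding vectors; the embeddings, distance metric, and threshold are all assumptions, since the patent only speaks of matching face images:

```python
import numpy as np

def identify_visitor(face_embedding: np.ndarray,
                     database: dict,                # visitor_id -> stored embedding
                     threshold: float = 0.6) -> str:
    """Return the ID of the matching visitor, or register a new one."""
    best_id, best_dist = None, float("inf")
    for visitor_id, stored in database.items():
        dist = float(np.linalg.norm(face_embedding - stored))
        if dist < best_dist:
            best_id, best_dist = visitor_id, dist
    if best_id is not None and best_dist < threshold:
        return best_id                              # matching visitor data found
    new_id = f"visitor_{len(database) + 1}"         # no match: store as new visitor
    database[new_id] = face_embedding
    return new_id
```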
When the visitor determination unit 454 determines that a visitor cannot be identified because the face image of the person extracted from the in-exhibition-hall image cannot be recognized, it outputs a control signal to the robot movement instruction unit 457. When this control signal is input, the robot movement instruction unit 457 instructs the guidance robot 3 to move so that the guidance robot 3 is dispatched to that person. Specifically, the robot movement instruction unit 457 transmits a signal instructing movement (hereinafter referred to as a movement command) to the guidance robot 3 through the communication unit 41, dispatching the guidance robot 3 to the person. The robot image acquisition unit 452 then outputs, through the communication unit 41, a signal instructing the guidance robot 3 to capture the person's face image. The image including the person's face image captured by the guidance robot 3 is input to the robot image acquisition unit 452 through the communication unit 41, and the visitor determination unit 454 identifies the visitor using this face image in the same manner as described above.
The visitor group determination unit 455 identifies visitor groups visiting the exhibition hall 1 from the in-exhibition-hall images acquired by the in-facility image acquisition unit 451. More specifically, the visitor group determination unit 455 extracts a plurality of visitors from the in-exhibition-hall image and determines whether those visitors form a group based on the distance between them, the orientation of their faces, and the like. For example, a plurality of visitors who stay within a certain distance of one another for more than a prescribed time is determined to be a visitor group. Likewise, a plurality of visitors talking with one another face to face is determined to be a visitor group. At this time, the visitor group determination unit 455 also determines the number of visitors constituting the visitor group. When only one visitor is extracted from the in-exhibition-hall images, that single visitor can be determined to constitute a group; in this case, the visitor group determination unit 455 sets the number of visitors constituting the visitor group to one.
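One possible reading of the proximity-over-time rule is sketched below; the tracking format, the 2 m distance, and the 30-sample duration are invented parameters, and the face-orientation cue is not modeled here:

```python
import itertools

def group_visitors(tracks: dict, max_dist: float = 2.0,
                   min_samples: int = 30) -> list:
    """tracks: visitor_id -> list of (x, y) positions sampled at a fixed rate.
    Visitors who were within max_dist of each other for at least min_samples
    samples are merged into one group; everyone else ends up in a one-person
    group, as the text allows."""
    close_pairs = []
    for a, b in itertools.combinations(sorted(tracks), 2):
        near = sum(
            1 for (xa, ya), (xb, yb) in zip(tracks[a], tracks[b])
            if ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 <= max_dist
        )
        if near >= min_samples:
            close_pairs.append((a, b))
    groups = {v: {v} for v in tracks}         # merge pairs into connected components
    for a, b in close_pairs:
        merged = groups[a] | groups[b]
        for v in merged:
            groups[v] = merged
    return [sorted(g) for g in {frozenset(g) for g in groups.values()}]
```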
The visitor group determination unit 455 also determines whether the visitors identified by the visitor determination unit 454 constitute a visitor group, and identifies that visitor group. More specifically, the visitor group determination unit 455 extracts the visitor group information associated with the visitor data, stored in the visitor database 443, of each visitor identified by the visitor determination unit 454, and determines from the extracted visitor group information whether the visitors constitute a visitor group.
The group emotion determination unit 456 estimates, based on the in-exhibition-hall images, the emotions of the one or more visitors constituting the visitor group determined by the visitor group determination unit 455, and determines the group emotion of the visitor group. For example, the group emotion determination unit 456 determines whether the visitor group is a group in a good mood (e.g., a happy group) or a group in a bad mood (e.g., an irritated or angry group).
The group emotion determination unit 456 determines the group emotion based on the proportions of the estimated visitor emotions. For example, the group emotion determination unit 456 determines a group in which the proportion of visitors in a good mood is high (the proportion of visitors in a bad mood is low) to be a good-mood group, and a group in which the proportion of visitors in a bad mood is high (the proportion of visitors in a good mood is low) to be a bad-mood group. That is, the emotion proportion in the present embodiment refers to the proportion of each emotion (bad mood, good mood, etc.) within the group.
As shown in fig. 1, for example, when the exhibition hall 1 is visited by the friend group 101 consisting of the three visitors A, B, and C and the parent-child group 102 consisting of the two visitors D and E, the group emotion determination unit 456 determines the group emotions of the friend group 101 and the parent-child group 102 from the in-exhibition-hall images captured by the plurality of imaging units 11. The group emotion determination unit 456 estimates the emotions of the visitors A to C constituting the friend group 101 and determines "good mood" as the group emotion of the friend group 101 when, for example, it estimates that the moods of visitors A and B are good and the mood of visitor C is bad. Similarly, the group emotion determination unit 456 estimates the emotions of the visitors D and E constituting the parent-child group 102 and determines the group emotion of the parent-child group 102 to be "bad mood" when, for example, the mood of visitor D (for example, a parent) is good and the mood of visitor E (for example, a child) is bad.
The group emotion may be determined as "good mood" when all the visitors constituting the visitor group are in a good mood, as "somewhat good mood" when visitors in a good mood are relatively numerous, as "average" when visitors in a good mood and visitors in a bad mood are equal in number, as "somewhat bad mood" when visitors in a bad mood are relatively numerous, and as "bad mood" when no visitor is in a good mood. In this case, when a visitor is a child, the group emotion may be determined so as to give weight to the emotion of that child. Furthermore, a visitor group containing even one visitor in a bad mood may also be determined to be in a "bad mood".
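One possible reading of these rules is sketched below. The five labels follow the text; the child-weighting and one-bad-visitor variants mentioned above are options the text allows (and would explain why the parent-child example was judged "bad mood"), and are omitted here for brevity:

```python
def group_emotion(emotions: list) -> str:
    """emotions: per-visitor labels, each "good" or "bad"."""
    good = emotions.count("good")
    bad = emotions.count("bad")
    if bad == 0:
        return "good mood"
    if good == 0:
        return "bad mood"
    if good > bad:
        return "somewhat good mood"
    if good < bad:
        return "somewhat bad mood"
    return "average"                                # equal numbers of each

print(group_emotion(["good", "good", "bad"]))       # somewhat good mood
print(group_emotion(["good", "bad"]))               # average
```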
The emotions of visitors can be estimated from their behavior. Accordingly, the group emotion determination unit 456 estimates a visitor's emotion based on the visitor's face image extracted from the in-exhibition-hall image, images of the visitor's movements, and the like. For example, when a visitor shows an angry face, or restlessly looks around and shakes their legs, the group emotion determination unit 456 estimates that the visitor is in a bad mood. Conversely, the group emotion determination unit 456 estimates that the mood is good when visitors show happy faces, are talking cheerfully with one another, and so on. The storage unit 44 stores images representing a plurality of predetermined movements (shaking legs, engaging in conversation, etc.), and the group emotion determination unit 456 compares the in-exhibition-hall images with these movement images to extract images representing the visitor's movements from the in-exhibition-hall images.
The group emotion determination unit 456 also estimates the emotions of the visitors in a visitor group based on the robot-captured images acquired by the robot image acquisition unit 452, and updates the group emotion of the visitor group. For example, when the guidance robot 3 is dispatched to a visitor group and captures images of the visitors, the group emotion determination unit 456 estimates the visitors' emotions based on the face images captured by the guidance robot 3 and updates the group emotion based on the estimated emotions.
The robot movement instruction unit 457 instructs the guidance robot 3 to move so that the guidance robot 3 is dispatched to a visitor group, based on the group emotion determined by the group emotion determination unit 456. Specifically, the robot movement instruction unit 457 outputs, through the communication unit 41, a command instructing the guidance robot 3 to move to the vicinity of the visitor group.
In doing so, the robot movement instruction unit 457 gives priority to the visitor group whose group emotion, as determined by the group emotion determination unit 456, is relatively worse, and instructs the guidance robot 3 to move to the vicinity of that visitor group. For example, when there are a visitor group whose group emotion is "bad mood" and a visitor group whose group emotion is "somewhat bad mood", the visitor group whose group emotion is "bad mood" is given priority, and the guidance robot 3 is instructed to move to the vicinity of that group.
When there are a plurality of visitor groups with the same group emotion as determined by the group emotion determination unit 456, the robot movement instruction unit 457 gives priority to the visitor group with fewer members and instructs the guidance robot 3 to move to its vicinity. For example, when the group emotion "bad mood" applies to two visitor groups of three persons and one visitor group of five persons, the robot movement instruction unit 457 gives priority to a three-person group and instructs the guidance robot 3 to move to the vicinity of that group.
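These two rules combine into a simple ordering. The sketch below is an assumed rendering using the emotion levels named earlier; the rank values are not from the patent:

```python
EMOTION_RANK = {                 # lower rank = dispatched sooner
    "bad mood": 0,
    "somewhat bad mood": 1,
    "average": 2,
    "somewhat good mood": 3,
    "good mood": 4,
}

def dispatch_order(groups: list) -> list:
    """groups: (group_id, group_emotion, member_count) tuples, returned in
    dispatch order: worse emotion first, then fewer members."""
    return sorted(groups, key=lambda g: (EMOTION_RANK[g[1]], g[2]))

order = dispatch_order([
    ("five_person", "bad mood", 5),
    ("friends", "somewhat good mood", 3),
    ("three_person", "bad mood", 3),
])
print([g[0] for g in order])     # ['three_person', 'five_person', 'friends']
```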
Furthermore, when the visitor determination unit 454 determines from the in-exhibition-hall image that a visitor cannot be identified, the robot movement instruction unit 457 instructs the guidance robot 3 to move so that the guidance robot 3 is dispatched to the vicinity of the visitor determined to be unidentifiable. For example, when a signal output from the visitor determination unit 454 indicating that a visitor cannot be identified is input, the robot movement instruction unit 457 instructs the guidance robot 3 to move so that the guidance robot 3 is dispatched to the vicinity of the person extracted from the in-exhibition-hall image by the visitor determination unit 454.
At this time, when the person extracted from the in-exhibition-hall image by the visitor determination unit 454 forms part of a visitor group, the robot movement instruction unit 457 instructs the guidance robot 3 to move so that the guidance robot 3 is dispatched to the vicinity of the visitor group to which that person belongs. The robot movement instruction unit 457 then instructs the guidance robot 3 to perform an operation that prompts the visitor determined to be unidentifiable to change position or posture so that the visitor can be identified from the in-facility image acquired by the in-facility image acquisition unit 451. More specifically, the robot movement instruction unit 457 instructs the guidance robot 3 to operate so that the visitor determined to be unidentifiable takes a position or posture facing the direction of the imaging unit 11.
For example, the robot movement instruction unit 457 instructs the guidance robot 3 to move to the vicinity of the visitor determined to be unidentifiable with the imaging unit 11 located behind the guidance robot 3. That is, the robot movement instruction unit 457 instructs the guidance robot 3 to move along the route that, in plan view, connects the imaging unit 11 and the visitor determined to be unidentifiable. When that visitor notices the approaching guidance robot 3, the visitor looks in the direction of the guidance robot 3, and therefore the imaging unit 11 located behind the guidance robot 3 can capture the visitor's face. At this time, the guidance robot 3 may also be made to perform an operation likely to draw the attention of the visitor determined to be unidentifiable, prompting the visitor to change position or posture.
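In plan-view coordinates, the approach point on that camera-to-visitor line can be computed as below; the 1 m standoff distance is an assumption:

```python
def approach_point(camera_xy, visitor_xy, standoff: float = 1.0):
    """Place the robot on the segment from the visitor toward the ceiling
    camera (plan view), stopping `standoff` meters short of the visitor, so
    that a visitor who turns toward the robot also faces the camera."""
    cx, cy = camera_xy
    vx, vy = visitor_xy
    dx, dy = cx - vx, cy - vy                    # direction visitor -> camera
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0     # avoid division by zero
    return (vx + dx / norm * standoff, vy + dy / norm * standoff)

print(approach_point((0.0, 0.0), (4.0, 3.0)))    # (3.2, 2.4): 1 m from visitor
```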
Preferably, the robot gaze instruction unit 453 instructs the gaze direction of the pair of virtual eyes 306 of the guide robot 3 to attract the visitor's gaze, thereby prompting a change in the visitor's position or posture. Specifically, the visitor's gaze is preferably led toward the imaging unit 11 by moving the gaze direction of the pair of virtual eyes 306 toward the imaging unit 11. At this time, the robot movement instruction unit 457 may output to the guidance robot 3 a control signal instructing a rotating motion, a swinging motion, or the like in conjunction with the gaze shift. Because the guide robot 3 then rotates or swings while shifting its gaze, the visitor's gaze is attracted more easily. The rotating motion includes a rotation about the vertical axis and a circling motion around the visitor at a predetermined radius (for example, 1 m).
Fig. 5 is a flowchart showing an example of the guidance robot dispatch process executed by the arithmetic unit 45 of the server device 4 in fig. 3. The guidance robot dispatch process shown in fig. 5 is executed, for example, from the opening to the closing of the exhibition hall 1. When the guidance robot dispatch process starts, the in-facility image acquisition unit 451 acquires the in-exhibition-hall images captured by the plurality of imaging units 11 provided in the exhibition hall 1 (in-facility image acquisition step). The in-facility image acquisition step is repeated at a predetermined interval (for example, every 10 ms) while the guidance robot dispatch process is performed.
As shown in fig. 5, first, in step S1, the visitor determination process (visitor determination step) of identifying visitors in the exhibition hall 1 from the in-exhibition-hall image acquired in the in-facility image acquisition step is performed.
Fig. 6 is a flowchart showing an example of the visitor determination process executed by the arithmetic unit 45 of the server device 4 in fig. 3. As shown in fig. 6, in the visitor determination process, first, in step S10, the visitor determination unit 454 extracts a person visiting the facility from the in-exhibition-hall image acquired in the in-facility image acquisition step. Next, in step S11, the visitor determination unit 454 determines whether the face image of the extracted person can be extracted (recognized). If step S11 is negative (S11: NO), the process proceeds to step S12, and the robot movement instruction unit 457 instructs the guidance robot 3 to move so that the guidance robot 3 is dispatched to the vicinity of the person whose face image was determined to be unextractable.
Next, in step S13, the visitor determination unit 454 determines whether the face image of the person determined to be unextractable can now be acquired. If step S13 is negative (S13: NO), the process proceeds to step S14, where the robot gaze instruction unit 453 instructs the gaze direction of the pair of virtual eyes 306 of the guide robot 3, attracting the visitor's gaze to prompt a change in the visitor's position or posture (robot gaze instruction step).
Next, in step S15, the visitor determination unit 454 determines whether the face image of the person determined to be unextractable can now be acquired. If step S15 is negative (S15: NO), the process proceeds to step S16, where the robot image acquisition unit 452 causes the guidance robot 3 to photograph the person's face, thereby acquiring the face image of the person determined to be unextractable (robot image acquisition step).
On the other hand, when any of steps S11, S13, and S15 is affirmative (S11, S13, S15: YES), or when the face image of the person determined to be unextractable (unidentifiable) is acquired in step S16, the visitor determination unit 454 extracts (recognizes) the visitor's face image in step S17. Next, in step S18, the visitor determination unit 454 identifies the visitor.
When the visitor determination process (visitor determination step) has been executed, next, in step S2, the visitor group determination unit 455 processes the visitors identified in the visitor determination step. Next, in step S3, it is determined whether the identified visitors constitute a visitor group. If step S3 is negative (S3: NO), the process returns to step S1. If step S3 is affirmative (S3: YES), the process proceeds to step S4, and the visitor group is determined. Steps S2 to S4 are hereinafter referred to as the visitor group determination process (visitor group determination step).
When the visitor group determination process has been executed, next, in step S5, the group emotion determination unit 456 estimates the emotion of each visitor constituting the visitor group (group emotion determination step). Next, in step S6, the group emotion determination unit 456 determines the group emotion of the visitor group based on the estimated emotions of the individual visitors (group emotion determination step). Next, in step S7, the robot movement instruction unit 457 instructs the guidance robot 3 to move so that the guidance robot 3 is dispatched to the vicinity of a visitor group in accordance with the group emotion of the visitor group (robot movement instruction step).
Next, in step S8, it is determined whether the closing time of the exhibition hall 1 has been reached. If step S8 is negative (S8: NO), the process returns to step S1, and steps S1 to S8 are repeated. If step S8 is affirmative (S8: YES), the process ends.
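Condensed into code, the flow of figs. 5 and 6 reads roughly as follows; every helper call is a stand-in for the unit of the same name, not an API defined by the patent:

```python
def dispatch_loop(server):
    """Hypothetical rendering of steps S1-S8 of the dispatch process."""
    while not server.is_closing_time():                        # S8
        visitors = server.identify_visitors()                  # S1 (fig. 6, S10-S18)
        groups = server.determine_groups(visitors)             # S2-S4
        for group in groups:
            emotions = server.estimate_emotions(group)         # S5
            group.emotion = server.determine_group_emotion(emotions)  # S6
        target = server.choose_group(groups)                   # worst mood, then smallest
        if target is not None:
            server.dispatch_robot(target)                      # S7
```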
The present embodiment can provide the following effects.
(1) The server device 4 manages the self-traveling guidance robot 3, which performs services such as reception for visitor groups visiting the exhibition hall 1 of the car sales shop. The server device 4 includes: an in-facility image acquisition unit 451 that acquires in-exhibition-hall images captured by the imaging units 11 provided on the ceiling 10 of the exhibition hall 1; a visitor group determination unit 455 that determines a visitor group visiting the exhibition hall 1 based on the in-exhibition-hall images acquired by the in-facility image acquisition unit 451; a group emotion determination unit 456 that estimates, based on the in-exhibition-hall images, the emotions of the one or more visitors forming the visitor group determined by the visitor group determination unit 455, and determines the group emotion of the visitor group; and a robot movement instruction unit 457 that outputs a movement command to the guidance robot 3, based on the group emotion determined by the group emotion determination unit 456, so that the guidance robot 3 moves to the vicinity of the visitor group.
With this configuration, a limited number of guidance robots 3 can be used to receive visitor groups visiting the exhibition hall 1 of the car sales shop appropriately. For example, by preferentially dispatching the guidance robot 3 to a visitor group whose group emotion is relatively bad and having the guidance robot 3 communicate with that group, the group's mood can be improved and further deterioration suppressed. Likewise, by dispatching the guidance robot 3 to a visitor group that has waited a long time and whose mood has deteriorated, and having the guidance robot 3 communicate with the group, its mood can be improved and further deterioration suppressed. This allows subsequent reception, such as business discussions by shop staff (e.g., a salesperson), to proceed smoothly.
In addition, if a large number of guidance robots 3 were arranged in the exhibition hall 1, visitors might feel crowded and the cost to the car sales shop would increase. By dispatching the guidance robots 3 according to the group emotions of visitor groups, however, efficient communication with visitor groups can be achieved with a limited number of guidance robots 3. For example, preferentially dispatching the guidance robot 3 to a visitor group whose group emotion is relatively bad achieves efficient communication with the visitor groups using a limited number of guidance robots 3.
The guidance robot 3 can also be used to ask a visitor group about the purpose of its visit in advance, or to give the group simple introductions. For example, the guidance robot 3 can ask visitor groups that have waited a long time, or are expected to wait a long time, about the purpose of their visit in advance, or give them simple introductions. This allows the shop staff (e.g., a salesperson) to respond effectively afterwards.
As described above, using the server device 4 of the present embodiment enables efficient and smooth reception with, for example, a limited number of guidance robots 3.
(2) The group emotion determination unit 456 determines the group emotion from the proportions of the emotions of the one or more visitors estimated from the in-exhibition-hall images. For example, the group emotion determination unit 456 determines a group in which the proportion of visitors in a good mood is high (the proportion of visitors in a bad mood is low) to be a good-mood group, and a group in which the proportion of visitors in a bad mood is high (the proportion of visitors in a good mood is low) to be a bad-mood group. The group emotion can thus be determined with a simple configuration.
(3) The visitor group determination unit 455 determines, from the in-hall image, the number of visitors constituting each visitor group visiting the exhibition hall 1. When there are a plurality of visitor groups having the same group emotion as determined by the group emotion determination unit 456, the robot movement instruction unit 457 outputs a movement command to the guidance robot 3 so that the guidance robot 3 is dispatched preferentially, starting with the visitor group having the fewest visitors. When a visitor group consists of only a few people, there are few conversation partners, so the visitors' moods are likely to deteriorate and the group emotion is likely to worsen. The fewer the visitors in a group, therefore, the more the group needs communication with the guidance robot 3. By preferentially dispatching the guidance robot 3 to small visitor groups, deterioration in the visitors' moods can be suppressed, or their moods can be improved.
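The emotion-first rule of item (2) combined with the group-size tie-break of item (3) could be expressed as a single sort key, as in this non-authoritative sketch (the emotion_rank values and the VisitorGroup fields from the earlier sketch are assumptions):

```python
def dispatch_order(groups):
    """Order visitor groups for robot dispatch: bad-mood groups first, and,
    among groups with the same group emotion, smaller groups first, since
    visitors with fewer conversation partners are assumed to grow
    impatient sooner."""
    emotion_rank = {"bad": 0, "good": 1}
    return sorted(groups,
                  key=lambda g: (emotion_rank[g.group_emotion], g.visitor_count))
```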
(4) The server device 4 further includes a robot image acquisition unit 452 that acquires a robot captured image, i.e., an image, captured by the guidance robot 3, of a visitor constituting the visitor group determined by the visitor group determination unit 455. The group emotion determination unit 456 estimates the emotion of the visitor from the robot captured image acquired by the robot image acquisition unit 452 and determines the group emotion of the visitor group determined by the visitor group determination unit 455. Having the guidance robot 3 photograph the visitor allows the visitor's face to be captured accurately, so the visitor's emotion can be estimated reliably and, as a result, the group emotion can be determined more reliably. Moreover, even when a visitor's emotion changes after the guidance robot is dispatched, the group emotion corresponding to that change can be determined. That is, the group emotion can be updated based on the change in the visitor's emotion.
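A sketch of how the robot-captured close-ups might refresh a previously determined group emotion, reusing determine_group_emotion from the sketch under item (2); capture_faces and estimate_emotion_from_face are hypothetical names, not taken from the patent:

```python
def update_group_emotion(robot, group, estimate_emotion_from_face,
                         determine_group_emotion):
    """Re-estimate the group emotion from close-up images taken by the
    guidance robot 3 after it reaches the group (robot image acquisition
    unit 452 in the patent's terms)."""
    faces = robot.capture_faces()   # close-ups assumed more reliable than
                                    # the ceiling-mounted imaging unit 11
    group.emotions = [estimate_emotion_from_face(face) for face in faces]
    group.group_emotion = determine_group_emotion(group.emotions)
    return group.group_emotion
```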
(5) A robot management method manages a self-traveling guidance robot 3 that serves visitor groups visiting the exhibition hall 1 of a car sales shop. The robot management method includes: an in-facility image acquisition step in which the server device 4 acquires in-exhibition-hall images captured by a plurality of imaging units 11 provided in the exhibition hall 1; a visitor group determination step of determining a visitor group visiting the facility from the in-hall images acquired in the in-facility image acquisition step; a group emotion determination step of determining the group emotion of the visitor group by estimating, from the in-hall images, the emotion of each of the one or more visitors constituting the visitor group determined in the visitor group determination step; and a robot movement instruction step of outputting a movement command to the guidance robot 3, based on the group emotion determined in the group emotion determination step, so that the guidance robot 3 moves to the vicinity of the visitor group.
With this method, as with the server device 4 described above, the car sales shop can appropriately receive visitor groups visiting its exhibition hall 1 using a limited number of guidance robots 3. For example, efficient and smooth reception can be performed with a limited number of guidance robots 3.
(6) The robot management system 100 includes: the server device 4; a plurality of imaging units 11 provided in the exhibition hall 1 and capable of communicating with the server device 4; and a guidance robot 3 that is disposed in the exhibition hall 1, provides services to visitor groups visiting the exhibition hall 1, can travel by itself, and can communicate with the server device 4. This also allows the car sales shop to appropriately receive visitor groups visiting its exhibition hall 1 with a limited number of guidance robots 3, enabling efficient and smooth reception.
In the above embodiment, the calculation unit 45 of the server device 4 includes the in-facility image acquisition unit 451, the robot image acquisition unit 452, the robot line instruction unit 453, the visitor determination unit 454, the visitor group determination unit 455, the group emotion determination unit 456, and the robot movement instruction unit 457, but the configuration is not limited thereto. It is sufficient that the calculation unit 45 include the in-facility image acquisition unit 451, the visitor group determination unit 455, the group emotion determination unit 456, and the robot movement instruction unit 457. Preferably, the calculation unit 45 also includes the robot image acquisition unit 452.
In the above embodiment, the group emotion determination unit 456 of the calculation unit 45 of the server device 4 determines the group emotion by estimating whether each of the plurality of visitors is in a good or bad mood, but the present invention is not limited thereto. For example, the group emotion determination unit may determine the group emotion by estimating the joy, anger, sorrow, and pleasure of each of the plurality of visitors.
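One way to realize this variation, sketched under the assumption that a multi-class classifier yields one of the four labels and that joy and pleasure map to a good mood (the mapping is an illustrative assumption only, not stated in the patent):

```python
def mood_from_emotion_class(emotion_class):
    """Collapse a joy/anger/sorrow/pleasure estimate into the binary
    good/bad mood used by the group emotion determination."""
    return "good" if emotion_class in ("joy", "pleasure") else "bad"
```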
In the above embodiment, the guidance robot dispatch process executed by the calculation unit 45 of the server device 4 includes an in-facility image acquisition step, a visitor determination step, a visitor group determination step, a group emotion determination step, and a robot movement instruction step, but the process is not limited thereto. It is sufficient that the guidance robot dispatch process include the in-facility image acquisition step, the visitor group determination step, the group emotion determination step, and the robot movement instruction step.
In the above-described embodiment, the robot image acquisition step is included in the visitor determination process, but the present invention is not limited thereto. For example, the robot image acquisition step may be included in the guidance robot dispatch process independently of the visitor determination process. In this case, the robot image acquisition step may be executed in order to update the group emotion.
According to the present invention, a facility can appropriately receive visitor groups visiting the facility using only a limited number of guidance robots.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the scope of the disclosure of the following claims.

Claims (9)

1. A robot management device (4) for managing a self-traveling guidance robot (3) serving a visitor group visiting a facility, the robot management device comprising:
an in-facility image acquisition unit (451) that acquires an in-facility image captured by an imaging unit (11) provided in the facility;
a visitor group determination unit (455) that determines a visitor group visiting the facility, based on the in-facility image acquired by the in-facility image acquisition unit (451);
a group emotion determination unit (456) that estimates, based on the in-facility image, the emotion of each of one or more visitors forming the visitor group determined by the visitor group determination unit (455), and determines a group emotion, which is the emotion of the visitor group as a whole; and
a robot movement instruction unit (457) that outputs a movement command to the guidance robot (3) so that the guidance robot (3) moves to the vicinity of the visitor group, based on the group emotion determined by the group emotion determination unit (456).
2. The robot management device according to claim 1,
the group emotion determination unit (456) determines the group emotion from the proportion of the emotion of one or more visitors estimated based on the in-facility image.
3. The robot management device according to claim 1 or claim 2,
the group emotion determination unit (456) estimates, based on the in-facility image, the mood of each of the one or more visitors forming the visitor group determined by the visitor group determination unit (455), and determines the group emotion based on the proportion of visitors in a good mood and visitors in a bad mood in the visitor group.
4. The robot management device according to any one of claims 1 to 3,
the group emotion determination unit (456) further determines the group emotion of the visitor group determined by the visitor group determination unit (455) based on at least one of a face image of a visitor and an image showing a motion of a visitor, extracted from the in-facility image.
5. The robot management device according to any one of claims 1 to 4,
when there are a plurality of visitor groups whose group emotions have been determined by the group emotion determination unit (456), the robot movement instruction unit (457) outputs a movement command to the guidance robot (3) so that the guidance robot (3) is dispatched preferentially, starting with the visitor group having the worst group emotion.
6. The robot management device according to any one of claims 1 to 5,
the visitor group determination unit (455) determines, based on the in-facility image, the number of visitors forming each visitor group visiting the facility,
and, when there are a plurality of visitor groups having the same group emotion as determined by the group emotion determination unit (456), the robot movement instruction unit (457) outputs a movement command to the guidance robot (3) so that the guidance robot (3) is dispatched preferentially, starting with the visitor group having the fewest visitors.
7. The robot management device according to any one of claims 1 to 6,
further comprising a robot image acquisition unit (452), wherein the robot image acquisition unit (452) acquires a robot captured image, which is an image, captured by the guidance robot (3), of a visitor constituting the visitor group determined by the visitor group determination unit (455),
the group emotion determination unit (456) estimates the emotion of a visitor based on the robot captured image acquired by the robot image acquisition unit (452), and determines the group emotion of the visitor group determined by the visitor group determination unit (455).
8. A robot management method for managing a self-traveling guidance robot (3) serving a visitor group visiting a facility, comprising:
an in-facility image acquisition step of acquiring an in-facility image captured by an imaging unit (11) provided in the facility;
a visitor group determination step of determining a visitor group visiting the facility from the in-facility image acquired in the in-facility image acquisition step;
a group emotion determination step of estimating, based on the in-facility image, the emotion of each of one or more visitors constituting the visitor group determined in the visitor group determination step, and determining a group emotion, which is the emotion of the visitor group as a whole; and
a robot movement instruction step of outputting a movement command to the guidance robot (3) so that the guidance robot (3) moves to the vicinity of the visitor group, based on the group emotion determined in the group emotion determination step.
9. A robot management system comprising:
the robot management device (4) according to any one of claims 1 to 7;
an imaging unit (11) which is provided in a facility and can communicate with the robot management device (4); and
a guidance robot (3) that is disposed in the facility, serves visitor groups visiting the facility, can travel by itself, and can communicate with the robot management device (4).
CN202010642903.5A 2019-07-17 2020-07-06 Robot management device, robot management method, and robot management system Active CN112238454B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-132083 2019-07-17
JP2019132083A JP7273637B2 (en) 2019-07-17 2019-07-17 ROBOT MANAGEMENT DEVICE, ROBOT MANAGEMENT METHOD AND ROBOT MANAGEMENT SYSTEM

Publications (2)

Publication Number Publication Date
CN112238454A true CN112238454A (en) 2021-01-19
CN112238454B CN112238454B (en) 2024-03-01

Family

ID=74170858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010642903.5A Active CN112238454B (en) 2019-07-17 2020-07-06 Robot management device, robot management method, and robot management system

Country Status (2)

Country Link
JP (1) JP7273637B2 (en)
CN (1) CN112238454B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007043712A1 (en) * 2005-10-14 2007-04-19 Nagasaki University Emotion evaluating method and emotion indicating method, and program, recording medium, and system for the methods
JP2011186521A (en) * 2010-03-04 2011-09-22 Nec Corp Emotion estimation device and emotion estimation method
US20150094851A1 (en) * 2013-09-27 2015-04-02 Honda Motor Co., Ltd. Robot control system, robot control method and output control method
JP2017159396A (en) * 2016-03-09 2017-09-14 大日本印刷株式会社 Guide robot control system, program, and guide robot
KR20180054407A (en) * 2016-11-15 2018-05-24 주식회사 로보러스 Apparatus for recognizing user emotion and method thereof, and robot system using the same
US20180370039A1 (en) * 2017-06-23 2018-12-27 Casio Computer Co., Ltd. More endearing robot, method of controlling the same, and non-transitory recording medium
CN109318239A (en) * 2018-10-09 2019-02-12 深圳市三宝创新智能有限公司 A kind of hospital guide robot, hospital guide's method and device
JP2019066700A (en) * 2017-10-02 2019-04-25 株式会社ぐるなび Control method, information processing apparatus, and control program

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008260107A (en) * 2007-04-13 2008-10-30 Yaskawa Electric Corp Mobile robot system
JP6058053B2 (en) * 2014-06-05 2017-01-11 Cocoro Sb株式会社 Recording control system, system and program
JP6905812B2 (en) * 2016-06-14 2021-07-21 グローリー株式会社 Store reception system
JP6774018B2 (en) * 2016-09-15 2020-10-21 富士ゼロックス株式会社 Dialogue device
JP6965525B2 (en) * 2017-02-24 2021-11-10 沖電気工業株式会社 Emotion estimation server device, emotion estimation method, presentation device and emotion estimation system

Also Published As

Publication number Publication date
JP2021018481A (en) 2021-02-15
CN112238454B (en) 2024-03-01
JP7273637B2 (en) 2023-05-15

Similar Documents

Publication Publication Date Title
US7584158B2 (en) User support apparatus
US11330951B2 (en) Robot cleaner and method of operating the same
JP4369326B2 (en) Facility information providing system and facility information providing method
JP4599522B2 (en) Communication robot
KR20110016165A (en) Intelligent display apparutus having publicity function and method of performing publicity function
JP2003050559A (en) Autonomously movable robot
JP6700439B2 (en) Elevator system and information providing method in elevator system
JPWO2020129309A1 (en) Guidance robot control device, guidance system using it, and guidance robot control method
KR20180074404A (en) Robot for airport and method thereof
JP2005131713A (en) Communication robot
CN112238458B (en) Robot management device, robot management method, and robot management system
JP2017209769A (en) Service system and robot
CN112238454A (en) Robot management device, robot management method, and robot management system
JP6117404B1 (en) Elevator system
US11994875B2 (en) Control device, control method, and control system
US20220297308A1 (en) Control device, control method, and control system
WO2022153411A1 (en) Guidance system
US20220300982A1 (en) Customer service system, server, control method, and storage medium
JP7156457B1 (en) PASSENGER CONVEYOR NOTIFICATION SYSTEM, PORTABLE TERMINAL DEVICE, SERVER, AND NOTIFICATION SYSTEM CONTROL METHOD
US20220301029A1 (en) Information delivery system, information delivery method, and storage medium
JP6739017B1 (en) Tourism support device, robot equipped with the device, tourism support system, and tourism support method
JP7230728B2 (en) Mobile system
WO2020166377A1 (en) Moving body, control method
WO2017187614A1 (en) Communication control device, method, and system
US20240087334A1 (en) Information process system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant