CN112238454B - Robot management device, robot management method, and robot management system - Google Patents


Info

Publication number
CN112238454B
Authority
CN
China
Prior art keywords
group
visitor
robot
emotion
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010642903.5A
Other languages
Chinese (zh)
Other versions
CN112238454A (en)
Inventor
坂口浩之
塚本智宏
栁田梦
新井敬一郎
达富由树
Current Assignee
Honda Motor Co Ltd
Original Assignee
Honda Motor Co Ltd
Priority date
Filing date
Publication date
Application filed by Honda Motor Co Ltd filed Critical Honda Motor Co Ltd
Publication of CN112238454A
Application granted
Publication of CN112238454B
Legal status: Active

Classifications

    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J9/1689 Teleoperation
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/0005 Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G06V40/174 Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Manipulator (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A server device (4) manages guidance robots that serve visitor groups visiting an exhibition hall. The server device (4) comprises: an in-facility image acquisition unit (451) that acquires an intra-exhibition-hall image captured by a capturing unit provided on the ceiling of the exhibition hall; a visitor group determination unit (455) that determines, from the intra-exhibition-hall image, a visitor group visiting the exhibition hall; a group emotion determination unit (456) that estimates the emotion of each visitor of the determined visitor group based on the intra-exhibition-hall image and determines the group emotion of the visitor group; and a robot movement instruction unit (457) that, based on the determined group emotion, outputs a movement instruction to a guidance robot so that the guidance robot moves to the vicinity of the visitor group.

Description

Robot management device, robot management method, and robot management system
Technical Field
The present invention relates to a robot management device, a robot management method, and a robot management system.
Background
Recently, various techniques concerning robot control systems that use guidance robots have been proposed as one means of addressing labor shortages (see, for example, patent document 1). In the robot control system described in patent document 1, a guidance robot (communication robot) introduces, for example, exhibits to visitors and guides them around a presentation venue.
However, some visitors to a facility wish to communicate with the facility staff, while others do not. In addition, facilities are often visited in groups. The facility side wishes to receive these persons (groups) appropriately, and a technique is therefore desired that uses a limited number of guidance robots to realize appropriate reception of visitor groups by the facility.
Prior art literature
Patent document 1: Japanese Patent No. 6142306 (JP 6142306 B).
Disclosure of Invention
The present invention provides a robot management device for managing a guidance robot that is capable of traveling by itself and that serves a group of visitors visiting a facility. The robot management device includes: an in-facility image acquisition unit that acquires an in-facility image captured by a capturing unit provided in the facility; a visitor group determination unit that determines a group of visitors visiting the facility from the in-facility image acquired by the in-facility image acquisition unit; a group emotion determination unit that estimates, based on the in-facility image, the emotions of one or more visitors constituting the visitor group determined by the visitor group determination unit, and determines a group emotion of the visitor group; and a robot movement instruction unit that, based on the group emotion determined by the group emotion determination unit, outputs a movement instruction to the guidance robot so that the guidance robot moves to the vicinity of the visitor group.
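The units recited above form a simple acquire → determine → estimate → instruct pipeline. The following Python sketch illustrates that flow only; the class name, the injected callables, and the string-based group identifiers are illustrative assumptions, not structures taken from the patent.

```python
from dataclasses import dataclass


@dataclass
class MoveInstruction:
    """A movement instruction telling one robot to approach one visitor group."""
    robot_id: str
    target_group: str


class RobotManager:
    """Minimal sketch of the claimed units as one processing pipeline."""

    def __init__(self, detect_groups, estimate_group_emotion):
        # visitor group determination unit (injected as a callable)
        self.detect_groups = detect_groups
        # group emotion determination unit (injected as a callable)
        self.estimate_group_emotion = estimate_group_emotion

    def process(self, facility_image, robot_id):
        # the in-facility image arrives from a capturing unit in the facility
        groups = self.detect_groups(facility_image)
        # robot movement instruction unit: dispatch toward a group
        # whose group emotion is poor, if one exists
        for group in groups:
            if self.estimate_group_emotion(facility_image, group) == "poor":
                return MoveInstruction(robot_id, group)
        return None
```

With stub detectors standing in for the image-analysis units, `process` returns an instruction targeting the first poor-mood group, or `None` when every group is in a good mood.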
Another aspect of the present invention is a robot management method for managing a guidance robot that is capable of traveling by itself and that serves a group of visitors visiting a facility. The robot management method includes: an in-facility image acquisition step of acquiring an in-facility image captured by a capturing unit provided in the facility; a visitor group determination step of determining a group of visitors visiting the facility from the in-facility image acquired in the in-facility image acquisition step; a group emotion determination step of estimating, based on the in-facility image, the emotions of one or more visitors constituting the visitor group determined in the visitor group determination step, and determining a group emotion of the visitor group; and a robot movement instruction step of outputting, based on the group emotion determined in the group emotion determination step, a movement instruction to the guidance robot so that the guidance robot moves to the vicinity of the visitor group.
A robot management system according to still another aspect of the present invention includes: the robot management device; an imaging unit provided in a facility and capable of communicating with the robot management device; and a guidance robot which is disposed in the facility, serves a group of visitors visiting the facility, is capable of traveling by itself, and is capable of communicating with the robot management device.
Drawings
The objects, features and advantages of the present invention are further elucidated by the following description of embodiments in connection with the accompanying drawings.
Fig. 1 is a schematic configuration diagram of a robot management system using a server device constituting a robot management device according to an embodiment of the present invention.
Fig. 2 is a perspective view schematically showing a guidance robot constituting the robot management system shown in fig. 1.
Fig. 3 is a block diagram showing a main part structure of the robot management system shown in fig. 1.
Fig. 4 is a block diagram showing a main part structure of the server apparatus shown in fig. 3.
Fig. 5 is a flowchart showing an example of the guidance robot dispatch process executed by the computing unit of the server apparatus of fig. 3.
Fig. 6 is a flowchart showing an example of the visitor specification processing executed by the arithmetic unit of the server apparatus of fig. 3.
Detailed Description
An embodiment of the present invention will be described below with reference to figs. 1 to 6. The robot management device according to the embodiment constitutes part of a robot management system that dispatches a guidance robot to visitors visiting a facility and provides them with services such as simple guidance (hereinafter referred to as the robot dispatch guidance service).
Examples of businesses that may provide the robot dispatch guidance service include sales shops that retail various goods; art-related facilities such as art museums, museums, and galleries; science museums; memorial halls; exhibitions; seminars; and the like. Examples of sales shops that retail various goods include department stores, supermarkets, and specialty shops, and examples of specialty shops include shops specializing in particular goods, automobile sales shops, and the like. In the following embodiment, an example will be described in which the robot management device is configured by a server device installed at an automobile sales shop, and a guidance robot disposed in the exhibition hall of the automobile sales shop is dispatched to visitors visiting the exhibition hall.
Fig. 1 is a schematic configuration diagram of a robot management system 100 including a server device 4 that constitutes a robot management device according to an embodiment of the present invention. As shown in fig. 1, the robot management system 100 of the present embodiment includes imaging units 11, a guidance robot 3, and the server device 4, with the robot management device mainly constituted by the server device 4. In the robot management system 100, a guidance robot 3 capable of traveling by itself is disposed in an exhibition hall 1, a facility of an automobile sales shop where vehicles 2 are exhibited. The server device 4 of the automobile sales shop then determines the groups of visitors visiting the exhibition hall 1 based on images of the exhibition hall 1 (hereinafter referred to as intra-exhibition-hall images or in-facility images) captured by a plurality of imaging units 11 provided on the ceiling 10 of the exhibition hall 1. More specifically, the server device 4 identifies each visitor from the face image of that visitor shown in the intra-exhibition-hall image, and identifies the visitor group to which those visitors belong.
The server device 4 also determines, from the intra-exhibition-hall image, a group emotion, that is, the emotion of the determined visitor group as a whole. More specifically, the emotions of the identified visitors are estimated from the intra-exhibition-hall image, and the group emotion is determined from the estimated emotions. The guidance robot 3 moves toward the visitor group according to the determined group emotion, and is preferentially dispatched to a visitor group whose mood is relatively poor.
As shown in fig. 1, when the friend group 101 consisting of three visitors A, B, and C and the parent-child group 102 consisting of two visitors D and E arrive at the exhibition hall 1, the server device 4 determines the group emotions of the friend group 101 and the parent-child group 102 from the intra-exhibition-hall images captured by the plurality of imaging units 11.
The server device 4 estimates the emotions of visitors A to C constituting the friend group 101 and determines the group emotion of the friend group 101 to be "good" when, for example, the emotions of visitors A and B are estimated to be good and the emotion of visitor C is estimated to be bad. Similarly, the server device 4 determines the group emotion of the parent-child group 102 to be "poor" when, among the visitors D and E constituting the parent-child group 102, for example, the emotion of visitor D (e.g., the parent) is good and the emotion of visitor E (e.g., the child) is bad. The emotion of each visitor can be estimated from the behavior and the like of the visitor.
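The two examples above are consistent with a strict-majority rule: the three-person group with two good moods resolves to "good", while the two-person group with only one good mood resolves to "poor". A minimal sketch of such a rule follows; the function name and the majority rule itself are illustrative assumptions inferred from the examples, not language from the patent.

```python
def determine_group_emotion(member_emotions):
    """Determine a group emotion from estimated per-member emotions.

    member_emotions: list of "good" / "bad" labels, one per visitor,
    estimated from the intra-exhibition-hall image.  A strict majority
    of "good" members yields a "good" group emotion; otherwise "poor".
    (Illustrative rule, consistent with the examples in the text.)
    """
    good = sum(1 for e in member_emotions if e == "good")
    return "good" if good > len(member_emotions) / 2 else "poor"


# The two examples from the description:
friend_group = ["good", "good", "bad"]   # visitors A, B, C
parent_child_group = ["good", "bad"]     # visitors D, E
```

Under this rule the friend group resolves to "good" and the parent-child group to "poor", matching the dispatch decision described next.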
The server device 4 instructs the guidance robot 3 to move to the position of the parent-child group 102, thereby dispatching the guidance robot 3 to the parent-child group 102, whose group emotion is poor. The guidance robot 3 that has moved to the position of the parent-child group 102 provides services involving communication with the visitors D and E constituting the group. For example, the guidance robot 3 communicates with the parent-child group 102 by becoming a play partner for visitor E, asking visitor D about the main purpose of the visit, giving simple introductions, and the like.
Here, the guidance robot 3 will be briefly described with reference to fig. 2. Fig. 2 is a perspective view schematically showing the guidance robot 3, which constitutes a part of the robot management system 100 shown in fig. 1. As shown in fig. 2, the guidance robot 3 is formed in a substantially gourd shape, with a head 301 above a narrow waist portion 302 and a body 303 below it. As a whole, the guidance robot 3 has an endearing, rounded two-part form consisting essentially of a head and a body, with the head 301 slightly larger than the body 303. The height of the guidance robot 3 is about 110 cm, and it has no hands or feet.
The guidance robot 3 can move in any direction through 0 to 360 degrees, such as forward, backward, and diagonally, by means of a traveling device 304 provided at the lower end of the body 303; the specific structure of the traveling device 304 is not described here. Because the head 301 is slightly larger than the body 303 and the robot has no hands or feet, a child, for example, finds it easy to hug and communicate with. The guidance robot 3 can also swing in the front-rear and left-right directions by means of the traveling device 304, and this swinging makes it easier for a visitor to notice, for example, the approach of the guidance robot 3. In this way, the guidance robot 3 can perform actions that facilitate communication with visitors.
A substantially elliptical face 305 is provided on the head 301 of the guidance robot 3, and the face 305 is configured to be capable of displaying the expressions of the guidance robot 3, simple text images, and the like. A pair of virtual eyes 306 symbolizing eyes is displayed on the face 305, through which various expressions can be shown; for example, expressions such as joy and delight can be expressed by changing the shape of the pair of virtual eyes 306. A virtual mouth 307 symbolizing a mouth is also displayed on the face 305, and changing the shapes of the virtual eyes 306 and the virtual mouth 307 together makes changes in expression easy to understand.
The guidance robot 3 is configured such that the pair of virtual eyes 306 can change position within the face 305, and it expresses the action of moving its line of sight by changing their positions. By performing this gaze-moving action in front of a visitor, the guidance robot 3 attracts the visitor's line of sight, and performing a rotating or swinging motion together with the gaze movement attracts the visitor's line of sight even more easily.
The guidance robot 3 configured as described above is dispatched, for example, to the parent-child group 102. The dispatched guidance robot 3 serves as a play partner for visitor E, asks visitor D about the main purpose of the visit, and gives simple introductions. The guidance robot 3 is also dispatched to the friend group 101 so that visitors A to C can be identified more easily. For example, when the face image of one of visitors A to C of the friend group 101 cannot be extracted and that visitor cannot be identified, the guidance robot 3 is dispatched to the friend group 101 to prompt changes in the positions and postures of visitors A to C so that the imaging units 11 can easily photograph their faces. In order to satisfactorily realize such a robot dispatch guidance service at an automobile sales shop, the robot management system 100 including the server device 4 is configured as follows in the present embodiment.
Fig. 3 is a block diagram showing a main part structure of the robot management system 100 shown in fig. 1. As shown in fig. 3, the imaging unit 11, the guidance robot 3, and the server device 4 are connected to a communication network 5 such as a wireless communication network, the internet, or a telephone network. For convenience, only one imaging unit 11 is shown in fig. 3, but as shown in fig. 1, there are actually a plurality of imaging units 11. Likewise, only one guidance robot 3 is shown in fig. 3, but a plurality of guidance robots 3 can be provided. However, if a large number of guidance robots 3 were disposed in the exhibition hall 1, they could give visitors a feeling of oppression and would increase costs for the automobile sales shop, so the number of guidance robots 3 is kept small.
As shown in fig. 3, the imaging unit 11 includes a communication unit 111, a camera unit 112, a sensor unit 113, a storage unit 114, and an arithmetic unit 115. The communication unit 111 is configured to be capable of wireless communication with the server device 4 and the guidance robot 3 via the communication network 5. The camera unit 112 is a camera having an imaging element such as a CCD or CMOS and can photograph visitors who come to the exhibition hall 1. The sensor unit 113 is a sensor such as a motion detection sensor or a human sensor and can detect the position and movements of a visitor in the exhibition hall 1. A plurality of camera units 112 and sensor units 113 are arranged on the ceiling 10 of the exhibition hall 1 so that a visitor can be photographed and detected at any position in the exhibition hall 1.
The storage unit 114 has a volatile or nonvolatile memory, not shown. The storage unit 114 stores various data, various programs executed by the arithmetic unit 115, and the like. For example, the storage unit 114 temporarily stores the intra-exhibition image captured by the camera unit 112 and the position information of the visitor detected by the sensor unit 113.
The computation unit 115 has a CPU, executes predetermined processing based on signals received from the outside of the imaging unit 11 through the communication unit 111, various programs stored in the storage unit 114, and the like, and outputs predetermined control signals to the communication unit 111, the camera unit 112, the sensor unit 113, and the storage unit 114.
For example, the arithmetic unit 115 outputs, based on a signal received from the server device 4 via the communication unit 111 and a predetermined program, a control signal for capturing an image of the exhibition hall 1 to the camera unit 112 and a control signal for detecting the position of a visitor to the sensor unit 113. The arithmetic unit 115 then outputs a control signal for transmitting the captured intra-exhibition-hall image and the detected visitor position information to the server device 4 via the communication unit 111. Through the processing executed by the arithmetic unit 115, the inside of the exhibition hall 1 is photographed, the positions of visitors are detected, and the intra-exhibition-hall image and the visitor position information are transmitted to the server device 4.
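The behavior described for arithmetic unit 115 (capture on request from the server, then transmit the image and position data back) can be sketched as follows. The command string, the hardware callables, and the message format are all illustrative stand-ins, not details from the patent.

```python
def handle_server_command(command, camera, sensor, send_to_server):
    """Sketch of the capture-and-transmit processing of arithmetic unit 115.

    command:        instruction received from the server device
    camera:         stand-in for camera unit 112 (returns an image)
    sensor:         stand-in for sensor unit 113 (returns visitor positions)
    send_to_server: stand-in for transmission via communication unit 111
    """
    if command == "capture":
        image = camera()        # photograph the inside of the exhibition hall
        positions = sensor()    # detect visitor positions
        send_to_server({"image": image, "positions": positions})
        return True
    return False  # unrecognized commands are ignored
```

Injecting lambdas for the hardware lets the control flow be exercised without any camera or sensor present.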
As shown in fig. 3, the guidance robot 3 has a communication unit 31, an input unit 32, an output unit 33, a camera unit 34, a travel unit 35, a sensor unit 36, a storage unit 37, and an arithmetic unit 38 as functional configurations. The communication unit 31 is configured to be capable of wireless communication with the server apparatus 4 and the imaging unit 11 via the communication network 5. The input unit 32 includes various switch buttons (not shown) operated at the time of maintenance or the like, a microphone (not shown) capable of inputting the voice of a visitor or the like, and the like.
The output unit 33 includes a speaker (not shown) capable of outputting sound and a display unit 331 capable of displaying an image. The display unit 331 forms the face 305 of the guidance robot 3, and the pair of virtual eyes 306, the character image, and the like are displayed on the display unit 331. The display unit 331 may be configured to be capable of displaying a pair of virtual eyes 306, a character image, and the like, and may be configured to include a liquid crystal panel, a projector, a screen, and the like.
The camera unit 34 is configured as a camera having an imaging element such as a CCD or CMOS and can photograph visitors who come to the exhibition hall 1. The camera unit 34 is provided, for example, on the head 301 of the guidance robot 3; providing it on the head 301 makes it easy to photograph the face of a visitor. From the viewpoint of capturing the visitor's face, the camera unit 34 is preferably provided near the pair of virtual eyes 306 of the guidance robot 3.
The traveling unit 35 is constituted by the traveling device 304, which causes the guidance robot 3 to travel by itself. The traveling unit 35 includes a battery and a motor and travels by driving the motor with power from the battery; it can be constructed using known electric drive technology. The sensor unit 36 includes sensors that detect the traveling and stopped states of the guidance robot 3, such as a travel speed sensor, an acceleration sensor, and a gyro sensor, as well as sensors that detect the state around the guidance robot 3, such as an obstacle sensor, a human sensor, and a motion sensor.
The storage unit 37 has a volatile or nonvolatile memory, not shown. The storage unit 37 stores various data, various programs executed by the arithmetic unit 38, and the like. The storage unit 37 also temporarily stores data related to the reception of visitors, for example, requests made by visitors to the guidance robot 3, explanations given by the guidance robot 3 to visitors, and the like.
The storage unit 37 stores an exhibition hall database (exhibition hall D/B) 371 and a communication database (communication D/B) 372 as examples of the functional configuration assumed by the memory. The exhibition hall database 371 stores data corresponding to the arrangement of the exhibited vehicles 2, exhibition stands, and the like arranged in the exhibition hall 1, and is referenced when the guidance robot 3 moves within the exhibition hall. The communication database 372 stores data related to the voice recognition processing, voice output processing, and the like executed when the guidance robot 3 communicates with visitors, and is referenced during that communication.
The arithmetic unit 38 has a CPU and executes predetermined processing based on signals received from outside the guidance robot 3 through the communication unit 31, signals input through the input unit 32, signals detected by the sensor unit 36, and the various programs and data stored in the storage unit 37, and outputs predetermined control signals to the communication unit 31, the output unit 33, the camera unit 34, the traveling unit 35, and the storage unit 37.
More specifically, the arithmetic unit 38 outputs control signals to the traveling unit 35 and the storage unit 37 based on the signal received from the server device 4 via the communication unit 31 and the signal detected by the sensor unit 36. Thereby, the guidance robot 3 is dispatched to the visitor. The arithmetic unit 38 also outputs control signals to the camera unit 34 and the communication unit 31 based on the signals received from the server device 4 via the communication unit 31. Thus, the face of the visitor is photographed, and the photographed face image is transmitted to the server apparatus 4.
The arithmetic unit 38 also outputs a control signal to the output unit 33 (display unit 331) based on signals received from the server device 4 via the communication unit 31. Thereby, the expression of the guidance robot 3 is changed, and the line of sight of the pair of virtual eyes 306 is moved. The arithmetic unit 38 further outputs control signals to the output unit 33 and the storage unit 37 based on signals input through the input unit 32, enabling the guidance robot 3 to communicate with visitors.
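The routing behavior described for arithmetic unit 38 (server signals forwarded as control signals to the traveling, camera, and display sections) can be sketched as a small dispatcher. The signal kinds and dict format are assumptions for illustration only.

```python
def route_signal(signal, travel, capture_face, set_gaze):
    """Sketch of how arithmetic unit 38 might route server signals.

    Each signal is forwarded as a control signal to the relevant section:
    a move instruction to the traveling unit 35, a capture instruction to
    the camera unit 34, a gaze instruction to the display unit 331.
    """
    kind = signal["kind"]
    if kind == "move":
        travel(signal["target"])        # dispatch toward the visitor group
    elif kind == "capture_face":
        capture_face()                  # photograph the visitor's face
    elif kind == "gaze":
        set_gaze(signal["direction"])   # reposition the pair of virtual eyes 306
    else:
        raise ValueError(f"unknown signal: {kind}")
```

A dispatcher like this keeps the communication-facing code independent of the individual hardware sections, which are again injected as callables.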
Fig. 4 is a block diagram showing a main part configuration of the server device 4 shown in fig. 3. As shown in fig. 4, the server device 4 includes a communication unit 41, an input unit 42, an output unit 43, a storage unit 44, and an arithmetic unit 45. The server device 4 may use virtual server functions in the cloud, and its functions may be distributed across multiple machines.
The communication unit 41 is configured to be capable of wirelessly communicating with the imaging unit 11 and the guidance robot 3 via the communication network 5 (see fig. 3). The input unit 42 includes various switches operable by a user, such as a touch panel and a keyboard, and a microphone capable of inputting sound. The user referred to herein is a clerk of an automobile sales shop. The output unit 43 includes, for example, a monitor for displaying characters and images, a speaker for outputting sound, and the like.
The storage unit 44 has a volatile or nonvolatile memory, not shown. The storage unit 44 stores various data, various programs executed by the arithmetic unit 45, and the like. The storage unit 44 has a functional configuration assumed by a guidance robot database (guidance robot D/B) 441, an exhibition hall database (exhibition hall D/B) 442, and a visitor database (visitor D/B) 443 as memories.
The guidance robot database 441 stores information related to the guidance robots 3 used in the robot dispatch guidance service, such as basic information (e.g., the robot ID) and maintenance information of each guidance robot 3. The exhibition hall database 442 stores data related to the arrangement of the exhibited vehicles 2, exhibition stands, and the like arranged in the exhibition hall 1, and has the same structure as the exhibition hall database 371 stored in the storage unit 37 of each guidance robot 3. Accordingly, the robot management system 100 may also be configured to have only one of the exhibition hall database 442 and the exhibition hall database 371.
The visitor database 443 stores information on visitors who visit the exhibition hall 1 (hereinafter referred to as visitor data). The visitor data includes basic information on each visitor, such as address, name, age, occupation, and sex, as well as information such as the visitor's face image and visiting record. The visiting record includes the contents of discussions with the visitor, including those from previous visits.
When a visitor visits as part of a group of visitors (hereinafter referred to as a visitor group), the visitor database 443 stores the visitor data of that visitor in association with information on the visitor group (hereinafter referred to as visitor group information). The visitor group information is information capable of identifying the visitor group. For example, when a visitor visits together with family members, the visitor data is stored in association with information identifying that family; when a visitor visits together with friends in a multi-person group, the visitor data is stored in association with information identifying that group.
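One plausible shape for such an association is a mapping from visitor ID to a record carrying an optional group ID; the record fields and identifiers below are invented for illustration and do not come from the patent.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class VisitorRecord:
    """Illustrative shape of one visitor database 443 entry."""
    name: str
    face_image: bytes
    visiting_record: list = field(default_factory=list)
    group_id: Optional[str] = None  # visitor group information, when in a group


def visitors_in_group(db, group_id):
    """Look up all visitor IDs associated with one visitor group."""
    return [vid for vid, rec in db.items() if rec.group_id == group_id]
```

Storing the group ID on each record lets the server recover the whole group from any one identified face.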
The computing unit 45 has a CPU, and executes predetermined processing based on the signal received through the input unit 42, the signal received from the outside of the server apparatus 4 through the communication unit 41, various programs and various data stored in the storage unit 44, and the like, and outputs control signals to the communication unit 41, the output unit 43, and the storage unit 44.
As shown in fig. 4, the computing unit 45 has an in-facility image acquisition unit 451, a robot image acquisition unit 452, a robot visual line instruction unit 453, a visitor determination unit 454, a visitor group determination unit 455, a group emotion determination unit 456, and a robot movement instruction unit 457 as functional configurations.
The in-facility image acquisition unit 451 acquires intra-exhibition-hall images captured by the plurality of imaging units 11 provided in the exhibition hall 1. Specifically, the in-facility image acquisition unit 451 inputs, through the communication unit 41, image data (including still image data and moving image data) containing images of the interior of the exhibition hall 1 (the space in which the exhibition vehicles 2 are displayed) captured by the plurality of imaging units 11. More specifically, the in-facility image acquisition unit 451 outputs a signal instructing the plurality of imaging units 11 to capture intra-exhibition-hall images via the communication unit 41, and inputs the image data including the intra-exhibition-hall images captured by the plurality of imaging units 11 via the communication unit 41.
The robot image acquisition unit 452 acquires an image including a face image of a visitor captured by the guidance robot 3 installed in the exhibition hall 1. Specifically, the robot image acquisition unit 452 inputs image data (including still image data and moving image data) including a face image of the visitor photographed by the guidance robot 3 through the communication unit 41. More specifically, the robot image acquisition unit 452 outputs a signal instructing to capture a face image of the visitor to the guidance robot 3 through the communication unit 41, and inputs image data including the face image of the visitor captured by the guidance robot 3 through the communication unit 41.
The robot visual line instruction unit 453 instructs the visual line directions of the pair of virtual eyes 306 of the guidance robot 3. Specifically, the robot visual line instruction unit 453 outputs a control signal for controlling the position and operation of the pair of virtual eyes 306 of the guidance robot 3 to the guidance robot 3 through the communication unit 41.
When the control signal is input from the robot visual line instruction unit 453 through the communication unit 31, the guidance robot 3 controls the display unit 331 based on the input control signal and changes the positions and movements of the pair of virtual eyes 306, that is, moves its line of sight. When the guidance robot 3 moves its line of sight, the visitor's attention is drawn, and the visitor looks in the direction of the guidance robot 3's line of sight. In this way, by causing the guidance robot 3 to move its line of sight, the visitor's line of sight can be attracted and the visitor's position or posture can be changed. For example, by directing the line of sight of the guidance robot 3 toward the imaging unit 11, the line of sight of the visitor can also be directed toward the imaging unit 11. This enables the imaging unit 11 to capture the visitor's face.
The visitor determination unit 454 identifies visitors visiting the exhibition hall 1 from the intra-exhibition-hall image acquired by the in-facility image acquisition unit 451. More specifically, the visitor determination unit 454 extracts a person from the intra-exhibition-hall image and extracts (recognizes) a face image from the extracted person. It then retrieves, from the visitor data stored in the visitor database 443, the visitor data whose face image matches the extracted face image, thereby identifying the visitor. When no visitor data with a matching face image exists, the extracted person's face image is stored in the visitor database 443 as the visitor data of a new visitor.
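As an illustrative sketch (not the patented implementation), the search-then-register flow of the visitor determination unit 454 might look like the following. The `VisitorDatabase` class, the toy cosine similarity on stand-in feature vectors, and the 0.9 matching threshold are all assumptions made for the example:

```python
from dataclasses import dataclass

@dataclass
class VisitorRecord:
    visitor_id: int
    face_feature: tuple  # simplified stand-in for a face feature vector

def similarity(a, b):
    # Toy cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

class VisitorDatabase:
    def __init__(self):
        self.records = []
        self._next_id = 1

    def find_match(self, feature, threshold=0.9):
        # Return the stored record most similar to the extracted feature,
        # provided the similarity reaches the threshold.
        best, best_sim = None, threshold
        for rec in self.records:
            sim = similarity(rec.face_feature, feature)
            if sim >= best_sim:
                best, best_sim = rec, sim
        return best

    def register_new(self, feature):
        # Store the unrecognized face as a new visitor record.
        rec = VisitorRecord(self._next_id, feature)
        self._next_id += 1
        self.records.append(rec)
        return rec

def determine_visitor(db, feature):
    # Mirrors the flow in the text: search first, register on a miss.
    match = db.find_match(feature)
    return match if match is not None else db.register_new(feature)
```

A real system would use a face-recognition model to produce the feature vectors; the point of the sketch is only the lookup-or-register control flow.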
When the visitor determination unit 454 cannot identify a visitor because the face image of the person extracted from the intra-exhibition-hall image cannot be recognized, it outputs a control signal to the robot movement instruction unit 457. When the control signal is input, the robot movement instruction unit 457 instructs the guidance robot 3 to move so that the guidance robot 3 is dispatched to the person. Specifically, the robot movement instruction unit 457 outputs a signal instructing movement (hereinafter referred to as a movement command) to the guidance robot 3 via the communication unit 41 so that the guidance robot 3 is dispatched to the person. Then, the robot image acquisition unit 452 outputs, via the communication unit 41, a signal instructing the guidance robot 3 to capture a face image of the person. The image including the face image captured by the guidance robot 3 is input to the robot image acquisition unit 452 through the communication unit 41, and the visitor determination unit 454 identifies the visitor using that face image in the same manner as described above.
The visitor group determination unit 455 determines visitor groups visiting the exhibition hall 1 from the intra-exhibition-hall image acquired by the in-facility image acquisition unit 451. More specifically, the visitor group determination unit 455 extracts a plurality of visitors from the intra-exhibition-hall image and determines whether those visitors constitute a group based on the spacing between them, the orientation of their faces, and the like. For example, a plurality of visitors who remain within a certain distance of one another for more than a prescribed time are determined to be a visitor group. Likewise, a plurality of visitors talking to each other face to face are determined to be a visitor group. At this time, the visitor group determination unit 455 also determines the number of visitors constituting the visitor group. When only one visitor is extracted from the intra-exhibition-hall image, that visitor may be determined to constitute a group alone; in that case, the visitor group determination unit 455 determines the number of visitors constituting the visitor group to be 1.
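A minimal sketch of the spacing-over-time criterion described above, under the assumption that visitor positions are available as per-frame tracks. The 2 m distance and 5-frame duration thresholds are illustrative, not values from the patent:

```python
def group_visitors(tracks, max_distance=2.0, min_duration=5):
    """Cluster visitor tracks into groups: two visitors are grouped
    when they stay within max_distance of each other for at least
    min_duration consecutive frames. tracks maps visitor id ->
    list of (x, y) positions, one per frame."""
    ids = list(tracks)
    parent = {i: i for i in ids}  # union-find forest over visitor ids

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(a, b):
        parent[find(a)] = find(b)

    for idx, a in enumerate(ids):
        for b in ids[idx + 1:]:
            run = 0
            for pa, pb in zip(tracks[a], tracks[b]):
                d = ((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5
                run = run + 1 if d <= max_distance else 0
                if run >= min_duration:
                    union(a, b)  # stayed close long enough: same group
                    break

    groups = {}
    for i in ids:
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

A lone visitor naturally comes out as a one-member group, matching the text. Face-orientation and conversation cues would be additional grouping signals layered on top of this.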
The visitor group determination unit 455 also determines whether a visitor identified by the visitor determination unit 454 constitutes a visitor group, and determines that visitor group. More specifically, the visitor group determination unit 455 extracts, from the visitor data stored in the visitor database 443, the visitor group information associated with each visitor identified by the visitor determination unit 454, and determines from the extracted visitor group information whether or not each visitor constitutes a visitor group.
The group emotion determining section 456 determines the group emotion of a visitor group by estimating, based on the intra-exhibition-hall image, the emotion of each of the one or more visitors constituting the visitor group determined by the visitor group determination section 455. The group emotion determining section 456 determines, for example, whether the visitor group is a group in a good mood (for example, a group that looks happy) or a group in a bad mood (for example, a group that appears irritated or restless).
The group emotion determining section 456 determines the group emotion based on the proportion of the estimated emotions of the visitors. For example, the group emotion determining section 456 determines a group with a high proportion of visitors in a good mood (a low proportion of visitors in a bad mood) as a group in a good mood, and a group with a high proportion of visitors in a bad mood (a low proportion of visitors in a good mood) as a group in a bad mood. That is, the emotion ratio in the present embodiment means the proportion of each emotion (bad mood, good mood, etc.) within the group.
As shown in fig. 1, for example, when the friend group 101 consisting of three visitors A, B, C and the parent-child group 102 consisting of two visitors D, E visit the exhibition hall 1, the group emotion determining unit 456 determines the group emotions of the friend group 101 and the parent-child group 102 from the intra-exhibition-hall images captured by the plurality of imaging units 11. The group emotion determining unit 456 estimates the emotions of the visitors A to C constituting the friend group 101, and, for example, when the emotions of visitor A and visitor B are estimated to be good and the emotion of visitor C to be bad, determines the group emotion of the friend group 101 to be "good". Similarly, the group emotion determining section 456 estimates the emotions of the visitors D, E constituting the parent-child group 102, and, for example, when the emotion of visitor D (for example, a parent) is estimated to be good and the emotion of visitor E (for example, a child) to be bad, determines the group emotion of the parent-child group 102 to be "bad".
For example, the group emotion may be graded: it is determined to be "good" when the moods of all the visitors constituting the visitor group are good, "somewhat good" when visitors in a good mood are relatively numerous, "normal" when visitors in good and bad moods are present in roughly equal numbers, "somewhat bad" when visitors in a bad mood are relatively numerous, and "bad" when the moods of all the visitors are bad. When a visitor is a child, the group emotion may be determined with extra weight placed on the child's emotion. Alternatively, a visitor group containing even one visitor in a slightly bad mood may be determined to be in a "bad mood".
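The graded scale above can be sketched as a simple ratio rule. The label strings and the equal-count tie-break are illustrative assumptions, and the child-weighting variant is omitted for brevity:

```python
def determine_group_emotion(moods):
    """Five-level group emotion from per-visitor mood estimates.
    moods: list of 'good' / 'bad', one entry per visitor
    (labels are illustrative stand-ins for the estimated emotions)."""
    good = sum(1 for m in moods if m == 'good')
    bad = len(moods) - good
    if bad == 0:
        return 'good'           # everyone is in a good mood
    if good == 0:
        return 'bad'            # everyone is in a bad mood
    if good > bad:
        return 'somewhat good'  # good moods are relatively numerous
    if bad > good:
        return 'somewhat bad'   # bad moods are relatively numerous
    return 'normal'             # roughly equal numbers
```

Applied to the fig. 1 example, visitors A and B good with C bad would grade the friend group as relatively good, while one-of-two bad in the parent-child group would not.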
The emotion of each visitor can be estimated from the visitor's facial expression, behavior, and the like. The group emotion determining section 456 therefore estimates a visitor's emotion based on the visitor's face image, images representing the visitor's movements, and the like, extracted from the intra-exhibition-hall image. For example, when the visitor has an angry expression, or restlessly looks around, the group emotion determining section 456 estimates that the visitor is in a bad mood. Conversely, when the visitor has a happy expression, or is chatting animatedly with companions, the group emotion determining section 456 estimates that the visitor is in a good mood. The storage unit 44 stores images representing a plurality of predetermined actions (leg shaking, lively conversation, etc.), and the group emotion determining unit 456 compares the intra-exhibition-hall image with these action images to extract images representing the visitor's movements.
The group emotion determining section 456 also estimates the emotion of a visitor in the visitor group based on the robot-captured image acquired by the robot image acquisition section 452, and updates the group emotion of the visitor group. For example, when the guidance robot 3 is dispatched to the visitor group and photographs a visitor, for instance because the visitor could not be identified from the intra-exhibition-hall image, the group emotion determining unit 456 estimates the visitor's emotion based on the face image captured by the guidance robot 3 and updates the group emotion based on the estimated emotion.
The robot movement instruction unit 457 instructs the guidance robot 3 to move based on the group emotion determined by the group emotion determination unit 456 so as to dispatch the guidance robot 3 to the visitor group. Specifically, the robot movement instruction unit 457 outputs an instruction to move to the vicinity of the visitor group to the guidance robot 3 via the communication unit 41.
At this time, the robot movement instruction unit 457 gives priority to the visitor group whose group emotion, as determined by the group emotion determination unit 456, is relatively poor, and instructs movement to the vicinity of that visitor group. For example, when there are a visitor group whose group emotion is somewhat bad and a visitor group whose group emotion is worse, the guidance robot 3 is instructed to move to the vicinity of the group with the worse group emotion as the priority group.
When there are a plurality of visitor groups having the same group emotion as determined by the group emotion determining unit 456, the robot movement instruction unit 457 instructs the guidance robot 3 to move to the vicinity of the visitor group with the smaller number of members first. For example, when one of two visitor groups whose group emotion is determined to be "bad" consists of three persons and the other consists of five persons, the three-person group is prioritized.
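The two-level priority rule (worse group emotion first, fewer members as tie-breaker) can be sketched as a single sort key; the emotion labels and their ranking are illustrative assumptions:

```python
# Lower rank means worse group emotion and therefore higher dispatch priority.
EMOTION_RANK = {'bad': 0, 'somewhat bad': 1, 'normal': 2,
                'somewhat good': 3, 'good': 4}

def pick_dispatch_target(groups):
    """groups: list of (group_id, group_emotion, member_count) tuples.
    Returns the group the guidance robot should be dispatched to:
    worst group emotion first; among equal emotions, fewest members first."""
    return min(groups, key=lambda g: (EMOTION_RANK[g[1]], g[2]))
```

The tuple key makes the tie-break explicit: Python compares the emotion rank first and consults the member count only when the ranks are equal.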
Further, when the visitor determination unit 454 determines that a visitor cannot be identified from the intra-exhibition-hall image, the robot movement instruction unit 457 instructs the guidance robot 3 to move so that the guidance robot 3 is dispatched to the vicinity of that visitor. For example, when a signal indicating that the visitor cannot be identified is input from the visitor determination unit 454, the robot movement instruction unit 457 instructs the guidance robot 3 to move so that the guidance robot 3 is dispatched to the vicinity of the person extracted from the intra-exhibition-hall image by the visitor determination unit 454.
At this time, when the person extracted from the intra-exhibition-hall image by the visitor determination unit 454 belongs to a visitor group, the robot movement instruction unit 457 instructs the guidance robot 3 to move so that the guidance robot 3 is dispatched to the vicinity of that visitor group. Then, the robot movement instruction unit 457 instructs the guidance robot 3 to perform an operation that prompts the unidentified visitor to change position or posture so that the visitor can be identified from the intra-exhibition-hall image acquired by the in-facility image acquisition unit 451. More specifically, the robot movement instruction unit 457 instructs the guidance robot 3 to operate so that the unidentified visitor assumes a position or posture facing the direction of the imaging unit 11.
For example, the robot movement instruction unit 457 instructs the guidance robot 3 to move to the vicinity of the unidentified visitor in a state where the imaging unit 11 is located behind the guidance robot 3. That is, the robot movement instruction unit 457 instructs the guidance robot 3 to move along a route that, in plan view, connects the imaging unit 11 and the unidentified visitor. When the unidentified visitor notices the approaching guidance robot 3, the visitor looks in the direction of the guidance robot 3, so the visitor's face can be photographed by the imaging unit 11 located behind the guidance robot 3. In this case, the guidance robot 3 may also be made to perform an operation likely to draw the unidentified visitor's attention, thereby causing a change in the visitor's position or posture.
It is preferable that the robot visual line instruction unit 453 indicate the line-of-sight direction of the pair of virtual eyes 306 of the guidance robot 3 so as to attract the visitor's line of sight, thereby prompting a change in the visitor's position or posture. Specifically, it is preferable to attract the visitor's line of sight toward the imaging unit 11 by moving the line of sight of the pair of virtual eyes 306 in the direction of the imaging unit 11. At this time, the robot movement instruction unit 457 may output to the guidance robot 3 a control signal instructing a rotation operation, a swinging operation, or the like in conjunction with the movement of the line of sight. Because the guidance robot 3 then performs a rotation operation or the like while moving its line of sight, the visitor's line of sight is attracted more easily. The rotation operation is a circling operation about a vertical rotation axis, with a predetermined distance (for example, 1 m) from the visitor as the radius.
Fig. 5 is a flowchart showing an example of the guidance robot dispatch process executed by the computing unit 45 of the server apparatus 4 in fig. 3. The guidance robot dispatch process shown in fig. 5 is started, for example, when the exhibition hall 1 opens, and is performed until the hall closes. When the guidance robot dispatch process starts, intra-exhibition-hall images captured by the plurality of imaging units 11 provided in the exhibition hall 1 are acquired by the processing of the in-facility image acquisition unit 451 (in-facility image acquisition step). The in-facility image acquisition step is performed periodically at predetermined intervals (for example, every 10 ms) while the guidance robot dispatch process is running.
As shown in fig. 5, first, in step S1, a visitor determination process (visitor determination step) of determining a visitor visiting the exhibition hall 1 from the intra-exhibition hall image acquired in the intra-facility image acquisition step is performed.
Fig. 6 is a flowchart showing an example of the visitor determination process executed by the computing unit 45 of the server apparatus 4 in fig. 3. As shown in fig. 6, in the visitor determination process, first, in step S10, the person visiting the facility is extracted from the in-facility image acquired in the in-facility image acquisition step by the processing in the visitor determination unit 454. Next, in step S11, it is determined by the processing in the visitor determination unit 454 whether or not a face image of the extracted person can be extracted (recognized). If step S11 is negative (S11: no), the flow proceeds to step S12, and the robot movement instruction unit 457 instructs the guidance robot 3 to move so that the guidance robot 3 is dispatched to the vicinity of the person whose face image could not be extracted.
Next, in step S13, it is determined by the processing in the visitor determination unit 454 whether or not a face image of that person could be acquired. If step S13 is negative (S13: no), the flow proceeds to step S14, and the robot visual line instruction unit 453 indicates the line-of-sight direction of the pair of virtual eyes 306 of the guidance robot 3 so as to attract the visitor's line of sight and cause a change in the visitor's position or posture (robot visual line instruction step).
Next, in step S15, it is determined by the processing in the visitor determination unit 454 whether or not a face image of that person could be acquired. If step S15 is negative (S15: no), the flow proceeds to step S16, and the robot image acquisition unit 452 acquires a face image of the person by causing the guidance robot 3 to photograph the person's face (robot image acquisition step).
On the other hand, when step S11, step S13, or step S15 is affirmative (yes in step S11, S13, or S15), or when the face image of the previously unidentifiable person is acquired in step S16, the face image of the visitor is extracted (recognized) by the processing in the visitor determination unit 454 in step S17. Next, in step S18, the visitor is identified by the processing in the visitor determination unit 454.
When the visitor determination process (visitor determination step) has been performed, next, in step S2, the visitor group determination unit 455 processes the visitor identified in the visitor determination step. Next, in step S3, it is determined whether or not the identified visitor constitutes a visitor group. When step S3 is negative (no in step S3), the process returns to step S1. On the other hand, when step S3 is affirmative (S3: yes), the process proceeds to step S4, and the visitor group is determined. Steps S2 to S4 are hereinafter referred to as the visitor group determination process (visitor group determination step).
When the visitor group determination process has been executed, next, in step S5, the emotion of each visitor constituting the visitor group is estimated by the processing of the group emotion determining unit 456 (group emotion determination step). Next, in step S6, the group emotion of the visitor group is determined from the estimated emotions of the visitors by the processing of the group emotion determining unit 456 (group emotion determination step). Next, in step S7, the robot movement instruction unit 457 instructs the guidance robot 3 to move based on the group emotion of the visitor group, so that the guidance robot 3 is dispatched to the vicinity of the visitor group (robot movement instruction step).
Next, in step S8, it is determined whether or not it is the closing time of the exhibition hall 1. If step S8 is negative (no in step S8), the routine returns to step S1, and steps S1 to S8 are repeated. On the other hand, when step S8 is affirmative (S8: yes), the process ends.
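Steps S1 to S7 above can be sketched as a single dispatch pass. Every callable is an injected stand-in for the corresponding unit described in the text, and the three-level `grade` function is a simplified placeholder for the group emotion determination; none of the names here come from the patent itself:

```python
RANK = {'bad': 0, 'normal': 1, 'good': 2}  # lower rank = higher priority

def grade(moods):
    # Simplified three-level group emotion (illustrative placeholder).
    good = sum(m == 'good' for m in moods)
    bad = len(moods) - good
    return 'good' if bad == 0 else 'bad' if good == 0 else 'normal'

def dispatch_once(image, identify_visitors, form_groups,
                  estimate_moods, dispatch_robot):
    """One pass over steps S1-S7 of the flowchart in fig. 5."""
    visitors = identify_visitors(image)       # S1: visitor determination
    groups = form_groups(visitors)            # S2-S4: visitor group determination
    if not groups:
        return None                           # S3 negative: back to S1
    scored = [(g, grade(estimate_moods(image, g))) for g in groups]  # S5-S6
    # S7: worst group emotion first; smaller group breaks ties
    target, _ = min(scored, key=lambda ge: (RANK[ge[1]], len(ge[0])))
    dispatch_robot(target)
    return target
```

The outer loop of the flowchart would call `dispatch_once` repeatedly on fresh hall images until closing time (step S8).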
The present embodiment can provide the following effects.
(1) The server device 4 manages a guidance robot 3 capable of traveling autonomously and serving, for example by receiving, visitor groups visiting an exhibition hall 1 of an automobile sales shop. The server device 4 includes: an in-facility image acquisition unit 451 that acquires an intra-exhibition-hall image captured by the imaging unit 11 provided on the ceiling 10 of the exhibition hall 1; a visitor group determination unit 455 that determines a visitor group visiting the exhibition hall 1 based on the intra-exhibition-hall image acquired by the in-facility image acquisition unit 451; a group emotion determining section 456 that estimates, based on the intra-exhibition-hall image, the emotion of each of the one or more visitors constituting the visitor group determined by the visitor group determination section 455, and determines the group emotion of the visitor group; and a robot movement instruction unit 457 that outputs a movement instruction to the guidance robot 3 so that the guidance robot 3 moves to the vicinity of the visitor group, based on the group emotion determined by the group emotion determination unit 456.
With this configuration, a limited number of guidance robots 3 can be used to provide appropriate reception, from the standpoint of the automobile sales shop, for visitor groups visiting the exhibition hall 1. For example, by preferentially dispatching the guidance robot 3 to a visitor group whose group emotion is relatively bad, the guidance robot 3 communicates with the visitor group, thereby improving the mood of the visitor group and suppressing further deterioration. Likewise, by dispatching the guidance robot 3 to a visitor group that has been waiting a long time and is in a bad mood, the guidance robot 3 communicates with the visitor group, so the mood of the visitor group can be improved and further deterioration suppressed. This allows a salesperson (e.g., a sales clerk) or the like of the automobile sales shop to conduct reception smoothly.
Further, when many guidance robots 3 are disposed in the exhibition hall 1, visitors may feel a sense of oppression, and the cost to the automobile sales shop increases. However, by dispatching the guidance robots 3 based on the group emotion of the visitor groups, efficient communication with the visitor groups can be achieved using a limited number of guidance robots 3. For example, by preferentially dispatching the guidance robots 3 to a visitor group whose group emotion is relatively bad, efficient communication with the visitor groups can be achieved with a limited number of guidance robots 3.
Also, the guidance robot 3 can be used to ask a visitor group about its main requirements in advance, or to give the visitor group a simple introduction. For example, the guidance robot 3 can ask in advance about the main requirements of a visitor group that has been waiting a long time, or whose visit will take a long time, or give such a visitor group a simple introduction. Sales personnel (e.g., sales staff) and the like can then provide effective reception.
By using the server device 4 according to the present embodiment, efficient and smooth reception can be performed using a limited number of guidance robots 3.
(2) The group emotion determining section 456 determines the group emotion from the proportion of the emotions of one or more visitors estimated based on the intra-exhibition-hall image. For example, the group emotion determining section 456 determines a group with a high proportion of visitors in a good mood (a low proportion of visitors in a bad mood) as a group in a good mood, and a group with a high proportion of visitors in a bad mood (a low proportion of visitors in a good mood) as a group in a bad mood. Thus, for example, the group emotion can be determined with a simple configuration.
(3) The visitor group determination unit 455 determines the number of persons forming each visitor group visiting the exhibition hall 1 from the intra-exhibition-hall image. When there are a plurality of visitor groups whose group emotions, as determined by the group emotion determining unit 456, are the same, the robot movement instruction unit 457 outputs a movement instruction to the guidance robot 3 so that the guidance robot 3 is dispatched preferentially to the visitor group with the smaller number of members. When the number of persons constituting a visitor group is small, there are fewer conversation partners, so the mood of each visitor is liable to deteriorate and the group emotion is liable to worsen. Therefore, the smaller the visitor group, the more it needs communication with the guidance robot 3. Dispatching the guidance robot 3 preferentially to small visitor groups can thus suppress deterioration of, or improve, the visitors' moods.
(4) The server device 4 further includes a robot image acquisition unit 452, which acquires a robot-captured image in which the guidance robot 3 photographs a visitor constituting the visitor group determined by the visitor group determination unit 455. The group emotion determining section 456 estimates the emotion of the visitor from the robot-captured image acquired by the robot image acquisition section 452, and determines the group emotion of the visitor group determined by the visitor group determination section 455. By causing the guidance robot 3 to photograph the visitor in this way, the visitor's face can be captured accurately, so the visitor's emotion can be estimated reliably. As a result, the group emotion can be determined more reliably. In addition, even when a visitor's emotion changes due to the dispatch of the guidance robot, the group emotion can be determined in accordance with that change. That is, the group emotion can be updated based on changes in the visitors' emotions.
(5) A robot management method for managing a guidance robot 3 capable of walking by itself, which serves a group of visitors visiting an exhibition hall 1 of an automobile sales shop. The robot management method comprises the following steps: an in-facility image acquisition step in which the server device 4 acquires in-booth images captured by the plurality of capturing units 11 provided in the booth 1; a visitor group determination step of determining a visitor group of a visiting facility from the intra-exhibition hall image acquired in the intra-facility image acquisition step; a group emotion determining step of estimating emotion of one or more visitors constituting the visitor group determined in the visitor group determining step from the intra-exhibition image, thereby determining group emotion of the visitor group; and a robot movement instruction step of outputting a movement instruction to the guidance robot 3 so that the guidance robot 3 moves to the vicinity of the visitor group, based on the group emotion determined in the group emotion determination step.
With this method, as with the server device 4, a limited number of guidance robots 3 can be used to realize appropriate reception, from the standpoint of the automobile sales shop, for visitor groups visiting the exhibition hall 1. For example, efficient and smooth reception can be performed with a limited number of guidance robots 3.
(6) The robot management system 100 includes: the server device 4; a plurality of photographing units 11 which can communicate with the server device 4 and are installed in the exhibition hall 1; and a guidance robot 3 which is disposed in the exhibition hall 1, serves a visitor group visiting the exhibition hall 1, can walk by itself, and can communicate with the server apparatus 4. This makes it possible to realize an appropriate reception in the automobile sales shop for the group of visitors visiting the exhibition hall 1 of the automobile sales shop by using the limited guidance robot 3. For example, efficient and smooth reception using the limited guidance robot 3 can be performed.
In the above embodiment, the computing unit 45 of the server apparatus 4 is configured to include the in-facility image acquisition unit 451, the robot image acquisition unit 452, the robot visual line instruction unit 453, the visitor determination unit 454, the visitor group determination unit 455, the group emotion determination unit 456, and the robot movement instruction unit 457, but the configuration is not limited thereto. The computing unit 45 may include the in-facility image acquisition unit 451, the visitor group determination unit 455, the group emotion determination unit 456, and the robot movement instruction unit 457. The computing unit 45 preferably also includes the robot image acquisition unit 452.
In the above embodiment, the group emotion determining section 456 of the computing section 45 of the server apparatus 4 estimates the quality of the emotion of each of the plurality of visitors, and determines the group emotion based on the estimated quality, but the present invention is not limited to this. For example, the group emotion determining unit may estimate happiness, anger, and sadness of each of the plurality of visitors, and determine the group emotion based on the estimated happiness, anger, and sadness.
In the above embodiment, the guidance robot dispatch processing executed by the computing unit 45 of the server apparatus 4 is configured to include an in-facility image acquisition step, a visitor specification step, a visitor group specification step, a group emotion specification step, and a robot movement instruction step, but is not limited thereto. The guidance robot dispatch process may include an intra-facility image acquisition step, a visitor group determination step, a group emotion determination step, and a robot movement instruction step.
In the above embodiment, the configuration in which the robot image acquisition step is included in the visitor specification process has been described, but the present invention is not limited to this. For example, the robot image acquisition step may be included in the guidance robot dispatch process independently of the visitor determination process. In this case, the robot image acquisition step may be performed to update the group emotion.
By adopting the invention, appropriate reception, from the standpoint of the facility, of visitor groups visiting the facility can be realized using a limited number of guidance robots.
While the invention has been described in connection with preferred embodiments, it will be understood by those skilled in the art that various modifications and changes can be made without departing from the scope of the disclosure of the following claims.

Claims (8)

1. A robot management device (4) that manages a self-traveling guidance robot serving a visitor group visiting a facility, the robot management device comprising:
an in-facility image acquisition unit (451) that acquires an in-facility image captured by an imaging unit (11) provided in the facility;
a visitor group determination unit (455) that determines a visitor group visiting the facility based on the in-facility image acquired by the in-facility image acquisition unit (451);
a group emotion determination unit (456) that estimates, based on the in-facility image, the emotion of each of one or more visitors constituting the visitor group determined by the visitor group determination unit (455), and determines a group emotion, which is the emotion of the visitor group as a whole; and
a robot movement instruction unit (457) that outputs a movement instruction to the guidance robot (3) so that the guidance robot (3) moves to the vicinity of the visitor group based on the group emotion determined by the group emotion determination unit, wherein
the group emotion determination unit (456) estimates, based on the in-facility image, the mood of each of the one or more visitors constituting the visitor group determined by the visitor group determination unit (455), and determines the group emotion based on the proportion of visitors in a good mood and visitors in a poor mood within the visitor group.
2. The robot management device according to claim 1, wherein
the group emotion determination unit (456) determines the group emotion based on the proportions of the emotions of the one or more visitors estimated based on the in-facility image.
3. The robot management device according to claim 1, wherein
the group emotion determination unit (456) further determines the group emotion of the visitor group determined by the visitor group determination unit (455) based on at least one of a face image and an action image of a visitor extracted from the in-facility image.
4. The robot management device according to any one of claims 1 to 3, wherein
the robot movement instruction unit (457) outputs a movement instruction to the guidance robot (3) so that the guidance robot (3) is preferentially dispatched to a visitor group whose group emotion is poor when there are a plurality of visitor groups whose group emotions, as determined by the group emotion determination unit (456), differ from each other.
5. The robot management device according to any one of claims 1 to 3, wherein
the visitor group determination unit (455) determines the number of persons constituting each visitor group visiting the facility based on the in-facility image, and
the robot movement instruction unit (457) outputs a movement instruction to the guidance robot (3) so that the guidance robot (3) is preferentially dispatched to a visitor group with a smaller number of persons when there are a plurality of visitor groups whose group emotions, as determined by the group emotion determination unit (456), are the same.
6. The robot management device according to any one of claims 1 to 3, further comprising
a robot image acquisition unit (452) that acquires a robot-captured image obtained by the guidance robot (3) capturing a visitor constituting the visitor group determined by the visitor group determination unit (455), wherein
the group emotion determination unit (456) estimates the emotion of the visitor based on the robot-captured image acquired by the robot image acquisition unit (452), and determines the group emotion of the visitor group determined by the visitor group determination unit (455).
7. A robot management method for managing a self-traveling guidance robot (3) that serves a visitor group visiting a facility, the method comprising:
an in-facility image acquisition step of acquiring an in-facility image captured by an imaging unit (11) provided in the facility;
a visitor group determination step of determining a visitor group visiting the facility based on the in-facility image acquired in the in-facility image acquisition step;
a group emotion determination step of estimating, based on the in-facility image, the emotion of each of one or more visitors constituting the visitor group determined in the visitor group determination step, and determining a group emotion, which is the emotion of the visitor group as a whole; and
a robot movement instruction step of outputting a movement instruction to the guidance robot (3) so that the guidance robot (3) moves to the vicinity of the visitor group based on the group emotion determined in the group emotion determination step, wherein
in the group emotion determination step, the mood of each of the one or more visitors constituting the visitor group determined in the visitor group determination step is estimated based on the in-facility image, and the group emotion is determined based on the proportion of visitors in a good mood and visitors in a poor mood within the visitor group.
8. A robot management system comprising:
the robot management device (4) according to any one of claims 1 to 6;
an imaging unit (11) provided in a facility and capable of communicating with the robot management device (4); and
a guidance robot (3) that is disposed in the facility, serves a visitor group visiting the facility, is capable of traveling by itself, and is capable of communicating with the robot management device (4).
CN202010642903.5A 2019-07-17 2020-07-06 Robot management device, robot management method, and robot management system Active CN112238454B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019132083A JP7273637B2 (en) 2019-07-17 2019-07-17 ROBOT MANAGEMENT DEVICE, ROBOT MANAGEMENT METHOD AND ROBOT MANAGEMENT SYSTEM
JP2019-132083 2019-07-17

Publications (2)

Publication Number Publication Date
CN112238454A (en) 2021-01-19
CN112238454B (en) 2024-03-01

Family

ID=74170858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010642903.5A Active CN112238454B (en) 2019-07-17 2020-07-06 Robot management device, robot management method, and robot management system

Country Status (2)

Country Link
JP (1) JP7273637B2 (en)
CN (1) CN112238454B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007043712A1 (en) * 2005-10-14 2007-04-19 Nagasaki University Emotion evaluating method and emotion indicating method, and program, recording medium, and system for the methods
JP2011186521A (en) * 2010-03-04 2011-09-22 Nec Corp Emotion estimation device and emotion estimation method
JP2017159396A (en) * 2016-03-09 2017-09-14 大日本印刷株式会社 Guide robot control system, program, and guide robot
KR20180054407A (en) * 2016-11-15 2018-05-24 주식회사 로보러스 Apparatus for recognizing user emotion and method thereof, and robot system using the same
CN109318239A (en) * 2018-10-09 2019-02-12 深圳市三宝创新智能有限公司 A kind of hospital guide robot, hospital guide's method and device
JP2019066700A (en) * 2017-10-02 2019-04-25 株式会社ぐるなび Control method, information processing apparatus, and control program

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008260107A (en) * 2007-04-13 2008-10-30 Yaskawa Electric Corp Mobile robot system
US9517559B2 (en) * 2013-09-27 2016-12-13 Honda Motor Co., Ltd. Robot control system, robot control method and output control method
JP6328580B2 (en) * 2014-06-05 2018-05-23 Cocoro Sb株式会社 Behavior control system and program
JP6905812B2 (en) * 2016-06-14 2021-07-21 グローリー株式会社 Store reception system
JP6774018B2 (en) * 2016-09-15 2020-10-21 富士ゼロックス株式会社 Dialogue device
JP6965525B2 (en) * 2017-02-24 2021-11-10 沖電気工業株式会社 Emotion estimation server device, emotion estimation method, presentation device and emotion estimation system
JP6572943B2 (en) * 2017-06-23 2019-09-11 カシオ計算機株式会社 Robot, robot control method and program


Also Published As

Publication number Publication date
JP2021018481A (en) 2021-02-15
CN112238454A (en) 2021-01-19
JP7273637B2 (en) 2023-05-15

Similar Documents

Publication Publication Date Title
US11307593B2 (en) Artificial intelligence device for guiding arrangement location of air cleaning device and operating method thereof
US9922236B2 (en) Wearable eyeglasses for providing social and environmental awareness
US7584158B2 (en) User support apparatus
US20200008639A1 (en) Artificial intelligence monitoring device and method of operating the same
US11330951B2 (en) Robot cleaner and method of operating the same
WO2016019265A1 (en) Wearable earpiece for providing social and environmental awareness
KR20190100085A (en) Robor being capable of detecting danger situation using artificial intelligence and operating method thereof
US11772274B2 (en) Guide robot control device, guidance system using same, and guide robot control method
US10872438B2 (en) Artificial intelligence device capable of being controlled according to user's gaze and method of operating the same
JPWO2020129309A1 (en) Guidance robot control device, guidance system using it, and guidance robot control method
CN112238458B (en) Robot management device, robot management method, and robot management system
US11455529B2 (en) Artificial intelligence server for controlling a plurality of robots based on guidance urgency
JP2018169787A (en) Autonomous mobility system and control method of autonomous mobility
CN112238454B (en) Robot management device, robot management method, and robot management system
JP7038232B2 (en) Guidance robot control device, guidance system using it, and guidance robot control method
US11994875B2 (en) Control device, control method, and control system
US20220297308A1 (en) Control device, control method, and control system
JP6739017B1 (en) Tourism support device, robot equipped with the device, tourism support system, and tourism support method
JP2021108072A (en) Recommendation system, recommendation method, and program
US20220300982A1 (en) Customer service system, server, control method, and storage medium
US20220301029A1 (en) Information delivery system, information delivery method, and storage medium
JP7226233B2 (en) Vehicle, information processing system, program and control method
US20210268645A1 (en) Control apparatus, control method, and program
JP2023143847A (en) System, information processing apparatus, vehicle, and method
JP2024090075A (en) Vehicle dispatch system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant