WO2024157705A1 - Presentation system, presentation method, and program - Google Patents

Presentation system, presentation method, and program

Info

Publication number
WO2024157705A1
Authority
WO
WIPO (PCT)
Prior art keywords
worker
contact position
presentation
recognize
unit
Application number
PCT/JP2023/046432
Other languages
French (fr)
Japanese (ja)
Inventor
莉奈 赤穗
元貴 吉岡
Original Assignee
Panasonic IP Management Co., Ltd.
Application filed by Panasonic IP Management Co., Ltd.
Publication of WO2024157705A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00: Programme-control systems
    • G05B19/02: Programme-control systems electric
    • G05B19/42: Recording and playback systems, i.e. in which the programme is recorded from a cycle of operations, e.g. the cycle of operations being manually controlled, after which this record is played back on the same machine

Definitions

  • This disclosure relates to a presentation system, a presentation method, and a program that present the contact position of a robot performing work on a work object.
  • Robots are used to perform tasks on work objects. For example, there are cases where a person works in collaboration with a robot, and there are cases where a person operates a robot and teaches it a specific task in order to have the robot perform that task.
  • Patent document 1 describes a device that intuitively shows a robot's movements and work content to people in a work environment, enabling them to share work information with the robot and work in cooperation without fear of the robot's movements.
  • Patent document 2 describes a robot that allows a person to visually recognize a danger zone that changes as the robot's movements change.
  • For example, displaying the contact position (the actual position where the robot plans to make contact) makes it possible to reduce the accumulation of anxiety and stress in people during collaborative work.
  • In teaching work, the robot's contact position is displayed, allowing the person to move the robot while referring to the displayed contact position. This simplifies the teaching work and judgment, and makes it possible to create teaching data that does not depend on the individual worker. In this way, displaying the robot's contact position makes it possible to support people.
  • To address this, the present disclosure provides a presentation system, a presentation method, and a program that make it easier for workers to recognize the robot's contact position.
  • a presentation system includes: a contact position acquisition unit that acquires contact position information indicating the contact position between a robot working on a work object and the work object; a status acquisition unit that acquires the status of a worker working on the work object; a recognition determination unit that determines whether the worker can recognize the contact position based on the acquired status of the worker and the contact position information; a presentation mode selection unit that selects a presentation mode for presenting the contact position based on the contact position information and the determination result of the recognition determination unit; and a presentation unit that presents the contact position according to the selected presentation mode.
  • the presentation method includes: a contact position acquisition step of acquiring contact position information indicating the contact position between a robot performing work on a work object and the work object; a status acquisition step of acquiring the status of a worker performing work on the work object; a recognition determination step of determining whether or not the worker can recognize the contact position based on the acquired status of the worker and the contact position information; a presentation mode selection step of selecting a presentation mode for presenting the contact position based on the contact position information and the determination result in the recognition determination step; and a presentation step of presenting the contact position according to the selected presentation mode.
  • a program according to one aspect of the present disclosure is a program for causing a computer to execute the above-mentioned presentation method.
  • the presentation system, presentation method, and program according to one embodiment of the present disclosure make it easier for workers to recognize the contact position of the robot.
  • FIG. 1 is a block diagram showing an example of a presentation system according to an embodiment.
  • FIG. 2 is a diagram illustrating an application example of the presentation system according to the embodiment.
  • FIG. 3 is a flowchart illustrating an example of an operation of the presentation system according to the embodiment.
  • FIG. 4 is a diagram for explaining the line-of-sight range of a worker.
  • FIG. 5A is a diagram showing an example of a state in which the contact position is not included in the line-of-sight range of the worker.
  • FIG. 5B is a diagram showing an example of a state in which the contact position is not included in the line-of-sight range of the worker.
  • FIG. 5C is a diagram showing an example of a state in which the contact position is not included in the line-of-sight range of the worker.
  • FIG. 6 is a flowchart illustrating another example of the operation of the presentation system according to the embodiment.
  • FIG. 7 is a diagram illustrating another application example of the presentation system according to the embodiment.
  • FIG. 8 is a diagram showing an example of a camera image of a worker captured from above the worker's head.
  • FIG. 9 is a diagram illustrating another application example of the presentation system according to the embodiment.
  • FIG. 10A is a diagram illustrating an example of a camera worn by a worker.
  • FIG. 10B is a diagram showing another example of a camera worn by a worker.
  • FIG. 11 is a diagram showing an example of a camera image of a work target captured by a camera worn by a worker.
  • FIG. 12A is a diagram showing an example of a facial expression of a worker.
  • FIG. 12B is a diagram showing another example of a facial expression of a worker.
  • FIG. 13A is a diagram showing an example of presentation according to a newly selected presentation mode.
  • FIG. 13B is a diagram showing another example of presentation according to a newly selected presentation mode.
  • FIG. 14 is a block diagram showing an example of a presentation system according to a first modified example of the embodiment.
  • FIG. 15 is a block diagram showing an example of a presentation system according to a second modified example of the embodiment.
  • FIG. 16 is a block diagram showing an example of a presentation system according to a third modified example of the embodiment.
  • FIG. 17 is a diagram showing an example of presentation according to a presentation mode selected based on the proficiency level of a worker.
  • FIG. 1 is a block diagram showing an example of a presentation system 10 according to an embodiment.
  • the presentation system 10 is a system for presenting the contact position of a robot performing work on a work object to a worker performing work on the work object.
  • the robot is a manipulator, and the robot comes into contact with the work object to perform work such as grasping.
  • the contact position becomes the grasping position.
  • the presentation system 10 includes a contact position acquisition unit 11, a status acquisition unit 12, a recognition determination unit 13, a presentation mode selection unit 14, and a presentation unit 15.
  • the presentation system 10 includes a computer including a processor and a memory.
  • the memory includes, for example, a ROM (Read Only Memory) and a RAM (Random Access Memory), and can store programs executed by the processor.
  • the contact position acquisition unit 11, the status acquisition unit 12, the recognition determination unit 13, the presentation mode selection unit 14, and the presentation unit 15 are realized by a processor that executes programs stored in the memory, etc.
  • the presentation system 10 may be a computer in a single housing, a system made up of multiple computers, or a device equipped with one or multiple computers.
  • the presentation system 10 may be mounted on a robot.
  • the presentation system 10 may be a server. Note that the components of the presentation system 10 may be located on one server, or may be distributed across multiple servers.
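  • For illustration only, the five units described above could be wired together in software roughly as in the following sketch. All class, function, and field names here are our own assumptions, not taken from the disclosure:

```python
# Illustrative sketch of the presentation system 10 pipeline (names assumed).
from dataclasses import dataclass

@dataclass
class ContactInfo:
    position: tuple            # (x, y, z) contact position 50, robot coordinate system
    pressure: float            # expected pressure at contact
    seconds_to_contact: float  # time remaining until contact

@dataclass
class WorkerState:
    head_position: tuple       # (x, y, z) of the worker's head
    facing_direction: tuple    # unit vector of the direction the worker faces

def determine_recognition(state: WorkerState, contact: ContactInfo) -> bool:
    """Placeholder for the recognition determination unit 13 (a concrete
    line-of-sight range check is sketched later in this document)."""
    return False

def select_mode(contact: ContactInfo, recognizable: bool) -> dict:
    """Placeholder for the presentation mode selection unit 14."""
    return {"size": 1.0 if recognizable else 2.0,
            "blink_hz": 0.0 if recognizable else 2.0}

def present(contact: ContactInfo, mode: dict) -> None:
    """Placeholder for the presentation unit 15, e.g. driving the projector 22."""
    print(f"project a circle of relative size {mode['size']} at {contact.position}")

def presentation_cycle(contact: ContactInfo, state: WorkerState) -> None:
    # Steps S13 to S15 of FIG. 3; acquisition (S11, S12) is assumed done upstream.
    recognizable = determine_recognition(state, contact)
    present(contact, select_mode(contact, recognizable))
```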
  • the contact position acquisition unit 11 acquires contact position information indicating the contact position of the robot with the work object. For example, the contact position acquisition unit 11 acquires information indicating the contact position of the robot with the work object from the robot. In this case, the contact position acquisition unit 11 may also acquire information such as the type of work object, the speed at which the robot approaches the contact position, or the pressure when the robot contacts the contact position.
  • the contact position acquisition unit 11 may be a component with a processor, or a device with a processor.
  • the contact position acquisition unit 11 may also be a computer including a processor and memory, or a device including a processor and memory.
  • the contact position acquisition unit 11 may also be equipped with a camera or a sensor, and the camera or sensor may acquire information such as the contact position of the robot and the pressure when the robot contacts the contact position.
  • the contact position acquisition unit 11 may also be a device with a computer, for example a sensor device with a built-in computer including a processor and memory.
  • the status acquisition unit 12 acquires the status of a worker performing work on a work object. For example, the status acquisition unit 12 acquires the status of a worker by acquiring video images of the worker from a camera that captures the worker. Details of the operation of the status acquisition unit 12 will be described later.
  • the status acquisition unit 12 may be a component with a processor, or a device with a processor.
  • the status acquisition unit 12 may also be a computer including a processor and memory, or a device including a processor and memory.
  • the status acquisition unit 12 may also be equipped with an imaging device such as a camera that captures an image of the worker, or may also be equipped with a sensor.
  • the status acquisition unit 12 may also be a device equipped with a computer, for example an imaging device with a built-in computer including a processor and memory.
  • the recognition determination unit 13 determines whether or not the worker can recognize the contact position based on the acquired worker's state and contact position information. Details of the operation of the recognition determination unit 13 will be described later.
  • the recognition determination unit 13 may be a component with a processor, or a device with a processor.
  • the recognition determination unit 13 may also be a computer including a processor and memory, or a device including a processor and memory.
  • the recognition determination unit 13 may also be a device with a computer.
  • the presentation mode selection unit 14 selects a presentation mode for presenting the contact position based on the contact position information and the determination result of the recognition determination unit 13. Details of the operation of the presentation mode selection unit 14 will be described later.
  • the presentation mode selection unit 14 may be a component having a processor, or a device having a processor.
  • the presentation mode selection unit 14 may also be a computer including a processor and memory, or a device including a processor and memory.
  • the presentation mode selection unit 14 may also be a device including a computer.
  • the presentation unit 15 presents the contact position according to the selected presentation mode. Details of the operation of the presentation unit 15 will be described later.
  • the presentation unit 15 may be a component with a processor, or a device with a processor.
  • the presentation unit 15 may also be a computer including a processor and memory, or a device including a processor and memory.
  • the presentation unit 15 may also be a device with a presentation function, such as a projector.
  • the presentation unit 15 may also be provided in a device with a presentation function, and may be a device built into a projector or speaker, for example.
  • FIG. 2 is a diagram showing an application example of the presentation system 10 according to the embodiment.
  • the presentation system 10 is applied when the worker 30 works with the robot 21 to perform work on the work target 40.
  • the presentation system 10 is a system consisting of the computer 24 and the computer 25.
  • the contact position acquisition unit 11, the presentation mode selection unit 14, and the presentation unit 15 are realized by the computer 24, and the state acquisition unit 12 and the recognition determination unit 13 are realized by the computer 25.
  • the presentation system 10 may include at least one of the robot 21, the projector 22, and the camera 23. In the example shown in FIG. 2, the presentation system 10 includes all of the robot 21, the projector 22, and the camera 23.
  • the computer 24 is connected to the robot 21 and the projector 22 via the cable 34a and the cable 34b, respectively.
  • the computer 25 is connected to the camera 23 via the cable 34c.
  • the computer 24 and the computer 25 are also connected to each other via the cable 34d.
  • the cables 34a-d may be, for example, LAN (Local Area Network) cables or USB (Universal Serial Bus) cables.
  • the connection between the computer 24 and the computer 25, the connection between the computer 24 and the robot 21 and the projector 22, and the connection between the computer 25 and the camera 23 may be made using wireless technology such as wireless LAN or Bluetooth (registered trademark).
  • the presentation unit 15 presents the contact position 50 of the robot 21 with the work target 40 via the projector 22.
  • the presentation mode selection unit 14 selects a display mode for displaying the contact position 50 as the presentation mode based on the contact position information and the determination result of the recognition determination unit 13, and the presentation unit 15 uses the projector 22 to perform a display on the work target 40 according to the selected display mode.
  • the line of sight range 31 of the worker 30 is shown, but the details of the line of sight range 31 will be described later.
  • the contact position 50 is shown in a circular display mode, but this will be described later.
  • FIG. 3 is a flowchart showing an example of the operation of the presentation system 10 according to the embodiment. Note that FIG. 3 illustrates an example in which it is determined whether the worker 30 can recognize the contact position 50 before the contact position 50 is presented.
  • the contact position acquisition unit 11 acquires contact position information indicating the contact position 50 between the robot 21 performing work on the work object 40 and the work object 40 (step S11), and the status acquisition unit 12 acquires the status of the worker 30 performing work on the work object 40 (step S12).
  • the status acquisition unit 12 acquires a camera image of the worker 30 captured by the camera 23 as the status of the worker 30.
  • the camera image of the worker 30 includes line-of-sight information of the worker 30, and the recognition determination unit 13 estimates the line-of-sight range 31 of the worker 30 from the line-of-sight information.
  • the line-of-sight range 31 of the worker 30 will be described with reference to FIG. 4.
  • FIG. 4 is a diagram for explaining the line-of-sight range 31 of the worker 30. Note that, as will be described in detail later, FIG. 4 is also a diagram showing an example of a state in which the contact position 50 is included in the line-of-sight range 31 of the worker 30.
  • the line of sight range 31 is a range determined by the field of view angle 32 and the viewing distance 33 of the worker 30.
  • the field of view angle 32 and the viewing distance 33 vary from person to person, but for example, the field of view angle 32 and the viewing distance 33 are predetermined to be the field of view angle and the viewing distance of a standard person.
  • the recognition determination unit 13 estimates the line of sight range 31 based on the field of view angle 32 centered on the direction in which the worker 30 is facing in the camera image and the viewing distance 33 in that direction. Note that the line of sight range 31 of the worker 30 may be estimated by using an eye tracker.
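  • A minimal sketch of this check, under the assumption that the line-of-sight range 31 is modeled as a cone given by the field of view angle 32 and the viewing distance 33 (the function name, parameter names, and default values are ours, not from the disclosure):

```python
import math

def in_line_of_sight_range(head, facing, point, fov_deg=60.0, view_dist=1.5):
    """Return True if `point` lies inside the cone defined by the worker's
    head position, facing direction, field of view angle, and viewing
    distance. `head`, `facing`, `point` are 3-tuples; defaults are illustrative."""
    to_point = tuple(p - h for p, h in zip(point, head))
    dist = math.sqrt(sum(c * c for c in to_point))
    if dist == 0.0:
        return True                  # point coincides with the head position
    if dist > view_dist:
        return False                 # beyond the viewing distance (cf. FIG. 5B)
    norm_f = math.sqrt(sum(c * c for c in facing))
    cos_angle = sum(f * t for f, t in zip(facing, to_point)) / (norm_f * dist)
    # Inside the cone if the angle from the facing direction is at most half
    # of the field of view angle (outside the angle corresponds to FIG. 5A).
    return cos_angle >= math.cos(math.radians(fov_deg / 2.0))
```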
  • the recognition determination unit 13 determines whether or not the worker 30 can recognize the contact position 50 based on the acquired state of the worker 30 and the contact position information (step S13).
  • the recognition determination unit 13 determines whether or not the worker 30 can recognize the contact position 50 if some kind of display or the like is performed in the future on the contact position 50 indicated by the contact position information.
  • the recognition determination unit 13 determines whether or not the worker 30 can recognize the contact position 50 by determining whether or not the contact position 50 is included in the line of sight range 31. This will be explained using FIG. 4 and FIG. 5A to FIG. 5C.
  • FIG. 4 is a diagram showing an example of a state in which the contact position 50 is included in the line of sight 31 of the worker 30, and each of FIGS. 5A to 5C is a diagram showing an example of a state in which the contact position 50 is not included in the line of sight 31 of the worker 30.
  • As shown in FIG. 4, when the contact position 50 is within the line-of-sight range 31, the recognition determination unit 13 determines that the worker 30 can recognize the contact position 50.
  • For example, if the display indicating the contact position 50 is small or the contrast between the display and the surroundings is low, the angle indicating the field of view angle 32 will be small or the viewing distance 33 will be short. In this case, the angle indicating the field of view angle 32 may be small and the viewing distance 33 may be short. Conversely, if the worker 30 is in a state where he or she can adequately recognize the contact position 50, for example, the display mode indicating the contact position 50 can be made small.
  • If the display mode indicating the contact position 50 is large or the contrast between the display mode and the surroundings is high, the angle indicating the field of view angle 32 will be large or the viewing distance 33 will be long. In this case, the angle indicating the field of view angle 32 may be large and the viewing distance 33 may be long.
  • Therefore, the display mode indicating the contact position 50 may be made large or the contrast between the display mode and the surroundings may be increased, so that the worker 30 can fully recognize the contact position 50.
  • As shown in FIG. 5A, when the contact position 50 is outside the field of view angle 32, the recognition determination unit 13 determines that the contact position 50 is not included in the line of sight range 31, and determines that the worker 30 cannot recognize the contact position 50. Also, as shown in FIG. 5B, even if the contact position 50 is within the field of view angle 32, when the contact position 50 is beyond the viewing distance 33, the recognition determination unit 13 determines that the contact position 50 is not included in the line of sight range 31, and determines that the worker 30 cannot recognize the contact position 50. Also, as shown in FIG. 5C, when the contact position 50 is in a blind spot of the worker 30, the recognition determination unit 13 determines that the contact position 50 is not included in the line of sight range 31, and determines that the worker 30 cannot recognize the contact position 50. For example, whether or not the contact position 50 is in a blind spot of the worker 30 is determined based on the positional relationship between the work object 40, the worker 30, and the contact position 50 shown in the camera image.
  • the contact position 50 is position information obtained from the robot 21, and has a different coordinate system from the line of sight 31 estimated from the direction in which the worker is facing as seen in the camera image of the camera 23. Therefore, first, the recognition determination unit 13 performs coordinate conversion so as to match the coordinate system of the contact position 50 with the coordinate system of the line of sight 31.
  • the recognition determination unit 13 may perform coordinate conversion from the coordinate system of the contact position 50 to the coordinate system of the line of sight 31, or may perform coordinate conversion from the coordinate system of the line of sight 31 to the coordinate system of the contact position 50.
  • Alternatively, the recognition determination unit 13 may convert both the coordinate system of the contact position 50 and the coordinate system of the line of sight 31 to the same coordinate system, such as a world coordinate system. By performing the coordinate conversion, it becomes possible to compare the contact position 50 with the line of sight 31, and the recognition determination unit 13 can determine whether or not the contact position 50 is included in the line of sight 31.
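  • A minimal sketch of such a coordinate conversion, assuming the robot-to-camera transform is known from extrinsic calibration as a rotation matrix and translation vector (the numeric values below are placeholders, not calibration data from the disclosure):

```python
# Convert the contact position from the robot coordinate system into the
# camera coordinate system using a calibrated rigid transform.
R = [[1.0, 0.0, 0.0],   # 3x3 rotation, robot frame -> camera frame (placeholder)
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
t = [0.5, -0.2, 1.0]    # translation in metres (placeholder)

def robot_to_camera(p_robot):
    """Apply p_camera = R * p_robot + t componentwise."""
    return tuple(
        sum(R[i][j] * p_robot[j] for j in range(3)) + t[i]
        for i in range(3)
    )

contact_in_camera = robot_to_camera((0.1, 0.2, 0.0))
# The converted point can now be compared against the line-of-sight range
# estimated in the camera's coordinate system.
```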
  • the presentation mode selection unit 14 selects a presentation mode for presenting the contact position 50 based on the contact position information and the determination result of the recognition determination unit 13 (step S14). For example, the presentation mode selection unit 14 selects what type of presentation to perform for the contact position 50 indicated by the contact position information based on the determination result of the recognition determination unit 13. For example, when it is determined that the worker 30 cannot recognize the contact position 50, the presentation mode selection unit 14 selects a presentation mode that makes it easier for the worker 30 to recognize the contact position 50.
  • the presentation mode selection unit 14 selects a display mode that differs in at least one of the display hue, the display color intensity (saturation), the display blinking frequency, and the display size, compared to the display mode selected when it is determined that the worker 30 can recognize the contact position 50.
  • the presentation mode selection unit 14 selects a display mode in which the color of the display is easily recognized by the worker 30 (e.g., a color different from that of the work object 40), selects a display mode in which the color of the display is highly intense, selects a display mode in which the display blinks frequently, or selects a display mode in which the size of the display is large.
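  • For illustration, the selection logic just described could look like the following sketch (the mode fields, default values, and thresholds are assumptions, not specified in the disclosure):

```python
def select_display_mode(recognizable: bool, work_object_color: str) -> dict:
    """Sketch of the display choice made by the presentation mode selection
    unit 14: pick a hue distinct from the work object, and make the display
    more salient when the worker is judged unable to recognize it."""
    mode = {"color": "green" if work_object_color == "red" else "red",
            "saturation": 0.6, "blink_hz": 0.0, "size": 1.0}
    if not recognizable:
        # Easier to notice: higher saturation, blinking, and a larger mark.
        mode.update(saturation=1.0, blink_hz=2.0, size=2.0)
    return mode
```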
  • the display form showing the contact position 50 is a circle projected near the worker 30 by the projector 22.
  • the size and color of this circle are set so that the worker 30 can recognize it.
  • the display form may be a mark other than a circle, such as an oval, a triangle, a rectangle, a polygon with five or more sides, a star, a cross or an x (x mark), a mesh, dots or contour lines, letters or arrows, or a heat map.
  • the display form may be in a variety of colors, such as red, yellow, blue, green, white, purple, etc.
  • the display form indicating the contact position 50 may be a mark indicating a danger area.
  • the presentation mode selection unit 14 may select a presentation mode for presenting the additional information.
  • information that can be expressed numerically may be displayed as letters (numbers).
  • the displayed color may change depending on the pressure of the contact. For example, a contact position 50 that is contacted with a strong force may be displayed in red, and a contact position 50 that is contacted with a weak force may be displayed in blue.
  • the size of the display may change depending on the time until contact.
  • for example, the size of the display may gradually decrease as the time of contact approaches.
  • a contour line may be displayed that is highest at the contact position 50 and becomes lower the further away from the contact position 50, or dots or meshes may be displayed that become denser the closer to the contact position 50.
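  • As a sketch of the additional-information mapping described above (the pressure threshold and the 10-second horizon are illustrative assumptions):

```python
def style_for_additional_info(pressure: float, seconds_to_contact: float) -> dict:
    """Map contact pressure to display color and time-to-contact to size,
    following the examples above: strong force -> red, weak force -> blue,
    and the mark shrinks as the moment of contact approaches."""
    color = "red" if pressure > 5.0 else "blue"
    size = max(0.2, min(1.0, seconds_to_contact / 10.0))  # clamp to [0.2, 1.0]
    return {"color": color, "size": size}
```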
  • the presentation mode selection unit 14 may also instruct the presentation unit 15 to output a sound such as a warning sound or warning voice while selecting the same display mode as the display mode that would be selected if it was determined that the worker 30 can recognize the contact position 50. This makes it easier for the worker 30 who hears the sound such as the warning sound or warning voice to recognize the contact position 50.
  • the presentation mode selection unit 14 may also select a display mode that displays a color according to the color of the work object 40. For example, if the work object 40 is a red object, a red display has low visibility even when displayed on the work object 40, and the worker 30 may have difficulty noticing it. Therefore, for example, a display based on green, the complementary color of red, may be performed.
  • the presentation unit 15 presents the contact position 50 according to the selected presentation mode (step S15).
  • the presentation unit 15 causes the projector 22 to display light on the contact position 50 or to output sound from a speaker (not shown) or the like.
  • the presentation mode selection unit 14 selects a presentation mode that makes it easier for the worker 30 to recognize the contact position 50.
  • the presentation unit 15 presents the contact position 50 according to the selected presentation mode, making it easier for the worker 30 to recognize the contact position 50 of the robot 21. For example, a warning using sound or voice may be considered as a presentation mode.
  • when a contact position 50 is determined to be unrecognizable to the worker 30, it may be determined that there is no need to present the contact position 50, and the contact position 50 may not be presented.
  • FIG. 6 is a flowchart showing another example of the operation of the presentation system 10 according to the embodiment.
  • the contact position acquisition unit 11 acquires contact position information indicating the contact position 50 between the robot 21 performing work on the work object 40 and the work object 40 (step S21).
  • the presentation mode selection unit 14 selects a presentation mode for presenting the contact position 50 based on the contact position information (step S22).
  • the presentation mode selection unit 14 selects a predetermined presentation mode for the contact position 50 indicated by the contact position information. For example, when the presentation mode is display, the presentation mode selection unit 14 selects a display mode of a predetermined display color, a display color intensity, a display blinking frequency, or a display size. Also, for example, when the presentation is sound output, the presentation mode selection unit 14 selects a presentation mode of a predetermined sound output.
  • the output of the predetermined sound may be the output of a sound indicating the contact position 50, or may be a voice output such as "Grab the sole of the shoe.”
  • the presentation unit 15 presents the contact position 50 according to the selected presentation mode (step S23). For example, the presentation unit 15 causes the projector 22 to display light on the contact position 50 or causes a speaker (not shown) to output sound.
  • the state acquisition unit 12 acquires the state of the worker 30 performing work on the work object 40 (step S24). Specifically, the state of the worker 30 in a state in which the contact position 50 has been presented is acquired. Since the contact position 50 has been presented, if the worker 30 notices the presentation of the contact position 50, the state of the worker 30 becomes a state corresponding to the presentation of the contact position 50.
  • the state acquisition unit 12 acquires a camera image of the worker 30 obtained by the camera 23 as the state of the worker 30.
  • the camera image of the worker 30 includes line-of-sight information of the worker 30, and the recognition determination unit 13 estimates the line-of-sight range 31 of the worker 30 from the line-of-sight information.
  • the recognition determination unit 13 determines whether or not the worker 30 can recognize the contact position 50 based on the acquired state of the worker 30 and the contact position information (step S25).
  • the recognition determination unit 13 determines whether or not the worker 30 recognizes the contact position 50 based on the acquired state of the worker 30 and the contact position information. For example, the recognition determination unit 13 determines whether or not the worker 30 recognizes the contact position 50 by determining whether or not the contact position 50 is included in the line of sight range 31.
  • the presentation mode selection unit 14 selects a presentation mode for presenting the contact position 50 based on the contact position information and the determination result of the recognition determination unit 13 (step S26). As in step S15 of FIG. 3, the presentation unit 15 presents the contact position 50 according to the selected presentation mode (step S27). Note that when the presentation is a sound output, if the presentation mode selection unit 14 determines that the worker 30 cannot recognize the contact position 50, it selects a presentation mode with a different volume or announcement content compared to the presentation mode selected when it is determined that the worker 30 can recognize the contact position 50. For example, the presentation mode selection unit 14 selects a presentation mode with a higher volume or a presentation mode that announces the detailed location of the contact position 50.
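  • The present-then-verify flow of FIG. 6 could be sketched as follows (a minimal sketch; the callback parameters, mode fields, and values are our assumptions):

```python
def present_then_verify(contact_pos, present, get_state, can_recognize):
    """Sketch of the flow of FIG. 6: present the contact position first,
    then check whether the worker recognized it and, if not, re-present
    in a more noticeable mode (e.g. larger display, higher volume)."""
    mode = {"size": 1.0, "volume": 0.5}        # step S22: default mode
    present(contact_pos, mode)                 # step S23
    state = get_state()                        # step S24
    if not can_recognize(state, contact_pos):  # step S25
        mode.update(size=2.0, volume=1.0)      # steps S26/S27: easier to notice
        present(contact_pos, mode)
```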
  • FIG. 7 shows another application example of the presentation system 10 according to the embodiment.
  • the camera 23 may be positioned above the worker 30.
  • FIG. 8 shows an example of a camera image of the worker 30 captured from above the worker's head.
  • the state acquisition unit 12 can easily acquire the head angle information of the worker 30 as the line of sight information of the worker 30 from the camera image as shown in FIG. 8. Therefore, the recognition determination unit 13 may estimate the line of sight range 31 of the worker 30 based on the head angle information of the worker 30 as the line of sight information of the worker 30.
  • the recognition determination unit 13 can easily estimate the direction in which the worker 30 is facing from the head angle information of the worker 30, and can easily estimate the line of sight range 31 from the field of view angle 32 centered on the direction in which the worker 30 is facing and the viewing distance 33 to that direction.
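  • A sketch of this estimation, under the simplifying assumption that only the head yaw angle observed in the overhead image is used (pitch and roll ignored; the function name is ours):

```python
import math

def facing_from_head_yaw(yaw_deg: float):
    """Estimate the worker's horizontal facing direction from the head yaw
    angle seen in an overhead camera image; returns a unit vector in the
    floor plane."""
    yaw = math.radians(yaw_deg)
    return (math.cos(yaw), math.sin(yaw), 0.0)
```

  • The resulting vector could then serve as the axis of the view cone in the line-of-sight range check sketched earlier.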
  • FIG. 9 is a diagram showing another application example of the presentation system 10 according to the embodiment.
  • the projector and camera may be installed in the same location, and as shown in FIG. 9, one device 26 having the functions of a projector and a camera may be used for the presentation system 10.
  • the contact position acquisition unit 11, state acquisition unit 12, recognition determination unit 13, presentation mode selection unit 14, and presentation unit 15 may be realized by a computer 27 connected to the device 26.
  • the camera used in the presentation system 10 may also be a camera worn by the worker 30 (a wearable camera).
  • FIGS. 10A and 10B show examples of cameras worn by the worker 30.
  • the camera worn by the worker 30 may be an eye gaze camera 28 worn near the eyes of the worker 30.
  • the camera worn by the worker 30 may be AR (Augmented Reality) glasses 29 worn by the worker 30.
  • FIG. 11 shows an example of a camera image of the work object 40 captured by a camera (the eye gaze camera 28 or the AR glasses 29) worn by the worker 30.
  • a camera image captured by a camera worn near the eyes of the worker 30 is an image close to the field of view of the worker 30. Therefore, if the contact position 50 is included in the camera image, it can be determined that the worker 30 can recognize the contact position 50, and if the contact position 50 is not included in the camera image, it can be determined that the worker 30 cannot recognize the contact position 50. In other words, when a camera worn by the worker 30 is used, estimation of the face direction and the line-of-sight range 31 and coordinate transformation are not required, making it possible to simplify the design of the algorithm.
  • the state acquisition unit 12 acquires a camera image captured by a camera worn by the worker 30 as the state of the worker 30 (specifically, acquires where the worker 30 is looking as the state of the worker 30), and the recognition determination unit 13 may determine whether the contact position 50 is included in the camera image, thereby determining whether the worker 30 can recognize the contact position 50.
  • However, even when the contact position 50 is included in the camera image, if it appears only small in the image, the worker 30 may not be able to recognize the contact position 50. Therefore, whether or not the worker 30 can recognize the contact position 50 may be determined according to the size of the content displayed in the camera image.
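  • A sketch of the wearable-camera check, combining image containment with the apparent-size cutoff just described (image resolution and the pixel threshold are illustrative assumptions):

```python
def recognizable_in_wearable_image(px, py, apparent_size_px,
                                   width=1280, height=720, min_size_px=20):
    """Judge the worker able to recognize the contact position if its
    projection (px, py) falls inside the wearable camera image and its
    apparent size is at least `min_size_px` pixels."""
    inside = 0 <= px < width and 0 <= py < height
    return inside and apparent_size_px >= min_size_px
```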
  • the presentation system 10 may use an electrooculography sensor attached near the eyes of the worker 30.
  • the state acquisition unit 12 may acquire information on the electrooculography of the worker 30 as the gaze information of the worker 30, and the recognition determination unit 13 may estimate the gaze range 31 based on the information on the electrooculography of the worker 30. For example, if the range of movement of the eyeballs of the worker 30 is narrow, it is considered that the worker 30 is gazing at a certain range, and it can be determined that the recognition range of the worker 30 is narrow and that the worker 30 recognizes details within that recognition range. Therefore, in this case, the recognition determination unit 13 can estimate a gaze range 31 with a large visual distance 33 and a narrow field of view angle 32.
  • Conversely, if the range of movement of the eyeballs of the worker 30 is wide, it is considered that the worker 30 is looking over a wide range, and the recognition determination unit 13 can estimate a gaze range 31 with a wide field of view angle 32.
  • the state acquisition unit 12 may also acquire a video showing the face of the worker 30 (specifically, acquire the facial expression of the worker 30 from the video showing the face of the worker 30 as the state of the worker 30), and the recognition determination unit 13 may further determine whether the worker 30 can recognize the contact position 50 (specifically, whether the worker 30 recognizes it) based on the facial expression of the worker 30 shown in the video.
  • FIGS. 12A and 12B are diagrams showing examples of facial expressions of the worker 30.
  • In FIG. 12A, a worker 30 with a frowning expression is shown, and in FIG. 12B, a worker 30 with a squinting expression is shown.
  • If the facial expression of the worker 30 shown in the acquired video is one indicating difficulty seeing, such as frowning or squinting, the recognition determination unit 13 can determine that the worker 30 is unable to recognize the presented contact position 50. Therefore, in this case, the presentation mode selection unit 14 may select a presentation mode that makes it easier for the worker 30 to recognize the contact position 50. Furthermore, if the facial expression of the worker 30 shown in the acquired video is one indicating that the worker can recognize the contact position 50, such as opening the eyes wide, moving the eyebrows, or opening the mouth, the recognition determination unit 13 can determine that the worker 30 can recognize the presented contact position 50. Therefore, in this case, the presentation mode selection unit 14 does not need to select a presentation mode different from the current one.
  • the recognition determination unit 13 may estimate the gaze range 31 of the worker 30 based on the facial movement of the worker 30 shown in the moving image as the gaze information of the worker 30. For example, if the facial movement of the worker 30 is slow, the worker 30 is considered to be gazing at a certain range, and it can be determined that the recognition range of the worker 30 is narrow and that the worker recognizes details within that recognition range. Therefore, in this case, the recognition determination unit 13 can estimate a gaze range 31 with a large visual distance 33 and a narrow field of view angle 32. Also, for example, if the facial movement of the worker 30 is fast, the worker 30 is considered to be looking at a wide range, and it can be determined that the recognition range of the worker 30 is wide. Therefore, in this case, the recognition determination unit 13 can estimate a gaze range 31 with a wide field of view angle 32.
  • the recognition determination unit 13 may also determine whether the estimation result of the gaze range 31 of the worker 30 was correct based on the facial expression or movement of the worker 30 shown in the moving image as the gaze information of the worker 30. For example, if the estimate of the gaze range 31 of the worker 30 was incorrect, the recognition determination unit 13 can re-estimate the gaze range 31. For example, if the facial expression of the worker 30 is a frowning or squinting expression as in FIG. 12A or FIG. 12B, it can be estimated that the contact position 50 in the gaze range 31 is difficult to see, and it can be determined that the viewing distance 33 in the estimated gaze range 31 is too long. Therefore, in this case, it is possible to re-estimate the gaze range 31 with a shorter viewing distance 33.
  • In this way, when the estimation was incorrect, the recognition determination unit 13 can re-estimate the line-of-sight range 31 with a shorter viewing distance 33 or a narrower field of view angle 32.
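  • A sketch of this re-estimation step (the expression labels and the 0.8 scaling factor are illustrative assumptions, not values from the disclosure):

```python
def reestimate_gaze_range(fov_deg: float, view_dist: float, expression: str):
    """Shrink the estimated line-of-sight range 31 when the worker's facial
    expression suggests the presented contact position is hard to see."""
    if expression in ("frowning", "squinting"):
        return fov_deg * 0.8, view_dist * 0.8  # narrower angle, shorter distance
    return fov_deg, view_dist
```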
  • the presentation mode selection unit 14 may select a new presentation mode in addition to the already selected presentation mode, and the presentation unit 15 may present the contact position 50 according to the newly selected presentation mode in addition to the presentation according to the already selected presentation mode. This will be described with reference to Figures 13A and 13B.
  • FIGS. 13A and 13B are diagrams showing an example of presentation according to a newly selected presentation mode.
  • FIGS. 13A and 13B show a table on which a work object 40 is placed, and an image is projected onto the table by a projector 22. For example, an image is projected onto a contact position 50 on the work object 40, and an additional image is also projected around the work object 40 on the table.
  • the image projected onto the contact position 50 on the work object 40 is an example of presentation according to an already selected presentation mode
  • the image projected around the work object 40 on the table is an example of presentation according to a newly selected presentation mode.
  • an additional image 51 may be displayed that indirectly displays the contact position 50 that the worker 30 cannot recognize, as shown in FIG. 13A.
  • the additional image 51 is an image that indirectly displays the location of the contact position 50 that the worker 30 cannot recognize by using an arrow or text.
  • an additional image 52 may be displayed that directly displays the contact position 50 that the worker 30 cannot recognize, as shown in FIG. 13B.
  • the additional image 52 is an image of the work object 40 captured from the opposite side of the worker 30, and by looking at the additional image 52, the worker 30 can recognize that the unrecognized contact position 50 is on the opposite side to the side the worker 30 is looking at.
  • the presentation according to the newly selected presentation mode may be audio.
  • a voice may be output to inform the worker 30 that the contact position 50 that he or she is unable to recognize is on the opposite side to the side the worker 30 is looking at.
  • In this way, when it is determined that the worker 30 cannot recognize the contact position 50, a presentation mode that makes it easier for the worker 30 to recognize the contact position 50 is selected, and the contact position 50 is presented according to the selected presentation mode. This makes it easier for the worker 30 to recognize the contact position 50 of the robot 21.
  • FIG. 14 is a block diagram showing an example of a presentation system 10a according to the first variation of the embodiment.
  • the presentation system 10a differs from the presentation system 10 according to the embodiment in that it includes a concentration level estimation unit 16.
  • the presentation system 10a is basically the same as the presentation system 10 according to the embodiment, so the following description will focus on the differences.
  • the concentration level estimation unit 16 is realized by a processor that executes a program stored in a memory provided in the presentation system 10a, and more specifically, by the computer 25 shown in FIG. 2 and FIG. 7 or the computer 27 shown in FIG. 9.
  • the concentration level estimation unit 16 may be a component having a processor, or a semiconductor chip having a processor.
  • the concentration level estimation unit 16 may also be a computer including a processor and memory, or a device including a processor and memory.
  • the concentration level estimation unit 16 may also be a device having a computer.
  • the state acquisition unit 12 acquires a video showing the face of the worker 30 (specifically, acquires the state of the worker 30 from the video showing the face of the worker 30), and the concentration estimation unit 16 estimates the concentration level of the worker 30 on the work target 40 based on the acquired video.
  • the concentration estimation unit 16 extracts at least one of facial expression information, pulse wave information, and eye movement information from the video, and estimates the concentration level of the worker 30 using at least one of these pieces of information.
  • the recognition determination unit 13 determines whether or not the worker 30 can recognize the contact position 50 based on the acquired state of the worker 30, the contact position information, and the concentration level of the worker 30. For example, the recognition determination unit 13 estimates the line of sight 31 of the worker 30 based on the state of the worker 30 and the concentration level of the worker 30, and determines whether or not the contact position 50 indicated by the contact position information is included in the line of sight 31, thereby determining whether or not the worker 30 can recognize the contact position 50.
  • For example, when the concentration level of the worker 30 is high, the recognition determination unit 13 can estimate a gaze range 31 with a large viewing distance 33 and a wide field of view angle 32.
  • Conversely, when the concentration level of the worker 30 is low, the recognition determination unit 13 can estimate a gaze range 31 with a small viewing distance 33 and a narrow field of view angle 32.
  • In this way, the estimated concentration level can be used to determine whether the worker 30 can recognize the contact position 50.
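  • A sketch of how the concentration level could adjust the estimated gaze range (the linear scaling and its coefficients are illustrative assumptions):

```python
def gaze_range_from_concentration(base_fov_deg, base_view_dist, concentration):
    """Scale the default line-of-sight range by a concentration level in
    [0.0, 1.0]: higher concentration -> wider angle and longer distance,
    following the passage above."""
    scale = 0.5 + concentration   # 0.5x when fully distracted, 1.5x when focused
    return base_fov_deg * scale, base_view_dist * scale
```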
  • the presentation mode selection unit 14 may select a presentation mode based on the concentration level of the worker 30. If the concentration level is estimated to be low, it is considered that the worker 30 will have difficulty focusing on a wide range. For this reason, in this case, the presentation mode selection unit 14 may select a presentation mode that makes it easier for the worker 30 to recognize the contact position 50 (for example, a display mode in which the color or size is easy to see), and the presentation unit 15 may present the contact position 50 according to the selected presentation mode.
  • the concentration level estimation unit 16 may estimate the fatigue level of the worker 30, the recognition determination unit 13 may determine whether or not the worker 30 can recognize the contact position 50 based on the acquired state of the worker 30, the contact position information, and the worker's fatigue level, and the presentation mode selection unit 14 may select the presentation mode based on the worker's fatigue level.
  • FIG. 15 is a block diagram showing an example of a presentation system 10b according to the second variation of the embodiment.
  • the presentation system 10b differs from the presentation system 10 according to the embodiment in that it includes an operation determination unit 17.
  • the presentation system 10b is basically the same as the presentation system 10 according to the embodiment, so the following description will focus on the differences.
  • the operation determination unit 17 is realized by a processor that executes a program stored in a memory provided in the presentation system 10b, and specifically, by the computer 25 shown in FIG. 2 and FIG. 7 or the computer 27 shown in FIG. 9.
  • the operation determination unit 17 may be a component having a processor, or a semiconductor chip having a processor.
  • the operation determination unit 17 may also be a computer including a processor and memory, or a device including a processor and memory.
  • the operation determination unit 17 may also be a device having a computer.
  • the state acquisition unit 12 acquires a video in which the worker 30 appears (specifically, acquires the state of the worker 30 from the video in which the worker 30 appears), the operation determination unit 17 determines the operation of the worker 30 with respect to the work object 40 based on the video, and the recognition determination unit 13 determines, based on the determination result of the operation determination unit 17, whether or not the worker 30 has recognized the contact position 50 through the presentation of the contact position 50 by the presentation unit 15.
  • the operation determination unit 17 may determine whether or not the worker 30 has touched the contact position 50 as the operation of the worker 30 with respect to the work object 40, and the recognition determination unit 13 may determine whether or not the worker 30 has recognized the contact position 50 through the presentation of the contact position 50 by the presentation unit 15 based on whether or not the worker 30 has touched the contact position 50.
  • When the worker 30 performs an operation indicating that he or she has not recognized the contact position 50, the recognition determination unit 13 determines that the worker 30 does not recognize the contact position 50 despite the presentation made by the presentation unit 15. In this case, the recognition determination unit 13 can re-estimate the line-of-sight range 31 with a shorter viewing distance 33 or a narrower field of view angle 32.
  • An operation indicating that the worker 30 has not recognized the contact position 50 is, for example, an operation in which the worker 30 ignores the contact position 50 and moves the robot 21 toward a place unrelated to the contact position 50 during teaching work, or an operation in which the worker 30 approaches the contact position 50 during collaborative work.
  • It may also be an operation in which the worker 30 moves around as if searching for the contact position 50, or an operation in which the worker 30 appears confused because he or she does not know the contact position 50.
  • the presentation mode selection unit 14 may select a presentation mode based on the determination result of the operation determination unit 17. For example, when the operation determination unit 17 determines that the worker 30 has performed an operation indicating that the contact position 50 has not been recognized, the presentation mode selection unit 14 may select a presentation mode that makes it easier for the worker 30 to recognize the contact position 50, and the presentation unit 15 may present the contact position 50 according to the selected presentation mode. Note that, in teaching work, it is possible that the worker 30 considers the presented contact position 50 to be unnecessary information and deliberately ignores it, so the presentation mode selection unit 14 may select a presentation mode that reduces or deletes the display.
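  • A sketch of the operation-based recognition feedback (the operation labels are our assumptions, standing in for the output of a video-based operation classifier):

```python
def recognized_from_operation(operation: str) -> bool:
    """Map an operation label to a recognition judgment, following the
    examples above: touching the presented contact position counts as
    recognition, while searching or unrelated motions count against it."""
    not_recognized = {
        "moves_robot_to_unrelated_place",  # ignoring it during teaching work
        "approaches_contact_position",     # unaware of it during collaboration
        "searching_motion",
        "confused_motion",
    }
    return operation not in not_recognized
```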
  • FIG. 16 is a block diagram showing an example of a presentation system 10c according to the third variation of the embodiment.
  • the presentation system 10c differs from the presentation system 10 according to the embodiment in that it includes a proficiency acquisition unit 18. In other respects, it is basically the same as the presentation system 10 according to the embodiment, so the following description will focus on the differences.
  • the proficiency acquisition unit 18 is realized by a processor that executes a program stored in a memory provided in the presentation system 10c, and specifically, by the computer 25 shown in Figures 2 and 7 or the computer 27 shown in Figure 9.
  • the proficiency acquisition unit 18 may be a component equipped with a processor, or a semiconductor chip equipped with a processor.
  • the proficiency acquisition unit 18 may also be a computer including a processor and memory, or a device including a processor and memory.
  • the proficiency acquisition unit 18 may also be a device equipped with a computer.
  • the proficiency level acquisition unit 18 acquires the proficiency level of the worker 30 in performing the work on the work object 40, and the presentation mode selection unit 14 further selects the presentation mode based on the proficiency level of the worker 30.
  • The proficiency level is, for example, the degree of familiarity of the worker 30 with the task, and the proficiency level acquisition unit 18 acquires the proficiency level of the worker 30.
  • the proficiency level may be the time spent engaged in the specific task, or may be a proficiency level divided into multiple stages.
  • For example, when the proficiency level of the worker 30 is high, the presentation mode selection unit 14 may select a presentation mode for presenting numerical information such as the speed or pressure at the time of contact, in addition to the presentation mode for presenting the contact position 50.
  • FIG. 17 shows an example of a presentation according to a presentation mode selected based on the proficiency level of the worker 30.
  • In addition, a display showing the work sequence may be provided using colors or marks, etc.
  • the presentation mode selection unit 14 may select a presentation mode for presenting many contact positions 50 based on the knowledge of the worker 30 so that the worker 30 can select the optimal position from among many contact positions 50.
  • Conversely, when the proficiency level of the worker 30 is low, the presentation mode selection unit 14 selects a presentation mode that makes it easier for the worker 30 to recognize the contact position 50. For example, a presentation mode that combines display and sound output may be selected. Also, since a worker 30 who is not accustomed to the work may be confused if a lot of information is presented, the presentation of numerical information may be refrained from, or, if there are multiple contact positions 50, at least one with a high priority or an early position in the task order may be presented.
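  • A sketch of proficiency-based selection (the stage threshold, dictionary fields, and priority convention are illustrative assumptions):

```python
def mode_for_proficiency(level: int, contacts: list) -> dict:
    """Select presentation content by proficiency: skilled workers get all
    contact positions plus numeric detail; novices get only the highest-
    priority contact position, with sound added to aid recognition.
    `contacts` is a list of dicts with a 'priority' key (lower = higher)."""
    if level >= 3:  # e.g. proficiency graded in stages 1-5
        return {"contacts": contacts, "show_numeric": True, "sound": False}
    top = sorted(contacts, key=lambda c: c["priority"])[:1]
    return {"contacts": top, "show_numeric": False, "sound": True}
```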
  • the present disclosure can be realized not only as a presentation system, but also as a presentation method including steps (processing) performed by components that make up the presentation system.
  • the presentation method includes steps S11, S12, S13, S14, and S15.
  • Step S11 is a contact position acquisition step of acquiring contact position information indicating the contact position between a robot working on a work object and the work object.
  • Step S12 is a status acquisition step for acquiring the status of a worker working on the work object.
  • Step S13 is a recognition determination step for determining whether or not the worker can recognize the contact position based on the acquired status of the worker and the contact position information.
  • Step S14 is a presentation mode selection step for selecting a presentation mode for presenting the contact position based on the contact position information and the determination result in the recognition determination step.
  • Step S15 is a presentation step for presenting the contact position according to the selected presentation mode.
  • the steps in the presentation method may be executed by a computer (computer system).
  • the present disclosure can be realized as a program for causing a computer to execute the steps included in the presentation method.
  • the present disclosure can be realized as a non-transitory computer-readable recording medium, such as a CD-ROM, on which the program is recorded.
  • each step is performed by running the program using hardware resources such as a computer's CPU (Central Processing Unit), memory, and input/output circuits.
  • each step is performed by the CPU obtaining data from memory or input/output circuits, etc., performing calculations, and outputting the results of the calculations to memory or input/output circuits, etc.
  • each component included in the presentation system of the above embodiment may be realized as a dedicated or general-purpose circuit.
  • each component included in the presentation system of the above embodiment may be realized as an LSI (Large Scale Integration), which is an integrated circuit (IC).
  • the integrated circuit is not limited to an LSI, and may be realized by a dedicated circuit or a general-purpose processor.
  • a programmable FPGA (Field Programmable Gate Array) or a reconfigurable processor that allows the connections and settings of circuit cells inside the LSI to be reconfigured may also be used.
  • this disclosure also includes forms obtained by applying various modifications to the embodiments that a person skilled in the art may conceive, and forms realized by arbitrarily combining the components and functions of each embodiment within the scope that does not deviate from the spirit of this disclosure.
  • a presentation system comprising: a contact position acquisition unit that acquires contact position information indicating the contact position between a robot working on a work object and the work object; a status acquisition unit that acquires the status of a worker working on the work object; a recognition determination unit that determines whether the worker can recognize the contact position based on the acquired status of the worker and the contact position information; a presentation mode selection unit that selects a presentation mode for presenting the contact position based on the contact position information and the determination result of the recognition determination unit; and a presentation unit that presents the contact position according to the selected presentation mode.
  • a presentation mode that will make it easier for the worker to recognize the contact position is selected, and the contact position is presented according to the selected presentation mode. This makes it easier for the worker to recognize the robot's contact position.
  • if the contact position is within the line of sight, it can be determined that the worker can recognize the contact position, and if the contact position is not within the line of sight, it can be determined that the worker cannot recognize the contact position.
  • the direction of the worker's gaze can be estimated from information about the angle of the worker's head, and it can be inferred that the direction of the worker's gaze has a gaze range.
  • the worker's visual field angle and viewing distance can be estimated from the worker's electrooculography, making it possible to estimate the worker's line of sight.
  • the worker's visual field angle and viewing distance can be estimated from the worker's facial movements, making it possible to estimate the worker's line of sight.
  • the gaze range can be re-estimated.
  • the camera image captured by the camera attached near the worker's eye is an image that is close to the worker's field of vision. Therefore, if the contact position is included in the camera image, it can be determined that the worker can recognize the contact position, and if the contact position is not included in the camera image, it can be determined that the worker cannot recognize the contact position.
  • a new presentation mode is selected and the contact position is presented anew according to the newly selected presentation mode, making it even easier for the worker to recognize the robot's contact position.
  • the state acquisition unit acquires a video image showing the face of the worker, and the presentation system further includes a concentration level estimation unit that estimates the concentration level of the worker on the work target based on the video image.
  • a presentation system according to any one of Techniques 1 to 9.
  • the estimated level of concentration can be used in presenting the contact position and the like.
  • the estimated level of concentration can be used to determine whether the worker can recognize the contact position.
  • a presentation system according to any one of Techniques 1 to 12, in which the state acquisition unit acquires a video image showing the worker, the presentation system further includes a motion determination unit that determines a motion of the worker with respect to the work object based on the video image, and the recognition determination unit further determines, based on a result of the motion determination, whether or not the worker has recognized the contact position by the presentation of the contact position by the presentation unit.
  • if the worker is performing an action that indicates he or she is aware of the contact position, it is possible to determine that the worker recognized the contact position from the presentation already performed, and if the worker is performing an action that indicates he or she is not aware of it despite that presentation, it is possible to determine that the worker was not able to recognize the contact position.
  • a presentation mode that allows the worker to easily recognize the contact position is selected, and the contact position is presented according to the selected presentation mode. This makes it even easier for the worker to recognize the robot's contact position.
  • the presentation system according to any one of Techniques 1 to 15, further comprising a proficiency acquisition unit that acquires the worker's proficiency in the work performed on the work object, and the presentation mode selection unit further selects the presentation mode based on the proficiency.
  • a display mode that makes it easier for the worker to recognize the contact position is selected, and the contact position is displayed according to the selected display mode. This makes it easier for the worker to recognize the displayed contact position of the robot.
  • the worker can easily recognize the displayed contact position of the robot.
  • a presentation method including a contact position acquisition step of acquiring contact position information indicating a contact position between a robot performing work on a work object and the work object, a status acquisition step of acquiring a status of a worker performing work on the work object, a recognition determination step of determining whether or not the worker can recognize the contact position based on the acquired status of the worker and the contact position information, a presentation mode selection step of selecting a presentation mode for presenting the contact position based on the contact position information and the determination result in the recognition determination step, and a presentation step of presenting the contact position according to the selected presentation mode (a code sketch of these steps follows this list).
  • This provides a presentation method that makes it easier for workers to recognize the robot's contact position.
  • the presentation system, presentation method, and program disclosed herein can be applied to systems that control robots that perform work on work objects.
  • the presentation system, presentation method, and program disclosed herein can make it easier for workers to recognize the contact position of the robot, improving worker safety. In this way, the presentation system, presentation method, and program disclosed herein are industrially useful.
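To make the enumerated steps concrete, the following is a minimal Python sketch of the method. All names (ContactInfo, acquire_worker_state, and so on) are hypothetical stand-ins, not terms defined by this disclosure; each stub marks where a real implementation would query the robot, the camera, and the projector.

```python
from dataclasses import dataclass

@dataclass
class ContactInfo:
    position: tuple  # (x, y, z) planned contact position, robot frame

@dataclass
class WorkerState:
    facing_yaw_deg: float  # direction the worker is facing
    position: tuple        # (x, y) worker location

def acquire_contact_position() -> ContactInfo:      # contact position acquisition step
    return ContactInfo(position=(0.6, 0.2, 0.0))    # stub: would query the robot

def acquire_worker_state() -> WorkerState:          # status acquisition step
    return WorkerState(facing_yaw_deg=15.0, position=(0.0, 0.0))  # stub: from a camera

def can_recognize(state: WorkerState, info: ContactInfo) -> bool:  # recognition determination step
    return abs(state.facing_yaw_deg) < 30.0         # stub: a real check uses the gaze range

def select_mode(info: ContactInfo, recognizable: bool) -> dict:    # presentation mode selection step
    return {"size": "normal" if recognizable else "large",
            "blink": not recognizable}

def present(info: ContactInfo, mode: dict) -> None:  # presentation step
    print(f"project marker at {info.position} with mode {mode}")

info = acquire_contact_position()
state = acquire_worker_state()
present(info, select_mode(info, can_recognize(state, info)))
```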

Abstract

Provided is a presentation system which makes it easy for a worker to recognize a contact position of a robot. The presentation system (10) comprises a contact position acquisition unit (11), a state acquisition unit (12), a recognition determination unit (13), a presentation mode selection unit (14), and a presentation unit (15). The contact position acquisition unit (11) acquires contact position information that indicates the contact position between a robot, which performs work on a work object, and the work object. The state acquisition unit (12) acquires the state of a worker who performs work on the work object. The recognition determination unit (13) determines, on the basis of the acquired state of the worker and the contact position information, whether the worker can recognize the contact position. The presentation mode selection unit (14) selects, on the basis of the contact position information and the determination result of the recognition determination unit (13), a presentation mode for presenting the contact position. The presentation unit (15) presents the contact position according to the selected presentation mode.

Description

Presentation system, presentation method, and program
Patent Document 1: JP 2004-122313 A
Patent Document 2: JP 2009-123045 A
However, even if the contact position of a robot is presented using the technology described in Patent Documents 1 or 2, there are cases where the contact position is presented outside the person's line of sight and the person is unable to recognize it, or where the contact position is within the person's line of sight but the person does not recognize it. In such cases, it is difficult to support the person by presenting the contact position.
A presentation system according to one embodiment of the present disclosure includes a contact position acquisition unit that acquires contact position information indicating the contact position between a robot working on a work object and the work object, a status acquisition unit that acquires the status of a worker working on the work object, a recognition determination unit that determines whether the worker can recognize the contact position based on the acquired status of the worker and the contact position information, a presentation mode selection unit that selects a presentation mode for presenting the contact position based on the contact position information and the determination result of the recognition determination unit, and a presentation unit that presents the contact position according to the selected presentation mode.
A presentation method according to one aspect of the present disclosure includes a contact position acquisition step of acquiring contact position information indicating the contact position between a robot performing work on a work object and the work object, a status acquisition step of acquiring the status of a worker performing work on the work object, a recognition determination step of determining whether or not the worker can recognize the contact position based on the acquired status of the worker and the contact position information, a presentation mode selection step of selecting a presentation mode for presenting the contact position based on the contact position information and the determination result in the recognition determination step, and a presentation step of presenting the contact position according to the selected presentation mode.
A program according to one aspect of the present disclosure is a program for causing a computer to execute the above-mentioned presentation method.
These comprehensive or specific aspects may be realized as a system, method, integrated circuit, computer program, or computer-readable recording medium such as a CD-ROM, or as any combination of a system, method, integrated circuit, computer program, and recording medium.
The presentation system, presentation method, and program according to one embodiment of the present disclosure make it easier for workers to recognize the contact position of the robot.
The drawings are briefly described as follows.
  • A block diagram showing an example of a presentation system according to an embodiment.
  • A diagram showing an application example of the presentation system according to the embodiment.
  • A flowchart showing an example of the operation of the presentation system according to the embodiment.
  • A diagram for explaining the line-of-sight range of a worker.
  • A diagram showing an example of a state in which the contact position is not included in the line-of-sight range of the worker.
  • A diagram showing an example of a state in which the contact position is not included in the line-of-sight range of the worker.
  • A diagram showing an example of a state in which the contact position is not included in the line-of-sight range of the worker.
  • A flowchart showing another example of the operation of the presentation system according to the embodiment.
  • A diagram showing another application example of the presentation system according to the embodiment.
  • A diagram showing an example of a camera image of a worker captured from above the worker's head.
  • A diagram showing another application example of the presentation system according to the embodiment.
  • A diagram showing an example of a camera worn by a worker.
  • A diagram showing another example of a camera worn by a worker.
  • A diagram showing an example of a camera image of a work object captured by a camera worn by a worker.
  • A diagram showing an example of a facial expression of a worker.
  • A diagram showing another example of a facial expression of a worker.
  • A diagram showing an example of presentation according to a newly selected presentation mode.
  • A diagram showing another example of presentation according to a newly selected presentation mode.
  • A block diagram showing an example of a presentation system according to a first modification of the embodiment.
  • A block diagram showing an example of a presentation system according to a second modification of the embodiment.
  • A block diagram showing an example of a presentation system according to a third modification of the embodiment.
  • A diagram showing an example of presentation according to a presentation mode selected based on the skill level of a worker.
The following describes the embodiment in detail with reference to the drawings.
The embodiments described below are all comprehensive or specific examples. The numerical values, shapes, materials, components, component placement positions, connection forms, steps, and order of steps shown in the following embodiments are merely examples and are not intended to limit the present disclosure.
(Embodiment)
A presentation system according to an embodiment will be described below.
FIG. 1 is a block diagram showing an example of a presentation system 10 according to an embodiment.
The presentation system 10 is a system for presenting the contact position of a robot performing work on a work object to a worker performing work on the work object. For example, the robot is a manipulator, and the robot comes into contact with the work object to perform work such as grasping. In other words, when the robot performs a grasping task, the contact position is the grasping position.
The presentation system 10 includes a contact position acquisition unit 11, a status acquisition unit 12, a recognition determination unit 13, a presentation mode selection unit 14, and a presentation unit 15. The presentation system 10 includes a computer including a processor and memory. The memory is ROM (Read Only Memory), RAM (Random Access Memory), or the like, and can store programs executed by the processor. The contact position acquisition unit 11, the status acquisition unit 12, the recognition determination unit 13, the presentation mode selection unit 14, and the presentation unit 15 are realized by, for example, a processor that executes the programs stored in the memory.
For example, the presentation system 10 may be a computer in a single housing, a system made up of multiple computers, or a device equipped with one or multiple computers. For example, the presentation system 10 may be mounted on a robot. Also, for example, the presentation system 10 may be a server. Note that the components of the presentation system 10 may be located on one server, or may be distributed across multiple servers.
The contact position acquisition unit 11 acquires contact position information indicating the contact position of the robot with the work object. For example, the contact position acquisition unit 11 acquires information indicating the contact position with the work object from the robot. In doing so, the contact position acquisition unit 11 may also acquire information such as the type of work object, the speed at which the robot approaches the contact position, or the pressure when the robot contacts the contact position. The contact position acquisition unit 11 may be a component with a processor, or a device with a processor. It may also be a computer including a processor and memory, or a device including a processor and memory. The contact position acquisition unit 11 may also be equipped with a camera or a sensor, and this camera or sensor may acquire information such as the contact position of the robot and the pressure when the robot contacts the contact position. The contact position acquisition unit 11 may also be a device with a computer, for example a sensor device with a built-in computer including a processor and memory.
The status acquisition unit 12 acquires the status of a worker performing work on the work object. For example, the status acquisition unit 12 acquires the status of the worker by acquiring video images of the worker from a camera that captures the worker. Details of the operation of the status acquisition unit 12 will be described later. The status acquisition unit 12 may be a component with a processor, or a device with a processor. It may also be a computer including a processor and memory, or a device including a processor and memory. The status acquisition unit 12 may also be equipped with an imaging device such as a camera that captures an image of the worker, or with a sensor. It may also be a device equipped with a computer, for example an imaging device with a built-in computer including a processor and memory.
The recognition determination unit 13 determines whether or not the worker can recognize the contact position based on the acquired state of the worker and the contact position information. Details of the operation of the recognition determination unit 13 will be described later. The recognition determination unit 13 may be a component with a processor, or a device with a processor. It may also be a computer including a processor and memory, a device including a processor and memory, or a device equipped with a computer.
The presentation mode selection unit 14 selects a presentation mode for presenting the contact position based on the contact position information and the determination result of the recognition determination unit 13. Details of the operation of the presentation mode selection unit 14 will be described later. The presentation mode selection unit 14 may be a component with a processor, or a device with a processor. It may also be a computer including a processor and memory, a device including a processor and memory, or a device equipped with a computer.
The presentation unit 15 presents the contact position according to the selected presentation mode. Details of the operation of the presentation unit 15 will be described later. The presentation unit 15 may be a component with a processor, or a device with a processor. It may also be a computer including a processor and memory, or a device including a processor and memory. The presentation unit 15 may also be a device with a presentation function, such as a projector, or may be provided in a device with a presentation function, for example built into a projector or a speaker.
FIG. 2 is a diagram showing an application example of the presentation system 10 according to the embodiment.
For example, as shown in FIG. 2, the presentation system 10 is applied when a worker 30 works in collaboration with a robot 21 on a work object 40. In the example shown in FIG. 2, the presentation system 10 is a system consisting of a computer 24 and a computer 25. For example, the contact position acquisition unit 11, the presentation mode selection unit 14, and the presentation unit 15 are realized by the computer 24, and the status acquisition unit 12 and the recognition determination unit 13 are realized by the computer 25. The presentation system 10 may include at least one of the robot 21, the projector 22, and the camera 23; in the example shown in FIG. 2, it includes all three. The computer 24 is connected to the robot 21 and the projector 22 via a cable 34a and a cable 34b, respectively. The computer 25 is connected to the camera 23 via a cable 34c. The computer 24 and the computer 25 are connected via a cable 34d. The cables 34a to 34d may be, for example, LAN (Local Area Network) cables or USB (Universal Serial Bus) cables. These connections may also be made wirelessly, for example over a wireless LAN or Bluetooth (registered trademark), instead of using cables.
For example, the presentation unit 15 presents the contact position 50 of the robot 21 with the work object 40 via the projector 22. For example, when the presentation mode is display, the presentation mode selection unit 14 selects, as the presentation mode, a display mode for displaying the contact position 50 based on the contact position information and the determination result of the recognition determination unit 13, and the presentation unit 15 uses the projector 22 to produce a display on the work object 40 according to the selected display mode. FIG. 2 also shows the line-of-sight range 31 of the worker 30, which is described in detail later. Note that in FIG. 2 and subsequent figures, the contact position 50 is shown in a circular display mode, which is also described later.
Next, the operation of the presentation system 10 is described in detail.
FIG. 3 is a flowchart showing an example of the operation of the presentation system 10 according to the embodiment. FIG. 3 illustrates an example in which it is determined whether the worker 30 can recognize the contact position 50 before the contact position 50 is presented.
First, the contact position acquisition unit 11 acquires contact position information indicating the contact position 50 between the robot 21, which performs work on the work object 40, and the work object 40 (step S11), and the status acquisition unit 12 acquires the status of the worker 30 performing work on the work object 40 (step S12). Note that the order in which steps S11 and S12 are performed is not particularly limited. For example, the status acquisition unit 12 acquires, as the status of the worker 30, a camera image of the worker 30 captured by the camera 23. The camera image of the worker 30 includes line-of-sight information of the worker 30, and the recognition determination unit 13 estimates the line-of-sight range 31 of the worker 30 from the line-of-sight information. The line-of-sight range 31 of the worker 30 is explained with reference to FIG. 4.
FIG. 4 is a diagram for explaining the line-of-sight range 31 of the worker 30. As will be described in detail later, FIG. 4 is also a diagram showing an example of a state in which the contact position 50 is included in the line-of-sight range 31 of the worker 30.
For example, as shown in FIG. 4, the line-of-sight range 31 is a range determined by the viewing angle 32 and the viewing distance 33 of the worker 30. The viewing angle 32 and the viewing distance 33 vary from person to person, but, for example, they are predetermined to be the viewing angle and viewing distance of a standard person. The recognition determination unit 13 estimates the line-of-sight range 31 from the viewing angle 32 centered on the direction in which the worker 30 is facing in the camera image and the viewing distance 33 in that direction. Note that the line-of-sight range 31 of the worker 30 may also be estimated by using an eye tracker.
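As one possible reading of this geometry, the following Python sketch models the line-of-sight range as a two-dimensional sector given by a viewing angle and a viewing distance centered on the direction the worker is facing. The default angle and distance values are illustrative assumptions, not values fixed by this disclosure.

```python
import math

# Assumed "standard person" defaults; the text notes individual variation.
STANDARD_VIEW_ANGLE_DEG = 60.0
STANDARD_VIEW_DISTANCE_M = 2.0

def in_gaze_range(worker_xy, facing_yaw_deg, point_xy,
                  view_angle_deg=STANDARD_VIEW_ANGLE_DEG,
                  view_distance_m=STANDARD_VIEW_DISTANCE_M) -> bool:
    """True if point_xy lies inside the sector-shaped line-of-sight range."""
    dx = point_xy[0] - worker_xy[0]
    dy = point_xy[1] - worker_xy[1]
    if math.hypot(dx, dy) > view_distance_m:   # outside the viewing distance (FIG. 5B)
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    offset = (bearing - facing_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(offset) <= view_angle_deg / 2.0  # inside the viewing angle (FIG. 5A)
    # Occlusion (the blind spot of FIG. 5C) would need an additional visibility check.

print(in_gaze_range((0, 0), 0.0, (1.0, 0.3)))   # True: in front of and close to the worker
print(in_gaze_range((0, 0), 0.0, (-1.0, 0.0)))  # False: behind the worker
```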
Returning to FIG. 3, the recognition determination unit 13 determines whether the worker 30 can recognize the contact position 50 based on the acquired state of the worker 30 and the contact position information (step S13). Here, the recognition determination unit 13 determines whether the worker 30 would be able to recognize the contact position 50 if some display or the like were subsequently produced at the contact position 50 indicated by the contact position information. For example, the recognition determination unit 13 determines whether the worker 30 can recognize the contact position 50 by determining whether the contact position 50 is included in the line-of-sight range 31. This is explained using FIG. 4 and FIGS. 5A to 5C.
As described above, FIG. 4 shows an example of a state in which the contact position 50 is included in the line-of-sight range 31 of the worker 30, and each of FIGS. 5A to 5C shows an example of a state in which the contact position 50 is not included in the line-of-sight range 31 of the worker 30.
As shown in FIG. 4, if the contact position 50 is within the viewing angle 32 and within the viewing distance 33, the recognition determination unit 13 determines that the contact position 50 is included in the line-of-sight range 31 and that the worker 30 can recognize the contact position 50.
If the display indicating the contact position 50 is small, or the contrast between the display and its surroundings is low, the effective viewing angle 32 becomes narrower, the effective viewing distance 33 becomes shorter, or both. Conversely, if the worker 30 is in a state in which the contact position 50 can be recognized sufficiently well, the display indicating the contact position 50 can, for example, be made smaller.
On the other hand, if the display indicating the contact position 50 is large, or the contrast between the display and its surroundings is high, the effective viewing angle 32 becomes wider, the effective viewing distance 33 becomes longer, or both. Conversely, if the worker 30 cannot sufficiently recognize the contact position 50, the display indicating the contact position 50 can, for example, be enlarged or its contrast with the surroundings increased so that the worker 30 can recognize it sufficiently well.
As shown in FIG. 5A, when the contact position 50 is outside the viewing angle 32, the recognition determination unit 13 determines that the contact position 50 is not included in the line-of-sight range 31 and that the worker 30 cannot recognize the contact position 50. Also, as shown in FIG. 5B, even if the contact position 50 is within the viewing angle 32, when the contact position 50 is outside the viewing distance 33, the recognition determination unit 13 determines that the contact position 50 is not included in the line-of-sight range 31 and that the worker 30 cannot recognize the contact position 50. Also, as shown in FIG. 5C, when the contact position 50 is in the blind spot of the worker 30, the recognition determination unit 13 determines that the contact position 50 is not included in the line-of-sight range 31 and that the worker 30 cannot recognize the contact position 50. For example, whether the contact position 50 is in the blind spot of the worker 30 is determined based on the positional relationship between the work object 40, the worker 30, and the contact position 50 shown in the camera image.
Note that the contact position 50 is position information obtained from the robot 21, and its coordinate system differs from that of the line-of-sight range 31 estimated from the direction the worker is facing in the camera image from the camera 23. Therefore, the recognition determination unit 13 first performs a coordinate transformation so that the coordinate system of the contact position 50 and the coordinate system of the line-of-sight range 31 are aligned. For example, the recognition determination unit 13 may transform the coordinate system of the contact position 50 into that of the line-of-sight range 31, or the coordinate system of the line-of-sight range 31 into that of the contact position 50. Also, for example, the recognition determination unit 13 may transform both coordinate systems into the same coordinate system, such as a world coordinate system. The coordinate transformation makes it possible to compare the contact position 50 with the line-of-sight range 31, so the recognition determination unit 13 can determine whether the contact position 50 is included in the line-of-sight range 31.
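This coordinate alignment can be sketched as a rigid transform. In the hypothetical example below, the rotation matrix and translation vector are placeholder calibration values; a real system would obtain them from robot-camera calibration.

```python
import numpy as np

def to_world(point_robot, R, t):
    """Map a point from the robot frame to the world frame: x_world = R @ x_robot + t."""
    return R @ np.asarray(point_robot) + t

# Example calibration: robot frame rotated 90 degrees about Z and shifted 1 m in X.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.0])

contact_robot = np.array([0.5, 0.0, 0.1])
print(to_world(contact_robot, R, t))  # contact position expressed in the world frame
```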
Returning to FIG. 3, the presentation mode selection unit 14 selects a presentation mode for presenting the contact position 50 based on the contact position information and the determination result of the recognition determination unit 13 (step S14). For example, the presentation mode selection unit 14 selects what type of presentation to perform for the contact position 50 indicated by the contact position information, based on the determination result of the recognition determination unit 13. For example, when it is determined that the worker 30 cannot recognize the contact position 50, the presentation mode selection unit 14 selects a presentation mode that makes it easier for the worker 30 to recognize the contact position 50. For example, when the presentation is a display and it is determined that the worker 30 cannot recognize the contact position 50, the presentation mode selection unit 14 selects a display mode that differs in at least one of hue, color intensity (saturation), blinking frequency, and size from the display mode that would be selected if the worker 30 were determined to be able to recognize the contact position 50. For example, the presentation mode selection unit 14 selects a display mode whose hue is easy for the worker 30 to recognize (for example, a hue different from that of the work object 40), a display mode with higher color intensity, a display mode with more frequent blinking, or a display mode with a larger display.
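This selection logic might look like the following sketch, where the field names and the concrete attribute values (hue, saturation, blink frequency, marker size) are assumptions for illustration only.

```python
def select_display_mode(recognizable: bool, work_object_hue: str = "white") -> dict:
    """Pick a more conspicuous display when the worker is judged unable to recognize."""
    if recognizable:
        return {"hue": "green", "saturation": 0.5, "blink_hz": 0.0, "radius_cm": 3.0}
    return {
        "hue": "red" if work_object_hue != "red" else "green",  # contrast with the object
        "saturation": 1.0,   # stronger color
        "blink_hz": 2.0,     # blink to draw attention
        "radius_cm": 6.0,    # larger marker
    }

print(select_display_mode(recognizable=False, work_object_hue="red"))
```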
Note that the display indicating the contact position 50 is a circle projected near the worker 30 by the projector 22. The size and color of this circle are set so that the worker 30 can recognize it.
The display is not limited to a circle and may be another mark, for example an ellipse, a triangle, a rectangle, a polygon with five or more sides, a star, a cross, or an X mark; it may be a mesh, dots, or contour lines; it may be characters or arrows; or it may be a heat map. Various colors are conceivable for the display, such as red, yellow, blue, green, white, or purple. The display indicating the contact position 50 may also be a mark indicating a danger area.
Also, for example, additional information such as the control error of the robot 21, the time until contact, the contact pressure, the contact speed, the grip stability, or the contact order when the robot 21 contacts the contact position 50 may be presented. In other words, the presentation mode selection unit 14 may select a presentation mode for presenting this additional information as well.
For example, additional information that can be expressed numerically may be displayed as characters (numbers).
For example, the displayed color may change depending on the contact pressure. For example, a contact position 50 contacted with a strong force may be displayed in red, and one contacted with a weak force in blue.
For example, the size of the display may change depending on the time until contact; for example, the display may gradually shrink as the moment of contact approaches.
For example, when the robot 21 has a control error, the area around the contact position 50 may also be contacted by the robot 21, so contour lines that are highest at the contact position 50 and lower farther away from it may be displayed, or dots or a mesh whose density increases closer to the contact position 50 may be displayed.
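As a hypothetical illustration of these mappings, the sketch below converts contact pressure to a display color and time until contact to a marker radius; the thresholds and ranges are arbitrary example values.

```python
def pressure_to_color(pressure_n: float) -> str:
    """Strong contact is shown in red, weak contact in blue (assumed 10 N threshold)."""
    return "red" if pressure_n >= 10.0 else "blue"

def time_to_radius(seconds_to_contact: float,
                   max_radius_cm: float = 8.0, min_radius_cm: float = 2.0) -> float:
    """Shrink the marker linearly over the last 5 seconds before contact."""
    frac = max(0.0, min(1.0, seconds_to_contact / 5.0))
    return min_radius_cm + frac * (max_radius_cm - min_radius_cm)

print(pressure_to_color(15.0), time_to_radius(1.0))  # red, a partly shrunk marker
```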
The presentation mode selection unit 14 may also instruct the presentation unit 15 to output a sound such as a warning sound or a warning voice while selecting the same display mode as would be selected if the worker 30 were determined to be able to recognize the contact position 50. This makes it easier for the worker 30, on hearing the warning sound or warning voice, to recognize the contact position 50.
The presentation mode selection unit 14 may also select a display mode whose color depends on the color of the work object 40. For example, if the work object 40 is a red object, a red display on the work object 40 has low visibility and the worker 30 may not notice it. In that case, for example, a display based on green, a color contrasting with the work object 40, may be used.
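One simple, hypothetical way to pick such a contrasting color is to take the RGB complement of the work object's dominant color; a real system might instead rotate the hue in HSV space.

```python
def complementary_rgb(rgb: tuple) -> tuple:
    """Return the RGB complement of a color, as a crude contrast heuristic."""
    return tuple(255 - c for c in rgb)

print(complementary_rgb((255, 0, 0)))  # (0, 255, 255): a green-blue for a red object
```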
Returning to FIG. 3, the presentation unit 15 presents the contact position 50 according to the selected presentation mode (step S15). For example, the presentation unit 15 causes the projector 22 to display light on the contact position 50, or causes a speaker (not shown) or the like to output sound.
When the recognition determination unit 13 determines that the worker 30 cannot recognize the contact position 50, the presentation mode selection unit 14 selects a presentation mode that makes it easier for the worker 30 to recognize the contact position 50. The presentation unit 15 then presents the contact position 50 according to the selected presentation mode, which makes it easier for the worker 30 to recognize the contact position 50 of the robot 21. For example, a warning by sound or voice is conceivable as such a presentation mode.
Note that, for a contact position 50 determined to be unrecognizable to the worker 30, it may be judged that presentation is unnecessary, and that contact position 50 need not be presented.
Next, an example in which, after the contact position 50 has been presented, it is determined whether the worker 30 can recognize the presented contact position 50 (whether the worker recognizes the contact position 50) is described with reference to FIG. 6.
FIG. 6 is a flowchart showing another example of the operation of the presentation system 10 according to the embodiment.
First, the contact position acquisition unit 11 acquires contact position information indicating the contact position 50 between the robot 21, which performs work on the work object 40, and the work object 40 (step S21).
Next, the presentation mode selection unit 14 selects a presentation mode for presenting the contact position 50 based on the contact position information (step S22). In the example of FIG. 6, at the time of step S22 it has not yet been determined whether the worker 30 can recognize the presented contact position 50, so the presentation mode selection unit 14 selects, for example, a predetermined presentation mode for the contact position 50 indicated by the contact position information. For example, when the presentation mode is display, the presentation mode selection unit 14 selects a predetermined display hue, display color intensity, display blinking frequency, or display size. Also, for example, when the presentation is sound output, the presentation mode selection unit 14 selects a predetermined sound-output presentation mode. The predetermined sound output may be a sound indicating the contact position 50, for example a voice output such as "Grab the sole of the shoe."
Next, the presentation unit 15 presents the contact position 50 according to the selected presentation mode (step S23). For example, the presentation unit 15 causes the projector 22 to display light on the contact position 50, or causes a speaker (not shown) to output sound.
Next, the status acquisition unit 12 acquires the state of the worker 30 performing work on the work object 40 (step S24). Specifically, it acquires the state of the worker 30 while the contact position 50 is being presented. Because the contact position 50 is being presented, if the worker 30 has noticed the presentation, the state of the worker 30 reflects that presentation. For example, the status acquisition unit 12 acquires, as the state of the worker 30, a camera image of the worker 30 obtained by the camera 23. The camera image of the worker 30 includes line-of-sight information of the worker 30, and the recognition determination unit 13 estimates the line-of-sight range 31 of the worker 30 from the line-of-sight information.
Next, the recognition determination unit 13 determines whether the worker 30 can recognize the contact position 50 based on the acquired state of the worker 30 and the contact position information (step S25). Here, since the contact position 50 has already been presented, the recognition determination unit 13 determines whether the worker 30 recognizes the contact position 50 based on the acquired state of the worker 30 and the contact position information. For example, the recognition determination unit 13 determines whether the worker 30 recognizes the contact position 50 by determining whether the contact position 50 is included in the line-of-sight range 31.
Then, as in step S14 of FIG. 3, the presentation mode selection unit 14 selects a presentation mode for presenting the contact position 50 based on the contact position information and the determination result of the recognition determination unit 13 (step S26), and, as in step S15 of FIG. 3, the presentation unit 15 presents the contact position 50 according to the selected presentation mode (step S27). Note that, when the presentation is sound output and it is determined that the worker 30 cannot recognize the contact position 50, the presentation mode selection unit 14 selects a presentation mode whose volume or announcement content differs from the one that would be selected if the worker 30 were determined to be able to recognize the contact position 50. For example, it selects a presentation mode with a higher volume, or one that announces the detailed location of the contact position 50.
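Putting the flow of FIG. 6 together, the following hypothetical sketch presents the contact position with a default mode and escalates the presentation when the post-presentation check judges that the worker has not recognized it. The helper functions, mode values, and announcement strings are illustrative stubs for the units described above.

```python
def default_mode() -> dict:
    return {"radius_cm": 3.0, "volume": 0.3, "announce": "Grab the sole of the shoe."}

def escalated_mode() -> dict:
    return {"radius_cm": 6.0, "volume": 0.8,
            "announce": "Contact position: far side of the work object."}

def present(mode: dict) -> None:
    print("presenting:", mode)

def worker_recognizes() -> bool:
    return False  # stub: would re-run the gaze-range / expression checks

present(default_mode())        # steps S22-S23: predetermined presentation
if not worker_recognizes():    # steps S24-S25: post-presentation recognition check
    present(escalated_mode())  # steps S26-S27: re-selected, more conspicuous mode
```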
Next, other application examples of the presentation system 10 are described.
FIG. 7 is a diagram showing another application example of the presentation system 10 according to the embodiment.
As shown in FIG. 7, the camera 23 may be positioned above the head of the worker 30.
FIG. 8 is a diagram showing an example of a camera image of the worker 30 captured from above the worker's head.
When the camera 23 is positioned above the head of the worker 30, the status acquisition unit 12 can easily acquire, as the line-of-sight information of the worker 30, angle information of the head of the worker 30 from a camera image such as that shown in FIG. 8. The recognition determination unit 13 may therefore estimate the line-of-sight range 31 of the worker 30 based on the angle information of the head of the worker 30 as the line-of-sight information of the worker 30. From the head angle information, the recognition determination unit 13 can easily estimate the direction in which the worker 30 is facing, and can easily estimate the line-of-sight range 31 from the viewing angle 32 centered on that direction and the viewing distance 33 in that direction.
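As a hypothetical sketch of this estimation, assuming an upstream pose estimator has already extracted 2-D keypoints from the overhead camera image, the facing direction can be taken as the bearing of the nose relative to the shoulder midpoint:

```python
import math

def facing_yaw_deg(nose_xy, left_shoulder_xy, right_shoulder_xy) -> float:
    """Facing direction in image coordinates, from overhead-view keypoints."""
    mid = ((left_shoulder_xy[0] + right_shoulder_xy[0]) / 2.0,
           (left_shoulder_xy[1] + right_shoulder_xy[1]) / 2.0)
    return math.degrees(math.atan2(nose_xy[1] - mid[1], nose_xy[0] - mid[0]))

# A worker facing roughly along +x in image coordinates:
print(facing_yaw_deg(nose_xy=(120, 102), left_shoulder_xy=(100, 90),
                     right_shoulder_xy=(100, 110)))  # about 5.7 degrees
```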
FIG. 9 is a diagram showing another application example of the presentation system 10 according to the embodiment.
For example, the projector and the camera may be installed in the same location; as shown in FIG. 9, a single device 26 having the functions of both a projector and a camera may be used for the presentation system 10. In this case, the contact position acquisition unit 11, the status acquisition unit 12, the recognition determination unit 13, the presentation mode selection unit 14, and the presentation unit 15 may be realized by a computer 27 connected to the device 26.
The camera used in the presentation system 10 may also be a camera worn by the worker 30 (a wearable camera).
FIGS. 10A and 10B are diagrams showing examples of cameras worn by the worker 30.
As shown in FIG. 10A, the camera worn by the worker 30 may be an eye-gaze camera 28 worn near the eyes of the worker 30. Also, as shown in FIG. 10B, the camera worn by the worker 30 may be AR (Augmented Reality) glasses 29 worn by the worker 30.
FIG. 11 is a diagram showing an example of a camera image of the work object 40 captured by a camera worn by the worker 30 (the eye-gaze camera 28 or the AR glasses 29).
As shown in FIG. 11, a camera image captured by a camera worn near the eyes of the worker 30 is close to the field of view of the worker 30. Therefore, if the contact position 50 is included in the camera image, it can be determined that the worker 30 can recognize the contact position 50, and if the contact position 50 is not included in the camera image, it can be determined that the worker 30 cannot recognize the contact position 50. In other words, when a camera worn by the worker 30 is used, estimating the face direction and the line-of-sight range 31 and performing coordinate transformations become unnecessary, which simplifies the design of the algorithm.
In this way, the status acquisition unit 12 acquires, as the state of the worker 30, a camera image captured by a camera worn by the worker 30 (specifically, it acquires where the worker 30 is looking), and the recognition determination unit 13 may determine whether the worker 30 can recognize the contact position 50 by determining whether the contact position 50 is included in the camera image.
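A hypothetical sketch of this check projects the contact position, expressed in the wearable camera's frame, through a pinhole model and tests whether it falls inside the image; the intrinsic parameters are placeholder values, not calibration data from this disclosure.

```python
import numpy as np

FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0   # assumed camera intrinsics
WIDTH, HEIGHT = 640, 480                       # assumed image size

def contact_in_view(p_cam: np.ndarray) -> bool:
    """p_cam: contact position in the camera frame (z forward, meters)."""
    if p_cam[2] <= 0:                  # behind the camera: never visible
        return False
    u = FX * p_cam[0] / p_cam[2] + CX  # pinhole projection to pixel coordinates
    v = FY * p_cam[1] / p_cam[2] + CY
    return 0 <= u < WIDTH and 0 <= v < HEIGHT

print(contact_in_view(np.array([0.1, 0.0, 1.0])))  # True: slightly right of center
print(contact_in_view(np.array([1.0, 0.0, 1.0])))  # False: outside the frame
```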
Note that when the content shown in the camera image from the camera worn by the worker 30 is small (for example, when the worker 30 is far from the work object 40), the worker 30 may be unable to recognize the contact position 50; therefore, whether the worker 30 can recognize the contact position 50 may be determined according to the size of the content shown in the camera image.
An electromyography sensor worn near the eyes of the worker 30 may also be used in the presentation system 10. In this case, the status acquisition unit 12 may acquire, as the line-of-sight information of the worker 30, information on the electrooculography of the worker 30, and the recognition determination unit 13 may estimate the line-of-sight range 31 based on that information. For example, when the range of movement of the eyes of the worker 30 is narrow, the worker 30 is considered to be gazing at a fixed region, so it can be determined that the recognition range of the worker 30 is narrow and that the worker recognizes details within that range. In this case, the recognition determination unit 13 can estimate a line-of-sight range 31 with a long viewing distance 33 and a narrow viewing angle 32. Conversely, when the range of movement of the eyes of the worker 30 is wide, the worker 30 is considered to be looking over a wide area, so it can be determined that the recognition range of the worker 30 is wide. In this case, the recognition determination unit 13 can estimate a line-of-sight range 31 with a wide viewing angle 32.
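This electrooculography-based estimate can be sketched as follows; the threshold on eye-movement range and the returned angle and distance values are illustrative assumptions.

```python
def gaze_range_from_eog(eye_movement_range_deg: float) -> dict:
    """Narrow eye movement suggests fixation; wide movement suggests scanning."""
    if eye_movement_range_deg < 5.0:   # fixating on a small region
        return {"view_angle_deg": 20.0, "view_distance_m": 3.0}
    return {"view_angle_deg": 90.0, "view_distance_m": 1.5}  # scanning widely

print(gaze_range_from_eog(2.0))   # narrow angle, long distance
print(gaze_range_from_eog(30.0))  # wide angle, shorter distance
```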
 The state acquisition unit 12 may also acquire a moving image showing the face of the worker 30 (specifically, acquire the facial expression of the worker 30 from that moving image as the state of the worker 30), and the recognition determination unit 13 may further determine whether the worker 30 can recognize the contact position 50 (specifically, whether the worker 30 is recognizing it) based on the facial expression of the worker 30 shown in the moving image.
 FIGS. 12A and 12B are diagrams showing examples of facial expressions of the worker 30. FIG. 12A shows the worker 30 frowning, and FIG. 12B shows the worker 30 squinting.
 For example, if the facial expression of the worker 30 in the acquired moving image is a frowning or squinting expression as in FIG. 12A or FIG. 12B, the recognition determination unit 13 can determine that the worker 30 is unable to recognize the presented contact position 50. In this case, the presentation mode selection unit 14 may select a presentation mode that makes it easier for the worker 30 to recognize the contact position 50. Conversely, if the facial expression of the worker 30 in the acquired moving image indicates that the contact position 50 is being recognized, such as widening the eyes, moving the eyebrows, or opening the mouth, the recognition determination unit 13 can determine that the worker 30 recognizes the presented contact position 50. In this case, the presentation mode selection unit 14 need not select a presentation mode different from the current one.
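 The expression-based branch described above can be sketched as follows (the expression labels and mode names are hypothetical placeholders for the output of some expression classifier; no specific classifier is prescribed by the disclosure).

NOT_RECOGNIZING = {"frowning", "squinting"}          # FIG. 12A / FIG. 12B
RECOGNIZING = {"eyes_wide", "eyebrows_moving", "mouth_open"}

def select_next_mode(expression_label, current_mode):
    """Expression-based branch of the recognition determination."""
    if expression_label in NOT_RECOGNIZING:
        return "easier_to_recognize"   # e.g. a larger or brighter display
    if expression_label in RECOGNIZING:
        return current_mode            # no change needed
    return current_mode                # unknown expression: keep as-is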
 The recognition determination unit 13 may also estimate the gaze range 31 of the worker 30 based on the movement of the face of the worker 30 shown in the moving image, used as the gaze information of the worker 30. For example, if the face of the worker 30 moves slowly, the worker 30 can be considered to be gazing at a fixed area, so it can be determined that the recognition range of the worker 30 is narrow and that the worker 30 recognizes details within that range; the recognition determination unit 13 can therefore estimate a gaze range 31 with a long viewing distance 33 and a narrow viewing angle 32. Conversely, if the face of the worker 30 moves quickly, the worker 30 can be considered to be looking over a wide area, so it can be determined that the recognition range of the worker 30 is wide; the recognition determination unit 13 can therefore estimate a gaze range 31 with a wide viewing angle 32.
 The recognition determination unit 13 may also determine, based on the facial expression or motion of the worker 30 shown in the moving image (used as the gaze information of the worker 30), whether the estimation result of the gaze range 31 of the worker 30 was correct. For example, if the estimated gaze range 31 of the worker 30 was incorrect, the recognition determination unit 13 can re-estimate the gaze range 31. For example, if the facial expression of the worker 30 is a frowning or squinting expression as in FIG. 12A or FIG. 12B, it can be inferred that the contact position 50 within the gaze range 31 is hard to see, and it can be determined that the viewing distance 33 of the estimated gaze range 31 was too long. In this case, a gaze range 31 with a shorter viewing distance 33 can be re-estimated. Also, if it was determined that the worker 30 can recognize the contact position 50 but the worker 30 then acts as if unable to recognize it, it can be determined that the estimated gaze range 31 was too wide. Actions indicating that the worker 30 has not recognized the contact position 50 include, for example, ignoring the contact position 50 and moving the robot 21 toward a place unrelated to the contact position 50 during teaching work, or attempting to approach the contact position 50 during collaborative work. In this case, the recognition determination unit 13 can re-estimate a gaze range 31 with a shorter viewing distance 33 or a narrower viewing angle 32.
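 A minimal sketch of this re-estimation feedback, assuming the gaze range 31 is held as a simple dictionary and that the halving factors are illustrative, might look as follows.

def refine_gaze_range(gaze_range, expression=None, ignored_contact=False):
    """gaze_range: {"viewing_distance": meters, "viewing_angle": degrees}.
    Frowning/squinting -> the estimated viewing distance was too long;
    ignoring a supposedly visible contact position -> the estimated
    range was too wide."""
    refined = dict(gaze_range)
    if expression in ("frowning", "squinting"):
        refined["viewing_distance"] *= 0.5    # shorten the distance
    if ignored_contact:
        refined["viewing_angle"] *= 0.5       # narrow the angle,
        refined["viewing_distance"] *= 0.5    # or shorten the distance
    return refined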
 Furthermore, when the recognition determination unit 13 determines that the worker 30 cannot recognize the contact position 50, the presentation mode selection unit 14 may select a new presentation mode in addition to the already selected presentation mode, and the presentation unit 15 may present the contact position 50 according to the newly selected presentation mode in addition to the presentation according to the already selected presentation mode. This is described with reference to FIGS. 13A and 13B.
 FIGS. 13A and 13B are diagrams showing examples of presentation according to a newly selected presentation mode. FIGS. 13A and 13B show a table on which the work object 40 is placed, and an image is projected onto the table by the projector 22. For example, an image is projected onto the contact position 50 on the work object 40, and an additional image is also projected onto the table around the work object 40. The image projected onto the contact position 50 on the work object 40 is an example of presentation according to the already selected presentation mode, and the image projected around the work object 40 on the table is an example of presentation according to the newly selected presentation mode.
 For example, if the worker 30 cannot recognize the contact position 50, an additional image 51 that indirectly indicates the unrecognized contact position 50 may be displayed, as shown in FIG. 13A. The additional image 51 indirectly indicates, using an arrow or text, the location of the contact position 50 that the worker 30 cannot recognize. By showing the additional image 51 to the worker 30, the worker 30 can be made aware that the unrecognized contact position 50 is on the side opposite to the side visible to the worker 30, and can be prompted to move over and check.
 Alternatively, if the worker 30 cannot recognize the contact position 50, an additional image 52 that directly displays the unrecognized contact position 50 may be displayed, as shown in FIG. 13B. The additional image 52 is an image of the work object 40 captured from the side opposite to the worker 30; by looking at the additional image 52, the worker 30 can recognize that the unrecognized contact position 50 is on the side opposite to the side the worker 30 is looking at.
 Note that the presentation according to the newly selected presentation mode may be audio; for example, a voice may announce that the contact position 50 that the worker 30 cannot recognize is on the side opposite to the side the worker 30 is looking at.
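 Putting the additive selection together, a sketch such as the following (the action names are hypothetical) appends new presentation actions, such as the additional images 51 and 52 or an audio announcement, to the already selected mode rather than replacing it.

def presentation_actions(already_selected, worker_can_recognize,
                         hidden_side=True):
    """Return the list of presentation actions to perform.  When the
    worker cannot recognize the contact position, additional modes
    are appended to, not substituted for, the existing one."""
    actions = list(already_selected)        # keep the current mode
    if not worker_can_recognize:
        if hidden_side:
            actions.append("project_indirect_arrow")       # additional image 51
            actions.append("project_opposite_side_view")   # additional image 52
        actions.append("voice_announce_location")          # audio variant
    return actions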
 As described above, when it is determined that the worker 30 cannot recognize the contact position 50, a presentation mode that makes it easier for the worker 30 to recognize the contact position 50 is selected, and the contact position 50 is presented according to the selected presentation mode. This makes it easier for the worker 30 to recognize the contact position 50 of the robot 21.
 (Variation 1)
 Next, a presentation system according to Variation 1 of the embodiment will be described.
 FIG. 14 is a block diagram showing an example of a presentation system 10a according to Variation 1 of the embodiment.
 The presentation system 10a differs from the presentation system 10 according to the embodiment in that it includes a concentration level estimation unit 16. In other respects, it is basically the same as the presentation system 10 according to the embodiment, so the following description focuses on the differences.
 The concentration level estimation unit 16 is realized by, for example, a processor that executes a program stored in a memory provided in the presentation system 10a, and is specifically realized by the computer 25 shown in FIGS. 2 and 7 or the computer 27 shown in FIG. 9. The concentration level estimation unit 16 may be a component including a processor, or a semiconductor chip including a processor. The concentration level estimation unit 16 may also be a computer including a processor and a memory, or a device including a processor and a memory. The concentration level estimation unit 16 may also be a device including a computer.
 The state acquisition unit 12 acquires a moving image showing the face of the worker 30 (specifically, acquires the state of the worker 30 from that moving image), and the concentration level estimation unit 16 estimates, based on the acquired moving image, the degree of concentration of the worker 30 on the work on the work object 40. For example, the concentration level estimation unit 16 extracts at least one of facial expression information, pulse wave information, and gaze movement information from the moving image, and estimates the concentration level of the worker 30 using at least one of these pieces of information.
 For example, the recognition determination unit 13 determines whether the worker 30 can recognize the contact position 50 based on the acquired state of the worker 30, the contact position information, and the concentration level of the worker 30. For example, the recognition determination unit 13 estimates the gaze range 31 of the worker 30 based on the state of the worker 30 and the concentration level of the worker 30, and determines whether the contact position 50 indicated by the contact position information is included in the gaze range 31, thereby determining whether the worker 30 can recognize the contact position 50.
 For example, if the concentration level is estimated to be high, the worker 30 is concentrating on the task at hand without distraction and is considered able to attend to a wide area. In this case, the recognition determination unit 13 can estimate a gaze range 31 with a long viewing distance 33 and a wide viewing angle 32. On the other hand, if the concentration level is estimated to be low, the worker 30 is considered to have difficulty attending to a wide area. In this case, the recognition determination unit 13 can estimate a gaze range 31 with a short viewing distance 33 and a narrow viewing angle 32. In this way, the worker 30 finds it easier to recognize the contact position 50 when the concentration level is high and harder when it is low, so the estimated concentration level can be used to determine whether the worker 30 can recognize the contact position 50.
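 As an illustration of how the concentration level could scale the gaze range 31, and how the containment test of the contact position 50 could then be made, consider the following sketch (the scaling formula, the base values, and the assumption that gaze_dir is a unit vector are all illustrative, not taken from the disclosure).

import math

def gaze_range_from_concentration(concentration,           # 0.0 .. 1.0
                                  base_distance=1.0, base_angle=30.0):
    """High concentration -> long viewing distance and wide viewing
    angle; low concentration -> short distance and narrow angle."""
    scale = 0.5 + concentration
    return {"viewing_distance": base_distance * scale,
            "viewing_angle": base_angle * scale}

def contact_in_gaze_range(contact_pos, eye_pos, gaze_dir, gaze_range):
    """True if the contact position lies within the estimated viewing
    cone; gaze_dir is assumed to be a unit vector."""
    v = [c - e for c, e in zip(contact_pos, eye_pos)]
    dist = math.sqrt(sum(x * x for x in v))
    if dist == 0 or dist > gaze_range["viewing_distance"]:
        return False
    cos_a = sum(a * b / dist for a, b in zip(v, gaze_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return angle <= gaze_range["viewing_angle"] / 2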
 Furthermore, for example, the presentation mode selection unit 14 may select the presentation mode further based on the concentration level of the worker 30. If the concentration level is estimated to be low, the worker 30 is considered to have difficulty attending to a wide area. In this case, the presentation mode selection unit 14 may select a presentation mode that makes it easier for the worker 30 to recognize the contact position 50 (for example, a display mode whose color or size is easy to see), and the presentation unit 15 may present the contact position 50 according to the selected presentation mode.
 Note that when the concentration level of the worker 30 is high, the fatigue level of the worker 30 can be estimated to be low, and when the concentration level is low, the fatigue level can be estimated to be high. In other words, there is a correlation between the concentration level and the fatigue level of the worker 30. Therefore, the concentration level estimation unit 16 may estimate the fatigue level of the worker 30, the recognition determination unit 13 may determine whether the worker 30 can recognize the contact position 50 based on the acquired state of the worker 30, the contact position information, and the fatigue level of the worker, and the presentation mode selection unit 14 may select the presentation mode based on the fatigue level of the worker.
 (Variation 2)
 Next, a presentation system according to Variation 2 of the embodiment will be described.
 FIG. 15 is a block diagram showing an example of a presentation system 10b according to Variation 2 of the embodiment.
 The presentation system 10b differs from the presentation system 10 according to the embodiment in that it includes a motion determination unit 17. In other respects, it is basically the same as the presentation system 10 according to the embodiment, so the following description focuses on the differences.
 The motion determination unit 17 is realized by, for example, a processor that executes a program stored in a memory provided in the presentation system 10b, and is specifically realized by the computer 25 shown in FIGS. 2 and 7 or the computer 27 shown in FIG. 9. The motion determination unit 17 may be a component including a processor, or a semiconductor chip including a processor. The motion determination unit 17 may also be a computer including a processor and a memory, or a device including a processor and a memory. The motion determination unit 17 may also be a device including a computer.
 The state acquisition unit 12 acquires a moving image showing the worker 30 (specifically, acquires the state of the worker 30 from that moving image), the motion determination unit 17 determines, based on the moving image, the motion of the worker 30 with respect to the work object 40, and the recognition determination unit 13 determines, based on the motion determination result of the motion determination unit 17, whether the worker 30 has recognized the contact position 50 through the presentation of the contact position 50 performed by the presentation unit 15. For example, the motion determination unit 17 may determine, as the motion of the worker 30 with respect to the work object 40, whether the worker 30 has touched the contact position 50, and the recognition determination unit 13 may determine, based on whether the worker 30 has touched the contact position 50, whether the worker 30 has recognized the contact position 50 through the presentation performed by the presentation unit 15.
 For example, when presentation has been performed according to a presentation mode selected based on a determination that the worker 30 can recognize the contact position 50, and the motion determination unit 17 determines that the worker 30 has acted as if unable to recognize the contact position 50, the recognition determination unit 13 determines that the worker 30 has not recognized the contact position 50 through the presentation performed by the presentation unit 15. In this case, the recognition determination unit 13 can re-estimate a gaze range 31 with a shorter viewing distance 33 or a narrower viewing angle 32. As described above, actions indicating that the worker 30 has not recognized the contact position 50 include, for example, ignoring the contact position 50 and moving the robot 21 toward a place unrelated to the contact position 50 during teaching work, or attempting to approach the contact position 50 during collaborative work. Such actions may also include moving around as if searching for the contact position 50, or appearing confused at not knowing where the contact position 50 is.
 Furthermore, for example, the presentation mode selection unit 14 may select the presentation mode further based on the motion determination result of the motion determination unit 17. For example, when the motion determination unit 17 determines that the worker 30 has acted as if unable to recognize the contact position 50, the presentation mode selection unit 14 may select a presentation mode that makes it easier for the worker 30 to recognize the contact position 50, and the presentation unit 15 may present the contact position 50 according to the selected presentation mode. Note that, in teaching work, the worker 30 may have deliberately ignored the presented contact position 50 because it was considered unnecessary information; the presentation mode selection unit 14 may therefore select a presentation mode that reduces the number of displayed items or removes the display.
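 One possible sketch of the motion-based judgment (the distances, the threshold, and the input names are assumptions) is the following; it returns whether the worker's motion is consistent with having recognized the contact position 50.

def recognized_by_motion(worker_hand_pos, robot_target_pos, contact_pos,
                         threshold=0.05, teaching=True):
    """Motion-based recognition judgment (illustrative heuristic)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    if teaching:
        # Teaching work: guiding the robot 21 toward the presented
        # contact position 50 is consistent with recognition; moving
        # it to an unrelated place is not.
        return dist(robot_target_pos, contact_pos) <= threshold
    # Collaborative work: approaching the contact position 50
    # suggests the worker has not recognized it.
    return dist(worker_hand_pos, contact_pos) > threshold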
 (Variation 3)
 Next, a presentation system according to Variation 3 of the embodiment will be described.
 FIG. 16 is a block diagram showing an example of a presentation system 10c according to Variation 3 of the embodiment.
 The presentation system 10c differs from the presentation system 10 according to the embodiment in that it includes a proficiency level acquisition unit 18. In other respects, it is basically the same as the presentation system 10 according to the embodiment, so the following description focuses on the differences.
 The proficiency level acquisition unit 18 is realized by, for example, a processor that executes a program stored in a memory provided in the presentation system 10c, and is specifically realized by the computer 25 shown in FIGS. 2 and 7 or the computer 27 shown in FIG. 9. The proficiency level acquisition unit 18 may be a component including a processor, or a semiconductor chip including a processor. The proficiency level acquisition unit 18 may also be a computer including a processor and a memory, or a device including a processor and a memory. The proficiency level acquisition unit 18 may also be a device including a computer.
 The proficiency level acquisition unit 18 acquires the proficiency level of the worker 30 in the work on the work object 40, and the presentation mode selection unit 14 selects the presentation mode further based on the proficiency level of the worker 30.
 For example, the proficiency level for the specific task that the worker 30 is about to perform on the work object 40 (for example, the degree of familiarity with the task) is input via an input interface (keyboard, mouse, touch panel, or the like), whereby the proficiency level acquisition unit 18 acquires the proficiency level of the worker 30. The proficiency level may be the time spent engaged in the specific task, or may be a proficiency level divided into multiple stages.
 When the proficiency level of the worker 30 is high, in collaborative work the worker 30 is good at predicting the movements of the robot 21 and is considered to have capacity to spare. For this reason, the presentation mode selection unit 14 may select, in addition to the presentation mode for presenting the contact position 50, a presentation mode for presenting numerical information such as the speed or pressure at the time of contact.
 FIG. 17 is a diagram showing an example of presentation according to a presentation mode selected based on the proficiency level of the worker 30.
 When the proficiency level of the worker 30 is high, numerical information 53 such as that shown in FIG. 17 may be presented.
 Note that when the work order of a plurality of contact positions 50 has been determined and the proficiency level of the worker 30 is high, a presentation indicating the work order may be performed; for example, the work order may be indicated using colors or marks.
 Furthermore, when the proficiency level of the worker 30 is high, in teaching work the worker 30 is considered to have abundant knowledge of the task to be taught and of contact (grasping). For this reason, the presentation mode selection unit 14 may select a presentation mode that presents many contact positions 50 so that, based on that knowledge, the worker 30 can select the optimal position from among them.
 On the other hand, when the proficiency level of the worker 30 is low, in collaborative work the worker 30 is poor at predicting the movements of the robot 21 and is considered prone to anxiety. For this reason, the presentation mode selection unit 14 selects a presentation mode that makes it easier for the worker 30 to recognize the contact position 50; for example, a presentation mode combining display and sound output may be selected. Also, because the worker 30 is not accustomed to the work, presenting a large amount of information may cause confusion. For this reason, the presentation of numerical information may be refrained from, or, when there are a plurality of contact positions 50, at least one with a high priority or an early place in the work order may be presented.
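 The proficiency-dependent selection described above might be sketched as follows (the two-level split and the mode flags are illustrative assumptions, not a prescribed configuration).

def select_modes_by_proficiency(proficiency_level, contact_positions):
    """Skilled workers receive richer information (numerical values,
    work order, many candidate positions); novices receive a single,
    highly salient presentation combined with sound."""
    if proficiency_level >= 2:                   # skilled
        return {"positions": contact_positions,  # show all candidates
                "numeric_info": True,            # speed / pressure values
                "work_order_marks": True}        # colors or marks
    return {"positions": contact_positions[:1],  # one high-priority item
            "numeric_info": False,               # avoid overload
            "sound": True,
            "high_visibility": True}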
 (Other embodiments)
 As described above, the embodiment has been described as an example of the technology according to the present disclosure. However, the technology according to the present disclosure is not limited to this and can also be applied to embodiments in which changes, substitutions, additions, omissions, and the like are made as appropriate. For example, the following modifications are also included in an embodiment of the present disclosure.
 For example, the present disclosure can be realized not only as a presentation system, but also as a presentation method including the steps (processes) performed by the components constituting the presentation system.
 As shown in FIG. 3, the presentation method includes steps S11, S12, S13, S14, and S15. Step S11 is a contact position acquisition step of acquiring contact position information indicating the contact position, with a work object, of a robot performing work on the work object. Step S12 is a state acquisition step of acquiring the state of a worker performing work on the work object. Step S13 is a recognition determination step of determining whether the worker can recognize the contact position based on the acquired state of the worker and the contact position information. Step S14 is a presentation mode selection step of selecting a presentation mode for presenting the contact position based on the contact position information and the determination result in the recognition determination step. Step S15 is a presentation step of presenting the contact position according to the selected presentation mode.
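 A minimal sketch of steps S11 to S15 as a single pipeline (the helper functions are placeholders for the processing described in the embodiment, not an implementation prescribed by it) is shown below.

def judge_recognition(worker_state, contact_info):
    # Placeholder for step S13, e.g. the gaze-range containment test.
    return contact_info in worker_state.get("visible_positions", [])

def select_presentation_mode(contact_info, recognizable):
    # Placeholder for step S14.
    return "current" if recognizable else "easier_to_recognize"

def present_contact_position(get_contact_info, get_worker_state, present):
    contact_info = get_contact_info()       # S11: contact position acquisition
    worker_state = get_worker_state()       # S12: state acquisition
    recognizable = judge_recognition(worker_state, contact_info)  # S13
    mode = select_presentation_mode(contact_info, recognizable)   # S14
    present(contact_info, mode)             # S15: presentation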
 For example, the steps in the presentation method may be executed by a computer (computer system). The present disclosure can then be realized as a program for causing a computer to execute the steps included in the presentation method.
 Furthermore, the present disclosure can be realized as a non-transitory computer-readable recording medium, such as a CD-ROM, on which the program is recorded.
 For example, when the present disclosure is realized as a program (software), each step is executed by running the program using hardware resources such as the CPU (Central Processing Unit), memory, and input/output circuits of a computer. In other words, each step is executed by the CPU acquiring data from the memory or the input/output circuits, performing calculations, and outputting the calculation results to the memory or the input/output circuits.
 Each component included in the presentation system of the above embodiment may be realized as a dedicated or general-purpose circuit.
 Each component included in the presentation system of the above embodiment may also be realized as an LSI (Large Scale Integration) circuit, which is an integrated circuit (IC).
 The integrated circuit is not limited to an LSI and may be realized by a dedicated circuit or a general-purpose processor. A programmable FPGA (Field Programmable Gate Array), or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
 Furthermore, if integrated circuit technology that replaces LSI emerges through advances in semiconductor technology or other derived technology, the components included in the presentation system may naturally be integrated using that technology.
 In addition, the present disclosure also includes forms obtained by applying various modifications conceivable to a person skilled in the art to the embodiments, and forms realized by arbitrarily combining the components and functions of the embodiments without departing from the spirit of the present disclosure.
 (Additional note)
 The above description of the embodiments discloses the following techniques.
 (Technology 1) A presentation system comprising: a contact position acquisition unit that acquires contact position information indicating a contact position between a robot performing work on a work object and the work object; a state acquisition unit that acquires the state of a worker performing work on the work object; a recognition determination unit that determines whether the worker can recognize the contact position based on the acquired state of the worker and the contact position information; a presentation mode selection unit that selects a presentation mode for presenting the contact position based on the contact position information and the determination result of the recognition determination unit; and a presentation unit that presents the contact position according to the selected presentation mode.
 When it is determined that the worker cannot recognize the contact position, a presentation mode that makes it easier for the worker to recognize the contact position is selected, and the contact position is presented according to the selected presentation mode. This makes it easier for the worker to recognize the contact position of the robot.
 (Technology 2) The presentation system according to Technology 1, wherein the state acquisition unit acquires gaze information of the worker as the state of the worker, and the recognition determination unit estimates a gaze range of the worker based on the gaze information and determines whether the contact position is included in the gaze range, thereby determining whether the worker can recognize the contact position.
 If the contact position is included in the gaze range, it can be determined that the worker can recognize the contact position; if not, it can be determined that the worker cannot recognize the contact position.
 (Technology 3) The presentation system according to Technology 2, wherein the recognition determination unit estimates the gaze range based on angle information of the worker's head, used as the gaze information.
 From the angle information of the worker's head, the direction of the worker's gaze can be estimated, and the gaze range can be estimated to lie in that direction.
 (Technology 4) The presentation system according to Technology 2 or 3, wherein the recognition determination unit estimates the gaze range based on information on the electrooculography of the worker, used as the gaze information.
 From the worker's electrooculography, the worker's viewing angle and viewing distance can be estimated, and hence the worker's gaze range can be estimated.
 (Technology 5) The presentation system according to any one of Technologies 2 to 4, wherein the recognition determination unit estimates the gaze range based on information on the movement of the worker's face, used as the gaze information.
 From the movement of the worker's face, the worker's viewing angle and viewing distance can be estimated, and hence the worker's gaze range can be estimated.
 (Technology 6) The presentation system according to any one of Technologies 2 to 5, wherein the recognition determination unit determines whether the estimation result of the gaze range was correct based on information on the facial expression or motion of the worker, used as the gaze information.
 If the estimation result of the worker's gaze range was incorrect, the gaze range can be re-estimated.
 (Technology 7) The presentation system according to Technology 1, wherein the state acquisition unit acquires, as the state of the worker, a camera image captured by a camera worn by the worker, and the recognition determination unit determines whether the contact position is included in the camera image, thereby determining whether the worker can recognize the contact position.
 A camera image captured by a camera worn near the worker's eyes closely approximates the worker's field of view. Therefore, if the contact position is included in the camera image, it can be determined that the worker can recognize the contact position; if not, it can be determined that the worker cannot recognize the contact position.
 (Technology 8) The presentation system according to any one of Technologies 1 to 7, wherein the state acquisition unit acquires a moving image showing the face of the worker, and the recognition determination unit further determines whether the worker can recognize the contact position based on the facial expression of the worker shown in the moving image.
 From the worker's facial expression, it can be determined whether the worker recognizes the contact position; when the worker's expression indicates that the contact position is not recognized, a presentation mode that makes it easier for the worker to recognize the contact position is selected, and the contact position is presented according to the selected presentation mode. This makes it easier for the worker to recognize the contact position of the robot.
 (Technology 9) The presentation system according to any one of Technologies 1 to 8, wherein, when the recognition determination unit determines that the worker cannot recognize the contact position, the presentation mode selection unit selects a new presentation mode in addition to the already selected presentation mode, and the presentation unit presents the contact position according to the newly selected presentation mode in addition to the presentation according to the already selected presentation mode.
 By selecting a new presentation mode in addition to the already selected one and additionally presenting the contact position according to the new mode, the worker can recognize the contact position of the robot even more easily.
 (Technology 10) The presentation system according to any one of Technologies 1 to 9, wherein the state acquisition unit acquires a moving image showing the face of the worker, and the presentation system further comprises a concentration level estimation unit that estimates, based on the moving image, the degree of concentration of the worker on the work on the work object.
 By estimating the worker's concentration level, the estimated concentration level can be used, for example, in presenting the contact position.
 (Technology 11) The presentation system according to Technology 10, wherein the recognition determination unit determines whether the worker can recognize the contact position based on the acquired state of the worker, the contact position information, and the concentration level.
 When the worker's concentration level is high, the worker can recognize the contact position more easily, and when it is low, less easily; the estimated concentration level can therefore be used to determine whether the worker can recognize the contact position.
 (Technology 12) The presentation system according to Technology 10 or 11, wherein the presentation mode selection unit further selects the presentation mode based on the concentration level.
 When the worker's concentration level is low, the worker has difficulty recognizing the contact position, so a presentation mode that makes it easier for the worker to recognize the contact position is selected, and the contact position is presented according to the selected presentation mode. This makes it even easier for the worker to recognize the contact position of the robot.
 (技術15)前記提示態様選択部は、さらに、前記動作の判定結果に基づいて、前記提示態様を選択する、技術13または14に記載の提示システム。 (Technology 15) The presentation system described in Technology 13 or 14, wherein the presentation mode selection unit further selects the presentation mode based on the result of the judgment of the action.
 作業者が接触位置を認識していないような動作をしている場合には、作業者が接触位置を認識しやすくなるような提示態様が選択されて、選択された提示態様に応じた接触位置の提示が行われる。これにより、ロボットの接触位置を作業者がさらに認識しやすくなる。 If the worker is performing an action that does not allow them to recognize the contact position, a presentation mode that allows the worker to easily recognize the contact position is selected, and the contact position is presented according to the selected presentation mode. This makes it even easier for the worker to recognize the robot's contact position.
 (技術16)前記提示システムは、さらに、前記作業者の前記作業対象物に対する作業の熟練度を取得する熟練度取得部を備え、前記提示態様選択部は、さらに、前記熟練度に基づいて、前記提示態様を選択する、技術1~15のいずれかに記載の提示システム。 (Technology 16) The presentation system according to any one of Technologies 1 to 15, further comprising a proficiency acquisition unit that acquires the worker's proficiency in the work performed on the work object, and the presentation mode selection unit further selects the presentation mode based on the proficiency.
 熟練度が高い作業者は、熟練度が低い作業者よりも、より多くの情報が提示されても処理できる場合が多いため、例えば、作業者の熟練度が高い場合には、作業者に多くの情報が提示される提示態様が選択され、作業者の熟練度が低い場合には、作業者に少ない情報が提示される提示態様が選択される。これにより、作業者の熟練度に応じた提示を行うことができる。 Since highly skilled workers are often able to process more information than less skilled workers, for example, if the worker is highly skilled, a presentation mode in which more information is presented to the worker is selected, and if the worker is less skilled, a presentation mode in which less information is presented to the worker is selected. This makes it possible to present information according to the worker's level of skill.
 (技術17)前記提示態様選択部は、前記接触位置情報および前記認識判定部の判定結果に基づいて、前記提示態様として、前記接触位置を表示するための表示態様を選択し、前記提示部は、選択された前記表示態様に応じた表示を前記作業対象物上に行う、技術1~16のいずれかに記載の提示システム。 (Technology 17) A presentation system according to any one of techniques 1 to 16, in which the presentation mode selection unit selects a display mode for displaying the contact position as the presentation mode based on the contact position information and the judgment result of the recognition judgment unit, and the presentation unit performs a display on the work object according to the selected display mode.
 作業者が接触位置を認識できないと判定された場合には、作業者が接触位置を認識しやすくなるような表示態様が選択されて、選択された表示態様に応じた表示位置の提示が行われる。これにより、表示されたロボットの接触位置を作業者が認識しやすくなる。 If it is determined that the worker cannot recognize the contact position, a display mode that makes it easier for the worker to recognize the contact position is selected, and the display position is presented according to the selected display mode. This makes it easier for the worker to recognize the contact position of the displayed robot.
 (技術18)前記提示態様選択部は、前記作業者が前記接触位置を認識できないと判定された場合、前記作業者が前記接触位置を認識できると判定された場合に選択する前記表示態様と比べて、表示の色合い、表示の色の強度、表示の点滅頻度および表示の大きさの少なくとも1つが異なる表示態様を選択する、技術17に記載の提示システム。 (Technology 18) The presentation system described in Technology 17, in which the presentation mode selection unit, when it is determined that the worker cannot recognize the contact position, selects a display mode that differs in at least one of the display hue, the display color intensity, the display blinking frequency, and the display size compared to the display mode that would be selected when it is determined that the worker can recognize the contact position.
 表示の色合い、表示の色の強度、表示の点滅頻度または表示の大きさについて、作業者が見やすい表示態様が選択されることで、表示されたロボットの接触位置を作業者が認識しやすくなる。 By selecting a display mode that is easy for workers to see, such as the color tone, color intensity, blinking frequency, or size of the display, the worker can easily recognize the displayed contact position of the robot.
 (技術19)作業対象物に対して作業を行うロボットの前記作業対象物との接触位置を示す接触位置情報を取得する接触位置取得ステップと、前記作業対象物に対して作業を行う作業者の状態を取得する状態取得ステップと、取得された前記作業者の状態と、前記接触位置情報とに基づいて、前記作業者が前記接触位置を認識できるか否かを判定する認識判定ステップと、前記接触位置情報および前記認識判定ステップでの判定結果に基づいて、前記接触位置を提示するための提示態様を選択する提示態様選択ステップと、選択された前記提示態様に応じた前記接触位置の提示を行う提示ステップと、を含む、提示方法。 (Technology 19) A presentation method including a contact position acquisition step of acquiring contact position information indicating a contact position between a robot performing work on a work object and the work object, a status acquisition step of acquiring a status of a worker performing work on the work object, a recognition determination step of determining whether or not the worker can recognize the contact position based on the acquired status of the worker and the contact position information, a presentation mode selection step of selecting a presentation mode for presenting the contact position based on the contact position information and the determination result in the recognition determination step, and a presentation step of presenting the contact position according to the selected presentation mode.
 これによれば、ロボットの接触位置を作業者が認識しやすくなる提示方法を提供できる。 This provides a presentation method that makes it easier for workers to recognize the robot's contact position.
 (技術20)技術19に記載の提示方法をコンピュータに実行させるためのプログラム。 (Technology 20) A program for causing a computer to execute the presentation method described in Technology 19.
 これによれば、ロボットの接触位置を作業者が認識しやすくなるプログラムを提供できる。 This makes it possible to provide a program that makes it easier for workers to recognize the robot's contact position.
 本開示の提示システム、提示方法およびプログラムは、作業対象物に対して作業を行うロボットを制御するシステムなどに適用できる。本開示の提示システム、提示方法およびプログラムは、ロボットの接触位置を作業者が認識しやすくなり、作業者の安全性を向上させることができる。このように、本開示の提示システム、提示方法およびプログラムは、産業上有用である。 The presentation system, presentation method, and program disclosed herein can be applied to systems that control robots that perform work on work objects. The presentation system, presentation method, and program disclosed herein can make it easier for workers to recognize the contact position of the robot, improving worker safety. In this way, the presentation system, presentation method, and program disclosed herein are industrially useful.
REFERENCE SIGNS LIST
10, 10a, 10b, 10c presentation system
11 contact position acquisition unit
12 state acquisition unit
13 recognition determination unit
14 presentation mode selection unit
15 presentation unit
16 concentration level estimation unit
17 motion determination unit
18 proficiency level acquisition unit
21 robot
22 projector
23 camera
24, 25, 27 computer
26 device
28 gaze camera
29 AR glasses
30 worker
31 gaze range
32 viewing angle
33 viewing distance
40 work object
50 contact position
51, 52 additional image
53 numerical information

Claims (20)

  1.  作業対象物に対して作業を行うロボットの前記作業対象物との接触位置を示す接触位置情報を取得する接触位置取得部と、
     前記作業対象物に対して作業を行う作業者の状態を取得する状態取得部と、
     取得された前記作業者の状態と、前記接触位置情報とに基づいて、前記作業者が前記接触位置を認識できるか否かを判定する認識判定部と、
     前記接触位置情報および前記認識判定部の判定結果に基づいて、前記接触位置を提示するための提示態様を選択する提示態様選択部と、
     選択された前記提示態様に応じた前記接触位置の提示を行う提示部と、を備える、
     提示システム。
    a contact position acquisition unit that acquires contact position information indicating a contact position between a robot performing a task on a work object and the work object;
    a status acquisition unit that acquires a status of a worker performing work on the work object;
    a recognition determination unit that determines whether or not the worker can recognize the contact position based on the acquired state of the worker and the contact position information;
    a presentation mode selection unit that selects a presentation mode for presenting the contact position based on the contact position information and a determination result of the recognition determination unit;
    a presentation unit that presents the contact position according to the selected presentation mode,
    Presentation system.
  2.  前記状態取得部は、前記作業者の状態として、前記作業者の視線情報を取得し、
     前記認識判定部は、前記視線情報に基づいて前記作業者の視線範囲を推定し、前記接触位置が前記視線範囲に含まれるか否かを判定することで、前記作業者が前記接触位置を認識できるか否かを判定する、
     請求項1に記載の提示システム。
    the state acquisition unit acquires line-of-sight information of the worker as the state of the worker,
    the recognition determination unit estimates a line-of-sight range of the worker based on the line-of-sight information, and determines whether the contact position is included in the line-of-sight range, thereby determining whether the worker can recognize the contact position.
    The presentation system of claim 1 .
  3.  前記認識判定部は、前記作業者の頭部の角度情報に基づいて、前記視線範囲を推定する、
     請求項2に記載の提示システム。
    The recognition determination unit estimates the line of sight range based on angle information of the worker's head.
    The presentation system of claim 2 .
  4.  前記認識判定部は、前記作業者の眼電に関する情報に基づいて、前記視線範囲を推定する、
     請求項2に記載の提示システム。
    The recognition determination unit estimates the line of sight range based on information regarding the operator's electrooculography.
    The presentation system of claim 2 .
  5.  前記認識判定部は、前記作業者の顔の動きに関する情報に基づいて、前記視線範囲を推定する、
     請求項2に記載の提示システム。
    The recognition determination unit estimates the line of sight range based on information regarding a movement of the face of the worker.
    The presentation system of claim 2 .
  6.  前記認識判定部は、前記作業者の表情または動作に関する情報に基づいて、前記視線範囲の推定結果が正しかったか否かを判定する、
     請求項2に記載の提示システム。
    the recognition determination unit determines whether or not the estimation result of the line of sight range is correct based on information regarding a facial expression or a movement of the worker.
    The presentation system of claim 2 .
  7.  前記状態取得部は、前記作業者の状態として、前記作業者が装着するカメラにより撮影されたカメラ画像を取得し、
     前記認識判定部は、前記接触位置が前記カメラ画像に含まれるか否かを判定することで、前記作業者が前記接触位置を認識できるか否かを判定する、
     請求項1に記載の提示システム。
    The status acquisition unit acquires, as the status of the worker, a camera image captured by a camera worn by the worker,
    the recognition determination unit determines whether the contact position is included in the camera image, thereby determining whether the worker can recognize the contact position;
    The presentation system of claim 1 .
  8.  前記状態取得部は、前記作業者の顔が映る動画像を取得し、
     前記認識判定部は、さらに、前記動画像に映る前記作業者の表情に基づいて、前記作業者が前記接触位置を認識できるか否かを判定する、
     請求項1~7のいずれか1項に記載の提示システム。
    The state acquisition unit acquires a moving image showing a face of the worker,
    The recognition determination unit further determines whether or not the worker can recognize the contact position based on a facial expression of the worker shown in the moving image.
    A presentation system according to any one of claims 1 to 7.
  9.  When the recognition determination unit determines that the worker cannot recognize the contact position, the presentation mode selection unit selects a new presentation mode in addition to the presentation mode already selected, and
      the presentation unit presents the contact position according to the newly selected presentation mode in addition to the presentation according to the already selected presentation mode.
      The presentation system according to any one of claims 1 to 7.
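(Illustrative sketch.) Claim 9 describes escalation: when the worker is judged not to recognize the contact position, a new mode is added on top of the modes already in use. The mode ladder below is invented for illustration:

```python
class EscalatingModeSelector:
    """Hypothetical selector that adds modes, never replaces them (claim 9)."""

    MODE_LADDER = ["projected_marker", "blinking_marker", "audio_cue"]

    def __init__(self):
        self.active_modes = [self.MODE_LADDER[0]]  # start with one mode

    def select(self, recognized: bool):
        # Keep every already-selected mode; add the next one on failure.
        if not recognized and len(self.active_modes) < len(self.MODE_LADDER):
            self.active_modes.append(self.MODE_LADDER[len(self.active_modes)])
        return list(self.active_modes)
```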
  10.  The state acquisition unit acquires a moving image showing the face of the worker, and
      the presentation system further includes a concentration level estimation unit that estimates a concentration level of the worker with respect to the work on the work object based on the moving image.
      The presentation system according to any one of claims 1 to 7.
  11.  The recognition determination unit determines whether or not the worker can recognize the contact position based on the acquired state of the worker, the contact position information, and the concentration level.
      The presentation system of claim 10.
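(Illustrative sketch.) Claim 11 combines the worker's state, the contact position, and the estimated concentration level. One simple combination, with an assumed threshold, is to require both an in-view contact position and sufficient concentration:

```python
def can_recognize(in_gaze_range: bool, concentration: float,
                  threshold: float = 0.4) -> bool:
    """A distracted worker may miss even an in-view marker; 0.4 is an
    assumed tuning value on a 0-to-1 concentration scale."""
    return in_gaze_range and concentration >= threshold
```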
  12.  The presentation mode selection unit further selects the presentation mode based on the concentration level.
      The presentation system of claim 10.
  13.  The state acquisition unit acquires a moving image in which the worker is shown,
      the presentation system further includes a motion determination unit that determines a motion of the worker with respect to the work object based on the moving image, and
      the recognition determination unit further determines, based on a result of the determination of the motion, whether or not the worker has recognized the contact position through the presentation of the contact position by the presentation unit.
      The presentation system according to any one of claims 1 to 7.
  14.  The motion determination unit determines, as the motion of the worker with respect to the work object, whether or not the worker has contacted the contact position, and
      the recognition determination unit determines whether or not the worker has recognized the contact position through the presentation of the contact position by the presentation unit, based on whether or not the worker has contacted the contact position.
      The presentation system of claim 13.
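(Illustrative sketch.) Claim 14 judges recognition from whether the worker actually contacted the presented position. With a tracked hand position, contact can be approximated by a proximity test; the 2 cm tolerance is an assumption:

```python
import math

def has_contacted(hand_pos, contact_pos, tolerance_m=0.02):
    """True if the tracked hand came within tolerance_m of the contact position."""
    return math.dist(hand_pos, contact_pos) <= tolerance_m
```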
  15.  The presentation mode selection unit further selects the presentation mode based on the result of the determination of the motion.
      The presentation system of claim 13.
  16.  The presentation system further includes a proficiency level acquisition unit that acquires a proficiency level of the worker in the work on the work object, and
      the presentation mode selection unit further selects the presentation mode based on the proficiency level.
      The presentation system according to any one of claims 1 to 7.
  17.  The presentation mode selection unit selects, as the presentation mode, a display mode for displaying the contact position based on the contact position information and the determination result of the recognition determination unit, and
      the presentation unit performs a display on the work object according to the selected display mode.
      The presentation system according to any one of claims 1 to 7.
  18.  When it is determined that the worker cannot recognize the contact position, the presentation mode selection unit selects a display mode that differs, in at least one of the hue of the display, the color intensity of the display, the blinking frequency of the display, and the size of the display, from the display mode selected when it is determined that the worker can recognize the contact position.
      The presentation system of claim 17.
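(Illustrative sketch.) Claim 18 varies at least one of hue, color intensity, blinking frequency, and display size between the "recognized" and "not recognized" cases. A pair of hypothetical modes differing in all four attributes:

```python
from dataclasses import dataclass

@dataclass
class DisplayMode:
    hue_deg: float    # hue of the projected marker, in degrees
    intensity: float  # color intensity, 0..1
    blink_hz: float   # blinking frequency (0 = steady)
    radius_px: int    # marker size in pixels

RECOGNIZED     = DisplayMode(hue_deg=120.0, intensity=0.5, blink_hz=0.0, radius_px=20)
NOT_RECOGNIZED = DisplayMode(hue_deg=0.0,   intensity=1.0, blink_hz=2.0, radius_px=40)

def pick_display_mode(recognized: bool) -> DisplayMode:
    # A more conspicuous mode when the worker seems unaware of the position.
    return RECOGNIZED if recognized else NOT_RECOGNIZED
```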
  19.  A contact position acquisition step of acquiring contact position information indicating a contact position between a robot performing work on a work object and the work object;
      a state acquisition step of acquiring a state of a worker performing work on the work object;
      a recognition determination step of determining whether or not the worker can recognize the contact position based on the acquired state of the worker and the contact position information;
      a presentation mode selection step of selecting a presentation mode for presenting the contact position based on the contact position information and a determination result in the recognition determination step; and
      a presentation step of presenting the contact position according to the selected presentation mode.
      Presentation method.
  20.  A program for causing a computer to execute the presentation method described in claim 19.
PCT/JP2023/046432 2023-01-23 2023-12-25 Presentation system, presentation method, and program WO2024157705A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2023008135 2023-01-23
JP2023-008135 2023-01-23

Publications (1)

Publication Number Publication Date
WO2024157705A1 true WO2024157705A1 (en) 2024-08-02

Family

ID=91970459

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/046432 WO2024157705A1 (en) 2023-01-23 2023-12-25 Presentation system, presentation method, and program

Country Status (1)

Country Link
WO (1) WO2024157705A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09247580A (en) * 1996-03-14 1997-09-19 Jamco Corp Display supporting device for seat
JP2010023184A (en) * 2008-07-18 2010-02-04 Fanuc Ltd Setting method for working coordinate system, and abnormality detection device for working coordinate system
JP2011224696A (en) * 2010-04-19 2011-11-10 Yaskawa Electric Corp Robot teaching replaying device and teaching replaying method
JP2017136677A * 2015-07-29 2017-08-10 Canon Inc. Information processing apparatus, information processing method, robot control apparatus, and robot system
WO2018100760A1 * 2016-12-02 2018-06-07 Cyberdyne Inc. Upper limb motion assisting device and upper limb motion assisting system
JP2021165864A * 2018-06-18 2021-10-14 Sony Group Corporation Information processing device, information processing method, and program
JP6994292B2 * 2017-05-08 2022-01-14 Cloudminds Robotics Co., Ltd. Robot wake-up methods, devices and robots

Similar Documents

Publication Publication Date Title
US11896893B2 (en) Information processing device, control method of information processing device, and program
US10140773B2 (en) Rendering virtual objects in 3D environments
EP3204837B1 (en) Docking system
CN116348836A (en) Gesture tracking for interactive game control in augmented reality
KR20230025904A (en) Integration of artificial reality interaction mode
Huy et al. See-through and spatial augmented reality-a novel framework for human-robot interaction
JP6350772B2 (en) Information processing system, information processing apparatus, control method, and program
Haslgrübler et al. Getting through: modality selection in a multi-sensor-actuator industrial IoT environment
CA3034847A1 (en) System and method for receiving user commands via contactless user interface
CN111487946A (en) Robot system
WO2016208261A1 (en) Information processing device, information processing method, and program
CN108369451B (en) Information processing apparatus, information processing method, and computer-readable storage medium
JP2016082411A (en) Head-mounted display, image display method and program
US11137600B2 (en) Display device, display control method, and display system
WO2024157705A1 (en) Presentation system, presentation method, and program
WO2020184317A1 (en) Information processing device, information processing method, and recording medium
JP6858159B2 (en) A telepresence framework that uses a head-mounted device to label areas of interest
Chacón-Quesada et al. Augmented reality control of smart wheelchair using eye-gaze–enabled selection of affordances
De Pace et al. Supporting Human–Robot Interaction by Projected Augmented Reality and a Brain Interface
Nagy et al. Evaluation of AI-Supported Input Methods in Augmented Reality Environment
EP4418177A1 (en) Control device and information presentation method
WO2023145250A1 (en) Wearable terminal, presentation method, and program
JP2024135098A (en) Remote Control System
JP2024042545A (en) Work support system and work support method
WO2023283154A1 (en) Artificial reality teleportation via hand gestures

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23918659

Country of ref document: EP

Kind code of ref document: A1