WO2020213194A1 - Display control system and display control method - Google Patents

Display control system and display control method

Info

Publication number
WO2020213194A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
robot arm
display control
prediction processing
data
Prior art date
Application number
PCT/JP2019/040676
Other languages
French (fr)
Japanese (ja)
Inventor
吉田 修一
剛 大濱
勁峰 今西
良一 今中
Original Assignee
日本金銭機械株式会社
Priority date
Filing date
Publication date
Application filed by 日本金銭機械株式会社 filed Critical 日本金銭機械株式会社
Publication of WO2020213194A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/06Safety devices

Definitions

  • the present invention relates to a technique for detecting danger when a human enters the robot work area and a technique for displaying a danger area (or safety area).
  • In a conventional system, the safety device may be activated only after the robot arm and a human come into contact (collide), or the possibility of a collision may be predicted and the robot arm controlled to move slowly; in either case, there is a problem that work efficiency is lowered.
  • An object of the present invention is to realize a technique for improving work safety while ensuring high work efficiency by appropriately displaying a safety area (or a danger area) when a robot arm and a human perform joint work.
  • The first invention is a display control system that displays a safety area, which is an area determined to be safe even if a movable object is present there, on a projection surface that the movable object can recognize, in a space where a robot arm and the movable object may coexist. The display control system includes an imaging unit, a prediction processing unit, and a projection unit.
  • the imaging unit is installed at a position above the robot arm (for example, a position where the space within the movable range of the robot arm can be photographed).
  • The prediction processing unit executes prediction processing using a trained model obtained by executing learning processing with teacher data that includes (1) an image captured by the imaging unit from a position above the robot arm in the space while the robot arm is in a predetermined state, or control data for controlling the robot arm so that it is in the predetermined state, and (2) information specifying the safety area when the robot arm is in the predetermined state. At the time of prediction, the prediction processing unit executes prediction processing with the trained model on a prediction processing image, which is an image captured by the imaging unit from a position above the robot arm, predicts the safety area in the space at the time the prediction processing image was acquired, acquires the predicted safety area as the prediction safety area, and generates the projected image data based on the prediction safety area.
  • the projection unit projects the projected image formed by the projected image data onto the projection surface.
  • In this display control system, learning processing is performed using, as training data (teacher data), (1) an image (for example, a frame image) captured by the imaging unit installed above the robot arm, or control data for controlling the robot arm so that it is in a predetermined state, and (2) data identifying the safety area when the robot arm is in that predetermined state, and prediction processing is performed using the trained model obtained in this way. Then, by performing prediction processing with the trained model on an image (for example, a frame image) captured under the same conditions as during training, this display control system can predict (identify) the safety area (the safe area on the projection surface) at the time the input image was captured. The predicted (identified) safety area is then projected onto the projection surface (for example, the floor surface FLR) by the projection unit, so that, for example, a worker can easily and reliably recognize the safety area. In other words, because the learning processing and the prediction processing are performed using images taken from above, where occlusion is small, the prediction processing that dynamically identifies (predicts) the safety area according to the state of the robot arm (or according to the transition of the operation phase of the robot arm identified from the control sequence) can be performed appropriately and with high accuracy, whatever state the robot arm is in.
  • As a result, the safety area can be appropriately displayed when the robot arm and a movable object (for example, a human) collaborate, so work safety can be improved while ensuring high work efficiency.
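  • As a rough illustration only (the model, camera, and projector interfaces below are hypothetical stand-ins, not components defined in this disclosure), the predict-and-project flow described above could be sketched in Python as follows:

        import numpy as np

        def predict_safety_mask(model, frame):
            # Run the trained model on one overhead frame and threshold the
            # per-pixel safety scores into a binary mask (1 = safe area).
            scores = model(frame[np.newaxis, ...])      # shape (1, H, W)
            return (scores[0] > 0.5).astype(np.uint8)

        def run_display_loop(camera, projector, model):
            # Capture from above the robot arm, predict the safety area,
            # and project it onto the floor, frame by frame.
            while True:
                frame = camera.capture()
                mask = predict_safety_mask(model, frame)
                projection = np.zeros((*mask.shape, 3), np.uint8)
                projection[mask == 1] = (0, 255, 0)     # e.g. green = safe area
                projector.project(projection)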
  • the "movable object” is a movable object, for example, a human or an animal that can move spontaneously.
  • The second invention is the first invention, in which the prediction processing unit executes the prediction processing using a trained model obtained by executing the learning processing using an image that is captured by the imaging unit and that includes a plurality of image regions whose colors are specified according to the degree of safety based on the control data for controlling the robot arm.
  • With this display control system, the learning processing can be executed using, as training data, images hierarchically color-coded according to the degree of safety, and the prediction processing can be executed using the acquired trained model.
  • The third invention is the first invention, in which the prediction processing unit executes the prediction processing using a trained model obtained by executing the learning processing using an image that is captured by the imaging unit and that includes a plurality of image regions whose brightness is specified according to the degree of safety based on the control data for controlling the robot arm.
  • With this display control system, the learning processing can be executed using, as training data, images whose brightness is varied hierarchically according to the degree of safety, and the prediction processing can be executed using the acquired trained model.
  • The fourth invention is any one of the first to third inventions, in which the prediction processing unit generates a plurality of image regions having colors specified according to the degree of safety and generates the projected image data so that the image formed from the projected image data includes the generated plurality of image regions.
  • With this display control system, the image (projected image) indicating the safety area can be an image consisting of a plurality of image regions hierarchically color-coded according to the degree of safety, so the worker can appropriately recognize the degree of safety from the colors projected onto the projection surface.
  • The colors of the plurality of image regions may change stepwise according to the degree of safety (an image in which the gradation value of each pixel takes discrete values) or may change continuously (an image in which the gradation value of each pixel takes continuous values, for example, a gradation image).
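  • For example, a projected image whose regions are colored according to the degree of safety could be composed as in the following sketch (the three-level split and the particular colors are assumptions, not values given in this disclosure); a continuous gradation image could instead be obtained by interpolating colors rather than thresholding:

        import numpy as np

        def color_coded_safety_image(safety_score):
            # safety_score: per-pixel degree of safety in [0, 1].
            # Three assumed levels: green (safest), yellow, red (least safe).
            img = np.zeros((*safety_score.shape, 3), dtype=np.uint8)
            img[safety_score >= 0.66] = (0, 255, 0)
            img[(safety_score >= 0.33) & (safety_score < 0.66)] = (255, 255, 0)
            img[safety_score < 0.33] = (255, 0, 0)
            return img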
  • The fifth invention is any one of the first to third inventions, in which the prediction processing unit generates a plurality of image regions having brightness specified according to the degree of safety and generates the projected image data so that the image formed from the projected image data includes the generated plurality of image regions.
  • With this display control system, the image (projected image) indicating the safety area can be an image consisting of a plurality of image regions hierarchically divided by brightness according to the degree of safety, so the worker can appropriately recognize the degree of safety from the brightness projected onto the projection surface.
  • The brightness of the plurality of image regions may change stepwise according to the degree of safety (an image in which the gradation value of each pixel takes discrete values) or may change continuously (an image in which the gradation value of each pixel takes continuous values, for example, an image in which the brightness changes continuously).
  • The sixth invention is a display control system that displays a danger area, which is an area determined to be dangerous if a movable object is present there, on a projection surface that the movable object can recognize, in a space where a robot arm and the movable object may coexist. The display control system includes an imaging unit, a prediction processing unit, and a projection unit.
  • the imaging unit is installed at a position above the robot arm.
  • The prediction processing unit executes prediction processing using a trained model obtained by executing learning processing with teacher data that includes (1) an image captured by the imaging unit from a position above the robot arm in the space while the robot arm is in a predetermined state, or control data for controlling the robot arm so that it is in the predetermined state, and (2) information identifying the danger area when the robot arm is in the predetermined state. At the time of prediction, the prediction processing unit executes prediction processing with the trained model on a prediction processing image, which is an image captured by the imaging unit from a position above the robot arm, predicts the danger area in the space at the time the prediction processing image was acquired, acquires the predicted danger area as the prediction danger area, and generates the projected image data based on the prediction danger area.
  • the projection unit projects the projected image formed by the projected image data onto the projection surface.
  • In this display control system, learning processing is performed using, as training data (teacher data), an image (for example, a frame image) captured by the imaging unit installed above the robot arm, from which the position and state of the robot arm, the position of a movable object (for example, a worker), and the like can be determined, together with the data identifying the danger area at that time, and prediction processing is performed using the trained model obtained in this way. Then, by performing prediction processing with the trained model on an image (for example, a frame image) captured under the same conditions as during learning, this display control system can predict (identify) the danger area at the time the input image was captured.
  • The predicted (identified) danger area is projected onto the projection surface (for example, the floor surface) by the projection unit so that, for example, the worker can easily and reliably recognize the danger area.
  • Here, the "movable object" is an object that can move, such as a human being or an animal.
  • The seventh invention is the sixth invention, in which the prediction processing unit executes the prediction processing using a trained model obtained by executing the learning processing using an image that is captured by the imaging unit and that includes a plurality of image regions whose colors are specified according to the degree of danger based on the control data for controlling the robot arm.
  • With this display control system, the learning processing can be executed using, as training data, images hierarchically color-coded according to the degree of danger, and the prediction processing can be executed using the acquired trained model.
  • The eighth invention is the sixth invention, in which the prediction processing unit executes the prediction processing using a trained model obtained by executing the learning processing using an image that is captured by the imaging unit and that includes a plurality of image regions whose brightness is specified according to the degree of danger based on the control data for controlling the robot arm.
  • With this display control system, the learning processing can be executed using, as training data, images whose brightness is varied hierarchically according to the degree of danger, and the prediction processing can be executed using the acquired trained model.
  • The ninth invention is any one of the sixth to eighth inventions, in which the prediction processing unit generates a plurality of image regions each having a color specified according to the degree of danger and generates the projected image data so that the image formed from the projected image data includes the generated plurality of image regions.
  • With this display control system, the image (projected image) indicating the danger area can be an image consisting of a plurality of image regions hierarchically color-coded according to the degree of danger, so the worker can appropriately recognize the degree of danger from the colors projected onto the projection surface.
  • The colors of the plurality of image regions may change stepwise according to the degree of danger (an image in which the gradation value of each pixel takes discrete values) or may change continuously (an image in which the gradation value of each pixel takes continuous values, for example, a gradation image).
  • The tenth invention is any one of the sixth to eighth inventions, in which the prediction processing unit generates a plurality of image regions having brightness specified according to the degree of danger and generates the projected image data so that the image formed from the projected image data includes the generated plurality of image regions.
  • With this display control system, the image (projected image) indicating the danger area can be an image consisting of a plurality of image regions hierarchically divided by brightness according to the degree of danger, so the worker can appropriately recognize the degree of danger from the brightness projected onto the projection surface.
  • The brightness of the plurality of image regions may change stepwise according to the degree of danger (an image in which the gradation value of each pixel takes discrete values) or may change continuously (an image in which the gradation value of each pixel takes continuous values, for example, an image in which the brightness changes continuously).
  • the eleventh invention is any one of the first to tenth inventions, in which the space has a floor surface and the projection surface is the floor surface in the space.
  • the projection surface can be the floor surface.
  • the twelfth invention is any one of the first to eleventh inventions, in which the space has a ceiling surface, and the imaging unit is installed on the ceiling surface of the space.
  • an imaging unit installed on the ceiling surface can be used.
  • The thirteenth invention is any one of the first to twelfth inventions, further including a robot arm control unit for controlling the robot arm. When the prediction processing unit determines that the movable object is likely to move out of the safety area, or that the movable object is likely to move into the danger area, the robot arm control unit executes processing that stops the operation of the robot arm and/or generates a warning.
  • With this display control system, when a movable object moves from inside the safety area toward the outside and is likely to come into contact or collide with the robot arm, serious accidents such as contact or collision with the robot arm can be prevented by this danger avoidance processing. That is, with this display control system, whatever state the robot arm is in, the prediction processing that dynamically identifies (predicts) the safety area according to that state can be performed appropriately and accurately, and appropriate danger avoidance processing can be performed as well.
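  • One possible form of this check is sketched below, assuming the worker and the safety area are available as binary masks in the camera image; the erosion margin and the robot controller interface are assumptions for illustration, not features defined in this disclosure:

        import numpy as np
        from scipy import ndimage

        def check_and_avoid_danger(worker_mask, safety_mask, robot_controller, margin=10):
            # Erode the predicted safety area by `margin` pixels; a worker pixel
            # outside this inner area is treated as "likely to leave the safety area".
            inner = ndimage.binary_erosion(safety_mask.astype(bool), iterations=margin)
            ys, xs = np.nonzero(worker_mask)
            if ys.size and np.any(~inner[ys, xs]):
                robot_controller.stop()                      # assumed interface
                robot_controller.warn("worker near the edge of the safety area")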
  • The fourteenth invention is a display control method used in a display control system that includes an imaging unit installed at a position above the robot arm in a space where a robot arm and a movable object may coexist, and a projection unit that projects an image onto a predetermined projection surface.
  • The display control method is a method for displaying a safety area, which is an area determined to be safe even if the movable object is present there, on a projection surface that the movable object can recognize in the space, and includes a prediction processing step and a projection step.
  • In the prediction processing step, prediction processing is executed using a trained model obtained by executing learning processing with teacher data that includes (1) an image captured by the imaging unit from a position above the robot arm in the space while the robot arm is in a predetermined state, or control data for controlling the robot arm so that it is in the predetermined state, and (2) information specifying the safety area when the robot arm is in the predetermined state.
  • In the prediction processing step, prediction processing using the trained model is executed on the prediction processing image, which is an image captured by the imaging unit from a position above the robot arm, the safety area in the space at the time the prediction processing image was acquired is predicted, the predicted safety area is acquired as the prediction safety area, and the projected image data is generated based on the prediction safety area.
  • In the projection step, the projected image formed by the projected image data is projected onto the projection surface.
  • The fifteenth invention is a display control method used in a display control system that includes an imaging unit installed at a position above the robot arm in a space where a robot arm and a movable object may coexist, and a projection unit that projects an image onto a predetermined projection surface.
  • The display control method is a method for displaying a danger area, which is an area determined to be dangerous when the movable object is present there, on a projection surface that the movable object can recognize in the space, and includes a prediction processing step and a projection step.
  • In the prediction processing step, prediction processing is executed using a trained model obtained by executing learning processing with teacher data that includes (1) an image captured by the imaging unit from a position above the robot arm in the space while the robot arm is in a predetermined state, or control data for controlling the robot arm so that it is in the predetermined state, and (2) information identifying the danger area when the robot arm is in the predetermined state.
  • In the prediction processing step, prediction processing using the trained model is executed on the prediction processing image, which is an image captured by the imaging unit from a position above the robot arm, the danger area in the space at the time the prediction processing image was acquired is predicted, the predicted danger area is acquired as the prediction danger area, and the projected image data is generated based on the prediction danger area.
  • In the projection step, the projected image formed by the projected image data is projected onto the projection surface.
  • According to the present invention, it is possible to realize a technique for improving work safety while ensuring high work efficiency by appropriately displaying a safety area (or a danger area) when a robot arm and a human perform joint work.
  • A diagram for explaining the processing of the learning mode of the display control system 1000 according to the first embodiment.
  • A diagram (timing chart) showing an example (pattern 1) of training data used in the display control system 1000A according to the first modification of the first embodiment.
  • A diagram (timing chart) showing an example (pattern 2) of training data used in the display control system 1000A according to the first modification of the first embodiment.
  • A schematic configuration diagram of the display control system 2000 according to the second embodiment.
  • A schematic configuration diagram of the display control device 100A according to the second embodiment.
  • A schematic configuration diagram of the prediction processing unit 5A of the display control device 100A according to the second embodiment.
  • A flowchart of the prediction mode processing of the display control system 2000 according to the second embodiment.
  • A diagram for explaining the processing of the prediction mode of the display control system 2000 according to the second embodiment.
  • A diagram for explaining the processing of the prediction mode of the display control system 2000 according to the second embodiment.
  • A diagram for explaining the processing of the prediction mode of the display control system 2000 according to the second embodiment.
  • FIG. 1 is a schematic configuration diagram of the display control system 1000 according to the first embodiment.
  • FIG. 2 is a schematic configuration diagram of the display control device 100 according to the first embodiment.
  • the display control system 1000 includes a projection unit Prj1, an imaging unit Cmr1, a display control device 100, and a robot Rbt.
  • The projection unit Prj1 is installed at a position higher than the highest point of the worker or of the robot Rbt (for example, on the ceiling), and is a device that projects an image, a boundary line, or the like downward from the projection point P_prj (for example, onto the floor surface FLR).
  • The projection unit Prj1 is realized by, for example, a projector device that projects an image onto the floor surface FLR, or a device, such as an LED scanner or a laser scanner, that can display a line of a predetermined color on the floor surface.
  • The projection unit Prj1 receives the control signal Ctl_prj(D_prj) output from the display control device 100 and, based on the control signal Ctl_prj(D_prj), projects the image, boundary line, or the like (data D_prj) onto the projection target (for example, the floor surface).
  • The imaging unit Cmr1 is installed at a position higher than the highest point of the worker or of the robot Rbt (for example, on the ceiling), and captures the image or boundary line projected onto the floor surface by the projection unit Prj1, as well as the worker (for example, the worker Psn1 in FIG. 1), the helmet worn by the worker (for example, the helmet Hat1 worn by the worker Psn1 in FIG. 1), or a marker worn by the worker (for example, an infrared-reflective marker).
  • The imaging unit Cmr1 is realized by, for example, an imaging device that is equipped with an image sensor for visible light and can capture color images, or an infrared camera that is equipped with an image sensor for infrared light and can capture infrared images.
  • the imaging unit Cmr1 outputs the captured data (image data, video data) as data D1_img to the display control device 100.
  • The display control device 100 includes a selector SEL1, a projection control unit 1, a selector SEL2, a training data acquisition unit 2, a training data storage unit DB1, a CG synthesis unit 3 (CG: Computer Graphics), an extended training data storage unit DB2, a learning unit 4, an optimization parameter storage unit DB3, and a prediction processing unit 5.
  • The selector SEL1 is a selector that, in accordance with the mode signal Mode output from a control unit (not shown) that controls each functional unit of the display control device 100, selects either the data D1_prj_train, which includes the projection data at the time of training data acquisition, or the data Dp1_prj, which includes the projection data at the time of prediction and is output from the prediction processing unit 5, and outputs the selected data to the projection control unit 1 as the data D_prj.
  • Specifically, (1) at the time of training data acquisition, the signal value of the mode signal Mode is set to "0", and the selector SEL1 selects the data D1_prj_train including the projection data at the time of training data acquisition and outputs it to the projection control unit 1 as the data D_prj; (2) at the time of prediction, the signal value of the mode signal Mode is set to "1", and the selector SEL1 selects the data Dp1_prj including the projection data at the time of prediction output from the prediction processing unit 5 and outputs it to the projection control unit 1 as the data D_prj.
  • The projection control unit 1 receives the data D_prj output from the selector SEL1, generates a control signal Ctl_prj(D_prj) for controlling the projection unit Prj1 so that the projection data included in the data D_prj (image data for projection, or data of a boundary line to be displayed on the projection surface) is projected onto the projection target, and outputs the generated control signal Ctl_prj(D_prj) to the projection unit Prj1.
  • The selector SEL2 is a selector that, in accordance with the mode signal Mode output from the control unit (not shown) that controls each functional unit of the display control device 100, outputs the data D1_img output from the imaging unit Cmr1 to either the training data acquisition unit 2 or the prediction processing unit 5. Specifically, (1) at the time of training data acquisition, the signal value of the mode signal Mode is set to "0" and the selector SEL2 outputs the data D1_img to the training data acquisition unit 2; (2) at the time of prediction, the signal value of the mode signal Mode is set to "1" and the selector SEL2 outputs the data D1_img to the prediction processing unit 5.
  • the training data acquisition unit 2 inputs the data D1_img output from the selector SEL2, and generates training data Dtr1 for training the learning model of the prediction processing unit 5 based on the data D1_img. Then, the training data acquisition unit 2 stores the generated training data Dtr1 in the training data storage unit DB1.
  • the training data storage unit DB1 stores the training data Dtr1 output from the training data acquisition unit 2 in accordance with the instruction from the training data acquisition unit 2. Further, the training data storage unit DB1 outputs the stored training data Dtr1 to the CG synthesis unit 3 in accordance with the instruction from the CG synthesis unit 3.
  • the CG synthesis unit 3 reads (acquires) the training data Dtr1 from the training data storage unit DB1. Then, the CG synthesis unit 3 creates training data synthesized by CG using the training data Dtr1, and stores the created data as the extended training data Dtr2 in the extended training data storage unit DB2. The CG synthesis unit 3 may also include the training data Dtr1 used for creating the extended training data Dtr2 in the extended training data Dtr2 and store it in the extended training data storage unit DB2.
  • the extended training data storage unit DB2 stores the extended training data Dtr2 output from the CG synthesis unit 3 in accordance with the instruction from the CG synthesis unit 3. Further, the extended training data storage unit DB2 outputs the stored extended training data Dtr2 to the learning unit 4 in accordance with the instruction from the learning unit 4.
  • the learning unit 4 reads (acquires) the extended training data Dtr2 from the extended training data storage unit DB2, and performs learning processing using the extended training data Dtr2. Then, the learning unit 4 acquires a parameter (optimization parameter ⁇ _opt) for optimizing the learning model by the learning process, and stores the acquired optimization parameter ⁇ _opt in the optimization parameter storage unit DB3.
  • the optimization parameter storage unit DB3 stores the optimization parameter ⁇ _opt output from the learning unit 4 in accordance with the instruction from the learning unit 4. Further, the optimization parameter storage unit DB3 outputs the stored optimization parameter ⁇ _opt to the prediction processing unit 5 in accordance with the instruction from the prediction processing unit 5.
  • the prediction processing unit 5 reads the optimization parameter ⁇ _opt from the optimization parameter storage unit DB3, and acquires a trained model based on the optimization parameter ⁇ _opt.
  • the prediction processing unit 5 inputs the data D1_img output from the selector SEL2, and executes the prediction processing using the trained model for the data D1_img.
  • the prediction processing unit 5 generates data Dp1_prj (data projected from the projection unit Prj1) to be output to the projection control unit 1 based on the prediction processing result.
  • the prediction processing unit 5 outputs the generated data Dp1_prj to the selector SEL1.
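  • The overall routing between the training data acquisition mode and the prediction mode, governed by the mode signal Mode and the selectors SEL1 and SEL2, can be summarized by the following sketch (the function and object names are illustrative only, not part of this disclosure):

        TRAINING_DATA_ACQUISITION = 0   # mode signal Mode = "0"
        PREDICTION = 1                  # mode signal Mode = "1"

        def route(mode, d1_prj_train, training_data_unit, prediction_unit, d1_img):
            # Mimic SEL1/SEL2: decide what is projected and where camera data goes.
            if mode == TRAINING_DATA_ACQUISITION:
                training_data_unit.store(d1_img)         # SEL2 -> training data acquisition unit 2
                d_prj = d1_prj_train                     # SEL1 -> projection data for training
            else:
                d_prj = prediction_unit.predict(d1_img)  # SEL2 -> prediction; SEL1 -> Dp1_prj
            return d_prj                                 # passed on to projection control unit 1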
  • the robot Rbt includes a robot control unit Rbt_C1 and a robot arm Rbt_arm.
  • the robot control unit Rbt_C1 is a functional unit for controlling the robot arm Rbt_arm.
  • the robot control unit Rbt_C1 inputs training data D1_rb_train and a predetermined control signal, and generates a control signal for controlling the robot arm Rbt_arm based on the input training data D1_rb_train and a predetermined control signal.
  • the robot arm Rbt_arm performs a predetermined operation (for example, grasping, transporting, etc.) based on a command (control signal) from the robot control unit Rbt_C1.
  • In the following description, it is assumed that the display control system 1000 is installed in an enclosed indoor space such as a factory, that the projection unit Prj1 and the imaging unit Cmr1 are installed on the ceiling, and that the worker Psn1 and the robot Rbt are present on the floor surface. The projection unit Prj1 uses the floor surface FLR as its projection target.
  • FIGS. 3 and 4 are diagrams for explaining the processing of the training data acquisition mode of the display control system 1000 according to the first embodiment.
  • In the training data acquisition mode, the robot control unit Rbt_C1 receives data D1_rb_train for causing the robot arm Rbt_arm to execute a predetermined operation (referred to as the first operation) in order to acquire training data. In addition, the projection control unit 1 of the display control device 100 receives data D1_prj_train for displaying, on the floor surface that is the projection target, the safety area for the first operation of the robot arm Rbt_arm, that is, the area in which a worker (for example, the worker Psn1 in FIG. 3) can safely work without touching the robot arm Rbt_arm while it performs the first operation.
  • the robot arm Rbt_arm performs the first operation based on the data D1_rb_train.
  • While the robot arm Rbt_arm is performing the first operation, the projection unit Prj1 displays, on the floor surface FLR that is the projection target, an image or a boundary line showing the safety area for the first operation, based on the data D1_prj_train input to the projection control unit 1 of the display control device 100.
  • To do so, for example, the projection control unit 1 controls the projection unit Prj1 so as to project onto the floor surface a predetermined test image including a test pattern whose size in the image is known.
  • The distance from the projection point P_prj of the projection unit Prj1 to the projection surface (floor surface FLR) is then acquired by examining the size of the test pattern in the image captured by the imaging unit Cmr1, where the imaging parameters of the imaging unit Cmr1 (angle of view, focal length, and the like) are assumed to be known. The display control device 100 then controls the projection control unit 1 based on the acquired distance so that the image or boundary line showing the safety area is projected onto the floor surface FLR, which is the projection surface.
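  • Under a simple pinhole-camera assumption, and assuming the physical size of the projected test pattern on the floor is known, the camera-to-floor distance follows from the measured pixel size of the pattern, as in this sketch (the numbers are only an example, not values from this disclosure):

        def distance_to_projection_surface(pattern_width_px, pattern_width_m, focal_length_px):
            # Pinhole relation: distance = focal_length[px] * real_width[m] / measured_width[px]
            return focal_length_px * pattern_width_m / pattern_width_px

        # Example: a 0.50 m wide pattern seen as 200 px wide with a 1200 px focal
        # length lies about 3.0 m below the camera.
        d = distance_to_projection_surface(200.0, 0.50, 1200.0)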
  • the area Ar1 is a safety area when the robot arm Rbt_arm is performing the first operation, and a boundary line indicating the safety area is projected from the projection unit Prj1 onto the floor surface FLR.
  • In this state, the worker Psn1, wearing a helmet Hat1 (for example, a yellow helmet), is asked to work or move within the safety area (within the area Ar1), and the imaging unit Cmr1 captures images of the situation.
  • In the following, it is assumed that the imaging unit Cmr1 is equipped with an image sensor for visible light and can capture color images. Furthermore, the camera parameters of the imaging unit Cmr1 (optical axis direction, angle of view, focal length, and the like) are assumed to be adjusted so that the safety area (the area Ar1 in the case of FIG. 3) and the robot arm Rbt_arm are included in the captured image.
  • the imaging unit Cmr1 outputs an image of the situation when the robot arm Rbt_arm is performing the first operation as an image (frame image) (data D1_img) for each frame to the training data acquisition unit 2.
  • the signal value of the mode signal Mode is set to "0", so the selector SEL2 outputs the data D1_img to the training data acquisition unit 2.
  • The training data acquisition unit 2 receives the data D1_img output from the selector SEL2 and generates, based on the data D1_img, training data Dtr1 for training the learning model of the prediction processing unit 5. Specifically, the training data acquisition unit 2 stores the data D1_img (frame image data) in the training data storage unit DB1 as the training data Dtr1 (actual captured image data). The training data acquisition unit 2 may also, for example, acquire an image obtained by extracting a predetermined image feature from the data D1_img (frame image data) (for example, an image in which only the image regions of the same color as the helmet worn by the worker are extracted), combine it with the original image D1_img (the actual frame image data), and generate the combined image as the training data Dtr1.
  • The training data acquisition unit 2 may also include in the training data Dtr1, together with the image data, information on the state at the time the image was acquired (additional information (label information)). For example, information on the operation phase of the robot arm Rbt_arm, information indicating that the correct safety area is the area Ar1, and the like may be included in the training data Dtr1 as additional information together with the image data.
  • The training data acquisition unit 2 stores the generated training data Dtr1 in the training data storage unit DB1.
  • The CG synthesis unit 3 reads (acquires) the training data Dtr1 from the training data storage unit DB1 and creates training data synthesized by CG using the training data Dtr1. For example, from the image data of the training data Dtr1, the CG synthesis unit 3 generates, by CG processing, a CG image region in which the image region of the worker's helmet is changed to a color different from that of the helmet, and generates CG image data by replacing the helmet image region of the actual image with this CG image region. That is, this CG image data is an image in which the color of the helmet has been changed by CG processing.
  • Similarly, the CG synthesis unit 3 generates various CG composite images from the original image by CG processing (CG synthesis) such as changing the helmet to another color or texture, or replacing the helmet with the worker's hair (the case where no helmet is worn). By processing in this way, a large amount of training data can be acquired from a single item of training data Dtr1. The CG synthesis unit 3 then adds the CG image data generated as described above to the original image data to generate the extended training data Dtr2, and stores the generated extended training data Dtr2 in the extended training data storage unit DB2.
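  • The color-replacement augmentation described above could, for example, be approximated as follows (using OpenCV for illustration; the HSV range for a yellow helmet and the replacement colors are assumptions, not values from this disclosure):

        import cv2
        import numpy as np

        def recolor_helmet(frame_bgr, new_bgr=(0, 0, 255)):
            # Replace the (assumed) yellow helmet region with another color so that
            # one captured frame yields an additional extended training image.
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))  # rough yellow range
            augmented = frame_bgr.copy()
            augmented[mask > 0] = new_bgr
            return augmented

        # Several extended images from a single frame:
        # variants = [recolor_helmet(frame, c) for c in [(0, 0, 255), (255, 0, 0), (0, 0, 0)]]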
  • the display control system 1000 acquires training data (learning data) for the case of FIG. 3 (when the robot arm Rbt_arm is performing the first operation).
  • the state when the robot arm Rbt_arm is performing the second operation different from the first operation is imaged, and the training data is further acquired.
  • Specifically, the robot control unit Rbt_C1 receives data D2_rb_train for causing the robot arm Rbt_arm to execute a second operation different from the first operation. In addition, the projection control unit 1 of the display control device 100 receives data D2_prj_train for displaying, on the floor surface that is the projection target, the safety area for the second operation of the robot arm Rbt_arm, that is, the area in which a worker (for example, the worker Psn1 in FIG. 4) can safely work without touching the robot arm Rbt_arm while it performs the second operation.
  • the robot arm Rbt_arm performs the second operation based on the data D2_rb_train.
  • While the robot arm Rbt_arm is performing the second operation, the projection unit Prj1 displays, on the floor surface FLR that is the projection target, an image or a boundary line showing the safety area for the second operation, based on the data D2_prj_train input to the projection control unit 1 of the display control device 100.
  • the area Ar2 is a safety area when the robot arm Rbt_arm is performing the second operation, and the boundary line indicating the safety area is projected from the projection unit Prj1 onto the floor surface FLR.
  • In this state, the worker Psn1, wearing a helmet Hat1 (for example, a yellow helmet), is asked to work or move within the safety area (within the area Ar2), and the imaging unit Cmr1 captures images of the situation.
  • The display control system 1000 then performs the same processing as described above for the state in which the robot arm Rbt_arm is performing the second operation, acquires the extended training data Dtr2, and stores it in the extended training data storage unit DB2.
  • In this way, the processing of the training data acquisition mode is executed and the training data (learning data, that is, the extended training data Dtr2) is acquired.
  • In the above, an image or a boundary line showing the safety area is projected onto the floor surface from the projection unit Prj1 and displayed there. Alternatively, an image may be acquired by the imaging unit Cmr1 without projecting the image or boundary line showing the safety area onto the floor surface, and the image or boundary line showing the safety area may then be synthesized into the acquired frame image (for example, by CG processing) to generate the training image data.
  • In that case, the safety area may be specified manually or automatically (for example, by calculation) from the position and state of the worker and the robot arm Rbt_arm in the acquired frame image.
  • FIG. 5 is a diagram for explaining the processing of the learning mode of the display control system 1000 according to the first embodiment.
  • the learning unit 4 reads (acquires) the extended training data Dtr2 from the extended training data storage unit DB2, and performs learning processing using the extended training data Dtr2.
  • the learning unit 4 performs learning processing using the extended training data Dtr2 as teacher data.
  • Specifically, the learning unit 4 trains a learning model (for example, a model realized by a neural network) whose input is the image data included in the extended training data Dtr2 (this image data is referred to as "Dtr2.img") and whose output is information that specifies the safety area at the time the image data was acquired (the safety area to be displayed on the projection surface).
  • The learning model is, for example, a model based on a neural network including an input layer, a plurality of intermediate layers, and an output layer. The weighting coefficients between the layers of the learning model (the weights of the synaptic connections connecting the layers) are set (adjusted) by the parameter θ.
  • The learning unit 4 lets x be the set of input data Dtr2.img to the learning model and lets y be the set of output data from the learning model, and sets the conditional probability P(y | x) in terms of these sets. Here, x_i is a vector included in the set x (data obtained by converting the data of each pixel of a two-dimensional image into a one-dimensional vector), y_i is a vector included in the set y, and y_i_select is the teacher data (correct-answer data) (vector data) when x_i is input.
  • H(x_i; θ) represents an operator corresponding to, for example, the processing of a neural network composed of a plurality of layers applied to the input x_i to acquire an output. The parameter θ is, for example, a parameter that determines the weighting of the synaptic connections of the neural network, and H(x_i; θ) may include nonlinear calculations.
  • P(y | x) is then set using H(x_i; θ) and a standard deviation parameter, for example as a distribution (such as a normal distribution) centered on H(x_i; θ), and the learning processing adjusts θ so that H(x_i; θ) approaches the teacher data y_i_select.
  • When the learning unit 4 performs the learning processing on the learning model using the extended training data and determines that it has sufficiently converged, it acquires the parameter θ set in the learning model at that time as the optimization parameter θ_opt and stores the acquired optimization parameter θ_opt in the optimization parameter storage unit DB3.
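  • A minimal sketch of such a learning step is given below; PyTorch is used only for illustration, and the mean-squared-error objective, network shape, and input/output sizes are assumptions, since this disclosure does not fix a specific loss or architecture:

        import torch
        import torch.nn as nn

        in_dim, out_dim = 64 * 64, 64 * 64            # flattened image in, safety map out (assumed sizes)
        model = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                              nn.Linear(256, 128), nn.ReLU(),
                              nn.Linear(128, out_dim))    # stand-in for H(x; theta)
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()                        # assumed objective

        def train(loader, epochs=10, tol=1e-4):
            last = float("inf")
            for _ in range(epochs):
                total = 0.0
                for x_i, y_i_select in loader:        # pairs from the extended training data Dtr2
                    optimizer.zero_grad()
                    loss = loss_fn(model(x_i), y_i_select)
                    loss.backward()
                    optimizer.step()
                    total += loss.item()
                if abs(last - total) < tol:           # treated as "sufficiently converged"
                    break
                last = total
            torch.save(model.state_dict(), "theta_opt.pt")   # corresponds to storing theta_opt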
  • the display control system 1000 executes the learning mode processing.
  • FIGS. 6 to 9 are diagrams for explaining the processing of the prediction mode of the display control system 1000 according to the first embodiment.
  • First, the case where a control signal Ctrl_Rbt(phase1) is input to the robot control unit Rbt_C1 and the robot arm Rbt_arm executes a predetermined operation (referred to as the "phase 1 operation") will be described.
  • the robot control unit Rbt_C1 controls the robot arm Rbt_arm so that the robot arm Rbt_arm executes an operation (phase 1 operation) according to the control signal Ctrl_Rbt (phase1).
  • the robot arm Rbt_arm performs the operation of Phase 1 in accordance with a command from the robot control unit Rbt_C1.
  • In the prediction mode, the imaging unit Cmr1 captures the situation at that time (the robot arm Rbt_arm, the worker, the floor surface FLR, and so on) under the same conditions as when the training data was acquired. Since the positional relationship among the projection unit Prj1, the imaging unit Cmr1, the robot Rbt, and the floor surface FLR may have shifted, calibration (processing to adjust the relative positional relationship) may be performed before executing the prediction mode processing. For example, two different points on the pedestal of the robot Rbt lying on a plane parallel to the floor surface FLR (for example, the points P1 and P2 shown in FIG. 6) are detected in the image captured by the imaging unit Cmr1 and used for the calibration.
  • By performing such calibration, the image acquired by the imaging unit Cmr1 has the same composition, that is, the same positional relationships within the image, as the images used when the training data was acquired, which improves the accuracy of the prediction processing.
  • Alternatively, a group of three or more points may be used as calibration points, and the calibration parameters (optical axis direction, camera angle, angle of view, and the like) derived from that point group may be used.
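  • As one simple way to use such point correspondences (a sketch only; the affine model and the point coordinates are assumptions for illustration, and this disclosure instead speaks of deriving camera parameters), the shift between the current camera view and the view used at training time could be estimated by least squares:

        import numpy as np

        def estimate_affine(src_pts, dst_pts):
            # Least-squares 2x3 affine transform mapping points detected in the
            # current frame (src) onto the reference points from training time (dst).
            # Needs at least 3 non-collinear correspondences.
            n = len(src_pts)
            a = np.hstack([src_pts, np.ones((n, 1))])           # rows: [x, y, 1]
            m_x, *_ = np.linalg.lstsq(a, dst_pts[:, 0], rcond=None)
            m_y, *_ = np.linalg.lstsq(a, dst_pts[:, 1], rcond=None)
            return np.vstack([m_x, m_y])

        # Hypothetical pedestal points (P1, P2) plus one extra reference point:
        src = np.array([[101.0, 205.0], [180.0, 204.0], [140.0, 260.0]])
        dst = np.array([[100.0, 200.0], [179.0, 199.0], [139.0, 255.0]])
        M = estimate_affine(src, dst)    # applied to each frame before prediction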
  • After the calibration is performed, while the robot arm Rbt_arm is performing the phase 1 operation, the imaging unit Cmr1 images the robot Rbt, the floor surface FLR, and the worker (if present) from above and continuously outputs the captured images (frame images) to the display control device 100.
  • the signal value of the mode signal Mode is set to "1", so the selector SEL2 outputs the data D1_img to the prediction processing unit 5.
  • The prediction processing unit 5 reads the optimization parameter θ_opt from the optimization parameter storage unit DB3 and acquires the trained model based on the optimization parameter θ_opt. That is, the prediction processing unit 5 sets the parameter θ of the learning model to the optimization parameter θ_opt. The trained model is thus constructed, and the prediction processing unit 5 inputs the image data D1_img output from the selector SEL2 into the trained model.
  • the prediction processing unit 5 inputs the image data D1_img acquired during the period during which the robot arm Rbt_arm is performing the operation of Phase 1 into the trained model, and acquires the output from the trained model as data Dp1_prj.
  • In the case of FIG. 6, (1) the position and state of the robot arm Rbt_arm and (2) the position and state of the worker Psn1 are similar to (1) the position and state of the robot arm Rbt_arm and (2) the position and state of the worker Psn1 at the time the training data for the case of FIG. 3 (when the safety area is the area Ar1) was acquired.
  • the prediction processing unit 5 outputs the acquired data Dp1_prj to the selector SEL1.
  • the selector SEL1 outputs the data Dp1_prj output from the prediction processing unit 5 to the projection control unit 1.
  • the projection control unit 1 inputs data Dp1_prj and outputs a control signal Ctrl_prj (Dp1_prj) that controls the image based on the data Dp1_prj (or the boundary line displayed on the projection surface) to be projected from the projection unit Prj1 onto the projection target.
  • the generated control signal Ctrl_prj (Dp1_prj) is output to the projection unit Prj1.
  • the projection unit Prj1 projects an image (or a boundary line to be displayed on the projection surface) based on the data Dp1_prj on the projection surface (floor surface FLR) based on the control signal Ctl_prj (Dp1_prj).
  • an image (or boundary line) indicating that the safety region is the region Ar1 is projected on the floor surface FLR.
  • The worker Psn1 can thus easily determine, from the image (or boundary line) projected onto the floor surface FLR, where the safety area is (the area Ar1 in the case of FIG. 6), and by working and moving within the safety area, does not interfere with the operation of the robot arm Rbt_arm and does not come into contact or collide with it, so safety is ensured. Furthermore, since the robot arm Rbt_arm does not come into contact or collide with the worker, it can continue to operate at high speed, and as a result the work efficiency of the robot arm Rbt_arm can be kept high.
  • Next, the case where a control signal Ctrl_Rbt(phase2) is input to the robot control unit Rbt_C1 and the robot arm Rbt_arm executes a predetermined operation (referred to as the "phase 2 operation") will be described.
  • the robot control unit Rbt_C1 controls the robot arm Rbt_arm so that the robot arm Rbt_arm executes an operation (phase 2 operation) according to the control signal Ctrl_Rbt (phase2).
  • the robot arm Rbt_arm performs the operation of Phase 2 in accordance with a command from the robot control unit Rbt_C1.
  • the imaging unit Cmr1 captures the situation at that time (robot arm Rbt_arm, worker, floor surface FLR, etc.) in the same state as when the training data was acquired.
  • While the robot arm Rbt_arm is performing the phase 2 operation, the imaging unit Cmr1 images the robot Rbt, the floor surface FLR, and the worker (if present) from above and continuously outputs the captured images (frame images) to the display control device 100.
  • the signal value of the mode signal Mode is set to "1", so the selector SEL2 outputs the data D1_img to the prediction processing unit 5.
  • the prediction processing unit 5 inputs the image data D1_img acquired during the period during which the robot arm Rbt_arm is performing the operation of Phase 2 into the trained model, and acquires the output from the trained model as data Dp1_prj.
  • In the case of FIG. 7, (1) the position and state of the robot arm Rbt_arm and (2) the position and state of the worker Psn1 are similar to (1) the position and state of the robot arm Rbt_arm and (2) the position and state of the worker Psn1 at the time the training data for the case of FIG. 4 (when the safety area is the area Ar2) was acquired.
  • the prediction processing unit 5 outputs the acquired data Dp1_prj to the selector SEL1.
  • the selector SEL1 outputs the data Dp1_prj output from the prediction processing unit 5 to the projection control unit 1.
  • the projection control unit 1 inputs the data Dp1_prj and outputs a control signal Ctrl_prj (Dp1_prj) that controls the image (or the boundary line displayed on the projection surface) based on the data Dp1_prj to be projected from the projection unit Prj1 onto the projection target.
  • the generated control signal Ctrl_prj (Dp1_prj) is output to the projection unit Prj1.
  • the projection unit Prj1 projects an image (or a boundary line to be displayed on the projection surface) based on the data Dp1_prj on the projection surface (floor surface FLR) based on the control signal Ctl_prj (Dp1_prj).
  • an image (or boundary line) indicating that the safety region is the region Ar2 is projected on the floor surface FLR.
  • The worker Psn1 can thus easily determine, from the image (or boundary line) projected onto the floor surface FLR, where the safety area is (the area Ar2 in the case of FIG. 7), and by working and moving within the safety area, does not interfere with the operation of the robot arm Rbt_arm and does not come into contact or collide with it, so safety is ensured. Furthermore, since the robot arm Rbt_arm does not come into contact or collide with the worker, it can continue to operate at high speed, and as a result the work efficiency of the robot arm Rbt_arm can be kept high.
  • As described above, in the display control system 1000, learning processing is performed using, as training data (teacher data), an image (frame image) from which the position and state of the robot can be determined and data identifying the safety area at the time the image was acquired, and prediction processing is performed using the trained model obtained in this way.
  • Then, by performing prediction processing with the trained model on an image (frame image) captured under the same conditions as during learning, the safety area (the safety area on the projection surface) at the time the input image was captured can be predicted (identified).
  • The predicted (identified) safety area is projected onto the projection surface (floor surface FLR) by the projection unit Prj1, so the safety area can be displayed on the projection surface (floor surface FLR) in a way that the worker can recognize easily and reliably. That is, in the display control system 1000, since the learning processing and the prediction processing are performed using images taken from above, where occlusion is small, the prediction processing that dynamically identifies (predicts) the safety area according to the state of the robot Rbt and the robot arm Rbt_arm can be performed appropriately and with high accuracy, whatever that state is.
  • Although the above description deals with displaying the safety area, the present invention is not limited to this; in the display control system 1000, an image (or a boundary line) showing the danger area may be displayed on the projection surface (floor surface FLR), as shown in FIG. 9.
  • In FIG. 9, the area Ar_rb1 is a high-risk area (danger area), and the area Ar_rb2 is a danger area with a lower degree of risk than the area Ar_rb1. In this case, the safety of the worker is ensured by working, moving, and so on outside the danger areas.
  • the safety area (or the danger area) can be appropriately displayed when the robot arm and the human perform joint work. As a result, it is possible to improve work safety while ensuring high work efficiency.
  • FIG. 10 is a schematic configuration diagram of the display control system 1000A according to the first modification of the first embodiment.
  • FIG. 11 is a schematic configuration diagram of the display control device 100A according to the first modification of the first embodiment.
  • FIG. 12 is a diagram for explaining the processing of the training data acquisition mode of the display control system 1000A according to the first modification of the first embodiment.
  • FIG. 13 is a diagram (timing chart) showing an example (pattern 1) of training data used in the display control system 1000A according to the first modification of the first embodiment.
  • FIG. 14 is a diagram (timing chart) showing an example (pattern 2) of training data used in the display control system 1000A according to the first modification of the first embodiment.
  • the display control system 1000A has a configuration in which the display control device 100 is replaced with the display control device 100A in the display control system 1000 of the first embodiment. Then, as shown in FIG. 11, the display control device 100A has a configuration in which the training data acquisition unit 2 is replaced with the training data acquisition unit 2A in the display control device 100 of the first embodiment.
  • the training data acquisition unit 2A inputs the data D1_img output from the selector SEL2 and the training data D1_rb_train input to the robot control unit Rbt_C1.
  • the training data acquisition unit 2A generates training data Dtr1 for training the learning model of the prediction processing unit 5 based on the data D1_img and the training data D1_rb_train.
  • training data is acquired by associating a predetermined control sequence of the robot Rbt with a safety area (or danger area) determined accordingly.
  • For example, if it is predetermined that the robot arm Rbt_arm is controlled to operate in the order of (1) phase 1 (degree of risk: low), (2) phase 2 (degree of risk: high), and (3) phase 3 (degree of risk: low, the same as in phase 1), the safety area (or danger area) can also be determined according to the phase.
  • That is, the training data D1_rb_train for the robot Rbt is determined from the predetermined control sequence of the robot Rbt, the degree of risk is determined according to the phase determined by that control sequence, and the safety area (or danger area) is determined accordingly.
  • In the example of pattern 1 (FIG. 13), the training data D1_prj_train is generated so that the image corresponding to phase 1 (an image clearly indicating the safety area or the danger area) is projected onto the projection surface (for example, the floor surface FLR) in the period from time t0 to time t01, and the image corresponding to phase 2 is projected onto the projection surface for a period longer than phase 2 itself. In other words, the training data D1_prj_train is generated so that the projected image is switched from the phase 1 image to the phase 2 image at a time (time t01) before the time t1 at which phase 1 actually shifts to phase 2.
  • In this way, the worker can recognize, before the shift to a phase in which the safety area becomes smaller, that the safety area is about to become smaller, and as a result the safety of the worker is ensured.
  • training data D1_prj_train so that an image composed of an image region having a color (or brightness) determined according to the degree of risk is projected on a projection surface (for example, a floor surface FLR). May be generated.
  • the training data D1_rb_train for the robot Rbt is determined from a predetermined control sequence of the robot Rbt, and is determined by the control sequence.
  • the degree of danger is determined according to the phase, and the safety area (or danger area) is determined.
  • Then, in the period before the transition from Phase 1 to Phase 2, the training data D1_prj_train is generated so that an image consisting of the image area corresponding to Phase 1 and the image area corresponding to Phase 2 (an image in which the safety area or the danger area is hierarchically divided by color or brightness according to the degree of danger) is projected onto the projection surface (for example, the floor surface FLR). That is, in the case of FIG. 14, the training data D1_prj_train is generated so that, in the period from time t01 to time t1, which precedes the start time t1 of Phase 2, this combined image is projected onto the projection surface (for example, the floor surface FLR). Because the combined image is projected from time t01, before the time t1 at which the transition from Phase 1 to Phase 2 occurs, the worker can properly grasp that the degree of danger will soon change and that the safety area will change. Therefore, by learning from the training data generated in this way, it is possible to construct a trained model for prediction processing that appropriately ensures the safety of workers. A sketch of this timing logic is given below.
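For illustration only, the sketch below (reusing the assumed Phase and phase_at definitions from the earlier sketch) builds a training projection image for a time t: pattern 1 switches to the next phase's image a fixed lead time before the transition, while pattern 2 projects a combined image whose regions are hierarchically coloured by the degree of danger during that window. The image size, lead time, and colours are assumptions.

```python
import numpy as np

H, W = 480, 640     # size of the projected image in pixels (assumed)
LEAD_TIME = 2.0     # seconds before a phase change at which the display changes (assumed, = t1 - t01)

def danger_mask(radius_px: int) -> np.ndarray:
    """Boolean mask of the danger area; a centred disc stands in for the real shape."""
    yy, xx = np.mgrid[0:H, 0:W]
    return (xx - W // 2) ** 2 + (yy - H // 2) ** 2 <= radius_px ** 2

def training_projection_image(t: float, pattern: int = 2) -> np.ndarray:
    """Projection image D1_prj_train to use at time t (RGB, uint8)."""
    cur = phase_at(t)                      # from the previous sketch
    nxt = phase_at(cur.end + 1e-3)
    img = np.zeros((H, W, 3), np.uint8)
    in_window = (nxt is not cur) and (cur.end - t <= LEAD_TIME)
    if in_window and pattern == 1:
        img[danger_mask(int(nxt.danger_radius * 60))] = (255, 0, 0)    # switch early to the next phase's image
    elif in_window and pattern == 2:
        img[danger_mask(int(nxt.danger_radius * 60))] = (255, 165, 0)  # upcoming, lower layer: orange
        img[danger_mask(int(cur.danger_radius * 60))] = (255, 0, 0)    # current danger area: red
    else:
        img[danger_mask(int(cur.danger_radius * 60))] = (255, 0, 0)    # steady-state image for the phase
    return img
```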
  • As described above, in the display control system 1000A, the projection image associated with the control sequence of the robot Rbt is acquired as training data. The learning process is then performed using the training data acquired in this way, and a trained model is acquired. By executing the prediction process using this trained model, the display control system 1000A can appropriately project an image clearly indicating the safety area (or danger area) onto the projection surface (for example, the floor surface FLR). As a result, the worker can properly grasp that the safety area will change soon, and the safety of the worker is ensured.
  • Further, training data may be generated from the image projected onto the projection surface (the color and brightness of the image clearly indicating the safety area or the danger area), and the learning process may be performed based on that training data. That is, in the display control system 1000A of this modification, the training data may be acquired and the learning process may be performed based on the image (the color and brightness of the image) projected onto the projection surface, without recognizing the state of the robot arm Rbt_arm.
  • FIG. 15 is a schematic configuration diagram of the display control system 2000 according to the second embodiment.
  • FIG. 16 is a schematic configuration diagram of the display control device 100B according to the second embodiment.
  • FIG. 17 is a schematic configuration diagram of the prediction processing unit 5A of the display control device 100B according to the second embodiment.
  • the display control system 2000 has a configuration in which the display control device 100 is replaced with the display control device 100B in the display control system 1000 of the first embodiment. Then, as shown in FIG. 16, the display control device 100B has a configuration in which the prediction processing unit 5 is replaced with the prediction processing unit 5A in the display control device 100 of the first embodiment.
  • the prediction processing unit 5A includes a prediction unit 51, a detection target position determination unit 52, a safety range map generation unit 53, and a danger determination unit 54.
  • the prediction processing unit 5A outputs the data Dp1_prj of the prediction processing result to the selector SEL1 and the safety range map generation unit 53.
  • the prediction unit 51 is a functional unit that executes the same processing as the prediction processing unit 5 of the first embodiment.
  • The detection target position determination unit 52 receives the image data D1_img output from the selector SEL2, performs image recognition processing on the image data D1_img, and identifies the position, on the image, of the image area corresponding to the detection target (for example, a worker). Then, the detection target position determination unit 52 outputs data including the acquired position information of the detection target on the image to the danger determination unit 54 as data D_pos.
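The embodiments leave the concrete image recognition method open. As one hedged illustration, the sketch below locates a brightly coloured helmet or marker in the overhead frame with a simple colour threshold; the use of OpenCV and the colour range are assumptions, and a learned person detector could be substituted.

```python
import cv2
import numpy as np

def detect_target_position(frame_bgr: np.ndarray) -> tuple[int, int] | None:
    """Return the (x, y) pixel position of the detection target (e.g. a worker's
    helmet or marker) in the overhead image, or None if nothing is found.
    A simple colour threshold stands in for the unspecified recognition step."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Assumed: the helmet/marker is a saturated yellow; tune the range for the real marker.
    mask = cv2.inRange(hsv, (20, 120, 120), (35, 255, 255))
    m = cv2.moments(mask)
    if m["m00"] < 1e-3:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```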
  • The safety range map generation unit 53 receives the data Dp1_prj output from the prediction unit 51 and acquires, from the data Dp1_prj, map information specifying the safety area (information specifying the position, size, shape, and the like of the safety area). Then, the safety range map generation unit 53 outputs data including the acquired map information to the danger determination unit 54 as data D_map.
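How the map information is encoded in Dp1_prj is not fixed by the embodiments. Assuming, purely for illustration, that Dp1_prj is an RGB projection image in which the safety area is drawn in green, a boolean safety map could be derived as follows.

```python
import numpy as np

def build_safety_map(dp1_prj: np.ndarray) -> np.ndarray:
    """Boolean map D_map with True where the predicted projection image marks
    the safety area (assumed encoding: the safety area is drawn in green)."""
    r = dp1_prj[..., 0].astype(int)
    g = dp1_prj[..., 1].astype(int)
    b = dp1_prj[..., 2].astype(int)
    return (g > 128) & (r < 100) & (b < 100)
```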
  • The danger determination unit 54 receives the data D_pos output from the detection target position determination unit 52 and the data D_map output from the safety range map generation unit 53. Based on the data D_pos and the data D_map, the danger determination unit 54 determines whether or not the detection target (for example, a worker) is in the danger area (or is likely to be in it within a short period of time from the current time). When the danger determination unit 54 determines that the detection target is in the danger area (or is likely to be in it within a short period of time from the current time), it generates a warning signal Sig_wrn and outputs the generated warning signal Sig_wrn to the robot control unit Rbt_C1A.
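Continuing the same illustrative assumptions, the danger determination itself can be reduced to a map lookup at the detected position; the treatment of positions outside the mapped area is an assumption.

```python
import numpy as np

def is_dangerous(pos: tuple[int, int] | None, safety_map: np.ndarray) -> bool:
    """True when the detected target lies outside the predicted safety area
    (treated here as being in, or about to be in, the danger area)."""
    if pos is None:
        return False                 # no target detected in the field of view
    x, y = pos
    h, w = safety_map.shape
    if not (0 <= x < w and 0 <= y < h):
        return True                  # outside the mapped area: treat as dangerous
    return not bool(safety_map[y, x])
```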
  • The robot control unit Rbt_C1A has the same functions as the robot control unit Rbt_C1 of the first embodiment, and further receives the warning signal Sig_wrn output from the display control device 100B. When the warning signal Sig_wrn is input from the display control device 100B, the robot control unit Rbt_C1A determines that it is dangerous to continue the operation of the robot arm Rbt_arm, and performs a warning operation (for example, a process of generating a warning sound) and/or a risk avoidance process such as stopping the robot arm Rbt_arm.
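A minimal sketch of the risk avoidance side is shown below; the robot interface (stop()) and the warning mechanism are assumptions standing in for whatever the actual robot controller provides.

```python
import time

class RobotControllerWithGuard:
    """Wraps a robot controller so that a warning signal from the display control
    device triggers an audible warning and stops the arm (names are illustrative)."""

    def __init__(self, robot):
        self.robot = robot          # assumed to expose a stop() method

    def on_warning(self, sig_wrn: bool) -> None:
        if not sig_wrn:
            return
        self.sound_alarm()
        self.robot.stop()           # risk avoidance: halt the arm immediately

    def sound_alarm(self) -> None:
        print(f"[{time.strftime('%H:%M:%S')}] WARNING: person predicted to enter the danger area")
```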
  • FIG. 18 is a flowchart of processing in the prediction mode of the display control system 2000 according to the second embodiment.
  • FIGS. 19 and 20 are diagrams for explaining the processing of the prediction mode of the display control system 2000 according to the second embodiment.
  • the processing of the prediction mode of the display control system 2000 will be described with reference to the flowchart of FIG.
  • the processing of the training data acquisition mode and the processing of the learning mode are the same as those of the display control system 1000 of the first embodiment.
  • Here, a case where a control signal Ctrl_Rbt(phase2) is input to the robot control unit Rbt_C1A and the robot arm Rbt_arm executes a predetermined operation (this is referred to as the "Phase 2 operation") will be described.
  • the robot control unit Rbt_C1A controls the robot arm Rbt_arm so that the robot arm Rbt_arm executes an operation (phase 2 operation) according to the control signal Ctrl_Rbt (phase2).
  • the robot arm Rbt_arm performs the operation of Phase 2 in accordance with a command from the robot control unit Rbt_C1A.
  • the imaging unit Cmr1 captures the situation at that time (robot arm Rbt_arm, worker, floor surface FLR, etc.) in the same state as when the training data was acquired.
  • That is, the imaging unit Cmr1 images the robot Rbt, the floor surface FLR, and the worker (if any) from above, and continues to output the captured image (frame image) to the display control device 100B (step S1).
  • the signal value of the mode signal Mode is set to "1", so the selector SEL2 outputs the data D1_img to the prediction processing unit 5A.
  • the prediction processing unit 5A inputs the image data D1_img acquired during the period during which the robot arm Rbt_arm is performing the operation of Phase 2 into the trained model, and acquires the output from the trained model as data Dp1_prj.
  • Here, it is assumed that (1) the position and state of the robot arm Rbt_arm and (2) the position and state of the worker Psn1 are similar to (1) the position and state of the robot arm Rbt_arm and (2) the position and state of the worker Psn1 in the state in which the training data in the case of FIG. 4 (when the safety area is the area Ar2) was acquired.
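The embodiments do not fix a model framework for the prediction step. The following sketch only shows the shape of the call that maps one captured frame (D1_img) to the predicted projection data (Dp1_prj), treating the trained model as an arbitrary callable; the normalisation and batch handling are assumptions.

```python
import numpy as np

def predict_projection(frame: np.ndarray, model) -> np.ndarray:
    """Run the trained model on one overhead frame (D1_img) and return the
    predicted projection data Dp1_prj. 'model' is any callable mapping an
    HxWx3 float image to an HxWx3 projection image."""
    x = frame.astype(np.float32) / 255.0          # scale to [0, 1] (assumed preprocessing)
    y = model(x[None, ...])                       # add a batch dimension
    return (np.clip(y[0], 0.0, 1.0) * 255).astype(np.uint8)
```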
  • the prediction processing unit 5A outputs the acquired data Dp1_prj to the selector SEL1.
  • the selector SEL1 outputs the data Dp1_prj output from the prediction processing unit 5A to the projection control unit 1.
  • The projection control unit 1 receives the data Dp1_prj, generates a control signal Ctrl_prj(Dp1_prj) for controlling the projection unit Prj1 so that the image (or the boundary line to be displayed on the projection surface) based on the data Dp1_prj is projected onto the projection target, and outputs the generated control signal Ctrl_prj(Dp1_prj) to the projection unit Prj1.
  • the projection unit Prj1 projects an image (or a boundary line to be displayed on the projection surface) based on the data Dp1_prj on the projection surface (floor surface FLR) based on the control signal Ctrl_prj (Dp1_prj).
  • an image (or boundary line) indicating that the safety region is the region Ar2 is projected on the floor surface FLR.
  • the frame image data D1_img acquired by the imaging unit Cmr1 is input to the detection target position determination unit 52.
  • The detection target position determination unit 52 performs image recognition processing on the frame image data D1_img to identify the position, on the image, of the image region corresponding to the detection target (for example, the worker Psn1 in FIG. 19) (step S2). Then, the detection target position determination unit 52 outputs data including the acquired position information of the detection target on the image to the danger determination unit 54 as data D_pos.
  • The safety range map generation unit 53 receives the data Dp1_prj output from the prediction unit 51 and acquires, from the data Dp1_prj, map information specifying the safety area (information specifying the position, size, shape, and the like of the safety area) (step S3). In the case of FIG. 19, the safety range map generation unit 53 acquires map information that identifies the safety area Ar2.
  • the safety range map generation unit 53 outputs the data including the acquired map information as data D_map to the danger determination unit 54.
  • The danger determination unit 54 determines, based on the data D_pos output from the detection target position determination unit 52 and the data D_map output from the safety range map generation unit 53, whether or not the detection target (for example, the worker Psn1 in FIG. 19) is in the danger area (or is likely to be in it within a short period of time from the current time). For example, a motion vector of the detection target is acquired, and from the motion vector it is determined whether or not there is a high possibility that the detection target will be in the danger area within a short period of time from the current time (map collation processing, steps S4 and S5).
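As a hedged sketch of the motion-vector check, the position of the detection target in the previous and current frames can be extrapolated over a short horizon and tested against the safety map; the frame interval and look-ahead horizon below are assumed values.

```python
import numpy as np

def likely_in_danger_soon(prev_pos, cur_pos, safety_map, dt=0.1, horizon=1.0) -> bool:
    """Map collation sketch: extrapolate the target's motion vector over a short
    horizon and check whether the extrapolated position leaves the safety area.
    dt is the frame interval [s] and horizon the look-ahead time [s] (both assumed)."""
    if prev_pos is None or cur_pos is None:
        return False
    v = (np.array(cur_pos, float) - np.array(prev_pos, float)) / dt   # motion vector [px/s]
    future = np.array(cur_pos, float) + v * horizon
    x, y = int(round(future[0])), int(round(future[1]))
    h, w = safety_map.shape
    if not (0 <= x < w and 0 <= y < h):
        return True
    return not bool(safety_map[y, x])
```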
  • When the danger determination unit 54 determines that the detection target (for example, a worker) is in the danger area (or is likely to be in it within a short period of time from the current time), the danger determination unit 54 generates a warning signal Sig_wrn and outputs the generated warning signal Sig_wrn to the robot control unit Rbt_C1A.
  • In that case, the robot control unit Rbt_C1A determines that it is dangerous to continue the operation of the robot arm Rbt_arm, and performs a warning operation (for example, a process of generating a warning sound) and/or a risk avoidance process such as stopping the robot arm Rbt_arm (step S6).
  • In this way, even when the worker Psn1 moves from the inside of the safety area Ar2 to the outside of the safety area Ar2 and may come into contact with or collide with the robot arm Rbt_arm, the danger avoidance process can prevent the occurrence of serious accidents such as contact with or collision against the robot arm Rbt_arm. That is, also in the display control system 2000, regardless of the state of the robot Rbt and the robot arm Rbt_arm, the prediction process for dynamically identifying (predicting) the safety area according to that state can be performed appropriately and with high accuracy.
  • The present invention is not limited to this. In the display control system 2000, an image including image areas in which the degree of safety is hierarchically divided (for example, a hierarchically color-coded image, or an image whose brightness changes hierarchically) may be projected onto the projection surface (floor surface FLR). In this case, the above-mentioned process of identifying the position of the detection target (step S2) and the process of acquiring the safety range map (step S3) may be performed using image data that includes the image areas in which the degree of safety is hierarchically divided. Further, in the display control system 2000, as shown in FIG. 20, an image (or a boundary line) showing the danger area may be displayed on the projection surface (floor surface FLR).
  • In FIG. 20, the region Ar_rb1 is a region with a high degree of danger (a danger area), and the region Ar_rb2 is a danger area with a lower degree of danger than the region Ar_rb1.
  • In this case, when the detection target (for example, the worker Psn1) enters the danger range or is likely to enter the danger range, the display control device 100B only has to output the warning signal Sig_wrn to the robot control unit Rbt_C1A so that the danger avoidance process is performed.
  • The display control system and the display control device may also be configured by combining the above embodiments. For example, training data may be acquired by the same method as in the first modification of the first embodiment, the learning process may be performed using the acquired training data, and the prediction process may then be performed using the trained model acquired by that learning process.
  • the display control system may include a plurality of imaging units. Then, in the display control system configured as described above, in order to further reduce the occlusion, danger detection, danger determination processing, and the like may be performed using images captured by a plurality of cameras.
  • the imaging unit is preferably installed at a fixed position, but may be installed at a variable position.
  • In the above embodiments, the case where the safety area (or the danger area) is dynamically changed according to the operating state of the robot arm Rbt_arm and projected onto the projection surface (floor surface FLR) has been described, but the present invention is not limited to this. For example, the maximum extent of the safety area may be detected, and that maximum area may be statically displayed on the projection surface (floor surface FLR). In this case, a physically recognizable boundary line or the like may be displayed on the floor surface FLR (for example, the boundary may be illuminated by an optical fiber so that a worker or the like can easily recognize it). One possible reading of such a static area is sketched below.
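Sketched below under that assumption, a statically displayed area can be taken as the region that remains safe in every phase of the sequence, i.e. the intersection of the per-phase safety maps; this interpretation of "maximum area" is an assumption, not a definition from the embodiments.

```python
import numpy as np

def static_safety_area(per_phase_maps: list[np.ndarray]) -> np.ndarray:
    """Largest region that stays safe throughout the control sequence:
    the intersection of the boolean per-phase safety maps (True = safe)."""
    acc = np.ones_like(per_phase_maps[0], dtype=bool)
    for m in per_phase_maps:
        acc &= m
    return acc
```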
  • In the above embodiments, the case where a projector device is used as the projection unit Prj1 has been described, but the present invention is not limited to this; for example, an LED scanner, a laser scanner, or the like may be used as the projection unit Prj1.
  • The display control system may include a plurality of projection units (for example, projector devices). In a display control system configured in this way, the plurality of projection units may be used to project and clearly indicate the safety area or the danger area without occlusion.
  • the projection unit is preferably installed at a fixed position, but may be installed at a variable position.
  • The present invention is not limited to this; the number of reference points used for calibration may be two or more, or other positions may be used as reference points.
  • Each block of each of the above embodiments may be individually integrated into one chip by a semiconductor device such as an LSI, or may be integrated into one chip so as to include a part or all of the blocks. Although the term LSI is used here, it may also be called IC, system LSI, super LSI, or ultra LSI depending on the degree of integration.
  • The method of circuit integration is not limited to LSI, and may be realized by a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array), or a reconfigurable processor in which the connections and settings of the circuit cells inside the LSI can be reconfigured, may also be used.
  • A part or all of the processing of each functional block of each of the above embodiments may be realized by a program. In that case, a part or all of the processing of each functional block is performed by a central processing unit (CPU) in a computer. The program for performing each process is stored in a storage device such as a hard disk or a ROM, and is read into the ROM or the RAM and executed.
  • Each process of the above embodiments may be realized by hardware, or may be realized by software (including the case where it is realized together with an OS (operating system), middleware, or a predetermined library). Further, it may be realized by mixed processing of software and hardware.
  • For example, each functional unit of the above embodiments may be realized by software processing using the hardware configuration shown in FIG. 21 (a hardware configuration in which, for example, a CPU, a GPU, a ROM, a RAM, an input unit, and an output unit are connected by a bus). When each functional unit of the above embodiments is realized by software, the software may be realized by using a single computer having the hardware configuration shown in FIG. 21, or may be realized by distributed processing using a plurality of computers.
  • execution order of the processing methods in the above embodiment is not necessarily limited to the description of the above embodiment, and the execution order can be changed without departing from the gist of the invention.
  • a computer program that causes a computer to execute the above-mentioned method and a computer-readable recording medium that records the program are included in the scope of the present invention.
  • Examples of the computer-readable recording medium include a flexible disk, a hard disk, an SSD, a CD-ROM, an MO, a DVD, a DVD-ROM, a DVD-RAM, a Blu-ray (registered trademark) disc, a next-generation optical disc, and a semiconductor memory.
  • the computer program is not limited to the one recorded on the recording medium, and may be transmitted via a telecommunication line, a wireless or wired communication line, a network typified by the Internet, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present invention realizes a technique with which it is possible to improve safety of work while maintaining high work efficiency by suitably displaying a safe region when a robot arm and a person are performing a shared operation. In this display control system (1000), a prediction process is performed using a learned model in which an image captured by an imaging unit installed above a robot arm (Rbt_arm), such as an image in which it is possible to distinguish the position and state of the robot arm as well as the position and state of a worker, and data that specifies a safe region when the image is acquired are learned as training data. Moreover, in the display control system (1000), the prediction process using the learned model is performed on an image captured under the same conditions as those in learning, whereby the safe region at the time the input image was captured is predicted, and the safe region is projected onto a projection surface by a projection unit.

Description

Display control system and display control method
 本発明は、ロボット作業エリア内に人間が入り込む場合の危険検知について技術、および、危険領域(あるいは安全領域)表示についての技術に関する。 The present invention relates to a technique for detecting danger when a human enters the robot work area and a technique for displaying a danger area (or safety area).
In recent years, situations in which robots and humans work together have been increasing. Along with this, the possibility of collisions between humans and robots increases. For example, since the movement of a robot having a robot arm with a plurality of joints is complicated, it is difficult for a human to predict the movement of the robot arm. As a result, a sudden movement of the robot arm may cause the robot arm to come into contact with a human and cause a serious accident.
In order to ensure safety during work by a robot arm, for example, in the technique disclosed in Patent Document 1, a collision is detected by a force sensor in a joint of the robot arm, and when a collision is detected, the robot arm is stopped.
Japanese Unexamined Patent Publication No. 2006-21287
 しかしながら、従来技術のように、ロボットアームと人間とが接触(衝突)した後に安全装置を作動させる、あるいは、衝突する可能性があることを予測し、ロボットアームをゆっくり動かすように制御するのでは、作業効率が低下してしまうという問題がある。 However, as in the conventional technology, the safety device may be activated after the robot arm and a human contact (collision), or the possibility of collision may be predicted and the robot arm may be controlled to move slowly. , There is a problem that work efficiency is lowered.
 そこで、本発明は、ロボットアームと人間とが共同作業を行うときの安全領域(あるいは、危険領域)を適切に表示することで、高い作業効率を確保しつつ、作業の安全性を向上させる技術を実現することを目的とする。 Therefore, the present invention is a technique for improving work safety while ensuring high work efficiency by appropriately displaying a safety area (or a danger area) when a robot arm and a human perform joint work. The purpose is to realize.
 上記課題を解決するために、第1の発明は、ロボットアームと可動物体とが混在する可能性がある空間において、可動物体が存在していても安全であると判定される領域である安全領域を、空間内において可動物体が認識可能な投影面に、表示するための表示制御システムであって、撮像部と、予測処理部と、投影部と、を備える。 In order to solve the above problems, the first invention is a safety area which is a region where it is determined that it is safe even if a movable object exists in a space where a robot arm and a movable object may coexist. Is a display control system for displaying a moving object on a projection surface that can be recognized in space, and includes an imaging unit, a prediction processing unit, and a projection unit.
 撮像部は、ロボットアームよりも上方の位置(例えば、ロボットアームの可動範囲内の空間を撮影可能な位置)に設置される。 The imaging unit is installed at a position above the robot arm (for example, a position where the space within the movable range of the robot arm can be photographed).
 予測処理部は、(1)空間において、撮像部により、ロボットアームよりも上方の位置から撮像した画像であって、ロボットアームの所定の状態であるときに撮像した画像、または、前記ロボットアームが所定の状態となるように前記ロボットアームを制御するための制御データと、(2)ロボットアームが当該所定の状態であるときの安全領域を特定する情報とを含むデータを教師データとして学習処理を実行して取得した学習済みモデルを用いて予測処理を実行する。また、予測処理部は、予測処理時において、撮像部により、ロボットアームよりも上方の位置から撮像した画像である予測処理用画像に対して、学習済みモデルを用いた予測処理を実行することで、予測処理用画像を取得したときの空間における安全領域を予測し、予測した安全領域を予測安全領域として取得し、予測安全領域に基づいて、投影画像データを生成する。 The prediction processing unit is (1) an image captured by the imaging unit from a position above the robot arm in space, and is an image captured when the robot arm is in a predetermined state, or the robot arm is Learning processing is performed using data including control data for controlling the robot arm so as to be in a predetermined state and (2) information for specifying a safety area when the robot arm is in the predetermined state as teacher data. Prediction processing is executed using the trained model acquired by execution. Further, at the time of prediction processing, the prediction processing unit executes prediction processing using the trained model on the prediction processing image which is an image captured from a position above the robot arm by the imaging unit. , The safety area in the space when the image for prediction processing is acquired is predicted, the predicted safety area is acquired as the prediction safety area, and the projected image data is generated based on the prediction safety area.
The projection unit projects the projected image formed by the projected image data onto the projection surface.
In this display control system, prediction processing is performed using a trained model that has been trained with, as training data (teacher data), (1) an image (for example, a frame image) captured by the imaging unit installed above the robot arm, from which, for example, the position and state of the robot arm and the position and state of a movable object (for example, a worker) can be determined, or control data for controlling the robot arm so that the robot arm is in a predetermined state, and (2) data specifying the safety area when the robot arm is in that predetermined state. In this display control system, by performing prediction processing with the trained model on an image (for example, a frame image) captured in the same manner as at the time of learning, the safety area (the safety area on the projection surface) at the time the input image was captured can be predicted (specified). Then, in this display control system, the predicted (specified) safety area is projected onto the projection surface (for example, the floor surface) by the projection unit, so that the safety area can be displayed on the projection surface (for example, the floor surface FLR) in such a way that, for example, a worker can easily and reliably recognize it. In other words, since this display control system performs the learning processing and the prediction processing using images taken from above, where the shielded area is small, the prediction processing for dynamically specifying (predicting) the safety area can be performed appropriately and with high accuracy whatever the state of the robot arm, according to that state (or according to the transition status of the operation phases of the robot arm specified from the control sequence).
Therefore, in this display control system, the safety area can be appropriately displayed when the robot arm and a movable object (for example, a human) perform joint work. As a result, it is possible to improve work safety while ensuring high work efficiency.

The "movable object" is an object that can move, for example, a human or an animal that can move spontaneously.
The second invention is the first invention, in which the prediction processing unit executes prediction processing using a trained model acquired by executing learning processing using images captured by the imaging unit when training projection image data, generated so as to include a plurality of image regions each having a color specified according to the degree of safety based on the control data for controlling the robot arm, is projected onto the projection surface.

As a result, in this display control system, the learning processing can be executed using images hierarchically color-coded according to the degree of safety as training data, and the prediction processing can be executed using the acquired trained model.

The third invention is the first invention, in which the prediction processing unit executes prediction processing using a trained model acquired by executing learning processing using images captured by the imaging unit when training projection image data, generated so as to include a plurality of image regions each having a brightness specified according to the degree of safety based on the control data for controlling the robot arm, is projected onto the projection surface.

As a result, in this display control system, the learning processing can be executed using images whose brightness is hierarchically changed according to the degree of safety as training data, and the prediction processing can be executed using the acquired trained model.
The fourth invention is any one of the first to third inventions, in which the prediction processing unit generates a plurality of image regions each having a color specified according to the degree of safety, and generates the projection image data so that the image formed by the projection image data includes the generated plurality of image regions.

As a result, in this display control system, the image indicating the safety area (the projection image) can be an image consisting of a plurality of image regions hierarchically color-coded according to the degree of safety, and the worker can appropriately recognize the degree of safety from the colors projected on the projection surface. A sketch of generating such a color-coded image is given after the following note.

The colors of the plurality of image regions may change stepwise according to the degree of safety (an image in which the gradation value of each pixel takes discrete values), or may change continuously (an image in which the gradation value of each pixel takes continuous values, for example a gradation image).
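As an illustration of the stepwise variant only, a colour-coded projection image can be produced from an array of integer safety levels; the palette below is an assumption. For the continuous variant, a continuous safety score could instead be mapped through a colormap.

```python
import numpy as np

# Assumed palette: one colour per safety level (a higher index means safer).
LEVEL_COLOURS = [(255, 0, 0), (255, 165, 0), (255, 255, 0), (0, 200, 0)]

def colour_coded_image(safety_level: np.ndarray) -> np.ndarray:
    """Build a projection image from an HxW array of integer safety levels,
    colouring each image region according to its level (stepwise variant)."""
    img = np.zeros(safety_level.shape + (3,), np.uint8)
    for lvl, colour in enumerate(LEVEL_COLOURS):
        img[safety_level == lvl] = colour
    return img
```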
The fifth invention is any one of the first to third inventions, in which the projection unit generates a plurality of image regions each having a brightness specified according to the degree of safety, and generates the projection image data so that the image formed by the projection image data includes the generated plurality of image regions.

As a result, in this display control system, the image indicating the safety area (the projection image) can be an image consisting of a plurality of image regions hierarchically divided by brightness (lightness) according to the degree of safety, and the worker can appropriately recognize the degree of safety from the brightness (lightness) projected on the projection surface.

The brightness (lightness) of the plurality of image regions may change stepwise according to the degree of safety (an image in which the gradation value of each pixel takes discrete values), or may change continuously (an image in which the gradation value of each pixel takes continuous values, for example an image in which the brightness (lightness) changes continuously).
The sixth invention is a display control system for displaying a danger area, which is an area determined to be dangerous if a movable object is present, on a projection surface that the movable object can recognize in a space where a robot arm and the movable object may coexist, and the display control system includes an imaging unit, a prediction processing unit, and a projection unit.

The imaging unit is installed at a position above the robot arm.

The prediction processing unit executes prediction processing using a trained model acquired by executing learning processing with, as teacher data, data including (1) an image captured by the imaging unit from a position above the robot arm in the space when the robot arm is in a predetermined state, or control data for controlling the robot arm so that the robot arm is in the predetermined state, and (2) information specifying the danger area when the robot arm is in that predetermined state. Further, at the time of prediction processing, the prediction processing unit executes prediction processing using the trained model on a prediction processing image, which is an image captured by the imaging unit from a position above the robot arm, thereby predicting the danger area in the space at the time the prediction processing image was acquired, acquires the predicted danger area as a predicted danger area, and generates projection image data based on the predicted danger area.

The projection unit projects the projected image formed by the projected image data onto the projection surface.
In this display control system, prediction processing is performed using a trained model that has been trained with, as training data (teacher data), (1) an image (for example, a frame image) captured by the imaging unit installed above the robot arm, from which, for example, the position and state of the robot arm and the position and state of a movable object (for example, a worker) can be determined, or control data for controlling the robot arm so that the robot arm is in a predetermined state, and (2) data specifying the danger area when the robot arm is in that predetermined state. In this display control system, by performing prediction processing with the trained model on an image (for example, a frame image) captured in the same manner as at the time of learning, the danger area (the danger area on the projection surface) at the time the input image was captured can be predicted (specified). Then, in this display control system, the predicted (specified) danger area is projected onto the projection surface (for example, the floor surface) by the projection unit, so that the danger area can be displayed on the projection surface (for example, the floor surface FLR) in such a way that, for example, a worker can easily and reliably recognize it. In other words, since this display control system performs the learning processing and the prediction processing using images taken from above, where the shielded area is small, the prediction processing for dynamically specifying (predicting) the danger area can be performed appropriately and with high accuracy whatever the state of the robot arm Rbt_arm, according to that state (or according to the transition status of the operation phases of the robot arm specified from the control sequence).

Therefore, in this display control system, the danger area can be appropriately displayed when the robot arm and a movable object (for example, a human) perform joint work. As a result, it is possible to improve work safety while ensuring high work efficiency.

The "movable object" is an object that can move, for example, a human or an animal.
The seventh invention is the sixth invention, in which the prediction processing unit executes prediction processing using a trained model acquired by executing learning processing using images captured by the imaging unit when training projection image data, generated so as to include a plurality of image regions each having a color specified according to the degree of danger based on the control data for controlling the robot arm, is projected onto the projection surface.

As a result, in this display control system, the learning processing can be executed using images hierarchically color-coded according to the degree of danger as training data, and the prediction processing can be executed using the acquired trained model.

The eighth invention is the sixth invention, in which the prediction processing unit executes prediction processing using a trained model acquired by executing learning processing using images captured by the imaging unit when training projection image data, generated so as to include a plurality of image regions each having a brightness specified according to the degree of danger based on the control data for controlling the robot arm, is projected onto the projection surface.

As a result, in this display control system, the learning processing can be executed using images whose brightness is hierarchically changed according to the degree of danger as training data, and the prediction processing can be executed using the acquired trained model.
The ninth invention is any one of the sixth to eighth inventions, in which the prediction processing unit generates a plurality of image regions each having a color specified according to the degree of danger, and generates the projection image data so that the image formed by the projection image data includes the generated plurality of image regions.

As a result, in this display control system, the image indicating the danger area (the projection image) can be an image consisting of a plurality of image regions hierarchically color-coded according to the degree of danger, and the worker can appropriately recognize the degree of danger from the colors projected on the projection surface.

The colors of the plurality of image regions may change stepwise according to the degree of danger (an image in which the gradation value of each pixel takes discrete values), or may change continuously (an image in which the gradation value of each pixel takes continuous values, for example a gradation image).
The tenth invention is any one of the sixth to eighth inventions, in which the prediction processing unit generates a plurality of image regions each having a brightness specified according to the degree of danger, and generates the projection image data so that the image formed by the projection image data includes the generated plurality of image regions.

As a result, in this display control system, the image indicating the danger area (the projection image) can be an image consisting of a plurality of image regions hierarchically divided by brightness (lightness) according to the degree of danger, and the worker can appropriately recognize the degree of danger from the brightness (lightness) projected on the projection surface.

The brightness (lightness) of the plurality of image regions may change stepwise according to the degree of danger (an image in which the gradation value of each pixel takes discrete values), or may change continuously (an image in which the gradation value of each pixel takes continuous values, for example an image in which the brightness (lightness) changes continuously).
The eleventh invention is any one of the first to tenth inventions, in which the space has a floor surface, and the projection surface is the floor surface in the space.

As a result, in this display control system, the projection surface can be the floor surface.

The twelfth invention is any one of the first to eleventh inventions, in which the space has a ceiling surface, and the imaging unit is installed on the ceiling surface of the space.

As a result, in this display control system, an imaging unit installed on the ceiling surface can be used.
The thirteenth invention is any one of the first to twelfth inventions, further including a robot arm control unit that controls the robot arm. When the prediction processing unit determines that the movable object is likely to move out of the safety area, or that the movable object is likely to move into the danger area, the robot arm control unit executes a process of stopping the operation of the robot arm and/or generating a warning.

As a result, in this display control system, when the movable object moves from the inside of the safety area to the outside of the safety area and the possibility of contact or collision with the robot arm becomes high, the danger avoidance process can prevent the occurrence of serious accidents such as contact with or collision against the robot arm. That is, in this display control system, whatever the state of the robot arm, the prediction processing for dynamically specifying (predicting) the safety area according to that state can be performed appropriately and with high accuracy, and an appropriate danger avoidance process can also be performed.
The fourteenth invention is a display control method used in a display control system including an imaging unit installed at a position above a robot arm in a space where the robot arm and a movable object may coexist, and a projection unit that projects an image onto a predetermined projection surface. The display control method is a method for displaying a safety area, which is an area determined to be safe even if the movable object is present, on a projection surface that the movable object can recognize in the space, and includes a prediction processing step and a projection step.

In the prediction processing step, prediction processing is executed using a trained model acquired by executing learning processing with, as teacher data, data including (1) an image captured by the imaging unit from a position above the robot arm in the space when the robot arm is in a predetermined state, or control data for controlling the robot arm so that the robot arm is in the predetermined state, and (2) information specifying the safety area when the robot arm is in that predetermined state. Then, in the prediction processing step, at the time of prediction processing, prediction processing using the trained model is executed on a prediction processing image, which is an image captured by the imaging unit from a position above the robot arm, thereby predicting the safety area in the space at the time the prediction processing image was acquired; the predicted safety area is acquired as a predicted safety area, and projection image data is generated based on the predicted safety area.

In the projection step, the projected image formed by the projected image data is projected onto the projection surface.

As a result, it is possible to realize a display control method having the same effects as the first invention.
The fifteenth invention is a display control method used in a display control system including an imaging unit installed at a position above a robot arm in a space where the robot arm and a movable object may coexist, and a projection unit that projects an image onto a predetermined projection surface. The display control method is a method for displaying a danger area, which is an area determined to be dangerous if the movable object is present, on a projection surface that the movable object can recognize in the space, and includes a prediction processing step and a projection step.

In the prediction processing step, prediction processing is executed using a trained model acquired by executing learning processing with, as teacher data, data including (1) an image captured by the imaging unit from a position above the robot arm in the space when the robot arm is in a predetermined state, or control data for controlling the robot arm so that the robot arm is in the predetermined state, and (2) information specifying the danger area when the robot arm is in that predetermined state. Then, in the prediction processing step, at the time of prediction processing, prediction processing using the trained model is executed on a prediction processing image, which is an image captured by the imaging unit from a position above the robot arm, thereby predicting the danger area in the space at the time the prediction processing image was acquired; the predicted danger area is acquired as a predicted danger area, and projection image data is generated based on the predicted danger area.

In the projection step, the projected image formed by the projected image data is projected onto the projection surface.

As a result, it is possible to realize a display control method having the same effects as the sixth invention.
 本発明によれば、ロボットアームと人間とが共同作業を行うときの安全領域(あるいは、危険領域)を適切に表示することで、高い作業効率を確保しつつ、作業の安全性を向上させる技術を実現することができる。 According to the present invention, a technique for improving work safety while ensuring high work efficiency by appropriately displaying a safety area (or a danger area) when a robot arm and a human perform joint work. Can be realized.
FIG. 1 is a schematic configuration diagram of the display control system 1000 according to the first embodiment. FIG. 2 is a schematic configuration diagram of the display control device 100 according to the first embodiment. FIGS. 3 and 4 are diagrams for explaining the processing of the training data acquisition mode of the display control system 1000 according to the first embodiment. FIG. 5 is a diagram for explaining the processing of the learning mode of the display control system 1000 according to the first embodiment. FIGS. 6 to 9 are diagrams for explaining the processing of the prediction mode of the display control system 1000 according to the first embodiment. FIG. 10 is a schematic configuration diagram of the display control system 1000A according to the first modification of the first embodiment. FIG. 11 is a schematic configuration diagram of the display control device 100A according to the first modification of the first embodiment. FIG. 12 is a diagram for explaining the processing of the training data acquisition mode of the display control system 1000A according to the first modification of the first embodiment. FIG. 13 is a diagram (timing chart) showing an example (pattern 1) of training data used in the display control system 1000A according to the first modification of the first embodiment. FIG. 14 is a diagram (timing chart) showing an example (pattern 2) of training data used in the display control system 1000A according to the first modification of the first embodiment. FIG. 15 is a schematic configuration diagram of the display control system 2000 according to the second embodiment. FIG. 16 is a schematic configuration diagram of the display control device 100B according to the second embodiment. FIG. 17 is a schematic configuration diagram of the prediction processing unit 5A of the display control device 100B according to the second embodiment. FIG. 18 is a flowchart of the processing in the prediction mode of the display control system 2000 according to the second embodiment. FIGS. 19 and 20 are diagrams for explaining the processing of the prediction mode of the display control system 2000 according to the second embodiment. FIG. 21 is a diagram showing a CPU bus configuration.
[First Embodiment]
The first embodiment will be described below with reference to the drawings.
<1.1: Display control system configuration>
FIG. 1 is a schematic configuration diagram of the display control system 1000 according to the first embodiment.
FIG. 2 is a schematic configuration diagram of the display control device 100 according to the first embodiment.

As shown in FIG. 1, the display control system 1000 includes a projection unit Prj1, an imaging unit Cmr1, a display control device 100, and a robot Rbt.
The projection unit Prj1 is installed at a position higher than the highest point of the workers and the robot Rbt (for example, on the ceiling), and is a device that projects an image, a boundary line, or the like downward from a projection point P_prj (for example, onto the floor surface FLR). The projection unit Prj1 is realized by, for example, a projector device that projects an image onto the floor surface FLR, or a device such as an LED scanner or a laser scanner that can display a line of a predetermined color on the floor surface.
The projection unit Prj1 receives the control signal Ctl_prj(D_prj) output from the display control device 100 and, based on the control signal Ctl_prj(D_prj), projects an image, a boundary line, or the like (data D_prj) onto the projection target (for example, the floor surface).
The imaging unit Cmr1 is installed at a position higher than the highest point of the workers and the robot Rbt (for example, on the ceiling), and captures the image or the boundary line display that the projection unit Prj1 projects onto the floor surface, as well as a worker (for example, the worker Psn1 in FIG. 1), a helmet worn by the worker (for example, the helmet Hat1 worn by the worker Psn1 in FIG. 1), or a marker attached to the worker (for example, an infrared light reflection marker). The imaging unit Cmr1 is realized by, for example, an imaging device equipped with a visible light image sensor that can capture color video, or an infrared camera equipped with an infrared light image sensor that can capture infrared video.
 撮像部Cmr1は、撮像したデータ(画像データ、映像データ)をデータD1_imgとして、表示制御装置100に出力する。 The imaging unit Cmr1 outputs the captured data (image data, video data) as data D1_img to the display control device 100.
As shown in FIG. 2, the display control device 100 includes a selector SEL1, a projection control unit 1, a selector SEL2, a training data acquisition unit 2, a training data storage unit DB1, a CG synthesis unit 3 (CG: Computer Graphics), an extended training data storage unit DB2, a learning unit 4, an optimization parameter storage unit DB3, and a prediction processing unit 5.
The selector SEL1 is a selector that, in accordance with a mode signal Mode output from a control unit (not shown) that controls each functional unit of the display control device 100, selects either data D1_prj_train containing the projection data for training data acquisition or data Dp1_prj containing the projection data for prediction output from the prediction processing unit 5, and outputs the selected data to the projection control unit 1 as data D_prj. Specifically, (1) during training data acquisition, the signal value of the mode signal Mode is set to "0", and the selector SEL1 selects the data D1_prj_train containing the projection data for training data acquisition and outputs it to the projection control unit 1 as the data D_prj; (2) during prediction, the signal value of the mode signal Mode is set to "1", and the selector SEL1 selects the data Dp1_prj containing the projection data for prediction output from the prediction processing unit 5 and outputs the data Dp1_prj to the projection control unit 1 as the data D_prj.
The projection control unit 1 receives the data D_prj output from the selector SEL1, generates a control signal Ctl_prj(D_prj) that controls the projection unit Prj1 so that the projection data contained in the data D_prj (image data for projection, or data of a boundary line to be displayed on the projection surface) is projected from the projection unit Prj1 onto the projection target, and outputs the generated control signal Ctl_prj(D_prj) to the projection unit Prj1.
The selector SEL2 is a selector that, in accordance with the mode signal Mode output from the control unit (not shown) that controls each functional unit of the display control device 100, outputs the data D1_img output from the imaging unit Cmr1 to either the training data acquisition unit 2 or the prediction processing unit 5. Specifically, (1) during training data acquisition, the signal value of the mode signal Mode is set to "0", and the selector SEL2 outputs the data D1_img to the training data acquisition unit 2; (2) during prediction, the signal value of the mode signal Mode is set to "1", and the selector SEL2 outputs the data D1_img to the prediction processing unit 5.
The training data acquisition unit 2 receives the data D1_img output from the selector SEL2 and generates, based on the data D1_img, training data Dtr1 for training the learning model of the prediction processing unit 5. The training data acquisition unit 2 then stores the generated training data Dtr1 in the training data storage unit DB1.
The training data storage unit DB1 stores the training data Dtr1 output from the training data acquisition unit 2 in accordance with an instruction from the training data acquisition unit 2. The training data storage unit DB1 also outputs the stored training data Dtr1 to the CG synthesis unit 3 in accordance with an instruction from the CG synthesis unit 3.
The CG synthesis unit 3 reads (acquires) the training data Dtr1 from the training data storage unit DB1. The CG synthesis unit 3 then creates CG-synthesized training data using the training data Dtr1 and stores the created data in the extended training data storage unit DB2 as extended training data Dtr2. The CG synthesis unit 3 may also include the training data Dtr1 used to create the extended training data Dtr2 in the extended training data Dtr2 and store it in the extended training data storage unit DB2.
The extended training data storage unit DB2 stores the extended training data Dtr2 output from the CG synthesis unit 3 in accordance with an instruction from the CG synthesis unit 3. The extended training data storage unit DB2 also outputs the stored extended training data Dtr2 to the learning unit 4 in accordance with an instruction from the learning unit 4.
The learning unit 4 reads (acquires) the extended training data Dtr2 from the extended training data storage unit DB2 and performs learning processing using the extended training data Dtr2. Through the learning processing, the learning unit 4 acquires the parameter that optimizes the learning model (optimization parameter θ_opt) and stores the acquired optimization parameter θ_opt in the optimization parameter storage unit DB3.
The optimization parameter storage unit DB3 stores the optimization parameter θ_opt output from the learning unit 4 in accordance with an instruction from the learning unit 4. The optimization parameter storage unit DB3 also outputs the stored optimization parameter θ_opt to the prediction processing unit 5 in accordance with an instruction from the prediction processing unit 5.
The prediction processing unit 5 reads the optimization parameter θ_opt from the optimization parameter storage unit DB3 and acquires a trained model based on the optimization parameter θ_opt. The prediction processing unit 5 receives the data D1_img output from the selector SEL2 and executes prediction processing on the data D1_img using the trained model. Based on the prediction processing result, the prediction processing unit 5 generates data Dp1_prj (data to be projected from the projection unit Prj1) to be output to the projection control unit 1. The prediction processing unit 5 then outputs the generated data Dp1_prj to the selector SEL1.
The robot Rbt includes a robot control unit Rbt_C1 and a robot arm Rbt_arm.
The robot control unit Rbt_C1 is a functional unit for controlling the robot arm Rbt_arm. The robot control unit Rbt_C1 receives training data D1_rb_train or a predetermined control signal, and generates a control signal for controlling the robot arm Rbt_arm based on the input training data D1_rb_train or the predetermined control signal.
The robot arm Rbt_arm performs a predetermined operation (for example, grasping or transporting an object) based on a command (control signal) from the robot control unit Rbt_C1.
<1.2: Operation of the display control system>
The operation of the display control system 1000 configured as described above will be described below.
In the following, the operation of the display control system 1000 is described separately for (1) processing in the training data acquisition mode, (2) processing in the learning mode, and (3) processing in the prediction mode.
For convenience of explanation, the following description assumes that the display control system 1000 is installed in a confined space (indoor space) such as a factory, that the projection unit Prj1 and the imaging unit Cmr1 are installed on the ceiling, and that the worker Psn1 and the robot Rbt are on the floor. The projection unit Prj1 uses the floor surface FLR as its projection target.
(1.2.1: Processing in the training data acquisition mode)
First, the processing in the training data acquisition mode will be described.
FIGS. 3 and 4 are diagrams for explaining the processing in the training data acquisition mode of the display control system 1000 according to the first embodiment.
As shown in FIG. 3, in order to acquire training data, data D1_rb_train for causing the robot arm Rbt_arm to execute a predetermined operation (referred to as a first operation) is input to the robot control unit Rbt_C1. In addition, data D1_prj_train is input to the projection control unit 1 of the display control device 100 for displaying, on the floor surface that is the projection target, the safety area when the robot arm Rbt_arm performs the first operation, that is, the area in which it is safe for a worker (for example, the worker Psn1 in FIG. 3) to be present (the area in which the worker can work safely without coming into contact with the robot arm Rbt_arm while it performs the first operation).
The robot arm Rbt_arm performs the first operation based on the data D1_rb_train. While the robot arm Rbt_arm is performing the first operation, the projection unit Prj1 displays, on the floor surface FLR that is the projection target, an image or a boundary line indicating the safety area for the first operation, based on the data D1_prj_train input to the projection control unit 1 of the display control device 100. In the display control device 100, the projection control unit 1 controls the projection unit Prj1 so as to project onto the floor surface, for example, a predetermined test image containing a test pattern whose size in the image is known, and the distance from the projection point P_prj of the projection unit Prj1 to the projection surface (floor surface FLR) is obtained by examining the size of the test pattern in the image (captured image) of the test image acquired by the imaging unit Cmr1. The imaging parameters of the imaging unit Cmr1 (angle of view, focal length, and the like) are assumed to be known. The display control device 100 then controls the projection control unit 1 based on the obtained distance so that the image or boundary line indicating the safety area is projected onto the floor surface FLR, which is the projection surface.
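As one illustration of how the obtained projection distance can be used to place the safety-area boundary on the floor, the following Python sketch maps floor-plane coordinates (in meters) to projector pixel coordinates under the simplifying assumptions of an ideal pinhole projector pointing straight down; the function name, the focal-length and principal-point values, and the example boundary are hypothetical and are not part of the embodiment.

```python
import numpy as np

def floor_to_projector_pixels(points_m, distance_m, fx, fy, cx, cy):
    """Map floor-plane points (meters, relative to the point directly below
    the projector) to projector pixel coordinates, assuming the projector
    points straight down and behaves as an ideal pinhole device.

    points_m   : (N, 2) array of (X, Y) offsets on the floor in meters
    distance_m : projection distance from P_prj to the floor surface FLR
    fx, fy     : projector focal lengths in pixels (assumed values)
    cx, cy     : projector principal point in pixels (assumed values)
    """
    pts = np.asarray(points_m, dtype=float)
    u = cx + fx * pts[:, 0] / distance_m   # lateral offset scales as 1/distance
    v = cy + fy * pts[:, 1] / distance_m
    return np.stack([u, v], axis=1)

# Example: boundary of a rectangular safety area whose nearest corner is 0.5 m
# from the point directly below the projector (illustrative numbers only).
boundary_m = [(0.5, 0.5), (2.5, 0.5), (2.5, 2.0), (0.5, 2.0)]
pixels = floor_to_projector_pixels(boundary_m, distance_m=3.0,
                                   fx=1200.0, fy=1200.0, cx=960.0, cy=540.0)
print(pixels)  # pixel coordinates at which the boundary line would be drawn
```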
In the case shown in FIG. 3, the area Ar1 is the safety area while the robot arm Rbt_arm is performing the first operation, and a boundary line indicating the safety area is projected from the projection unit Prj1 onto the floor surface FLR.
In this state, the worker Psn1 is asked to wear the helmet Hat1 (for example, a yellow helmet) and to work or move within the safety area (within the area Ar1), and the situation at that time is imaged by the imaging unit Cmr1. For convenience of explanation, the following description assumes that the imaging unit Cmr1 is an imaging device equipped with a visible-light image sensor that can capture color video. In addition, the camera parameters of the imaging unit Cmr1 (direction of the optical axis, angle of view, focal length, and the like) are assumed to be adjusted so that the safety area (the area Ar1 in the case of FIG. 3) and the robot arm Rbt_arm are included in the captured image (captured video).
The imaging unit Cmr1 outputs the video capturing the situation while the robot arm Rbt_arm is performing the first operation to the training data acquisition unit 2 as per-frame images (frame images) (data D1_img).
During training data acquisition, the signal value of the mode signal Mode is set to "0", so the selector SEL2 outputs the data D1_img to the training data acquisition unit 2.
The training data acquisition unit 2 receives the data D1_img output from the selector SEL2 and generates, based on the data D1_img, the training data Dtr1 for training the learning model of the prediction processing unit 5. Specifically, the training data acquisition unit 2 stores the data D1_img (frame image data) as it is, as actual image data, in the training data storage unit DB1 as the training data Dtr1. Alternatively, the training data acquisition unit 2 may obtain, from the data D1_img (frame image data), an image in which a predetermined image feature quantity has been extracted (for example, an image in which the image feature quantity is set to the same color as the helmet worn by the worker and the image region of that color is extracted), combine it with the original image D1_img (frame image data) (actual image data), and generate the combined image as the training data Dtr1.
The training data acquisition unit 2 may also include in the training data Dtr1, together with the image data, information on the state at the time the image was acquired (additional information (label information)). For example, information about the operation phase of the robot arm Rbt_arm, information indicating that the safety area Ar1 is correct for that image, and the like may be included in the training data Dtr1 as additional information together with the image data.
The training data acquisition unit 2 stores the generated training data Dtr1 in the training data storage unit DB1.
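The feature extraction and compositing described above could, for example, be realized by color segmentation. The following is a minimal sketch assuming an HSV threshold around a yellow helmet; the threshold values, the blending weights, the file names, and the function name are illustrative assumptions, not part of the embodiment.

```python
import cv2
import numpy as np

def make_training_frame(frame_bgr, helmet_hsv_lo=(20, 100, 100), helmet_hsv_hi=(35, 255, 255)):
    """Extract the helmet-colored region from a captured frame and blend it
    back onto the original frame, producing one candidate training image Dtr1.

    frame_bgr                     : captured frame D1_img (BGR, as read by OpenCV)
    helmet_hsv_lo / helmet_hsv_hi : HSV range assumed to cover a yellow helmet
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(helmet_hsv_lo), np.array(helmet_hsv_hi))
    helmet_only = cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)  # feature image
    # Blend the extracted feature image with the original frame (7:3 weighting).
    return cv2.addWeighted(frame_bgr, 0.7, helmet_only, 0.3, 0.0)

# frame = cv2.imread("frame_0001.png")   # one frame of D1_img (hypothetical file)
# dtr1 = make_training_frame(frame)
# cv2.imwrite("dtr1_0001.png", dtr1)     # stored in DB1 together with label information
```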
The CG synthesis unit 3 reads (acquires) the training data Dtr1 from the training data storage unit DB1. The CG synthesis unit 3 then creates CG-synthesized training data using the training data Dtr1. For example, in the image data of the training data Dtr1, the CG synthesis unit 3 generates, by CG processing, a CG image region in which the image region of the worker's helmet is given a color different from that of the helmet, and generates CG image data by replacing the image region of the helmet in the actual image with this CG image region. In other words, this CG image data is an image in which the color of the helmet has been changed by CG processing. In the same way, the CG synthesis unit 3 generates a variety of CG composite images from the original image by CG processing (CG synthesis processing) such as changing the helmet color to other colors or textures, or replacing the helmet with the worker's hair (the state of not wearing a helmet). By processing in this way, a large amount of training data can be obtained from a single item of training data Dtr1. The CG synthesis unit 3 then adds the CG image data generated as described above to the original image data to generate the extended training data Dtr2, and stores the generated extended training data Dtr2 in the extended training data storage unit DB2.
As a method of obtaining a large amount of image data (training image data) from the original image data by CG processing (CG synthesis processing), the method described in Japanese Patent Application No. 2019-008307 may be used.
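The following is a minimal sketch of the kind of helmet color replacement described above, realized as a hue shift applied only to the masked helmet region; the hue offsets, the mask source, and the function name are illustrative assumptions and the sketch does not represent the method of Japanese Patent Application No. 2019-008307.

```python
import cv2
import numpy as np

def augment_helmet_colors(frame_bgr, helmet_mask, hue_shifts=(0, 30, 60, 90, 120, 150)):
    """Generate several variants of one training frame by re-coloring only the
    helmet region (identified by helmet_mask), approximating the CG color
    replacement described above.

    frame_bgr   : original training image Dtr1
    helmet_mask : 8-bit mask (255 inside the helmet region), e.g. from cv2.inRange
    hue_shifts  : hue offsets (OpenCV hue range 0-179) applied to the helmet region
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    variants = []
    region = helmet_mask > 0
    for shift in hue_shifts:
        hsv_mod = hsv.copy()
        hsv_mod[..., 0][region] = (hsv_mod[..., 0][region].astype(int) + shift) % 180
        variants.append(cv2.cvtColor(hsv_mod, cv2.COLOR_HSV2BGR))
    return variants  # together with the original image, these form part of Dtr2
```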
By processing as described above, the display control system 1000 acquires training data (learning data) for the case of FIG. 3 (the case where the robot arm Rbt_arm is performing the first operation).
The display control system 1000 further images the state in which the robot arm Rbt_arm is performing a second operation different from the first operation, and acquires additional training data.
As shown in FIG. 4, in order to acquire training data, data D2_rb_train for causing the robot arm Rbt_arm to execute a second operation different from the first operation is input to the robot control unit Rbt_C1. In addition, data D2_prj_train is input to the projection control unit 1 of the display control device 100 for displaying, on the floor surface that is the projection target, the safety area when the robot arm Rbt_arm performs the second operation, that is, the area in which it is safe for a worker (for example, the worker Psn1 in FIG. 4) to be present (the area in which the worker can work safely without coming into contact with the robot arm Rbt_arm while it performs the second operation).
The robot arm Rbt_arm performs the second operation based on the data D2_rb_train. While the robot arm Rbt_arm is performing the second operation, the projection unit Prj1 displays, on the floor surface FLR that is the projection target, an image or a boundary line indicating the safety area for the second operation, based on the data D2_prj_train input to the projection control unit 1 of the display control device 100. In the case shown in FIG. 4, the area Ar2 is the safety area while the robot arm Rbt_arm is performing the second operation, and a boundary line indicating the safety area is projected from the projection unit Prj1 onto the floor surface FLR.
In this state, the worker Psn1 is asked to wear the helmet Hat1 (for example, a yellow helmet) and to work or move within the safety area (within the area Ar2), and the situation at that time is imaged by the imaging unit Cmr1.
Then, by executing the same processing as the processing for acquiring training data while the robot arm Rbt_arm performs the first operation, the display control system 1000 acquires extended training data Dtr2 for the case where the robot arm Rbt_arm performs the second operation, and this extended training data Dtr2 is stored in the extended training data storage unit DB2.
As described above, the display control system 1000 executes the processing of the training data acquisition mode and acquires the training data (learning data) (extended training data Dtr2).
In the above, the case has been described in which, in the display control system 1000, an image or a boundary line indicating the safety area is projected from the projection unit Prj1 and displayed on the floor surface in order to acquire training data, but the present invention is not limited to this. For example, in the display control system 1000, video (frame images) may be acquired by the imaging unit Cmr1 without projecting an image or boundary line indicating the safety area onto the floor surface from the projection unit Prj1, and an image or boundary line indicating the safety area may be composited (for example, by CG processing) onto the acquired frame images to generate the training data (image data for training). The safety area may also be specified manually or automatically (for example, by calculation) according to the position and state of the worker and the robot arm Rbt_arm in the acquired frame images.
(1.2.2: Processing in the learning mode)
Next, the processing in the learning mode will be described.
FIG. 5 is a diagram for explaining the processing in the learning mode of the display control system 1000 according to the first embodiment.
As shown in FIG. 5, the learning unit 4 reads (acquires) the extended training data Dtr2 from the extended training data storage unit DB2 and performs learning processing using the extended training data Dtr2. The learning unit 4 performs the learning processing with the extended training data Dtr2 as teacher data. Specifically, the learning unit 4 trains a learning model (for example, a model realized by a neural network) whose input is the image data contained in the extended training data Dtr2 (this image data is denoted "Dtr2.img") and whose output is information for specifying the safety area (the safety area to be displayed on the projection surface) at the time the image data was acquired. The learning model is, for example, a neural network model including an input layer, a plurality of intermediate layers, and an output layer. The weighting coefficients between the layers of the learning model (the weights of the synaptic connections between the layers) are set (adjusted) by a parameter θ.
Let x be the set of input data Dtr2.img to the learning model, let y be the set of output data from the learning model, and let P(y|x) be the conditional probability that the output data y is output when the input data x is input to the learning model. The learning unit 4 then obtains the optimum parameter θ_opt satisfying

\[
\theta_{\mathrm{opt}} = \mathop{\mathrm{argmax}}\limits_{\theta} P(y \mid x)
\]

by repeating the processing of updating (adjusting) the parameters of the learning model. Note that the conditional probability P(y|x) takes a larger value the closer the output data is to the teacher data.
For example, the conditional probability P(y|x) is set as follows:

\[
P(y \mid x) = \prod_{i} \frac{1}{\sqrt{2\pi\sigma^{2}}} \exp\!\left( -\frac{\lVert y_{i\_correct} - H(x_{i};\,\theta) \rVert^{2}}{2\sigma^{2}} \right)
\]

σ: standard deviation
Here, x_i is a vector included in the set x (data in which the data of each pixel of a two-dimensional image is rearranged into a one-dimensional vector), y_i is a vector included in the set y, and y_{i_correct} is the teacher data (correct-answer data) (vector data) for the input x_i. H(x_i; θ) denotes an operator corresponding to the processing of applying, for example, a multi-layer neural network to the input x_i and obtaining an output. The parameter θ is, for example, a parameter that determines the weighting of the synaptic connections of that neural network. Note that H(x_i; θ) may include nonlinear operations.
The learning unit 4 performs learning processing on the learning model using the extended training data and, when it determines that the learning has sufficiently converged, acquires the parameter θ set in the learning model at that time as the optimization parameter θ_opt, and stores the acquired optimization parameter θ_opt in the optimization parameter storage unit DB3.
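The following is a minimal sketch of how the parameter update described above could be carried out. Under the Gaussian form of P(y|x) with a fixed σ, maximizing P(y|x) is equivalent to minimizing the squared error between the model output H(x_i; θ) and the teacher data y_i_correct. The network architecture, layer sizes, optimizer, and names are illustrative assumptions and do not define the learning model of the embodiment.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for H(x; theta): a small fully connected network mapping a
# flattened frame image to a vector encoding the safety area (e.g. boundary points).
class SafetyAreaModel(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def train(model, loader, epochs=50, lr=1e-3):
    """Repeatedly update theta; with fixed sigma, maximizing the Gaussian-form
    P(y|x) is equivalent to minimizing the squared error to the teacher data."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x_i, y_i_correct in loader:      # pairs taken from the extended data Dtr2
            opt.zero_grad()
            loss = loss_fn(model(x_i), y_i_correct)
            loss.backward()
            opt.step()
    return model.state_dict()                # corresponds to the optimized theta_opt

# model = SafetyAreaModel(in_dim=64 * 64, out_dim=8)   # hypothetical sizes
# theta_opt = train(model, loader)                     # stored in DB3 in the embodiment
```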
As described above, the display control system 1000 executes the processing of the learning mode.
(1.2.3: Processing in the prediction mode)
Next, the processing in the prediction mode will be described.
FIGS. 6 to 9 are diagrams for explaining the processing in the prediction mode of the display control system 1000 according to the first embodiment.
As shown in FIG. 6, the case will be described in which a control signal Ctrl_Rbt(phase1) is input to the robot control unit Rbt_C1 and the robot arm Rbt_arm executes a predetermined operation (referred to as the "phase 1 operation").
In this case, the robot control unit Rbt_C1 controls the robot arm Rbt_arm so that the robot arm Rbt_arm executes the operation according to the control signal Ctrl_Rbt(phase1) (the phase 1 operation).
The robot arm Rbt_arm performs the phase 1 operation in accordance with the command from the robot control unit Rbt_C1.
The imaging unit Cmr1 then images the situation at that time (the robot arm Rbt_arm, the worker, the floor surface FLR, and the like) under the same conditions as when the training data was acquired. Since the positional relationship among the projection unit Prj1, the imaging unit Cmr1, the robot Rbt, and the floor surface FLR may shift, calibration (adjustment of the relative positional relationship) may be performed before executing the processing of the prediction mode. For example, two different points on the pedestal of the robot Rbt lying in a plane parallel to the floor surface FLR (for example, the points P1 and P2 shown in FIG. 6) are detected in the image captured by the imaging unit Cmr1, and the direction of the optical axis, camera angle, angle of view, and the like of the imaging unit Cmr1 are adjusted so that the image positions corresponding to these two points coincide with the positions corresponding to the same two points in the images from which the display control device 100 acquired the training data. By this calibration (adjustment of the relative positional relationship), the images acquired by the imaging unit Cmr1 in the display control system 1000 are captured under the same conditions as the training images, that is, with the same positional relationship within the image, so the accuracy of the prediction processing is improved. To increase the calibration accuracy, a group of three or more points may be used as calibration points, and a method such as using the average value of the calibration parameters (direction of the optical axis, camera angle, angle of view, and the like) generated from that point group may be adopted.
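The calibration described above adjusts the camera parameters themselves; as a purely software-side approximation of the same idea, the captured frame can instead be warped so that the detected reference points coincide with their training-time positions. The following sketch assumes a 2D similarity transform estimated from matched points; the function name and the numerical coordinates are illustrative assumptions, not part of the embodiment.

```python
import cv2
import numpy as np

def align_to_training_view(frame_bgr, pts_current, pts_training):
    """Estimate a similarity transform from reference points detected in the
    current frame (e.g. P1, P2 and further points on the robot pedestal) to the
    positions those points had in the training-time frames, and warp the current
    frame so that its in-image positional relationships match the training images.

    pts_current  : (N, 2) pixel coordinates of the reference points now
    pts_training : (N, 2) pixel coordinates of the same points at training time
    """
    src = np.asarray(pts_current, dtype=np.float32)
    dst = np.asarray(pts_training, dtype=np.float32)
    # With three or more points, the least-squares fit averages out detection noise,
    # analogous to averaging the calibration parameters over a point group.
    m, _inliers = cv2.estimateAffinePartial2D(src, dst)
    h, w = frame_bgr.shape[:2]
    return cv2.warpAffine(frame_bgr, m, (w, h))

# aligned = align_to_training_view(frame,
#                                  [(410, 302), (655, 298), (530, 480)],   # detected now
#                                  [(400, 300), (650, 300), (525, 482)])   # training-time positions
```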
After the calibration is executed, while the robot arm Rbt_arm is performing the phase 1 operation, the imaging unit Cmr1 images the robot Rbt, the floor surface FLR, and the worker (if present) from above, and continues to output the captured images (frame images) to the display control device 100.
During prediction processing (prediction mode), the signal value of the mode signal Mode is set to "1", so the selector SEL2 outputs the data D1_img to the prediction processing unit 5.
When the processing mode of the display control device 100 is set to the prediction mode, the prediction processing unit 5 reads the optimization parameter θ_opt from the optimization parameter storage unit DB3 and acquires the trained model based on the optimization parameter θ_opt. That is, the prediction processing unit 5 sets the parameter θ of the learning model to the optimization parameter θ_opt. The trained model is thereby constructed, and the prediction processing unit 5 inputs the image data D1_img output from the selector SEL2 into the trained model.
The prediction processing unit 5 inputs the image data D1_img acquired while the robot arm Rbt_arm is performing the phase 1 operation into the trained model, and acquires the output from the trained model as the data Dp1_prj. For example, in the case shown in FIG. 6, (1) the position and state of the robot arm Rbt_arm and (2) the position and state of the worker Psn1 are similar to (1) the position and state of the robot arm Rbt_arm and (2) the position and state of the worker Psn1 in the case of FIG. 3 (the state in which the training data was acquired when the safety area was the area Ar1). Therefore, when image data D1_img captured in this state is input to the trained model of the prediction processing unit 5, data indicating that the safety area is the area Ar1 (data Dp1_prj for displaying the area Ar1 on the floor surface FLR) is output from the trained model.
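A minimal sketch of this prediction step, under the same assumptions as the training sketch above (the same hypothetical network, the optimization parameter stored as a state dict, and a flattened frame tensor as input), could look as follows; the names and file path are illustrative.

```python
import torch

def predict_projection_data(model, theta_opt, frame_tensor):
    """Run the trained model on one captured frame and return the prediction
    result that would be turned into the projection data Dp1_prj.

    model        : the same network architecture used during learning
    theta_opt    : parameters read from the optimization parameter store (DB3)
    frame_tensor : flattened frame image D1_img as a 1-D float tensor
    """
    model.load_state_dict(theta_opt)   # set theta to theta_opt -> trained model
    model.eval()
    with torch.no_grad():
        out = model(frame_tensor.unsqueeze(0))  # add a batch dimension
    return out.squeeze(0)              # e.g. parameters of the safety-area boundary

# dp1_prj = predict_projection_data(model, torch.load("theta_opt.pt"), frame_tensor)
```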
The prediction processing unit 5 then outputs the acquired data Dp1_prj to the selector SEL1. In the prediction mode, the signal value of the mode signal Mode is "1", so the selector SEL1 outputs the data Dp1_prj output from the prediction processing unit 5 to the projection control unit 1.
The projection control unit 1 receives the data Dp1_prj, generates a control signal Ctl_prj(Dp1_prj) that controls the projection unit Prj1 so that the image based on the data Dp1_prj (or the boundary line to be displayed on the projection surface) is projected from the projection unit Prj1 onto the projection target, and outputs the generated control signal Ctl_prj(Dp1_prj) to the projection unit Prj1.
Based on the control signal Ctl_prj(Dp1_prj), the projection unit Prj1 then projects the image based on the data Dp1_prj (or the boundary line to be displayed on the projection surface) onto the projection surface (floor surface FLR). In the case shown in FIG. 6, an image (or a boundary line) indicating that the safety area is the area Ar1 is projected onto the floor surface FLR.
From the image (or boundary line) projected onto the floor surface FLR, the worker Psn1 can easily determine where the safety area (the area Ar1 in the case of FIG. 6) is, and by working or moving within the safety area, the worker neither interferes with the operation of the robot arm Rbt_arm nor comes into contact or collides with the robot arm Rbt_arm. Safety is therefore ensured. In addition, since the robot arm Rbt_arm no longer comes into contact or collides with the worker, it can continue operating at high speed, and as a result, the work efficiency of the robot arm Rbt_arm can also be maintained at a high level.
Next, as shown in FIG. 7, the case will be described in which a control signal Ctrl_Rbt(phase2) is input to the robot control unit Rbt_C1 and the robot arm Rbt_arm executes a predetermined operation (referred to as the "phase 2 operation").
In this case, the robot control unit Rbt_C1 controls the robot arm Rbt_arm so that the robot arm Rbt_arm executes the operation according to the control signal Ctrl_Rbt(phase2) (the phase 2 operation).
The robot arm Rbt_arm performs the phase 2 operation in accordance with the command from the robot control unit Rbt_C1.
The imaging unit Cmr1 then images the situation at that time (the robot arm Rbt_arm, the worker, the floor surface FLR, and the like) under the same conditions as when the training data was acquired.
While the robot arm Rbt_arm is performing the phase 2 operation, the imaging unit Cmr1 images the robot Rbt, the floor surface FLR, and the worker (if present) from above, and continues to output the captured images (frame images) to the display control device 100.
During prediction processing (prediction mode), the signal value of the mode signal Mode is set to "1", so the selector SEL2 outputs the data D1_img to the prediction processing unit 5.
The prediction processing unit 5 inputs the image data D1_img acquired while the robot arm Rbt_arm is performing the phase 2 operation into the trained model, and acquires the output from the trained model as the data Dp1_prj. For example, in the case shown in FIG. 7, (1) the position and state of the robot arm Rbt_arm and (2) the position and state of the worker Psn1 are similar to (1) the position and state of the robot arm Rbt_arm and (2) the position and state of the worker Psn1 in the case of FIG. 4 (the state in which the training data was acquired when the safety area was the area Ar2). Therefore, when image data D1_img captured in this state is input to the trained model of the prediction processing unit 5, data indicating that the safety area is the area Ar2 (data Dp1_prj for displaying the area Ar2 on the floor surface FLR) is output from the trained model.
The prediction processing unit 5 then outputs the acquired data Dp1_prj to the selector SEL1. In the prediction mode, the signal value of the mode signal Mode is "1", so the selector SEL1 outputs the data Dp1_prj output from the prediction processing unit 5 to the projection control unit 1.
The projection control unit 1 receives the data Dp1_prj, generates a control signal Ctl_prj(Dp1_prj) that controls the projection unit Prj1 so that the image based on the data Dp1_prj (or the boundary line to be displayed on the projection surface) is projected from the projection unit Prj1 onto the projection target, and outputs the generated control signal Ctl_prj(Dp1_prj) to the projection unit Prj1.
Based on the control signal Ctl_prj(Dp1_prj), the projection unit Prj1 then projects the image based on the data Dp1_prj (or the boundary line to be displayed on the projection surface) onto the projection surface (floor surface FLR). In the case shown in FIG. 7, an image (or a boundary line) indicating that the safety area is the area Ar2 is projected onto the floor surface FLR.
From the image (or boundary line) projected onto the floor surface FLR, the worker Psn1 can easily determine where the safety area (the area Ar2 in the case of FIG. 7) is, and by working or moving within the safety area, the worker neither interferes with the operation of the robot arm Rbt_arm nor comes into contact or collides with the robot arm Rbt_arm. Safety is therefore ensured. In addition, since the robot arm Rbt_arm no longer comes into contact or collides with the worker, it can continue operating at high speed, and as a result, the work efficiency of the robot arm Rbt_arm can also be maintained at a high level.
As described above, the display control system 1000 performs prediction processing using a trained model that has been trained with, as training data (teacher data), images (frame images) captured by the imaging unit Cmr1 installed above the robot arm Rbt_arm and the worker, from which the positions and states of the robot Rbt and the robot arm Rbt_arm and the position and state of the worker can be determined, together with data specifying the safety area at the time each image was acquired. By performing prediction processing with the trained model on images (frame images) captured under the same conditions as during learning, the display control system 1000 can predict (specify) the safety area (the safety area on the projection surface) at the time the input image was captured. The display control system 1000 then projects the predicted (specified) safety area onto the projection surface (floor surface FLR) with the projection unit Prj1, so that the safety area is displayed on the projection surface (floor surface FLR) in a way that the worker can recognize easily and reliably. In other words, because the display control system 1000 performs the learning processing and the prediction processing using images captured from above, where occluded regions are small, the prediction processing that dynamically specifies (predicts) the safety area according to the state of the robot Rbt and the robot arm Rbt_arm, whatever that state may be, can be performed appropriately and with high accuracy.
In the above, the case in which a boundary line is displayed on the projection surface (floor surface FLR) has been described, but the present invention is not limited to this. As shown in FIG. 8, the display control system 1000 may project onto the projection surface (floor surface FLR) an image having image regions that divide the degree of safety hierarchically (for example, an image color-coded hierarchically, or an image whose luminance is changed hierarchically). In this way, the worker can also grasp the degree of safety when recognizing the safety area.
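The following sketch shows one way such a hierarchically color-coded projection image could be composed; the per-pixel distance map used here is only an assumed stand-in for whatever criterion assigns the risk levels (in the embodiment that information comes from the prediction result), and the thresholds and colors are illustrative.

```python
import numpy as np

def safety_level_image(dist_to_robot_m, thresholds=(1.0, 2.0),
                       colors=((0, 0, 255), (0, 255, 255), (0, 255, 0))):
    """Build a hierarchically color-coded projection image from a per-pixel map of
    distances (in meters) from the robot arm on the floor plane.

    dist_to_robot_m : (H, W) float array of distances
    thresholds      : distance boundaries between risk levels (assumed values)
    colors          : BGR colors for high / medium / low risk regions
    """
    h, w = dist_to_robot_m.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    level = np.digitize(dist_to_robot_m, thresholds)   # 0, 1, 2 = high, medium, low risk
    for lv, color in enumerate(colors):
        out[level == lv] = color
    return out
```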
In the above, the case in which the safety area is displayed from the projection unit Prj1 has been described, but the present invention is not limited to this. As shown in FIG. 9, the display control system 1000 may display an image (or a boundary line) indicating a danger area on the projection surface (floor surface FLR). In the case of FIG. 9, the area Ar_rb1 is an area of high risk (danger area), and the area Ar_rb2 is an area of lower risk (danger area) than the area Ar_rb1. In this case, the safety of the worker is ensured by working, moving, and so on outside the danger areas.
As described above, the display control system 1000 can appropriately display the safety area (or the danger area) when the robot arm and a human work together. As a result, work safety can be improved while maintaining high work efficiency.
≪First Modification≫
Next, a first modification of the first embodiment will be described.
Parts that are the same as in the above embodiment are given the same reference numerals, and detailed description thereof is omitted.
FIG. 10 is a schematic configuration diagram of the display control system 1000A according to the first modification of the first embodiment.
FIG. 11 is a schematic configuration diagram of the display control device 100A according to the first modification of the first embodiment.
FIG. 12 is a diagram for explaining the processing in the training data acquisition mode of the display control system 1000A according to the first modification of the first embodiment.
FIG. 13 is a diagram (timing chart) showing an example (pattern 1) of training data used in the display control system 1000A according to the first modification of the first embodiment.
FIG. 14 is a diagram (timing chart) showing an example (pattern 2) of training data used in the display control system 1000A according to the first modification of the first embodiment.
As shown in FIG. 10, the display control system 1000A according to this modification has a configuration in which the display control device 100 of the display control system 1000 of the first embodiment is replaced with a display control device 100A. As shown in FIG. 11, the display control device 100A has a configuration in which the training data acquisition unit 2 of the display control device 100 of the first embodiment is replaced with a training data acquisition unit 2A.
The training data acquisition unit 2A receives the data D1_img output from the selector SEL2 and the training data D1_rb_train input to the robot control unit Rbt_C1. The training data acquisition unit 2A generates the training data Dtr1 for training the learning model of the prediction processing unit 5 based on the data D1_img and the training data D1_rb_train.
In the display control system 1000A of this modification, training data is acquired by associating a predetermined control sequence of the robot Rbt with the safety area (or danger area) determined in accordance with it. For example, as shown in FIG. 13, when it is determined in advance that the robot arm Rbt_arm is controlled to operate in the order of (1) phase 1 (risk: low), (2) phase 2 (risk: high), and (3) phase 3 (risk: low, the same risk level as in phase 1), the safety area (or danger area) can also be determined according to these phases.
For example, in the case of FIG. 13, in the display control system 1000A, the training data D1_rb_train for the robot Rbt is determined from the predetermined control sequence of the robot Rbt, and the risk level and the safety area (or danger area) are determined according to the phase determined by that control sequence. In the case of FIG. 13, since the risk of phase 2 is higher than the risk of phase 1, the training data D1_prj_train is generated so that the image corresponding to phase 2 (the image clearly indicating the safety area or danger area) is projected onto the projection surface (for example, the floor surface FLR) for a longer period. That is, in the case of FIG. 13, the training data D1_prj_train is generated so that the image corresponding to phase 2 (the image clearly indicating the safety area or danger area) is projected onto the projection surface (for example, the floor surface FLR) from a time t01 earlier than the start time t1 of phase 2. Accordingly, the training data D1_prj_train is generated so that the image corresponding to phase 1 (the image clearly indicating the safety area or danger area) is projected onto the projection surface (for example, the floor surface FLR) during the period from time t0 to time t01. In other words, the training data D1_prj_train is generated so that the projected image is switched from the phase 1 image to the phase 2 image at a time earlier than the time t1 at which the operation shifts from phase 1 to phase 2. In this way, the worker can recognize that the safety area will become smaller before the operation shifts to the phase in which it becomes smaller, and as a result, the safety of the worker is ensured.
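As a minimal sketch of how a projection schedule like the one in FIG. 13 could be derived from a known control sequence, the following function switches to the image of an upcoming higher-risk phase a fixed lead time before that phase starts (the interval corresponding to t01 to t1); the data structure, the two-level risk labels, and the lead time value are illustrative assumptions.

```python
def build_projection_schedule(phases, lead_time=2.0):
    """Derive a projection schedule (which phase image to show from when) from a
    known robot control sequence, switching to the image of a higher-risk phase
    lead_time seconds before that phase starts (corresponding to t01 < t1 in FIG. 13).

    phases    : list of (phase_name, start_time, risk) in execution order, risk in {"low", "high"}
    lead_time : how many seconds early the higher-risk image is shown (assumed value)
    """
    risk_rank = {"low": 0, "high": 1}
    schedule = []
    for i, (name, start, risk) in enumerate(phases):
        show_from = start
        if i > 0:
            _, _, prev_risk = phases[i - 1]
            if risk_rank[risk] > risk_rank[prev_risk]:
                show_from = start - lead_time   # e.g. the phase 2 image appears at t01
        schedule.append((name, max(show_from, 0.0)))
    return schedule

print(build_projection_schedule([("phase1", 0.0, "low"),
                                 ("phase2", 10.0, "high"),
                                 ("phase3", 25.0, "low")]))
# [('phase1', 0.0), ('phase2', 8.0), ('phase3', 25.0)]
```

Each image in the returned schedule is displayed from its own start time until the start time of the next entry, so the phase 1 image is replaced by the phase 2 image before the phase 2 operation actually begins.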
In addition, as shown in FIG. 14, the training data D1_prj_train may be generated so that an image composed of image regions having colors (or luminance) determined according to the risk level is projected onto the projection surface (for example, the floor surface FLR).
For example, in the case of FIG. 14, as in the case of FIG. 13, in the display control system 1000A, the training data D1_rb_train for the robot Rbt is determined from the predetermined control sequence of the robot Rbt, and the risk level and the safety area (or danger area) are determined according to the phase determined by that control sequence. In the case of FIG. 14, since the risk of phase 2 is higher than the risk of phase 1, the training data D1_prj_train is generated so that, in the period before the shift from phase 1 to phase 2, an image composed of the image region corresponding to phase 1 and the image region corresponding to phase 2 (an image in which the safety area or danger area is divided hierarchically by color or luminance according to the risk level) is projected onto the projection surface (for example, the floor surface FLR). That is, in the case of FIG. 14, the training data D1_prj_train is generated so that this composite image is projected onto the projection surface (for example, the floor surface FLR) during the period from time t01, which is earlier than the start time t1 of phase 2, to time t1. Accordingly, the training data D1_prj_train is generated so that the image corresponding to phase 1 (the image clearly indicating the safety area or danger area) is projected onto the projection surface (for example, the floor surface FLR) during the period from time t0 to time t01. In other words, by projecting, from time t01, which is earlier than the time t1 at which the operation shifts from phase 1 to phase 2, an image composed of the image region corresponding to phase 1 and the image region corresponding to phase 2 (an image in which the safety area or danger area is divided hierarchically by color or luminance according to the risk level) onto the projection surface, the worker can appropriately grasp that the risk level is about to change and that the safety area will change. Therefore, by learning from training data generated in this way, a trained model for prediction processing that appropriately ensures the safety of the worker can be constructed.
 本変形例の表示制御システム1000Aでは、上記のようにロボットRbtの制御シーケンスと対応付けた投影画像を訓練用データとして取得する。そして、このようにして取得した訓練用データを用いて学習処理を行い、学習済みモデルを取得する。そして、当該学習モデルを用いて、予測処理を実行することで、表示制御システム1000Aでは、適切に、安全領域(あるいは危険領域)を明示する画像を投影面(例えば、床面FLR)に投影することができる。その結果、作業員は、安全領域がまもなく変化することも適切に把握でき、作業員の安全が確保される。 In the display control system 1000A of this modification, the projection image associated with the control sequence of the robot Rbt is acquired as training data as described above. Then, the training process is performed using the training data acquired in this way, and the trained model is acquired. Then, by executing the prediction process using the learning model, the display control system 1000A appropriately projects an image clearly indicating the safety area (or danger area) on the projection surface (for example, the floor surface FLR). be able to. As a result, the worker can properly grasp that the safety area will change soon, and the safety of the worker is ensured.
 In the display control system 1000A of this modification, training data may also be generated, for example, from the image projected on the projection surface (the color and brightness of that image), and the learning processing may be performed using that training data. That is, in the display control system 1000A of this modification, the acquisition of training data and the learning processing may be performed based on the image projected on the projection surface (the color and brightness of that image) without recognizing the state of the robot arm Rbt_arm.
 [Second Embodiment]
 Next, the second embodiment will be described.
 The same parts as those in the above embodiment are denoted by the same reference numerals, and detailed description thereof is omitted.
 <2.1: Configuration of the Display Control System>
 FIG. 15 is a schematic configuration diagram of the display control system 2000 according to the second embodiment.
 FIG. 16 is a schematic configuration diagram of the display control device 100B according to the second embodiment.
 FIG. 17 is a schematic configuration diagram of the prediction processing unit 5A of the display control device 100B according to the second embodiment.
 As shown in FIG. 15, the display control system 2000 according to the second embodiment has a configuration in which the display control device 100 of the display control system 1000 of the first embodiment is replaced with the display control device 100B. As shown in FIG. 16, the display control device 100B has a configuration in which the prediction processing unit 5 of the display control device 100 of the first embodiment is replaced with the prediction processing unit 5A.
 As shown in FIG. 17, the prediction processing unit 5A includes a prediction unit 51, a detection target position determination unit 52, a safety range map generation unit 53, and a danger determination unit 54. The prediction processing unit 5A outputs the data Dp1_prj of the prediction processing result to the selector SEL1 and the safety range map generation unit 53.
 The prediction unit 51 is a functional unit that executes the same processing as the prediction processing unit 5 of the first embodiment.
 The detection target position determination unit 52 receives the image data D1_img output from the selector SEL2, performs image recognition processing on the image data D1_img, and identifies the position on the image of the image region corresponding to the detection target (for example, a worker). The detection target position determination unit 52 then outputs data including the acquired position information of the detection target on the image to the danger determination unit 54 as data D_pos.
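 One possible sketch of the image recognition performed by the detection target position determination unit 52 is shown below; the use of OpenCV's HOG people detector and the foot-position approximation are assumptions for illustration, and any person-detection method may be substituted.

```python
# Illustrative sketch of the detection-target position determination (unit 52):
# a person detector is run on the camera frame D1_img and the image position of
# the worker is returned as the basis for D_pos.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_worker_position(frame_bgr):
    """Return the (x, y) image coordinates of the detected worker, or None."""
    rects, _weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8))
    if len(rects) == 0:
        return None
    x, y, w, h = max(rects, key=lambda r: r[2] * r[3])  # keep the largest detection
    return (x + w // 2, y + h)  # approximate foot position on the floor
```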
 The safety range map generation unit 53 receives the data Dp1_prj output from the prediction unit 51 and acquires, from the data Dp1_prj, map information specifying the safety area (information for specifying the position, size, shape, and the like of the safety area). The safety range map generation unit 53 then outputs data including the acquired map information to the danger determination unit 54 as data D_map.
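 A minimal sketch of deriving the map data D_map from the prediction output Dp1_prj is shown below, assuming for illustration that the safety area is rendered in a known color in the predicted projection image.

```python
# Illustrative sketch of the safety-range map generation (unit 53): a binary map
# of the safety area is extracted from the predicted projection image Dp1_prj.
import numpy as np

def build_safety_map(dp1_prj_bgr: np.ndarray) -> np.ndarray:
    """Return a boolean map that is True inside the predicted safety area."""
    blue, green, red = dp1_prj_bgr[..., 0], dp1_prj_bgr[..., 1], dp1_prj_bgr[..., 2]
    # Assume the safety area is rendered in green in the predicted projection image.
    return (green > 128) & (red < 64) & (blue < 64)
```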
 The danger determination unit 54 receives the data D_pos output from the detection target position determination unit 52 and the data D_map output from the safety range map generation unit 53. Based on the data D_pos and the data D_map, the danger determination unit 54 determines whether the detection target (for example, a worker) is in the danger area (or is highly likely to enter it within a short period from the current time). When the danger determination unit 54 determines that the detection target (for example, a worker) is in the danger area (or is highly likely to enter it within a short period from the current time), it generates a warning signal Sig_wrn and outputs the generated warning signal Sig_wrn to the robot control unit Rbt_C1A.
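 The determination performed by the danger determination unit 54 can be sketched as follows; the function names are assumptions for illustration.

```python
# Illustrative sketch of the danger determination (unit 54): the worker position
# D_pos is checked against the safety map D_map, and a flag corresponding to the
# warning signal Sig_wrn is raised when the worker is outside the safety area.
import numpy as np

def judge_danger(d_pos, d_map: np.ndarray) -> bool:
    """Return True (emit Sig_wrn) when the detected worker is not in the safety area."""
    if d_pos is None:
        return False                      # no worker detected in the frame
    x, y = d_pos
    h, w = d_map.shape
    if not (0 <= x < w and 0 <= y < h):
        return True                       # worker outside the monitored area: treat as dangerous
    return not bool(d_map[y, x])
```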
 The robot control unit Rbt_C1A has the same functions as the robot control unit Rbt_C1 of the first embodiment, and additionally receives the warning signal Sig_wrn output from the display control device 100B. When the warning signal Sig_wrn is input from the display control device 100B, the robot control unit Rbt_C1A determines that it is dangerous to continue the operation of the robot arm Rbt_arm and performs warning processing (for example, processing of generating a warning sound) and/or risk-avoidance processing such as stopping the robot arm Rbt_arm.
 <2.2: Operation of the Display Control System>
 The operation of the display control system 2000 configured as described above will be described. Description of the same parts as in the first embodiment is omitted.
 FIG. 18 is a flowchart of the prediction-mode processing of the display control system 2000 according to the second embodiment.
 FIGS. 19 and 20 are diagrams for explaining the prediction-mode processing of the display control system 2000 according to the second embodiment.
 The prediction-mode processing of the display control system 2000 will be described below with reference to the flowchart of FIG. 18. In the display control system 2000, the processing in the training data acquisition mode and the processing in the learning mode are the same as in the display control system 1000 of the first embodiment.
 As shown in FIG. 19, a case will be described in which the control signal Ctrl_Rbt(phase2) is input to the robot control unit Rbt_C1A and the robot arm Rbt_arm executes a predetermined operation (referred to as the "phase-2 operation").
 In this case, the robot control unit Rbt_C1A controls the robot arm Rbt_arm so that the robot arm Rbt_arm executes the operation according to the control signal Ctrl_Rbt(phase2) (the phase-2 operation).
 The robot arm Rbt_arm performs the phase-2 operation in accordance with the command from the robot control unit Rbt_C1A.
 The imaging unit Cmr1 then captures the situation at that time (the robot arm Rbt_arm, the worker, the floor surface FLR, and the like) in the same state as when the training data was acquired.
 During the period in which the robot arm Rbt_arm is performing the phase-2 operation, the imaging unit Cmr1 images the robot Rbt, the floor surface FLR, and the worker (if present) from above, and continues to output the captured images (frame images) to the display control device 100B (step S1).
 In the prediction processing (prediction mode), the signal value of the mode signal Mode is set to "1", so the selector SEL2 outputs the data D1_img to the prediction processing unit 5A.
 The prediction processing unit 5A inputs the image data D1_img acquired during the period in which the robot arm Rbt_arm is performing the phase-2 operation into the trained model, and obtains the output of the trained model as the data Dp1_prj. For example, in the case shown in FIG. 19, (1) the position and state of the robot arm Rbt_arm and (2) the position and state of the worker Psn1 are similar to (1) the position and state of the robot arm Rbt_arm and (2) the position and state of the worker Psn1 in the case of FIG. 4 (the state when the training data was acquired with the safety area being the area Ar2). Therefore, when the image data D1_img captured in this state is input to the trained model of the prediction processing unit 5A, data indicating that the safety area is the area Ar2 (data Dp1_prj for displaying the area Ar2 on the floor surface FLR) is output from the trained model.
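 A sketch of this prediction step is shown below, assuming for illustration a PyTorch image-to-image model; the disclosure does not fix a particular framework or network architecture.

```python
# Illustrative sketch of the prediction step: the frame D1_img captured during the
# phase-2 operation is fed to the trained model, and the output is used as Dp1_prj.
import torch

def predict_projection(model: torch.nn.Module, d1_img: torch.Tensor) -> torch.Tensor:
    """d1_img: float tensor of shape (3, H, W), values in [0, 1]. Returns Dp1_prj (3, H, W)."""
    model.eval()
    with torch.no_grad():
        dp1_prj = model(d1_img.unsqueeze(0))[0]  # add and then drop the batch dimension
    return dp1_prj
```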
 The prediction processing unit 5A then outputs the acquired data Dp1_prj to the selector SEL1. In the prediction mode, the signal value of the mode signal Mode is "1", so the selector SEL1 outputs the data Dp1_prj output from the prediction processing unit 5A to the projection control unit 1.
 The projection control unit 1 receives the data Dp1_prj, generates a control signal Ctl_prj(Dp1_prj) for controlling the projection unit Prj1 so that an image based on the data Dp1_prj (or a boundary line to be displayed on the projection surface) is projected from the projection unit Prj1 onto the projection target, and outputs the generated control signal Ctl_prj(Dp1_prj) to the projection unit Prj1.
 Based on the control signal Ctl_prj(Dp1_prj), the projection unit Prj1 projects an image based on the data Dp1_prj (or a boundary line to be displayed on the projection surface) onto the projection surface (the floor surface FLR). In the case shown in FIG. 19, an image (or boundary line) indicating that the safety area is the area Ar2 is projected onto the floor surface FLR.
 In this state, the frame image data D1_img acquired by the imaging unit Cmr1 is input to the detection target position determination unit 52.
 The detection target position determination unit 52 performs image recognition processing on the frame image data D1_img and identifies the position on the image of the image region corresponding to the detection target (for example, the worker Psn1 in FIG. 19) (step S2). The detection target position determination unit 52 then outputs data including the acquired position information of the detection target on the image to the danger determination unit 54 as data D_pos.
 The safety range map generation unit 53 receives the data Dp1_prj output from the prediction unit 51 and acquires, from the data Dp1_prj, map information specifying the safety area (information for specifying the position, size, shape, and the like of the safety area) (step S3). In the case of FIG. 19, the safety range map generation unit 53 acquires map information specifying the safety area Ar2.
 The safety range map generation unit 53 then outputs data including the acquired map information to the danger determination unit 54 as data D_map.
 Based on the data D_pos output from the detection target position determination unit 52 and the data D_map output from the safety range map generation unit 53, the danger determination unit 54 determines whether the detection target (for example, the worker Psn1 in FIG. 19) is in the danger area (or is highly likely to enter it within a short period from the current time). For example, the motion vector of the detection target is acquired, and it is determined from the motion vector whether the detection target is likely to enter the danger area within a short period from the current time (map collation processing, steps S4 and S5).
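 The "likely to enter within a short period" check based on the motion vector (steps S4 and S5) could be sketched as follows; the extrapolation horizon and names are assumptions for illustration.

```python
# Illustrative sketch of the map collation of steps S4-S5: the worker's motion
# vector is estimated from the previous and current positions and extrapolated
# a short horizon ahead over the safety map.
def likely_to_leave_safety(prev_pos, cur_pos, d_map, horizon_frames: int = 10) -> bool:
    """Return True if the extrapolated position leaves the safety area within the horizon."""
    if prev_pos is None or cur_pos is None:
        return False
    vx, vy = cur_pos[0] - prev_pos[0], cur_pos[1] - prev_pos[1]   # per-frame motion vector
    h, w = d_map.shape
    for k in range(1, horizon_frames + 1):
        x, y = cur_pos[0] + k * vx, cur_pos[1] + k * vy
        if not (0 <= x < w and 0 <= y < h) or not bool(d_map[int(y), int(x)]):
            return True
    return False
```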
 When the danger determination unit 54 determines that the detection target (for example, a worker) is in the danger area (or is highly likely to enter it within a short period from the current time), it generates a warning signal Sig_wrn and outputs the generated warning signal Sig_wrn to the robot control unit Rbt_C1A.
 When the warning signal Sig_wrn is input from the display control device 100B, the robot control unit Rbt_C1A determines that it is dangerous to continue the operation of the robot arm Rbt_arm and performs warning processing (for example, processing of generating a warning sound) and/or risk-avoidance processing such as stopping the robot arm Rbt_arm (step S6).
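 The handling of the warning signal Sig_wrn in the robot control unit Rbt_C1A can be sketched as follows; the controller interface shown here is an assumption for illustration only.

```python
# Illustrative sketch of the danger-avoidance handling in the robot control unit
# Rbt_C1A: when the warning signal is received, a warning is issued and/or the
# robot arm is stopped.
class RobotControllerStub:
    def stop_arm(self) -> None:
        print("robot arm Rbt_arm: emergency stop")

    def sound_alarm(self) -> None:
        print("warning: worker near danger area")

def on_warning_signal(sig_wrn: bool, controller: RobotControllerStub) -> None:
    if sig_wrn:
        controller.sound_alarm()   # warning operation (e.g. warning sound)
        controller.stop_arm()      # and/or risk-avoidance processing (stop the arm)
```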
 By processing in this way in the display control system 2000, for example in the case of FIG. 19, when the worker Psn1 moves from inside the safety area Ar2 to outside the safety area Ar2 and the possibility of contact or collision with the robot arm Rbt_arm becomes high, the risk-avoidance processing can prevent the occurrence of a serious accident such as contact or collision with the robot arm Rbt_arm. In other words, in the display control system 2000 as well, whatever the state of the robot Rbt and the robot arm Rbt_arm, prediction processing that dynamically identifies (predicts) the safety area according to that state can be performed appropriately and with high accuracy.
 Although the case in which a boundary line is displayed on the projection surface (the floor surface FLR) has been described above, the present invention is not limited to this; in the display control system 2000, an image having image regions hierarchically divided according to the degree of safety (for example, an image hierarchically color-coded, or an image whose brightness is changed hierarchically) may be projected onto the projection surface (the floor surface FLR). In this case, the position identification processing for the detection target (step S2) and the safety range map acquisition processing (step S3) described above may be performed using the image data including the image regions hierarchically divided according to the degree of safety.
 In addition, although the case in which the safety area is displayed by the projection unit Prj1 has been described above, the present invention is not limited to this; in the display control system 2000, as shown in FIG. 20, an image (or boundary line) indicating the danger area may be displayed on the projection surface (the floor surface FLR). In the case of FIG. 20, the area Ar_rb1 is an area with a high degree of danger (danger area), and the area Ar_rb2 is a danger area with a lower degree of danger than the area Ar_rb1. In this case, when the detection target (for example, the worker Psn1) enters the danger range or is highly likely to enter it within a short period, the display control device 100B outputs the warning signal Sig_wrn to the robot control unit Rbt_C1A and the risk-avoidance processing is performed.
 [Other Embodiments]
 The above embodiments may be combined to configure a display control system or a display control device. For example, in the display control system of the second embodiment, training data may be acquired by the same method as in the first modification of the first embodiment, learning processing may be performed using the acquired training data, and prediction processing may be performed using the trained model acquired by that learning processing.
 In the above embodiments, the case of one robot Rbt and one worker has been described, but the present invention is not limited to this; the number of robots Rbt and the number of workers may each be arbitrary.
 In the above embodiments, the case of a single imaging unit Cmr1 has been described, but the present invention is not limited to this; the display control system may include a plurality of imaging units. In a display control system configured in this way, danger detection, danger determination processing, and the like may be performed using images captured by the plurality of cameras in order to further reduce occlusion.
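 One possible way of merging detections from a plurality of cameras to reduce occlusion is sketched below; the simple averaging fusion and the assumption that per-camera positions are already mapped to common floor coordinates are for illustration only.

```python
# Illustrative sketch of combining detections from several cameras: a worker is
# considered detected if any camera sees them, and the per-camera positions
# (assumed to already be in common floor coordinates) are merged.
def merge_detections(per_camera_positions):
    """per_camera_positions: list of (x, y) or None, one entry per camera."""
    valid = [p for p in per_camera_positions if p is not None]
    if not valid:
        return None
    xs = [p[0] for p in valid]
    ys = [p[1] for p in valid]
    return (sum(xs) / len(valid), sum(ys) / len(valid))  # simple averaging fusion
```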
 The imaging unit is preferably installed at a fixed position, but may be installed at a variable position.
 In the above embodiments, the case in which the safety area (or danger area) is dynamically changed according to the operating state of the robot arm Rbt_arm and projected onto the projection surface (the floor surface FLR) has been described, but the present invention is not limited to this; for example, the maximum extent of the safety area may be detected and that maximum area may be displayed statically on the projection surface (the floor surface FLR). In this case, a physically recognizable boundary line or the like may be displayed on the floor surface FLR (for example, the boundary may be made to emit light by means of optical fiber so that workers and others can easily recognize it).
 In the above embodiments, the case in which a projector device is used as the projection unit Prj1 has been described, but the present invention is not limited to this; for example, an LED scanner, a laser scanner, or the like may be used as the projection unit Prj1.
 In the above embodiments, the case of a single projection unit Prj1 has been described, but the present invention is not limited to this; the display control system may include a plurality of projection units (for example, projector devices). In a display control system configured in this way, the plurality of projection units (for example, projector devices) may be used to project and clearly indicate the safety area or danger area without occlusion.
 The projection unit is preferably installed at a fixed position, but may be installed at a variable position.
 In the above embodiments, the case in which two points on the pedestal of the robot Rbt are used as reference points in the calibration has been described, but the present invention is not limited to this; the number of reference points for calibration may be two or more, and other positions may be used as reference points.
 In the display control system and display control device described in the above embodiments, each block may be individually integrated into a single chip by a semiconductor device such as an LSI, or may be integrated into a single chip so as to include some or all of the blocks.
 Although the term LSI is used here, it may also be called an IC, system LSI, super LSI, or ultra LSI depending on the degree of integration.
 The method of circuit integration is not limited to LSI, and may be realized by a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
 Part or all of the processing of each functional block in each of the above embodiments may be realized by a program. In that case, part or all of the processing of each functional block in each of the above embodiments is performed by a central processing unit (CPU) in a computer. The program for performing each processing is stored in a storage device such as a hard disk or a ROM, and is read out into the ROM or a RAM and executed.
 Each processing of the above embodiments may be realized by hardware, or may be realized by software (including cases where it is realized together with an OS (operating system), middleware, or a predetermined library). It may also be realized by mixed processing of software and hardware.
 For example, when each functional unit of the above embodiments (including the modifications) is realized by software, each functional unit may be realized by software processing using the hardware configuration shown in FIG. 21 (for example, a hardware configuration in which a CPU, GPU, ROM, RAM, input unit, output unit, and the like are connected by a bus Bus).
 When each functional unit of the above embodiments is realized by software, the software may be realized using a single computer having the hardware configuration shown in FIG. 21, or may be realized by distributed processing using a plurality of computers.
 The execution order of the processing methods in the above embodiments is not necessarily limited to the description of the above embodiments, and the execution order may be changed without departing from the gist of the invention.
 A computer program that causes a computer to execute the above-described methods, and a computer-readable recording medium on which the program is recorded, are included within the scope of the present invention. Examples of the computer-readable recording medium include a flexible disk, hard disk, SSD, CD-ROM, MO, DVD, DVD-ROM, DVD-RAM, Blu-ray (registered trademark) disc, next-generation optical disc, and semiconductor memory.
 The computer program is not limited to one recorded on the recording medium, and may be transmitted via a telecommunication line, a wireless or wired communication line, a network typified by the Internet, or the like.
 The specific configuration of the present invention is not limited to the above-described embodiments, and various changes and modifications can be made without departing from the gist of the invention.
1000, 1000A, 2000 Display control system
Cmr1 Imaging unit
Prj1 Projection unit
100, 100A, 100B Display control device
5, 5A Prediction processing unit
Rbt Robot
Rbt_arm Robot arm
Rbt_C1, Rbt_C1A Robot control unit

Claims (15)

  1.  A display control system for displaying a safety area, which is an area determined to be safe even if a movable object is present, on a projection surface recognizable by the movable object within a space in which a robot arm and the movable object may coexist, the display control system comprising:
     an imaging unit installed at a position above the robot arm;
     a prediction processing unit that executes prediction processing using a trained model acquired by executing learning processing using, as teacher data, data including (1) an image captured in the space by the imaging unit from a position above the robot arm, the image being captured when the robot arm is in a predetermined state, or control data for controlling the robot arm so that the robot arm is in the predetermined state, and (2) information specifying the safety area when the robot arm is in the predetermined state, the prediction processing unit, at the time of prediction processing, executing the prediction processing using the trained model on a prediction processing image that is an image captured by the imaging unit from a position above the robot arm, thereby predicting the safety area in the space at the time the prediction processing image was acquired, acquiring the predicted safety area as a predicted safety area, and generating projection image data based on the predicted safety area; and
     a projection unit that projects a projection image formed from the projection image data onto the projection surface.
  2.  The display control system according to claim 1, wherein
     the prediction processing unit executes the prediction processing using a trained model acquired by executing learning processing using images captured by the imaging unit when training projection image data, generated based on the control data for controlling the robot arm so as to include a plurality of image regions each having a color specified according to a degree of safety, is projected onto the projection surface.
  3.  The display control system according to claim 1, wherein
     the prediction processing unit executes the prediction processing using a trained model acquired by executing learning processing using images captured by the imaging unit when training projection image data, generated based on the control data for controlling the robot arm so as to include a plurality of image regions each having a brightness specified according to a degree of safety, is projected onto the projection surface.
  4.  The display control system according to any one of claims 1 to 3, wherein
     the prediction processing unit generates a plurality of image regions each having a color specified according to a degree of safety, and generates the projection image data such that an image formed from the projection image data includes the generated plurality of image regions.
  5.  The display control system according to any one of claims 1 to 3, wherein
     the prediction processing unit generates a plurality of image regions each having a brightness specified according to a degree of safety, and generates the projection image data such that an image formed from the projection image data includes the generated plurality of image regions.
  6.  A display control system for displaying a danger area, which is an area determined to be dangerous if a movable object is present, on a projection surface recognizable by the movable object within a space in which a robot arm and the movable object may coexist, the display control system comprising:
     an imaging unit installed at a position above the robot arm;
     a prediction processing unit that executes prediction processing using a trained model acquired by executing learning processing using, as teacher data, data including (1) an image captured in the space by the imaging unit from a position above the robot arm, the image being captured when the robot arm is in a predetermined state, or control data for controlling the robot arm so that the robot arm is in the predetermined state, and (2) information specifying the danger area when the robot arm is in the predetermined state, the prediction processing unit, at the time of prediction processing, executing the prediction processing using the trained model on a prediction processing image that is an image captured by the imaging unit from a position above the robot arm, thereby predicting the danger area in the space at the time the prediction processing image was acquired, acquiring the predicted danger area as a predicted danger area, and generating projection image data based on the predicted danger area; and
     a projection unit that projects a projection image formed from the projection image data onto the projection surface.
  7.  The display control system according to claim 6, wherein
     the prediction processing unit executes the prediction processing using a trained model acquired by executing learning processing using images captured by the imaging unit when training projection image data, generated based on the control data for controlling the robot arm so as to include a plurality of image regions each having a color specified according to a degree of danger, is projected onto the projection surface.
  8.  The display control system according to claim 6, wherein
     the prediction processing unit executes the prediction processing using a trained model acquired by executing learning processing using images captured by the imaging unit when training projection image data, generated based on the control data for controlling the robot arm so as to include a plurality of image regions each having a brightness specified according to a degree of danger, is projected onto the projection surface.
  9.  The display control system according to any one of claims 6 to 8, wherein
     the prediction processing unit generates a plurality of image regions each having a color specified according to a degree of danger, and generates the projection image data such that an image formed from the projection image data includes the generated plurality of image regions.
  10.  The display control system according to any one of claims 6 to 8, wherein
     the prediction processing unit generates a plurality of image regions each having a brightness specified according to a degree of danger, and generates the projection image data such that an image formed from the projection image data includes the generated plurality of image regions.
  11.  The display control system according to any one of claims 1 to 10, wherein
     the space has a floor surface, and
     the projection surface is the floor surface in the space.
  12.  The display control system according to any one of claims 1 to 11, wherein
     the space has a ceiling surface, and
     the imaging unit is installed on the ceiling surface of the space.
  13.  The display control system according to any one of claims 1 to 12, further comprising a robot arm control unit that controls the robot arm, wherein,
     when the prediction processing unit determines that the movable object is highly likely to move out of the safety area or highly likely to move into the danger area, the robot arm control unit executes processing of stopping the operation of the robot arm and/or issuing a warning.
  14.  A display control method used in a display control system that comprises, in a space in which a robot arm and a movable object may coexist, an imaging unit installed at a position above the robot arm and a projection unit that projects an image onto a predetermined projection surface, the display control method being for displaying a safety area, which is an area determined to be safe even if the movable object is present, on a projection surface recognizable by the movable object within the space, the display control method comprising:
     a prediction processing step of executing prediction processing using a trained model acquired by executing learning processing using, as teacher data, data including (1) an image captured in the space by the imaging unit from a position above the robot arm, the image being captured when the robot arm is in a predetermined state, or control data for controlling the robot arm so that the robot arm is in the predetermined state, and (2) information specifying the safety area when the robot arm is in the predetermined state, the prediction processing step, at the time of prediction processing, executing the prediction processing using the trained model on a prediction processing image that is an image captured by the imaging unit from a position above the robot arm, thereby predicting the safety area in the space at the time the prediction processing image was acquired, acquiring the predicted safety area as a predicted safety area, and generating projection image data based on the predicted safety area; and
     a projection step of projecting a projection image formed from the projection image data onto the projection surface.
  15.  A display control method used in a display control system that comprises, in a space in which a robot arm and a movable object may coexist, an imaging unit installed at a position above the robot arm and a projection unit that projects an image onto a predetermined projection surface, the display control method being for displaying a danger area, which is an area determined to be dangerous if the movable object is present, on a projection surface recognizable by the movable object within the space, the display control method comprising:
     a prediction processing step of executing prediction processing using a trained model acquired by executing learning processing using, as teacher data, data including (1) an image captured in the space by the imaging unit from a position above the robot arm, the image being captured when the robot arm is in a predetermined state, or control data for controlling the robot arm so that the robot arm is in the predetermined state, and (2) information specifying the danger area when the robot arm is in the predetermined state, the prediction processing step, at the time of prediction processing, executing the prediction processing using the trained model on a prediction processing image that is an image captured by the imaging unit from a position above the robot arm, thereby predicting the danger area in the space at the time the prediction processing image was acquired, acquiring the predicted danger area as a predicted danger area, and generating projection image data based on the predicted danger area; and
     a projection step of projecting a projection image formed from the projection image data onto the projection surface.
PCT/JP2019/040676 2019-04-15 2019-10-16 Display control system and display control method WO2020213194A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-076791 2019-04-15
JP2019076791A JP6647640B1 (en) 2019-04-15 2019-04-15 Display control system, display control method, and program

Publications (1)

Publication Number Publication Date
WO2020213194A1 true WO2020213194A1 (en) 2020-10-22

Family

ID=69568157

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/040676 WO2020213194A1 (en) 2019-04-15 2019-10-16 Display control system and display control method

Country Status (2)

Country Link
JP (1) JP6647640B1 (en)
WO (1) WO2020213194A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102594983B1 (en) * 2023-03-07 2023-10-27 주식회사 아임토리 System for providing smartfactory based safety distance maintenance service using collaborative robot trajectory analysis
KR102567743B1 (en) * 2023-03-07 2023-08-17 주식회사 아임토리 System for providing vision based data synchronization service between collaborative robot

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014140920A (en) * 2013-01-23 2014-08-07 Denso Wave Inc System and method for monitoring intrusion of object around robot
WO2016000770A1 (en) * 2014-07-02 2016-01-07 Siemens Aktiengesellschaft Warning method and robot system
US20160229068A1 (en) * 2013-09-18 2016-08-11 Kuka Systems Gmbh Workstation
JP2016159407A (en) * 2015-03-03 2016-09-05 キヤノン株式会社 Robot control device and robot control method
JP2017013206A (en) * 2015-07-03 2017-01-19 株式会社デンソーウェーブ Robot system
JP2019030941A (en) * 2017-08-08 2019-02-28 ファナック株式会社 Control device and learning device

Also Published As

Publication number Publication date
JP6647640B1 (en) 2020-02-14
JP2020172011A (en) 2020-10-22

Similar Documents

Publication Publication Date Title
US20180211138A1 (en) Information processing device, information processing method, and storage medium
US10824853B2 (en) Human detection system for construction machine
US9621793B2 (en) Information processing apparatus, method therefor, and measurement apparatus
EP1955647B1 (en) Eyelid detection apparatus and programs therefor
JP7167453B2 (en) APPEARANCE INSPECTION SYSTEM, SETTING DEVICE, IMAGE PROCESSING DEVICE, SETTING METHOD AND PROGRAM
JP7458741B2 (en) Robot control device and its control method and program
WO2020213194A1 (en) Display control system and display control method
US9595095B2 (en) Robot system
US20170073934A1 (en) Human detection system for construction machine
WO2013145615A1 (en) Site estimation device, site estimation method, and site estimation program
JP2017111638A (en) Image processing method, image processing apparatus, image processing system, production apparatus, program, and recording medium
JP6973444B2 (en) Control system, information processing device and control method
WO2020241540A1 (en) System control method and system
JP7151873B2 (en) inspection equipment
US20160110840A1 (en) Image processing method, image processing device, and robot system
JP6020439B2 (en) Image processing apparatus, imaging apparatus, and image processing program
US20180191951A1 (en) Imaging apparatus and imaging condition setting method and program
JP6880457B2 (en) Gripping method, gripping system and program
JP7372076B2 (en) image processing system
JP2015232771A (en) Face detection method, face detection system and face detection program
JPH10222663A (en) Picture recognition system and device therefor
JP7361342B2 (en) Learning methods, learning devices, and programs
KR20240025248A (en) A method of teaching a screw assembly location based on a deep learning automatically, an apparatus of teaching a screw assembly location based on a deep learning automatically, and medium of storitng a program teaching a screw assembly location based on a deep learning automatically
CN116724224A (en) Machining surface determination device, machining surface determination program, machining surface determination method, machining system, inference device, and machine learning device
JP6114154B2 (en) Defect determination apparatus, defect inspection apparatus, and defect determination method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19925340

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19925340

Country of ref document: EP

Kind code of ref document: A1