WO2018016192A1 - Virtual sensory evaluation assistance system - Google Patents

Virtual sensory evaluation assistance system

Info

Publication number
WO2018016192A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
evaluation
evaluation target
change
video presentation
Prior art date
Application number
PCT/JP2017/020077
Other languages
French (fr)
Japanese (ja)
Inventor
里奈 林
俊樹 磯貝
Original Assignee
株式会社デンソー
Priority date
Filing date
Publication date
Application filed by 株式会社デンソー
Publication of WO2018016192A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics

Definitions

  • This disclosure relates to a virtual sensory evaluation assistance system.
  • A virtual sensory evaluation assistance system is provided that performs sensory and ergonomic evaluation of a virtual evaluation target without prototyping a real machine.
  • This type of system alternately projects a right-eye image and a left-eye image, switched every frame, onto a screen surrounding the evaluator, and alternately blocks the line of sight of the video presentation device worn by the evaluator in synchronization with the image switching, thereby presenting a stereoscopic image to the evaluator.
  • The system then calculates the position and posture of the evaluator from the sensor signals output by sensors attached to the evaluator's body, and performs the sensory and ergonomic evaluation of the virtual evaluation target (see, for example, Patent Document 1).
  • Because the technology described in Patent Document 1 projects images onto a screen surrounding the evaluator and attaches sensors to the evaluator's body, the evaluator can act only within a pre-designed virtual environment, and the virtual evaluation target has to be evaluated under relatively highly constrained conditions. Consequently, only evaluation results that deviate comparatively far from the real environment can be obtained, the probability that a real machine built from those results will satisfy its specifications is low, and reworking the prototype becomes necessary, driving up development costs and prolonging the development period.
  • The present disclosure provides a virtual sensory evaluation assistance system that can appropriately evaluate a virtual evaluation target under relatively unconstrained conditions and can appropriately obtain evaluation results that deviate relatively little from the real environment.
  • The evaluation target storage unit stores a physical model of the virtual evaluation target.
  • The environment change detection unit detects changes in the surrounding environment of the virtual evaluation target using a preset reference point.
  • The evaluation target control unit uses the physical model to reflect changes in the surrounding environment in the operation of the virtual evaluation target.
  • The video presentation unit superimposes a video, in which the changes in the surrounding environment are reflected in the operation of the virtual evaluation target, on the real environment and presents it to the evaluator.
  • Changes in the surrounding environment of the virtual evaluation target are detected using a preset reference point, the physical model is used to reflect those changes in the operation of the virtual evaluation target, and the resulting video is superimposed on the real environment and presented to the evaluator. Since no image needs to be projected onto a screen surrounding the evaluator and no sensors need to be attached to the evaluator's body, the constraints are reduced compared with the conventional configuration. The virtual evaluation target can therefore be evaluated appropriately under relatively unconstrained conditions, and evaluation results that deviate relatively little from the real environment can be obtained.
  • FIG. 1 is a functional block diagram showing the first embodiment.
  • FIG. 2 is a diagram illustrating a virtual evaluation target and an evaluator.
  • FIG. 3 is a diagram showing a video presented to the evaluator.
  • FIG. 4 is a flowchart.
  • FIG. 5 is a functional block diagram showing the second embodiment.
  • FIG. 6 is a flowchart.
  • FIG. 7 is a functional block diagram showing the third embodiment.
  • FIG. 8 is a diagram illustrating a virtual evaluation target and an evaluator.
  • FIG. 9 is a functional block diagram showing the fourth embodiment.
  • FIG. 10 is a diagram illustrating a virtual evaluation target and an evaluator.
  • The virtual sensory evaluation assistance system 1 includes two cameras 2a and 2b that photograph the evaluation space M, a server 3 arranged outside the evaluation space M, and a video presentation device 5 worn by an evaluator H who performs work on a virtual robot 4 (corresponding to the virtual evaluation target).
  • The video presentation device 5 is, for example, a head-mounted display that the evaluator H wears on the head.
  • The two cameras 2a and 2b are each arranged at an edge of the evaluation space M (that is, at positions that do not interfere with the evaluator H's actions). Treating the evaluation space M as a three-dimensional space with x, y, and z axes, the point P shown in FIG. 2 is the reference point with coordinates (0, 0, 0); the camera 2a is then placed at coordinates (0, 0, Z) and the camera 2b at coordinates (X, 0, Z). There may be three or more cameras.
  • The server 3 may be a physical server with a physical entity or a cloud server (that is, a virtual server) without one.
  • The two cameras 2a and 2b each capture substantially the entire evaluation space M at a predetermined viewing angle and transmit video signals containing the captured images to the server 3. For example, if the evaluator H touches the desk 6 placed in the evaluation space M and the desk's position or posture changes, the two cameras 2a and 2b transmit a video signal indicating that change to the server 3.
  • Likewise, when the evaluator H acts, for example by moving the head or a hand or by changing location, the two cameras 2a and 2b transmit video signals indicating the evaluator H's actions to the server 3.
  • The two cameras 2a and 2b each transmit their video signals to the server 3 over a wired communication link.
  • The server 3 includes an evaluation target storage unit 7, an environment change detection unit 8, an evaluation target control unit 9, and a transmission unit 10.
  • The evaluation target storage unit 7 stores a physical model of the virtual robot 4.
  • The physical model of the virtual robot 4 is the data necessary for a physical simulation of the virtual robot 4 under evaluation: data indicating its shape, weight, designed motion, constraint conditions, and the like.
  • The environment change detection unit 8 uses the video signals received from the two cameras 2a and 2b to detect, relative to the reference point described above, changes in the real environment and actions of the evaluator H as changes in the surrounding environment of the virtual robot 4. More specifically, the environment change detection unit 8 detects, for example, that the position of a predetermined part of the desk 6 (for example, an edge) has changed from (X1, Y1, Z1) to (X2, Y2, Z2) as a change in the real environment. It likewise detects that the position of a predetermined part of the evaluator H (for example, the head or a hand) has changed from (X11, Y11, Z11) to (X12, Y12, Z12) as an action of the evaluator H. The environment change detection unit 8 detects changes in the real environment and actions of the evaluator H as events that interfere with the virtual robot 4.
  • The evaluation target control unit 9 uses the physical model stored in the evaluation target storage unit 7 to generate three-dimensional data of the virtual robot 4 and reflects the changes in the surrounding environment detected by the environment change detection unit 8 in the robot's operation. That is, while generating the three-dimensional data of the virtual robot 4 placed on the desk 6, if the position or posture of the desk 6 changes, the evaluation target control unit 9 moves the three-dimensional coordinates of the virtual robot 4 to follow the movement of the desk's three-dimensional coordinates, reflecting the change in the desk's position and posture in the operation of the virtual robot 4.
  • When the evaluator H acts, the evaluation target control unit 9 collates the three-dimensional coordinates of the evaluator H with those of the virtual robot 4 and determines whether the evaluator H has virtually contacted the virtual robot 4.
  • If the evaluation target control unit 9 determines that the evaluator H has virtually contacted the virtual robot 4 and the robot's position or posture changes as a result, it moves the three-dimensional coordinates of the virtual robot 4 accordingly, reflecting the change in the robot's position and posture (that is, the evaluator H's virtual contact with the virtual robot 4) in the robot's operation.
  • If the virtual robot 4 has a movable part such as an arm, the evaluation target control unit 9 reflects changes in the position, posture, and amount of movement of the movable part (that is, the evaluator H's virtual contact with the arm) in the operation of the virtual robot 4.
  • The environment change detection unit 8 thus causes events that interfere with the virtual robot 4 to be reflected in the virtual robot 4's operation.
  • The transmission unit 10 transmits transmission information, including information in which the changes in the surrounding environment are reflected in the operation of the virtual robot 4, to the video presentation device 5 over a wireless communication link.
  • The wireless communication is, for example, a wireless LAN defined by IEEE 802.11, Bluetooth (registered trademark), BLE (Bluetooth Low Energy) (registered trademark), WiFi (registered trademark), or the like.
  • The video presentation device 5 includes a reception unit 11 and a video presentation unit 12.
  • The reception unit 11 receives the transmission information sent from the transmission unit 10.
  • The video presentation unit 12 presents to the evaluator H the video contained in the transmission information received by the reception unit 11, that is, a video in which the changes in the surrounding environment are reflected in the operation of the virtual robot 4.
  • The video presented to the evaluator H is a video in which the virtual environment including the virtual robot 4 is superimposed on the real environment. For example, if the position or posture of the desk 6 changes, the position and posture of the virtual robot 4 change accordingly in the video; if the evaluator H virtually touches the virtual robot 4, the robot's position and posture change in response to that virtual contact.
  • The real environment may or may not be visualized. When the video presentation unit 12 presents a video in which the real environment is not visualized, it superimposes the video of the virtual robot 4 on the real-environment background seen by the evaluator H. When presenting a video in which the real environment is visualized, it superimposes the video of the virtual robot 4 on the video of the real environment.
  • The server 3 and the video presentation device 5 cooperate to perform the video presentation processing.
  • The server 3 uses the video signals received from the two cameras 2a and 2b to detect changes in the surrounding environment of the virtual robot 4 with the environment change detection unit 8 (A1).
  • When the server 3 determines that a change in the surrounding environment of the virtual robot 4 has occurred (A2: YES), the evaluation target control unit 9 calculates the influence of that change on the virtual robot 4 (A3). For example, if the evaluator H has touched the desk 6 and changed its position or posture, the server 3 calculates the influence of that change on the virtual robot 4; if the evaluator H is virtually in contact with the virtual robot 4, the server 3 calculates the influence of that virtual contact on the virtual robot 4.
  • The server 3 then determines whether the virtual robot 4 is operating (A4). If it determines that the virtual robot 4 is operating (A4: YES), it calculates the current operating state of the virtual robot 4 (A5). The server 3 then has the transmission unit 10 transmit the transmission information, including the video in which the change in the surrounding environment is reflected in the operation of the virtual robot 4, to the video presentation device 5 (A6).
  • The server 3 then determines whether the end condition of the video presentation processing is satisfied (A7); if it determines that the end condition is not satisfied (A7: NO), it returns to step A1 and repeats the processing from step A1 onward.
  • The video presentation device 5 waits to receive transmission information from the server 3 (B1). When it determines that transmission information from the server 3 has been received by the reception unit 11 (B1: YES), the video presentation unit 12 presents the video contained in the received transmission information, that is, the video in which the change in the surrounding environment is reflected in the operation of the virtual robot 4 (B2). The video presentation device 5 then determines whether the end condition of the video presentation processing is satisfied (B3); if it determines that the end condition is not satisfied (B3: NO), it returns to step B1 and repeats the processing from step B1 onward.
  • According to the first embodiment described above, the following effects can be obtained.
  • In the virtual sensory evaluation assistance system 1, changes in the surrounding environment of the virtual robot 4 are detected using a preset reference point, the physical model is used to reflect those changes in the operation of the virtual robot 4, and a video in which the changes are reflected in the robot's operation is superimposed on the real environment and presented to the evaluator H. Since no image needs to be projected onto a screen surrounding the evaluator H and no sensors need to be attached to the evaluator H's body, the constraints are reduced compared with the conventional configuration. The virtual robot 4 can therefore be evaluated appropriately under relatively unconstrained conditions, and evaluation results that deviate relatively little from the real environment can be obtained.
  • The server 3 is also configured to include the evaluation target storage unit 7 and the evaluation target control unit 9. Because the server 3 performs the processing that reflects the changes in the surrounding environment in the operation of the virtual robot 4, the load on the video presentation device 5 is reduced.
  • While in the first embodiment the server 3 includes the evaluation target storage unit 7 and the evaluation target control unit 9, in the second embodiment the video presentation device includes the evaluation target storage unit and the evaluation target control unit.
  • In the virtual sensory evaluation assistance system 21, the server 22 includes an environment change detection unit 23 and a transmission unit 24.
  • The environment change detection unit 23 is the same as the environment change detection unit 8 described in the first embodiment: using the video signals received from the two cameras 2a and 2b and the reference point, it detects changes in the real environment and actions of the evaluator H as changes in the surrounding environment of the virtual robot 4.
  • The transmission unit 24 transmits transmission information, including change data indicating the changes in the surrounding environment, to the video presentation device 25 over a wireless communication link.
  • The video presentation device 25 includes a reception unit 26, an evaluation target storage unit 27, an evaluation target control unit 28, and a video presentation unit 29.
  • The evaluation target storage unit 27 is the same as the evaluation target storage unit 7 described in the first embodiment and stores a physical model of the virtual robot 4.
  • The evaluation target control unit 28 is the same as the evaluation target control unit 9 described in the first embodiment: using the physical model stored in the evaluation target storage unit 27, it generates the three-dimensional data of the virtual robot 4 and reflects the changes in the surrounding environment detected by the environment change detection unit 23 in the operation of the virtual robot 4.
  • The server 22 and the video presentation device 25 cooperate to perform the video presentation processing.
  • The server 22 uses the video signals received from the two cameras 2a and 2b to detect changes in the surrounding environment of the virtual robot 4 with the environment change detection unit 23 (A11).
  • When the server 22 determines that a change in the surrounding environment of the virtual robot 4 has occurred (A12: YES), it has the transmission unit 24 transmit the transmission information, including change data indicating the change in the surrounding environment, to the video presentation device 25 (A13).
  • The server 22 then determines whether the end condition of the video presentation processing is satisfied (A14); if it determines that the end condition is not satisfied (A14: NO), it returns to step A11 and repeats the processing from step A11 onward.
  • The video presentation device 25 waits to receive transmission information from the server 22 (B11). When it determines that transmission information from the server 22 has been received by the reception unit 26 (B11: YES), it identifies the change data contained in the received transmission information, that is, the data indicating the change in the surrounding environment, and the evaluation target control unit 28 calculates the influence of the change on the virtual robot 4 (B12). For example, if the evaluator H has touched the desk 6 and changed its position or posture, the influence of that change on the virtual robot 4 is calculated; if the evaluator H is virtually in contact with the virtual robot 4, the influence of that virtual contact on the virtual robot 4 is calculated.
  • The video presentation device 25 then determines whether the virtual robot 4 is operating (B13). If it determines that the virtual robot 4 is operating (B13: YES), it calculates the current operating state of the virtual robot 4 (B14), and the video presentation unit 29 presents a video in which the change in the surrounding environment is reflected in the operation of the virtual robot 4 (B15). The video presentation device 25 then determines whether the end condition of the video presentation processing is satisfied (B16); if it determines that the end condition is not satisfied (B16: NO), it returns to step B11 and repeats the processing from step B11 onward.
  • According to the second embodiment, the same operational effects as in the first embodiment can be obtained: the virtual robot 4 can be evaluated appropriately under relatively unconstrained conditions, and evaluation results that deviate relatively little from the real environment can be obtained.
  • In addition, the video presentation device 25 includes the evaluation target storage unit 27 and the evaluation target control unit 28. This reduces the amount of data in the transmission information sent from the server 22 to the video presentation device 25, improves real-time responsiveness, and reduces the communication load between the server 22 and the video presentation device 25.
  • Because the video presentation device 25 performs the processing that reflects the changes in the surrounding environment in the operation of the virtual robot 4, the load on the server 22 is also reduced.
  • In the virtual sensory evaluation assistance system 31, a plurality of (that is, two) evaluators Ha and Hb wear video presentation devices 5a and 5b, respectively, and the virtual robot 4 and the video presentation devices 5a and 5b are in a one-to-many relationship.
  • The video presentation devices 5a and 5b are the same as the video presentation device 5 described in the first embodiment and include reception units 11a and 11b and video presentation units 12a and 12b, respectively.
  • The environment change detection unit 8 detects the actions of the evaluators Ha and Hb using the video signals received from the two cameras 2a and 2b.
  • The evaluation target control unit 9 collates the three-dimensional coordinates of each of the evaluators Ha and Hb with those of the virtual robot 4 and, if an evaluator has virtually contacted the virtual robot 4, moves the three-dimensional coordinates of the virtual robot 4 to follow that virtual contact, reflecting the virtual contact of the evaluators Ha and Hb in the operation of the virtual robot 4.
  • The transmission unit 10 transmits transmission information, including information in which the changes in the surrounding environment are reflected in the operation of the virtual robot 4, to the video presentation devices 5a and 5b over a wireless communication link.
  • The video presentation units 12a and 12b present to the evaluators Ha and Hb the video contained in the transmission information received by the reception units 11a and 11b, that is, the video in which the changes in the surrounding environment are reflected in the operation of the virtual robot 4. In other words, the video presentation devices 5a and 5b simultaneously present the video of the virtual robot 4 to the evaluators Ha and Hb.
  • The video of the virtual robot 4 is presented simultaneously to the plurality of evaluators Ha and Hb.
  • The plurality of evaluators Ha and Hb can thereby share the evaluation result of the single virtual robot 4.
  • The video presentation devices 5a and 5b may each include an evaluation target storage unit and an evaluation target control unit.
  • In the virtual sensory evaluation assistance system 41, a plurality of (that is, two) evaluators Ha and Hb perform work on a plurality of (that is, two) virtual robots 4a and 4b, respectively.
  • The evaluation target storage unit 7 stores the physical models of the virtual robots 4a and 4b.
  • The video presentation units 12a and 12b present to the evaluators Ha and Hb videos in which the changes in the surrounding environment are reflected in the operations of the virtual robots 4a and 4b, respectively. That is, the video presentation devices 5a and 5b simultaneously present the videos of the virtual robots 4a and 4b to the evaluators Ha and Hb.
  • The videos of the plurality of virtual robots 4a and 4b are presented simultaneously to the plurality of evaluators Ha and Hb.
  • The plurality of evaluators Ha and Hb can thereby share the evaluation results of the plurality of virtual robots 4a and 4b.
  • The virtual robots 4a and 4b can also be evaluated appropriately under relatively unconstrained conditions.
  • This is effective when the evaluation assumes robots or similar equipment used in a production line whose work processes are continuous (that is, where preceding and following processes are correlated).
  • Here too, the video presentation devices 5a and 5b may each include an evaluation target storage unit and an evaluation target control unit.
  • In the embodiments above, a human-coexistence-type virtual robot is given as an example of the virtual evaluation target, but any apparatus on which the evaluator performs a sensory and ergonomic evaluation may be used; for example, a keyboard on which a person performs key input operations or a door that a person opens and closes.
  • The configuration in which changes in the surrounding environment of the virtual evaluation target are detected using cameras is also only an example; the changes may instead be detected using an infrared sensor, a magnetic field sensor, or the like. In a configuration using an infrared sensor, the position of the infrared transmitter may be used as the reference point; in a configuration using a magnetic field sensor, the position of the magnetic field generation source may be used as the reference point.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A virtual sensory evaluation assistance system (1), which performs sensory and ergonomic evaluation of a virtual object to be evaluated without manufacturing an actual trial product, is provided with: an evaluation object storage unit (7) that stores a physical model of a virtual object to be evaluated; an environmental change detection unit (8) that detects a change in the surrounding environment of the virtual object to be evaluated by using a preset reference point; an evaluation object control unit (9) that reflects the change in the surrounding environment on the motion of the virtual object to be evaluated by using the physical model; and an image presentation unit (12) that superposes, on the actual environment, an image in which the change in the surrounding environment has been reflected on the motion of the virtual object to be evaluated, and presents the image to an evaluator.

Description

Virtual Sensory Evaluation Assistance System

Cross-Reference of Related Applications
This application is based on Japanese Patent Application No. 2016-141346 filed on July 19, 2016, the contents of which are incorporated herein by reference.
This disclosure relates to a virtual sensory evaluation assistance system.
A virtual sensory evaluation assistance system performs sensory and ergonomic evaluation of a virtual evaluation target without prototyping a real machine. This type of system alternately projects a right-eye image and a left-eye image, switched every frame, onto a screen surrounding the evaluator, and alternately blocks the line of sight of the video presentation device worn by the evaluator in synchronization with the image switching, thereby presenting a stereoscopic image to the evaluator. The system then calculates the position and posture of the evaluator from the sensor signals output by sensors attached to the evaluator's body and performs the sensory and ergonomic evaluation of the virtual evaluation target (see, for example, Patent Document 1).
Patent Document 1: JP 2005-115430 A
Because the technology described in Patent Document 1 projects images onto a screen surrounding the evaluator and attaches sensors to the evaluator's body, the evaluator can act only within a pre-designed virtual environment, and the virtual evaluation target has to be evaluated under relatively highly constrained conditions. Consequently, only evaluation results that deviate comparatively far from the real environment can be obtained, the probability that a real machine built from those results will satisfy its specifications is low, and reworking the prototype becomes necessary, driving up development costs and prolonging the development period.
The present disclosure provides a virtual sensory evaluation assistance system that can appropriately evaluate a virtual evaluation target under relatively unconstrained conditions and can appropriately obtain evaluation results that deviate relatively little from the real environment.
According to one aspect of the present disclosure, in a virtual sensory evaluation assistance system that performs sensory and ergonomic evaluation of a virtual evaluation target without prototyping a real machine, an evaluation target storage unit stores a physical model of the virtual evaluation target. An environment change detection unit detects changes in the surrounding environment of the virtual evaluation target using a preset reference point. An evaluation target control unit uses the physical model to reflect the changes in the surrounding environment in the operation of the virtual evaluation target. A video presentation unit superimposes a video, in which the changes in the surrounding environment are reflected in the operation of the virtual evaluation target, on the real environment and presents it to the evaluator.
Changes in the surrounding environment of the virtual evaluation target are detected using a preset reference point, the physical model is used to reflect those changes in the operation of the virtual evaluation target, and the resulting video is superimposed on the real environment and presented to the evaluator. Since no image needs to be projected onto a screen surrounding the evaluator and no sensors need to be attached to the evaluator's body, the constraints are reduced compared with the conventional configuration. The virtual evaluation target can therefore be evaluated appropriately under relatively unconstrained conditions, and evaluation results that deviate relatively little from the real environment can be obtained.
The above and other objects, features, and advantages of the present disclosure will become more apparent from the following detailed description with reference to the accompanying drawings, in which:

FIG. 1 is a functional block diagram showing the first embodiment; FIG. 2 is a diagram illustrating the virtual evaluation target and the evaluator; FIG. 3 is a diagram showing the video presented to the evaluator; FIG. 4 is a flowchart; FIG. 5 is a functional block diagram showing the second embodiment; FIG. 6 is a flowchart; FIG. 7 is a functional block diagram showing the third embodiment; FIG. 8 is a diagram illustrating the virtual evaluation targets and the evaluators; FIG. 9 is a functional block diagram showing the fourth embodiment; and FIG. 10 is a diagram illustrating the virtual evaluation targets and the evaluators.
(First Embodiment)

A first embodiment, in which a human-coexistence-type virtual robot that coexists with people is used as the virtual evaluation target, is described below with reference to FIGS. 1 to 4. As shown in FIG. 2, the virtual sensory evaluation assistance system 1 includes two cameras 2a and 2b that photograph the evaluation space M, a server 3 arranged outside the evaluation space M, and a video presentation device 5 worn by an evaluator H who performs work on a virtual robot 4 (corresponding to the virtual evaluation target). The video presentation device 5 is, for example, a head-mounted display that the evaluator H wears on the head. The two cameras 2a and 2b are each arranged at an edge of the evaluation space M (that is, at positions that do not interfere with the evaluator H's actions). Treating the evaluation space M as a three-dimensional space with x, y, and z axes, the point P shown in FIG. 2 is the reference point with coordinates (0, 0, 0); the camera 2a is then placed at coordinates (0, 0, Z) and the camera 2b at coordinates (X, 0, Z). There may be three or more cameras. The server 3 may be a physical server with a physical entity or a cloud server (that is, a virtual server) without one.
The two cameras 2a and 2b each capture substantially the entire evaluation space M at a predetermined viewing angle and transmit video signals containing the captured images to the server 3. For example, if the evaluator H touches the desk 6 placed in the evaluation space M and the desk's position or posture changes, the two cameras 2a and 2b transmit a video signal indicating that change to the server 3. Likewise, when the evaluator H acts, for example by moving the head or a hand or by changing location, the two cameras 2a and 2b transmit video signals indicating the evaluator H's actions to the server 3. The two cameras 2a and 2b each transmit their video signals to the server 3 over a wired communication link.
As shown in FIG. 1, the server 3 includes an evaluation target storage unit 7, an environment change detection unit 8, an evaluation target control unit 9, and a transmission unit 10. The evaluation target storage unit 7 stores a physical model of the virtual robot 4. The physical model of the virtual robot 4 is the data necessary for a physical simulation of the virtual robot 4 under evaluation: data indicating its shape, weight, designed motion, constraint conditions, and the like.
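The patent does not give a concrete data layout for the physical model, but it can be pictured as a small record holding the listed properties. The following Python sketch is illustrative only; the field names and example values are assumptions, not taken from the source.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class PhysicalModel:
    """Data needed to physically simulate the virtual evaluation target:
    shape, weight, designed motion, and constraint conditions."""
    shape_mesh: List[Tuple[float, float, float]]        # vertices of the 3D shape (assumed representation)
    weight_kg: float                                     # mass of the target
    designed_motion: List[Tuple[float, float, float]]    # waypoints of the designed motion
    constraints: Dict[str, float] = field(default_factory=dict)  # e.g. joint limits

# Hypothetical example: a virtual robot whose arm joint is limited to 90 degrees.
robot4_model = PhysicalModel(
    shape_mesh=[(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0)],
    weight_kg=12.5,
    designed_motion=[(0.0, 0.0, 0.0), (0.0, 0.0, 0.2)],
    constraints={"arm_joint_max_deg": 90.0},
)
```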
The environment change detection unit 8 uses the video signals received from the two cameras 2a and 2b to detect, relative to the reference point described above, changes in the real environment and actions of the evaluator H as changes in the surrounding environment of the virtual robot 4. More specifically, the environment change detection unit 8 detects, for example, that the position of a predetermined part of the desk 6 (for example, an edge) has changed from (X1, Y1, Z1) to (X2, Y2, Z2) as a change in the real environment. It likewise detects that the position of a predetermined part of the evaluator H (for example, the head or a hand) has changed from (X11, Y11, Z11) to (X12, Y12, Z12) as an action of the evaluator H. The environment change detection unit 8 detects changes in the real environment and actions of the evaluator H as events that interfere with the virtual robot 4.
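Conceptually, this detection amounts to comparing the coordinates of tracked parts between frames, all expressed relative to the reference point P at the origin. A minimal sketch under that reading; the part names and the movement threshold are assumptions:

```python
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

def detect_changes(prev: Dict[str, Vec3], curr: Dict[str, Vec3],
                   threshold: float = 0.01) -> Dict[str, Tuple[Vec3, Vec3]]:
    """Return the tracked parts whose position, measured relative to the
    reference point P at (0, 0, 0), moved more than `threshold` between frames."""
    changes = {}
    for part, p_now in curr.items():
        p_before = prev.get(part, p_now)
        dist = sum((a - b) ** 2 for a, b in zip(p_now, p_before)) ** 0.5
        if dist > threshold:
            changes[part] = (p_before, p_now)
    return changes

# Example: the edge of desk 6 slid 0.1 m and the evaluator's hand was raised.
prev_frame = {"desk6_edge": (1.0, 1.0, 0.7), "evaluator_hand": (0.5, 0.2, 1.0)}
curr_frame = {"desk6_edge": (1.1, 1.0, 0.7), "evaluator_hand": (0.5, 0.2, 1.3)}
print(detect_changes(prev_frame, curr_frame))   # both parts reported as events
```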
The evaluation target control unit 9 uses the physical model stored in the evaluation target storage unit 7 to generate three-dimensional data of the virtual robot 4 and reflects the changes in the surrounding environment detected by the environment change detection unit 8 in the robot's operation. That is, while generating the three-dimensional data of the virtual robot 4 placed on the desk 6, if the position or posture of the desk 6 changes, the evaluation target control unit 9 moves the three-dimensional coordinates of the virtual robot 4 to follow the movement of the desk's three-dimensional coordinates, reflecting the change in the desk's position and posture in the operation of the virtual robot 4. When the evaluator H acts, the evaluation target control unit 9 collates the three-dimensional coordinates of the evaluator H with those of the virtual robot 4 and determines whether the evaluator H has virtually contacted the virtual robot 4. If it determines that the evaluator H has virtually contacted the virtual robot 4 and the robot's position or posture changes as a result, it moves the three-dimensional coordinates of the virtual robot 4 accordingly, reflecting the change in the robot's position and posture (that is, the evaluator H's virtual contact with the virtual robot 4) in the robot's operation. If the virtual robot 4 has a movable part such as an arm, the evaluation target control unit 9 reflects changes in the position, posture, and amount of movement of the movable part (that is, the evaluator H's virtual contact with the arm) in the operation of the virtual robot 4. The environment change detection unit 8 thus causes events that interfere with the virtual robot 4 to be reflected in the virtual robot 4's operation.
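Two of the update rules described here can be sketched directly: the robot's coordinates follow the displacement of the desk it rests on, and a virtual contact is registered by collating evaluator and robot coordinates. The contact radius below is an assumed simplification of that collation, not a value from the patent:

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]

def follow_desk(robot_pos: Vec3, desk_before: Vec3, desk_after: Vec3) -> Vec3:
    """Move the virtual robot by the same displacement as the desk it rests on."""
    delta = tuple(a - b for a, b in zip(desk_after, desk_before))
    return tuple(r + d for r, d in zip(robot_pos, delta))

def virtually_contacts(evaluator_part: Vec3, robot_pos: Vec3,
                       contact_radius: float = 0.05) -> bool:
    """Collate evaluator and robot coordinates; True if they virtually touch."""
    dist = sum((a - b) ** 2 for a, b in zip(evaluator_part, robot_pos)) ** 0.5
    return dist <= contact_radius

robot = (1.0, 1.0, 0.9)                                       # robot standing on desk 6
robot = follow_desk(robot, (1.0, 1.0, 0.7), (1.1, 1.0, 0.7))  # desk slid 0.1 m in x
print(robot)                                                  # (1.1, 1.0, 0.9)
print(virtually_contacts((1.12, 1.0, 0.9), robot))            # True: hand touches robot
```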
The transmission unit 10 transmits transmission information, including information in which the changes in the surrounding environment are reflected in the operation of the virtual robot 4, to the video presentation device 5 over a wireless communication link. The wireless communication is, for example, a wireless LAN defined by IEEE 802.11, Bluetooth (registered trademark), BLE (Bluetooth Low Energy) (registered trademark), WiFi (registered trademark), or the like.
The video presentation device 5 includes a reception unit 11 and a video presentation unit 12. The reception unit 11 receives the transmission information sent from the transmission unit 10. The video presentation unit 12 presents to the evaluator H the video contained in the transmission information received by the reception unit 11, that is, a video in which the changes in the surrounding environment are reflected in the operation of the virtual robot 4. As shown in FIG. 3, the video presented to the evaluator H is a video in which the virtual environment including the virtual robot 4 is superimposed on the real environment: for example, if the position or posture of the desk 6 changes, the position and posture of the virtual robot 4 change accordingly, and if the evaluator H virtually touches the virtual robot 4, the robot's position and posture change in response to that virtual contact. The real environment may or may not be visualized. When the video presentation unit 12 presents a video in which the real environment is not visualized, it superimposes the video of the virtual robot 4 on the real-environment background seen by the evaluator H; when presenting a video in which the real environment is visualized, it superimposes the video of the virtual robot 4 on the video of the real environment.
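The superimposition itself can be pictured as per-pixel compositing: the rendered virtual robot overwrites the frame where it is opaque, either over the camera image of the real environment or over a blank background when the real environment is not visualized. This is one plausible reading, sketched with numpy under the assumption that a renderer supplies an RGBA image of the robot:

```python
import numpy as np

def superimpose(real_frame: np.ndarray, robot_rgba: np.ndarray,
                visualize_real: bool = True) -> np.ndarray:
    """Overlay the rendered virtual robot onto the real environment.

    real_frame: H x W x 3 camera image of the real environment.
    robot_rgba: H x W x 4 rendering of the virtual robot (alpha = coverage).
    """
    background = real_frame if visualize_real else np.zeros_like(real_frame)
    alpha = robot_rgba[..., 3:4] / 255.0
    out = (1.0 - alpha) * background + alpha * robot_rgba[..., :3]
    return out.astype(np.uint8)

# Example: a gray real frame with a fully opaque red "robot" region.
real = np.full((4, 4, 3), 128, dtype=np.uint8)
robot = np.zeros((4, 4, 4), dtype=np.uint8)
robot[1:3, 1:3] = (255, 0, 0, 255)             # red pixels, alpha 255
composited = superimpose(real, robot)
print(composited[1, 1], composited[0, 0])      # [255 0 0] [128 128 128]
```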
Next, the operation of the above configuration is described with reference to FIG. 4. In the virtual sensory evaluation assistance system 1, the server 3 and the video presentation device 5 cooperate to perform the video presentation processing.

The server 3 uses the video signals received from the two cameras 2a and 2b to detect changes in the surrounding environment of the virtual robot 4 with the environment change detection unit 8 (A1). When the server 3 determines that a change in the surrounding environment of the virtual robot 4 has occurred (A2: YES), the evaluation target control unit 9 calculates the influence of that change on the virtual robot 4 (A3). For example, if the evaluator H has touched the desk 6 and changed its position or posture, the server 3 calculates the influence of that change on the virtual robot 4; if the evaluator H is virtually in contact with the virtual robot 4, the server 3 calculates the influence of that virtual contact on the virtual robot 4.
Next, the server 3 determines whether the virtual robot 4 is operating (A4). If it determines that the virtual robot 4 is operating (A4: YES), it calculates the current operating state of the virtual robot 4 (A5). The server 3 then has the transmission unit 10 transmit the transmission information, including the video in which the change in the surrounding environment is reflected in the operation of the virtual robot 4, to the video presentation device 5 (A6). The server 3 then determines whether the end condition of the video presentation processing is satisfied (A7); if it determines that the end condition is not satisfied (A7: NO), it returns to step A1 and repeats the processing from step A1 onward.
The video presentation device 5 waits to receive transmission information from the server 3 (B1). When it determines that transmission information from the server 3 has been received by the reception unit 11 (B1: YES), the video presentation unit 12 presents the video contained in the received transmission information, that is, the video in which the change in the surrounding environment is reflected in the operation of the virtual robot 4 (B2). The video presentation device 5 then determines whether the end condition of the video presentation processing is satisfied (B3); if it determines that the end condition is not satisfied (B3: NO), it returns to step B1 and repeats the processing from step B1 onward.
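Taken together, steps A1 to A7 and B1 to B3 form a pair of cooperating loops: the server detects, simulates, and transmits, while the device receives and presents. The following single-process sketch substitutes an in-memory queue for the wireless link and stubs for detection, simulation, and presentation; it illustrates the flow, not the patent's implementation:

```python
import queue

link = queue.Queue()          # stands in for the wireless link (IEEE 802.11 etc.)

def server_step(detect, simulate):
    """One pass of steps A1-A6 on the server side."""
    change = detect()                     # A1: detect surrounding-environment change
    if change is not None:                # A2: has a change occurred?
        state = simulate(change)          # A3-A5: influence on robot, current operating state
        link.put({"video": state})        # A6: transmit to the video presentation device

def device_step(present):
    """One pass of steps B1-B2 on the video presentation device."""
    try:
        info = link.get_nowait()          # B1: transmission information received?
    except queue.Empty:
        return
    present(info["video"])                # B2: present the reflected video

# Example run with stub detection, simulation, and presentation.
server_step(detect=lambda: "desk6 moved",
            simulate=lambda c: f"robot4 follows: {c}")
device_step(present=print)               # -> robot4 follows: desk6 moved
```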
As described above, according to the first embodiment, the following effects can be obtained.

In the virtual sensory evaluation assistance system 1, changes in the surrounding environment of the virtual robot 4 are detected using a preset reference point, the physical model is used to reflect those changes in the operation of the virtual robot 4, and a video in which the changes are reflected in the robot's operation is superimposed on the real environment and presented to the evaluator H. Since no image needs to be projected onto a screen surrounding the evaluator H and no sensors need to be attached to the evaluator H's body, the constraints are reduced compared with the conventional configuration. The virtual robot 4 can therefore be evaluated appropriately under relatively unconstrained conditions, and evaluation results that deviate relatively little from the real environment can be obtained.
Also, in the virtual sensory evaluation assistance system 1, changes in the real environment and actions of the evaluator H are detected as changes in the surrounding environment of the virtual robot 4. A video in which the changes in the real environment and the actions of the evaluator H are reflected in the operation of the virtual robot 4 can thereby be superimposed on the real environment and presented to the evaluator H.
Also, in the virtual sensory evaluation assistance system 1, while the virtual robot 4 is operating, changes in the surrounding environment are reflected in the current operating state of the virtual robot 4. A video in which the changes in the surrounding environment are reflected in the robot's current operating state can thereby be superimposed on the real environment and presented to the evaluator H.
Also, in the virtual sensory evaluation assistance system 1, the server 3 is configured to include the evaluation target storage unit 7 and the evaluation target control unit 9. Because the server 3 performs the processing that reflects the changes in the surrounding environment in the operation of the virtual robot 4, the load on the video presentation device 5 is reduced.
(Second Embodiment)

Next, a second embodiment is described with reference to FIGS. 5 and 6. Description of the parts identical to the first embodiment is omitted, and only the differing parts are described. While in the first embodiment the server 3 includes the evaluation target storage unit 7 and the evaluation target control unit 9, in the second embodiment the video presentation device includes the evaluation target storage unit and the evaluation target control unit.
In the virtual sensory evaluation assistance system 21, the server 22 includes an environment change detection unit 23 and a transmission unit 24. The environment change detection unit 23 is the same as the environment change detection unit 8 described in the first embodiment: using the video signals received from the two cameras 2a and 2b and the reference point, it detects changes in the real environment and actions of the evaluator H as changes in the surrounding environment of the virtual robot 4. The transmission unit 24 transmits transmission information, including change data indicating the changes in the surrounding environment, to the video presentation device 25 over a wireless communication link.
The video presentation device 25 includes a reception unit 26, an evaluation target storage unit 27, an evaluation target control unit 28, and a video presentation unit 29. The evaluation target storage unit 27 is the same as the evaluation target storage unit 7 described in the first embodiment and stores a physical model of the virtual robot 4. The evaluation target control unit 28 is the same as the evaluation target control unit 9 described in the first embodiment: using the physical model stored in the evaluation target storage unit 27, it generates the three-dimensional data of the virtual robot 4 and reflects the changes in the surrounding environment detected by the environment change detection unit 23 in the operation of the virtual robot 4.
Next, the operation of the above configuration is described with reference to FIG. 6. In the virtual sensory evaluation assistance system 21, the server 22 and the video presentation device 25 cooperate to perform the video presentation processing.

The server 22 uses the video signals received from the two cameras 2a and 2b to detect changes in the surrounding environment of the virtual robot 4 with the environment change detection unit 23 (A11). When the server 22 determines that a change in the surrounding environment of the virtual robot 4 has occurred (A12: YES), it has the transmission unit 24 transmit the transmission information, including change data indicating the change in the surrounding environment, to the video presentation device 25 (A13). The server 22 then determines whether the end condition of the video presentation processing is satisfied (A14); if it determines that the end condition is not satisfied (A14: NO), it returns to step A11 and repeats the processing from step A11 onward.
The video presentation device 25 waits to receive transmission information from the server 22 (B11). When it determines that transmission information from the server 22 has been received by the reception unit 26 (B11: YES), it identifies the change data contained in the received transmission information, that is, the data indicating the change in the surrounding environment, and the evaluation target control unit 28 calculates the influence of the change on the virtual robot 4 (B12). For example, if the evaluator H has touched the desk 6 and changed its position or posture, the influence of that change on the virtual robot 4 is calculated; if the evaluator H is virtually in contact with the virtual robot 4, the influence of that virtual contact on the virtual robot 4 is calculated.
 Next, the video presentation device 25 determines whether the virtual robot 4 is operating (B13). If it determines that the virtual robot 4 is operating (B13: YES), it calculates the current operating state of the virtual robot 4 (B14) and has the video presentation unit 29 present a video in which the change in the surrounding environment is reflected in the operation of the virtual robot 4 (B15). The video presentation device 25 then determines whether the end condition of the video presentation process is satisfied (B16); if the end condition is not satisfied (B16: NO), the device returns to step B11 and repeats the steps from B11 onward.
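 Again as an illustration only, the device-side steps B11 to B16 of FIG. 6 might be sketched as below. The VirtualRobotModel stub, the displacement field, and all other names are assumptions standing in for units 26 to 29; they are not taken from the patent.

    import queue

    class VirtualRobotModel:
        """Stand-in for the physical model held by the storage unit 27."""
        def __init__(self):
            self.pose = [0.0, 0.0, 0.0]
            self.moving = True
        def influence_of(self, change):
            # B12: influence of the environmental change on the robot
            return change.get("displacement", [0.0, 0.0, 0.0])
        def step(self, influence):
            # B13/B14: if the robot is operating, update its current state
            if self.moving:
                self.pose = [p + d for p, d in zip(self.pose, influence)]
            return self.pose

    def device_loop(rx_queue, robot, present, should_stop):
        while not should_stop():                           # B16: end condition
            try:
                msg = rx_queue.get(timeout=0.1)            # B11: wait for data
            except queue.Empty:
                continue
            influence = robot.influence_of(msg["change"])  # B12
            present(robot.step(influence))                 # B13-B15: render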
 As described above, the second embodiment provides the same operational effects as the first embodiment: the virtual robot 4 can be evaluated appropriately under comparatively low-constraint conditions, and evaluation results with a comparatively small deviation from the real environment can be obtained appropriately.
 Furthermore, in the virtual sensibility evaluation support system 21, the video presentation device 25 has the evaluation target storage unit 27 and the evaluation target control unit 28. This reduces the amount of data in the transmission information sent from the server 22 to the video presentation device 25, improves real-time responsiveness, and reduces the communication load between the server 22 and the video presentation device 25. In addition, because the video presentation device 25 performs the processing that reflects changes in the surrounding environment in the operation of the virtual robot 4, the load on the server 22 can be reduced.
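 The bandwidth argument can be made concrete with a rough sketch: under this arrangement the server ships only a compact change record rather than rendered video. The field names and example values below are assumptions chosen for illustration.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class ChangeData:
        object_id: str        # e.g. the desk 6, or the virtual robot 4 itself
        displacement: tuple   # change in position
        rotation: tuple       # change in posture

    update = ChangeData("desk6", (0.02, 0.0, 0.0), (0.0, 0.0, 1.5))
    payload = json.dumps(asdict(update))
    print(len(payload), "bytes per update")  # tens of bytes, versus the
                                             # megabytes of a rendered frame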
 (Third embodiment)
 Next, a third embodiment will be described with reference to FIGS. 7 and 8. Description of the parts identical to the first embodiment is omitted, and only the differences are described. In the first embodiment the virtual robot 4 and the evaluator H are in a one-to-one relationship, whereas in the third embodiment the virtual robot 4 and the evaluators H are in a one-to-many relationship.
 In the virtual sensibility evaluation support system 31, a plurality of (here, two) evaluators Ha and Hb wear video presentation devices 5a and 5b, respectively, so the virtual robot 4 and the video presentation devices 5a and 5b are in a one-to-many relationship. The video presentation devices 5a and 5b are the same as the video presentation device 5 described in the first embodiment and have receiving units 11a and 11b and video presentation units 12a and 12b, respectively.
 In the server 3, the environment change detection unit 8 detects the actions of the evaluators Ha and Hb using the video signals received from the two cameras 2a and 2b. For each of the evaluators Ha and Hb, the evaluation target control unit 9 collates the evaluator's three-dimensional coordinates with the three-dimensional coordinates of the virtual robot 4; if an evaluator virtually touches the virtual robot 4, the unit moves the three-dimensional coordinates of the virtual robot 4 so as to follow that virtual contact, thereby reflecting the evaluators' virtual contact with the virtual robot 4 in its operation. The transmission unit 10 then sends the video presentation devices 5a and 5b, by wireless communication, transmission information containing a video in which the change in the surrounding environment is reflected in the operation of the virtual robot 4.
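 A minimal sketch of this collation-and-follow step is given below, assuming a simple distance test in three dimensions; the contact threshold CONTACT_RADIUS is an assumed parameter, not a value from the patent.

    import math

    CONTACT_RADIUS = 0.05  # metres; assumed threshold for "virtual contact"

    def follow_contacts(robot_pos, evaluator_positions):
        """Collate each evaluator's coordinates with the robot's and, on
        virtual contact, move the robot so that it follows the contact."""
        for pos in evaluator_positions:                  # evaluators Ha, Hb, ...
            d = math.dist(robot_pos, pos)
            if 0.0 < d < CONTACT_RADIUS:                 # virtual contact
                push = [(r - p) / d for r, p in zip(robot_pos, pos)]
                robot_pos = [r + (CONTACT_RADIUS - d) * u
                             for r, u in zip(robot_pos, push)]
        return robot_pos

    # Example: Ha's hand 2 cm from the robot pushes it away along that axis.
    print(follow_contacts([0.0, 0.0, 0.0], [[0.02, 0.0, 0.0]]))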
 In the video presentation devices 5a and 5b, the video presentation units 12a and 12b present to the evaluators Ha and Hb the video contained in the transmission information received from the server 3 by the receiving units 11a and 11b, that is, a video in which the change in the surrounding environment is reflected in the operation of the virtual robot 4. In other words, the video presentation devices 5a and 5b present the video of the virtual robot 4 to the evaluators Ha and Hb simultaneously.
 As described above, according to the third embodiment, the video of the virtual robot 4 is presented to the plurality of evaluators Ha and Hb simultaneously, so the evaluation results for a single virtual robot 4 can be shared by the plurality of evaluators Ha and Hb. Moreover, because the actions of the evaluators Ha and Hb are detected with respect to a single reference point, the action of each evaluator can be detected easily. As in the second embodiment described above, the third embodiment may also be configured so that the video presentation devices 5a and 5b each have an evaluation target storage unit and an evaluation target control unit.
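 The single-reference-point idea can be illustrated with the short sketch below: every evaluator's detected position is expressed in one coordinate frame anchored at the preset reference point, so a single robot update can simply be broadcast to every worn device. The function names and the device present() interface are assumptions.

    def to_reference_frame(point, reference_origin):
        """Express a detected position relative to the single preset
        reference point shared by all evaluators."""
        return tuple(p - o for p, o in zip(point, reference_origin))

    def broadcast(devices, video):
        for device in devices:        # presentation devices 5a, 5b, ...
            device.present(video)     # same virtual robot, same instant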
 (Fourth embodiment)
 Next, a fourth embodiment will be described with reference to FIGS. 9 and 10. Description of the parts identical to the third embodiment is omitted, and only the differences are described. In the third embodiment the virtual robot 4 and the evaluators H are in a one-to-many relationship, whereas in the fourth embodiment the virtual robots 4 and the evaluators H are in a many-to-many relationship.
 In the virtual sensibility evaluation support system 41, a plurality of (here, two) evaluators Ha and Hb work on a plurality of (here, two) virtual robots 4a and 4b, respectively. In this case, in the server 3, the evaluation target storage unit 7 stores a physical model for each of the virtual robots 4a and 4b. In the video presentation devices 5a and 5b, the video presentation units 12a and 12b present to the evaluators Ha and Hb videos in which the changes in the surrounding environment are reflected in the operations of the virtual robots 4a and 4b; that is, the video presentation devices 5a and 5b present the videos of the virtual robots 4a and 4b to the evaluators Ha and Hb simultaneously.
 As described above, according to the fourth embodiment, the videos of the plurality of virtual robots 4a and 4b are presented to the plurality of evaluators Ha and Hb simultaneously, so the evaluation results for the virtual robots 4a and 4b can be shared by the evaluators Ha and Hb. Even when, for example, the evaluators Ha and Hb take turns working on the virtual robots 4a and 4b, each robot can be evaluated appropriately under comparatively low-constraint conditions. This is effective, for example, when the evaluation assumes robots used on a production line whose work processes are consecutive (that is, whose preceding and following processes are correlated). As in the second embodiment described above, the fourth embodiment may also be configured so that the video presentation devices 5a and 5b each have an evaluation target storage unit and an evaluation target control unit.
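 Structurally, the many-to-many case might look like the following rough sketch, in which the storage unit holds one physical model per robot and every device receives the composite scene. It reuses the hypothetical VirtualRobotModel stub from the device-side sketch above; all names remain assumptions.

    # VirtualRobotModel is the stub from the device-side sketch above.
    robots = {"robot4a": VirtualRobotModel(), "robot4b": VirtualRobotModel()}

    def update_and_present(change, devices):
        scene = {name: robot.step(robot.influence_of(change))
                 for name, robot in robots.items()}
        for device in devices:        # devices 5a and 5b
            device.present(scene)     # both robots to both evaluators at once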
 (Other embodiments)
 Although the present disclosure has been described in accordance with the embodiments, it is to be understood that the disclosure is not limited to those embodiments or structures. The present disclosure encompasses various modifications and variations within an equivalent range. In addition, various combinations and forms, as well as other combinations and forms that include only one element of them, or more or fewer elements, also fall within the scope and spirit of the present disclosure.
 The embodiments above exemplify a human-coexistence virtual robot as the virtual evaluation target, but any device may be used as long as it is an object on which an evaluator performs sensory and ergonomic evaluation, for example, a keyboard on which a person performs key input or a door that a person opens and closes.
 The embodiments above exemplify a configuration that detects changes in the surrounding environment of the virtual evaluation target with cameras, but the changes may instead be detected with an infrared sensor, a magnetic field sensor, or the like. In a configuration using an infrared sensor, the position of the infrared transmitter may serve as the reference point; in a configuration using a magnetic field sensor, the position of the magnetic field source may serve as the reference point.
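 This interchangeability of detection modalities suggests a common interface, sketched below under the assumption that any sensor reporting changes relative to its own reference point can serve as the environment change detection unit. The class hierarchy is illustrative only.

    from abc import ABC, abstractmethod

    class ChangeSensor(ABC):
        """Any modality works if it reports changes relative to its own
        reference point (camera pose, IR transmitter, or field source)."""
        @abstractmethod
        def detect(self):
            """Return a change record, or None if nothing changed."""

    class InfraredSensor(ChangeSensor):
        def detect(self):
            return None  # reference point: position of the IR transmitter

    class MagneticFieldSensor(ChangeSensor):
        def detect(self):
            return None  # reference point: position of the field source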

Claims (10)

  1.  A virtual sensibility evaluation support system (1, 21, 31, 41) for performing sensory and ergonomic evaluation of a virtual evaluation target (4) without prototyping a real machine, the system comprising:
     an evaluation target storage unit (7, 27) that stores a physical model of the virtual evaluation target;
     an environment change detection unit (8, 23) that detects a change in the surrounding environment of the virtual evaluation target using a preset reference point;
     an evaluation target control unit (9, 28) that uses the physical model to reflect the change in the surrounding environment detected by the environment change detection unit in the operation of the virtual evaluation target; and
     a video presentation unit (12, 29) that superimposes on the real environment a video in which the change in the surrounding environment is reflected in the operation of the virtual evaluation target, and presents it to an evaluator.
  2.  The virtual sensibility evaluation support system according to claim 1, wherein the environment change detection unit detects a change in the real environment and an action of the evaluator as the change in the surrounding environment of the virtual evaluation target.
  3.  The virtual sensibility evaluation support system according to claim 1 or 2, wherein, when the virtual evaluation target is operating, the evaluation target control unit reflects the change in the surrounding environment in the current operating state of the virtual evaluation target.
  4.  The virtual sensibility evaluation support system according to any one of claims 1 to 3, wherein the evaluation target control unit reflects the change in the surrounding environment in the position and posture of the virtual evaluation target.
  5.  The virtual sensibility evaluation support system according to claim 4, wherein the virtual evaluation target has a movable part, and the evaluation target control unit reflects the change in the surrounding environment in the position, posture, and amount of movement of the movable part, in addition to reflecting it in the position and posture of the virtual evaluation target.
  6.  The virtual sensibility evaluation support system (1) according to any one of claims 1 to 5, wherein a server (3) and a video presentation device (5) worn by the evaluator are configured to communicate with each other, the server has the evaluation target storage unit (7), the environment change detection unit (8), and the evaluation target control unit (9), and the video presentation device has the video presentation unit (12).
  7.  The virtual sensibility evaluation support system (21) according to any one of claims 1 to 5, wherein a server (22) and a video presentation device (25) worn by the evaluator are configured to communicate with each other, the server has the environment change detection unit (23), and the video presentation device has the evaluation target storage unit (27), the evaluation target control unit (28), and the video presentation unit (29).
  8.  The virtual sensibility evaluation support system according to claim 6 or 7, wherein the server and the video presentation device are configured to communicate wirelessly.
  9.  The virtual sensibility evaluation support system (31) according to any one of claims 1 to 8, wherein the virtual evaluation target and the video presentation units are in a one-to-many relationship, and the plurality of video presentation units present a video of the single virtual evaluation target to a plurality of evaluators simultaneously.
  10.  The virtual sensibility evaluation support system (41) according to any one of claims 1 to 8, wherein the virtual evaluation targets and the video presentation units are in a many-to-many relationship, and the plurality of video presentation units present videos of the plurality of virtual evaluation targets to a plurality of evaluators simultaneously.
PCT/JP2017/020077 2016-07-19 2017-05-30 Virtual sensory evaluation assistance system WO2018016192A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-141346 2016-07-19
JP2016141346A JP2018013839A (en) 2016-07-19 2016-07-19 Virtual sensibility evaluation support system

Publications (1)

Publication Number Publication Date
WO2018016192A1 true WO2018016192A1 (en) 2018-01-25

Family

ID=60992041

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/020077 WO2018016192A1 (en) 2016-07-19 2017-05-30 Virtual sensory evaluation assistance system

Country Status (2)

Country Link
JP (1) JP2018013839A (en)
WO (1) WO2018016192A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10869632B2 (en) * 2018-01-24 2020-12-22 C.R.F. Società Consortile Per Azioni System and method for ergonomic analysis, in particular of a worker

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112364479A (en) * 2020-09-30 2021-02-12 深圳市为汉科技有限公司 Virtual welding evaluation method and related system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005071285A (en) * 2003-08-28 2005-03-17 New Industry Research Organization Collision detection method that change detail degree according to interaction in space and virtual space formation device using its method
JP2006302034A (en) * 2005-04-21 2006-11-02 Canon Inc Image processing method and image processor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KLINKER, GUDRUN ET AL.: "Fata Morgana - A Presentation System for Product Design" [online], Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR'02), 30 September 2002 (2002-09-30), pages 76-85, XP010620944, DOI: 10.1109/ISMAR.2002.1115076. Retrieved from the Internet: <URL:http://ieeexplore.ieee.org/document/1115076> [retrieved on 2017-08-04] *

Also Published As

Publication number Publication date
JP2018013839A (en) 2018-01-25

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17830712; Country of ref document: EP; Kind code of ref document: A1)
NENP: Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17830712; Country of ref document: EP; Kind code of ref document: A1)