WO2011080882A1 - Operation space presentation device, operation space presentation method, and program - Google Patents
- Publication number
- WO2011080882A1 (PCT/JP2010/007167; JP2010007167W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- unit
- movable robot
- imaging
- dimensional
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1674—Programme controls characterised by safety, monitoring, diagnostic
- B25J9/1676—Avoiding collision or forbidden zones
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40478—Graphic display of work area of robot, forbidden, permitted zone
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/49—Nc machine tool, till multiple
- G05B2219/49137—Store working envelop, limit, allowed zone
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Definitions
- the present invention relates to a motion space presentation device and a motion space presentation method for generating data for presenting a motion space of a movable robot that operates according to a motion plan.
- when a person works adjacent to such a robot, the person may come into contact with the robot and be injured if he or she does not correctly perceive the robot's movement space. Even if no one is injured, an emergency stop of the robot increases idle time, or the task the robot is executing fails.
- the mobile robot disclosed in Patent Document 1 projects a dangerous range on the road surface on which the mobile robot moves. Thereby, the mobile robot presents a danger range and realizes a smooth operation.
- FIG. 27 is a diagram showing an outline of the mobile robot disclosed in Patent Document 1.
- the operation planning unit 9902 determines the operation content of the mobile robot 9901.
- the operation control unit 9903 calculates information for driving the wheel 9906 based on the determined operation content and information on the wheel 9906 measured by the encoder 9908.
- the driving unit 9904 drives the wheels 9906 based on the calculated information.
- the operation control unit 9903 calculates information for driving the arm unit 9907 based on the determined operation content and information on the arm unit 9907 measured by the encoder 9909.
- the driving unit 9905 drives the arm unit 9907 according to the calculated information.
- the display control unit 9910 determines, based on the determined operation content, the danger range in which the mobile robot may move and operate.
- the projector 9911 projects the determined danger range on the road surface on which the mobile robot 9901 moves. Thereby, the mobile robot 9901 presents a danger range and realizes a smooth operation.
- however, when an arm robot moves at high speed, the image projected by the projector onto the robot's installation surface also moves at high speed. Furthermore, when the surface onto which the projector projects is curved, the projected image may be deformed, and the operation space may be misjudged.
- an object of the present invention is to provide an operation space presentation device that generates data for displaying the operation space of a robot in a three-dimensional manner.
- the motion space presentation device according to the present invention is a motion space presentation device that generates data for presenting the motion space of a movable robot, and includes: a work area generation unit that generates a three-dimensional region in which the movable robot operates; an imaging unit that captures a real image; a position and orientation detection unit that detects an imaging position, which is the position of the imaging unit, and the imaging direction of the imaging unit; and an overlay unit that selectively superimposes on the real image, according to visibility, either a line segment approximation model image, which is an image of the line segment approximation model of the movable robot viewed from the imaging position toward the imaging direction, or a work area image, which is an image of the three-dimensional region viewed from the imaging position toward the imaging direction.
- in this way, the motion space presentation device can generate data for stereoscopically displaying the motion space of the movable robot that operates according to the motion plan. In addition, by selecting which image to superimpose according to visibility, data suitable for display is generated.
- the overlay unit may superimpose the line segment approximation model image on the actual image when, according to a criterion for evaluating visibility, it evaluates that the work area image is difficult to see.
- the work area generation unit may include: a line segment approximation model holding unit that holds the line segment approximation model of the movable robot; a motion simulator that operates the line segment approximation model in a virtual space according to a motion plan for operating the movable robot; and a three-dimensional region generation unit that generates, from the result of the motion simulator operating the line segment approximation model, the three-dimensional region in which the movable robot operates.
- the motion space presentation device generates a three-dimensional region that is the motion space of the movable robot using the line segment approximation model of the movable robot. Therefore, the motion space presentation device can reduce the arithmetic processing.
- the overlay unit may superimpose on the real image, according to at least one of the smoothness of the three-dimensional region and the distance from the imaging position to the position of the movable robot, either the line segment approximation model image, which is an image of the line segment approximation model operated by the motion simulator viewed from the imaging position toward the imaging direction, or the work area image.
- the overlay unit may superimpose the line segment approximation model image on the real image when it evaluates that the three-dimensional region is smooth, and may superimpose the work area image on the real image when it evaluates that the three-dimensional region is not smooth.
- the overlay unit may superimpose the line segment approximation model image on the real image when the distance from the imaging position to the position of the movable robot satisfies a predetermined condition, and may superimpose the work area image on the real image when the distance does not satisfy the predetermined condition.
- the action space presentation device may further include an area expression adjustment unit that changes an expression format of the three-dimensional area generated by the work area generation unit.
- the region representation adjustment unit may change the representation format of the two-dimensional planes constituting the three-dimensional region according to the amount of movement per unit time with which the movable robot moves through the three-dimensional region according to the motion plan.
- thereby, the motion space presentation device can express that a part moving at higher speed poses a higher danger.
- the region representation adjustment unit may change the representation format of the two-dimensional plane that constitutes the three-dimensional region according to the difference of modes classified in the operation plan.
- the overlay unit may further superimpose the execution order determined in accordance with the operation plan on at least one of the work area image and the actual image.
- the overlay unit may further superimpose the scheduled end time of the currently executed mode on at least one of the work area image and the actual image.
- the action space presentation device can express that a space other than the action space used in the currently executing mode can be used until the scheduled end time.
- when the distance from the imaging position to the position of the movable robot is a second length shorter than a first length, the overlay unit may superimpose the work area image on the actual image with the transparency of the work area image lowered compared with when the distance is the first length.
- the motion space presentation device can express that the danger is approaching by lowering the transparency.
- the work area generation unit may generate the three-dimensional region by associating each region in which the movable robot operates with the time at which the movable robot operates in that region, and the overlay unit may extract from the three-dimensional region the region in which the movable robot operates after the current time, and superimpose on the real image, as the work area image, an image of the extracted region viewed from the imaging position toward the imaging direction.
- the motion space presentation device can generate in advance a three-dimensional region based on the motion plan, which is the motion space of the movable robot. Therefore, the motion space presentation device can reduce the arithmetic processing when superimposing.
- the position / orientation detection unit may detect the imaging position and the imaging direction using a visual tag indicating the position imaged by the imaging unit.
- the motion space presentation device can detect the imaging position and the imaging direction of the imaging unit without using the position sensing device and the posture sensing device.
- the operation space presentation device may further include a display unit that moves together with the imaging unit, and the display unit may display the work area image superimposed on the actual image by the overlay unit.
- the action space presentation device can display the superimposed image without using an external display means. Further, when the imaging unit and the display unit move together, the image is displayed on the display unit without a sense of incongruity.
- the motion space presentation device may further include a motion plan holding unit that holds the motion plan for operating the movable robot, and the work area generation unit may generate the three-dimensional region from the motion plan held by the motion plan holding unit.
- the motion space presentation device can acquire the motion plan of the movable robot without using communication means such as a network.
- the motion space presentation method according to the present invention is a motion space presentation method for generating data for presenting the motion space of a movable robot, and may include: a work area generation step of generating a three-dimensional region in which the movable robot operates; a position and orientation detection step of detecting an imaging position, which is the position of an imaging unit that captures a real image, and the imaging direction of the imaging unit; and an overlay step of selectively superimposing on the real image captured by the imaging unit, according to visibility, either a line segment approximation model image, which is an image of the line segment approximation model viewed from the imaging position toward the imaging direction, or a work area image, which is an image of the three-dimensional region viewed from the imaging position toward the imaging direction.
- this generates data for stereoscopically displaying the motion space of the movable robot that operates according to the motion plan. In addition, by selecting which image to superimpose according to visibility, data suitable for display is generated.
- the program according to the present invention is a program for generating data for presenting the motion space of a movable robot, and causes a computer to execute: a work area generation step of generating a three-dimensional region in which the movable robot operates; a position and orientation detection step of detecting an imaging position, which is the position of an imaging unit that captures a real image, and the imaging direction of the imaging unit; and an overlay step of selectively superimposing, on the real image captured by the imaging unit, either a line segment approximation model image of the movable robot or the work area image. In this way, the operation space presentation method is realized as a program.
- FIG. 1 is a functional block diagram of the action space presentation device according to the first embodiment.
- FIG. 2 is a diagram illustrating a usage state of the motion space presentation device according to the first embodiment.
- FIG. 3 is a diagram illustrating an appearance of the motion space presentation device according to the first embodiment.
- FIG. 4 is a diagram illustrating an example of display contents of the action space presentation device according to the first embodiment.
- FIG. 5 is a diagram illustrating an example of an operation plan of the operation plan holding unit in the first embodiment.
- FIG. 6 is a functional block diagram of the work area generation unit in the first embodiment.
- FIG. 7 is a diagram illustrating an example of the line segment approximation model in the first embodiment.
- FIG. 8 is a diagram illustrating an example of the operation of the three-dimensional region generation unit in the first embodiment.
- FIG. 9 is a diagram illustrating an example of the operation of the region expression adjustment unit in the first embodiment.
- FIG. 10 is a diagram illustrating an example of the operation of the overlay unit in the first embodiment.
- FIG. 11 is a diagram illustrating an example of an action space displayed on the action space presentation device according to the first embodiment.
- FIG. 12 is a diagram illustrating an example of the operation of the region expression adjustment unit in the second embodiment.
- FIG. 13 is a diagram illustrating an example of an action space displayed on the action space presentation device according to the second embodiment.
- FIG. 14 is a diagram illustrating an example of an operation plan of the operation plan holding unit in the third embodiment.
- FIG. 15 is a diagram illustrating an example of the operation of the region expression adjustment unit in the third embodiment.
- FIG. 16 is a diagram illustrating an example of an action space displayed on the action space presentation device according to the third embodiment.
- FIG. 17 is a diagram illustrating a first modification of the motion space displayed on the motion space presentation device in the third embodiment.
- FIG. 18 is a diagram illustrating a second modification of the motion space displayed on the motion space presentation device in the third embodiment.
- FIG. 19 is a diagram illustrating an example of the operation of the region expression adjustment unit in the fourth embodiment.
- FIG. 20 is a diagram illustrating an example of a three-dimensional region in the fourth embodiment.
- FIG. 21 is a diagram illustrating an example of an action space displayed on the action space presentation device according to the fourth embodiment.
- FIG. 22 is a functional block diagram of the action space presentation device according to the fifth embodiment.
- FIG. 23 is a diagram illustrating a usage state of the motion space presentation device according to the fifth embodiment.
- FIG. 24 is a diagram illustrating an appearance of the motion space presentation device according to the fifth embodiment.
- FIG. 25 is a diagram illustrating an example of the operation of the overlay unit in the fifth embodiment.
- FIG. 26 is a functional block diagram of the action space presentation device according to the sixth embodiment.
- FIG. 27 is a functional block diagram of a device for presenting a danger range in the prior art.
- FIG. 1 is a functional block diagram of the action space presentation device according to the first embodiment.
- the operation plan holding unit 101 in FIG. 1 holds an operation plan of a movable robot such as an arm robot.
- the work area generation unit 102 generates a three-dimensional area that is an operation space of the movable robot from the operation plan held by the operation plan holding unit 101.
- the region representation adjustment unit 103 deforms or processes the texture of a two-dimensional plane constituting the three-dimensional region so that the generated three-dimensional region can be easily perceived by humans.
- the imaging unit 104 captures a real world image.
- the position and orientation detection unit 105 detects the position and orientation of the imaging unit 104 in the real world.
- the overlay unit 106 extracts, from the three-dimensional region output by the region representation adjustment unit 103, the three-dimensional region used by the movable robot after the current time, and superimposes it on the real-world image as a two-dimensional plane viewed from the viewpoint of the imaging unit 104.
- the display unit 107 is a display unit for a person to confirm a three-dimensional area.
- the work area generation unit 102 and the area expression adjustment unit 103 complete their processing before the movable robot starts operating.
- the overlay unit 106 displays, on the display unit 107, a three-dimensional region corresponding to the operation space of the movable robot according to the position and posture of the imaging unit 104 and the elapsed time of the movable robot's operation.
- the motion space presentation device 100 presents the motion space of the movable robot to a person.
- the work area generation unit 102 and the area expression adjustment unit 103 may perform processing in parallel with the operation of the movable robot.
- FIG. 2 is a diagram showing a usage state of the action space presentation device 100 shown in FIG.
- the motion space presentation device 100 is used in an environment where a person 201 and a movable robot 202 work adjacent to each other.
- the motion plan holding unit 101, the work area generation unit 102, the area expression adjustment unit 103, and the overlay unit 106 are implemented as electronic circuits of the motion space presentation device 100.
- the imaging unit 104 and the display unit 107 are mounted on the surface of the operation space presentation device 100.
- the position / orientation detection unit 105 detects the imaging position and the imaging direction of the imaging unit 104 by using a sensing device mounted on the operation space presentation device 100 and a sensing device mounted in the work environment.
- a plurality of position sensing devices 204 are installed on the ceiling, and a plurality of posture sensing devices 205 are installed on the floor.
- FIG. 3 is a diagram showing an appearance of the action space presentation device 100 shown in FIG.
- the motion space presentation device 100 is configured as a display terminal including a camera 301 that captures an image of the real world and a display screen 302 that displays an image in which a three-dimensional region is superimposed on the real world.
- the imaging unit 104 illustrated in FIG. 1 is implemented by a camera 301 or the like.
- the display unit 107 shown in FIG. 1 is implemented by a display screen 302 or the like.
- the display screen 302 may be realized in a size that the person 201 can hold in one hand, like a mobile phone or a PDA (Personal Digital Assistant).
- alternatively, the display screen 302 may be about the size of a spectacle lens and realized as a head-mounted display with an eyepiece.
- the position sensing device 303 is a position sensing device of the action space presentation device 100, and detects the position of the action space presentation device 100 in the real world in cooperation with the position sensing device 204 on the environment side.
- the position sensing device 303 of the motion space presentation device 100 and the position sensing device 204 in the work environment realize position detection by, for example, a well-known indoor GPS (Global Positioning System) using UWB (Ultra Wide Band).
- the posture sensing device 304 is a posture sensing device of the motion space presentation device 100 and detects the posture of the motion space presentation device 100 in cooperation with the posture sensing device 205 in the work environment.
- the posture sensing device 304 and the environment-side posture sensing device 205 of the motion space presentation device 100 realize posture detection by using, for example, a known three-dimensional tilt sensor using magnetism.
- the position and orientation of the motion space presentation device 100 can be replaced with the imaging position and imaging direction of the imaging unit 104 shown in FIG.
- the imaging position means the position of the imaging unit 104 itself.
- the imaging direction means the direction from the imaging unit 104 to the subject.
- FIG. 4 is a diagram illustrating an example of display contents of the action space presentation device 100 according to the first embodiment.
- the person 201 can intuitively grasp the motion space of the movable robot 202 by observing the movable robot 202 via the motion space presentation device 100. For example, when the distance between the person 201 and the movable robot 202 is long, the display content 401 is displayed on the display screen 302. An operation space 402 of the display content 401 is an operation space of the movable robot 202.
- the display content 403 is displayed on the display screen 302.
- the operation space 404 of the movable robot 202 is displayed larger than the operation space 402 because the distance between the person 201 and the movable robot 202 is short.
- the motion space presentation device 100 presents the motion space to the person 201 more clearly by reducing the transparency of the motion space.
- specifically, when the distance from the imaging position to the movable robot 202 is a second length shorter than a first length, the overlay unit 106 of the motion space presentation device 100 lowers the transparency of the work area image showing the operation space compared with when the distance is the first length. The overlay unit 106 then superimposes the work area image with reduced transparency on the real-world image.
- FIG. 5 is a diagram showing an example of an operation plan held by the operation plan holding unit 101 shown in FIG.
- the operation plan 501 shown in FIG. 5 is described in SLIM (Standard Language for Industrial Manipulator) which is a JIS standard industrial robot language. There are various languages for operating robots, from those whose specifications are standardized to those designed for specific robots.
- the operation plan 501 may be described by a language specification other than SLIM.
- the present invention can be implemented even in other languages.
- the operation based on the operation plan 501 is as follows. When the operation plan 501 is executed with the tip of the movable robot 202 at spatial position C, the tip of the movable robot 202 moves to spatial position A at a speed of 100. Next, the tip moves to spatial position B at a speed of 50, and then moves to spatial position C. The tip of the movable robot 202 repeats this cycle of moving sequentially to spatial position A, spatial position B, and spatial position C ten times.
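The SLIM source of operation plan 501 is not reproduced in this translation; the following Python sketch merely models the cycle described above (all names are illustrative, not from the patent).

```python
# Hypothetical model of operation plan 501: starting from spatial position C,
# move to A at speed 100, to B at speed 50, then to C, repeating ten times.

def run_plan(cycles=10):
    """Return the sequence of (target, speed) moves the plan executes."""
    moves = []
    for _ in range(cycles):
        speed = 100                  # SPEED = 100
        moves.append(("A", speed))   # MOVE A
        speed = 50                   # SPEED = 50
        moves.append(("B", speed))   # MOVE B
        moves.append(("C", speed))   # MOVE C (speed stays 50)
    return moves

plan = run_plan()
```

Since each cycle contains three moves, the full plan executes thirty moves in total.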
- FIG. 6 is a functional block diagram of the work area generation unit 102 shown in FIG.
- the shape of the movable robot 202 is expressed by approximated line segments, and is stored in the line segment approximate model holding unit 601 as a line segment approximate model (also called a linear approximate model or a piecewise linear approximate model).
- FIG. 7 is a diagram illustrating an example of a line segment approximation model according to the first embodiment.
- the actual robot shape 701 is generally not a straight line but a shape that uses many curved surfaces in consideration of safety and weight reduction.
- by using mathematical expressions that represent curved surfaces, the motion space presentation device 100 could represent the motion space presented to the person 201 more accurately. However, calculating the shape of the robot from mathematical expressions representing curved surfaces increases the amount of computation. Therefore, the actual robot shape 701 is converted in advance into an approximate model, such as the line segment approximation model 702 expressed by a plurality of approximating line segments.
- the line segment approximation model 702 is an example of a line segment approximation model, and is the smallest polyhedron that can contain the actual robot shape 701.
- the line segment approximation model 702 is expressed by, for example, an internal format 703 of the line segment approximation model.
- the internal format 703 of the line segment approximation model is expressed by XML (Extensible Markup Language) as a set of line segments constituting the polyhedron.
- the line segment is represented by a <line> tag.
- the attribute start indicates the start point of the line segment, and the attribute end indicates the end point of the line segment.
- when the attribute attr is body, the line segment constitutes the shape of the movable robot 202; when the attribute attr is traj, the line segment represents a movement trajectory.
- a set of line segments is represented by a <polygons> tag.
- three-dimensional coordinate values having units are stored in p1 to p8 of the internal format 703 of the line segment approximation model shown in FIG. 7.
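As a concrete illustration, the internal format described above can be sketched with Python's standard XML library; the tag and attribute names follow the description (`polygons`, `line`, `start`, `end`, `attr`), while the point names stand in for the coordinate values with units, which are placeholders here.

```python
import xml.etree.ElementTree as ET

# Build a <polygons> set of <line> elements as in internal format 703:
# attr="body" marks an edge of the robot shape, attr="traj" a trajectory.
polygons = ET.Element("polygons")
ET.SubElement(polygons, "line", start="p1", end="p2", attr="body")
ET.SubElement(polygons, "line", start="p1", end="p5", attr="traj")

xml_text = ET.tostring(polygons, encoding="unicode")
```

Serializing the element tree yields a `<polygons>` fragment containing the two `<line>` entries.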
- the three-dimensional region generation unit 603 samples the operation of the movable robot 202 at a constant time interval ⁇ T, generates a three-dimensional region that is an operation space of the movable robot 202, and outputs it.
- FIG. 8 is a diagram illustrating an example of the operation of the three-dimensional region generation unit 603 illustrated in FIG.
- the three-dimensional region generation unit 603 sets the start time of the operation plan 501 in the attribute time of the <polygons> tag.
- the notation of time is described as [hour]: [minute]: [second].
- three-dimensional coordinate values having units are stored in p1(0), p2(0), p1(ΔT), p2(ΔT), p1(2ΔT), and p2(2ΔT) of the internal format 804 of the three-dimensional region shown in FIG. 8.
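The sampling step above can be sketched as follows: a vertex position p(kΔT) is recorded at each sampling instant. This sketch assumes a single vertex moving linearly between two illustrative positions; the real unit samples every vertex of the line segment approximation model.

```python
# Sample one model vertex at times 0, dT, 2*dT, ..., n_steps*dT while it
# moves linearly. Positions and the linear-motion assumption are illustrative.

def lerp(p, q, t):
    """Linear interpolation between 3-D points p and q at parameter t in [0, 1]."""
    return tuple(a + (b - a) * t for a, b in zip(p, q))

def sample_vertex(p_start, p_goal, n_steps):
    """Vertex positions p(0), p(dT), ..., p(n_steps * dT) for a linear move."""
    return [lerp(p_start, p_goal, k / n_steps) for k in range(n_steps + 1)]

samples = sample_vertex((0.0, 0.0, 0.0), (1.0, 2.0, 0.0), n_steps=4)
```

Consecutive samples of the model's vertices are then joined into the two-dimensional planes that bound the swept three-dimensional region.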
- the region representation adjustment unit 103 processes the shape of the three-dimensional region output from the work region generation unit 102 or the texture of the two-dimensional plane constituting the three-dimensional region.
- FIG. 9 is a diagram illustrating an example in which the region expression adjustment unit 103 changes the color of the two-dimensional plane that forms the three-dimensional region according to the operation speed of the movable robot 202.
- the line segments that represent the three-dimensional area form a polygon having the line segments as sides.
- the <polygon> tag in the internal format 903 of the three-dimensional area indicates a polygon.
- the value of the attribute “start” indicates a relative time when the movable robot 202 enters the two-dimensional plane.
- the value of the attribute end indicates a relative time when the movable robot 202 leaves the two-dimensional plane.
- the internal format 903 of the three-dimensional region shown in FIG. 9 includes a two-dimensional plane 901 formed by the operation from time 0 to ΔT, and a two-dimensional plane 902 formed by the operation from time ΔT to 2ΔT.
- a line segment generated by the movement trajectory is represented by a <line> tag whose attr attribute value is traj. The length of a line segment generated by the movement trajectory is proportional to the operation speed.
- the color of the two-dimensional plane 901, which contains a trajectory indicating fast motion, is set to red; that is, the value of the attribute color of its <polygon> tag is red. The color of the two-dimensional plane 902, which contains a trajectory indicating slow motion, is set to yellow; that is, the value of the attribute color of its <polygon> tag is yellow.
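Because the length of a trajectory segment swept during one sampling interval is proportional to the speed, the coloring rule can be sketched as a threshold on that length. The threshold value is an illustrative assumption, not a value from the patent.

```python
import math

FAST_THRESHOLD = 0.5  # trajectory length per sampling interval; assumed value

def plane_color(traj_start, traj_end):
    """Color attribute for the <polygon> containing this trajectory segment:
    red for a long (fast) segment, yellow for a short (slow) one."""
    return "red" if math.dist(traj_start, traj_end) > FAST_THRESHOLD else "yellow"
```

For example, a segment of length 1.0 per interval is colored red, while one of length 0.1 is colored yellow.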
- the overlay unit 106 extracts the three-dimensional region used by the movable robot 202 after the current time from the three-dimensional region processed by the region representation adjustment unit 103. More specifically, the overlay unit 106 calculates the current relative time, taking the time when the movable robot 202 starts to move as relative time 00:00:00. Then, only the three-dimensional regions in which the value of the attribute time of the <polygons> tag is at or after the current relative time are extracted.
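The extraction step can be sketched by parsing the relative [hour]:[minute]:[second] time attributes and keeping only the groups at or after the current relative time; the helper names are illustrative.

```python
# Keep only region time attributes at or after the current relative time.

def to_seconds(hms):
    """Convert a relative time string "HH:MM:SS" to seconds."""
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s

def regions_after(region_times, now):
    """region_times: one time-attribute string per <polygons> group."""
    now_s = to_seconds(now)
    return [t for t in region_times if to_seconds(t) >= now_s]

future = regions_after(["00:00:00", "00:00:10", "00:01:00"], "00:00:10")
```

Regions the robot has already left are thus dropped, so only the space still to be used is presented.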
- the overlay unit 106 superimposes a three-dimensional region, which is an operation space of the movable robot 202, on a real-world image captured by the camera 301 as a translucent texture.
- FIG. 10 is a diagram illustrating an example in which the overlay unit 106 superimposes an image obtained when the three-dimensional region is viewed from the imaging position toward the imaging direction on the actual image captured by the camera 301.
- the position of the movable robot 202 is known.
- the imaging position and imaging direction of the imaging unit 104 in the real world are traced by the position and orientation detection unit 105.
- the specifications of the camera 301 (lens focal length, image sensor shape) are known.
- a transformation matrix for converting the operation space of the movable robot 202 into an image viewed from the imaging position of the imaging unit 104 toward the imaging direction is calculated by CG (Computer Graphics) processing. That is, a transformation matrix for converting the world coordinate system (Xw, Yw, Zw) into the display coordinate system (xc, yc) is calculated.
- the overlay unit 106 projects the three-dimensional region onto a two-dimensional plane using this transformation matrix and superimposes it on the real image including the movable robot 202.
- the transparency when superimposing is set to a value proportional to the distance calculated from the imaging position and the position of the movable robot 202.
- the distance between the imaging position and the movable robot 202, and the distance between the motion space presentation device 100 and the movable robot 202, both approximate the distance between the person 201 and the movable robot 202.
- the world coordinates Pr of the movable robot 202 are known.
- the overlay unit 106 can superimpose the image of the three-dimensional region on the actual image according to the calculated distance.
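The projection and transparency rule above can be sketched with a simple pinhole model: a camera-frame point maps to display coordinates (xc, yc), and transparency grows in proportion to the camera-to-robot distance, so the overlay becomes more opaque as the robot gets closer. The focal length, clamp, and proportionality constant are illustrative assumptions.

```python
import math

def project(point_cam, focal=500.0):
    """Camera-frame (X, Y, Z), Z > 0, -> display coordinates (xc, yc)."""
    x, y, z = point_cam
    return (focal * x / z, focal * y / z)

def transparency(cam_pos, robot_pos, k=0.2, max_alpha=0.9):
    """Transparency proportional to distance, clamped to [0, max_alpha]."""
    return min(max_alpha, k * math.dist(cam_pos, robot_pos))
```

A full implementation would first transform world coordinates into the camera frame using the imaging position and direction from the position and orientation detection unit, then apply this projection to every vertex of the three-dimensional region.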
- the motion space presentation device 100 presents a motion space as shown in FIG. Thereby, the person 201 can perceive the movement space of the movable robot 202 in a three-dimensional and intuitive manner, and can avoid dangers such as a collision.
- the person 201 can effectively use the motion space by perceiving a three-dimensional region that is within the movable range of the movable robot 202 but is not currently used.
- Embodiment 2 Next, a second embodiment will be described. Description of the same parts as those in Embodiment 1 is omitted.
- the motion space presentation device according to the second embodiment includes the same components as those of the motion space presentation device 100 according to the first embodiment shown in FIG. 1.
- FIG. 12 is a diagram illustrating an example of the operation of the region expression adjustment unit according to the second embodiment.
- the region expression adjustment unit 103 changes the color indicating the three-dimensional region.
- the region representation adjustment unit 103 makes a copy of the two-dimensional plane 1201 constituting the three-dimensional region, based on its internal format.
- the region representation adjustment unit 103 then changes the internal format 1202 of the three-dimensional region by setting the color of the original two-dimensional plane to red and the color of the copied two-dimensional plane to yellow.
- the operation space presentation device 100 can display the operation space in different colors.
- FIG. 13 is a diagram illustrating an example of an operation space displayed on the operation space presentation device 100 according to the second embodiment.
- the motion space presentation device 100 displays an image like the display content 1301 at a position far from the movable robot 202, and displays an enlarged image like the display content 1302 at a position near the movable robot 202. Further, when the motion space presentation device 100 comes still closer to the movable robot 202, it displays the three-dimensional region in a different color, as in the display content 1303.
- the distance between the person 201 and the movable robot 202 can be calculated.
- when the distance between the person 201 and the movable robot 202 is less than a predetermined threshold, the overlay unit 106 changes the color of the superimposed three-dimensional region to red.
- otherwise, the overlay unit 106 changes the color of the superimposed three-dimensional region to yellow.
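The distance-dependent color switch of the second embodiment can be sketched as follows, assuming red within the threshold distance and yellow otherwise; the threshold value itself is a tuning parameter not given in the patent.

```python
def region_color(distance, d_threshold):
    """Embodiment-2 style color switch (sketch): red when the viewer is
    within the threshold distance of the robot, yellow otherwise."""
    return "red" if distance < d_threshold else "yellow"
```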
- the person 201 can perceive the distance between the person 201 and the operation space of the movable robot 202 by color, and can more intuitively understand the operation space of the movable robot 202.
- Embodiment 3 Next, a third embodiment will be described. Description of the same parts as those in Embodiment 1 is omitted. The motion space presentation device according to the third embodiment includes the same components as those of the motion space presentation device 100 according to the first embodiment shown in FIG. 1.
- the movable robot 202 operates according to a plurality of operation modes determined by the operation plan.
- the action space presentation device 100 changes the expression format of the action space according to the action mode.
- FIG. 14 is a diagram illustrating an example of an operation plan of the operation plan holding unit 101 according to the third embodiment.
- the area representation adjustment unit 103 changes the color of the three-dimensional area by switching the operation mode of the movable robot 202.
- the movable robot 202 has two operation modes and executes the operation mode 2 after the operation mode 1 is executed.
- FIG. 15 is a diagram illustrating an example of the operation of the region expression adjustment unit 103 according to the third embodiment.
- each three-dimensional region is generated according to each operation mode.
- the three-dimensional region of the operation mode 1 is set to yellow, and the attribute color of the <polygons> tag corresponding to the operation mode 1 is set to yellow.
- the three-dimensional region of the operation mode 2 is set to green, and the attribute color of the <polygons> tag corresponding to the operation mode 2 is set to green.
- the color of the three-dimensional area is changed.
- the relative time at which the operation in each operation mode starts is stored, with the time at which the entire operation starts taken as 00:00:00. More specifically, this relative time is stored as the value of the attribute time of the <polygons> tag.
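The per-mode attributes described above can be sketched in memory as follows. The `MODES` table, its color values, and the 00:07:00 start time of mode 2 are illustrative assumptions; only the yellow/green assignment and the 00:00:00 origin come from the text.

```python
# Illustrative stand-in for the per-mode <polygons> attributes: each mode
# carries a color and a relative start time measured from 00:00:00.
MODES = [
    {"mode": 1, "color": "yellow", "start": "00:00:00"},
    {"mode": 2, "color": "green",  "start": "00:07:00"},  # assumed start time
]

def to_seconds(hms):
    """Convert an 'HH:MM:SS' relative time to seconds."""
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s

def mode_color(mode_number):
    """Look up the display color of an operation mode."""
    for m in MODES:
        if m["mode"] == mode_number:
            return m["color"]
    raise ValueError(f"unknown mode {mode_number}")
```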
- FIG. 16 is a diagram illustrating an example of an operation space displayed on the operation space presentation device 100 according to the third embodiment.
- the three-dimensional area of the current motion mode is displayed in yellow, and the three-dimensional area of the next motion mode is displayed in green.
- the motion space presentation device 100 can call attention to the person 201 so as not to approach the motion space of the current motion mode. Moreover, the motion space presentation device 100 can notify the person 201 that the danger is approaching the motion space of the next motion mode.
- FIG. 17 is a diagram illustrating a first modification of the motion space displayed on the motion space presentation device 100 according to the third embodiment.
- after projecting the three-dimensional region onto the two-dimensional plane, the overlay unit 106 acquires a numerical value indicating the order of the operation mode from the attribute mode of the <polygons> tag. The overlay unit 106 then superimposes, on the two-dimensional plane, a numerical value 1701 indicating the order of the current operation mode and a numerical value 1702 indicating the order of the next operation mode. This makes it easier for the person 201 to understand the order in which the operation space transitions.
- FIG. 18 is a diagram illustrating a second modification of the motion space displayed on the motion space presentation device 100 according to the third embodiment.
- after projecting the three-dimensional region onto the two-dimensional plane, the overlay unit 106 acquires the start time of the next operation mode from the attribute time of the <polygons> tag. Then, the overlay unit 106 superimposes the difference between the current time and the start time of the next operation mode on the two-dimensional plane as the time remaining until the current operation mode ends. This makes it easier for the person 201 to know when the operation space will transition.
- the required time 1801 until the end of the operation mode 1 is displayed on the operation space presentation device 100 at the relative time 00:00:00 from the operation start time of the movable robot 202. Further, at the relative time 00:05:30, a required time 1802 until the end of the operation mode 1 is displayed.
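The remaining-time computation described above reduces to a subtraction of relative times. The sketch below assumes an illustrative mode-2 start time of 420 s (00:07:00); the patent gives the 00:05:30 sample instant but not the mode boundary itself.

```python
def remaining_time(next_mode_start, current_relative_time):
    """Seconds left in the current mode: the next mode's relative start time
    minus the current relative time (both in seconds from motion start)."""
    return max(next_mode_start - current_relative_time, 0)

# At relative time 00:05:30 (330 s), with mode 2 assumed to start at 420 s,
# 90 s remain in mode 1.
left = remaining_time(420, 330)
```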
- the region representation adjustment unit 103 changes the representation form of the two-dimensional plane that configures the three-dimensional region according to the difference of modes divided in the operation plan. Then, the overlay unit 106 superimposes the execution order and the scheduled end time on the real image or the image of the three-dimensional region.
- the position where the execution order and the scheduled end time are superimposed may be either a real image or a three-dimensional region image.
- the execution order may be superimposed in the vicinity of an image indicating an operation space corresponding to the operation mode.
- Embodiment 4 Next, a fourth embodiment will be described. Description of the same parts as those in Embodiment 1 is omitted. The motion space presentation device according to the fourth embodiment includes the same components as those of the motion space presentation device 100 according to the first embodiment shown in FIG. 1.
- FIG. 19 is a diagram illustrating an example of the operation of the region expression adjustment unit 103 according to the fourth embodiment.
- the area representation adjustment unit 103 processes the shape of the three-dimensional area output from the work area generation unit 102 or the texture of the two-dimensional plane constituting the three-dimensional area.
- FIG. 19 shows an example in which the area representation adjustment unit 103 changes the color of the two-dimensional plane that constitutes the three-dimensional area according to the operation speed of the movable robot 202.
- the internal format 1903 of the three-dimensional region shown in FIG. 19 includes a two-dimensional plane 1901 formed by the operation from time 0 to ΔT and a two-dimensional plane 1902 formed by the operation from time ΔT to 2ΔT.
- a line segment generated by the movement trajectory is represented by a <line> tag whose attr attribute value is traj. Further, the length of a line segment generated by the movement trajectory is proportional to the operation speed.
- the color of the two-dimensional plane 1901, which includes the trajectory indicating fast motion, is set to red; that is, the value of the attribute color of its <polygon> tag is set to red.
- the color of the two-dimensional plane 1902, which includes the trajectory indicating slow motion, is set to yellow; that is, the value of the attribute color of its <polygon> tag is set to yellow.
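The speed-based coloring above keys off the length of the trajectory segment swept in one time step, which is proportional to the operating speed. The function names and the fast/slow threshold below are illustrative assumptions.

```python
import math

def trajectory_length(p0, p1):
    """Length of the trajectory line segment swept during one time step;
    proportional to the operating speed of the robot part."""
    return math.dist(p0, p1)

def plane_color(p0, p1, fast_threshold):
    """Red for planes swept by fast motion, yellow for slow motion;
    the threshold is an illustrative tuning parameter."""
    return "red" if trajectory_length(p0, p1) >= fast_threshold else "yellow"
```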
- the area representation adjustment unit 103 calculates a unit normal vector of each two-dimensional plane and adds it to the polygon attribute.
- the added unit normal vector is stored in the attribute norm of the <polygon> tag in the internal format 1903 of the three-dimensional region.
- in n1 and n2 of the internal format 1903 of the three-dimensional region shown in FIG. 19, three-dimensional coordinate values of unit length are described.
- the overlay unit 106 calculates the angle θij formed by every pair of adjacent two-dimensional planes i and j of the three-dimensional region output by the region representation adjustment unit 103. Then, the overlay unit 106 counts the number n(θij < θth) of angles for which θij is smaller than a preset threshold θth. Then, the overlay unit 106 calculates the value of n(θij < θth) / n(θij), where n(θij) is the number of all the angles, including the other angles.
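The smoothness measure described above can be sketched directly from the unit normals: the angle between adjacent planes is the inverse cosine of the dot product of their normals, and the ratio counts how many of those angles fall below the threshold. Function names are illustrative.

```python
import math

def angle_between(n_i, n_j):
    """Angle between two unit normal vectors, via the inverse cosine of
    their dot product (clamped against rounding error)."""
    dot = sum(a * b for a, b in zip(n_i, n_j))
    return math.acos(max(-1.0, min(1.0, dot)))

def smoothness_ratio(adjacent_normal_pairs, theta_th):
    """n(theta_ij < theta_th) / n(theta_ij) over all adjacent plane pairs."""
    angles = [angle_between(ni, nj) for ni, nj in adjacent_normal_pairs]
    small = sum(1 for a in angles if a < theta_th)
    return small / len(angles)
```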
- when this ratio satisfies a predetermined condition, the overlay unit 106 judges that the undulation of the three-dimensional region output from the region representation adjustment unit 103 is small and that the region is smooth. When the three-dimensional region is not smooth, or when the distance between the person 201 and the movable robot 202 is equal to or greater than a predetermined threshold dth, the overlay unit 106 superimposes the three-dimensional region output by the region expression adjustment unit 103.
- otherwise, the overlay unit 106 projects the shape of the movable robot 202, approximated by line segments, onto the two-dimensional plane and superimposes it.
- n(θij) = 12 holds.
- the overlay unit 106 calculates the angle θij formed by each pair of two-dimensional planes by applying the inverse cosine function to the inner product of their normal vectors.
- for example, the angle 2007 formed by the two-dimensional plane 2001 and the two-dimensional plane 2003 is θ.
- in this example, n(θij < θth) = 4 holds.
- in this case, the overlay unit 106 determines that the three-dimensional region is smooth. Therefore, when the distance between the person 201 and the movable robot 202 is less than the predetermined threshold dth, the overlay unit 106 extracts the shape of the movable robot 202 approximated by line segments from the three-dimensional region output by the region representation adjustment unit 103, and superimposes it on the two-dimensional plane.
- the extraction of the shape of the movable robot 202 approximated by line segments is realized as follows. First, the overlay unit 106 calculates the current relative time, taking the time when the movable robot 202 starts to operate as relative time 00:00:00. Next, the overlay unit 106 extracts the line segments (<line> tags) for which the value of the attribute time of the <polygons> tag contains only a specific time after the current relative time (for example, the time after ΔT has elapsed). Thereafter, the overlay unit 106 reconstructs the two-dimensional planes constituting the line segment approximation model 702 by determining the connection relations of the line segments from their coinciding points.
- the overlay unit 106 may reconstruct a two-dimensional plane constituting a plurality of line segment approximation models by extracting line segments at a plurality of times.
- the overlay unit 106 superimposes, on the real image according to a predetermined condition, the image obtained when the line segment approximation model 702 is viewed from the imaging position of the imaging unit 104 toward the imaging direction.
- FIG. 21 is a diagram illustrating an example of an operation space displayed on the operation space presentation device 100 according to the fourth embodiment.
- when the operation space presentation device 100 is far from the movable robot 202, it displays an image like the display content 2101. When the operation space presentation device 100 is close to the movable robot 202, it enlarges the image and displays it like the display content 2102.
- when the motion space presentation device 100 comes even closer to the movable robot 202 and the surface of the three-dimensional region is smooth, it becomes difficult for the person 201 to intuitively perceive the motion space. Therefore, the motion space presentation device 100 automatically switches to a display showing the shape of the movable robot 202 approximated by line segments, and displays an image like the display content 2103. As a result, the person 201 can grasp the operation space of the movable robot 202 more intuitively.
- the distance between the person 201 and the movable robot 202 can be calculated.
- in the above description, the overlay unit 106 superimposes either the line segment approximation model image or the three-dimensional region image on the real image according to both the smoothness and the distance; however, either image may be superimposed on the real image depending on at least one of them.
- for example, the overlay unit 106 may determine whether to superimpose the image of the line segment approximation model or the image of the three-dimensional region according to only the smoothness of the three-dimensional region. In this case, the overlay unit 106 may superimpose the image of the line segment approximation model on the real image when it evaluates that the three-dimensional region is smooth, and may superimpose the image of the three-dimensional region on the real image when it evaluates that the three-dimensional region is not smooth.
- alternatively, the overlay unit 106 may determine which of the line segment approximation model image and the three-dimensional region image to superimpose according to only the distance between the imaging position and the movable robot 202. In this case, when the distance between the imaging position and the position of the movable robot 202 satisfies a predetermined condition, the overlay unit 106 may superimpose the line segment approximation model image on the real image, and when the distance does not satisfy the predetermined condition, it may superimpose the three-dimensional region image on the real image.
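The switching rule described across the preceding paragraphs (line segment model when the region is smooth and the viewer is close, three-dimensional region otherwise) can be sketched as a single decision function; the name and return values are illustrative.

```python
def choose_overlay(smooth, distance, d_th):
    """Embodiment-4 style switching (sketch): superimpose the line segment
    approximation model when the three-dimensional region is smooth AND the
    viewer is within the distance threshold; otherwise superimpose the
    three-dimensional region itself."""
    if smooth and distance < d_th:
        return "line_segment_model"
    return "three_dimensional_region"
```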
- the overlay unit 106 may superimpose either the image of the line segment approximation model or the image of the three-dimensional region on the actual image by a separate switching unit.
- the area representation adjustment unit 103 calculates the unit normal vector of each two-dimensional plane, but the work area generation unit 102 may calculate the unit normal vector.
- the overlay unit 106 can evaluate the smoothness of the three-dimensional area and can selectively superimpose a line segment approximation model or an image of the three-dimensional area on the actual image.
- FIG. 22 is a functional block diagram of the motion space presentation device according to the fifth embodiment.
- the position / orientation detection unit 2205 detects the imaging position and imaging direction of the imaging unit 104 from the real world image captured by the imaging unit 104.
- FIG. 23 is a diagram showing a usage state of the action space presentation device 2200 shown in FIG.
- the motion space presentation device 2200 is used in an environment where a person 201 and a movable robot 202 work adjacent to each other.
- the motion plan holding unit 101, the work area generation unit 102, the region expression adjustment unit 103, the position / orientation detection unit 2205, and the overlay unit 106 are implemented as electronic circuits of the motion space presentation device 2200.
- the imaging unit 104 and the display unit 107 are mounted on the surface of the action space presentation device 2200.
- the visual tag 2304 is used when the position / orientation detection unit 2205 detects an imaging position and an imaging direction.
- FIG. 24 is a diagram showing an appearance of the action space presentation device 2200 shown in FIG.
- the motion space presentation device 2200 includes a camera 301 that images the real world and a display screen 302 that displays an image in which a three-dimensional region is superimposed on the real world.
- the display screen 302 may be realized at a size that the person 201 can hold in the hand, such as that of a mobile phone or a PDA (Personal Digital Assistant).
- alternatively, the display screen 302 may be about the size of a spectacle lens and realized as a head-mounted display with an eyepiece.
- the visual tag 2304 is imaged by the imaging unit 104 and displayed on the display screen 302.
- the position / orientation detection unit 2205 detects the imaging position and imaging direction of the imaging unit 104 using the captured visual tag 2304.
- FIG. 25 is a diagram illustrating an example of the operation of the overlay unit 106 illustrated in FIG.
- the overlay unit 106 projects a three-dimensional region onto a two-dimensional plane using the visual tag 2304.
- the visual tag 2304 calibrated in advance is arranged so as to be included in the imaging range of the camera 301.
- the overlay unit 106 projects the three-dimensional region with a translucent texture on the real-world image captured by the camera 301.
- with image processing software such as ARToolKit, the marker coordinate system and the display coordinate system can easily be associated by calibration or the like.
- since the overlay unit 106 associates the visual tag 2304 with the marker coordinate system in advance, the transformation matrix that converts the marker coordinate system (Xm, Ym, Zm) into the display coordinate system (xc, yc) can be determined when the visual tag 2304 is imaged.
- the overlay unit 106 projects the three-dimensional region onto the two-dimensional plane using this transformation matrix and superimposes it on the real-world image.
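The marker-to-display mapping above can be sketched as a homogeneous 3x4 transform applied to marker coordinates. The matrix `M` below is an illustrative placeholder for the matrix recovered from the calibrated visual tag (ARToolKit-style), not an actual calibration result.

```python
def make_marker_to_display(M):
    """Given a 3x4 matrix M mapping homogeneous marker coordinates to
    homogeneous display coordinates, return a function that maps a marker
    point (Xm, Ym, Zm) to display coordinates (xc, yc)."""
    def transform(p):
        ph = [p[0], p[1], p[2], 1.0]   # homogeneous marker coordinates
        x, y, w = (sum(M[i][j] * ph[j] for j in range(4)) for i in range(3))
        return (x / w, y / w)          # perspective divide
    return transform

# Trivial illustrative matrix: drops Zm and keeps (Xm, Ym) unchanged.
M = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
to_display = make_marker_to_display(M)
```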
- FIG. 26 is a functional block diagram of the motion space presentation device according to the sixth embodiment.
- the action space presentation device 2600 includes a work area generation unit 2602, an imaging unit 104, a position / orientation detection unit 105, and an overlay unit 2606.
- the work area generation unit 2602 acquires an operation plan of the movable robot 202 via a communication network, for example. Thus, even without the motion plan holding unit 101 shown in the first embodiment, the work area generation unit 2602 can generate a three-dimensional region according to the motion plan of the movable robot 202.
- the imaging unit 104 captures a real image. That is, the imaging unit 104 captures a subject and generates a real image.
- the position / orientation detection unit 105 detects the imaging position and imaging direction of the imaging unit 104.
- the position and orientation detection unit 105 may use the visual tag 2304 shown in the fifth embodiment in order to detect the imaging position and the imaging direction.
- the overlay unit 2606 acquires the three-dimensional region generated by the work area generation unit 2602 and calculates a work area image, which is the image obtained when the three-dimensional region is viewed from the imaging position of the imaging unit 104 toward the imaging direction. Then, the overlay unit 2606 superimposes the calculated work area image on the real image captured by the imaging unit 104, and outputs the superimposed data to the outside. The output data is displayed by, for example, an external display unit.
- the motion space presentation device 2600 can thus generate data for displaying the motion space of the movable robot 202 in a three-dimensional manner without the motion plan holding unit 101, the region expression adjustment unit 103, and the display unit 107 shown in the first embodiment.
- the generated data is displayed, for example, as an image that stereoscopically displays the operation space by an external display unit. Thereby, the person 201 can intuitively perceive the operation space of the movable robot 202.
- the overlay unit 2606 may selectively superimpose either a line segment approximation model image, which is the image obtained when the line segment approximation model of the movable robot 202 is viewed from the imaging position toward the imaging direction, or the work area image, according to how difficult the line segment approximation model image or the work area image is to see. For example, the overlay unit 2606 superimposes the line segment approximation model image on the real image when it evaluates, according to a criterion for evaluating the invisibility, that the work area image is difficult to see. Thereby, data for realizing a more easily understandable display are generated.
- the overlay unit 2606 may evaluate the invisibility using, as criteria, whether the distance from the imaging position to the movable robot 202 is short, or whether the three-dimensional region generated by the work area generation unit 2602 is smooth.
- the motion space presentation device can generate data for stereoscopically displaying the motion space of the movable robot generated according to the motion plan.
- the generated data is displayed as an image for stereoscopically displaying the operation space by the display means. Thereby, the person can intuitively perceive the operation space of the movable robot.
- the present invention can be realized not only as a motion space presentation device, but also as a method having, as steps, the processing means constituting the motion space presentation device. The present invention can also be realized as a program that causes a computer to execute the steps included in those methods. Furthermore, the present invention can be realized as a computer-readable storage medium, such as a CD-ROM, storing the program.
- the motion space presentation device according to the present invention can generate data for displaying the motion space in a three-dimensional manner, and is useful for avoiding collisions with a robot at sites where people work close to robots, such as cell production sites. Moreover, the motion space presentation device according to the present invention can be applied as a tool for presenting the motion of a robot.
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
- Manipulator (AREA)
Abstract
Description
FIG. 1 is a functional block diagram of the motion space presentation device according to Embodiment 1.
Next, Embodiment 2 will be described. Description of the same parts as in Embodiment 1 is omitted. The motion space presentation device according to Embodiment 2 includes the same components as the motion space presentation device 100 according to Embodiment 1 shown in FIG. 1.
Next, Embodiment 3 will be described. Description of the same parts as in Embodiment 1 is omitted. The motion space presentation device according to Embodiment 3 includes the same components as the motion space presentation device 100 according to Embodiment 1 shown in FIG. 1. In Embodiment 3, the movable robot 202 operates according to a plurality of operation modes determined by the operation plan. The motion space presentation device 100 changes the representation format of the motion space according to the operation mode.
Next, Embodiment 4 will be described. Description of the same parts as in Embodiment 1 is omitted. The motion space presentation device according to Embodiment 4 includes the same components as the motion space presentation device 100 according to Embodiment 1 shown in FIG. 1.
(two-dimensional plane 2001, two-dimensional plane 2003)
(two-dimensional plane 2001, two-dimensional plane 2005)
(two-dimensional plane 2001, two-dimensional plane 2006)
(two-dimensional plane 2002, two-dimensional plane 2005)
(two-dimensional plane 2002, two-dimensional plane 2003)
(two-dimensional plane 2003, two-dimensional plane 2006)
(two-dimensional plane 2005, two-dimensional plane 2006)
(two-dimensional plane 2004, two-dimensional plane 2002)
(two-dimensional plane 2004, two-dimensional plane 2003)
(two-dimensional plane 2004, two-dimensional plane 2006)
(two-dimensional plane 2004, two-dimensional plane 2005)
Next, Embodiment 5 will be described. Description of the same parts as in Embodiment 1 is omitted.
Next, Embodiment 6 will be described. Description of the same parts as in Embodiment 1 is omitted.
101 motion plan holding unit
102, 2602 work area generation unit
103 region representation adjustment unit
104 imaging unit
105, 2205 position and orientation detection unit
106, 2606 overlay unit
107 display unit
201 person
202 movable robot
204, 303 position sensing device
205, 304 orientation sensing device
301 camera
302 display screen
401, 403, 405, 1301, 1302, 1303, 2101, 2102, 2103 display content
402, 404, 406 motion space
501 motion plan
601 line segment approximation model holding unit
602 motion simulator
603 three-dimensional region generation unit
701 robot shape
702 line segment approximation model
703 internal format of the line segment approximation model
801, 802, 803 line segment
804, 903, 1202, 1501, 1903 internal format of the three-dimensional region
901, 902, 1201, 1901, 1902, 2001, 2002, 2003, 2004, 2005, 2006 two-dimensional plane
1701, 1702 numerical value
1801, 1802 required time
2007 angle
2304 visual tag
9901 mobile robot
9902 motion planning unit
9903 motion control unit
9904, 9905 drive unit
9906 wheel
9907 arm
9908, 9909 encoder
9910 display control unit
9911 projector
Claims (18)
- 1. A motion space presentation device that generates data for presenting the motion space of a movable robot, the device comprising: a work area generation unit that generates a three-dimensional region in which the movable robot operates; an imaging unit that captures a real image; a position and orientation detection unit that detects an imaging position, which is the position of the imaging unit, and the imaging direction of the imaging unit; and an overlay unit that selectively superimposes on the real image captured by the imaging unit, according to invisibility (difficulty of seeing), either a line segment approximation model image, which is an image obtained when a line segment approximation model of the movable robot is viewed from the imaging position toward the imaging direction, or a work area image, which is an image obtained when the three-dimensional region is viewed from the imaging position toward the imaging direction.
- 2. The motion space presentation device according to Claim 1, wherein the overlay unit superimposes the line segment approximation model image on the real image when it evaluates, according to a criterion for evaluating the invisibility, that the work area image is difficult to see.
- 3. The motion space presentation device according to Claim 1 or 2, wherein the work area generation unit includes: a line segment approximation model holding unit that holds the line segment approximation model of the movable robot; a motion simulator that operates the line segment approximation model in a virtual space according to a motion plan for operating the movable robot; and a three-dimensional region generation unit that generates, from the result of the motion simulator operating the line segment approximation model, the three-dimensional region in which the movable robot operates.
- 4. The motion space presentation device according to Claim 3, wherein the overlay unit superimposes on the real image, according to at least one of the smoothness of the three-dimensional region and the distance from the imaging position to the position of the movable robot, either the line segment approximation model image, which is an image obtained when the line segment approximation model operated by the motion simulator is viewed from the imaging position toward the imaging direction, or the work area image.
- 5. The motion space presentation device according to Claim 4, wherein the overlay unit superimposes the line segment approximation model image on the real image when it evaluates that the three-dimensional region is smooth, and superimposes the work area image on the real image when it evaluates that the three-dimensional region is not smooth.
- 6. The motion space presentation device according to Claim 4, wherein the overlay unit superimposes the line segment approximation model image on the real image when the distance from the imaging position to the position of the movable robot satisfies a predetermined condition, and superimposes the work area image on the real image when the distance does not satisfy the predetermined condition.
- 7. The motion space presentation device according to any one of Claims 3 to 6, further comprising a region representation adjustment unit that changes the representation format of the three-dimensional region generated by the work area generation unit.
- 8. The motion space presentation device according to Claim 7, wherein the region representation adjustment unit changes the representation format of the two-dimensional planes constituting the three-dimensional region according to the amount of movement per unit time with which the movable robot moves through the three-dimensional region according to the motion plan.
- 9. The motion space presentation device according to Claim 7 or 8, wherein the region representation adjustment unit changes the representation format of the two-dimensional planes constituting the three-dimensional region according to the differences between the modes divided in the motion plan.
- 10. The motion space presentation device according to Claim 9, wherein the overlay unit further superimposes the execution order of the modes, which is determined according to the motion plan, on at least one of the work area image and the real image.
- 11. The motion space presentation device according to Claim 9 or 10, wherein the overlay unit further superimposes the scheduled end time of the currently executing mode on at least one of the work area image and the real image.
- 12. The motion space presentation device according to any one of Claims 1 to 11, wherein, when the distance from the imaging position to the position of the movable robot is a second length shorter than a first length, the overlay unit superimposes the work area image on the real image with lower transparency than when the distance is the first length.
- 13. The motion space presentation device according to any one of Claims 1 to 12, wherein the work area generation unit generates the three-dimensional region by associating regions in which the movable robot operates within the three-dimensional region with the times at which the movable robot operates there, and the overlay unit extracts from the three-dimensional region the region in which the movable robot operates after the current time, and superimposes on the real image, as the work area image, an image obtained when the extracted region is viewed from the imaging position toward the imaging direction.
- 14. The motion space presentation device according to any one of Claims 1 to 13, wherein the position and orientation detection unit detects the imaging position and the imaging direction using a visual tag indicating a position, captured by the imaging unit.
- 15. The motion space presentation device according to any one of Claims 1 to 14, further comprising a display unit that moves together with the imaging unit, wherein the display unit displays the work area image superimposed on the real image by the overlay unit.
- 16. The motion space presentation device according to any one of Claims 1 to 15, further comprising a motion plan holding unit that holds a motion plan for operating the movable robot, wherein the work area generation unit generates the three-dimensional region according to the motion plan held by the motion plan holding unit.
- 17. A motion space presentation method for generating data for presenting the motion space of a movable robot, the method comprising: a work area generation step of generating a three-dimensional region in which the movable robot operates; a position and orientation detection step of detecting an imaging position, which is the position of an imaging unit that captures a real image, and the imaging direction of the imaging unit; and an overlay step of selectively superimposing on the real image captured by the imaging unit, according to invisibility, either a line segment approximation model image, which is an image obtained when a line segment approximation model of the movable robot is viewed from the imaging position toward the imaging direction, or a work area image, which is an image obtained when the three-dimensional region is viewed from the imaging position toward the imaging direction.
- 18. A program for generating data for presenting the motion space of a movable robot, the program causing a computer to execute: a work area generation step of generating a three-dimensional region in which the movable robot operates; a position and orientation detection step of detecting an imaging position, which is the position of an imaging unit that captures a real image, and the imaging direction of the imaging unit; and an overlay step of selectively superimposing on the real image captured by the imaging unit, according to invisibility, either a line segment approximation model image, which is an image obtained when a line segment approximation model of the movable robot is viewed from the imaging position toward the imaging direction, or a work area image, which is an image obtained when the three-dimensional region is viewed from the imaging position toward the imaging direction.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011532405A JP4850984B2 (ja) | 2009-12-28 | 2010-12-09 | 動作空間提示装置、動作空間提示方法およびプログラム |
CN201080023195.1A CN102448681B (zh) | 2009-12-28 | 2010-12-09 | 动作空间提示装置、动作空间提示方法以及程序 |
US13/220,749 US8731276B2 (en) | 2009-12-28 | 2011-08-30 | Motion space presentation device and motion space presentation method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009-297790 | 2009-12-28 | ||
JP2009297790 | 2009-12-28 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/220,749 Continuation US8731276B2 (en) | 2009-12-28 | 2011-08-30 | Motion space presentation device and motion space presentation method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011080882A1 true WO2011080882A1 (ja) | 2011-07-07 |
Family
ID=44226310
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/007167 WO2011080882A1 (ja) | 2009-12-28 | 2010-12-09 | 動作空間提示装置、動作空間提示方法およびプログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US8731276B2 (ja) |
JP (1) | JP4850984B2 (ja) |
CN (1) | CN102448681B (ja) |
WO (1) | WO2011080882A1 (ja) |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011227879A (ja) * | 2010-03-30 | 2011-11-10 | Ns Solutions Corp | 情報提供装置、情報提供方法、及びプログラム |
JP2013094961A (ja) * | 2011-11-04 | 2013-05-20 | Fanuc Robotics America Corp | 3次元表示部を備えたロボット教示装置 |
JP2013240849A (ja) * | 2012-05-18 | 2013-12-05 | Fanuc Ltd | ロボットシステムの動作シミュレーション装置 |
EP2783812A2 (en) | 2013-03-18 | 2014-10-01 | Kabushiki Kaisha Yaskawa Denki | Robot device and method for manufacturing an object |
US9001152B2 (en) | 2010-03-30 | 2015-04-07 | Ns Solutions Corporation | Information processing apparatus, information processing method, and program |
JP2017094466A (ja) * | 2015-11-26 | 2017-06-01 | 株式会社デンソーウェーブ | ロボットモニタシステム |
JP2017100205A (ja) * | 2015-11-30 | 2017-06-08 | 株式会社デンソーウェーブ | バーチャルフェンス表示システム |
JP2017100204A (ja) * | 2015-11-30 | 2017-06-08 | 株式会社デンソーウェーブ | ロボット操作システム |
JP2017100207A (ja) * | 2015-11-30 | 2017-06-08 | 株式会社デンソーウェーブ | ロボット安全システム |
JP2017100206A (ja) * | 2015-11-30 | 2017-06-08 | 株式会社デンソーウェーブ | ロボット安全システム |
JP2017102242A (ja) * | 2015-12-01 | 2017-06-08 | 株式会社デンソーウェーブ | 情報表示システム |
JP2017520419A (ja) * | 2014-07-02 | 2017-07-27 | シーメンス アクチエンゲゼルシヤフトSiemens Aktiengesellschaft | 警報方法およびロボットシステム |
JP2017523054A (ja) * | 2014-07-16 | 2017-08-17 | エックス デベロップメント エルエルシー | ロボット装置用仮想セーフティケージ |
JP2017148905A (ja) * | 2016-02-25 | 2017-08-31 | ファナック株式会社 | ロボットシステムおよびロボット制御装置 |
JP2018008347A (ja) * | 2016-07-13 | 2018-01-18 | 東芝機械株式会社 | ロボットシステムおよび動作領域表示方法 |
WO2018020568A1 (ja) * | 2016-07-26 | 2018-02-01 | 三菱電機株式会社 | ケーブル可動域表示装置、ケーブル可動域表示方法、及びケーブル可動域表示プログラム |
JPWO2017199619A1 (ja) * | 2016-05-16 | 2018-08-09 | 三菱電機株式会社 | ロボット動作評価装置、ロボット動作評価方法及びロボットシステム |
JP2019008473A (ja) * | 2017-06-22 | 2019-01-17 | ファナック株式会社 | 複合現実シミュレーション装置及び複合現実シミュレーションプログラム |
JP2019084615A (ja) * | 2017-11-06 | 2019-06-06 | トヨタ自動車株式会社 | マスタ操縦装置 |
US10406689B2 (en) | 2016-02-17 | 2019-09-10 | Fanuc Corporation | Robot simulation apparatus that calculates swept space |
JP2019188531A (ja) * | 2018-04-25 | 2019-10-31 | ファナック株式会社 | ロボットのシミュレーション装置 |
JP2019206050A (ja) * | 2018-05-29 | 2019-12-05 | セイコーエプソン株式会社 | 制御装置、ヘッドマウントディスプレイ、及びロボットシステム |
JP2020011357A (ja) * | 2018-07-20 | 2020-01-23 | セイコーエプソン株式会社 | 制御装置、ヘッドマウントディスプレイおよびロボットシステム |
WO2020066475A1 (ja) * | 2018-09-26 | 2020-04-02 | コベルコ建機株式会社 | 作業機械情報表示システム |
WO2021095316A1 (ja) * | 2019-11-11 | 2021-05-20 | 株式会社日立製作所 | ロボットシステム |
JPWO2021220915A1 (ja) * | 2020-04-27 | 2021-11-04 | ||
US11218480B2 (en) | 2015-09-21 | 2022-01-04 | Payfone, Inc. | Authenticator centralization and protection based on authenticator type and authentication policy |
US11223948B2 (en) | 2015-04-15 | 2022-01-11 | Payfone, Inc. | Anonymous authentication and remote wireless token access |
WO2022131068A1 (ja) * | 2020-12-14 | 2022-06-23 | ファナック株式会社 | 拡張現実表示装置、及び拡張現実表示システム |
WO2023037456A1 (ja) * | 2021-09-08 | 2023-03-16 | ファナック株式会社 | シミュレーション装置 |
Families Citing this family (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5708196B2 (ja) | 2011-04-21 | 2015-04-30 | Seiko Epson Corp | Collision detection system, robot system, collision detection method, and program |
US9656392B2 (en) * | 2011-09-20 | 2017-05-23 | Disney Enterprises, Inc. | System for controlling robotic characters to enhance photographic results |
CN102841679B (zh) * | 2012-05-14 | 2015-02-04 | LG Electronics R&D Center (Shanghai) Co Ltd | Non-contact human-machine interaction method and device |
US20130343640A1 (en) | 2012-06-21 | 2013-12-26 | Rethink Robotics, Inc. | Vision-guided robots and methods of training them |
US9103873B1 (en) * | 2013-03-01 | 2015-08-11 | Anritsu Company | Systems and methods for improved power control in millimeter wave transceivers |
JP5673717B2 (ja) * | 2013-03-19 | 2015-02-18 | Yaskawa Electric Corp | Robot system and method for manufacturing a workpiece |
US10888998B2 (en) * | 2013-10-07 | 2021-01-12 | Abb Schweiz Ag | Method and device for verifying one or more safety volumes for a movable mechanical unit |
JP5911933B2 (ja) * | 2014-09-16 | 2016-04-27 | Fanuc Corp | Robot system that sets a motion monitoring region for a robot |
JP2016107379A (ja) | 2014-12-08 | 2016-06-20 | Fanuc Corp | Robot system with an augmented-reality-compatible display |
DE102015209896B3 (de) * | 2015-05-29 | 2016-08-18 | Kuka Roboter Gmbh | Determination of robot axis angles and selection of a robot with the aid of a camera |
JP6554945B2 (ja) * | 2015-07-03 | 2019-08-07 | Denso Wave Inc | Robot system |
US9919427B1 (en) | 2015-07-25 | 2018-03-20 | X Development Llc | Visualizing robot trajectory points in augmented reality |
US9916506B1 (en) | 2015-07-25 | 2018-03-13 | X Development Llc | Invisible fiducial markers on a robot to visualize the robot in augmented reality |
CN107921639B (zh) | 2015-08-25 | 2021-09-21 | Kawasaki Heavy Industries Ltd | Information sharing system and information sharing method between multiple robot systems |
US9855664B2 (en) * | 2015-11-25 | 2018-01-02 | Denso Wave Incorporated | Robot safety system |
JP6690203B2 (ja) * | 2015-11-25 | 2020-04-28 | Denso Wave Inc | Robot safety system |
US10712566B2 (en) | 2015-11-26 | 2020-07-14 | Denso Wave Incorporated | Information displaying system provided with head-mounted type display |
JP6420229B2 (ja) | 2015-12-10 | 2018-11-07 | Fanuc Corp | Robot system with a video display device that superimposes an image of a virtual object on live video of the robot |
CN107717982B (zh) * | 2016-08-12 | 2020-09-25 | Industrial Technology Research Institute | Control device and operating method for a robot arm |
JP6809267B2 (ja) * | 2017-02-10 | 2021-01-06 | Fuji Xerox Co Ltd | Information processing device, information processing system, and program |
JP6950192B2 (ja) * | 2017-02-10 | 2021-10-13 | FujiFilm Business Innovation Corp | Information processing device, information processing system, and program |
US11219422B2 (en) * | 2017-03-14 | 2022-01-11 | Canon Medical Systems Corporation | Image displaying system, image processing apparatus and x-ray imaging system |
CN106956261A (zh) * | 2017-04-11 | 2017-07-18 | South China University of Technology | Human-robot interaction manipulator system with a safety recognition zone, and method |
US11106967B2 (en) | 2017-07-03 | 2021-08-31 | X Development Llc | Update of local features model based on correction to robot action |
US10562181B2 (en) * | 2017-07-03 | 2020-02-18 | X Development Llc | Determining and utilizing corrections to robot actions |
AU2018341547B2 (en) * | 2017-09-26 | 2021-07-08 | Palfinger Ag | Operating device and loading crane having an operating device |
JP6633584B2 (ja) * | 2017-10-02 | 2020-01-22 | Fanuc Corp | Robot system |
US10676022B2 (en) | 2017-12-27 | 2020-06-09 | X Development Llc | Visually indicating vehicle caution regions |
KR102499576B1 (ko) * | 2018-01-08 | 2023-02-15 | Samsung Electronics Co Ltd | Electronic device and control method thereof |
JP7232437B2 (ja) * | 2018-02-19 | 2023-03-03 | The University of Tokyo | Display system for a work vehicle and generation method |
DE102018113336A1 (de) * | 2018-06-05 | 2019-12-05 | GESTALT Robotics GmbH | Method for use with a machine for setting up an augmented reality display environment |
JP7204513B2 (ja) * | 2019-02-13 | 2023-01-16 | Toshiba Corp | Control device and program |
WO2021060466A1 (ja) * | 2019-09-27 | 2021-04-01 | Tadano Ltd | Crane information display system |
US11775148B2 (en) * | 2020-11-06 | 2023-10-03 | Motional Ad Llc | Augmented reality enabled autonomous vehicle command center |
DE102020129823B4 (de) * | 2020-11-12 | 2022-07-07 | Sick Ag | Visualizing a protective field |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH01162904A (ja) * | 1987-05-27 | 1989-06-27 | Mitsubishi Electric Corp | Automatic programming device |
JPH044306U (ja) * | 1990-04-21 | 1992-01-16 | ||
JPH11237902A (ja) * | 1998-02-23 | 1999-08-31 | Agency Of Ind Science & Technol | Manipulator work teaching device |
JP2004160588A (ja) * | 2002-11-12 | 2004-06-10 | Nissan Motor Co Ltd | Method for detecting interference regions of multiple robots, and program therefor |
JP2004209641A (ja) * | 2002-12-30 | 2004-07-29 | Abb Res Ltd | Method and system for programming an industrial robot |
JP2004213673A (ja) * | 2002-12-30 | 2004-07-29 | Abb Res Ltd | Augmented reality system and method |
JP2004243516A (ja) * | 2003-02-11 | 2004-09-02 | Kuka Roboter Gmbh | Method for fading computer-generated information into an image of the real environment, and device for visualizing computer-generated information in an image of the real environment |
JP2005081445A (ja) * | 2003-09-04 | 2005-03-31 | Fanuc Ltd | Device for checking a robot interference region |
JP2009098982A (ja) * | 2007-10-18 | 2009-05-07 | Sodick Co Ltd | Machining simulation device and program therefor |
JP2009123045A (ja) | 2007-11-16 | 2009-06-04 | Toyota Motor Corp | Mobile robot and method for displaying the danger range of a mobile robot |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2627811B2 (ja) * | 1990-04-18 | 1997-07-09 | Nippon Spindle Mfg Co Ltd | Hydraulic directional control valve |
US5706195A (en) * | 1995-09-05 | 1998-01-06 | General Electric Company | Augmented reality maintenance system for multiple rovs |
US5745387A (en) * | 1995-09-28 | 1998-04-28 | General Electric Company | Augmented reality maintenance system employing manipulator arm with archive and comparison device |
JPH11254360A (ja) | 1998-03-13 | 1999-09-21 | Yaskawa Electric Corp | Robot simulation device |
US20030012410A1 (en) * | 2001-07-10 | 2003-01-16 | Nassir Navab | Tracking and pose estimation for augmented reality using real features |
JP4066168B2 (ja) | 2003-03-13 | 2008-03-26 | Omron Corp | Intruding object monitoring device |
JPWO2005015466A1 (ja) * | 2003-08-07 | 2006-10-05 | Matsushita Electric Industrial Co Ltd | Life support system and control program therefor |
JP4304133B2 (ja) | 2004-07-30 | 2009-07-29 | Toshiba Machine Co Ltd | Movement time display device for an industrial robot |
JP2006113858A (ja) * | 2004-10-15 | 2006-04-27 | Mitsubishi Heavy Ind Ltd | Remote operation support method and system for a mobile body |
DE102006048163B4 (de) | 2006-07-31 | 2013-06-06 | Pilz Gmbh & Co. Kg | Camera-based monitoring of moving machines and/or movable machine elements for collision prevention |
FR2917598B1 (fr) * | 2007-06-19 | 2010-04-02 | Medtech | Multi-application robotic platform for neurosurgery, and registration method |
DE602007003849D1 (de) * | 2007-10-11 | 2010-01-28 | Mvtec Software Gmbh | System and method for 3D object recognition |
WO2012063397A1 (ja) * | 2010-11-12 | 2012-05-18 | Panasonic Corp | Movement path search device and movement path search method |
- 2010-12-09 CN CN201080023195.1A patent/CN102448681B/zh not_active Expired - Fee Related
- 2010-12-09 WO PCT/JP2010/007167 patent/WO2011080882A1/ja active Application Filing
- 2010-12-09 JP JP2011532405A patent/JP4850984B2/ja not_active Expired - Fee Related
- 2011-08-30 US US13/220,749 patent/US8731276B2/en active Active
Cited By (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011227879A (ja) * | 2010-03-30 | 2011-11-10 | Ns Solutions Corp | Information providing device, information providing method, and program |
US9001152B2 (en) | 2010-03-30 | 2015-04-07 | Ns Solutions Corporation | Information processing apparatus, information processing method, and program |
US9030494B2 (en) | 2010-03-30 | 2015-05-12 | Ns Solutions Corporation | Information processing apparatus, information processing method, and program |
JP2013094961A (ja) * | 2011-11-04 | 2013-05-20 | Fanuc Robotics America Corp | Robot teaching device with a three-dimensional display |
US9418394B2 (en) | 2012-05-18 | 2016-08-16 | Fanuc Corporation | Operation simulation system of robot system |
JP2013240849A (ja) * | 2012-05-18 | 2013-12-05 | Fanuc Ltd | Operation simulation device for a robot system |
EP2783812A2 (en) | 2013-03-18 | 2014-10-01 | Kabushiki Kaisha Yaskawa Denki | Robot device and method for manufacturing an object |
JP2017520419A (ja) * | 2014-07-02 | 2017-07-27 | Siemens Aktiengesellschaft | Warning method and robot system |
JP2017523054A (ja) * | 2014-07-16 | 2017-08-17 | X Development LLC | Virtual safety cage for a robotic device |
JP2018086724A (ja) * | 2014-07-16 | 2018-06-07 | X Development LLC | Virtual safety cage for a robotic device |
US11223948B2 (en) | 2015-04-15 | 2022-01-11 | Payfone, Inc. | Anonymous authentication and remote wireless token access |
US11218480B2 (en) | 2015-09-21 | 2022-01-04 | Payfone, Inc. | Authenticator centralization and protection based on authenticator type and authentication policy |
US11991175B2 (en) | 2015-09-21 | 2024-05-21 | Payfone, Inc. | User authentication based on device identifier further identifying software agent |
JP2017094466A (ja) * | 2015-11-26 | 2017-06-01 | Denso Wave Inc | Robot monitor system |
JP2017100207A (ja) * | 2015-11-30 | 2017-06-08 | Denso Wave Inc | Robot safety system |
JP2017100206A (ja) * | 2015-11-30 | 2017-06-08 | Denso Wave Inc | Robot safety system |
JP2017100204A (ja) * | 2015-11-30 | 2017-06-08 | Denso Wave Inc | Robot operation system |
JP2017100205A (ja) * | 2015-11-30 | 2017-06-08 | Denso Wave Inc | Virtual fence display system |
JP2017102242A (ja) * | 2015-12-01 | 2017-06-08 | Denso Wave Inc | Information display system |
US10406689B2 (en) | 2016-02-17 | 2019-09-10 | Fanuc Corporation | Robot simulation apparatus that calculates swept space |
JP2017148905A (ja) * | 2016-02-25 | 2017-08-31 | Fanuc Corp | Robot system and robot control device |
JPWO2017199619A1 (ja) * | 2016-05-16 | 2018-08-09 | Mitsubishi Electric Corp | Robot motion evaluation device, robot motion evaluation method, and robot system |
JP2018008347A (ja) * | 2016-07-13 | 2018-01-18 | Toshiba Machine Co Ltd | Robot system and operation region display method |
JPWO2018020568A1 (ja) * | 2016-07-26 | 2019-01-31 | Mitsubishi Electric Corp | Cable movable range display device, cable movable range display method, and cable movable range display program |
WO2018020568A1 (ja) * | 2016-07-26 | 2018-02-01 | Mitsubishi Electric Corp | Cable movable range display device, cable movable range display method, and cable movable range display program |
JP6440909B2 (ja) * | 2016-07-26 | 2018-12-19 | Mitsubishi Electric Corp | Cable movable range display device, cable movable range display method, and cable movable range display program |
JP2019008473A (ja) * | 2017-06-22 | 2019-01-17 | Fanuc Corp | Mixed reality simulation device and mixed reality simulation program |
JP2019084615A (ja) * | 2017-11-06 | 2019-06-06 | Toyota Motor Corp | Master control device |
JP2019188531A (ja) * | 2018-04-25 | 2019-10-31 | Fanuc Corp | Robot simulation device |
JP7187820B2 (ja) | 2018-05-29 | 2022-12-13 | Seiko Epson Corp | Control device, head-mounted display, and robot system |
JP2019206050A (ja) * | 2018-05-29 | 2019-12-05 | Seiko Epson Corp | Control device, head-mounted display, and robot system |
JP7167518B2 (ja) | 2018-07-20 | 2022-11-09 | Seiko Epson Corp | Control device, head-mounted display, and robot system |
JP2020011357A (ja) * | 2018-07-20 | 2020-01-23 | Seiko Epson Corp | Control device, head-mounted display, and robot system |
JP2020051092A (ja) * | 2018-09-26 | 2020-04-02 | Kobelco Construction Machinery Co Ltd | Work machine information display system |
WO2020066475A1 (ja) * | 2018-09-26 | 2020-04-02 | Kobelco Construction Machinery Co Ltd | Work machine information display system |
JP2021074827A (ja) * | 2019-11-11 | 2021-05-20 | Hitachi Ltd | Robot system |
WO2021095316A1 (ja) * | 2019-11-11 | 2021-05-20 | Hitachi Ltd | Robot system |
JP7282016B2 (ja) | 2019-11-11 | 2023-05-26 | Hitachi Ltd | Robot system |
WO2021220915A1 (ja) * | 2020-04-27 | 2021-11-04 | Fanuc Corp | Display device for an industrial machine |
JP7381729B2 (ja) | 2020-04-27 | 2023-11-15 | Fanuc Corp | Display device for an industrial machine |
US11978168B2 (en) | 2020-04-27 | 2024-05-07 | Fanuc Corporation | Display device for industrial machine |
JPWO2021220915A1 (ja) * | 2020-04-27 | 2021-11-04 | ||
JPWO2022131068A1 (ja) * | 2020-12-14 | 2022-06-23 | ||
WO2022131068A1 (ja) * | 2020-12-14 | 2022-06-23 | Fanuc Corp | Augmented reality display device and augmented reality display system |
WO2023037456A1 (ja) * | 2021-09-08 | 2023-03-16 | Fanuc Corp | Simulation device |
Also Published As
Publication number | Publication date |
---|---|
JP4850984B2 (ja) | 2012-01-11 |
JPWO2011080882A1 (ja) | 2013-05-09 |
CN102448681A (zh) | 2012-05-09 |
US8731276B2 (en) | 2014-05-20 |
CN102448681B (zh) | 2014-09-10 |
US20110311127A1 (en) | 2011-12-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4850984B2 (ja) | Operation space presentation device, operation space presentation method, and program | |
CA3016539C (en) | Image processing method, display device, and inspection system | |
JP5936155B2 (ja) | Three-dimensional user interface device and three-dimensional operation method | |
CN106313049B (zh) | Somatosensory control system and control method for a humanoid robotic arm | |
US11498216B2 (en) | Remote control manipulator system and control device | |
CN104897091B (zh) | Articulated arm coordinate measuring machine | |
JP5871345B2 (ja) | Three-dimensional user interface device and three-dimensional operation method | |
JP6570742B2 (ja) | Robot motion evaluation device, robot motion evaluation method, and robot system | |
US10888998B2 (en) | Method and device for verifying one or more safety volumes for a movable mechanical unit | |
KR101768958B1 (ko) | Hybrid motion capture system for high-quality content production | |
CN112313046A (zh) | Visualizing and modifying operation-bounding regions using augmented reality | |
US9008442B2 (en) | Information processing apparatus, information processing method, and computer program | |
JP2012218120A (ja) | Manipulator motion notification device, robot system, and manipulator motion notification method | |
JP2010271536A (ja) | Work training system, work training method, and recording medium storing the work training method | |
JP7000253B2 (ja) | Force visualization device, robot, and force visualization program | |
JP2018202514A (ja) | Robot system that displays information for robot teaching | |
US11422625B2 (en) | Proxy controller suit with optional dual range kinematics | |
JP2009258884A (ja) | User interface | |
WO2017155005A1 (en) | Image processing method, display device, and inspection system | |
WO2020235539A1 (ja) | Method and device for identifying the position and orientation of an object | |
WO2024190414A1 (ja) | Image processing device, image processing system, and image processing program | |
Zhu et al. | Fusing multiple sensors information into mixed reality-based user interface for robot teleoperation | |
JP2015152338A (ja) | Distance information acquisition method, distance information acquisition device, and robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201080023195.1 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011532405 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2010840742 Country of ref document: EP |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10840742 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 10840742 Country of ref document: EP Kind code of ref document: A1 |