CN116945157A - Information processing system and method, robot system and control method, article manufacturing method, and recording medium - Google Patents

Information processing system and method, robot system and control method, article manufacturing method, and recording medium Download PDF

Info

Publication number
CN116945157A
Authority
CN
China
Prior art keywords
measurement
information
robot
model
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310442985.2A
Other languages
Chinese (zh)
Inventor
菅谷聪
温品聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2023039698A (published as JP2023162117A)
Application filed by Canon Inc filed Critical Canon Inc
Publication of CN116945157A

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661 Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

The present disclosure relates to an information processing system and method, a robot system and control method, an article manufacturing method, and a recording medium. The information processing system includes: an apparatus comprising a movable unit comprising a measurement unit configured to measure a shape of an object; and a simulation unit that performs operation simulation of the device in a virtual space by using a virtual model. The movable unit moves the measuring unit to a predetermined measuring point. The measuring unit measures an object present in the surroundings of the device at the predetermined measuring point. A model including position information of the object is acquired by using the measurement result and information about the predetermined measurement point. The simulation unit sets a virtual model of the object in the virtual space by using the model.

Description

Information processing system and method, robot system and control method, article manufacturing method, and recording medium
Technical Field
The present invention relates to an information processing system and a robot system.
Background
As a method of developing a control program for causing an apparatus such as a robot to perform a predetermined operation, there is known a method of teaching the apparatus the operation while actually operating it and checking whether it interferes with objects in the surrounding environment. However, when a program is developed by operating the actual apparatus, there is a risk that interference and damage to the apparatus actually occur, the checking operation takes time, and the control program may not be developed efficiently.
Thus, the following method has been attempted: a model of the device is operated in a virtual space by using three-dimensional model information of the device and of the objects in its surrounding environment, and a control program of the device is developed while checking whether the device model interferes with the object models. In order to properly perform such simulation in a virtual space, an accurate three-dimensional model of the device and its surrounding environment must be constructed in the simulation apparatus in advance.
Examples of objects in the surroundings of the device include structures such as walls and pillars, and other devices installed around the device, but three-dimensional shape information (e.g., CAD data) of the object does not necessarily exist.
Japanese patent application laid-open No.2003-345840 discloses a method of measuring an object placed on a reference surface to acquire point cloud data, creating surface data from the point cloud data, and creating entity data by using the surface data to create a three-dimensional model on a CAD system.
Disclosure of Invention
According to a first aspect of the present invention, an information processing system includes: an apparatus comprising a movable unit comprising a measurement unit configured to measure a shape of an object; and a simulation unit that performs operation simulation of the device in a virtual space by using a virtual model. The movable unit moves the measuring unit to a predetermined measuring point. The measuring unit measures an object present in the surroundings of the device at the predetermined measuring point. A model including position information of the object is acquired by using the measurement result and information about the predetermined measurement point. The simulation unit sets a virtual model of the object in the virtual space by using the model.
According to a second aspect of the invention, a robotic system comprises: a robot including a movable unit including a measurement unit configured to measure a shape of an object; and a simulation unit that performs operation simulation of the robot in a virtual space by using a virtual model. The movable unit moves the measuring unit to a predetermined measuring point. The measuring unit measures an object present in the surrounding environment of the robot at the predetermined measuring point. A model including position information of the object is acquired by using the measurement result and information about the predetermined measurement point. The simulation unit sets a virtual model of the object in the virtual space by using the model.
Further features of the invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Drawings
Fig. 1 is a schematic diagram illustrating a schematic configuration of an information processing system 100 according to a first embodiment.
Fig. 2 is a functional block diagram for describing the information processing system 100 according to the first embodiment.
Fig. 3 is a diagram illustrating a configuration of each of the robot control apparatus a, the vision sensor control apparatus B, the model creation apparatus C, and the simulation apparatus D.
Fig. 4 is a flowchart for describing a process of imaging preparation.
Fig. 5A is a diagram for describing the measurement range IMA (imaging range) that can be accurately measured (imaged) by the vision sensor 102.
Fig. 5B is a conceptual diagram for describing the setting of the measurement point.
Fig. 6A is a schematic diagram for describing a second teaching method for teaching measurement points.
Fig. 6B is a diagram for describing automatic setting of measurement points.
Fig. 7 is a diagram for describing a suitable setting method for the measurement points.
Fig. 8 is a flowchart for describing an imaging (measurement) process.
Fig. 9 is a schematic diagram for describing a synthesizing process and a filtering process for point cloud data acquired at each measurement point.
Fig. 10 is a flowchart for describing the three-dimensional model generation process.
Fig. 11 is a schematic diagram for describing the transition of the data in each step of model generation.
Fig. 12 is a diagram illustrating an example in which a virtual model 101M and a virtual model 103M generated in a correct positional relationship in a virtual space are displayed on a display device E.
Fig. 13 is an exemplary diagram of a measurement setting screen 400 according to the second embodiment.
Fig. 14 is a flowchart according to a second embodiment.
Fig. 15 is an exemplary diagram illustrating a virtual space according to a second embodiment.
Fig. 16A is a schematic diagram for describing a division width calculated from sensor information for acquiring a measurement point according to the second embodiment.
Fig. 16B is a schematic diagram for describing the division of the measurement region.
Fig. 17A is a schematic diagram of a measurement point and a posture of a robot according to the second embodiment, which illustrates a state of the measurement point corresponding to an angle of 0 degrees.
Fig. 17B is a schematic diagram of the state of the measurement point corresponding to the inclination angle of Drx in the X-axis direction.
Fig. 18A is a schematic diagram of the measurement point and the posture of the robot according to the second embodiment, which illustrates a state in which the movement prohibition area and the robot interfere with each other.
Fig. 18B is a schematic diagram illustrating a state of a robot whose angle differs for a measurement point at the same position.
Fig. 19 is a flowchart according to a third embodiment.
Fig. 20 is a schematic diagram for describing layering of measurement points according to the third embodiment.
Fig. 21A is a schematic diagram illustrating a measurement point and a measurement state of an nth layer for describing a procedure for excluding the measurement point according to the third embodiment.
Fig. 21B is a schematic diagram of the point cloud data acquisition processing.
Fig. 21C is a schematic diagram of the meshed model.
Fig. 21D is a schematic diagram illustrating a state in which the straight-line model of the sensor focal length for a measurement point in the (n+1)th layer and the meshed model intersect with each other.
Fig. 21E is a schematic diagram illustrating the excluded measurement points.
Fig. 22 is an exemplary diagram of a measurement setting screen 400 according to the fourth embodiment.
Fig. 23 is a schematic diagram illustrating a schematic configuration of an information processing system according to a fifth embodiment.
Fig. 24 is a schematic diagram illustrating a schematic configuration of an information processing system according to a sixth embodiment.
Fig. 25 is a functional block diagram for describing the configuration of an information processing system according to the fifth embodiment and the sixth embodiment.
Fig. 26 is a schematic diagram illustrating a robot 101 according to a seventh embodiment.
Detailed Description
In the case where three-dimensional shape information (e.g., 3D CAD data) of an object in the surrounding environment of the apparatus does not exist, if the object can be placed on the reference surface, a three-dimensional model can be created on the CAD system by the method of japanese patent application laid-open No. 2003-345840. However, with a structure that cannot be placed on a reference surface of a measuring apparatus, such as a wall or a pillar, three-dimensional shape information cannot be acquired by the method of japanese patent application laid-open No. 2003-345840.
Further, even if three-dimensional shape information (e.g., 3D CAD data) of an object in the surrounding environment of the apparatus can be obtained, since the positional relationship with respect to the apparatus cannot be known only with this information, it is not easy to construct an accurate three-dimensional model of the apparatus and the surrounding environment in a virtual space. Therefore, starting the so-called off-line simulation using the simulation device takes a lot of time and effort, which hinders the rapid development of the control program of the device.
Thus, there has been a need for a method that enables a simulation device to efficiently acquire a model of the device and the surrounding environment of the device.
An information processing system, a robot system, an information processing method, and the like according to an embodiment of the present invention will be described with reference to the drawings. The embodiments described below are merely examples, and those skilled in the art can appropriately change and implement detailed configurations without departing from the gist of the present invention, for example.
In the drawings referred to in the following embodiments and description, elements denoted by the same reference numerals have the same functions unless otherwise specified. Moreover, for ease of illustration and description, the drawings may be schematic, and thus the shapes, sizes, arrangements, etc. in the drawings do not necessarily exactly match the actual objects.
First embodiment
Configuration of information processing system
Fig. 1 is a schematic diagram illustrating a schematic configuration of an information processing system 100 (robot system) according to the first embodiment. Further, fig. 2 is a functional block diagram for describing the configuration of the information processing system 100. In fig. 2, functional elements required for describing the characteristics of the present embodiment are represented by functional blocks, but descriptions of general functional elements not directly related to the principle of solving the problem according to the present invention are omitted. Furthermore, each of the functional elements shown in fig. 2 is functionally conceptual, and does not necessarily have to be physically configured as shown. For example, the specific form of dispersion or integration of the functional blocks is not limited to the illustrated example, and all or some of the functional blocks may be functionally or physically dispersed and integrated in any unit according to the use condition or the like. Each functional block may be configured using hardware or software.
Reference numeral 101 denotes a robot as a device including a movable unit, reference numeral 102 denotes a vision sensor as a measurement unit, reference numeral 103 denotes a model creation target as a measurement target, and reference numeral a denotes a robot control device that controls the robot 101. Reference symbol B denotes a vision sensor control device that controls the vision sensor 102, reference symbol C denotes a model creation device, reference symbol D denotes a simulation device, and reference symbol E denotes a display device.
The information processing system 100 of the present embodiment measures the model creation target 103 by using the vision sensor 102 as a measurement unit mounted on the robot 101. Then, a three-dimensional model for simulation is automatically created using the measurement results and stored in the simulation device D as a simulation unit. The model creation target 103 is an object existing in the surrounding environment of the robot 101, and is an object for which a three-dimensional model (virtual model) for simulation has not been created. Examples of the model creation target 103 include, but are not limited to, objects existing in a movable range of the robot 101, such as devices (component carrying devices, processing devices, and the like) cooperating with the robot 101, and structures such as walls or posts.
The robot 101 shown in fig. 1 as a device having a movable unit is a six-axis multi-joint robot, but the robot 101 may be another type of robot or device. For example, the robot 101 may be a device including a movable unit capable of performing an operation of expanding and contracting, bending and expanding, vertical movement, horizontal movement, or rotation, or a combination thereof.
The vision sensor 102 is an imaging device mounted at a predetermined location suitable for imaging the surroundings of the robot 101, such as a distal arm portion or a hand of the robot 101. In order to enable the captured image (measurement result) to be correlated with the robot coordinate system, it is desirable that the vision sensor 102 be firmly fixed to the movable unit of the robot 101, but the vision sensor 102 may also be temporarily fixed in a detachable manner as long as positioning accuracy is ensured. The robot coordinate system is a three-dimensional coordinate system (X, Y, Z) whose origin is set at a non-moving part (e.g., the base) of the installed robot (see fig. 6A).
The vision sensor 102 as a measuring device may be any device capable of acquiring image data (or three-dimensional measurement data) suitable for creating a three-dimensional model; for example, a stereo camera having an illumination light source is suitably used. The device capable of acquiring, by measurement, three-dimensional point cloud data based on the robot coordinate system and suitable for creating a three-dimensional model is not limited to a stereo camera; for example, an object may be imaged from a plurality of positions with convergence (parallax) using a monocular camera to acquire three-dimensional measurement data. Further, instead of an imaging sensor, a light detection and ranging (LiDAR) scanner capable of measuring the shape of an object by using a laser beam may be used as the measuring device, for example. In the following description, imaging using the vision sensor 102 may be referred to as measurement.
The robot control device a has a function of generating operation control information for operating each joint of the robot 101 and controlling the operation of the robot 101 according to the commands related to the position and posture of the robot 101 transmitted from the model creation device C.
The vision-sensor control device B generates a control signal for controlling the vision sensor 102 based on the measurement command transmitted from the model creation device C, and transmits the control signal to the vision sensor 102. Meanwhile, the vision sensor control apparatus B has a function of transmitting the measurement data output from the vision sensor 102 to the model creation apparatus C.
The model creation device C has a function of transmitting a command to the robot control device A to move the robot 101 to a predetermined position and posture (measurement point) for measuring the model creation target 103, and transmitting a command to the vision sensor control device B to cause the vision sensor 102 to measure the model creation target 103 and to acquire the measurement data. In addition, the model creation device C has a function of creating a three-dimensional model of the model creation target 103 from the acquired measurement data and storing the generated three-dimensional model in the simulation device D together with position information based on the robot coordinate system. The three-dimensional model generation process will be described in detail below.
The simulation device D as a simulation unit constructs a virtual model of the robot 101 and the surrounding environment of the robot 101 on a virtual space by using the three-dimensional model and the positional information acquired from the model creation device C. Then, the simulation device D has a function of performing offline simulation on the robot 101. The simulation device D has a function of causing the display device E to appropriately display the three-dimensional model created by the model creation device C, the virtual model of the robot 101 and the surrounding environment of the robot 101, information on the off-line simulation, and the like. The simulation device D may also cause the display device E to display information acquired from the robot control device a, the vision sensor control device B, and the model creation device C via the communication unit.
The display device E as a display unit is a display serving as a user interface of the simulation device D. For example, a direct-view flat panel display such as a liquid crystal display device or an organic EL display device, a projection display, a goggle-type stereoscopic display, a hologram display, or the like may be used. Further, the information processing system of the present embodiment may include an input device (not shown) such as a keyboard, a jog dial, a mouse, a pointing device, or a voice input device.
In fig. 1, a robot 101, a robot control device a, a vision sensor 102, a vision sensor control device B, a model creation device C, a simulation device D, and a display device E are connected by wired communication, but the present invention is not limited thereto. For example, some or all of them may be connected by wireless communication, or may be connected via a general network such as a LAN or the internet.
Each of the robot control apparatus a, the vision sensor control apparatus B, the model creation apparatus C, and the simulation apparatus D is a computer that performs each of the functions described above. In fig. 1, these devices are shown as separate devices, but some or all of the devices may be integrated.
Each of these devices has a configuration such as that shown in fig. 3. That is, each device includes a Central Processing Unit (CPU) 201 as a processor, a storage unit 203, and input and output interfaces 204. Each device may also include a Graphics Processing Unit (GPU) 202 as desired. The storage unit 203 includes a Read Only Memory (ROM) 203a, a Random Access Memory (RAM) 203b, and a Hard Disk Drive (HDD) 203c. The CPU 201, the GPU 202, the storage unit 203, and the input and output interface 204 are connected through a bus (not shown) in such a manner as to be able to communicate with each other.
The ROM 203a included in the storage unit 203 is a non-transitory storage device, and stores a basic program read by the CPU 201 at the time of computer startup. The RAM 203b is a temporary storage device for the arithmetic processing of the CPU 201. The HDD 203c is a non-transitory storage device storing various data such as a processing program executed by the CPU 201 and the arithmetic processing result of the CPU 201. Here, the processing program executed by the CPU 201 is a processing program for each device to execute the above-described functions, and at least some of the functional blocks shown in fig. 2 may be implemented in each device by the CPU 201 executing the program. For example, in the case of the model creation apparatus C, functional blocks such as a setting unit, a modeling control unit, an image processing unit, a filtering processing unit, a mesh processing unit, and a model creation unit may be realized by the CPU 201 executing a processing program. However, functional blocks that perform typical processing related to image processing may be implemented by the GPU 202 instead of the CPU 201 in order to accelerate the processing.
Other devices and networks may be connected to the input and output interface 204. For example, the data may be backed up in database 230, or information such as commands and data may be exchanged with other devices.
Three-dimensional model generation and simulation
A three-dimensional model generation process using the information processing system 100 will be described. The preparation of the measurement, the generation of the three-dimensional model, and the simulation using the three-dimensional model will be described in sequence.
Preparation of measurements
As shown in fig. 1, the preparation step of measuring the model creation target 103 by using the vision sensor 102 is performed in a state where the positions of the robot 101 and the model creation target 103 are fixed. Fig. 4 is a flowchart for describing a procedure of preparation of measurement.
As soon as the measurement preparation step starts, in step S11, the operator registers in the model creation apparatus C the measurement position and measurement posture to be taken by the robot 101 when the vision sensor 102 images the model creation target 103. In the following description, the measurement position and the measurement pose may be collectively referred to as a measurement point.
The first method for registering measurement points is a method in which an operator operates the robot 101 online and registers a plurality (for example, N) of measurement points around the model creation target 103 as setting information. At this time, the image captured by the vision sensor 102 may be displayed on the display device E, and the operator may set the measurement point while confirming the image.
As shown in fig. 5A, a measurement range IMA (imaging range) in which the vision sensor 102 (for example, a stereo camera) can accurately measure (image) is limited to a certain narrow range in consideration of depth of field and image distortion. Therefore, as shown in fig. 5B, in order to generate an accurate three-dimensional model, it is necessary to set measurement points in such a manner that the measurement range IMA covers the outer surface of the model creation target 103 without gaps. Thus, in the first method, a skilled operator is required to perform work, and the workload and the required time tend to increase.
Thus, in the second method for registering measurement points, as shown in fig. 6A, the operator first sets a measurement target area 301 (imaging target area) as setting information in such a manner as to include the model creation target 103. Then, as schematically shown in fig. 6B, the model creation device C divides the measurement target area 301 (imaging target area) into squares in such a manner that the measurement range IMA that the vision sensor 102 can accurately image covers the measurement target area 301 without gaps. The position and posture to be taken by the robot 101 for imaging each measurement range IMA are then automatically set and registered as measurement points. The operator may set in advance a movement prohibition area 302, in which movement of the robot 101 is prohibited, together with the measurement target area 301 (imaging target area). In this case, the model creation apparatus C does not set any measurement point that would require the robot 101 to move into the movement prohibition area 302. The model creation device C may be configured in such a manner that the operator can set the measurement target area 301 and/or the movement prohibition area 302 while the image captured by the vision sensor 102 is displayed on the display device E.
As shown in fig. 7, in order to appropriately detect external features such as edges and recesses according to the shape of the model creation target, not only measurement points A whose measurement direction PD (imaging direction) is along the Z direction, but also measurement points B whose measurement direction PD (imaging direction) is rotated about the X axis or the Y axis are set. In this case, the setting conditions for the measurement points A (for example, the width or number of divided areas when the measurement target area 301 is divided in each of the X, Y, and Z directions) and the setting conditions for the measurement points B (for example, the rotation angles about the X axis and the Y axis, or the number of rotation angles used for measurement) may be set in advance by the operator, and the model creation apparatus C may automatically generate the measurement points A and/or the measurement points B based on these settings, as in the sketch below.
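For illustration only, the following Python sketch shows the grid-style generation of measurement points A: the measurement target area is tiled with the measurement range IMA and one downward-looking measurement point is placed per tile. The function name, the standoff distance, and the numeric values are assumptions introduced here, not values from the embodiment.

```python
# Minimal sketch (assumed parameters): tile the measurement target area with the
# accurately measurable range IMA and emit one measurement point per tile.
import numpy as np

def generate_measurement_points(area_min, area_max, ima, standoff):
    """area_min/area_max: (x, y, z) corners of the measurement target area in the
    robot coordinate system; ima: side length of the measurement range;
    standoff: assumed camera height above the area (e.g. the focal length h)."""
    xs = np.arange(area_min[0] + ima / 2.0, area_max[0], ima)
    ys = np.arange(area_min[1] + ima / 2.0, area_max[1], ima)
    points = []
    for x in xs:
        for y in ys:
            # Measurement point A: camera above the area, looking straight down (-Z).
            points.append({"position": (x, y, area_max[2] + standoff),
                           "rotation_xyz_deg": (180.0, 0.0, 0.0)})
    return points

# Example: a 0.6 m x 0.4 m area covered by a 0.1 m measurement range.
pts = generate_measurement_points((0.0, 0.0, 0.0), (0.6, 0.4, 0.3), ima=0.1, standoff=0.25)
print(len(pts), "measurement points")
```

Measurement points B could be produced by applying the operator-specified rotation angles to each of these lattice positions.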
In either the first method or the second method, a plurality of (N) measurement points set based on the setting information input by the operator are registered in the setting unit of the model creation apparatus C. The model creation device C may be configured to display the set measurement points on the display device E so that the operator can confirm or edit them.
Upon completion of the registration of the measurement points in step S11, the process proceeds to step S12, and the operator sets the number of times measurement (imaging) is performed at each measurement point. If measurement data (imaging data) sufficient for generating a three-dimensional model can be reliably acquired in a single measurement (imaging), performing one measurement (imaging) at each measurement point is enough. However, the captured image may vary depending on the material, shape, and surface state of the model creation target, the state of external light reaching the model creation target, and the like. For example, in the case where the model creation target is formed of a glossy material such as metal, or has concave-convex portions or texture, the luminance distribution, the contrast, the appearance of the concave-convex portions or texture, and the like change depending on the state of external light; therefore, measurement data (imaging data) suitable for generating a three-dimensional model may not be acquired by a single imaging (measurement). In particular, in the case of using a stereo camera as the vision sensor 102, since the two viewpoints form a convergence (parallax) angle, the measurement data (imaging data) tends to be easily affected by the state of external light or the like.
Therefore, in the present embodiment, the operator can set the number of times M of measurement in such a manner that measurement (imaging) is performed a plurality of times at each measurement point in consideration of the appearance characteristics of the model creation target and the state of external light, so that point cloud data to be described below can be acquired without omission. In the case of using the vision sensor 102 having an illumination light source, the operation condition (e.g., illumination intensity or illumination direction) of the illumination light source may be set to change at each imaging. The result set in step S12 is registered in the setting unit of the model creation apparatus C. The model creation device C may be configured to display an operation screen, the number of times of settings, and the like when performing these settings on the display device E, so that the operator can confirm or edit the operation screen, the number of times of settings, and the like. As soon as step S12 is completed, the preparation step of measuring (imaging) the model creation target 103 ends.
Measurement of
After the measurement preparation step is ended, a measurement step (imaging step) of creating the target 103 by measuring (imaging) the model using the vision sensor 102 is performed. Fig. 8 is a flowchart for describing a measurement (imaging) process.
As soon as measurement (imaging) starts, in step S21, the model creation device C reads one of the plurality of measurement points registered in the setting unit, and transmits a command to the robot control device a via the communication unit in such a manner that the robot 101 is moved to the measurement point. The robot control device a interprets the received command and moves the robot 101 to the measurement point.
Next, in step S22, the model creation device C transmits a command to the vision-sensor control device B via the communication unit in such a manner that the vision sensor 102 performs measurement (imaging). The vision sensor control device B interprets the received command and causes the vision sensor 102 to perform measurement (imaging).
Next, in step S23, the model creation device C requests the robot control device a to transmit the position of the vision sensor 102 at the time of measurement (imaging) as position information based on a robot coordinate system in which the robot 101 is the origin. The modeling control unit of the model creation device C stores the position information received via the communication unit in the storage unit.
Next, in step S24, the model creation device C requests the vision-sensor control device B to transmit a measurement result (imaging result) obtained by the vision sensor 102. The modeling control unit of the model creation device C stores the measurement result (imaging result) received via the communication unit in the storage unit in association with the positional information acquired from the robot control device a.
Next, in step S25, the image processing unit of the model creation apparatus C acquires three-dimensional point cloud data expressed based on the robot coordinate system by using a measurement result (imaging result) associated with the positional information of the vision sensor 102 expressed based on the robot coordinate system. The three-dimensional point cloud data is point cloud data related to the appearance of the model creation target 103 measured at the measurement points, and each piece of point data included in the three-dimensional point cloud data has position information (spatial coordinates) expressed based on the robot coordinate system. The three-dimensional point cloud data acquired by the image processing unit is stored in the storage unit of the model creation device C.
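As an illustration of how a measurement result can be expressed in the robot coordinate system, the following sketch transforms points measured in the sensor frame using the sensor pose reported at the measurement point. The 4x4 transform T_base_sensor and the function name are assumptions introduced for this sketch, not part of the embodiment.

```python
# Illustrative sketch: convert points measured in the sensor frame into
# three-dimensional point cloud data expressed in the robot (base) coordinate
# system, using the sensor pose at the measurement point.
import numpy as np

def to_robot_frame(points_sensor: np.ndarray, T_base_sensor: np.ndarray) -> np.ndarray:
    """points_sensor: (N, 3) XYZ points in the sensor frame.
    T_base_sensor: 4x4 homogeneous pose of the sensor in the robot base frame."""
    ones = np.ones((points_sensor.shape[0], 1))
    homogeneous = np.hstack([points_sensor, ones])   # (N, 4)
    return (T_base_sensor @ homogeneous.T).T[:, :3]  # back to (N, 3)
```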
Next, in step S26, the model creation device C determines whether or not measurement (imaging) is completed at the measurement point based on the number M of measurements set in step S12. In the case where the measurement (imaging) of the set number of times is not completed (step S26: no), the process returns to step S22, and the process of step S22 and subsequent processes are performed again at the measurement point. In the case where the measurement (imaging) of the set number of times is completed (step S26: yes), the process proceeds to step S27.
In step S27, the image processing unit of the model creation apparatus C reads M pieces of point cloud data acquired at the measurement points from the storage unit, and synthesizes (superimposes) the M pieces of point cloud data. For example, in the case where the measurement point is measurement point 1, as shown in fig. 9, M pieces of point cloud data of PG11 to PG1M are superimposed to generate synthesized point cloud data SG1 including all pieces of point cloud data acquired at the measurement point 1. The composition (superposition) of the M pieces of point cloud data may be performed using known image composition software.
Next, in step S28, the filter processing unit of the model creation apparatus C performs filter processing on the synthesized point cloud data generated in step S27 to remove noise, and generates partial point cloud data for model creation. That is, for example, in the case where the measurement point is the measurement point 1, as shown in fig. 9, noise is removed by performing a filter process on the synthesized point cloud data SG1, and partial point cloud data FG1 for model creation is generated. The filtering process may be performed using, for example, open3D, which is known as Open source software, and may be performed by other methods.
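A minimal Open3D sketch of the synthesis and filtering of steps S27 and S28 is shown below. Statistical outlier removal is used here as one possible noise filter; the nb_neighbors and std_ratio values are illustrative assumptions, not parameters stated in the embodiment.

```python
# Superimpose the M point clouds captured at one measurement point, then remove
# noise. Inputs are (N_i, 3) arrays already expressed in the robot coordinate system.
import numpy as np
import open3d as o3d

def synthesize_and_filter(point_clouds_xyz):
    merged = o3d.geometry.PointCloud()
    merged.points = o3d.utility.Vector3dVector(np.vstack(point_clouds_xyz))  # superposition
    filtered, _kept_indices = merged.remove_statistical_outlier(nb_neighbors=20,
                                                                std_ratio=2.0)
    return filtered  # partial point cloud data for model creation (e.g. FG1)
```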
Next, in step S29, the image processing unit of the model creation device C stores the partial point cloud data for model creation generated in step S28 in the storage unit.
Next, in step S30, the modeling control unit of the model creation device C determines whether or not the storage of the partial point cloud data for model creation is completed for all N measurement points. In the case where the number of measurement points stored is less than N (step S30: NO), the process proceeds to step S31, and the model creation apparatus C newly reads another measurement point from the N measurement points registered in the setting unit, and transmits a command to the robot control apparatus A via the communication unit in such a manner that the robot 101 is moved to the measurement point. Then, the process of step S22 and subsequent processes are performed again.
In the case where it is determined in step S30 that the storage of the partial point cloud data for model creation is completed for all N measurement points (step S30: yes), the measurement step (imaging step) ends.
The number of targets whose interference with the robot 101 is to be checked, in other words, the number of model creation targets existing within the movable range of the robot 101 is not limited to one as shown in fig. 1. In the case where there are a plurality of model creation targets, the measurement processing for all the model creation targets may be performed at once according to the processing procedure shown in fig. 8, or the measurement processing may be performed separately for each model creation target.
Three-dimensional model generation and simulation
A procedure for generating a three-dimensional model of the model creation target 103 by using part of the point cloud data as a measurement result (imaging result) and performing an offline simulation will be described. Fig. 10 is a flowchart for describing the three-dimensional model generation process. FIG. 11 is a schematic diagram for describing the transition of data in each step of model generation.
At the beginning of model generation, in step S41, the model creation unit of the model creation device C reads pieces of partial point cloud data FG1 to FGN for model creation stored in the storage unit. In fig. 11, the read pieces of partial point cloud data FG1 to FGN are schematically shown on the left side by being surrounded by dotted lines.
Next, in step S42, the model creation unit of the model creation apparatus C superimposes and synthesizes the read pieces of partial point cloud data based on the robot coordinate system. That is, the entire point cloud data WPG related to the entire appearance of the model creation target 103 is synthesized using the pieces of partial point cloud data FG1 to FGN acquired at each measurement point. The entire point cloud data WPG can be synthesized from a plurality of pieces of partial point cloud data by using known image synthesis software.
Next, in step S43, the filter processing unit of the model creation device C performs filter processing on the entire point cloud data WPG generated in step S42 to remove noise, and generates point cloud data FWPG for model creation, as shown in fig. 11. The filtering process may be performed using, for example, open3D, which is known as Open source software, and may be performed by other methods.
Next, in step S44, the mesh processing unit of the model creation device C performs mesh processing on the point cloud data FWPG to acquire mesh information MSH, i.e., polygon information that is an aggregate of triangular polygons. The grid processing may be performed using, for example, meshLab, which is known as open source software, and may be performed by other methods. The model creation device C may be configured to display the generated mesh information MSH on the display device E based on the robot coordinate system so that the operator can confirm the mesh information MSH.
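The embodiment names MeshLab for this meshing step; purely as an illustrative alternative showing the same idea (point cloud to triangle mesh), the following sketch uses Open3D's Poisson surface reconstruction, with the depth parameter chosen arbitrarily.

```python
# Illustrative meshing sketch: triangulate the filtered point cloud FWPG into
# mesh information (an aggregate of triangular polygons).
import open3d as o3d

def mesh_from_point_cloud(pcd: o3d.geometry.PointCloud) -> o3d.geometry.TriangleMesh:
    pcd.estimate_normals()  # Poisson reconstruction requires per-point normals
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
    return mesh
```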
Next, in step S45, the model creating unit of the model creating apparatus C creates a contour line such as an edge appearing in the appearance of the model creating target 103 by using the mesh information MSH, and creates a surface model. In the case where a solid model including not only the surface (outer surface) of the model creation target 103 but also the volume (inner) is required, the solid model may be generated based on the surface model. The three-dimensional MODEL generated based on the robot coordinate system is stored in the storage unit of the MODEL creation device C. The creation of the three-dimensional model using the mesh information MSH may be performed using, for example, quacksface, which is 3D modeling software manufactured by System Create co., ltd. The MODEL creation device C may be configured to display the generated three-dimensional MODEL on the display device E based on the robot coordinate system, so that the operator can confirm the appropriateness/inappropriateness of the three-dimensional MODEL.
Next, in step S46, the MODEL creation device C transmits the generated data of the three-dimensional MODEL to the simulation device D via the communication unit. The simulation device D stores the received data of the three-dimensional MODEL in the storage unit. Further, the simulation device D may format data of the three-dimensional MODEL and store the data as a backup file F in an external database via an external input and output unit.
Next, in step S47, the virtual environment control unit of the simulation apparatus D uses the data of the three-dimensional MODEL to generate a virtual environment model in which the targets are arranged based on the robot coordinate system. Then, for example, as shown in fig. 12, the state in which the virtual model 101M of the robot 101 and the virtual model 103M of the surrounding environment are arranged in the virtual space in the correct positional relationship can be displayed to the operator by using the display device E.
Next, in step S48, the simulation device D automatically sets and registers the virtual model 103M of the surrounding environment as a target for which interference with the robot 101 is to be checked. The simulation device D may be configured in such a manner that the operator can select and register a target for which interference with the robot 101 is to be checked, with reference to the virtual environment model displayed on the display device E. In this way, the construction of the virtual model of the surrounding environment of the robot 101 is completed, and preparations are made for performing simulation such as interference checking offline.
The operator can perform offline simulation by using the simulation device D and operate the virtual model 101M of the robot 101 in the virtual space to check the execution of work and the presence or absence of interference with the surrounding environment. For example, a production line in which a robot is installed is virtually modeled by the above-described process, and a work operation to be performed by the robot (for example, assembly of components, setting of components in a processing apparatus, movement of components, etc.) is performed by a virtual model of the robot in a virtual space, so that the presence or absence of interference with the surrounding environment and execution of the work can be checked. Control data related to the working operation of the robot checked as described above is transmitted from the simulation device D to the robot control device a via the communication unit, and may be stored as training data in the robot control device a. The robot control apparatus a may cause the robot 101 to perform a working operation trained in this way (e.g., assembly of components, setting of components in a processing apparatus, movement of components, etc.), and cause the robot 101 to manufacture an article. The article manufacturing method performed in such a process may also be included in the present embodiment.
In the present embodiment, a measurement device (imaging device) fixed to a movable unit of a robot is used to measure (image) a model creation target while operating the robot, and point cloud data of the model creation target based on a robot coordinate system is acquired. Then, since a 3D model of the object is generated based on the point cloud data, modeling including not only three-dimensional shape information of the object but also positional information with respect to the robot can be performed. Therefore, after creating the three-dimensional shape model of the object, the operator does not need to perform positioning of the virtual object model with respect to the virtual robot model in the virtual space, and the virtual model of the working environment of the robot can be efficiently constructed.
When the information processing system of the present embodiment is used, for example, when a new production line is formed, after a robot is installed at a position in the production line where a predetermined operation is performed, the surrounding environment is measured using the robot, and a virtual model of the surrounding environment can be easily constructed in a simulation apparatus. Alternatively, in the existing production line in which the robot is installed, in the case where the type or position of the equipment installed around the robot is changed in order to change the work content, the equipment is measured using the robot. Then, a virtual model of the changed surrounding environment can be easily constructed in the simulation device. According to the present embodiment, since a simulation model of the surrounding environment of the robot can be easily created, an offline simulation work of the robot using the simulation apparatus can be started in a short time.
Second embodiment
The second embodiment describes in detail the method of automatically generating measurement points described in the first embodiment. The description of matters common to the first embodiment will be simplified or omitted. Fig. 13 is a diagram for describing a measurement setting screen 400 according to the second embodiment, and fig. 14 is a flowchart for automatic generation of measurement points according to the second embodiment.
As shown in fig. 13, a sensor information setting section 401, a reference point setting section 402, a measurement area setting section 404, a movement prohibition area setting section 405, and a calculation button 408 are displayed on the measurement setting screen 400. Although not shown in fig. 13, it is assumed that a virtual space described below is also displayed on a separate screen.
The sensor information setting section 401 displays numerical setting fields for the field-of-view ranges θx and θy, the focal length h, the focusing distance ±Fh, and the measurement range IMA of the sensor used to measure the surrounding area of the robot. The field-of-view range θx may be set in the numerical setting column 401a, and the field-of-view range θy in the numerical setting column 401b. The focal length h may be set in the numerical setting column 401c. The focusing distance -Fh may be set in the numerical setting column 401d, and the focusing distance +Fh in the numerical setting column 401e. The measurement range IMA may be set in the numerical setting column 401f. Further, in the sensor display section 401g, the sensor is schematically displayed together with the setting information corresponding to the numerical values set in the numerical setting columns. As a result, the user can easily set the measurement conditions of the sensor according to the surrounding area of the robot.
The reference point setting section 402 displays numerical value setting fields 402a, 402b, and 402c in which values of X, Y and Z can be input, and is provided with a position acquisition button 403.
The measurement region setting section 404 displays, as numerical setting columns for the measurement range settings of the sensor, the minimum value Min and maximum value Max of the range settings X, Y, and Z and of the angle settings Rx, Ry, and Rz. The minimum and maximum values of X may be set in the numerical setting columns 404a and 404b, those of Y in 404c and 404d, and those of Z in 404e and 404f. The minimum and maximum values of Rx may be set in the numerical setting columns 404g and 404h, those of Ry in 404i and 404j, and those of Rz in 404k and 404l. In addition, for the angle settings Rx, Ry, and Rz, the division angles of the measurement range may be set: the division angle of Rx in the numerical setting column 404m, that of Ry in 404n, and that of Rz in 404o.
The movement prohibited area setting section 405 displays a list 409 of registered movement prohibited areas and is provided with an add button 406 and a delete button 407 for movement prohibited areas selected in the virtual space. The areas displayed in the list 409 are areas into which the sensor is prohibited from entering when the measurement is performed. In the second embodiment, measurement points covering the necessary measurement area can be generated automatically by performing the operation procedure described below.
As shown in fig. 14, in step S50, sensor information is set using the measurement setting screen 400 described with reference to fig. 13. The necessary information is input to the sensor information setting section 401. The minimum required sensor information includes the field-of-view ranges θx and θy of the sensor and the focal length h. Further, in order to perform measurement with high accuracy, the focusing distance ±Fh and the measurement range IMA are required.
Next, in step S51, a reference point is set. The values of X, Y and Z of the reference points for measurement are directly input to the reference point setting section 402, or a selected position in the virtual space for constructing the virtual model is set by pressing the acquire button 403.
Next, in step S52, a measurement area is set. The measurement region setting section 404 inputs the minimum value Min and the maximum value Max of the regions X, Y and Z from the reference point. Fig. 15 is a diagram illustrating a measurement region displayed in a virtual space according to the second embodiment. By setting the measurement region, a measurement target region 301 corresponding to the input value is set around the virtual model 101M of the robot 101. In real space, it is assumed that a target peripheral object exists in the measurement target region 301.
Next, in step S53, a movement prohibition area is set in which movement of the sensor is prohibited when measurement is performed. Each of the movement prohibited areas 302 (area_1) and 303 (area_2) displayed in the virtual space is selected, and the movement prohibited Area is registered by pressing the add button 406 of the movement prohibited Area setting section 405. Of course, the user can set and register the movement prohibited area by directly inputting the position in the virtual space. In the case of deleting a registered movement prohibited area from the list, a movement prohibited area to be deleted is selected from the list and the delete button 407 is pressed to delete the movement prohibited area. By performing the above-described sensor information setting, reference point setting, measurement area setting, and movement prohibition area setting, preparation for automatic generation of measurement points is made.
Next, in step S54, calculation of measurement points is performed. Upon pressing the calculation button 408, a calculation process is performed. Fig. 16A and 16B are diagrams for describing automatic generation of measurement points according to the second embodiment. Fig. 16A is a schematic diagram for describing a division width calculated from sensor information. Fig. 16B is a schematic diagram for describing the division of the measurement region.
As shown in fig. 16A, the division widths Dx and Dy in X and Y are calculated by obtaining the field-of-view widths in X and Y from the field-of-view ranges θx and θy in the sensor information and twice the focal length h, and then multiplying them by the measurement range IMA, which is the range within which measurement can be performed with high accuracy. The focusing distance +Fh is used as the division width Dz in Z.
As shown in fig. 16B, the number of divisions is obtained by dividing the extent of the measurement area in X, Y, and Z (from the minimum value Min to the maximum value Max) by the division widths Dx, Dy, and Dz acquired from the sensor information, and the positions of the measurement points are generated in a lattice, one per division width, starting from the reference point (P). Next, the number of angular divisions is obtained from the measurement angles Rx, Ry, and Rz and the division angles Drx, Dry, and Drz, and each divided angle is assigned to each lattice position, so that the measurement points are created automatically.
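The following sketch spells out this division-width calculation under one reading of the text (full-angle fields of view, so Dx = 2·h·tan(θx/2)·IMA); the interpretation of the formula, the function, and the example values are assumptions.

```python
# Illustrative division-width calculation from the sensor information.
import math

def division_widths(theta_x_deg, theta_y_deg, h, ima, fh_plus):
    """theta_*: field-of-view angles; h: focal length; ima: accurately measurable
    fraction of the field of view; fh_plus: focusing distance +Fh."""
    dx = 2.0 * h * math.tan(math.radians(theta_x_deg) / 2.0) * ima
    dy = 2.0 * h * math.tan(math.radians(theta_y_deg) / 2.0) * ima
    dz = fh_plus  # +Fh is used directly as the Z division width
    return dx, dy, dz

print(division_widths(60.0, 45.0, h=0.3, ima=0.8, fh_plus=0.05))
```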
Fig. 17A and 17B are schematic views of the pose of the robot 101 and the vision sensor 102 at the measuring point according to the second embodiment. Fig. 17A illustrates a state of a measurement point corresponding to an angle of 0 degrees, and fig. 17B illustrates a state of a measurement point corresponding to an inclination angle of Drx in the X-axis direction.
Next, in step S55, inverse kinematics calculation is performed for each measurement point to exclude points to which the robot 101 cannot move. Since the robot 101 has an operation range and cannot move to a point outside the operation range, the point is excluded. It is assumed that a known technique is used for inverse kinematics calculation, and a detailed description thereof is omitted.
Next, in step S56, points at which the robot interferes with a movement-prohibited area or the surrounding environment are excluded from the measurement points. Fig. 18A and 18B are schematic diagrams illustrating the interference check for a measurement point according to the second embodiment. Fig. 18A illustrates a state in which the movement prohibition areas 302 and 303 and the robot 101 interfere with each other, so the measurement point is excluded. Fig. 18B illustrates the robot at the same measurement point with a different angle; since there is no interference with the movement prohibition areas, the measurement point is not excluded. In this way, the interference check is performed for all measurement points, and measurement points to which the robot cannot move are excluded in advance. The vision sensor 102 is arranged to take at least two different poses at each measurement point. In this way, even if the robot interferes with a movement-prohibited area in one posture at a certain measurement point, it may avoid the interference by changing its posture and can still perform measurement at that measurement point. Thus, a certain number of measurement points can be ensured while interference with the movement-prohibited areas is avoided.
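A minimal sketch of the reachability and interference filtering of steps S55 and S56 follows. Here solve_ik and in_collision are hypothetical stand-ins for the robot's inverse-kinematics solver and the collision check against the movement-prohibited areas; they are not APIs defined by the embodiment.

```python
# Keep only measurement points that the robot can reach in at least one
# candidate sensor pose without interfering with a movement-prohibited area.
def filter_measurement_points(points, candidate_poses, solve_ik, in_collision):
    reachable = []
    for point in points:
        for pose in candidate_poses:          # e.g. angle 0 deg, tilt Drx, ...
            joints = solve_ik(point, pose)
            if joints is None:                # outside the robot's operation range
                continue
            if in_collision(joints):          # interferes with a prohibited area
                continue
            reachable.append((point, pose))   # keep the first feasible pose
            break
    return reachable
```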
Next, in step S57, the result of the measurement point is displayed. The result is displayed as a list or a model in a virtual space, and in the case where the displayed measurement point is selected, the pose of the robot at that point can be confirmed in the virtual space.
As described above, according to the present embodiment, a measurement point can be automatically generated, and measurement can be performed within a movable measurement range. Accordingly, the burden on the user caused by the measurement of the surrounding environment of the robot by the robot and the sensor can be reduced.
Third embodiment
In the second embodiment, the measurement points are set automatically, but measurement points located inside the model may also be included, and such points are useless. Therefore, in the third embodiment, a mode will be described in which, during measurement of the surrounding environment, it is determined whether a measurement point lies inside the model, and measurement points inside the model are excluded. Descriptions of matters common to the first and second embodiments will be simplified or omitted. Fig. 19 is a flowchart of a measurement method according to the third embodiment. Fig. 20 is a schematic view of a layer structure of measurement points according to the third embodiment. Fig. 21A to 21E are schematic diagrams for describing exclusion targets among the measurement points according to the third embodiment.
First, fig. 20 illustrates a state in which points on the same XY plane in the measurement area are grouped, the groups are layered in descending order of their Z-axis values, and layer numbers are assigned to the measurement points of the respective layers. The layer width is the division width in the Z axis. In the present embodiment, layers are set for the measurement points, and the measurement flow shown in fig. 19 is performed. In the third embodiment, measurement starts at the measurement points of the first layer and proceeds to the lower layers in turn.
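The layer assignment can be sketched as follows; the point representation and all names are assumptions made for illustration, not part of the embodiment:

```python
from collections import defaultdict

def assign_layers(points, key=lambda p: p[0][2]):
    """Group measurement points by Z and number layers top-down (cf. fig. 20).

    Each point is assumed to be ((x, y, z), (rx, ry, rz)); points sharing a
    Z value lie on the same XY plane and therefore in the same layer.
    """
    groups = defaultdict(list)
    for p in points:
        groups[round(key(p), 6)].append(p)   # group by (rounded) Z value
    # Layer 1 is the highest Z; numbering descends with Z.
    return {n: groups[z]
            for n, z in enumerate(sorted(groups, reverse=True), start=1)}
```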
As shown in fig. 19, once measurement starts, steps S21 to S29 are performed as in the first embodiment. Since steps S21 to S29 are similar to those in the first embodiment, a description thereof is omitted.
Next, in step S60, it is checked whether imaging of one layer is completed. In the case where it is not completed (no), the process proceeds to step S31, where the same processing as in step S31 of the first embodiment is performed and measurement continues for the same layer. In the case where it is completed (yes), the process proceeds to step S61. Fig. 21A illustrates the measurement points and the measurement state of the nth layer. As soon as measurement of the measurement points of the layer ends, the process proceeds to step S61.
Next, in step S61, it is checked whether measurement for all layers has ended. In the case where it has ended (yes), the process proceeds to measurement completion and the flow ends. In the case where it has not ended (no), the process proceeds to step S62, and processing for automatically excluding measurement points in the lower layers is performed.
Next, in step S62, the point cloud data within the measurement range and the focusing distance range are acquired from all pieces of three-dimensional data and synthesized. Fig. 21B is a schematic diagram of the point cloud data acquisition processing: a bounding box VBox is created whose XY extent covers all imaging ranges in the nth layer and whose Z extent is the focusing distance ±fh, and the point cloud data existing inside it are acquired. As a result, only the point cloud data in the range that can be measured accurately are acquired.
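A sketch of the VBox clipping is given below; the XY extent and the layer height z_layer are assumptions of this illustration, and the names are hypothetical:

```python
import numpy as np

def clip_to_vbox(cloud, xy_min, xy_max, z_layer, fh):
    """Keep only points inside the bounding box VBox (cf. step S62, fig. 21B).

    cloud is an (N, 3) array; XY is assumed to cover the union of the imaging
    ranges of the current layer, and Z spans the focusing distance +/- fh
    around the layer height z_layer.
    """
    low = np.array([xy_min[0], xy_min[1], z_layer - fh])
    high = np.array([xy_max[0], xy_max[1], z_layer + fh])
    mask = np.all((cloud >= low) & (cloud <= high), axis=1)
    return cloud[mask]
```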
Next, in step S63, mesh processing is performed on the synthesized three-dimensional point cloud data. Fig. 21C is a diagram of the meshed model; surface information of the measurement object existing in the bounding box VBox can be acquired from the point cloud data.
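The embodiment does not specify a particular meshing algorithm. One possible realization, given purely as an assumption of this sketch rather than the disclosed method, is Poisson surface reconstruction as provided by the Open3D library:

```python
import numpy as np
import open3d as o3d

def mesh_from_points(points_xyz):
    """Build a triangle mesh from the synthesized point cloud (cf. step S63)."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(points_xyz, dtype=float))
    pcd.estimate_normals()  # normals are required for Poisson reconstruction
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8)
    return mesh
```

Any meshing method that yields surface triangles of the measurement object would serve the same purpose in the following steps.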
Next, in step S64, measurement points at which a straight-line model of the sensor focal length h intersects the meshed model are excluded. Fig. 21D illustrates a state in which the straight-line model of the sensor focal length for a measurement point in the (n+1)th layer intersects (interferes) with the meshed model. Since a measurement point in this state lies inside the measurement target, it can be excluded from the measurement points. Fig. 21E is a schematic diagram of the excluded measurement points.
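The intersection test itself can be realized, for example, with the standard Möller–Trumbore ray–triangle algorithm restricted to a segment of length h from the measurement point along its measurement direction; the following sketch is illustrative and all names are hypothetical:

```python
import numpy as np

def segment_hits_triangle(orig, direction, length, tri, eps=1e-9):
    """Möller-Trumbore test restricted to a segment of the given length."""
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in tri)
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return False                      # segment parallel to the triangle
    inv = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    t = np.dot(e2, q) * inv
    return 0.0 <= t <= length             # hit lies within the segment

def exclude_interior_points(points, directions, h, triangles):
    """Drop points whose sensor line of length h crosses the mesh (cf. step S64)."""
    kept = []
    for p, d in zip(points, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        if not any(segment_hits_triangle(np.asarray(p, dtype=float), d, h, tri)
                   for tri in triangles):
            kept.append(p)
    return kept
```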
Next, in step S65, interference between the meshed model and the robot is checked, and interfering measurement points are excluded. As a result, the robot can be prevented from coming into contact with the measurement target. As soon as the exclusion processing ends, the process returns to step S31.
By performing the exclusion processing described above, unnecessary measurement points and measurement points that may cause interference can be eliminated, which shortens the measurement time and reduces the risk of damage to the robot or peripheral objects.
Fourth embodiment
In the second embodiment, the movement-prohibited areas 302 and 303 are set by the movement-prohibited area setting section 405. However, for example, in a case where a model of a peripheral device or a model of a wall or ceiling of the facility has already been set in the virtual space, that model may be set as a movement-prohibited area by the movement-prohibited area setting section 405. Descriptions of matters common to the first to third embodiments will be simplified or omitted.
Fig. 22 is an explanatory diagram of the surrounding model according to the fourth embodiment, and illustrates a state in which a wall model 501M (Wall) and a ceiling model 502M (Ceiling) are displayed on the virtual space screen 410. By selecting the wall model 501M or the ceiling model 502M and pressing the add button 406 of the movement-prohibited area setting section 405 on the measurement setting screen 400, an existing model can be selected and set as a movement-prohibited area, and step S56 of the second embodiment can then be performed.
With the above arrangement, interference with peripheral devices, walls, ceilings, and the like at the measurement points can be prevented. Furthermore, ceilings, walls, and the like rarely move in a production line, so once they are set as models in advance, they rarely need to be updated. Therefore, being able to set the movement-prohibited area from a model as in the present embodiment is useful because the user can set the movement-prohibited area easily.
Fifth embodiment
The form of implementing the information processing system of the present invention is not limited to the examples of the embodiment described with reference to fig. 1 and 2. Fig. 23 is a schematic diagram illustrating a schematic configuration of an information processing system according to a fifth embodiment, and fig. 25 is a functional block diagram for describing the configuration of the information processing system according to the fifth embodiment. Descriptions of matters common to the first to fourth embodiments will be simplified or omitted.
In the fifth embodiment, the robot control device a, the vision sensor control device B, the model creation device C, and the simulation device D described in the first embodiment are integrated into a single information processing device H, and the tablet terminal G1 is communicably connected to the information processing device H. The connection between the information processing apparatus H and the tablet terminal G1 may be a wired connection or a wireless connection as shown in the drawing.
In the present embodiment, measurement preparation (imaging preparation), measurement (imaging), three-dimensional model generation, and simulation in a virtual space are performed in the same procedure as in the first embodiment. At this time, various settings can be input and information can be displayed using the tablet terminal G1, thus improving the work efficiency of the operator. The tablet terminal G1 may have a function as a teaching tool. During the offline simulation, the virtual space may be displayed on the display screen of the tablet terminal G1.
Sixth embodiment
The sixth embodiment is an information processing system in which a head mounted display G2 capable of stereoscopic display is connected to an information processing device H similar to the fifth embodiment. Fig. 24 is a schematic diagram illustrating a schematic configuration of an information processing system according to a sixth embodiment, and fig. 25 is a functional block diagram for describing the configuration of the information processing system according to the sixth embodiment. Descriptions of matters common to the first to fifth embodiments will be simplified or omitted.
In the sixth embodiment, the robot control device a, the visual sensor control device B, the model creation device C, and the simulation device D described in the first embodiment are integrated into a single information processing device H. Meanwhile, the head mounted display G2 is communicably connected to the information processing apparatus H. The connection between the information processing apparatus H and the head-mounted display G2 may be a wired connection or a wireless connection as shown in the figure.
In the present embodiment, measurement preparation (imaging preparation), measurement (imaging), three-dimensional model generation, and simulation in a virtual space are performed in the same procedure as in the first embodiment. For example, by using the head-mounted display G2 capable of stereoscopic display to confirm the generated three-dimensional model and to view the simulation in the virtual space, the operator can easily grasp the robot environment spatially, and work efficiency is improved. The head-mounted display G2 may be any device capable of stereoscopic display, and various types such as a helmet type or a goggle type may be used. The information processing apparatus H may display the virtual model and the simulation result to the operator in a form such as Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), or Cross Reality (XR) by using the virtual models of the robot and its surrounding environment.
Seventh embodiment
In the seventh embodiment, another embodiment in the case of acquiring a simulation model of the surrounding environment of the robot will be described. Descriptions of matters common to the first to sixth embodiments will be simplified or omitted.
Fig. 26 is a schematic diagram illustrating a schematic configuration of a robot and a peripheral object according to the seventh embodiment. In the present embodiment, the robot 101 is mounted on a mobile cart 105, which a person can move. The hand 104 is mounted on the robot 101. The box 111 is placed on a pedestal (platform) 110 in front of the mobile cart 105. In this situation, the mobile cart 105 is installed in front of the pedestal 110 and the work of picking up the workpiece in the box 111 is performed; the layout of the robot 101, the pedestal 110, and the box 111 may be changed arbitrarily. In order to pick up the workpiece such that the hand 104 does not contact the box 111, the positional relationship between the robot 101 and the box 111 must be adjusted.
Therefore, after the mobile cart 105 is moved, the surrounding environment of the robot is measured by the three-dimensional vision sensor as described in the first embodiment, and the box 111 is modeled in the robot coordinate system. Further, by setting the created model of the box 111 as a target for which interference with the robot 101 is to be checked, pickup can be performed while avoiding the box 111.
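A compact sketch of this flow is given below; every callable here is a hypothetical stand-in, since the embodiment describes only the procedure and not an API:

```python
def pick_after_cart_move(measure_surroundings, model_from_scan,
                         add_collision_target, plan_and_pick):
    """Flow after the mobile cart is repositioned (seventh embodiment, illustrative).

    1. Measure the surroundings with the three-dimensional vision sensor.
    2. Model the box in the robot coordinate system from the measurement.
    3. Register the box model as an interference-check target.
    4. Plan and execute picking while avoiding the box.
    """
    scan = measure_surroundings()
    box_model = model_from_scan(scan)
    add_collision_target(box_model)
    return plan_and_pick(avoid=[box_model])
```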
As described above, by modeling the real environment when the positional relationship between the target and the robot 101 is uncertain, the pick-up work can be performed smoothly without manually creating a CAD model and arranging the layout. In addition, the positional relationship between the robot and the peripheral object can be grasped from the acquired model based on the robot coordinate system. Therefore, even when a person places the robot 101 only roughly in front of the pedestal 110, the robot 101 can be made to work smoothly while avoiding interference between the hand 104 and the box 111. The mobile cart 105 may be an Automatic Guided Vehicle (AGV), that is, a cart (transport vehicle) capable of autonomous movement.
Modification of the embodiments
Note that the present invention is not limited to the above-described embodiments, and many modifications may be made within the technical idea of the present invention. For example, the different embodiments described above may be implemented in combination.
Control programs for executing the processes such as virtual model creation, offline simulation, and operation control of the actual device based on the control program created by the offline simulation in the above-described embodiments are also included in the embodiments of the present invention. Further, a computer-readable recording medium storing a control program is also included in the embodiments of the present invention. As the recording medium, for example, a nonvolatile memory such as a flexible disk, an optical disk, a magneto-optical disk, a magnetic tape, or a USB memory, a Solid State Drive (SSD), or the like can be used.
The information processing system and the information processing method of the present invention can be applied not only to production facilities but also to software design and program development for various machines and facilities, such as industrial robots, service robots, and processing machines operated by computer numerical control. For example, based on information in a storage device provided in a control device, a virtual model of the surrounding environment of a device including a movable unit capable of automatically performing expansion and contraction, bending and stretching, vertical movement, horizontal movement, or rotation, or a combination of these operations, may be generated. In addition, the present invention can be applied to a case where an operation simulation of such a device is performed in a virtual space.
The present invention can also be realized by a process in which a program for realizing one or more functions of the embodiments is supplied to a system or apparatus via a network or a storage medium, and one or more processors in a computer of the system or apparatus read and execute the program. The invention may also be implemented by circuitry (e.g., an Application Specific Integrated Circuit (ASIC)) that performs one or more functions.
OTHER EMBODIMENTS
Embodiments of the present invention may also be implemented by a computer of a system or apparatus that includes one or more circuits (e.g., an Application Specific Integrated Circuit (ASIC)) for performing the functions of one or more of the above embodiments, and by a method performed by the computer of the system or apparatus by, for example, reading out and executing computer-executable instructions from a storage medium to perform the functions of one or more of the above embodiments and/or controlling the one or more circuits to perform the functions of one or more of the above embodiments. The computer may include one or more processors (e.g., a Central Processing Unit (CPU) or a Micro Processing Unit (MPU)) and may include a separate computer or a network of separate processors to read out and execute the computer-executable instructions. The computer-executable instructions may be provided to the computer, for example, from a network or a storage medium. The storage medium may include, for example, one or more of a hard disk, Random Access Memory (RAM), Read Only Memory (ROM), a storage device of a distributed computing system, an optical disk (such as a Compact Disc (CD), Digital Versatile Disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
The embodiments of the present invention can also be realized by supplying software (a program) that performs the functions of the above embodiments to a system or apparatus via a network or various storage media, and having a computer (or a Central Processing Unit (CPU), Micro Processing Unit (MPU), or the like) of the system or apparatus read out and execute the program.
While the invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (27)

1. An information processing system, comprising:
an apparatus comprising a movable unit comprising a measurement unit configured to measure a shape of an object; and
a simulation unit that performs operation simulation of the device in a virtual space by using a virtual model,
wherein
The movable unit moves the measuring unit to a predetermined measuring point,
the measuring unit measures an object present in the surroundings of the device at the predetermined measuring point,
A model including position information of the target is acquired by using a measurement result and information about the predetermined measurement point, and the simulation unit sets a virtual model of the target in the virtual space by using the model.
2. The information handling system of claim 1, wherein
The information about the predetermined measurement point includes information about a position and a measurement direction of the measurement unit based on a position of the device.
3. The information processing system according to claim 1 or 2, wherein
The predetermined measurement point is registered based on setting information input by an operator.
4. The information processing system according to claim 3, wherein
The setting information is information input in advance by an operator when operating the apparatus.
5. The information processing system according to claim 3, wherein
The setting information includes information about a measurement target area including the target.
6. The information processing system according to claim 3, wherein
The setting information includes information of a movement prohibition area to which the movable unit is prohibited from moving.
7. The information handling system of claim 6, wherein
The movement prohibition area can be set by using a preset virtual model of a peripheral object existing in the surrounding environment.
8. The information processing system according to claim 3, wherein
The setting information includes information on the number of times of measurement performed by the measurement unit at the predetermined measurement point.
9. The information processing system according to claim 3, wherein
The setting information and/or the information about the predetermined measurement point are displayed on a display unit.
10. The information processing system according to claim 1 or 2, wherein
A plurality of measurements are performed at the predetermined measurement points, measurement results of the plurality of measurements are synthesized to acquire three-dimensional point cloud data including positional information of the target, and the model is acquired based on the three-dimensional point cloud data.
11. The information processing system according to claim 1 or 2, wherein
A plurality of measurement points are registered as the predetermined measurement points in such a manner that a measurement range of the measurement unit covers the target.
12. The information handling system of claim 11 wherein
Measurement results obtained at the plurality of measurement points are synthesized to obtain three-dimensional point cloud data including position information of the object.
13. The information handling system of claim 10 wherein
After synthesizing the measurement results, a filtering process is performed on the three-dimensional point cloud data.
14. The information processing system according to claim 1 or 2, wherein
The simulation unit is configured to display a virtual model of the object set in the virtual space on a display unit.
15. The information handling system of claim 1, wherein
A setting screen is displayed with which a user can set information about the measuring unit and information about a measuring area related to a measurement.
16. The information handling system of claim 15 wherein
The predetermined measurement point is automatically acquired based on information set using the setting screen.
17. The information handling system of claim 6, wherein
Setting at least two postures of the measuring unit at the predetermined measuring point, and
in the case where the measurement unit interferes with the movement-prohibited area due to the measurement unit moving to the predetermined measurement point, the predetermined measurement point interfering with the movement-prohibited area is excluded.
18. The information handling system of claim 1, wherein
The predetermined measurement points outside the movable range of the device are excluded.
19. The information handling system of claim 1, wherein
The predetermined measuring point is divided into at least two layers based on a measuring region related to measurement, and
the predetermined measurement points interfering with the model are excluded based on the layer.
20. The information handling system of claim 14 wherein
The simulation unit is configured to display a virtual model of the object set in the virtual space on a tablet terminal or a head-mounted display.
21. The information handling system of claim 14 wherein
The simulation unit is configured to display a virtual model of the object in the form of any one of Virtual Reality (VR), augmented Reality (AR), mixed Reality (MR), and cross reality (XR).
22. The information handling system of claim 1, wherein
The apparatus is mounted on a trolley and the target is placed on a table.
23. An information processing method, comprising:
the control program for a device is created by performing an operation simulation for the device in a virtual space by using the information processing system according to claim 1 or 2.
24. A non-transitory computer-readable recording medium recording a program for causing a computer to execute the information processing method according to claim 23.
25. A robotic system, comprising:
a robot including a movable unit including a measurement unit configured to measure a shape of an object; and
a simulation unit that performs operation simulation of the robot in a virtual space by using a virtual model,
wherein
The movable unit moves the measuring unit to a predetermined measuring point,
the measuring unit measures an object present in the surroundings of the robot at the predetermined measuring point,
a model including position information of the target is acquired by using a measurement result and information about the predetermined measurement point, and the simulation unit sets a virtual model of the target in the virtual space by using the model.
26. A robot system control method, comprising:
creating a control program for the robot by performing operation simulation of the robot in a virtual space by using the robot system according to claim 25.
27. A method of manufacturing an article using a robotic system, the method comprising:
performing simulation related to operation of a robot for manufacturing an article in a virtual space by using the robot system control method according to claim 26; creating a control program for the robot in connection with the manufacture of the article; and operating a robot to manufacture the article by using the control program.
CN202310442985.2A 2022-04-26 2023-04-23 Information processing system and method, robot system and control method, article manufacturing method, and recording medium Pending CN116945157A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2022-072226 2022-04-26
JP2023039698A JP2023162117A (en) 2022-04-26 2023-03-14 Information processing system, information processing method, robot system, robot system control method, article manufacturing method using robot system, program and recording medium
JP2023-039698 2023-03-14

Publications (1)

Publication Number Publication Date
CN116945157A true CN116945157A (en) 2023-10-27

Family

ID=88446789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310442985.2A Pending CN116945157A (en) 2022-04-26 2023-04-23 Information processing system and method, robot system and control method, article manufacturing method, and recording medium

Country Status (1)

Country Link
CN (1) CN116945157A (en)

Similar Documents

Publication Publication Date Title
US11345042B2 (en) Robot system equipped with video display apparatus that displays image of virtual object in superimposed fashion on real image of robot
CN110394780B (en) Simulation device of robot
US7236854B2 (en) Method and a system for programming an industrial robot
EP1435280B1 (en) A method and a system for programming an industrial robot
CN106873550B (en) Simulation device and simulation method
Pan et al. Recent progress on programming methods for industrial robots
CN111844053B (en) Teaching apparatus, teaching method, and robot system
CN103678754B (en) Information processor and information processing method
JP4167954B2 (en) Robot and robot moving method
US10310054B2 (en) Relative object localization process for local positioning system
KR101973917B1 (en) Three-dimensional measuring device and its supporting method of measurement
JP2019519387A (en) Visualization of Augmented Reality Robot System
US11148299B2 (en) Teaching apparatus and teaching method for robots
JP5113666B2 (en) Robot teaching system and display method of robot operation simulation result
CN104802186A (en) Robot programming apparatus for creating robot program for capturing image of workpiece
US20180290304A1 (en) Offline programming apparatus and method having workpiece position detection program generation function using contact sensor
US20180374265A1 (en) Mixed reality simulation device and computer readable medium
KR20130075712A (en) A laser-vision sensor and calibration method thereof
Pachidis et al. Vision-based path generation method for a robot-based arc welding system
JP4558682B2 (en) Manipulator remote control method for mobile robot system
CN116945157A (en) Information processing system and method, robot system and control method, article manufacturing method, and recording medium
US11725939B2 (en) System and method for controlling a light projector in a construction site
US20230339103A1 (en) Information processing system, information processing method, robot system, robot system control method, article manufacturing method using robot system, and recording medium
JPH11338532A (en) Teaching device
Dinh et al. Augmented reality interface for taping robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination