WO1995017995A1 - Method and apparatus for detecting position and orientation, and flexible production system using said apparatus - Google Patents

Method and apparatus for detecting position and orientation, and flexible production system using said apparatus

Info

Publication number
WO1995017995A1
WO1995017995A1 (PCT/JP1994/002212, JP9402212W)
Authority
WO
WIPO (PCT)
Prior art keywords
work
scene
posture
work environment
camera
Prior art date
Application number
PCT/JP1994/002212
Other languages
English (en)
Japanese (ja)
Inventor
Shiyuki Sakaue
Shi-Yu Wang
Hideo Yonemura
Original Assignee
Hitachi, Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi, Ltd. filed Critical Hitachi, Ltd.
Publication of WO1995017995A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 - Vision controlled systems

Definitions

  • The present invention relates to the field of production systems that use a computer to automate product assembling operations, and to a method and an apparatus for detecting the position and posture of an object to be operated on by a robot system without teaching.
  • The present invention also relates to a flexible production system using the same. In particular, a pseudo work environment model created by computer graphics based on product design data, machining/assembly equipment data, and peripheral device data is compared with the actual work environment obtained by a visual device using a TV camera.
  • The present invention thus relates to a position/posture detection method and apparatus suitable for realizing a function capable of autonomous work, and to a flexible production system using the same. Background art
  • Conventionally, robots work in a teaching playback style. Although it is becoming increasingly possible to determine a robot's operation sequence offline, it is common practice to perform positioning by online teaching. In addition to requiring the production line to be stopped for a long time, this also requires skilled workers to operate the robot, and if many robots are used, it takes a long time to start up the entire production line.
  • The robot control method according to the first prior art is intended to give information about a moving object existing in the work environment to the robot and to cause the robot to perform an appropriate action.
  • a knowledge database was used to store knowledge such as action procedures and actions according to various situations.
  • In this method, a simulated image of the working environment is displayed in real time, the real environment and the simulation environment are displayed as overlays, and the position and posture of the simulation environment model are displayed over the real environment.
  • A conventional example in which a person can modify the position and orientation data of the simulation environment model so that it matches that of the real environment is disclosed in a Japanese Unexamined Patent Publication.
  • This first conventional technique aims at remote control of a robot, and image analysis itself is not performed automatically but by a human operator.
  • The knowledge database stores knowledge such as the robot's action procedures and the actions to be taken in various situations; however, how the objects existing in the work environment will appear in the work scene obtained by the scene input device must be known in advance.
  • The present invention analyzes the working environment and corrects the position/posture deviation of the robot or the like based on the analyzed result.
  • An object of the present invention is to provide a method of detecting the position and posture of a photographed object that can accurately grasp the position and posture of the object and automatically correct any deviation. Another object is to provide a position/posture detecting device capable of accurately implementing the above method, and a further object is to provide a flexible production system with the high flexibility needed to cope with high-mix, low-volume production.
  • Shooting of the object in the working environment by the scene detecting means is performed while changing the orientation of the camera about the principal point of the camera lens in the scene detecting means; the plurality of images thus obtained are projected onto a common plane and synthesized into one wide-angle image, which is used for the position and posture detection described above.
  • A pseudo image of the scene that the above scene detection means is expected to capture is generated, and the object to be photographed is located on that pseudo image.
  • At least two scene inspection means capable of capturing images of an object to be processed, a processing device for processing the object, and their peripheral devices are installed in advance.
  • The positions and postures, at a predicted time, of each element of the working environment consisting of these scene inspection means, the work object, the processing equipment, and the peripheral equipment are simulated as images based on design data prepared in advance.
  • When the working environment reaches the time for which the pseudo image was generated, the actual working environment is imaged by each scene inspection means; if there is a discrepancy between the data of the actually photographed working environment and the data from which the pseudo image was generated, either the position and orientation in the work environment are corrected based on the discrepancy, or the corresponding design data are modified.
  • The position/posture detection apparatus has scene detection means for photographing the object. Using actual data indicating the actual position of the scene detection means and design data of the object, an image of what the scene detection means would photograph if the object were at its design position and posture is created in a pseudo manner. The positions of the feature points of the object in this pseudo image are compared with those of the corresponding feature points in the image actually captured by the scene detection means, the deviations between them are calculated, and the position and posture of the object are detected based on those deviations.
  • The position/posture detection device of the present invention comprises: a work environment having a work object, a processing device for processing the work object, peripheral equipment, and at least two scene inspection means capable of imaging the work object, the processing device, and the peripheral equipment; a design data storage device storing information on the shapes and feature points of the work object, the processing equipment, the peripheral equipment, and each scene inspection means; a work plan data storage device storing work procedure data, work process data, and operation plan data for the work object, the machining equipment, the peripheral equipment, and each scene inspection means; and assembling station control means which detects deviations in the position and orientation of each element of the work environment by comparing the actually captured image data with a work environment scene generated in a pseudo manner from the stored data, and which corrects the position and posture of each element of the work environment in a direction that absorbs the detected deviation.
  • Similarly, the flexible production system of the present invention comprises: a work environment having a work object, a processing device, peripheral devices, and at least two scene inspection means capable of imaging the work object, the processing device, and the peripheral devices; a design data storage device storing information on the shapes and feature points of the work object, the processing device, the peripheral devices, and each scene inspection means; a work plan data storage device storing work procedure data and work process data for the work object, the machining equipment, the peripheral devices, and each scene inspection means; and assembly station control means for detecting deviations and correcting each position and posture in the working environment in a direction that absorbs the detected deviations.
  • The actual positions and orientations of the scene input sensor and of the object are detected and corrected by the methods described below; their outlines are given here.
  • the scene input sensor preferably has a zoom function.
  • In the generated pseudo work environment, a feature such as a color or an edge is selected for each expected feature point, and the corresponding feature is then searched for in the image of the real work environment.
  • Next, the method of detecting the three-dimensional position and orientation of the work object photographed by the scene input sensor will be explained in detail.
  • FIG. 4 shows a model of the working environment on which the following explanations are based. The robot and the workpieces are represented as rectangular parallelepipeds in the explanation. In this work environment, one TV camera is installed as the scene input sensor, and each item to be photographed is referred to as an "object".
  • the TV camera should be set to the position and posture where the object can be seen.
  • the four objects also have independent object coordinate systems, and the coordinates of the feature points (vertexes) of each object are defined on the object coordinate system.
  • The coordinates of each object are defined in the visual coordinate system or the object coordinate system (hereinafter, the visual coordinate system and the object coordinate system are sometimes collectively referred to as "local coordinate systems").
  • The origin of each local coordinate system is defined on the stationary coordinate system.
  • Equations 1 and 2 express these coordinate transformations. Note that Trans in the equations is a translation transformation matrix, Rot is a rotation transformation matrix, and Rot(y, θ2) denotes a rotation by θ2 about the y-axis.
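  • For reference, a standard form of these matrices in homogeneous coordinates is sketched below; this is the usual convention and is assumed here, since the original Equations 1 and 2 are not legible in this text.

```latex
\mathrm{Trans}(t_x, t_y, t_z) =
\begin{pmatrix}
1 & 0 & 0 & t_x \\
0 & 1 & 0 & t_y \\
0 & 0 & 1 & t_z \\
0 & 0 & 0 & 1
\end{pmatrix},
\qquad
\mathrm{Rot}(y, \theta_2) =
\begin{pmatrix}
\cos\theta_2 & 0 & \sin\theta_2 & 0 \\
0 & 1 & 0 & 0 \\
-\sin\theta_2 & 0 & \cos\theta_2 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
```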
  • The coordinates Ec of an arbitrary point in the visual coordinate system form an image at the point G on the image pickup surface through the lens, as shown in FIG. 6.
  • the visual coordinate system is set so that its origin is located at the center of the lens.
  • Points 1 and 2 in the figure are on a plane perpendicular to the Zc axis in the visual coordinate system.
  • Sx is a coefficient representing the apparent focal length in the x direction, and Sy is a coefficient representing the apparent focal length in the y direction.
  • Figure 10 shows how the object looks on the TV camera.
  • The distance to the television camera can be determined from the depth information represented by Zc, so that where objects overlap only the object positioned nearer to the camera is displayed. In this way, a pseudo work environment screen can be generated.
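  • A minimal sketch of this pseudo-screen generation is given below, assuming the pinhole relations Gx = Sx·Exc/Ezc and Gy = Sy·Eyc/Ezc implied by the apparent focal lengths above; the function and variable names are illustrative and not from the patent.

```python
import numpy as np

def rot_y(theta):
    """Homogeneous rotation about the y-axis, as in Rot(y, theta2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s, 0],
                     [0, 1, 0, 0],
                     [-s, 0, c, 0],
                     [0, 0, 0, 1]])

def project_point(world_to_visual, point_world, sx, sy):
    """Project a point given in the stationary coordinate system onto the screen.

    world_to_visual is the 4x4 transform from the stationary coordinate system
    into the visual (camera) coordinate system; sx and sy are the apparent
    focal lengths in the x and y directions.
    """
    e = world_to_visual @ np.append(point_world, 1.0)  # (Exc, Eyc, Ezc, 1)
    exc, eyc, ezc = e[:3]
    gx = sx * exc / ezc                                # screen x position
    gy = sy * eyc / ezc                                # screen y position
    return np.array([gx, gy]), ezc                     # keep ezc for near/far ordering

# Usage sketch: project every feature point of every object and, where objects
# overlap, keep only the point with the smallest ezc (the nearest one).
```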
  • Next, a method of obtaining three-dimensional spatial position information from the two-dimensional screen position information of images obtained by shooting the actual working environment with the TV camera will be described. The three-dimensional position information is obtained by comparing the simulated work environment screen obtained by the above processing with the work screen actually captured by the TV camera, using the feature points of the objects. In this case, the components Gx and Gy of G in Equation 5 can be expressed by eight parameters: the position (Cx, Cy, Cz) of the origin C of the visual coordinate system on the stationary coordinate system, the posture angles θ1, θ2, θ3, and the apparent focal lengths Sx, Sy.
  • When the visual coordinate system (television camera) moves on the stationary coordinate system, a difference arises between the actual parameters of the TV camera (Cx′, Cy′, Cz′, θ1′, θ2′, θ3′, Sx′, Sy′) and the design parameters (Cx, Cy, Cz, θ1, θ2, θ3, Sx, Sy).
  • From Equation 5, the first term of ΔGx can be expressed as ∂Gx/∂Cx · ΔCx = ( (Sx/Ezc) · ∂Exc/∂Cx − (Sx·Exc/Ezc²) · ∂Ezc/∂Cx ) · ΔCx (Equation 10). Further, ∂Exc/∂Cx and ∂Ezc/∂Cx appearing in Equation 10 are obtained from Equations 3 and 1,
  • In Equation 12, ΔGn, X, and Rn are the screen position difference vector of the feature points, the parameter error vector, and the partial differential coefficient matrix, respectively, which are expressed by the following Equations 13 to 15.
  • ΔGn in Equation 13 is obtained from the screen positions of the feature points in the pseudo image and the real image.
  • The partial differential coefficient matrix Rn can also be calculated. Therefore, the parameter error X can be obtained from Equation 12 by the following Equation 16.
  • Equation 12 is an approximate equation when the parameter error is assumed to be sufficiently small
  • Equation 16 is a solution of the approximate equation.
  • In practice, the approximate solution is substituted into Equation 7 to update the design values, the result is substituted into Equation 15, and Equation 16 is evaluated repeatedly until the solution converges within a certain range.
  • the position and orientation of the visual coordinate system (television camera) on the stationary coordinate system and the apparent focal length can be determined.
  • Hereinafter, this is simply referred to as the "TV camera position determination method".
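  • A compact sketch of this iterative procedure is shown below as a Gauss-Newton style loop; a numerical Jacobian stands in for Equations 10 to 15, and all names are illustrative rather than taken from the patent.

```python
import numpy as np

def estimate_camera(params0, feature_pts, observed_screen, project,
                    iters=20, tol=1e-8):
    """Refine the camera parameters (Cx, Cy, Cz, th1, th2, th3, Sx, Sy).

    project(params, feature_pts) must return the pseudo-screen positions of the
    feature points for the given parameters, stacked into one vector;
    observed_screen is the corresponding vector taken from the real image.
    """
    params = np.asarray(params0, dtype=float)
    for _ in range(iters):
        g_pseudo = project(params, feature_pts)
        dg = observed_screen - g_pseudo            # screen position difference (cf. Eq. 13)
        # numerical partial differential coefficient matrix Rn (cf. Eq. 15)
        eps = 1e-6
        rn = np.zeros((dg.size, params.size))
        for j in range(params.size):
            p = params.copy()
            p[j] += eps
            rn[:, j] = (project(p, feature_pts) - g_pseudo) / eps
        # least-squares solution of the linearised system (cf. Eq. 16)
        x, *_ = np.linalg.lstsq(rn, dg, rcond=None)
        params += x                                 # update the design values (cf. Eq. 7)
        if np.linalg.norm(x) < tol:
            break
    return params
```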
  • Here, a difference between the work screen and the simulated work screen is attributed to a shift in the position and orientation of the target object.
  • the actual position and orientation of the television camera are known in advance.
  • As the simulated work screen, an image showing how the object would look on the screen of the TV camera if it were in the position and orientation according to the predetermined design is created. Then, the position and orientation of the object are determined by comparing this image with the actual screen. The comparison is actually performed by the following mathematical operations.
  • The components Gpx and Gpy of Gp in Equation 6 can be expressed by eight parameters: the position (Px, Py, Pz) of the origin P of the object coordinate system on the stationary coordinate system, the posture angles θα, θβ, θγ, and the apparent focal lengths Sx, Sy of the TV camera. If there is a difference (ΔPx, ΔPy, ΔPz, Δθα, Δθβ, Δθγ, ΔSx, ΔSy) between the actual parameters of the object (Px′, Py′, Pz′, θα′, θβ′, θγ′, Sx′, Sy′) and the design parameters (Px, Py, Pz, θα, θβ, θγ, Sx, Sy), a difference is generated between the pseudo work screen and the actually photographed screen.
  • As in the TV camera position determination method, if the parameter error is obtained from the screen position difference and the partial differential coefficient matrix, the actual position, posture, and apparent focal length of the object on the stationary coordinate system can be determined. Hereinafter, this is simply referred to as the "object position determination method".
  • The difference between the above-described TV camera position determination method and this object position determination method is that the former determines the position of the camera itself based on the feature point information of a plurality of objects on the screen, whereas the latter determines the position and orientation of the target object itself based on the feature point information of that object.
  • The object position determination method described above is based on the assumption that the actual position and orientation of the TV camera are accurately known. If the exact position is not known, the pseudo image will be created as viewed from a lens principal point different from that of the actual TV camera, and as a result the accuracy of the object position determination will be degraded.
  • The above-mentioned TV camera position determination method, on the other hand, is based on the premise that the actual positions and orientations of the objects are accurately known. In other words, the two methods depend on each other, and accurate position detection cannot be performed unless the actual position of either the TV camera or all of the objects is accurately known in advance.
  • The inventors of the present application have solved this problem, to an extent that poses no practical difficulty, by the first and second methods described below.
  • The accuracy of determining the position of the TV camera depends on the objects actually being at their assumed positions, so if the position of an object is determined from a TV camera position that was itself determined using an object not at its design position, the accuracy will of course be worse. Therefore, in order to determine the exact position of the object, the misaligned object is first found, the TV camera position is then determined again using only the feature point information of the objects that remain at their design positions, and the position and posture of the displaced object are finally determined using the re-determined TV camera position.
  • In short, the position of the TV camera is first determined provisionally using multiple objects. The objects that are out of position are then searched for, and the TV camera position is determined again using the feature point information of the objects located at their design positions. Finally, the position and orientation of the displaced objects are determined using the re-determined TV camera position.
  • First, the TV camera position determination method described above is performed (in this case, assuming that the objects P0 to P3 are all at their correct design positions and postures). That is, the position of the origin of the visual coordinate system on the stationary coordinate system and the posture of the visual coordinate system are obtained.
  • the position and orientation of the television camera obtained here will be simply referred to as “estimated camera position”.
  • The position and attitude of the TV camera obtained here, that is, the estimated camera position, is not necessarily the same as the actual position on the stationary coordinate system.
  • Next, using the design data held in advance, a pseudo image is created of what a television camera placed at the estimated camera position obtained in step a would see if each object were at its design position. Then, using the created pseudo image and the image actually taken by the TV camera, the above-described object position determination method is applied (in this case, it is assumed that a TV camera actually exists at the estimated camera position).
  • In this way, the position and orientation of each object P0 to P3 on the stationary coordinate system, that is, the position of the origin and the orientation of each object coordinate system on the stationary coordinate system, are provisionally determined.
  • The position and orientation of the object obtained here are simply referred to as the "estimated object position".
  • The estimated object position obtained in this way does not always match the actual position and orientation of the objects P0 to P3 on the stationary coordinate system. This is because the estimated camera position assumed when applying the object position determination method does not necessarily coincide with the actual position of the television camera.
  • each object is set to the above-mentioned estimated object position, and the television camera is set to the estimated camera position.
  • The setting in this case is performed in a simulated manner; the television camera and the objects are not physically moved. Each of the above objects is then moved, in the simulation, from its estimated object position back to its design position.
  • Along with the movement of each object, the TV camera is also moved so that the relative positional relationship between that object and the camera is maintained. This camera movement is likewise simulated, and the position and attitude of the TV camera at the moment each object reaches its design position are obtained.
  • The position of the TV camera obtained here is referred to as the "second estimated camera position".
  • The feature point sequence Xn on the object P0 (Equation 18) is converted into the visual coordinate system; letting the converted sequence be Yn, Yn is expressed as in Equation 19. Assume that the actual position and posture of the television camera (Equation 20) are (C3x, C3y, C3z, θ31, θ32, θ33).
  • In this case, a matrix for transforming a point defined in the visual coordinate system into the stationary coordinate system can be expressed as in the following Equation 21.
  • Equation 19 can be obtained as Equations 25 and 26 below.
  • The second estimated camera positions calculated using the objects P0, P1, and P2, whose actual positions match their design positions, should all coincide.
  • In contrast, the second estimated camera position calculated using the object P3, which is located at a position deviating from its design position, will not match the others. Therefore, by finding the estimate whose distance from the others lies outside the permissible range of the quantization error, an object that is not at its design position can be identified.
  • Next, using the objects found to be at their design positions, the position of the TV camera is accurately determined again.
  • Using the exact position of the television camera determined in step a6 above, the position and orientation of the object P3, which lies outside the allowable range, are determined.
  • In this way, the position of the TV camera and of any object not at its design position can be determined with high accuracy.
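  • The first method can be summarised by the outline below; the helper callables are hypothetical stand-ins for the camera and object position determination routines described above, supplied by the caller.

```python
def resolve_camera_and_objects(objects, determine_camera, determine_object,
                               second_camera_estimate, spread, tol):
    """Outline of the first method for mutually dependent camera/object poses.

    determine_camera(objs)                 -> camera pose from these objects' feature points
    determine_object(obj, cam)             -> pose of obj as seen from camera pose cam
    second_camera_estimate(obj, cam, pose) -> camera pose after moving obj (and the
                                              camera with it) back to its design pose
    spread(est, all_estimates)             -> distance of one estimate from the others
    """
    # step a: assume every object is at its design pose and estimate the camera
    cam_est = determine_camera(objects)

    # step b: provisional ("estimated") object poses with the camera fixed at cam_est
    obj_est = {o: determine_object(o, cam_est) for o in objects}

    # step c: one "second estimated camera position" per object
    second = {o: second_camera_estimate(o, cam_est, obj_est[o]) for o in objects}

    # step d: estimates from correctly placed objects coincide; anything outside
    # the quantisation tolerance marks a displaced object
    displaced = [o for o in objects if spread(second[o], list(second.values())) > tol]
    well_placed = [o for o in objects if o not in displaced]

    # steps e-f: redetermine the camera from the well-placed objects only, then
    # determine the poses of the displaced objects with that camera
    cam_final = determine_camera(well_placed)
    displaced_poses = {o: determine_object(o, cam_final) for o in displaced}
    return cam_final, displaced_poses
```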
  • In the second method, the TV camera position determination method is applied using each object to determine the position of the TV camera. The position of the TV camera obtained here is referred to as the "provisional position".
  • Rpn is the partial differential coefficient matrix of the screen positions Gpxi with respect to the parameters.
  • Equation 31 is obtained from the screen position difference between the feature points of the pseudo image and the real image.
  • The partial differential coefficient matrix Rpn can be calculated. Therefore, the parameter error Xp can be obtained from Equation 30 by the following equation.
  • cRpn, oRpn, and sRpn are the partial differential coefficient matrices related to the position and attitude parameters of the TV camera, to the position and attitude parameters of the object, and to the apparent focal length parameters of the TV camera, respectively.
  • The parameter error vector Xp is composed of cXp, oXp, and sXp, which are the position and attitude parameter vector of the TV camera, the position and attitude parameter vector of the object, and the magnification (apparent focal length) parameter vector of the television camera, respectively.
  • Equation 45 gives the parameter error Xall, Equation 46 is the corresponding linear relation, and its least-squares solution is Xall = (Rallᵀ·Rall)⁻¹·Rallᵀ·ΔGall (Equation 47).
  • Equation 46 is an approximate equation when the parameter error is sufficiently small
  • Equation 47 is a solution of the approximate equation.
  • When a single TV camera is used, the position and posture are constrained by the parameters of that one TV camera; the partial differential coefficient matrix Rall at this time is modified accordingly.
  • The screen position difference vector ΔGall of the feature points on the objects is obtained from the feature points of the pseudo image and the real image, and the partial differential coefficient matrix Rall for the parameters can be calculated. Therefore, the parameter error vector Xall can be obtained in the same manner.
  • Equation 50 is the approximate equation when the parameter error is sufficiently small, and the equation that follows is its solution.
  • By repeating the calculation until it converges, the position of the television camera and the positions of the objects can be determined simultaneously.
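  • Collecting the pieces above, the combined linear system and its least-squares solution take the form sketched below; the exact block layout of the patent's equations is not reproduced here, so this should be read as an assumed but conventional arrangement.

```latex
\Delta G_{\mathrm{all}} = R_{\mathrm{all}}\, X_{\mathrm{all}},
\qquad
R_{\mathrm{all}} = \bigl[\; {}_{c}R_{pn} \;\; {}_{o}R_{pn} \;\; {}_{s}R_{pn} \;\bigr],
\qquad
X_{\mathrm{all}} =
\begin{pmatrix} {}_{c}X_{p} \\ {}_{o}X_{p} \\ {}_{s}X_{p} \end{pmatrix},
\qquad
X_{\mathrm{all}} = \left( R_{\mathrm{all}}^{\mathsf T} R_{\mathrm{all}} \right)^{-1}
R_{\mathrm{all}}^{\mathsf T}\, \Delta G_{\mathrm{all}}
```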
  • FIG. 1 is a basic configuration diagram of an embodiment of the present invention.
  • FIG. 2 illustrates a hardware configuration example of the assembling station control system shown in FIG. 1, and FIG. 3 is an illustration of one example of the working environment shown in FIG. 1.
  • FIG. 4 is an explanatory diagram of a work environment model according to the present invention.
  • FIG. 5 is an explanatory diagram of the relationship between the coordinate systems in FIG. 4.
  • FIG. 6 is an explanatory diagram showing a perspective transformation of an arbitrary point Ec on the visual coordinate system to an imaging surface.
  • FIG. 7 is a flowchart showing a process for determining the position and orientation of the television camera and the object.
  • FIG. 8 is an explanatory view showing a posture changing device having two rotation axes.
  • FIG. 9 is an explanatory diagram showing a posture changing device having three rotation axes.
  • FIG. 10 is an explanatory diagram showing a posture changing device having two rotation axes and one linear moving means.
  • FIG. 11 is an explanatory view showing a posture changing device having three rotation axes and one linear moving means.
  • FIG. 12 is an explanatory view showing a posture changing device when the position of the rotation center of the camera lens does not coincide with the position of the principal point.
  • FIG. 13 is an explanatory diagram for projecting the imaged feature points on the same plane when the position of the rotation center of the camera lens and the position of the principal point in the posture changing device do not match.
  • FIG. 14 is an explanatory diagram in which a plurality of television cameras are arranged at the same radial position with respect to the principal point position.
  • FIG. 15 is an explanatory diagram of the arrangement of multiple TV cameras arranged around the principal point, as seen from the principal point position.
  • FIG. 16 is an explanatory diagram of measuring the position of each feature point when capturing the feature points of an object.
  • FIG. 17 is an explanatory diagram of measuring the position of one feature point of an object using two posture changing devices each having a scene detecting means.
  • FIG. 18 is an explanatory diagram in the case where one point on a known curved surface is obtained by one scene detecting means.
  • FIG. 19 is an explanatory diagram of a coordinate system in the case where one point on a known curved surface is similarly obtained by scene detection means.
  • As shown in FIG. 1, the flexible production system is configured from a work environment 1, an assembly station control device 2, a design data storage device 3, and a work plan data storage device 4.
  • The work environment 1 consists of a work object 12 as the workpiece, peripheral devices 13, a work robot 14 as a processing device, a visual robot 15, a posture changing device 33 having a television camera 34, and other structures.
  • The visual robot 15 captures images of the work object 12, the peripheral device 13, the work robot 14, and the television camera 34, and the television camera 34 captures images of the work object 12, the peripheral device 13, the work robot 14, and the visual robot 15; all of the items captured in these images are hereinafter referred to as "objects to be photographed".
  • The peripheral device 13 supports and transports the work object 12 in the work environment 1; the details will be described with reference to FIG. 3.
  • The work robot 14 has a hand effector that works on the work object 12, a so-called robot hand, and other processing tools, and assembles the work object 12 using these.
  • the work robot 14 itself is configured to be movable.
  • The visual robot 15 has scene detecting means, similar to the television camera 34, for capturing the objects to be photographed in the work environment 1 as image data, and is configured to be movable like the work robot 14.
  • the posture changing device 33 is a device that changes the posture of the television camera 34.
  • the TV camera 34 is a scene detection means for taking in the work environment 1 as image data, like the video camera of the visual robot 15.
  • The assembly station control device 2 detects shifts in the position and posture of the objects based on the work environment data captured by the visual robot 15 and the television camera 34 and on the data in the design data storage device 3 and the work plan data storage device 4, and moves the objects in a direction that absorbs the detected shifts; the details will be described later. Alternatively, when a shift in the position and orientation of an object to be photographed is detected, the assembly station control device 2 corrects the design data by the amount of the shift.
  • the design data storage device 3 stores information on the shapes of products and robots, their sizes, feature points, and the like, that is, stores product design data, robot design data, and peripheral device design data.
  • the work plan data storage device 4 stores work procedure data, work process data, and operation route data for the work object 12, the work robot 14, the peripheral device 13, the visual robot 15, and the posture change device 33. Is stored.
  • As shown in FIG. 1, the assembling station control device 2 comprises pseudo work environment scene generation means 20, image processing means 21, work environment scene understanding means 22, peripheral device control means 23, work robot control means 24, visual robot control means 25, operation command generation means 26, work environment database construction means 28, work environment data storage means 29, image data storage means 30, image combining means 31, and attitude changing device control means 32.
  • the work environment database construction means 28 constructs a work environment database from the data stored in the design data storage device 3 and the work plan data storage device 4 and the information obtained by the work environment scene understanding means 22. Things.
  • the data constructed here is configured to be stored in the work environment data storage means 29.
  • the simulated work environment scene generation means 20 has a function of simulating a work environment scene at a specific time during the work by using the data stored in the work environment data storage means 29. .
  • the work environment scene generated simulated based on the data in this manner is referred to as a “pseudo scene image”.
  • The image processing means 21 processes the image data of the actual working environment obtained by the television camera serving as the scene detecting means of the visual robot 15 and by the television camera 34 of the attitude changing device 33, and outputs the result to the work environment scene understanding means 22.
  • the image processing means 21 also has a function of storing a plurality of image data in the image data storage means 30.
  • an image of the actual working environment obtained through the scene detecting means is referred to as a “real scene image” corresponding to the pseudo scene image.
  • The image combining means 31 has a function of combining a plurality of image data stored in the image data storage means 30 into a single high-definition image or a wide-angle high-definition image. These single high-definition images and wide-angle high-definition images are generated selectively with the support of the work environment scene understanding means 22 described later. In the following description, an image synthesized into a single high-definition image or a wide-angle image based on multiple images of the actual working environment obtained through the visual robot 15 and the TV camera 34 is referred to as a "high-definition composite image" or a "wide-angle composite image"; these are also regarded as real scene images.
  • The work environment scene understanding means 22 compares the pseudo scene image generated by the pseudo work environment scene generation means 20 with the real scene image obtained by the visual robot 15 or the like, and thereby detects misalignment of the position and posture of the work object 12 and the other objects.
  • The operation command generation means 26 has a function of generating operation commands based on the information obtained by the work environment scene understanding means 22; based on the generated operation commands, the peripheral device control means 23 controls the peripheral device 13, the work robot control means 24 controls the work robot 14, the visual robot control means 25 controls the visual robot 15, and the posture changing device control means 32 controls the posture changing device 33, each individually.
  • In terms of hardware, as shown in FIG. 2, the assembly station control device 2 includes a system bus 101, a bus control device 102, a central processing unit 103a, a main storage device 103b, a magnetic disk device 104, a keyboard 105, a display 106, an image generation device 107, an image processing device 108, a TV camera 109, a zoom motor / focus motor / iris motor control device 110, and related components.
  • The work environment database construction means 28, the work environment data storage means 29, the image data storage means 30, the image synthesis means 31, the work environment scene understanding means 22, the peripheral device control means 23, the work robot control means 24, the visual robot control means 25, the operation command generation means 26, and the attitude changing device control means 32 are realized mainly by the central processing unit 103a, its main storage unit 103b, and the magnetic disk unit 104.
  • the pseudo working environment scene generating means 20 is realized mainly by the image generating device 107.
  • the image processing means 21 is mainly realized by the image processing device 108.
  • Since each hardware component does not provide these functions independently but achieves them in close cooperation with the others, the correspondences given here are not strict.
  • the design data storage device 3 and the work plan data storage device 4 in FIG. 1 exchange data with each other via a network 119.
  • The television camera 109 and the lens system 111 shown in FIG. 2 are mounted on the visual robot 15 and the posture changing device 33, rather than on the assembly station control device 2 shown in FIG. 1.
  • the motor 116 is mounted on the peripheral device 13 or the working robot 14 shown in FIG. 1 and drives them.
  • The table-shaped robots 301a and 301b are for mounting the work object 12 and constitute the peripheral devices 13 shown in FIG. 1.
  • Each of the robots 301a and 301b is configured to move by itself as needed.
  • The arm-shaped robots 302a, 302b, and 302c each have an arm whose movement can be controlled. At the end of each robot's arm there is either a hand effector 3021a or 3021b for performing work such as machining and assembly on the work object 12, or a television camera 3022.
  • The robots 302a and 302b having the hand effectors 3021a and 3021b are the work robots 14 shown in FIG. 1, and the robot 302c having the television camera 3022 is the visual robot 15 shown in FIG. 1. The television camera 3022 corresponds to the television camera 109 and the lens system 111 in FIG. 2.
  • Each of the robots 302a to 302c is also configured to travel by itself as necessary, similarly to the table-shaped robots 301a and 301b.
  • a television camera 34 is arranged at a position near each of the robots so that they can be imaged.
  • the television camera 34 is installed in a posture changing device 35.
  • This television camera 34 also corresponds to the television camera 109 and the lens system 111 in FIG.
  • The work transfer robots 303a and 303b are for carrying the work object 12 and moving it between processes, and likewise constitute peripheral devices 13.
  • the equipment configuration is changed dynamically according to the product design information and the type and quantity of the product, and processing and assembly are performed according to the online instructions from the work planning department as per the work plan. .
  • The peripheral device 13, the work robot 14, the visual robot 15, and the posture changing device 33 shown in FIG. 1 are controlled by the peripheral device control means 23, the work robot control means 24, the visual robot control means 25, and the attitude changing device control means 32, respectively.
  • the central processing unit 103a generates an operation command (digital signal).
  • the operation command is sent to the D / A converter 114 via the system bus 101, where it is converted into an analog signal, and then sent to the motor driving device 115.
  • The motor drive unit 115 drives the motor 116 in accordance with the operation command (analog signal), thereby operating the robots 301a and 301b as the peripheral devices 13, the robots 302a and 302b as the work robots 14, the robot 302c as the visual robot 15, and the posture changing device 35.
  • The counter 112 and the pulse generator 113 are also used in controlling the robots 301a, 301b, 302a, 302b, and 302c and the posture changing device 35.
  • The robots 302a and 302b use the hand effectors 3021a and 3021b to assemble the work object 12 placed on the robot 301b.
  • The visual robot 302c having the television camera 3022, together with the television camera 34 installed on the posture changing device 35, monitors the assembling work.
  • The above-mentioned television cameras 3022 and 34 have lens systems 111 whose zoom, focus, and iris motors are controlled by the zoom motor / focus motor / iris motor control device 110. To view the monitored work precisely, the TV cameras 3022 and 34 are oriented to the desired positions using the robot 302c and the posture changing device 35, and a high-definition composite image is obtained by setting the lens systems 111 to telephoto.
  • Data stored on the magnetic disk 104 are used as data indicating the shapes and sizes of the parts to be assembled. Similarly, the moving paths, arrangements, assembly work procedure data, work process data, and the like stored on the magnetic disk 104 are used for the operation commands to the robots 302a, 302b, 301a, and 301b and the hand effectors 3021a and 3021b.
  • Each work robot 14 and the other units are monitored by the visual robot 15 and the television camera 34 so that their positions and postures can be corrected.
  • the procedure for monitoring the position / posture and correcting it will be described below.
  • First, the work environment database construction means 28 shown in FIG. 1 reads the data needed to analyze the work environment from the design data storage device 3 and the work plan data storage device 4 as work environment data, and registers the read data in the work environment data storage means 29 (the magnetic disk 104 in FIG. 2).
  • Next, the pseudo work environment scene generation means 20 generates a pseudo scene, that is, the pseudo scene image that the visual robot 15 would capture at a specific time during the work if the work robot 14 and the other units were at the positions and orientations given by the work environment data.
  • In terms of the hardware of FIG. 2, the central processing unit 103a, the main storage device 103b, and the image generating device 107 generate the pseudo scene image using the work environment data stored on the magnetic disk 104.
  • The pseudo work environment scene generation means 20 outputs the generated pseudo scene to the work environment scene understanding means 22 shown in FIG. 1. Since the pseudo scene generation processing has already been described in the section on the principle of operation, its description is omitted here.
  • The image processing means 21 performs predetermined processing on the captured images, outputs them to the work environment scene understanding means 22 as real scene images, and stores a plurality of image data as real scene images in the image data storage means 30.
  • the image processing device 108 processes an image obtained by the television camera 109 and outputs the processed image to the central processing device 103a.
  • The image combining means 31 combines the plurality of image data stored in the image data storage means 30 into one composite image and outputs the combined image to the work environment scene understanding means 22. This processing is performed by the central processing unit 103a in FIG. 2.
  • The work environment scene understanding means 22 compares the pseudo scene image generated by the pseudo work environment scene generation means 20 with the real scene image of the work environment 1 obtained by the visual robot 15 and the television camera 34 (including wide-angle composite images and high-definition composite images) and extracts the differences. In other words, information about the feature points of each of the work object 12, the peripheral device 13, the work robot 14, the visual robot 15, and the television camera 34 is extracted from the real scene image and compared with the pseudo scene image; in this way the positions and postures of the respective units 12 to 15 and 34 are identified and any position/posture deviation is detected. The processing for identifying the position and posture has already been described in the section on the principle of operation, so its description is omitted.
  • When a deviation is detected, the work environment scene understanding means 22 either changes the work environment data themselves in the work environment data storage means 29 so that the pseudo scene image matches the real scene image, or issues a command to correct the position/posture deviation; in the latter case the operation command generating means 26 issues the corresponding operation command and operates each of the units 12 to 15 and 34 so as to match the pseudo scene image.
  • The above series of processing is performed every time the work object 12, the peripheral device 13, the work robot 14 (301a, 301b), the visual robot 15, or the TV camera 34 moves during the work of each process, that is, each time one specific operation is performed. As a result, the work object 12, the peripheral device 13, the work robot 14, the visual robot 15, and the television camera 34 carry out the work while their positions are checked at every move, so the work proceeds according to the work plan. In addition, if the above-described object position determination method is executed for each object (the work robot 14 and so on), the relative positional relationships and distances between them can be known.
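  • The monitoring and correction cycle described above can be sketched as the loop below; the names are illustrative, and in the real system these steps are distributed across the means 20 to 32 shown in FIG. 1.

```python
def monitoring_cycle(work_env_data, generate_pseudo_scene, capture_real_scene,
                     compare_scenes, issue_motion_correction, update_work_env_data):
    """One cycle: pseudo scene vs. real scene, then correct motion or data."""
    pseudo = generate_pseudo_scene(work_env_data)    # pseudo scene image (means 20)
    real = capture_real_scene()                      # real scene image (means 21, 31)
    deviations = compare_scenes(pseudo, real)        # scene understanding (means 22)
    for unit, shift in deviations.items():
        if shift.correct_physically:
            issue_motion_correction(unit, shift)     # operation command (means 23-26, 32)
        else:
            # absorb the deviation by updating the work environment data instead
            work_env_data = update_work_env_data(work_env_data, unit, shift)
    return work_env_data
```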
  • As described above, according to this embodiment, the positions and orientations of the work object 12, the peripheral device 13, the work robot 14, the visual robot 15, and the television camera 34 constituting the work environment 1 are predicted from the work environment data; pseudo scene images of the predicted units 12 to 15 and 34 are generated and compared with the real scene images of the actually photographed units 12 to 15 and 34; and if there is a difference between the pseudo scene image and the real scene image, the corresponding units 12 to 15 and 34 are corrected in a direction that absorbs the deviation, or the work environment data are corrected by the amount of the deviation. The actual work environment can therefore be analyzed accurately. In addition, since the analysis of the work environment when making such corrections is also performed automatically, each unit 12 to 15 and 34 can be operated accurately using only numerical data, eliminating the need for online teaching.
  • the production system can be operated autonomously.
  • the equipment can be manufactured at low cost.
  • The scene detection means consisting of the visual robot 15 and the TV camera 34 are independent of the work robot 14 and the peripheral devices 13; by capturing the work environment scene while the work robot and the peripheral devices are operating, the scene can be judged accurately, so the manufacturing time can be shortened as much as possible.
  • Since the position of the scene detection means can be accurately determined from the feature points in the work environment obtained by the scene detection means, the relative relationship between any two items in the work environment can also be determined accurately.
  • FIGS. 8 to 11 show an embodiment of the posture changing device 33.
  • First, a specific configuration for realizing the posture changing device 33 will be described with reference to FIG. 8.
  • The posture changing device 33 includes a base 401, a first rotating shaft 402, a second rotating shaft 403, and a television camera 404. The first rotating shaft 402 is mounted on the base 401, and its output section 402a rotates about the vertical axis. The second rotating shaft 403 is attached to the tip of this output section 402a, and its L-shaped output section 403a rotates about the horizontal axis. A television camera 404 is fixed to the tip of the output section 403a. The extension line Y of the output section 402a of the first rotating shaft 402 and the extension line X of the output section 403a of the second rotating shaft 403 are orthogonal to each other. The central axis Z of the camera lens 405 of the TV camera 404 passes through the point D where these extension lines intersect, the rotation center of the camera lens 405 coincides with this point D, and the principal point of the camera lens 405 is also arranged to coincide with D. Therefore, the axes of the first rotating shaft 402 and the second rotating shaft 403 and the central axis Z of the camera lens 405 intersect at one point, the principal point (D) of the camera lens 405.
  • The first and second rotating shafts 402 and 403 each consist of a motor with a reduction gear, directly coupled to a high-resolution encoder. Since the rotation of the motor is transmitted via the reduction gear, one rotation on the output shaft side of the reduction gear is equivalent to 100 rotations on the motor side, and if an encoder that divides one rotation of the motor into 400 is used, high accuracy can be obtained such that one rotation of the output shaft is divided into 400,000.
  • The rotating shafts 402 and 403 are connected to the attitude changing device control means 32 of the assembling station control device 2 shown in FIG. 1 and are driven by the motor driving device 115 shown in FIG. 2. The pulses of the encoders attached to the rotating shafts 402 and 403 are counted by the counter 112 and used to determine the position and orientation of the television camera 404.
  • Thus, the posture changing device 33 has the first rotating shaft 402 and the second rotating shaft 403, the rotation center of the camera lens of the television camera is located at the point D where the extension line Y of the output section 402a of the first rotating shaft and the extension line X of the output section of the second rotating shaft intersect, and the TV camera 404 has a swing function that rotates about the rotation center of the camera lens 405. As a result, the detection accuracy when detecting the object to be photographed can be improved.
  • The embodiment shown in FIG. 9 is obtained by adding one rotating shaft 406 to the embodiment shown in FIG. 8. In this case, a third rotating shaft 406 is installed on the output section 403a of the second rotating shaft 403 via a mounting plate 407 and an L-shaped support portion 408 provided on the mounting plate, and a television camera 404 is attached to an output section (not shown) of the rotating shaft 406. By driving the third rotating shaft 406, the television camera 404 rotates, relative to the mounting plate 407 and the support portion 408, about the horizontal axis orthogonal to the second rotating shaft 403, that is, about the rotation center of the camera lens 405.
  • In this case as well, the rotation center of the camera lens 405 is located at the intersection D between the extension line Y of the output section 402a of the first rotating shaft 402 and the extension line X of the output section 403a of the second rotating shaft 403, and the principal point of the camera lens 405 is also located at this intersection D.
  • In this embodiment, the first to third rotating shafts 402, 403, and 406 allow the television camera 404 to be rotated finely, so that, compared with the embodiment shown in FIG. 8, the detection accuracy when detecting an object to be photographed can be further improved.
  • FIG. 10 shows the posture changing device 33 of the embodiment of FIG. 8 with a translation axis added. That is, a mounting plate 410 is attached to the output section 403a of the second rotating shaft 403, and a translation shaft 409 consisting of a cylinder is installed at one end of the mounting plate 410. A television camera 404 is connected to the translating section 409a of the translation shaft 409. Accordingly, by driving the translation shaft 409, the television camera 404 can be moved linearly, in the same plane as the second rotating shaft 403, in a direction orthogonal to that shaft. The camera lens 405 of the television camera 404 is arranged so that its central axis Z passes through the point D where the extension line Y of the output section 402a of the first rotating shaft 402 crosses the extension line X of the output section 403a of the second rotating shaft 403, and its principal point lies on the central axis Z. Thus, the television camera 404 can be rotated by driving the first and second rotating shafts 402 and 403, and the camera lens 405 can be moved linearly in the direction orthogonal to the second rotating shaft 403 within the same plane, so that the focal position of the camera lens 405 can be adjusted easily and quickly.
  • FIG. 11 shows the embodiment of FIG. 10 with a rotating shaft 406 similar to that of the embodiment of FIG. 9 added. That is, the third rotating shaft 406 is installed, via the support portion 408, on the mounting plate 410 attached to the second rotating shaft 403, and by driving the rotating shaft 406 the television camera 404 rotates about the rotation center of the camera lens 405 relative to the support portion 408. Therefore, in this embodiment, compared with the embodiment shown in FIG. 10, the television camera 404 not only moves linearly in the direction orthogonal to the second rotating shaft 403 within the same plane, but also rotates about the central axis Z of the camera lens 405, so that detection can be performed more precisely at any time.
  • In practice, the rotation center of the camera lens 405 and its principal point often do not coincide. Therefore, when the principal point of the camera lens 405 and the rotation center cannot be made to coincide, each feature point is aligned with a specific position in the screen, for example the screen center, using the posture changing device, and the rotation axis angle data at that moment are used.
  • This will be described specifically for a posture changing device using two axes, as shown in Fig. 8, with reference to Fig. 12. A vertically downward axis is defined as Y and a horizontal axis as X; their intersection is taken as the origin, and the axis orthogonal to both is taken as Z. Each feature point aligned with the specific screen position is converted, from the rotation axis angles at that moment, into a point on a common plane; the required number of feature points are processed in the same way, and all of them are projected onto the same plane. These projected data are used as one set of image data.
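  • A hedged sketch of this re-projection for the two-axis device is given below; the axis conventions and the reference distance f are assumptions, since Equations 53 and 54 are not legible in this text.

```python
import numpy as np

def angles_to_plane_point(phi1, phi2, f=1.0):
    """Map the pan/tilt angles at which a feature point was centred on the screen
    to a point on one common image plane at distance f.

    phi1 is the rotation of the first (vertical) axis and phi2 that of the
    second (horizontal) axis of the two-axis posture changing device.
    """
    # viewing direction of the lens axis for the given joint angles
    d = np.array([np.sin(phi1) * np.cos(phi2),
                  np.sin(phi2),
                  np.cos(phi1) * np.cos(phi2)])
    # intersect the viewing ray with the plane z = f to get one image-plane point
    return f * d[0] / d[2], f * d[1] / d[2]

# All feature points measured this way land on the same plane and can then be
# treated as one set of image data for the position determination methods.
```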
  • Using the posture changing device to bring each feature point to a designated point on the screen has further advantages. If a zoom lens mounted as the camera lens 405 is used at a telephoto setting, the feature points will be blurred when focus cannot be achieved, and simply shifting the focus causes the principal point of the lens to shift. However, if each feature point is brought to the designated point on the screen, it is not necessary to know the position of the principal point of the lens, so the number of parameters to be measured can be reduced. In addition, it is generally difficult to manufacture a lens without distortion, and using the entire surface of an image is said to reduce measurement accuracy; in this respect, too, the posture changing device offers advantages.
  • Alternatively, a plurality of TV cameras may be arranged three-dimensionally so that a wide area is captured simultaneously with high-magnification camera lenses and processed into one high-definition image; this avoids the time consumed in moving a single camera with the posture changing device.
  • Fig. 14 shows TV cameras arranged in an arc, with the principal points of each camera lens aligned to one point.
  • Fig. 15 shows the television cameras arranged on a spherical surface, viewed from the principal point position.
  • Next, the procedure for measuring the feature points from the captured data will be described with reference to FIG. 16.
  • The object whose posture is to be measured in the figure is composed of three boxes, i, ro, and ha. These three boxes have feature points with clear contrast, indicated by "+" marks in the figure.
  • These feature points are measured in advance in the coordinate system of each of the three boxes. If the workpiece is machined as designed, the design values may of course be used instead of measured values.
  • The boxes ro and ha are placed at random. The purpose of the measurement is to know the position and orientation of the boxes ro and ha with respect to the box i.
  • The posture changing device 33 in the figure is driven so that each feature point of the boxes i, ro, and ha comes to a specific position on the screen, for example the screen center, and the rotation angles at that time are detected.
  • Each feature point is re-projected onto a point on the same plane according to Equations 53 and 54 described above. That is, the position and orientation of the posture changing device 33 are first obtained with the box i as the origin.
  • Next, the position and posture parameters of the boxes ro and ha are obtained by the procedure described above.
  • Next, the minimum number of feature points required when using the posture changing device 33 will be described. Six parameters of the posture changing device 33 must be found with respect to a given object, for example the box in Fig. 16: the position of its origin (Cx, Cy, Cz) and the posture (θ1, θ2, θ3) relating to the directions of its coordinate axes. On the other hand, the image information of each feature point consists of the two coordinate values of that point on the common plane. Therefore, if there are three feature points, the six position/posture parameters of the posture changing device 33 with respect to the object can be obtained.
The posture changing device 33 shown in FIG. 16 is a two-axis type similar to that shown in FIG. 12, and its coordinate system is also the same as that of FIG. 12. The rotation angles of the first and second rotating shafts 402 and 403 are denoted by φ1 and φ2, respectively, and the direction of the central axis of the camera lens 405 can then be expressed by Equation 52 above. The position of the origin C of the coordinate system of the posture changing device on the stationary coordinate system is (CX, CY, CZ), and its posture is represented by the rotation angles θ1, θ2, and θ3 around the X, Y, and Z axes of the stationary coordinate system. Similarly, the position of the origin P of the coordinate system of the object whose position and posture are to be measured is (Px, Py, Pz) on the stationary coordinate system, and its posture is represented by the rotation angles θα, θβ, and θγ around the X, Y, and Z axes of the stationary coordinate system.
The parameters determined here are those for the single TV camera described above, as given by Equation 29, when the object is viewed by it.
Next, the method of measuring the position (X, Y, Z) of one feature point of the object will be described. Because a single scene detecting means yields only a plane image, its data for one feature point are the two coordinates (Gx, Gy); therefore the position of one feature point cannot be measured unless there are two or more scene detecting means. The method of determining the positional relationship between the two will be described with reference to FIG. 17.
In FIG. 17 there are two scene detecting means (television cameras), A and B, mounted on two posture changing devices 331 and 332, respectively, together with a calibration workpiece 333 and an object to be measured 334. The two posture changing devices 331 and 332 each have two axes, as shown in Fig. 8. The calibration workpiece 333 carries a plurality of calibration marks 333a; each mark 333a is measured in advance in the workpiece coordinate system with a dimension measuring machine, so that the relative positions of the marks are held as data. Each of the scene detecting means A and B detects the marks 333a on the calibration workpiece 333 and measures the three-dimensional position and orientation of the workpiece 333, for example in the coordinate system of the scene detecting means A and in that of the scene detecting means B. By this operation, the relative position of the two scene detecting means A and B is determined.
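One standard way to convert the two sets of mark measurements into the relative position and orientation of the scene detecting means is a rigid-body (rotation plus translation) least-squares fit. The sketch below assumes that each scene detecting means has already produced three-dimensional coordinates of the same marks 333a in its own frame; the SVD-based method and all names are illustrative assumptions rather than the procedure spelled out in the patent.

```python
import numpy as np

def rigid_transform(marks_a, marks_b):
    """Least-squares R, t with marks_a ~= marks_b @ R.T + t, i.e. the pose of
    scene detecting means B expressed in the frame of scene detecting means A."""
    ca, cb = marks_a.mean(axis=0), marks_b.mean(axis=0)
    H = (marks_b - cb).T @ (marks_a - ca)                 # covariance of the centred mark sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = ca - R @ cb
    return R, t

# Mark coordinates of the calibration workpiece 333 as measured by A and (synthetically) by B:
marks_a = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]])
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
marks_b = (marks_a - np.array([0.2, 0.1, 0.3])) @ R_true
R, t = rigid_transform(marks_a, marks_b)
print(R, t)      # recovers the assumed relative rotation and translation
```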
Next, the point P on the object to be measured 334 is brought to the center of the screen by the scene detecting means A and B, respectively, and the angles of the rotation axes of the posture changing devices 331 and 332 at that time are obtained. From these angles one point on each plane is obtained, and the three-dimensional coordinates of P are obtained by the method described above. The point Q is found in the same way, and from the three-dimensional coordinates of the points P and Q, the distance L between them is determined.
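A minimal sketch of this triangulation, assuming the same idealized two-axis model as above and taking the origins of the two scene detecting means as already known from the calibration step, intersects the two viewing rays (strictly, takes the midpoint of their closest approach); the origins and angle values are made-up numbers for illustration.

```python
import numpy as np

def ray_direction(phi1, phi2):
    """Optical-axis direction for pan/tilt angles under the assumed gimbal model."""
    return np.array([np.cos(phi2) * np.sin(phi1),
                     np.sin(phi2),
                     np.cos(phi2) * np.cos(phi1)])

def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between the two rays o + s*d."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    s = (b * e - c * d) / (a * c - b * b)
    t = (a * e - b * d) / (a * c - b * b)
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Scene detecting means A and B (relative position known from the calibration workpiece):
oA, oB = np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.0, 0.0])
P = triangulate(oA, ray_direction(0.12, 0.04), oB, ray_direction(-0.20, 0.04))
Q = triangulate(oA, ray_direction(0.30, -0.02), oB, ray_direction(-0.05, -0.02))
print(P, Q, np.linalg.norm(P - Q))   # three-dimensional points and the distance L between them
```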
The method of obtaining the three-dimensional position will now be described in more detail. Let Px, Py, and Pz be the design position of the point P in the stationary coordinate system, and let Px', Py', and Pz' be its actual position. If the displacement between them is ΔPx, ΔPy, ΔPz, the following relational expressions are obtained. Equations 63 to 66 are solved as simultaneous equations in ΔPx, ΔPy, and ΔPz. Since these ΔPx, ΔPy, and ΔPz are approximate values, the following equation is obtained from the relationship of the aforementioned Equations 60 to 62.
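Equations 60 to 66 themselves do not appear in this extract, but the procedure described, solving simultaneous equations for an approximate displacement and then refining it, has the familiar shape of an iterative linearised least-squares update. In the sketch below the residual function and its Jacobian are placeholders standing in for whatever Equations 63 to 66 actually contain.

```python
import numpy as np

def refine_position(p_design, residual, jacobian, tol=1e-9, max_iter=20):
    """Start from the design position and repeatedly solve the linearised
    simultaneous equations J * dP = -r for the displacement (dPx, dPy, dPz)."""
    p = np.asarray(p_design, dtype=float)
    for _ in range(max_iter):
        r = residual(p)                            # stand-in for Equations 63-66
        J = jacobian(p)                            # their partial derivatives w.r.t. P
        dP, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + dP
        if np.linalg.norm(dP) < tol:
            break
    return p, p - np.asarray(p_design, dtype=float)    # actual position and displacement

# Toy example: four hypothetical measurement equations say the point sits slightly off design.
target = np.array([100.002, 50.000, 20.001])
residual = lambda p: np.array([p[0] - target[0],
                               p[1] - target[1],
                               p[2] - target[2],
                               (p[0] + p[2]) - (target[0] + target[2])])
jacobian = lambda p: np.array([[1., 0., 0.],
                               [0., 1., 0.],
                               [0., 0., 1.],
                               [1., 0., 1.]])
p_actual, dP = refine_position([100.0, 50.0, 20.0], residual, jacobian)
print(p_actual, dP)
```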
In FIG. 18, the vector going from the scene detecting means 180 toward the point B is known from the values of the posture changing device 331.
To obtain the coordinates of the point B, as shown in Fig. 19, first find the point C on the surface corresponding to (x0, y0), where (x0, y0, z0) are the coordinates of the point A. Next, obtain a vector a connecting points on the curved surface in the vicinity of C, for example the points obtained at (x0 + Δx0, y0) and (x0 − Δx0, y0), and similarly a vector b connecting the points obtained at (x0, y0 + Δy0) and (x0, y0 − Δy0). The tangent plane at the point C can then be defined by the outer product of these vectors a and b. Let the intersection of this tangent plane with the vector from A toward B be D (x1, y1, z1). Next find the plane tangent to the surface at the point E corresponding to (x1, y1, z1), and find the intersection F of this plane with the vector from A toward B. By repeating this operation, the coordinates of the point B on the curved surface can be obtained.
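A compact rendering of this iteration, assuming the surface is given as z = f(x, y) and using finite differences for the vectors a and b exactly as described, might look as follows; the example surface, viewing ray, and step sizes are arbitrary choices for illustration.

```python
import numpy as np

def ray_surface_point(A, d, f, dx=1e-4, dy=1e-4, tol=1e-9, max_iter=50):
    """Point where the ray A + s*d meets the surface z = f(x, y), found by
    repeatedly intersecting the ray with local tangent planes."""
    A, d = np.asarray(A, float), np.asarray(d, float)
    x, y = A[0], A[1]
    for _ in range(max_iter):
        C = np.array([x, y, f(x, y)])                                    # point C on the surface
        a = np.array([x + dx, y, f(x + dx, y)]) - np.array([x - dx, y, f(x - dx, y)])
        b = np.array([x, y + dy, f(x, y + dy)]) - np.array([x, y - dy, f(x, y - dy)])
        n = np.cross(a, b)                                               # tangent-plane normal at C
        s = n @ (C - A) / (n @ d)                                        # ray/plane intersection
        D = A + s * d                                                    # point D on the tangent plane
        if abs(D[2] - f(D[0], D[1])) < tol:                              # D now lies on the surface
            return D
        x, y = D[0], D[1]                                                # repeat from the new estimate
    return D

# Example: a gently curved free-form surface and a ray from the scene detecting means.
surface = lambda x, y: 0.05 * x * x + 0.02 * y * y
A = np.array([0.0, 0.0, 1.0])            # assumed position of the scene detecting means
d = np.array([0.3, 0.2, -1.0])           # assumed viewing direction toward point B
print(ray_surface_point(A, d, surface))
```

Applying the same function to the viewing ray of every pixel inside the hatched screen region gives the corresponding region on the free-form surface, as described next.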
Furthermore, a region along the hatched area in the figure can be defined on the screen obtained by the scene detecting means 180, and by finding the corresponding point on the surface for each pixel of the screen by the method described above, the corresponding region on the three-dimensional free-form surface can be defined.
As described above, according to the present invention, the actual work environment scene and the pseudo work scene based on the design data are compared and analyzed, and on the basis of the analysis result the control data for each piece of equipment in the work environment can be changed so that each position and posture in the work environment is corrected to match the design values. This has the effect of automatically correcting positional deviations with high accuracy. Furthermore, because the work environment is analyzed automatically when corrections and changes are made, online teaching of the processing equipment becomes unnecessary. In addition, since the camera lens of the scene detecting means, an ordinary television camera, can change its posture about the principal point of the lens, more accurate detection is possible without being affected by lens characteristics such as camera lens distortion, and the system is economical because a commercially available, inexpensive television camera can easily obtain high-definition images over a wide angle.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention concerns a highly autonomous production system with the following structure. A means (20) for generating simulated scenes of the work environment synthesizes, from the data stored in a work environment data storage means (29), the scene that should be photographed by a vision robot (15) when the vision robot (15) and a work robot (14) are at given positions. The vision robot (15) actually photographs the scene of a work environment (1). A work environment scene understanding means (22) compares the real image with the simulated scene and detects the deviations in position and orientation between the work robot (14) and the vision robot (15). Thus, even when no absolute reference point is imaged by the vision robot (15), the position and orientation of a work object can be detected and corrected, and a highly autonomous production system can be realized.
PCT/JP1994/002212 1993-12-28 1994-12-26 Procede et appareil de detection de position et d'orientation et systeme de production flexible utilisant ledit appareil WO1995017995A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP5/335362 1993-12-28
JP33536293 1993-12-28

Publications (1)

Publication Number Publication Date
WO1995017995A1 true WO1995017995A1 (fr) 1995-07-06

Family

ID=18287685

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP1994/002212 WO1995017995A1 (fr) 1993-12-28 1994-12-26 Procede et appareil de detection de position et d'orientation et systeme de production flexible utilisant ledit appareil

Country Status (1)

Country Link
WO (1) WO1995017995A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS63288695A (ja) * 1987-05-22 1988-11-25 株式会社東芝 位置ずれ検出装置
JPH03166077A (ja) * 1989-11-22 1991-07-18 Agency Of Ind Science & Technol 脚歩行制御装置およびその制御方法
JPH04348673A (ja) * 1991-05-27 1992-12-03 Matsushita Electric Ind Co Ltd 追尾雲台装置

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8335666B2 (en) 2006-09-01 2012-12-18 Intelligent Manufacturing Systems International Three-dimensional model data generating method, and three dimensional model data generating apparatus
CN110794773A (zh) * 2019-09-26 2020-02-14 青岛海信智慧家居系统股份有限公司 一种点击式场景创建的方法及装置
CN112947424A (zh) * 2021-02-01 2021-06-11 国网安徽省电力有限公司淮南供电公司 配网作业机器人自主作业路径规划方法和配网作业系统
CN112947424B (zh) * 2021-02-01 2023-04-25 国网安徽省电力有限公司淮南供电公司 配网作业机器人自主作业路径规划方法和配网作业系统
CN114385002A (zh) * 2021-12-07 2022-04-22 达闼机器人有限公司 智能设备控制方法、装置、服务器和存储介质

Similar Documents

Publication Publication Date Title
CN110076277B (zh) 基于增强现实技术的配钉方法
EP3011362B1 (fr) Systèmes et procédés pour suivre la localisation de d'objet cible mobile
JP2013043271A (ja) 情報処理装置、情報処理装置の制御方法、およびプログラム
CN102135776B (zh) 基于视觉定位的工业机器人控制方法
JP4508252B2 (ja) ロボット教示装置
CN104864807B (zh) 一种基于主动双目视觉的机械手手眼标定方法
CN103302666A (zh) 信息处理设备和信息处理方法
US20210187745A1 (en) Automated calibration system and method for a workpiece coordinate frame of a robot
EP2932191A2 (fr) Appareil et procédé conçus pour la mesure tridimensionnelle de surfaces
JP2006329903A (ja) 3次元計測方法および3次元計測システム
WO2022000713A1 (fr) Procédé d'auto-positionnement par réalité augmentée basé sur un ensemble d'aviation
CN102855620B (zh) 基于球形投影模型的纯旋转摄像机自标定方法
CN111476909B (zh) 一种基于虚拟现实弥补时延的遥操作控制方法及系统
CN114705122A (zh) 一种大视场立体视觉标定方法
CN112894209A (zh) 一种基于十字激光的管板智能焊接机器人自动平面校正方法
CN110686595A (zh) 非正交轴系激光全站仪的激光束空间位姿标定方法
JP2021146499A (ja) ビジョンシステムの3次元校正のためのシステム及び方法
CN106737859A (zh) 基于不变平面的传感器与机器人的外部参数标定方法
CN105374067A (zh) 一种基于pal相机的三维重建方法及其重建系统
JPH07237158A (ja) 位置・姿勢検出方法及びその装置並びにフレキシブル生産システム
JPH0790494B2 (ja) 視覚センサのキャリブレ−ション方法
WO1995017995A1 (fr) Procede et appareil de detection de position et d'orientation et systeme de production flexible utilisant ledit appareil
Tian et al. A camera calibration method for large field vision metrology
Zeng et al. A 3D passive optical localization system based on binocular infrared cameras
Zhang et al. Camera calibration algorithm for long distance binocular measurement

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase