NL2016960B1 - System and method for controlling a machine, in particular a robot


Info

Publication number
NL2016960B1
Authority
NL
Netherlands
Prior art keywords
virtual
processing unit
orientation
image
points
Prior art date
Application number
NL2016960A
Other languages
Dutch (nl)
Inventor
Wilhelm Anton Tollenaar Roland
Original Assignee
Tollenaar Holding B V
Priority date
Filing date
Publication date
Application filed by Tollenaar Holding B V filed Critical Tollenaar Holding B V
Priority to NL2016960A priority Critical patent/NL2016960B1/en
Application granted granted Critical
Publication of NL2016960B1 publication Critical patent/NL2016960B1/en


Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1671 - Programme controls characterised by programming, planning systems for manipulators characterised by simulation, either to verify existing program or to create and verify new program, CAD/CAM oriented, graphic oriented programming systems
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 - Program-control systems
    • G05B2219/30 - Nc systems
    • G05B2219/40 - Robotics, robotics mapping to robotics vision
    • G05B2219/40122 - Manipulate virtual object, for trajectory planning of real object, haptic display
    • G05B2219/40125 - Overlay real time stereo image of object on existing, stored memory image argos
    • G05B2219/40129 - Virtual graphic 3-D pointer, manipulator commands real manipulator

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a system and a method for controlling a machine, in particular a robot, wherein the system comprises an optical camera, a measuring unit and a user interface with a display, wherein the system further comprises a processing unit for detecting a first object in the image and for adding a virtual representation of the first object to the image at a virtual position and in a virtual orientation corresponding to the actual position and the actual orientation, respectively, of the first object in the captured image, wherein the user interface comprises control elements for manipulating the virtual representation with respect to the first object, wherein the processing unit is arranged for sending control signals to the machine to cause the machine to move the first object into the manipulated virtual position and/or virtual orientation.

Description

System and method for controlling a machine, in particular a robot
BACKGROUND
The invention relates to a system and method for controlling a machine, in particular a robot. A system and method are known for remote control of a mobile robot. Using a point-and-click device, the user is able to choose a target location within a heads-up display toward which to move a mobile robot. The technique can also be used to point to a spot on a wall, given the plane of a wall, or to point at a specific item in a crowded collection of parts, given a three-dimensional range map of items being looked at. As such, the interface can be used with a robotic arm to point to things to grasp, e.g. as part of a grocery shopping robot that picks items up off shelves, with the interface being used to point out the item to be selected.
Furthermore, an industrial safety system is known that integrates optical safety monitoring with machine control. The safety monitoring system includes an imaging sensor that allows one or more specified portions of the pixel array to be selected for 3D (time-of-flight) analysis in order to obtain distance information for pixels in that portion of the pixel array, while the remaining pixel array areas are processed using 2D image analysis. The imaging sensor applies 2D analysis (RGB or grayscale analysis, edge detection, contour analysis, image sharpening, contrast adjustment, difference and additive imaging, facial recognition) to the pixel array in order to detect, identify, classify and/or correlate objects within the viewing area. Since 2D image analysis is processed more quickly than 3D analysis, the processing load is reduced and the sensor response time is improved by limiting 3D analysis to only those areas of the scene for which distance information is required. In this manner, the safety system can detect that an object is within a potentially hazardous area near a controlled industrial machine and take appropriate action, e.g. power down the machine.
Although the known systems provide a way to point at objects and to detect, identify, classify and/or correlate said objects, the known systems are limited to analyzing the environment as it is.
It is an object of the present invention to provide a system and method that uses the information obtained about objects in the environment more effectively for the purpose of controlling a machine, in particular a robot.
SUMMARY OF THE INVENTION
According to a first aspect, the invention provides a system for controlling a machine, in particular a robot, wherein the system comprises an optical camera for capturing an image of an environment, a measuring unit that is arranged for acquiring depth information from points in said environment and a user interface with a display for displaying the captured image, wherein the system further comprises a processing unit that is arranged for detecting a first object in the image and for adding a virtual representation of the first object to the image at a virtual position and in a virtual orientation corresponding to the actual position and the actual orientation, respectively, of the first object in the captured image, wherein the user interface comprises one or more control elements for manipulating the virtual position and/or the virtual orientation of the virtual representation in the captured image with respect to the actual position and/or orientation of the first object, wherein the system further comprises a storage unit for storing the depth information from the measuring unit, wherein the processing unit is electronically connected to the storage unit for transforming the stored depth information of points of the first object from the actual position and the actual orientation into the manipulated virtual position and/or virtual orientation, wherein the processing unit is arranged for sending control signals to the machine to cause the machine to move the first object into the manipulated virtual position and/or virtual orientation.
The system provides a virtual representation in the captured image based on actual measurements of the first object, which virtually relocated or reoriented virtual representation can be manipulated for various uses in machine control. The virtual representation in the captured image can provide an augmented reality or mixed reality interface with which the user or operator can interact. The processing unit can subsequently cause the machine to move the first object into the position and/or orientation of the virtual representation. Hence, the virtual representation can be used to predict and/or verify the actual movement of the first object, e.g. to check for collision or to virtually validate the assembly.
In a preferred embodiment the one or more control elements are arranged for manipulating the virtual representation with respect to a second object in the environment, wherein the processing unit is arranged for relating the transformed depth information of points of the first object to stored depth information of points of the second object. Hence, the relation between the first object and the second object can be virtually assessed and displayed.
In an embodiment thereof the processing unit is arranged for virtually mating and/or virtually assembling the virtual representation of the first object with the stored depth information of points of the second object and for adding feedback thereon to the image. The virtual assembly can reveal issues which can be dealt with appropriately prior to the actual assembly. A use may be the medical field, in which one can visualize movements of organs, prosthetic elements or bones prior to actually moving them, to prevent or reduce the risk of injury.
In a preferred embodiment thereof the feedback comprises one or more parameters of the group comprising: positive tolerances, negative tolerances, alignment, collision detection and overall geometric matching.
In an embodiment thereof the storage unit is arranged for storing geometric information of the second object and one or more further objects, wherein the processing unit is arranged for comparing the stored depth information of points of the first object with the geometric information of each of the second object and the one or more further objects, wherein the processing unit is arranged for adding feedback on one or more of said parameters of the first object with respect to said second object and the one or more further objects. The feedback can be used to select the best match candidate for the assembly process. This can ensure that the first object is assembled and/or fitted to an object out of a range of objects that is best suitable. This can be particularly useful when assembling parts of a tolerance critical assembly.
In an embodiment thereof the processing unit is arranged for calculating the best match between the first object and the second object and the one or more further objects and for adding information on said best match to the image. The user can subsequently check and/or confirm the best match and send an instruction to the processing unit to carry out the assembly based on the best match proposal.
In a further embodiment the virtual representation is formed by or comprises the measured points of the first object. Hence, the actually measured points can be shown in the captured image.
In a further embodiment the system comprises one or more position sensors or controllable drives for obtaining position data about the orientation of the optical camera and/or the measuring unit with respect to the environment, wherein the storage unit is arranged for receiving and storing the position data for each of the measured points. The depth information, combined with the position data, can fully define the position of the points with respect to their environment.
In an embodiment thereof the optical camera is movable between different orientations, wherein the captured image is refreshed at said different orientations, wherein the processing unit is arranged for transforming the position data and/or for correcting for a change in perspective between the different orientations to maintain the virtual representation in its virtual position and/or virtual orientation relative to the environment. Thus, the points can be updated to be shown correctly in each orientation of the optical camera. The points will appear to be stationary with respect to the environment or the first object, regardless of the orientation of the optical camera. The combination of the continuously refreshing captured image and the positioning of the virtual points relative to the actual environment in said captured image provides an augmented reality or mixed reality interface.
In a further embodiment the storage unit is arranged for storing a computer model of the first object, wherein the processing unit is arranged for forming the virtual representation of the first object by relating the stored computer model of the first object to one or more of the measured points of the first object. The stored computer model may provide a more detailed virtual representation of the first object, in particular for points, parameters or characteristics of the first object which are hidden from view in the captured image.
In an embodiment thereof the processing unit is arranged for comparing the stored depth information of the points of the first object with the computer model and for adding feedback thereon to the image. Hence, the model can be used to check the quality of the first object. Any discrepancies between the first object and its computer model can be highlighted and appropriate action may be taken.
In a further embodiment the processing unit is arranged for altering the virtual representation of the first object based on user instructions, for comparing said altered virtual representation with the original virtual representation and for visualizing the differences in the captured image, wherein the processing unit is arranged for sending machining instructions to the machine to perform work on the first object so that it ultimately matches the altered virtual representation. Said system can have a wide variety of applications in fields of machining, like metal working and woodworking. A further use for such a system can be envisioned for the medical field, in which bones or prosthetic elements are optimized in shape or form to match other parts of a patient's body.
In a further embodiment the processing unit is arranged for detecting the contour of the first object in the image, wherein the processing unit is arranged for limiting the area of the environment that is measured by the measuring unit to the detected contour of the first object. By limiting the measurement area to the contour of the first object, fewer calculation resources are required from the processing unit to determine the depth information. In addition, the limitation reduces the risk of light scattering and/or reflections which could potentially result in errors and/or inaccuracies in the measurements.
In a further embodiment the processing unit is arranged for detecting the first object in the image by a process of the group comprising: edge detection, depth analysis, segmentation, grayscale or RGB analysis, correlation and/or artificial intelligence. The aforementioned processes or a combination thereof can provide a reliable way of determining the area that is to be measured for depth information. In particular, some of the processes may be useful in low-light conditions.
In a further embodiment the points of the first object are measured according to a pattern, preferably a point cloud. The pattern can provide sufficiently detailed information on the first object to enable the manipulation.
In an embodiment of the system according to the invention, further comprising the machine, the machine comprises a control unit that is arranged for receiving the control signals from the processing unit and for controlling the machine based on said control signals. Hence, the machine can be controlled directly by the control unit.
In an embodiment thereof the system further comprises an intelligent unit that is arranged for machine learning and/or artificial intelligence, wherein the processing unit is arranged for communication with the intelligent unit for manipulating the virtual position and/or the virtual orientation of the virtual representation and/or sending the control signals to the control unit based on input from the intelligent unit. The intelligent unit allows for machine-based and/or computer-aided decisions, rather than user-based decisions. Hence, the machine can be easily programmed and/or controlled automatically or semi-automatically. The intelligent unit can be optically trained by the user by indicating reference points, which can subsequently be memorized and trained, e.g. in a neural network.
In another preferred embodiment the measuring unit is arranged to be pointed at a point of said environment for measuring a point distance along a measuring line to said point, wherein the optical camera has an optical axis and a field of view, wherein the point lies in a view plane that is normal to the optical axis and that is bounded by the field of view, wherein the measuring line is positioned with respect to the optical axis such that the point is offset with respect to the optical axis at the view plane, wherein the processing unit is arranged for adding a pointing aid to the captured image at the location of the point in the captured image based on the offset of the point in the view plane. Hence, the optical camera can be used to accurately aim the measuring unit at a specific point of interest to accurately measure said point of interest. The pointing aid can provide useful feedback to the user on the current aim of the measuring unit. The point of interest in the system according to the present invention can be measured directly and accurately.
In an embodiment thereof the point is offset with respect to the optical axis at the view plane over a first offset distance in a first offset direction in a first ratio to a first dimension of the view plane in said first offset direction, wherein the captured image has an image plane, wherein the point is offset in the captured image over a second offset distance in a second offset direction in a second ratio to a second dimension of the image plane in said second offset direction, wherein the processing unit is arranged for adding the pointing aid to the captured image at the location of the point in the captured image based on the second ratio being equal or substantially equal to the first ratio. Using the ratios, one does not require complex calculations to arrive at the second offset distance. Hence, the pointing aid can be offset quickly and accurately to the location in the captured image that corresponds to the location of the point of interest in the view plane.
In a further embodiment thereof the optical axis and the measuring line are parallel or substantially parallel. When the optical axis and the measuring line are parallel, their intermediate spacing is constant for each distance measured by the measuring unit. Hence, the first offset distance in the view planes at each of these distances can be kept constant.
According to a second aspect, the invention provides a set of two or more of the aforementioned systems, wherein each system of the set is arranged in a different orientation with respect to the environment and/or the objects in said environment, wherein the optical camera for each system of the set has its own coordinate system, wherein the processing unit is arranged for relating, combining and/or transforming the points measured by one of the systems of the set with, to and/or into the coordinate system of one or more of the other systems of the set. The set enables the environment and/or the objects in the environment to be measured from different orientations and/or angles, e.g. all around the first object, to obtain better and/or more complete measuring data. By relating, combining and/or transforming the measuring data from the different systems, a better and/or more complete virtual representation can be obtained.
According to a third aspect, the invention provides a method for controlling a machine, in particular a robot, using the aforementioned system, wherein the method comprises the steps of capturing an image of the environment, acquiring depth information from points in said environment, displaying the captured image, detecting the first object in the captured image, adding the virtual representation of the first object to the captured image at the virtual position and in the virtual orientation corresponding to the actual position and the actual orientation, respectively, manipulating the virtual position and/or the virtual orientation of the virtual representation in the captured image with respect to the actual position and/or orientation of the first object, transforming the stored depth information of points of the first object from the actual position and the actual orientation into the manipulated virtual position and/or virtual orientation, and sending control signals to the machine to cause the machine to move the first object into the manipulated virtual position and/or virtual orientation.
Again, the system provides a virtual representation in the captured image based on actual measurements of the first object, which virtually relocated or reoriented virtual representation can be manipulated for various uses in machine control. The processing unit can subsequently cause the machine to move the first object into the position and/or orientation of the virtual representation. Hence, the virtual representation can be used to predict and/or verify the actual movement of the first object, e.g. to check for collision or to virtually validate the assembly.
The various aspects and features described and shown in the specification can be applied individually, wherever possible. These individual aspects, in particular the aspects and features described in the attached dependent claims, can be made subject of divisional patent applications.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will be elucidated on the basis of an exemplary embodiment shown in the attached schematic drawings, in which:
figure 1 shows a system with an optical camera, a measuring unit and a user interface for controlling a robot, according to the invention;
figures 2-5 show the user interface in more detail during steps of a method for controlling the robot;
figure 6 shows the user interface in a quality control application of the system according to figure 1;
figure 7 shows the user interface in a machining control application of the system according to figure 1;
figure 8A shows a side view of the optical camera and the measuring unit during the capturing and measuring, respectively, of an environment; and
figure 8B shows the image captured by the pointing and measuring system in figure 8A.
DETAILED DESCRIPTION OF THE INVENTION
Figure 1 shows a system 1 for controlling a machine according to an exemplary embodiment of the invention. In this exemplary embodiment, the machine is a robot 8. Alternatively, the system 1 can be used to control any other type of machine, including but not limited to machines for drilling, milling, turning, threading, grinding, filing, welding, brazing, soldering, riveting, with or without computer numerical control (CNC). The system 1 is placed in an environment 9 for capturing and measuring one or more points of interest P1, P2, P3 of said environment 9. The system 1 is used for selecting and/or measuring one or more objects 91, 92 in the environment 9 and/or for controlling the robot 8 based on the one or more measured points of interest P1, P2, P3, ..., Pn. The robot 8 can be used for many applications, including but not limited to pick-and-place, assembly and/or welding.
As shown in figure 1, the system 1 comprises an optical camera 2 for capturing an image I of the environment 9, a measuring unit 3 for acquiring depth information about one or more of the points of interest P1, P2, P3, ..., Pn and a user interface 4 with a display 40 for displaying the captured image I. The system 1 further comprises a processing unit 5 for receiving, processing and/or combining the captured image I and the depth information. Preferably, the processing unit 5 is arranged for detecting objects 91, 92 in the captured image I, e.g. by a process of the group comprising: edge detection, texture detection, depth analysis, grayscale or RGB analysis, correlation and/or artificial intelligence. The system 1 is provided with a storage unit 6 for storing depth information on each of the measured points P1, P2, P3, ..., Pn. The processing unit 5 is arranged for controlling the robot 8 directly or via an intermediate control unit 7. Optionally, the system 1 comprises an intelligent unit 100 that is arranged for machine learning, machine decision and/or artificial intelligence to control the robot 8.
As shown in more detail in figures 8A and 8B, the optical camera 2 comprises a lens 20 that defines an optical axis A1 and a field of view FOV of the optical camera 2. The field of view FOV is symmetrical or substantially symmetrical to and diverges at a constant or substantially constant angle from the optical axis A1 in a direction away from the lens 20 for a specific zoom. The optical axis A1 of the optical camera 2 intersects with the first object 91 at a center point that will be the image center C in the image I captured by the optical camera 2.
The measuring unit 3 is arranged to be pointed at and to measure a point P1, P2, P3, ..., Pn along a measuring line A2. The measuring line A2 of the measuring unit 3 intersects with a front face of the first object 91 at a first point P1. The first point P1 lies in a view plane F at a point distance D along the measuring line A2 from the measuring unit 3. Said view plane F extends normal to the optical axis A1. The view plane F is bounded by the field of view FOV and has a height H1 that is equivalent to the height of the field of view FOV at said point distance D. Said height H1 is also known as the 'vertical field of view' or vFOV. The height H1 of the view plane F is expressed in appropriate real-world units, e.g. in centimeters or millimeters. The measuring unit 3 preferably comprises a time-of-flight sensor, e.g. a laser sensor.
The measuring unit 3 is positioned with respect to the optical camera 2 such that the measuring line A2 and the optical axis A1 are not collinear. In other words, the measuring line A2 and the optical axis A1 are offset with respect to each other. Hence, the measuring line A2 of the measuring unit 3 is pointed at a different point P1, P2, P3 than the optical axis A1 of the optical camera 2. The optical axis A1 and the measuring line A2 are parallel or substantially parallel and spaced apart over a first offset distance B1 in a first offset direction E1. Consequently, at the view plane F, the first point P1 is offset with respect to the optical axis A1 in the first offset direction E1.
The optical camera 2 has an image sensor (not shown) that defines a rectangular or square image plane G for capturing the image I. The image plane G has a plurality of pixels, an image width and an image height H2. The dimensions of the width and the height H2 of the image plane G are expressed in virtual units, e.g. the number of pixels in the width and height directions.
Because of the parallelism between the optical axis A1 and the measuring line A2, the first offset distance B1 is the same for each view plane F at any distance D from the measuring unit 3. Consequently, the first offset distance B1 of the first point P1 with respect to the optical axis A1 at the first view plane F1 is known. Furthermore, the height H1 of the first view plane F1 can be calculated based on the constant angle V of the field of view FOV and the first point distance D1. In particular, the height H1 of the first view plane F1 can be correlated linearly to the first point distance D1 as measured by the measuring unit 3.
As shown in figure 8A, the first point P1 is offset with respect to the optical axis A1 at the view plane F over the first offset distance B1 in the first offset direction E1 in a first ratio to the height H1 of the view plane F in said first offset direction E1. In the captured image I, as shown in figure 8B, the first point P1 is offset in a second offset direction E2 with respect to the optical axis A1 and/or the image center C over a second offset distance B2. The second offset distance B2 is in a second ratio to the height H2 of the image plane G. The processing unit 5 is arranged for calculating the second offset distance B2 based on the second ratio being equal or substantially equal to the first ratio. The first ratio can be obtained by dividing the first offset distance B1 (e.g. in centimeters) by the height H1 of the first view plane F1 (e.g. in centimeters). The second offset distance B2 (in pixels) can subsequently be obtained by multiplying the first ratio by the height H2 of the image plane G (in pixels). Many variations on the calculation of the second offset distance B2 will be apparent to one skilled in the art that would yet be encompassed by the scope of the present invention.
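By way of illustration only, the ratio-based calculation described above can be sketched as follows in Python; the function names and the numeric values (sensor offset, point distance, field of view angle, image height) are assumptions chosen for the example and are not part of the exemplary embodiment.

```python
import math

def view_plane_height(point_distance_m, vfov_deg):
    """Height H1 of the view plane F at the measured point distance,
    derived from the constant divergence angle V of the field of view FOV."""
    return 2.0 * point_distance_m * math.tan(math.radians(vfov_deg) / 2.0)

def pointing_aid_offset_px(offset_b1_m, point_distance_m, vfov_deg, image_height_px):
    """Second offset distance B2 in pixels: the first ratio B1/H1 is applied to
    the image height H2, the second ratio being (substantially) equal to the first."""
    h1 = view_plane_height(point_distance_m, vfov_deg)
    first_ratio = offset_b1_m / h1
    return first_ratio * image_height_px

# Assumed example values: 5 cm offset B1, 1.2 m point distance D1,
# 40 degree vertical field of view and a 1080 px image height H2.
b2 = pointing_aid_offset_px(0.05, 1.2, 40.0, 1080)
print(f"pointing aid offset B2 = {b2:.1f} px from the image center C")
```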
As shown in figure 8B, the processing unit 5 is arranged for adding a pointing aid 31 to the captured image I at the location of the first point P1 in the captured image I to show the user where the measuring line A2 is pointed. In this exemplary embodiment, the pointing aid 31 is a reticle, in particular a crosshair.
As shown schematically in figure 1, the user interface 4 is provided with control means or one or more control elements 33 for inputting user instructions into the processing unit 5. In this exemplary embodiment, the control elements 33 are schematically shown as a keyboard and mouse control. Alternatively or additionally, user instructions can be received via a stylus or touch screen technology at the display 40. The control elements 33 can be used to log and/or store the depth information for the point that is currently at the pointing aid 31 or to indicate an area of the image I to which the pointing aid 31 is to be moved.
As shown in figure 8A, the optical camera 2 and the measuring unit 3 are supported by a movable support 21, preferably a multi-axis gimbal, controlled by (servo)motors, to enable controlled positioning of the movable support 21 relative to the environment 9. The controlling may be performed from a remote station (not shown) by a user or operator. The optical camera 2 is rotatable with respect to its environment 9 about a first tilting axis T1 and a second tilting axis T2, perpendicular to the first tilting axis T1, as schematically shown with arrows R and S, respectively.
The system 1 is provided with one or more position sensors 22 in or at the movable support 21 for detecting the angular displacement of the optical camera 2 and/or the measuring unit 3 with respect to the environment 9 about the first tilting axis T1 and the second tilting axis T2. The one or more position sensors 22 are arranged for sending position data about the angular displacements of the optical camera 2 to the processing unit 5. For the purpose of this invention, the movable support 21 may be provided with controllable drives, e.g. in the form of stepper motors, in which case the position data on the angular positions of the controllable drives may be obtained by monitoring the control of said stepper motors via the processing unit 5 and/or the control unit 7. The position data, together with the depth information, is used to determine the coordinates of the measured points P1, P2, P3, ..., Pn in the coordinate system of the optical camera 2 and/or the measuring unit 3. In this example, the optical camera 2 moves in a spherical coordinate system with respect to its environment. The environment 9 itself is defined by a global map (not shown) with a three-dimensional Cartesian coordinate system with three axes X, Y and Z. Based on the known position of the optical camera 2 in the environment 9, the spherical coordinates can be transformed into X, Y and Z coordinates in the global map.
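A minimal sketch of this coordinate transformation, assuming the pan and tilt angles come from the position sensors 22, the point distance from the measuring unit 3, a known camera position in the global map, and an axis convention (X forward, Y left, Z up) chosen only for the example:

```python
import math

def spherical_to_cartesian(distance_m, pan_deg, tilt_deg, camera_position=(0.0, 0.0, 0.0)):
    """Convert a measured point (distance along the measuring line, pan angle,
    tilt angle) into X, Y and Z coordinates of the global map, given the known
    camera position in that map."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    x = distance_m * math.cos(tilt) * math.cos(pan)   # forward
    y = distance_m * math.cos(tilt) * math.sin(pan)   # left
    z = distance_m * math.sin(tilt)                   # up
    cx, cy, cz = camera_position
    return (cx + x, cy + y, cz + z)

# Example: point P1 measured at 1.2 m with the camera panned 10 and tilted -5 degrees,
# the camera itself standing 1.5 m above the origin of the global map.
print(spherical_to_cartesian(1.2, 10.0, -5.0, camera_position=(0.0, 0.0, 1.5)))
```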
The storage unit 6 comprises a suitable storage medium (not shown) that is part of or electronically connected to the processing unit 5 for storing the captured images, measuring data and/or position data received by the processing unit 5 from the optical camera 2, the measuring unit 3 and the one or more position sensors 22, respectively.
The method for controlling the robot 8 with the use of the aforementioned system 1 will be explained hereafter with reference to figures 1-5. An exemplary environment 9 is provided consisting of a first cylindrical object 91 and a second rectangular or square object 92 that are placed on a table within the field of view FOV of the optical camera 2. The second object 92 is provided with a cylindrical hole 93 that is arranged for receiving the first object 91. The environment 9 can be any random environment, industrial or non-industrial, inside or outside.
Figure 2 shows the situation in which the system of figure 1 has detected an area in the image I that most likely corresponds to the first object 91 in the image I, e.g. on the basis of segmentation and/or depth analysis. This area is subsequently scanned by the measuring unit 3, e.g. according to a pattern, to obtain depth information on a plurality of points P1, P2, P3, ..., Pn of the first object 91. The depth information of points P1, P2, P3, ..., Pn is stored in the storage unit 6 and the processing unit 5 virtually adds said points P1, P2, P3, ..., Pn to the image I. Hence, the points P1, P2, P3, ..., Pn form a virtual representation V of the first object 91 in the captured image I, thereby providing an augmented reality or mixed reality interface with which the user or operator can interact.
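Purely as an illustration of the area detection and the pattern-based scan preparation, the sketch below uses OpenCV edge and contour functions as a stand-in for the detection process; the synthetic image, thresholds and grid spacing are assumptions for illustration only.

```python
import numpy as np
import cv2  # assumed available; any edge/contour library would do

# Synthetic stand-in for the captured image I: a bright cylinder-like blob.
image = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(image, (120, 150), 40, 255, thickness=-1)

# Detect the contour of the first object and limit the measurement area to it.
edges = cv2.Canny(image, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))

# Scan points P1..Pn according to a coarse grid pattern inside that area only.
scan_pattern = [(px, py) for py in range(y, y + h, 10) for px in range(x, x + w, 10)]
print(f"measurement area {w}x{h} px, {len(scan_pattern)} points to measure")
```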
Figure 3 shows the situation after the scanning of the points P1, P2, P3, ..., Pn has been completed. The visible surfaces of the first object 91 within the automatically detected area have now been scanned. The user can now use the control elements 33 to manipulate the virtual position and/or virtual orientation of the virtual representation V with respect to the actual position and the actual orientation of the first object 91 in the image I, e.g. by shifting, translating, rotating or a combination thereof. As shown in figure 3, the virtual representation V of the first object 91 has been translated upwards and to the left into a position in which the virtual representation V is aligned and/or concentric with the cylindrical hole 93 in the second object 92. Hence, the virtual representation V can be used to virtually assemble the first object 91 with the second object 92, prior to the actual assembly of said objects 91, 92. The processing unit 5 is arranged for transforming the stored depth information of the points P1, P2, P3, ..., Pn of the first object 91 from the actual position and the actual orientation into the manipulated virtual position and/or virtual orientation to show the virtual points P1, P2, P3, ..., Pn in their updated virtual positions.
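The transformation of the stored points into the manipulated virtual position and/or orientation is, in essence, a rigid-body transform of the measured point cloud. The following numpy sketch shows one possible form of it; the example points, rotation and translation are arbitrary values, not measurements taken from the figures.

```python
import numpy as np

def transform_points(points_xyz, rotation_z_deg=0.0, translation=(0.0, 0.0, 0.0)):
    """Apply the user's manipulation (here: a rotation about the Z axis plus a
    translation) to the stored points P1..Pn of the first object, yielding the
    virtual points in their manipulated virtual position/orientation."""
    theta = np.radians(rotation_z_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    return points_xyz @ rot.T + np.asarray(translation)

# Example: measured points of the cylindrical object (in metres), translated up and
# to the left so the virtual representation V becomes concentric with the hole 93.
measured = np.array([[0.40, 0.10, 0.00],
                     [0.42, 0.10, 0.05],
                     [0.40, 0.12, 0.10]])
virtual = transform_points(measured, rotation_z_deg=0.0, translation=(-0.15, 0.00, 0.20))
print(virtual)
```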
If the virtual assembly fails, appropriate action can be taken to correct and/or improve the actual assembly. If the virtual assembly is successful, the processing unit 5 can send control signals to the robot 8 via the control unit 7 of figure 1 based on the manipulated virtual points P1, P2, P3, ..., Pn to cause the actual position and/or the actual orientation of the first object 91 to be moved into the virtual position and/or the virtual orientation relative to the second object 92. In this manner, the result of the assembly can be predicted and optimized virtually, prior to the actual assembly and based on the actual measurements of the first object 91. This can greatly improve the quality of the assembly process.
Figure 4 shows the situation in which the virtual representation V has been rotated over ninety (90) degrees anti-clockwise into a different virtual orientation with respect to the actual orientation of the first object 91. It will be apparent to one skilled in the art that the virtual representation V can be rotated into any virtual orientation and/or position with respect to the actual orientation and the actual position of the first object 91.
Figure 5 shows a different application of the virtual representation, in which only the top surface of the first object 91 is automatically detected. The user may select the relevant face by selecting a point within the face with the control elements 33, e.g. via a mouse click, touch or stylus. The processing unit 5 is arranged for subsequently detecting the boundaries of the face, e.g. by edge detection, texture detection, depth analysis, segmentation, RGB or grayscale analysis or artificial intelligence. The measuring unit 3 subsequently scans points P1, P2, P3, ..., Pn of the face, e.g. according to a pattern, to obtain the geometric parameters of the face. The parameters may include the shape of the edges, the circumference, the radii or the diameter. This geometric information is processed and compared by the processing unit 5 with geometric information of the second object 92 or further objects 94, 95, 96 stored in the storage unit 6.
The processing unit 5 is arranged for matching the geometric information of the first object 91 with the geometric information of one or more match candidates out of the group of the second object 92 and the further objects 94, 95, 96 to determine a best fit or best match between said objects, e.g. based on positive tolerances, negative tolerances, alignment, collision detection and overall geometric matching. The processing unit 5 is arranged for virtually representing the best match candidates next to the first object 91 in the image I. Preferably, the processing unit 5 is arranged for adding a measure or indicator to the image I next to each match to indicate the level of matching, e.g. as a percentage. Most preferably, the processing unit 5 is arranged for highlighting, e.g. with a frame or color, the best match candidate. Upon receipt of the user's confirmation that the match is to be used for assembly, the processing unit 5 is arranged for sending control signals to the robot 8 via the control unit 7 to cause the robot 8 to actually assemble the first object 91 with the best match candidate.
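Purely as an illustration of such a matching step, the sketch below reduces each candidate to a single geometric parameter (a hole diameter) and scores the fit against the measured diameter of the first object; the scoring rule, tolerance band and all numeric values are assumptions and not prescribed by the embodiment.

```python
def match_score(object_diameter_mm, hole_diameter_mm, min_clearance_mm=0.02, max_clearance_mm=0.20):
    """Score a candidate from 0 to 100: full score for the tightest acceptable
    clearance fit, zero for a collision (negative clearance) or a fit that is too loose."""
    clearance = hole_diameter_mm - object_diameter_mm
    if clearance < min_clearance_mm:          # interference / collision
        return 0.0
    if clearance > max_clearance_mm:          # too loose to be a useful fit
        return 0.0
    # Best score for the tightest acceptable fit, tapering off towards the loose end.
    return 100.0 * (1.0 - (clearance - min_clearance_mm) / (max_clearance_mm - min_clearance_mm))

first_object_diameter = 24.95                 # measured from the scanned face (mm)
candidates = {"object 92": 25.00, "object 94": 25.40, "object 95": 24.90, "object 96": 25.10}
ranked = sorted(((match_score(first_object_diameter, d), name) for name, d in candidates.items()), reverse=True)
for score, name in ranked:
    print(f"{name}: {score:.0f}%")
```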
In this manner, the first object 91 can be virtually matched to a collection of candidates prior to the actual assembly, such that the best possible fit can be obtained. This is particularly useful when assembling parts of tolerance critical assemblies.
Figure 6 shows a different application of the system 1 in quality control of the first object 91. In this exemplary embodiment, a computer or CAD model of the first object 91 is stored in the storage unit 6, which model is mated to a selection of the points P1, P2, P3 measured by the measuring unit 3. The processing unit 5 is subsequently arranged for overlaying and/or superimposing the virtual model M, e.g. a wireframe model, over the first object 91 in the same virtual orientation and virtual position as the actual orientation and the actual position of the first object 91 in the image I. Hence, one can easily compare the virtual model with the first object 91. To enhance the comparison, the processing unit 5 may be arranged for adding feedback on the comparison to the image I, e.g. by marking areas with abnormalities and/or high tolerances. In this manner, the quality of the manufactured first object 91 can be checked easily and appropriate action can be taken to correct and/or improve the quality. The virtual model M, e.g. the wireframe model, can also be used to replace the virtual points P1, P2, P3, ..., Pn in the virtual representations V of the previously discussed embodiments of the invention.
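A minimal sketch of this comparison between the measured points and the stored model, using the distance to the nearest model point as a crude stand-in for a point-to-CAD-surface distance; the model points, measured points and tolerance are placeholder values for illustration only.

```python
import numpy as np

def deviations(measured_points, model_points):
    """Distance from each measured point P1..Pn to its nearest model point."""
    diffs = measured_points[:, None, :] - model_points[None, :, :]
    return np.linalg.norm(diffs, axis=2).min(axis=1)

model = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.05], [0.0, 0.0, 0.10]])          # simplified model M
measured = np.array([[0.001, 0.0, 0.0], [0.0, 0.004, 0.05], [0.01, 0.0, 0.10]])  # scanned points (m)
tolerance_m = 0.005
for point, dev in zip(measured, deviations(measured, model)):
    flag = "mark as abnormality in image I" if dev > tolerance_m else "ok"
    print(point, f"deviation {dev * 1000:.1f} mm -> {flag}")
```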
Figure 7 shows a machining application of the system 1. The first object 91 is detected and measured to obtain its virtual representation V in the same manner as in the previously discussed embodiments. However, in this particular application, the virtual representation V can be adapted, modified and/or altered in place in the augmented reality or the mixed reality as provided by the user interface 4, e.g. by using the control elements 33, or in a remote interface into an altered virtual representation V'. In figure 7, one of the alterations involves changing the diameter of the top face, e.g. by dragging the first point P1 to an offset position P1'. The altered virtual representation V' is subsequently superimposed over the actual first object 91 and/or the original virtual representation V to visualize the differences between the altered virtual representation V' and the original virtual representation V that is based on the actual first object 91. After analysis of the differences, the processing unit 5 can send machining instructions to the control unit 7 to control a machine, e.g. the robot 8 or any other machine, to perform work on the first object 91 so that it ultimately matches the altered virtual representation V'. Work can include many machining actions, including but not limited to drilling, milling, turning, threading, grinding, filing, welding, brazing, soldering, riveting, with or without computer numerical control (CNC). A use for such a system can be envisioned for the medical field, in which bones or prosthetic elements are optimized in shape or form to match other parts of a patient's body.
In each of the previously discussed embodiments, the optical camera 2 and the measuring unit 3 are moved between different orientations to scan the first object 91 or any other object in the environment 9. For each measured point, the first ratio between its first offset distance B1 and the height H1 of the respective view plane F, normal to the optical axis A1, is calculated and the second offset distance B2 is determined based on the second ratio being equal to the first ratio. The processing unit 5 may be arranged to recalculate the position of the pointing aid 31 in the captured image I continuously, so that the pointing aid 31 is adjusted automatically and continuously for each point measured between the first orientation and the second orientation.
The captured image I may be refreshed continuously during the movement of the optical camera 2. In a sequence of refreshed images I, the first point P1 will appear to shift through the image plane G in a direction opposite to the direction of movement. The processing unit 5 is arranged for calculating and storing the X, Y and Z coordinates for each of the points P1, P2, P3, ..., Pn in the storage unit 6. The processing unit 5 is arranged for transforming the stored X, Y and Z coordinates of the points P1, P2, P3, ..., Pn relative to the new orientation of the optical camera 2, e.g. via an intermediate transformation back to the new spherical coordinates of the optical camera 2, to update the positions of the virtual points P1, P2, P3, ..., Pn to the correct positions in the refreshed captured image I. By further taking into account optical deformations and/or perspective corrections, the virtual points P1, P2, P3, ..., Pn appear to remain stationary relative to the environment 9, regardless of the orientation of the optical camera 2.
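The re-projection that keeps the virtual points stationary in the refreshed image can be sketched as follows, assuming an ideal pinhole camera at the origin of the global map, an axis convention (X forward, Y left, Z up) chosen for the example, and ignoring the optical deformations and perspective corrections mentioned above.

```python
import numpy as np

def world_to_pixel(point_xyz, pan_deg, tilt_deg, image_size_px=(1920, 1080), vfov_deg=40.0):
    """Project a stored X, Y, Z point into the refreshed image for the camera's
    new pan/tilt orientation (camera at the origin, X forward, Y left, Z up)."""
    pan, tilt = np.radians(pan_deg), np.radians(tilt_deg)
    # Rotate the world point into the camera frame: undo the pan, then the tilt.
    rz = np.array([[np.cos(-pan), -np.sin(-pan), 0], [np.sin(-pan), np.cos(-pan), 0], [0, 0, 1]])
    ry = np.array([[np.cos(tilt), 0, np.sin(tilt)], [0, 1, 0], [-np.sin(tilt), 0, np.cos(tilt)]])
    forward, left, up = ry @ rz @ np.asarray(point_xyz)
    if forward <= 0:
        return None                                   # behind the camera, not visible
    width, height = image_size_px
    focal_px = (height / 2.0) / np.tan(np.radians(vfov_deg) / 2.0)
    u = width / 2.0 - focal_px * (left / forward)     # pixel column
    v = height / 2.0 - focal_px * (up / forward)      # pixel row
    return u, v

# The same stored point P1 drawn for two camera orientations of the refreshed image I.
p1 = (1.2, 0.1, 0.05)
print(world_to_pixel(p1, pan_deg=0.0, tilt_deg=0.0))
print(world_to_pixel(p1, pan_deg=5.0, tilt_deg=2.0))
```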
Moreover, in each of the previously discussed embodiments, the optical camera 2 may be provided with focusing means (not shown) that is electronically connected to the measuring unit for focusing the optical camera 2 on the view plane based on the measured point distance. Alternatively or additionally, the optical camera 2 may be capable of zooming or cropping, with the processing unit 5 being arranged for taking into account the changes in perspective, the angle of view, the field of view FOV, the image plane G and/or the view plane F.
The system 1 may be designed to enable the user to assign labels and/or actions to the previously discussed points P1, P2, P3, ..., Pn, e.g. for handling purposes or for controlling pick-and-place, assembly and/or welding robots. Many applications, both industrial and domestic, indoors or outdoors, may be envisioned that would yet be encompassed by the scope of the present invention.
Optionally, the processing unit 5 is arranged for communication with the intelligent unit 100 of figure 1 for manipulating the virtual position and/or the virtual orientation of the virtual representation and/or sending the control signals to the control unit 7 based on input from the intelligent unit 100. Hence, the robot 8 can be controlled solely or mainly by machine-learning and/or artificial intelligence rather than user-decision.
It is to be understood that the above description is included to illustrate the operation of the preferred embodiments and is not meant to limit the scope of the invention. From the above discussion, many variations will be apparent to one skilled in the art that would yet be encompassed by the scope of the present invention.

Claims (22)

1. Systeem voor het aansturen van een machine, in het bijzonder een robot, waarbij het systeem een optische camera omvat voor het vastleggen van een beeld van een omgeving, een meeteenheid die is ingericht voor het verkrijgen van diepte informatie van punten van de omgeving en een gebruikersinterface met een beeldscherm voor het weergeven van het vastgelegde beeld, waarbij het systeem verder een verwerkingseenheid omvat die is ingericht voor het detecteren van het eerste object in het beeld en voor het toevoegen van een virtuele representatie van het eerste object aan het beeld in een virtuele positie en een virtuele oriëntatie in overeenstemming met respectievelijk de werkelijke positie en de werkelijke oriëntatie van het eerste object in het vastgelegde beeld, waarbij de gebruikersinterface een of meer aanstuurelementen omvat voor het manipuleren van de vrituele positie en/of de virtuele oriëntatie van de virtuele representatie ten opzichte van de werkelijke positie en/of de werkelijke oriëntatie van het eerste object, waarbij het systeem verder een opslageenheid omvat voor het opslaan van de diepteinformatie van de meeteenheid, waarbij de verwerkingseenheid elektronisch verbonden is met de opslageenheid voor het transformeren van de opgeslagen diepteinformatie van de punten van het eerste object van de werkelijek positie en de werkelijke oriëntatie naar de gemanipuleerde virtuele positie en/of virtuele oriëntatie, waarbij de verwerkingseenheid is ingericht voor het verzenden van stuursignalen naar de machine teneinde te veroorzaken dat de machine het eerste object beweegt tot in de gemanipuleerde virtuele positie en/of virtuele oriëntatie.A system for controlling a machine, in particular a robot, the system comprising an optical camera for capturing an image of an environment, a measuring unit adapted to obtain depth information from points of the environment and a user interface with a display for displaying the captured image, the system further comprising a processing unit adapted to detect the first object in the image and to add a virtual representation of the first object to the image in a virtual position and a virtual orientation according to the actual position and the actual orientation of the first object in the captured image, respectively, wherein the user interface comprises one or more control elements for manipulating the virtual position and / or the virtual orientation of the virtual representation in relation to the actual position and / or the working orientation of the first object, the system further comprising a storage unit for storing the depth information of the measuring unit, the processing unit being electronically connected to the storage unit for transforming the stored depth information of the points of the first object of the reality position and the actual orientation to the manipulated virtual position and / or virtual orientation, wherein the processing unit is adapted to send control signals to the machine to cause the machine to move the first object into the manipulated virtual position and / or virtual orientation . 2. 
Systeem volgens conclusie 1, waarbij de een of meer aanstuurelementen zijn ingericht voor het manipuleren van de virtuele representatie ten opzichte van een tweede object in de omgeving, waarbij de verwerkingseenheid is ingericht voor het relateren van de getransformeerde diepteinformatie van punten van het eerste object aan opgeslagen diepteinformatie van punten van het tweede object.A system according to claim 1, wherein the one or more control elements are adapted to manipulate the virtual representation relative to a second object in the environment, the processing unit being adapted to relate the transformed depth information of points of the first object stored depth information of points of the second object. 3. Systeem volgens conclusie 2, waarbij de verwerkingseenheid is ingericht voor het virtueel samenbrengen en/of virtueel assembleren van de virtuele representatie van het eerste object met de opgeslagen diepte informatie van punten van het tweede object en voor het toevoegen van terugkoppeling hierover aan het beeld.The system of claim 2, wherein the processing unit is adapted to virtually assemble and / or virtually assemble the virtual representation of the first object with the stored depth information of points of the second object and to add feedback thereto to the image . 4. Systeem volgens conclusie 3, waarbij de terugkoppeling een of meer parameters omvat uit de groep omvattend: positieve toleranties, negatieve toleranties, uitlijning, botsingsdetectie en algemene geometrische passing.The system of claim 3, wherein the feedback comprises one or more parameters from the group comprising: positive tolerances, negative tolerances, alignment, collision detection, and general geometric fit. 5. Systeem volgens conclusie 4, waarbij de opslageenheid is ingericht voor het opslaan van geometrische informatie van het tweede object en een of meer verdere objecten, waarbij de verwerkingseenheid is ingericht voor het vergelijken van de opgeslagen diepteinformatie van punten van het eerste object met de geometrische informatie van elk van het tweede object en de een of meer verdere objecten, waarbij de verwerkingseenheid is ingericht voor het toevoegen van terugkoppeling aan het beeld over een of meer van de parameters van het eerste object ten opzichte van het tweede object en de een of meer verdere objecten.The system of claim 4, wherein the storage unit is adapted to store geometric information of the second object and one or more further objects, the processing unit being adapted to compare the stored depth information of points of the first object with the geometric information from each of the second object and the one or more further objects, the processing unit being adapted to add feedback to the image over one or more of the parameters of the first object relative to the second object and the one or more further objects. 6. Systeem volgens conclusie 5, waarbij de verwerkingseenheid is ingericht voor het berekenen van de beste passing tussen het eerste object en het tweede object en de een of meer verdere objecten en voor het toevoegen van informatie over de beste passing aan het beeld.The system of claim 5, wherein the processing unit is adapted to calculate the best fit between the first object and the second object and the one or more further objects and to add information about the best fit to the image. 7. 
Systeem volgens een der voorgaande conclusies, waarbij de virtuele representatie gevormd wordt door of de gemeten punten omvat van het eerste object.A system according to any one of the preceding claims, wherein the virtual representation is formed by or comprises the measured points of the first object. 8. Systeem volgens een der voorgaande conclusies, waarbij het systeem een of meer positiesensors of regelbare aandrijvingen omvat voor het verkrijgen van positiegevens met betrekking tot de oriëntatie van de optische camera en/of de meeteenheid ten opzichte van de omgeving, waarbij de opslageenheid is ingericht voor het ontvangen en opslaan van de positiegegevens voor elk van de gemeten punten.8. System as claimed in any of the foregoing claims, wherein the system comprises one or more position sensors or controllable drives for obtaining positional data with regard to the orientation of the optical camera and / or the measuring unit relative to the environment, wherein the storage unit is arranged for receiving and storing the positional data for each of the measured points. 9. Systeem volgens conclusie 8, waarbij de optische camera beweegbaar is tussen verschillende oriëntaties en waarbij het vastgelegde beeld bij die verschillende oriëntaties wordt ververst, waarbij de verwerkingseenheid is ingericht voor het transformeren van de positiegegevens en/of voor het corrigeren voor een verandering in perspectief tussen de verschillende oriëntaties teneinde de virtuele representatie in zijn virtuele positie en/of virtuele oriëntatie ten opzichte van de omgeving te behouden.The system of claim 8, wherein the optical camera is movable between different orientations and wherein the captured image is refreshed at those different orientations, the processing unit being adapted to transform the position data and / or correct for a change in perspective between the different orientations in order to maintain the virtual representation in its virtual position and / or virtual orientation with respect to the environment. 10. Systeem volgens een der voorgaande conclusies, waarbij de opslageenheid is ingericht voor het opslaan van een computermodel van het eerste object, waarbij de verwerkingseenheid is ingericht voor het vormen van een virtuele representatie van het eerste object door het opgeslagen computermodel van het eerste object te relateren aan een of meer van de gemeten punten van het eerste object.10. System as claimed in any of the foregoing claims, wherein the storage unit is arranged for storing a computer model of the first object, wherein the processing unit is arranged for forming a virtual representation of the first object by scanning the stored computer model of the first object relate to one or more of the measured points of the first object. 11. Systeem volgens conclusie 10, waarbij de verwerkingseenheid is ingericht voor het vergelijken van de opgeslagen diepteinformatie van de punten van het eerste object met het computermodel en voor het toevoegen van terugkoppeling hierover aan het beeld.The system of claim 10, wherein the processing unit is adapted to compare the stored depth information of the points of the first object with the computer model and to add feedback thereto to the image. 12. 
Systeem volgens een der voorgaande conclusies, waarbij de verwerkingseenheid is ingericht voor het aanpassen van de virtuele representatie van het eerste object gebaseerd op gebruikersinstructies, voor het vergelijken van de aangepaste virtuele representatie met de originele virtuele representatie en voor het visualiseren van de verschillen in het vastgelegde beeld, waarbij de verwerkingseenheid is ingericht voor het versturen van bewerkingsinstructies aan de machine voor het uitvoeren van bewerkingen op het eerste object zodat deze uiteindelijk overeenkomt met de aangepaste virtuele representatie.A system according to any one of the preceding claims, wherein the processing unit is adapted to adjust the virtual representation of the first object based on user instructions, to compare the adjusted virtual representation with the original virtual representation and to visualize the differences in the captured image, wherein the processing unit is adapted to send processing instructions to the machine for performing operations on the first object so that it ultimately corresponds to the adjusted virtual representation. 13. Systeem volgens een der voorgaande conclusies, waarbij de verwerkingseenheid is ingericht voor het detecteren van de contour van het eerste object in het beeld, waarbij de verwerkingseenheid is ingericht voor het beperken van het gebied van de omgeving dat gemeten wordt door de meeteenheid tot de gedetecteerde contour van het eerste object.13. System as claimed in any of the foregoing claims, wherein the processing unit is adapted to detect the contour of the first object in the image, the processing unit being adapted to limit the area of the environment measured by the measuring unit to the detected contour of the first object. 14. Systeem volgens een der voorgaande conclusies, waarbij de verwerkingseenheid is ingericht voor het detecteren van het eerste object in het beeld met een proces uit de groep omvattend: randdetectie, structuurdetectie, diepteanalyse, segmentatie, grijsschaal-of RGB analyse, correlatie en/of kunstmatige intelligentie.A system according to any one of the preceding claims, wherein the processing unit is adapted to detect the first object in the image with a process from the group comprising: edge detection, structure detection, depth analysis, segmentation, gray-scale or RGB analysis, correlation and / or artificial intelligence. 15. Systeem volgens een der voorgaande conclusies, waarbij de punten van het eerste object gemeten worden volgens een patroon, bij voorkeur een puntenwolk.A system according to any one of the preceding claims, wherein the points of the first object are measured according to a pattern, preferably a point cloud. 16. Systeem volgens een der voorgaande conclusies, verder omvattend de machine, waarbij de machine een regeleenheid omvat die is ingericht voor het ontvangen van de stuursignalen van de verwerkingseenheid en voor het aansturen van de machine gebaseerd op die stuursignalen.A system according to any one of the preceding claims, further comprising the machine, the machine comprising a control unit adapted to receive the control signals from the processing unit and to control the machine based on those control signals. 17. 
17. A system according to claim 16, wherein the system further comprises an intelligent unit arranged for machine learning and/or artificial intelligence, wherein the processing unit is arranged for communicating with the intelligent unit for manipulating the virtual position and/or the virtual orientation of the virtual representation and/or for sending control signals to the control unit based on the input from the intelligent unit.
18. A system according to any one of the preceding claims, wherein the measuring unit is arranged to be aimed at a point of the environment for measuring the point distance along a measuring line to the point, wherein the optical camera has an optical axis and a field of view, wherein the point lies in a viewing plane that is normal to the optical axis and bounded by the field of view, wherein the measuring line is positioned relative to the optical axis such that the point is offset from the optical axis at the viewing plane, wherein the processing unit is arranged for adding an aiming aid to the captured image at the location of the point in the captured image based on the offset of the point in the viewing plane.
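Claim 18 adds an aiming aid to the captured image at the location of the measured point, based on the offset between the measuring line and the optical axis in the viewing plane. Assuming the parallel arrangement of claim 20 and a known field of view, one illustrative way to compute the marker position is sketched below (the ratio construction anticipates claim 19); parameter names and sign conventions are assumptions for the example.

```python
import math

def aiming_aid_pixel(distance_m, baseline_x_m, baseline_y_m,
                     hfov_rad, vfov_rad, image_w, image_h):
    """Pixel position of the aiming aid for a rangefinder whose measuring line
    is parallel to the optical axis but offset by (baseline_x, baseline_y).

    The offset of the point in the viewing plane equals the fixed baseline,
    while the viewing plane itself grows with distance, so the marker drifts
    towards the image centre as the measured distance increases.
    """
    view_w = 2.0 * distance_m * math.tan(hfov_rad / 2.0)  # viewing-plane width
    view_h = 2.0 * distance_m * math.tan(vfov_rad / 2.0)  # viewing-plane height
    ratio_x = baseline_x_m / view_w                       # first ratio
    ratio_y = baseline_y_m / view_h
    # image-plane offset taken in the same ratio to the image dimensions
    u = image_w / 2.0 + ratio_x * image_w
    v = image_h / 2.0 + ratio_y * image_h
    return u, v
```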
19. A system according to claim 18, wherein the point is offset from the optical axis at the viewing plane by a first offset distance in a first offset direction in a first ratio to a first dimension of the viewing plane in the first offset direction, wherein the captured image has an image plane, wherein the point is offset in the captured image by a second offset distance in a second offset direction in a second ratio to a second dimension of the image plane in the second offset direction, wherein the processing unit is arranged for adding the aiming aid to the captured image at the location of the point in the captured image based on the second ratio being equal or substantially equal to the first ratio.
20. A system according to claim 18 or 19, wherein the optical axis and the measuring line are parallel or substantially parallel.
21. A set of two or more systems according to any one of the preceding claims, wherein each system of the set is arranged in a different orientation with respect to the environment and/or the objects in the environment, wherein the optical camera of each system of the set has its own coordinate system, wherein the processing unit is arranged for relating, combining and/or transforming the points measured by one of the systems to, with or into the coordinate system of one or more of the other systems of the set.
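Claim 21 calls for relating points measured by one system of the set to the coordinate system of another. A small sketch follows, under the assumption that each camera's pose is known as a 4x4 homogeneous matrix in a shared environment frame; the names are illustrative only.

```python
import numpy as np

def merge_point_clouds(points_a, T_world_a, points_b, T_world_b):
    """Express the points of system A in the coordinate system of system B and
    combine them with B's own points.

    points_a, points_b   : (N, 3) and (M, 3) arrays in each camera's own frame.
    T_world_a, T_world_b : 4x4 poses of cameras A and B in the shared frame.
    """
    T_b_a = np.linalg.inv(T_world_b) @ T_world_a          # A frame -> B frame
    homog_a = np.hstack([points_a, np.ones((points_a.shape[0], 1))])
    points_a_in_b = (T_b_a @ homog_a.T).T[:, :3]
    return np.vstack([points_a_in_b, points_b])
```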
22. A method for controlling a machine, in particular a robot, using the system according to any one of the preceding claims, wherein the method comprises the steps of capturing an image of the environment, obtaining depth information of points of the environment, displaying the captured image, adding the virtual representation of the first object to the captured image in the virtual position and in the virtual orientation corresponding to the actual position and the actual orientation, respectively, manipulating the virtual position and/or the virtual orientation of the virtual representation in the captured image relative to the actual position and/or orientation of the first object, transforming the stored depth information of points of the first object from the actual position and the actual orientation to the manipulated virtual position and/or virtual orientation, and sending control signals to the machine to cause the machine to move the first object into the manipulated virtual position and/or virtual orientation.
-o-o-o-o-o-o-o-o-
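The transforming and control-signal steps of the method claim come down to moving the stored depth points of the first object by the rigid transform between its actual pose and the manipulated virtual pose, and handing that same transform to the machine. A minimal sketch, assuming both poses are available as 4x4 homogeneous matrices in the environment frame; it illustrates the geometry only and is not the patented implementation.

```python
import numpy as np

def commanded_motion(T_world_actual, T_world_virtual):
    """Rigid transform that carries the first object from its actual pose to
    the manipulated virtual pose; the control signals would encode this
    displacement for the machine."""
    return T_world_virtual @ np.linalg.inv(T_world_actual)

def preview_points(points_actual, T_world_actual, T_world_virtual):
    """Stored depth points of the object, expressed at the manipulated virtual
    position and orientation (the transforming step of the method).

    points_actual : (N, 3) stored points of the object in the environment frame.
    """
    T_rel = commanded_motion(T_world_actual, T_world_virtual)
    homog = np.hstack([points_actual, np.ones((points_actual.shape[0], 1))])
    return (T_rel @ homog.T).T[:, :3]
```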
NL2016960A 2016-06-14 2016-06-14 System and method for controlling a machine, in particular a robot NL2016960B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
NL2016960A NL2016960B1 (en) 2016-06-14 2016-06-14 System and method for controlling a machine, in particular a robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
NL2016960A NL2016960B1 (en) 2016-06-14 2016-06-14 System and method for controlling a machine, in particular a robot

Publications (1)

Publication Number Publication Date
NL2016960B1 true NL2016960B1 (en) 2017-12-21

Family

ID=57346015

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2016960A NL2016960B1 (en) 2016-06-14 2016-06-14 System and method for controlling a machine, in particular a robot

Country Status (1)

Country Link
NL (1) NL2016960B1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030120391A1 (en) * 2001-12-25 2003-06-26 National Inst. Of Advanced Ind. Science And Tech. Robot operation teaching method and apparatus
US20120072023A1 (en) * 2010-09-22 2012-03-22 Toyota Motor Engineering & Manufacturing North America, Inc. Human-Robot Interface Apparatuses and Methods of Controlling Robots

Similar Documents

Publication Publication Date Title
US10585167B2 (en) Relative object localization process for local positioning system
US8095237B2 (en) Method and apparatus for single image 3D vision guided robotics
CN109483516A (en) A kind of mechanical arm hand and eye calibrating method based on space length and epipolar-line constraint
EP1584426A1 (en) Tool center point calibration system
US20150316648A1 (en) Detection apparatus, Detection method and Manipulator
Chen et al. Acquisition of weld seam dimensional position information for arc welding robot based on vision computing
US20190193268A1 (en) Robotic arm processing system and method, and non-transitory computer-readable storage medium therefor
CN111028340B (en) Three-dimensional reconstruction method, device, equipment and system in precise assembly
Ryberg et al. Stereo vision for path correction in off-line programmed robot welding
US11446822B2 (en) Simulation device that simulates operation of robot
US20220230348A1 (en) Method and apparatus for determining a three-dimensional position and pose of a fiducial marker
Chen et al. Model analysis and experimental technique on computing accuracy of seam spatial position information based on stereo vision for welding robot
Ng et al. Intuitive robot tool path teaching using laser and camera in augmented reality environment
WO2019013204A1 (en) Information processing device for presenting information, information processing method and program
Pachidis et al. Vision-based path generation method for a robot-based arc welding system
NL2016960B1 (en) System and method for controlling a machine, in particular a robot
Antonelli et al. Training by demonstration for welding robots by optical trajectory tracking
Fröhlig et al. Three-dimensional pose estimation of deformable linear object tips based on a low-cost, two-dimensional sensor setup and AI-based evaluation
US20220410394A1 (en) Method and system for programming a robot
Pajor et al. Stereovision system for motion tracking and position error compensation of loading crane
Qiao Advanced sensing development to support robot accuracy assessment and improvement
Vaníček et al. 3D Vision Based Calibration Approach for Robotic Laser Surfacing Applications
CN206912816U (en) Identify the device of mechanical workpieces pose
CN114654457B (en) Multi-station precise alignment method for mechanical arm with long-short vision distance guidance
WO2023248353A1 (en) Device for acquiring position data pertaining to workpiece, control device, robot system, method, and computer program