US11138684B2 - Image processing apparatus, image processing method, and robot system - Google Patents

Image processing apparatus, image processing method, and robot system

Info

Publication number
US11138684B2
Authority
US
United States
Prior art keywords
pixel, image data, distance image, image, distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US16/854,919
Other versions
US20200342563A1 (en)
Inventor
Junichirou Yoshida
Shouta TAKIZAWA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fanuc Corp
Original Assignee
Fanuc Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fanuc Corp filed Critical Fanuc Corp
Assigned to FANUC CORPORATION. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TAKIZAWA, SHOUTA; YOSHIDA, JUNICHIROU
Publication of US20200342563A1
Application granted
Publication of US11138684B2

Classifications

    • G06T 7/70: Image analysis; determining position or orientation of objects or cameras
    • G06T 7/12: Segmentation; edge-based segmentation
    • B25J 13/08: Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G06K 9/2054
    • G06T 1/0014: Image feed-back for automatic industrial control, e.g. robot with camera
    • G06T 7/174: Segmentation; edge detection involving the use of two or more images
    • G06V 10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
    • G06K 2209/21
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/10144: Varying exposure (special mode during image acquisition)
    • G06T 2207/30108: Industrial image inspection
    • G06V 2201/07: Target detection


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

An image processing apparatus includes a two-dimensional image storage unit configured to store a plurality of two-dimensional image data captured by photographing an identical imaging target object under different exposure conditions; a distance image storage unit configured to store distance image data including a pixel array of a known relationship to a pixel array of the two-dimensional image data; a pixel extraction unit configured to extract, among pixels in each of the two-dimensional image data, a first pixel at which a difference in brightness between identical pixels is less than a predetermined value; and a distance image adjusting unit configured to specify a second pixel of the distance image data at a position corresponding to the first pixel in the pixel array, and to set the second pixel as a non-imaging pixel in the distance image data.

Description

RELATED APPLICATIONS
The present application claims priority to Japanese Application Number 2019-084349, filed Apr. 25, 2019, the disclosure of which is hereby incorporated by reference herein in its entirety.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an image processing apparatus, an image processing method, and a robot system.
2. Description of the Related Art
There is known a robot system configured to recognize an object by a camera and to handle the recognized object by a robot. For example, Japanese Unexamined Patent Publication (Kokai) No. 2016-185573A discloses a robot system including a target object selection unit configured to select a target object; a proximity state determination unit configured to determine whether another object is disposed in proximity to the target object; an avoidance vector determination unit configured to determine an avoidance vector such that no interference with the object occurs; and a picking path correction unit configured to generate a corrected path which is obtained by correcting a picking path, based on the avoidance vector. A system disclosed in Japanese Unexamined Patent Publication (Kokai) No. 2013-186088A performs three-dimensional position/attitude measurement of a target object by using a first sensor configured to acquire two-dimensional (2D) information or three-dimensional (3D) information of the target object and a second sensor configured to acquire 2D information or 3D information of the target object.
SUMMARY OF INVENTION
When a two-dimensional (2D) camera is used for recognition of an object, a contour of the object is extracted from the 2D image and used to separate the object from its surroundings. However, the contour may not be extracted properly because of a pattern on the surface of the object (e.g., a packing tape attached to the surface of a cardboard box that is the object), and erroneous recognition may occur. Alternatively, a contour can be extracted without the influence of the surface pattern by using a three-dimensional (3D) camera capable of acquiring a distance image representative of the distance to the object. However, when a plurality of objects are arranged close to each other, a distance image of low resolution may fail to resolve the narrow gap between the objects, so that the objects cannot be recognized separately.
According to one aspect of the present disclosure, an image processing apparatus includes a two-dimensional image storage unit configured to store a plurality of two-dimensional image data captured by photographing an identical imaging target object under different exposure conditions; a distance image storage unit configured to store distance image data representative of distance information depending on a spatial position of the imaging target object, the distance image data including a pixel array of a known relationship to a pixel array of the two-dimensional image data; a pixel extraction unit configured to extract, among a plurality of pixels in each of the two-dimensional image data, a first pixel at which a difference in brightness between identical pixels is less than a predetermined value; and a distance image adjusting unit configured to specify a second pixel of the distance image data at a position corresponding to the first pixel in the pixel array, and to set the second pixel as a non-imaging pixel in the distance image data.
According to another aspect of the present disclosure, a robot system includes a robot; a robot controller configured to control the robot; and the above-described image processing apparatus, wherein the robot controller is configured to cause the robot to handle the imaging target object, based on the distance image data acquired as a result of the distance image adjusting unit setting the second pixel as the non-imaging pixel.
According to still another aspect of the present disclosure, an image processing method includes storing a plurality of two-dimensional image data captured by photographing an identical imaging target object under different exposure conditions; storing distance image data representative of distance information depending on a spatial position of the imaging target object, the distance image data including a pixel array of a known relationship to a pixel array of the two-dimensional image data; extracting, among a plurality of pixels in each of the two-dimensional image data, a first pixel at which a difference in brightness between identical pixels is less than a predetermined value; and specifying a second pixel of the distance image data at a position corresponding to the first pixel in the pixel array, and setting the second pixel as a non-imaging pixel in the distance image data.
BRIEF DESCRIPTION OF THE DRAWINGS
The object, features, and advantages of the present invention will be more clearly understood from the following description of an embodiment with reference to the accompanying drawings. In the accompanying drawings:
FIG. 1 is a view illustrating an entire configuration of a robot system including an image processing apparatus according to one embodiment;
FIG. 2 is a functional block diagram of the image processing apparatus and a robot controller;
FIG. 3 is a flowchart illustrating an image processing executed by the image processing apparatus;
FIG. 4 illustrates a two-dimensional (2D) image of two cardboard boxes photographed by the image acquisition apparatus;
FIG. 5 is a view for explaining a distance image;
FIG. 6A to FIG. 6C are views for explaining image recognition of objects with use of 2D images;
FIG. 7 illustrates a distance image in a case of photographing two cardboard boxes with use of a three-dimensional (3D) camera of a high resolution;
FIG. 8 illustrates a distance image in a case of photographing two cardboard boxes with use of a 3D camera of a low resolution;
FIG. 9 illustrates an example of three 2D images captured by photographing an identical imaging target object under different exposure conditions;
FIG. 10 is a view for explaining a distance image generated by the image process of FIG. 3;
FIG. 11 is a flowchart illustrating an object picking process executed in the robot system;
FIG. 12 illustrates a distance image obtained after the image process of FIG. 3 is executed on a distance image captured by photographing a work in which a hole is formed as a position mark; and
FIG. 13 illustrates a distance image obtained after the image process of FIG. 3 is executed on a distance image captured by photographing a work in which a narrow slit is formed.
DETAILED DESCRIPTION
Hereinafter, an embodiment of the present disclosure will be described with reference to the accompanying drawings. Corresponding constituent elements are denoted by the same reference numerals throughout the drawings. These drawings use different scales as appropriate to facilitate understanding. The mode illustrated in each drawing is one example for carrying out the present invention, and the present invention is not limited to the modes illustrated in the drawings.
FIG. 1 is a view illustrating an entire configuration of a robot system 100 including an image processing apparatus 30 according to an embodiment. FIG. 2 is a functional block diagram of the image processing apparatus 30 and a robot controller 20. As illustrated in FIG. 1, the robot system 100 includes a robot 10, the robot controller 20 which controls the robot 10, and the image processing apparatus 30. As illustrated in FIG. 2, the image processing apparatus 30 includes a two-dimensional (2D) image storage unit 31 which stores a plurality of two-dimensional (2D) image data captured by photographing an identical imaging target object W under different exposure conditions; a distance image storage unit 32 which stores distance image data representative of distance information depending on a spatial position of the imaging target object W, the distance image data including a pixel array of a known relationship to a pixel array of the 2D image data; a pixel extraction unit 33 which extracts, among a plurality of pixels in each of a plurality of 2D image data, a pixel (hereinafter, also referred to as “first pixel”) at which a difference in brightness between identical pixels is less than a predetermined value; and a distance image adjusting unit 34 which specifies a pixel (hereinafter, also referred to as “second pixel”) of the distance image data at a position corresponding to the first pixel in the pixel array, and sets the second pixel as a non-imaging pixel in the distance image data.
The robot system 100 further includes an image acquisition apparatus 50 which is connected to the image processing apparatus 30. The image acquisition apparatus 50 includes a function as a three-dimensional (3D) camera, which acquires a distance image representative of distance information depending on a spatial position of the imaging target object W, and a function as a two-dimensional (2D) camera, which acquires a two-dimensional (2D) image of the imaging target object W with a pixel array identical to the pixel array of the distance image. For example, the image acquisition apparatus 50 may include a light source, a projector which projects pattern light, and two cameras disposed on both sides of the projector, and may be configured to photograph an object, on which the pattern light is projected, by the two cameras disposed at different positions, and to acquire three-dimensional (3D) position information of the object by a stereo method. In this case, the image acquisition apparatus 50 can acquire a distance image and a 2D image which have an identical pixel array, or a distance image and a 2D image whose pixel arrays have a known correspondence relation. Other methods for acquiring the 3D position information of the object may also be used. The image acquisition apparatus 50 is disposed at a known position in a working space where the robot 10 is disposed, and photographs the imaging target object W from above. Note that the image acquisition apparatus 50 may instead be attached to a wrist portion at an arm tip end of the robot 10.
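As a concrete illustration of the stereo method, depth follows from the disparity between the two cameras by the standard triangulation relation Z = f * B / d. The sketch below is a generic textbook computation, not a detail taken from this patent; the parameter names are illustrative.

    def stereo_depth_mm(focal_px, baseline_mm, disparity_px):
        """Depth of a matched point from a calibrated stereo pair: Z = f * B / d."""
        return focal_px * baseline_mm / disparity_px

    # Example: focal length 1400 px, baseline 100 mm, disparity 35 px -> 4000 mm.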
In the present embodiment, as the image acquisition apparatus 50, a configuration in which one camera acquires both the distance image and the 2D image is adopted. However, the embodiment is not limited to this, and the image acquisition apparatus 50 may be configured such that a 3D camera that acquires the distance image of the object and a 2D camera that acquires the 2D image are separately disposed in the robot system 100. In this case, the correspondence relation between the pixel array of the distance image and the pixel array of the 2D image is calibrated in advance and set in a known state.
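One way such a pre-calibrated correspondence between the two pixel arrays could be represented, purely as an assumed illustration (the patent does not specify the representation), is a 3x3 homography H obtained from offline calibration:

    import numpy as np

    def to_distance_image_pixel(u, v, H):
        """Map a 2D-image pixel (u, v) onto the distance-image pixel array.
        H is a hypothetical 3x3 homography from offline calibration."""
        x, y, w = H @ np.array([u, v, 1.0])
        return x / w, y / w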
The robot controller 20 causes the robot 10 to handle the imaging target object W, based on the distance image data adjusted by the image processing apparatus 30. For example, as illustrated in FIG. 2, the robot controller 20 may include an object recognition unit 21, which performs image recognition of the object by pattern matching using the adjusted distance image data, and a picking operation execution unit 22, which executes a picking operation on the object. A grasping device 15 is attached to the wrist portion at the arm tip end of the robot 10. In this configuration, the picking operation execution unit 22 moves the robot 10 and, by the grasping device 15, grasps and picks the object recognized by the object recognition unit 21 from the distance image.
In FIG. 1, the robot 10 is illustrated as being a vertical articulated robot, but some other type of robot may be used as the robot 10. Each of the robot controller 20 and the image processing apparatus 30 may have a configuration of a general computer including a CPU, a ROM, a RAM, a storage device, an operation unit, a display unit, a network interface, an external device interface, and the like.
An explanation is given of a problem of erroneous recognition, which may possibly occur when an object is recognized by using a 2D image or a distance image. As illustrated in FIG. 1 by way of example, a case is considered in which the imaging target object W includes surfaces of a plurality of objects (two cardboard boxes W1 and W2) which are juxtaposed with a gap therebetween. FIG. 4 illustrates a 2D image 201 captured by photographing two cardboard boxes W1 and W2 by the image acquisition apparatus 50 illustrated in FIG. 1. In the present embodiment, it is assumed that the 2D image is acquired as a gray-scale image. The two cardboard boxes W1 and W2 appear on the 2D image 201 of FIG. 4. A packing tape 61 extending in a vertical direction appears in a central part of the cardboard box W1, with a gray level slightly darker than the main body of the cardboard box W1, and similarly a packing tape 62 extending in the vertical direction appears in a central part of the cardboard box W2, with a gray level slightly darker than the main body of the cardboard box W2. A narrow gap G1 between the two cardboard boxes W1 and W2 appears dark in the 2D image 201.
FIG. 5 is a view for explaining a distance image. Consideration is now given to a case in which a distance image of an object T, which is disposed with an inclination to an installation floor F, as illustrated in FIG. 5, is acquired by the image acquisition apparatus 50. When viewed from the image acquisition apparatus 50, a second portion T2 side of the object T is farther than a first portion T1 side of the object T. In the generation of the distance image, distance information is visualized into an image by varying a color or a gray level in accordance with the distance from the camera to the object. A distance image 300 of FIG. 5 is an example of the case in which a brighter expression is used for a position closer to the image acquisition apparatus 50, and a darker expression is used for a position farther from the image acquisition apparatus 50. Thus, in the distance image 300, the second portion T2 side of the object T is expressed to be dark, and the first portion T1 side is expressed to be bright.
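Such a rendering can be sketched in a few lines. Assuming a NumPy depth map in millimeters with non-imaging pixels stored as NaN (an assumption made here for illustration), nearer surfaces are mapped to brighter gray levels:

    import numpy as np

    def depth_to_gray(depth_mm):
        """Render a depth map as a gray-scale distance image (nearer = brighter)."""
        valid = np.isfinite(depth_mm)            # non-imaging pixels are NaN here
        near, far = depth_mm[valid].min(), depth_mm[valid].max()
        span = max(far - near, 1e-6)             # guard against a flat scene
        gray = np.zeros(depth_mm.shape, dtype=np.uint8)
        # Invert so that small distances map to high (bright) gray levels.
        gray[valid] = (255.0 * (far - depth_mm[valid]) / span).astype(np.uint8)
        return gray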
Consideration is now given to the case in which the cardboard boxes W1 and W2 are individually recognized by pattern matching using the 2D image 201 acquired as illustrated in FIG. 4. FIG. 6A illustrates a case in which the two cardboard boxes W1 and W2 are correctly recognized, as indicated by thick broken-line frames 251 and 252. When the object is recognized from the 2D image, however, there may be a case in which, as indicated by a thick broken-line frame 253 in FIG. 6B, a contour line 61a of the packing tape 61 and a contour line 62a of the packing tape 62 are erroneously recognized as contour lines of the cardboard boxes, so that a cardboard box is erroneously recognized as existing at the position of the thick frame 253. In addition, as indicated by a thick broken-line frame 254 in FIG. 6C, a contour surrounding the entirety of the two cardboard boxes W1 and W2 may be erroneously recognized as one cardboard box. The reason for the erroneous recognition in FIG. 6C is that, in the recognition process, the size of a template may be enlarged or reduced to cope with the fact that the size of the object on the 2D image varies with the height of the object (the distance from the camera to the object). FIG. 6C corresponds to a case in which the cardboard boxes are erroneously recognized as existing at a position closer to the image acquisition apparatus 50 than in the case of FIG. 6A. As described above, recognition based on the 2D image may erroneously determine the position of the object.
FIG. 7 illustrates a distance image 301 in a case of photographing the two cardboard boxes W1 and W2 illustrated in FIG. 1 with use of a 3D camera of a high resolution, from the position of the image acquisition apparatus 50. As illustrated in FIG. 7, in the distance image 301, the gap G1 between the two cardboard boxes W1 and W2 is exactly depicted. In a distance image, a pattern on the object is not visualized into an image. Thus, in the distance image 301, since the packing tapes 61 and 62 do not appear on the cardboard boxes W1 and W2, the above-described problem of erroneous recognition due to the contour lines of packing tapes illustrated in FIG. 6B does not occur. On the other hand, FIG. 8 illustrates a distance image 302 in a case of photographing the two cardboard boxes W1 and W2 illustrated in FIG. 1 with use of a 3D camera of a low resolution, from the position of the image acquisition apparatus 50. As illustrated in FIG. 8, in the case of the distance image 302, an image of the gap G1 blurs and disappears. When image recognition is performed with the distance image 302, it is not possible to separately recognize the two cardboard boxes W1 and W2. In this manner, while the distance image is not affected by the pattern of the object, there is a need to use an expensive high-resolution 3D camera in order to reproduce details.
The image processing apparatus 30 according to the present embodiment is configured to solve the above-described problem which may occur when the image recognition of the object is performed by using the distance image. Referring to FIG. 3, a description will be given of an image processing method which is implemented by the image processing apparatus 30. FIG. 3 is a flowchart illustrating an image processing executed under the control by a CPU of the image processing apparatus 30. The image processing apparatus 30 executes the image processing as described below, by using the distance image data and 2D image data acquired by the image acquisition apparatus 50. To start with, the image processing apparatus 30 stores, in the 2D image storage unit 31, a plurality of 2D image data captured by photographing an identical imaging target object W under different exposure conditions (step S1). Next, the image processing apparatus 30 stores, in the distance image storage unit 32, distance image data representative of distance information depending on a spatial position of the imaging target object W, the distance image data including a pixel array of a known relationship to a pixel array of the 2D image data (step S2).
FIG. 9 illustrates examples of three 2D images 211 to 213 captured by photographing an identical imaging target object W under different exposure conditions. The exposure conditions may be changed by various methods, such as adjustment of the luminance of a light source of illumination light, an exposure time, a diaphragm, the sensitivity of an imaging element, etc. As an example, it is assumed that the exposure time is adjusted. In the 2D images 211 to 213 illustrated in FIG. 9, the exposure time of the 2D image 211 is longest, the exposure time of the 2D image 212 is second longest, and the exposure time of the 2D image 213 is shortest. Normally, the brightness of a whole image is proportional to the exposure time. Thus, the gray level of the surfaces of the cardboard boxes W1 and W2 varies in accordance with the length of exposure time. Specifically, the surface of the cardboard box W1, W2 in the 2D image 211 is brightest, the surface of the cardboard box W1, W2 in the 2D image 212 is second brightest, and the surface of the cardboard box W1, W2 in the 2D image 213 is darkest. In contrast to the variation in brightness of the surface of the cardboard box W1, W2 due to the variation in exposure time, the brightness of the image of the part of the gap G1 between the cardboard box W1 and cardboard box W2 does not substantially change, and the image of the part of the gap G1 remains dark. The reason for this is that since light from a space such as the gap G1 does not easily return to the image acquisition apparatus 50, such a space always appears dark regardless of the exposure time. The image processing apparatus 30 (pixel extraction unit 33) extracts an image (pixel) of the part of the space, based on the fact that the image of the part of the space, such as a gap, a slit or a hole, has a lower degree of variation in brightness relative to the variation of the exposure condition than the image of the other part.
In step S3, among the pixels in each of a plurality of 2D image data, a first pixel at which a difference in brightness between identical pixels is less than a predetermined value is extracted. For example, the extraction of the first pixel is performed as follows.
(1) The degree of variation in brightness relative to the variation in exposure time is calculated with respect to all pixels between the 2D image 211 and the 2D image 212.
(2) The degree of variation in brightness relative to the variation in exposure time is calculated with respect to all pixels between the 2D image 212 and the 2D image 213.
(3) The mean value of the above (1) and (2) is calculated with respect to each pixel.
(4) A pixel, at which the “degree of variation in brightness relative to the variation in unit exposure time” calculated in the above (3) is less than a predetermined value, is extracted.
By the process of the above (1) to (4), the first pixel, which constitutes the image of the part of the gap G1, can be extracted. The "predetermined value" may be set by various methods: it may be a fixed value, for instance an experimental or empirical value, or it may be derived from the per-pixel "variation in brightness relative to the variation in exposure time" obtained in the above (3). In the latter case, for example, the "predetermined value" may be set to a value calculated by subtracting a certain offset from the maximum, or from the mean, of the "variation in brightness relative to the variation in exposure time" over all pixels. Note that the extraction of the first pixel can be performed as long as there are at least two 2D images captured by photographing an identical imaging target object W under different exposure conditions.
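The steps (1) to (4) can be sketched as follows. This is a minimal reading of the procedure, assuming gray-scale NumPy frames ordered by exposure time; the default threshold strategy (an offset factor on the mean variation) is only one of the options mentioned above.

    import numpy as np

    def extract_first_pixels(images, exposure_times, threshold=None):
        """Return a boolean mask of "first pixels": pixels whose brightness
        varies little as the exposure time varies (gap/hole/slit candidates).

        images: gray-scale frames of the same scene and pixel array,
        ordered by exposure time; exposure_times: the matching times."""
        rates = []
        for i in range(len(images) - 1):
            a = images[i].astype(float)
            b = images[i + 1].astype(float)
            # Steps (1)-(2): brightness variation per unit of exposure time.
            rates.append(np.abs(b - a) / (exposure_times[i + 1] - exposure_times[i]))
        mean_rate = np.mean(rates, axis=0)   # step (3): mean over the image pairs
        if threshold is None:
            # One strategy from the text: derive the threshold from the mean
            # variation over all pixels (the offset factor here is illustrative).
            threshold = 0.5 * float(mean_rate.mean())
        return mean_rate < threshold         # step (4)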
In step S4, the image processing apparatus 30 specifies, in the pixel array of the distance image 302, a second pixel of the distance image data at the position corresponding to the first pixel extracted from the 2D images in step S3, and sets the second pixel as a non-imaging pixel in the distance image data. Setting the second pixel as the non-imaging pixel means that the second pixel no longer represents distance information. For example, the pixel value of the second pixel may be set to zero, or to some other invalid value which is not representative of distance information. Thereby, in the distance image 302, the objects can be separated by the part of the gap G1 and recognized exactly. The process in step S4 will be described with reference to FIG. 10. As illustrated in FIG. 10, the first pixel included in the part of the gap G1 in the 2D image 201 is extracted by the above-described method, and the second pixel corresponding to the part of the gap G1 in the distance image 302 is set as the non-imaging pixel. Thereby, a distance image 310 illustrated in the right part of FIG. 10 is obtained. In the distance image 310, the second pixel in the part of the gap G1 is set as the non-imaging pixel, and the cardboard box W1 and the cardboard box W2 are separated by the part of the gap G1. As a result, the cardboard box W1 and the cardboard box W2 can each be recognized correctly as individual objects.
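A sketch of step S4 under the same assumptions, using zero as the invalid value as suggested above:

    import numpy as np

    NON_IMAGING = 0.0    # invalid value, not representative of distance information

    def set_non_imaging(distance_image, first_pixel_mask):
        """Set the "second pixels" of the distance image as non-imaging pixels."""
        adjusted = distance_image.astype(float)   # work on a copy
        adjusted[first_pixel_mask] = NON_IMAGING
        return adjusted

With the gap G1 masked out this way, a pattern-matching step that treats non-imaging pixels as background sees two separate boxes rather than one merged surface.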
Next, referring to FIG. 11, a description will be given of the process (hereinafter referred to as “object picking process”) executed when picking the cardboard boxes W1 and W2 in the robot system 100. The object picking process is executed by a cooperative operation between the image processing apparatus 30 and the robot controller 20. To start with, the image processing apparatus 30 causes the image acquisition apparatus 50 to capture a distance image of the imaging target object W, and stores the captured distance image (step S101). Next, the image processing apparatus 30 causes the image acquisition apparatus 50 to capture 2D images of the imaging target object W a predetermined number of times while changing the exposure condition, and stores the captured 2D images (steps S102, S103 and S104). Thereby, a plurality of 2D images of the imaging target object W under different exposure conditions, as exemplarily illustrated in FIG. 9, are acquired.
Next, the image processing apparatus 30 searches the 2D images for a pixel at which the degree of variation in brightness relative to the variation in exposure time is less than the predetermined value (step S105). The image processing apparatus 30 recognizes the found pixel as a pixel (first pixel) of the part corresponding to the gap G1 between the cardboard boxes W1 and W2 (step S106). Next, the image processing apparatus 30 sets, as a non-imaging pixel, the pixel (second pixel) at the position corresponding to the gap G1 in the distance image acquired in step S101 (step S107).
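Steps S101 to S107 can be tied together as in the following sketch, reusing the helper functions above; capture_distance_image and capture_2d_image are hypothetical stand-ins for whatever interface the image acquisition apparatus 50 actually exposes.

```python
def acquire_adjusted_distance_image(camera, exposure_times, threshold):
    distance_image = camera.capture_distance_image()                # step S101
    images = [camera.capture_2d_image(exposure_time=t)              # steps S102-S104
              for t in exposure_times]
    mask = extract_first_pixels(images, exposure_times, threshold)  # steps S105-S106
    return set_non_imaging_pixels(distance_image, mask)             # step S107
```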
Next, the object recognition unit 21 of the robot controller 20 recognizes the cardboard box W1 or W2 in the distance image by using model data of the cardboard boxes W1 and W2 (step S108). The model data for object recognition is stored in a storage device (not illustrated) of the robot controller 20. Next, the picking operation execution unit 22 of the robot controller 20 calculates the position of the cardboard box W1 or W2 in the robot coordinate system, based on the position of the cardboard box W1 or W2 recognized in the distance image and the position of the image acquisition apparatus 50 in the robot coordinate system. Based on the calculated position of the cardboard box W1 or W2 in the robot coordinate system, the picking operation execution unit 22 executes the operation of moving the robot 10 and individually grasping and picking the cardboard box W1 or W2 with the grasping device 15 (step S109).
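The position calculation in step S109 is not spelled out in the text. One conventional realization is a homogeneous transform from the camera frame to the robot frame, with the pose of the image acquisition apparatus 50 in the robot coordinate system known from calibration; the following sketch rests on that assumption.

```python
import numpy as np

def to_robot_frame(p_camera, T_robot_camera):
    """Transform a 3D point recognized in the camera frame into the
    robot coordinate system via a 4x4 homogeneous transform."""
    p = np.append(np.asarray(p_camera, dtype=np.float64), 1.0)
    return (T_robot_camera @ p)[:3]
```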
The above-described method of extracting, from among the pixels in each of the 2D image data captured by photographing an identical imaging target object under different exposure conditions, the first pixel at which the difference in brightness between identical pixels is less than the predetermined value can be used not only for extracting a gap between objects, but also for extracting a pixel which holds imaging information of a space from which reflected illumination light does not easily return to the camera, such as a hole or a slit formed in an object. FIG. 12 illustrates a distance image 410 obtained after the image processing of FIG. 3 is executed on a distance image captured by photographing a plate-shaped work W3 in which a hole C1 is formed as a position mark. The part of the hole C1, too, always appears dark in the 2D image, regardless of the change of the exposure condition. Even when the hole C1 is relatively small and is not reproduced in a distance image acquired by a 3D camera of low resolution, executing the image processing of FIG. 3 allows the hole C1 serving as the position mark to be separated from the image of the work W3 and recognized in the distance image 410, as illustrated in FIG. 12. As a result, the direction of the work W3 can be correctly recognized by using the distance image 410.
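As one illustration of how the recovered position mark could be used, the in-plane direction of the work W3 might be estimated from the angle between the centroid of the work region and the centroid of the hole C1 in the distance image. This particular computation is an assumption; the disclosure states only that the direction becomes recognizable once the hole appears in the distance image 410.

```python
import numpy as np

def work_direction(work_mask, hole_mask):
    """Angle (radians, in image coordinates) from the centroid of the
    work region to the centroid of the hole serving as position mark."""
    wy, wx = np.argwhere(work_mask).mean(axis=0)
    hy, hx = np.argwhere(hole_mask).mean(axis=0)
    return np.arctan2(hy - wy, hx - wx)
```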
FIG. 13 illustrates a distance image 420 obtained after the image processing of FIG. 3 is executed on a distance image captured by photographing a work W4 in which a narrow slit L1 is formed. The part of the slit L1, too, always appears dark in the 2D image, regardless of the change of the exposure condition, since reflected illumination light does not easily return to the camera. Even when the slit L1 is narrow and is not reproduced in a distance image of low resolution, executing the image processing of FIG. 3 allows the slit L1 to be separated from the image of the work W4 and recognized in the distance image 420, as illustrated in FIG. 13. As a result, the shape of the work W4 can be correctly recognized by using the distance image 420.
As described above, according to the present embodiment, an object can be recognized with high precision from a distance image.
Although the embodiment of the present disclosure has been described above, a skilled person will understand that various modifications and changes can be made without departing from the scope of the disclosure of the claims set forth below.
The configuration of the robot system illustrated in FIG. 1 and the configuration of the functional block diagram illustrated in FIG. 2 are merely examples, and the present invention is not limited to the configurations illustrated in FIGS. 1 and 2. For example, the various functions of the image processing apparatus 30 illustrated in FIG. 1 may be implemented in the robot controller 20. Some of the functions of the image processing apparatus 30 in FIG. 2 may be disposed on the robot controller 20 side. Alternatively, some of the functions of the robot controller 20 may be disposed in the image processing apparatus 30.
The method of the above-described embodiment, in which a plurality of 2D image data captured by photographing an identical imaging target object under different exposure conditions are used to extract, from among the pixels in each of the 2D image data, the first pixel at which the difference in brightness between identical pixels is less than the predetermined value, and in which the second pixel of the distance image data at the position corresponding to the first pixel is set as the non-imaging pixel, can also be expressed as a method in which the information of the distance image is supplemented by using the information of the pixel on the 2D image data at which the difference in brightness between identical pixels is less than the predetermined value.
When step S3 of the image processing of FIG. 3 (the extraction of the first pixel at which the difference in brightness between identical pixels is less than the predetermined value) is executed, the extraction may be performed only when there is a cluster of a certain size of pixels at which the difference in brightness between identical pixels is less than the predetermined value, as in the sketch below. Thereby, it becomes possible to avoid extracting pixels of a part, such as a flaw, which does not need to be reproduced in the distance image.
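One way to realize this cluster condition is connected-component labelling of the candidate pixel mask, keeping only components above a minimum size; the use of scipy.ndimage here is an assumption, not part of the disclosure.

```python
import numpy as np
from scipy import ndimage

def filter_small_clusters(first_pixel_mask, min_cluster_size):
    """Discard isolated low-variation pixels (e.g. parts of small
    flaws) and keep only clusters large enough to represent a gap,
    slit or hole that should appear in the distance image."""
    labels, num_features = ndimage.label(first_pixel_mask)
    keep = np.zeros_like(first_pixel_mask, dtype=bool)
    for label in range(1, num_features + 1):
        component = labels == label
        if component.sum() >= min_cluster_size:
            keep |= component
    return keep
```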

Claims (6)

The invention claimed is:
1. An image processing apparatus comprising:
a two-dimensional image storage unit configured to store a plurality of two-dimensional image data captured by photographing an identical imaging target object under different exposure conditions;
a distance image storage unit configured to store distance image data representative of distance information depending on a spatial position of the imaging target object, the distance image data including a pixel array of a known relationship to a pixel array of the two-dimensional image data;
a pixel extraction unit configured to extract, among a plurality of pixels in each of the two-dimensional image data, a first pixel at which a difference in brightness between identical pixels is less than a predetermined value; and
a distance image adjusting unit configured to specify a second pixel of the distance image data at a position corresponding to the first pixel in the pixel array, and to set the second pixel as a non-imaging pixel in the distance image data.
2. The image processing apparatus according to claim 1, wherein the first pixel is a pixel which holds imaging information of a space in the imaging target object.
3. A robot system comprising:
a robot;
a robot controller configured to control the robot; and
the image processing apparatus according to claim 1,
wherein the robot controller is configured to cause the robot to handle the imaging target object, based on the distance image data acquired as a result of the distance image adjusting unit setting the second pixel as the non-imaging pixel.
4. The robot system according to claim 3, further comprising:
an image acquisition apparatus configured to acquire the plurality of two-dimensional image data and the distance image data.
5. The robot system according to claim 3, wherein the imaging target object includes surfaces of a plurality of objects juxtaposed with a gap interposed, the first pixel holds imaging information of the gap, and the robot individually grasps each of the plurality of objects.
6. An image processing method comprising:
storing a plurality of two-dimensional image data captured by photographing an identical imaging target object under different exposure conditions;
storing distance image data representative of distance information depending on a spatial position of the imaging target object, the distance image data including a pixel array of a known relationship to a pixel array of the two-dimensional image data;
extracting, among a plurality of pixels in each of the two-dimensional image data, a first pixel at which a difference in brightness between identical pixels is less than a predetermined value; and
specifying a second pixel of the distance image data at a position corresponding to the first pixel in the pixel array, and setting the second pixel as a non-imaging pixel in the distance image data.
US16/854,919 2019-04-25 2020-04-22 Image processing apparatus, image processing method, and robot system Active US11138684B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JPJP2019-084349 2019-04-25
JP2019-084349 2019-04-25
JP2019084349A JP7007324B2 (en) 2019-04-25 2019-04-25 Image processing equipment, image processing methods, and robot systems

Publications (2)

Publication Number Publication Date
US20200342563A1 US20200342563A1 (en) 2020-10-29
US11138684B2 true US11138684B2 (en) 2021-10-05

Family

ID=72921541

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/854,919 Active US11138684B2 (en) 2019-04-25 2020-04-22 Image processing apparatus, image processing method, and robot system

Country Status (4)

Country Link
US (1) US11138684B2 (en)
JP (1) JP7007324B2 (en)
CN (1) CN111862198A (en)
DE (1) DE102020110624A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6959277B2 (en) * 2019-02-27 2021-11-02 ファナック株式会社 3D imaging device and 3D imaging condition adjustment method

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001092955A (en) * 1999-09-21 2001-04-06 Fuji Photo Film Co Ltd Method and device for processing image
JP4466260B2 (en) * 2004-07-30 2010-05-26 パナソニック電工株式会社 Image processing device
CN101326545B (en) * 2005-08-19 2012-05-30 松下电器产业株式会社 Image processing method, image processing system
JP2007206797A (en) * 2006-01-31 2007-08-16 Omron Corp Image processing method and image processor
JP4309439B2 (en) * 2007-03-30 2009-08-05 ファナック株式会社 Object take-out device
DE102010037744B3 (en) * 2010-09-23 2011-12-08 Sick Ag Optoelectronic sensor
JP2013101045A (en) * 2011-11-08 2013-05-23 Fanuc Ltd Recognition device and recognition method of three-dimensional position posture of article
TWI607212B (en) * 2013-01-16 2017-12-01 住友化學股份有限公司 Image generation device, defect inspection device, and defect inspection method
CN104956210B (en) * 2013-01-30 2017-04-19 住友化学株式会社 Image generating device, defect inspecting device, and defect inspecting method
JP2015114292A (en) * 2013-12-16 2015-06-22 川崎重工業株式会社 Workpiece position information identification apparatus and workpiece position information identification method
JP6795993B2 (en) * 2016-02-18 2020-12-02 株式会社ミツトヨ Shape measurement system, shape measurement device and shape measurement method

Patent Citations (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5386228A (en) * 1991-06-20 1995-01-31 Canon Kabushiki Kaisha Image pickup device including means for adjusting sensitivity of image pickup elements
US6453069B1 (en) * 1996-11-20 2002-09-17 Canon Kabushiki Kaisha Method of extracting image from input image using reference image
US6988610B2 (en) * 2002-01-14 2006-01-24 Carnegie Mellon University Conveyor belt inspection system and method
US7672517B2 (en) * 2002-03-11 2010-03-02 Bracco Imaging S.P.A. Method for encoding image pixels a method for processing images and a method for processing images aimed at qualitative recognition of the object reproduced by one or more image pixels
US20040125206A1 (en) * 2002-11-06 2004-07-01 Lueze Lumiflex Gmbh + Co. Kg Method and device for monitoring an area of coverage
US7599533B2 (en) * 2002-12-05 2009-10-06 Olympus Corporation Image processing system and image processing method
US20050162644A1 (en) * 2004-01-23 2005-07-28 Norio Watanabe Fabrication method of semiconductor integrated circuit device
US7701455B2 (en) * 2004-07-21 2010-04-20 Che-Chih Tsao Data rendering method for volumetric 3D displays
US7756299B2 (en) * 2004-12-14 2010-07-13 Honda Motor Co., Ltd. Face region estimating device, face region estimating method, and face region estimating program
US8340464B2 (en) * 2005-09-16 2012-12-25 Fujitsu Limited Image processing method and image processing device
US20090027509A1 (en) * 2007-07-25 2009-01-29 Giesen Robert J B Vision System With Deterministic Low-Latency Communication
US20100033619A1 (en) * 2008-08-08 2010-02-11 Denso Corporation Exposure determining device and image processing apparatus
US8174611B2 (en) * 2009-03-26 2012-05-08 Texas Instruments Incorporated Digital image segmentation using flash
US10198792B2 (en) * 2009-10-14 2019-02-05 Dolby Laboratories Licensing Corporation Method and devices for depth map processing
US8459073B2 (en) * 2009-10-19 2013-06-11 Nippon Steel & Sumitomo Metal Corporation Method for measuring sheet material flatness and method for producing steel sheet using said measuring method
US8699821B2 (en) * 2010-07-05 2014-04-15 Apple Inc. Aligning images
US8730318B2 (en) * 2010-07-29 2014-05-20 Hitachi-Ge Nuclear Energy, Ltd. Inspection apparatus and method for producing image for inspection
US10157495B2 (en) * 2011-03-04 2018-12-18 General Electric Company Method and device for displaying a two-dimensional image of a viewed object simultaneously with an image depicting the three-dimensional geometry of the viewed object
US8654219B2 (en) * 2011-04-26 2014-02-18 Lg Electronics Inc. Method and apparatus for restoring dead pixel using light intensity map in a time-of-flight camera
US9294695B2 (en) * 2011-11-01 2016-03-22 Clarion Co., Ltd. Image processing apparatus, image pickup apparatus, and storage medium for generating a color image
US9113142B2 (en) * 2012-01-06 2015-08-18 Thomson Licensing Method and device for providing temporally consistent disparity estimations
JP2013186088A (en) 2012-03-09 2013-09-19 Canon Inc Information processing device and information processing method
US9275464B2 (en) * 2012-04-23 2016-03-01 Qualcomm Technologies, Inc. Method for determining the extent of a foreground object in an image
US8780113B1 (en) * 2012-08-21 2014-07-15 Pelican Imaging Corporation Systems and methods for performing depth estimation using image data from multiple spectral channels
US20140176761A1 (en) * 2012-12-25 2014-06-26 Fanuc Corporation Image processing device and image processing method for executing image processing to detect object in image
US9247153B2 (en) * 2013-01-24 2016-01-26 Socionext Inc. Image processing apparatus, method and imaging apparatus
US9270902B2 (en) * 2013-03-05 2016-02-23 Canon Kabushiki Kaisha Image processing apparatus, image capturing apparatus, image processing method, and storage medium for obtaining information on focus control of a subject
US9412172B2 (en) * 2013-05-06 2016-08-09 Disney Enterprises, Inc. Sparse light field representation
US9832404B2 (en) * 2013-05-31 2017-11-28 Nikon Corporation Image sensor, imaging apparatus, and image processing device
US10986267B2 (en) * 2014-04-28 2021-04-20 Lynx System Developers, Inc. Systems and methods for generating time delay integration color images at increased resolution
US9582888B2 (en) * 2014-06-19 2017-02-28 Qualcomm Incorporated Structured light three-dimensional (3D) depth map based on content filtering
US20150371398A1 (en) * 2014-06-23 2015-12-24 Gang QIAO Method and system for updating background model based on depth
US10127622B2 (en) * 2014-09-16 2018-11-13 Seiko Epson Corporation Image processing apparatus and robot system
US10059002B2 (en) * 2014-11-28 2018-08-28 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and non-transitory computer-readable medium
US20160155235A1 (en) * 2014-11-28 2016-06-02 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and non-transitory computer-readable medium
US20160175964A1 (en) * 2014-12-19 2016-06-23 Lincoln Global, Inc. Welding vision and control system
US10869010B2 (en) * 2015-02-06 2020-12-15 Sony Interactive Entertainment Inc. Image pickup apparatus, information processing system, mat, and image generation method
US10477175B2 (en) * 2015-02-06 2019-11-12 Sony Interactive Entertainment Inc. Image pickup apparatus, information processing system, mat, and image generation method
US10380911B2 (en) * 2015-03-09 2019-08-13 Illinois Tool Works Inc. Methods and apparatus to provide visual information associated with welding operations
US20160279809A1 (en) * 2015-03-27 2016-09-29 Canon Kabushiki Kaisha Information processing apparatus, and information processing method
JP2016185573A (en) 2015-03-27 2016-10-27 ファナック株式会社 Robot system having function to correct target unloading passage
US10582180B2 (en) * 2016-02-03 2020-03-03 Canon Kabushiki Kaisha Depth imaging correction apparatus, imaging apparatus, and depth image correction method
US20170257540A1 (en) * 2016-03-07 2017-09-07 Omron Corporation Image measurement system and controller
US10313569B2 (en) * 2016-03-07 2019-06-04 Omron Corporation Image measurement system and controller
US10212408B1 (en) * 2016-06-29 2019-02-19 Amazon Technologies, Inc. Depth-map augmentation techniques
US20180023947A1 (en) * 2016-07-20 2018-01-25 Mura Inc. Systems and methods for 3d surface measurements
US10502556B2 (en) * 2016-07-20 2019-12-10 Mura Inc. Systems and methods for 3D surface measurements
US10292321B2 (en) * 2016-09-27 2019-05-21 Claas Selbstfahrende Erntemaschinen Gmbh Agricultural work machine for avoiding anomalies
US20180084708A1 (en) * 2016-09-27 2018-03-29 Claas Selbstfahrende Erntemaschinen Gmbh Agricultural work machine for avoiding anomalies
US10434654B2 (en) * 2017-01-12 2019-10-08 Fanuc Corporation Calibration device, calibration method, and computer readable medium for visual sensor
US20200175352A1 (en) * 2017-03-14 2020-06-04 University Of Manitoba Structure defect detection using machine learning algorithms
US10917543B2 (en) * 2017-04-24 2021-02-09 Alcon Inc. Stereoscopic visualization camera and integrated robotics platform
US10339668B2 (en) * 2017-04-26 2019-07-02 Fanuc Corporation Object recognition apparatus
US10863105B1 (en) * 2017-06-27 2020-12-08 Amazon Technologies, Inc. High dynamic range imaging for event detection and inventory management
US20190033067A1 (en) * 2017-07-31 2019-01-31 Keyence Corporation Shape Measuring Device And Shape Measuring Method
US20200226729A1 (en) * 2017-09-11 2020-07-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image Processing Method, Image Processing Apparatus and Electronic Device
US10498963B1 (en) * 2017-12-04 2019-12-03 Amazon Technologies, Inc. Motion extracted high dynamic range images
US20200021743A1 (en) * 2018-07-13 2020-01-16 Fanuc Corporation Object inspection device, object inspection system and method for adjusting inspection position

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Birchfield et al., "Depth Discontinuities by Pixel-to-Pixel Stereo", (pp. 269-293) (Year: 1999). *

Also Published As

Publication number Publication date
JP2020181391A (en) 2020-11-05
CN111862198A (en) 2020-10-30
DE102020110624A1 (en) 2020-11-12
US20200342563A1 (en) 2020-10-29
JP7007324B2 (en) 2022-01-24

Similar Documents

Publication Publication Date Title
US9672630B2 (en) Contour line measurement apparatus and robot system
US10515271B2 (en) Flight device and flight control method
US20170308103A1 (en) Flight device, flight control system and method
JP3951984B2 (en) Image projection method and image projection apparatus
US7526121B2 (en) Three-dimensional visual sensor
US6445814B2 (en) Three-dimensional information processing apparatus and method
US10430650B2 (en) Image processing system
CN110555878B (en) Method and device for determining object space position form, storage medium and robot
CN110869978B (en) Information processing apparatus, information processing method, and computer program
US11138684B2 (en) Image processing apparatus, image processing method, and robot system
US11956537B2 (en) Location positioning device for moving body and location positioning method for moving body
JP2022039719A (en) Position and posture estimation device, position and posture estimation method, and program
CN113221953B (en) Target attitude identification system and method based on example segmentation and binocular depth estimation
US11989928B2 (en) Image processing system
CN113340405A (en) Bridge vibration mode measuring method, device and system
CN111536895B (en) Appearance recognition device, appearance recognition system, and appearance recognition method
JP2006227739A (en) Image processing device and image processing method
WO2021049281A1 (en) Image processing device, head-mounted display, and spatial information acquisition method
JP7321772B2 (en) Image processing device, image processing method, and program
CN113939852A (en) Object recognition device and object recognition method
EP3001141B1 (en) Information processing system and information processing method
KR102631472B1 (en) Methods and device for lossless correction of fisheye distortion image
JP2020064034A (en) Measurement device, calibration method thereof, and program
JP5981353B2 (en) 3D measuring device
US20230083531A1 (en) Three-dimensional measuring device, and three-dimensional measuring method

Legal Events

Date Code Title Description
AS Assignment

Owner name: FANUC CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YOSHIDA, JUNICHIROU;TAKIZAWA, SHOUTA;REEL/FRAME:052458/0851

Effective date: 20200312

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE