WO2020241648A1 - Image acquisition apparatus and image acquisition method - Google Patents

Image acquisition apparatus and image acquisition method

Info

Publication number
WO2020241648A1
WO2020241648A1 (PCT/JP2020/020781)
Authority
WO
WIPO (PCT)
Prior art keywords
image
work
region
model
unit
Prior art date
Application number
PCT/JP2020/020781
Other languages
French (fr)
Japanese (ja)
Inventor
洋太朗 中根
高木 誠司
Original Assignee
三菱電機株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三菱電機株式会社 filed Critical 三菱電機株式会社
Priority to JP2021522789A priority Critical patent/JP7077485B2/en
Publication of WO2020241648A1 publication Critical patent/WO2020241648A1/en

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras

Definitions

  • The present disclosure relates to a technique for capturing a focused image of a work (workpiece) to be imaged.
  • Generally, when an imaging device (camera) using a high-magnification lens photographs an object, the depth of field (the range that appears to be in focus in the image) becomes shallow.
  • Depending on the unevenness of the work to be imaged, the portion to be evaluated may then be out of focus, which tends to adversely affect the evaluation (inspection, positioning, and so on) of the work.
  • An omnifocal image is obtained by capturing multiple images with different focal points while changing the distance between the work and the imaging device along the optical axis of the imaging device, evaluating the degree of focus for each local region, and reconstructing the whole image by combining the local images with a high degree of focus. An omnifocal image created in this way is nearly in focus at every pixel.
  • The present disclosure has been made to solve the above-mentioned problem, and an object thereof is to enable high-speed acquisition of a focused image of a work to be evaluated.
  • The image acquisition device according to the present disclosure acquires an image of a work.
  • This image acquisition device includes: a first registration unit that registers, using a model image that is an image of a work to be model-registered, a first region for which a focused image is to be acquired; a second registration unit that registers, using the model image, a second region used for alignment; an imaging device that captures work images, which are images of a work to be evaluated; a moving device configured to be able to move at least one of the imaging device and the work in the optical axis direction of the imaging device; a first acquisition unit that acquires, as a focused position, the depth position of the image with the highest degree of focus among a plurality of work images with different focal points captured by the imaging device while the moving device is operated; an alignment unit that identifies the region corresponding to the second region in the work image by aligning the work image at the focused position with the second region; a posture estimation unit configured to be able to acquire information indicating the posture of the work at the time of imaging based on the positional relationship between the second region in the model image and the region corresponding to the second region in the work image; and a region output unit that outputs, using the information indicating the posture of the work, the position of the region corresponding to the first region in the work image.
  • FIG. 1 is a diagram (No. 1) schematically showing the configuration of an image acquisition system. FIG. 2 is a flowchart (No. 1) showing an example of the model region registration procedure. FIG. 3 is a flowchart (No. 1) showing an example of the work target region calculation procedure. FIG. 4 is a diagram (No. 2) schematically showing the configuration of an image acquisition system. FIG. 5 is a flowchart (No. 2) showing an example of the model region registration procedure. FIG. 6 is a flowchart (No. 2) showing an example of the work target region calculation procedure. FIG. 7 is a diagram (No. 3) schematically showing the configuration of an image acquisition system. FIG. 8 is a flowchart (No. 3) showing an example of the model region registration procedure.
  • FIG. 13 is a diagram (No. 5) schematically showing the configuration of an image acquisition system. FIG. 14 is a diagram showing an example of a neural network model. FIG. 15 is a flowchart (No. 5) showing an example of the model region registration procedure. FIG. 16 is a flowchart showing an example of the model region learning procedure.
  • FIG. 1 is a diagram schematically showing a configuration of an image acquisition system including an image acquisition device 1 according to the present embodiment.
  • This image acquisition system includes an image acquisition device 1 and a pedestal 3 on which a work 2 to be imaged is installed.
  • the image acquisition device 1 includes an image pickup device 11, a moving device 12, an image processing unit 13, a model registration unit 14, a posture estimation unit 15, and a focused image acquisition unit 16.
  • the image pickup device 11 includes a camera 111 and a trigger generator 112.
  • the camera 111 photographs the work 2 using an image sensor such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor).
  • the trigger generator 112 outputs a trigger input signal Trig indicating the imaging timing of the camera 111 to the camera 111.
  • the camera 111 photographs the work 2 based on the trigger input signal Trig from the trigger generator 112.
  • the moving device 12 includes an actuator 121.
  • the actuator 121 is configured to be able to adjust the relative distance between the work 2 and the image pickup device 11 in the optical axis direction by moving the image pickup device 11 (camera 111) in the optical axis direction.
  • In the following description, the direction along the optical axis of the imaging device 11 is also referred to as the "Z-axis direction", and the two directions that are perpendicular to the optical axis and perpendicular to each other are also referred to as the "X-axis direction" and the "Y-axis direction", respectively.
  • The moving device 12 may also have a function of moving the work 2 in the X-axis direction and the Y-axis direction (planar directions) by moving the pedestal 3 on which the work 2 is installed along those directions. In the present embodiment, it is assumed that the XY coordinate position (the position in the X-axis and Y-axis directions) of the pedestal 3 is fixed when the work 2 is imaged.
  • The moving device 12 may also have a function of adjusting the relative distance between the work 2 and the imaging device 11 in the Z-axis direction by moving the pedestal 3 on which the work 2 is installed along the Z-axis direction. In the present embodiment, it is assumed that the Z coordinate position (the position in the Z-axis direction) of the pedestal 3 is fixed when the work 2 is imaged.
  • the moving device 12 may have a function of outputting information indicating the XYZ coordinate position of the imaging device 11 to the trigger generator 112 as the moving unit position information Enc.
  • The trigger generator 112 may output the trigger input signal Trig to the camera 111 based on the moving unit position information Enc. For example, when imaging the work 2 while moving the imaging device 11 in the Z-axis direction, the trigger generator 112 may output the trigger input signal Trig to the camera 111 each time it determines, based on the moving unit position information Enc, that the Z coordinate position of the imaging device 11 has changed by a predetermined amount. In this way, a plurality of images can be captured while moving the imaging device 11 in the Z-axis direction.
  • When a plurality of images are captured while the imaging device 11 is moved in the Z-axis direction, the imaging device 11 outputs the captured images (hereinafter also referred to as the "image stack Stack") to the image processing unit 13 and the model registration unit 14.
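  • As an illustration of this capture sequence, the following Python sketch grabs one image per fixed Z pitch; the `camera.grab()`, `actuator.move_to()`, and `actuator.position()` interfaces are hypothetical stand-ins for the camera 111, the actuator 121, and the moving unit position information Enc, not APIs from the disclosure.

```python
import numpy as np

def capture_image_stack(camera, actuator, z_start, z_end, z_pitch):
    """Capture an image stack while moving the camera along the Z axis."""
    stack, z_positions = [], []
    z = z_start
    while z <= z_end:
        actuator.move_to(z)              # moving device 12 advances in Z
        # Emulates the trigger generator 112: a trigger input signal Trig
        # is issued each time Z has changed by the predetermined pitch.
        stack.append(camera.grab())
        z_positions.append(actuator.position())
        z += z_pitch
    return np.asarray(stack), np.asarray(z_positions)
```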
  • the image processing unit 13 includes a focusing position acquisition unit (first acquisition unit) 131 and an alignment unit 132.
  • The focusing position acquisition unit 131 calculates a focus index for each of the plurality of images included in the image stack Stack and, based on the magnitudes of the calculated focus indices, calculates the focused position (depth position) PosZ of the image stack Stack.
  • the region of each image for which the focus index is calculated may be the entire region of each image or a predetermined local region.
  • the "focused position PosZ” can be represented by, for example, the Z coordinate position of the image pickup apparatus 11 when the most focused image is captured among the plurality of images included in the image stack Stack. Therefore, the "focus index" used in the calculation of the focus position PosZ can be calculated based on the image of the work 2 captured by the image pickup device 11, and the relative distance between the image pickup device 11 and the work 2 in the Z-axis direction. It is desirable that the index changes according to the above and takes the maximum or minimum when the degree of focus is the highest.
  • the in-focus position acquisition unit 131 can calculate the sum of the elements of the image obtained by convolving the matrices shown in the following equations (1) and (2) for each pixel of the image as the in-focus index. ..
  • the focus position acquisition unit 131 selects the image having the maximum focus index among a plurality of images included in the image stack Stack, and the selected image is selected.
  • the Z coordinate position of the imaging device 11 at the time of imaging is calculated as the in-focus position PosZ.
  • the focusing position acquisition unit 131 outputs the calculated focusing position position PosZ to the alignment unit 132.
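  • The computation can be sketched as follows in Python (NumPy/SciPy assumed). Since equations (1) and (2) are not reproduced in this text, simple horizontal and vertical second-derivative kernels, and an absolute-value sum, are used here as stand-ins for the actual focus index.

```python
import numpy as np
from scipy.ndimage import convolve

# Stand-ins for the kernels of equations (1) and (2), which are not
# reproduced here; second-derivative (Laplacian-like) kernels are assumed.
K1 = np.array([[0, 0, 0], [1, -2, 1], [0, 0, 0]], dtype=float)
K2 = K1.T

def focus_index(image):
    """Sum over the image of the (absolute) responses to both kernels."""
    img = image.astype(float)
    return np.abs(convolve(img, K1)).sum() + np.abs(convolve(img, K2)).sum()

def focused_position(stack, z_positions):
    """Return PosZ: the Z position at which the focus index is maximum."""
    indices = [focus_index(img) for img in stack]
    return z_positions[int(np.argmax(indices))]
```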
  • The alignment unit 132 selects the image corresponding to the focused position PosZ from the image stack Stack acquired from the imaging device 11, and calculates the work reference region WRoiPos in the selected image.
  • The "work reference region WRoiPos" is the region corresponding to the model reference region MRoiPos in the image stack Stack, and is specified by an XYZ coordinate position. The "model reference region MRoiPos" is a region including a model reference image (for example, a boundary of a pattern on the work 2) that serves as a reference for alignment, and is likewise specified by an XYZ coordinate position. The model reference region MRoiPos is registered in advance by the model reference registration unit 142, which will be described later.
  • The alignment unit 132 performs a process (hereinafter also referred to as "plane alignment") in which, while shifting the image in the model reference region MRoiPos in the XY-axis directions with respect to the image of the image stack Stack, it sequentially calculates the difference between each region of the image stack Stack and the image in the model reference region MRoiPos, and identifies the XY coordinate position of the region where the calculated difference is minimum as the XY coordinate position of the work reference region WRoiPos.
  • The alignment unit 132 combines the XY coordinate position of the work reference region WRoiPos identified by the plane alignment with the focused position PosZ (Z coordinate position), and outputs the result to the posture estimation unit 15 as a signal indicating the work reference region WRoiPos (XYZ coordinate position).
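  • A direct (unoptimized) Python sketch of this plane alignment: the model reference image is slid over the work image and the sum of squared differences is minimized. A real implementation would typically restrict the search to the calculation region described later.

```python
import numpy as np

def plane_alignment(image, template):
    """Exhaustive XY search minimizing the sum of squared differences
    between the model reference image (template) and each equally sized
    region of the work image; returns the best XY coordinate position."""
    ih, iw = image.shape
    th, tw = template.shape
    tpl = template.astype(float)
    best_xy, best_diff = (0, 0), np.inf
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            region = image[y:y + th, x:x + tw].astype(float)
            diff = np.sum((region - tpl) ** 2)
            if diff < best_diff:
                best_diff, best_xy = diff, (x, y)
    return best_xy  # XY coordinate position of the work reference region
```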
  • The model registration unit 14 includes a model target registration unit (first registration unit) 141 and a model reference registration unit (second registration unit) 142.
  • the model target registration unit 141 and the model reference registration unit 142 are provided, for example, in a computer (not shown).
  • The model target registration unit 141 registers, as the "model target region MRoiPat", the XYZ coordinate position of the region designated by the operator as the region for which the focused image IFocus is to be acquired for evaluation of the work 2.
  • For example, the operator can designate the model target region MRoiPat by operating the computer to perform an operation such as enclosing the region to be designated on an image displayed on a display.
  • The model reference registration unit 142 registers, as the above-mentioned "model reference region MRoiPos", the XYZ coordinate position of a region on the work 2 designated as a region including a model reference image.
  • Similarly, the operator can designate the model reference region MRoiPos by operating the computer to perform an operation such as enclosing the region to be designated on the image displayed on the display.
  • The image displayed on the display when the model target region MRoiPat and the model reference region MRoiPos are designated may be an image selected by the operator from the plurality of images included in the image stack Stack, or may be an image automatically selected by the computer.
  • The posture estimation unit 15 estimates the posture of the work 2 imaged as the evaluation target, based on the work reference regions WRoiPos calculated by the alignment unit 132 and the model reference regions MRoiPos registered in advance by the model reference registration unit 142.
  • In order to estimate the posture of the work 2 three-dimensionally, it is desirable that at least three work reference regions WRoiPos exist. Therefore, in the present embodiment, three or more model reference regions MRoiPos are registered in advance by the model reference registration unit 142, and the above-described alignment unit 132 outputs three or more work reference regions WRoiPos, corresponding to the respective model reference regions MRoiPos, to the posture estimation unit 15.
  • The posture estimation unit 15 includes a work posture estimation unit 151 and a focused region output unit 152.
  • The work posture estimation unit 151 estimates the posture of the imaged work 2 based on the work reference regions WRoiPos and the model reference regions MRoiPos, for example by assuming the relationship of the following equation (3): (x', y', z')^T = A (x, y, z, 1)^T.
  • In equation (3), x, y, and z represent the XYZ coordinate position of an arbitrary model reference region MRoiPos, and x', y', and z' represent the XYZ coordinate position of the corresponding work reference region WRoiPos.
  • Each component (a11, a12, ..., a34) of the 3-by-4 matrix A on the right side of equation (3) is a constant, and the matrix A represents the three-dimensional posture P of the work 2.
  • By calculating the components of the matrix A from the positional relationships between the model reference regions MRoiPos and the work reference regions WRoiPos, the posture P of the work 2 can be estimated three-dimensionally.
  • The above estimation method is merely an example, and the method of estimating the posture of the work 2 is not limited to it.
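  • Although the disclosure leaves the computation of the matrix A open, one standard choice is a least-squares fit over the reference-region correspondences, sketched below in Python. Note that a general 3-by-4 matrix has twelve unknowns, so four or more correspondences in general position are needed for a unique fit; with exactly three, additional constraints (for example, rigidity of the transform) would be required. This is an assumed method, not one fixed by the disclosure.

```python
import numpy as np

def estimate_posture(model_points, work_points):
    """Least-squares fit of the 3x4 matrix A of equation (3) so that
    (x', y', z')^T ~ A (x, y, z, 1)^T for each correspondence.

    model_points, work_points: (N, 3) arrays holding the XYZ positions of
    the model reference regions MRoiPos and work reference regions WRoiPos.
    """
    P = np.hstack([model_points, np.ones((len(model_points), 1))])  # (N, 4)
    A_T, *_ = np.linalg.lstsq(P, work_points, rcond=None)  # solves P A^T ~ W
    return A_T.T  # the 3x4 posture matrix A (posture P of the work)
```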
  • The focused region output unit 152 calculates, as the "work target region WRoiPat", the XYZ coordinate position of the region in the image stack Stack corresponding to the model target region MRoiPat, based on the posture P of the work 2 estimated by the work posture estimation unit 151 and the model target region MRoiPat registered in advance by the model target registration unit 141.
  • The work target region WRoiPat is the partial region, within the entire image of the work 2, for which the focused image IFocus is to be acquired, that is, the region of the work 2 to be evaluated.
  • The focused region output unit 152 outputs the calculated work target region WRoiPat to the outside. The work target region WRoiPat output to the outside is used, for example, for positioning of the work 2. The focused region output unit 152 also outputs the calculated work target region WRoiPat to the focused image acquisition unit 16.
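  • Given the matrix A estimated above, mapping the model target region MRoiPat into the work image stack reduces to applying equation (3) to the region's coordinates, as in the following sketch; representing a region by its XYZ corner points is an assumption of this illustration.

```python
import numpy as np

def work_target_region(A, mroipat_corners):
    """Map the XYZ corners of the model target region MRoiPat through the
    posture matrix A of equation (3) to obtain the corners of the work
    target region WRoiPat in the work image stack."""
    corners = np.hstack([mroipat_corners, np.ones((len(mroipat_corners), 1))])
    return corners @ A.T  # (N, 3) XYZ corners of WRoiPat
```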
  • The focused image acquisition unit 16 identifies, by the same method as the image processing unit 13, the image most focused on the work target region WRoiPat from the plurality of images included in the image stack Stack, acquires the image in the work target region WRoiPat of the identified image as the focused image IFocus, and outputs it to the outside.
  • The focused image IFocus is used, for example, for inspection of the work 2.
  • The focused image acquisition unit 16 may be provided outside the image acquisition device 1. When the focused image IFocus is not used for the evaluation of the work 2, the focused image acquisition unit 16 can be omitted.
  • FIG. 2 is a flowchart showing an example of the procedure for registering the model regions (the model target region MRoiPat and the model reference regions MRoiPos). The flowchart of FIG. 2 is started in a state where the work 2 to be model-registered is set in the image acquisition device 1.
  • First, a process of acquiring the image stack Stack of the work 2 to be model-registered (hereinafter also referred to as the "model image stack MStack") is executed (step S10). The model image stack MStack is acquired by capturing a plurality of images with the imaging device 11 while moving the imaging device 11 in the Z-axis direction using the moving device 12.
  • Next, the process of registering the model target region MRoiPat is executed (step S20). The model target region MRoiPat is registered by the operator using the model target registration unit 141 to perform an operation such as enclosing, on the model image stack MStack, the region to be registered as the model target region MRoiPat.
  • Next, the process of registering the model reference regions MRoiPos is executed (step S30). Each model reference region MRoiPos is registered by the operator using the model reference registration unit 142 to perform an operation such as enclosing, on the model image stack MStack, the region to be registered as the model reference region MRoiPos.
  • FIG. 3 is a flowchart showing an example of the procedure for calculating the work target region WRoiPat (the partial region of the entire image of the work 2 for which the focused image IFocus is to be acquired). The flowchart of FIG. 3 is started in a state where the work 2 to be evaluated is set in the image acquisition device 1.
  • First, a process of acquiring the image stack Stack of the work 2 to be evaluated (hereinafter also referred to as the "work image stack WStack") is executed (step S11). The work image stack WStack is acquired by capturing a plurality of images with the imaging device 11 while moving the imaging device 11 in the Z-axis direction using the moving device 12.
  • Next, the focusing position acquisition unit 131 executes a process of calculating the focused positions PosZ of the work image stack WStack (step S40). In this process, for example, three or more calculation regions, each including one of the three or more model reference regions MRoiPos registered in advance by the model reference registration unit 142, are extracted from the images in the work image stack WStack, and the focused position PosZ is calculated for each of the extracted calculation regions. As a result, the same number (three or more) of focused positions PosZ as the number of model reference regions MRoiPos is calculated.
  • Each calculation region extracted in step S40 is set, for example, to a region expanded by a predetermined amount centered on the corresponding model reference region MRoiPos, on the assumption that the set position of the work 2 at the time of model registration and the set position of the work 2 to be evaluated do not differ significantly. By doing so, each model reference region MRoiPos can be included in the corresponding calculation region.
  • Next, the alignment unit 132 performs a process of calculating the work reference regions WRoiPos (step S50). In this process, a work reference region WRoiPos is calculated for each focused position PosZ calculated in step S40. Specifically, the above-described plane alignment is performed on the calculation region of each focused position PosZ to identify the XY coordinate position of the work reference region WRoiPos, and the signal obtained by combining the focused position PosZ with the identified XY coordinate position is defined as the work reference region WRoiPos (XYZ coordinate position).
  • Next, the work posture estimation unit 151 performs a process of estimating the posture of the imaged work 2 (step S60). In this process, each component of the matrix A in the above equation (3) is calculated based on the combinations of the positional relationships between the three or more model reference regions MRoiPos and the three or more work reference regions WRoiPos, whereby the posture P of the work 2 is estimated three-dimensionally.
  • Next, the focused region output unit 152 performs a process of calculating the work target region WRoiPat (step S70). In this process, the region corresponding to the model target region MRoiPat in the work image stack WStack is calculated as the work target region WRoiPat in consideration of the posture P of the work 2 estimated in step S60.
  • At this time, the operator may be notified by using a buzzer or the like (not shown).
  • Finally, the focused image acquisition unit 16 performs a process of acquiring, as the focused image IFocus, the image most focused on the work target region WRoiPat (step S71).
  • As described above, according to the image acquisition device 1 of the first embodiment, the work target region WRoiPat and its focused image IFocus can be acquired by estimating the focused positions PosZ, the work reference regions WRoiPos, and the work posture P only for the model reference regions MRoiPos registered in advance. That is, the work target region WRoiPat and its focused image IFocus can be acquired without generating an omnifocal image, and therefore with a smaller amount of calculation than when an omnifocal image is generated.
  • Since the image acquisition device 1 does not generate an omnifocal image, the amount of calculation is significantly reduced. As a result, the focused image IFocus of the work 2 to be evaluated can be acquired at high speed.
  • Embodiment 2. In the above-described first embodiment, it is assumed that the set position of the work 2 at the time of model registration and the set position of the work 2 to be evaluated do not differ significantly. In reality, however, the two set positions may differ significantly.
  • Therefore, in the second embodiment, a reference region for alignment within the imaging field of view (hereinafter also referred to as the "field-of-view alignment region") is registered in advance, alignment (coarse alignment) with respect to the field-of-view alignment region is performed, and then the same processing as in the first embodiment is performed. As a result, it is possible to cope with a case where the set position of the work 2 at the time of model registration and the set position of the work 2 to be evaluated differ significantly.
  • FIG. 4 is a diagram schematically showing the configuration of an image acquisition system including the image acquisition device 1A according to the second embodiment.
  • The image acquisition device 1A is obtained by adding a model field-of-view registration unit (third registration unit) 143 inside the model registration unit 14 and adding a field-of-view image processing unit 17 to the image acquisition device 1 shown in FIG. 1. Since the other structures, functions, and processes are the same as those of the image acquisition device 1 shown in FIG. 1, detailed description will not be repeated here.
  • The model field-of-view registration unit 143 is provided in, for example, a computer (not shown) and registers the field-of-view alignment region. The operator can register the field-of-view alignment region by operating the computer to perform an operation such as enclosing the region to be designated on the image displayed on the display.
  • The field-of-view alignment region includes the calculation region MRoiVFocus of the field-of-view focused position at the time of model registration and the model field-of-view alignment region MRoiVPat. The model field-of-view registration unit 143 outputs the registered field-of-view alignment region to the field-of-view image processing unit 17.
  • The field-of-view image processing unit 17 includes a field-of-view focused position acquisition unit (second acquisition unit) 171 and a field-of-view alignment unit 172. The field-of-view focused position acquisition unit 171 calculates the field-of-view focused position VFocus in the same manner as the focusing position acquisition unit 131, and outputs it to the field-of-view alignment unit 172. The field-of-view alignment unit 172 outputs the field-of-view position VPos (the field-of-view position deviation amount OffsetXYZ described later) in the same manner as the alignment unit 132.
  • FIG. 5 is a flowchart showing an example of the procedure for registering the model area according to the second embodiment.
  • The flowchart of FIG. 5 is obtained by adding steps S80 and S81 to the flowchart of FIG. 2. Since the other steps (steps having the same numbers as those shown in FIG. 2) have already been described, detailed description will not be repeated here.
  • First, the process of registering the field-of-view alignment region is performed by the model field-of-view registration unit 143 (step S80). In this process, the calculation region MRoiVFocus of the field-of-view focused position at the time of model registration and the model field-of-view alignment region MRoiVPat are registered.
  • Next, a process of registering the model field-of-view focused position MVFocus is performed (step S81). In this process, the focus index is calculated using the field-of-view focused position acquisition unit 171 for the calculation region MRoiVFocus of each image in the model image stack MStack, and the Z coordinate position of the image having the maximum focus index is registered as the model field-of-view focused position MVFocus.
  • FIG. 6 is a flowchart showing an example of the calculation procedure of the work target area WRoiPat according to the second embodiment.
  • the flowchart of FIG. 6 is obtained by changing steps S40 and S50 of FIG. 3 to steps S41 and S51, respectively, and further adding steps S90 and S100. Since the other steps (steps having the same numbers as the steps shown in FIG. 3 above) have already been described, detailed description will not be repeated here.
  • After the work image stack WStack is acquired (step S11), the field-of-view focused position acquisition unit 171 performs a process of calculating the work field-of-view focused position WVFocus (step S90). In this process, the focus index is calculated for the calculation region MRoiVFocus of each image in the work image stack WStack, and the Z coordinate position of the image determined to have the highest degree of focus based on the focus index is calculated as the work field-of-view focused position WVFocus.
  • Next, the field-of-view alignment unit 172 performs the field-of-view alignment (step S100). In this process, the work field-of-view alignment region WRoiVPat corresponding to the model field-of-view alignment region MRoiVPat is calculated by adding the depth position deviation amount OffsetZ of the work field-of-view focused position WVFocus with respect to the model field-of-view focused position MVFocus, and the amount of misalignment between the work field-of-view alignment region WRoiVPat and the model field-of-view alignment region MRoiVPat is calculated as the field-of-view position deviation amount OffsetXYZ.
  • This field-of-view position deviation amount OffsetXYZ corresponds to the three-dimensional position deviation between the set position of the work 2 at the time of model registration and the set position of the work 2 to be evaluated. The above-mentioned field-of-view position VPos is synonymous with the field-of-view position deviation amount OffsetXYZ calculated in step S100.
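  • Reusing the focused_position() and plane_alignment() sketches above, the coarse alignment of step S100 can be illustrated as follows; the argument layout is an assumption of this sketch.

```python
import numpy as np

def field_of_view_offset(work_stack, work_z, model_template,
                         mv_focus, mroivpat_xy):
    """Return OffsetXYZ: the 3D deviation between the set position of the
    work at model registration time and at evaluation time."""
    wv_focus = focused_position(work_stack, work_z)        # step S90
    offset_z = wv_focus - mv_focus                         # OffsetZ (depth)
    # Plane alignment in the image at the work field-of-view focused position:
    idx = int(np.argmin(np.abs(work_z - wv_focus)))
    wroivpat_xy = plane_alignment(work_stack[idx], model_template)
    return np.array([wroivpat_xy[0] - mroivpat_xy[0],
                     wroivpat_xy[1] - mroivpat_xy[1],
                     offset_z])                            # OffsetXYZ
```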
  • Next, the focusing position acquisition unit 131 executes a process of calculating the focused positions PosZ of the work image stack WStack (step S41). In this process, the work reference regions WRoiPos corresponding to the model reference regions MRoiPos are determined in consideration of the field-of-view position deviation amount OffsetXYZ calculated in step S100, and the focused position PosZ is calculated for each work reference region WRoiPos as in step S40 of FIG. 3.
  • Next, the alignment unit 132 performs a process of calculating the work reference regions WRoiPos (step S51). In this process, the work reference regions WRoiPos are calculated in the same manner as in step S50 of FIG. 3, with the field-of-view position deviation amount OffsetXYZ calculated in step S100 added.
  • Thereafter, the process of estimating the posture of the work 2 described with reference to FIG. 3 (step S60), the process of calculating the work target region WRoiPat (step S70), and the process of acquiring the focused image IFocus of the work target region WRoiPat (step S71) are performed.
  • As described above, according to the image acquisition device 1A of the second embodiment, the work target region WRoiPat of the work 2 to be evaluated and its focused image IFocus can be acquired at high speed, as in the above-described first embodiment. Further, the work target region WRoiPat is calculated with the field-of-view position deviation amount OffsetXYZ between the set position of the work 2 at the time of model registration and the set position of the work 2 to be evaluated taken into account. Therefore, even when those two set positions differ significantly, the work target region WRoiPat and its focused image IFocus can be acquired.
  • Embodiment 3. In the above-described first and second embodiments, it is assumed that there is a pattern that can be registered as an alignment reference within the imaging field of view obtained when the imaging device 11 or the pedestal 3 is moved in the Z-axis direction. In reality, however, when the size of the work 2 in the XY-axis directions is large, there may be no pattern that can be registered as an alignment reference within a single imaging field of view.
  • Therefore, in the third embodiment, the work 2 is imaged in a plurality of imaging fields of view shifted in the XY-axis directions by moving the imaging device 11 or the pedestal 3 in the XY-axis directions as well as the Z-axis direction, and an alignment reference is registered in the image of one of the plurality of imaging fields of view.
  • FIG. 7 is a diagram schematically showing the configuration of an image acquisition system including the image acquisition device 1B according to the third embodiment.
  • The image acquisition device 1B is obtained by adding an imaging position recording unit 122 to the image acquisition device 1A shown in FIG. 4. Since the other structures, functions, and processes are the same as those of the image acquisition device 1A, detailed description will not be repeated here.
  • The imaging position recording unit 122 records, as the imaging position signal VPos, the moving unit position information Enc at the time the work 2 is imaged, and outputs the recorded imaging position signal VPos to the work posture estimation unit 151.
  • In the third embodiment, it is assumed that the work 2 is imaged in a plurality of imaging fields of view shifted in the XY-axis directions by moving the imaging device 11 or the pedestal 3 in the XY-axis directions in addition to the Z-axis direction.
  • The work posture estimation unit 151 has a function of receiving the imaging position signal VPos from the imaging position recording unit 122, in addition to the functions described in the first and second embodiments.
  • FIG. 8 is a flowchart showing an example of the procedure for registering the model area according to the third embodiment.
  • The flowchart of FIG. 8 is obtained by adding steps S110 and S120 to the flowchart of FIG. 5. Since the other steps (steps having the same numbers as those shown in FIG. 5) have already been described, detailed description will not be repeated here.
  • First, it is determined whether or not imaging of all the fields of view in the XY-axis directions has been completed (step S110).
  • If it has not been completed (NO in step S110), the imaging position recording unit 122 performs a process of recording the current imaging position signal VPos (step S120). After that, the processes of steps S10 to S81 shown in FIG. 5 are performed. The process then returns to step S110, and the processes of steps S110, S120, and S10 to S81 are repeated for each field of view in the XY-axis directions until imaging of all the fields of view is completed. When imaging of all the fields of view in the XY-axis directions is completed (YES in step S110), the model registration process ends.
  • FIG. 9 is a flowchart showing an example of the calculation procedure of the work target area WRoiPat according to the third embodiment.
  • Among the steps shown in FIG. 9, the steps having the same numbers as those shown in FIG. 6 have already been described, and detailed description thereof will not be repeated here.
  • First, it is determined whether or not imaging of all the fields of view in the XY-axis directions has been completed (step S111).
  • If it has not been completed (NO in step S111), the imaging position recording unit 122 performs a process of recording the current imaging position signal VPos (step S121). After that, the processes of steps S11 to S51 shown in FIG. 6 are performed. The process then returns to step S111, and the processes of steps S111, S121, and S11 to S51 are repeated for each field of view until imaging of all the fields of view in the XY-axis directions is completed.
  • When imaging of all the fields of view is completed (YES in step S111), the process of estimating the posture of the work 2 is performed (step S61). In this process, the posture of the work 2 is estimated based on the focused positions PosZ of each field of view calculated in step S41, the work reference regions WRoiPos of each field of view calculated in step S51, and the imaging position signal VPos of each field of view recorded in step S121.
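  • One way to realize this multi-field estimation is to shift each field's reference regions into a common coordinate system using the recorded stage position, pool all correspondences, and fit the posture once, as sketched below; it reuses the estimate_posture() sketch above, and the common-coordinate convention is an assumption of this illustration.

```python
import numpy as np

def posture_from_multiple_fields(mroipos_per_field, wroipos_per_field,
                                 vpos_per_field):
    """Sketch of step S61: offset each field's work reference regions
    WRoiPos by its recorded imaging position signal VPos, pool all
    correspondences, and fit the posture matrix A over the pooled set.

    mroipos_per_field, wroipos_per_field: lists of (N_i, 3) arrays.
    vpos_per_field: list of length-3 arrays (per-field stage positions).
    """
    work_pts = np.vstack([w + v for w, v in
                          zip(wroipos_per_field, vpos_per_field)])
    model_pts = np.vstack(mroipos_per_field)
    return estimate_posture(model_pts, work_pts)
```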
  • Thereafter, the process of calculating the work target region WRoiPat (step S70) and the process of acquiring the focused image IFocus (step S71) are performed.
  • As described above, according to the image acquisition device 1B of the third embodiment, the work target region WRoiPat of the work 2 to be evaluated and its focused image IFocus can be acquired at high speed, as in the above-described first and second embodiments. Further, the image acquisition device 1B images the work 2 in a plurality of imaging fields of view shifted in the XY-axis directions and registers an alignment reference in the image of one of the plurality of fields of view. Therefore, even when there is no structure that can be registered as an alignment reference within a single imaging field of view, the work target region WRoiPat and its focused image IFocus can be acquired.
  • Embodiment 4. In the above-described first, second, and third embodiments, it is assumed that there is no positional deviation between the alignment reference of the work 2 and the region imaged by the imaging device 11. However, when the work 2 is a module assembled from a plurality of parts, a slight misalignment may occur between the alignment reference and the imaging region due to the assembly error of the work 2.
  • Therefore, in the fourth embodiment, an alignment reference for correcting the misalignment is registered for each imaging region.
  • FIG. 10 is a diagram schematically showing the configuration of an image acquisition system including the image acquisition device 1C according to the fourth embodiment.
  • The image acquisition device 1C is obtained by adding a site reference registration unit 144 inside the model registration unit 14 and adding a site image processing unit 18 to the image acquisition device 1 shown in FIG. 1. Since the other structures, functions, and processes are the same as those of the image acquisition device 1 shown in FIG. 1, detailed description will not be repeated here.
  • The site reference registration unit 144 is provided in, for example, a computer (not shown), and registers a "site alignment region" that serves as a reference for alignment of each part of the work 2. The operator registers the site alignment region by operating the computer to perform an operation such as enclosing the region to be designated on the image displayed on the display.
  • The site alignment region includes the calculation region MRoiPFocus of the site focused position at the time of model registration and the model site alignment region MRoiPPat. The site reference registration unit 144 outputs the registered site alignment region to the site image processing unit 18.
  • The site image processing unit 18 includes a site focused position acquisition unit 181 and a site alignment unit 182. The site focused position acquisition unit 181 calculates the site focused position PFocus by the same method as the focusing position acquisition unit 131, and outputs it to the site alignment unit 182. The site alignment unit 182 calculates the site position PPos (the site position deviation amount POffsetXYZ described later) in the same manner as the alignment unit 132, and outputs it to the focused region output unit 152.
  • The focused region output unit 152 according to the fourth embodiment has a function of receiving the site position deviation amount POffsetXYZ, in addition to the functions of the focused region output unit 152 according to the above-described first to third embodiments.
  • FIG. 11 is a flowchart showing an example of the procedure for registering the model area according to the fourth embodiment.
  • The flowchart of FIG. 11 is obtained by adding steps S130 and S131 to the flowchart of FIG. 2. Since the other steps (steps having the same numbers as those shown in FIG. 2) have already been described, detailed description will not be repeated here.
  • After the acquisition of the model image stack MStack (step S10), the registration of the model target region MRoiPat (step S20), and the registration of the model reference regions MRoiPos (step S30) are performed, the process of registering the site alignment region is performed (step S130). In this process, the site reference registration unit 144 registers the calculation region MRoiPFocus of the site focused position at the time of model registration and the model site alignment region MRoiPPat.
  • Next, the process of registering the model site focused position MPFocus is performed (step S131). In this process, the focus index is calculated using the site focused position acquisition unit 181 for the calculation region MRoiPFocus of each image in the model image stack MStack, and the obtained focused position (the Z coordinate position of the image with the maximum focus index) is registered as the model site focused position MPFocus.
  • FIG. 12 is a flowchart showing an example of the calculation procedure of the work target area WRoiPat according to the fourth embodiment.
  • the flowchart of FIG. 12 is obtained by adding steps S140 and S150 to the flowchart of FIG. 3 described above. Since the other steps (steps having the same numbers as the steps shown in FIG. 3 above) have already been described, detailed description will not be repeated here.
  • First, a process of calculating the work site focused position WPFocus is performed (step S140). In this process, the site focused position acquisition unit 181 calculates the work site focused position WPFocus for the region registered as the calculation region MRoiPFocus of the site focused position in the site reference registration unit 144.
  • Next, for the region registered as the model site alignment region MRoiPPat by the site reference registration unit 144, the work site alignment region WRoiPPat is calculated based on the depth position that takes into account the site depth position deviation amount POffsetZ of the work site focused position WPFocus with respect to the model site focused position MPFocus, and the amount of misalignment between the work site alignment region WRoiPPat and the model site alignment region MRoiPPat is calculated as the site position deviation amount POffsetXYZ (step S150).
  • Thereafter, the process of calculating the focused positions PosZ (step S40), the process of calculating the work reference regions WRoiPos (step S50), and the process of estimating the posture P of the work 2 (step S60) described with reference to FIG. 3 are performed.
  • Then, in step S70, the work target region WRoiPat in the work image stack WStack is calculated based on the posture P of the work 2 estimated in step S60, the position of the model target region MRoiPat, and the site position deviation amount POffsetXYZ calculated in step S150.
  • Finally, the process of acquiring the focused image IFocus of the work target region WRoiPat described with reference to FIG. 3 (step S71) is performed.
  • As described above, according to the image acquisition device 1C of the fourth embodiment, the work target region WRoiPat of the work 2 to be evaluated and its focused image IFocus can be acquired at high speed, as in the above-described first, second, and third embodiments.
  • Further, the image acquisition device 1C registers an alignment reference for correcting the misalignment for each imaging region. Therefore, when imaging a module or the like assembled from a plurality of parts, the assembly error can be absorbed and a focused image can be acquired without being affected by the positional deviation of each part.
  • Embodiment 5. In the above-described first to fourth embodiments, it is assumed that the operator performs the work of registering a model region for each region for which a focused image is to be acquired. However, when there are a large number of such regions on the work 2, there is a concern that the registration work will take an enormous amount of time.
  • Therefore, in the fifth embodiment, the model regions are registered based on candidates for the imaging region created by a machine learning function from the images of imaging regions registered in the past. This makes it possible to register the model regions efficiently even when there are a large number of regions for which a focused image is to be acquired.
  • FIG. 13 is a diagram schematically showing the configuration of an image acquisition system including the image acquisition device 1D according to the fifth embodiment.
  • The image acquisition device 1D is obtained by adding a learning unit 19 to the image acquisition device 1 shown in FIG. 1. Since the other structures, functions, and processes are the same as those of the image acquisition device 1 shown in FIG. 1, detailed description will not be repeated here.
  • the learning unit 19 includes a sample image generation unit 191, a machine learning unit 192, a learning data recording unit 193, an image cutting unit 194, and a determination unit 195.
  • the learning unit 19 is not necessarily limited to being built in the image acquisition device 1D, and may exist outside the image acquisition device 1D (for example, a cloud server).
  • The sample image generation unit 191 outputs combinations of a sample image MRoiPatImg, which is an image of the model target region MRoiPat in the image stack Stack, and a boolean value isMRoiPat indicating whether the sample image MRoiPatImg is a model target region.
  • When the boolean value isMRoiPat is false, the sample image MRoiPatImg is taken from a region of the same size that does not include the model target region MRoiPat.
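  • A minimal sketch of this sample generation, assuming a region is given as (x, y, w, h) in one image of the stack; the strategy for drawing the negative sample is an assumption beyond the stated requirement that it have the same size and not include the model target region.

```python
import numpy as np

def generate_samples(stack_image, mroipat, rng=None):
    """Return (MRoiPatImg, isMRoiPat) pairs: one positive sample taken at
    the model target region, one same-size negative sample elsewhere."""
    rng = rng or np.random.default_rng()
    x, y, w, h = mroipat
    samples = [(stack_image[y:y + h, x:x + w], True)]   # positive sample
    ih, iw = stack_image.shape
    while True:  # draw a same-size region that does not overlap MRoiPat
        nx = int(rng.integers(0, iw - w))
        ny = int(rng.integers(0, ih - h))
        if nx + w <= x or nx >= x + w or ny + h <= y or ny >= y + h:
            samples.append((stack_image[ny:ny + h, nx:nx + w], False))
            return samples
```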
  • The machine learning unit 192 learns the probability MRoiPatP, which indicates the likelihood of being a model target region, from the combinations of the sample image MRoiPatImg and its boolean value isMRoiPat, and records the learning result in the learning data recording unit 193 as learning data C. The machine learning unit 192 also functions as inference means: it receives as inputs the learning data C recorded in the learning data recording unit 193 and partial image data RoiImg from the image cutting unit 194, and outputs the probability MRoiPatP indicating the likelihood that the partial image data RoiImg is a model target region.
  • The machine learning unit 192 is, for example, a device that performs so-called supervised learning according to a neural network model.
  • Supervised learning is a method in which a large number of data sets of inputs and their output results (labels) are given to a learning device, so that it learns the features contained in those data sets and estimates the output result from an input.
  • a neural network is composed of an input layer composed of a plurality of neurons, an intermediate layer (hidden layer) composed of a plurality of neurons, and an output layer composed of a plurality of neurons.
  • the intermediate layer may be one layer or two or more layers.
  • FIG. 14 is a diagram showing an example of a neural network model.
  • In the model of FIG. 14, when values are input to the input layer (X1 to X3), they are multiplied by the weights W1 (w11 to w16) and input to the intermediate layer (Y1 and Y2), and the results are further multiplied by the weights W2 (w21 to w26) and output from the output layer (Z1 to Z3). The output result depends on the values of the weights W1 and W2.
  • The neural network of the machine learning unit 192 learns the above-mentioned probability MRoiPatP by so-called supervised learning according to a data set created from the combinations of the sample image MRoiPatImg and the boolean value isMRoiPat, and records the learning result in the learning data recording unit 193 as the learning data C.
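  • As a concrete illustration of the FIG. 14 model only (three inputs, two hidden neurons, three outputs), the forward pass below shows how the output depends on the weights W1 and W2; the sigmoid activation and random initial weights are assumptions, since the figure specifies only the layer sizes and weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))   # weights w11..w16: input layer -> hidden
W2 = rng.normal(size=(2, 3))   # weights w21..w26: hidden -> output layer

def forward(x):
    """Forward pass: inputs X1..X3 -> hidden Y1, Y2 -> outputs Z1..Z3."""
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    y = sigmoid(x @ W1)        # intermediate layer (Y1, Y2)
    return sigmoid(y @ W2)     # output layer (Z1 to Z3)

print(forward(np.array([0.2, 0.5, 0.1])))  # changes with W1 and W2
```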
  • the learning data recording unit 193 has a function of storing the learning data C which is the learning result of the machine learning unit 192.
  • The learning of the machine learning unit 192 may be performed using data sets created for a plurality of types of work 2.
  • The machine learning unit 192 may acquire data sets from a plurality of works 2 used at the same site, or may learn the probability MRoiPatP using data sets collected from a plurality of machine tools operating independently at different sites. Further, a work 2 from which data sets are collected can be added to the targets midway or, conversely, removed from the targets.
  • the image cutting unit 194 cuts out partial image data RoiImg based on the image stack Stack and outputs it to the machine learning unit 192.
  • For example, the partial image data RoiImg may be cut out at equal pitches and equal sizes in the vertical and horizontal directions from each image constituting the image stack Stack.
  • The determination unit 195 determines whether or not the partial image data RoiImg is a candidate for the model target region based on the probability MRoiPatP from the machine learning unit 192, and outputs the partial image data RoiImg determined to be a candidate as the estimated model target region MRoiPatE. For example, the estimated model target region MRoiPatE may be output as the result of comparing the probability MRoiPatP with a preset threshold value Th.
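  • The cutting and determination steps together can be sketched as follows; the tile size, pitch, and threshold are illustrative assumptions, and predict_p stands in for inference using the learning data C.

```python
import numpy as np

def estimate_model_targets(stack, predict_p, tile=64, pitch=64, th=0.9):
    """Cut RoiImg tiles of equal size at an equal pitch from every image
    of the stack (image cutting unit 194), score each with the learned
    model (probability MRoiPatP), and keep tiles whose probability
    exceeds the threshold Th (determination unit 195)."""
    candidates = []  # estimated model target regions MRoiPatE
    for z, image in enumerate(stack):
        ih, iw = image.shape
        for y in range(0, ih - tile + 1, pitch):
            for x in range(0, iw - tile + 1, pitch):
                roi_img = image[y:y + tile, x:x + tile]
                if predict_p(roi_img) > th:    # compare MRoiPatP with Th
                    candidates.append((x, y, z))
    return candidates
```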
  • The model target registration unit 141 receives the image stack Stack and the estimated model target region MRoiPatE as inputs, and outputs the model target region MRoiPat.
  • The model target registration unit 141 is provided in a computer (not shown). If there is an estimated model target region MRoiPatE, it is presented to the operator, and the operator operates the computer to register the estimated model target region MRoiPatE as the model target region MRoiPat, or modifies the estimated model target region MRoiPatE on the screen on which the image is displayed and registers the result as the model target region MRoiPat.
  • FIG. 15 is a flowchart showing an example of the procedure for registering the model area according to the fifth embodiment.
  • The flowchart of FIG. 15 is obtained by adding steps S160, S170, and S180 to the flowchart of FIG. 2. Since the other steps (steps having the same numbers as those shown in FIG. 2) have already been described, detailed description will not be repeated here.
  • First, step S160 is performed after step S10.
  • In step S160, it is confirmed whether or not learning data C usable by the machine learning unit 192 exists in the learning data recording unit 193. If there is no learning data C (NO in step S160), the model regions are registered from step S20 onward, that is, in the same manner as the model region registration procedure of the first embodiment.
  • If there is learning data C (YES in step S160), step S170 is executed.
  • In step S170, partial image data RoiImg is cut out by the image cutting unit 194 from the image stack Stack acquired in step S10, and is input to the machine learning unit 192 together with the learning data C.
  • The machine learning unit 192 calculates the probability MRoiPatP that the partial image data RoiImg is a model target region. Based on this probability MRoiPatP, the determination unit 195 determines whether or not the partial image data RoiImg is a candidate for the model target region, and outputs the partial image data RoiImg determined to be a candidate to the model target registration unit 141 as the estimated model target region MRoiPatE.
  • In the subsequent step S20, the operator registers the region for which the focused image is to be acquired as the model target region MRoiPat while referring to the estimated model target region MRoiPatE.
  • Step S180 is performed after the completion of step S30.
  • In step S180, in order to carry out the model region learning procedure shown in FIG. 16 described later, the combination of the model image stack MStack and the model target region MRoiPat is recorded in the sample image generation unit 191.
  • FIG. 16 is a flowchart showing an example of the learning procedure of the model area in the fifth embodiment.
  • First, in step S200, the sample image generation unit 191 outputs combinations of the sample image MRoiPatImg and the boolean value isMRoiPat based on the combinations of the model image stack MStack and the model target region MRoiPat recorded in advance in step S180 of FIG. 15.
  • Next, the machine learning unit 192 learns from the data set consisting of the group of combinations of the sample image MRoiPatImg and the boolean value isMRoiPat, so as to be able to infer the probability MRoiPatP that a given image is a model target region.
  • Then, step S201 is carried out.
  • In step S201, the learning data C, which is the learning result of the machine learning unit 192, is recorded in the learning data recording unit 193 so that it can be used in the model region registration.
  • As described above, according to the image acquisition device 1D of the fifth embodiment, the work target region WRoiPat of the work 2 to be evaluated and its focused image IFocus can be acquired at high speed, as in the above-described embodiments.
  • Further, the image acquisition device 1D registers the model regions based on candidates for the imaging region created by a machine learning function from the images of imaging regions registered in the past.
  • Therefore, even when there are a large number of regions for which a focused image is to be acquired, the registration work can be performed efficiently.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Automatic Focus Adjustment (AREA)

Abstract

An image acquisition apparatus (1) is provided with: a model object registration unit (141) that registers a model object region; a model reference registration unit (142) that registers a model reference region serving as a reference for alignment; an imaging device (11); a moving device (12) that moves the imaging device in an optical axis direction; a focusing point position acquisition unit (131) that acquires the focusing point position of an image having the highest focusing degree among a plurality of workpiece images having different focuses; an alignment unit (132) that aligns the workpiece image at the focusing point position with the model reference region; an attitude estimation unit (151) that estimates the attitude of a workpiece at the time of imaging on the basis of the result of the alignment; and a focusing point region output unit (152) that outputs the position of a region corresponding to the model object region in the workpiece image, using the attitude of the workpiece.

Description

Image acquisition apparatus and image acquisition method
The present disclosure relates to a technique for capturing a focused image of a work to be imaged.
Generally, when an imaging device (camera) using a high-magnification lens photographs an object, the depth of field (the range that appears to be in focus in the image) becomes shallow. Depending on the unevenness of the work to be imaged, the portion to be evaluated may be out of focus, which tends to adversely affect the evaluation (inspection, positioning, and so on) of the work.
As a countermeasure, a method is known in which an omnifocal image of the work to be imaged is created and the work is evaluated based on this omnifocal image (see, for example, Japanese Unexamined Patent Publication No. 2016-21115). An omnifocal image is obtained by capturing multiple images with different focal points while changing the distance between the work and the imaging device along the optical axis of the imaging device, evaluating the degree of focus for each local region, and reconstructing the whole image by combining the local images with a high degree of focus. An omnifocal image created in this way is nearly in focus at every pixel.
Japanese Unexamined Patent Publication No. 2016-21115
 特開2016-21115号公報に開示された方法では、評価対象となるワークを撮像するたびに全焦点画像を生成する必要がある。被写界深度に対してワークの凹凸が大きい場合には多数の画像に対して全焦点画像を生成することになり、画像処理の計算量が大きくなる。そのため、高速で全焦点画像を生成することができず、ワークの評価に時間が掛かってしまうという問題がある。 In the method disclosed in Japanese Patent Application Laid-Open No. 2016-21115, it is necessary to generate a omnifocal image every time the workpiece to be evaluated is imaged. When the unevenness of the work is large with respect to the depth of field, omnifocal images are generated for a large number of images, and the amount of calculation of image processing becomes large. Therefore, there is a problem that it is not possible to generate an all-focus image at high speed and it takes time to evaluate the work.
 本開示は、上述の課題を解決するためになされたものであって、その目的は、評価対象となるワークの合焦点画像を高速に取得することを可能にすることである。 The present disclosure has been made to solve the above-mentioned problems, and an object thereof is to enable high-speed acquisition of a focused image of a work to be evaluated.
 本開示による画像取得装置は、ワークの画像を取得する。この画像取得装置は、モデル登録の対象となるワークの画像であるモデル画像を用いて合焦点画像を取得する対象となる第1領域を登録する第1登録部と、モデル画像を用いて位置合わせに用いられる第2領域を登録する第2登録部と、評価対象となるワークの画像であるワーク画像を撮影する撮像装置と、撮像装置およびワークの少なくとも一方を撮像装置の光軸方向に移動可能に構成された移動装置と、移動装置を作動させながら撮像装置が撮像した焦点の異なる複数のワーク画像のうちの最も合焦度の高い画像の深さ位置を合焦点位置として取得する第1取得部と、合焦点位置のワーク画像に対して第2領域との位置合わせを実施することによってワーク画像における第2領域に対応する領域を特定する位置合わせ部と、モデル画像における第2領域とワーク画像における第2領域に対応する領域との位置関係に基づいて撮像時のワークの姿勢を示す情報を取得可能に構成された姿勢推定部と、ワークの姿勢を示す情報を用いてワーク画像における第1領域に対応する領域の位置を出力する領域出力部と、を備える。 The image acquisition device according to the present disclosure acquires an image of the work. This image acquisition device aligns with a first registration unit that registers a first region to be acquired using a model image that is an image of a work to be model-registered, and a model image. A second registration unit that registers the second region used for the image, an image pickup device that captures a work image that is an image of the work to be evaluated, and at least one of the image pickup device and the work can be moved in the optical axis direction of the image pickup device. First acquisition of the moving device configured in the above and the depth position of the image with the highest degree of focus among a plurality of work images with different focal points captured by the imaging device while operating the moving device as the focusing position. The alignment unit that identifies the region corresponding to the second region in the work image by aligning the portion and the work image at the focused position with the second region, and the second region and the work in the model image. A posture estimation unit configured to be able to acquire information indicating the posture of the work at the time of imaging based on the positional relationship with the region corresponding to the second region in the image, and a second position in the work image using information indicating the posture of the work. It includes an area output unit that outputs the position of the area corresponding to one area.
 According to the present disclosure, a focused image of the work to be evaluated can be acquired at high speed.
FIG. 1 is a diagram schematically showing the configuration of an image acquisition system (No. 1).
FIG. 2 is a flowchart showing an example of a procedure for registering model regions (No. 1).
FIG. 3 is a flowchart showing an example of a procedure for calculating a work target region (No. 1).
FIG. 4 is a diagram schematically showing the configuration of an image acquisition system (No. 2).
FIG. 5 is a flowchart showing an example of a procedure for registering model regions (No. 2).
FIG. 6 is a flowchart showing an example of a procedure for calculating a work target region (No. 2).
FIG. 7 is a diagram schematically showing the configuration of an image acquisition system (No. 3).
FIG. 8 is a flowchart showing an example of a procedure for registering model regions (No. 3).
FIG. 9 is a flowchart showing an example of a procedure for calculating a work target region (No. 3).
FIG. 10 is a diagram schematically showing the configuration of an image acquisition system (No. 4).
FIG. 11 is a flowchart showing an example of a procedure for registering model regions (No. 4).
FIG. 12 is a flowchart showing an example of a procedure for calculating a work target region (No. 4).
FIG. 13 is a diagram schematically showing the configuration of an image acquisition system (No. 5).
FIG. 14 is a diagram showing an example of a neural network model.
FIG. 15 is a flowchart showing an example of a procedure for registering model regions (No. 5).
FIG. 16 is a flowchart showing an example of a procedure for learning model regions.
 Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings. Identical or corresponding parts in the drawings are given the same reference numerals, and their description is not repeated.
 Embodiment 1.

 FIG. 1 is a diagram schematically showing the configuration of an image acquisition system including an image acquisition device 1 according to the present embodiment. The image acquisition system includes the image acquisition device 1 and a pedestal 3 on which a work 2 to be imaged is placed.
 The image acquisition device 1 includes an image pickup device 11, a moving device 12, an image processing unit 13, a model registration unit 14, a posture estimation unit 15, and a focused image acquisition unit 16.
 The image pickup device 11 includes a camera 111 and a trigger generator 112. The camera 111 photographs the work 2 using an image sensor such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal-Oxide-Semiconductor) sensor. The trigger generator 112 outputs a trigger input signal Trig, which indicates the imaging timing of the camera 111, to the camera 111. The camera 111 photographs the work 2 in response to the trigger input signal Trig from the trigger generator 112.
 The moving device 12 includes an actuator 121. The actuator 121 is configured to adjust the relative distance between the work 2 and the image pickup device 11 in the optical axis direction by moving the image pickup device 11 (camera 111) along the optical axis. In the following, as shown in FIG. 1, the direction along the optical axis of the image pickup device 11 is referred to as the "Z-axis direction", and the two directions perpendicular to the optical axis and to each other are referred to as the "X-axis direction" and the "Y-axis direction".
 The moving device 12 may also have a function of moving the work 2 in the X-axis and Y-axis directions (in-plane directions) by moving the pedestal 3 on which the work 2 is placed along the X-axis and Y-axis directions. In the present embodiment, the XY coordinate position of the pedestal 3 (its position in the X-axis and Y-axis directions) is assumed to be fixed while the work 2 is imaged.
 The moving device 12 may further have a function of adjusting the relative distance between the work 2 and the image pickup device 11 in the Z-axis direction by moving the pedestal 3 along the Z-axis direction. In the following, for convenience of explanation, the Z coordinate position of the pedestal 3 (its position in the Z-axis direction) is assumed to be fixed while the work 2 is imaged.
 The moving device 12 may have a function of outputting information indicating the XYZ coordinate position of the image pickup device 11 to the trigger generator 112 as moving unit position information Enc. In this case, the trigger generator 112 may output the trigger input signal Trig to the camera 111 based on the moving unit position information Enc. For example, when imaging a work 2 while moving the image pickup device 11 in the Z-axis direction, the trigger generator 112 may output the trigger input signal Trig to the camera 111 each time it determines, based on the moving unit position information Enc, that the Z coordinate position of the image pickup device 11 has changed by a predetermined amount. In this way, a plurality of images can be captured while the image pickup device 11 moves in the Z-axis direction.
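 Purely as an illustration of this trigger logic (not part of the disclosure), the following minimal sketch fires a trigger each time the encoder-reported Z position has advanced by a fixed pitch. The class and parameter names are hypothetical; the actual trigger generator 112 is a hardware unit.

```python
# Illustrative sketch of the encoder-based trigger logic. All names
# (TriggerGenerator, pitch_mm, ...) are hypothetical assumptions.

class TriggerGenerator:
    """Emits a capture trigger each time Z advances by a fixed pitch."""

    def __init__(self, pitch_mm: float):
        self.pitch_mm = pitch_mm   # Z distance between consecutive exposures
        self.last_z = None         # Z position at the previous trigger

    def update(self, enc_z_mm: float) -> bool:
        """Called with each encoder sample Enc; True when Trig should fire."""
        if self.last_z is None or abs(enc_z_mm - self.last_z) >= self.pitch_mm:
            self.last_z = enc_z_mm
            return True
        return False


if __name__ == "__main__":
    trig = TriggerGenerator(pitch_mm=0.05)
    # Simulated Z sweep of the camera; a trigger fires every 0.05 mm.
    for z in [0.00, 0.02, 0.05, 0.07, 0.10, 0.12, 0.15]:
        if trig.update(z):
            print(f"Trig at Z = {z:.2f} mm")
```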
 When a plurality of images are captured while the image pickup device 11 moves in the Z-axis direction, the image pickup device 11 outputs the captured images (hereinafter also referred to as the "image stack Stack") to the image processing unit 13 and the model registration unit 14.
 The image processing unit 13 includes a focused position acquisition unit (first acquisition unit) 131 and an alignment unit 132. The focused position acquisition unit 131 calculates a focus index for each of the images included in the image stack Stack, and calculates the focused position (depth position) PosZ of the image stack Stack based on the magnitudes of the calculated focus indices. The region of each image over which the focus index is calculated may be the entire image or a predetermined local region.
 The "focused position PosZ" can be expressed, for example, as the Z coordinate position of the image pickup device 11 at which the most focused image among the images in the image stack Stack was captured. The "focus index" used to calculate the focused position PosZ is therefore desirably an index that can be computed from an image of the work 2 captured by the image pickup device 11, that varies with the relative distance between the image pickup device 11 and the work 2 in the Z-axis direction, and that takes a maximum or minimum when the degree of focus is highest.
 For example, the focused position acquisition unit 131 can calculate, as the focus index, the sum of the elements of the image obtained by convolving the matrices shown in the following equations (1) and (2) with each pixel of the image.
[Equation (1)]
[Equation (2)]
 When the focus index is calculated by the above method, the focused position acquisition unit 131 selects the image with the largest focus index among the images in the image stack Stack, and calculates, as the focused position PosZ, the Z coordinate position of the image pickup device 11 at which the selected image was captured. The focused position acquisition unit 131 outputs the calculated focused position PosZ to the alignment unit 132.
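 The following sketch illustrates this focus-position computation. Since the kernel matrices of equations (1) and (2) are not reproduced here, Sobel gradient kernels are assumed as a stand-in focus measure of the same form (convolve two matrices over the image and sum the elements); all function names are illustrative, not part of the disclosure.

```python
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # stand-in for eq. (1)
KY = KX.T                                                          # stand-in for eq. (2)

def convolve2d(img: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Valid-mode 2-D convolution implemented with numpy only."""
    h, w = k.shape
    windows = np.lib.stride_tricks.sliding_window_view(img, (h, w))
    return np.einsum("ijkl,kl->ij", windows, k[::-1, ::-1])

def focus_index(img: np.ndarray) -> float:
    """Sum of the convolved elements, in the spirit of the focus index above."""
    return float(np.abs(convolve2d(img, KX)).sum() + np.abs(convolve2d(img, KY)).sum())

def focus_position(stack: list, z_positions: list) -> float:
    """PosZ: the Z position of the stack image with the largest focus index."""
    scores = [focus_index(img) for img in stack]
    return z_positions[int(np.argmax(scores))]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp = rng.random((64, 64))        # high-frequency content, "in focus"
    blurry = np.full((64, 64), 0.5)     # flat image, "out of focus"
    print(focus_position([blurry, sharp, blurry], [0.0, 0.1, 0.2]))  # -> 0.1
```

 In practice, the index would be evaluated over the registered calculation region rather than the whole frame.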
 The alignment unit 132 selects the image corresponding to the focused position PosZ from the image stack Stack acquired from the image pickup device 11, and calculates the work reference region WRoiPos in the selected image.
 The "work reference region WRoiPos" is the region in the image stack Stack that corresponds to the model reference region MRoiPos, and is specified by an XYZ coordinate position. The "model reference region MRoiPos" is a region containing a model reference image that serves as a reference for alignment (for example, the boundary of a pattern on the work 2), and is likewise specified by an XYZ coordinate position. The model reference region MRoiPos is registered in advance by the model reference registration unit 142 described later.
 For example, the alignment unit 132 performs a process (hereinafter also referred to as "plane alignment") in which the image within the model reference region MRoiPos is shifted in the X-axis and Y-axis directions over the image of the image stack Stack, the difference between each region of the stack image and the image within the model reference region MRoiPos is calculated in turn, and the XY coordinate position of the region with the smallest difference is identified as the XY coordinate position of the work reference region WRoiPos. The alignment unit 132 then outputs, to the posture estimation unit 15, a signal that combines the XY coordinate position of the work reference region WRoiPos identified by the plane alignment with the focused position PosZ (Z coordinate position), as a signal indicating the work reference region WRoiPos (XYZ coordinate position).
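 A minimal sketch of this plane alignment, assuming a sum-of-squared-differences criterion as the "difference" between regions; the function name and interface are illustrative.

```python
import numpy as np

def plane_align(work_img: np.ndarray, model_ref: np.ndarray) -> tuple:
    """Return the (x, y) position in work_img where model_ref matches best."""
    h, w = model_ref.shape
    # All candidate positions of the shifted model reference image.
    windows = np.lib.stride_tricks.sliding_window_view(work_img, (h, w))
    # Sum of squared differences per candidate position.
    ssd = ((windows - model_ref.astype(float)) ** 2).sum(axis=(2, 3))
    iy, ix = np.unravel_index(int(np.argmin(ssd)), ssd.shape)
    return int(ix), int(iy)
```

 Combining the XY position found this way with the focused position PosZ yields the XYZ signal for WRoiPos described above.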
 The model registration unit 14 includes a model target registration unit (first registration unit) 141 and a model reference registration unit (second registration unit) 142. The model target registration unit 141 and the model reference registration unit 142 are provided, for example, in a computer (not shown).
 The model target registration unit 141 registers, as the "model target region MRoiPat", the XYZ coordinate position of the region on the work 2 that the operator designates as the region for which the focused image IFocus is to be acquired for evaluation of the work 2. The operator can designate the model target region MRoiPat by operating the computer on the image shown on the display, for example by drawing a frame around the desired region.
 The model reference registration unit 142 registers, as the above-described "model reference region MRoiPos", the XYZ coordinate position of the region on the work 2 that the operator designates as the region containing the model reference image. The operator can designate the model reference region MRoiPos by operating the computer on the image shown on the display, for example by drawing a frame around the desired region.
 The image shown on the display when the model target region MRoiPat and the model reference region MRoiPos are designated may be an image the operator selects from the images in the image stack Stack, or an image selected automatically by the computer.
 The posture estimation unit 15 estimates the posture of the work 2 imaged as an evaluation target, based on the work reference region WRoiPos calculated by the alignment unit 132 and the model reference region MRoiPos registered in advance by the model reference registration unit 142.
 To estimate the posture of the work 2 three-dimensionally, it is desirable that at least three work reference regions WRoiPos exist. Therefore, in the present embodiment, three or more model reference regions MRoiPos are registered in advance by the model reference registration unit 142, and the above-described alignment unit 132 outputs, to the posture estimation unit 15, three or more work reference regions WRoiPos corresponding to the three or more model reference regions MRoiPos.
 The posture estimation unit 15 includes a work posture estimation unit 151 and a focused region output unit 152. The work posture estimation unit 151 estimates the posture of the imaged work 2 based on the work reference regions WRoiPos and the model reference regions MRoiPos.
 When the XYZ coordinate positions of three or more regions of the work 2 are known, the following equation (3) holds.
$$\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \qquad (3)$$
 In equation (3), each component (a11, a12, ..., a34) of the matrix A on the right-hand side is an arbitrary constant. When x, y, z represent the XYZ coordinate position of an arbitrary model reference region MRoiPos and x', y', z' represent the XYZ coordinate position of the corresponding work reference region WRoiPos, the matrix A on the right-hand side of equation (3) represents the three-dimensional posture P of the work 2. Therefore, the posture P of the work 2 can be estimated three-dimensionally by finding the components of the matrix A for which the right-hand and left-hand sides of equation (3) are approximately equal, based on the combinations of positional relationships between three or more model reference regions MRoiPos and the corresponding work reference regions WRoiPos. This estimation method is merely an example, and the method of estimating the posture of the work 2 is not limited to it.
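 The following sketch illustrates one way to fit the matrix A of equation (3) by least squares, assuming A acts on homogeneous model coordinates (x, y, z, 1) as written above. Since A has 12 unknowns, four or more non-coplanar correspondences determine it uniquely; with exactly three, the solver returns a minimum-norm fit. All names are illustrative.

```python
import numpy as np

def estimate_pose(model_pts: np.ndarray, work_pts: np.ndarray) -> np.ndarray:
    """Fit the 3x4 matrix A of eq. (3) from (N, 3) arrays of XYZ positions."""
    n = model_pts.shape[0]
    hom = np.hstack([model_pts, np.ones((n, 1))])   # (N, 4): [x, y, z, 1]
    # Solve hom @ A.T ~= work_pts for A in the least-squares sense.
    a_t, *_ = np.linalg.lstsq(hom, work_pts, rcond=None)
    return a_t.T                                    # posture P as matrix A

def apply_pose(pose: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Map model-side XYZ positions into the work image via A."""
    hom = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return hom @ pose.T


if __name__ == "__main__":
    A_true = np.array([[1.0, 0.0, 0.0, 2.0],
                       [0.0, 1.0, 0.0, -1.0],
                       [0.0, 0.0, 1.0, 0.5]])
    model = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
    work = apply_pose(A_true, model)
    print(np.allclose(estimate_pose(model, work), A_true))  # True
```

 In this illustration, apply_pose() also shows how a region output unit could map the corners of MRoiPat through the estimated A to obtain the corresponding work-side region.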
 The focused region output unit 152 calculates, as the "work target region WRoiPat", the XYZ coordinate position of the region in the image stack Stack corresponding to the model target region MRoiPat, based on the posture P of the work 2 estimated by the work posture estimation unit 151 and the model target region MRoiPat registered in advance by the model target registration unit 141. The work target region WRoiPat is the partial region of the whole image of the work 2 for which the focused image IFocus is to be acquired, and is the region of the work 2 to be evaluated.
 The focused region output unit 152 outputs the calculated work target region WRoiPat to the outside, where it is used, for example, for positioning the work 2. The focused region output unit 152 also outputs the calculated work target region WRoiPat to the focused image acquisition unit 16.
 The focused image acquisition unit 16 identifies, among the images in the image stack Stack, the image most focused on the work target region WRoiPat by the same method as the image processing unit 13, acquires the image within the work target region WRoiPat of the identified image as the focused image IFocus, and outputs it to the outside. The focused image IFocus is used, for example, for inspection of the work 2. The focused image acquisition unit 16 may be provided outside the image acquisition device 1, and may be omitted when the focused image IFocus is not used for evaluation of the work 2.
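 A minimal sketch of this step, reusing focus_index() from the earlier sketch: the WRoiPat crop with the highest focus index is returned as IFocus. The rectangular region format is an assumption.

```python
import numpy as np

def acquire_focused_image(stack: list, roi_xyxy: tuple) -> np.ndarray:
    """roi_xyxy = (x0, y0, x1, y1): the work target region WRoiPat in pixels."""
    x0, y0, x1, y1 = roi_xyxy
    crops = [img[y0:y1, x0:x1] for img in stack]
    # Pick the stack image whose crop is most in focus (same measure as above).
    best = int(np.argmax([focus_index(c) for c in crops]))
    return crops[best]   # IFocus
```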
 <Procedure for registering the model regions>

 FIG. 2 is a flowchart showing an example of the procedure for registering the model regions (the model target region MRoiPat and the model reference regions MRoiPos). The flowchart of FIG. 2 starts with the work 2 to be model-registered set in the image acquisition device 1.
 First, a process of acquiring the image stack Stack of the work 2 to be model-registered (hereinafter also referred to as the "model image stack MStack") is executed (step S10). For example, the model image stack MStack is acquired by capturing a plurality of images with the image pickup device 11 while moving the image pickup device 11 in the Z-axis direction using the moving device 12.
 Next, a process of registering the model target region MRoiPat is executed (step S20). For example, the model target region MRoiPat is registered by the operator using the model target registration unit 141 to perform an operation such as drawing a frame, on the model image stack MStack, around the region to be registered as the model target region MRoiPat.
 Next, a process of registering the model reference regions MRoiPos is executed (step S30). For example, a model reference region MRoiPos is registered by the operator using the model reference registration unit 142 to perform an operation such as drawing a frame, on the model image stack MStack, around the region to be registered as the model reference region MRoiPos.
 <Procedure for calculating the work target region WRoiPat>

 FIG. 3 is a flowchart showing an example of the procedure for calculating the work target region WRoiPat (the partial region of the whole image of the work 2 for which the focused image IFocus is to be acquired). The flowchart of FIG. 3 starts with the work 2 to be evaluated set in the image acquisition device 1.
 First, a process of acquiring the image stack Stack of the work 2 to be evaluated (hereinafter also referred to as the "work image stack WStack") is executed (step S11). For example, the work image stack WStack is acquired by capturing a plurality of images with the image pickup device 11 while moving the image pickup device 11 in the Z-axis direction using the moving device 12.
 Next, the focused position acquisition unit 131 executes a process of calculating focused positions PosZ of the work image stack WStack (step S40). In this process, for example, three or more calculation regions, each containing one of the three or more model reference regions MRoiPos registered in advance by the model reference registration unit 142, are extracted from the images in the work image stack WStack, and a focused position PosZ is calculated for each extracted calculation region. As a result, the same number of focused positions PosZ as model reference regions MRoiPos (three or more) are calculated.
 Each calculation region extracted in step S40 is set, for example, to a region obtained by enlarging the corresponding model reference region MRoiPos by a predetermined amount around its center, on the assumption that the set position of the work 2 at model registration and the set position of the work 2 to be evaluated do not differ greatly. In this way, each calculation region can be made to contain the corresponding model reference region MRoiPos.
 Next, the alignment unit 132 performs a process of calculating the work reference regions WRoiPos (step S50). In this process, a work reference region WRoiPos is calculated for each focused position PosZ computed in step S40: the above-described plane alignment is performed on the calculation region of that focused position PosZ to identify the XY coordinate position of the work reference region WRoiPos, and the signal combining the identified XY coordinate position with the focused position PosZ is taken as the work reference region WRoiPos (XYZ coordinate position). By performing this process sequentially for each focused position PosZ, the same number of work reference regions WRoiPos as model reference regions MRoiPos (three or more) are calculated.
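 The following sketch ties steps S40 and S50 together, reusing focus_position() and plane_align() from the earlier sketches; the margin handling and the region data structure are assumptions made for illustration.

```python
def work_reference_regions(stack: list, z_positions: list,
                           model_refs: list, margin: int = 20) -> list:
    """model_refs: dicts with 'xy' (x, y), 'size' (w, h), 'patch' (template image)."""
    results = []
    for ref in model_refs:
        x, y = ref["xy"]
        w, h = ref["size"]
        x0, y0 = max(x - margin, 0), max(y - margin, 0)
        # Expanded calculation region around the model reference region.
        calc = [img[y0:y + h + margin, x0:x + w + margin] for img in stack]
        pos_z = focus_position(calc, z_positions)       # step S40: PosZ
        focused = calc[z_positions.index(pos_z)]
        dx, dy = plane_align(focused, ref["patch"])     # step S50: XY alignment
        results.append((x0 + dx, y0 + dy, pos_z))       # WRoiPos (X, Y, Z)
    return results
```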
 Next, the work posture estimation unit 151 performs a process of estimating the posture of the imaged work 2 (step S60). In this process, for example, as described above, the posture P of the work 2 is estimated three-dimensionally by finding the components of the matrix A in equation (3) based on the combinations of positional relationships between the three or more model reference regions MRoiPos and the three or more work reference regions WRoiPos.
 Next, the focused region output unit 152 performs a process of calculating the work target region WRoiPat (step S70). In this process, the region in the work image stack WStack corresponding to the model target region MRoiPat is calculated as the work target region WRoiPat, taking into account the posture P of the work 2 estimated in step S60. When the work target region WRoiPat is not contained in the work image stack WStack, the operator may be notified with a buzzer or the like (not shown).
 Finally, the focused image acquisition unit 16 performs a process of acquiring, as the focused image IFocus, the image most focused on the work target region WRoiPat (step S71).
 As described above, according to the image acquisition device 1 of the present embodiment, the work target region WRoiPat and its focused image IFocus can be obtained by estimating the focused positions PosZ, the work reference regions WRoiPos, and the work posture P only for the model reference regions MRoiPos registered in advance. That is, the work target region WRoiPat and its focused image IFocus can be acquired without generating an omnifocal image, and therefore with a smaller amount of computation than when an omnifocal image is generated.
 If an omnifocal image were generated, positioning in the plane direction and the height direction would have to be performed for each image acquisition area, and the amount of computation would increase linearly with the size of the evaluation region. In contrast, the image acquisition device 1 according to the present embodiment does not generate an omnifocal image, so the amount of computation is greatly reduced. As a result, the focused image IFocus of the work 2 to be evaluated can be acquired at high speed.
 Embodiment 2.

 In Embodiment 1 described above, it was assumed that the set position of the work 2 at model registration and the set position of the work 2 to be evaluated do not differ greatly. In practice, however, the two set positions may differ considerably.
 Therefore, in the present Embodiment 2, a reference region for alignment within the imaging field of view (hereinafter also referred to as the "field-of-view alignment region") is registered in advance, and alignment with the field-of-view alignment region (coarse alignment) is performed before the same processing as in Embodiment 1. This makes it possible to handle cases where the set position of the work 2 at model registration and the set position of the work 2 to be evaluated differ greatly.
 FIG. 4 is a diagram schematically showing the configuration of an image acquisition system including an image acquisition device 1A according to the present Embodiment 2. The image acquisition device 1A is obtained by adding a model field-of-view registration unit (third registration unit) 143 inside the model registration unit 14 and adding a field-of-view image processing unit 17 to the image acquisition device 1 shown in FIG. 1. The other structures, functions, and processes are the same as those of the image acquisition device 1 shown in FIG. 1, and their detailed description is not repeated here.
 The model field-of-view registration unit 143 is provided, for example, in a computer (not shown) and registers the field-of-view alignment regions. The operator can register a field-of-view alignment region by operating the computer on the image shown on the display, for example by drawing a frame around the desired region. The field-of-view alignment regions include the calculation region MRoiVFocus of the field-of-view focus position at model registration, and the model field-of-view alignment region MRoiVPat. The model field-of-view registration unit 143 outputs the registered field-of-view alignment regions to the field-of-view image processing unit 17.
 The field-of-view image processing unit 17 includes a field-of-view focused position acquisition unit (second acquisition unit) 171 and a field-of-view alignment unit 172. The field-of-view focused position acquisition unit 171 calculates the field-of-view focused position VFocus by the same method as the focused position acquisition unit 131 and outputs it to the field-of-view alignment unit 172. The field-of-view alignment unit 172 outputs the field-of-view position VPos (the field-of-view positional deviation OffsetXYZ described later) by the same method as the alignment unit 132.
 FIG. 5 is a flowchart showing an example of the procedure for registering the model regions according to the present Embodiment 2. The flowchart of FIG. 5 is obtained by adding steps S80 and S81 to the flowchart of FIG. 2. The other steps (those with the same numbers as in FIG. 2 above) have already been described, and their detailed description is not repeated here.
 After the model image stack MStack is acquired (step S10), the model target region MRoiPat is registered (step S20), and the model reference regions MRoiPos are registered (step S30), the model field-of-view registration unit 143 performs a process of registering the field-of-view alignment regions (step S80). In this process, the calculation region MRoiVFocus of the field-of-view focus position at model registration and the model field-of-view alignment region MRoiVPat are registered.
 Next, a process of registering the model field-of-view focused position MVFocus is performed (step S81). In this process, the focus index is calculated, using the field-of-view focused position acquisition unit 171, for the calculation region MRoiVFocus of each image in the model image stack MStack, and the Z coordinate position of the image with the largest focus index is registered as the model field-of-view focused position MVFocus.
 FIG. 6 is a flowchart showing an example of the procedure for calculating the work target region WRoiPat according to the present Embodiment 2. The flowchart of FIG. 6 is obtained by changing steps S40 and S50 of FIG. 3 to steps S41 and S51, respectively, and adding steps S90 and S100. The other steps (those with the same numbers as in FIG. 3 above) have already been described, and their detailed description is not repeated here.
 After the work image stack WStack is acquired (step S11), the field-of-view focused position acquisition unit 171 performs a process of calculating the work field-of-view focused position WVFocus (step S90). In this process, the focus index is calculated for the calculation region MRoiVFocus of each image in the work image stack WStack, and the Z coordinate position of the image judged to have the highest degree of focus based on the focus index is calculated as the work field-of-view focused position WVFocus.
 Next, the field-of-view alignment unit 172 performs field-of-view alignment (step S100). In this process, the work field-of-view alignment region WRoiVPat corresponding to the model field-of-view alignment region MRoiVPat is calculated, taking into account the depth positional deviation OffsetZ of the work field-of-view focused position WVFocus from the model field-of-view focused position MVFocus, and the positional deviation between the work field-of-view alignment region WRoiVPat and the model field-of-view alignment region MRoiVPat is calculated as the field-of-view positional deviation OffsetXYZ. This field-of-view positional deviation OffsetXYZ corresponds to the three-dimensional positional deviation between the set position of the work 2 at model registration and the set position of the work 2 to be evaluated. The field-of-view position VPos mentioned above is synonymous with the field-of-view positional deviation OffsetXYZ calculated in step S100.
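 A sketch of steps S90 and S100 under the same assumptions as the earlier sketches (focus_position() and plane_align() as defined above); for brevity, the focus position is computed over the whole image rather than the registered calculation region MRoiVFocus, and all variable names are illustrative.

```python
def view_offset(stack: list, z_positions: list, mv_focus_z: float,
                model_v_pat, model_v_pat_xy: tuple) -> tuple:
    """Return (OffsetX, OffsetY, OffsetZ) of the work view vs. the model view."""
    wv_focus_z = focus_position(stack, z_positions)   # step S90: WVFocus
    offset_z = wv_focus_z - mv_focus_z                # depth deviation OffsetZ
    focused = stack[z_positions.index(wv_focus_z)]
    wx, wy = plane_align(focused, model_v_pat)        # WRoiVPat position (work image)
    mx, my = model_v_pat_xy                           # MRoiVPat position (model image)
    return wx - mx, wy - my, offset_z                 # OffsetXYZ
```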
 Next, the focused position acquisition unit 131 executes a process of calculating the focused positions PosZ of the work image stack WStack (step S41). In this process, the work reference regions WRoiPos corresponding to the model reference regions MRoiPos are calculated taking into account the field-of-view positional deviation OffsetXYZ calculated in step S100, and a focused position PosZ is calculated for each work reference region WRoiPos, as in step S40 of FIG. 3 described above.
 Next, the alignment unit 132 performs a process of calculating the work reference regions WRoiPos (step S51). In this process, the work reference regions WRoiPos are calculated taking into account the field-of-view positional deviation OffsetXYZ calculated in step S100, and the positional deviation for each work reference region WRoiPos is calculated as in step S50 of FIG. 3 described above.
 Thereafter, the process of estimating the posture of the work 2 (step S60), the process of calculating the work target region WRoiPat (step S70), and the process of acquiring the focused image IFocus of the work target region WRoiPat (step S71), described with reference to FIG. 3 above, are performed.
 With the above, the image acquisition device 1A according to the present Embodiment 2 can acquire the work target region WRoiPat of the work 2 to be evaluated and its focused image IFocus at high speed, as in Embodiment 1 described above. Furthermore, the work target region WRoiPat can be calculated taking into account the field-of-view positional deviation OffsetXYZ between the set position of the work 2 at model registration and the set position of the work 2 to be evaluated. Therefore, the work target region WRoiPat and its focused image IFocus can be acquired even when the two set positions differ greatly.
 Embodiment 3.

 In Embodiments 1 and 2 described above, it is assumed that a pattern that can be registered as a reference for alignment exists within the imaging field of view obtained when the image pickup device 11 or the pedestal 3 is moved in the Z-axis direction. In practice, however, when the work 2 is large in the X-axis and Y-axis directions, there may be no such registrable pattern within that imaging field of view.
 In view of the above, in the present Embodiment 3, the work 2 is imaged in a plurality of imaging fields of view slid in the X-axis and Y-axis directions by moving the image pickup device 11 or the pedestal 3 in the X-axis and Y-axis directions in addition to the Z-axis direction, and a reference for alignment is registered in the image of one of the plurality of imaging fields of view.
 FIG. 7 is a diagram schematically showing the configuration of an image acquisition system including an image acquisition device 1B according to the present Embodiment 3. The image acquisition device 1B is obtained by adding an imaging position recording unit 122 to the image acquisition device 1A shown in FIG. 4. The other structures, functions, and processes are the same as those of the image acquisition device 1A described above, and their detailed description is not repeated here.
 The imaging position recording unit 122 records the moving unit position information Enc at the time the work 2 is imaged as an imaging position signal VPos, and outputs the recorded imaging position signal VPos to the work posture estimation unit 151. In the present Embodiment 3, as described above, it is assumed that the work 2 is imaged in a plurality of imaging fields of view slid in the X-axis and Y-axis directions by moving the image pickup device 11 or the pedestal 3 in the X-axis and Y-axis directions in addition to the Z-axis direction.
 In addition to the functions described in Embodiments 1 and 2, the work posture estimation unit 151 has a function of receiving the imaging position signal VPos from the imaging position recording unit 122.
 FIG. 8 is a flowchart showing an example of the procedure for registering the model regions according to the present Embodiment 3. The flowchart of FIG. 8 is obtained by adding steps S110 and S120 to the flowchart of FIG. 5. The other steps (those with the same numbers as in FIG. 5 above) have already been described, and their detailed description is not repeated here.
 First, it is determined whether imaging of all fields of view in the X-axis and Y-axis directions has been completed (step S110). If not (NO in step S110), the imaging position recording unit 122 performs a process of recording the current imaging position signal VPos (step S120), after which the processes of steps S10 to S81 shown in FIG. 5 are performed and the process returns to step S110. Steps S110, S120, and S10 to S81 are repeated for each field of view in the X-axis and Y-axis directions until imaging of all fields of view is completed. When imaging of all fields of view in the X-axis and Y-axis directions is completed (YES in step S110), the model registration process ends.
 FIG. 9 is a flowchart showing an example of the procedure for calculating the work target region WRoiPat according to the present Embodiment 3. Of the steps shown in FIG. 9, those with the same numbers as in FIG. 6 above have already been described, and their detailed description is not repeated here.
 First, it is determined whether imaging of all fields of view in the X-axis and Y-axis directions has been completed (step S111). If not (NO in step S111), the imaging position recording unit 122 performs a process of recording the current imaging position signal VPos (step S121), after which the processes of steps S11 to S51 shown in FIG. 6 are performed and the process returns to step S111. Steps S111, S121, and S11 to S51 are repeated for each field of view until imaging of all fields of view in the X-axis and Y-axis directions is completed.
 When imaging of all fields of view in the X-axis and Y-axis directions is completed (YES in step S111), a process of estimating the posture of the work 2 is performed (step S61). In this process, the posture of the work 2 is estimated based on the focused positions PosZ of each field of view calculated in step S41, the work reference regions WRoiPos of each field of view calculated in step S51, and the imaging position signals VPos of each field of view recorded in step S121.
 Thereafter, as in Embodiments 1 and 2, the process of calculating the work target region WRoiPat (step S70) and the process of acquiring the focused image IFocus (step S71) are performed.
 With the above, the image acquisition device 1B according to the present Embodiment 3 can acquire the work target region WRoiPat of the work 2 to be evaluated and its focused image IFocus at high speed, as in Embodiments 1 and 2 described above. Furthermore, the image acquisition device 1B according to the present Embodiment 3 images the work 2 in a plurality of imaging fields of view slid in the X-axis and Y-axis directions, and registers a reference for alignment in the image of one of the plurality of fields of view. Therefore, the work target region WRoiPat and its focused image IFocus can be acquired even when there is no structure that can be registered as a reference for alignment within a single imaging field of view.
 Embodiment 4.

 In Embodiments 1, 2, and 3, it is assumed that there is no positional deviation between the alignment reference of the work 2 and the region imaged by the image pickup device 11. However, when the work 2 is, for example, a module assembled from a plurality of parts, some positional deviation may arise between the alignment reference and the imaged region due to assembly errors of the work 2.
 In view of the above, in the present Embodiment 4, an alignment reference for correcting the positional deviation relative to the alignment reference is registered for each imaged region.
 FIG. 10 is a diagram schematically showing the configuration of an image acquisition system including an image acquisition device 1C according to the present Embodiment 4. The image acquisition device 1C is obtained by adding a part reference registration unit 144 inside the model registration unit 14 and a part image processing unit 18 to the image acquisition device 1 shown in FIG. 1. The other structures, functions, and processes are the same as those of the image acquisition device 1 shown in FIG. 1, and their detailed description is not repeated here.
 The part reference registration unit 144 is provided, for example, in a computer (not shown), and registers "part alignment regions" that serve as references for alignment of each part of the work 2. It is assumed that the operator registers a part alignment region by operating the computer on the image shown on the display, for example by drawing a frame around the desired region. The part alignment regions include the calculation region MRoiPFocus of the part focus position at model registration, and the model part alignment region MRoiPPat. The part reference registration unit 144 outputs the registered part alignment regions to the part image processing unit 18.
 The part image processing unit 18 includes a part focused position acquisition unit 181 and a part alignment unit 182.
 The part focused position acquisition unit 181 calculates the part focused position PFocus by the same method as the focused position acquisition unit 131 and outputs it to the part alignment unit 182. The part alignment unit 182 calculates the part position PPos (the part positional deviation POffsetXYZ described later) by the same method as the alignment unit 132 and outputs it to the focused region output unit 152.
 The focused region output unit 152 according to the present Embodiment 4 has, in addition to the functions of the focused region output unit 152 of Embodiments 1 to 3 described above, a function of receiving the part positional deviation POffsetXYZ.
 FIG. 11 is a flowchart showing an example of the procedure for registering the model regions according to the present Embodiment 4. The flowchart of FIG. 11 is obtained by adding steps S130 and S131 to the flowchart of FIG. 2. The other steps (those with the same numbers as in FIG. 2 above) have already been described, and their detailed description is not repeated here.
 In the registration of the model regions according to the present Embodiment 4, after the model image stack MStack is acquired (step S10), the model target region MRoiPat is registered (step S20), and the model reference regions MRoiPos are registered (step S30), a process of registering the part alignment regions is performed (step S130). In this process, the part reference registration unit 144 registers the calculation region MRoiPFocus of the part focus position at model registration and the model part alignment region MRoiPPat.
 Next, a process of registering the model part focused position MPFocus is performed (step S131). In this process, the focus index is calculated, using the part focused position acquisition unit 181, for the calculation region MRoiPFocus of the part focus position in each image of the model image stack MStack, and the obtained focused position (the Z coordinate position of the image with the largest focus index) is registered as the model part focused position MPFocus.
 FIG. 12 is a flowchart showing an example of the procedure for calculating the work target region WRoiPat according to the present Embodiment 4. The flowchart of FIG. 12 is obtained by adding steps S140 and S150 to the flowchart of FIG. 3 described above. The other steps (those with the same numbers as in FIG. 3 above) have already been described, and their detailed description is not repeated here.
 After the work image stack WStack is acquired (step S11), a process of calculating the work part focused position WPFocus is performed (step S140). In this process, the part focused position acquisition unit 181 calculates the work part focused position WPFocus for the region registered in the part reference registration unit 144 as the calculation region MRoiPFocus of the part focus position.
 After step S140, the process of step S150 is performed. In step S150, for the region registered by the part reference registration unit 144 as the model part alignment region MRoiPPat, the work part alignment region WRoiPPat is calculated based on the depth position that takes into account the part depth positional deviation POffsetZ of the work part focused position WPFocus from the model part focused position MPFocus, and the positional deviation between the work part alignment region WRoiPPat and the model part alignment region MRoiPPat is calculated as the part positional deviation POffsetXYZ.
 Next, the process of calculating the focused positions PosZ (step S40), the process of calculating the work reference regions WRoiPos (step S50), and the process of estimating the posture P of the work 2 (step S60), described with reference to FIG. 3 above, are performed.
 In the process of calculating the work target region WRoiPat (step S70) in Embodiment 4, the work target region WRoiPat in the work image stack WStack is calculated based on the posture P of the work 2 estimated in step S60, the position of the model target region MRoiPat, and the part positional deviation POffsetXYZ calculated in step S150.
 Finally, the process of acquiring the focused image IFocus of the work target region WRoiPat (step S71), described with reference to FIG. 3 above, is performed.
 By operating in this way, the image acquisition device 1C according to the fourth embodiment can acquire the work target region WRoiPat of the work 2 to be evaluated and its focused image IFocus at high speed, as in the first, second, and third embodiments described above. Furthermore, the image acquisition device 1C according to the fourth embodiment registers an alignment reference for correcting misalignment for each imaging region. Therefore, when imaging a module or the like assembled from a plurality of parts, it is possible to absorb assembly errors and acquire a focused image without being affected by the positional deviation of each part.
 Embodiment 5.
 In the first, second, third, and fourth embodiments, it is assumed that the operator registers a model region for each region for which a focused image is to be acquired. However, when the work 2 has a large number of regions for which focused images are desired, there is a concern that the registration work will take an enormous amount of time.
 In view of the above, the fifth embodiment registers model regions based on imaging region candidates produced by a machine learning function that creates those candidates from images of imaging regions registered in the past. This makes it possible to register model regions efficiently even when there are a large number of regions for which focused images are to be acquired.
 FIG. 13 is a diagram schematically showing the configuration of an image acquisition system including the image acquisition device 1D according to the fifth embodiment. The image acquisition device 1D adds a learning unit 19 to the image acquisition device 1 shown in FIG. 1 described above. The other structures, functions, and processes are the same as those of the image acquisition device 1 shown in FIG. 1, so their detailed description is not repeated here.
 The learning unit 19 includes a sample image generation unit 191, a machine learning unit 192, a learning data recording unit 193, an image cropping unit 194, and a determination unit 195. Note that the learning unit 19 is not necessarily built into the image acquisition device 1D, and may exist outside the image acquisition device 1D (for example, on a cloud server).
 Based on an image stack Stack and the model target region MRoiPat, the sample image generation unit 191 outputs a combination of a sample image MRoiPatImg, which is an image of the model target region MRoiPat in the image stack Stack, and a Boolean value isMRoiPat indicating whether the sample image MRoiPatImg is the model target region MRoiPat. When the Boolean value isMRoiPat is false, a region of about the same size as the model target region MRoiPat but not including it is output as the sample image MRoiPatImg.
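 One way such true/false sample pairs could be generated is sketched below; the number of negative samples and the non-overlap test are illustrative assumptions, not part of the embodiment.

```python
import random

def generate_samples(stack_image, m_roi_pat, n_negative=8):
    # Returns (crop, isMRoiPat) pairs for one registered region.
    x, y, w, h = m_roi_pat
    samples = [(stack_image[y:y + h, x:x + w], True)]   # positive sample
    H, W = stack_image.shape[:2]
    while len(samples) <= n_negative:
        nx = random.randrange(W - w)
        ny = random.randrange(H - h)
        # Keep only same-size crops that do not overlap MRoiPat.
        if nx + w <= x or x + w <= nx or ny + h <= y or y + h <= ny:
            samples.append((stack_image[ny:ny + h, nx:nx + w], False))
    return samples
```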
 The machine learning unit 192 learns a probability MRoiPatP, which indicates the probability of being a model target region, from combinations of sample images MRoiPatImg and their Boolean values isMRoiPat, and records the learning result as learning data C in the learning data recording unit 193. The machine learning unit 192 also functions as an inference means that takes as input the learning data C recorded in the learning data recording unit 193 and partial image data RoiImg from the image cropping unit 194, and outputs the probability MRoiPatP that the partial image data RoiImg is a model target region. The machine learning unit 192 is, for example, a device that performs so-called supervised learning according to a neural network model.
 Here, supervised learning refers to a model in which a large number of data sets of inputs and output results (labels) are given to a learning device, which learns the features in those data sets and estimates the output result from an input. A neural network is composed of an input layer of a plurality of neurons, an intermediate layer (hidden layer) of a plurality of neurons, and an output layer of a plurality of neurons. The intermediate layer may be one layer, or two or more layers.
 FIG. 14 is a diagram showing an example of a neural network model. In a three-layer neural network such as that shown in FIG. 14, when a plurality of inputs are fed to the input layer (X1 to X3), the values are multiplied by weights W1 (w11 to w16) and the results are input to the intermediate layer (Y1, Y2); those results are further multiplied by weights W2 (w21 to w26) and output from the output layer (Z1 to Z3). The output results depend on the values of the weights W1 and W2.
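 A minimal sketch of this forward pass, with the layer sizes of FIG. 14 and a sigmoid activation assumed only for illustration (the figure specifies the weighted sums but not the activation), is:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Shapes follow Fig. 14: three inputs, two hidden neurons, three outputs.
W1 = np.random.randn(2, 3)   # weights w11..w16
W2 = np.random.randn(3, 2)   # weights w21..w26

def forward(x):              # x: input vector (X1, X2, X3)
    y = sigmoid(W1 @ x)      # intermediate layer (Y1, Y2)
    z = sigmoid(W2 @ y)      # output layer (Z1, Z2, Z3)
    return z                 # changes with the values of W1 and W2
```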
 In the present application, the neural network learns the above-described probability MRoiPatP by so-called supervised learning according to a data set created from combinations of sample images MRoiPatImg and Boolean values isMRoiPat, and records the learning result as learning data C in the learning data recording unit 193.
 Returning to FIG. 13, the learning data recording unit 193 has a function of storing the learning data C, which is the learning result of the machine learning unit 192.
 The machine learning unit 192 may also learn according to data sets created for a plurality of types of work 2. The machine learning unit 192 may acquire data sets from a plurality of works 2 used at the same site, or may learn the probability MRoiPatP using data sets collected from a plurality of machine tools operating independently at different sites. Furthermore, a work 2 from which data sets are collected can be added to the targets midway, or conversely removed from the targets.
 The image cropping unit 194 crops partial image data RoiImg from the image stack Stack and outputs it to the machine learning unit 192. The partial image data RoiImg may be cropped, for example, by cutting images of equal size at an equal pitch in the vertical and horizontal directions from each image constituting the image stack Stack.
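 The equal-pitch, equal-size cropping described above could be expressed, for instance, as the following sketch (names and default values are hypothetical):

```python
def grid_crops(image, size=(64, 64), pitch=32):
    # Yields each crop together with its top-left position so that a
    # hit can later be mapped back to an estimated region MRoiPatE.
    H, W = image.shape[:2]
    for y in range(0, H - size[1] + 1, pitch):
        for x in range(0, W - size[0] + 1, pitch):
            yield (x, y), image[y:y + size[1], x:x + size[0]]
```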
 Based on the probability MRoiPatP from the machine learning unit 192, the determination unit 195 determines whether the partial image data RoiImg is a candidate for a model target region, and outputs the partial image data RoiImg determined to be a candidate as an estimated model target region MRoiPatE. For example, the estimated model target region MRoiPatE may be output based on the result of comparing the probability MRoiPatP with a preset threshold value Th.
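 The threshold comparison could then look like the sketch below, where predict_proba stands in for the trained inference of the machine learning unit 192 and the default value of Th is an illustrative choice:

```python
def candidate_regions(crops, predict_proba, th=0.9):
    # Keep the crops whose probability MRoiPatP reaches the threshold Th;
    # these become the estimated model target regions MRoiPatE.
    return [(pos, patch) for pos, patch in crops
            if predict_proba(patch) >= th]
```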
 The model target registration unit 141 according to the fifth embodiment receives the image stack Stack and the estimated model target region MRoiPatE as inputs, and outputs the model target region MRoiPat.
 The model target registration unit 141 is provided on a computer (not shown). When an estimated model target region MRoiPatE exists, it is presented to the operator. The operator operates the computer to register the estimated model target region MRoiPatE as the model target region MRoiPat, or performs operations such as modifying the estimated model target region MRoiPatE on the screen on which the image is displayed before registering it as the model target region MRoiPat.
 FIG. 15 is a flowchart showing an example of the procedure for registering a model region according to the fifth embodiment. The flowchart of FIG. 15 adds steps S160, S170, and S180 to the flowchart of FIG. 2. The other steps (those bearing the same numbers as the steps shown in FIG. 2) have already been described, so their detailed description is not repeated here.
 In the model registration according to the fifth embodiment, step S160 is performed after step S10. In step S160, it is checked whether learning data C usable by the machine learning unit 192 exists in the learning data recording unit 193. If there is no learning data C (NO in step S160), the model region is registered in step S20, that is, in the same manner as the model region registration procedure of the first embodiment.
 If there is learning data C (YES in step S160), step S170 is performed. In step S170, the image cropping unit 194 crops partial image data RoiImg from the image stack Stack acquired in step S10, and the data are input to the machine learning unit 192 together with the learning data C. The machine learning unit 192 calculates the probability MRoiPatP that the partial image data RoiImg is a model target region. Based on this probability MRoiPatP, the determination unit 195 determines whether the partial image data RoiImg is a candidate for a model target region, and outputs the partial image data RoiImg determined to be a candidate to the model target registration unit 141 as an estimated model target region MRoiPatE.
 In step S20 according to the fifth embodiment, the operator registers the region for which a focused image is to be acquired as the model target region MRoiPat while referring to the estimated model target region MRoiPatE.
 Furthermore, in the fifth embodiment, step S180 is performed after step S30 is completed. In step S180, the combination of the model image stack MStack and the model target region MRoiPat is recorded in the sample image generation unit 191 so that the model region learning procedure shown in FIG. 16, described later, can be performed.
 FIG. 16 is a flowchart showing an example of the model region learning procedure in the fifth embodiment. In learning the model region, step S200 is performed first. In step S200, based on the combinations of the model image stack MStack and the model target region MRoiPat recorded in advance in step S180 of FIG. 15 described above, the sample image generation unit 191 outputs combinations of sample images MRoiPatImg and Boolean values isMRoiPat. Next, the machine learning unit 192 learns a data set consisting of the group of combinations of sample images MRoiPatImg and Boolean values isMRoiPat, so as to infer the probability MRoiPatP that a given image is a model target region.
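 For concreteness, the supervised learning of step S200 could be sketched as below; a single-layer logistic model over hypothetical feature vectors stands in here for the neural network of FIG. 14, since the embodiment does not restrict the model architecture.

```python
import numpy as np

def train(dataset, epochs=100, lr=0.1):
    # dataset: (feature_vector, isMRoiPat) pairs derived from the
    # MRoiPatImg / isMRoiPat combinations of step S200.
    dim = len(dataset[0][0])
    w, b = np.zeros(dim), 0.0
    for _ in range(epochs):
        for x, label in dataset:
            x = np.asarray(x, dtype=float)
            p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted MRoiPatP
            g = p - float(label)         # gradient of cross-entropy loss
            w -= lr * g * x
            b -= lr * g
    return w, b  # the "learning data C" to record in unit 193
```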
 Subsequently, step S201 is performed. In step S201, the learning data C, which is the learning result of the machine learning unit 192, is recorded in the learning data recording unit 193 so that it can be used in model region registration.
 By operating in this way, the image acquisition device 1D according to the fifth embodiment can acquire the work target region WRoiPat of the work 2 to be evaluated and its focused image IFocus at high speed, as in the first, second, and third embodiments described above. Furthermore, the image acquisition device 1D according to the fifth embodiment registers regions based on imaging region candidates produced by a machine learning function that creates those candidates from images of imaging regions registered in the past. As a result, even when there are a large number of regions for which focused images are to be acquired, the registration work can be performed with little effort.
 The embodiments disclosed herein should be considered exemplary in all respects and not restrictive. The scope of the present disclosure is indicated by the claims rather than by the above description, and is intended to include all modifications within the meaning and scope equivalent to the claims.
 1, 1A, 1B, 1C, 1D image acquisition device, 2 work, 3 pedestal, 11 imaging device, 12 moving device, 13 image processing unit, 14 model registration unit, 15 posture estimation unit, 16 focused image acquisition unit, 17 visual field image processing unit, 18 site image processing unit, 19 learning unit, 122 imaging position recording unit, 131 in-focus position acquisition unit, 132 alignment unit, 141 model target registration unit, 142 model reference registration unit, 143 model visual field registration unit, 144 site reference registration unit, 151 work posture estimation unit, 152 in-focus region calculation unit, 171 visual field in-focus position acquisition unit, 172 visual field alignment unit, 181 site in-focus position acquisition unit, 182 site alignment unit, 191 sample image generation unit, 192 machine learning unit, 193 learning data recording unit, 194 image cropping unit, 195 determination unit.

Claims (10)

  1.  An image acquisition device for acquiring an image of a work, comprising:
     a first registration unit that registers a first region for which a focused image is to be acquired, using a model image that is an image of a work to be model-registered;
     a second registration unit that registers a second region used for alignment, using the model image;
     an imaging device that captures a work image, which is an image of a work to be evaluated;
     a moving device configured to be able to move at least one of the imaging device and the work in the optical axis direction of the imaging device;
     a first acquisition unit that acquires, as an in-focus position, the depth position of the image with the highest degree of focus among a plurality of work images captured at different focal points by the imaging device while the moving device is operated;
     an alignment unit that identifies a region corresponding to the second region in the work image by aligning the work image at the in-focus position with the second region;
     a posture estimation unit configured to be able to acquire information indicating the posture of the work at the time of imaging, based on the positional relationship between the second region in the model image and the region corresponding to the second region in the work image; and
     a region output unit that outputs the position of a region corresponding to the first region in the work image, using the information indicating the posture of the work.
  2.  The image acquisition device according to claim 1, further comprising:
     a third registration unit that registers a third region used for alignment of the imaging field of view, using the model image;
     a second acquisition unit that acquires, as a visual field in-focus position, the depth position of the image with the highest degree of focus in the third region among a plurality of work images captured at different focal points by the imaging device; and
     a visual field alignment unit that acquires the amount of displacement between the region corresponding to the third region in the work image at the in-focus position and the position of the third region in the model image, by aligning the region corresponding to the third region in the work image with the third region in the model image,
     wherein the first acquisition unit acquires the in-focus position taking the amount of displacement into account.
  3.  The image acquisition device according to claim 1 or 2, further comprising a position recording unit that records the imaging position of the imaging device in the optical axis direction for each field of view,
     wherein the posture estimation unit acquires the information indicating the posture of the work based on the imaging position.
  4.  The image acquisition device according to any one of claims 1 to 3, further comprising:
     a site region registration unit that registers a site region used for alignment of each site of the model image;
     a site in-focus position acquisition unit that acquires a site in-focus position, which is the in-focus position for each site of the model image; and
     a site alignment unit that aligns the region corresponding to the site region in the work image at the site in-focus position with the site region in the model image.
  5.  The image acquisition device according to any one of claims 1 to 4, further comprising a machine learning unit that performs machine learning of the correspondence between the first region and image data based on a plurality of combinations of first regions registered by the first registration unit and image data captured by the imaging device, and that uses the result of the machine learning to determine whether given image data is a candidate for the first region.
  6.  An image acquisition method for acquiring an image of a work, comprising:
     a step of registering a first region for which a focused image is to be acquired, using a model image that is an image of a work to be model-registered;
     a step of registering a second region used for alignment, using the model image;
     a step of capturing, with an imaging device, a work image, which is an image of a work to be evaluated;
     a step of acquiring, as an in-focus position, the depth position of the image with the highest degree of focus among a plurality of work images captured at different focal points by the imaging device;
     a step of identifying a region corresponding to the second region in the work image by aligning the work image at the in-focus position with the second region;
     a step of acquiring information indicating the posture of the work at the time of imaging, based on the positional relationship between the second region in the model image and the region corresponding to the second region in the work image; and
     a step of outputting the position of a region corresponding to the first region in the work image, using the information indicating the posture of the work.
  7.  The image acquisition method according to claim 6, further comprising:
     a step of registering a third region used for alignment of the imaging field of view, using the model image;
     a step of acquiring, as a visual field in-focus position, the depth position of the image with the highest degree of focus in the third region among a plurality of work images captured at different focal points by the imaging device; and
     a step of acquiring the amount of displacement between the region corresponding to the third region in the work image at the in-focus position and the third region in the model image,
     wherein the step of acquiring the in-focus position includes acquiring the in-focus position taking the amount of displacement into account.
  8.  The image acquisition method according to claim 6 or 7, further comprising a step of recording the imaging position of the imaging device in the optical axis direction for each field of view,
     wherein the step of acquiring the information indicating the posture of the work acquires that information based on the imaging position.
  9.  The image acquisition method according to any one of claims 6 to 8, further comprising:
     a step of registering a site region used for alignment of each site of the model image;
     a step of acquiring a site in-focus position, which is the in-focus position for each site of the model image; and
     a step of aligning the region corresponding to the site region in the work image at the site in-focus position with the site region in the model image.
  10.  The image acquisition method according to any one of claims 6 to 9, further comprising:
     a step of performing machine learning of the correspondence between the first region and image data based on a plurality of combinations of the registered first region and image data captured by the imaging device; and
     a step of using the result of the machine learning to determine whether given image data is a candidate for the first region.
PCT/JP2020/020781 2019-05-30 2020-05-26 Image acquisition apparatus and image acquisition method WO2020241648A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021522789A JP7077485B2 (en) 2019-05-30 2020-05-26 Image acquisition device and image acquisition method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019100835 2019-05-30
JP2019-100835 2019-05-30

Publications (1)

Publication Number Publication Date
WO2020241648A1 true WO2020241648A1 (en) 2020-12-03

Family

ID=73552218

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/020781 WO2020241648A1 (en) 2019-05-30 2020-05-26 Image acquisition apparatus and image acquisition method

Country Status (2)

Country Link
JP (1) JP7077485B2 (en)
WO (1) WO2020241648A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007225431A (en) * 2006-02-23 2007-09-06 Mitsubishi Electric Corp Visual inspection device
JP2013255972A (en) * 2012-06-14 2013-12-26 Shinnichi Kogyo Co Ltd Workpiece conveying device and method for controlling the same

Also Published As

Publication number Publication date
JPWO2020241648A1 (en) 2021-10-28
JP7077485B2 (en) 2022-05-30

Similar Documents

Publication Publication Date Title
US10755428B2 (en) Apparatuses and methods for machine vision system including creation of a point cloud model and/or three dimensional model
US8600192B2 (en) System and method for finding correspondence between cameras in a three-dimensional vision system
US20140118500A1 (en) System and method for finding correspondence between cameras in a three-dimensional vision system
US11403780B2 (en) Camera calibration device and camera calibration method
US11488322B2 (en) System and method for training a model in a plurality of non-perspective cameras and determining 3D pose of an object at runtime with the same
JP2007256091A (en) Method and apparatus for calibrating range finder
JP6274794B2 (en) Information processing apparatus, information processing method, program, and image measurement apparatus
WO2017149869A1 (en) Information processing device, method, program, and multi-camera system
Liu et al. Robust camera calibration by optimal localization of spatial control points
Niola et al. A new real-time shape acquisition with a laser scanner: first test results
Kinnell et al. Autonomous metrology for robot mounted 3D vision systems
JP2008217526A (en) Image processor, image processing program, and image processing method
WO2020241648A1 (en) Image acquisition apparatus and image acquisition method
CN116051658B (en) Camera hand-eye calibration method and device for target detection based on binocular vision
JP4006296B2 (en) Displacement measuring method and displacement measuring apparatus by photogrammetry
WO2021177245A1 (en) Image processing device, work instruction creating system, and work instruction creating method
RU2647645C1 (en) Method of eliminating seams when creating panoramic images from video stream of frames in real-time
KR101837269B1 (en) Coordination guide method and system based on multiple marker
JP2007034964A (en) Method and device for restoring movement of camera viewpoint and three-dimensional information and estimating lens distortion parameter, and program for restoring movement of camera viewpoint and three-dimensional information and estimating lens distortion parameter
WO2018096669A1 (en) Laser processing device, laser processing method, and laser processing program
Kopparapu et al. The effect of measurement noise on intrinsic camera calibration parameters
JP3849030B2 (en) Camera calibration apparatus and method
WO2012076979A1 (en) Model-based pose estimation using a non-perspective camera
JP2007292657A (en) Camera motion information acquiring apparatus, camera motion information acquiring method, and recording medium
Gorevoy et al. 3D spatial measurements by means of prism-based endoscopic imaging system

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 20815582
Country of ref document: EP
Kind code of ref document: A1

ENP Entry into the national phase
Ref document number: 2021522789
Country of ref document: JP
Kind code of ref document: A

NENP Non-entry into the national phase
Ref country code: DE

122 Ep: pct application non-entry in european phase
Ref document number: 20815582
Country of ref document: EP
Kind code of ref document: A1