WO2023170814A1 - Operation device for three-dimensional measurement, three-dimensional measurement program, recording medium, three-dimensional measurement device, and operation method for three-dimensional measurement - Google Patents


Info

Publication number
WO2023170814A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional; component; shape; measurement; area
Prior art date
Application number
PCT/JP2022/010277
Other languages
French (fr)
Japanese (ja)
Inventor
直生 飛田
Original Assignee
ヤマハ発動機株式会社 (Yamaha Motor Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by ヤマハ発動機株式会社 (Yamaha Motor Co., Ltd.)
Priority to PCT/JP2022/010277
Publication of WO2023170814A1

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01B — MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 — Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes, on the object

Definitions

  • The present invention relates to a technique for measuring the three-dimensional shape of a measurement object based on a pattern image obtained by imaging a predetermined pattern of light irradiated onto the measurement object.
  • Patent Documents 1 and 2 describe three-dimensional measurement techniques that measure the three-dimensional shape of a measurement target using a so-called phase shift method.
  • In these techniques, the three-dimensional shape of the measurement object is measured based on an image obtained by using a camera to capture a predetermined pattern of light irradiated onto the measurement object from a projector.
  • Patent Document 2 proposes a technique for measuring a three-dimensional shape by suppressing the influence of secondary reflection that occurs when a tall object is present among the objects that constitute the measurement target.
  • When secondary reflection occurs, for example when light emitted from a projector is reflected by the side of a tall object and then reflected again by another object, it becomes difficult to measure the three-dimensional shape accurately. Therefore, in Patent Document 2, the range of light irradiation from the projector is limited so that the light from the projector does not strike the tall object (causing object) that causes the secondary reflection.
  • If the poor irradiation area, where poor light irradiation occurs on the measurement target, can be obtained and referred to, it becomes possible to perform the calculation of the three-dimensional shape while suppressing the effects of the poor light irradiation.
  • The present invention was made in view of the above problems, and its object is to make it possible, when measuring the three-dimensional shape of a measurement object based on a pattern image obtained by imaging a predetermined pattern of light irradiated onto the measurement object, to acquire the poor irradiation areas where light irradiation failures occur.
  • A computing device for three-dimensional measurement according to the present invention executes calculations for measuring the three-dimensional shape of a measurement object based on a pattern image obtained by a three-dimensional measuring device that includes: an object support section that supports the measurement object having a board and components mounted on the board; a pattern irradiation section that irradiates a predetermined pattern of light onto the measurement object supported by the object support section; and an imaging section that acquires a two-dimensional pattern image by capturing the light irradiated onto the measurement object from the pattern irradiation section. The computing device includes: a reference model acquisition unit that acquires a reference model indicating the component mounting range where a component is mounted on the board and the external shape of the component mounted in the component mounting range; and an area calculation unit that calculates a poor irradiation area, which is at least one of a shadow area, where a shadow of the component is cast on the board because the light irradiated from the pattern irradiation section onto the component is blocked by the component, and a secondary reflection area, where the light irradiated from the pattern irradiation section and reflected by the component enters the board, based on the reference model and irradiation direction information indicating the direction in which the component is irradiated with light by the three-dimensional measuring device.
  • The three-dimensional measurement program according to the present invention causes a computer to function as the above-described computing device for three-dimensional measurement.
  • A recording medium according to the present invention records the above-described three-dimensional measurement program so as to be readable by a computer.
  • A three-dimensional measuring device according to the present invention includes: an object support section that supports a measurement object having a board and components mounted on the board; a pattern irradiation section that irradiates a predetermined pattern of light onto the measurement object supported by the object support section; an imaging section that acquires a two-dimensional pattern image by capturing the light irradiated onto the measurement object from the pattern irradiation section; and the above-described computing device for three-dimensional measurement, which executes the calculations for measuring the three-dimensional shape of the measurement object based on the pattern image.
  • A calculation method for three-dimensional measurement according to the present invention executes calculations for measuring the three-dimensional shape of a measurement object based on a pattern image obtained by a three-dimensional measuring device that includes an object support section supporting the measurement object having a board and components mounted on the board, a pattern irradiation section irradiating a predetermined pattern of light onto the supported measurement object, and an imaging section acquiring a two-dimensional pattern image by capturing that light. The method includes a step of acquiring a reference model indicating the component mounting range where a component is mounted on the board and the external shape of the component mounted in the component mounting range, and a step of calculating a poor irradiation area, which is at least one of the shadow area and the secondary reflection area described above, based on the reference model and irradiation direction information indicating the direction in which the component is irradiated with light by the three-dimensional measuring device.
  • In the present invention configured as described above, a reference model indicating the component mounting range where a component is mounted on the board and the external shape of the component mounted in that component mounting range is acquired. A poor irradiation area is then calculated based on the reference model and irradiation direction information indicating the direction in which the component is irradiated with light by the three-dimensional measuring device. In this way, when measuring the three-dimensional shape of a measurement object based on a pattern image obtained by imaging a predetermined pattern of light irradiated onto the measurement object, it becomes possible to acquire the poor irradiation areas where light irradiation failures occur.
  • Here, the poor irradiation area is at least one of a shadow area, where a shadow of the component is cast on the board because the light irradiated from the pattern irradiation section onto the component is blocked by the component, and a secondary reflection area, where the light irradiated from the pattern irradiation section and reflected by the component enters the board.
  • The computing device for three-dimensional measurement may be configured so that the reference model acquisition unit creates the reference model from at least one of inspection data, which indicates criteria for inspecting the suitability of the positional relationship between the component mounted in the component mounting range and the component mounting range, and CAD (Computer-Aided Design) data, which indicates the configuration of the board. With such a configuration, the reference model is created by utilizing existing data such as inspection data or CAD data of the board, and the poor irradiation area can be calculated based on this reference model.
  • Note that the information indicated by the inspection data is not limited to the above criteria and may also include criteria for inspecting the suitability of the component itself and criteria for inspecting the suitability of the joint between the component and the board.
  • The computing device for three-dimensional measurement may be configured so that the area calculation unit further calculates the poor irradiation area based on the result of recognizing the position of the board carried into the three-dimensional measuring device and supported by the object support section. With this configuration, the poor irradiation area can be accurately calculated according to the actual position of the board loaded into the three-dimensional measuring device.
  • The computing device for three-dimensional measurement may be configured so that the position of the board supported by the object support section is recognized based on the result of the imaging section imaging a fiducial mark attached to the board. With this configuration, the poor irradiation area can be accurately calculated according to the position of the board loaded into the three-dimensional measuring device.
  • The computing device for three-dimensional measurement may be configured so that the area calculation unit further calculates the poor irradiation area based on the result of detecting the position of the component mounted in the component mounting range. With this configuration, the poor irradiation area can be accurately calculated according to the actual position of the component mounted in the component mounting range.
  • The computing device for three-dimensional measurement may be configured so that the position of the component mounted in the component mounting range is detected based on the result of imaging the component. With this configuration, the poor irradiation area can be accurately calculated according to the actual position of the component mounted in the component mounting range.
  • The computing device for three-dimensional measurement may be configured so that the pattern irradiation section has a plurality of projectors that irradiate light onto the component from mutually different directions, the irradiation direction information indicates, for each of the plurality of projectors, the direction in which the component is irradiated with light from that projector, and the area calculation unit calculates a poor irradiation area for each of the plurality of projectors. With such a configuration, it is possible to obtain the poor irradiation area that occurs when the component is irradiated with light from each of the plurality of projectors.
  • The computing device for three-dimensional measurement may also be configured to include a shape calculation section that calculates a plurality of unidirectional shape data by executing, for each of the plurality of projectors, a calculation of unidirectional shape data indicating, for each pixel, a shape-related value, which is a value related to the three-dimensional shape, and that executes a first integrated calculation process in which the plurality of unidirectional shape data are integrated by calculating, for each pixel, a weighted average of the shape-related values indicated by the respective unidirectional shape data, thereby calculating the three-dimensional shape of the measurement object. In the first integrated calculation process, among the shape-related values indicated by each unidirectional shape data, the shape-related values of pixels included in the poor irradiation area are multiplied by a weighting coefficient of less than 1 and greater than or equal to 0.
  • unidirectional shape data indicating a value related to a three-dimensional shape (shape-related value) for each pixel is calculated based on a pattern image obtained by capturing an image of light irradiated onto the board from a projector using an imaging unit.
  • the unidirectional shape data is data regarding a three-dimensional shape calculated based on a pattern image obtained by capturing light irradiated onto a component from one projector.
  • the calculation for calculating this unidirectional shape data is executed for each of the plurality of projectors.
  • unidirectional shape data obtained when the component is irradiated with light from each of the plurality of projectors is obtained.
  • the plurality of unidirectional shape data obtained in this way are based on pattern images obtained while irradiating the component with light from different directions. Therefore, a pixel that corresponds to a poorly irradiated region in one unidirectional shape data may correspond to a region well irradiated with light in another unidirectional shape data. Therefore, by calculating a weighted average of the shape-related values indicated by each of the plurality of unidirectional shape data for each pixel, it is possible to integrate the plurality of unidirectional shape data and calculate the three-dimensional shape of the measurement target object.
  • At this time, in the weighted average, the shape-related values of pixels included in the poorly irradiated area are multiplied by a weighting coefficient of less than 1 and greater than or equal to 0, and the three-dimensional shape of the measurement object is calculated. This makes it possible to appropriately calculate the three-dimensional shape of the measurement object while suppressing the influence of the poorly irradiated area.
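  • Expressed as a formula (our reconstruction for clarity; the patent states the weighting in prose rather than in this notation), the weighted average for one pixel can be written as follows, where Q(p) is the shape-related value from the p-th projector's unidirectional shape data and W(p) is its weighting coefficient:

```latex
% Weighted average integrating P unidirectional shape-related values
% for a single pixel; W(p) = 1 for well-irradiated pixels and
% 0 <= W(p) < 1 for pixels inside a poor irradiation area.
H = \frac{\sum_{p=1}^{P} W(p)\, Q(p)}{\sum_{p=1}^{P} W(p)}
```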
  • The computing device for three-dimensional measurement may also be configured so that the three-dimensional shape of the measurement object is calculated by a weighted average in which the shape-related values of pixels included in the poor irradiation area are multiplied by a weighting coefficient of 0.
  • the operation of multiplying by a weighting coefficient of 0 is an operation of excluding a poorly irradiated region. This makes it possible to appropriately calculate the three-dimensional shape of the measurement target while eliminating the influence of poor irradiation areas.
  • Note that the shape-related value may be given by the product of a calculated value of the three-dimensional shape computed from the pattern image and the reliability of that calculated value, or it may be the calculated value of the three-dimensional shape itself.
  • The computing device for three-dimensional measurement may also be configured so that the shape calculation section can execute, in addition to the first integrated calculation process, a second integrated calculation process that integrates the plurality of unidirectional shape data by calculating, for each pixel, a representative value of the shape-related values indicated by the plurality of unidirectional shape data without weighting according to the poor irradiation area. In this case, an evaluation process that evaluates the difference between the three-dimensional shapes calculated by the two processes is executed for a predetermined number of measurement objects whose pattern images are acquired by the three-dimensional measuring device, and a necessity determination is executed to determine, based on the result, whether the first integrated calculation process is necessary. When it is determined in the necessity determination that the first integrated calculation process is unnecessary, the shape calculation section calculates the three-dimensional shape of the measurement object by the second integrated calculation process without executing the first integrated calculation process. In such a configuration, when the poor irradiation area does not pose a problem, the three-dimensional shape of the measurement object can be calculated accurately by the second integrated calculation process, which is simpler than the first integrated calculation process.
  • Note that the representative value of the shape-related values indicated by the plurality of unidirectional shape data, which is not weighted according to the poor irradiation area, includes an average value based on a simple average, or a median value.
  • the three-dimensional measurement arithmetic device may be configured to further include a setting operation section that accepts an operation to set a predetermined number. With this configuration, the user can appropriately adjust the period during which both the first integrated calculation process and the second integrated calculation process are executed.
  • As described above, according to the present invention, when measuring the three-dimensional shape of a measurement object based on a pattern image obtained by imaging a predetermined pattern of light irradiated onto the measurement object, it becomes possible to acquire the poor irradiation areas where light irradiation failures occur.
  • FIG. 1 is a block diagram schematically illustrating a three-dimensional measuring device according to the present invention.
  • FIG. 2 is a plan view schematically showing the relationship between a projector included in the three-dimensional measuring device of FIG. 1 and the imaging field of view.
  • FIG. 3 is a block diagram showing details of a control device included in the three-dimensional measuring device of FIG. 1.
  • FIG. 4 is a diagram schematically showing the contents of a component model created by the component model acquisition unit.
  • FIG. 5A is a diagram showing, in table format, an example of board configuration data used for creating the component model.
  • FIG. 5B is a diagram showing, in table format, an example of component data used for creating the component model.
  • FIG. 5C is a diagram schematically showing an example of the created component model.
  • FIG. 6 is a diagram schematically showing the contents of a poor irradiation area calculated by the area calculation unit.
  • FIG. 7 is a flowchart showing an example of the calculation for calculating the poor irradiation area.
  • FIG. 8 is a flowchart showing a first example of three-dimensional measurement.
  • FIG. 9 is a flowchart showing the omnidirectional shape data acquisition executed in the three-dimensional measurement of FIG. 8.
  • FIG. 10 is a flowchart showing the first data integration process executed in the three-dimensional measurement of FIG. 8.
  • FIG. 11 is a diagram schematically showing the calculations executed in the first data integration process of FIG. 10.
  • FIG. 12 is a flowchart showing an example of the calculation for correcting the poor irradiation areas calculated in FIG. 7.
  • FIG. 13 is a flowchart showing a second example of three-dimensional measurement.
  • FIG. 14 is a flowchart showing the second data integration process executed in the three-dimensional measurement of FIG. 13.
  • FIG. 15 is a diagram schematically showing the contents of the calculations executed in the three-dimensional measurement of FIG. 13.
  • FIG. 1 is a block diagram schematically illustrating a three-dimensional measuring device according to the present invention. In FIG. 1 and in the following description, the X direction is a horizontal direction, the Y direction is a horizontal direction orthogonal to the X direction, and the Z direction is the vertical direction.
  • The three-dimensional measuring device 1 of FIG. 1 measures the three-dimensional shape (external shape) of the measurement object J by having the control device 100 control the transport conveyor 2, the measurement head 3, and the drive mechanism 4.
  • The measurement object J is composed of a board B (printed circuit board) and components E mounted on the board B. The measurement object J is produced by a surface mounter that mounts the components E on the board B, and is then carried into the three-dimensional measuring device 1.
  • the conveyor 2 conveys the measurement target object J along a predetermined conveyance path. Specifically, the conveyor 2 carries the measurement target J before measurement to the measurement position in the three-dimensional measurement device 1, and holds the measurement target J at the measurement position so that the substrate B is horizontal. Further, when the measurement of the three-dimensional shape of the measurement object J at the measurement position is completed, the conveyor 2 carries the measured measurement object J out of the three-dimensional measurement device 1.
  • the measurement head 3 has an imaging camera 31 that images the inside of the imaging field of view V31 from above, and the imaging camera 31 captures an image of the measurement object J carried into the measurement position within the imaging field of view V31.
  • the imaging camera 31 includes a solid-state image sensor 311 that detects reflected light from the measurement target object J, and captures an image of the measurement target object J using the solid-state image sensor 311.
  • the measurement head 3 includes a projector 32 that projects striped pattern light L(S) in which the light intensity distribution changes sinusoidally onto the imaging field of view V31.
  • the projector 32 includes a light source such as an LED (Light Emitting Diode), and a digital micromirror device that reflects light from the light source toward the imaging field of view V31. By adjusting the angle of each micromirror of the digital micromirror device, the projector 32 can project a plurality of types of patterned light L(S) having mutually different phases onto the imaging field of view V31.
  • The measurement head 3 captures images with the imaging camera 31 while changing the phase of the pattern light L(S) projected from the projector 32, so that the three-dimensional shape of the measurement object J within the imaging field of view V31 can be measured by the phase shift method.
  • FIG. 2 is a plan view schematically showing the relationship between a projector included in the three-dimensional measuring device of FIG. 1 and an imaging field of view.
  • As shown in FIG. 2, the measurement head 3 has a plurality of (in this example, four) projectors 32 (in FIG. 1, only two projectors 32 are shown as representatives for simplicity of illustration).
  • Each projector 32 projects pattern light L(S) from diagonally above onto the imaging field of view V31 of the imaging camera 31.
  • the plurality of projectors 32 are arranged so as to surround the imaging camera 31, and are arranged circumferentially at equal pitches with the vertical direction Z as the center. Therefore, the plurality of projectors 32 project pattern light L(S) onto the imaging field of view V31 from mutually different projection directions D.
  • the number of projectors 32 included in the measurement head 3 is not limited to four as in the example of FIG. 2 .
  • Furthermore, the measurement head 3 has an illumination 33 (FIG. 1) that irradiates the imaging field of view V31 with light. While the projector 32 described above projects the pattern light L(S) onto the imaging field of view V31 when a three-dimensional shape is measured, the illumination 33 irradiates the imaging field of view V31 with illumination light when a two-dimensional image is captured.
  • The drive mechanism 4 supports the measurement head 3 and drives the measurement head 3 in the X, Y, and Z directions using motors. By driving this drive mechanism 4, the measurement head 3 can be moved above a measurement target location of the measurement object J to capture that location within the imaging field of view V31, so that the three-dimensional shape of that location can be measured.
  • For example, the three-dimensional measuring device 1 can capture a component E within the imaging field of view V31 and measure the three-dimensional shape of the component E and its surroundings so as to help inspect lifting of the component E from the board B, for example lifting of the leads of a package such as a QFP (Quad Flat Package) from the board B.
  • The control device 100 has a main control section 110, which is a processor including a CPU (Central Processing Unit) and memory, and the main control section 110 controls the various parts of the device so that the three-dimensional shape is measured.
  • The control device 100 also has a UI (User Interface) 200 configured with input/output devices such as a display, a keyboard, and a mouse, so that a user can input commands to the control device 100 and check the measurement results of the control device 100 via the UI 200.
  • the control device 100 includes a projection control section 120, an imaging control section 130, and a drive control section 140.
  • the projection control unit 120 controls the projection of the pattern light L(S) by the projector 32.
  • the imaging control unit 130 controls the imaging of the imaging field of view V31 by the imaging camera 31 and the irradiation of light from the illumination 33 to the imaging field of view V31.
  • the drive control unit 140 controls the drive of the measurement head 3 by the drive mechanism 4 .
  • When three-dimensional measurement is performed, the main control section 110 causes the drive control section 140 to control the drive mechanism 4 so as to move the measurement head 3 above the measurement target location of the measurement object J. As a result, the measurement target location falls within the imaging field of view V31 of the imaging camera 31.
  • Next, the main control unit 110 projects the pattern light L(S) from the projector 32 onto the imaging field of view V31 and images the pattern light L(S) projected onto the imaging field of view V31 with the imaging camera 31 (pattern imaging operation).
  • Specifically, the control device 100 has a storage unit 150, and the main control unit 110 reads out the projection pattern T(S) stored in the storage unit 150.
  • The main control unit 110 then controls the projection control unit 120 based on the projection pattern T(S) read from the storage unit 150 to adjust the angle of each micromirror of the digital micromirror device of the projector 32 according to the projection pattern T(S). In this way, the pattern light L(S) having the projection pattern T(S) is projected onto the imaging field of view V31. Further, the main control unit 110 controls the imaging control unit 130 so that the imaging camera 31 captures the pattern light L(S) projected onto the imaging field of view V31 to obtain a pattern image I(S), which is stored in the storage unit 150.
  • In this example, four types of pattern images I(S) are obtained by capturing pattern light L(S) whose phases differ from one another by 90 degrees.
  • The main control unit 110 then uses the phase shift method to determine, from the four types of pattern images I(S) acquired in this way, the height of the measurement target within the imaging field of view V31 for each pixel of the imaging camera 31.
  • Note that the variations of the projection pattern T(S) are not limited to this example; for example, eight types of projection patterns T(S) whose phases differ by 45 degrees may be used, or three types of projection patterns T(S) whose phases differ by 120 degrees may be used.
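  • As a rough illustration only (the patent does not spell out its arithmetic), a standard four-step phase shift calculation of this kind can be sketched as follows; `phase_to_height` stands in for the system-specific calibration, and phase unwrapping is omitted:

```python
import numpy as np

def phase_shift_height(images, phase_to_height=1.0):
    """Minimal sketch of the four-step phase shift method.

    `images`: four pattern images I(S) captured while the projected
    fringes are shifted by 90 degrees each time, so that
    I_k = A + B*cos(phi + k*90deg) at every pixel.
    `phase_to_height` is a hypothetical calibration constant mapping
    fringe phase to height; phase unwrapping is omitted here.
    """
    i0, i1, i2, i3 = (img.astype(np.float64) for img in images)
    # I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi), hence:
    phi = np.arctan2(i3 - i1, i0 - i2)             # wrapped phase
    modulation = 0.5 * np.hypot(i3 - i1, i0 - i2)  # fringe contrast B
    return phase_to_height * phi, modulation       # height, reliability
```

  • The modulation map is one plausible source for the reliability Qr that accompanies the measured value Qm later in this description.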
  • FIG. 3 is a block diagram showing details of a control device included in the three-dimensional measuring device of FIG. 1.
  • The control device 100 stores a three-dimensional measurement program 92 downloaded from an external server computer 91 in the storage unit 150. Note that the manner in which the three-dimensional measurement program 92 is provided is not limited to external download; the three-dimensional measurement program 92 may also be provided in a state recorded on a DVD (Digital Versatile Disc) or a USB (Universal Serial Bus) memory.
  • By executing the three-dimensional measurement program 92, the main control section 110 functions as a component model acquisition section 111, an area calculation section 112, a pattern image acquisition section 113, and a shape calculation section 114.
  • These functional units 111, 112, 113, and 114 serve to handle the poor irradiation areas that occur due to the components E on the board B when the pattern light L(S) is projected onto the measurement object J. The details are as follows.
  • FIG. 4 is a diagram schematically showing the contents of a component model created by the component model acquisition unit.
  • FIG. 5A is a diagram showing, in table format, an example of board configuration data used for creating the component model.
  • FIG. 5B is a diagram showing, in table format, an example of component data used for creating the component model.
  • FIG. 5C is a diagram schematically showing an example of the created component model.
  • The component model acquisition unit 111 creates, from board configuration data 81 indicating the configuration of the board B and component data 82 indicating the external shapes of the components E to be mounted on the board B, a component model 83 indicating the three-dimensional range occupied by each component E mounted on the board B. Note that the board configuration data 81 and the component data 82 are stored in the storage unit 150 in advance.
  • As shown in FIG. 5A, the board configuration data 81 defines each component mounting range Ref(R) by its position in the X direction (X coordinate) and its position in the Y direction (Y coordinate).
  • The component mounting range Ref(R) is demarcated by, for example, the lands provided on the board B, and is a range having a certain extent on the board B.
  • the position (x, y) of the component mounting range Ref(R) corresponds to the center of the component mounting range Ref(R) in plan view.
  • Note that the component mounting range Ref(R) indicated by the board configuration data 81 is the ideal range in which the component E should be mounted, not the range in which the component E has actually been mounted by the surface mounter.
  • As the board configuration data 81, CAD data indicating the configuration of the board B, inspection data indicating criteria for checking in the surface mounter the suitability of the positional relationship between the component E mounted in the component mounting range Ref(R) and the component mounting range Ref(R), or the like can be used.
  • the component data 82 shows the configuration of the component E for each type of component E.
  • Specifically, as shown in FIG. 5B, the component data 82 indicates, for each of various components Ea, Eb, Ec, ..., the external shape of the rectangular component E, such as its length El, width Ew, and height Eh, and whether the component has the property of reflecting light (the pattern light L(S)).
  • The component model acquisition unit 111 checks, from the board configuration data 81, the type of the component E to be mounted in each component mounting range Ref(R) of the board B, checks the configuration of that type of component E from the component data 82, and creates the component model 83 based on these.
  • This component model 83 includes the position (x, y) of the component mounting range Ref(R) and the external shape of the component E mounted in the component mounting range Ref(R) (X size E_x, Y size E_y, and Z size E_z). In other words, the component model 83 indicates the range in which the component E mounted in the component mounting range Ref(R) protrudes from the board B in the Z direction (height direction).
  • the component model 83 indicates whether the component E mounted in the component mounting range Ref(R) has the property of reflecting light (pattern light L(S)). This component model 83 is created for each of the plurality of component mounting ranges Ref(R) provided on the board B.
  • the specific data used to create the part model 83 and the specific method of creating the part model 83 using each data are not limited to this example.
  • Note that the component model acquisition unit 111 does not necessarily have to create the component model 83 from the board configuration data 81 and the component data 82; it is sufficient for the component model acquisition unit 111 to acquire a component model 83 that has already been created.
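  • A minimal data-structure sketch of the component model 83 follows; the field names are our assumptions, not the patent's, and mounting rotation is ignored for brevity:

```python
from dataclasses import dataclass

@dataclass
class ComponentModel:
    """Hypothetical container mirroring the component model 83: the
    position (x, y) of the component mounting range Ref(R) plus the
    external shape and reflectivity of the component E mounted there."""
    x: float          # center of Ref(R), X coordinate
    y: float          # center of Ref(R), Y coordinate
    size_x: float     # E_x
    size_y: float     # E_y
    size_z: float     # E_z (height above board B)
    reflective: bool  # whether E reflects the pattern light L(S)

def build_component_model(board_entry, component_data):
    """Sketch of the lookup described above: board configuration data 81
    gives the mounting position and component type; component data 82
    gives the external shape for that type. Keys are assumptions."""
    shape = component_data[board_entry["type"]]
    return ComponentModel(
        x=board_entry["x"], y=board_entry["y"],
        size_x=shape["length"], size_y=shape["width"],
        size_z=shape["height"], reflective=shape["reflective"],
    )
```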
  • FIG. 6 is a diagram schematically showing the contents of the poor irradiation area calculated by the area calculation unit.
  • Component E shown in FIG. 6 has a property of reflecting pattern light L(S).
  • the component E mounted in the component mounting range Ref(R) has a width E_x in the X direction, a length E_y in the Y direction, and a height E_z in the Z direction. Then, the pattern light L(S) emitted from the projector 32 is projected onto the component E from the projection direction D.
  • a shadow area As (poor irradiation area) occurs downstream of the component E in the projection direction D.
  • This shadow area As is an area where a shadow of the component E is generated on the board B because the pattern light L(S) projected onto the component E from the projector 32 is blocked by the component E.
  • a secondary reflection area Ar (defective irradiation area) occurs upstream of the component E in the projection direction D.
  • This secondary reflection area Ar is an area where light projected onto the component E from the projector 32 and reflected by the side surface of the component E enters the substrate B.
  • the length of the shadow area As and the secondary reflection area Ar in the Y direction corresponds to the length E_y of the component E in the Y direction.
  • the width of the shadow area As and the secondary reflection area Ar in the X direction corresponds to the width obtained by multiplying the height E_z of the component E by the tangent (tan ⁇ s) of the projection angle ⁇ s. Note that for the component E that does not reflect the pattern light L(S), no secondary reflection area Ar is generated.
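  • The geometry of FIG. 6 can be sketched as follows, assuming for simplicity that the pattern light travels along the X axis; a real implementation would handle the four projection directions D by rotating coordinates:

```python
def poor_irradiation_areas(model, tan_theta_s, direction_x=1):
    """Shadow area As and secondary reflection area Ar for one component,
    as axis-aligned rectangles (x_min, x_max, y_min, y_max) on board B.
    Assumes the pattern light L(S) travels along X (direction_x = +1/-1)."""
    half_x, half_y = model.size_x / 2, model.size_y / 2
    width = model.size_z * tan_theta_s            # E_z * tan(theta_s)
    y0, y1 = model.y - half_y, model.y + half_y   # Y length matches E_y
    # Shadow As lies downstream of E in the projection direction D.
    down = model.x + direction_x * half_x
    shadow = (min(down, down + direction_x * width),
              max(down, down + direction_x * width), y0, y1)
    # Secondary reflection Ar lies upstream, only for reflective parts.
    secondary = None
    if model.reflective:
        up = model.x - direction_x * half_x
        secondary = (min(up, up - direction_x * width),
                     max(up, up - direction_x * width), y0, y1)
    return shadow, secondary
```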
  • FIG. 7 is a flowchart showing an example of the calculation for calculating the poor irradiation areas. The flowchart of FIG. 7 is executed under the control of the main control unit 110.
  • In step S105, the component model acquisition unit 111 creates the component model 83 of the component E to be mounted in the R-th component mounting range Ref(R).
  • In step S106, the area calculation unit 112 checks the projection direction D in which the P-th projector 32 projects the pattern light L(S) onto the R-th component mounting range Ref.
  • Specifically, projection direction information 84, indicating for each of the plurality of projectors 32 the projection direction D of the pattern light L(S) from that projector 32, is stored in the storage unit 150 in advance, and the area calculation unit 112 checks the projection direction D by referring to this projection direction information 84.
  • In step S107, the area calculation unit 112 calculates the shadow area As (poor irradiation area) that occurs when the pattern light L(S) is projected from the projection direction D by the P-th projector 32 onto the component E indicated by the component model 83 created for the R-th component mounting range Ref.
  • Likewise, the area calculation unit 112 calculates the secondary reflection area Ar (poor irradiation area) that occurs when the P-th projector 32 projects the pattern light L(S) onto the component E from the projection direction D.
  • In step S108, it is determined whether the count value R of the component mounting range Ref has reached the number Rmax of component mounting ranges Ref provided on the board B, that is, whether steps S105 to S107 have been executed for all the component mounting ranges Ref provided on the board B. If there is a component mounting range Ref for which steps S105 to S107 have not been executed ("NO" in step S108), the process returns to step S104. In this way, steps S104 to S107 are repeated until steps S105 to S107 have been executed for all component mounting ranges Ref ("YES" in step S108). As a result, the poor irradiation areas (shadow area As / secondary reflection area Ar) that occur when the pattern light L(S) is projected from the P-th projector 32 are calculated for the components E in all component mounting ranges Ref.
  • Furthermore, these steps are repeated for all of the projectors 32. As a result, poor irradiation area information 85, which indicates for each of the projectors 32 the poor irradiation areas (shadow area As / secondary reflection area Ar) that occur due to the component E in each component mounting range Ref when the pattern light L(S) is projected from that projector 32, is calculated and stored in the storage unit 150.
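  • Reusing the sketches above, the double loop of FIG. 7 (projectors 32 on the outside, component mounting ranges Ref on the inside) might look like this; the dictionary keys are assumptions:

```python
def compute_poor_irradiation_info(board_entries, component_data,
                                  projection_directions):
    """Sketch of FIG. 7: for every projector 32 (count value P) and every
    component mounting range Ref (count value R), build the component
    model 83 (step S105) and derive the shadow area As and the secondary
    reflection area Ar (steps S106-S107)."""
    info = {}  # (P, R) -> (shadow As, secondary reflection Ar)
    for p, direction in enumerate(projection_directions, start=1):
        for r, entry in enumerate(board_entries, start=1):
            model = build_component_model(entry, component_data)
            info[(p, r)] = poor_irradiation_areas(
                model, direction["tan_theta_s"], direction["sign_x"])
    return info  # corresponds to the poor irradiation area information 85
```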
  • FIG. 8 is a flowchart showing a first example of three-dimensional measurement, FIG. 9 is a flowchart showing the omnidirectional shape data acquisition executed in the three-dimensional measurement of FIG. 8, FIG. 10 is a flowchart showing the first data integration process executed in the three-dimensional measurement of FIG. 8, and FIG. 11 is a diagram schematically showing the calculations executed in the first data integration process of FIG. 10.
  • the flowcharts in FIGS. 8, 9, and 10 are executed under the control of the main control unit 110.
  • omnidirectional shape data acquisition is executed in step S201.
  • In the omnidirectional shape data acquisition of FIG. 9, recognition of the fiducial mark attached to the board B carried into the three-dimensional measuring device 1 is executed first (step S301). Specifically, with the fiducial mark captured in the imaging field of view V31 from above, the imaging camera 31 images the fiducial mark while the illumination 33 irradiates the fiducial mark with illumination light, and a mark image is acquired. The mark image is sent from the imaging control section 130 to the main control section 110, and the main control section 110 detects the position of the board B held on the transport conveyor 2 from the position of the fiducial mark appearing in the mark image.
  • In step S302, the count value P of the projector 32 is reset to zero, and in step S303 the count value P is incremented. Further, in step S304, the count value R of the component mounting range Ref is reset to zero, and in step S305 the count value R is incremented.
  • Then, the main control unit 110 uses the drive mechanism 4 to adjust the position of the imaging field of view V31 relative to the board B so that the R-th component mounting range Ref falls within the imaging field of view V31. At this time, the main control unit 110 executes this positioning with respect to the board B based on the position of the board B detected in step S301.
  • In step S306, the pattern image acquisition unit 113 controls the projection control unit 120 and the imaging control unit 130 so that the pattern light L(S) is projected from the P-th projector 32 onto the component E in the R-th component mounting range Ref and the imaging camera 31 captures the pattern light L(S), whereby the pattern images I(S) are acquired.
  • In step S307, the shape calculation unit 114 calculates unidirectional shape data 86 from the four types of pattern images I(S) using the phase shift method.
  • This unidirectional shape data 86 is data that indicates, for each pixel PX (FIG. 11), a shape-related value Q regarding the three-dimensional shape of the component E mounted in the component mounting range Ref.
  • The pixel PX corresponds to, for example, a pixel of the solid-state image sensor 311; since the imaging camera 31 images the board B while facing it from the Z direction, the plurality of pixels of the solid-state image sensor 311 correspond to mutually different positions (X coordinate, Y coordinate) on the board B.
  • This shape-related value Q is composed of a measured value Qm calculated as a height (Z coordinate) from four types of pattern images I(S) by a phase shift method, and a reliability Qr of the measured value Qm.
  • the shape-related value Q is given by the product of the measured value Qm and the reliability Qr.
  • the pixel PX used when determining the shape-related value Q does not need to correspond to the pixel of the solid-state image sensor 311.
  • For example, the resolution of the pattern images I(S) may be converted, and the shape-related value Q may be calculated using pixels PX corresponding to the converted resolution.
  • In step S308, it is determined whether the count value R of the component mounting range Ref has reached the number Rmax of component mounting ranges Ref provided on the board B, that is, whether the acquisition of the pattern images I(S) (step S306) and the calculation of the unidirectional shape data 86 (step S307) have been executed for all the component mounting ranges Ref. If there is a component mounting range Ref for which steps S306 to S307 have not been executed ("NO" in step S308), the process returns to step S305. In this way, steps S306 to S307 are repeated until they have been executed for all component mounting ranges Ref. As a result, the unidirectional shape data 86 obtained when the pattern light L(S) is projected from the P-th projector 32 is calculated for the components E in all component mounting ranges Ref.
  • In step S202, the first data integration process (FIG. 10) is executed.
  • In the first data integration process of FIG. 10, the four unidirectional shape data 86 obtained by projecting the projection pattern T(S) from each of the four projectors 32 onto the component E mounted in the R-th component mounting range Ref are specified. In other words, for the component E in the R-th component mounting range Ref, four pieces of unidirectional shape data 86 corresponding to the four projectors 32 are specified.
  • If the N-th pixel PX belongs to the shadow area As or the secondary reflection area Ar for the P-th projector 32, the weighting coefficient W(P) for the shape-related value Q of the N-th pixel PX is determined to be "0", and that shape-related value Q is excluded (step S408). Otherwise, the weighting coefficient W(P) for the shape-related value Q of the N-th pixel PX is determined to be "1" (step S410). In the example of FIG. 11, it is determined that the N-th pixel PX corresponding to the projector 32 with P = 2 belongs to the shadow area As or the secondary reflection area Ar, and the weighting coefficient W(2) is determined to be "0", whereas the weighting coefficients W(1), W(3), and W(4) for the other projectors 32 are determined to be "1".
  • Then, a weighted average of the shape-related values Q of the N-th pixel PX is calculated using the weighting coefficients W(P), and an integrated value H that integrates these shape-related values Q is obtained (step S411). The weighted-average formula for determining the integrated value H is as shown in the "conversion formula" column of FIG. 11; in the example of FIG. 11, the integrated value H of the N-th pixel PX is 159.
  • These steps S405 to S411 are repeated, while the count value N is incremented (step S404), until the count value N reaches the number Nmax of pixels PX constituting the unidirectional shape data 86 ("YES" in step S412).
  • integrated shape data 87 indicating, for each pixel PX, an integrated value H obtained by integrating four shape-related values Q corresponding to mutually different projectors 32 is calculated and stored in the storage unit 150.
  • This integrated shape data 87 corresponds to data obtained by integrating the four pieces of unidirectional shape data 86, corresponding to the four projectors 32, acquired for the component E mounted in the R-th component mounting range Ref, and indicates the three-dimensional shape of the component E.
  • steps S403 to S412 are repeated while incrementing the count value R (step S402) until the count value R reaches the number Rmax of the component mounting range Ref (“YES” in step S413).
  • the integrated shape data 87 is calculated for the components E in all component mounting ranges Ref.
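  • A compact sketch of this first data integration process follows; the array shapes are our assumptions:

```python
import numpy as np

def first_data_integration(shape_data, bad_masks):
    """shape_data: (P, H, W) array of shape-related values Q per projector.
    bad_masks:  (P, H, W) boolean array, True where the pixel PX falls in
    the shadow area As or the secondary reflection area Ar for that
    projector. Returns the integrated value H per pixel, i.e. the
    integrated shape data 87."""
    weights = np.where(bad_masks, 0.0, 1.0)  # W(P): 0 or 1 (steps S407-S410)
    total = weights.sum(axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        # Weighted average (step S411); NaN where every direction is poor.
        return (weights * shape_data).sum(axis=0) / total
```

  • Setting every weight to 1 here would reduce this to the simple average used by the second data integration process described later.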
  • As described above, in this embodiment, a component model 83 (reference model) indicating the component mounting range Ref in which the component E is mounted on the board B and the external shape of the component E to be mounted in the component mounting range Ref is acquired (step S105).
  • Also, the projection direction information 84 (irradiation direction information) indicating the projection direction D in which the component E is irradiated with light by the three-dimensional measuring device 1 is referred to (step S106), and the shadow area As and the secondary reflection area Ar (poor irradiation areas) are calculated based on the component model 83 and the projection direction information 84 (step S107).
  • The component model acquisition unit 111 (reference model acquisition unit) creates the component model 83 from at least one of inspection data, which indicates criteria for inspecting the suitability of the positional relationship between the component E mounted in the component mounting range Ref and the component mounting range Ref, and CAD data, which indicates the configuration of the board B (step S105). With this configuration, the component model 83 is created by utilizing existing data such as inspection data or CAD data of the board B, and the shadow area As and the secondary reflection area Ar can be calculated based on this component model 83.
  • Also, a plurality of projectors 32 that irradiate the pattern light L(S) onto the component E from mutually different projection directions D are provided, and the projection direction information 84 indicates, for each of the plurality of projectors 32, the projection direction D in which light is irradiated from that projector 32 onto the component E.
  • the area calculation unit 112 calculates the shadow area As and the secondary reflection area Ar for each of the plurality of projectors 32 (steps S102, S107). Thereby, it is possible to obtain the shadow area As and the secondary reflection area Ar that occur when light is projected onto the component E from each of the plurality of projectors 32.
  • The calculation (step S307) for calculating the unidirectional shape data 86, which indicates the shape-related value Q for each pixel PX, is executed by the shape calculation unit 114. This calculation (step S307) is executed for each of the plurality of projectors 32 (step S303), and a plurality of unidirectional shape data 86 corresponding to the mutually different projectors 32 are calculated.
  • The shape calculation unit 114 then executes the first integrated calculation process (FIG. 10), which integrates the plurality of unidirectional shape data 86 by calculating, for each pixel PX, a weighted average of the shape-related values Q indicated by the respective unidirectional shape data 86, thereby calculating the three-dimensional shape of the measurement object J.
  • In the first integrated calculation process, among the shape-related values Q indicated by the unidirectional shape data 86, the shape-related values Q of pixels PX included in the shadow area As or the secondary reflection area Ar are multiplied by a weighting coefficient W(P) of less than 1 and greater than or equal to 0, and the three-dimensional shape of the measurement object J is calculated by this weighted average (step S411).
  • In this configuration, the unidirectional shape data 86, which indicates for each pixel PX a shape-related value Q regarding the three-dimensional shape, is calculated based on the pattern images I(S) obtained by the imaging camera 31 capturing the pattern light L(S) irradiated onto the board B from the projector 32 (step S307).
  • the unidirectional shape data 86 is data regarding a three-dimensional shape calculated based on a pattern image I(S) obtained by capturing the patterned light L(S) irradiated onto the component E from one projector 32.
  • the calculation for calculating this unidirectional shape data 86 is executed for each of the plurality of projectors 32 (step S303).
  • unidirectional shape data 86 obtained when the pattern light L(S) is irradiated onto the component E from each of the plurality of projectors 32 is obtained.
  • the plurality of unidirectional shape data 86 thus obtained are based on pattern images I(S) obtained while irradiating pattern light L(S) onto the component E from different projection directions D, respectively. Therefore, a pixel PX that corresponds to a shadow area As or a secondary reflection area Ar in one unidirectional shape data 86 may correspond to an area well irradiated with light in another unidirectional shape data 86.
  • That is, the three-dimensional shape of the measurement object J is calculated by integrating the plurality of unidirectional shape data 86 through the calculation, for each pixel PX, of an average of the shape-related values Q indicated by the respective unidirectional shape data 86 (FIG. 10). Moreover, in the first integrated calculation process of FIG. 10, among the shape-related values Q indicated by the unidirectional shape data 86, the shape-related values Q of pixels PX included in the shadow area As or the secondary reflection area Ar are multiplied by a weighting coefficient W(P) of less than 1 and greater than or equal to 0 in the weighted average (steps S407 to S410). This makes it possible to appropriately calculate the three-dimensional shape of the measurement object J while suppressing the influence of the shadow areas As and the secondary reflection areas Ar.
  • In particular, in this embodiment, the three-dimensional shape of the measurement object J is calculated by a weighted average in which the shape-related values Q of pixels PX included in the shadow area As or the secondary reflection area Ar are multiplied by a weighting coefficient W(P) of 0 (steps S407 to S410).
  • the operation of multiplying by the weighting coefficient W(P) of 0 is an operation of excluding the shadow area As and the secondary reflection area Ar. This makes it possible to appropriately calculate the three-dimensional shape of the measurement object J while eliminating the effects of the shadow area As and the secondary reflection area Ar.
  • FIG. 12 is a flowchart showing an example of calculation for correcting the irradiation defect area calculated in FIG. 7.
  • The flowchart in FIG. 12 is executed by the main control unit 110 in parallel with the omnidirectional shape data acquisition of FIG. 9.
  • In step S501, the count value P of the projector 32 is reset to zero, and in step S502 the count value P is incremented. In step S503, the count value R of the component mounting range Ref is reset to zero, and in step S504 the count value R is incremented.
  • the shadow area As and the secondary reflection area Ar calculated for the component E in the R-th component mounting range Ref are specified in correspondence with the P-th projector 32.
  • In step S505, the shadow area As and the secondary reflection area Ar specified in steps S502 and S504 are corrected based on the position of the board B detected in step S301 of the omnidirectional shape data acquisition of FIG. 9.
  • Suppose, for example, that the position of the board B detected in step S301 deviates by a positional deviation amount Δa from the ideal position of the board B assumed in calculating the shadow area As and the secondary reflection area Ar. In such a case, by correcting the shadow area As and the secondary reflection area Ar according to this positional deviation amount Δa, the shadow area As and the secondary reflection area Ar can be accurately calculated according to the actual position of the board B carried into the three-dimensional measuring device 1.
  • step S506 the shadow area As and secondary reflection area Ar specified in steps S502 and S504 are corrected based on the position of the component E actually mounted in the R-th component mounting range Ref. The details of this are as follows.
  • In step S306 of the omnidirectional shape data acquisition of FIG. 9, a two-dimensional image of the component E is acquired in addition to the pattern images I(S). Specifically, the two-dimensional image of the component E is acquired by the imaging camera 31 capturing an image of the component E from above while the illumination 33 irradiates the component E within the imaging field of view V31 with illumination light. Furthermore, in step S306, the component position, which indicates the positional relationship between the component mounting range Ref and the component E actually mounted in the component mounting range Ref of the board B carried into the three-dimensional measuring device 1, is calculated based on this two-dimensional image and stored in the storage unit 150.
  • Suppose that the positional relationship between the component mounting range Ref indicated by the component position calculated in step S306 and the component E deviates by a positional deviation amount Δb from the ideal positional relationship assumed in calculating the shadow area As and the secondary reflection area Ar. In such a case, by correcting the shadow area As and the secondary reflection area Ar according to the positional deviation amount Δb of the component E with respect to the component mounting range Ref, the shadow area As and the secondary reflection area Ar can be accurately calculated according to the actual position of the component E mounted in the component mounting range Ref of the board B carried into the three-dimensional measuring device 1.
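  • Treating both corrections as pure translations (a simplifying assumption), steps S505 and S506 could be sketched as:

```python
def correct_area(rect, board_shift, component_shift):
    """Translate a poor irradiation area rectangle
    (x_min, x_max, y_min, y_max) by the board positional deviation
    (delta a, step S505) and the component's deviation from its mounting
    range Ref (delta b, step S506); both shifts are (dx, dy) tuples."""
    dx = board_shift[0] + component_shift[0]
    dy = board_shift[1] + component_shift[1]
    x0, x1, y0, y1 = rect
    return (x0 + dx, x1 + dx, y0 + dy, y1 + dy)
```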
  • In step S507, it is determined whether the count value R of the component mounting range Ref has reached the number Rmax of component mounting ranges Ref provided on the board B, that is, whether the corrections of steps S505 to S506 have been executed for all the component mounting ranges Ref provided on the board B. If there is a component mounting range Ref for which the corrections of steps S505 to S506 have not been executed ("NO" in step S507), the process returns to step S504. In this way, steps S505 and S506 are repeated until the corrections of steps S505 and S506 have been executed for all component mounting ranges Ref ("YES" in step S507). As a result, the poor irradiation areas (shadow area As / secondary reflection area Ar) of each component mounting range Ref that occur when the pattern light L(S) is projected from the P-th projector 32 are corrected.
  • Furthermore, these corrections are repeated for all of the projectors 32, so that the poor irradiation areas (shadow area As / secondary reflection area Ar) are corrected for every projector 32. In the first data integration process of FIG. 10, steps S407 to S411 are executed based on the shadow areas As and the secondary reflection areas Ar corrected in this way.
  • Thereby, the integrated shape data 87 can be calculated based on the shadow areas As and the secondary reflection areas Ar that are accurately set for the measurement object J (the board B and the components E) actually carried into the three-dimensional measuring device 1.
  • As described above, in this embodiment, the area calculation unit 112 further calculates the shadow area As and the secondary reflection area Ar based on the result of recognizing, in step S301, the position of the board B carried into the three-dimensional measuring device 1 and supported by the transport conveyor 2 (object support section) (step S505). With this configuration, the shadow area As and the secondary reflection area Ar can be accurately calculated according to the actual position of the board B carried into the three-dimensional measuring device 1.
  • the position of the substrate B supported by the transport conveyor 2 is recognized based on the result of imaging the fiducial mark attached to the substrate B by the imaging camera 31 (step S301).
  • the shadow area As and the secondary reflection area Ar can be accurately calculated according to the position of the substrate B carried into the three-dimensional measurement device 1.
  • the area calculation unit 112 calculates a shadow area As and a secondary reflection area Ar based on the result of detecting the position of the component E mounted in the component mounting range Ref in step S306 (step S506).
  • the shadow area As and the secondary reflection area Ar can be accurately calculated according to the actual position of the component E mounted in the component mounting range Ref.
  • the position of the component E mounted in the component mounting range Ref is detected based on a two-dimensional image of the component E (step S306).
  • the shadow area As and the secondary reflection area Ar can be accurately calculated according to the actual position of the component E mounted in the component mounting range Ref.
  • FIG. 13 is a flowchart showing a second example of three-dimensional measurement
  • FIG. 14 is a flowchart showing a second data integration process executed in three-dimensional measurement in FIG. 13
  • FIG. 15 is a diagram schematically showing the contents of the calculations executed in the three-dimensional measurement of FIG. 13.
  • a plurality of substrates B are sequentially loaded into the three-dimensional measuring device 1 and three-dimensional measurement is performed on each substrate B.
  • In the three-dimensional measurement of FIG. 13, omnidirectional shape data acquisition (step S601), the first data integration process (step S602), and a second data integration process (step S603) are executed.
  • The execution order of the first data integration process and the second data integration process is not limited to the example in FIG. 13 and may be reversed from that shown in FIG. 13.
  • The second data integration process of FIG. 14 differs from the first data integration process in that, in step S414, a simple average (an average with all weighting coefficients equal to 1) is calculated. That is, in step S414 of the second data integration process, a simple average of the shape-related values Q of the N-th pixel PX obtained for each of the four projectors 32 is calculated. In this way, the integrated value H is calculated not by a weighted average but by a simple average, and the integrated shape data 87 is obtained.
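The arithmetic of step S414 can be pictured as follows; this is a minimal sketch assuming the shape-related values Q for each projector are available as NumPy arrays of identical shape, and the function name is illustrative only.

    import numpy as np

    def integrate_simple(q_per_projector):
        # Second data integration: per-pixel simple average (all weighting
        # coefficients equal to 1) of the shape-related values Q obtained for
        # each projector, giving the integrated value H for every pixel.
        # A per-pixel median (np.median) could equally serve as the
        # representative value.
        q = np.stack(q_per_projector)   # shape: (num_projectors, rows, cols)
        return q.mean(axis=0)           # integrated shape data 87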
  • When the three-dimensional measurement is completed for one board B (steps S601 to S603), it is determined whether steps S601 to S603 have been performed for a predetermined number (one or more) of boards B (step S604). If they have not ("NO" in step S604), it is determined in step S605 whether to end the measurement. If the measurement is to be ended ("YES" in step S605), the three-dimensional measurement of FIG. 13 ends; if not ("NO" in step S605), the process returns to step S601.
  • If it is determined in step S604 that steps S601 to S603 have been executed for the predetermined number of substrates B ("YES"), the difference between the integrated shape data 87 (three-dimensional shape) acquired in the first data integration process of step S602 and the integrated shape data 87 (three-dimensional shape) acquired in the second data integration process of step S603 is calculated (step S606). For example, for the same component mounting range Ref of the same board B, the average of the squared differences (that is, the mean square error) between the integrated shape data 87 obtained in the first data integration process and the integrated shape data 87 obtained in the second data integration process is calculated.
  • The mean square error may be calculated for each component mounting range Ref, or may be calculated for one representative component mounting range Ref. Further, the mean square error is calculated for each of the predetermined number of substrates B, and the average, median, or maximum of the mean square errors calculated in this way is taken as the difference. Note that the specific method for calculating the difference is not limited to this example, and any calculation method that can evaluate the difference between two pieces of data can be adopted.
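The difference evaluation (step S606) and the necessity determination (step S607) might be sketched as follows, assuming the two integrated shape data are equal-sized height maps; the aggregation choices mirror the text, while the function names and the threshold are placeholders.

    import numpy as np

    def mean_square_error(shape_first, shape_second):
        # MSE between the integrated shape data 87 obtained by the first and
        # second data integration processes for one component mounting range.
        d = np.asarray(shape_first, float) - np.asarray(shape_second, float)
        return float(np.mean(d ** 2))

    def first_process_needed(mse_values, threshold, aggregate="mean"):
        # Step S607: aggregate the per-range, per-board MSEs into one
        # difference (mean, median, or max) and compare it with the threshold.
        agg = {"mean": np.mean, "median": np.median, "max": np.max}[aggregate]
        return agg(mse_values) >= threshold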
  • In step S607, it is determined based on the difference whether the first data integration process is necessary. Specifically, if the difference is greater than or equal to a predetermined threshold, it is determined that the first data integration process is necessary ("YES"); if the difference is less than the threshold, it is determined that the first data integration process is unnecessary ("NO"). For example, as shown by the waveforms (three-dimensional shapes) in the "Poor irradiation effect" column of FIG. 15, when poor irradiation has an influence, a noise peak occurs in the three-dimensional shape obtained in the second data integration process, whereas such noise does not appear in the three-dimensional shape obtained in the first data integration process. In that case, the first data integration process is necessary to eliminate the effects of poor irradiation.
  • If it is determined that the first data integration process is necessary ("YES" in step S607), then every time one substrate B is carried into the three-dimensional measuring device 1, acquisition of omnidirectional shape data (step S608), the first data integration process (step S609), and a determination of whether to end the measurement (step S610) are executed. In this way, the integrated shape data 87 is calculated not by the second data integration process but by the first data integration process (step S609).
  • On the other hand, if it is determined that the first data integration process is unnecessary ("NO" in step S607), then every time one substrate B is carried into the three-dimensional measuring device 1, acquisition of omnidirectional shape data (step S611), the second data integration process (step S612), and a determination of whether to end the measurement (step S613) are executed. In this way, the integrated shape data 87 is calculated not by the first data integration process but by the second data integration process (step S612).
  • the shape calculation unit 114 executes both the first data integration process (step S602) and the second data integration process (step S603) on the same measurement object J.
  • In the second data integration process, the shape calculation unit 114 calculates, for each pixel, the average (simple average) of the shape-related values Q indicated by the plurality of unidirectional shape data 86, without weighting according to the shadow area As and the secondary reflection area Ar; the plurality of unidirectional shape data 86 are thereby integrated, and the three-dimensional shape (integrated shape data 87) of the measurement object J is calculated.
  • the shape calculation unit 114 evaluates the difference between the integrated shape data 87 (three-dimensional shape) of the measurement object J calculated in each of the first data integration process (step S602) and the second data integration process (step S603).
  • the evaluation process (step S606) is performed on a predetermined number of measurement objects J from which pattern images I(S) are to be obtained by the three-dimensional measuring device 1.
  • a necessity determination is performed to determine whether the first data integration process is necessary (step S607).
  • After that, the shape calculation unit 114 calculates the integrated shape data 87 (three-dimensional shape) of the measurement object J by the second data integration process, without executing the first data integration process (steps S611 to S613).
  • The integrated shape data 87 of the measurement object J can thus be accurately calculated by the second data integration process, which is simpler than the first data integration process.
  • the "predetermined number of sheets" determined in step S604 may be configured to be set by the user.
  • the user performs an operation to set a predetermined number of sheets on the UI 200, and the UI 200 receives the predetermined number of sheets input by the user and sets it to be used in the three-dimensional measurement shown in FIG.
  • the user can appropriately adjust the period (steps S601 to S604) during which both the first data integration process (step S602) and the second data integration process (step S603) are executed.
  • the three-dimensional measuring device 1 corresponds to an example of the "three-dimensional measuring device” of the present invention
  • the control device 100 corresponds to an example of the "three-dimensional measurement computing device” of the present invention
  • the part model acquisition section 111 corresponds to an example of the "reference model acquisition section” of the present invention
  • the area calculation section 112 corresponds to an example of the "area calculation section” of the present invention
  • the shape calculation section 114 corresponds to an example of the "shape calculation section" of the present invention.
  • the conveyor 2 corresponds to an example of the "object support section” of the present invention
  • the imaging camera 31 corresponds to an example of the "imaging section” of the present invention
  • the projector 32 corresponds to an example of the "projector" of the present invention
  • the plurality of projectors 32 corresponds to an example of the "pattern irradiation unit” of the invention
  • the component model 83 corresponds to an example of the "reference model” of the invention
  • the projection direction information 84 corresponds to an example of "irradiation direction information" of the present invention
  • unidirectional shape data 86 corresponds to an example of "unidirectional shape data” of the present invention
  • server computer 91 corresponds to an example of "recording medium” of the present invention.
  • the three-dimensional measurement program 92 corresponds to an example of the "three-dimensional measurement program" of the present invention
  • the UI 200 corresponds to an example of the "setting operation section” of the present invention
  • the secondary reflection area Ar corresponds to an example of the "secondary reflection area" and the "poor irradiation area" of the present invention
  • the shadow area As corresponds to an example of the "shadow area" and the "poor irradiation area" of the present invention
  • the substrate B corresponds to an example of the "board" of the present invention.
  • the part E corresponds to an example of the “component” of the present invention
  • the pattern image I(S) corresponds to an example of the “pattern image” of the present invention
  • the measurement object J corresponds to an example of the "measurement object" of the present invention
  • the pattern light L(S) corresponds to an example of "light” of the present invention
  • the pixel PX corresponds to an example of a "pixel” of the present invention
  • the shape-related value Q corresponds to an example of the "shape-related value" of the present invention.
  • the measured value Qm corresponds to an example of the “calculated value” of the present invention
  • the reliability Qr corresponds to an example of the "reliability” of the present invention
  • the component mounting range Ref corresponds to an example of the "component mounting range" of the present invention
  • step S606 corresponds to an example of the “evaluation process” of the present invention
  • step S607 corresponds to an example of the "necessity determination” of the present invention
  • the first data integration process in FIG. 10 corresponds to an example of the "first integration calculation process” of the present invention
  • the second data integration process in FIG. 14 corresponds to an example of the "second integration calculation process” of the invention.
  • the present invention is not limited to the embodiments described above, and various changes can be made to what has been described above without departing from the spirit thereof.
  • it is not necessary to calculate both the secondary reflection area Ar and the shadow area As as the poor irradiation areas.
  • when the influence of one of the secondary reflection area Ar and the shadow area As is large and the influence of the other is slight, only the one with the larger influence may be calculated as the poor irradiation area in step S107 of FIG. 7.
  • the specific contents of the shape-related value Q may be changed as appropriate.
  • the measured value Qm may be directly calculated as the shape-related value Q without calculating the reliability Qr.
  • in step S408, the weighting coefficient W(P) multiplied for the poor irradiation areas need not be 0, and may be any value greater than or equal to 0 and less than 1.
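A weighted-average integration along these lines might look like the sketch below, assuming one weight map per projector that holds W(P) (0 for full exclusion, a fractional value for partial down-weighting) at pixels inside that projector's poor irradiation areas and 1 elsewhere; all names are illustrative.

    import numpy as np

    def integrate_weighted(q_per_projector, w_per_projector, eps=1e-12):
        # First data integration: per-pixel weighted average of the
        # shape-related values Q, down-weighting poor irradiation areas.
        q = np.stack(q_per_projector)   # (num_projectors, rows, cols)
        w = np.stack(w_per_projector)   # same shape, values in [0, 1]
        # eps guards against pixels where every projector has weight 0.
        return (w * q).sum(axis=0) / (w.sum(axis=0) + eps)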
  • the functional units 111, 112, 113, and 114 configured in the main control unit 110 by executing the three-dimensional measurement program 92 do not necessarily need to be configured in the control device 100 included in the three-dimensional measuring device 1; each of the functional units 111, 112, 113, and 114 may instead be configured in a processor of a computer provided separately from the three-dimensional measuring device 1.
  • in the second data integration process, a median value may be calculated as the representative value instead of the average value obtained by simple averaging.
  • 1... Three-dimensional measuring device; 100... Control device (computing device for three-dimensional measurement); 111... Component model acquisition unit (reference model acquisition unit); 112... Area calculation unit (area calculation unit); 114... Shape calculation unit; 2... Conveyor (object support unit); 31... Imaging camera (imaging unit); 32... Projector (pattern irradiation unit); 83... Component model (reference model); 84... Projection direction information (irradiation direction information); 86... Unidirectional shape data; 91... Server computer (recording medium); 92... Three-dimensional measurement program; 200... UI (setting operation unit); Ar... Secondary reflection area (poor irradiation area); As... Shadow area (poor irradiation area); B...

Abstract

In the present invention, a component model 83 (reference model) is acquired which indicates a component mounting range Ref on a board B in which a component E is mounted and the external shape of the component E mounted in the component mounting range Ref. In addition, a shadow area As and a secondary reflection area Ar (poor irradiation areas) are calculated on the basis of the component model 83 and projection direction information 84 (irradiation direction information), which indicates a projection direction D in which pattern light L(S) is projected onto the component E in a three-dimensional measurement device 1. In this way, in measuring the three-dimensional shape of an object J to be measured on the basis of a pattern image I(S) acquired by capturing an image of the pattern light L(S) irradiated onto the component E, it is possible to acquire the shadow area As and the secondary reflection area Ar in which defective irradiation of the pattern light L(S) occurs.

Description

3D measurement calculation device, 3D measurement program, recording medium, 3D measurement device, and 3D measurement calculation method
The present invention relates to a technique for measuring the three-dimensional shape of a measurement object based on a pattern image obtained by imaging a predetermined pattern of light irradiated onto the measurement object.
Patent Documents 1 and 2 describe three-dimensional measurement techniques that measure the three-dimensional shape of a measurement target using a so-called phase shift method. In such three-dimensional measurement technology, the three-dimensional shape of the measurement object is measured based on an image obtained by capturing a predetermined pattern of light irradiated onto the measurement object from a projector using a camera.
Additionally, Patent Document 2 proposes a technique for measuring a three-dimensional shape by suppressing the influence of secondary reflection that occurs when a tall object is present among the objects that constitute the measurement target. In other words, when secondary reflection occurs, such as light emitted from a projector and reflected on the side of a tall object being further reflected by another object, it becomes difficult to accurately measure a three-dimensional shape. Therefore, in Patent Document 2, the range of light irradiation from the projector is limited so that the light from the projector does not enter the tall object (causing object) that causes secondary reflection.
Patent Document 1: Japanese Patent Application Publication No. 2012-112952; Patent Document 2: Japanese Patent Application Publication No. 2016-130663
However, in order to limit the range of light irradiated from the projector as in Patent Document 2, it is necessary to control hardware such as the projector included in the three-dimensional measurement device, so the control tends to become complicated. Furthermore, in addition to the above-mentioned secondary reflection, so-called occlusion can be cited as a type of defective light irradiation that affects the measurement of a three-dimensional shape. Occlusion is a phenomenon in which a shadow that the light from the projector cannot reach is produced because the light is blocked by an object that makes up the measurement target. Patent Document 2 cannot deal with such occlusion.
On the other hand, if the poor irradiation area where defective light irradiation occurs on the measurement target can be known, that area can be referred to in the calculation that computes the three-dimensional shape from the pattern images acquired by the three-dimensional measurement device. As a result, the calculation of the three-dimensional shape can be performed while suppressing the effects of defective light irradiation.
The present invention has been made in view of the above problems, and an object thereof is to make it possible to acquire a poor irradiation area, where defective light irradiation occurs, when measuring the three-dimensional shape of a measurement object based on a pattern image acquired by imaging a predetermined pattern of light irradiated onto the measurement object.
A computing device for three-dimensional measurement according to the present invention executes calculations for measuring the three-dimensional shape of a measurement object based on pattern images acquired by a three-dimensional measurement device that includes: an object support section that supports a measurement object having a board and components mounted on the board; a pattern irradiation section that irradiates the measurement object supported by the object support section with light of a predetermined pattern; and an imaging section that acquires two-dimensional pattern images by imaging the light irradiated onto the measurement object from the pattern irradiation section. The computing device includes: a reference model acquisition section that acquires a reference model indicating a component mounting range of the board in which a component is mounted and the external shape of the component mounted in the component mounting range; and an area calculation section that calculates at least one poor irradiation area, out of a shadow area in which a shadow of the component is produced on the board because the light irradiated onto the component is blocked by the component and a secondary reflection area in which light reflected by the component is incident on the board, based on the reference model and irradiation direction information indicating the direction in which the component is irradiated with light in the three-dimensional measurement device.
A three-dimensional measurement program according to the present invention causes a computer to function as the above computing device for three-dimensional measurement.
A recording medium according to the present invention records the above three-dimensional measurement program so as to be readable by a computer.
A three-dimensional measurement device according to the present invention includes: an object support section that supports a measurement object having a board and components mounted on the board; a pattern irradiation section that irradiates the measurement object supported by the object support section with light of a predetermined pattern; an imaging section that acquires two-dimensional pattern images by imaging the light irradiated onto the measurement object from the pattern irradiation section; and the above computing device for three-dimensional measurement, which executes calculations for measuring the three-dimensional shape of the measurement object based on the pattern images.
A calculation method for three-dimensional measurement according to the present invention executes calculations for measuring the three-dimensional shape of a measurement object based on pattern images acquired by a three-dimensional measurement device that includes an object support section supporting a measurement object having a board and components mounted on the board, a pattern irradiation section irradiating the measurement object supported by the object support section with light of a predetermined pattern, and an imaging section acquiring two-dimensional pattern images by imaging the light irradiated onto the measurement object from the pattern irradiation section. The method includes: a step of acquiring a reference model indicating a component mounting range of the board in which a component is mounted and the external shape of the component mounted in the component mounting range; and a step of calculating at least one poor irradiation area, out of the shadow area in which a shadow of the component is produced on the board because the light irradiated onto the component is blocked by the component and the secondary reflection area in which light reflected by the component is incident on the board, based on irradiation direction information indicating the direction in which the component is irradiated with light in the three-dimensional measurement device and the reference model.
In the present invention configured as described above (the computing device for three-dimensional measurement, the three-dimensional measurement program, the recording medium, the three-dimensional measurement device, and the calculation method for three-dimensional measurement), a reference model indicating the component mounting range of the board in which a component is mounted and the external shape of the component mounted in the component mounting range is acquired. Then, the poor irradiation area is calculated based on the reference model and the irradiation direction information indicating the direction in which the component is irradiated with light in the three-dimensional measurement device. In this way, when measuring the three-dimensional shape of the measurement object based on the pattern images acquired by imaging the predetermined pattern of light irradiated onto the measurement object, it becomes possible to acquire the poor irradiation area where defective light irradiation occurs.
Here, the poor irradiation area is at least one of the shadow area, in which a shadow of the component is produced on the board because the light irradiated onto the component is blocked by the component, and the secondary reflection area, in which light reflected by the component is incident on the board.
The reference model acquisition section may also be configured to create the reference model from at least one of inspection data, which indicates a standard for inspecting the suitability of the positional relationship between the component mounted in the component mounting range and the component mounting range, and CAD (Computer-Aided Design) data, which indicates the configuration of the board. With such a configuration, the reference model can be created using existing data such as board inspection data or CAD data, and the poor irradiation area can be calculated based on this reference model. Note that the information indicated by the inspection data is not limited to the above standard, and may also include standards for inspecting the suitability of the component itself and standards for inspecting the suitability of the bonding between the component and the board.
The area calculation section may also be configured to calculate the poor irradiation area further based on the result of recognizing the position of the board carried into the three-dimensional measurement device and supported by the object support section. With such a configuration, the poor irradiation area can be accurately calculated according to the actual position of the board carried into the three-dimensional measurement device.
The computing device may also be configured such that the position of the board supported by the object support section is recognized based on the result of imaging, with the imaging section, a fiducial mark attached to the board. With such a configuration, the poor irradiation area can be accurately calculated according to the position of the board carried into the three-dimensional measurement device.
The area calculation section may also be configured to calculate the poor irradiation area further based on the result of detecting the position of the component mounted in the component mounting range. With such a configuration, the poor irradiation area can be accurately calculated according to the actual position of the component mounted in the component mounting range.
The computing device may also be configured such that the position of the component mounted in the component mounting range is detected based on the result of imaging the component. With such a configuration, the poor irradiation area can be accurately calculated according to the actual position of the component mounted in the component mounting range.
The pattern irradiation section may also have a plurality of projectors that irradiate the component with light from mutually different directions; the irradiation direction information may indicate, for each of the plurality of projectors, the direction in which the component is irradiated with light from that projector; and the area calculation section may calculate the poor irradiation area for each of the plurality of projectors. With such a configuration, the poor irradiation areas that occur when the component is irradiated with light from each of the plurality of projectors can be acquired.
The computing device may further include a shape calculation section that calculates a plurality of unidirectional shape data by executing, for each of the plurality of projectors, a calculation that computes unidirectional shape data indicating, for each pixel, a shape-related value (a value related to the three-dimensional shape) based on the pattern images acquired by imaging, with the imaging section, the light irradiated onto the board from that projector. The shape calculation section executes a first integration calculation process that integrates the plurality of unidirectional shape data by calculating, for each pixel, an average of the shape-related values indicated by the plurality of unidirectional shape data, thereby calculating the three-dimensional shape of the measurement object. In the first integration calculation process, the three-dimensional shape of the measurement object is calculated by a weighted average in which, among the shape-related values indicated by the unidirectional shape data, the shape-related values of pixels included in the poor irradiation area are multiplied by a weighting coefficient of 0 or more and less than 1.
In such a configuration, unidirectional shape data indicating a shape-related value for each pixel is calculated based on the pattern images acquired by imaging, with the imaging section, the light irradiated onto the board from one projector. In other words, the unidirectional shape data is data on the three-dimensional shape calculated from pattern images captured while irradiating the component with light from a single projector. This calculation is executed for each of the plurality of projectors, yielding unidirectional shape data for the case where the component is irradiated with light from each of the plurality of projectors. The plurality of unidirectional shape data acquired in this way are based on pattern images acquired while irradiating the component with light from mutually different directions. Therefore, a pixel that falls in the poor irradiation area in one unidirectional shape data may fall in a well-irradiated area in another unidirectional shape data. Accordingly, the plurality of unidirectional shape data can be integrated by calculating, for each pixel, a weighted average of the shape-related values they indicate, and the three-dimensional shape of the measurement object can be calculated. Moreover, in the first integration calculation process, the shape-related values of pixels included in the poor irradiation area are multiplied by a weighting coefficient of 0 or more and less than 1 in the weighted average. This makes it possible to appropriately calculate the three-dimensional shape of the measurement object while suppressing the influence of the poor irradiation area.
In the first integration calculation process, the three-dimensional shape of the measurement object may be calculated by a weighted average in which the shape-related values of pixels included in the poor irradiation area are multiplied by a weighting coefficient of 0. Multiplying by a weighting coefficient of 0 amounts to excluding the poor irradiation area. This makes it possible to appropriately calculate the three-dimensional shape of the measurement object while eliminating the influence of the poor irradiation area.
Note that various specific forms of the shape-related value are conceivable. For example, the shape-related value may be given by the product of a calculated value of the three-dimensional shape computed from the pattern images and the reliability of that calculated value, or the shape-related value may simply be the calculated value of the three-dimensional shape computed from the pattern images.
The shape calculation section may also execute a second integration calculation process that integrates the plurality of unidirectional shape data by calculating, for each pixel, a representative value of the shape-related values indicated by the plurality of unidirectional shape data without weighting according to the poor irradiation area, thereby calculating the three-dimensional shape of the measurement object. By executing both the first integration calculation process and the second integration calculation process on the same measurement object and performing an evaluation process, which evaluates the difference between the three-dimensional shapes of the measurement object calculated by the two processes, on a predetermined number of measurement objects for which pattern images are acquired by the three-dimensional measurement device, a necessity determination is executed to determine whether the first integration calculation process is necessary. After the first integration calculation process is determined to be unnecessary in the necessity determination, the shape calculation section calculates the three-dimensional shape of the measurement object by the second integration calculation process without executing the first integration calculation process. In such a configuration, when the poor irradiation area does not pose a problem, the three-dimensional shape of the measurement object can be accurately calculated by the second integration calculation process, which is simpler than the first integration calculation process. Here, the representative value of the shape-related values indicated by the plurality of unidirectional shape data without weighting according to the poor irradiation area includes, for example, an average value obtained by simple averaging or a median value.
The computing device may further include a setting operation section that accepts an operation for setting the predetermined number. With such a configuration, the user can appropriately adjust the period during which both the first integration calculation process and the second integration calculation process are executed.
According to the present invention, when measuring the three-dimensional shape of a measurement object based on pattern images acquired by imaging a predetermined pattern of light irradiated onto the measurement object, it becomes possible to acquire the poor irradiation area where defective light irradiation occurs.
FIG. 1 is a block diagram schematically illustrating a three-dimensional measurement device according to the present invention.
FIG. 2 is a plan view schematically showing the relationship between a projector included in the three-dimensional measurement device of FIG. 1 and an imaging field of view.
FIG. 3 is a block diagram showing details of a control device included in the three-dimensional measurement device of FIG. 1.
FIG. 4 is a diagram schematically showing the contents of a component model created by a component model acquisition unit.
FIG. 5A is a diagram showing, in table format, an example of board configuration data used for creating a component model.
FIG. 5B is a diagram showing, in table format, an example of component data used for creating a component model.
FIG. 5C is a diagram schematically showing an example of a created component model.
FIG. 6 is a diagram schematically showing the contents of poor irradiation areas calculated by an area calculation unit.
FIG. 7 is a flowchart showing an example of a calculation for calculating poor irradiation areas.
FIG. 8 is a flowchart showing a first example of three-dimensional measurement.
FIG. 9 is a flowchart showing omnidirectional shape data acquisition executed in the three-dimensional measurement of FIG. 8.
FIG. 10 is a flowchart showing the first data integration process executed in the three-dimensional measurement of FIG. 8.
FIG. 11 is a diagram schematically showing the calculations executed in the first data integration process of FIG. 10.
FIG. 12 is a flowchart showing an example of a calculation for correcting the poor irradiation areas calculated in FIG. 7.
FIG. 13 is a flowchart showing a second example of three-dimensional measurement.
FIG. 14 is a flowchart showing the second data integration process executed in the three-dimensional measurement of FIG. 13.
FIG. 15 is a diagram schematically showing the contents of the calculations executed in the three-dimensional measurement of FIG. 13.
FIG. 1 is a block diagram schematically illustrating a three-dimensional measurement device according to the present invention. In this figure and the following figures, the X direction (a horizontal direction), the Y direction (a horizontal direction orthogonal to the X direction), and the Z direction (the vertical direction) are shown as appropriate. The three-dimensional measuring device 1 of FIG. 1 measures the three-dimensional shape (external shape) of the measurement object J by controlling the conveyor 2, the measurement head 3, and the drive mechanism 4 with the control device 100. The measurement object J is composed of a board B (printed circuit board) and components E mounted on the board B. The measurement object J is produced by a surface mounter that mounts the components E on the board B, and is carried into the three-dimensional measuring device 1.
The conveyor 2 conveys the measurement object J along a predetermined conveyance path. Specifically, the conveyor 2 carries the measurement object J before measurement to the measurement position in the three-dimensional measuring device 1, and holds the measurement object J at the measurement position so that the board B is horizontal. When the measurement of the three-dimensional shape of the measurement object J at the measurement position is completed, the conveyor 2 carries the measured object J out of the three-dimensional measuring device 1.
The measurement head 3 has an imaging camera 31 that images its imaging field of view V31 from above; the measurement object J carried into the measurement position is captured within the imaging field of view V31 and imaged by the imaging camera 31. The imaging camera 31 includes a solid-state image sensor 311 that detects reflected light from the measurement object J, and captures an image of the measurement object J with the solid-state image sensor 311.
Further, the measurement head 3 includes a projector 32 that projects striped pattern light L(S), whose light intensity distribution changes sinusoidally, onto the imaging field of view V31. The projector 32 includes a light source such as an LED (Light Emitting Diode) and a digital micromirror device that reflects light from the light source toward the imaging field of view V31. By adjusting the angle of each micromirror of the digital micromirror device, the projector 32 can project plural kinds of pattern light L(S) having mutually different phases onto the imaging field of view V31. In other words, the measurement head 3 performs imaging with the imaging camera 31 while changing the phase of the pattern light L(S) projected from the projector 32, so that the three-dimensional shape of the measurement object J within the imaging field of view V31 can be measured by the phase shift method.
FIG. 2 is a plan view schematically showing the relationship between a projector included in the three-dimensional measurement device of FIG. 1 and the imaging field of view. As shown in FIG. 2, the measurement head 3 has a plurality of (four, in this example) projectors 32 (in FIG. 1, two projectors 32 are shown as representatives for simplicity of illustration). Each projector 32 projects pattern light L(S) onto the imaging field of view V31 of the imaging camera 31 from diagonally above. In plan view, the plurality of projectors 32 are arranged so as to surround the imaging camera 31, lined up circumferentially at an equal pitch about the vertical direction Z. The plurality of projectors 32 therefore project pattern light L(S) onto the imaging field of view V31 from mutually different projection directions D. Note that the number of projectors 32 included in the measurement head 3 is not limited to four as in the example of FIG. 2.
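Numerically, the equal-pitch arrangement can be expressed as below; a small sketch that characterizes each projection direction D only by its azimuth about the vertical direction Z (the elevation is fixed by the mounting geometry and omitted here).

    import math

    def projector_azimuths(num_projectors=4):
        # Azimuths of the projection directions D, spaced at an equal pitch
        # around the vertical direction Z (in radians).
        return [2.0 * math.pi * p / num_projectors for p in range(num_projectors)]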
The measurement head 3 also has an illumination 33 (FIG. 1) that irradiates the imaging field of view V31 with light. Whereas the projector 32 projects the pattern light L(S) onto the imaging field of view V31 when a three-dimensional shape is measured, the illumination 33 irradiates the imaging field of view V31 with illumination light when a two-dimensional image is captured by the imaging camera 31.
The drive mechanism 4 supports the measurement head 3 and drives it in the X, Y, and Z directions with motors. By driving the drive mechanism 4, the measurement head 3 moves above a measurement target location of the measurement object J and captures that location within the imaging field of view V31, so that the three-dimensional shape of the measurement target location within the imaging field of view V31 can be measured. In particular, the three-dimensional measuring device 1 can measure the three-dimensional shape of a component E and its surroundings (the measurement target location) while keeping the component E within the imaging field of view V31, which serves inspections such as checking for lifting of the component E from the board B, for example lifting of the terminals of a package such as a QFP (Quad Flat Package) from the board B.
The control device 100 has a main control section 110, which is a processor composed of a CPU (Central Processing Unit) and memory; the main control section 110 supervises control of each part of the device, whereby the three-dimensional shape is measured. The control device 100 also has a UI (User Interface) 200 composed of input/output devices such as a display, a keyboard, and a mouse; through the UI 200, the user can input commands to the control device 100 and check the measurement results produced by the control device 100. The control device 100 further includes a projection control section 120, an imaging control section 130, and a drive control section 140. The projection control section 120 controls the projection of the pattern light L(S) by the projectors 32. The imaging control section 130 controls the imaging of the imaging field of view V31 by the imaging camera 31 and the irradiation of light from the illumination 33 onto the imaging field of view V31. The drive control section 140 controls the driving of the measurement head 3 by the drive mechanism 4.
When the conveyor 2 carries the measurement object J to the measurement position, the main control section 110 controls the drive mechanism 4 through the drive control section 140 to move the measurement head 3 above the measurement target location of the measurement object J, so that the measurement target location falls within the imaging field of view V31 of the imaging camera 31. The main control section 110 then projects pattern light L(S) from a projector 32 onto the imaging field of view V31 while imaging the projected pattern light L(S) with the imaging camera 31 (pattern imaging operation). Specifically, the control device 100 has a storage section 150, and reads out a projection pattern T(S) stored in the storage section 150. The main control section 110 controls the projection control section 120 based on the projection pattern T(S) read from the storage section 150, thereby adjusting the angle of each micromirror of the digital micromirror device of the projector 32 according to the projection pattern T(S). In this way, pattern light L(S) having the projection pattern T(S) is projected onto the imaging field of view V31. Further, the main control section 110 controls the imaging control section 130 so that the pattern light L(S) projected onto the imaging field of view V31 is imaged by the imaging camera 31 to acquire a pattern image I(S), which is stored in the storage section 150. The storage section 150 stores four kinds of projection patterns T(S) whose phases differ from each other by 90 degrees, and the pattern imaging operation is executed four times while changing the projection pattern T(S) (S = 1, 2, 3, 4). As a result, four pattern images I(S), capturing pattern light L(S) whose phases differ by 90 degrees, are acquired. From the four pattern images I(S) acquired in this way, the main control section 110 determines the height of the imaging field of view V31 for each pixel of the imaging camera 31 by the phase shift method. Note that the variations of the projection pattern T(S) are not limited to this example; for instance, eight kinds of projection patterns T(S) whose phases differ by 45 degrees may be used, or three kinds of projection patterns T(S) whose phases differ by 120 degrees may be used.
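The per-pixel height recovery can be illustrated with the classical four-step phase shift formula; this is a generic sketch, not the device's actual implementation, and the conversion from phase to height, which depends on the triangulation geometry, is abstracted into a single scale factor.

    import numpy as np

    def phase_from_four_patterns(i1, i2, i3, i4):
        # Standard four-step phase shift: recover the per-pixel phase from four
        # pattern images I(S) whose projection patterns differ in phase by 90 deg.
        return np.arctan2(np.asarray(i4, float) - np.asarray(i2, float),
                          np.asarray(i1, float) - np.asarray(i3, float))

    def height_from_phase(phase, scale):
        # The phase-to-height conversion depends on the optical geometry
        # (projection angle, stripe period); it is reduced to 'scale' here.
        return scale * phase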
FIG. 3 is a block diagram showing details of the control device included in the three-dimensional measurement device of FIG. 1. The control device 100 stores, in the storage section 150, a three-dimensional measurement program 92 downloaded from an external server computer 91. Note that the manner of providing the three-dimensional measurement program 92 is not limited to external downloading; the three-dimensional measurement program 92 may be provided in a state recorded on a DVD (Digital Versatile Disc) or a USB (Universal Serial Bus) memory. When the main control section 110 executes the three-dimensional measurement program 92, a component model acquisition unit 111, an area calculation unit 112, a pattern image acquisition unit 113, and a shape calculation unit 114 are configured in the main control section 110. These functional units 111, 112, 113, and 114 serve to deal with the poor irradiation areas that arise from the components E on the board B when the pattern light L(S) is projected onto the measurement object J. Their details are as follows.
FIG. 4 is a diagram schematically showing the contents of the component model created by the component model acquisition unit; FIG. 5A shows, in table format, an example of the board configuration data used for creating the component model; FIG. 5B shows, in table format, an example of the component data used for creating the component model; and FIG. 5C schematically shows an example of the created component model. The component model acquisition unit 111 uses board configuration data 81, which indicates the configuration of the board B, and component data 82, which indicates the external shapes of the components E to be mounted on the board B, to create a component model 83 indicating the three-dimensional range occupied by a component E mounted on the board B. The board configuration data 81 and the component data 82 are stored in the storage section 150 in advance.
The board configuration data 81 (FIG. 5A) indicates, for each component mounting range Ref(1), Ref(2), Ref(3), ... (R = 1, 2, 3, ...), the component mounting range Ref(R) provided on the board B and the type of component E to be mounted in that component mounting range Ref(R). In the board configuration data 81, a component mounting range Ref(R) is specified by its position in the X direction (X coordinate) and its position in the Y direction (Y coordinate). The component mounting range Ref(R) is a range with extent, defined for example by lands provided on the board B, and its position (x, y) corresponds to the center of the component mounting range Ref(R) in plan view. The component mounting range Ref(R) indicated by the board configuration data 81 is the ideal range in which the component E should be mounted, not the range in which the component E was actually mounted by the surface mounter. As the board configuration data 81, CAD data indicating the configuration of the board B, inspection data indicating the standard for inspecting, in the surface mounter, the suitability of the positional relationship between a component E mounted in the component mounting range Ref(R) and the component mounting range Ref(R), or the like can be used.
 As shown in FIG. 5B, the component data 82 indicates the configuration of the component E for each type of component. Specifically, the component data 82 indicates, for each of the various components Ea, Eb, Ec, ..., the external shape of the rectangular component E, such as its length El, width Ew and height Eh, and whether or not the component has the property of reflecting light (the pattern light L(S)).
 The component model acquisition unit 111 checks the type of component E to be mounted in the component mounting range Ref(R) of the board B against the board configuration data 81, checks the configuration of that type of component E against the component data 82, and creates the component model 83 based on the results. The component model 83 indicates the position (x, y) of the component mounting range Ref(R) and the external shape of the component E mounted in that range (the component's X size E_x, Y size E_y and Z size E_z). In other words, the component model 83 indicates the range over which the component E mounted in the component mounting range Ref(R) protrudes from the board B in the Z direction (height direction). Furthermore, the component model 83 indicates whether or not the component E mounted in the component mounting range Ref(R) has the property of reflecting light (the pattern light L(S)). A component model 83 is created for each of the plurality of component mounting ranges Ref(R) provided on the board B.
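 For illustration only, the assembly of the component model 83 from these two data sets can be sketched as follows (Python; every name, the dict-based data layout, and the direct mapping of length/width onto the X/Y sizes — which in practice depends on mounting orientation — are assumptions introduced for explanation, not part of the embodiment):

```python
from dataclasses import dataclass

@dataclass
class ComponentModel:      # plays the role of the component model 83
    x: float               # center of mounting range Ref(R), X coordinate
    y: float               # center of mounting range Ref(R), Y coordinate
    size_x: float          # component X size E_x
    size_y: float          # component Y size E_y
    size_z: float          # component Z size E_z (protrusion from board B)
    reflective: bool       # whether the component reflects pattern light L(S)

def build_component_model(board_entry: dict, component_data: dict) -> ComponentModel:
    # board_entry: one row of data 81, e.g. {"x": 12.0, "y": 8.5, "type": "Ea"}
    # component_data: data 82 keyed by component type, e.g.
    # {"Ea": {"length": 3.2, "width": 1.6, "height": 1.1, "reflective": True}}
    spec = component_data[board_entry["type"]]
    return ComponentModel(
        x=board_entry["x"], y=board_entry["y"],
        size_x=spec["length"], size_y=spec["width"], size_z=spec["height"],
        reflective=spec["reflective"],
    )
```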
 Incidentally, the specific data used to create the component model 83 and the specific method of creating the component model 83 from those data are not limited to this example. For example, when the component model 83 is input by a user operation on the UI 200, the component model acquisition unit 111 does not need to create the component model 83 from the board configuration data 81 and the component data 82; it suffices to acquire the component model 83 input by the user.
 FIG. 6 is a diagram schematically showing the contents of the poor irradiation areas calculated by the area calculation unit. The component E shown in FIG. 6 has the property of reflecting the pattern light L(S). In FIG. 6, the component E mounted in the component mounting range Ref(R) has a width E_x in the X direction, a length E_y in the Y direction and a height E_z in the Z direction, and the pattern light L(S) emitted from the projector 32 is projected onto the component E from the projection direction D.
 As a result, in plan view, a shadow area As (poor irradiation area) occurs downstream of the component E in the projection direction D. The shadow area As is the area on the board B where the component E casts a shadow because the pattern light L(S) projected from the projector 32 toward the component E is blocked by the component E. Likewise, in plan view, a secondary reflection area Ar (poor irradiation area) occurs upstream of the component E in the projection direction D. The secondary reflection area Ar is the area where light projected onto the component E from the projector 32 and reflected by a side face of the component E enters the board B. In this example, the length of the shadow area As and of the secondary reflection area Ar in the Y direction corresponds to the length E_y of the component E in the Y direction, while their width in the X direction corresponds to the height E_z of the component E multiplied by the tangent (tan θs) of the projection elevation angle θs. For a component E that does not reflect the pattern light L(S), no secondary reflection area Ar occurs.
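 As a concrete rendering of this geometry, a hypothetical helper computing the two rectangles might look as follows; the alignment of the projection direction D with the +X axis, the axis-aligned rectangle representation and all names are simplifying assumptions:

```python
import math

def poor_irradiation_areas(x, y, e_x, e_y, e_z, theta_s_deg, reflective):
    """Return (x_min, x_max, y_min, y_max) rectangles for the shadow area As
    and the secondary reflection area Ar of one component, using the relation
    stated above: X width = E_z * tan(theta_s), Y length = E_y. The sketch
    assumes the projection direction D runs along +X, so As lies downstream
    (+X) and Ar upstream (-X) of the component."""
    w = e_z * math.tan(math.radians(theta_s_deg))
    half_x, half_y = e_x / 2.0, e_y / 2.0
    shadow = (x + half_x, x + half_x + w, y - half_y, y + half_y)
    # No secondary reflection area arises for a non-reflective component.
    reflection = (x - half_x - w, x - half_x, y - half_y, y + half_y) if reflective else None
    return shadow, reflection
```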
 Such poor irradiation areas (shadow area As and secondary reflection area Ar) are calculated for each of the plurality of projectors 32. This point is explained next with reference to FIG. 7, which is a flowchart showing an example of the computation for calculating the poor irradiation areas. The flowchart of FIG. 7 is executed by the computation of the main control unit 110.
 In step S101, the count value P (= 1, 2, 3, 4) identifying the projector 32 is reset to zero, and in step S102 the count value P is incremented. Further, in step S103, the count value R (= 1, 2, 3, ...) identifying the component mounting range Ref is reset to zero, and in step S104 the count value R is incremented.
 In step S105, the component model acquisition unit 111 creates the component model 83 of the component E mounted in the R-th component mounting range Ref(R). In step S106, the area calculation unit 112 checks the projection direction D in which the P-th projector 32 projects the pattern light L(S) onto the R-th component mounting range Ref. Specifically, projection direction information 84 indicating the projection direction D of the pattern light L(S) for each of the plurality of projectors 32 is stored in the storage unit 150 in advance, and the area calculation unit 112 checks the projection direction D by referring to the projection direction information 84.
 In step S107, the area calculation unit 112 calculates the shadow area As (poor irradiation area) that arises when the P-th projector 32 projects the pattern light L(S) from the projection direction D onto the component E indicated by the component model 83 created for the R-th component mounting range Ref. Furthermore, when the component E indicated by that component model 83 reflects the pattern light L(S), the area calculation unit 112 also calculates the secondary reflection area Ar (poor irradiation area) that arises when the P-th projector 32 projects the pattern light L(S) onto that component E from the projection direction D.
 In step S108, it is checked whether the count value R has reached the number Rmax of component mounting ranges Ref provided on the board B, that is, whether steps S105 to S107 have been executed for all of the component mounting ranges Ref provided on the board B. If a component mounting range Ref for which steps S105 to S107 have not been executed remains ("NO" in step S108), the process returns to step S104. Steps S104 to S107 are thus repeated until steps S105 to S107 have been executed for all component mounting ranges Ref ("YES" in step S108). In this way, the poor irradiation areas (shadow area As and secondary reflection area Ar) that occur when the pattern light L(S) is projected from the P-th projector 32 are calculated for the components E of all component mounting ranges Ref.
 In step S109, it is checked whether the count value P has reached the number Pmax (= 4) of projectors 32, that is, whether steps S103 to S108 have been executed for all projectors 32. If a projector 32 for which steps S103 to S108 have not been executed remains ("NO" in step S109), the process returns to step S102. Steps S102 to S108 are thus repeated until steps S103 to S108 have been executed for all projectors 32 ("YES" in step S109). In this way, poor irradiation area information 85, which indicates for every projector 32 the poor irradiation areas (shadow area As and secondary reflection area Ar) arising from the component E of each component mounting range Ref when the pattern light L(S) is projected from that projector 32, is calculated and stored in the storage unit 150.
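 The double loop of FIG. 7 can be condensed into the following sketch; the helper functions merely stand in for the units named above and are not taken from the embodiment:

```python
def compute_poor_irradiation_info(projectors, mounting_ranges, build_model, areas_for):
    """Sketch of the FIG. 7 double loop (steps S101-S109). 'build_model' and
    'areas_for' stand in for the processing of the component model acquisition
    unit 111 (S105) and the area calculation unit 112 (S107); each projector
    carries its projection direction D from the projection direction
    information 84 (S106). The nested dict returned plays the role of the
    poor irradiation area information 85."""
    info = {}
    for p, projector in enumerate(projectors, start=1):        # S102 / S109
        info[p] = {}
        for r, ref in enumerate(mounting_ranges, start=1):     # S104 / S108
            model = build_model(ref)                           # S105
            direction = projector["direction"]                 # S106
            info[p][r] = areas_for(model, direction)           # S107
    return info
```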
 Three-dimensional measurement of the measurement object J is then executed using the poor irradiation area information 85 acquired in this way (FIG. 8). FIG. 8 is a flowchart showing a first example of the three-dimensional measurement, FIG. 9 is a flowchart showing the omnidirectional shape data acquisition executed in the three-dimensional measurement of FIG. 8, FIG. 10 is a flowchart showing the first data integration process executed in the three-dimensional measurement of FIG. 8, and FIG. 11 is a diagram schematically showing the computation executed in the first data integration process of FIG. 10. The flowcharts of FIGS. 8, 9 and 10 are executed under the control of the main control unit 110.
 In the three-dimensional measurement of FIG. 8, omnidirectional shape data acquisition is executed in step S201. As shown in FIG. 9, in the omnidirectional shape data acquisition, recognition of the fiducial marks attached to the board B carried into the three-dimensional measuring device 1 is executed first (step S301). Specifically, with the imaging camera 31 facing a fiducial mark within the imaging field of view V31 from above, the imaging camera 31 images the fiducial mark while the illumination 33 irradiates it with illumination light, thereby acquiring a mark image. The mark image is sent from the imaging control unit 130 to the main control unit 110, and the main control unit 110 detects the position of the board B held on the transport conveyor 2 from the position of the fiducial mark appearing in the mark image.
 In step S302, the count value P of the projector 32 is reset to zero, and in step S303 the count value P is incremented. Further, in step S304, the count value R of the component mounting range Ref is reset to zero, and in step S305 the count value R is incremented.
 Before starting step S306, the main control unit 110 adjusts the position of the board B by means of the drive mechanism 4 so that the R-th component mounting range Ref falls within the imaging field of view V31. In particular, the main control unit 110 adjusts the position of the board B relative to the imaging field of view V31 based on the position of the board B detected in step S301. Then, in step S306, the pattern image acquisition unit 113 controls the projection control unit 120 and the imaging control unit 130 so that the pattern light L(S) is projected from the P-th projector 32 onto the component E of the R-th component mounting range Ref while the imaging camera 31 images the pattern light L(S). As described above, four pattern images I(S) corresponding to mutually different phases are thereby acquired (S = 1, 2, 3, 4) and stored in the storage unit 150.
 In step S307, the shape calculation unit 114 calculates unidirectional shape data 86 from the four pattern images I(S) by the phase shift method. The unidirectional shape data 86 indicates, for each pixel PX (FIG. 11), a shape-related value Q concerning the three-dimensional shape of the component E mounted in the component mounting range Ref. Here, a pixel PX corresponds, for example, to a pixel of the solid-state image sensor 311; since the imaging camera 31 images the board B while facing it from the Z direction, the plurality of pixels of the solid-state image sensor 311 correspond to mutually different positions (X coordinate, Y coordinate) on the board B. The shape-related value Q is composed of a measured value Qm, calculated as a height (Z coordinate) from the four pattern images I(S) by the phase shift method, and a reliability Qr of that measured value Qm. In particular, in this example, the shape-related value Q is given by the product of the measured value Qm and the reliability Qr.
 Incidentally, in the phase shift method, the reliability Qr at each pixel PX can be calculated from the differences between the luminance values indicated by the pixels PX of the four pattern images I(S) (S = 1, 2, 3, 4). Specifically, when the four pattern images I(S) are captured while shifting the phase by π/2 and the respective luminance values are d0, d1, d2 and d3, the phase shift angle α is calculated by

 α = atan[(d2 − d0)/(d3 − d1)]

and the reliability Qr is calculated by

 Qr = [(d2 − d0)² + (d3 − d1)²]^(1/2).
 Note that the pixel PX used in determining the shape-related value Q need not correspond to a pixel of the solid-state image sensor 311. For example, when image processing is performed to convert the resolution of the image captured by the solid-state image sensor 311, the shape-related value Q may be determined for pixels PX corresponding to the converted resolution.
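 The per-pixel computation of step S307 can be illustrated by the following sketch, under the assumptions noted in its comments (in particular, the mapping from phase to height is a stand-in for the device calibration, which the description does not spell out):

```python
import numpy as np

def unidirectional_shape_data(i0, i1, i2, i3, height_per_radian=1.0):
    """Per-pixel phase shift computation of step S307 for one projector.
    i0..i3 are the four pattern images I(S) (phase steps of pi/2) as float
    arrays. arctan2 is used so the quotient formula above stays valid in all
    quadrants; the linear mapping from the phase alpha to the height Qm via
    'height_per_radian' is an assumed placeholder for the calibration."""
    num = i2 - i0
    den = i3 - i1
    alpha = np.arctan2(num, den)        # phase shift angle alpha
    qr = np.hypot(num, den)             # reliability Qr
    qm = alpha * height_per_radian      # measured height Qm (assumed calibration)
    q = qm * qr                         # shape-related value Q = Qm x Qr
    return q, qm, qr
```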
 In step S308, it is checked whether the count value R has reached the number Rmax of component mounting ranges Ref provided on the board B, that is, whether the acquisition of the pattern images I(S) (step S306) and the calculation of the unidirectional shape data 86 (step S307) have been executed for all of the component mounting ranges Ref provided on the board B. If a component mounting range Ref for which steps S306 and S307 have not been executed remains ("NO" in step S308), the process returns to step S305. Steps S306 and S307 are thus repeated until they have been executed for all component mounting ranges Ref ("YES" in step S308). In this way, the unidirectional shape data 86 for the case where the pattern light L(S) is projected from the P-th projector 32 are calculated for the components E of all component mounting ranges Ref.
 In step S309, it is checked whether the count value P has reached the number Pmax (= 4) of projectors 32, that is, whether steps S304 to S308 have been executed for all projectors 32. If a projector 32 for which steps S304 to S308 have not been executed remains ("NO" in step S309), the process returns to step S303. Steps S304 to S308 are thus repeated until they have been executed for all projectors 32 ("YES" in step S309). In this way, the unidirectional shape data 86 calculated for the component E of each component mounting range Ref when the pattern light L(S) is projected from a projector 32 are acquired for all of the projectors 32 and stored in the storage unit 150.
 When the omnidirectional shape data acquisition of FIG. 9 (step S201 of FIG. 8) is thus completed, the first data integration process (step S202) is executed. In the first data integration process shown in FIG. 10, the count value R (= 1, 2, 3, ...) identifying the component mounting range Ref is reset to zero (step S401) and then incremented (step S402). This designates the four unidirectional shape data 86 acquired by projecting the projection pattern T(S) from each of the four projectors 32 onto the component E mounted in the R-th component mounting range Ref. In other words, for the component E of the R-th component mounting range Ref, the four unidirectional shape data 86 corresponding to the four projectors 32 are designated.
 Subsequently, the count value N (N = 1, 2, 3, ...) identifying the pixel PX is reset to zero (step S403) and then incremented (step S404). Further, in step S405, the count value P (= 1, 2, 3, 4) identifying the projector 32 is reset to zero, and in step S406 the count value P is incremented. This designates the N-th pixel PX of the unidirectional shape data 86 corresponding to the P-th projector 32.
 The shape calculation unit 114 then judges, based on the poor irradiation area information 85 in the storage unit 150, whether the N-th pixel PX designated in this way belongs to a shadow area As or a secondary reflection area Ar (step S407), and determines the weighting coefficient W(P) by which the shape-related value Q (= measured value Qm × reliability Qr) of the N-th pixel PX is multiplied (steps S408, S409). That is, when the poor irradiation area information 85 indicates that the N-th pixel PX belongs to a shadow area As or a secondary reflection area Ar ("YES" in step S407), the weighting coefficient W(P) for the shape-related value Q of the N-th pixel PX is set to 0, and that shape-related value Q is excluded (step S408). Conversely, when the poor irradiation area information 85 indicates that the N-th pixel PX belongs to neither a shadow area As nor a secondary reflection area Ar ("NO" in step S407), the weighting coefficient W(P) for the shape-related value Q of the N-th pixel PX is set to 1 (step S409).
 The computation of steps S407 to S409 on the shape-related value Q of the N-th pixel PX is repeated while incrementing the count value P of the projector 32 (step S406) until the count value P reaches Pmax (= 4) ("YES" in step S410). This determines the weighting coefficients W(P) for the shape-related values Q of the N-th pixel PX corresponding to each of P = 1, 2, 3, 4. In the example of FIG. 11, the N-th pixel PX corresponding to the projector 32 with P = 2 is judged to belong to a shadow area As or a secondary reflection area Ar, so the weighting coefficient W(2) for the shape-related value Q of that pixel PX is set to 0, whereas the N-th pixels PX corresponding to the projectors 32 with P = 1, 3, 4 are judged to belong to neither a shadow area As nor a secondary reflection area Ar, so the weighting coefficients W(1), W(3) and W(4) for the shape-related values Q of those pixels PX are set to 1.
 In step S411, the shape calculation unit 114 performs a weighted-average computation on the shape-related values Q of the N-th pixel PX corresponding to the projectors 32 with P = 1, 2, 3, 4, using the weighting coefficients W(P) determined in steps S407 to S409, thereby obtaining an integrated value H that integrates these shape-related values Q. The weighted-average formula for obtaining the integrated value H is as shown in the "conversion formula" column of FIG. 11. As a result, in the example of FIG. 11, the integrated value H of the N-th pixel PX is 159.
 The computation of steps S405 to S411 is repeated while incrementing the count value N (step S404) until the count value N reaches the number Nmax of pixels PX constituting the unidirectional shape data 86 ("YES" in step S412). In this way, integrated shape data 87, which indicates for each pixel PX the integrated value H obtained by integrating the four shape-related values Q corresponding to the mutually different projectors 32, is calculated and stored in the storage unit 150. The integrated shape data 87 corresponds to the integration of the four unidirectional shape data 86 corresponding to the four projectors 32 acquired for the component E mounted in the R-th component mounting range Ref, and indicates the three-dimensional shape of that component E.
 The computation of steps S403 to S412 is repeated while incrementing the count value R (step S402) until the count value R reaches the number Rmax of component mounting ranges Ref ("YES" in step S413). In this way, the integrated shape data 87 is calculated for the components E of all component mounting ranges Ref.
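 In array form, the weighted integration of FIG. 10 might be sketched as follows, under the assumptions noted in the comments:

```python
import numpy as np

def first_data_integration(shape_data, bad_masks):
    """Array-form sketch of the first data integration process (FIG. 10).
    shape_data: (P, H, W) shape-related values Q, one layer per projector.
    bad_masks:  (P, H, W) booleans, True where a pixel lies in the shadow
                area As or secondary reflection area Ar of that projector.
    W(P) is 0 for masked pixels and 1 otherwise, and the integrated value H
    is the weighted average over projectors (steps S407-S411)."""
    weights = np.where(bad_masks, 0.0, 1.0)
    total_w = weights.sum(axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        h = (weights * shape_data).sum(axis=0) / total_w
    # Where every projector is masked the average is undefined; returning
    # NaN there is an assumption, as the description leaves that case open.
    return np.where(total_w > 0, h, np.nan)
```

Since, as noted below, the description allows any weighting coefficient of 0 or more and less than 1 for such pixels, the 0/1 weights here are simply the particular choice highlighted in FIG. 10.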
 In the embodiment described above, a component model 83 (reference model) indicating the component mounting range Ref of the board B in which a component E is mounted and the external shape of the component E mounted in that range is acquired (step S105). Then, based on the projection direction information 84 (irradiation direction information), which indicates the projection direction D in which the pattern light L(S) is projected onto the component E in the three-dimensional measuring device 1, and on the component model 83, the shadow area As and the secondary reflection area Ar (poor irradiation areas) are calculated (step S107). In this way, when measuring the three-dimensional shape of the measurement object J based on the pattern images I(S) acquired by imaging the pattern light L(S) irradiated onto the measurement object J, it is possible to obtain the shadow areas As and secondary reflection areas Ar in which poor irradiation of the pattern light L occurs.
 Further, the component model acquisition unit 111 (reference model acquisition unit) creates the component model 83 from at least one of inspection data indicating the criteria for checking whether the positional relationship between a component E mounted in a component mounting range Ref and that range is acceptable, and CAD data indicating the configuration of the board (step S105). With this configuration, the component model 83 can be created by making use of existing data such as the inspection data or CAD data of the board B, and the shadow area As and the secondary reflection area Ar can be calculated based on this component model 83.
 Further, a plurality of projectors 32 (pattern irradiation unit) that irradiate the component E with the pattern light L(S) from mutually different projection directions D are provided, and the projection direction information 84 indicates, for each of the plurality of projectors 32, the projection direction D in which light is irradiated from that projector 32 onto the component E. In this configuration, when the projector 32 projecting the pattern light L(S) changes among the plurality of projectors 32, the shadow area As and the secondary reflection area Ar change as well. Accordingly, the area calculation unit 112 calculates the shadow area As and the secondary reflection area Ar for each of the plurality of projectors 32 (steps S102, S107). This makes it possible to obtain the shadow areas As and secondary reflection areas Ar that occur when light is projected onto the component E from each of the plurality of projectors 32.
 Further, the shape calculation unit 114 executes a computation (step S307) that calculates, based on the pattern images I(S) acquired by imaging with the imaging camera 31 (imaging unit) the pattern light L(S) irradiated from a projector 32 onto the board B, unidirectional shape data 86 indicating for each pixel PX the shape-related value Q, a value concerning the three-dimensional shape. This computation (step S307) is executed for each of the plurality of projectors 32 (step S303), so that a plurality of unidirectional shape data 86 corresponding to the mutually different projectors 32 are calculated. The shape calculation unit 114 then executes the first integrated calculation process (FIG. 10), which integrates the plurality of unidirectional shape data 86 by calculating for each pixel PX a weighted average of the shape-related values Q indicated by the plurality of unidirectional shape data 86, thereby calculating the three-dimensional shape of the measurement object J. In this first integrated calculation process, the three-dimensional shape of the measurement object J is calculated by a weighted average in which, among the shape-related values Q indicated by the unidirectional shape data 86, the shape-related values Q of pixels PX included in a shadow area As or a secondary reflection area Ar are multiplied by a weighting coefficient W(P) of 0 or more and less than 1 (step S411).
 In this configuration, unidirectional shape data 86 indicating for each pixel PX the shape-related value Q concerning the three-dimensional shape is calculated based on the pattern images I(S) acquired by imaging with the imaging camera 31 the pattern light L(S) irradiated from a projector 32 onto the board B (step S307). That is, the unidirectional shape data 86 is data concerning the three-dimensional shape, calculated from the pattern images I(S) obtained by imaging the pattern light L(S) irradiated onto the component E from a single projector 32. The computation that calculates this unidirectional shape data 86 (step S307) is executed for each of the plurality of projectors 32 (step S303). Unidirectional shape data 86 is thereby acquired for the case where the pattern light L(S) is irradiated onto the component E from each of the plurality of projectors 32. The plurality of unidirectional shape data 86 acquired in this way are based on pattern images I(S) acquired while irradiating the component E with the pattern light L(S) from mutually different projection directions D. Therefore, a pixel PX that falls in a shadow area As or a secondary reflection area Ar in one unidirectional shape data 86 may fall in a well-irradiated area in another unidirectional shape data 86. Consequently, the plurality of unidirectional shape data 86 can be integrated by calculating for each pixel PX the average of the shape-related values Q indicated by the plurality of unidirectional shape data 86, and the three-dimensional shape of the measurement object J can be calculated (FIG. 10). Moreover, in the first integrated calculation process of FIG. 10, the three-dimensional shape of the measurement object J is calculated by a weighted average in which, among the shape-related values Q indicated by the unidirectional shape data 86, the shape-related values Q of pixels PX included in a shadow area As or a secondary reflection area Ar are multiplied by a weighting coefficient W(P) of 0 or more and less than 1 (steps S407 to S410). This makes it possible to calculate the three-dimensional shape of the measurement object J appropriately while suppressing the influence of the shadow areas As and the secondary reflection areas Ar.
 In particular, in the first integrated calculation process of FIG. 10, the three-dimensional shape of the measurement object J is calculated by a weighted average in which the shape-related values Q of pixels PX included in a shadow area As or a secondary reflection area Ar are multiplied by a weighting coefficient W(P) of 0 (steps S407 to S410). Multiplying by a weighting coefficient W(P) of 0 amounts to excluding the shadow areas As and the secondary reflection areas Ar. This makes it possible to calculate the three-dimensional shape of the measurement object J appropriately while eliminating the influence of the shadow areas As and the secondary reflection areas Ar.
 FIG. 12 is a flowchart showing an example of a computation for correcting the poor irradiation areas calculated in FIG. 7. The flowchart of FIG. 12 is executed by the computation of the main control unit 110 in parallel with the omnidirectional shape data acquisition of FIG. 9. In step S501, the count value P of the projector 32 is reset to zero, and in step S502 the count value P is incremented. Further, in step S503, the count value R of the component mounting range Ref is reset to zero, and in step S504 the count value R is incremented. This designates the shadow area As and the secondary reflection area Ar calculated for the component E of the R-th component mounting range Ref in correspondence with the P-th projector 32.
 In step S505, the shadow area As and the secondary reflection area Ar designated in steps S502 and S504 are corrected based on the position of the board B detected in step S301 of the omnidirectional shape data acquisition of FIG. 9. That is, it may happen that the position of the board B detected in step S301 deviates by a positional deviation amount Δa from the ideal position of the board B assumed in calculating the shadow area As and the secondary reflection area Ar. In such a case, by correcting the shadow area As and the secondary reflection area Ar according to the positional deviation amount Δa of the board B, the shadow area As and the secondary reflection area Ar can be calculated accurately according to the actual position of the board B carried into the three-dimensional measuring device 1.
 In step S506, the shadow area As and the secondary reflection area Ar designated in steps S502 and S504 are corrected based on the position of the component E actually mounted in the R-th component mounting range Ref. The details are as follows.
 In the control using the poor irradiation area correction of FIG. 12, a two-dimensional image of the component E is acquired in addition to the pattern images I(S) in step S306 of the omnidirectional shape data acquisition of FIG. 9. Specifically, the two-dimensional image of the component E is acquired by the imaging camera 31 imaging the component E from above while the illumination 33 irradiates the component E within the imaging field of view V31 with illumination light. Further, in step S306, a component position indicating the positional relationship between the component E actually mounted in the component mounting range Ref of the board B carried into the three-dimensional measuring device 1 and that component mounting range Ref is calculated from the two-dimensional image of the component E and stored in the storage unit 150.
 It may then happen that the positional relationship between the component mounting range Ref and the component E indicated by the component position calculated in step S306 deviates by a positional deviation amount Δb from the ideal positional relationship assumed in calculating the shadow area As and the secondary reflection area Ar. In such a case, by correcting the shadow area As and the secondary reflection area Ar according to the positional deviation amount Δb of the component E relative to the component mounting range Ref, the shadow area As and the secondary reflection area Ar can be calculated accurately according to the actual position of the component E mounted in the component mounting range Ref of the board B carried into the three-dimensional measuring device 1.
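 Under the simplifying assumption that both corrections are pure translations in the XY plane, steps S505 and S506 might be sketched as follows, reusing the rectangle representation of the earlier sketch:

```python
def correct_area(area, board_offset, component_offset):
    """Shift one poor irradiation rectangle (x_min, x_max, y_min, y_max) by
    the board deviation delta-a (step S505) and the component deviation
    delta-b relative to its mounting range (step S506). Treating both
    corrections as pure XY translations is a simplification; a rotated board
    or component would require a fuller transform."""
    dx = board_offset[0] + component_offset[0]
    dy = board_offset[1] + component_offset[1]
    x_min, x_max, y_min, y_max = area
    return (x_min + dx, x_max + dx, y_min + dy, y_max + dy)
```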
 In step S507, it is checked whether the count value R has reached the number Rmax of component mounting ranges Ref provided on the board B, that is, whether the corrections of steps S505 and S506 have been executed for all of the component mounting ranges Ref provided on the board B. If a component mounting range Ref for which the corrections have not been executed remains ("NO" in step S507), the process returns to step S504. Steps S505 and S506 are thus repeated until the corrections have been executed for all component mounting ranges Ref ("YES" in step S507). In this way, the poor irradiation areas (shadow area As and secondary reflection area Ar) for each component mounting range Ref that occur when the pattern light L(S) is projected from the P-th projector 32 are corrected.
 In step S508, it is checked whether the count value P has reached the number Pmax (= 4) of projectors 32, that is, whether steps S503 to S507 have been executed for all projectors 32. If a projector 32 for which steps S503 to S507 have not been executed remains ("NO" in step S508), the process returns to step S502. Steps S503 to S507 are thus repeated until they have been executed for all projectors 32 ("YES" in step S508). In this way, the poor irradiation areas (shadow area As and secondary reflection area Ar) arising from the component E of each component mounting range Ref when the pattern light L(S) is projected from each projector 32 are corrected.
 In the first data integration process of FIG. 10, steps S407 to S411 are then executed based on the shadow areas As and secondary reflection areas Ar corrected in this way. The integrated shape data 87 can thereby be calculated based on shadow areas As and secondary reflection areas Ar that are set accurately for the measurement object J (board B and components E) actually carried into the three-dimensional measuring device 1.
 In the example of FIG. 12, the area calculation unit 112 calculates the shadow area As and the secondary reflection area Ar further based on the result of recognizing, in step S301, the position of the board B carried into the three-dimensional measuring device 1 and supported by the transport conveyor 2 (object support unit) (step S505). With this configuration, the shadow area As and the secondary reflection area Ar can be calculated accurately according to the actual position of the board B carried into the three-dimensional measuring device 1.
 In particular, the position of the board B supported by the transport conveyor 2 is recognized based on the result of imaging, with the imaging camera 31, the fiducial marks attached to the board B (step S301). With this configuration, the shadow area As and the secondary reflection area Ar can be calculated accurately according to the position of the board B carried into the three-dimensional measuring device 1.
 Further, the area calculation unit 112 calculates the shadow area As and the secondary reflection area Ar further based on the result of detecting, in step S306, the position of the component E mounted in the component mounting range Ref (step S506). With this configuration, the shadow area As and the secondary reflection area Ar can be calculated accurately according to the actual position of the component E mounted in the component mounting range Ref.
 Further, the position of the component E mounted in the component mounting range Ref is detected based on a two-dimensional image of the component E (step S306). With this configuration as well, the shadow area As and the secondary reflection area Ar can be calculated accurately according to the actual position of the component E mounted in the component mounting range Ref.
 FIG. 13 is a flowchart showing a second example of the three-dimensional measurement, FIG. 14 is a flowchart showing the second data integration process executed in the three-dimensional measurement of FIG. 13, and FIG. 15 is a diagram schematically showing the contents of the computation executed in the three-dimensional measurement of FIG. 13. Here, a situation is assumed in which a plurality of boards B are carried into the three-dimensional measuring device 1 in sequence and three-dimensional measurement is executed on each board B.
 As shown in FIG. 13, in the second example of the three-dimensional measurement as well, the omnidirectional shape data acquisition (step S601) and the first data integration process (step S602) are executed as in the first example. In addition, in the second example, the second data integration process (FIG. 14) is executed in step S603. The execution order of the first data integration process and the second data integration process is not limited to the example of FIG. 13 and may be the reverse.
 As shown in FIG. 14, the second data integration process differs from the first data integration process in that, in place of steps S405 to S411 of the first data integration process, a simple average (an average with all weighting coefficients equal to 1) is calculated in step S414. That is, in step S414 of the second data integration process, the simple average of the shape-related values Q of the N-th pixel PX obtained for each of the four projectors 32 is calculated. The integrated value H is thus calculated by a simple average rather than a weighted average, and the integrated shape data 87 is obtained.
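 In the array notation of the earlier sketches, step S414 reduces to the weighted average with every W(P) fixed to 1:

```python
import numpy as np

def second_data_integration(shape_data):
    """Second data integration (FIG. 14, step S414): a simple average of the
    shape-related values Q over the projector axis of a (P, H, W) array,
    i.e. the weighted average above with every weighting coefficient W(P) = 1."""
    return shape_data.mean(axis=0)
```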
 When the three-dimensional measurement of one board B has been completed in this way (steps S601 to S603), it is judged whether steps S601 to S603 have been executed for a predetermined number (one or more) of boards B (step S604). If not ("NO" in step S604), it is judged in step S605 whether to end the measurement. If the measurement is to be ended ("YES" in step S605), the three-dimensional measurement of FIG. 13 ends; if not ("NO" in step S605), the process returns to step S601.
 When it is judged in step S604 that steps S601 to S603 have been executed for the predetermined number of boards B (YES), the difference between the integrated shape data 87 (three-dimensional shape) acquired by the first data integration process of step S602 and the integrated shape data 87 (three-dimensional shape) acquired by the second data integration process of step S603 is calculated. For example, for the same component mounting range Ref of the same board B, the mean of the squared differences (that is, the mean squared error) between the integrated shape data 87 acquired by the first data integration process and the integrated shape data 87 acquired by the second data integration process is calculated. When one board B has a plurality of component mounting ranges Ref, the mean squared error may be calculated for each component mounting range Ref or for one representative component mounting range Ref. The mean squared error is further calculated for each of the predetermined number of boards B, and the average, median or maximum of the mean squared errors calculated in this way is taken as the difference. The specific method of calculating the difference is not limited to this example, and any calculation method capable of evaluating the difference between two sets of data can be adopted.
 In step S607, the necessity of the first data integration process is judged based on this difference. Specifically, when the difference is equal to or greater than a predetermined threshold, the first data integration process is judged to be necessary (YES); when the difference is less than the threshold, the first data integration process is judged to be unnecessary. For example, as shown by the waveforms (three-dimensional shapes) in the "affected by poor irradiation" column of FIG. 15, when poor irradiation has an influence, peak noise that appears in the three-dimensional shape obtained by the second data integration process does not appear in the three-dimensional shape obtained by the first data integration process; this indicates that the first data integration process is necessary to remove the influence of poor irradiation. Conversely, as shown by the waveforms (three-dimensional shapes) in the "not affected by poor irradiation" column of FIG. 15, when poor irradiation has no influence, there is no large difference between the three-dimensional shape obtained by the first data integration process and that obtained by the second data integration process; this indicates that the first data integration process is unnecessary.
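 Combining the difference calculation with the judgment of step S607, a hypothetical sketch is as follows; taking the mean of the per-board MSEs is just one of the aggregations the description allows (average, median or maximum), and the threshold value itself is an assumed parameter:

```python
import numpy as np

def first_integration_needed(h_weighted_list, h_simple_list, threshold):
    """For each of the predetermined number of boards, compare the integrated
    shapes from the first (weighted) and second (simple average) data
    integration by mean squared error, aggregate over the boards, and require
    the first data integration process only when the aggregate difference
    reaches the threshold (step S607)."""
    mses = [float(np.nanmean((hw - hs) ** 2))
            for hw, hs in zip(h_weighted_list, h_simple_list)]
    return float(np.mean(mses)) >= threshold
```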
 When the first data integration process is judged to be necessary ("YES" in step S607), each time one board B is carried into the three-dimensional measuring device 1, the omnidirectional shape data acquisition (step S608), the first data integration process (step S609) and the judgment of whether to end the measurement (step S610) are executed. The integrated shape data 87 is thus calculated by the first data integration process rather than by the second data integration process (step S609). Conversely, when the first data integration process is judged to be unnecessary ("NO" in step S607), each time one board B is carried into the three-dimensional measuring device 1, the omnidirectional shape data acquisition (step S611), the second data integration process (step S612) and the judgment of whether to end the measurement (step S613) are executed. The integrated shape data 87 is thus calculated by the second data integration process rather than by the first data integration process (step S612).
 In the example of FIG. 13, the shape calculation unit 114 executes both the first data integration process (step S602) and the second data integration process (step S603) on the same measurement object J. In the second data integration process, the shape calculation unit 114 integrates the plurality of unidirectional shape data 86 by calculating for each pixel PX the average (simple average) of the shape-related values Q indicated by the plurality of unidirectional shape data 86, without weighting according to the shadow areas As and the secondary reflection areas Ar, thereby calculating the three-dimensional shape of the measurement object J (integrated shape data 87). The shape calculation unit 114 further executes an evaluation process (step S606), which evaluates the difference between the integrated shape data 87 (three-dimensional shapes) of the measurement object J calculated by the first data integration process (step S602) and by the second data integration process (step S603), on the predetermined number of measurement objects J for which pattern images I(S) have been acquired by the three-dimensional measuring device 1. Based on this evaluation process, a necessity judgment that judges whether the first data integration process is necessary is executed (step S607). After the first data integration process has been judged unnecessary in the necessity judgment of step S607, the shape calculation unit 114 calculates the integrated shape data 87 (three-dimensional shape) of the measurement object J by the second data integration process without executing the first data integration process (steps S611 to S613). With this configuration, when the shadow areas As and the secondary reflection areas Ar pose no problem, the integrated shape data 87 of the measurement object J can be calculated accurately by the second data integration process, which is simpler than the first data integration process.
 In this example, the "predetermined number" judged in step S604 may be configured to be settable by the user. In that case, the user performs an operation to set the predetermined number on the UI 200, and the UI 200 accepts the predetermined number input by the user and sets it for use in the three-dimensional measurement of FIG. 13. With this configuration, the user can adjust as appropriate the period (steps S601 to S604) during which both the first data integration process (step S602) and the second data integration process (step S603) are executed.
 As described above, in the above embodiment, the three-dimensional measuring device 1 corresponds to an example of the "three-dimensional measuring device" of the present invention; the control device 100 corresponds to an example of the "computing device for three-dimensional measurement"; the component model acquisition unit 111 corresponds to an example of the "reference model acquisition unit"; the area calculation unit 112 corresponds to an example of the "area calculation unit"; the shape calculation unit 114 corresponds to an example of the "shape calculation unit"; the conveyor 2 corresponds to an example of the "object support unit"; the imaging camera 31 corresponds to an example of the "imaging unit"; the projector 32 corresponds to an example of the "projector"; the plurality of projectors 32 correspond to an example of the "pattern irradiation unit"; the component model 83 corresponds to an example of the "reference model"; the projection direction information 84 corresponds to an example of the "irradiation direction information"; the unidirectional shape data 86 correspond to an example of the "unidirectional shape data"; the server computer 91 corresponds to an example of the "recording medium"; the three-dimensional measurement program 92 corresponds to an example of the "three-dimensional measurement program"; the UI 200 corresponds to an example of the "setting operation unit"; the secondary reflection area Ar corresponds to an example of the "secondary reflection area" and of the "poor irradiation area"; the shadow area As corresponds to an example of the "shadow area" and of the "poor irradiation area"; the board B corresponds to an example of the "board"; the component E corresponds to an example of the "component"; the pattern image I(S) corresponds to an example of the "pattern image"; the measurement object J corresponds to an example of the "measurement object"; the pattern light L(S) corresponds to an example of the "light"; the pixel PX corresponds to an example of the "pixel"; the shape-related value Q corresponds to an example of the "shape-related value"; the measured value Qm corresponds to an example of the "calculated value"; the reliability Qr corresponds to an example of the "reliability"; the component mounting range Ref corresponds to an example of the "component mounting range"; step S606 corresponds to an example of the "evaluation process"; step S607 corresponds to an example of the "necessity determination"; the first data integration process of FIG. 10 corresponds to an example of the "first integrated calculation process"; and the second data integration process of FIG. 14 corresponds to an example of the "second integrated calculation process" of the present invention.
 Note that the present invention is not limited to the embodiment described above, and various modifications can be made to what has been described without departing from the gist of the invention. For example, it is not necessary to calculate both the secondary reflection area Ar and the shadow area As as poor irradiation areas. For instance, when one of the secondary reflection area Ar and the shadow area As has a large influence and the other has only a slight influence, only that one may be calculated as the poor irradiation area in step S107 of FIG. 7.
 Also, in the poor irradiation area correction of FIG. 12, only one of the correction based on the position of the board B (step S505) and the correction based on the position of the component E (step S506) may be executed, with the other omitted.
 The specific content of the shape-related value Q may also be changed as appropriate. For example, the measured value Qm may be used directly as the shape-related value Q, without calculating the reliability Qr.
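 As a hedged illustration of these two variants (the bare measured value, and the product with a reliability used elsewhere in the embodiment), a hypothetical Python helper might look as follows; the array names qm and qr are assumptions, not part of the disclosure.

    import numpy as np

    def shape_related_value(qm, qr=None):
        """Shape-related value Q per pixel: Q = Qm when no reliability map
        is supplied, Q = Qm * Qr when one is (product of the calculated
        value and its reliability)."""
        qm = np.asarray(qm, dtype=float)
        return qm if qr is None else qm * np.asarray(qr, dtype=float)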
 Also, in step S408, the weighting coefficient W(P) multiplied for the poor irradiation area does not need to be 0; it may be any value of 0 or more and less than 1.
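 A weight map satisfying this constraint could be built as in the sketch below; the mask representation and the default value of w_bad are assumptions, not part of the disclosure.

    import numpy as np

    def weight_map(poor_area_mask, w_bad=0.2):
        """W(P): 1.0 outside the poor irradiation area, w_bad inside.
        Any 0 <= w_bad < 1 is allowed; w_bad = 0 reproduces the case in
        which poor-irradiation pixels are excluded from the weighted average."""
        assert 0.0 <= w_bad < 1.0, "weight in the poor irradiation area must be in [0, 1)"
        return np.where(np.asarray(poor_area_mask, dtype=bool), w_bad, 1.0)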
 Furthermore, the functional units 111, 112, 113, and 114 configured in the main control unit 110 by execution of the three-dimensional measurement program 92 do not necessarily have to be configured in the control device 100 provided in the three-dimensional measuring device 1. Accordingly, the functional units 111, 112, 113, and 114 may instead be configured in a processor of a computer provided separately from the three-dimensional measuring device 1.
 Also, in the second data integration process of FIG. 14, a median (representative value) may be calculated in step S414 instead of the average value (representative value) obtained by simple averaging.
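 A hypothetical generalization of the unweighted integration that supports either representative value is sketched below; the function and parameter names are illustrative only.

    import numpy as np

    def integrate_unweighted(shape_maps, representative="mean"):
        """Second data integration: per-pixel representative value of Q,
        computed as either a simple average or a median (variant of step S414)."""
        stack = np.asarray(shape_maps, dtype=float)
        if representative == "median":
            return np.median(stack, axis=0)
        return stack.mean(axis=0)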
 1... Three-dimensional measuring device
 100... Control device (computing device for three-dimensional measurement)
 111... Component model acquisition unit (reference model acquisition unit)
 112... Area calculation unit (area calculation unit)
 114... Shape calculation unit
 2... Conveyor (object support unit)
 31... Imaging camera (imaging unit)
 32... Projector (pattern irradiation unit)
 83... Component model (reference model)
 84... Projection direction information (irradiation direction information)
 86... Unidirectional shape data
 91... Server computer (recording medium)
 92... Three-dimensional measurement program
 200... UI (setting operation unit)
 Ar... Secondary reflection area (poor irradiation area)
 As... Shadow area (poor irradiation area)
 B... Board
 E... Component
 I(S)... Pattern image
 J... Measurement object
 L(S)... Pattern light (light)
 PX... Pixel
 Q... Shape-related value
 Qm... Measured value (calculated value)
 Qr... Reliability
 Ref... Component mounting range

Claims (17)

  1.  A computing device for three-dimensional measurement that executes computation for measuring a three-dimensional shape of a measurement object based on a pattern image acquired by a three-dimensional measuring device, the three-dimensional measuring device comprising: an object support unit that supports the measurement object, which has a board and a component mounted on the board; a pattern irradiation unit that irradiates the measurement object supported by the object support unit with light of a predetermined pattern; and an imaging unit that acquires a two-dimensional pattern image by imaging the light irradiated onto the measurement object from the pattern irradiation unit, the computing device comprising:
     a reference model acquisition unit that acquires a reference model indicating a component mounting range of the board in which the component is mounted and an outer shape of the component mounted in the component mounting range; and
     an area calculation unit that calculates a poor irradiation area, the poor irradiation area being at least one of a shadow area in which a shadow of the component is cast on the board because light irradiated onto the component from the pattern irradiation unit is blocked by the component, and a secondary reflection area in which light irradiated onto the component and reflected by the component is incident on the board, based on irradiation direction information indicating a direction in which light is irradiated onto the component in the three-dimensional measuring device and on the reference model.
  2.  The computing device for three-dimensional measurement according to claim 1, wherein the reference model acquisition unit creates the reference model from at least one of inspection data indicating a standard for inspecting the suitability of a positional relationship between the component mounted in the component mounting range and the component mounting range, and CAD (Computer-Aided Design) data indicating a configuration of the board.
  3.  The computing device for three-dimensional measurement according to claim 1 or 2, wherein the area calculation unit calculates the poor irradiation area further based on a result of recognizing a position of the board carried into the three-dimensional measuring device and supported by the object support unit.
  4.  The computing device for three-dimensional measurement according to claim 3, wherein the position of the board supported by the object support unit is recognized based on a result of imaging, by the imaging unit, a fiducial mark provided on the board.
  5.  The computing device for three-dimensional measurement according to any one of claims 1 to 4, wherein the area calculation unit calculates the poor irradiation area further based on a result of detecting a position of the component mounted in the component mounting range.
  6.  The computing device for three-dimensional measurement according to claim 5, wherein the position of the component mounted in the component mounting range is detected based on a result of imaging the component.
  7.  The computing device for three-dimensional measurement according to any one of claims 1 to 6, wherein the pattern irradiation unit has a plurality of projectors that irradiate the component with light from mutually different directions,
     the irradiation direction information indicates, for each of the plurality of projectors, a direction in which light is irradiated from the projector onto the component, and
     the area calculation unit calculates the poor irradiation area for each of the plurality of projectors.
  8.  The computing device for three-dimensional measurement according to claim 7, further comprising a shape calculation unit that calculates a plurality of unidirectional shape data by executing, for each of the plurality of projectors, a computation that calculates, based on the pattern image acquired by the imaging unit imaging the light irradiated from the projector onto the board, unidirectional shape data indicating for each pixel a shape-related value, which is a value related to a three-dimensional shape,
     wherein the shape calculation unit executes a first integrated calculation process that integrates the plurality of unidirectional shape data by calculating, for each pixel, an average of the shape-related values indicated by the plurality of unidirectional shape data, thereby calculating the three-dimensional shape of the measurement object, and
     in the first integrated calculation process, the three-dimensional shape of the measurement object is calculated by a weighted average in which, among the shape-related values indicated by the unidirectional shape data, the shape-related values of the pixels included in the poor irradiation area are multiplied by a weighting coefficient of 0 or more and less than 1.
  9.  The computing device for three-dimensional measurement according to claim 8, wherein, in the first integrated calculation process, the three-dimensional shape of the measurement object is calculated by a weighted average in which the shape-related values of the pixels included in the poor irradiation area are multiplied by a weighting coefficient of 0.
  10.  The computing device for three-dimensional measurement according to claim 8 or 9, wherein the shape-related value is given by a product of a calculated value of the three-dimensional shape calculated based on the pattern image and a reliability of the calculated value.
  11.  The computing device for three-dimensional measurement according to claim 8 or 9, wherein the shape-related value is a calculated value of the three-dimensional shape calculated based on the pattern image.
  12.  The computing device for three-dimensional measurement according to any one of claims 8 to 11, wherein the shape calculation unit executes both a second integrated calculation process, which integrates the plurality of unidirectional shape data by calculating for each pixel a representative value of the shape-related values indicated by the plurality of unidirectional shape data without weighting according to the poor irradiation area, thereby calculating the three-dimensional shape of the measurement object, and the first integrated calculation process on the same measurement object, and executes, on a predetermined number of measurement objects from which the pattern images have been acquired by the three-dimensional measuring device, an evaluation process that evaluates a difference between the three-dimensional shapes of the measurement object calculated by the first integrated calculation process and the second integrated calculation process respectively, thereby executing a necessity determination that determines whether the first integrated calculation process is necessary, and
     after the first integrated calculation process is determined to be unnecessary in the necessity determination, the shape calculation unit calculates the three-dimensional shape of the measurement object by the second integrated calculation process without executing the first integrated calculation process.
  13.  The computing device for three-dimensional measurement according to claim 12, further comprising a setting operation unit that accepts an operation for setting the predetermined number.
  14.  A three-dimensional measurement program that causes a computer to function as the computing device for three-dimensional measurement according to any one of claims 1 to 13.
  15.  A recording medium on which the three-dimensional measurement program according to claim 14 is recorded so as to be readable by a computer.
  16.  A three-dimensional measuring device comprising:
     an object support unit that supports a measurement object having a board and a component mounted on the board;
     a pattern irradiation unit that irradiates the measurement object supported by the object support unit with light of a predetermined pattern;
     an imaging unit that acquires a two-dimensional pattern image by imaging the light irradiated onto the measurement object from the pattern irradiation unit; and
     the computing device for three-dimensional measurement according to any one of claims 1 to 13, which executes computation for measuring a three-dimensional shape of the measurement object based on the pattern image.
  17.  A computing method for three-dimensional measurement that executes computation for measuring a three-dimensional shape of a measurement object based on a pattern image acquired by a three-dimensional measuring device, the three-dimensional measuring device comprising: an object support unit that supports the measurement object, which has a board and a component mounted on the board; a pattern irradiation unit that irradiates the measurement object supported by the object support unit with light of a predetermined pattern; and an imaging unit that acquires a two-dimensional pattern image by imaging the light irradiated onto the measurement object from the pattern irradiation unit, the method comprising:
     a step of acquiring a reference model indicating a component mounting range of the board in which the component is mounted and an outer shape of the component mounted in the component mounting range; and
     a step of calculating a poor irradiation area, the poor irradiation area being at least one of a shadow area in which a shadow of the component is cast on the board because light irradiated onto the component from the pattern irradiation unit is blocked by the component, and a secondary reflection area in which light irradiated onto the component and reflected by the component is incident on the board, based on irradiation direction information indicating a direction in which light is irradiated onto the component in the three-dimensional measuring device and on the reference model.

PCT/JP2022/010277 2022-03-09 2022-03-09 Operation device for three-dimensional measurement, three-dimensional measurement program, recording medium, three-dimensional measurement device, and operation method for three-dimensional measurement WO2023170814A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/010277 WO2023170814A1 (en) 2022-03-09 2022-03-09 Operation device for three-dimensional measurement, three-dimensional measurement program, recording medium, three-dimensional measurement device, and operation method for three-dimensional measurement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/010277 WO2023170814A1 (en) 2022-03-09 2022-03-09 Operation device for three-dimensional measurement, three-dimensional measurement program, recording medium, three-dimensional measurement device, and operation method for three-dimensional measurement

Publications (1)

Publication Number Publication Date
WO2023170814A1 true WO2023170814A1 (en) 2023-09-14

Family

ID=87936295

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/010277 WO2023170814A1 (en) 2022-03-09 2022-03-09 Operation device for three-dimensional measurement, three-dimensional measurement program, recording medium, three-dimensional measurement device, and operation method for three-dimensional measurement

Country Status (1)

Country Link
WO (1) WO2023170814A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014115107A (en) * 2012-12-06 2014-06-26 Canon Inc Device and method for measuring distance
WO2015104799A1 (en) * 2014-01-08 2015-07-16 ヤマハ発動機株式会社 Visual inspection device and visual inspection method
JP2016099257A (en) * 2014-11-21 2016-05-30 キヤノン株式会社 Information processing device and information processing method
WO2020065850A1 (en) * 2018-09-27 2020-04-02 ヤマハ発動機株式会社 Three-dimensional measuring device
WO2020183516A1 (en) * 2019-03-08 2020-09-17 ヤマハ発動機株式会社 Mounting data generation assistance device, mounting data generation assistance method, appearance inspection machine, and component mounting system


Similar Documents

Publication Publication Date Title
JP4715944B2 (en) Three-dimensional shape measuring apparatus, three-dimensional shape measuring method, and three-dimensional shape measuring program
US10563977B2 (en) Three-dimensional measuring device
US9441957B2 (en) Three-dimensional shape measuring apparatus
KR101064161B1 (en) Scanning system with stereo camera set
TWI526671B (en) Board-warping measuring apparatus and board-warping measuring method thereof
JP6322335B2 (en) Appearance inspection device
TWI635252B (en) Methods and system for inspecting a 3d object using 2d image processing
US20130342677A1 (en) Vision testing device using multigrid pattern
KR20100083698A (en) Three-dimensional measurement device
JP4315536B2 (en) Electronic component mounting method and apparatus
US20210348918A1 (en) Three-dimensional measuring device
JP2009036589A (en) Target for calibration and device, method and program for supporting calibration
US9157874B2 (en) System and method for automated x-ray inspection
US20160320314A1 (en) Visual inspection apparatus and visual inspection method
JP6097389B2 (en) Inspection apparatus and inspection method
WO2023170814A1 (en) Operation device for three-dimensional measurement, three-dimensional measurement program, recording medium, three-dimensional measurement device, and operation method for three-dimensional measurement
JP5136108B2 (en) 3D shape measuring method and 3D shape measuring apparatus
JPWO2020039575A1 (en) 3D measuring device, 3D measuring method
WO2020241061A1 (en) Three-dimensional measurement apparatus and three-dimensional measurement method
KR101503021B1 (en) Inspection apparatus and compensating method thereof
TWI818349B (en) Workpiece height measuring device and mounting substrate inspection device using the same
US20230184541A1 (en) Three-dimensional measurement system and calibration method thereof
JP7156795B2 (en) Inspection device adjustment method and inspection device
JP2013115380A (en) Component mounting device and component mounting method
JP7139953B2 (en) Three-dimensional shape measuring device, three-dimensional shape measuring method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22930797

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2024505718

Country of ref document: JP