JP2008096123A - Optical displacement gauge, optical displacement measuring method, optical displacement measuring program, computer-readable memory medium and recording equipment - Google Patents


Publication number
JP2008096123A
JP2008096123A (application JP2006274533A)
Authority
JP
Japan
Prior art keywords
light
profile
image
unit
measurement
Prior art date
Legal status
Pending
Application number
JP2006274533A
Other languages
Japanese (ja)
Inventor
Yoshiaki Nishio
佳晃 西尾
Original Assignee
Keyence Corp
株式会社キーエンス
Priority date
Filing date
Publication date
Application filed by Keyence Corp (株式会社キーエンス)
Priority to JP2006274533A
Publication of JP2008096123A
Application status: Pending

Abstract

PROBLEM TO BE SOLVED: To enhance the visibility of a received-light image and to facilitate confirmation of the profile.
SOLUTION: The optical displacement gauge comprises a display unit for displaying the received-light image formed from the amplified signal obtained by the amplifier at each point in a first direction, and a received-light image coloring means capable of displaying the image after a coloring process in which the gradation of the received-light signal is divided into a plurality of ranges, a different color is assigned to each range, and every pixel of the received-light image is colored accordingly. Because each gradation width is colored, the received-light image is displayed like a contour map: the density of the colored gradation bands reveals how steep the gradient of the received-light distribution is, so the degree of inclination of the profile can be grasped visually.
COPYRIGHT: (C)2008,JPO&INPIT

Description

  The present invention relates to an optical displacement meter that irradiates a measurement object with light, receives the light returning from the measurement object with a light receiving element, and measures quantities such as the distance from the light projecting unit to the measurement object and the displacement of the measurement object, as well as to an optical displacement measurement method, an optical displacement measurement program, a computer-readable recording medium, and recorded equipment.

An optical displacement meter based on triangulation is used to measure the dimensions and movement of a measurement object (workpiece). FIG. 93 is a block diagram showing the main part of a conventional optical displacement meter. In FIG. 93, a drive circuit 101 drives a laser diode (LD) 102 based on a light output control signal Va. The laser light emitted from the laser diode 102 is applied to the workpiece WK through the light projecting lens 103. Of the light reflected from the workpiece WK, the diffuse and regular reflection components are received through the light receiving lens 104 by a light position detection element 105 such as a PSD. When the workpiece WK is displaced in the direction indicated by the arrow X, the light spot on the light receiving surface of the light position detection element 105 moves. The element outputs two signals corresponding to the position of the light spot on the light receiving surface, and these are converted into voltages by current-voltage conversion circuits (I-V conversion circuits) 106a and 106b. One output of the light position detection element 105 has a current value proportional to the distance from one end of the light receiving surface to the light spot, and the other a current value proportional to the distance from the other end. Accordingly, the displacement of the workpiece WK can be detected from the current values of these two output signals.
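The two-current PSD readout described above reduces to a simple ratio computation. The sketch below (illustrative Python; the function and parameter names are hypothetical, not part of the patent) shows how the spot position, and hence the displacement, can be recovered independently of the total amount of received light:

```python
def psd_spot_position(i_near: float, i_far: float, length_mm: float) -> float:
    """Estimate the light-spot position on a PSD from its two output currents.

    i_near / i_far : photocurrents proportional to the spot's distance from
    each end of the light-receiving surface; length_mm : active length of
    the PSD. (Hypothetical names -- the patent does not define this function.)
    """
    total = i_near + i_far
    if total <= 0.0:
        raise ValueError("no light received")
    # Normalizing by the current sum makes the result independent of the
    # absolute received light amount; only the spot position remains.
    return length_mm * i_far / total

# A spot at the centre of a 10 mm PSD yields equal currents:
pos = psd_spot_position(1.0, 1.0, 10.0)  # -> 5.0 mm
```

This ratio form is why the two output signals together suffice to detect displacement even when the workpiece reflectivity varies.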

In such an optical displacement meter, the intensity of the reflected light differs depending on the material and surface state of the workpiece WK, so the light output must be adjusted. FIG. 94 is a block diagram showing an example of a conventional control circuit for controlling the amount of light received by the light position detection element 105. The control circuit of FIG. 94 includes the current-voltage conversion circuits 106a and 106b, an adder 112, a subtractor 113, an error integration circuit 114, a reference voltage generation circuit 115, and a light output adjustment circuit 111. The current-voltage conversion circuits 106a and 106b convert the current signals of the light position detection element 105 into voltage signals. The adder 112 adds the FAR-side and NEAR-side voltage signals and outputs the received light amount of the light position detection element 105 as a received light amount voltage VL. The reference voltage generation circuit 115 generates a predetermined reference voltage Vr. The subtractor 113 outputs the difference between the received light amount voltage VL obtained by the adder 112 and the reference voltage Vr as an error signal VE. The error integration circuit 114 integrates the error signal VE and supplies the result to the light output adjustment circuit 111 as a control voltage VC. The light output adjustment circuit 111 is thereby controlled so that the received light amount voltage VL output from the adder 112 becomes equal to the reference voltage Vr. By controlling in this way the voltage of the light output control signal Va applied to the drive circuit 101 of FIG. 93, the amount of light received by the light position detection element 105 can be held at a constant level.
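As a rough digital analogue of the analog error-integration loop of FIGS. 93 and 94, the following sketch (hypothetical gain and plant model, not from the patent) shows how integrating the error between the received light amount voltage VL and the reference voltage Vr drives the control voltage VC until VL settles at Vr:

```python
def control_step(vl: float, vr: float, vc: float, ki: float = 0.1) -> float:
    """One step of the error-integration feedback.

    vl : received light amount voltage (adder 112 output)
    vr : reference voltage (reference voltage generation circuit 115)
    vc : current control voltage driving the light output adjustment
    ki : integration gain (illustrative value)
    """
    ve = vr - vl           # error signal VE (subtractor 113)
    return vc + ki * ve    # integrated error -> new control voltage VC

# Drive a toy plant in which the received light amount is simply
# proportional to the control voltage (hypothetical plant gain of 2).
vc = 0.0
for _ in range(200):
    vl = 2.0 * vc
    vc = control_step(vl, vr=1.0, vc=vc)
# The received light amount voltage converges to the reference (1.0),
# which is exactly the constant-light-level behaviour described above.
```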

In such an optical displacement meter, the reflectivity of the light and the amount of received light vary greatly depending on conditions such as the color, roughness, and angle of the workpiece surface. If the received light signal is too small, or too large because of saturation or the like, the measurement accuracy deteriorates. A technology has therefore been developed that performs feedback control to adjust the light emission amount of the light emitting element and the amplification factor (gain) of the amplifier so that the peak value of the received light amount (image signal level) reaches a target value (for example, Patent Reference 1). As shown in FIG. 95, this optical displacement meter includes a light emitting element 102B for irradiating the workpiece WK with light, an image sensor 105B for receiving light from the workpiece WK and generating an image signal, a signal processing circuit including an amplifier 146B for amplifying the image signal from the image sensor 105B, and a control unit 144B that, based on the image signal from the signal processing circuit, executes feedback control of at least one operation amount among the light emission amount of the light emitting element 102B and the amplification factor of the amplifier 146B, while limiting the variable width of at least one of those operation amounts. In a variable-width setting mode, the control unit 144B sets an appropriate variable width of the operation amount based on operation amount data for a predetermined period. As shown in FIG. 96, this makes it possible to cope with high-speed measurement while retaining the advantage of feedback control of the light emission amount of the light emitting element 102B and the amplification factor of the amplifier 146B.
JP 2006-010361 A
JP 10-267648 A

  On the other hand, apparatuses have been developed that measure the shape (profile) of a workpiece using the light-section principle. The light-section method is a two-dimensional extension of triangulation, as shown in FIGS. 97(a) and 97(b). That is, as shown in FIG. 97(a), the laser beam LB irradiated from the sensor head SH onto the workpiece WK is formed into a band, expanding the triangulation in the width (X) direction; the result is shown in FIG. 97(b). For this reason, whereas plain triangulation uses a linear (one-dimensional) light-receiving element, the light-section method uses a two-dimensionally arranged light-receiving element JS.

  In such a light-section displacement meter, the measured profile shape of the workpiece can be confirmed on a monitor, as shown in FIG. 8. When the profile shape cannot be measured properly, the cause may be judged by examining the received light image acquired by a light receiving element such as a CCD, as shown in FIG. However, because the received light image is a black-and-white grayscale image, there is a problem that the distribution of the received light is difficult to understand.

  In such a case, one way to grasp the distribution of the received light is to display a line bright waveform, which shows the received light distribution along one line of the image. The line bright waveform indicates the received-light luminance distribution in the displacement measurement direction. For example, by plotting the displacement direction (pixel column direction) on the horizontal axis and the received luminance (gradation) on the vertical axis, the luminance distribution along the displacement measurement direction on the surface of the measurement object can be grasped. However, a line bright waveform covers only one line of the image, and there is a problem that when a plurality of line bright waveforms are displayed, the original received light image can no longer be displayed.

  The present invention has been made to solve these conventional problems. An object of the present invention is to provide an optical displacement meter, an optical displacement measurement method, an optical displacement measurement program, a computer-readable recording medium, and recorded equipment that improve the visibility of a received light image and facilitate confirmation of the profile.

Means for Solving the Problems and Effects of the Invention

  In order to achieve the above object, a first optical displacement meter of the present invention is an optical displacement meter that measures the displacement of a measurement object, comprising: a light projecting unit that irradiates the measurement object either with band-shaped light having a spread in a first direction or by scanning light in the first direction; a two-dimensional light receiving element that receives the reflected light from the measurement object and outputs it as a received light signal at each position in the first direction; an amplifier that amplifies the received light signal from the two-dimensional light receiving element; a display unit that displays the received light image generated from the amplified signal obtained by the amplifier at each point in the first direction from the reflected light of the irradiation light from the light projecting unit; and a received-light image coloring means that divides the gradation of the received light signal into a plurality of ranges, assigns a different color to each range, and can display the received light image on the display unit with each pixel colored according to the color assigned to its gradation. Because coloring is performed for each gradation width of the received light signal, the received light image is displayed like a contour map, and the density of the colored gradation bands shows whether the gradient of the received light distribution is steep or gentle, making it easier to grasp the degree of profile inclination visually.
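The gradation-range coloring described here can be sketched as follows (illustrative Python; the palette, the number of ranges, and the function names are assumptions, not part of the patent):

```python
def colorize(image, levels: int = 8, max_gray: int = 255):
    """Map each pixel's gray level to the color assigned to its gradation range.

    `image` is a 2-D list of gray levels (0..max_gray). The gradation scale
    is divided into `levels` equal ranges, each given a distinct color, so
    the result reads like a contour map of the received light distribution.
    """
    # Simple blue-to-red palette, one RGB entry per gradation range
    # (any distinct-per-range palette would do).
    palette = [(int(255 * k / (levels - 1)), 0,
                int(255 * (1 - k / (levels - 1)))) for k in range(levels)]
    width = (max_gray + 1) / levels  # gradation width of each range
    return [[palette[min(int(g / width), levels - 1)] for g in row]
            for row in image]

img = [[0, 32, 64, 128, 255]]
colored = colorize(img)
# Pixels in the same gradation range share one color; where colored bands
# crowd together, the received light gradient is steep, like close contours.
```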

  The second optical displacement meter further comprises profile calculation means capable of calculating the profile shape of the measurement object from the amplified signal obtained by the amplifier at each point in the first direction from the reflected light of the irradiation light, and the profile information calculated by the profile calculation means can be displayed superimposed on the received light image colored by the received-light image coloring means on the display unit. When a plurality of light spots are received in the light-section method, it is difficult to determine from the received light image alone which part of the measurement object is being measured; superimposing the profile on the received light image solves this problem and makes it easy to recognize which position of the received light distribution is being detected. The peak level of the profile waveform is also easy to recognize because of the different colors.

  The third optical displacement meter further comprises measurement area designating means for designating a desired measurement area while the profile shape is displayed on the display unit, and a measurement processing unit capable of performing a calculation on the measurement area designated by the measurement area designating means.

  In the fourth optical displacement meter, the two-dimensional light receiving element can be a CCD or a CMOS.

  The fifth optical displacement measuring method is an optical displacement measuring method capable of measuring the displacement of a measurement object based on the light-section method, comprising the steps of: irradiating the measurement object with light from the light projecting unit, either by scanning in the first direction or as band-shaped light having a spread in the first direction; receiving the reflected light from the measurement object with the two-dimensional light receiving element and outputting it as a received light signal at each position in the first direction, amplifying the received light signal with an amplifier, and converting the amplified signal into a digital signal with a digital conversion means; acquiring the received light image from the digital signals obtained by the digital conversion means at each point in the first direction from the reflected light of the irradiation light; and dividing the gradation of the received light signal into a plurality of ranges, assigning a different color to each range, and displaying the received light image on the display unit with each pixel colored according to the color assigned to its gradation. Because coloring is performed for each gradation width of the received light signal, the received light image is displayed like a contour map, and the density of the colored gradation bands shows whether the gradient of the received light distribution is steep or gentle, making it easier to grasp the degree of profile inclination visually.

  The sixth optical displacement measurement program is an optical displacement measurement program capable of measuring the displacement of a measurement object based on the light-section method. Light is irradiated from the light projecting unit onto the measurement object, either by scanning in the first direction or as band-shaped light having a spread in the first direction; the reflected light from the measurement object is received by the two-dimensional light receiving element and output as a received light signal at each position in the first direction; the received light signal is amplified by an amplifier and the amplified signal is converted into a digital signal by a digital conversion means. The program causes a computer to realize a function of acquiring the received light image from the digital signals obtained by the digital conversion means at each point in the first direction, and a function of dividing the gradation of the received light signal into a plurality of ranges, assigning a different color to each range, and displaying the received light image with each pixel colored according to the color assigned to its gradation. Because coloring is performed for each gradation width of the received light signal, the received light image is displayed like a contour map, and the density of the colored gradation bands shows whether the gradient of the received light distribution is steep or gentle, making it easier to grasp the degree of profile inclination visually.

  A computer-readable recording medium or recorded device storing the seventh program stores the above program. The recording medium includes media capable of storing programs, such as magnetic disks, optical disks, magneto-optical disks, and semiconductor memories, including CD-ROM, CD-R, CD-RW, flexible disk, magnetic tape, MO, DVD-ROM, DVD-RAM, DVD-R, DVD+R, DVD-RW, DVD+RW, Blu-ray (registered trademark), and HD DVD. The program includes not only programs stored and distributed on such recording media but also programs distributed by download through a network line such as the Internet. The recorded devices include general-purpose or dedicated devices in which the program is installed in an executable state in the form of software, firmware, or the like. Each process and function included in the program may be executed by computer-executable program software, each process may be realized by hardware such as a predetermined gate array (FPGA, ASIC), or program software may be mixed with partial hardware modules that realize some of the hardware elements.

Hereinafter, embodiments of the present invention will be described with reference to the drawings. However, the embodiments described below merely exemplify an optical displacement meter, an optical displacement measurement method, an optical displacement measurement program, a computer-readable recording medium, and recorded equipment for embodying the technical idea of the present invention; the present invention is not limited to what follows. Nor does this specification limit the members shown in the claims to the members of the embodiments. In particular, the dimensions, materials, shapes, relative arrangements, and the like of the component parts described in the embodiments are merely illustrative examples and are not intended to limit the scope of the present invention unless otherwise specified. The size, positional relationship, and the like of members illustrated in the drawings may be exaggerated for clarity of explanation. In the following description, the same name or reference numeral indicates the same or an equivalent member, and detailed description is omitted as appropriate. Each element constituting the present invention may be configured so that a plurality of elements are formed from the same member, one member serving as a plurality of elements; conversely, the function of one member may be shared among a plurality of members.
(Embodiment 1)

  FIGS. 1 to 3 show an optical displacement meter 100 according to Embodiment 1 of the present invention. FIG. 1 is a block diagram showing the configuration of the optical displacement meter 100, FIG. 2 is a perspective view showing its system configuration, and FIG. 3 is a block diagram showing the configuration of the feedback control performed by a microprocessor 44. Here, an example of feedback control will be mainly described.

  FIG. 1 shows the measurement principle of the optical displacement meter 100. The optical displacement meter 100 is also called a laser displacement meter, and is used to measure the displacement of an object in a non-contact manner using the principle of triangulation. Laser light emitted from the laser diode 12 under the control of the LD driver 11 passes through the light projecting lens 13 and irradiates the work WK. A part of the laser beam reflected by the workpiece WK passes through the light receiving lens 14 and is received by the two-dimensional light receiving element 15. The two-dimensional light receiving element 15 is a CCD or CMOS image sensor in which a plurality of pixel components are arranged in a plane, and electric charges corresponding to the amount of received light are accumulated and extracted for each pixel component.

  When the workpiece WK is displaced as indicated by a broken line in FIG. 1, the optical path of the laser beam that is reflected by the workpiece WK and reaches the two-dimensional light receiving element 15 changes as indicated by the broken line. As a result, the position of the light receiving spot on the light receiving surface of the two-dimensional light receiving element 15 moves, and the light receiving signal waveform, that is, the position of the received light image changes. Accumulated charges corresponding to the amount of received light in each pixel component of the two-dimensional light receiving element 15 are read out by the readout circuit 16, and a received light waveform that is a one-dimensional received light amount distribution is obtained by signal processing. The displacement of the workpiece WK is obtained from the peak position and the center of gravity of the received light waveform.
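The two localization methods mentioned here, the peak position and the center of gravity of the received light waveform, can be sketched as follows (illustrative Python; the function name and test waveform are hypothetical, not from the patent):

```python
def peak_and_centroid(waveform):
    """Locate the received light spot in a one-dimensional received light
    amount distribution two ways: the index of the maximum pixel (peak)
    and the intensity-weighted center of gravity (sub-pixel centroid)."""
    peak = max(range(len(waveform)), key=waveform.__getitem__)
    total = sum(waveform)
    if total == 0:
        raise ValueError("no light received")
    centroid = sum(i * v for i, v in enumerate(waveform)) / total
    return peak, centroid

# A symmetric received light spot centred on pixel 3:
peak, centroid = peak_and_centroid([0, 10, 80, 100, 80, 10, 0])
# peak index 3; centroid exactly 3.0
```

The centroid gives sub-pixel resolution when the spot spans several pixels, which is why both quantities are mentioned in the text above.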

  FIG. 2 shows the system configuration of the optical displacement meter 100. The optical displacement meter 100 includes a sensor head unit 21, a controller unit 22, and a display device 27. The sensor head unit 21 incorporates the LD driver 11, laser diode 12, light projecting lens 13, light receiving lens 14, two-dimensional light receiving element 15, and readout circuit 16 shown in FIG. 1. The controller unit 22 has a head connection unit 24 for connecting the sensor head unit 21 and a display connection unit 25 for connecting the display device 27 that constitutes the display unit. The controller unit 22 also includes a microprocessor (control unit) that controls the output (light emission amount) of the laser diode 12 via the LD driver 11 of the sensor head unit 21 and executes the processing that obtains the displacement of the workpiece WK from the signal read out of the two-dimensional light receiving element 15. The laser diode 12 may instead be controlled by the sensor head unit 21, in which case the sensor head unit has its own control unit. In addition, push button switches, a console, or connectors for connecting these can be provided on the front surface of the controller unit 22 as an input interface for various settings and operations. The controller unit 22 further includes interfaces such as a slot for a semiconductor memory, a power supply terminal block, an expansion connector, and serial ports such as USB or RS-232C. To connect a plurality of sensor head units 21 to the controller unit 22, extension units 26 can be attached. In the example of FIG. 2, one extension unit 26 is connected to the main amplifier unit 28 constituting the controller unit 22, so a total of two sensor head units 21 can be connected; adding more extension units 26 allows more sensor heads to be connected.
Needless to say, a plurality of head connection portions for connecting a plurality of sensor head units 21 may be provided in the amplifier unit itself constituting the controller unit 22, or two or more head connection portions may be provided in an extension unit. The sensor head unit 21 and the controller unit 22 are connected by an electric cable 23 that exchanges electric signals between them and supplies a power supply voltage from the controller unit 22 to the sensor head unit 21. The display device 27 is used to display numerical measurement results and various setting values, and comprises a display such as an LCD or CRT.

  As shown in FIG. 1, the laser light emitted from the laser diode 12 passes through the light projecting lens 13 and irradiates the workpiece WK. Part of the laser light reflected by the workpiece WK passes through the light receiving lens 14 and enters the two-dimensional light receiving element 15. The charges accumulated in each pixel component of the two-dimensional light receiving element 15 are read out by the readout circuit 16, which obtains a time-series voltage signal corresponding to a one-dimensional received light amount distribution by applying a pixel selection signal, a readout pulse signal, to the two-dimensional light receiving element 15 and scanning the pixel components sequentially.

  For example, if the two-dimensional light receiving element 15 consists of 256 pixels and the transfer rate per pixel is 1 microsecond, the accumulated charges of all the pixel components are read out in 256 microseconds and output by the readout circuit 16 as a time-series voltage signal. The sampling period is the time required for reading the accumulated charges of all pixels plus the time required for the control calculation. The output signal of the readout circuit 16 is passed to the controller unit 22.
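The timing arithmetic in this example works out as below (the control-calculation time is a placeholder, since the text does not give a figure for it):

```python
# Readout-time arithmetic for the 256-pixel example above.
PIXELS = 256
TRANSFER_US = 1.0                 # per-pixel transfer time, microseconds

readout_us = PIXELS * TRANSFER_US           # 256 us to read all pixels
control_calc_us = 44.0                      # hypothetical control-calc time
sampling_period_us = readout_us + control_calc_us

# The achievable measurement rate follows directly from the period:
sampling_rate_hz = 1e6 / sampling_period_us
```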

  The intensity (light emission amount) of the laser light emitted from the laser diode 12 is controlled by the microprocessor 44 shown in FIG. 3. If the intensity of the laser light changes, the amount of light reflected by the workpiece WK and incident on the two-dimensional light receiving element 15 (the received light amount) also changes. By adjusting the intensity of the laser light according to the light reflectance (brightness) of the workpiece WK, saturation of the accumulated charges in each pixel component of the two-dimensional light receiving element 15 is avoided while the dynamic range is fully utilized. Specifically, the intensity of the laser beam is adjusted by changing the pulse width or duty ratio of the pulse that drives the laser diode 12; it may of course also be adjusted by changing the pulse voltage (peak value).

  The control of the light emission amount (laser light intensity) by the microprocessor 44 is performed as a kind of feedback control: the light emission amount is controlled so that a value corresponding to the received light amount (for example, its peak value) reaches a predetermined target value. Instead of feedback control of the light emission amount, feedback control of the gain (amplification factor) of the amplifier may be performed, or the two may be combined. For example, the system can be configured to perform feedback control of the amplifier gain while the error of the feedback amount with respect to the target value is within a predetermined range, and to perform feedback control of the light emission amount when the error exceeds that range. It therefore suffices to execute feedback control of at least one operation amount among the light emission amount of the laser diode 12 and the amplification factor of the amplifier.
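The combined strategy described here, gain feedback while the error stays in band and emission feedback when it does not, can be sketched as a single decision step (all thresholds, step sizes, and names below are illustrative, not from the patent):

```python
def choose_operation(error: float, gain: float, emission: float,
                     tol: float = 0.1, gain_step: float = 0.05,
                     emission_step: float = 0.05):
    """One decision step of the combined feedback: adjust the amplifier
    gain while |error| is within the tolerance band, and fall back to
    adjusting the light emission amount when the error exceeds it."""
    if abs(error) <= tol:
        gain += gain_step * error          # fine correction via amplifier gain
    else:
        emission += emission_step * error  # coarse correction via emission
    return gain, emission

# Small error -> only the gain moves; large error -> only the emission moves.
g, e = choose_operation(0.05, gain=1.0, emission=1.0)
g2, e2 = choose_operation(0.5, gain=1.0, emission=1.0)
```

Splitting the manipulated variables this way keeps the fast, fine adjustment in the electrical gain while reserving the light emission amount for large corrections.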

  FIG. 3 is a block diagram showing the configuration of the feedback control performed by the microprocessor 44. A comparison unit 441, an operation amount calculation unit 442, and an output unit 443 are realized by the microprocessor 44 (by a program it executes). The LD driver 11 and laser diode 12 of FIG. 1 correspond to the controlled object 451, and the two-dimensional light receiving element 15, the readout circuit 16, and related circuitry correspond to the feedback circuit (FB circuit) 452.

  The comparison unit 441 compares a predetermined target value with the feedback amount and outputs the error. Based on this error, the operation amount calculation unit 442 calculates the operation amount and provides it to the output unit 443. This operation amount corresponds to the light emission amount and/or amplification factor described above. The output unit 443 of the microprocessor 44 supplies the operation amount to the controlled object 451 as a control signal; that is, a control signal is given to the LD driver 11 and/or the amplifier, controlling the light emission amount of the laser diode 12 and/or the amplification factor of the amplifier. The peak value of the received light amount obtained by the feedback circuit 452 (the two-dimensional light receiving element 15, readout circuit 16, etc.) is then fed back to the comparison unit 441 of the microprocessor 44, forming a feedback loop.

In addition to generating the profile shape, the optical displacement meter includes a measurement processing unit that performs various calculations based on the generated profile shape and a measurement region specifying unit that specifies a desired measurement region. The user therefore does not need to set up a dedicated displacement calculation program for each application and can easily measure the profile shape and displacement of the workpiece.
(Embodiment 2)

Next, FIG. 4 shows a block diagram of an optical displacement meter 200 according to Embodiment 2 of the present invention. The optical displacement meter 200 shown in this figure is likewise configured by connecting a sensor head unit 1 and a controller unit 2. The sensor head unit 1 includes a light projecting unit 3 that irradiates the workpiece WK with band light, and a two-dimensional light receiving element 15 that receives the band light reflected from the workpiece, captures a received light image, and outputs it as a received light signal at each position in the first direction. The controller unit 2 includes a light amount control unit 51, such as a driver, that controls the light amount of the light projecting element in the light projecting unit 3; a light receiving element control unit 52 that controls the light receiving characteristics of the two-dimensional light receiving element 15; an image reading unit 56 that reads the received light image captured by the light receiving element; a received light data control unit 60 that performs feedback control of the light amount control unit 51 and the light receiving element control unit 52 based on the received light signal acquired by the image reading unit 56; a measurement processing unit 54 that performs a desired calculation on the profile obtained by the received light data control unit 60; a display unit 70 that displays the received light image and profile shape acquired by the received light data control unit 60; an interface unit 80 for operating the received light data control unit 60 and the display unit 70; a memory unit 90 that holds necessary data; a stability output means that outputs the stability of the received light peak waveform; a mode switching unit 53 that switches between a measurement mode and a setting mode; and an alarm detection means 55 that generates an alarm signal.
The image reading unit 56 includes an amplifier that amplifies the received light signal from the two-dimensional light receiving element 15 and a digital conversion means that converts the amplified signal obtained by the amplifier into a digital signal.
(Light projector 3)

  The light projecting unit 3 includes a light projecting element and a light projecting lens. As the light projecting element, a light emitting diode (LED) can be used in addition to a semiconductor laser (LD). The light projecting lens includes a collimating lens, a cylindrical lens, and a rod lens. The light emitted from the semiconductor laser that is the light projecting element of the light projecting unit 3 is formed into a belt shape by the light projecting lens, and is irradiated to the measurement object. The band light is also called laser sheet light, slit light, line beam, or the like.

The light emission amount of the light projecting unit 3 can be controlled by adjusting parameters such as the amplitude of the drive current of the light projecting element and its ON duty. Accordingly, the manipulated variables include parameters such as the drive amplitude or light emission level of the light projecting element, its duty ratio or light emission time, the exposure time of the two-dimensional light receiving element 15, and the amplification factor (signal level gain) of the amplifier.
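The way these manipulated variables trade off against one another can be illustrated with a minimal model. The following sketch assumes a simple proportional relationship (the constant `K` and the function name are hypothetical, not part of the device specification); it only shows that, for example, a reduction in duty ratio can be compensated by a longer exposure time.

```python
def received_level(amplitude, duty_ratio, exposure_ms, gain):
    """Illustrative model: the received signal level scales with each
    manipulated variable listed above (drive-current amplitude, ON duty,
    exposure time of the light receiving element, amplifier gain)."""
    K = 0.5  # hypothetical constant lumping optics and workpiece reflectivity
    return K * amplitude * duty_ratio * exposure_ms * gain

# Halving the duty ratio is compensated by doubling the exposure time:
a = received_level(amplitude=1.0, duty_ratio=0.8, exposure_ms=2.0, gain=4.0)
b = received_level(amplitude=1.0, duty_ratio=0.4, exposure_ms=4.0, gain=4.0)
```

Because several variables influence the same received level, the controller is free to choose whichever one (or combination) best suits the measurement purpose.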
(Two-dimensional light receiving element 15)

The band light from the light projecting element is reflected by the work surface (or its back surface or interior) and received by the two-dimensional light receiving element 15 through the light receiving lens, whereby a received light image is acquired. As the two-dimensional light receiving element 15, a two-dimensional image sensor formed by arranging a plurality of rows of CCD or CMOS linear image sensors can be used. For example, the two-dimensional light receiving element 15 is configured by arranging one-dimensional CCD linear image sensors in parallel, one per pixel line. The CCD is an image sensor well suited to detecting the peak of the received light waveform. In particular, when a position sensitive detector (PSD) is used as the light receiving element, the illuminance barycentric position of the entire light receiving surface is detected, so the influence of secondary reflection, tertiary reflection, irregular reflection, and the like is large. The CCD, by contrast, can accurately detect the light receiving peak position based on per-pixel information, and can perform accurate measurement without being affected by such irregular reflection.
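The contrast between per-line peak detection (CCD) and whole-surface centroid detection (PSD) can be sketched as follows. This is an illustrative comparison, not the device's actual algorithm; the three-point parabolic refinement is one common way to obtain a sub-pixel peak position.

```python
def peak_position(line):
    """CCD-style detection: sub-pixel peak of one pixel line via a
    three-point parabolic fit around the brightest pixel."""
    i = max(range(len(line)), key=lambda k: line[k])
    if 0 < i < len(line) - 1:
        l, c, r = line[i - 1], line[i], line[i + 1]
        denom = l - 2 * c + r
        if denom != 0:
            return i + 0.5 * (l - r) / denom
    return float(i)

def psd_centroid(line):
    """PSD-style detection: illuminance centroid of the whole line.
    Stray light anywhere on the line shifts the result."""
    total = sum(line)
    return sum(k * v for k, v in enumerate(line)) / total

# Main peak at pixel 10, plus a weak secondary reflection at pixel 40:
line = [0.0] * 50
line[9], line[10], line[11] = 50.0, 100.0, 50.0
line[40] = 30.0
```

Here `peak_position(line)` still returns the true peak at pixel 10, while `psd_centroid(line)` is pulled toward the spurious reflection at pixel 40, mirroring the sensitivity to irregular reflection described above.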
(Light reception data control unit 60)

The received light data control unit 60 realizes the functions of received light level control means 61, which calculates the peak level of each line from the received light image and controls the received light amount; an image processing unit 62, which performs various image processing such as calculating the profile shape from the received light image data; and a calculation unit 610, which performs various calculations such as determining the state of the workpiece. The image processing unit 62 in turn realizes the functions of profile calculation means 64 that calculates a profile, trend graph generation means 65 that generates a trend graph, profile coloring means 66, profile highlighting means 67, received light image coloring means 68, multiple synthesis means 69, and the like.
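The contour-map-like coloring attributed to the received light image coloring means 68 (dividing the gradation of the light reception signal into ranges and assigning a different color to each range) can be sketched minimally as follows. The band count, maximum level, and function name are illustrative assumptions; a real implementation would map each band index to an actual display color.

```python
def colorize(image, n_bands=4, max_level=255):
    """Divide the received-light level range [0, max_level] into n_bands
    equal ranges and assign each pixel the index of its band.  Rendering
    the band indices as distinct colors yields a contour-map-like view
    in which steep gradients show as narrow bands (illustrative sketch)."""
    width = max_level / n_bands
    return [[min(int(v / width), n_bands - 1) for v in row] for row in image]

# One image row spanning dark to near-saturated levels:
img = [[0, 60, 120, 250]]
bands = colorize(img)
```

With four bands over 0-255, the row above maps to band indices `[0, 0, 1, 3]`; adjacent pixels landing in distant bands indicate a steep received-light gradient.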
(Calculation unit 610)

The calculation unit 610 realizes the functions of inclination angle calculation means 611, which calculates the inclination angle formed by the horizontal reference position and the actual horizontal line; height difference calculation means 612, which calculates the height difference between the height difference reference positions; inclination correcting means 613, which corrects, in the profile shape showing a step of the workpiece, the inclination of the sensor head relative to the workpiece; profile matching means 614, which performs a profile search to synthesize profile shapes based on the step portions of a plurality of profile shapes; difference extraction means 615, which extracts difference information from the common profile shape synthesized by the profile matching means 614; work determination means 63, which determines the state of the work; image search means 616, which performs an image search on the received light image; mask moving means 617, which moves the light receiving mask area to an appropriate position according to the image search result; edge surface calculation means 618, which calculates the position of an edge surface, from the received light image of a workpiece having an edge surface substantially parallel to the irradiation light, based on the change in the amount of received light between the pixels of adjacent two-dimensional light receiving elements; and so on. Details of these will be described later. The received light data control unit 60 can be constituted by a microprocessor such as an ASIC. In the examples of FIGS. 4 and 10, the light reception data control unit 60 that performs light amount feedback control is provided in the controller unit 2; however, the laser diode 12 may instead be controlled on the sensor head unit 1 side. For example, as shown in FIG. 5, a head control unit 50 can be provided on the sensor head unit 1 side, and the head control unit 50 can control the light emission amount of the laser diode 12 or perform the light amount feedback control.
(Light reception level control means 61)

The light reception level control means 61 sets the light emission amount of the light projecting unit 3, the gain of the two-dimensional light receiving element 15, and the amplification factor of the amplifier to appropriate values so that the distribution of the peak levels of the light reception signal waveform in each line of the digital signal obtained by the digital conversion means falls within an appropriate range. Besides acquiring the amount of received light over the entire range, control can also be limited to only the area necessary for measurement. For example, when only dark part information is required, control can be performed while ignoring saturation of a bright part, and a highly accurate received light image suited to the measurement purpose can be obtained.

In addition, at least one parameter among the manipulated variables, including the light emission amount of the light projecting unit 3 and the amplification factor of the amplifier, is feedback-controlled so that the distribution of the peak levels of the light reception signal waveform between the lines falls within a predetermined range. Here, feedback control is performed so that the amount of received light is kept constant. The operation amount can be calculated and adjusted by the light reception level control means 61, and can also be operated by the user from the operation amount adjustment means 81 of the interface unit 80.
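One step of such a feedback loop can be sketched as follows. The target window values and the choice of exposure time as the manipulated variable are illustrative assumptions; as noted above, the actual controller may act on the light emission amount, the amplifier gain, or several variables at once.

```python
def adjust_exposure(peak_level, exposure, target_low=150, target_high=220):
    """One feedback step: scale the exposure time so that the peak
    received-light level moves into the target window (sketch)."""
    if peak_level <= 0:
        return exposure * 2.0          # no signal at all: open up quickly
    if target_low <= peak_level <= target_high:
        return exposure                # already in range, leave unchanged
    target_mid = (target_low + target_high) / 2
    return exposure * target_mid / peak_level

# A saturated peak (255) at 4 ms exposure is scaled down toward mid-range;
# an in-range peak (180) leaves the exposure untouched.
shorter = adjust_exposure(255, 4.0)
unchanged = adjust_exposure(180, 4.0)
```

Iterating this step each frame drives the peak level toward the window, which is the "received light amount kept constant" behavior described above.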
(Profile calculation means 64)

The profile calculation means 64 measures the displacement of the workpiece based on the light section method and calculates the profile shape of the workpiece. The calculated profile shape of the workpiece is displayed as a waveform on the display unit 70.
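The core of the light section method can be sketched as follows: for each column of the received light image, the row of the brightest pixel is found, and its displacement from a reference row is converted into a height. The linear `mm_per_pixel` scale is a deliberate simplification of the real triangulation geometry, and all names here are illustrative.

```python
def profile_from_image(image, mm_per_pixel=0.05):
    """Light-section sketch: per image column, locate the peak row of the
    reflected band light and convert its offset from the reference row
    into a height value, yielding one profile point per column."""
    n_rows = len(image)
    ref_row = n_rows // 2            # assumed zero-height reference line
    profile = []
    for col in range(len(image[0])):
        column = [image[r][col] for r in range(n_rows)]
        peak_row = max(range(n_rows), key=lambda r: column[r])
        profile.append((ref_row - peak_row) * mm_per_pixel)
    return profile

# 4x3 image with the bright line one row above the reference row:
img = [[0, 0, 0],
       [9, 9, 9],
       [0, 0, 0],
       [0, 0, 0]]
prof = profile_from_image(img)
```

Every column here sees its peak one row above the reference, so the resulting profile is a flat surface raised by one pixel pitch (0.05 mm under the assumed scale).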
(Measurement processing unit 54)

The measurement processing unit 54 performs installation correction processing, position correction processing, measurement processing, and the like on the profile shape calculated by the profile calculation means 64. The measurement processing includes various calculations such as height difference measurement of a specified line segment, inclination angle detection, and calculation of a specified area.
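Two of the measurement calculations named above, height difference of a designated region and inclination angle detection, can be sketched on a profile array. The region indices, pitch, and function names are illustrative assumptions; the least-squares line fit is one straightforward way to obtain an inclination angle.

```python
import math

def step_height(profile, left, right):
    """Height difference between the mean levels of two designated
    index regions of a profile (height-difference measurement sketch)."""
    def mean(idx):
        return sum(profile[i] for i in idx) / len(idx)
    return mean(right) - mean(left)

def inclination_deg(profile, pitch_mm=0.1):
    """Inclination angle of a profile via a least-squares line fit."""
    n = len(profile)
    xs = [i * pitch_mm for i in range(n)]
    mx, my = sum(xs) / n, sum(profile) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, profile))
             / sum((x - mx) ** 2 for x in xs))
    return math.degrees(math.atan(slope))

# A 1.0 mm step between the left and right halves of a profile:
prof = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
h = step_height(prof, left=[0, 1, 2], right=[3, 4, 5])
```

The same pattern extends to the other listed calculations (area of a designated region, length of a line segment) by summing or integrating over the designated indices.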
(Interface unit 80)

  The interface unit 80 constitutes input means for performing necessary inputs and operations, such as settings for the controller unit 2. An input/output device can be used as the member with which the user performs settings and operations; alternatively, a user interface screen of an optical displacement measurement program for operating the optical displacement meter may be used. Input/output devices constituting the interface unit are connected to a computer by wire or wirelessly, or are fixed to the computer or the like. Examples of common input means include various pointing devices such as a mouse, keyboard, slide pad, track point, tablet, joystick, console, jog dial, digitizer, light pen, numeric keypad, touch pad, and AccuPoint. These input/output devices are not limited to program operations and can also be used to operate hardware such as the optical displacement meter itself. Furthermore, by using a touch screen or touch panel for the display itself, the display unit 70 that shows the interface screen can serve as both the display unit 70 and the interface unit 80, allowing input and operation by touching the screen directly; voice input or other existing input means can also be used, or these means can be combined.

In the example of FIG. 4, the interface unit 80 realizes the functions of operation amount adjusting means 81; measurement region specifying means 82; time specifying means 83; sampling specifying means 84; control region specifying means 85; mask region specifying means 86; multiple composition condition setting means 87; multiple composition range restriction means 88; pre-combination image selection means 89; horizontal part designation means 812 for designating the horizontal reference position from within the profile shape; height difference designation means 814 for designating, in the profile shape of a workpiece having a step of known height, each surface constituting the step as a height difference reference position; tilt angle adjusting means 816 for manually adjusting the tilt angle calculated by the tilt angle calculating means 611; height difference adjusting means 818 for manually adjusting the height difference calculated by the height difference calculating means 612; common profile designating means 820 for designating each step profile shape from the profile shape of a workpiece having a step shape; reversing means 822 capable of reversing the profile shape displayed on the display unit vertically or horizontally; profile moving means 824 capable of manually moving and/or rotating the profile shape displayed on the display unit; inclination adjusting means 826 for manually adjusting the inclination calculated by the inclination correcting means 613; arrangement mode selection means 828 for selecting, in a measurement mode for measuring the displacement of a workpiece, the arrangement layout of two or more sensor head portions from among horizontal arrangement, vertical arrangement, and sandwich arrangement; registration profile designation means 830 for designating, from the profile shape, a registration profile serving as the reference for performing a profile search; invalid area designating means 832 for designating, in the registration profile, an invalid area whose importance in the profile search is reduced; invalidating means 834 for automatically extracting a part of the registered profile having a large change and setting it as an invalid area; and measurement light selection means 836 for selecting, when a plurality of received light signal waveforms are present on a desired measurement line on the display unit, which received light signal waveform is to be measured as the measurement light. The operation amount adjusting means 81 adjusts the operation amount of the feedback control. The measurement region designating means 82 designates, from the profile display area 71 of the display unit 70, a measurement region to be measured by the measurement processing unit 54. The time designation means 83 designates a time from the trend graph display area 72 of the display unit 70. The sampling designation means 84 designates the timing and/or the number of profile shapes to be recorded in the memory unit 90. The control region designating means 85 designates a control region to be subjected to feedback control. The mask region designating means 86 designates a mask region that is excluded from feedback control. Here, the profile mask area set on the profile shape and the light receiving mask area set on the received light image can both be specified by the common mask area specifying means 86. Alternatively, individual mask area specifying means may be provided, namely profile mask area specifying means for setting the profile mask area and light receiving mask area specifying means for specifying the light receiving mask area.
(Profile mask area specification function)

  A profile mask region excluded from feedback control can also be specified on the profile shape. Here, the mask area designation means 86 designates, from the profile display area 71, a profile mask region PM that is excluded from feedback control. In the example of FIG. 6, a frame-like profile mask region PM indicating a range in which the profile shape is unstable is drawn with a one-dot chain line. In feedback control, the measurement values in the profile mask region PM are ignored, so an accurate feedback control result free of such unstable regions can be obtained, and stable, reliable control and measurement are realized.
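The effect of ignoring masked measurement values during feedback control can be shown with a minimal sketch. The column-range representation of the mask and the function name are illustrative assumptions.

```python
def masked_peak(levels, mask_ranges):
    """Peak received-light level computed while ignoring positions that
    fall inside mask regions, so unstable regions do not disturb the
    feedback control statistics (illustrative sketch)."""
    masked = set()
    for lo, hi in mask_ranges:
        masked.update(range(lo, hi + 1))
    return max(v for i, v in enumerate(levels) if i not in masked)

# Positions 2-3 are unstable (saturated) and masked out:
levels = [100, 110, 255, 255, 120, 90]
peak = masked_peak(levels, mask_ranges=[(2, 3)])
```

Without the mask the saturated values would dominate the peak statistic and drive the control the wrong way; with the mask, feedback sees only the stable region.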

The multiple composition condition setting means 87 sets the multiple composition conditions under which multiple composition is performed, and further includes multiple composition range limiting means 88 for restricting the multiple composition range. The pre-combination image selection means 89 designates the imaging conditions for the pre-combination images.
(Memory unit 90)

  The memory unit 90 functions as a received light peak storage unit 91 and a pre-combination image storage unit 92. The received light peak storage unit 91 holds the counted number of received light peaks when the workpiece determination means 63 performs received light amount control with peak-count storage.

It is also possible to realize a profile storage function that sequentially stores, in the memory unit 90, the captured received light image and the calculated profile shape together with the time of measurement. The history of past profile changes can thereby be retained. The data can be saved not only as a waveform-shaped line but also as a set of numerical data (points) or as a profile image displaying the profile shape. The memory unit 90 can also be used to hold various settings. Furthermore, a trend graph, and the periods during which the alarm detection means 55 outputs an alarm signal, can be recorded in the memory unit 90.
(Profile data storage function)

  Further, the device has a profile data storage function for saving profile data in a profile data storage area of the memory unit, so that the data can be recalled and confirmed later. This is particularly useful for measuring the amount of change when the shape of the workpiece differs over time. For example, as shown in the perspective view of FIG. 103 and the cross-sectional view of FIG. 104, consider an application that confirms whether a coating device TS applying an adhesive SZ has applied the proper amount of adhesive SZ at a predetermined position on the workpiece WK15. If the background, that is, the shape of the workpiece WK15 to be coated, is constant, it can be registered in advance as a registered image, and in the measurement mode only the profile shape after application need be compared against the registered image. In reality, however, the shape of the workpiece may not be constant, so the profile shape before application and the profile shape after application must be measured individually. In this case, the profile shape must be acquired for the same workpiece at different timings.

  In such an application, the profile data storage function for saving profile shapes can be used to advantage. That is, the profile shape of the workpiece is acquired before and after applying the adhesive at predetermined timings and stored in the data storage area of the memory unit; by recalling the data to the display unit with the acquisition time specified, the profile shapes before and after application can easily be compared. On the display unit, a plurality of profile shapes with different acquisition times can be displayed side by side, or switched between. Furthermore, the difference between profile shapes can be displayed as a difference profile as necessary.
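The before/after comparison can be sketched as a point-by-point subtraction. This sketch assumes the two profiles are already aligned (in the device, the profile matching means handles that synthesis); the function names and the pitch value are illustrative.

```python
def difference_profile(before, after):
    """Point-by-point difference of aligned profiles taken before and
    after application; the positive part approximates the applied bead."""
    return [b - a for a, b in zip(before, after)]

def applied_area(diff, pitch_mm=0.1):
    """Cross-sectional area of the applied material, integrating only
    the positive part of the difference profile (sketch)."""
    return sum(max(d, 0.0) for d in diff) * pitch_mm

before = [0.0, 0.0, 0.0, 0.0]       # flat groove bottom before application
after = [0.0, 0.5, 0.5, 0.0]        # adhesive bead after application
diff = difference_profile(before, after)
```

The area of the difference profile gives a single number, for example the applied cross-section, that can be judged against a tolerance instead of displaying the full profile.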

  With the profile data storage function, even a single sensor head unit can image the same workpiece at different timings, after which the data are read from the data storage area and displayed on the display unit. FIG. 106 is a timing chart for measuring the shape of a workpiece before and after machining with one sensor head unit. In the figure, positions indicated by N, M, and so on indicate the workpiece measurement positions of the sensor head unit 1. In FIG. 106, the shape of the workpiece before machining is measured by the sensor head unit 1 at position N, and the shape of the same workpiece after machining is measured by the sensor head unit 1 at position M. Similarly, the shape of the next workpiece before machining is measured by the sensor head unit 1 at position N+1, and after machining at position M+1. In this case, the sensor head unit 1 or the workpiece is moved relatively, and measurement proceeds in the order N → M → N+1 → M+1. In this way, the sensor head unit can be shared between measurements before and after processing, and time-difference processing, that is, acquisition and comparison of profile shapes at different timings, can be performed with fewer hardware resources. However, as shown in FIG. 104, this requires either a mechanism that moves the sensor head unit 1 to positions where the workpiece WK15 can be imaged both before and after application of the adhesive SZ, or a mechanism that returns the workpiece WK15 to the sensor head unit 1 after conveyance. For this reason, it is preferable to use two or more sensor head units and image the workpiece before and after application with separate sensor head units; this configuration simplifies the mechanisms for moving the sensor head unit and the workpiece.
In the present embodiment, as shown in FIG. 40 described later, two sensor head units are used, and the profile shapes of the workpiece WK15 shown in FIGS. 103 and 104, before and after application of the adhesive SZ, are acquired by the respective sensor head units 1. FIG. 40 is a block diagram of the controller unit 2 to which two sensor head units can be connected. The sensor head units 1 measure the profile shape before application of the adhesive shown in FIG. 105(a) and the profile shape after application shown in FIG. 105(b). These profile shapes are synthesized based on their shape to obtain a synthesized profile as shown in FIG. 105(c). Further, as shown in FIG. 105(d), by extracting the difference profile, an accurate profile of the applied adhesive can be obtained.

  In the example of FIG. 103, a part of the workpiece WK15 away from the application position of the adhesive SZ is measured as the profile shape before application. As shown in FIG. 103, when the application position is a groove in the workpiece and the same profile is obtained at any position along it, a portion away from the application position may serve as the before-application profile shape. With this method, the profile shapes before and after application can be acquired at substantially the same timing. Of course, the profile shape of the same part can also be imaged before and after application, which enables more accurate measurement of the adhesive even when the molding accuracy of the workpiece is particularly poor.

  Although the example described here measures the application amount of an adhesive, the present embodiment is not limited to this, and can be used widely in applications that compare a shape that changes before and after processing, such as confirmation of a machined shape.

Furthermore, in addition to a form in which display and processing are performed sequentially, batch processing can be performed once a predetermined amount of data has been collected. For example, as shown in FIG. 107, the profile shape before application of the adhesive is measured and stored at a plurality of locations, and after application the shapes at those locations are measured again; batch processing using the stored profile shapes can then be performed. In the example of FIG. 107, measurement proceeds in the order N → N+1 → N+2 → ... → M → M+1 → M+2, so that all positions are measured before machining first and then all positions are measured after machining. In this case, it is possible either to use two sensor head units 1 to measure the workpiece before and after machining respectively, or to measure the workpiece before and after machining with a single sensor head unit 1.
(Measurement pitch)

  The timing, or measurement pitch, at which profile shapes are stored in the data storage area of the memory unit can be set arbitrarily according to the speed of the workpiece conveyance line, the required profile accuracy, and so on. As shown in FIG. 108, when the measurement pitch is relatively coarse, the profile shape acquired immediately before can simply be displayed next. In the example of FIG. 108, before and after machining can be measured sequentially for the same workpiece: workpieces are measured and read in the order N → M → N+1 → M+1 → N+2, and the shapes before and after machining are compared. On the other hand, when a fine measurement pitch is desired, a plurality of profile data exist before the adhesive is applied, so these data must be stored temporarily and read out again at display time. In the example of FIG. 109, the measurement position after application lags the measurement position before application by four cycles. That is, the post-application measurement position corresponding to pre-application measurement position N is M+4, the position corresponding to N+1 is M+5, and likewise the position corresponding to N+2 is M+6. For this reason, the measurement pitch is set in accordance with the storage timing and the display timing, so that the profile shape displayed on the display unit can be identified by, for example, how many measurement pitches or sampling periods it is offset.
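The temporary storage and fixed-offset pairing described for fine measurement pitches can be sketched with a FIFO buffer. The offset of four cycles matches the FIG. 109 example; the function name and stream representation are illustrative assumptions.

```python
from collections import deque

def pair_measurements(samples, offset=4):
    """Pair each pre-application measurement N with the measurement taken
    `offset` cycles later (N <-> M = N + offset), buffering the earlier
    samples in a FIFO until their partner arrives (illustrative sketch)."""
    buffer = deque()
    pairs = []
    for i, sample in enumerate(samples):
        buffer.append((i, sample))
        if len(buffer) > offset:
            pre_i, pre = buffer.popleft()
            pairs.append((pre_i, i, pre, sample))  # (N, M, before, after)
    return pairs

# Six measurement cycles; with offset=4, cycle 0 pairs with cycle 4, etc.
pairs = pair_measurements(["p0", "p1", "p2", "p3", "p4", "p5"], offset=4)
```

The buffer never holds more than `offset` entries, which bounds the temporary storage needed regardless of how long the conveyance line runs.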

  The timing for saving the profile shape can also be specified in various ways. For example, saving can be triggered by a timing signal generated autonomously inside the controller unit, or by an externally input trigger signal. In a configuration in which a plurality of sensor head units are connected, the timing signal and the trigger signal can be common to all sensor head units, or individual to each sensor head unit. For example, various operations can be set, such as a mode in which each sensor head unit operates individually according to its own trigger signal, a mode in which two sensor head units operate in synchronization with a single trigger signal, or a mode in which profile acquisition is performed a plurality of times for one trigger signal.

Needless to say, processing of the stored profile shape is not limited to display on the display unit and may include other processes. For example, the difference profile shown in FIG. 105(d) can be calculated and displayed on the display unit, or the display of the profile shape can be omitted and only a calculation result, such as the area and height of the difference profile, displayed. Various such processes are possible.
(Saved data type setting)

  The data to be stored can be set arbitrarily: for example, all profiles, or only profiles in which an abnormality occurred (NG profiles). Saving all profiles poses little problem when abnormalities occur frequently; when abnormalities are rare, however, it is not easy to find the defective profile data among the large number of saved profiles.

  Therefore, in the present embodiment, when profile data is stored in the profile data storage area, profile data in which an abnormality has occurred is stored flagged as such. FIG. 7 shows a list of stored profile data. In this example, in addition to the profile number and profile acquisition time, each profile data item in the list carries a flag indicating whether an error (NG) occurred during measurement.

  The stored profile data can be displayed as a list on the display unit, for example in a table format as shown in FIG. 7. When a profile number is selected from the list, the selected profile waveform is displayed in the profile display area 71. In addition to the selected profile, a plurality of comparison profiles can be displayed. In this way, multiple profiles can be displayed in parallel and easily compared, which is useful for confirming the location of an abnormality or the process by which it arose. Furthermore, the profile can be displayed with highlighting such as coloring. For example, by displaying all the profile data and overlaying only the selected profile in a different color, the tendency and variation of the profiles can be grasped intuitively.

Whether profile data is abnormal is determined according to predetermined determination conditions: for example, whether the profile data contains a saturated point, or a point whose received light amount falls below a predetermined lower limit threshold.
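The example determination conditions above can be sketched directly. The threshold values and record layout are illustrative assumptions, not the device's actual parameters.

```python
def is_ng(profile_levels, saturation=255, lower_limit=10):
    """Judge a profile abnormal (NG) if it contains a saturated point or
    a point whose received-light amount is below the lower threshold,
    per the example determination conditions above (sketch)."""
    return any(v >= saturation or v < lower_limit for v in profile_levels)

# Stored records gain an NG flag so defective profiles are easy to find:
records = [
    {"no": 1, "levels": [120, 130, 140]},
    {"no": 2, "levels": [120, 255, 140]},   # contains a saturated point
]
for r in records:
    r["ng"] = is_ng(r["levels"])
```

Filtering the stored list on the `ng` flag then yields exactly the rare defective profiles, addressing the search problem described above.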
(Display unit 70)

  The display unit 70 displays calculation results, captured image data, and the like. The example shown in FIG. 4 provides a profile display area 71 for displaying the profile shape, a trend graph display area 72 for displaying a trend graph, a light quantity graph display area 73 for displaying a light quantity graph, a received light image display area 74 for displaying the received light image, and so on. These areas can be shown on one screen by dividing the display area as appropriate, or shown by switching between a plurality of screens. For example, as shown in FIG. 8, the profile shape of the workpiece calculated by the profile calculation means 64 is displayed, so that the profile shape can be confirmed visually. Various calculations can be performed on the profile shape by the measurement processing unit 54; for example, the height difference of a stepped portion, or the length and area of a designated line segment, can be calculated. The part to be calculated is designated with the measurement region designation means 82, constituted by a pointing device such as a mouse, or a keyboard, of the interface unit 80.

  In addition, as shown in FIG. 9, the received light image captured by the two-dimensional light receiving element 15 can be displayed in the received light image display area 74. The received light image is useful for checking the received light luminance distribution as the raw image acquired by the light receiving element, for example when the profile shape of the workpiece is not being measured properly. The profile shape and the received light image can be displayed simultaneously or by switching the display screen. In particular, besides displaying the profile shape and the received light image side by side on two separate screens, they can be displayed superimposed on one screen. The display unit 70 can be a CRT monitor, a liquid crystal display, or the like, and can be configured either as built into the controller unit 2 or as a separate member.

  In the block diagram of FIG. 4, an arrow indicated by a bold line indicates a flow of image data, a broken line arrow indicates a flow of profile data, and a thin line arrow indicates a flow of a control signal. In the optical displacement meter 200, first, the light projecting element of the light projecting unit 3 is turned on, the band light is irradiated onto the work, and the reflected light is imaged by the light receiving element. The analog signal is captured by the image reading unit 56, amplified by an amplifier, and converted into a digital signal by a conversion means. As a result, the received light image is acquired by the image reading unit 56 and the profile shape is generated by the image processing unit 62 of the received light data control unit 60. These image data are displayed on the display unit 70. On the other hand, the light reception level control means 61 performs feedback control, and the operation amount is adjusted so that the peak value of the light reception amount can be obtained appropriately.

The above division between the sensor head unit and the controller unit is merely an example; the sensor head unit may take over part of the functions of the controller unit, or the two may be integrated. It is also possible to connect a plurality of sensor head units to one controller unit and control them together. The connection between the controller unit and the sensor head unit can be made by data communication in addition to I/O connection.
(Communication means 57)

  The optical displacement meter may further include communication means 57 for communicating with an external device GK. As a modification of the second embodiment, FIG. 10 shows a block diagram of an optical displacement meter 300 provided with communication means 57. In the optical displacement meter 300, the mode switching unit 53 switches between the measurement mode and an operation amount adjustment mode, in which the operation amount adjustment means 81 adjusts the operation amount by communicating with the external device GK via the communication means 57. In the operation amount adjustment mode, the operation amount calculated by the light reception level control means 61 is transmitted to the external device GK via the operation amount adjustment means 81 and the communication means 57, and the operation amount data is stored in the external device GK. In this way, in addition to adjustment of the feedback control operation amount by the operation amount adjusting means 81, the optical displacement meter can be adjusted from an external device GK connected via the communication means 57. For example, a computer can be connected as the external device GK, and the operation amount adjustment function realized by a program running on the computer; in this case, the adjustment operation is made easier to understand through the user interface provided by the computer's display and input devices.

Note that communication with the external device GK via the communication means 57 is not limited to wired communication; a wireless connection can also be used. In addition, the program running on the computer serving as the external device GK need not automatically determine an appropriate adjustment of the operation amount from the operation amount data; it may instead provide a display and user interface that let the user arbitrarily set or change the adjustment amount.
(Optimization of received light image capturing conditions)

  Workpieces actually expected as objects to be measured by the optical displacement meter include the work WK1 shown in FIG. 11, which has a flat surface and a uniform surface state; the work WK2 shown in FIG. 12, whose surface is flat but whose surface state, in particular reflectivity, differs from part to part; and the work WK3 shown in FIG. 13, which has a three-dimensional shape and whose reflection state differs from part to part. For the workpiece WK1 of FIG. 11, the distribution of the reflected light amount is almost constant, so there is little variation in the peak level and the entire reflection can be grasped with one received light image. On the other hand, for the work WK2 of FIG. 12, the amount of reflected light differs from part to part, so it becomes necessary either to adjust the operation amount or to capture a plurality of received light images under different imaging conditions and combine them by multi-exposure. Similar processing is required for the workpiece WK3 of FIG. 13. In this case, the points to be imaged differ depending on which profile information of the workpiece is to be used. For example, when as many parts as possible are to be measured, exposure time adjustment and multi-exposure are necessary so that the entire received light amount distribution can be grasped, as described above. On the other hand, for the workpiece WK3, when the measurement accuracy at the corners, where the amount of reflected light is extremely small, can be ignored, the received light amount control may be applied to the apex portion of the workpiece. Conversely, when the accuracy of the corners matters more than that of the apex portion, as in width measurement, the received light amount control may be applied to the corners.
In this way, if the amount of reflected light is controlled according to the application and purpose, it is sufficient to image only the surface portion of the workpiece when parts with an extremely small amount of reflected light need not be measured. Conversely, for applications that do not require measurement of parts with an extremely large amount of reflected light, the surface portion of the workpiece may be excluded from the imaging, processing, and control targets, and only the corners captured. Profile information can thus be acquired efficiently and accurately by imaging appropriately according to the application and purpose. The present embodiment has functions suited to such application-specific imaging, each of which is described below.

Examples of the operation amount to be adjusted so that the received light image and the profile shape can be obtained appropriately include the light emission level and light emission time of the light emitting element, the exposure time of the two-dimensional light receiving element, and the circuit gain of the amplifier. These operation amounts can be set directly by the user with the operation amount adjusting means, or set automatically on the optical displacement meter side; in particular, an appropriate value can be set by feedback control with the light receiving level control means. However, the range over which the operation amount can be set is vast, the amount of reflected light varies greatly depending on the workpiece, and it may take time until the control stabilizes. Therefore, as a method of obtaining more stable operation, the sensitivity characteristic of the two-dimensional light receiving element is adjusted, and a two-dimensional light receiving element whose light receiving characteristic follows a curve that does not saturate is used; specifically, a two-dimensional light receiving element having a Log characteristic. As a result, the absolute value is suppressed in the region where the light reception signal is high, the difference from signals in the low region is relatively reduced, and even if there are large differences in light reception signal between the lines of the two-dimensional light receiving element, they can be reduced, widening the range of received light signals that one received light image can handle. That is, imaging can be completed in a single capture, and even when multi-exposure combining a plurality of received light images is performed, the number of captures can be reduced.
(Mode switching means 53)

  The mode switching unit 53 switches between a measurement mode for measuring the displacement of the measurement object and a setting mode for setting the operation amount. In the setting mode, the light projecting unit 3 irradiates the measurement object with band-shaped light in advance, the distribution of the peaks of the amplified signal at each position in the first direction is measured, and the received light level control means 61 can adjust the operation amount according to this distribution in the first direction. As a result, the light receiving level control means 61 can set the operation amount to an appropriate value based on the peak distribution at each position in the first direction, reducing the differences between lines in the peak levels obtained by the two-dimensional light receiving element 15 and allowing the peak levels to be captured effectively for displacement measurement. In this way, by once running the workpiece down the line prior to actual operation and performing optimal adjustment, appropriate control can be realized without trial and error during operation. In particular, by setting in advance the range to be varied for the actual workpiece, divergence during feedback control can be prevented and more stable operation can be expected.

Alternatively, depending on the application and purpose, the displacement of the measurement object can be measured while feedback control by the light receiving level control means 61 is executed even in the measurement mode. It is also possible to set the operation amount, or a rough operation amount, in the setting mode in advance and then operate while refining it to a more optimal value in the measurement mode.
(Automatic adjustment function)

  Further, as will be described later, when an element whose light receiving characteristics can be adjusted is used as the two-dimensional light receiving element, a function may be provided in which the light receiving level control means automatically adjusts the light receiving sensitivity of the two-dimensional light receiving element from the luminance distribution information of the received light image in the setting mode. Specifically, the workpiece is actually placed or run down the line, a received light image of the workpiece is captured by the two-dimensional light receiving element, and luminance distribution data is acquired. Based on this luminance distribution information, the light receiving level control means automatically adjusts the light receiving characteristics of the two-dimensional light receiving element, that is, adjusts them so that all necessary luminance information can be acquired appropriately from the luminance distribution data obtained by imaging an actual workpiece. Specifically, as shown in the example of the profile display area 71 in FIG. 14, the amount of reflected light at each position on the workpiece WK is detected line by line by the two-dimensional light receiving element; each arrow in FIG. 14 corresponds to one line of the element. The peak value of the received light amount of each line is extracted, the frequency of each peak value (luminance) is counted over the entire received light image, and a histogram of the brightness distribution of the received light image as shown in FIG. 15 is acquired. Based on this histogram, the distribution range is determined, and the light receiving characteristics of the two-dimensional light receiving element (FIG. 16 and the like) are adjusted or selected so as to cover this range.
When the most appropriate selection or adjustment is impossible, the closest available setting can be made and the user warned that the optimum setting cannot be achieved. It is also possible to capture two or more received light images with different light receiving characteristics as necessary. Because a suitable light receiving characteristic can thus be set in advance during setting, before entering the measurement mode, more stable operation can be expected in the measurement mode.
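The per-line peak extraction and histogram construction described above can be sketched as follows in Python. This is a minimal illustration only: the array layout, bin count, and 10-bit full scale are assumptions, not part of the embodiment.

```python
import numpy as np

def peak_histogram(image, n_bins=16, full_scale=1023):
    """Per-line peak extraction and brightness histogram.

    `image` is a 2-D array of received light levels; each column is
    assumed to correspond to one line of the two-dimensional light
    receiving element (the 10-bit full scale is also an assumption).
    """
    peaks = image.max(axis=0)  # peak received light amount of each line
    counts, edges = np.histogram(peaks, bins=n_bins, range=(0, full_scale))
    return peaks, counts, edges

def covered_range(peaks):
    """Distribution range that the light receiving characteristic
    should cover, read off from the per-line peaks."""
    return peaks.min(), peaks.max()
```

The light receiving characteristic would then be adjusted or selected so that `covered_range` falls within the element's usable sensitivity range.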

  Furthermore, such automatic adjustment is not limited to adjusting the light receiving characteristics of the two-dimensional light receiving element so that the entire range of the luminance distribution can be captured in a single received light image; a method of adjusting the light receiving characteristics so that only part of the luminance distribution is captured can also be adopted. In particular, when not all luminance information is required for measurement, adjusting so that the luminance of the necessary parts is covered makes it possible to acquire a received light image with higher efficiency and accuracy.

  Furthermore, the operation amount described above is not limited to selection of the light receiving characteristics of the two-dimensional light receiving element; other operation amounts can be used as appropriate, for example the light emission level and light emission time of the light emitting element, the exposure time of the two-dimensional light receiving element, and the circuit gain of the amplifier.

Furthermore, the operation amount automatically adjusted by the light receiving level control means or the like, or adjusted by feedback control as described above, can be further adjusted by the user from the operation amount adjustment means. The user can make fine adjustments and resets as necessary: this is useful when the behavior differs from calculated or theoretical values, or when the automatic adjustment does not work well, in which case the operation amount can be reset manually or set again from the beginning.
(Presentation of candidate pattern)

Furthermore, the automatic adjustment may be configured not to calculate a single result (operation amount) but to generate a plurality of candidate patterns according to the application and present them on the display unit for selection. Specifically, a plurality of candidate patterns are calculated by the light receiving level control means or the like and shown in a list on the display unit. FIG. 17 shows a display example of such candidate patterns KP. Each candidate pattern KP is displayed as the profile shape obtained when the profile is calculated with a different operation amount, and the display unit has a plurality of profile display areas 71 for displaying them. The user can thus easily obtain the required result by selecting, from the screen of the display unit, the candidate pattern KP that yields the desired result. In the example of FIG. 17, three candidate patterns KP are displayed, but the invention is not limited to this example; four or more, or two or fewer, can be displayed. If the candidate patterns are displayed as small thumbnails, more of them fit on one screen. The candidate patterns also need not all be listed on one screen: if they are displayed across a plurality of switchable screens, each candidate pattern can be shown larger, improving the visibility of fine details.
(Highlight processing)

FIG. 18 shows an enlarged view of the profile display area 71. As shown in this figure, highlight processing according to the amount of received light can be applied to the profile shape by the profile highlight means. In this example, each position of the profile shape is colored according to its received light amount level: for example, parts where the light amount is within the appropriate range are blue (the narrow hatched area indicated by BA in FIG. 18), parts where the light amount exceeds the threshold are red (the double-hatched area indicated by RA in FIG. 18), and parts where the light amount is insufficient are gray (the wide hatched area indicated by GA in FIG. 18). The user can thus visually grasp the received light amount at each position of the profile shape and easily select a profile shape with few inappropriate light amounts at the sites needed for measurement. In the color coding by the profile highlight means, the received light amount levels and color assignments are preset. Besides changing the profile drawing color according to the received light level, the profile highlight means can also use any method that distinguishes a part from the others, such as fluorescent color, graying out, frame display, changing the line type pattern (bold, thin, broken line), or changing the hatch pattern.
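The color classification by received light amount level can be sketched as follows. The numeric thresholds and the use of exactly three categories are illustrative assumptions; the text states only that level ranges and color assignments are preset.

```python
import numpy as np

# Assumed preset thresholds: below LOW is "insufficient",
# above HIGH is "excessive", in between is "appropriate".
LOW, HIGH = 300, 900

def highlight_colors(light_amounts):
    """Classify each profile position by its received light amount:
    gray = insufficient, blue = appropriate, red = excessive."""
    colors = np.where(light_amounts < LOW, "gray",
             np.where(light_amounts > HIGH, "red", "blue"))
    return colors.tolist()
```

The resulting per-position color list would then drive the drawing of the profile shape in the profile display area.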
(Light intensity graph)

  Further, as shown in FIG. 19, a light amount graph indicating the light amount at each position of the profile shape can be displayed as part of the candidate pattern KP in addition to the profile shape. The light amount graph is displayed in a light amount graph display area 73 provided below the profile display area 71, as shown in the enlarged view. Displaying the profile shape and light amount level simultaneously, one above the other, allows the light amount at each position of the profile shape to be recognized visually and serves as a more objective index for selection.

Moreover, only a light quantity graph can be displayed as a candidate pattern, or a received light image can be displayed. Also, it is possible to display only a part of the image instead of the entire image. In particular, when the area required for measurement is limited, displaying or enlarging only the area makes it easier to compare and confirm, and can further improve the visibility.
(Feedback control with multiple lines)

In an optical displacement meter using a two-dimensional light receiving element, linear image sensors are arranged two-dimensionally, so in addition to feedback control of the amount of light received by each linear image sensor, feedback control that takes into account the distribution of the received light amount between lines is also necessary. Specifically, in the two-dimensional light receiving element 15, linear image sensors LI as shown in FIG. 21(a) are arranged in parallel, one per pixel line, so that, as shown in FIG. 21(b), there are as many light reception signal peaks as there are pixel columns. In the case of a linear image sensor such as a one-dimensional CCD or CMOS, it suffices to determine the laser power from the single peak of the received light signal, whereas in the case of the two-dimensional light receiving element 15 there are a plurality of peak rows, so control must take into account the differences in peak height between these lines. In particular, for a workpiece in which parts with greatly different surface states are mixed, the peak level varies greatly from column to column, which adversely affects stable control and high-accuracy displacement calculation. For this reason, the present embodiment adopts the following methods for feedback control of the received light amount by the received light level control means 61.
(1) Extract the top a% of the measured peak levels and use them as the control target.
(2) Calculate the average of the measured peak levels ranked from the top b% to the top c% and use it as the control target.
(3) Use a two-dimensional light receiving element whose light receiving characteristic (received light amount-output voltage relationship) is nonlinear, and apply the control of (1) or (2) above.

  In method (1), by focusing on the portions where the peak level is large, control can keep the received light signal from saturating. a% can be set to an arbitrary value, preferably 50% or less. As an example, setting it to 5% to 15%, for example the top 10%, makes it possible to exclude columns whose reflected light amount is suddenly and locally large and to perform more stable control.

  In method (2), control takes into account not only portions with large peak levels but also portions with small ones, so that workpieces with a small amount of reflected light are still detected while workpiece portions with a large amount of reflected light do not receive excessive light. The range from the top b% to the top c% is, for example, 5% to 95%, preferably 10% to 90%. In particular, setting it to the top 10% to 90% makes it possible to exclude abnormal points whose peak level is suddenly and locally large or small.

In method (3), a two-dimensional light receiving element having a nonlinear light receiving characteristic, specifically a characteristic whose output does not easily saturate even in regions where the amount of received light is large, is used. The light receiving characteristic of an ordinary CCD or CMOS is linear, and the output signal saturates if the received light is too strong, but a wide dynamic range can be secured by making the characteristic nonlinear in the high-output region. In particular, because the magnitude of the output signal is suppressed in regions where the received light is strong, the difference from signals in relatively low regions shrinks and the differences in peak height between columns are reduced, so the control of methods (1) and (2) can be performed more effectively. Preferably, a two-dimensional light receiving element with a so-called Log characteristic is used, whose light receiving characteristic (sensitivity curve) follows a curve like a logarithmic graph or a polygonal line close to it. The stronger the Log characteristic, the more the output is lowered in the region where the received light signal is high.
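Methods (1) and (2) can be sketched as follows. Interpreting "use as the control target" as taking the mean of the selected peaks is an assumption made for illustration.

```python
import numpy as np

def target_top_a(peaks, a=10):
    """Method (1): mean of the top a% of the measured peak levels,
    used as the control target for received light amount feedback."""
    k = max(1, int(np.ceil(len(peaks) * a / 100)))
    return float(np.sort(peaks)[-k:].mean())

def target_band(peaks, b=10, c=90):
    """Method (2): mean of the peaks ranked between the top b% and the
    top c%, excluding suddenly and locally large or small columns."""
    s = np.sort(peaks)[::-1]            # peak levels in descending order
    lo = int(len(s) * b / 100)
    hi = max(lo + 1, int(len(s) * c / 100))
    return float(s[lo:hi].mean())
```

The returned target would then be compared with the desired received light level to adjust the laser power, exposure time, or gain.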
(Log characteristics)

  As shown in FIG. 16, the Log characteristic is a nonlinear input/output characteristic; here it is assumed to be a curve whose sensitivity decreases in the region where the amount of received light is high. This reduces the possibility that the received light amount saturates and allows it to be reproduced over a wide range, which in turn makes it possible to capture a workpiece with large differences in received light level appropriately in a single received light image. In addition, the number of pre-combination images when performing multi-exposure can be reduced. As a result, the processing time required for imaging is shortened, and the reduced processing amount realizes high-speed, low-load processing.

  The stronger the log characteristic, the more the output signal can be suppressed in a region where the amount of received light is high. For example, when the lower end of the light amount peak distribution is less than 500 in relative value, the Log characteristic is strengthened.
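One way to model such a Log characteristic is a piecewise linear/logarithmic response: linear below a knee point and logarithmically compressed above it, so that light amounts well beyond the linear saturation point still map below full scale. The knee point, full scale, and maximum light amount below are illustrative assumptions, not values from the embodiment.

```python
import math

def log_response(light, knee=500, full_scale=1023, max_light=8184):
    """Sketch of a Log-characteristic sensitivity curve: linear up to
    `knee`, logarithmic above it.  Light amounts up to `max_light`
    (far beyond the linear saturation point) map below `full_scale`
    instead of saturating."""
    if light <= knee:
        return float(light)                 # linear region
    # Logarithmic region: output grows slowly with received light amount
    span = full_scale - knee
    out = knee + span * math.log1p(light - knee) / math.log1p(max_light - knee)
    return min(out, float(full_scale))
```

A stronger Log characteristic corresponds to a lower knee or a larger `max_light`, both of which lower the output further in the high received-light region.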

  Further, as a light receiving characteristic, a characteristic that increases a peak in a region where the amount of received light is low may be added. Furthermore, a two-dimensional light receiving element having a multi-slope characteristic capable of creating an arbitrary light receiving characteristic curve or a two-dimensional light receiving element capable of arbitrarily adjusting the light receiving characteristic curve can be used. Switching of the plurality of characteristic curves and adjustment of the light receiving characteristic curve are performed by the light receiving element control unit 52. Needless to say, these light receiving characteristics can be not only curved but also linear or polygonal. Accordingly, it is possible to select or adjust the light receiving characteristics of the two-dimensional light receiving element with the sensitivity corresponding to the measurement object, and to realize appropriate measurement. Note that the present invention is not limited to the example in which feedback control is performed in the measurement mode, and it is effective to use a two-dimensional light receiving element having a nonlinear light receiving characteristic such as a Log characteristic in the setting mode.

The adjustment of the light receiving characteristic can also be performed in conjunction with the feedback control. For example, when a light amount graph showing the distribution of waveform peaks of the measured profile shape is created, the 10% on the bright side and the 10% on the dark side of the graph are excluded from the target, and the light receiving characteristics are adjusted so that the remaining 80% falls within a predetermined light amount range. If the adjustment covers the central values in this way, rather than all received light levels, a more accurate and reliable control result can be expected. In this example, the light amount is displayed as a relative value, and the light receiving characteristics are determined so as to fall within levels 500 to 900, for example.
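The trimming of 10% on each side and the check against the 500-900 relative-level window might be sketched as follows (the percentile-based trimming is one straightforward reading of the text):

```python
import numpy as np

def trimmed_target_range(peaks, trim=10, lo_level=500, hi_level=900):
    """Exclude the brightest and darkest `trim`% of the peak values and
    report whether the remaining central portion fits the target level
    window (relative values 500-900, as in the example in the text)."""
    lo = np.percentile(peaks, trim)
    hi = np.percentile(peaks, 100 - trim)
    central = peaks[(peaks >= lo) & (peaks <= hi)]
    fits = central.min() >= lo_level and central.max() <= hi_level
    return (float(central.min()), float(central.max()), bool(fits))
```

If `fits` is false, the light receiving characteristic would be strengthened or weakened until the central range falls within the window.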
(Work determination means 63)

  Furthermore, a problem when performing feedback control of the amount of light received by the two-dimensional light receiving element 15 is the treatment of columns in which no light reception peak is observed. There are two possible reasons why a light reception peak is not observed: (a) the amount of light emitted by the laser is insufficient, or (b) no workpiece actually exists at the position corresponding to that column. Conventionally, however, the feedback control system has been unable to distinguish which cause applies.

  If the cause is (a), it can be dealt with by controlling in the direction of increasing the amount of laser light. However, when the actual cause is (b), the received light peak cannot be confirmed no matter how much the laser light quantity is increased. On the contrary, as a result of increasing the amount of laser light, the light reception signals in other columns in which the light reception peak has been confirmed may be saturated, leading to deterioration in accuracy.

In the present embodiment, to enable appropriate feedback control in the face of this problem, the workpiece determination unit 63 determines the workpiece state based on the number of received light peaks. Specifically, as shown in the flowchart of FIG. 22, as a preparation, the light emitting element first emits light at the maximum emission amount in step S221, the number of received light peaks is counted, and the count is stored in the light reception peak storage means 91 of the memory unit 90 (step S222). When light reception peaks are then detected in operation, their number is counted (step S223) and compared with the number stored in the light reception peak storage means 91 (step S224). If the difference between the detected and stored peak counts is equal to or less than a predetermined value, it is judged that the workpiece has not changed, the received light amount feedback control, that is, control by any of the above methods (1) to (3), is performed, and the process returns to step S223 to repeat. On the other hand, if the difference is larger than the predetermined value, it is judged that a large change has occurred in the state of the workpiece, the process returns to step S221, and the peak count at maximum light amount is re-counted and compared again. By thus performing peak-count-storing received light amount control, in which the workpiece determination unit 63 checks for workpiece state changes based on the number of received light peaks, a change in received light amount can be distinguished as being due to a workpiece state change or simply to a change in the reflected light amount, so that accurate feedback control can be realized in the light receiving level control means 61.
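The flow of FIG. 22 can be sketched as a control loop. The callback-based structure, the threshold value, and the fixed cycle count are assumptions made so the sketch is self-contained.

```python
def peak_count_control(detect_peaks, set_max_emission, run_feedback,
                       threshold=2, cycles=10):
    """Sketch of the FIG. 22 flow: store the peak count at maximum
    emission (S221-S222); each cycle, re-count (S223) and compare
    (S224).  A small difference means the workpiece is unchanged, so
    normal received light amount feedback runs; a large difference
    means a workpiece state change, so the baseline is re-taken."""
    set_max_emission()
    stored = detect_peaks()                    # S221-S222: baseline count
    for _ in range(cycles):
        current = detect_peaks()               # S223: count peaks now
        if abs(current - stored) <= threshold:  # S224: workpiece unchanged
            run_feedback()                     # methods (1)-(3)
        else:                                   # large workpiece change
            set_max_emission()
            stored = detect_peaks()            # re-count the baseline
    return stored
```

In a real device, `detect_peaks`, `set_max_emission`, and `run_feedback` would be bound to the light receiving element and laser driver; here they are plain callables so the loop can be exercised in isolation.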
(Control area specifying means 85)

  As another approach, allowing a control region to be designated as the range over which received light amount control is performed is also useful for stable control. For example, some optical displacement meters have a function, such as a mask function, for designating the measurement region to be measured with the measurement region designation means 82. However, since this focuses mainly on setting the region for measurements such as displacement, it is chosen from a viewpoint different from stabilizing the operation amount to be controlled. As a result, the measurement region may contain parts where no workpiece exists or parts where the amount of reflected light is extremely small or large. Such parts have received light peak levels that differ significantly from the rest, so if feedback control is performed while including such data, the operation amount may not be adjusted accurately.

  Therefore, in the present embodiment, separately from the measurement region, a control region can be designated as the control target range by the control region designating means 85, so that a region range suited to control can be set independently. This will be described with reference to FIG. 23, taking the workpiece WK4 as an example. FIG. 23(a) shows a received light image of the workpiece WK4 displayed on the screen. On this screen, a measurement region KR, indicated by a rectangle, is designated by the measurement region designation means 82. Within the measurement region KR there is no workpiece in the circled region, so the amount of reflected light there is extremely low compared with other parts and causes fluctuations. Therefore, a control region SR is designated separately from the measurement region. FIG. 23(c) shows an example in which such a control region SR is designated from the control region designating means 85 so as to overlap the measurement region KR described above. As shown in this figure, the control region SR is designated as a band and, in particular, excludes the part where no workpiece exists, so the overall level can be kept constant and more accurate, stable received light amount feedback control can be realized. In the example of FIG. 23(c) the control region SR is designated as a band, but the invention is not limited to this; it goes without saying that the control region SR can also be set as a rectangle, polygon, circle, ellipse, arbitrary region, or the like.

The control area designating unit 85 that designates the control area SR can use the same member as the measurement area designating means 82 that designates the measurement area. That is, the member can be made common so that the control area and the measurement area are designated by one designation means.
(Stability output means 58)

  Meanwhile, a stability output means 58 can be provided that outputs a stability, an index related to the stability of the received light peak waveform. When feedback control is performed based on the received light signal, imaging may occur during the transitional period before the received light amount stabilizes, for example when the workpiece is conveyed at high speed or its reflectance differs, so that the received light signal is saturated or insufficient. In such cases, the accuracy of measurement values such as displacement is lower than when the received light amount control is stable, or an abnormal value greatly different from the true value may be shown. FIG. 24 shows the waveform of a normal light reception signal, while FIG. 25 shows waveforms of abnormal light reception signals: FIG. 25(a) a broad waveform, FIG. 25(b) a jagged waveform due to disturbance such as noise, and FIG. 25(c) a waveform cut off at the end of the CCD. If such a signal waveform is processed and measurement calculation is performed while the waveform indicates an abnormality, it becomes a source of error.

  In contrast, in the present embodiment the stability can be output by the stability output means 58 as an index of whether the feedback control is stable, and it can be used as an indicator of the reliability of the calculated measurement value and of whether a measurement value should be calculated in this state. In the present embodiment, stability or instability is determined based on (1) the number of saturated received light peaks and/or the number of peak lines with insufficient light amount, and (2) the number of peak lines whose received light signal has an abnormal shape. When the stability satisfies a preset alarm condition, such as a threshold on the number of abnormal light receiving peak lines, an alarm is output. Information such as the number and type of abnormal peak lines can also be output directly as stability information.
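A minimal sketch of such a stability determination, counting saturated lines, insufficient-light lines, and abnormally broad peak lines, follows. The thresholds, the use of peak width as the abnormal-shape criterion, and the alarm condition are illustrative assumptions.

```python
import numpy as np

# Assumed thresholds: saturation level, minimum usable peak level,
# and maximum normal peak width (a broad peak as in FIG. 25(a)).
SAT, MIN_OK, MAX_WIDTH = 1020, 50, 8

def stability(peak_levels, peak_widths, alarm_lines=1):
    """Count abnormal peak lines and decide whether to raise an alarm."""
    levels = np.asarray(peak_levels)
    widths = np.asarray(peak_widths)
    saturated = int(np.sum(levels >= SAT))      # criterion (1): saturated
    insufficient = int(np.sum(levels <= MIN_OK))  # criterion (1): too dark
    broad = int(np.sum(widths > MAX_WIDTH))     # criterion (2): abnormal shape
    abnormal = saturated + insufficient + broad
    return {"saturated": saturated, "insufficient": insufficient,
            "broad": broad, "alarm": abnormal >= alarm_lines}
```

Raising `alarm_lines` corresponds to the less strict alarm stages described below for the warning means 59.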

Further, a warning means 59 that issues stepwise warnings based on the stability output by the stability output means 58 may be provided. In the example of the block diagram shown in FIG. 4, the stability output means 58 includes the warning means 59, whose alarm level can be set in stages: for example, a stage of "output an alarm if even one row is saturated or has insufficient light amount", or a stage of "calculate the measurement value no matter how many rows are saturated or have insufficient light amount". With such a received light amount control stability detection function using the stability output means 58, displacement measurement matching the accuracy and measurement stability required by the user becomes possible.
(Measurement light selection means 836)

Further, when a profile is measured for workpieces such as WK13 and WK14 shown in FIGS. 101 and 102, a plurality of received light signal waveforms may appear on a measurement line. For such cases, the optical displacement meter can be provided with a measurement light selection means 836 for selecting which received light signal waveform is to be measured as the measurement light. Specifically, when there are a plurality of waveforms on the measurement line, their order is recognized and, for example, the nearest waveform on the Near side is measured, or the n-th from the Near side or the m-th from the Far side is used; by recognizing the order of the waveforms, which waveform to process as the measurement target can be selected. In this way, by selecting appropriate measurement light with the measurement light selection means 836, unnecessary reflected light can be eliminated and the intended measurement light measured even when a large amount of reflected light is generated.
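The selection of the n-th waveform from the Near or Far side can be sketched as follows. The data structure, a hypothetical mapping from each detected waveform to its distance, is an assumption for illustration.

```python
def select_measurement_peak(peaks_by_distance, n=1, from_side="near"):
    """Pick the n-th waveform counting from the Near or Far side.

    `peaks_by_distance` maps each detected waveform (any hashable
    label) to its measured distance; sorting by distance recovers the
    order of the waveforms on the measurement line."""
    ordered = sorted(peaks_by_distance, key=peaks_by_distance.get)
    if from_side == "far":
        ordered = ordered[::-1]
    if n > len(ordered):
        return None            # fewer waveforms than requested
    return ordered[n - 1]
```

With the default arguments this returns the nearest waveform on the Near side; passing `from_side="far"` counts from the Far side instead.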
(Light receiving mask function)

The above method can be suitably used when the reflection is stable and the number of waveforms is constant at every position. When the number of waveforms is unstable, however, the above method is insufficient for determining the measurement light or excluding the other waveforms. Such cases can be handled by designating a light receiving mask region JM with the mask region designating means 86. Specifically, as shown in FIG. 26, a light receiving mask region JM that is excluded from measurement processing is set on the screen of the display unit that displays the received light image. In the example of FIG. 26, since two reflected light components are observed, the light receiving mask region JM is set so as to cover the reflected light component unnecessary for measurement. The light receiving mask region JM can be designated as a rectangle, trapezoid, triangle, straight line, arc, or the like; in the example of FIG. 26, it is designated in an M shape by combining a rectangle and two trapezoids. As a result, as shown in FIG. 27, the reflected light component indicated by the wavy line is excluded, only the reflected light component indicated by the solid line remains, and the measurement processing unit 54 performs measurement based on this information.
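As a minimal illustration of how a light receiving mask region JM could exclude unwanted reflections from measurement processing, the sketch below drops peak points that fall inside masked rectangles. Restricting the mask to rectangles, and representing reflections as (x, z) peak points, are simplifying assumptions of this example.

```python
def apply_mask(peaks, mask_rects):
    """Drop peak points (x, z) that fall inside any masked rectangle.

    peaks      : list of (x, z) peak coordinates on the imaging element.
    mask_rects : list of (x0, z0, x1, z1) rectangles forming region JM.
    """
    def masked(p):
        x, z = p
        return any(x0 <= x <= x1 and z0 <= z <= z1
                   for x0, z0, x1, z1 in mask_rects)
    return [p for p in peaks if not masked(p)]

# Two reflections per column; mask out the unwanted upper reflection.
peaks = [(10, 5), (10, 40), (11, 6), (11, 41)]
jm = [(0, 30, 50, 60)]          # rectangle covering the stray reflection
print(apply_mask(peaks, jm))    # [(10, 5), (11, 6)]
```

The M-shaped mask of FIG. 26 would simply be a list of three such primitives (a rectangle and two trapezoids) with a point-in-shape test per primitive.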
(Mask moving means 617)

  As shown in FIG. 28, when the light receiving mask region JM is fitted tightly around the measurement light, accurate measurement can be expected. However, depending on the workpiece, the workpiece position may not be stable even when the mask function is used. For example, in an application in which workpieces conveyed on a line are imaged with a CCD camera, the position of the workpiece changes with each imaging, and the appearance position of the waveform is not constant. In such a case, if the light receiving mask region JM is fitted tightly around the measurement light as in FIG. 28, the measurement light of a displaced workpiece may be masked and measurement may become impossible. Conversely, if the light receiving mask region JM is set with a margin in anticipation of the positional deviation of the workpiece as shown in FIG. 30, the situation in which the measurement light is masked can be avoided, as shown by the wavy frame UM in FIG. 31. As described above, in an application in which the position of the workpiece is displaced, it may not be possible to set the light receiving mask region JM appropriately.

  Therefore, an image search is performed on the received light image in advance, and a mask moving means 617 that moves the light receiving mask region JM to an appropriate position according to the search result is provided. Hereinafter, the setting procedure of the mask moving means 617 in the setting mode, and the procedure by which the mask moving means 617 actually moves the light receiving mask region JM in the measurement mode, will be described with reference to the flowcharts of FIGS. 32 and 35 and the accompanying image diagrams. First, the setting procedure will be described with reference to the flowchart of FIG. 32. In step S321, a received light image serving as a reference is acquired, and a registered image TG to be used as the search reference is registered from this image. As shown in FIG. 33, the registered image TG is set with a frame line or the like around a stable region of the received light image. In step S322, the range for the image search is set; for example, instead of searching the entire image, a range within which the workpiece is known to move can be designated as the image search range. Further, in step S323, the light receiving mask region JM is designated on the received light image with the mask region designating means 86. Preferably, as shown in FIG. 34, a range with some margin is designated so that the reflected light unnecessary for measurement is substantially covered.

  The procedure for moving the mask in the measurement mode, after the registered image and mask movement have been set in the setting mode as described above, will be described with reference to the flowchart of FIG. 35. First, when a trigger input for acquiring a received light image is received in step S351, capture of the received light image is executed in step S352. Next, in step S353, the image search means 616 performs an image search to find the position of the registered image TG in the received light image. As the image search executed by the image search means 616, an existing image processing method such as pattern matching can be used. As a result of the image search, the position of the registered image TG in the received light image is specified as shown in FIG. 36. In step S354, the light receiving mask region JM is moved based on the search result; specifically, the light receiving mask region JM is pasted so as to follow the coordinate position of the registered image TG found by the search in FIG. 36, as shown in FIG. 37. In step S355, mask processing is performed, and the profile calculation means creates a profile shape from the received light image data. In step S356, the measurement processing unit 54 performs measurement processing on the profile shape. This procedure is repeated for each input received light image.
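The measurement-mode flow of steps S353 and S354 can be illustrated with a toy template search. The exhaustive sum-of-squared-differences search below is only a stand-in for the pattern matching of the image search means 616, and the data structures (2-D lists, rectangle tuples) are assumptions of this sketch.

```python
def find_offset(image, template):
    """Locate `template` inside `image` (2-D lists) by exhaustive SSD search."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    best = None
    for dy in range(H - h + 1):
        for dx in range(W - w + 1):
            ssd = sum((image[dy + i][dx + j] - template[i][j]) ** 2
                      for i in range(h) for j in range(w))
            if best is None or ssd < best[0]:
                best = (ssd, dy, dx)
    return best[1], best[2]          # (row, col) of the best match

def move_mask(mask_rect, ref_pos, found_pos):
    """Translate the mask rectangle by the displacement of the registered image."""
    dy, dx = found_pos[0] - ref_pos[0], found_pos[1] - ref_pos[1]
    x0, y0, x1, y1 = mask_rect
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)

image = [[0] * 6 for _ in range(5)]
image[2][3] = 9                       # bright registered pattern now at (2, 3)
template = [[9]]
pos = find_offset(image, template)
print(pos)                            # (2, 3)
print(move_mask((1, 1, 2, 2), (0, 0), pos))   # (4, 3, 5, 4)
```

Because the mask only needs a translation, the heavy search runs once per captured image and the mask update itself is trivial.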

In this way, by performing an image search on each received light image and moving the light receiving mask region JM according to the search result, the light receiving mask region JM can be fitted tightly around the measurement light or reflected light while still following changes in the workpiece position, reliably cutting out disturbances such as unnecessary reflected light and enabling highly accurate calculation.
(Sensor head calibration function)

On the other hand, the optical displacement meter has a sensor head unit calibration function for calibrating, at the time of setting up the sensor head unit, the positional relationship between the irradiation light emitted from the light projecting unit of the sensor head unit and the workpiece. An accurate displacement measurement result cannot be obtained unless the angle between the band light, which is the irradiation light, and the workpiece is accurately adjusted. For example, consider measuring the convex height H of the workpiece WK5, indicated by the solid-line arrow in FIG. 98, with the convex workpiece WK5 placed horizontally and the band light OK incident vertically. If the workpiece WK5 is inclined with respect to the incident surface of the band light OK as shown in FIG. 99, the height H′ indicated by the wavy arrow is measured instead, and a measurement error arises. Similarly, a measurement error occurs if the workpiece is tilted in the direction orthogonal to the width direction of the band light OK: for example, as shown in FIG. 100, when the band light OK is inclined with respect to the incident surface of the workpiece WK12, the band light OK′ indicated by the wavy line is incident and causes a measurement error. Two approaches exist to solve such problems: physically positioning the sensor head unit relative to the workpiece, and calibrating the positional deviation between the sensor head unit and the workpiece in the optical displacement meter in advance so that the measurement result corrected with the calibration value is displayed as the calculation result. The sensor head calibration function is effective for both approaches. Hereinafter, the sensor head unit calibration function will be described.
(Tilt correction function)

  As shown in FIG. 38, consider a state in which the flat workpiece WK6 is inclined with respect to the horizontal plane. If the band light is incident vertically in this state, a measurement error occurs, so this inclination angle is detected. First, the flat workpiece WK6 is irradiated with the band light from the light projecting unit, and the profile shape is displayed on the display unit. In this state, two portions of the profile shape located on the line regarded as horizontal are designated as horizontal reference positions with the horizontal part designating means 812. Each is designated as a rectangular horizontal reference position designation frame SK, as shown in FIG. 38. The inclination angle calculating means 611 then calculates the inclination angle formed between the deemed horizontal line connecting the horizontal reference positions and the actual horizontal line, and a calibration angle is set so that the deemed horizontal line is treated as horizontal. Specifically, the inclination angle calculating means 611 extracts the profile shape included in each designated area and calculates the center position and average height of the profile within the area; angle correction is then performed during measurement based on the calibration angle so that the average heights obtained in the two areas become level. In this way, by holding the inclination angle as a calibration amount and correcting the calculation result by internal processing of the optical displacement meter, rather than correcting the physical position between the sensor head and the workpiece, troublesome position adjustment can be handled in software, and the adjustment work at installation can be greatly reduced.

  Further, since the horizontal reference positions are designated as areas rather than as points, the designation work is easier. Moreover, since the inclination angle calculating means 611 automatically calculates the inclination angle of the profile from the designated areas, the horizontal position can be calibrated more accurately and reliably.

  Alternatively, the horizontal reference positions may be designated as points using a pointing device such as a mouse. In this case, selection can be facilitated by providing a snap function that automatically snaps to the line of the profile shape using edge detection processing or the like.

  Instead of performing correction based on the calibration angle, the inclination angle calculating means 611 may only calculate the inclination angle, and the angle between the sensor head unit and the workpiece can then be adjusted physically based on the calculation result. For example, by displaying the calculated inclination angle on the display unit or outputting it externally, the inclination of a workpiece that should lie on the horizontal plane can be determined and its position adjusted accordingly. By repeating the same operation on the adjusted result, readjustment can be performed until the deemed horizontal line finally matches the actual horizontal line.

  Further, a function may be provided that automatically corrects the inclination of the optical displacement meter based on the inclination angle calculated by the inclination angle calculating means 611. For example, an angle adjustment mechanism is provided for the sensor head unit including the light projecting unit, the angle adjustment mechanism is controlled according to the calculation result of the inclination angle calculating means 611, and the sensor head unit is automatically adjusted to a horizontal attitude with respect to the workpiece. Thereby, the troublesome horizontal position calibration at installation can be automated.

Note that the horizontal line here is merely an example for explanation; it goes without saying that, depending on the actual application, alignment can instead be performed against a vertical line, a line of arbitrary angle, or a plane. Three or more horizontal reference positions can also be designated.
(Inclination angle adjusting means 816)

Further, an inclination angle adjusting means 816 for adjusting the calculated inclination angle may be provided. This allows the user to fine-tune the inclination angle manually, enabling flexible adjustment suited to the actual installation state and measurement purpose.
(Elevation difference correction function)

  The tilt correction function for adjusting the inclination angle has been described above as part of the sensor head unit calibration function. In addition, a height difference correction function can calibrate the height difference, that is, perform calibration for a tilt in the direction orthogonal to the width direction of the projected band light as shown in FIG. 100. Hereinafter, the height difference correction function will be described with reference to FIG. 39. Here, a workpiece WK7 having a convex shape with a known height difference is placed and measured. As shown in FIG. 39, with the profile shape acquired and displayed on the display unit, the upper surface (the first surface) and the lower surface (the second surface) constituting the step are each designated as height difference reference positions with the height difference designating means 814. As with the horizontal part designating means 812, the height difference reference positions can be designated stably as rectangular height difference reference position designation frames KK. The height difference calculating means 612 then calculates the average height within each region and the height difference between the average heights of the two regions. By inputting the known height difference (actual size) of the convex portion against the height difference thus obtained, the tilt in the direction orthogonal to the width direction of the band light can be calculated from the discrepancy. Therefore, by setting a calibration amount so that the measured height difference matches the actual size, subsequent calculations yield results calibrated for the height difference.
Further, as described above, the calculated height difference can also be used as an index when physically adjusting the setup so that the height difference of the profile matches the actual size.
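As an illustrative sketch of the height difference calibration: the code below computes the step height from the two designated frames KK and derives a scale factor from the known actual size. The assumption that a tilt orthogonal to the band width shortens the measured step by the cosine of the tilt angle is made for this example only; all names are hypothetical.

```python
import math

def step_height(profile, upper_span, lower_span):
    """Average-height difference between the two designated frames KK."""
    def mean_z(x_min, x_max):
        zs = [z for x, z in profile if x_min <= x <= x_max]
        return sum(zs) / len(zs)
    return mean_z(*upper_span) - mean_z(*lower_span)

def calibration(measured, actual):
    """Scale factor and implied tilt (deg) orthogonal to the band width,
    assuming the measured step shrinks by cos(theta) under that tilt."""
    scale = actual / measured
    theta = math.degrees(math.acos(min(1.0, measured / actual)))
    return scale, theta

# Convex workpiece with a known 10.0 mm step measured as 9.8 mm.
profile = [(x, 9.8 if 40 <= x <= 60 else 0.0) for x in range(101)]
h = step_height(profile, (45, 55), (0, 10))
scale, theta = calibration(h, 10.0)
print(round(h, 2), round(scale, 4))   # 9.8 1.0204
print(round(theta, 1))                # 11.5
```

Multiplying subsequent height readings by the scale factor is the software calibration; the implied tilt angle is the index usable for physical readjustment.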

The designation of reference positions such as the horizontal reference position and the height difference reference position by the horizontal part designating means 812 and the height difference designating means 814 is not limited to rectangles; circles, ellipses, arbitrary areas, and the like can also be used. Further, instead of detecting the edge surface from a designated area, these reference positions may be designated directly by points or lines.
(Embodiment 3)

In the above optical displacement meter, one sensor head unit is connected to one controller unit. Alternatively, a head connection unit that connects two or more sensor head units to one controller unit may be provided. FIG. 40 shows, as a third embodiment, an example of a controller unit including a head connection unit 4 that can connect two sensor head units. In this optical displacement meter, processing using two sensor head units is possible: the two sensor heads can measure different parts of the same workpiece, measure the same workpiece at different timings, or measure different workpieces. FIG. 41(a) shows an example in which two sensor head units are arranged side by side to effectively expand the measurable area. Ideally, by setting each sensor head unit in the same posture with respect to the workpiece WK8, the profile shapes obtained by the sensor head units as shown in FIG. 41(b) can be synthesized as shown in FIG. 41(c). In reality, however, it is difficult to install the two sensor head units on the same workpiece WK8 without error. As a result, the inclination of a sensor head as in FIG. 42(a) introduces an error into the obtained profile as in FIG. 42(b), and the profiles cannot be connected as they are. Therefore, the optical displacement meter has a sensor head unit coupling function for connecting the profiles of a plurality of sensor head units.
(Sensor head coupling function)

To overlap the profile shapes obtained by the two sensor heads, the irradiation areas of the band light OK may be arranged so as to partially overlap, as shown in FIG. 41(a). Even so, when combining the two obtained profile shapes it is not easy to determine how much they should overlap. In particular, since there is inclination or positional deviation when the sensor head units are installed, eliminating these requires moving the profiles in the height direction and the width direction before coupling them.
(Correction profile creation function)

For this reason, a correction profile creation function is provided that corrects the inclination of each sensor head unit with respect to the workpiece by using the step portion of each profile shape with the inclination correction means 613 described above. Here, the tilt correction function and the height difference correction function are used. This procedure will be described based on the schematic diagram of FIG. 43. First, as shown in FIG. 43(a), a workpiece WK9 having a stepped shape such as a convex shape is prepared and arranged so that the irradiation areas of the band light OK of the two sensor head units both include the step. Then, as shown in FIG. 43(b), the inclination correction means 613 of FIG. 4 performs inclination correction and/or height difference correction for each sensor head unit to create correction profiles. Further, as shown in FIG. 43(c), the common profile shape corresponding to the same portion, that is, the stepped portion, is designated in the correction profile of each sensor head unit with the common profile designating means 820. Here, the user manually designates the step portion with the common profile designating means 820; alternatively, the common profile designating means 820 may compare the correction profiles and extract the common profile shape automatically. The profile matching means 614 then performs a profile search (described later) so that the common profile shapes designated by the common profile designating means 820 match, automatically adjusts the height and spacing of the two profile shapes, and synthesizes them. A function may also be provided that lets the user manually fine-tune the height and spacing of the common profile shape with respect to the automatic calculation result of the profile matching means 614. In the example of FIG. 4, a profile moving means 824 that can set the offset amount of the profile shape and an inclination adjusting means 826 that can adjust the inclination angle are provided, and a height difference adjusting means 814 for manually adjusting the height difference calculated by the height difference calculating means can also be used. In this way, as shown in FIG. 43(d), the profiles measured by the plurality of sensor head units can be combined.
(Direct matching function)

  Furthermore, a direct matching function can also be provided: without performing inclination correction or height difference correction for each sensor head, the common profile shape is designated with the common profile designating means 820 in each uncorrected profile shape and matched directly, as shown in FIG. 44. This procedure will be described based on the schematic diagram of FIG. 44 and the flowchart of FIG. 45. First, the individual inclination of each sensor head unit is corrected in step S451 as necessary; if only profile merging is performed, this step can be omitted. Next, in step S452, the workpiece WK9 is arranged so that its convex portion is positioned in the overlapping portion of the irradiation light from the two sensor head units. In step S453, each sensor head unit captures a profile shape. In step S454, the range of the common profile shape is designated with the common profile designating means 820 in the profile shape of each sensor head unit. When the common profile shape obtained by each sensor head unit is the same, it is sufficient to designate the common profile shape for only one sensor head unit. Here too, besides the user manually designating the common profile shape by operating the common profile designating means 820, the common profile designating means 820 can extract the common profile shape automatically; with this method, matching of a plurality of profiles can be realized very easily, with no area designation required of the user. Next, in step S455, the profile matching means 614 performs the profile search so that the common profile shapes of both sensor heads match. Finally, in step S456, based on the result of the profile search, the profile matching means 614 calculates the coupling condition of the profile shapes and performs the synthesis.

In the above example, the synthesis of two profile shapes using two sensor head units has been described, but three or more profile shapes can of course be synthesized in the same way. For example, in an optical displacement meter to which three sensor head units A, B, and C are connected, the profile shapes of sensor head units A and B are synthesized first, and the resulting profile shape A+B is then synthesized with the profile shape of C.
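The stitching described above can be sketched as a search for the relative offset that best matches the common part, followed by a merge. The dictionary-based profile representation and the candidate shift grid are assumptions of this illustration, not the actual profile search of the profile matching means 614.

```python
def best_shift(profile_a, profile_b, common_a, shifts):
    """Search the (dx, dz) that best overlays B's common part onto A's.

    profile_* : dict x -> z (sampled profile shapes)
    common_a  : x positions of the common shape designated in head A
    shifts    : candidate (dx, dz) pairs to try
    """
    def score(dx, dz):
        s, n = 0.0, 0
        for x in common_a:
            if x - dx in profile_b:
                s += (profile_a[x] - (profile_b[x - dx] + dz)) ** 2
                n += 1
        return s / n if n else float("inf")
    return min(shifts, key=lambda sh: score(*sh))

def merge(profile_a, profile_b, dx, dz):
    """Paste shifted B after A; A wins where the ranges overlap."""
    out = dict(profile_a)
    for x, z in profile_b.items():
        out.setdefault(x + dx, z + dz)
    return out

# Head B sees the same step as head A but offset by (dx=50, dz=2).
a = {x: (5.0 if 40 <= x <= 60 else 0.0) for x in range(0, 70)}
b = {x: (3.0 if -10 <= x <= 10 else -2.0) for x in range(-30, 40)}
shifts = [(dx, dz) for dx in range(45, 56) for dz in (0, 1, 2, 3)]
dx, dz = best_shift(a, b, common_a=range(40, 61), shifts=shifts)
print(dx, dz)                         # 50 2
combined = merge(a, b, dx, dz)
print(combined[80])                   # 0.0
```

For three heads A, B, and C, the same two-profile routine is simply applied twice, as the paragraph above describes.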
(Profile inversion function)

The example in which the sensor head units are arranged side by side has been described above. As another layout of a plurality of sensor head units, arranging them facing each other is also conceivable. When arranging a plurality of sensor head units adjacent to each other, securing enough space that the units do not physically interfere becomes an issue. In other words, the sensor head units may not be able to take the same posture because of the arrangement space, and situations can arise in which a sensor head unit must be mounted upside down or mirrored left to right. For example, as shown in FIG. 46, when a sensor head unit is arranged in a mirrored orientation, the obtained profile shape is left-right inverted, which is inconvenient for comparing shapes. Therefore, a function for inverting the profile shape is provided: the inverting means 822 can invert the profile shape vertically or horizontally as necessary and display it on the display unit. This makes it possible to compare the profile shapes acquired by the sensor head units appropriately, regardless of restrictions on the arrangement of the sensor head units.
(Flip horizontal)

An example of acquiring a profile difference using the inverting means 822 will be described with reference to FIG. 47. First, as shown in FIG. 48, two sensor head units are arranged so as to acquire the profile shapes of different parts of one workpiece WK10. Inclination correction and/or height difference correction is performed on the profile shape obtained by each sensor head unit to create correction profiles as shown in FIG. 47(a). Next, using the common profile designating means 820, a common part is designated in the correction profile of each sensor head unit; specifically, the common profile shape is enclosed in the frame-shaped common profile designation frame KW indicated by the wavy line. The profile matching means 614 then automatically calculates the height and position of the two profiles so that the common profile shapes within the set frames match. As a result, as shown in FIG. 47(c), the two profile shapes are superimposed and displayed on the display unit. Since different parts are being observed here, partial pattern matching rather than complete matching is performed. Further, as shown in FIG. 47(d), the profile difference can be displayed as a difference profile. The method is not limited to this: a function can be provided for the user to fine-tune the automatically calculated height and position of the profile shapes, or the common profile shape can be designated in each uncorrected profile shape and matched directly, without performing inclination correction or height difference correction for each sensor head unit.
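A minimal sketch of left-right inversion followed by a difference profile is shown below. The data representations are assumed for illustration; the actual inverting means 822 and profile matching means 614 operate on measured profile data.

```python
def flip_horizontal(profile, axis_x=0.0):
    """Mirror a profile (list of (x, z)) about the vertical line x = axis_x."""
    return sorted((2 * axis_x - x, z) for x, z in profile)

def difference_profile(profile_a, profile_b):
    """Per-x height difference of two aligned profiles (dicts x -> z)."""
    return {x: profile_a[x] - profile_b[x]
            for x in profile_a if x in profile_b}

# Head B is mounted mirrored, so its profile comes out left-right inverted.
a = {0: 1.0, 1: 2.0, 2: 4.0}
b_raw = [(-2, 4.2), (-1, 2.0), (0, 1.0)]       # mirror image of a, with a defect
b = dict(flip_horizontal(b_raw))
diff = {x: round(v, 2) for x, v in difference_profile(a, b).items()}
print(diff)                                    # {0: 0.0, 1: 0.0, 2: -0.2}
```

The nonzero entry of the difference profile is exactly the local deviation between the two measured parts, which is what FIG. 47(d) visualizes.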
(Flip upside down)

  Furthermore, as shown in FIG. 49, the same workpiece WK11 can be sandwiched between two sensor head units and measured. When the measurement points on the workpiece WK11 are far apart, the sensor head units can be installed facing the same direction; when the measurement points are close, however, the sensor head units must be installed with their light projecting sides facing each other, as shown in FIG. 50. In this case, the left and right of the profile are inverted between the two sensor heads, which is inconvenient for comparing the shapes. Therefore, the inverting means 822 can combine left-right inversion and vertical inversion. This example will be described with reference to FIG. 51. Here, as shown in FIG. 49, the workpiece WK11 is sandwiched between two sensor head units, and the thickness d of its convex portion is measured. As shown in FIG. 51(a), correction profiles are created by performing inclination correction and/or height difference correction on the profile shape measured by each sensor head unit. Next, as shown in FIG. 51(b), one of the correction profiles (the right one in FIG. 51(a)) is inverted vertically by the inverting means 822, and then inverted horizontally as shown in FIG. 51(c). In this state, the common profile designating means 820 designates the part to be aligned in each correction profile; in the example of FIG. 51(c), the part is enclosed in the frame-shaped common profile designation frame KW indicated by the wavy line so as to include the rising edge of the convex portion. The profile matching means 614 then automatically calculates the height and position of the two profile shapes so that the profiles within the common profile designation frames KW match, and synthesizes them. Here, instead of connecting the profiles, they are arranged one above the other as shown in FIG. 51(d) so that the convex thickness d of the workpiece WK11 can be measured.

Note that in the above, vertical and horizontal inversion of the profile shape is performed individually with the inverting means 822; alternatively, the inversion processing may be selected and executed automatically according to the measurement application. Specifically, an arrangement mode selection means 828 is provided to select the layout of two or more sensor head units from a horizontal arrangement, a vertical arrangement, and a sandwiching arrangement as the measurement mode for measuring the displacement of the workpiece, and the inversion direction of the inverting means 822 is selected automatically according to the arrangement mode selected by the arrangement mode selection means 828. Since the profile waveforms are then inverted automatically according to the layout of the sensor head units, setting effort is saved and setting mistakes are avoided. As an example, the procedure for inverting the profile shapes obtained by two sensor head units A and B with the inverting means 822 will be described based on the flowchart of FIG. 52. First, in step S521, it is determined whether the profile of sensor head unit A is to be inverted vertically. If not, the process proceeds directly to step S523; if so, the profile shape obtained by sensor head unit A is inverted vertically by the inverting means 822 in step S522 before proceeding to step S523. Next, in step S523, it is determined whether the profile shape of sensor head unit A is to be inverted horizontally. If not, the process proceeds to step S525; if so, it is inverted horizontally in step S524 and then proceeds to step S525. Sensor head unit B is handled in the same way: vertical inversion is checked in step S525 and performed via step S526 if required, after which the process proceeds to step S527; horizontal inversion is checked in step S527 and performed via step S528 if required. The process then proceeds to step S529 for post-processing. By determining the need for vertical and horizontal inversion for each sensor head unit and performing only the necessary inversions in this way, the profile shapes can be displayed on the display unit in postures that are easy to compare, or passed to the measurement processing unit 54 for calculation.
(Profile search)

The profile search for the profile shape functions effectively in the automatic tracking of the light receiving mask region JM, in inclination correction and height difference correction, and in the synthesis of two or more profiles. In the profile search described above, once a registered profile is set, matching is always performed against that same registered profile. In the profile search, based on the coordinate position information constituting the registered profile, the coordinate position information constituting the input profile shape is searched to determine whether it contains the registered profile. Unlike image data, a profile shape is coordinate position information; the profile search therefore involves far less data and processing than an image search and can realize a high-speed, low-load search. Note that the profile search is performed in actual dimensions. On the other hand, depending on the workpiece, it may be difficult to fix the registered profile itself; in such cases, the registered profile can be changed each time. The detailed procedure of the profile search for the profile shape will be described below.
(Rotation / movement profile search)

First, an example will be described in which a profile search is performed using a profile waveform as the registered profile so that stable measurement is possible even when the workpiece WK5 is tilted as shown in FIGS. Here, a profile shape is designated in advance as the registered profile from the profile shape displayed in the profile display area 71 of the display unit 70, using the registered profile designation means 830 shown in FIG. 4 (the registered profile designation area PS indicated by the broken-line frame in FIG. 54A). The profile matching means 614 then detects the best matching position from the input profile shape of the measurement object by profile search, and rotates and moves the measurement profile based on the search result to measure the shape. This procedure will be described based on the flowchart of FIG. 53 and the image diagram of FIG. 54. First, as shown in step S611, the rotation angle and moving distance for starting the profile search are set. Next, in step S612, the registered profile is rotated and moved. In step S613, the profile matching means 614 calculates the difference from each point of the input profile, and in step S614 a profile matching degree is calculated based on these differences. In step S615, the condition with the highest matching degree so far (here, the rotation angle and movement amount) is stored. The above procedure is repeated within a predetermined search range of rotation angles and movement amounts. Specifically, in step S616 it is determined whether the entire range of the input profile has been scanned; if not, the process returns to step S612 and continues. If the entire range has been scanned, the process proceeds to step S617, and the condition with the highest matching degree is output as the detection position. In this way, as shown in FIG. 54B, the rotated profile can be detected from the input profile by a profile search with the registered profile, and its coordinate position can be output. Based on this result, the profile matching means 614 can rotate and move the input profile as necessary and display it on the display unit 70.
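The brute-force search over rotation angles and movement amounts in steps S611 to S617 can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the function names, the point-to-point correspondence, and the mean-squared-difference matching score are all assumptions.

```python
import numpy as np

def transform(profile, angle_deg, dx, dz):
    """Rotate an (N, 2) profile of (x, z) points about the origin, then translate."""
    a = np.radians(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return profile @ rot.T + np.array([dx, dz])

def profile_search(registered, measured, angles, shifts_x, shifts_z):
    """Scan all candidate rotations/movements (steps S612-S616) and return the
    condition with the best matching degree (step S617): (angle, dx, dz, score).
    A lower score means a better match."""
    best = (None, None, None, np.inf)
    for ang in angles:
        for dx in shifts_x:
            for dz in shifts_z:
                cand = transform(registered, ang, dx, dz)
                # difference from each point of the input profile (step S613)
                score = np.mean(np.sum((cand - measured) ** 2, axis=1))
                if score < best[3]:          # keep the best condition (step S615)
                    best = (ang, dx, dz, score)
    return best
```

Because the profile is coordinate data rather than an image, each candidate costs only one pass over the point list, which is why the text can claim a high-speed, low-load search.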
(Invalid area setting)

  On the other hand, in an optical displacement meter using the light-section principle, the shape of the edge portion of the workpiece may not be stable. For example, in a workpiece having a convex shape as shown in FIG. 55 (a), peak-shaped noise may appear at the convex edge portion as shown in FIG. 55 (b). Such noise components increase measurement error. Therefore, by providing in the registered profile an invalid area that is excluded from the profile search, the search can be made less susceptible to such shape fluctuation. The invalid area is designated from the profile display area 71 by the invalid area designation means 832 shown in FIG. 4. In the example of FIG. 55, as shown in FIG. 55 (b), a rectangular invalid area MA is set in the registered profile designation area PS so as to include the peak noise generated at the convex edge portion.

Although only one invalid area is set in this example, it goes without saying that two or more invalid areas can be set. For example, as shown in FIG. 55 (c), invalid areas MA can be set at each of the two convex edge portions in the registered profile designation area PS. Besides completely excluding the invalid area from the profile search, it is also possible to keep it as a search target but assign it a low weight.
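As a sketch of how an invalid area might suppress edge noise in the matching score, the snippet below zeroes, or merely down-weights, the contribution of points that fall inside rectangular regions. The rectangle format, function names, and weight values are assumptions for illustration only.

```python
import numpy as np

def invalid_area_weights(profile, rects, low_weight=0.0):
    """Per-point weights: points inside any invalid rectangle (x0, z0, x1, z1)
    get low_weight. low_weight=0 excludes them from the search entirely;
    a small positive value keeps them as targets with reduced influence."""
    w = np.ones(len(profile))
    for (x0, z0, x1, z1) in rects:
        inside = ((profile[:, 0] >= x0) & (profile[:, 0] <= x1) &
                  (profile[:, 1] >= z0) & (profile[:, 1] <= z1))
        w[inside] = low_weight
    return w

def weighted_match_score(a, b, w):
    """Weighted mean squared point-to-point distance between two profiles."""
    d = np.sum((a - b) ** 2, axis=1)
    return float(np.sum(w * d) / np.sum(w))
```

With the noise spike masked out, two profiles that differ only at the unstable edge point score as a perfect match.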
(Tilt invalidation function)

In addition, for workpieces with unstable shapes, such as the workpiece with edges shown in FIG. 56, besides manually setting invalid areas in the registered profile, areas with large changes in inclination can be automatically extracted from the registered profile and then invalidated or given a low weight. This processing is performed automatically by the invalidating means 834 shown in FIG. 4. This procedure will be described below based on the flowchart of FIG. 57 and the image diagram of FIG. 56. First, in the same manner as in FIG. 53, the rotation angle and moving distance for starting the profile search are set in step S651, and the registered profile is rotated and moved in step S652. In step S653, the invalidating means 834 analyzes the data of the registered profile and extracts regions where the shape changes sharply and regions where it changes gently. In step S654, the profile matching means 614 calculates the difference from each point of the input profile. In step S655, processing for reducing the influence of the sharply changing portions is performed: these regions are either excluded from the processing target or weighted so that their contribution becomes small. In step S656, the profile matching means 614 calculates the matching degree of the profile shape based on the differences between the points, and in step S657 the condition with the highest matching degree so far is stored as the detection position. In step S658, it is determined whether the entire range of the input profile has been scanned; if not, the process returns to step S652 to continue the above processing, and once the entire range has been scanned, the condition with the highest matching degree is output as the detection position. In this way, even for a workpiece having edge portions, high-precision matching can be realized by weighting according to the change in inclination of the registered profile, and detection accuracy can be improved.
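A minimal sketch of the automatic extraction in step S653 and the down-weighting in step S655, assuming the steepness of each region is judged from the numerical slope |dz/dx| against an arbitrary limit (both the limit and the reduced weight are illustrative values):

```python
import numpy as np

def tilt_weights(profile, slope_limit=2.0, low_weight=0.1):
    """Down-weight points of an (N, 2) profile whose local slope |dz/dx|
    exceeds slope_limit, i.e. regions where the shape change is steep."""
    slope = np.gradient(profile[:, 1], profile[:, 0])  # dz/dx at each point
    w = np.ones(len(profile))
    w[np.abs(slope) > slope_limit] = low_weight        # reduce, don't exclude
    return w
```

The resulting weight vector can be fed into the same weighted matching score used for manually set invalid areas.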
(Blind spot processing function)

On the other hand, as shown in FIG. 58, when a workpiece having a convex shape is measured with an optical displacement meter using the light-section principle, it is difficult to measure with high accuracy shapes that can fall into blind spots, such as stepped portions of the profile shape. For example, when the workpiece shown in FIG. 58 (a) rotates into the state of FIG. 58 (b), the irradiation light does not reach the root of the convex portion even though it is irradiated from above, and the profile of FIG. 58 (c) is observed. When a blind spot occurs in this way, the shape matching accuracy drops and the matching result becomes unstable. To deal with this problem, the invalidating means 834 automatically extracts portions of the registered profile whose inclination is nearly vertical, and excludes such portions from the measurement or reduces their weight. This will be described with reference to the image diagram of FIG. 58 and the flowchart of FIG. 59. First, in step S671, the rotation angle and moving distance for starting the profile search are set in the same manner as in FIG. 53, and in step S672 the registered profile is rotated and moved. When the registered profile is registered, it can be determined whether it contains portions where a blind spot can be generated by rotation. Therefore, in step S673, the invalidating means 834 determines whether such portions are included and, if so, identifies their positions. In step S674, the profile matching means 614 calculates the difference from each point of the input profile. Here, for portions that become blind spots due to the rotation, processing for reducing their influence is performed (step S675): these regions are either excluded from the processing target or weighted so that their contribution becomes small. Thereafter, in the same manner as in FIG. 57, the profile matching means 614 calculates the matching degree of the profile shape based on the differences between the points in step S676, and the condition with the highest matching degree so far is stored as the detection position in step S677. In step S678, it is determined whether the entire range of the input profile has been scanned; if not, the process returns to step S672 and the above processing is continued, while if the entire range has been scanned, the process proceeds to step S679 and the condition with the highest matching degree is output as the detection position. In this way, even for a workpiece in which blind spots can occur, high-precision matching can be realized by weighting according to the inclination of the registered profile, and detection accuracy can be improved.
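One way the blind-spot check of step S673 could be sketched, assuming a top-down light-section view of a left-to-right profile: after rotating the registered profile, any stretch where the rotated x-coordinate stops advancing folds under the surface and cannot be seen from above. The rotation convention and the mask logic are illustrative assumptions, not the patent's method.

```python
import numpy as np

def blind_spot_mask(profile, angle_deg):
    """Mark points of a left-to-right (N, 2) profile that become invisible to a
    top-down sensor after rotation by angle_deg: where the rotated x-coordinate
    does not increase, the surface folds back and falls into a blind spot."""
    a = np.radians(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    p = profile @ rot.T
    dx = np.diff(p[:, 0])
    return np.concatenate([[False], dx <= 0.0])  # True = blind-spot candidate
```

A vertical wall is flagged even at zero rotation, and rotating the step makes the occluded region grow, matching the behavior described for FIG. 58.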
(Improved lateral accuracy)

On the other hand, when a nearly vertical cross section is measured using the light-section principle, a problem arises in that the lateral accuracy is limited by the resolution of the light receiving element. That is, as shown in FIG. 60, at the edge surface ED of a workpiece that changes in a convex shape, i.e., at portions where the profile changes sharply such as the rising and falling portions of the convexity, the detected position jumps between adjacent pixels, so the data becomes discrete with no intermediate positions and the accuracy deteriorates. Meanwhile, the received light image itself shows a change in the amount of received light corresponding to the change in the edge position, as shown in FIG. 61. In particular, with a two-dimensional light receiving element such as a CCD, the amount of light received by adjacent pixels near the edge surface changes stepwise, as shown in FIG. 62. Therefore, when measuring an edge position close to vertical, position detection accuracy can be improved by using the received light image data before conversion to the profile shape, rather than performing position correction on the data after conversion. This detection is performed by the edge surface calculation means 618 shown in FIG. 4. Thereby, the rising and falling positions can be detected with sub-pixel accuracy without being limited by the resolution of the CCD. This procedure will be described based on the flowchart of FIG. 63. First, in step S711, the corresponding region on the received light image is calculated based on the range of the registered profile set on the profile shape. In step S712, the registered profile of the corresponding region is moved. In step S713, normalized correlation is computed between the registered profile and the input profile, and a correlation value is calculated. In step S714, the condition (movement amount) with the highest correlation value is stored. The process then returns to step S712 and repeats until the predetermined region has been fully scanned (step S715). In step S716, interpolation processing is performed based on the condition with the highest correlation value and the surrounding correlation values, and the peak position is calculated with sub-pixel accuracy. Finally, in step S717, since the peak position has been calculated on the received light image, the result is linearized and converted into distance information. In this way, extremely high-precision matching using the received light image can be realized.
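The normalized correlation of step S713 and the sub-pixel interpolation of step S716 can be sketched as below, using three-point parabolic interpolation around the best integer position. This is one common interpolation choice; the patent does not specify the formula, and the function names are illustrative.

```python
import numpy as np

def normalized_correlation(template, window):
    """Zero-mean normalized correlation between two equal-length signals (step S713)."""
    t = template - np.mean(template)
    w = window - np.mean(window)
    denom = np.sqrt(np.sum(t * t) * np.sum(w * w))
    return float(np.sum(t * w) / denom) if denom > 0 else 0.0

def subpixel_peak(scores):
    """Refine the integer argmax of the correlation scores to sub-pixel order
    by fitting a parabola through the peak and its two neighbours (step S716)."""
    i = int(np.argmax(scores))
    if i == 0 or i == len(scores) - 1:
        return float(i)                       # no neighbour on one side
    y0, y1, y2 = scores[i - 1], scores[i], scores[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
```

The parabola vertex lands between pixels whenever the neighbouring correlation values are asymmetric, which is exactly how a sub-pixel edge position emerges from pixel-spaced samples.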
(Shape measurement function)

The profile search process described above can be used not only for position correction of the profile shape but also for measurement of the profile shape. For example, in the example of FIG. 64, for a workpiece WK14 with mutually facing portions, the common profile designation frame KW is designated for the two facing shapes, and processing such as detecting the respective points by profile search with the profile matching means 614 is also possible. Thus, the profile search can function effectively not only in position correction but also in measurement.
(Profile calculation function)

Furthermore, profile moving means 824 for manually moving the profile shape, and operation functions such as addition and inversion of profiles, can be provided. FIG. 65 shows an example of a screen for manually operating the profile shape displayed on the display unit 70. The display unit 70 shown in this figure shows the profile shape of the workpiece in the profile display area 71. In the example of FIG. 65 (a), the right side of the profile display area 71 is divided vertically in two, and the profile shapes PR1 and PR2 acquired by the two sensor heads are respectively displayed. Further, an added profile shape PR3, obtained by adding the profile shapes PR1 and PR2, is displayed on the left side of the profile display area 71. In this way, operations such as addition, subtraction, and difference extraction can be performed on a plurality of profile shapes. In particular, by extracting the difference information of the common shape portions of profile shapes synthesized by profile search or the like using the difference extraction means 615, the difference between the profiles can be calculated and displayed, so that the profiles can be easily compared.
(Profile transfer means)

In the example of FIG. 65 (b), the lower side of the profile display area 71 is divided left and right to display the profile shapes PR1 and PR2 respectively, and a synthesized profile shape PR4 obtained by combining the two profile shapes is displayed on the upper side of the profile display area 71. The profiles are combined by the profile search of the profile matching means 614. Further, from the screen of FIG. 65, each profile shape can be moved vertically and horizontally and inverted; a rotation or enlargement/reduction function may also be provided. As a result, the composite profile shape synthesized by the profile search can be finely adjusted manually, for example by setting an offset amount.
(Profile shape coloring display function)

By displaying the profile shape on the display unit as shown in FIG. 8, the profile shape of the workpiece can be confirmed. However, since the profile shape alone does not convey the variation or change in the amount of received light, it cannot be determined from this image alone whether the exact shape of the workpiece is being measured. For this reason, as shown in FIG. 9, the display on the display unit is switched to the received light image to confirm the light receiving state of each part of the profile shape. However, the received light image is a black-and-white grayscale image, and the distribution of received light is difficult to read from it. Moreover, if the profile shape and the received light image are displayed individually, it is not easy to confirm corresponding positions, and having to switch the display screen each time is cumbersome. Therefore, in the present embodiment, the profile shape can be colored by the profile coloring means so that the amount of received light and its change can be grasped from the profile shape alone.
(Profile coloring function)

  The profile coloring means divides the range of light quantity in advance and, according to the light quantity at each position constituting the profile shape, displays the profile shape on the display unit with a different color assigned to each range. The profile coloring means can also divide the range of light quantity change in advance and, according to the amount of change in the light quantity at each position of the profile shape, display the profile shape on the display unit colored with a different color assigned to each range. Hereinafter, how the profile shape is colored by the profile coloring means will be described with reference to the image diagrams of the profile shape displayed on the display unit shown in FIGS.

  In FIG. 66, when the profile shape is displayed, the color applied by the profile coloring means is changed according to the light quantity state of the profile, specifically the magnitude of the light quantity. In this example, a portion with an appropriate amount of light is shown in blue (solid-line hatching in FIG. 66), a portion with a light amount greater than a predetermined upper-limit threshold in white (thick, narrowly spaced hatching in FIG. 66), and conversely a portion with a light amount less than a predetermined lower-limit threshold in red (triple-line hatching in FIG. 66). By displaying the profile shape color-coded in this way on the display unit, it is possible to judge whether measurement is being performed stably just by looking at the profile shape. As an example of the threshold values, when the received light amount is detected with 256 gradations (8 bits) from 0 to 255, the upper-limit threshold can be set to 191 and the lower-limit threshold to 64. Such thresholds may be set in advance on the optical displacement meter side, or may be arbitrarily designated and adjusted by the user. In particular, by setting appropriate thresholds according to the measurement conditions, the stability of the profile shape can be confirmed with a color display better suited to the application.

  FIG. 67 shows an example in which the profile coloring means performs the coloring process according to the change in the light amount of the profile. One case in which the profile shape is not stable is when the light amount fluctuates severely. To make regions of significant light-amount change identifiable, if the change in received light amount between neighboring positions is within a predetermined range, the change is regarded as appropriate (blue in the example of FIG. 67); if it exceeds that range, the change is regarded as severe, and the profile coloring means displays it in green (broken-line hatching in the example of FIG. 67). As an example of the range, when detection is performed with 8 bits from 0 to 255 as described above, the change in received light amount is regarded as appropriate when the difference between adjacent pixels constituting the profile shape is within 64. This range can be set in advance on the optical displacement meter side or set arbitrarily by the user. It is also possible to make the determination from the ratio of the maximum/minimum change to the average, referring to the light reception levels of several pixels before and after the target pixel.

  Furthermore, as shown in FIG. 68, the profile coloring means can perform the coloring process by combining the above criteria. In the example of FIG. 68, coloring is performed according to both the amount of received light and its amount of change at each position of the profile. An example of the coloring determination procedure will be described with reference to the flowchart of FIG. 69. First, in step S691, the target pixel for the profile-shape coloring process is moved to its initial position. Since the profile shape is a line, it is not necessary to scan all pixels of the image displaying the profile shape; only the pixels of the received light image corresponding to the positions of the profile shape are scanned in sequence, making the coloring process efficient. The range over which the coloring process is performed within the profile shape can also be further narrowed. In step S692, it is determined whether the amount of light received at the pixel is appropriate by comparison with a predetermined threshold. If appropriate, the process proceeds to step S693-1 to determine whether the amount of received light is stable: the received light amount is compared with that of the immediately preceding adjacent pixel, and when the difference is smaller than a predetermined threshold the pixel is judged stable, otherwise unstable. If stable, the process proceeds to step S694-1 to apply the color for an appropriate and stable received light amount (for example, blue), and then proceeds to step S695. If it is determined in step S693-1 that the amount of received light is not stable, the process proceeds to step S694-2 to apply the color for an appropriate but unstable received light amount (for example, green), and then proceeds to step S695. On the other hand, if it is determined in step S692 that the amount of received light is not appropriate, the process proceeds to step S693-2 to judge the received light amount itself: it is determined whether the amount of received light is large compared with a predetermined threshold. If large, the process proceeds to step S694-3 to apply the color for an excessive light amount (for example, white); if small, the process proceeds to step S694-4 to apply the color for an insufficient light amount (for example, red), and then proceeds to step S695. When the coloring of the pixel is complete, it is determined in step S695 whether the pixel has reached the final position; if not, the process proceeds to step S696 to move the target pixel, returns to step S692, and repeats the above steps. If it is determined in step S695 that the pixel has reached the final position, the process ends. By the above procedure, the target pixels can be scanned in sequence and the profile shape colored.
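The decision tree of FIG. 69 can be sketched as follows. The threshold values reuse the 8-bit examples given earlier in the text (upper limit 191, lower limit 64, stability difference 64), and the function names are illustrative assumptions:

```python
def classify_pixel(level, prev_level, upper=191, lower=64, max_delta=64):
    """One pass of steps S692-S694: pick a color for a pixel from its
    received light level and the level of the immediately preceding pixel."""
    if lower < level < upper:                     # step S692: amount appropriate?
        if abs(level - prev_level) <= max_delta:  # step S693-1: stable?
            return "blue"                         # appropriate and stable
        return "green"                            # appropriate but unstable
    return "white" if level >= upper else "red"   # step S693-2: too much / too little

def colorize_profile(levels, **thresholds):
    """Scan only the pixels along the profile line (steps S695-S696),
    coloring each one from its level and its predecessor's level."""
    colors, prev = [], levels[0]
    for lv in levels:
        colors.append(classify_pixel(lv, prev, **thresholds))
        prev = lv
    return colors
```

Scanning only the profile line rather than the whole image is what keeps the coloring pass cheap, as the text notes.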

  The procedure for coloring the profile is not limited to the above; other methods, such as the procedure shown in the flowchart of FIG. 70, can also be used. In this procedure, moving to the initial pixel position in step S701 and then determining in step S702 whether the received light amount of the pixel is appropriate are the same as in FIG. 69. If the amount of received light is appropriate, the process proceeds to step S703-1 and the color for an appropriate light amount (for example, blue) is applied. On the other hand, if the amount of received light is smaller than the predetermined lower-limit threshold, the process proceeds to step S703-2 to determine whether the amount of received light is stable: the received light amount is compared with that of the immediately preceding adjacent pixel, and when the difference is smaller than a predetermined threshold the pixel is judged stable, otherwise unstable. If stable, the process proceeds to step S704-1, the color for an insufficient light amount (for example, red) is applied, and the process proceeds to step S705. If it is determined in step S703-2 that the state is not stable, the process proceeds to step S704-2, the color for a large change in received light amount (for example, green) is applied, and the process proceeds to step S705. On the other hand, if it is determined in step S702 that the amount of received light is greater than the upper-limit threshold, the process proceeds to step S703-3, where it is likewise determined whether the amount of received light is stable. If stable, the process proceeds to step S704-3 and the color for an excessive light amount (for example, white) is applied; if not stable, the process proceeds to step S704-4 and the color for a large change (for example, green) is applied, after which the process proceeds to step S705. When the coloring of the pixel is complete, it is determined in step S705, as in FIG. 69, whether the pixel has reached the final position; if not, the process proceeds to step S706 to move the target pixel and returns to step S702 to repeat the above steps. If it is determined in step S705 that the final position has been reached, the process ends. By this procedure as well, the target pixels can be scanned in sequence and the profile shape colored.

These procedures are examples, and the same result can be obtained even if the order is changed, for example by determining whether the received light amount is stable before judging its magnitude. Moreover, the coloring described above is only an example; combinations of colors that are easy to distinguish from one another and easy to see on the display unit can be selected as appropriate.
(Profile display function over time)

  With the above method, the profile shape at one point in time can be confirmed on the display unit. On the other hand, a workpiece that changes with time may be measured, for example when measuring workpieces conveyed sequentially on a line. In such a case, the measured profile of the workpiece also changes with time. The optical displacement meter can store the measured profile shape data in the memory unit and, as necessary, go back to a certain point in the past and display the profile shape at that time. With a conventional optical displacement meter, however, it was only possible to switch between the profile shapes at individual points in time, and it was not easy to judge how the profile changed over time or whether the profile was being measured stably at a given time.

In contrast, in the present embodiment, the profile shapes at the respective points in time can be displayed on the display unit superimposed on one another, with different highlight processing applied to each profile shape by the profile highlight means. The profile shapes can thus be distinguished from one another, and the temporal change of the profile shape can easily be confirmed visually. This state will be described with reference to FIGS.
(Profile highlight function)

The profile highlight means displays a plurality of profile shapes measured at different timings on the display unit, with different highlight processing applied to each profile shape in time series. In the example of FIG. 71, a plurality of profile shapes acquired at regular time intervals are superimposed on the display unit, with the latest profile shape drawn with a thicker line and older profile shapes with thinner lines. Alternatively, as shown in FIG. 72, highlight processing may be applied such that the latest profile shape is darker and older profile shapes are lighter. Furthermore, the coloring can be changed stepwise or gradually (for example, newer in blue and older in red), the line pattern can be varied among solid, broken, one-dot chain, and two-dot chain lines, or these can be combined; any highlight pattern that distinguishes each profile from the others can be used. Thereby, the temporal stability of the profile can be displayed based on a plurality of profile shapes measured at different timings.
(Profile width display function)

Further, the profile highlight means can display a plurality of profile shapes measured at different timings on the display unit and color the band defined by the maximum and minimum values at each position. With such a profile width display function, the possible range of the profile can be grasped visually. A profile distribution display, a function for coloring the region surrounded by the envelope, and the like can be added as necessary.
(Average profile display function)

  Further, an average profile display function can be provided that displays the average value of the profile at each position in a color different from the width display color. In the example of FIG. 73, the locus of the profile shapes is filled with yellow (hatching in FIG. 73) and displayed as a band, and the average value at each position is colored blue (thick line in FIG. 73) to display the average profile shape on the display unit. Thus, the range and average value the profile can take are known from the history of profiles acquired in the past, which is useful for analysis.

Furthermore, the profile width display function and the average profile display function described above can be executed not only for all data captured in the past and held in the memory unit, but also within a specific period or number of data. For example, the profile width or average can be displayed for all profile shapes from the start of measurement until a reset, limited to the latest N profiles, or limited to a sampling-trigger period or a range going back t seconds into the past. In this way, a plurality of profile shapes are displayed superimposed, and calculation and measurement of displacement, height difference, area, and the like can further be performed in this state as necessary, so that the measurement range can be set easily.
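The width band and the average overlay can be computed from the stored history as a simple per-position reduction. The sketch below assumes all stored profiles share the same x sampling and that the history list is ordered oldest-first; names are illustrative.

```python
import numpy as np

def profile_envelope(history, latest_n=None):
    """Per-position (min, max, mean) over the stored profiles.
    history: sequence of equal-length z-value arrays, oldest first.
    latest_n: optionally limit the statistics to the newest N profiles,
    mirroring the 'latest N' display mode described in the text."""
    data = np.asarray(history, dtype=float)
    if latest_n is not None:
        data = data[-latest_n:]
    return data.min(axis=0), data.max(axis=0), data.mean(axis=0)
```

The min/max pair defines the colored width band at each position, and the mean gives the average profile drawn in a contrasting color.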
(Light receiving image coloring function)

  The means for displaying the profile shape on the display unit have been described above. In addition to the profile shape, the received light image picked up by the two-dimensional light receiving element can be displayed on the display unit, either alone or superimposed on the profile shape. For example, when the profile shape of the workpiece cannot be measured properly, the cause may be determined by switching from the profile shape shown in FIG. 8 to the received light image shown in FIG. 9. However, since the received light image is a black-and-white grayscale image, the distribution of received light is difficult to read.

  Therefore, in the present embodiment, as shown in FIG. 74, the received light image can be displayed on the display unit colored by the received light image coloring means. The received light image coloring means applies a coloring process to the received light image according to the gradation of the received light signal at each pixel: the gradation range is divided into a plurality of ranges in advance, a different color is assigned to each range, and each pixel of the received light image is colored with the color assigned to its gradation. As a result, the received light image is displayed like a contour map, so that the luminance distribution of the received light signal can be grasped visually. In addition, whether the gradient of the received light distribution is steep or gentle can be recognized from the density of the contour outlines, so the degree of inclination of the profile can be grasped visually. Furthermore, the peak level of the profile shape is easily recognized from the color. For example, when the two-dimensional light receiving element outputs the received light signal in 256 gradations (8 bits) from 0 to 255, the signal can be divided into 16 ranges of 16 gradations each and a different color assigned to each range (for example, in order from the highest: yellow, green, blue, purple, orange, pink, red, etc.), so that the received light level is expressed in 16 colors. In the example of FIG. 74, the WKa portion on the received light image display area 74 is color-coded from the top into seven areas: orange DC, purple PC, blue BC, green GC, blue BC, purple PC, and orange DC; the WKb portion is color-coded from the top into nine areas: orange DC, purple PC, blue BC, green GC, yellow YC, green GC, blue BC, purple PC, and orange DC; and the WKc portion is likewise color-coded from the top into nine areas: orange DC, purple PC, blue BC, green GC, yellow YC, green GC, blue BC, purple PC, and orange DC.
Such a contour display function allows a planar received light image to be expressed like a contour model, with the luminance value as the height direction. Compared with a plain monochrome grayscale image, the color-coded display is far easier to read, and the brightness level, distribution, gradient, and so on can be grasped visually. The contour display function can be turned ON and OFF and executed as necessary. The color coding of the contour bands is an example, and it goes without saying that any combination of colors can be used.
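The 16-band division of the 8-bit received light level amounts to integer binning. The mapping below is an assumed sketch of one straightforward way to assign each pixel a contour band; the palette itself (yellow, green, blue, ...) is left to the display side.

```python
import numpy as np

def contour_bands(image, levels=256, bands=16):
    """Map each received-light value (0..levels-1) to a contour band index
    (0..bands-1, higher = brighter); each band spans levels/bands gradations,
    e.g. 16 gradations per band for an 8-bit signal."""
    img = np.asarray(image)
    return (img * bands // levels).astype(int)
```

Rendering each band index with its assigned color produces the contour-map appearance described for FIG. 74, with band boundaries acting as the contour outlines whose spacing reveals the gradient.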

  In addition, the configuration is not limited to color-coding the entire luminance distribution of the received light signal; the color coding may be applied only to a designated range. Further, instead of or in addition to color coding, patterns such as different hatching or dots can be applied, making the luminance distribution of the received light image still easier to grasp visually.

Furthermore, when a plurality of light receptions occur in the light-section method, it is difficult to determine which portion is being measured even if the received light image is displayed on the display unit. Thus, as shown in FIG. 75, by displaying the profile shape over the received light image colored by the received light image coloring means, it becomes easy to see visually which position of the received light distribution is being detected. In the example of FIG. 75, the profile shape is displayed in red RC on the received light image display area 74 shown in FIG. 74. That is, the WKa portion on the received light image display area 74 is color-coded from the top into seven areas: orange DC, purple PC, blue BC, red RC, blue BC, purple PC, and orange DC; the WKb portion into nine areas from the top: orange DC, purple PC, blue BC, green GC, red RC, green GC, blue BC, purple PC, and orange DC; and the WKc portion likewise into nine areas from the top: orange DC, purple PC, blue BC, green GC, red RC, green GC, blue BC, purple PC, and orange DC.
(Trend graph)

  Further, in addition to the profile display area for displaying the profile shape, the display unit can be provided with a trend graph display area for displaying a trend graph indicating the temporal change in the light emission amount of the light projecting unit or the light reception amount of the two-dimensional light receiving element. FIG. 76 shows a display example of such a display unit, in which a profile display area is provided at the top to display the profile shape generated at the latest time, and a trend graph display area is provided at the bottom to display the trend graph. In this example, the trend graph display area is provided by dividing one screen of the display unit, so that the profile shape and the trend graph can be confirmed simultaneously on one screen. Alternatively, a trend monitor screen for displaying the trend graph may be provided separately from the display unit that displays the profile shape.

The trend graph implements a trend recording function that records time-series changes with the horizontal axis as the time axis. The vertical axis of the trend graph can display not only the light emission amount and the light reception amount but also a feedback-controlled operation amount, such as a change in the amplification factor of the amplifier, or a measurement value that is the control target. In addition, a threshold value for determining whether an error has occurred can be displayed over the trend graph.
(Time designation means)

  Further, the profile shape at a designated time can be displayed in the profile display area by time designation means for designating a time on the trend graph display area. When a different time is specified by operating the time designation means, the profile display area switches to display the profile shape at the newly designated time. In this way, linking the trend graph display area and the profile display area realizes a more user-friendly operating environment. In the example of FIG. 76, a predetermined time is designated directly on the trend graph with a pointing device such as a mouse serving as the time designation means. The designated time is displayed as a broken line perpendicular to the horizontal axis, so the user can check on the trend graph the imaging time of the profile shape currently displayed in the profile display area. Moreover, the time can easily be shifted forward or backward by operating the mouse to adjust the position of the broken line.

The trend graph and the profile shape are stored in the memory unit together with their acquisition time information, and the profile shape measured at the time designated by the time designation means (or the nearest time) is read from the memory unit and developed in the profile display area.
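The nearest-time lookup described above can be sketched minimally as follows; the (timestamp, profile) pair layout is an assumption made for illustration.

```python
# Sketch: look up the stored profile whose timestamp is closest to the
# time designated on the trend graph. Data layout is hypothetical.
def nearest_profile(store, t):
    """store: list of (timestamp, profile) pairs; returns the profile
    recorded at, or nearest to, the designated time t."""
    return min(store, key=lambda entry: abs(entry[0] - t))[1]
```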
(Alarm occurrence period display function)

Such a trend graph is suitable for monitoring the state of feedback control of the light emission amount and the light reception amount, and is particularly useful for investigating the cause of an abnormality and observing the effect of countermeasures. The occurrence timing of an alarm can be identified on the visualized trend graph and its cause analyzed, and even after improvement measures have been taken, whether the improvement is correctly reflected can be confirmed from the trend graph. For example, as shown in FIG. 76, consider a case in which the reflectance changes rapidly depending on the presence or absence of a workpiece, the feedback control cannot respond instantaneously, and the light emission amount increases abruptly so that an error occurs. First, the user confirms the location where the error occurred on the trend graph. At this time, as shown in FIG. 77, by displaying the alarm occurrence period on the trend graph, the user can quickly grasp where the problem occurred.
(Alarm detection function)

  The alarm signal is generated by alarm detection means. For example, when feedback control is performed by the light reception level control means, the alarm detection means outputs an alarm signal if the operation amount or the light reception amount exceeds a predetermined threshold value. When the alarm detection means outputs an alarm signal, the occurrence period is held in the memory unit. When the trend graph is displayed, the alarm occurrence period is read from the memory, and the trend graph creation means automatically highlights the alarm occurrence period on the time axis of the trend graph displayed in the trend graph display area. As shown in FIG. 77, the highlight may display the alarm occurrence period as a color-coded band; alternatively, hatching, reverse display, blinking, or changing the thickness, color, or line type of the trend graph line between the alarm occurrence period and other periods, among various other methods of distinguishing the period from other parts, can be adopted as appropriate. As a result, the user can immediately identify the alarm occurrence period on the trend graph, so the timing of the problem can be quickly grasped. The user then analyzes the trend graph and adjusts the operation amount of the feedback control in order to shorten the alarm occurrence period.
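The detection of alarm occurrence periods from a sampled signal can be sketched as follows; the names and the (time, value) sample layout are illustrative, not taken from this embodiment.

```python
# Sketch: derive alarm-occurrence periods from a sampled signal, as the
# alarm detection means does when the operation amount or received light
# amount exceeds a predetermined threshold.
def alarm_periods(samples, threshold):
    """samples: list of (time, value); returns [(start, end), ...] spans
    during which value exceeded threshold."""
    periods, start = [], None
    for t, v in samples:
        if v > threshold and start is None:
            start = t                      # alarm begins
        elif v <= threshold and start is not None:
            periods.append((start, t))     # alarm ends
            start = None
    if start is not None:                  # still in alarm at end of data
        periods.append((start, samples[-1][0]))
    return periods
```

Each returned span would then be drawn as a highlighted band on the trend graph's time axis.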

In addition, after taking the necessary measures, the improvement effect can be observed on the trend graph. In the example of FIG. 78, the light emission amount is increased gradually, as indicated by the broken line, so that no overshoot of the light emission amount occurs at the point where the moving workpiece enters the measurement target region, and the amount can be set so as not to exceed the threshold. After making such settings, the user can actually insert the workpiece and confirm on the trend graph that the system operates correctly without causing errors. If necessary, resetting such as fine adjustment of the light emission amount can be performed, and whether the result of the resetting is correctly reflected can easily be confirmed on the trend graph.
Thus, by observing the response of the feedback control on a trend graph, the operation amount of the feedback control can be adjusted according to the use environment, for example to the workpiece reflectance and entry speed. Furthermore, since the alarm occurrence period can be displayed over the trend graph during this work, workability in improving countermeasures against errors is enhanced, and confirming the occurrence of errors after setting becomes more convenient.
(Simultaneous display function of received light amount and emitted light amount)

As described above, the vertical axis of the trend graph is not limited to the light emission amount; it may show the light reception amount, or the light emission amount and the light reception amount simultaneously. FIG. 79 shows an example of a trend graph displaying the light emission amount and the light reception amount simultaneously; the light emission amount is indicated by a solid line and the light reception amount by a broken line. In feedback control, the light emission amount changes in response to changes in the light reception amount, so being able to display not only the emitted but also the received amount contributes to overall analysis.
(Measurement area specification function)

Furthermore, a measurement region can be specified on the profile shape displayed in the profile display area by measurement region specifying means, and the average, peak value, and the like of the received light amount in the specified measurement region can be calculated and displayed. In the example of FIG. 79, the measurement region is specified by surrounding a desired area of the profile shape with a rectangular frame using a pointing device such as a mouse as the measurement region specifying means, and the peak light amount or average light amount in the specified measurement region can be displayed in the trend graph display area. In this way the profile display area and the trend graph display area are linked: the profile shape at the time designated by the time designation means on the trend graph display area is shown in the profile display area, and the light emission amount and light reception amount of the measurement region designated by the measurement region specifying means are displayed as a trend graph in the trend graph display area.
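The peak and average calculation for a rectangular measurement region can be sketched as follows; the row-major image layout and half-open rectangle convention are illustrative assumptions.

```python
# Sketch: compute the peak and average received-light amount inside a
# rectangular measurement region specified on the profile display.
def region_stats(image, x0, y0, x1, y1):
    """image: 2-D luminance values as a list of rows.
    Return (peak, average) of pixels in the rectangle [x0,x1) x [y0,y1)."""
    pixels = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return max(pixels), sum(pixels) / len(pixels)
```

The returned values would then be plotted over time in the trend graph display area.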
(Measurement value display function)

Further, the quantity displayed in the trend graph is not limited to the light emission amount and the light reception amount; a measurement value can also be displayed. By displaying the measurement values targeted by the feedback control directly in time series, it can be confirmed directly that the control result is correctly obtained. FIG. 80 shows an example in which the average of the measured values is displayed in a trend graph. While directly checking the measurement value, the user can adjust the feedback operation amount so as to shorten the period in which the alarm occurs or the period in which the measurement value is unstable. The measurement value obtained after the adjustment can then be confirmed again on the trend graph and fine-tuned. Thus, displaying the measured value on the trend graph makes it easy to confirm the result of adjusting the feedback control operation amount. In addition, the trend graph before adjustment and the trend graph after adjustment may be displayed overlapping each other in the trend graph display area, so that the adjustment of the operation amount and the actually reflected result can be compared more easily. Further, the amplification factor of the amplifier as the operation amount, the light emission amount, the light reception amount, and the like may be displayed overlapping the measurement value as appropriate.
(Linkage of profile data storage function and trend graph)

  Furthermore, the trend graph function can be linked to the profile data storage function, described above, for storing profile data in the profile data storage area of the memory unit. With this, the profile shape at a designated time can be displayed in the profile display area by the time designation means for designating a time on the trend graph display area. When a different time is specified by operating the time designation means, the profile display area switches to display the profile shape at the newly designated time. In this way, linking the trend graph display area and the profile display area realizes a more user-friendly operating environment. In the example of FIG. 81, a thin red line perpendicular to the horizontal axis of the trend graph is displayed as the time designation means, and a predetermined time is designated directly by operating this line with a pointing device such as a mouse. As a result, the profile shape at the time indicated by the thin line is called from the memory unit and displayed in the profile display area. The user can check the imaging time of the profile shape on the trend graph, and by adjusting the position of the thin line with the mouse, can easily move to an earlier or later time.

  To realize the profile data storage function, the trend graph and the profile shape are stored in the memory unit together with their acquisition time information, and the profile shape measured at the time designated by the time designation means (or the nearest time) is read from the memory unit and developed in the profile display area. In the example of FIG. 81, in FIG. 81(b) the time designation means is located at the right end (current time) of the lower trend graph display area, and the profile shape at the current time is displayed in the upper profile display area. When the time designation means is moved with the mouse as shown in FIG. 81(a), the display of the profile shape also changes to the image at the designated time (the profile shape at the time of alarm occurrence).

By combining the profile data storage function that saves the profile shape with the trend monitor display function in this way, the profile shape acquired at a past time can be referenced when going back in time, and when the measurement processing unit measures the workpiece, appropriate measures can be taken in consideration of the stability of the profile waveform. For example, as shown in FIG. 81(a), when the received light amount shows an abnormality on the trend graph, displaying the profile shape in this region (or in the alarm occurrence period) makes it possible to confirm that the shape becomes partially unstable. Since such an unstable region causes workpiece measurement errors, it is preferable to perform measurement within a range in which the unstable region does not occur in order to measure the height of the workpiece stably.
(Control area specification function)

In view of this, a control region to be subjected to feedback control is designated on the profile display area using the control region designating means. The control region is preferably set as wide as possible within a range where no unstable region occurs. In the example of FIG. 81(a), frame-shaped control regions are designated at a total of three locations, the top portion and the left and right skirt portions, which are the flat parts of the convex profile waveform. As a result, the feedback control by the received light data control unit is performed in regions where a stable profile is obtained; by eliminating the unstable region, a more accurate feedback control result is obtained, and the measurement processing unit can measure the workpiece accurately.
(Mask area specification function)

Conversely, a profile mask region that is excluded from feedback control can also be specified. Here, the mask region designation means 86 designates, on the profile display area 71, a profile mask region PM that is not subject to feedback control. In the example of FIG. 81(a), a frame-like profile mask region PM enclosing the unstable region of the profile shape is indicated by a one-dot chain line. When feedback control is performed, the measurement values in the profile mask region PM are ignored, so an accurate feedback control result with such unstable regions eliminated is obtained, and stable, reliable control and measurement are realized. Rather than setting only one of the control region designating function and the mask region designating function, both may also be used simultaneously.
(Sampling trigger)

  Sampling designation means capable of designating the timing and/or the number of recordings of the profile shape in the memory unit can also be provided. This makes it possible to store images taken at a sampling interval specified by the sampling designation means, or a fixed number of images based on a sampling trigger. By limiting the number of profile shapes recorded in the memory unit in this way, an unlimited amount of image data need not be recorded, and the required memory can be reduced. Alternatively, old image data may be set to be overwritten.

The factor for generating the sampling trigger is designated by the user through the sampling designation means; for example, the trigger fires when a measured value, light projection amount, received light amount, or the like exceeds or falls below a predetermined threshold value (trigger level). In the example of FIG. 82, a trend graph (solid line) of the received light amount and the trigger level (broken line) are superimposed in the trend graph display area; when the received light amount exceeds the trigger level, a sampling trigger is output, and the profile shapes for a certain period (the trigger occurrence period indicated by shading) are acquired and stored in the memory unit. By limiting profile shape storage to periods in which an abnormality exceeding a predetermined threshold occurs, only the data that needs to be referenced is collected and used efficiently, and the memory capacity can be used effectively.
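The trigger-then-store behavior can be sketched as follows, assuming (for illustration only) an upward crossing of the trigger level and a fixed number of profiles stored after it; the stream layout is hypothetical.

```python
# Sketch: emit a sampling trigger when the monitored value crosses the
# trigger level upward, then store a fixed number of subsequent profiles.
def triggered_capture(stream, trigger_level, n_after):
    """stream: iterable of (time, value, profile). Returns the profiles
    stored from the first upward crossing of trigger_level onward."""
    stored, armed, prev = [], True, None
    for t, v, profile in stream:
        crossed = prev is not None and prev <= trigger_level < v
        if armed and crossed:
            armed = False                  # trigger fired; start storing
        if not armed and len(stored) < n_after:
            stored.append(profile)
        prev = v
    return stored
```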
(Multiple synthesis)

  The above has described examples in which received light images are captured under different conditions or at different timings. In order to obtain a clearer received light image, it is also possible to perform multiple composition, in which a plurality of pre-combination images, each a received light image, are captured and combined. However, since imaging must be performed a plurality of times, generating the image takes a long time. In addition, it has conventionally not been possible to set the time and range over which multiple composition is performed, and no interface has been provided that allows the multiple composition conditions to be set flexibly. If the setting items can be adjusted while confirming the finished state of the received light image obtained by multiple composition, the user can set more appropriate conditions for multiple composition. Therefore, in the present embodiment, by limiting the range and area over which multiple composition is performed, unnecessary imaging is eliminated, efficiency is improved, and a highly accurate received light image is obtained while the imaging time is shortened. This will be described with reference to the following figures.

  FIG. 83 shows an example in which multiple composition is performed over the entire range. Here, the exposure time of the two-dimensional light receiving element is changed stepwise as the parameter varied under the multiple composition conditions. When, as in the example of FIG. 83, the image contains a portion with very high reflectance (the upper center of FIG. 83, referred to as the "bright portion"), a portion with very low reflectance (the lower left of FIG. 83, the "dark portion"), and a portion of intermediate reflectance (the right center of FIG. 83), pre-combination images must be captured over the whole span from the bright portion to the dark portion in order to perform multiple composition. That is, as shown in FIG. 83, the exposure time must be changed stepwise over the entire range and a plurality of pre-combination images captured over a wide range. In this way, even when one screen contains parts with a large difference in brightness, such as a workpiece with widely differing reflectances, the whole can be confirmed as a single received light image.
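The merging step of multiple composition can be sketched as follows. The per-pixel selection rule used here (keep the capture closest to mid-scale) is an illustrative choice to make the idea concrete, not the algorithm of this embodiment.

```python
# Sketch of multiple composition: several pre-combination images taken at
# stepped exposure times are merged into one composite in which every
# pixel comes from the best-exposed capture (here: nearest to mid-scale).
def multiple_composition(pre_images, mid=128):
    """pre_images: list of same-sized 2-D luminance images (0-255).
    Returns a composite choosing, per pixel, the value nearest mid."""
    h, w = len(pre_images[0]), len(pre_images[0][0])
    return [[min((img[y][x] for img in pre_images),
                 key=lambda v: abs(v - mid))
             for x in range(w)] for y in range(h)]
```

With a long-exposure capture resolving the dark portion and a short-exposure capture resolving the bright portion, the composite keeps each region's well-exposed values.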

On the other hand, depending on the workpiece to be imaged, the number of captures can be reduced. For example, even if the workpiece has varying reflectance, the variation may be small, or the reflectance distribution may be fixed at two or three values (as an example, only two types exist, a bright workpiece and a dark workpiece, with no intermediate workpiece). For such workpieces, it suffices to capture a limited number of pre-combination images (for example, one image each) for the bright and dark portions. Sufficient multiple composite images are still obtained, and as a result the measurement time can be shortened and the multiple composition processing made more efficient. Such an example is shown in FIG. 84. In FIG. 84, when a very dark portion is not needed for measurement, or when no very dark portion is included in the image, the capture of pre-combination images over that range can be omitted. As a result, the number of exposure-time steps is reduced, pre-combination images are captured only within the necessary range, and the total number of captured images is reduced, thereby speeding up the measurement.
(Multiple synthesis range limiting means 88)

  In the example of FIG. 84, the range over which the exposure time is changed is displayed as a gauge, and on this gauge the range in which the exposure time is not changed, that is, in which no pre-combination image is captured, can be designated. In this way, the gauge constitutes the multiple composition range limiting means 88, and the multiple composition range over which the exposure time is actually changed can be designated.

  On the gauge, the start position and end position of the multiple composition range are designated; for example, an arrow indicating the start position and an arrow indicating the end position can be slid into place. Alternatively, the length of the gauge can be stretched or shrunk directly, or adjusted with a slider. In the example of FIG. 83, the multiple composition range is designated by an extendable arrow.

  Further, in this example, the range over which the exposure time is changed is shown as a gauge: a longer exposure time is suited to detecting dark portions, and a shorter exposure time to detecting bright portions. In addition, the brightness of the pixels actually acquired is displayed together with the magnitude of the exposure time. This allows the user to grasp visually how bright the recorded image will be, which is convenient because the adjustment can be performed intuitively. In this example, the exposure time of the two-dimensional light receiving element (here, the time width from the exposure start to the exposure end of the CCD serving as the two-dimensional light receiving element) is adjusted as the parameter varied under the multiple composition conditions, but the adjustment can also be made by shutter speed, aperture, and so on. Alternatively, other parameters such as the light emission amount of the light emitting element (irradiation light amount, light emission output), light emission time, amplification factor of the amplifier (light reception gain), light reception characteristics such as a log characteristic, the input amount supplied to the light projecting unit, or the brightness of the obtained received light image may be adjusted individually or in combination.

  Furthermore, the multiple composition conditions can be designated by the brightness of the resulting received light image. Specifically, when the user specifies the end values (initial value and final value) of the range over which the brightness is to be changed, the single parameter, or the combination of parameters, that yields the specified brightness is back-calculated and adjusted. With this method the user can specify a range with the desired brightness without understanding the meaning of the parameters, and the optical displacement meter internally sets the corresponding parameters automatically so that the specified brightness is obtained; setup is therefore easy, which makes the method especially suitable for beginners.
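The back-calculation of a parameter from a requested brightness can be sketched as follows; the linear brightness-versus-exposure model and calibration point are pure illustration assumptions, since the embodiment does not specify the internal model.

```python
# Sketch: back-calculate the exposure time that would yield a requested
# image brightness, assuming (for illustration only) that brightness
# scales linearly with exposure around a known calibration point.
def exposure_for_brightness(target, calib_brightness, calib_exposure):
    """Return the exposure giving `target` brightness under a linear model."""
    return calib_exposure * target / calib_brightness
```

The same back-calculation applied to the two end values of the desired brightness range would yield the end values of the parameter range.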

  When generating the multiple composite image, the pre-combination images are first captured while the parameter is changed within the designated range in steps of a predetermined width. This change width can be specified by the user as well as determined on the optical displacement meter side. For example, when designating by exposure time, the initial value and change width of the exposure time are designated with the multiple composition range limiting means 88, and the pre-combination images are captured according to the designated multiple composition conditions. The number of captures, that is, the number of pre-combination images to be captured, can also be designated with the multiple composition range limiting means 88; in this case, the step width is automatically calculated from the end values of the range so that the required number of pre-combination images is captured within the specified range. It is also possible not to limit the range at all and to capture the pre-combination images at evenly spaced steps over the entire target range; this method too can shorten the pre-combination image capture process.
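The automatic calculation of the step width from the range end values and the image count can be sketched as a few lines of arithmetic; the function name is illustrative.

```python
# Sketch: derive the evenly spaced exposure-time steps for the
# pre-combination images from a designated range and image count, as when
# the range limiting means is given end values and a number of captures.
def exposure_steps(start, end, count):
    """Evenly spaced exposure times covering [start, end], count >= 2."""
    width = (end - start) / (count - 1)    # change width computed from range
    return [start + i * width for i in range(count)]
```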

Further, the multiple composition range limiting means 88 can also designate a plurality of multiple composition ranges within the target range. The example of FIG. 85 shows an application in which it suffices to measure only two points, a very dark portion and a very bright portion included in the received light image. In this case, as shown in FIG. 85, two multiple composition ranges are designated using two extendable arrows. The acquisition of pre-combination images for the intermediate portion can thereby be omitted, and the overall measurement time shortened.
(Trackback function)

  Furthermore, any pre-combination image that has already been captured can be displayed retroactively using the profile storage function described above. First, as shown in FIG. 86, pre-combination images are captured over the entire target range, or over the multiple composition range designated by the multiple composition range limiting means 88, and multiple composition is performed. All the pre-combination image data captured at this time are stored in the pre-combination image memory unit in association with their imaging conditions, and the pre-combination image selection means 89 can later specify multiple composition conditions, read out the corresponding pre-combination images, and display them on the display unit. The pre-combination image selection means 89 is configured, for example, as a slide bar, and specifies the multiple composition conditions with an arrow. This lets the user indicate and adjust the multiple composition conditions intuitively; the slide bar in particular offers excellent operability because it can be moved continuously to the preceding and following conditions. Needless to say, the multiple composition conditions may also be specified directly as numerical values. In addition to the exposure time of the two-dimensional light receiving element employed in the above embodiment, other parameters, such as the light emission amount of the light emitting element or the time at which the pre-combination image was captured, may be used as the multiple composition condition to be specified. In this way, a desired pre-combination image is called from the pre-combination images captured in the past by the pre-combination image selection means 89 and displayed on the display unit. When the imaging condition, including the imaging time, is changed with the pre-combination image selection means 89, the display of the pre-combination image on the display unit is updated accordingly, and the user can easily grasp the state of the change on the display unit.

At this time, whether an appropriate light quantity has been obtained can be confirmed visually using the contour display function of the received light image, the coloring display function of the profile shape, the line bright waveform display function, and the like described above.
(Save only part)

The above description covered an example in which all the pre-combination images captured over the entire target range are stored, but only a part of the pre-combination images may be stored. For example, after the pre-combination images are captured and the multiple composite image is generated, only representative pre-combination images are extracted from those used and stored in the pre-combination image storage unit 92. Examples include extracting at specified intervals, such as every fifth or every tenth image; extracting only the images corresponding to representative exposure time values; and extracting images at predetermined timings based on a trigger. Limiting the pre-combination images to be stored in this way reduces the required data capacity and makes efficient use of hardware resources. Alternatively, data may be stored with emphasis on regions considered important, such as ranges with a large amount of change, while storage of less important data is reduced or omitted according to a weighting or priority. Such weighting can be realized by having the user manually designate the regions to be stored preferentially, or by automatically detecting the amount of change on the optical displacement meter side and extracting rapidly changing regions. In this example too, the various highlight displays described above, such as color-coded display based on the amount of light, can be applied. Furthermore, the range over which the exposure time is changed can be limited. Needless to say, it is not necessary to synthesize the composite image from all the stored pre-combination images, nor to display all of them on the display unit; only a part can be used.
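The interval-based extraction of representative pre-combination images can be sketched in one line; the convention of always keeping the first image is an illustrative choice.

```python
# Sketch: keep only a representative subset of pre-combination images,
# e.g. every fifth one, to limit the stored data volume.
def decimate(images, every):
    """Return every `every`-th image, always keeping the first."""
    return images[::every]
```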
(Thumbnail display)

  Furthermore, the stored pre-combination images may be called up with the slide bar described above, or the candidate pre-combination images may be displayed as a list and selected, as shown in FIG. 87. In this example, reduced images representing the pre-combination images are displayed at the top of the display unit, and the user selects a desired one from among them to display the corresponding pre-combination image on the display unit. When the number of selectable images is small, they may instead be displayed directly as buttons for the user to select. In addition, each reduced image conveys its imaging conditions through its brightness and exposure time, so the user can grasp the imaging conditions visually even from a small image. In particular, displaying a plurality of reduced images side by side makes it easy to compare the contrast between images, giving excellent visibility and operability.

Alternatively, the reduced images may be thumbnail images obtained by shrinking the pre-combination images. However, when thumbnail images are displayed in the state of FIG. 87, they may be too small and hard to see; preferably, they are laid out as in FIG. 88. In the example of FIG. 88, the upper left image has the longest exposure time, and the exposure time becomes shorter toward the right; the upper right image is continuous with the lower left image, and the lower right image has the shortest exposure time. In this way, all the pre-combination images can be displayed as a list, and a desired image can be selected while comparing the preceding and following imaging conditions. In this example too, the various highlight displays described above, such as color-coded display based on the amount of light, can be applied.
(Real-time display of multiple composite images)

In this way, while confirming the pre-combination images, the conditions and range suitable for generating the multiple composite image are selected, and the multiple composition conditions are reset to perform multiple composition again. To facilitate this work, the multiple composite image can be displayed on the display unit in real time as the multiple composition conditions are reset. FIGS. 89(a) and 89(b) show an example of such real-time display. FIG. 89(a) shows a composite image generated over the multiple composition range of exposure times specified by the slide bar at the upper right of the display area; when the user judges from this state whether the light quantity is appropriate and adjusts the exposure range again with the slide bar, the multiple composite image displayed on the display unit is updated as shown in FIG. 89(b). Since the multiple composite image is displayed according to the currently set multiple composition conditions, the user can easily adjust the range and conditions for multiple composition. In this example too, the various highlight displays, such as color-coded display based on the amount of light, can be applied as necessary. Here, to update the multiple composite image smoothly in real time, the composition is performed using the pre-combination image data already stored in the pre-combination image storage unit 92; real-time updating thus becomes possible and the setting work proceeds smoothly.
(Specify multiple composition area)

  In the above example, multiple composition was performed over the entire area displayed on the display unit. Alternatively, multiple composition can be performed only within a limited area of one image. By reducing the multiple composition area in which multiple composition is performed, the data size of the image can be reduced and the multiple composition processing can be speeded up. Such an example will be described with reference to FIGS. 90 and 91. In the example of FIG. 90, as in FIG. 85 and the like described above, the received light image includes a very bright part (upper center of FIG. 90), a very dark part (lower right), and a part of intermediate brightness (lower left). Of these, measurement is required only in the very bright and very dark areas. In this case, first, using the multiple composition range limiting means 88 described above, two multiple composition ranges are set from the target range: a relatively long exposure time range (multiple composition range 1) corresponding to the very dark part, and a relatively short exposure time range (multiple composition range 2) corresponding to the very bright part.

  At the same time, in each multiple composition range, the received light image is not captured over the entire area; instead, the imaging range is limited and allocated. In this example, as shown in FIG. 91, the region to be imaged in multiple composition range 1, that is, the very dark region, is designated from the received light image by the multiple composition area limiting means. Here, the corresponding multiple composition area 1 is designated in the received light image using a pointing device and is associated with multiple composition range 1. Similarly, as multiple composition area 2 corresponding to multiple composition range 2, the very bright region is designated from the received light image by the multiple composition area limiting means and assigned to multiple composition range 2. After the multiple composition areas have been assigned to the respective multiple composition ranges in this way, the received light image is captured and the multiple composition means 69 generates a multiple composite image. At this time, in addition to limiting the range of exposure times, limiting the area that is actually imaged reduces the amount of image data to be captured and processed, so that a low processing load and high speed can be achieved.

  In the above example, a plurality of multiple composition ranges are first designated using the multiple composition range limiting means 88, and then a multiple composition area is designated for each multiple composition range using the multiple composition area limiting means. However, the present invention is not limited to this procedure. For example, a plurality of multiple composition areas may first be designated using the multiple composition area limiting means, and then a multiple composition range may be designated for each area using the multiple composition range limiting means 88. Alternatively, instead of designating all the multiple composition areas and multiple composition ranges in a batch and then assigning them, a multiple composition range may be designated and the corresponding multiple composition area designated and assigned to it, one pair at a time.

In this way, multiple composition can be made fast enough for real-time processing, and applications that require immediacy, such as inline processing, can be handled.
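A minimal sketch of the area-limited composition described above, assuming rectangular regions and a hypothetical `capture` callback (neither is specified in the text): each exposure setting images only its assigned region, so the captured and processed data volume shrinks accordingly.

```python
def area_limited_composite(capture, assignments, height, width):
    """Compose a full-size image from region-limited captures.

    capture(exposure_us, region) -> rows of pixel values for that region only
                                    (a hypothetical acquisition callback)
    assignments: list of (exposure_us, (y0, y1, x0, x1)) pairs, i.e. each
                 multiple composition range with its multiple composition area.
    Pixels outside every assigned region stay 0 (they are never imaged).
    """
    out = [[0] * width for _ in range(height)]
    for exposure_us, (y0, y1, x0, x1) in assignments:
        block = capture(exposure_us, (y0, y1, x0, x1))
        for dy, row in enumerate(block):
            out[y0 + dy][x0:x1] = row  # paste the region-limited capture
    return out
```

In the FIG. 90/91 example, the long-exposure setting would be paired with the very dark region and the short-exposure setting with the very bright region, so only the two regions that actually need measurement are imaged.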
(Laser scanning type two-dimensional displacement sensor)

  The present invention can also be applied to a laser scanning type two-dimensional displacement sensor. An outline of the laser scanning type two-dimensional displacement sensor 400 is shown in FIG. 92. As shown in this figure, the laser scanning type two-dimensional displacement sensor 400 includes a light emitting element 401, a light projecting lens 402 made of aspherical glass, a scanner 403, a half mirror 404, an X-direction light receiving element 405, a light receiving lens 406, and a Z-direction light receiving element 407. A two-dimensional image sensor such as a CCD or CMOS can be used for the light receiving elements 405 and 407. It is also possible to acquire the light reception signal by switching line by line using a one-dimensional image sensor. With this method, feedback control can be performed individually for each line, and a received light image adjusted to an appropriate light amount for each line can be obtained without generating a composite image. Instead of using band light, the laser scanning type two-dimensional displacement sensor 400 scans linearly by deflecting the spot-shaped laser light emitted from the light emitting element 401 in the X-axis direction with the scanner 403. The linear laser beam is separated by the half mirror 404 into surface-reflected light and transmitted light. The light reflected at the surface of the half mirror 404 forms a spot image on the X-direction light receiving element 405, which determines the measurement point in the X direction. On the other hand, the light transmitted through the half mirror 404 is applied to the workpiece WK13. The diffusely reflected light from the workpiece WK13 is then collected by the light receiving lens 406 and forms a spot on the Z-direction light receiving element 407. The movement of the workpiece WK13 in the Z direction is measured from the change in the spot position.
Thus, by collating the measurement points detected by the X-direction light receiving element 405 with those detected by the Z-direction light receiving element 407, the shape of the workpiece WK13 can be detected.
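For reference, the conversion from spot displacement on the Z-direction light receiving element 407 to a height change follows the usual laser-triangulation geometry. The small-displacement approximation below, and all parameter values in it, are our assumptions for illustration; the patent itself does not give this formula.

```python
import math


def height_from_spot(shift_mm, standoff_mm, focal_mm, angle_deg):
    """Small-displacement triangulation approximation (an assumption here, not
    a formula from the patent): dz ~= p * s / (f * sin(theta)), where p is the
    spot shift on the detector, s the standoff distance, f the receiving-lens
    focal length, and theta the triangulation angle between beam and lens axis.
    """
    return shift_mm * standoff_mm / (focal_mm * math.sin(math.radians(angle_deg)))


# Illustrative numbers only: a 10 um spot shift with 100 mm standoff,
# 25 mm lens, and a 30 degree triangulation angle.
dz = height_from_spot(0.01, 100.0, 25.0, 30.0)
```

Applying this conversion to the spot position of every scanned X position yields the two-dimensional (X, Z) profile the sensor reports.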

  The optical displacement meter, optical displacement measuring method, optical displacement measuring program, computer-readable recording medium, and recording equipment according to the present invention can be suitably applied as a CCD laser displacement sensor capable of measuring the displacement of workpieces such as transparent bodies, resin, or black rubber, and can be used for purposes such as substrate warpage measurement, tire surface shape measurement, dispenser nozzle height control, and stage position control.

FIG. 1 is a diagram showing the measurement principle of the optical displacement meter according to Embodiment 1 of the present invention. FIG. 2 is a top view and a side view showing the external appearance of the optical displacement meter. FIG. 3 is a block diagram showing the structure of feedback control by a microprocessor. FIG. 4 is a block diagram showing an optical displacement meter according to Embodiment 2 of the present invention. FIG. 5 is a block diagram showing an optical displacement meter provided with a control part in the sensor head part. FIG. 6 is a schematic diagram showing a state in which a frame-shaped profile mask region is designated as a profile shape. FIG. 7 is a table showing a list of saved profile data. FIG. 8 is a schematic diagram showing an example of displaying the profile shape of a workpiece on the display unit. FIG. 9 is a schematic diagram showing an example of displaying the received light image of a workpiece on the display unit. FIG. 10 is a block diagram showing an optical displacement meter provided with communication means. FIG. 11 is a schematic diagram showing a workpiece whose surface is flat and whose surface state is uniform. FIG. 12 is a schematic diagram showing a workpiece whose surface is flat and whose surface state changes from part to part. FIG. 13 is a schematic diagram showing a three-dimensional workpiece whose reflective state changes from part to part. FIG. 14 is a schematic diagram showing how the amount of received light is detected in each line of the two-dimensional light receiving element. FIG. 15 is a graph showing a histogram of the luminance distribution of a received light image. FIG. 16 is a graph showing an example of the light receiving characteristics of the two-dimensional light receiving element. FIG. 17 is a schematic diagram showing a state in which profile shapes are displayed as candidate patterns.
FIG. 18 is a schematic diagram in which the profile display area is enlarged. FIG. 19 is a schematic diagram showing a state in which profile shapes and light quantity graphs are displayed as candidate patterns. FIG. 20 is a schematic diagram in which the profile display area and the light quantity graph display area are enlarged. FIG. 21 is a schematic diagram showing the waveform of a received light signal. FIG. 22 is a flowchart showing a procedure for storing the number of peaks and controlling the amount of received light. FIG. 23 is a perspective view and a schematic diagram explaining how a control region is designated. FIG. 24 is a schematic diagram showing an example of a normal received light signal waveform. FIG. 25 is a schematic diagram showing an example of an abnormal received light signal waveform. FIG. 26 is a schematic diagram showing how a light receiving mask region is set in a received light image. FIG. 27 is a schematic diagram showing how reflected light is separated in FIG. 26. FIG. 28 is a schematic diagram showing an example of setting a light receiving mask region close to the measurement light. FIG. 29 is a schematic diagram showing a state in which part of the measurement light is masked by the light receiving mask region due to a positional shift. FIG. 30 is a schematic diagram showing an example in which the light receiving mask region is set in consideration of positional shift. FIG. 31 is a schematic diagram illustrating an example in which a positional shift occurs in the light receiving mask region of FIG. 30. FIG. 32 is a flowchart showing the procedure for setting mask movement. FIG. 33 is a schematic diagram showing how a registration image is set in a received light image. FIG. 34 is a schematic diagram showing how a light receiving mask region is set close to the measurement light.
FIG. 35 is a flowchart showing the execution procedure of mask movement. FIG. 36 is a schematic diagram showing a state in which the position of the registration image in the received light image has been identified by image search. FIG. 37 is a schematic diagram showing how the light receiving mask region tracks the received light image. FIG. 38 is a schematic diagram showing how the inclination correction function is performed on an inclined workpiece. FIG. 39 is a schematic diagram showing how the height difference correction function is performed on a workpiece having a convex shape with a known height difference. FIG. 40 is a block diagram showing an optical displacement meter according to Embodiment 3 of the present invention. FIG. 41 is a schematic diagram showing an example in which two sensor head parts are arranged side by side so as to substantially expand the area of the irradiated light. FIG. 42 is a schematic diagram showing a state in which one of the sensor head parts of FIG. 41 is arranged inclined. FIG. 43 is a schematic diagram explaining the sensor head part coupling function. FIG. 44 is a schematic diagram explaining the direct matching function. FIG. 45 is a flowchart explaining the procedure of the direct matching function. FIG. 46 is a schematic diagram showing a state in which sensor head parts are arranged in mirror symmetry. FIG. 47 is an image diagram showing how the difference between profiles is acquired by the reversing means. FIG. 48 is a perspective view showing how band light is irradiated onto one workpiece so that the profile shapes of different parts are acquired. FIG. 49 is a side view showing how the same workpiece is sandwiched between and measured by two sensor head parts.
FIG. 50 is a side view showing an installation in which the light projecting sides of the sensor head parts face each other. FIG. 51 is a side view showing how the thickness of the convex part of a workpiece is measured by the reversing means. FIG. 52 is a flowchart showing the procedure for reversing a profile shape with the reversing means. FIG. 53 is a flowchart showing the procedure for detecting the matching position from a profile shape by profile search. FIG. 54 is an image diagram showing the result of performing a profile search on a profile shape with a registered profile. FIG. 55 is an image diagram showing how an invalid region is set. FIG. 56 is an image diagram showing how a region of the workpiece where the change in inclination is large is extracted automatically. FIG. 57 is a flowchart showing the procedure for performing invalidation by the inclination invalidation function. FIG. 58 is an image diagram showing a profile shape measured with a blind spot occurring. FIG. 59 is a flowchart showing the procedure for automatically extracting a blind spot by the blind spot processing function. FIG. 60 is an image diagram showing how the profile becomes discontinuous at a workpiece that changes to a convex shape. FIG. 61 is a plan view showing a received light image of the workpiece in FIG. 60. FIG. 62 is a graph showing the change in the amount of light received by the two-dimensional light receiving element at an edge part. FIG. 63 is a flowchart showing the procedure for detecting the edge position using a received light image. FIG. 64 is a schematic diagram showing an example of measuring a workpiece using profile search. FIG. 65 is an image diagram showing an example of a screen for operating a profile shape manually.
FIG. 66 is an image diagram showing an example of coloring and displaying a profile shape according to the light quantity state of the profile. FIG. 67 is an image diagram showing an example of coloring and displaying a profile shape according to the light quantity change of the profile. FIG. 68 is an image diagram showing an example of coloring and displaying a profile shape according to both the light quantity state and the light quantity change of the profile. FIG. 69 is a flowchart showing a procedure for determining the coloring. FIG. 70 is a flowchart showing another procedure for determining the coloring. FIG. 71 is an image diagram showing a state in which a plurality of profile shapes are displayed overlaid, drawn thicker the newer the profile shape. FIG. 72 is an image diagram showing a state in which a plurality of profile shapes are displayed overlaid, drawn darker the newer the profile shape. FIG. 73 is an image diagram showing a state in which an average profile is displayed. FIG. 74 is an image diagram showing a state in which a coloring process is applied to a received light image and displayed. FIG. 75 is an image diagram showing a state in which a profile shape is superimposed on the colored received light image. FIG. 76 is an image diagram showing a display unit provided with a trend graph display area. FIG. 77 is an image diagram showing a state in which an alarm generation period is displayed on a trend graph. FIG. 78 is an image diagram showing how gain adjustment is performed on a trend graph so that the occurrence of overshoot is avoided. FIG. 79 is an image diagram showing a trend graph that displays the light emission amount and the light reception amount simultaneously. FIG. 80 is an image diagram showing a state in which the average value of measured values is displayed in a trend graph. FIG. 81 is an image diagram showing a state in which the profile data storage function and the trend graph are linked.
FIG. 82 is an image diagram showing a state in which a trigger level is displayed in the trend graph display area. FIG. 83 is an image diagram showing a composite image obtained by performing multiple composition based on a plurality of pre-combination images. FIG. 84 is an image diagram showing a composite image from which the dark part is excluded. FIG. 85 is an image diagram showing a composite image for which a multiple composition range has been designated. FIG. 86 is an image diagram showing how a previously captured pre-combination image is read by the track-back function. FIG. 87 is an image diagram showing how a previously captured pre-combination image is read from a list display of pre-combination images. FIG. 88 is an image diagram showing how a plurality of pre-combination images are displayed side by side. FIG. 89 is an image diagram showing how a multiple composite image is displayed in real time. FIG. 90 is an image diagram showing how the multiple composition area in which multiple composition is performed is limited. FIG. 91 is an image diagram showing how the multiple composition area in which multiple composition is performed is limited. FIG. 92 is a schematic diagram showing a laser scanning type two-dimensional displacement sensor. FIG. 93 is a block diagram showing the structure of the principal part of a conventional optical displacement meter. FIG. 94 is a block diagram showing an example of a conventional control circuit for controlling the amount of light received by an optical position detection element. FIG. 95 is a block diagram showing the main circuit structure of another conventional optical displacement meter. FIG. 96 is a schematic diagram for explaining an example of a method of setting a variable width. FIG. 97 is a schematic diagram showing the principle of optical cutting.
FIG. 98 is a schematic diagram showing how band light is projected onto a workpiece having convex protrusions and the height of the convex protrusions is measured. FIG. 99 is a schematic diagram showing a state in which the height of the convex protrusions is measured when the workpiece of FIG. 98 is installed tilted in the width direction. FIG. 100 is a schematic diagram showing how an error arises when the workpiece is installed tilted in the direction orthogonal to the width direction of the band light. FIG. 101 is a schematic diagram showing how light is projected onto a workpiece in which a V-shaped valley is formed. FIG. 102 is a schematic diagram showing how light is projected onto a roller-shaped workpiece. FIG. 103 is a perspective view showing how the profile of adhesive applied onto an uneven workpiece is measured. FIG. 104 is a side view showing how adhesive is applied to the workpiece of FIG. 103. FIG. 105 is an image diagram showing the profile shapes before and after adhesive application measured on the workpiece of FIG. 103. FIG. 106 is a timing chart showing how the same workpiece is imaged at different timings with one sensor head part. FIG. 107 is a timing chart illustrating an example of batch processing based on a plurality of profile shapes before processing and a plurality of profile shapes after processing. FIG. 108 is a timing chart showing an example with comparatively coarse timing before and after processing. FIG. 109 is a timing chart showing an example with comparatively dense timing before and after processing.

Explanation of symbols

DESCRIPTION OF SYMBOLS 100, 200 ... Optical displacement meter; 1 ... Sensor head part; 2 ... Controller part; 3 ... Light projecting part; 4 ... Head connection part; 11 ... Driver; 12 ... Laser diode; 13 ... Light projecting lens; 14 ... Light receiving lens; 15 ... Two-dimensional light receiving element; 16 ... Reading circuit; 21 ... Sensor head part; 22 ... Controller part; 23 ... Electric cable; 24 ... Head connection part; 25 ... Display unit connection part; 26 ... Expansion unit; 27 ... Display unit; 28 ... Amplifier unit; 44 ... Microprocessor; 441 ... Comparison part; 442 ... Operation amount calculation unit; 443 ... Output unit; 451 ... Control target; 452 ... Feedback circuit; 50 ... Head control unit; 51 ... Light control unit; 52 ... Light receiving element control unit; 53 ... Mode switching unit; 54 ... Measurement processing unit; 55 ... Alarm detection unit; 56 ... Image reading section; 57 ... Communication means; 58 ... Stability output means; 59 ... Warning means; 60 ... Light reception data control section; 61 ... Light level control means; 62 ... Image processing section; 63 ... Work determination means; 64 ... Profile calculation means; 65 ... Trend graph creation means; 66 ... Profile coloring means; 67 ... Profile highlighting means; 68 ... Received light image coloring means; 69 ... Multiple composition means; 70 ... Display unit; 71 ... Profile display area; 72 ... Trend graph display area; 73 ... Light quantity graph display area; 74 ... Received light image display area; 80 ... Interface part; 81 ... Manipulated variable adjustment means; 82 ... Measurement area designation means; 83 ... Time designation means; 84 ... Sampling designation section; 85 ... Control region designation means; 86 ... Mask area designation means; 87 ... Multiple composition condition setting means; 88 ... Multiple composition range limiting means; 89 ... Pre-combination image selection means; 90 ... Memory unit; 91 ... Received light peak storage means; 92 ... Pre-combination image storage means; 101 ... Drive circuit; 102 ... Laser diode; 102B ... Light emitting element; 103 ... Light projecting lens; 104 ... Light receiving lens; 105 ... Optical position detection element; 105B ... Image sensor; 106a, 106b ... Current-voltage conversion circuit; 111 ... Light output adjustment circuit; 112 ... Adder; 113 ... Subtractor; 114 ... Error integration circuit; 115 ... Reference voltage generating circuit; 144B ... Control unit; 146B ... Amplifier; 400 ... Laser scanning type two-dimensional displacement sensor; 401 ... Light emitting element; 402 ... Light projecting lens; 403 ... Scanner; 404 ... Half mirror; 405 ... X-direction light receiving element; 406 ... Light receiving lens; 407 ... Z-direction light receiving element; 610 ... Calculation unit; 611 ... Inclination angle calculation means; 612 ... Height difference calculation means; 613 ... Inclination correction means; 614 ... Profile matching means; 615 ... Difference extraction means; 616 ... Image search means; 617 ... Mask movement means; 618 ... Edge surface calculation means; 812 ... Horizontal part designation means; 814 ... Height difference designation means; 816 ... Inclination angle adjustment means; 818 ... Height difference adjustment means; 820 ... Common profile designation means; 822 ... Reversing means; 824 ... Profile movement means; 826 ... Inclination adjustment means; 828 ... Arrangement mode selection means; 830 ... Registered profile designation means; 832 ... Invalid region designation means; 834 ... Invalidation means; 836 ... Measurement light selection means; WK, WK1 to WK15 ... Work; PR1 to PR4, WKa, WKb, WKc ... Profile shape; VC ... Control voltage; VE ... Error signal; VL ... Received light amount voltage; Va ... Light output control signal; Vr ... Reference voltage; SH ... Sensor head part; LB ... Laser light; JS ... Light receiving element; OK, OK' ... Band light; H, H' ... Height; d ... Thickness of convex part; BA ... Blue region; RA ... Red region; GA ... Gray region; DC ... Orange; PC ... Purple; BC ... Blue; GC ... Green; YC ... Yellow; RC ... Red; LI ... Linear image sensor; JM ... Light receiving mask region; UM ... Wavy frame; SK ... Horizontal reference position designation frame; KK ... Height difference reference position designation frame; KW ... Common profile designation frame; PS ... Registered profile designation area; MA ... Invalid region; ED ... Edge surface; GK ... External device; KP ... Candidate pattern; PM ... Profile mask region; TG ... Registered image; SZ ... Adhesive; TS ... Coating device; SR ... Control region; KR ... Measurement region

Claims (7)

  1. An optical displacement meter that measures the displacement of a measurement object,
    A light projecting unit for irradiating the measurement object with light as a band-shaped light having a spread in the first direction, or scanning and irradiating in the first direction;
    A two-dimensional light receiving element for receiving reflected light from the measurement object and outputting it as a light receiving signal at each position in the first direction;
    An amplifier for amplifying a received light signal from the two-dimensional light receiving element;
    A display unit for displaying a received light image generated based on the amplified signal obtained by the amplifier at each point in the first direction by reflected light of the irradiation light from the light projecting unit;
    a received light image coloring means capable of displaying the received light image on the display unit in a state in which a coloring process has been applied, the coloring process dividing the gradation of the light reception signal into a plurality of ranges, assigning a different color to each range, and coloring each pixel of the received light image with the color assigned to the range containing the gradation of that pixel,
    An optical displacement meter comprising:
  2. The optical displacement meter according to claim 1, further comprising:
    Profile calculation means capable of calculating the profile shape of the measurement object from the amplified signal obtained by the amplifier at each point in the first direction by the reflected light of the irradiation light from the light projecting unit,
    wherein the optical displacement meter is configured to be capable of displaying the profile information calculated by the profile calculation means on the display unit together with the received light image colored by the received light image coloring means.
  3. The optical displacement meter according to claim 1, further comprising:
    A measurement area designating means for designating a desired measurement area in a state where the profile shape is displayed on the display unit;
    a measurement processing unit capable of performing a desired calculation on the measurement region designated by the measurement area designating means;
    An optical displacement meter comprising:
  4. An optical displacement meter according to any one of claims 1 to 3,
    The optical displacement meter, wherein the two-dimensional light receiving element is a CCD or a CMOS.
  5. An optical displacement measuring method capable of measuring the displacement of a measurement object based on a light cutting method,
    a step of irradiating a flat measurement object with light scanned in the first direction by a light projecting unit, or with band-shaped light having a spread in the first direction, receiving the reflected light from the measurement object with a two-dimensional light receiving element and outputting it as a light reception signal at each position in the first direction, amplifying the light reception signal from the two-dimensional light receiving element with an amplifier, further converting the amplified signal into a digital signal with a digital conversion means, and acquiring a received light image based on the digital signal obtained by the digital conversion means at each point in the first direction by the reflected light of the irradiation light from the light projecting unit; and
    a step of displaying the received light image on the display unit in a state in which a coloring process has been applied, the coloring process dividing the gradation of the light reception signal into a plurality of ranges, assigning a different color to each range, and coloring each pixel of the received light image with the color assigned to the range containing the gradation of that pixel,
    An optical displacement measuring method comprising:
  6. An optical displacement measurement program capable of measuring the displacement of a measurement object based on a light cutting method,
    a function of irradiating a flat measurement object with light scanned in the first direction by a light projecting unit, or with band-shaped light having a spread in the first direction, receiving the reflected light from the measurement object with a two-dimensional light receiving element and outputting it as a light reception signal at each position in the first direction, amplifying the light reception signal from the two-dimensional light receiving element with an amplifier, further converting the amplified signal into a digital signal with a digital conversion means, and acquiring a received light image based on the digital signal obtained by the digital conversion means at each point in the first direction by the reflected light of the irradiation light from the light projecting unit; and
    a function of displaying the received light image on the display unit in a state in which a coloring process has been applied, the coloring process dividing the gradation of the light reception signal into a plurality of ranges, assigning a different color to each range, and coloring each pixel of the received light image with the color assigned to the range containing the gradation of that pixel,
    An optical displacement measuring program for causing a computer to realize the above.
  7.   A computer-readable recording medium or a recorded device storing the program according to claim 6.
JP2006274533A 2006-10-05 2006-10-05 Optical displacement gauge, optical displacement measuring method, optical displacement measuring program, computer-readable memory medium and recording equipment Pending JP2008096123A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006274533A JP2008096123A (en) 2006-10-05 2006-10-05 Optical displacement gauge, optical displacement measuring method, optical displacement measuring program, computer-readable memory medium and recording equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2006274533A JP2008096123A (en) 2006-10-05 2006-10-05 Optical displacement gauge, optical displacement measuring method, optical displacement measuring program, computer-readable memory medium and recording equipment

Publications (1)

Publication Number Publication Date
JP2008096123A true JP2008096123A (en) 2008-04-24

Family

ID=39379137

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2006274533A Pending JP2008096123A (en) 2006-10-05 2006-10-05 Optical displacement gauge, optical displacement measuring method, optical displacement measuring program, computer-readable memory medium and recording equipment

Country Status (1)

Country Link
JP (1) JP2008096123A (en)

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE45854E1 (en) 2006-07-03 2016-01-19 Faro Technologies, Inc. Method and an apparatus for capturing three-dimensional data of an area of space
US8719474B2 (en) 2009-02-13 2014-05-06 Faro Technologies, Inc. Interface for communication between internal and external devices
US9551575B2 (en) 2009-03-25 2017-01-24 Faro Technologies, Inc. Laser scanner having a multi-color light source and real-time color receiver
CN102232173A (en) * 2009-03-25 2011-11-02 法罗技术股份有限公司 Method for optically scanning and measuring a scene
JP2012521546A (en) * 2009-03-25 2012-09-13 ファロ テクノロジーズ インコーポレーテッド Method for optically scanning and measuring the surrounding space
US9074883B2 (en) 2009-03-25 2015-07-07 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US8625106B2 (en) 2009-07-22 2014-01-07 Faro Technologies, Inc. Method for optically scanning and measuring an object
JP2011089909A (en) * 2009-10-23 2011-05-06 The Nippon Signal Co Ltd Distance image sensor, and distance image processing system with the use of the same
US9529083B2 (en) 2009-11-20 2016-12-27 Faro Technologies, Inc. Three-dimensional scanner with enhanced spectroscopic energy detector
US8705016B2 (en) 2009-11-20 2014-04-22 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US9417316B2 (en) 2009-11-20 2016-08-16 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US9113023B2 (en) 2009-11-20 2015-08-18 Faro Technologies, Inc. Three-dimensional scanner with spectroscopic energy detector
US9210288B2 (en) 2009-11-20 2015-12-08 Faro Technologies, Inc. Three-dimensional scanner with dichroic beam splitters to capture a variety of signals
US8896819B2 (en) 2009-11-20 2014-11-25 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US9009000B2 (en) 2010-01-20 2015-04-14 Faro Technologies, Inc. Method for evaluating mounting stability of articulated arm coordinate measurement machine using inclinometers
US9607239B2 (en) 2010-01-20 2017-03-28 Faro Technologies, Inc. Articulated arm coordinate measurement machine having a 2D camera and method of obtaining 3D representations
US9628775B2 (en) 2010-01-20 2017-04-18 Faro Technologies, Inc. Articulated arm coordinate measurement machine having a 2D camera and method of obtaining 3D representations
US10281259B2 (en) 2010-01-20 2019-05-07 Faro Technologies, Inc. Articulated arm coordinate measurement machine that uses a 2D camera to determine 3D coordinates of smoothly continuous edge features
US10060722B2 (en) 2010-01-20 2018-08-28 Faro Technologies, Inc. Articulated arm coordinate measurement machine having a 2D camera and method of obtaining 3D representations
US9684078B2 (en) 2010-05-10 2017-06-20 Faro Technologies, Inc. Method for optically scanning and measuring an environment
US9329271B2 (en) 2010-05-10 2016-05-03 Faro Technologies, Inc. Method for optically scanning and measuring an environment
US8699007B2 (en) 2010-07-26 2014-04-15 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US8705012B2 (en) 2010-07-26 2014-04-22 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US8730477B2 (en) 2010-07-26 2014-05-20 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US8699036B2 (en) 2010-07-29 2014-04-15 Faro Technologies, Inc. Device for optically scanning and measuring an environment
KR20120138630A (en) * 2011-06-14 2012-12-26 Panasonic Corp Apparatus for measuring volume and measuring method for volume change
JP2013002866A (en) * 2011-06-14 2013-01-07 Panasonic Corp Volume measuring apparatus and volume change measuring method
KR101707399B1 (en) 2011-06-14 2017-02-16 파나소닉 아이피 매니지먼트 가부시키가이샤 Apparatus for measuring volume and measuring method for volume change
US9417056B2 (en) 2012-01-25 2016-08-16 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US8830485B2 (en) 2012-08-17 2014-09-09 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US10067231B2 (en) 2012-10-05 2018-09-04 Faro Technologies, Inc. Registration calculation of three-dimensional scanner data performed between scans based on measurements by two-dimensional scanner
US9739886B2 (en) 2012-10-05 2017-08-22 Faro Technologies, Inc. Using a two-dimensional scanner to speed registration of three-dimensional scan data
US9618620B2 (en) 2012-10-05 2017-04-11 Faro Technologies, Inc. Using depth-camera images to speed registration of three-dimensional scans
US10203413B2 (en) 2012-10-05 2019-02-12 Faro Technologies, Inc. Using a two-dimensional scanner to speed registration of three-dimensional scan data
US9513107B2 (en) 2012-10-05 2016-12-06 Faro Technologies, Inc. Registration calculation between three-dimensional (3D) scans based on two-dimensional (2D) scan data from a 3D scanner
US9372265B2 (en) 2012-10-05 2016-06-21 Faro Technologies, Inc. Intermediate two-dimensional scanning with a three-dimensional scanner to speed registration
US9746559B2 (en) 2012-10-05 2017-08-29 Faro Technologies, Inc. Using two-dimensional camera images to speed registration of three-dimensional scans
WO2015129907A1 (en) * 2014-02-25 2015-09-03 Ricoh Company, Ltd. Distance measuring device and parallax calculation system
JP2015179078A (en) * 2014-02-25 2015-10-08 株式会社リコー Parallax calculation system and distance measurement device
CN106030243A (en) * 2014-02-25 2016-10-12 株式会社理光 Distance measuring device and parallax calculation system
CN106030243B (en) * 2014-02-25 2019-05-03 株式会社理光 Distance-measuring device and disparity computation system
US10482592B2 (en) 2014-06-13 2019-11-19 Nikon Corporation Shape measuring device, structured object manufacturing system, shape measuring method, structured object manufacturing method, shape measuring program, and recording medium
JP2015227890A (en) * 2015-08-11 2015-12-17 セイコーエプソン株式会社 Shape measuring device, control method of shape measuring device, and program
US10175037B2 (en) 2015-12-27 2019-01-08 Faro Technologies, Inc. 3-D measuring device with battery pack
CN108168713A (en) * 2017-12-22 2018-06-15 福建和盛高科技产业有限公司 The anti-mountain fire system of transmission line of electricity infrared thermal imaging
CN108168713B (en) * 2017-12-22 2019-08-23 福建和盛高科技产业有限公司 The anti-mountain fire system of transmission line of electricity infrared thermal imaging

Similar Documents

Publication Publication Date Title
DE69925582T2 (en) Device and method for optically measuring the surface contour of an object
US7853068B2 (en) Pattern defect inspection method and apparatus
US6141105A (en) Three-dimensional measuring device and three-dimensional measuring method
EP2175232A1 (en) Three-dimensional shape measuring device, three-dimensional shape measuring method, three-dimensional shape measuring program, and recording medium
EP1777487B1 (en) Three-dimensional shape measuring apparatus, program and three-dimensional shape measuring method
DE69634089T2 (en) Improving the orientation of inspection systems before image recording
US8797396B2 (en) Digital microscope slide scanning system and methods
US20070176927A1 (en) Image Processing method and image processor
US20130128280A1 (en) Method for measuring three-dimension shape of target object
US7630539B2 (en) Image processing apparatus
US20020167677A1 (en) Optical displacement sensor
JP2009123006A (en) Projector
JP4223979B2 (en) Scanning electron microscope apparatus and reproducibility evaluation method as apparatus in scanning electron microscope apparatus
JP2005302036A (en) Optical device for measuring distance between device and surface
EP0422946B1 (en) Digitising the surface of an irregularly shaped article, e.g. a shoe last
US9291450B2 (en) Measurement microscope device, image generating method, measurement microscope device operation program, and computer-readable recording medium
US20090195688A1 (en) System and Method for Enhanced Predictive Autofocusing
JP5725380B2 (en) Autofocus image system
JP2006136923A (en) Laser beam machine and laser beam machining method
US6724491B2 (en) Visual displacement sensor
EP0780671A1 (en) Spectrophotometers and colorimeters
CN104854427A (en) Device for optically scanning and measuring environment
EP2813803A1 (en) Machine vision inspection system and method for performing high-speed focus height measurement operations
JP4750444B2 (en) Appearance inspection method and apparatus
US8165351B2 (en) Method of structured light-based measurement