WO2015125528A1 - Image processing device, method, program, and storage medium for said program - Google Patents

Image processing device, method, program, and storage medium for said program

Info

Publication number
WO2015125528A1
WO2015125528A1 (PCT/JP2015/051206, JP2015051206W)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
height
scanning
height information
image
Prior art date
Application number
PCT/JP2015/051206
Other languages
English (en)
Japanese (ja)
Inventor
博明 大庭
Original Assignee
Ntn株式会社
博明 大庭
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ntn株式会社, 博明 大庭
Publication of WO2015125528A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • The present invention relates to an image processing apparatus, method, program, and storage medium for the program, and in particular to an image processing apparatus, method, program, and storage medium for the program having a function of interpolating the surface height from an image of an object surface measured by a three-dimensional measuring machine or the like.
  • Patent Document 1: Japanese Patent Laid-Open No. 2006-262305.
  • Conventionally, pixel values on the image are calculated and interpolated by a weighted average method.
  • For the interpolation, a known method such as a bicubic interpolation method or a bilinear interpolation method is used.
  • an object of the present invention is to provide an image processing apparatus, a method, a program, and a storage medium for the program that are excellent in usability.
  • An image processing apparatus according to one aspect of the present invention processes an image composed of a plurality of pixels arranged two-dimensionally. Each of the plurality of pixels has position and height information.
  • the image processing apparatus includes an image input unit that inputs an image, and an image processing unit that calculates height information for pixels in which the height information is invalid in the input image.
  • The image processing means calculates the height information for a pixel whose height information is invalid based on Equation 1, using the height information of one or more pixels whose height information is valid among the pixels nearest to the invalid pixel, and weights based on the reciprocals of the distances between the invalid pixel and each of those one or more pixels (a presumed form of Equation 1 is reconstructed below).
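Equation 1 itself appears in the publication only as an image. Based on the symbol definitions given in the abstract (Wn: the weights, dn: the heights of the valid pixels, D: the calculated height, n: the number of valid pixels used), it presumably has the standard inverse-distance-weighted-average form

$$D = \frac{\sum_{n} W_n\, d_n}{\sum_{n} W_n},$$

where each weight Wn is the reciprocal of the distance between the invalid pixel and the corresponding valid pixel (Expression 2 in the description below).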
  • The image processing means includes one-way scanning means that scans the image in one direction and, every time a pixel whose height information is valid is found, calculates height information based on Equation 1 for the pixels adjacent to that valid pixel whose height information is invalid and sets the calculated height information as valid height information for those invalid pixels, and backward scanning means that, after the scanning by the one-way scanning means, scans the image in the direction opposite to the one direction and, every time a pixel whose height information is valid is found, calculates height information based on Equation 1 for the pixels adjacent to that valid pixel whose height information is invalid and sets the calculated height information as valid height information for those invalid pixels.
  • the image processing means further includes means for initializing a value of a variable for controlling execution of the image processing prior to starting the image processing.
  • The initialization means substitutes one of the maximum height Hmax and the minimum height Hmin among the plurality of pixels of the input image into the variable Hs, substitutes the other into the variable He, and sets the change amount dH of the variable Hs to a value satisfying the condition (dH > 0 and dH ≤ Hmax - Hmin).
  • During the search, the one-way scanning means or the backward scanning means compares the height m(i, j) of the pixel (i, j) on the image with the variable Hs; when the maximum height Hmax has been substituted into the variable Hs, it determines whether the condition (Hs - dH ≤ m(i, j) and m(i, j) ≤ Hs) is satisfied, and when the minimum height Hmin has been substituted into the variable Hs, it determines whether the condition (Hs ≤ m(i, j) and m(i, j) ≤ Hs + dH) is satisfied; a pixel (i, j) that satisfies the condition is treated as a pixel whose height information is valid.
  • the image processing unit repeatedly executes the scanning by the one-way scanning unit and the scanning by the backward scanning unit a plurality of times.
  • The image processing means further includes means for changing the value of the variable for controlling execution of the image processing every time scanning by the one-way scanning means and scanning by the backward scanning means are executed.
  • The changing means subtracts the change amount dH from the variable Hs when the maximum height Hmax has been assigned to the variable Hs, and adds the change amount dH (dH > 0) to the variable Hs when the minimum height Hmin has been assigned to the variable Hs.
  • The image processing means repeatedly executes the scanning by the one-way scanning means and the scanning by the backward scanning means until it determines that the variable Hs calculated by the changing means satisfies the condition (Hs ≤ He when He is the minimum height Hmin, or Hs ≥ He when He is the maximum height Hmax).
  • the method according to another aspect of the present invention is a method for processing an image composed of a plurality of pixels arranged in a two-dimensional manner. Each of the plurality of pixels has the position and height information of the pixel.
  • the method includes a step of inputting an image, and a step of image processing for calculating height information for a pixel whose height information is invalid in the input image.
  • In the image processing step, the height information for a pixel whose height information is invalid is calculated based on Equation 1, using the height information of one or more pixels whose height information is valid among the pixels nearest to the invalid pixel, and weights based on the reciprocals of the distances from the invalid pixel to each of those pixels.
  • The image processing step includes a one-way scanning step of scanning the image in one direction and, every time a pixel whose height information is valid is found, calculating height information based on Equation 1 for the pixels adjacent to that valid pixel whose height information is invalid and setting the calculated height information as valid height information for those pixels, and a backward scanning step of performing the same processing while scanning the image in the direction opposite to the one direction after the one-way scanning step.
  • the image processing step further includes a step of initializing a value of a variable for controlling execution of the image processing prior to starting the image processing.
  • In the initialization step, one of the maximum height Hmax and the minimum height Hmin among the plurality of pixels of the input image is substituted into the variable Hs, the other is substituted into the variable He, and a value satisfying the condition (dH > 0 and dH ≤ Hmax - Hmin) is set as the change amount dH of the variable Hs.
  • In the one-way scanning step or the backward scanning step, the height m(i, j) of the pixel (i, j) on the image is compared with the variable Hs during the search. When the maximum height Hmax has been substituted into the variable Hs, it is determined whether the condition (Hs - dH ≤ m(i, j) and m(i, j) ≤ Hs) is satisfied; when the minimum height Hmin has been substituted into the variable Hs, it is determined whether the condition (Hs ≤ m(i, j) and m(i, j) ≤ Hs + dH) is satisfied. A pixel (i, j) that satisfies the condition is determined to be a pixel whose height information is valid.
  • scanning by the one-way scanning step and scanning by the backward scanning step are repeatedly performed a plurality of times.
  • the image processing step includes a step of changing a value of a variable for controlling execution of the image processing every time scanning by the one-way scanning step and scanning by the backward scanning step are performed.
  • In the changing step, the change amount dH is subtracted from the variable Hs when the maximum height Hmax has been assigned to the variable Hs, and the change amount dH (dH > 0) is added to the variable Hs when the minimum height Hmin has been assigned to the variable Hs.
  • Scanning by the one-way scanning step and scanning by the backward scanning step are repeatedly executed until it is determined that the variable Hs after the calculation in the changing step satisfies the condition (Hs ≤ He when He is the minimum height Hmin, or Hs ≥ He when He is the maximum height Hmax).
  • a program according to still another aspect of the present invention is a program for causing a computer including a processor to execute the above method.
  • a storage medium according to still another aspect of the present invention is a machine-readable storage medium storing the above program.
  • According to the present invention, the height information for an invalid pixel can be calculated based on Equation 1, using weights based on the reciprocals of the distances to the valid pixels. Interpolation of the height information of invalid pixels is thus performed using the height information of the pixels nearest to each invalid pixel, that is, an appropriate number of pixels. As a result, the processing can be completed in a relatively short time, and usability is excellent.
  • FIG. 1 is a configuration diagram of a system including an image processing apparatus according to an embodiment of the present invention.
  • FIG. 2 is a schematic configuration diagram of the image processing apparatus 1 according to the embodiment of the present invention. FIG. 3 is a functional block diagram according to the embodiment of the present invention. FIG. 4 is an image processing flowchart according to the embodiment of the present invention. FIG. 5 is a diagram showing an example of image data according to the embodiment of the present invention. FIG. 6 is a diagram showing another example of image data according to the embodiment of the present invention.
  • (A) and (B) are diagrams for explaining raster scanning according to the embodiment of the present invention.
  • (A) to (D) are diagrams for explaining raster scanning according to the embodiment of the present invention, as is the figure that follows them.
  • FIG. 1 is a configuration diagram of a system including an image processing apparatus according to an embodiment of the present invention. This system can be applied, for example, to image processing of the surface of the workpiece 5 placed on the workpiece support 4.
  • the system includes an image processing device 1 corresponding to a computer having a processor, and an image acquisition device 2 connected to the image processing device 1 by wire or wirelessly.
  • the image acquisition device 2 acquires an image (image data) of the workpiece 5 using a three-dimensional measurement function.
  • Various methods can be applied as the three-dimensional measurement function of the image acquisition device 2, but here, a known non-contact optical measurement function is used. Note that the method to be applied is not limited to this.
  • the image acquisition device 2 includes a microprocessor (not shown), a light projecting system 21, and a light receiving system 22 that is an image sensor.
  • the light projecting system 21 irradiates irradiation light from a semiconductor laser (not shown) toward the surface of the work 5 that is a measurement object via an optical system (not shown) including a galvanometer mirror.
  • the reflected light from the surface of the work 5 is imaged on the image sensor by a light receiving lens (not shown) of the light receiving system 22.
  • the image acquisition device 2 calculates (detects) the image position on the image sensor.
  • The microprocessor drives the galvanometer mirror and the image sensor in synchronization and obtains the position (angle) from the change in the received light output at each projection angle (time), so that the surface shape of the measurement object can be measured three-dimensionally.
  • the microprocessor generates image data defined by the orthogonal X-axis, Y-axis, and Z-axis from the output of the image sensor.
  • In the image data, height data, which is a value in the Z-axis direction, is assigned to each pixel of the two-dimensional planar image defined by the X-axis and the Y-axis corresponding to the size of the image sensor.
  • the height data indicates the surface height of the workpiece 5 calculated from the change in received light output.
  • Each pixel can be defined as an element (x, y) of a two-dimensional array using its two-dimensional coordinate position (x, y), and the element value of each element (x, y) is the corresponding value in the Z-axis direction.
  • the element value is also referred to as a pixel value.
  • The surface height of the object measured by the three-dimensional measurement function is stored in each pixel of the target image. Pixels may be lost due to the measurement environment or the characteristics of an optical system such as the sensor, generating non-measurement points. In this embodiment, however, non-measurement points can be interpolated even when pixels are missing at random.
  • pixels whose height could be measured are called actual measurement points, and pixels that could not be measured are called non-measurement points.
  • a pixel to be calculated is selected to obtain an interpolation result of the surface height similar to the object shape, and a calculation procedure for interpolation using the pixel value of the selected pixel is adopted.
  • The weighted average method is adopted for the calculation; for calculation accuracy, an appropriate number of actual measurement points, as many as possible, are selected so as to surround the non-measurement point from all directions. A sketch of this weighted average is given below.
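As a concrete illustration of this weighted average, the following sketch computes the interpolated height of one non-measurement point from the nearest actual measurement points found around it. It is not code from the publication: the function name is hypothetical, and NaN is used here as the "invalid" marker, whereas the publication only speaks of a predetermined invalid value.

```python
import math

def interpolate_height(p0, q0, neighbors):
    """Inverse-distance-weighted average of the nearest actual measurement
    points (the presumed form of Equation 1, with weights per Expression 2).

    neighbors: list of (xn, yn, height) tuples for the actual measurement
    points found around the non-measurement point (p0, q0).
    """
    numerator = 0.0
    denominator = 0.0
    for xn, yn, dn in neighbors:
        wn = 1.0 / math.hypot(xn - p0, yn - q0)  # Wn = 1 / distance
        numerator += wn * dn
        denominator += wn
    return numerator / denominator               # D = sum(Wn*dn) / sum(Wn)

# Two measured neighbors at distances 1 and 2 with heights 10 and 20
# give D = (1*10 + 0.5*20) / (1 + 0.5) = 13.33...
print(interpolate_height(0, 0, [(0, 1, 10.0), (0, 2, 20.0)]))
```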
  • FIG. 2 is a schematic configuration diagram of the image processing apparatus 1 according to the embodiment of the present invention.
  • The image processing apparatus 1 includes a CPU (Central Processing Unit) 11 that is an arithmetic processing unit, a memory 12 and a hard disk 14 that serve as storage units, a timer that measures time and outputs time-measurement data to the CPU 11, an input I/F (abbreviation of interface) 15, a display controller 16, a communication I/F 17, and a reader/writer 18. These units are connected to each other via a bus so that data communication is possible.
  • the CPU 11 performs various calculations by executing a program (code) stored in the hard disk 14.
  • the memory 12 is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory).
  • This program includes a program (code) and data for processing image data.
  • the hard disk 14 is a non-volatile magnetic storage device, and stores various setting values in addition to programs executed by the CPU 11.
  • The program installed on the hard disk 14 is distributed in a state of being non-transitorily stored on a memory card 23 or the like, as will be described later.
  • a semiconductor storage device such as a flash memory may be employed in addition to the hard disk 14 or instead of the hard disk 14.
  • the input I / F 15 mediates data transmission between the CPU 11 and an input device such as a keyboard 19, a mouse (not shown), a touch panel (not shown). That is, the input I / F 15 receives an operation command given by the user operating the input device.
  • the display controller 16 is connected to a display 20 that is a typical example of a display device, and notifies the user by displaying the result of processing in the CPU 11.
  • the communication I / F 17 mediates data transmission between the CPU 11 and the image acquisition device 2 via a communication line.
  • the reader / writer 18 mediates data transmission between the CPU 11 and a memory card 23 that is a machine-readable recording medium.
  • the image processing apparatus 1 may be connected to another output device such as a printer as necessary.
  • the image processing apparatus 1 and the image acquisition apparatus 2 are configured as separate bodies, but both may be configured integrally.
  • FIG. 3 is a functional configuration diagram according to the embodiment of the present invention.
  • The CPU 11 has an image input unit 110, corresponding to the communication I/F 17, that receives image data from the image acquisition device 2, and an image processing unit 120 having a function of interpolating height data for the input image.
  • The image processing unit 120 includes a raster scanning unit 121 that scans the input image in a predetermined direction and interpolates height data, an inverse raster scanning unit 122 that scans the image in a direction different from the predetermined direction (the opposite direction) and interpolates height data, and an addition/subtraction processing unit 123 that calculates a threshold value for determining the end of the raster scanning.
  • the functions of these units correspond to a program executed by the CPU 11 or a combination of a program and a circuit.
  • FIG. 4 is an image processing flowchart according to the embodiment of the present invention.
  • the program according to this process flowchart is stored in the memory in advance, and the process is realized by the CPU 11 reading the program from the memory and executing the read program.
  • the CPU 11 transmits an imaging request to the image acquisition device 2 in accordance with an operation command received via the input I / F 15.
  • the image acquisition device 2 receives the request and acquires image data according to the above-described procedure (step S1).
  • the image input unit 110 of the CPU 11 receives the image data transmitted from the image acquisition device 2 and inputs (stores) it in a predetermined area of the memory 12 (step S2).
  • the input path of the image data is not limited to the path from the image acquisition device 2.
  • it may be image data received from another device via the communication I / F 17 and a network (not shown), or image data read from the memory card 23.
  • the CPU 11 performs image processing (steps S4 to S7) by the image processing unit 120 on the image data in the memory 12.
  • In the image processing, actual measurement points are searched for radially around a non-measurement point.
  • the nearest actual measurement point is found with respect to the non-measurement point, and the height of the non-measurement point is calculated based on the above (Equation 1) using the inverse of the distance between the found actual measurement point and the non-measurement point as a weight.
  • The process of calculating the height of non-measurement points includes image scanning by the raster scanning unit 121 (step S4) and image scanning by the reverse raster scanning unit 122 (step S5). Further, the CPU 11 scans the image acquired by the image acquisition device 2 in advance to detect the maximum height Hmax and the minimum height Hmin among the actual measurement points, and the image processing for interpolation is started from either the height Hmax or the height Hmin.
  • The values of the variables Hs and He for controlling execution of the image processing are initialized in the initialization processing (step S3) by the initialization unit of the CPU 11. Specifically, where Hs denotes the height at which the processing starts and He denotes the height at which the calculation processing ends, one of the value Hmax and the value Hmin (where Hmax > Hmin) is substituted into Hs and the other into He, and the amount of change of the height Hs is set to a predetermined value dH.
  • The value dH is a value that satisfies the condition (dH > 0 and dH ≤ Hmax - Hmin). A sketch of this initialization is given below.
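A minimal sketch of this initialization (step S3), under the assumption that the image is held as a 2-D NumPy array with NaN marking non-measurement points; the function name and the start_from_max flag are illustrative only:

```python
import numpy as np

def initialize(m, dH, start_from_max=True):
    """Step S3: derive Hmax/Hmin from the actual measurement points and set
    the control variables Hs (start height) and He (end height)."""
    measured = m[~np.isnan(m)]            # heights of the actual measurement points
    Hmax, Hmin = float(measured.max()), float(measured.min())
    assert 0 < dH <= Hmax - Hmin          # condition on the change amount dH
    if start_from_max:
        Hs, He = Hmax, Hmin               # process from high to low
    else:
        Hs, He = Hmin, Hmax               # process from low to high
    return Hs, He
```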
  • FIG. 5 shows an example of image data according to the embodiment of the present invention.
  • FIG. 5 shows the measured points and the non-measured points separately for each pixel arranged in a rectangular two-dimensional matrix.
  • An effective pixel value indicating the measured height is assigned to each pixel at an actual measurement point, while a predetermined invalid value that does not represent a height is assigned to each pixel at a non-measurement point, for which the height could not be measured.
  • the CPU 11 can determine from the scanned pixel value whether the pixel is an actual measurement point or a non-measurement point.
  • the raster scanning unit 121 of the CPU 11 performs raster scanning from any one of the four corners of the rectangular image in FIG. 5 toward the diagonal corner (step S4).
  • The pixel values are read by scanning. Each pixel is uniquely identified as the pixel (i, j), and its pixel value is denoted by the height m(i, j).
  • the raster scanning unit 121 compares the height m (i, j) of the pixel (i, j) on the image with the height Hs.
  • When Hmax has been substituted for the height Hs in the initialization process, it is determined from the comparison result whether the condition [Hs - dH ≤ m(i, j) and m(i, j) ≤ Hs] is satisfied. When Hmin has been substituted for the height Hs in the initialization process, it is determined from the comparison result whether the condition [Hs ≤ m(i, j) and m(i, j) ≤ Hs + dH] is satisfied.
  • the height of the non-measurement point adjacent to the measurement point that satisfies the condition is calculated by (Equation 1). Thereafter, a non-measurement point for which height calculation has been completed is treated as an actual measurement point. That is, the calculated height value is set to the height m (i, j) of the pixel (i, j) at the non-measurement point.
  • The reverse raster scanning unit 122 of the CPU 11 raster-scans the image data in the reverse direction, from the diagonally opposite corner of step S4 toward the first corner (step S5). Specifically, the height m(i, j) of the pixel (i, j) on the image is compared with the height Hs, and based on the comparison result, the height of a non-measurement point adjacent to an actual measurement point that satisfies the condition of step S4 is calculated by (Equation 1). Thereafter, a non-measurement point for which the height calculation has been completed is treated as an actual measurement point. That is, the calculated height value is set as the height m(i, j) of the pixel (i, j) at the non-measurement point.
  • The addition/subtraction processing unit 123 of the CPU 11 performs addition/subtraction processing on the height Hs (step S6). That is, every time scanning by the raster scanning unit 121 and scanning by the reverse raster scanning unit 122 are executed, the value of the height Hs, which is a variable for controlling the execution of the image processing, is changed. Specifically, when the value Hmax has been substituted for the height Hs in the initialization process, the value dH is subtracted from the height Hs, and when the value Hmin has been substituted for the height Hs, the value dH is added to the height Hs.
  • In step S7, when the height Hs has reached the predetermined end height He as a result of the processing by the addition/subtraction processing unit 123, that is, when it is determined that the condition (Hs ≤ He when He is the minimum height Hmin, or Hs ≥ He when He is the maximum height Hmax) is satisfied (YES in step S7), the process ends. When it is determined that the condition is not satisfied (NO in step S7), the process returns to step S4, and the processing from step S4 onward is executed in the same manner as described above. The overall control flow is sketched below.
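The control flow of steps S4 to S7 can be summarized by the following sketch, which assumes Hs was initialized to Hmax and He to Hmin as above. The two scan passes are taken as callables because the publication defines them only in prose; forward_scan and backward_scan are hypothetical placeholders for the raster scan (step S4) and the reverse raster scan (step S5), each of which fills the invalid neighbors of measurement points whose height lies in the band [Hs - dH, Hs].

```python
def run_scans(m, Hs, He, dH, forward_scan, backward_scan):
    """Repeat the forward/backward raster scans while lowering the height
    band; a sketch of steps S4 to S7 (Hs assumed to start at Hmax)."""
    while True:
        forward_scan(m, Hs, dH)    # step S4: scan from one corner to the diagonally opposite corner
        backward_scan(m, Hs, dH)   # step S5: scan back in the opposite direction
        Hs -= dH                   # step S6: Hs started at Hmax, so subtract dH
        if Hs <= He:               # step S7: end once Hs has reached He (= Hmin)
            return m
```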
  • Height data can be interpolated with high accuracy for non-measurement points by performing raster scans with different scanning directions in steps S4 and S5. Since actual measurement points are searched for by raster scanning, an actual measurement point closer to the scanning start point has a larger influence on a non-measurement point. Therefore, if interpolation were performed with only one of the scans of step S4 and step S5, the interpolation result would be biased, and an interpolated image similar to the object shape of the workpiece 5 might not be obtained. In the present embodiment, by alternately performing raster scans with different scanning directions, the bias of the interpolation result due to the difference in scanning direction can be reduced.
  • FIG. 6 is a diagram showing another example of image data according to the embodiment of the present invention.
  • In the rectangular image m of FIG. 6 to be interpolated, the vertical size extending in the Y-axis direction is N, the horizontal size extending in the X-axis direction is M, the vertical position of a pixel is j, and the horizontal position is i, where j = 1, 2, 3, ..., N and i = 1, 2, 3, ..., M.
  • the cross-hatched pixel E1 and the hatched pixel E2 in FIG. 6 indicate that the pixel is an actual measurement point, and the non-hatched pixel (blank pixel) E3 indicates an unmeasured point.
  • The pixel E2 has a height H1, the pixel E1 has a height H2, and H1 > H2.
  • When the image input unit 110 of the CPU 11 inputs the data of the image m in FIG. 6 (step S2), the initialization unit scans the data of the image m in the initialization process and calculates the maximum value and the minimum value among the heights of the actual measurement points (step S3).
  • The maximum value Hmax is calculated as H1, and the minimum value Hmin is calculated as H2.
  • The CPU 11 stores data of an image F having the same size as the image m in the memory 12. It then determines whether or not each pixel (i, j) of the image m read by scanning is an actual measurement point; when the pixel is determined to be an actual measurement point, "1" is set to the corresponding pixel F(i, j) of the image F, and otherwise "0" is set. A sketch of this flag image is given below.
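A small sketch of this flag image, under the same NaN-marks-invalid assumption used above; F is simply a same-sized array holding 1 at actual measurement points and 0 at non-measurement points:

```python
import numpy as np

def build_flag_image(m):
    """Return F with F[j, i] = 1 where m holds a measured height and 0 where
    the pixel is a non-measurement point (marked here by NaN)."""
    return np.where(np.isnan(m), 0, 1)
```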
  • FIGS. 7 to 9 are diagrams for explaining raster scanning according to the embodiment of the present invention.
  • For the non-measurement point (p0, q0), the raster scanning unit 121 refers to the pixel values radially in the eight directions around the pixel (p0, q0), as shown in FIG. 8A, and searches for actual measurement point pixels.
  • the raster scanning unit 121 sets the weight Wn as the reciprocal of the distance between the pixel (p0, q0) and the pixel (xn, yn). That is, the weight Wn is calculated based on (Expression 2).
  • The raster scanning unit 121 calculates the data D by substituting the weight Wn and the height m(xn, yn) of the pixel (xn, yn) into dn of (Expression 1), sets the calculated data D as the height of the pixel (p0, q0), and sets "1" to the corresponding pixel F(p0, q0).
  • The pixel (p0, q0) in FIG. 8B indicates this pixel.
  • the raster scanning unit 121 refers to the periphery of the pixel (i0, j0) of the image m again and searches for the next non-measurement point in the same manner.
  • the next unmeasured point is detected at the position of number 6 in FIG.
  • For the non-measurement point (p1, q1) of number 6, as shown in FIG. 8C, the nearest actual measurement point in each of the surrounding eight radial directions is found, and each found measurement point is set as (xn, yn).
  • The raster scanning unit 121 calculates a weight Wn for the non-measurement point (p1, q1) according to (Expression 3). A sketch of this radial search and weighting is given below.
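A sketch of the radial search and weighting, again under the NaN-marks-invalid assumption; the publication does not specify the direction order or how ties are handled, so both are arbitrary here, and the function names are illustrative:

```python
import math

# The eight radial directions (horizontal, vertical and diagonal steps).
DIRECTIONS = [(-1, -1), (0, -1), (1, -1), (-1, 0),
              (1, 0), (-1, 1), (0, 1), (1, 1)]

def nearest_measured_points(m, p0, q0):
    """Walk outward in each of the eight directions from the non-measurement
    point (p0, q0) of a 2-D NumPy height array m (indexed as m[row, column],
    i.e. m[y, x]) and return the first actual measurement point found in each
    direction as (xn, yn, height) tuples."""
    rows, cols = m.shape
    found = []
    for dx, dy in DIRECTIONS:
        x, y = p0 + dx, q0 + dy
        while 0 <= x < cols and 0 <= y < rows:
            if not math.isnan(m[y, x]):          # actual measurement point
                found.append((x, y, float(m[y, x])))
                break
            x, y = x + dx, y + dy
    return found

def weights(p0, q0, found):
    """Wn = reciprocal of the distance between (p0, q0) and each found point
    (the form suggested by Expressions 2 to 4)."""
    return [1.0 / math.hypot(x - p0, y - q0) for x, y, _ in found]
```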
  • FIG. 9 shows the processing result of the image m in FIG. 6 when scanning to the lower right end.
  • height data is interpolated for non-measurement points.
  • The white-circle pixel E4 in the image m in FIG. 9 was a non-measurement point in the original image of FIG. 6, but has changed to a measurement-point pixel set with the height data calculated by the raster scanning unit 121.
  • After the raster scanning by the raster scanning unit 121, the reverse raster scanning unit 122 raster-scans the image m of FIG. 9 in the reverse direction, that is, from the lower right end of the image toward the upper left end.
  • The addition/subtraction processing unit 123 subtracts dH (dH > 0) from the height Hs.
  • When the CPU 11 determines that the height Hs after the subtraction still satisfies the condition (Hs > He) (NO in step S7), the process returns to the raster scanning of step S4, and the image processing for interpolating the height data is repeated.
  • FIG. 11 shows a state of the image m after the raster scanning unit 121 raster-scans the image m of FIG. 9 from the upper left end. In this step, it is assumed that Hs is equal to H2.
  • the reverse raster scanning unit 122 scans the image m from the lower right end to the upper left end in FIG.
  • A pixel (i, j) whose height m(i, j) satisfies the condition [Hs - dH ≤ m(i, j) and m(i, j) ≤ Hs] and for which F(i, j) = 1 is searched for, and a non-measurement point (p, q) is then searched for around it. When a non-measurement point is found, its height m(p, q) is calculated by the following method. Note that the pixel (i1, j1) in FIG. 11 is the first actual measurement point that satisfies this condition.
  • the first non-measurement point is detected at the position of number 8 in FIG.
  • Actual measurement points are searched for radially in the eight directions around the pixel (p2, q2); the first actual measurement point found in each direction is set as a pixel (xn, yn). In this example, four actual measurement point pixels (xn, yn) (where n is 1, 2, ..., 4) are found.
  • the reverse raster scanning unit 122 calculates the weight Wn for the non-measurement point (p2, q2) according to (Expression 4).
  • The data D is calculated by substituting the weight Wn and the height m(xn, yn) of the actual measurement point (xn, yn) into dn of (Expression 1); the calculated data D is set as the height of the pixel (p2, q2), and "1" is set to the corresponding pixel F(p2, q2).
  • the pixel E5 in FIG. 13 corresponds to this.
  • FIG. 14 shows the state of the image m when scanned to the upper left corner.
  • The addition/subtraction processing unit 123 subtracts dH (dH > 0) from the height Hs.
  • When the CPU 11 determines that the condition (Hs ≤ He) is satisfied for the height Hs after the subtraction, it ends the image processing for height data interpolation.
  • In the above description, scanning is performed from the upper left end to the lower right end or from the upper right end to the lower left end of the image m; in general, the scanning may be performed from any one of the four corners toward the diagonally opposite corner, and this scanning and the reverse scanning may be repeatedly performed.
  • FIGS. 15A and 15B and FIGS. 16A and 16B are diagrams showing display image examples on the display 20 before and after the interpolation processing according to the embodiment of the present invention.
  • FIG. 15A shows an image m before image processing for height interpolation.
  • FIG. 15B shows a display example after the height data is interpolated from the image of FIG. 15A by the method according to the embodiment of the present invention.
  • FIGS. 16A and 16B show images three-dimensionally displayed based on the image data corresponding to FIGS. 15A and 15B, respectively.
  • the black portion BL corresponds to a non-measurement point. It can be seen that the black portion BL does not exist in the image after the interpolation processing in FIG. 15B, and the height data of the non-measurement point is interpolated.
  • According to the present embodiment, the heights of pixels at non-measurement points can be interpolated using the actual measurement points adjacent in the radial directions, so the interpolation processing can be completed in a relatively short time using an appropriate number of pixel heights.
  • Even when pixels are missing at random, the heights of the non-measurement points can be interpolated. Also, by performing a plurality of raster scans with different scanning directions on the image, the bias of the interpolation result due to the difference in scanning direction can be reduced. As a result, an interpolated image similar to the object shape (surface height) of the workpiece 5 can be obtained.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

According to the invention, each of a plurality of two-dimensionally arranged pixels has position information and height information. For each pixel whose height information is invalid, this image processing device (1) calculates height information on the basis of formula (1) using the following: height information of one or more pixels that have valid height information among the pixels nearest to the pixel having invalid height information; and weights that are the reciprocals of the distances between the pixel having invalid height information and each of the aforementioned pixels. (1) (In formula (1), Wn represents the weights, dn represents the heights of the pixel(s) having valid height information, D represents the calculated height, and n represents the number of pixels having valid height information.)
PCT/JP2015/051206 2014-02-18 2015-01-19 Image processing device, method, program, and storage medium for said program WO2015125528A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014-028633 2014-02-18
JP2014028633A JP6351289B2 (ja) 2014-02-18 2014-02-18 表面形状測定装置、方法およびプログラム

Publications (1)

Publication Number Publication Date
WO2015125528A1 true WO2015125528A1 (fr) 2015-08-27

Family

ID=53878044

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/051206 WO2015125528A1 (fr) 2014-02-18 2015-01-19 Image processing device, method, program, and storage medium for said program

Country Status (2)

Country Link
JP (1) JP6351289B2 (fr)
WO (1) WO2015125528A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000230809A (ja) * 1998-12-09 2000-08-22 Matsushita Electric Ind Co Ltd 距離データの補間方法,カラー画像階層化方法およびカラー画像階層化装置
JP2001078038A (ja) * 1999-09-06 2001-03-23 Fuji Photo Film Co Ltd 画像処理装置、方法及び記録媒体
JP2002016795A (ja) * 2000-04-24 2002-01-18 Seiko Epson Corp 画像データ補間プログラム、画像データ補間方法および画像データ補間装置
EP2410312A1 (fr) * 2010-07-19 2012-01-25 Siemens Aktiengesellschaft Procédé d'analyse assistée par ordinateur d'un système technique
JP2013229677A (ja) * 2012-04-24 2013-11-07 Olympus Corp 画像処理プログラム及び画像処理装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4885154B2 (ja) * 2007-01-31 2012-02-29 国立大学法人東京工業大学 複数波長による表面形状の測定方法およびこれを用いた装置
KR101226913B1 (ko) * 2011-04-12 2013-01-28 주식회사 휴비츠 영상 합성을 위한 3차원 프로파일 지도 작성 방법

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000230809A (ja) * 1998-12-09 2000-08-22 Matsushita Electric Ind Co Ltd 距離データの補間方法,カラー画像階層化方法およびカラー画像階層化装置
JP2001078038A (ja) * 1999-09-06 2001-03-23 Fuji Photo Film Co Ltd 画像処理装置、方法及び記録媒体
JP2002016795A (ja) * 2000-04-24 2002-01-18 Seiko Epson Corp 画像データ補間プログラム、画像データ補間方法および画像データ補間装置
EP2410312A1 (fr) * 2010-07-19 2012-01-25 Siemens Aktiengesellschaft Procédé d'analyse assistée par ordinateur d'un système technique
JP2013229677A (ja) * 2012-04-24 2013-11-07 Olympus Corp 画像処理プログラム及び画像処理装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HWEI-JEN LIN ET AL.: "A Flexible and Effective Colorization System", 2009 10TH INTERNATIONAL SYMPOSIUM ON PERVASIVE SYSTEMS, ALGORITHMS, AND NETWORKS (ISPAN, 2009, pages 492 - 497, XP031611086, ISBN: 978-1-4244-5403-7 *

Also Published As

Publication number Publication date
JP6351289B2 (ja) 2018-07-04
JP2015153307A (ja) 2015-08-24

Similar Documents

Publication Publication Date Title
US8427488B2 (en) Parallax image generating apparatus
JP6256475B2 (ja) 画像処理装置、画像処理方法および画像処理プログラム
EP2889575A1 (fr) Appareil et procédé de mesure tridimensionnelle et support de stockage
EP2887313B1 (fr) Appareil de traitement d'images, système, procédé de traitement d'images et support d'enregistrement lisible sur ordinateur
JP6071257B2 (ja) 画像処理装置及びその制御方法、並びにプログラム
JP6332951B2 (ja) 画像処理装置および画像処理方法、およびプログラム
JP2017181298A (ja) 3次元形状測定装置および3次元形状測定方法
CN113155053B (zh) 三维几何形状测量装置和三维几何形状测量方法
JP6411188B2 (ja) ステレオマッチング装置とステレオマッチングプログラムとステレオマッチング方法
JP6634842B2 (ja) 情報処理装置、情報処理方法およびプログラム
KR101994120B1 (ko) 화상형성장치, 화상형성방법 및 기억매체
JP6351289B2 (ja) 表面形状測定装置、方法およびプログラム
JP5656018B2 (ja) 球体の検出方法
WO2017187935A1 (fr) Appareil de traitement d'informations, procédé de traitement d'informations et programme
JP6867766B2 (ja) 情報処理装置およびその制御方法、プログラム
JP2014002489A (ja) 位置推定装置、方法、及びプログラム
JP5955003B2 (ja) 画像処理装置および画像処理方法、プログラム
JP2016156702A (ja) 撮像装置および撮像方法
JP5206499B2 (ja) 測定方法、測定装置、測定制御プログラム
JP5446516B2 (ja) 画像処理装置
JP4500707B2 (ja) 画像データ処理装置
JP2014026641A (ja) 画像処理装置、その制御方法、およびプログラム
US20240212183A1 (en) Information processing apparatus that processes 3d information, information processing method, and information processing system
JP4775221B2 (ja) 画像処理装置、画像処理装置の制御方法、および画像処理装置の制御プログラム
JP2001148020A (ja) 対応点探索における信頼性の判定方法および装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15751645

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15751645

Country of ref document: EP

Kind code of ref document: A1