JP2021148731A - Shape measuring device - Google Patents

Shape measuring device

Info

Publication number
JP2021148731A
Authority
JP
Japan
Prior art keywords
shape
dimensional
feature
feature portion
dimensional shape
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2020051547A
Other languages
Japanese (ja)
Inventor
Tomohiko Tsuruta (知彦 鶴田)
Yasuyuki Takai (康行 高井)
Satoshi Kashiwabara (悟史 柏原)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toyota Motor Corp
Soken Inc
Original Assignee
Toyota Motor Corp
Soken Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toyota Motor Corp and Soken Inc
Priority to JP2020051547A
Publication of JP2021148731A
Legal status: Pending

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a shape measuring device capable of suitably measuring the shape of an object.

SOLUTION: A shape measuring device 10 for measuring the shape of an object includes: a three-dimensional shape specifying unit 12 that specifies the three-dimensional shape of the object; a feature portion specifying unit 13 that specifies a feature portion, i.e., a part of the object's three-dimensional shape having a predetermined feature; a two-dimensional cross-section image generation unit 14 that extracts the three-dimensional shape of the feature portion from the three-dimensional shape of the object and generates a plurality of two-dimensional cross-section images, each obtained by cutting the three-dimensional shape of the feature portion with one of a plurality of mutually parallel planes; and a feature portion measurement unit 15 that measures the feature portion on the basis of the plurality of two-dimensional cross-section images.

SELECTED DRAWING: Figure 1

Description

The present disclosure relates to a shape measuring device that measures the shape of an object.

For various objects having three-dimensional shapes, characteristic portions of the shape are extracted and the extracted portions are measured. Patent Document 1 discloses a morphological analysis method for analyzing the three-dimensional morphology of fine structures called spines on nerve cells. In this method, sections of brain tissue are first photographed while the focal plane is moved along a fixed direction (the z direction) to acquire a three-dimensional image of the brain neurons. Next, the three-dimensional image is silhouette-projected onto a two-dimensional plane (the xy plane) to obtain a two-dimensional image of the neurons, the two-dimensional shape of the neurons is identified from that image, and the regions where spines exist in the two-dimensional plane are extracted. The regions where the spines exist in three dimensions are then extracted, and the three-dimensional morphology of the spines is determined.

Patent Document 1: Japanese Unexamined Patent Publication No. 2006-343853

In the method disclosed in Patent Document 1, the shape of the object (a brain neuron) is identified from a two-dimensional image obtained by silhouette-projecting its three-dimensional image onto a two-dimensional plane. Depending on the shape of the object, however, a relatively large part may lie near the feature portion. When the method of Patent Document 1 is applied to such an object, the feature portion can overlap that other part when viewed along the projection direction, so in the two-dimensional image the feature portion is hidden inside the silhouette of the other part, making it difficult to identify and measure.

The present disclosure has been made in view of the above problem, and an object thereof is to provide a shape measuring device capable of suitably measuring the shape of an object.

To solve the above problem, one aspect of the disclosed technology is a shape measuring device that measures the shape of an object, comprising: a three-dimensional shape specifying unit that specifies the three-dimensional shape of the object; a feature portion specifying unit that specifies a feature portion, i.e., a part of the object's three-dimensional shape having a predetermined feature; a two-dimensional cross-section image generation unit that extracts the three-dimensional shape of the feature portion from the three-dimensional shape of the object and generates a plurality of two-dimensional cross-section images, each obtained by cutting the three-dimensional shape of the feature portion with one of a plurality of mutually parallel planes; and a feature portion measurement unit that measures the feature portion on the basis of the plurality of two-dimensional cross-section images.

According to the shape measuring device of the present disclosure, the shape of an object can be suitably measured.

FIG. 1 is a functional block diagram of a shape measuring device according to an embodiment of the present disclosure.
FIG. 2 is a schematic side view of an object imaging unit according to the embodiment.
FIG. 3 is a flowchart showing the shape measurement process executed by the shape measuring device.
FIG. 4 is a diagram showing an example of information representing the three-dimensional shape of an object.
FIG. 5 is a diagram showing an example of a feature portion of the object.
FIG. 6 is a diagram showing an example of the plurality of two-dimensional cross-section images generated for a fine protruding portion.

[Embodiment]
The shape measuring device according to the present disclosure classifies the three-dimensional shape of an object by shape features (such as planes, curved surfaces, and line shapes) to identify a plurality of grouped feature portions, generates a plurality of two-dimensional cross-section images for the three-dimensional shape of each feature portion, and then performs various measurements on each feature portion individually using the generated cross-section images. As a result, even when feature portions of the object overlap one another in a given direction, each feature portion is identified and extracted on its own, so the various measurements can be carried out suitably for every feature portion.

Hereinafter, an embodiment of the present disclosure will be described in detail with reference to the drawings.

<Configuration>
FIG. 1 shows a functional block diagram of the shape measuring device 10 according to the present embodiment. The shape measuring device 10 illustrated in FIG. 1 includes an object imaging unit 11, a three-dimensional shape specifying unit 12, a feature portion specifying unit 13, a two-dimensional cross-section image generation unit 14, and a feature portion measurement unit 15. FIG. 2 schematically shows an example of a side view of the object imaging unit 11.

The object imaging unit 11 includes a camera 111, a camera support unit 112 that supports the camera 111, and an object placement unit 113 on which the object 900 to be measured is placed. The camera 111 is, for example, a monocular camera whose distance from the lens to the in-focus position is fixed. The camera support unit 112 holds the camera 111 so that its height can be changed, which allows the shooting distance, i.e., the distance between the camera 111 and the object 900, to be varied. The object imaging unit 11 photographs the object 900 placed on the object placement unit 113 with the camera 111 at a plurality of shooting distances. Each captured image is stored in a storage unit (not shown) or the like in association with the shooting distance at which it was captured. The object 900 is, for example, a metal part and may carry a foreign body such as a fine protruding portion 901, called a whisker, protruding from its surface (see FIG. 2).
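
Purely as an illustration of how the captured data could be organized (the disclosure only requires that each image be stored together with its shooting distance), the following Python sketch pairs every frame with the camera height at which it was taken. The `camera.capture()` and `stage.move_to()` calls are hypothetical hardware interfaces, not APIs defined by the disclosure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FocusSlice:
    image: np.ndarray      # grayscale frame taken at one camera height
    distance_mm: float     # camera-to-object shooting distance for that frame

def capture_focus_stack(camera, stage, distances_mm):
    """Collect one image per shooting distance (hypothetical hardware objects)."""
    stack = []
    for d in distances_mm:
        stage.move_to(d)                                   # change the camera height
        stack.append(FocusSlice(image=camera.capture(),    # hypothetical capture call
                                distance_mm=d))
    return stack
```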

The three-dimensional shape specifying unit 12 specifies the three-dimensional shape of the object 900 on the basis of the images of the object 900 captured by the object imaging unit 11 and the shooting distances at which they were captured. The three-dimensional shape specifying unit 12 may be configured integrally with the object imaging unit 11.

The feature portion specifying unit 13 specifies feature portions, i.e., portions of the object 900 having predetermined features, on the basis of the three-dimensional shape of the object 900 specified by the three-dimensional shape specifying unit 12. Feature portions are described later.

The two-dimensional cross-section image generation unit 14 extracts the three-dimensional shape of each feature portion specified by the feature portion specifying unit 13 from the three-dimensional shape of the object 900, and generates a plurality of two-dimensional cross-section images of the extracted three-dimensional shape. The two-dimensional cross-section images are described later.

The feature portion measurement unit 15 performs various measurements on each feature portion specified by the feature portion specifying unit 13, on the basis of the plurality of two-dimensional cross-section images generated by the two-dimensional cross-section image generation unit 14.

Some or all of the functions of the three-dimensional shape specifying unit 12, the feature portion specifying unit 13, the two-dimensional cross-section image generation unit 14, and the feature portion measurement unit 15 described above can be realized by one or more computers each having a processor and a memory.

<Processing>
An example of the processing according to the present embodiment will now be described. FIG. 3 is a flowchart showing an example of the shape measurement process executed by the shape measuring device 10.

(Step S301)
The three-dimensional shape specifying unit 12 acquires information representing the three-dimensional shape of the object 900. The object 900 is, for example but not limited to, a metal part and may carry a foreign body such as a fine protruding portion 901. The three-dimensional shape of the object 900 can be measured by photographing the object 900 several times from a specific direction with a monocular camera whose lens-to-focus distance is fixed, while varying the distance to the object 900. In the present embodiment, the three-dimensional shape specifying unit 12 acquires, as the information representing the three-dimensional shape of the object 900, the images of the object 900 captured by the camera 111 of the object imaging unit 11 and the corresponding shooting distances. The shooting direction and the number of shots taken by the object imaging unit 11 are not limited.

(Step S302)
The three-dimensional shape specifying unit 12 specifies the three-dimensional shape of the object 900 on the basis of the information representing its three-dimensional shape. For example, the three-dimensional shape specifying unit 12 can specify the three-dimensional shape of the object 900 by computing the spatial distribution of in-focus positions on the object surface from the sharply focused region of each image obtained in the multiple shots described above and the shooting distance from the camera 111 to the object 900 at which that image was captured.
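
The step above leaves the focus-based reconstruction to known techniques. A minimal sketch of one common shape-from-focus estimate is given below: for every pixel, the shooting distance whose image is sharpest at that pixel is taken as the surface height. The Laplacian-based focus measure and the averaging window are assumptions made for illustration, not values specified in the disclosure.

```python
import numpy as np
from scipy import ndimage

def depth_from_focus(images, distances_mm, window=9):
    """images: list of 2-D float arrays, one per shooting distance."""
    focus = []
    for img in images:
        lap = ndimage.laplace(img.astype(float))                      # strong response at sharp detail
        focus.append(ndimage.uniform_filter(lap ** 2, size=window))   # local focus measure
    focus = np.stack(focus)                    # shape: (n_distances, H, W)
    best = np.argmax(focus, axis=0)            # index of the sharpest slice per pixel
    depth = np.asarray(distances_mm)[best]     # shooting distance of best focus = surface height
    return depth, focus.max(axis=0)            # depth map and a per-pixel confidence
```

The returned confidence map can be used to discard pixels that were never well focused in any slice.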

(Step S303)
The feature portion specifying unit 13 specifies feature portions having predetermined features in the three-dimensional shape of the object 900 specified by the three-dimensional shape specifying unit 12. Specifically, on the basis of the three-dimensional shape of the object 900, the feature portion specifying unit 13 obtains features such as the normal direction and curvature of each surface of the object 900 and their distributions. For a local shape of the object 900, the feature portion specifying unit 13 can obtain the normal direction and curvature at the center of that local shape. The feature portion specifying unit 13 then classifies the three-dimensional shape of the object 900 into groups such as planes, curved surfaces, and line shapes according to the obtained features, and specifies the shape of each group as a feature portion. Such classification of three-dimensional shapes can be performed using known methods as appropriate.
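
Since the disclosure only states that known classification methods may be used, the following is a rough per-pixel sketch under simplifying assumptions: the object surface is available as a height map, curvature is approximated by the Laplacian of that map, and a single threshold separates flat or gently curved regions from sharply curved, line-like ones. The threshold value is illustrative.

```python
import numpy as np
from scipy import ndimage

def classify_features(depth, curvature_thresh=0.05, pixel_pitch=1.0):
    """Group pixels of a height map into plane-like and line-like feature portions."""
    dz_dy, dz_dx = np.gradient(depth, pixel_pitch)
    # Unit surface normals of the height map z = depth(y, x); usable for finer grouping.
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    # Curvature proxy: magnitude of the second derivative of the height map.
    curvature = np.abs(ndimage.laplace(depth))
    flat = curvature < curvature_thresh        # planes and gently curved surfaces
    sharp = ~flat                              # sharply curved, thin (line-like) structures
    plane_labels, _ = ndimage.label(flat)      # connected plane/curved-surface groups
    line_labels, _ = ndimage.label(sharp)      # connected line-shape groups (e.g. whiskers)
    return plane_labels, line_labels, normals
```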

(Step S304)
For each feature portion of the object 900 specified by the feature portion specifying unit 13, the two-dimensional cross-section image generation unit 14 extracts the three-dimensional shape of that feature portion from the three-dimensional shape of the object 900.
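
Assuming the labelled masks produced by the classification sketch above, extracting one feature portion can be as simple as masking the height map; pixels outside the selected feature are marked invalid.

```python
import numpy as np

def extract_feature(depth, labels, feature_id):
    """Return the height map restricted to one labelled feature portion."""
    mask = labels == feature_id
    feature_depth = np.where(mask, depth, np.nan)   # keep heights only inside the feature
    return feature_depth, mask
```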

(Step S305)
The two-dimensional cross-section image generation unit 14 generates a plurality of two-dimensional cross-section images (a cross-section image group) for each feature portion on the basis of its three-dimensional shape. Specifically, the two-dimensional cross-section image generation unit 14 can generate a two-dimensional cross-section image of the feature portion by using its three-dimensional shape to extract the contour of the shape (the cross-sectional shape) at an arbitrary shooting distance. Alternatively, the two-dimensional cross-section image generation unit 14 can generate a two-dimensional cross-section image of the feature portion by extracting the image of the region corresponding to the feature portion from an in-focus image of the object 900. In that case, so that a proper cross section is obtained even in regions where the in-focus image is blurred, it is desirable for the two-dimensional cross-section image generation unit 14 to generate the cross-section image using the luminance values or focus measures of well-focused regions. The resulting plurality of two-dimensional cross-section images are images obtained by cutting (dividing) the three-dimensional shape of the feature portion with a plurality of mutually parallel planes spaced at a predetermined interval along a predetermined direction. The predetermined direction can be the direction in which the camera 111 moves. The predetermined interval can be any length that allows the shape of the feature portion to be captured; for example, if the feature portion is a line shape 1 μm thick, an interval of less than 1 μm is sufficient.
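
A sketch of the slicing step under one simplifying assumption: the feature portion is represented as a height map viewed along the camera direction, and the cutting planes are perpendicular to that direction, so the cross-section at height z contains every pixel whose measured height reaches z. The slice spacing is left to the caller (for example, below the whisker thickness, as noted above).

```python
import numpy as np

def cross_sections(feature_depth, spacing):
    """Return (z, binary cross-section image) pairs for one feature portion."""
    valid = ~np.isnan(feature_depth)
    heights = np.where(valid, feature_depth, -np.inf)   # invalid pixels can never reach any z
    z_min = float(np.nanmin(feature_depth))
    z_max = float(np.nanmax(feature_depth))
    slices = []
    for z in np.arange(z_min, z_max + spacing, spacing):
        section = heights >= z                          # material present at this height
        slices.append((float(z), section))
    return slices
```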

(Step S306)
For each feature portion, the feature portion measurement unit 15 performs various measurements on the basis of the plurality of two-dimensional cross-section images (the cross-section image group) generated for that feature portion, and determines the three-dimensional shape of the feature portion. The measurements include, for example, thinning processing and length measurement. In doing so, it is desirable for the feature portion measurement unit 15 to extract, in each two-dimensional cross-section image, a region set inward from the contour of the cross-sectional shape by an arbitrary distance. In this way, even when the feature portion is a line shape lying on the surface of the object 900, the central part of the feature portion can be obtained as a line.
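
A per-slice sketch using standard image-processing operations: the binary cross-section is first eroded toward its interior, as suggested above, and then skeletonized. The length estimate from the skeleton pixel count is a crude assumption, since the disclosure leaves the thinning and length-measurement methods to known techniques.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def measure_slice(section, erode_px=1, pixel_pitch_um=1.0):
    """Thin one binary cross-section and roughly estimate its in-plane length."""
    interior = ndimage.binary_erosion(section, iterations=erode_px) if erode_px else section
    skeleton = skeletonize(interior)                    # one-pixel-wide centre line
    length_um = skeleton.sum() * pixel_pitch_um         # crude: skeleton pixel count x pixel pitch
    return skeleton, length_um
```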

(Step S307)
The feature portion measurement unit 15 calculates information representing the three-dimensional shape of the object 900 from the shapes measured for the individual feature portions. This completes the shape measurement process.

<Specific example>
A concrete example of the shape measurement process is described below with further reference to FIGS. 4 to 6. In this example, the object 900 is a metal part, and the three-dimensional shape of the fine protruding portion 901 is determined as a feature portion.

FIG. 4 shows an example of the information representing the three-dimensional shape of the object 900 that the three-dimensional shape specifying unit 12 acquires in step S301 described above. As shown in FIG. 4, images of the object 900 are captured with the camera 111 at each shooting distance using the focusing method, and the plurality of captured images, associated with their shooting distances, are acquired by the three-dimensional shape specifying unit 12 as information representing the three-dimensional shape of the object 900.

FIG. 5 shows an example of the feature portions specified by the feature portion specifying unit 13 in step S303 described above. As shown in FIG. 5, the object 900 of the present embodiment is classified into the fine protruding portion 901 (line shape group) and the remaining portion other than the fine protruding portion 901 (curved surface group), and each is specified as a feature portion.

FIG. 6 shows an example of the plurality of two-dimensional cross-section images (cross-section image group) that the two-dimensional cross-section image generation unit 14 generates for the fine protruding portion 901 in step S305 described above. As shown in FIG. 6, two-dimensional cross-section images of the fine protruding portion 901 are generated at predetermined intervals. The two-dimensional cross-section image generation unit 14 calculates the three-dimensional shape of the fine protruding portion 901 on the basis of this group of cross-section images. The three-dimensional shape can be calculated by applying thinning processing to the shape of the fine protruding portion 901 in each two-dimensional cross-section image and then connecting the thinned shapes of the fine protruding portion 901 across the cross-section images to reconstruct the solid shape. Various known methods can be used for the thinning and reconstruction processing.
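
As one way to picture the reconstruction described above, the sketch below chains the thinned cross-sections of the whisker back into a three-dimensional curve by taking a single representative point (the skeleton centroid) per slice and summing the segment lengths between neighbouring slices. Using one centroid per slice is an illustrative simplification; an actual implementation would connect skeleton branches with a known three-dimensional linking method.

```python
import numpy as np

def whisker_length(slices_with_skeletons, pixel_pitch_um=1.0):
    """slices_with_skeletons: list of (z in micrometres, boolean skeleton image) pairs."""
    points = []
    for z_um, skeleton in slices_with_skeletons:
        ys, xs = np.nonzero(skeleton)
        if len(xs) == 0:
            continue                                   # slice above the whisker tip
        points.append([xs.mean() * pixel_pitch_um,     # representative x position
                       ys.mean() * pixel_pitch_um,     # representative y position
                       z_um])                          # slice height
    if len(points) < 2:
        return 0.0
    points = np.asarray(points)
    segments = np.diff(points, axis=0)                 # vectors between neighbouring slices
    return float(np.linalg.norm(segments, axis=1).sum())
```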

[Effects]
In the shape measuring device 10 according to the embodiment of the present disclosure, the three-dimensional shape of the object 900 is classified and grouped by shape features (such as planes, curved surfaces, and line shapes), and the shape of each group is specified as a feature portion. A plurality of two-dimensional cross-section images (a cross-section image group) are then generated for the three-dimensional shape of each feature portion, and various measurements are performed individually on each feature portion using the generated cross-section images.

With this processing, even when a plurality of feature portions of the object 900 (for example, a plane and a line shape) overlap one another in a given direction (for example, the shooting direction), each feature portion is identified and extracted on its own, so various measurements can be carried out suitably for every feature portion.

The present disclosure can be embodied not only as a shape measuring device but also as a shape measuring method, a program executed by a computer of the shape measuring device, and a non-transitory computer-readable recording medium storing the program.

The shape measuring device of the present disclosure is useful, for example, for measuring the shape of an object with high accuracy and at high speed using image thinning processing.

10 Shape measuring device
11 Object imaging unit
12 Three-dimensional shape specifying unit
13 Feature portion specifying unit
14 Two-dimensional cross-section image generation unit
15 Feature portion measurement unit
111 Camera
112 Camera support unit
113 Object placement unit
900 Object
901 Fine protruding portion

Claims (1)

A shape measuring device that measures the shape of an object, comprising:
a three-dimensional shape specifying unit that specifies a three-dimensional shape of the object;
a feature portion specifying unit that specifies a feature portion, which is a part of the three-dimensional shape of the object having a predetermined feature;
a two-dimensional cross-section image generation unit that extracts a three-dimensional shape of the feature portion from the three-dimensional shape of the object and generates a plurality of two-dimensional cross-section images, each being an image obtained by cutting the three-dimensional shape of the feature portion with one of a plurality of mutually parallel planes; and
a feature portion measurement unit that performs measurement on the feature portion on the basis of the plurality of two-dimensional cross-section images.
JP2020051547A, filed 2020-03-23 (priority date 2020-03-23), Shape measuring device, published as JP2021148731A, status: Pending

Priority Applications (1)

Application Number: JP2020051547A | Priority date: 2020-03-23 | Filing date: 2020-03-23 | Title: Shape measuring device (published as JP2021148731A)

Applications Claiming Priority (1)

Application Number: JP2020051547A | Priority date: 2020-03-23 | Filing date: 2020-03-23 | Title: Shape measuring device (published as JP2021148731A)

Publications (1)

Publication Number: JP2021148731A | Publication date: 2021-09-27

Family

ID=77851249

Family Applications (1)

Application Number: JP2020051547A | Title: Shape measuring device | Priority date: 2020-03-23 | Filing date: 2020-03-23 | Status: Pending (JP2021148731A)

Country Status (1)

Country Link
JP (1) JP2021148731A (en)

Similar Documents

Publication Publication Date Title
EP2568253B1 (en) Structured-light measuring method and system
JP6490219B2 (en) Autofocus system and autofocus method in digital holography
JP2021192064A (en) Three-dimensional measuring system and three-dimensional measuring method
KR101394809B1 (en) A method and systems for obtaining an improved stereo image of an object
JP2007074079A (en) Image input device
CN106846383A (en) High dynamic range images imaging method based on 3D digital micro-analysis imaging systems
KR20100080704A (en) Method and apparatus for obtaining image data
JP2015104107A (en) Ip stereoscopic video estimation device and program therefor
JP2017090447A (en) Three-dimensional contour information detecting system, and detecting method
JP5336325B2 (en) Image processing method
KR102253320B1 (en) Method for displaying 3 dimension image in integral imaging microscope system, and integral imaging microscope system implementing the same
JP2021148731A (en) Shape measuring device
JP7373297B2 (en) Image processing device, image processing method and program
WO2019016879A1 (en) Object detection device and object detection method
Kontogianni et al. Investigating the effect of focus stacking on sfm-mvs algorithms
JP2020190456A (en) Shape measuring device
JP6216842B1 (en) Image processing apparatus, image processing method, program, and system
KR101657373B1 (en) Multiple depth extraction method in integral imaging display
AU2022204926B2 (en) System and Method for Extracting Information on the Spatial Distribution of Wavefronts
CN111489384A (en) Occlusion assessment method, device, equipment, system and medium based on mutual view
Tian et al. Novel Automatic Human-Height Measurement Using a Digital Camera
WO1996030803A1 (en) Method and image capturing device for determining distance
CN211855280U (en) Visual ranging system based on dispersive lens and optical filter
CN114543749B (en) Optical system and method for measuring multi-target field depth
Xu et al. Range measurement from defocus gradient