JP5375201B2: 3D shape measuring method and 3D shape measuring apparatus
Publication number: JP5375201B2 (application JP2009048662A)
Authority: JP (Japan)
Legal status: Expired - Fee Related
Description
The present invention relates to three-dimensional shape measurement using a phase shift method.
As the quality of automobiles and similar products improves, high shape accuracy is required for the outer panels, parts, molds, and the like used in them. In addition, to reduce production costs and speed up development, it is necessary to compare and verify 3D CAD data against the measured 3D shape data of products and to feed the results back promptly to the production process. For this reason, high-precision optical three-dimensional shape measuring instruments have been introduced, performing not only off-line mold shape measurement but also in-line product shape inspection.
Conventionally, because of constraints such as cost and size, the three-dimensional shape of a measurement object has often been obtained by scanning a displacement meter (one-dimensional) or a light-section sensor (two-dimensional) over it.
However, there is a need for systems that can measure without mechanical scanning, and three-dimensional coordinate measurement of the object by the phase shift method has been proposed as a measurement method suitable for such systems.
In the phase shift method, lattice fringes whose phases differ from one another by π/2 are sequentially projected from a projector onto the object to be measured, and the phase at each pixel is obtained from the four captured images. The relationship between the phase obtained by the camera and the three-dimensional coordinates can then be derived from the geometric arrangement of the projector and camera and from the lattice fringe period.
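The four-step phase recovery described above reduces to an arctangent per pixel. The following is a minimal sketch of that standard computation, not code from the patent; the function name and synthetic data are illustrative:

```python
import numpy as np

def phase_from_four_steps(i0, i1, i2, i3):
    """Per-pixel wrapped phase from four fringe images shifted by pi/2.

    For I_k = A + B*cos(phi + k*pi/2):
      I0 - I2 = 2B*cos(phi),  I3 - I1 = 2B*sin(phi)
    so phi = atan2(I3 - I1, I0 - I2), wrapped to (-pi, pi].
    """
    return np.arctan2(i3 - i1, i0 - i2)

# synthetic check: a known phase ramp within (-pi, pi) is recovered exactly
phi = np.linspace(-3.0, 3.0, 5)
imgs = [100 + 50 * np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = phase_from_four_steps(*imgs)
```

Note that the result is wrapped to (−π, π]; resolving the absolute phase is a separate step, discussed below.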
However, to obtain the three-dimensional coordinates accurately in this way, distortion of the projection and imaging lenses must be corrected, and Non-Patent Document 1 below proposes using the Fourier transform for this correction.
Hereinafter, the three-dimensional coordinate measuring method of Non-Patent Document 1 will be described.
(z coordinate calculation)
A grid pattern is projected onto a glass plate sprayed white and imaged, and the phase Ψ(m, n) of each pixel of the captured image is obtained. While the glass plate is moved stepwise in the z direction, the phase Ψ(m, n) is measured at each position, and from the N (N = 11) images the relationship between z(m, n) and the phase Ψ(m, n),

z(m, n) = a(m, n) + b(m, n)Ψ(m, n) + c(m, n)Ψ²(m, n)   (i)

is obtained by the least squares method.
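The per-pixel least-squares fit of formula (i) is an ordinary quadratic polynomial fit repeated at every pixel. The sketch below shows one way to do it; the function name, array shapes, and synthetic data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fit_phase_to_z(psi_stack, z_positions):
    """Fit z = a + b*psi + c*psi**2 per pixel by least squares (formula (i)).

    psi_stack:   (N, H, W) phases measured at N known plate positions
    z_positions: (N,) known plate distances
    Returns a (3, H, W) array of coefficients [a, b, c] per pixel.
    """
    n, h, w = psi_stack.shape
    coeffs = np.empty((3, h, w))
    for i in range(h):
        for j in range(w):
            # np.polyfit returns highest power first: [c, b, a]
            c, b, a = np.polyfit(psi_stack[:, i, j], z_positions, 2)
            coeffs[:, i, j] = (a, b, c)
    return coeffs

# synthetic check on a 1x1 "image": z = 1 + 2*psi + 0.5*psi**2
psi = np.linspace(0.1, 2.0, 11).reshape(11, 1, 1)
z = 1 + 2 * psi[:, 0, 0] + 0.5 * psi[:, 0, 0] ** 2
abc = fit_phase_to_z(psi, z)
```

With exact quadratic data, the fitted coefficients reproduce (a, b, c) = (1, 2, 0.5) to numerical precision.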
(x, y coordinate calculation)
A glass plate on which a sinusoidal orthogonal grating is printed and pasted is imaged, and the captured image is Fourier transformed to obtain the grating period and a phase image. FIG. 23 outlines the coordinate system used in Non-Patent Document 1: a CCD (imaging unit) receives light from the imaging object through an imaging lens. In the drawing, the distance direction from the CCD to the imaged glass plate is denoted z, and the two directions orthogonal to the distance direction z are denoted x and y.
While the glass plate is moved stepwise in the z direction (N = 11), the captured image is Fourier transformed at each position, and the x, y coordinate magnification with respect to the change in z is obtained by the least squares method (N = 11, first-order fit; the d and e values of calculation formulas (ii) and (iii) below). Further, (m_c, n_c) is obtained for each pixel of the captured image from the phase of the grating; (m_c, n_c) denotes the center of the m and n imaging pixels.

x = (m − m_c)[d + ez]   (ii)
y = (n − n_c)[d + ez]   (iii)
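Formulas (ii) and (iii) say that the lateral coordinates scale linearly with distance z. A minimal sketch, with illustrative parameter values that are not from the paper:

```python
def pixel_to_xy(m, n, z, m_c, n_c, d, e):
    """Formulas (ii)/(iii): lateral coordinates from pixel indices and distance.

    d is the pixel scale at z = 0 and e the change of magnification per unit z,
    both obtained beforehand by the first-order least-squares fit.
    """
    scale = d + e * z
    return (m - m_c) * scale, (n - n_c) * scale

# a pixel 10 columns right and 10 rows above center, at z = 2.0
x, y = pixel_to_xy(110.0, 90.0, 2.0, m_c=100.0, n_c=100.0, d=0.01, e=0.001)
```

Here the scale factor is d + ez = 0.012, so the pixel maps to (x, y) = (0.12, −0.12) in world units.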
Further, the imaging distortion (imaging position shift) of the lattice fringes due to lens distortion is corrected by obtaining the phase via the Fourier transform (two-dimensionally, in the x and y directions).
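To illustrate Fourier-based phase extraction, the 1-D sketch below isolates the fringe's carrier peak in the spectrum and takes the angle of the inverse transform. This is a simplified illustration of the general technique, not the exact two-dimensional procedure of Non-Patent Document 1, and all names are illustrative:

```python
import numpy as np

def fringe_phase_fourier(signal, carrier_bin):
    """Wrapped fringe phase via the Fourier-transform method (1-D sketch).

    Keeps only the positive-frequency carrier peak, so the inverse FFT
    yields a complex analytic signal whose angle is the fringe phase.
    """
    spec = np.fft.fft(signal)
    filt = np.zeros_like(spec)
    filt[carrier_bin] = spec[carrier_bin]   # keep only the +carrier peak
    analytic = np.fft.ifft(filt)
    return np.angle(analytic)               # wrapped phase estimate

# synthetic fringe with 8 periods across 256 samples (carrier at bin 8)
n = 256
t = np.arange(n)
true_phase = 2 * np.pi * 8 * t / n
sig = 100 + 50 * np.cos(true_phase)
phi = fringe_phase_fourier(sig, 8)
```

Because the DC term and the negative-frequency peak are filtered out, the recovered angle matches the true phase modulo 2π.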
H. O. Saldner and J. M. Huntley, "Profilometry using temporal phase unwrapping and a spatial light modulator-based fringe projector", Opt. Eng. 36(2), 610-615 (1997)
As shown in Non-Patent Document 1, the imaging position shift of the lattice fringes caused by lens distortion can be corrected by the Fourier transform (two-dimensionally, in the x and y directions), but this Fourier-transform correction processing takes a very long time.
The present invention provides high-precision three-dimensional shape measurement that is faster and is not affected by lens distortion and the like.
The present invention is a three-dimensional shape measurement method that projects a plurality of lattice fringes having different phases onto an object to be measured and, by the phase shift method, obtains from the resulting lattice fringe images a three-dimensional shape expressed by a distance direction coordinate to the object and two-dimensional coordinates orthogonal to the distance direction. A reference flat plate is placed at a position whose distance from the projection unit and the imaging unit is known, a plurality of lattice fringes having different phases are projected onto it, the phase of each of a plurality of pixels is calculated from the captured images of the lattice fringes, and a phase-distance relationship is calculated from the calculated phases and the known distance. A reference grid plate, which has a reference grid with known two-dimensional coordinates on a plane orthogonal to the distance direction, is placed at a position whose distance from the projection unit and the imaging unit is known, the two-dimensional coordinates of a plurality of pixels of the captured image are calculated based on the reference grid, and a distance-two-dimensional-coordinate relationship is calculated from the calculated two-dimensional coordinates of the pixels and the known distance. At the time of actual measurement, the object to be measured is placed at a predetermined distance from the projection unit and the imaging unit, a plurality of lattice fringes having different phases are projected onto it, and the phase of each pixel is calculated. The distance for each pixel is then calculated based on the phase-distance relationship, and the two-dimensional coordinates of the pixel are calculated from that distance based on the distance-two-dimensional-coordinate relationship, thereby obtaining the three-dimensional shape of the object to be measured.
In another aspect of the present invention, in the above method, when calculating the phase-distance relationship, lattice fringes whose phase change over the projection region has a period of 2π or less and lattice fringes whose phase change has a period greater than 2π are projected onto the reference flat plate. Among the plurality of distance coordinate candidates calculated from the phase of the captured image obtained when projecting the fringes with a period greater than 2π, the candidate closest to the distance coordinate calculated from the phase of the captured image obtained when projecting the fringes with a period of 2π or less is taken as the distance calculation result.
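The candidate-selection step above can be sketched in a few lines: the fine (wrapped) fringe phase implies a set of equally spaced distance candidates, and the unambiguous coarse-fringe estimate picks one of them. The names and numbers here are illustrative, not from the patent:

```python
import numpy as np

def resolve_distance(z_candidates, z_coarse):
    """Pick, from the candidates implied by the fine (wrapped) fringe phase,
    the one closest to the unambiguous coarse-fringe distance estimate."""
    z_candidates = np.asarray(z_candidates)
    return z_candidates[np.argmin(np.abs(z_candidates - z_coarse))]

# a fine fringe with ambiguity period p yields candidates z0 + k*p;
# the (less precise) coarse estimate selects the right k
p = 5.0
candidates = 1.3 + p * np.arange(5)   # 1.3, 6.3, 11.3, 16.3, 21.3
picked = resolve_distance(candidates, 12.0)
```

With a coarse estimate of 12.0, the nearest fine candidate is 11.3, which keeps the precision of the fine fringe while removing the 2π ambiguity.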
In another aspect of the present invention, in the above method, when calculating the phase-distance relationship, the reference flat plate is set to a plurality of different distances and the phase of each pixel is calculated at each distance, and the intervals between the set distances are chosen so that the difference between the correspondingly calculated phases is less than 2π.
In another aspect of the present invention, in the above method, when calculating the distance-two-dimensional-coordinate relationship, the reference grid plate is set to a plurality of different distances and the two-dimensional coordinates of each pixel are calculated at each distance, and the intervals between the set distances are chosen so that the difference between the phases corresponding to the respective distances is less than 2π.
In another aspect, the present invention is a three-dimensional shape measuring apparatus that projects a plurality of lattice fringes having different phases onto an object to be measured and, by the phase shift method, obtains from the resulting lattice fringe images a three-dimensional shape expressed by a distance direction coordinate to the object and two-dimensional coordinates orthogonal to the distance direction. The apparatus comprises a stage on which the object to be measured is placed at a predetermined position, a projection unit that projects a plurality of lattice fringes having different phases onto the object placed on the stage, an imaging unit that images the object placed on the stage, and a measurement processing unit that obtains the three-dimensional shape of the object based on the captured images. The measurement processing unit includes a phase calculation unit, a phase-distance relationship calculation unit, a pixel two-dimensional coordinate calculation unit, a distance-two-dimensional-coordinate relationship calculation unit, and a three-dimensional coordinate calculation unit. When a plurality of lattice fringes having different phases are projected onto a reference flat plate whose distance from the projection unit and the imaging unit is known, the phase calculation unit calculates the phases of a plurality of pixels from the captured images of the lattice fringes, and the phase-distance relationship calculation unit calculates a phase-distance relationship from the calculated phases and the known distance. When a reference grid plate, having a reference grid with known two-dimensional coordinates on a plane orthogonal to the distance direction, is placed at a position whose distance from the projection unit and the imaging unit is known, the pixel two-dimensional coordinate calculation unit calculates the two-dimensional coordinates of a plurality of pixels of the captured image based on the two-dimensional coordinates of the reference grid, and the distance-two-dimensional-coordinate relationship calculation unit calculates a distance-two-dimensional-coordinate relationship from the calculated two-dimensional coordinates of the pixels and the known distance. During actual measurement, the object to be measured is placed at a predetermined distance from the projection unit and the imaging unit, a plurality of lattice fringes having different phases are projected onto it, and the phase calculation unit calculates the phase of each pixel from the obtained lattice fringe images. The three-dimensional coordinate calculation unit then calculates the distance for each pixel based on the phase-distance relationship and calculates the two-dimensional coordinates of the pixel from that distance based on the distance-two-dimensional-coordinate relationship, thereby obtaining the three-dimensional shape of the object to be measured.
In another aspect of the present invention, in the above apparatus, the projection unit can project lattice fringes whose phase change over the projection region has a period of 2π or less and lattice fringes whose phase change has a period greater than 2π. When calculating the phase-distance relationship, the projection unit projects both onto the reference flat plate, and the phase-distance relationship calculation unit takes, among the plurality of distance coordinate candidates calculated from the phase of the captured image obtained when projecting the fringes with a period greater than 2π, the candidate closest to the distance coordinate calculated from the phase of the captured image obtained when projecting the fringes with a period of 2π or less as the distance corresponding to the phase of the captured image.
In another aspect of the present invention, in the above apparatus, the stage holds the reference flat plate for calculating the phase-distance relationship and the reference grid plate for calculating the distance-two-dimensional-coordinate relationship. The stage can set the reference flat plate and the reference grid plate at a plurality of different distances from the projection unit and the imaging unit, and the intervals between the plurality of distances set by the stage are chosen so that the difference between the correspondingly calculated phases is less than 2π.
In another aspect of the present invention, in the above three-dimensional shape measuring method or apparatus, the plurality of lattice fringes having different phases projected onto the object to be measured are sinusoidal lattice fringes.
As described above, in the present invention, a reference flat plate is placed at a position whose distance is known, a plurality of lattice fringes having different phases are projected onto it, the phase of each of a plurality of pixels is calculated from the captured images, and the phase-distance relationship is calculated from the calculated phases and the known distance.
Furthermore, a reference grid plate having a reference grid with known two-dimensional coordinates on a plane orthogonal to the distance direction is placed at a position whose distance is known, the two-dimensional coordinates of the pixels of the captured image are calculated based on the reference grid, and the distance-two-dimensional-coordinate relationship is calculated from those two-dimensional coordinates and the known distance.
The phase-distance relationship is obtained by polynomial approximation of the obtained phases and the known distance coordinates, and the distance-two-dimensional-coordinate relationship is likewise obtained by polynomial approximation of the distance coordinate obtained for each pixel and the correspondingly obtained two-dimensional coordinates. By adopting polynomial approximation for both the phase-distance calibration and the distance-two-dimensional-coordinate calibration, the calibration can be executed in a short time while removing the influence of lens distortion. Because the calibration uses captured images, the polynomial approximation covers the same region as the measurement region at actual measurement, so the accuracy is high.
Further, when calculating the phase-distance relationship, by projecting onto the reference flat plate both a grid pattern whose phase change has a period of 2π or less and a grid pattern whose phase change has a period greater than 2π, the absolute phase can be determined easily and accurately from the captured images, and the distance can in turn be determined with high accuracy from this phase.
Further, when calculating the phase-distance relationship and the distance-two-dimensional-coordinate relationship, the stage can set the reference flat plate and the reference grid plate at a plurality of positions in the distance direction; by choosing these positions so that the difference between the phases calculated at adjacent positions is less than 2π, the absolute value of the phase can be determined easily without being affected by measurement error.
Hereinafter, modes for carrying out the present invention (hereinafter referred to as embodiments) will be described with reference to the drawings.
[Overview]
FIG. 1 shows a schematic configuration of a three-dimensional shape measurement method according to an embodiment of the present invention and a measurement apparatus 300 that performs this method. The three-dimensional shape measuring apparatus 300 includes a stage 12 on which the object 16 to be measured, the reference flat plate 10, and the reference grid plate 14 can be mounted; a projection unit 310 that projects a lattice pattern onto the measurement target; an imaging unit 312 that images the measurement target; and a measurement processing unit 320 that performs the calibration and actual measurement processing described later. The measurement processing unit 320 includes at least a calculation unit 330 and a storage unit 390, and the calculation unit 330 includes a phase calculation unit 340, a phase-distance relationship calculation unit 350, a grid-pixel coordinate calculation unit 360, a distance-two-dimensional-coordinate relationship calculation unit 370, and a three-dimensional coordinate calculation unit 380.
In the measurement, the object 16 to be measured (DUT) is placed on the stage 12, a plurality of lattice fringes having different phases are projected onto it, and the measurement processing unit 320 measures the three-dimensional shape of the DUT 16 by the phase shift method from the lattice fringe images obtained by imaging it.
FIG. 2 shows the schematic procedure of three-dimensional shape measurement (calibration and actual measurement) according to the present embodiment. As shown in FIG. 2, calibration is performed before actual measurement (s5) on the DUT 16. At the start of measurement, it is therefore first determined whether calibration is necessary (s1). In this determination, when, for example, no accumulated calibration data exists, or a condition such as the calibration data update timing (elapse of a specified period) is met, calibration is judged not yet completed (s1: NO). If valid calibration data already exists, calibration is complete (s1: YES) and actual measurement (s5) is performed.
In this embodiment, prior to actual measurement, at least the calibration for obtaining the relationship between phase (φ) and distance (z) shown in step s3 (φ-z calibration) and the calibration for obtaining the relationship between distance (z) and the two-dimensional pixel coordinates (x, y) shown in step s4 (z-x and z-y calibration) are performed in advance.
In addition, as shown in step s2, calibrating the sine wave of the lattice fringes projected onto the projection target (calibration target or object to be measured) in the phase shift method enables more accurate three-dimensional shape measurement. The sine wave calibration is described in detail later.
In the φ-z calibration (s3), the reference flat plate 10, which has a flat surface without unevenness, is mounted on the stage 12 and moved step by step to positions whose distances from the projection unit 310 and the imaging unit 312 are known. At each position, a plurality of lattice fringes having different phases are sequentially projected. The imaging unit 312, which uses a CCD or the like as its imaging element, sequentially captures the lattice fringes projected onto the reference flat plate 10, and the phase calculation unit 340 calculates the phase at each pixel from the captured images.
The phase-distance relationship calculation unit 350 obtains, for each pixel, the relationship between the calculated phase (φ) and the known position (distance) in the stage movement direction (z coordinate), for example by polynomial approximation.
In the z-x and z-y calibration (s4), a reference grid plate 14, which has a reference grid pattern whose two-dimensional grid coordinates on a plane are known, is mounted on the stage 12 instead of the reference plate 10.
The reference grid plate 14 is moved stepwise to known distance positions by the stage 12 and imaged at each step position. The grid-pixel coordinate calculation unit 360 quickly and accurately obtains the two-dimensional coordinates of each pixel of the captured image (the two-dimensional coordinates along the pixel's line of sight in the distance direction) by linearly interpolating between the known grid coordinates visible in the image.
The distance-two-dimensional coordinate relationship calculation unit 370 calculates the relationship between the obtained pixel two-dimensional coordinates and the known distance by polynomial approximation. As the reference grid plate 14, a substrate (for example, a glass substrate) on which a reference grid pattern is provided with high positional accuracy by printing or the like can be used. Alternatively, a substrate in which openings or recessed portions are formed in a grid pattern at the reference positions may be used.
The phase-distance relationship obtained by the φ-z calibration and the distance-two-dimensional coordinate relationships obtained by the z-x and z-y calibrations (the z-x and z-y relational expressions) are stored in the storage unit 390 and referred to by the phase-distance relationship calculation unit 350, the three-dimensional coordinate calculation unit 380, and others during the calculation of actual measurement. The calibration data may be stored for all pixels, or for only some pixels in order to improve processing speed and reduce the amount of stored information.
During actual measurement (s5), the DUT 16 is mounted on the stage 12, the projection unit 310 projects a plurality of lattice fringes having different phases onto the DUT 16, and the imaging unit 312, arranged at a predetermined position, captures the lattice fringe images projected on the DUT 16.
The phase calculation unit 340 calculates the phase φ of each pixel of the captured image. The phase-distance relationship calculation unit 350 obtains the stage-direction distance (z coordinate) of the DUT 16 from the calculated phase φ using the stored phase-distance relationship. The three-dimensional coordinate calculation unit 380 then calculates the remaining two-dimensional coordinates (x and y coordinates) from the obtained z coordinate using the stored distance-two-dimensional coordinate relationships. In this way, the three-dimensional shape of the device under test 16 is obtained as the three-dimensional coordinates (x, y, z) at each point, from the z coordinate and the correspondingly determined x and y coordinates.
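The actual-measurement calculation per pixel, as described above, reduces to evaluating stored polynomials (a minimal sketch; the coefficients and the function name below are hypothetical stand-ins for the calibration data held in the storage unit 390):

```python
import numpy as np

# Hypothetical per-pixel calibration coefficients, as would be stored by the
# storage unit 390: z as a polynomial of absolute phase, and x, y as
# polynomials of z.  All numeric values are illustrative only.
phi_to_z = np.array([0.0, -1.5 / np.pi, 315.0])  # z = 315 - 1.5*phi/pi
z_to_x = np.array([0.01, -5.0])                  # x = 0.01*z - 5.0
z_to_y = np.array([-0.015, 9.5])                 # y = 9.5 - 0.015*z

def pixel_3d(phi_abs):
    """Actual-measurement step for one pixel: absolute phase -> z coordinate,
    then z -> the remaining (x, y) coordinates."""
    z = np.polyval(phi_to_z, phi_abs)
    return np.polyval(z_to_x, z), np.polyval(z_to_y, z), z
```

The same two-stage lookup (phase to z, then z to x and y) is what units 350 and 380 perform for every pixel of the captured image.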
[Phase shift method]
Next, the phase shift method used for measurement in the present embodiment will be described. In the phase shift method, as shown in FIG. 3, lattice fringes having different phases are projected from the projection unit (projector) 310 onto the DUT 16 and captured by the imaging unit (camera) 312, and the shape is obtained from the phase shift amount of the captured fringes. Since the measurement accuracy depends largely on the accuracy of the projected lattice pattern, a precise grating drawn on a glass plate or film is projected, or a lattice pattern produced by optical interference is projected.
A general-purpose data projector can be used as the projector of the projection unit. Since a data projector can easily project lattice fringes, a highly accurate measuring instrument can be realized in combination with a TV camera. Furthermore, ultra-compact projectors that use LEDs or lasers as light sources have longer lamp lives than projectors using ultra-high-pressure mercury lamps, so a compact, long-lived instrument is easily realized.
(Measurement principle of phase shift method)
Hereinafter, the measurement principle of the phase shift method will be described in more detail. From the projector position in FIG. 4, sinusoidal lattice fringes P(u, v) whose phases differ by π/2 are projected in order. Examples of the sine wave lattice pattern to be projected are shown in FIGS.
This sine wave lattice pattern P (u, v) is expressed by the following equation (1).
As shown in FIG. 4, when there is an object to be measured (a flat plate) at a distance z from the projector, the brightness I′(x, y) on the flat plate plane can be expressed by the following equation (2).
When the lattice fringes of equation (1) with different phases are projected (k = 0 to 3) and the four images I0(i, j) to I3(i, j) captured by the camera are used, the phase φ(i, j) of each pixel is obtained by equation (3).
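Equation (3) itself is not reproduced in this extract, so the following sketch assumes the common four-step convention I_k = a + b·cos(φ + kπ/2), under which I3 − I1 = 2b·sin φ and I0 − I2 = 2b·cos φ; the function name is hypothetical:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase (0..2*pi) from four captures of gratings shifted by
    pi/2 each, assuming I_k = a + b*cos(phi + k*pi/2).  Then
    i3 - i1 = 2b*sin(phi) and i0 - i2 = 2b*cos(phi), so atan2 recovers phi
    independently of the offset a and the fringe amplitude b."""
    return np.mod(np.arctan2(i3 - i1, i0 - i2), 2 * np.pi)
```

Because the offset and amplitude cancel, this per-pixel calculation is insensitive to uniform illumination and surface reflectance, which is one reason the phase shift method is robust.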
FIG. 6 shows the geometric positional relationship between the phase of the projected grid pattern and the camera image. Letting φq be the phase of pixel q(iq, jq) of the lattice fringe image obtained from equation (3), the measurement point Sq, which is the intersection of the line of sight of pixel q with the object to be measured, lies on the projection plane (i) of phase φq projected by the projector.
In the phase shift method shown here, four lattice fringes differing in phase by π/2 are used, but five, six, or more fringes may be used. Alternatively, three lattice fringes differing in phase by π/3 may be used; the phase difference and the number of lattice fringes can be chosen according to the required measurement time and accuracy (the more fringes, the higher the accuracy).
That is, the coordinates Sq(xq, yq, zq) of the measurement point are the intersection of:
the plane of phase φq projected from the projector (i), and
the line of sight of pixel q on the camera image sensor (ii).
Therefore, for example, the phase φ is obtained for all pixels using equation (3), the intersection of (i) and (ii) is calculated for each, and the three-dimensional shape of the object to be measured is obtained.
[High-precision measurement method]
(1. Outline of high-precision measurement method)
To accurately measure the shape of the object to be measured,
(1) a sine wave pattern must be projected and imaged without distortion, and (2) the intersection of the plane (i) and the straight line (ii) must be accurately obtained.
Therefore, in the present embodiment, calibration is executed as described above to realize highprecision measurement. Hereinafter, this calibration method will be described.
(1) The nonlinearity between the projector's brightness setting value and the brightness value of the captured image is corrected (calibrated) so that a true sine wave grid pattern is projected (calibration of the sine wave grid pattern).
(2) Lattice fringes are projected while a flat plate is moved in the z direction by a precision stage, the phase φq is obtained, and, for each pixel, the relationship between the x, y, z coordinates on the line of sight (ii) of FIG. 6 and the phase φq is obtained (φ-z, z-x, z-y calibration: three-dimensional calibration). At measurement time, the z coordinate is obtained from the phase φq of each pixel, and the x and y coordinates are obtained from the z coordinate.
(2. Sinusoidal grid pattern calibration)
As the projection unit 310, a general data projector can be used as described above, but data projectors are adjusted to improve their appearance for presentations. The relationship between the projection brightness setting value of such a projector and the actual projected brightness is therefore nonlinear, as shown in FIG. 7. Even if the projector could be set to project linearly, errors would still occur in the video card output of the computer that sends data to the projector, in the data conversion inside the projector, and in the driving of the display device (liquid crystal, DLP (Digital Light Processing), etc.).
Further, the relationship between the amount of incident light received by the camera employed in the imaging unit 312 and the imaging luminance value is also nonlinear. Therefore, in order to project an accurate sine wave lattice pattern and accurately capture an image, it is preferable to obtain and correct the relationship between the projection brightness setting value and the image capture brightness value.
In addition, to increase projection brightness, the projector gathers the light emitted backward from the lamp (filament) using a concave elliptical mirror and reduces unevenness in the amount of light with an integrator illumination system, but it is difficult to eliminate the unevenness completely.
Therefore, in the present embodiment, the light-amount unevenness and the nonlinear relationship between the projection luminance setting value and the imaging luminance value are corrected by, for example, the method shown in the following steps s11 to s15 (calibration of the sine wave lattice fringes).
(S11) First, as shown in FIG. 8, a reference grid whose coordinates are known is projected from a projector onto a ceramic plate to capture an image, and an image corresponding to the projection pixel P (u, v) is captured from the grid position on the image. Pixel q (i, j) is obtained.
(S12) Next, the projection reference grid is projected while being shifted vertically and horizontally, and imaging pixels q (i, j) corresponding to all the projection pixels P (u, v) are obtained. In addition, thousands of corresponding points can be calculated at a time by projecting and photographing a twodimensional reference grid as shown in FIG.
(S13) The same luminance pattern with different projection luminance setting values (brightness) is projected onto the white ceramic plate and imaged.
(S14) The relationship between the set value of P (u, v) and the imaging luminance value of q (i, j) is approximated by a polynomial. FIG. 9A shows the appearance of a polynomial approximation (here, a quartic equation) between the projection brightness setting value and the imaging brightness value.
(S15) Next, as shown in FIG. 9B, a necessary projection setting value corresponding to the imaging luminance value is obtained by inversely calculating the above polynomial.
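Steps s13 to s15 can be illustrated with a small sketch. The gamma-like nonlinearity below is simulated, since this text gives no measured luminance pairs; a real run would instead use the pairs captured in steps s13 and s14:

```python
import numpy as np

# Simulated measured pairs: projector setting value vs. captured brightness.
# A gamma curve stands in for the real projector/camera nonlinearity.
settings = np.linspace(0.0, 255.0, 32)
captured = 255.0 * (settings / 255.0) ** 2.2

# (s14) Quartic polynomial approximation of setting -> captured brightness,
# mirroring the quartic fit of FIG. 9A.
coeffs = np.polyfit(settings, captured, 4)

def setting_for(target_brightness):
    """(s15) Invert the fitted polynomial: find the setting value that
    yields the desired captured brightness, by sampling the curve densely
    (a simple stand-in for an analytic inverse)."""
    s = np.linspace(settings.min(), settings.max(), 4096)
    c = np.polyval(coeffs, s)
    return s[np.argmin(np.abs(c - target_brightness))]
```

With this inverse in hand, a desired sinusoidal imaging brightness can be mapped back to the setting values the projector must actually be given, which is exactly what the correction result of step s16 verifies.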
(S16: Correction Result) FIG. 10 shows the luminance histogram and luminance average value obtained when the projector's luminance setting value is corrected and projected onto the white ceramic plate so that the imaging luminance value becomes constant (150). From the brightness histogram of the area surrounded by the dotted line in FIG. 10A, and from the brightness average, deviation, median, and other statistics of that area shown in FIG. 10B, it can be confirmed that the imaging brightness is in fact held nearly constant at the set value.
FIG. 11 shows an example in which a sine wave is projected onto the white ceramic plate. The imaging brightness value was set to have an amplitude of 50 and an offset of 100. The imaging brightness values along the dotted line through the center of the image in FIG. 11A are plotted in FIG. 11B. From this result, it can be seen that the sine wave lattice fringes are projected and imaged almost exactly according to the set values. That is, calibration for accurate projection and imaging of sinusoidal lattice fringes can in practice be realized by the method described above.
(3. Three-dimensional shape calculation method and three-dimensional calibration method)
Next, a method for calculating the three-dimensional shape of an object to be measured using the above-described lattice fringes, and the three-dimensional (φ-z, z-x, z-y) calibration performed for it, will be described.
(A. Measurement principle)
In FIG. 6, the three-dimensional shape of the object to be measured can be calculated geometrically from parameters such as the positions and orientations (optical axes) of the projector and camera, the focal lengths of the lenses, the projection element size, and the image sensor size.
Conceptually, images of reference points arranged at different positions in three-dimensional space are captured, and the camera parameters are obtained by solving the relationship between the positions of the reference points on the image and their three-dimensional coordinates in space (camera calibration); a lattice pattern is then projected onto a plane and captured with the calibrated camera, and the projector parameters are determined from the relationship between the positions of the lattice image and the three-dimensional coordinates in space (projector calibration). Since the projection plane (i) and the line of sight (ii) in FIG. 6 are obtained from the camera and projector parameters, the three-dimensional coordinates can be calculated from their intersection.
Camera lens distortion (aberration) can be corrected by imaging a grid pattern or square lattice in advance, and projector lens distortion can be corrected by projecting a predetermined pattern from the projector.
In the present embodiment, attention is paid to the line of sight (ii) shown in FIG. 6, and the relationship between the phase change on the line of sight (ii) and the three-dimensional coordinates is approximated by a function. That is, by approximating this relationship, lens distortion and the like are corrected automatically, without performing separate camera or projector lens distortion correction.
FIG. 12 shows the relationship between the phase φ _{q} of the line of sight (ii) of the photographing pixel q (i _{q} , j _{q} ) and the z axis along the lens optical axis direction.
As can be understood from FIG. 12A, the phase on the line of sight (ii) repeatedly wraps past 2π when the z coordinate changes greatly, whereas the phase can be measured only in the range 0 to 2π. If, for example, the phase changes by 2nπ over the measurement region in the z coordinate direction, there are n candidate points for the z coordinate. In the example shown in FIG. 12B, n = 5, and five candidate points L0 to L4 exist for the phase φq. Therefore, if the phases are connected as shown in FIG. 12C (this is referred to as unwrapping processing; hereinafter the connected phase is called the absolute phase), z is obtained immediately from the absolute phase φ. For example, in FIG. 12C, n = 1, and the z candidate point corresponding to the phase φq is L1 (z = L1). The function representing the absolute phase is an nth-order polynomial, as shown in equation (4).
As approximation methods, Lagrangian interpolation, spline interpolation, and the like are conceivable, but adopting a polynomial such as equation (4) is faster to calculate and easier to implement in hardware capable of real-time processing.
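A minimal sketch of the polynomial approximation of equation (4) for one pixel, using synthetic absolute-phase/z calibration data (the numerical values are illustrative only, not taken from the embodiment):

```python
import numpy as np

# Synthetic calibration data for one pixel: known stage positions z_k (mm)
# and the absolute (connected) phase measured at each.  In the apparatus
# these pairs come from steps s21-s25; here a smooth monotonic relation
# merely stands in for them.
z_known = np.linspace(285.0, 315.0, 16)
phase_abs = 0.9 * (315.0 - z_known) + 0.002 * (315.0 - z_known) ** 2

# Eq. (4): approximate z as an nth-order polynomial of the absolute phase
# (least squares fit; here n = 3).
coeffs = np.polyfit(phase_abs, z_known, 3)

def z_from_phase(phi):
    """Measurement-time evaluation: absolute phase -> z coordinate."""
    return float(np.polyval(coeffs, phi))
```

Evaluating a low-order polynomial per pixel is a handful of multiply-adds, which is why this form lends itself to real-time hardware as the text notes.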
The phase φ-z coordinate calibration method according to the present embodiment has the following features compared with methods that obtain the three-dimensional coordinates from the geometric positions.
(A) It is possible to correct not only lens aberration but also local distortion.
(B) Since no geometric calculation is performed, processing can be performed at high speed.
Similarly to the z coordinate, the x and y coordinates can be obtained immediately by polynomial approximation of their correspondence with the z coordinate along the line of sight (ii), as in the following formulas (5) and (6).
(B. Phase connection method)
FIG. 13 illustrates the phase connection method and the calibration method in the z coordinate direction according to the present embodiment. As shown in FIG. 13A, the absolute phase is calculated using a z-direction calibration system consisting of the white ceramic plate (diffusion plate) 10 and the precision stage 12, and the correspondence between the absolute phase and the z coordinate (formula (4)) is obtained. Here, phase connection exploits the fact that the phase of the captured image increases as the ceramic plate is brought closer to the projector and camera from the farthest point of the measurement region.
Hereinafter, this phase connection procedure (steps s21 to s25) will be described.
(S21) The ceramic plate 10 is moved to the farthest point Z0, lattice fringes are projected onto it, and the phase φ0(i, j) is obtained (see the ○ marks in FIG. 13B).
(S22) Next, a step movement in the z direction is performed and the phase φ at the new position is obtained. In the example of FIG. 13, the ceramic plate 10 is moved closer in the z direction, from the farthest point Z0 to Z1 (Z1 = Z0 − Δz), and the phase φ1(i, j) is calculated. The phase calculation is repeated at each z coordinate as the ceramic plate 10 is brought closer.
(S23) When φk(i, j) < φk−1(i, j), phase connection is performed by adding 2π to φk(i, j). The absolute phase obtained by phase connection is indicated by the □ marks in FIG. 13B.
(S24) Steps (s22) and (s23) are repeated up to the nearest point Zm−1 (m − 1 step movements), performing phase connection as necessary.
(S25) Using the m phase data φk(i, j) and z coordinate data Zk (k = 0 to m−1) obtained by the above processing, equation (4) is obtained by the method of least squares. That is, the correspondence between the measured phase and the actual z coordinate is obtained (φ-z calibration).
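The phase connection of steps s22 to s24 can be sketched as follows (a simplified one-pixel illustration; the function name is hypothetical and the wrapped-phase sequence is synthetic):

```python
import numpy as np

def connect_phases(wrapped):
    """(s22-s24) Phase connection: as the plate approaches the projector and
    camera the true phase only increases, so whenever a newly measured
    wrapped phase (plus the 2*pi offsets accumulated so far) is smaller than
    its predecessor, a further 2*pi is added."""
    absolute = [float(wrapped[0])]
    offset = 0.0
    for w in wrapped[1:]:
        if w + offset < absolute[-1]:
            offset += 2 * np.pi  # the phase wrapped past 2*pi at this step
        absolute.append(w + offset)
    return np.array(absolute)
```

The resulting absolute phases, paired with the known stage positions Zk, are exactly the data to which equation (4) is fitted by least squares in step s25 (for example with `np.polyfit`).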
The smaller Δz is, the more measurement points there are and the better the approximation, but calibration takes longer, so Δz is chosen appropriately based on the required accuracy and the allowable processing time. Δz is also chosen so that the phase change per step does not exceed 2π.
(C. Calculation method of z coordinate)
Next, a more accurate calculation method for the z coordinate will be described. The phase of an actually measured lattice pattern lies in the range 0 to 2π, and the absolute phase is unknown. Therefore, by using lattice fringes of different periods, (a) coarse, (b) medium, and (c) fine, as shown in FIG. 14, the z coordinate can be calculated with higher accuracy. Over the measurement range in the z direction (Zmin to Zmax), the absolute phase of the coarse lattice fringes of FIG. 14A changes from 0 to 2π, that of the medium lattice fringes of FIG. 14B from 0 to 2πB, and that of the fine lattice fringes of FIG. 14C from 0 to 2πC (C > B).
Hereinafter, the z coordinate calculation procedure (steps s31 to s35) will be described with further reference to FIG.
(S31) First, as shown in FIG. 15A, the phase-distance relationship calculation unit 350 of FIG. 1 obtains the z coordinate Za(i, j) from the phase φa(i, j) of the captured image of the coarse lattice fringes of FIG. 14A. Za(i, j) is calculated using the approximate expression (4).
(S32) Similarly, as shown in FIG. 15B, Zb(i, j) is obtained from the phase φb(i, j) of the captured image of the medium lattice fringes of FIG. 14B. Here there are B candidates for Zb(i, j); phase connection is performed by adding multiples of 2π to φb(i, j) (φb(i, j) = 2πk + φb(i, j), k = 0 to B−1), and the corresponding Zb(i, j) are obtained in order.
(S33) Next, the phase-distance relationship calculation unit 350 of FIG. 1 (another calculation unit may perform this calculation) computes the difference ε = |Zb(i, j) − Za(i, j)| for each candidate, and the Zb(i, j) calculated from the k = b that minimizes it is taken as the approximate value (φb(i, j) = 2πb + φb(i, j)).
(S34) Further, as shown in FIG. 15C, Zc(i, j) is obtained from the phase φc(i, j) of the captured image of the fine lattice fringes of FIG. 14C. Since there are C candidates for Zc(i, j), phase connection is performed by adding multiples of 2π to φc(i, j) (φc(i, j) = 2πk + φc(i, j), k = 0 to C−1), and the corresponding Zc(i, j) are obtained in order.
(S35) The phase-distance relationship calculation unit 350 obtains the difference ε = |Zc(i, j) − Zb(i, j)|, and the Zc(i, j) obtained from the c that minimizes it is taken as the measured value (φc(i, j) = 2πc + φc(i, j)).
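Steps s32 and s33 (and likewise s34 and s35) amount to choosing, among the candidate z values of the finer grating, the one closest to the coarser estimate. A sketch under assumed, illustrative grating parameters:

```python
import numpy as np

def refine_z(z_coarse, phi_wrapped, z_of_phase, periods):
    """(s32, s33) The finer grating wraps `periods` times over the measuring
    range, so a wrapped phase phi has `periods` candidate absolute phases
    2*pi*k + phi and hence `periods` candidate z values.  The candidate
    closest to the coarser estimate z_coarse is selected."""
    candidates = [z_of_phase(2 * np.pi * k + phi_wrapped)
                  for k in range(periods)]
    best = int(np.argmin([abs(z - z_coarse) for z in candidates]))
    return candidates[best]
```

The coarse grating thus only needs to be accurate to within half a period of the finer grating; the fine phase then supplies the precision, which is the point of combining coarse, medium, and fine fringes.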
Since the measured value obtained with the fine lattice fringes has the highest accuracy, the Zc(i, j) obtained from the phase of the fine lattice fringes is used as the calculated value of the z coordinate. The phase-distance relationship calculation unit 350 obtains the relationship between the determined z coordinate and the phase, and this relationship is stored in the storage unit 390 of FIG. 1.
The calibration for the phase φ and the z coordinate (z direction) is completed by the method as described above.
(4. x and y direction calibration)
Next, the calibration between the z coordinate and the x coordinate, and between the z coordinate and the y coordinate, will be described. In brief, a grid lattice (reference grid) whose x and y coordinates, orthogonal to the z coordinate direction, are known is first placed at a distance z, and the position of each pixel of the captured image relative to the reference grid positions on the image is obtained by linear interpolation. Next, based on this relative position, the x and y coordinates of each pixel at the distance z are calculated. These processes are executed sequentially while changing the distance z, and the relationships of formulas (5) and (6) are obtained from the z coordinates and the calculated x, y coordinates by the least squares method.
Specifically, in this calibration, the ceramic plate 10 shown in FIG. 13A is replaced with a reference grid plate (grid lattice) 14 having a reference grid as shown in the figure, and the x, y-direction calibration system is thereby constructed. Then, according to the processing procedure described below (steps s41 to s45), the relational expressions between the z coordinate and the x, y coordinates (formulas (5) and (6) above) are obtained for each pixel q(iq, jq) of the captured image.
(S41) First, the grid lattice is moved to the farthest point Z0 and imaged, and the center position of each grid is obtained. The center positions of the grid coordinates are the center coordinates G(s, t), G(s+1, t), G(s, t+1), G(s+1, t+1), and so on, of the points indicated by ● in the figure.
(S42) Next, from the pixel q(iq, jq) of the captured image and its position relative to the four grids surrounding it (G(s, t), G(s+1, t), G(s, t+1), G(s+1, t+1)), the three-dimensional coordinates x0(iq, jq), y0(iq, jq) of q(iq, jq) are obtained by linear interpolation from the x and y coordinates of the grids (see FIG. 16B).
(S43) Further, the grid lattice 14 is moved stepwise in the z direction (Z1 = Z0 − Δz), and at each z position the three-dimensional coordinates x1(iq, jq), y1(iq, jq) are obtained.
(S44) By repeating steps (s42) and (s43) up to the nearest point Zm−1 (m − 1 step movements), m pieces of x, y coordinate data xk(iq, jq), yk(iq, jq) are obtained for the z coordinate data Zk (k = 0 to m−1).
(S45) Using the m pieces of x, y coordinate data xk(iq, jq), yk(iq, jq) and the z coordinate data Zk (k = 0 to m−1), formulas (5) and (6) above are obtained by the least squares method. The obtained relational expressions (5) and (6) are stored as coefficients, for example in the storage unit 390 of FIG. 1.
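Step s45 for a single pixel can be sketched as follows (the coordinate data are synthetic; in the apparatus they come from steps s41 to s44):

```python
import numpy as np

# Illustrative calibration data for one pixel q: at each stage position Z_k
# the pixel's interpolated world x, y coordinates on the reference grid.
# A pixel's line of sight is nearly straight, so x and y vary almost
# linearly with z; the values below simply encode such a line.
Z = np.linspace(285.0, 315.0, 11)
x = -2.0 + 0.01 * (Z - 300.0)
y = 5.0 - 0.015 * (Z - 300.0)

# (s45) Formulas (5) and (6): least-squares polynomials x(z) and y(z)
# for this pixel, stored as coefficient arrays.
cx = np.polyfit(Z, x, 2)
cy = np.polyfit(Z, y, 2)
```

At measurement time, once z is known from the phase, `np.polyval(cx, z)` and `np.polyval(cy, z)` yield the remaining two coordinates directly, with no geometric computation.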
By the method as described above, the relationship between the x coordinate and the y coordinate with respect to the z coordinate can be obtained, and the calibration can be completed.
[Concrete example]
Next, a specific example of a three-dimensional shape measurement method using the above-described calibration and measurement principles will be further described with reference to the drawings.
(1. System configuration)
The basic system is as shown in FIG. 1 described above. FIG. 17 shows the system configuration of a three-dimensional shape measuring apparatus 301 according to a specific example. This system employs a data projector 314 as the projection unit 310 and a camera (CCD camera) 316 as the imaging unit 312 of FIG. 1, and a computer 322 equipped with a video board and an image input board as the measurement processing unit 320 of FIG. 1. The computer 322 is built around a CPU and the like; it includes a calculation unit 332 with the functions of the calculation unit 330 in FIG. 1, and a memory 343, corresponding to the storage unit 390 in FIG. 1, that stores the data (for example, calibration data) necessary for processing.
The data projector 314 and the camera 316 are arranged as shown in FIG. 17, and the measurement area is approximately z = 300 ± 15 mm and x, y = ± 15 mm. The resolution in the x and y directions is about 50 μm (= ± 15 mm / 600 pixels).
Here, the measurement area was set to 600 × 600 pixels at the center of the camera. As described with reference to FIG. 6, in the present embodiment the three-dimensional coordinates are calculated along the camera lines of sight (ii), so the measurement region has a rhombus shape, as shown in FIG. 17.
In this specific example, the lens position of the projector is shifted in order to set the lattice fringe projection distance to 300 mm.
As described later, one, ten, and fifty lattice fringes were used for a projection width of 55 mm (at z = 300 mm) (see FIG. 18). The lattice patterns are drawn on a screen (not shown) of the computer 322 and output to the projector 314 via the video board.
(2.z calibration)
An example of the z-direction calibration of the lattice patterns is shown in FIG. The calibration interval Δz was set to 5 mm for the coarse lattice fringes and 1 mm for the medium and fine lattice fringes. The calibration range was extended ±5 mm beyond the measurement region, which almost eliminated the distortion of the approximating function at its ends. In this calibration example, the center pixel of the captured image is approximated.
Next, as described with reference to FIG. 13, the ceramic plate was moved in the z direction on the precision stage, and the difference between the z coordinate measurement value of the ceramic plate and the stage position was compared. FIG. 19 shows the comparison results. The positioning accuracy of the stage used was 7 μm.
The difference between the measured z coordinate of the ceramic plate and the stage position was at most 25 μm in average value (at z = 312.5 mm) and 28 μm in standard deviation (at z = 315 mm). The maximum differences over all measured coordinates (600 × 600 pixels) at each measurement position are shown as max. and min. The largest difference was 157 μm, showing that practical accuracy was achieved.
(3. zx, zy calibration)
Next, an example of an algorithm used for zx, zy calibration will be described. Of course, the employable algorithm is not limited to the following specific examples.
(1) Calculation of grid position (s51): Using the commercially available image processing library MIL, the grid center coordinates G[s][t], s = 0 to 50, t = 0 to 50, were obtained from the grid image. As the reference grid 14, for example, a grid distortion chart (57983I manufactured by Edmund Optics) can be used, on which circular dots (grids) of φ0.5 mm are precisely drawn at 1 mm intervals, 51 × 51 (vertical × horizontal) = 2,601 in total.
(2) Calculation of x, y coordinates (s61): For a coordinate calculation pixel (target pixel) q of the captured image, the coordinates G[s′][t′] of the grid closest to this pixel are obtained.
(S62) As shown in FIG. 20A, the cell is divided into patterns (I) to (IV) according to the position of the coordinate calculation pixel q relative to the four grids surrounding it.
(S63) After the pattern division, the x coordinate of pixel q is calculated within the pattern (region) to which it belongs (see FIG. 20B). The calculation is linear interpolation from the positions of the four surrounding grids, as described above.
The y coordinate of the coordinate calculation pixel q was also calculated by the same method.
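The linear interpolation of steps s62 and s63 can be sketched as follows, under the simplifying assumption that the four surrounding grid centers form an axis-aligned cell in pixel coordinates (the real grid cells may be slightly skewed, but the interpolation idea is the same; the function name is hypothetical):

```python
def interp_pixel_xy(q, cell):
    """(s62, s63) Linear interpolation of a pixel's world x, y coordinates
    from the grid cell surrounding it.  `cell` gives two opposite corners of
    an axis-aligned cell as (pixel_i, pixel_j, world_x, world_y) tuples;
    `q` is the target pixel (pixel_i, pixel_j)."""
    (i0, j0, x0, y0), (i1, j1, x1, y1) = cell
    u = (q[0] - i0) / (i1 - i0)  # fractional position across the cell
    v = (q[1] - j0) / (j1 - j0)
    return x0 + u * (x1 - x0), y0 + v * (y1 - y0)
```

Because the grid spacing is known (1 mm on the chart), this converts a sub-pixel image position directly into world x, y coordinates at the current stage distance z.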
(4. Actual measurement results)
Next, the results of actually measuring a coin with fine surface irregularities after the above calibration will be described with reference to FIG. 21.
FIGS. 21A to 21D show the captured images of the fine lattice fringes; the amplitude image obtained from them (a luminance image from which the illumination light has been removed) and the phase image are shown in FIGS. 21E and 21F; and the shape measurement result is shown in FIG. 21G.
From the shape measurement result of FIG. 21G it can be seen that the coin shape is actually measured. Although it is hard to discern in the rendering of FIG. 21G, the measured coin height in fact differs between the upper and lower sides, which is caused by a slight inclination of the board to which the coin is attached. Conversely, this shows that the inclination of the surface of the object to be measured can be measured accurately.
In addition, the black part of the measurement image shown in FIG.21 (f) was not measured for the following reasons.
・The luminance value is saturated in one of the images (a) to (d), making calculation impossible.
・ The brightness of the amplitude image is small and the error is large.
For example, the black-brown board portion to which the coin is attached has low luminance and large phase noise (see FIGS. 21E and 21F).
・Metal such as a coin reflects specularly, so as shown in FIG. 21E it appears too bright or too dark depending on the surface angle; the luminance change is very large, the imaging dynamic range of the camera becomes insufficient, and unmeasurable areas occur.
FIG. 22 shows examples in which the same coin was measured at different z positions. As described in section (1. System configuration) above, the measurement region is a rhombus, so the position of the coin shifts from left to right, but the shape can still be measured in detail. In FIG. 22A (z = 290 mm) and FIG. 22B (z = 310 mm), apart from the left-right shift in position, the measured surface unevenness of the coin is almost identical. In both cases, the slight tilt of the coin surface (the slight tilt of the board), specifically the fact that the upper side of the board is tilted away from the lower side in the z direction, is measured almost equally. Thus, the configuration disclosed in this specific example can accurately measure the three-dimensional shape of the measurement object.
[Summary of specific examples]
(1) A three-dimensional shape measuring instrument based on the phase shift method could be constructed using a commercially available projector and a TV camera.
(2) High accuracy could be achieved by incorporating the following two methods.
(a) Correction of the nonlinear error between the projector's projection luminance and the imaging luminance. (b) Removal of lens distortion by replacing the relationship between the phase on each pixel's line of sight and the three-dimensional coordinates with a function. (3) As a result of measuring the position of a ceramic plate moved on a precision stage, the difference in z from the stage position was as small as 25 μm in average value and at most 28 μm in standard deviation over a distance range of 300 ± 15 mm.
10: reference flat plate, 14: reference grid flat plate, 12: stage, 16: object to be measured, 300: three-dimensional shape measuring apparatus, 310: projection unit, 312: imaging unit, 320: measurement processing unit, 330: calculation unit, 340: phase calculation unit, 350: phase-distance relationship calculation unit, 360: grid-pixel coordinate relationship calculation unit, 370: distance-two-dimensional-coordinate relationship calculation unit, 380: three-dimensional coordinate calculation unit, 390: storage unit.
Claims (9)
1. A three-dimensional shape measuring method in which a plurality of lattice fringes having different phases are projected onto an object to be measured and a three-dimensional shape, expressed by a distance-direction coordinate to the object and two-dimensional coordinates orthogonal to the distance-direction coordinate, is obtained from the captured lattice-fringe images by the phase shift method, wherein:
a reference flat plate is arranged at a position whose distance from the projection unit and the imaging unit is known, a plurality of lattice fringes with different phases are projected onto the reference flat plate, each phase of a plurality of pixels is calculated from the obtained captured images of the lattice fringes, and a phase-distance relationship is calculated from the calculated phases and the known distance;
a reference grid plate having a reference grid whose two-dimensional coordinates on a plane orthogonal to the distance direction are known is arranged at a position whose distance from the projection unit and the imaging unit is known, each two-dimensional coordinate of a plurality of pixels of the captured image is calculated based on the reference grid, and a distance-two-dimensional-coordinate relationship is calculated from the calculated two-dimensional coordinates of the plurality of pixels and the known distance;
and, during actual measurement:
the object to be measured is arranged at a predetermined distance from the projection unit and the imaging unit;
a plurality of lattice fringes with different phases are projected onto the object to be measured, the phase of each pixel is calculated from the obtained lattice-fringe images, and the distance for the corresponding pixel is calculated based on the phase-distance relationship;
and, based on the distance-two-dimensional-coordinate relationship, the two-dimensional coordinates of the pixel are calculated from the calculated distance of the corresponding pixel, thereby obtaining the three-dimensional shape of the object to be measured.
2. The three-dimensional shape measuring method according to claim 1, wherein, when calculating the phase-distance relationship, a lattice fringe whose period gives a phase change of 2π or less and a lattice fringe whose period gives a phase change larger than 2π are projected onto the reference flat plate, and, among the plurality of distance-coordinate candidates calculated from the phase of the captured image obtained when projecting the lattice fringe whose phase change exceeds 2π, the candidate closest to the distance coordinate calculated from the phase of the captured image obtained when projecting the lattice fringe whose phase change is 2π or less is taken as the calculated distance.
3. The three-dimensional shape measuring method according to claim 1 or 2, wherein, in calculating the phase-distance relationship, the reference flat plate is set at a plurality of different distances and the phase of each pixel is calculated at each distance, and the interval between the plurality of set distances is chosen so that the difference between the correspondingly calculated phases is less than 2π.
4. The three-dimensional shape measuring method according to claim 3, wherein, in calculating the distance-two-dimensional-coordinate relationship, the reference grid plate is set at a plurality of different distances and the two-dimensional coordinates of each pixel are calculated at each distance, and the interval between the plurality of set distances is chosen so that the difference between the phases corresponding to the respective distances is less than 2π.
5. A three-dimensional shape measuring apparatus in which a plurality of lattice fringes having different phases are projected onto an object to be measured and a three-dimensional shape, expressed by a distance-direction coordinate to the object and two-dimensional coordinates orthogonal to the distance-direction coordinate, is obtained from the captured lattice-fringe images by the phase shift method, the apparatus comprising:
a stage for placing the object to be measured at a predetermined position;
a projection unit that projects a plurality of lattice fringes with different phases onto the object arranged on the stage;
an imaging unit for imaging the object placed on the stage; and
a measurement processing unit for obtaining the three-dimensional shape of the object to be measured based on the captured images,
wherein the measurement processing unit includes a phase calculation unit, a phase-distance relationship calculation unit, a pixel two-dimensional-coordinate calculation unit, a distance-two-dimensional-coordinate relationship calculation unit, and a three-dimensional coordinate calculation unit;
the phase calculation unit calculates each phase of a plurality of pixels from the captured images of the lattice fringes obtained when a reference flat plate is arranged at a position whose distance from the projection unit and the imaging unit is known and a plurality of lattice fringes having different phases are projected onto the reference flat plate;
the phase-distance relationship calculation unit calculates a phase-distance relationship from the calculated phases and the known distance;
the pixel two-dimensional-coordinate calculation unit calculates each two-dimensional coordinate of a plurality of pixels of the captured image, based on the two-dimensional coordinates of a reference grid, when a reference grid plate having the reference grid, whose two-dimensional coordinates on a plane orthogonal to the distance direction are known, is arranged at a position whose distance from the projection unit and the imaging unit is known;
the distance-two-dimensional-coordinate relationship calculation unit calculates a distance-two-dimensional-coordinate relationship from the calculated two-dimensional coordinates of the plurality of pixels and the known distance;
during actual measurement, with the object to be measured arranged at a predetermined distance from the projection unit and the imaging unit and the plurality of lattice fringes having different phases projected onto it, the phase calculation unit calculates the phase of each pixel from the obtained lattice-fringe images; and
the three-dimensional coordinate calculation unit calculates the distance for the corresponding pixel based on the phase-distance relationship and, based on the distance-two-dimensional-coordinate relationship, calculates the two-dimensional coordinates of the pixel from the calculated distance of the corresponding pixel, thereby obtaining the three-dimensional shape of the object to be measured.
6. The three-dimensional shape measuring apparatus according to claim 5, wherein the projection unit can project, in the projection region, a lattice fringe whose period gives a phase change of 2π or less and a lattice fringe whose period gives a phase change larger than 2π; when calculating the phase-distance relationship, the projection unit projects both lattice fringes onto the reference flat plate; and the phase-distance relationship calculation unit takes, among the plurality of distance-coordinate candidates calculated from the phase of the captured image obtained when projecting the lattice fringe whose phase change exceeds 2π, the candidate closest to the distance coordinate calculated from the phase of the captured image obtained when projecting the lattice fringe whose phase change is 2π or less as the distance corresponding to the phase of the captured image.
7. The three-dimensional shape measuring apparatus according to claim 5 or 6, wherein the stage carries the reference flat plate when calculating the phase-distance relationship and carries the reference grid plate when calculating the distance-two-dimensional-coordinate relationship; the stage can set the reference flat plate and the reference grid plate at a plurality of different distance positions with respect to the projection unit and the imaging unit; and the interval between the plurality of distances set by the stage is chosen so that the difference between the correspondingly calculated phases is less than 2π.
8. The three-dimensional shape measuring apparatus according to any one of claims 5 to 7, wherein the plurality of lattice fringes having different phases projected onto the object to be measured are sinusoidal lattice fringes.
9. The three-dimensional shape measuring method according to claim 1, wherein the plurality of lattice fringes having different phases projected onto the object to be measured are sinusoidal lattice fringes.
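The coarse/fine disambiguation described in claims 2 and 6 — choosing, among the distance candidates implied by a fringe whose phase change exceeds 2π, the one closest to the unambiguous coarse estimate — can be sketched as follows. This is an illustrative model only: the linear phase-distance relation `phi = k_fine * z` and all numeric values (5 mm fine period, 285–315 mm range) are assumptions, not figures from the patent:

```python
import numpy as np

def fine_candidates(phi_wrapped, k_fine, z_range):
    """All distances z in z_range with k_fine * z == phi_wrapped (mod 2*pi),
    assuming an illustrative linear phase-distance model phi = k_fine * z.
    One candidate exists per 2*pi branch of the fine fringe."""
    lo, hi = z_range
    m_lo = int(np.floor((k_fine * lo - phi_wrapped) / (2 * np.pi)))
    m_hi = int(np.ceil((k_fine * hi - phi_wrapped) / (2 * np.pi)))
    zs = [(phi_wrapped + 2 * np.pi * m) / k_fine for m in range(m_lo, m_hi + 1)]
    return [z for z in zs if lo <= z <= hi]

def disambiguate(candidates, z_coarse):
    """Pick the fine-fringe candidate closest to the coarse estimate, which
    comes from a fringe whose phase change stays within 2*pi over the range."""
    z = np.asarray(candidates, dtype=float)
    return float(z[np.argmin(np.abs(z - z_coarse))])

# Fine fringe: 5 mm period over a 285-315 mm range (phase change >> 2*pi).
k_fine = 2 * np.pi / 5.0
z_true = 303.0
phi = float(np.angle(np.exp(1j * k_fine * z_true)))     # wrapped fine phase
cands = fine_candidates(phi, k_fine, (285.0, 315.0))    # 288, 293, ..., 313
print(round(disambiguate(cands, z_coarse=302.0), 6))    # -> 303.0
```

The coarse estimate only needs to be accurate to within half a fine period (here ±2.5 mm) for the correct branch to be selected.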
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

JP2009048662A JP5375201B2 (en) 2009-03-02 2009-03-02 3D shape measuring method and 3D shape measuring apparatus
Publications (2)
Publication Number  Publication Date 

JP2010203867A JP2010203867A (en)  20100916 
JP5375201B2 (en) 2013-12-25
Family
ID=42965499
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

JP2009048662A Expired - Fee Related JP5375201B2 (en) 2009-03-02 2009-03-02 3D shape measuring method and 3D shape measuring apparatus
Country Status (1)
Country  Link 

JP (1)  JP5375201B2 (en) 
Cited By (1)
Publication number  Priority date  Publication date  Assignee  Title 

EP3381350A1 (en) * 2017-03-31 2018-10-03 Nidek Co., Ltd. Subjective optometry apparatus and subjective optometry program
Families Citing this family (9)
Publication number  Priority date  Publication date  Assignee  Title 

JP2012202771A (en) * 2011-03-24 2012-10-22 Fujitsu Ltd Three-dimensional surface shape calculation method of measuring target and three-dimensional surface shape measuring apparatus
CN102628676B (en) * 2012-01-19 2014-05-07 Southeast University Adaptive window Fourier phase extraction method in optical three-dimensional measurement
JP6041513B2 (en) * 2012-04-03 2016-12-07 Canon Inc. Image processing apparatus, image processing method, and program
JP6299150B2 (en) * 2013-10-31 2018-03-28 Seiko Epson Corp. Control device, robot, control system, control method, and control program
JP2015099050A (en) * 2013-11-18 2015-05-28 Seiko Epson Corp. Calibration method and shape measuring device
JP6602867B2 (en) * 2014-12-22 2019-11-06 CyberOptics Corporation Method of updating the calibration of a 3D measurement system
CN104729429B (en) * 2015-03-05 2017-06-30 Shenzhen University Calibration method for a three-dimensional shape measurement system with telecentric imaging
JP2017126267A (en) * 2016-01-15 2017-07-20 PFU Ltd Image processing system, image processing method and computer program
JP2019105458A (en) * 2017-12-08 2019-06-27 Hitachi High-Tech Fine Systems Corp. Defect inspection device and defect inspection method
Family Cites Families (3)
Publication number  Priority date  Publication date  Assignee  Title 

JP2913021B2 (en) * 1996-09-24 1999-06-28 President, Wakayama University Shape measuring method and device
JPH11166818A (en) * 1997-12-04 1999-06-22 Suzuki Motor Corp Calibrating method and device for three-dimensional shape measuring device
JP3417377B2 (en) * 1999-04-30 2003-06-16 NEC Corp Three-dimensional shape measuring method and apparatus, and recording medium

2009
2009-03-02 JP JP2009048662A patent/JP5375201B2/en not_active Expired - Fee Related
Also Published As
Publication number  Publication date 

JP2010203867A (en) 2010-09-16
Similar Documents
Publication  Publication Date  Title 

US10563978B2 (en) Apparatus and method for measuring a three-dimensional shape
US10677591B2 (en) System and method for measuring three-dimensional surface features
TWI480832B (en) Reference image techniques for three-dimensional sensing
US9322643B2 (en) Apparatus and method for 3D surface measurement
JP2015057612A (en) Device and method for performing non-contact measurement
EP1777487B1 (en) Three-dimensional shape measuring apparatus, program and three-dimensional shape measuring method
KR101257188B1 (en) Three-dimensional shape measuring device, three-dimensional shape measuring method, and computer-readable recording medium for three-dimensional shape measuring program
JP4112858B2 (en) Method and system for measuring unevenness of an object
EP1596158B1 (en) Three-dimensional shape input device
US20150015701A1 (en) Triangulation scanner having motorized elements
JP5395507B2 (en) Three-dimensional shape measuring apparatus, three-dimensional shape measuring method, and computer program
TWI396823B (en) Three-dimensional measuring device
US10812694B2 (en) Real-time inspection guidance of triangulation scanner
KR100615576B1 (en) Three-dimensional image measuring apparatus
KR101461068B1 (en) Three-dimensional measurement apparatus, three-dimensional measurement method, and storage medium
TWI460394B (en) Three-dimensional image measuring apparatus
Wang et al. Three-dimensional shape measurement with a fast and accurate approach
EP2475954B1 (en) Non-contact object inspection
KR101121691B1 (en) Three-dimensional measurement device
US8199335B2 (en) Three-dimensional shape measuring apparatus, three-dimensional shape measuring method, three-dimensional shape measuring program, and recording medium
US7548324B2 (en) Three-dimensional shape measurement apparatus and method for eliminating 2π ambiguity of moire principle and omitting phase shifting means
US6611344B1 (en) Apparatus and method to measure three-dimensional data
CN100338434C (en) Three-dimensional image measuring apparatus
JP5390900B2 (en) Method and apparatus for determining 3D coordinates of an object
Xu et al. Phase error compensation for three-dimensional shape measurement with projector defocusing
Legal Events
Date  Code  Title  Description 

A621  Written request for application examination 
Free format text: JAPANESE INTERMEDIATE CODE: A621 Effective date: 2012-01-11

A977  Report on retrieval 
Free format text: JAPANESE INTERMEDIATE CODE: A971007 Effective date: 2012-11-26

A131  Notification of reasons for refusal 
Free format text: JAPANESE INTERMEDIATE CODE: A131 Effective date: 2013-01-08

A521  Written amendment 
Free format text: JAPANESE INTERMEDIATE CODE: A523 Effective date: 2013-03-01

A131  Notification of reasons for refusal 
Free format text: JAPANESE INTERMEDIATE CODE: A131 Effective date: 2013-05-28

A521  Written amendment 
Free format text: JAPANESE INTERMEDIATE CODE: A523 Effective date: 2013-06-12

TRDD  Decision of grant or rejection written  
A01  Written decision to grant a patent or to grant a registration (utility model) 
Free format text: JAPANESE INTERMEDIATE CODE: A01 Effective date: 2013-08-27

A61  First payment of annual fees (during grant procedure) 
Free format text: JAPANESE INTERMEDIATE CODE: A61 Effective date: 2013-09-09

LAPS  Cancellation because of no payment of annual fees 