CN112525101B - Laser triangulation 3D imaging device based on light spot recognition - Google Patents


Info

Publication number
CN112525101B
Authority
CN
China
Prior art keywords
camera
laser
imaging
Prior art date
Legal status: Active
Application number
CN202011400150.3A
Other languages
Chinese (zh)
Other versions
CN112525101A (en)
Inventor
郑恭明 (Zheng Gongming)
李耀兴 (Li Yaoxing)
Current Assignee
Yangtze University
Original Assignee
Yangtze University
Priority date
Filing date
Publication date
Application filed by Yangtze University
Priority to CN202011400150.3A
Publication of CN112525101A
Application granted
Publication of CN112525101B
Legal status: Active


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2504 Calibration devices
    • G01B11/2518 Projection by scanning of the object

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The invention relates to a laser triangulation 3D imaging device based on light spot recognition, belonging to the technical field of image recognition and imaging. The device comprises a fixing frame and operates in five steps. The fixing frame consists of five support columns, three mounting blocks, a circular top plate and a circular base; the support columns are joined into a whole by the mounting blocks, with the circular top plate mounted at their upper ends and the circular base at their lower ends. A fixing groove is formed in the center of the circular base, and adjustable telescopic brackets are hinged to its bottom. A stepping motor and an FPGA processing circuit board are arranged on the uppermost mounting block; a lead screw driven by the stepping motor passes through the middle and lowermost mounting blocks and extends below the circular base, and a camera and a laser emitter are mounted at its lower end. The camera, the laser emitter and the stepping motor are connected to the FPGA processing circuit board by leads. The device is simple in structure and operation, has strong anti-interference capability, greatly reduces the influence of objective factors on depth-of-field calculation, is convenient to adjust in height and width, and is suitable for inspecting non-open cavities.

Description

Laser triangulation 3D imaging device based on light spot recognition
Technical Field
The invention relates to a laser triangulation 3D imaging device based on light spot recognition, and belongs to the technical field of image recognition and imaging.
Background
The structured light method projects an artificial light source onto a measured object through a specific coded pattern. Because the irregular surface of the object has varying depth, the coded pattern reflected onto the image sensor is distorted, and the depth information of the observed object can be recovered by comparing the distorted pattern received by the sensor with the original one. However, existing devices that obtain a 3D image by structured light must encode specific patterns to make demodulation robust against interference, and objective factors such as natural light and the texture of the observed object strongly affect depth-of-field calculation; as a result, such devices are complex in structure and operation and are particularly unsuited to inspection imaging of non-open cavities. Laser triangulation, by contrast, is a scanning-based imaging method that measures depth of field from the geometric relation among the light source, the observed object and the image sensor; its drawback is that it covers only a small distance range and is strongly affected by ambient light.
Disclosure of Invention
The aim of the invention is to provide a laser triangulation 3D imaging device based on spot recognition that needs no coded pattern, offers good monochromaticity and directivity with strong anti-interference capability, greatly reduces the influence of natural light and object texture on depth-of-field calculation, is simple in structure and operation, is easy to adjust in height and width, is particularly suitable for inspection imaging of non-open cavities, and is low in cost.
The invention realizes the purpose through the following technical scheme:
a laser triangulation method 3D imaging device based on spot recognition comprises a fixing frame, a stepping motor, a lead screw, a camera, a laser emitter and an FPGA processing circuit board; the method is characterized in that: the mount comprises five support columns, three installation pieces, circular roof and circular base, the support column links into an integrated entity through the installation piece dress admittedly, and circular roof is equipped with to the upper end of support column, and circular base is equipped with to the lower extreme of support column, circular base open at the center has the fixed slot, and there is adjustable telescopic bracket circular base bottom through the hinge mounting, be provided with step motor and FPGA processing circuit board on the installation piece of top, step motor output shaft dress has the lead screw, the lead screw extends circular base lower extreme through the fixed slot of an installation piece in the middle of, installation piece and circular base the next, camera and laser emitter are installed to the lower extreme of lead screw, camera, laser emitter and step motor pass through the wire and are connected with FPGA processing circuit board.
The lead screw adjusts the depth by 2 mm to 2 cm per step, and the adjustment angle θ is 5° to 20° per step.
The camera and the laser emitter are mounted at 90° to each other at the lower end of the lead screw; the distance between the laser emitter and the center of the camera is 5 mm to 3 cm.
The camera uses either a CCD or a CMOS image sensor.
The adjustable range of the telescopic brackets is 10 cm to 80 cm in height and 20 cm to 100 cm in width.
The FPGA processing circuit board includes an automatic calibration function: for incomplete spots imaged near the camera's blind area, it solves the spot center point, classifies the results according to the distance between the camera and the laser emitter, and records, through repeated actual measurements, the correspondence between the center points of incomplete spots and the actual depth of field, forming a set of automatic calibration index tables.
An imaging method of a laser triangulation 3D imaging device based on spot recognition is characterized by comprising the following steps:
step one, adjusting the adjustable telescopic brackets so that the device and the observed object are in a suitable spatial position; then adjusting the distance b between the laser emitter and the camera so that the point laser emitted by the laser emitter irradiates the surface of the observed object and forms a light spot; the spot and the observed object within the current field angle 2α are imaged together on the camera, converting the optical signal into an electrical signal;
step two, the camera transmits one frame of digitized image data to the FPGA processing circuit board; the frame records the current height and angle along with the image data;
step three, the FPGA processing circuit board processes the spot image of this frame: noise is first removed with a digital filtering algorithm, followed by edge detection, binarization and spot recognition;
step four, solving the spot center point to obtain the coordinates (X, Y) of the spot center, which give the position of the spot center; the center coordinate Y corresponds to the nth row counted from the top of the imaging photo. Knowing the distance b from the laser emitter to the camera and the camera field angle 2α, the depth of field d to be measured is calculated by:

d = b / tan( α · (Y - n) / Y )
where n is the row of the spot center coordinate Y counted from the top of the imaging photo downwards, and Y is the midpoint of the Y axis of the camera image;
the depths of field d of the 360°/θ test points measured in each full rotation of the camera are integrated into a data set and sent to the upper computer; the stepping motor then drives the lead screw to adjust the depth of the camera and laser emitter, and another full rotation collects the depths of field d of 360°/θ test points, forming another data set that is sent to the upper computer;
step five, repeating step four until acquisition is finished; the upper computer finally fits a structural 3D image from the data sets.
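As a minimal illustration of the acquisition in steps four and five, the per-revolution loop can be sketched in Python; `measure_depth` below is a hypothetical stand-in for the camera/laser/FPGA measurement chain, not part of the patent.

```python
def scan_revolution(measure_depth, theta_deg):
    """Collect one data set: depth-of-field samples at 360/theta
    rotation angles during a single full revolution."""
    n_points = int(360 / theta_deg)
    # one depth-of-field measurement per adjustment angle theta
    return [(i * theta_deg, measure_depth(i * theta_deg))
            for i in range(n_points)]

# Synthetic example: a cylindrical cavity whose wall is 50 mm away
# at every rotation angle, scanned with theta = 10 degrees.
samples = scan_revolution(lambda angle: 50.0, theta_deg=10)
# 36 (angle, depth) pairs per revolution
```

After each revolution, the lead screw would step to the next height and the loop would run again, yielding one such data set per screw position.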
Compared with the prior art, the invention has the beneficial effects that:
the laser triangulation 3D imaging device based on the facula identification enables a fixed frame and an observed object to be in a proper spatial position through an adjustable telescopic support arranged on a circular base, enables a laser emitter and a camera which are arranged at 90 degrees to complete laser scanning and shooting of the observed object at an optimal angle and distance through an FPGA processing circuit board, a stepping motor and a lead screw by adopting laser with good monochromaticity, directivity and strong anti-jamming capability as an active light source, realizes a series of analysis processing of digital filtering algorithm, noise elimination, binaryzation and facula identification on an image facula through the FPGA processing circuit board, obtains a center point coordinates of the image facula, identifies incomplete facula near a field angle blind area of the camera, obtains the depth of field exceeding a theoretical value from the corresponding depth of field measured, and takes the characteristic that the X-axis coordinate corresponding to a standard facula is unchanged as a basis, reflection light spots generated due to the influence of the environment and the material of the surface of an observation object are eliminated, finally, data streams obtained at different heights and different angles are integrated into a structural 3D image in a fitting mode, specific patterns do not need to be coded, the anti-interference capability is high, the structure and operation of the 3D imaging device are greatly simplified, and the imaging device is particularly suitable for shooting and imaging in non-open cavities. The structure is simple, the cost is low, and the method has high application value in the field of space structure 3D imaging. 
The device thus addresses the problems of existing structured light 3D imaging equipment: complex structure and operation, poor demodulation anti-interference performance, strong influence of natural light and object texture on depth-of-field calculation, and unsuitability for inspection imaging of non-open cavities.
Drawings
FIG. 1 is a schematic overall structure diagram of a laser triangulation 3D imaging device based on spot identification;
FIG. 2 is a schematic diagram of the working principle of a laser triangulation 3D imaging device based on spot identification;
FIG. 3 is a schematic structural view of a circular base;
FIG. 4 is a schematic diagram of depth of field measurement of a laser triangulation 3D imaging device based on spot recognition;
FIG. 5 is a schematic perspective view of a laser triangulation 3D imaging device based on spot recognition;
FIG. 6 is a diagram of the effect of a processed light spot image of a laser triangulation 3D imaging device based on light spot identification;
FIG. 7 is a schematic view of an incomplete light spot near a blind field angle region;
FIG. 8 is a schematic view of a reflected light spot formed due to the influence of the environment and the material of the surface of an observation object;
FIG. 9 is a 3D imaging effect diagram of a cavity structure of a laser triangulation 3D imaging device based on light spot identification;
fig. 10 is a 3D imaging effect diagram of a cup structure of a laser triangulation 3D imaging device based on spot recognition.
In the figures: 1. fixing frame; 2. stepping motor; 3. FPGA processing circuit board; 4. lead screw; 5. camera; 6. laser emitter; 100. support column; 101. mounting block; 102. circular top plate; 103. circular base; 104. fixing groove; 105. adjustable telescopic bracket.
Detailed Description
The embodiments of the laser triangulation 3D imaging device based on spot recognition are described in further detail below with reference to the accompanying drawings:
(See figs. 1, 3 and 5.) The laser triangulation 3D imaging device based on spot recognition comprises a fixing frame 1, a stepping motor 2, an FPGA processing circuit board 3, a lead screw 4, a camera 5 and a laser emitter 6. The fixing frame 1 consists of five support columns 100, three mounting blocks 101, a circular top plate 102 and a circular base 103; the support columns 100 are fixedly connected into a whole through the mounting blocks 101, the circular top plate 102 is mounted at their upper ends and the circular base 103 at their lower ends, so that the stepping motor 2, FPGA processing circuit board 3, lead screw 4, camera 5 and laser emitter 6 are mounted stably and firmly. A fixing groove 104 is provided in the center of the circular base 103, and adjustable telescopic brackets 105 are hinged to the bottom of the base, making it easy to adjust the spatial position of the device relative to the observed object.
The stepping motor 2 and the FPGA processing circuit board 3 are arranged on the uppermost mounting block 101; the lead screw 4 is fitted to the output shaft of the stepping motor 2 and extends below the circular base 103 through the fixing grooves 104 formed in the middle mounting block 101, the lowermost mounting block 101 and the circular base 103. The camera 5 and the laser emitter 6 are mounted at the lower end of the lead screw 4, and the camera 5, laser emitter 6 and stepping motor 2 are connected to the FPGA processing circuit board 3 by leads. The stepping motor 2 drives the lead screw 4 to adjust the angle and height between the camera 5 and laser emitter 6 and the observed object.
The lead screw 4 adjusts the depth by 2 mm to 2 cm per step, and the adjustment angle θ is 5° to 20° per step.
The camera 5 and the laser emitter 6 are mounted at 90° to each other at the lower end of the lead screw 4; the distance between the laser emitter 6 and the center of the camera 5 is 5 mm to 3 cm. The emitting direction of the laser emitter 6 forms an angle of 90° with the plane of the camera 5.
The camera 5 uses either a CCD or a CMOS image sensor.
The adjustable range of the telescopic brackets 105 is 10 cm to 80 cm in height and 20 cm to 100 cm in width. Three adjustable telescopic brackets 105, spaced 120° apart, are hinged to the bottom surface of the circular base 103. Before shooting, the opening width of the brackets is adjusted at the hinges, and the height of their telescopic rods is adjusted with snap rings.
The distance from the laser emitter 6 to the center of the camera 5 ranges from 5 mm to 3 cm; 1 cm is chosen in this embodiment.
The FPGA processing circuit board 3 includes an automatic calibration function: for incomplete spots imaged near the blind area of camera 5, it solves the spot center point, classifies the results according to the distance between camera 5 and laser emitter 6, and records, through repeated actual measurements, the correspondence between the center points of incomplete spots and the actual depth of field, forming a set of automatic calibration index tables.
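One plausible way to organize such an index table in software is sketched below; the class name, the linear interpolation between calibration points and all entry values are illustrative assumptions, since the patent does not specify the table layout.

```python
from bisect import bisect_left

class CalibrationTable:
    """Maps the centre-point row of an incomplete spot to an actually
    measured depth of field; one table per laser-to-camera distance b."""
    def __init__(self, entries):
        # entries: (y_pixel, measured_depth_mm) pairs, sorted by y_pixel
        self.ys = [y for y, _ in entries]
        self.ds = [d for _, d in entries]

    def depth(self, y):
        # clamp outside the calibrated range, interpolate linearly inside
        i = bisect_left(self.ys, y)
        if i == 0:
            return self.ds[0]
        if i >= len(self.ys):
            return self.ds[-1]
        y0, y1 = self.ys[i - 1], self.ys[i]
        d0, d1 = self.ds[i - 1], self.ds[i]
        return d0 + (d1 - d0) * (y - y0) / (y1 - y0)

# One illustrative table, keyed by an assumed distance b = 10 mm:
tables = {10: CalibrationTable([(5, 30.0), (10, 45.0), (20, 80.0)])}
```

At run time, the table matching the current laser-to-camera distance would be selected and queried with the center row of the incomplete spot.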
(See fig. 2.) Fig. 2 is a schematic diagram of the working principle of the laser triangulation 3D imaging device based on spot recognition. A power supply driving module powers the device; the stepping motor 2 drives the lead screw 4 to adjust the spatial relation of the laser emitter 6 and camera 5 to the observed object; the camera 5 transmits the captured image data to the FPGA processing circuit board 3 over leads; the FPGA processing circuit board 3 sends the measured depth-of-field data sets to the upper computer, which reconstructs them into a structural 3D image.
An imaging method of a laser triangulation 3D imaging device based on spot identification is characterized by comprising the following steps: (see fig. 4, 6 to 8):
step one, adjusting the adjustable telescopic brackets 105 so that the device and the observed object are in a suitable spatial position; then adjusting the distance b between the laser emitter 6 and the camera 5 so that the point laser emitted by the laser emitter 6 irradiates the surface of the observed object and forms a light spot; the spot and the observed object within the current field angle 2α are imaged together on the camera 5, converting the optical signal into an electrical signal;
step two, the camera 5 transmits one frame of digitized image data to the FPGA processing circuit board 3; the frame records the current height, angle and time along with the image data;
step three, the FPGA processing circuit board 3 processes the spot image of this frame: noise is first removed with a digital filtering algorithm, followed by edge detection, binarization and spot recognition;
step four, solving the spot center point to obtain the coordinates (X, Y) of the spot center, which give the position of the spot center; the center coordinate Y corresponds to the nth row counted downwards from the top of the imaging photo. Knowing the distance b from the laser emitter 6 to the camera 5 and the field angle 2α of camera 5, the depth of field d to be measured is obtained by:

d = b / tan( α · (Y - n) / Y )
where n is the row of the spot center coordinate Y counted from the top of the imaging photo, and Y is the midpoint of the Y axis of the image formed by camera 5;
the FPGA processing circuit board 3 integrates the depth of field d of (360 DEG/theta) test points measured by the camera 5 in each rotation into a data set and sends the data set to an upper computer; then the stepping motor 2 controls the screw rod 4 to adjust the depth of the camera 5 and the laser emitter 6, and the depth of field d of the test points is collected by rotating for one circle again (360 degrees/theta), so that a data set is formed and sent to an upper computer;
step five, repeating the step four until the collection is finished; and the upper computer finally fits a structural 3D image according to the data set.
Calculating the depth of field d (see fig. 4): the distance between laser emitter A and camera B is b, the field angle of camera B is 2α, EF is the center of the field angle's shooting range, n is the row of the spot center point in the image, and Y is the midpoint coordinate of the camera image's Y axis. By the triangle law,

tan( α · (Y - n) / Y ) = b / d

which yields

d = b / tan( α · (Y - n) / Y )
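Taking the derivation above with distance b, half field angle α, spot row n and image midpoint row Y, and assuming a linear pixel-to-angle mapping (a common simplification that the patent figure does not spell out), the depth of field is d = b / tan(α(Y - n)/Y). A minimal sketch:

```python
import math

def depth_from_spot(n, Y, b, alpha_deg):
    """Depth of field d from the spot's row n (counted from the top of
    the image), image Y-axis midpoint row Y, laser-to-camera distance b
    and half field angle alpha. The ray to the spot is assumed to make
    an angle alpha * (Y - n) / Y with the optical axis, so by the
    triangle law tan(alpha * (Y - n) / Y) = b / d."""
    angle = math.radians(alpha_deg) * (Y - n) / Y
    # n == Y would put the spot on the optical axis (depth -> infinity)
    return b / math.tan(angle)

# Example: b = 10 mm, field angle 2*alpha = 60 deg, midpoint row Y = 240;
# a spot at row n = 120 sits halfway between the midpoint and the top edge:
d = depth_from_spot(n=120, Y=240, b=10.0, alpha_deg=30)  # ~37.32 mm
```

Note how the sign convention matches the text: the closer n is to the midpoint Y, the larger the computed depth, and a spot high in the image (small n) gives a small depth.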
(See fig. 7.) In fig. 7, point G is the nearest depth of field that can theoretically be measured; but since the laser forms a spot rather than a single pixel, the nearest measuring point is actually at point H. Similarly, distances within point I cannot produce a complete spot; the center point of such an incomplete spot must be solved, and its true depth of field is then obtained from the automatic calibration index table.
(See fig. 8.) Because of the observation environment and the material of the observed object's surface, reflections may occur and several reflection spots may appear in the collected image. As shown in fig. 8, the X coordinate of a standard spot should be the midpoint of the X axis, so reflection spots caused by the environment and the surface material of the observed object can be eliminated.
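The X-midpoint test described above can be sketched as follows; the tolerance parameter is an assumption, since the patent only states that the standard spot's X coordinate is the X-axis midpoint.

```python
def reject_reflections(spots, x_mid, tol=2.0):
    """Keep only spot centres whose X coordinate lies on the image
    X-axis midpoint (within tol pixels); off-axis spots are treated
    as reflections. tol is an assumed tolerance, not from the patent."""
    return [(x, y) for x, y in spots if abs(x - x_mid) <= tol]

detected = [(320.4, 110.0), (250.0, 95.0), (319.1, 300.0)]
valid = reject_reflections(detected, x_mid=320.0)
# the off-axis spot at x = 250 is discarded as a reflection
```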
The laser triangulation 3D imaging device based on spot recognition works as follows: the laser emitter 6 emits point laser onto the observed object; the camera 5 collects the image information, converting the optical signal into an electrical signal; the stepping motor 2 then drives the lead screw 4 to adjust the depth and angle between the laser emitter 6 and the observed object; images at different angles are collected into data streams, which the FPGA processing circuit board 3 processes into depth-of-field data sets; these are sent to the upper computer, which finally fits them into a 3D image. The device is low in cost, simple in structure and easy to operate, and has high application value in spatial-structure 3D imaging; actual cavity and cup tests show very good imaging results (see figs. 9 and 10).
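The final fitting step consumes depth samples taken at many rotation angles and screw heights. A minimal sketch of turning those samples into a Cartesian point cloud is given below; the (height, [(angle, depth)]) layout is an assumed data format, not the patent's actual data stream.

```python
import math

def to_point_cloud(scans):
    """Merge per-revolution depth data sets into Cartesian 3D points.
    scans: (height_mm, [(angle_deg, depth_mm), ...]) per screw position
    (an assumed layout; the patent does not fix the data format)."""
    points = []
    for z, samples in scans:
        for angle_deg, d in samples:
            a = math.radians(angle_deg)
            # depth d along the rotated viewing direction at height z
            points.append((d * math.cos(a), d * math.sin(a), z))
    return points

cloud = to_point_cloud([(0.0, [(0, 50.0), (90, 50.0)]),
                        (5.0, [(0, 50.0), (90, 50.0)])])
# four points on a 50 mm-radius cylinder at heights 0 and 5 mm
```

A surface-reconstruction step (meshing or fitting) on the upper computer would then turn such a point cloud into the structural 3D image.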
Specifically, the present invention further provides an imaging method of the 3D imaging apparatus, comprising:
step one, point-like laser emitted from a laser emitter 6 irradiates the surface of an observed object to form a light spot, and the light spot and the observed object in the current field angle are imaged on a camera 5 together to realize the conversion of an optical signal into an electric signal.
Step two, after camera 5 acquires the spots of a frame, the frame is sent to the FPGA processing circuit board 3 over leads; noise is removed with a digital filtering algorithm, binarization and spot recognition are performed, and the spot center points are solved (see fig. 6). As fig. 6 shows, some spots are imaged incompletely near the blind area of the field of view (see fig. 7); the center points of these incomplete spots are solved, and the correspondence between their center points and the actual depth of field is recorded from actual measurements as an automatic calibration index, so that depths of field beyond the theoretical value can be obtained in practice.
The center point of the spots processed in step two is then solved to obtain the coordinates (X, Y), which give the position of the center point; the center coordinate Y corresponds to the nth row of the imaging photo counted from top to bottom. The larger the depth of field of the observed object, the closer the spot is to the midpoint of the image coordinates; conversely, the smaller the depth of field, the higher the imaged spot.
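In software, the filtering, binarization and center-point steps described above might look like the following NumPy sketch (a stand-in for the FPGA pipeline; the 3x3 mean filter and the fixed threshold of 128 are assumptions, as the patent names neither the filter nor the threshold):

```python
import numpy as np

def spot_center(img, threshold=128):
    """Digital filtering + binarization + spot-centre extraction for
    one greyscale frame (software stand-in for the FPGA pipeline)."""
    # 3x3 mean filter as a simple noise-removal stage (assumed filter)
    padded = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    smooth = sum(padded[dy:dy + h, dx:dx + w]
                 for dy in range(3) for dx in range(3)) / 9.0
    # binarization: pixels brighter than the threshold belong to the spot
    mask = smooth > threshold
    if not mask.any():
        return None  # no spot in this frame
    ys, xs = np.nonzero(mask)
    # spot centre (X, Y): centroid of the binary blob
    return float(xs.mean()), float(ys.mean())

img = np.zeros((32, 32))
img[10:13, 20:23] = 255       # synthetic 3x3 laser spot
center = spot_center(img)     # -> (21.0, 11.0)
```

The returned Y value is the row n used by the depth-of-field formula; a production pipeline would add edge detection and connected-component labelling to separate multiple spots.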
This laser triangulation 3D imaging device based on spot recognition has the following advantages:
1) The FPGA processing circuit board 3 applies digital filtering for noise removal, smoothing, binarization and spot recognition to the spot images collected by camera 5, finally obtains the coordinates of the spot center point, and accurately calculates the depth of field between camera 5 and the observed object from the depth-of-field formula. Through the fixing frame 1, the adjustable telescopic brackets 105 and the stepping motor 2 mounted on the frame, the lead screw 4 is driven to obtain data stream sets at different heights and angles, which the FPGA processing circuit board 3 transmits to the upper computer to reconstruct a structural 3D image. The structure and operating steps of structured light 3D imaging devices are thus greatly simplified.
2) Incomplete spots near the field-angle blind area of camera 5 are recognized, and depths of field beyond the theoretical value are obtained from the actually measured correspondences.
3) Based on the fact that the X-axis coordinate of a standard spot is unchanged, reflection spots caused by the environment and the surface material of the observed object are eliminated.
4) The imaging device has the advantages of simple structure and low cost, is particularly suitable for non-open cavity shooting imaging application, and has high application value in the field of space structure 3D imaging.
While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (6)

1. An imaging method of a laser triangulation 3D imaging device based on spot recognition is disclosed, wherein the laser triangulation 3D imaging device based on spot recognition comprises a fixed frame (1), a stepping motor (2), an FPGA processing circuit board (3), a lead screw (4), a camera (5) and a laser transmitter (6); the fixing frame (1) is composed of five supporting columns (100), three mounting blocks (101), a circular top plate (102) and a circular base (103), the supporting columns (100) are fixedly connected into a whole through the mounting blocks (101), the circular top plate (102) is installed at the upper ends of the supporting columns (100), the circular base (103) is installed at the lower ends of the supporting columns (100), a fixing groove (104) is formed in the center of the circular base (103), an adjustable telescopic support (105) is installed at the bottom of the circular base (103) through a hinge, a stepping motor (2) and an FPGA processing circuit board (3) are arranged on the uppermost mounting block (101), a lead screw (4) is connected to an output shaft of the stepping motor (2), and the lead screw (4) extends out of the lower end of the circular base (103) through the fixing groove (104) of the middle mounting block (101), the lowermost mounting block (101) and the circular base (103), the lower end of the screw rod (4) is provided with a camera (5) and a laser emitter (6), and the camera (5), the laser emitter (6) and the stepping motor (2) are connected with the FPGA processing circuit board (3) through leads; the method is characterized in that: the imaging method of the laser triangulation 3D imaging device based on the spot identification comprises the following steps:
step one, adjusting the adjustable telescopic brackets (105) so that the laser triangulation 3D imaging device based on spot recognition and the observed object are in a suitable spatial position; then adjusting the distance b between the laser emitter (6) and the camera (5) so that the point laser emitted by the laser emitter (6) irradiates the surface of the observed object and forms a light spot; the spot and the observed object within the current field angle 2α are imaged together on the camera (5), converting the optical signal into an electrical signal;
secondly, the camera (5) transmits a frame of digitized image data to the FPGA processing circuit board (3), and the frame of image data records the image data including the current height and angle;
processing the frame image by the FPGA processing circuit board (3), firstly eliminating noise by using a digital filtering algorithm, and then carrying out edge detection, binarization and light spot identification processing;
step four, solving for the light-spot center point to obtain the coordinates (X, Y) of the spot center, which give the position of the spot center, the spot-center coordinate Y corresponding to the nth point from the top of the imaged picture; knowing the distance b from the laser emitter (6) to the camera (5) and the field angle 2α of the camera, the depth of field d to be measured is calculated by the following formula:
d = b·Y / ((n − Y)·tan α)
in the formula, n is the row index, counted from the top of the imaged picture, of the spot-center coordinate Y in the image, and Y is the midpoint of the Y axis of the image formed by the camera (5);
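Since the patent's own equation image did not survive, the computation in step four can be read under a pinhole-camera model: with the laser beam parallel to the optical axis at baseline b, the spot's angular offset φ from the image center satisfies tan φ = b/d, and φ maps linearly in tan to the pixel offset (n − Y)/Y scaled by tan α. A hedged sketch of that reading:

```python
import math

def depth_from_spot(n, y_mid, b, alpha):
    """Depth d from the spot's row index n.

    Assumes a pinhole camera with half field angle alpha and a laser
    beam parallel to the optical axis at baseline b. This is one
    plausible reconstruction of the patent's formula, not a verified
    transcription of it:

        tan(phi) = ((n - y_mid) / y_mid) * tan(alpha)
        d = b / tan(phi)
    """
    offset = (n - y_mid) / y_mid          # normalized pixel offset from center
    return b / (offset * math.tan(alpha)) # triangulated depth
```

For example, with b = 3 cm, a 60° field angle (α = 30°), image midpoint Y = 240 and spot row n = 360, the offset is 0.5 and the depth comes out to roughly 10.4 cm.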
integrating the depths of field d of the 360°/θ test points measured by the camera (5) in each revolution into a data set and sending it to the upper computer, where θ is the adjustment angle; the stepping motor (2) then drives the lead screw (4) to adjust the height of the camera (5) and the laser emitter (6), rotates one more revolution, and collects the depths of field d of another 360°/θ test points into a data set sent to the upper computer;
step five, repeating step four until collection is complete; the upper computer finally fits a structural 3D image from the data sets.
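The fitting in step five starts from triples of (rotation angle θ, lead-screw height, measured depth d). A minimal sketch of how the upper computer might convert each triple to a Cartesian point before surface fitting — the cylindrical-scan geometry (device rotating about a vertical axis, depth measured radially outward) is an assumption, since the patent only states that a 3D image is fitted from the data set:

```python
import math

def samples_to_points(samples):
    """Convert (theta_deg, height, depth) scan samples to Cartesian
    (x, y, z) points, assuming rotation about a vertical axis with
    each depth d measured radially outward from that axis."""
    points = []
    for theta_deg, h, d in samples:
        t = math.radians(theta_deg)
        points.append((d * math.cos(t), d * math.sin(t), h))
    return points
```

A surface mesh or point-cloud reconstruction (e.g. Poisson surface reconstruction) would then be fitted over these points on the host.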
2. The imaging method of the laser triangulation 3D imaging device based on spot recognition as claimed in claim 1, wherein: the adjustment depth of the lead screw (4) is 2 mm to 2 cm each time, and the adjustment angle θ is 5° to 20° each time.
3. The imaging method of the laser triangulation 3D imaging device based on spot recognition as claimed in claim 1, wherein: the camera (5) and the laser emitter (6) are mounted at the lower end of the lead screw (4) at 90° to each other, and the spacing between the centers of the laser emitter (6) and the camera (5) is 5 mm to 3 cm.
4. The imaging method of the laser triangulation 3D imaging device based on spot recognition as claimed in claim 1, wherein: the camera (5) employs a CCD image sensor or a CMOS image sensor.
5. The imaging method of the laser triangulation 3D imaging device based on spot recognition as claimed in claim 1, wherein: the adjustable range of the adjustable telescopic support (105) is 10 cm to 80 cm in height and 20 cm to 100 cm in width.
6. The imaging method of the laser triangulation 3D imaging device based on spot recognition as claimed in claim 1, wherein: the FPGA processing circuit board (3) includes an automatic calibration function that locates the center point of incomplete light spots imaged near the blind zone of the camera's field of view; the correspondences between the measured center points of incomplete spots and the actual depths of field, recorded over many actual measurements and classified according to the different distances between the camera and the laser emitter, form a plurality of automatic calibration index tables.
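Claim 6's index tables map a measured (possibly incomplete) spot centroid to an actual depth, with one table per laser-to-camera spacing. A hedged sketch of such a lookup using linear interpolation between calibration pairs — the table structure, sample values, and interpolation scheme are illustrative assumptions, not taken from the patent:

```python
import bisect

class CalibrationTable:
    """One automatic-calibration index table for a given laser-to-
    camera spacing: sorted (spot-center row, actual depth) pairs,
    queried by linear interpolation and clamped at the ends."""

    def __init__(self, pairs):
        pairs = sorted(pairs)
        self.rows = [r for r, _ in pairs]
        self.depths = [d for _, d in pairs]

    def depth(self, row):
        i = bisect.bisect_left(self.rows, row)
        if i == 0:
            return self.depths[0]           # below table range: clamp
        if i == len(self.rows):
            return self.depths[-1]          # above table range: clamp
        r0, r1 = self.rows[i - 1], self.rows[i]
        d0, d1 = self.depths[i - 1], self.depths[i]
        return d0 + (d1 - d0) * (row - r0) / (r1 - r0)

# one table per camera/laser spacing, e.g. keyed by spacing in meters
tables = {0.01: CalibrationTable([(250, 0.30), (300, 0.20), (350, 0.15)])}
```

At run time the FPGA would select the table matching the current spacing b and look up the depth for the incomplete spot's centroid row.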
CN202011400150.3A 2020-12-04 2020-12-04 Laser triangulation method 3D imaging device based on facula recognition Active CN112525101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011400150.3A CN112525101B (en) 2020-12-04 2020-12-04 Laser triangulation method 3D imaging device based on facula recognition


Publications (2)

Publication Number Publication Date
CN112525101A CN112525101A (en) 2021-03-19
CN112525101B true CN112525101B (en) 2022-07-12

Family

ID=74997985






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant