CN112104858B - Ambient light suppression imaging method for a three-dimensional machine vision system - Google Patents


Info

Publication number
CN112104858B
CN112104858B (application CN202010967425.5A)
Authority
CN
China
Prior art keywords
camera
image
blue light
point cloud
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010967425.5A
Other languages
Chinese (zh)
Other versions
CN112104858A (en)
Inventor
丁克
丁兢
唐学燕
李翔
马洁
庞旭芳
Current Assignee
Foshan Xianyang Technology Co ltd
Original Assignee
Foshan Xianyang Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Foshan Xianyang Technology Co., Ltd.
Priority to CN202010967425.5A
Publication of CN112104858A
Application granted
Publication of CN112104858B
Legal status: Active
Anticipated expiration

Classifications

    • H04N 13/261: Image signal generators with monoscopic-to-stereoscopic image conversion
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/50: Image analysis; depth or shape recovery
    • H04N 13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N 13/243: Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H04N 23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/20221: Image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses an ambient light suppression imaging method for a three-dimensional machine vision system, comprising the following steps: acquiring preliminary surface images of a measured target with a camera group and processing them to obtain a preliminary three-dimensional point cloud map; controlling a structured light projector to project a high-resolution blue-light speckle code onto the measured target, acquiring surface images of the target with the camera group, and filtering out white light to obtain the corresponding blue-light coded surface images; evaluating the quality of the blue-light coded surface images and adjusting the exposure time to re-acquire them as many times as needed; fusing each camera's blue-light coded surface images into a fused blue-light surface image; computing a blue-light three-dimensional point cloud map from the fused blue-light surface images; fusing the preliminary and blue-light point cloud maps into a fused point cloud map; repeating all of the above steps under multiple exposures and re-fusing the resulting fused point cloud maps into a re-fused point cloud map; and locating the holes in the re-fused point cloud map and completing them to obtain a complete three-dimensional point cloud.

Description

Ambient light suppression imaging method for a three-dimensional machine vision system
Technical Field
The invention relates to the technical field of three-dimensional imaging, and in particular to an ambient light suppression imaging method for a three-dimensional machine vision system.
Background
Three-dimensional machine vision systems are used mainly in industry, where the operating environments, and hence the illumination conditions, are complex and varied. Their imaging quality is easily degraded by ambient light: when the ambient light is weak, image quality is low and the three-dimensional imaging result is poor; when the ambient light is strong, highly reflective objects are difficult to image completely, leaving large numbers of holes in the three-dimensional point cloud and preventing reconstruction.
In view of this, an ambient light suppression imaging method for a three-dimensional machine vision system is needed to suppress the influence of ambient light and improve imaging quality.
Disclosure of Invention
The object of the invention is to provide an ambient light suppression imaging method for a three-dimensional machine vision system that suppresses the influence of ambient light and improves imaging quality.
To solve the above technical problem, the invention adopts the following technical solution: an ambient light suppression imaging method for a three-dimensional machine vision system, comprising the following steps:
Step S1: acquire preliminary surface images of the measured target with a camera group, and process them to obtain a preliminary three-dimensional point cloud map, the camera group comprising at least two cameras;
Step S2: control the structured light projector to project a high-resolution blue-light speckle code onto the measured target;
Step S3: acquire surface images of the measured target with the camera group, and filter white light out of each camera's image so as to retain the blue coded information, obtaining each camera's blue-light coded surface image of the measured target;
Step S4: evaluate the quality of each camera's blue-light coded surface image by an adaptive exposure method, and automatically adjust the exposure time according to the evaluation result to re-acquire the surface images as many times as needed, obtaining each camera's blue-light coded surface images of the measured target;
Step S5: for each camera, fuse its blue-light coded surface images taken under the multiple exposures to obtain that camera's fused blue-light surface image;
Step S6: obtain a blue-light three-dimensional point cloud map from the cameras' fused blue-light surface images by a three-dimensional imaging algorithm;
Step S7: fuse the preliminary and blue-light three-dimensional point cloud maps by a three-dimensional point cloud fusion and completion method to obtain a fused point cloud map;
Step S8: repeat steps S1 to S7 under multiple exposures, and re-fuse the fused point cloud maps obtained under the multiple exposures to obtain a re-fused point cloud map;
Step S9: locate the holes in the re-fused point cloud map and complete them according to the characteristics of the surrounding three-dimensional point cloud, obtaining a complete three-dimensional point cloud map.
In a further aspect, the camera group consists of two cameras.
In a further aspect, step S1 specifically comprises:
Step S11: acquire surface images of the measured target with the two cameras to obtain each camera's preliminary surface image;
Step S12: evaluate the quality of each camera's preliminary surface image by an adaptive exposure method, and automatically adjust the exposure time according to the evaluation result to re-acquire the surface images as many times as needed, obtaining each camera's preliminary surface images of the measured target;
Step S13: for each camera, fuse its preliminary surface images taken under the multiple exposures to obtain that camera's fused preliminary surface image;
Step S14: locate the reflective regions of each camera's fused preliminary surface image, compare the two cameras' fused preliminary surface images, and use the feature points around one camera's reflective region to find the corresponding region in the other camera's image and fill in the missing data, obtaining each camera's fused and completed surface image;
Step S15: obtain a preliminary three-dimensional point cloud map from the cameras' fused and completed surface images by a three-dimensional imaging algorithm.
In a further aspect, step S12 specifically comprises:
Step S121: analyze each camera's preliminary surface image for improperly exposed regions; if any exist, execute step S122;
Step S122: compute a suitable exposure value and automatically adjust the exposure time;
Step S123: re-acquire the surface images of the measured target with the two cameras at the adjusted exposure time, obtaining each camera's adjusted preliminary surface image;
Step S124: analyze each camera's adjusted preliminary surface image for improperly exposed regions; if any exist, return to step S122, otherwise proceed to step S13.
In a further aspect, step S15 specifically comprises:
Step S151: take the two cameras' fused and completed surface images as the reference image and the target image, respectively;
Step S152: for each pixel of the reference image, compute its similarity to the pixels within the disparity search range of the target image and record the highest similarity value;
Step S153: take those pixels whose highest similarity exceeds a preset threshold as having found their corresponding points in the target image, and compute the corresponding binocular disparity values;
Step S154: build a preliminary disparity map from the disparity values of the matched points, check its connectivity, and remove mismatched points and their disparity values to obtain a corrected disparity map;
Step S155: perform sub-pixel interpolation on the corrected disparity map to obtain a refined disparity map;
Step S156: from the refined disparity map, solve for the corresponding depth map and the three-dimensionally reconstructed preliminary point cloud map.
In a further aspect, step S4 specifically comprises:
Step S41: analyze each camera's blue-light coded surface image for improperly exposed regions; if any exist, execute step S42;
Step S42: compute a suitable exposure value and automatically adjust the exposure time;
Step S43: re-acquire the surface images of the measured target with the two cameras at the adjusted exposure time, and filter white light out of each camera's image so as to retain the blue coded information, obtaining each camera's blue-light coded surface image of the measured target;
Step S44: analyze each camera's adjusted blue-light coded surface image for improperly exposed regions; if any exist, return to step S42, otherwise proceed to step S5.
In a further aspect, step S41 is followed by: Step S41.1: if no camera's blue-light coded surface image contains an improperly exposed region, obtain the blue-light three-dimensional point cloud map directly from the blue-light coded surface images by a three-dimensional imaging algorithm, and execute step S7.
In a further aspect, step S6 specifically comprises:
Step S61: take the two cameras' fused blue-light surface images as the reference image and the target image, respectively;
Step S62: for each pixel of the reference image, compute its similarity to the pixels within the disparity search range of the target image and record the highest similarity value;
Step S63: take those pixels whose highest similarity exceeds a preset threshold as having found their corresponding points in the target image, and compute the corresponding binocular disparity values;
Step S64: build a preliminary disparity map from the disparity values of the matched points, check its connectivity, and remove mismatched points and their disparity values to obtain a corrected disparity map;
Step S65: perform sub-pixel interpolation on the corrected disparity map to obtain a refined disparity map;
Step S66: from the refined disparity map, solve for the corresponding depth map and the three-dimensionally reconstructed blue-light point cloud map.
In a further aspect, the number of exposures in step S8 ranges from 5 to 10.
In a further aspect, step S1 is preceded by: Step S00: control the structured light projector to project white light onto the measured target.
The beneficial technical effects of the invention are as follows. The method obtains a preliminary three-dimensional point cloud map, controls the structured light projector to project a high-resolution blue-light speckle code onto the measured target, acquires the target's surface images, and filters out the white light so that only the blue coded information is collected; since ambient light consists mainly of white light, part of it is thereby filtered out, its influence on the three-dimensional machine vision system is suppressed, and imaging quality improves. In addition, adaptive exposure is used: imaging quality is evaluated and the exposure time adjusted automatically to obtain blue-light coded surface images of the target under different illumination conditions, which are fused and then imaged three-dimensionally into a blue-light point cloud map, yielding complete three-dimensional imaging. Finally, the preliminary and blue-light point cloud maps are fused into a fused point cloud map; repeating all the steps under multiple exposures yields fused point cloud maps under different illumination conditions, which are fused again into a re-fused point cloud map, improving imaging accuracy and completeness.
Drawings
FIG. 1 is a flowchart of the ambient light suppression imaging method of the three-dimensional machine vision system according to the invention;
FIG. 2 is a sub-flowchart of step S1 of the method;
FIG. 3 is a sub-flowchart of step S12 of the method of FIG. 2;
FIG. 4 is a sub-flowchart of step S15 of the method of FIG. 2;
FIG. 5 is a sub-flowchart of step S4 of the method;
FIG. 6 is a sub-flowchart of step S6 of the method;
FIG. 7 is a flowchart of an embodiment of the method.
Detailed Description
To make the objects, technical solutions and advantages of the invention clearer to those skilled in the art, the invention is further described below with reference to the accompanying drawings and embodiments.
Referring to FIG. 1, the ambient light suppression imaging method of the three-dimensional machine vision system of the invention comprises the following steps.
Step S1: acquire preliminary surface images of the measured target with the camera group, and process them to obtain a preliminary three-dimensional point cloud map; the camera group comprises at least two cameras.
Step S2: control the structured light projector to project a high-resolution blue-light speckle code onto the measured target.
Step S3: acquire surface images of the measured target with the camera group, and filter white light out of each camera's image so as to retain the blue coded information, obtaining each camera's blue-light coded surface image of the measured target.
Step S4: evaluate the quality of each camera's blue-light coded surface image by an adaptive exposure method, and automatically adjust the exposure time according to the evaluation result to re-acquire the surface images as many times as needed, obtaining each camera's blue-light coded surface images of the measured target.
Step S5: for each camera, fuse its blue-light coded surface images taken under the multiple exposures to obtain that camera's fused blue-light surface image.
Step S6: obtain a blue-light three-dimensional point cloud map from the cameras' fused blue-light surface images by a three-dimensional imaging algorithm.
Step S7: fuse the preliminary and blue-light three-dimensional point cloud maps by a three-dimensional point cloud fusion and completion method to obtain a fused point cloud map.
Step S8: repeat steps S1 to S7 under multiple exposures, and re-fuse the fused point cloud maps obtained under the multiple exposures to obtain a re-fused point cloud map.
Step S9: locate the holes in the re-fused point cloud map and complete them according to the characteristics of the surrounding three-dimensional point cloud, obtaining a complete three-dimensional point cloud map.
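The point-cloud half of steps S7 and S8 can be sketched in Python as follows. This is an illustrative assumption rather than the patent's implementation: the function names, the synthetic random clouds and the voxel-based de-duplication are all stand-ins, since the patent does not specify how overlapping points are merged.

```python
import numpy as np

rng = np.random.default_rng(1)

def dedupe(cloud, voxel=0.01):
    # Merge points that fall into the same voxel so the same surface point
    # is not counted once per exposure (an assumed merging rule).
    keys = np.floor(cloud / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return cloud[np.sort(idx)]

def fuse_clouds(*clouds):
    return dedupe(np.vstack(clouds))

per_exposure = []
for _ in range(5):                      # the patent suggests 5 to 10 exposures
    preliminary = rng.random((200, 3))  # stand-in for the step S1 cloud
    blue = rng.random((200, 3))         # stand-in for the step S6 cloud
    per_exposure.append(fuse_clouds(preliminary, blue))   # step S7

re_fused = fuse_clouds(*per_exposure)   # step S8 re-fusion
```

In practice the per-exposure clouds would come from steps S1 and S6; the voxel keys simply give a cheap way to decide which points from different exposures describe the same surface location.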
Specifically, in this embodiment, the camera group consists of two cameras, and the number of exposures ranges from 5 to 10. The hole range refers to the extent of a reflective hole region. The evaluation result is either a pass or a fail of image quality; when the image quality fails, the exposure time is adjusted automatically and the surface images of the measured target are re-acquired. The characteristics of the three-dimensional point cloud include its curvature and the normal directions and positions of its points. By obtaining a preliminary three-dimensional point cloud map, controlling the structured light projector to project a high-resolution blue-light speckle code onto the measured target, acquiring the target's surface images, and filtering out white light so that only the blue coded information is collected, and since ambient light is mainly white light, the method filters out part of the ambient light, suppresses its influence on the three-dimensional machine vision system, and improves imaging quality. Moreover, with the adaptive exposure technique, imaging quality is evaluated and the exposure time adjusted automatically so as to obtain blue-light coded surface images of the target under different illumination conditions; these are fused and then imaged three-dimensionally into a blue-light point cloud map, yielding complete three-dimensional imaging. The preliminary and blue-light point cloud maps are fused into a fused point cloud map; repeated multiple exposures produce fused point cloud maps under different illumination conditions, which are fused again into a re-fused point cloud map to improve imaging accuracy and completeness. Finally, the holes of the re-fused point cloud map are completed from the characteristics of the surrounding three-dimensional point cloud to obtain a complete three-dimensional point cloud, which overcomes the influence of ambient light on the three-dimensional machine vision system, makes the method applicable to more industrial scenarios, and improves its practicability.
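Step S9's hole completion can be illustrated on a depth grid. The sketch below is an assumption for illustration only: it fills each hole cell from the mean of its valid neighbours, whereas the patent's completion also uses the curvature and normal directions of the surrounding points.

```python
import numpy as np

def fill_holes(depth, max_iters=50):
    # Locate NaN cells (the reflective hole) and fill each one from the
    # mean of its valid 8-neighbourhood, repeating until the hole closes.
    depth = depth.copy()
    for _ in range(max_iters):
        holes = np.argwhere(np.isnan(depth))
        if holes.size == 0:
            break
        for r, c in holes:
            neigh = depth[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            valid = neigh[~np.isnan(neigh)]
            if valid.size:
                depth[r, c] = valid.mean()
    return depth

grid = np.full((5, 5), 2.0)
grid[2, 2] = np.nan                     # a one-cell reflective hole
filled = fill_holes(grid)
```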
Referring to FIG. 2, in this embodiment, step S1 specifically comprises the following steps.
Step S11: acquire surface images of the measured target with the two cameras to obtain each camera's preliminary surface image.
Step S12: evaluate the quality of each camera's preliminary surface image by an adaptive exposure method, and automatically adjust the exposure time according to the evaluation result to re-acquire the surface images as many times as needed, obtaining each camera's preliminary surface images of the measured target.
Step S13: for each camera, fuse its preliminary surface images taken under the multiple exposures to obtain that camera's fused preliminary surface image.
Step S14: locate the reflective regions of each camera's fused preliminary surface image, compare the two cameras' fused preliminary surface images, and use the feature points around one camera's reflective region to find the corresponding region in the other camera's image and fill in the missing data, obtaining each camera's fused and completed surface image.
Step S15: obtain a preliminary three-dimensional point cloud map from the cameras' fused and completed surface images by a three-dimensional imaging algorithm.
With the adaptive exposure technique, imaging quality is evaluated and the exposure time adjusted automatically according to the evaluation result, yielding preliminary surface images of the measured target under different illumination conditions. These are fused per camera; the fused images of the two cameras are then compared so that each camera's reflective regions can be completed from the other camera's image, giving each camera a complete surface image of the target, i.e. its fused and completed surface image. Three-dimensional imaging is then performed on these images with a three-dimensional imaging algorithm to obtain a complete and accurate preliminary three-dimensional point cloud map under the different illumination conditions. The evaluation result is either a pass or a fail of image quality; when the image quality fails, the exposure time is adjusted automatically and the surface images are re-acquired.
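The per-camera fusion of step S13 (and likewise step S5) can be sketched as follows. The well-exposedness weighting is an assumption in the spirit of classic exposure fusion; the patent does not state its fusion formula.

```python
import numpy as np

def fuse_exposures(frames):
    # Weight each pixel by how far it sits from under/over-exposure, so a
    # saturated highlight in one frame is replaced by detail from another.
    stack = np.stack([f.astype(float) for f in frames])
    # Gaussian "well-exposedness" weight: peaks at mid-grey, near zero at
    # 0 and 255 (sigma of 50 grey levels is an illustrative choice).
    w = np.exp(-((stack - 127.5) ** 2) / (2 * 50.0 ** 2)) + 1e-6
    return (w * stack).sum(axis=0) / w.sum(axis=0)

short = np.array([[60.0, 120.0]])       # short exposure: highlight kept
long_ = np.array([[255.0, 200.0]])      # long exposure: highlight saturated
fused = fuse_exposures([short, long_])
```

In the fused result the saturated pixel is dominated by the short exposure, while moderately exposed pixels blend both frames.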
Referring to FIG. 3, in this embodiment, step S12 preferably comprises the following steps.
Step S121: analyze each camera's preliminary surface image for improperly exposed regions; if any exist, execute step S122.
Step S122: compute a suitable exposure value and automatically adjust the exposure time.
Step S123: re-acquire the surface images of the measured target with the two cameras at the adjusted exposure time, obtaining each camera's adjusted preliminary surface image.
Step S124: analyze each camera's adjusted preliminary surface image for improperly exposed regions; if any exist, return to step S122, otherwise proceed to step S13.
Specifically, step S121 is followed by: if no camera's preliminary surface image contains an improperly exposed region, locate the reflective region of each camera's preliminary surface image, compare the two cameras' preliminary surface images, use the feature points around one camera's reflective region to find the corresponding region in the other camera's image and fill in the missing data to obtain each camera's completed surface image, and then obtain the preliminary three-dimensional point cloud map from the completed surface images by a three-dimensional imaging method. An evaluation result of failed image quality means an improperly exposed region exists; a passing result means none exists.
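The S121-S124 loop can be sketched in Python. The thresholds (10/245), the 5% bad-pixel tolerance and the halve-or-double adjustment rule are illustrative assumptions; the patent only states that a suitable exposure value is computed. The scene is simulated as a fixed reflectance gradient so the loop's behaviour is deterministic.

```python
import numpy as np

scene = np.linspace(0.02, 1.0, 64)      # relative surface reflectance

def capture(exposure_ms):
    # Brightness grows linearly with exposure until the sensor saturates.
    return np.clip(scene * exposure_ms * 100.0, 0, 255)

def badly_exposed_fraction(img, lo=10.0, hi=245.0):
    return ((img <= lo) | (img >= hi)).mean()

def adjust_exposure(exposure_ms=16.0, max_rounds=8, tol=0.05):
    img = capture(exposure_ms)
    for _ in range(max_rounds):         # steps S121/S124: evaluate exposure
        if badly_exposed_fraction(img) < tol:
            break
        # Step S122: if saturation dominates, shorten; otherwise lengthen.
        if (img >= 245).mean() >= (img <= 10).mean():
            exposure_ms *= 0.5
        else:
            exposure_ms *= 2.0
        img = capture(exposure_ms)      # step S123: re-acquire
    return exposure_ms, img

exposure, frame = adjust_exposure()
```

Starting at 16 ms the simulated frame is heavily saturated, so the loop halves the exposure until the badly exposed fraction drops under the tolerance.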
Referring to FIG. 4, step S15 preferably comprises the following steps.
Step S151: take the two cameras' fused and completed surface images as the reference image and the target image, respectively.
Step S152: for each pixel of the reference image, compute its similarity to the pixels within the disparity search range of the target image and record the highest similarity value.
Step S153: take those pixels whose highest similarity exceeds a preset threshold as having found their corresponding points in the target image, and compute the corresponding binocular disparity values.
Step S154: build a preliminary disparity map from the disparity values of the matched points, check its connectivity, and remove mismatched points and their disparity values to obtain a corrected disparity map.
Step S155: perform sub-pixel interpolation on the corrected disparity map to obtain a refined disparity map.
Step S156: from the refined disparity map, solve for the corresponding depth map and the three-dimensionally reconstructed preliminary point cloud map.
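Steps S151 to S153 can be sketched for a single scanline as follows. Normalized cross-correlation over a 5-pixel window stands in for the patent's unspecified similarity measure, and the connectivity check and sub-pixel interpolation of steps S154 and S155 are omitted; all names are illustrative.

```python
import numpy as np

def disparity_row(ref, tgt, max_disp=8, win=2, thresh=0.8):
    # For each reference pixel, scan the disparity search range in the
    # target row, keep the best similarity, and reject weak matches (-1).
    n = len(ref)
    disp = np.full(n, -1.0)
    for x in range(win, n - win):
        patch = ref[x - win:x + win + 1].astype(float)
        best_score, best_d = -1.0, -1
        for d in range(0, min(max_disp, x - win) + 1):
            cand = tgt[x - d - win:x - d + win + 1].astype(float)
            # Normalized cross-correlation as the similarity value.
            num = np.dot(patch - patch.mean(), cand - cand.mean())
            den = (np.linalg.norm(patch - patch.mean())
                   * np.linalg.norm(cand - cand.mean()))
            score = num / den if den > 0 else 0.0
            if score > best_score:
                best_score, best_d = score, d
        if best_score > thresh:         # step S153's similarity threshold
            disp[x] = best_d
    return disp

ref = np.zeros(14)
ref[5] = ref[7] = 10.0
ref[6] = 50.0                           # a bright speckle at pixel 6
tgt = np.roll(ref, -2)                  # second view shifted by disparity 2
disp = disparity_row(ref, tgt)
```

Featureless regions fail the similarity threshold and stay at -1, which is exactly what the connectivity check of step S154 would then prune in a full implementation.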
Referring to fig. 5 and 7, preferably, in this embodiment, the step S4 specifically includes:
Step S41, analyzing and judging whether the obtained blue light coded surface image of the measured target of each camera contains an improperly exposed area; if so, executing step S42.
Step S42, calculating a proper exposure value and automatically adjusting the exposure time.
Step S43, using the two cameras to respectively collect surface images of the measured target with the adjusted exposure time, filtering white light from each camera's image to retain the blue light information code, and obtaining the adjusted blue light coded surface image of the measured target of each camera.
Step S44, analyzing and judging whether the adjusted blue light coded surface images of the measured target still contain improperly exposed areas; if so, returning to step S42; if not, executing step S5.
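The adaptive exposure loop of steps S41-S44 can be sketched as follows. The pixel thresholds, the allowed fraction of badly exposed pixels, and the proportional exposure update are illustrative assumptions, since the patent does not specify how the "proper exposure value" is computed.

```python
import numpy as np

def evaluate_exposure(img, low=10, high=245, max_bad_fraction=0.01):
    """Pass/fail check of steps S41/S44: the image fails when too many
    pixels are under- or over-saturated (an improperly exposed area)."""
    bad = np.count_nonzero((img <= low) | (img >= high))
    return bad / img.size <= max_bad_fraction

def adjust_exposure(exposure_us, img, target_mean=128.0):
    """Step S42-style update: scale the exposure time so the mean
    intensity moves toward a target (a simple proportional model)."""
    mean = float(img.mean())
    if mean <= 0.0:
        return exposure_us * 2.0
    return exposure_us * target_mean / mean
```

The acquisition loop would then repeatedly call a (hypothetical) `capture(exposure_us)` and `adjust_exposure` until `evaluate_exposure` passes, mirroring the return from step S44 to step S42.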
The structured light projector is controlled to project a high-resolution blue light speckle code onto the measured target, surface images of the target are collected, and white light is filtered out so that only the blue light coding information is collected. Because ambient light is mainly white light, this filters out part of the ambient light, suppresses its influence on the three-dimensional machine vision system, and improves imaging quality. In addition, the adaptive exposure technique evaluates imaging quality and automatically adjusts the exposure time according to the evaluation result, so that blue light coded surface images of the measured target are obtained under different illumination conditions, improving the completeness and accuracy of subsequent imaging. An evaluation result of failing image quality indicates that an improperly exposed area exists; a passing result indicates that no such area exists.
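A software approximation of this white-light filtering can be sketched as follows, assuming BGR channel order (blue first). White ambient light contributes roughly equally to all three channels, while the projected blue speckle code concentrates in the blue channel, so subtracting the red/green average removes much of the ambient component; in hardware, an optical band-pass filter in front of the lens serves the same purpose.

```python
import numpy as np

def extract_blue_code(bgr_image):
    """Keep the blue-light coding information and suppress white ambient
    light (software approximation; channel order is assumed to be BGR)."""
    b = bgr_image[..., 0].astype(np.int16)
    ambient = (bgr_image[..., 1].astype(np.int16)
               + bgr_image[..., 2].astype(np.int16)) // 2  # white estimate
    return np.clip(b - ambient, 0, 255).astype(np.uint8)
```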
Specifically, after the step S41, the method further includes:
Step S41.1, if the blue light coded surface image of the measured target of each camera contains no improperly exposed area, obtaining a blue light three-dimensional point cloud picture by a three-dimensional imaging algorithm from the obtained blue light coded surface images, and executing step S7.
Referring to fig. 6, preferably, in this embodiment, the step S6 specifically includes:
step S61, respectively taking the obtained fused blue light surface images of the cameras as a reference image and a target image;
step S62, calculating the similarity value of the pixel point of the reference image relative to the pixel point in the parallax searching range of the target image, and obtaining the highest similarity value of the pixel point of the reference image;
step S63, acquiring a point of which the highest value of the similarity of the pixel points of the reference image is higher than a preset threshold value as a corresponding point of the pixel points of the reference image in the target image, and calculating the parallax value of the corresponding binocular camera;
step S64, a preliminary parallax map is prepared according to the obtained parallax values of corresponding points of the pixel points of the reference image in the target image in the parallax search range of the target image, the connectivity of the preliminary parallax map is checked, the mismatching points and the corresponding parallax values are eliminated, and a corrected parallax map is obtained;
step S65, performing sub-pixel interpolation on the corrected disparity map to obtain a precision disparity map;
Step S66, solving for the corresponding depth map and the three-dimensionally reconstructed blue light three-dimensional point cloud picture from the obtained precision parallax map.
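Step S66 (like step S156) recovers depth from the precision parallax map via the standard rectified-stereo relation Z = f·B/d, where f is the focal length in pixels, B the camera baseline, and d the disparity. A minimal sketch:

```python
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline_m):
    """Convert a disparity (parallax) map to a depth map: Z = f * B / d.
    Cells with zero disparity (no match) are left at depth 0."""
    depth = np.zeros_like(disparity, dtype=np.float64)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth
```

Back-projecting each (x, y, Z) through the camera intrinsics then yields the three-dimensional point cloud picture.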
With reference to fig. 7, specifically, before the step S1, the method further includes:
Step S00, controlling the structured light projector device to project white light onto the measured target, so that the preliminary surface image collected by the camera group in step S1 contains no structured light coding information.
In summary, the ambient light suppression imaging method for a three-dimensional machine vision system first acquires a preliminary three-dimensional point cloud picture; it then controls the structured light projector to project a high-resolution blue light speckle code onto the measured target, collects surface images of the target, and filters out white light so that only the blue light coding information is collected. Because ambient light is mainly white light, this filters out part of the ambient light, suppresses its influence on the three-dimensional machine vision system, and improves imaging quality. Using the adaptive exposure technique, imaging quality is evaluated and the exposure time automatically adjusted so that blue light coded surface images of the measured target are obtained under different illumination conditions; these images are fused and then processed by three-dimensional imaging to obtain a blue light three-dimensional point cloud picture and hence complete three-dimensional imaging. The preliminary and blue light three-dimensional point cloud pictures are fused by a three-dimensional point cloud fusion completion method to obtain a fused point cloud picture; repeated exposures yield fused point cloud pictures under different illumination conditions, which are fused again into a re-fused point cloud picture to improve imaging accuracy and completeness. Finally, the hole regions of the re-fused point cloud picture are completed according to the features of the surrounding three-dimensional point cloud to obtain a complete three-dimensional point cloud, which resolves the influence of ambient light on the three-dimensional machine vision system, makes the method applicable to more industrial scenes, and improves its practicability.
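The hole completion described in the summary (and in step S9) can be sketched on a gridded depth representation of the point cloud, where empty cells are filled from surrounding valid values. This simple neighborhood-mean fill is an illustrative stand-in for "completing the hole region according to the features of the surrounding three-dimensional point cloud", which the patent does not further specify.

```python
import numpy as np

def fill_holes(depth, iterations=10):
    """Fill empty (zero) cells of a gridded point cloud / depth map from
    the mean of surrounding valid values, repeating until no holes remain
    or the iteration budget is spent."""
    filled = depth.astype(np.float64).copy()
    for _ in range(iterations):
        holes = np.argwhere(filled == 0)
        if holes.size == 0:
            break
        for y, x in holes:
            neigh = filled[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            valid = neigh[neigh > 0]          # surrounding 3D points
            if valid.size:
                filled[y, x] = valid.mean()
    return filled
```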
The foregoing describes preferred embodiments of the invention and is not intended to limit it in any way. Those skilled in the art can make various equivalent changes and modifications on the basis of the above embodiments, and all such equivalent changes and modifications falling within the scope of the claims belong to the protection scope of the present invention.

Claims (10)

1. An ambient light suppression imaging method for a three-dimensional machine vision system, comprising the steps of:
step S1, acquiring a preliminary surface image of the measured target by using a camera group and processing it to obtain a preliminary three-dimensional point cloud picture, wherein the camera group comprises at least two cameras;
step S2, controlling the structured light projector to project a high-resolution blue light speckle code onto the measured target;
step S3, collecting surface images of the measured target with the camera group, and filtering white light from the surface image collected by each camera to retain the blue light information code, obtaining the blue light coded surface image of the measured target of each camera;
step S4, evaluating the quality of the obtained blue light coded surface image of the measured target of each camera by an adaptive exposure method, automatically adjusting the exposure time according to the evaluation result to collect the surface image of the measured target multiple times, and obtaining blue light coded surface images of the measured target of each camera;
step S5, fusing, for each camera, the blue light coded surface images of the measured target under the multiple exposures to obtain a fused blue light surface image of each camera;
step S6, obtaining a blue light three-dimensional point cloud picture by a three-dimensional imaging algorithm from the obtained fused blue light surface images of the cameras;
step S7, fusing the preliminary three-dimensional point cloud picture and the blue light three-dimensional point cloud picture by a three-dimensional point cloud fusion completion method to obtain a fused point cloud picture;
step S8, performing multiple exposures by repeatedly executing steps S1-S7, and re-fusing the fused point cloud pictures obtained under the multiple exposures to obtain a re-fused point cloud picture;
step S9, acquiring the hole region in the re-fused point cloud picture, and completing the hole region according to the features of the surrounding three-dimensional point cloud to obtain a complete three-dimensional point cloud picture.
2. The three-dimensional machine vision system ambient light suppression imaging method of claim 1, wherein the number of cameras of the set of cameras is two.
3. The three-dimensional machine vision system ambient light suppression imaging method of claim 2, wherein the step S1 specifically comprises:
step S11, adopting two cameras to respectively collect the surface images of the measured object, and obtaining the preliminary surface images of the measured object of each camera;
step S12, evaluating the quality of the obtained preliminary surface image of the measured target of each camera by adopting a self-adaptive exposure method, automatically adjusting the exposure time according to the evaluation result to acquire the surface image of the measured target for multiple times, and acquiring the preliminary surface image of the measured target of each camera;
step S13, respectively fusing the preliminary surface images of the measured target of each camera under multiple exposures according to each camera to obtain fused preliminary surface images of each camera;
step S14, acquiring the reflective area of the fused preliminary surface image of each camera, comparing the fused preliminary surface images of the two cameras, finding a corresponding area in the fused preliminary surface image of the other camera according to the characteristic points around the reflective area of each camera to complete the fused preliminary surface image of the camera, and acquiring the fused complete surface image of each camera;
and step S15, acquiring a preliminary three-dimensional point cloud picture by adopting a three-dimensional imaging algorithm according to the obtained fusion completion surface image of each camera.
4. The three-dimensional machine vision system ambient light suppression imaging method of claim 3, wherein the step S12 specifically comprises:
step S121, analyzing and judging whether the obtained preliminary surface images of the detected target of each camera have regions which are not suitable for exposure, if so, executing step S122;
step S122, calculating a proper exposure value and automatically adjusting the exposure time;
step S123, using the two cameras to respectively collect surface images of the measured target with the adjusted exposure time, and obtaining the adjusted preliminary surface image of the measured target of each camera;
and step S124, analyzing and judging whether the adjusted preliminary surface images of the detected targets of the cameras have regions with improper exposure, if so, returning to the step S122, and if not, executing the step S13.
5. The three-dimensional machine vision system ambient light suppression imaging method of claim 3, wherein the step S15 specifically comprises:
step S151, respectively taking the obtained fusion completion surface images of the cameras as a reference image and a target image;
step S152, calculating the similarity value of the pixel point of the reference image relative to the pixel point in the parallax searching range of the target image, and obtaining the highest similarity value of the pixel point of the reference image;
step S153, acquiring a point of which the highest value of the similarity of the pixel points of the reference image is higher than a preset threshold value as a corresponding point of the pixel points of the reference image in the target image, and calculating a parallax value of a corresponding binocular camera;
step S154, a preliminary parallax map is prepared according to the acquired parallax values of the corresponding points of the pixel points of the reference image in the target image in the parallax search range of the target image, the connectivity of the preliminary parallax map is checked, the mismatching points and the corresponding parallax values are eliminated, and a corrected parallax map is acquired;
step S155, performing sub-pixel interpolation on the corrected disparity map to obtain a precision disparity map;
and step S156, solving and obtaining a corresponding depth map and a three-dimensional reconstructed preliminary three-dimensional point cloud map according to the obtained precision parallax map.
6. The three-dimensional machine vision system ambient light suppression imaging method of claim 2, wherein the step S4 specifically comprises:
step S41, analyzing and judging whether the obtained blue light coded surface image of the detected target of each camera has an area which is not suitable for exposure, if yes, executing step S42;
step S42, calculating a proper exposure value and automatically adjusting the exposure time;
step S43, adopting two cameras to respectively collect the surface images of the measured target according to the adjusted exposure time, respectively adopting a filtering technology to filter white light on the surface images collected by each camera so as to reserve the blue light information codes of the surface images, and obtaining the blue light coded surface images of the measured target of each camera;
and step S44, analyzing and judging whether the adjusted blue light coded surface images of the detected objects of the cameras have regions with improper exposure, if so, returning to the step S42, and if not, executing the step S5.
7. The three-dimensional machine vision system ambient light suppression imaging method of claim 6, further comprising after step S41:
and S41.1, if the blue light coded surface image of the detected target of each camera does not have an area which is not suitable for exposure, acquiring a blue light three-dimensional point cloud picture by adopting a three-dimensional imaging algorithm according to the obtained blue light coded surface image of the detected target of each camera, and executing the step S7.
8. The three-dimensional machine vision system ambient light suppression imaging method of claim 2, wherein the step S6 specifically comprises:
step S61, respectively taking the obtained fused blue light surface images of the cameras as a reference image and a target image;
step S62, calculating the similarity value of the pixel point of the reference image relative to the pixel point in the parallax searching range of the target image, and acquiring the highest similarity value of the pixel point of the reference image;
step S63, acquiring a point of which the highest value of the similarity of the pixel points of the reference image is higher than a preset threshold value as a corresponding point of the pixel points of the reference image in the target image, and calculating the parallax value of the corresponding binocular camera;
step S64, a preliminary parallax image is prepared according to the obtained parallax values of corresponding points of the pixel points of the reference image in the target image in the parallax search range of the target image, the connectivity of the preliminary parallax image is checked, the mismatching points and the corresponding parallax values are eliminated, and a corrected parallax image is obtained;
step S65, performing sub-pixel interpolation on the corrected disparity map to obtain a precision disparity map;
and step S66, solving and obtaining a corresponding depth map and a three-dimensionally reconstructed blue light three-dimensional point cloud map according to the obtained precision parallax map.
9. The three-dimensional machine vision system ambient light rejection imaging method of claim 1, wherein the number of said multiple exposures in step S8 ranges from 5 to 10.
10. The three-dimensional machine vision system ambient light suppression imaging method of claim 1, further comprising, before step S1:
and step S00, controlling the structured light projector device to project white light to the measured object.
CN202010967425.5A 2020-09-15 2020-09-15 Three-dimensional machine vision system ambient light inhibition imaging method Active CN112104858B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010967425.5A CN112104858B (en) 2020-09-15 2020-09-15 Three-dimensional machine vision system ambient light inhibition imaging method

Publications (2)

Publication Number Publication Date
CN112104858A CN112104858A (en) 2020-12-18
CN112104858B true CN112104858B (en) 2022-05-27

Family

ID=73760497


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113608395B (en) * 2021-07-26 2022-10-14 深圳市仁拓实业有限公司 3D visual environment light inhibition imaging method

Citations (5)

Publication number Priority date Publication date Assignee Title
TW200620978A (en) * 2004-12-13 2006-06-16 Etoms Electronics Corp Method for eliminating environmental light source
CN108519064A (en) * 2018-04-20 2018-09-11 天津工业大学 A kind of reflective suppressing method applied to multi-frequency three-dimensional measurement
CN109996008A (en) * 2019-03-18 2019-07-09 深圳奥比中光科技有限公司 It is a kind of to reduce the method, device and equipment interfered between more depth camera systems
CN110891146A (en) * 2018-09-10 2020-03-17 浙江宇视科技有限公司 Strong light inhibition method and device
CN111145342A (en) * 2019-12-27 2020-05-12 山东中科先进技术研究院有限公司 Binocular speckle structured light three-dimensional reconstruction method and system

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN112119628B (en) * 2018-03-20 2022-06-03 魔眼公司 Adjusting camera exposure for three-dimensional depth sensing and two-dimensional imaging




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Environmental Light Suppression Imaging Method for 3D Machine Vision Systems

Effective date of registration: 20231113

Granted publication date: 20220527

Pledgee: Shenzhen hi tech investment small loan Co.,Ltd.

Pledgor: Foshan Xianyang Technology Co.,Ltd.

Registration number: Y2023980065307