CN111354047A - Camera module positioning method and system based on computer vision - Google Patents


Info

Publication number
CN111354047A
CN111354047A
Authority
CN
China
Prior art keywords
image
module
roi
fitting
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811577406.0A
Other languages
Chinese (zh)
Other versions
CN111354047B (en)
Inventor
孔庆杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingrui Vision Intelligent Technology Shanghai Co ltd
Original Assignee
Riseye Intelligent Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Riseye Intelligent Technology Shenzhen Co ltd filed Critical Riseye Intelligent Technology Shenzhen Co ltd
Priority to CN201811577406.0A priority Critical patent/CN111354047B/en
Publication of CN111354047A publication Critical patent/CN111354047A/en
Application granted granted Critical
Publication of CN111354047B publication Critical patent/CN111354047B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20036 Morphological image processing
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker
    • G06T2207/30208 Marker matrix

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a camera module positioning method and system based on computer vision. The method comprises the following steps: acquiring an image of a camera module with a high-definition camera; preprocessing the image; segmenting the preprocessed image according to the module template parameters to obtain the ROI of each lens; performing preliminary fitting on each lens ROI to obtain the respective circle centers and mapping them back to the original image; determining the coordinates of each fitted center point and determining the error of the point according to the rule of adjacent points; determining a loss function; and traversing according to the loss function, adjusting each coordinate point to the position with the currently attainable minimum loss. The invention processes the original image of the camera module with computer vision, fits using the shape characteristics of the image, and, in combination with the detection precision requirement, adjusts the positions of the fitted points within a limited range, thereby reducing the error.

Description

Camera module positioning method and system based on computer vision
Technical Field
The invention relates to the field of camera module detection and positioning, in particular to a camera module positioning method and system based on computer vision.
Background
In the prior art, geometric positions are located with traditional computer vision techniques, specifically as follows: the acquired image is first segmented, the ROI of each lens is obtained, and each ROI is fitted to obtain a fitted center point. This approach has the following problem: during shape fitting, a certain deviation may exist owing to unpredictable noise points and the limits of pixel resolution, and even a small deviation has a large influence in high-precision camera module inspection.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a camera module positioning method and system based on computer vision that further refines the fitted position of the camera module, addressing the deviations caused in the prior art by unpredictable noise points and the limits of pixel resolution.
The technical scheme of the invention for solving the above technical problems is as follows. A camera module positioning method based on computer vision comprises the following steps: acquiring an image of the camera module with an acquisition device; preprocessing the image; segmenting the preprocessed image according to the module template parameters to obtain the ROI of each lens; performing preliminary fitting on each lens ROI to obtain the respective circle centers and mapping them back to the original image; determining the coordinates of each fitted center point and determining the error of the point according to the rule of adjacent points; determining a loss function according to the standard position of the camera module; and traversing the errors determined by the rule of adjacent points according to the loss function, adjusting each coordinate point to the position with the currently attainable minimum loss.
Wherein preprocessing the image comprises: graying the acquired image to simplify its information, then removing noise with Gaussian filtering; binarizing the image with a threshold, segmenting object and background at the pixel level through the threshold operation; and filling holes and removing residual noise via morphological opening and closing operations.
Wherein segmenting the preprocessed image according to the module template parameters to obtain the ROI of each lens comprises: performing straight-line fitting on the edges of the camera module in the preprocessed image to obtain the ROI of the camera module; and, according to the template parameters of the camera module, dividing the obtained module ROI in proportion to the template to obtain the ROI position of each lens.
Wherein performing preliminary fitting on each lens ROI to obtain the respective circle centers and mapping them back to the original image comprises the following steps: extracting the contour of each ROI; performing ellipse fitting on the extracted contour set to obtain the center, width and height of each ellipse; screening the set of fitted ellipses; and mapping each center coordinate back to the original image to obtain the preliminary center coordinates in the global image coordinate system.
Wherein determining the coordinates of each fitted center point and determining the error of the point according to the rule of adjacent points comprises: obtaining, from the resolution of the current image, the proportion that the actual lens distance corresponds to in image pixels; obtaining the adjacent-point distance from the obtained image proportion and the detection precision requirement; and expanding the original circle center into a set according to the obtained preliminary center coordinates and the adjacent-point distance.
Wherein the loss function expression is Loss = m·(1 − |sin(a)|) + n·(L(ABC) − L(standard)), where m and n are constants tuned according to the currently acquired image, L denotes perimeter, L(ABC) is the perimeter of the triangle formed by the three fitted centers, and L(standard) is the perimeter of the pattern under the standard template.
In another aspect, an embodiment of the present invention provides a camera module positioning system based on computer vision, the system comprising: an image acquisition module, used to acquire an image of the camera module with an acquisition device; an image preprocessing module, used to preprocess the image; an image segmentation module, used to segment the preprocessed image according to the module template parameters to obtain the ROI of each lens; a preliminary fitting module, used to perform preliminary fitting on each lens ROI to obtain the respective circle centers and map them back to the original image; an adjacent point determining module, used to determine the coordinates of each fitted center point and determine the error of the point according to the rule of adjacent points; and an optimization loss module, used to determine a loss function, traverse the errors determined by the rule of adjacent points according to the loss function, and adjust each coordinate point to the position with the currently attainable minimum loss.
Wherein the image preprocessing module comprises: a denoising module, used to gray the acquired image to simplify its information and then remove noise with Gaussian filtering; a binarization module, used to binarize the image with a threshold and segment object and background at the pixel level through the threshold operation; and a hole-filling and denoising module, used to fill holes and remove residual noise in the image via morphological opening and closing operations.
Wherein the image segmentation module comprises: a straight-line fitting module, used to perform straight-line fitting on the edges of the camera module in the preprocessed image to obtain the ROI of the camera module; and a position determining module, used to divide the obtained module ROI in proportion to the template according to the template parameters of the camera module, obtaining the ROI position of each lens.
Wherein the preliminary fitting module comprises: the outline extraction module is used for extracting the outline of the lens ROI; the ellipse fitting module is used for carrying out ellipse fitting operation on the extracted contour set to obtain the circle center, the width and the height of each ellipse; the screening module is used for screening all ellipse sets obtained through fitting; and the mapping module is used for mapping each circle center coordinate back to the original image respectively to obtain the initial circle center coordinate in the global image coordinate system.
Wherein the adjacent point determining module comprises: a pixel determining module, used to obtain, from the resolution of the current image, the proportion that the actual lens distance corresponds to in image pixels; and an adjacent-point distance determining module, used to obtain the adjacent-point distance from the obtained image proportion and the detection precision requirement, and to expand the original circle center into a set according to the obtained preliminary center coordinates and the adjacent-point distance.
Wherein the loss function expression of the optimization loss module is Loss = m·(1 − |sin(a)|) + n·(L(ABC) − L(standard)), where m and n are constants tuned according to the currently acquired image, L denotes perimeter, and L(standard) is the perimeter of the pattern under the standard template.
The technical scheme provided by the invention has the following beneficial effects. Aiming at the deviations caused in the prior art by unpredictable noise points and the limits of pixel resolution, the invention provides a camera module positioning method and system based on computer vision. Gaussian filtering smooths the image and removes noise; binarization means the image no longer involves multi-level pixel values, so processing becomes simple and the amount of data to process and compress is small; morphological opening and closing remove the few white and black specks caused by slight differences in lighting between images, reducing the influence of noise; and the shape characteristics of the image are used for fitting, a loss function is set in combination with the detection precision requirement, the positions of the fitted points are adjusted within a limited range, and the coordinates are moved to the minimum-loss position, thereby reducing the error.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a camera module positioning method based on computer vision according to an embodiment of the present invention;
fig. 2 is an exemplary diagram of a triple camera module according to a first embodiment of the present invention;
FIG. 3 is a flowchart of step S200;
FIG. 4 is a flowchart illustrating step S300;
FIG. 5 is a flowchart illustrating step S400;
fig. 6 shows the position of the circle center after rough fitting and the positions of three lenses to be measured according to the first embodiment of the present invention;
FIG. 7 is a flowchart illustrating step S500;
FIG. 8 is a diagram illustrating the definition of the positions of neighboring points according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating a definition of a loss function according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a traversal range of coordinate point optimization according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a camera module positioning system based on computer vision according to a second embodiment of the present invention;
fig. 12 is a schematic structural diagram of a preprocessing module of a camera module positioning system based on computer vision according to a second embodiment of the present invention;
fig. 13 is a schematic structural diagram of an image segmentation module of a camera module positioning system based on computer vision according to a second embodiment of the present invention;
fig. 14 is a schematic structural diagram of a rough fitting module of a camera module positioning system based on computer vision according to a second embodiment of the present invention;
fig. 15 is a schematic structural diagram of a module for determining neighboring points by a camera module positioning system based on computer vision according to a second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Example one
Fig. 1 is a flowchart of a camera module positioning method based on computer vision according to an embodiment of the present invention, and referring to fig. 1, the method includes the following steps:
s100, collecting images of a camera module through collection equipment; fig. 2 is a three-camera module image captured by a capturing device such as a high definition camera according to an embodiment of the present invention;
s200, preprocessing the image;
s300, segmenting the preprocessed image according to the module template parameters to obtain the ROI of each lens;
s400, performing preliminary fitting on each lens ROI to obtain respective circle centers, and mapping the circle centers back to the original image;
s500, determining the coordinates of the obtained fitted central point, and determining the error of the point according to the rule of adjacent points;
s600, determining the composition of a loss function formula according to the standard position of the camera module, traversing the error determined according to the rule of adjacent points according to the loss function, and adjusting the coordinate point to be the position with the minimum loss which can be obtained currently.
Wherein the Loss function expression is Loss(s) m (1-abs (sin (a))) + n (l (abc) -l (standard)), where m and n are constants arbitrarily adjusted and set according to the currently acquired image; l represents the circumference. L (standard) is the perimeter of the pattern under a standard template, with the standard constraints: the three lenses form a triangle with a fixed angle and the distance between the central points of the three lenses.
With reference to fig. 3, step S200 further includes:
s201, carrying out gray processing on the acquired image, simplifying information, and then eliminating noise by using Gaussian filtering;
s202, performing binarization operation on the threshold value of the image, and performing pixel-level segmentation on an object and a background in the image through the threshold value operation;
and S203, refilling the image with noise through the morphological opening and closing operation.
The graying process converts a color image into a grayscale image. The color of each pixel in a color image is determined by the three components R, G and B, each of which can take 256 values (0-255), so a single pixel can take on about 16.7 million (256 × 256 × 256) colors. A grayscale image is a special color image whose R, G and B components are equal, so each pixel varies over only 256 levels; in digital image processing, images of various formats are therefore generally converted to grayscale first to reduce the amount of subsequent computation. Like a color image, a grayscale image still reflects the global and local distribution and characteristics of the luminance levels of the whole image. Gaussian filtering is a smoothing filter, a mathematical model built to transform the energy of the image data; noise belongs to the high-frequency part, and its influence is reduced after Gaussian smoothing.
The binarization operation sets the gray level of each point in the image to either 0 or 255, giving the whole image an obvious black-and-white effect: with a suitable threshold, a grayscale image with 256 brightness levels yields a binary image that still reflects the global and local characteristics of the image. Binarizing the grayscale image means that, in further processing, the set properties of the image depend only on the positions of the points whose pixel values are 0 or 255; multi-level pixel values are no longer involved, processing becomes simple, and the amount of data to process and compress is small. To obtain an ideal binary image, non-overlapping regions are generally defined by closed, connected boundaries. All pixels whose gray level is greater than or equal to the threshold are judged to belong to the specific object and are represented with gray level 255; otherwise the pixel is excluded from the object region and given gray level 0, representing the background or an exceptional object region. If a particular object has a uniform gray level inside and sits on a uniform background of a different gray level, the thresholding method gives good segmentation results.
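As a concrete illustration of the thresholding just described, here is a minimal pure-Python sketch; the patent does not name an implementation, and the nested-list image representation and the threshold of 128 are assumptions for illustration:

```python
def binarize(gray, threshold=128):
    """Binarize a grayscale image (nested lists of 0-255 values):
    pixels at or above the threshold become 255 (object), others 0 (background)."""
    return [[255 if px >= threshold else 0 for px in row] for row in gray]

# A tiny 3x3 grayscale patch: a bright object in the lower-right corner.
gray = [[ 10,  20, 200],
        [ 15, 210, 220],
        [200, 230, 240]]
binary = binarize(gray)
```

In practice the threshold would be chosen per image (e.g. by inspecting the histogram), since lighting varies between captures.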
Wherein the morphological opening operation is erosion followed by dilation; it eliminates fine objects, separates objects connected at thin points and smooths the boundaries of larger objects. It is expressed as:

A ∘ B = (A ⊖ B) ⊕ B

The morphological closing operation is dilation followed by erosion; it fills fine holes in an object, connects adjacent objects and smooths the boundary. It is expressed as:

A • B = (A ⊕ B) ⊖ B

wherein the erosion and dilation operations are defined as

A ⊖ B = {z | (B)_z ⊆ A}

A ⊕ B = {z | (B̂)_z ∩ A ≠ ∅}

Here A ⊖ B is the erosion operation: its result is the set of displacements z such that B, translated by z, is still entirely contained in A. A ⊕ B is the dilation operation: its result is the set of displacements z such that B̂, translated by z, overlaps A in at least one element. A and B are two sets on the image, and B̂ is the reflection of B about its origin.
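These set formulas can be sketched directly on binary images represented as sets of foreground (x, y) coordinates. The following is an illustrative sketch, not the patent's implementation, and the erosion shortcut below assumes the structuring element contains the origin:

```python
def translate(s, z):
    """Shift every point of set s by displacement z."""
    return {(x + z[0], y + z[1]) for (x, y) in s}

def erode(a, b):
    # A erosion B: displacements z such that B translated by z stays inside A.
    # Restricting z to a is valid because b is assumed to contain the origin.
    return {z for z in a if translate(b, z) <= a}

def dilate(a, b):
    # A dilation B: union of A translated by every element of B (equivalent to
    # the displacements z at which the reflection of B overlaps A).
    out = set()
    for z in b:
        out |= translate(a, z)
    return out

def opening(a, b):
    return dilate(erode(a, b), b)   # removes fine objects, smooths boundaries

def closing(a, b):
    return erode(dilate(a, b), b)   # fills fine holes, connects nearby objects

# A solid 3x3 square and a 2-pixel horizontal structuring element.
square = {(x, y) for x in range(3) for y in range(3)}
se = {(0, 0), (1, 0)}
opened = opening(square, se)
```

A solid square larger than the structuring element survives both opening and closing unchanged, which is the idempotence one expects from these operators.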
With reference to fig. 4, step S300 further includes:
s301, performing straight line fitting on the edge of the camera module in the preprocessed image to obtain an ROI of the camera module;
s302, according to template parameters of the camera module, dividing the obtained module ROI according to the proportion of the module ROI to the template, and obtaining the ROI position of each lens;
the ROI (region of interest) refers to a region of interest, a region to be processed is delineated from a processed image in a manner of a square frame, a circle, an ellipse, an irregular polygon, and the like, and is called a region of interest, the region is a key point concerned by image analysis, and the ROI is used to delineate a target, so that the processing time can be reduced, and the accuracy can be increased. Linear fitting is used with the least squares method and an important approach to image segmentation is by edge detection, i.e. detecting where gray levels or structures have abrupt changes, indicating the end of one region and where another starts. Such discontinuities are referred to as edges. Different images have different gray levels, and the boundary generally has obvious edges, so that the images can be segmented by utilizing the characteristics. And fitting the local gray value of the image by using the edge parameter model, and then carrying out edge detection on the fitted parameter model.
With reference to fig. 5, step S400 further includes:
s401, extracting the outline of each ROI; because the preprocessing is carried out before, the three ROIs are binary images at the moment, and each ROI can be directly and respectively subjected to contour extraction;
s402, carrying out ellipse fitting operation on the extracted contour set to obtain the circle center, the width and the height of each ellipse;
s403, screening all ellipse sets obtained through fitting; the specific method comprises the following steps: if either the width or height of the ellipse is less than a certain range, such as if the ellipse is too small, then it is filtered; if the center of the ellipse deviates too far from the center point of the ROI, for example, the ellipse fitted by the noise which cannot be completely eliminated in the preprocessing stage is filtered;
s404, mapping each circle center coordinate back to the original image respectively to obtain a circle center initial coordinate under a global image coordinate system; because the three circle center coordinates obtained by the preliminary fitting are relative to the respective ROI coordinate systems, for the subsequent accurate fitting operation, the three coordinates need to be respectively mapped back to the original image to obtain the circle center preliminary coordinates under the three global image coordinate systems, and the specific mapping method comprises the following steps: and respectively adding the horizontal and vertical coordinates of each circle center to the horizontal and vertical coordinates of the ROI origin of coordinates relative to the global image, thereby obtaining three global circle center coordinates for subsequent processing.
Referring to fig. 6, fig. 6 shows the circle center position and the positions of the three lenses to be measured after the preliminary fitting in step S400 according to the first embodiment of the present invention;
with reference to fig. 7, step S500 further includes:
s501, obtaining the corresponding proportion of the actual lens distance on image pixels according to the resolution of the current image;
s502, obtaining the distance between adjacent points according to the acquired image proportion and the detection precision requirement;
s503, expanding the original circle center into a set according to the acquired rough coordinates of the circle center and the distance between adjacent points.
The distance between adjacent points is measured with the Euclidean distance, which is used to expand each point into a point set. The Euclidean distance is the true distance between two points in m-dimensional space, or the natural length of a vector (i.e. the distance from the point to the origin). In two-dimensional space its expression is

d = √((x₁ − x₂)² + (y₁ − y₂)²)

where (x₁, y₁) and (x₂, y₂) are the horizontal and vertical coordinates of the two points.
FIG. 8 is a diagram illustrating the definition of the positions of neighboring points according to an embodiment of the present invention; as can be seen from fig. 8, when the fitting center point is at the center pixel position in the graph and the distance between adjacent points is 1, the fitting center point becomes a set of points with the distance of 1, and the pixel point is changed from 1 to 9; when the distance between adjacent points is 2, the fitted center point becomes a set of points with a distance of 2, and the number of pixels is changed from 1 to 25.
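Consistent with the pixel counts in Fig. 8 (distance 1 gives 9 points, distance 2 gives 25), the expansion can be sketched as a square neighborhood in which both coordinate offsets are at most d; reading the figure this way is an assumption:

```python
def expand_center(center, d):
    """Expand a fitted center into the set of candidate points whose
    horizontal and vertical offsets are each at most d pixels: a
    (2d + 1) x (2d + 1) square neighborhood, matching Fig. 8."""
    cx, cy = center
    return {(cx + dx, cy + dy)
            for dx in range(-d, d + 1)
            for dy in range(-d, d + 1)}

pts1 = expand_center((10, 10), 1)   # distance 1: the point becomes 9 candidates
pts2 = expand_center((10, 10), 2)   # distance 2: the point becomes 25 candidates
```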
FIG. 9 is a diagram illustrating the definition of the loss function according to an embodiment of the present invention. Referring to fig. 9, the loss function expression is Loss = m·(1 − |sin(a)|) + n·(L(ABC) − L(standard)), where m and n are constants tuned according to the currently acquired image, L denotes perimeter, and L(standard) is the perimeter of the pattern under the standard template, whose constraints are: the three lenses form a triangle with fixed angles and fixed distances between the three lens center points. Because the angle of the image has a large influence, and the distances between the three lens center points are themselves affected by the angles of the triangle, the weight of the angle term in the loss is defined to be about 5 times that of the distance term; the specific formula is adjusted according to the sample under test. The loss function expression involves two judgment conditions: condition 1 judges the angle formed by the two short sides of the triangle composed of the three lenses, and condition 2 judges the lengths of the three sides of that triangle; different function expressions are obtained according to the judgment results.
FIG. 10 is a schematic diagram of the traversal range for coordinate-point optimization according to an embodiment of the present invention. Referring to fig. 10, the points within a distance X (a user-defined constant) of each pixel position are scanned and traversed from top to bottom while the change of the loss function is observed; the position with the minimum loss function value is the target position to be determined.
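The loss-driven traversal of step S600 can be sketched as an exhaustive search over each center's candidate neighborhood. The weights m and n, the standard perimeter, and the sample coordinates below are placeholder values, and the perimeter term n·(L(ABC) − L(standard)) is implemented exactly as written in the text:

```python
import math
from itertools import product

def angle_at(a, b, c):
    """Angle at vertex a of triangle abc, in radians."""
    v1 = (b[0] - a[0], b[1] - a[1])
    v2 = (c[0] - a[0], c[1] - a[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))

def loss(a, b, c, m=5.0, n=1.0, l_standard=24.0):
    # Loss = m*(1 - |sin(alpha)|) + n*(L(ABC) - L(standard)); m is weighted
    # about 5x n because the angle term dominates (placeholder values).
    alpha = angle_at(a, b, c)
    l_abc = math.dist(a, b) + math.dist(b, c) + math.dist(c, a)
    return m * (1 - abs(math.sin(alpha))) + n * (l_abc - l_standard)

def neighborhood(p, d):
    return [(p[0] + dx, p[1] + dy)
            for dx in range(-d, d + 1) for dy in range(-d, d + 1)]

def refine(a, b, c, d=1):
    """Traverse every candidate combination within distance d of the three
    fitted centers and keep the combination with minimum loss."""
    return min(product(neighborhood(a, d), neighborhood(b, d), neighborhood(c, d)),
               key=lambda t: loss(*t))

# Three roughly fitted centers (hypothetical pixel coordinates).
a, b, c = (0, 0), (6, 1), (1, 8)
best = refine(a, b, c, d=1)
```

Because the search is confined to each center's small neighborhood, the traversal is cheap (here 9³ = 729 combinations) and can only improve on the initially fitted positions.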
The embodiment of the invention reduces image error through the camera module positioning method based on computer vision. Specifically, graying and Gaussian filtering smooth the image and remove noise; binarization means the image no longer involves multi-level pixel values, so processing becomes simple and the amount of data to process and compress is small; morphological opening and closing remove the few white and black specks caused by slight differences in lighting between images, reducing the influence of noise; and the shape characteristics of the image are used for fitting, a loss function is set in combination with the detection precision requirement, the positions of the fitted points are adjusted within a limited range, and the coordinates are moved to the minimum-loss position, thereby reducing the error.
Example two
The embodiment of the invention provides a camera module positioning system based on computer vision, which is suitable for the camera module positioning method based on computer vision described above. Referring to fig. 11, the system comprises: the image acquisition module 100, used for acquiring the image of the camera module through acquisition equipment; the image preprocessing module 200, connected to the image acquisition module 100 and configured to perform preprocessing operations on the image; the image segmentation module 300, connected to the image preprocessing module 200 and configured to segment the preprocessed image according to the module template parameters to obtain the ROI of each lens; the preliminary fitting module 400, connected to the image segmentation module 300 and used for performing preliminary fitting on each lens ROI to obtain the respective circle centers and mapping them back to the original image; the adjacent point determining module 500, connected to the preliminary fitting module 400, for determining the coordinates of the obtained fitted center points and determining the error of each point according to the rule of adjacent points; and the optimization loss module 600, connected to the adjacent point determining module 500 and configured to determine a loss function, traverse according to the loss function and the error determined by the rule of adjacent points, and adjust the coordinate point to the position with the minimum currently obtainable loss.
Fig. 12 is a schematic structural diagram of the preprocessing module of a camera module positioning system based on computer vision according to the second embodiment of the present invention; referring to fig. 12, the image preprocessing module 200 comprises: the denoising module 201, configured to perform graying processing on the acquired image to simplify the information and then eliminate noise with Gaussian filtering; the binarization module 202, configured to perform a threshold-based binarization operation on the image, segmenting object and background at the pixel level through the thresholding operation; and the filling and noise-preventing module 203, configured to fill holes in the image and remove residual noise through morphological opening and closing operations.
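A stdlib-only sketch of the binarization and opening/closing steps on a tiny list-of-lists image; a production system would use an image-processing library, and the 3×3 neighborhood used here is an illustrative choice:

```python
def binarize(img, thresh):
    """Threshold a grayscale image (list of rows) into a 0/1 mask."""
    return [[1 if v > thresh else 0 for v in row] for row in img]

def _morph(img, combine):
    """Apply min (erosion) or max (dilation) over each 3x3 neighborhood."""
    h, w = len(img), len(img[0])
    return [[combine(img[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)] for y in range(h)]

def opening(img):
    """Erosion then dilation: removes small white specks."""
    return _morph(_morph(img, min), max)

def closing(img):
    """Dilation then erosion: fills small black holes."""
    return _morph(_morph(img, max), min)
```

Opening deletes an isolated white pixel but preserves a solid 3×3 block, which is exactly the speck-removal behavior the text describes.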
Fig. 13 is a schematic structural diagram of the image segmentation module of a camera module positioning system based on computer vision according to the second embodiment of the present invention; referring to fig. 13, the image segmentation module 300 comprises: the straight line fitting module 301, configured to perform straight line fitting on the edges of the camera module in the preprocessed image to obtain the ROI of the camera module; and the position determining module 302, configured to segment the obtained module ROI in proportion to the template according to the template parameters of the camera module, obtaining the ROI position of each lens.
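The proportional segmentation can be sketched as scaling each lens rectangle from template coordinates into the detected module ROI; the (x, y, w, h) convention and all names are illustrative assumptions, not from the patent:

```python
def lens_rois(module_roi, template_lenses, template_size):
    """Scale each lens rectangle from template coordinates into the
    detected module ROI by the ratio module/template.

    module_roi: (x, y, w, h) of the module in the image;
    template_size: (tw, th) of the template;
    template_lenses: list of (x, y, w, h) in template coordinates.
    """
    mx, my, mw, mh = module_roi
    tw, th = template_size
    sx, sy = mw / tw, mh / th          # module-to-template scale factors
    return [(mx + lx * sx, my + ly * sy, lw * sx, lh * sy)
            for (lx, ly, lw, lh) in template_lenses]
```

A 100×50 template lens at (10, 10) maps into a 200×100 module ROI at (10, 20) as a rectangle at (30, 40), twice the size.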
Fig. 14 is a schematic structural diagram of the preliminary fitting module of a camera module positioning system based on computer vision according to the second embodiment of the present invention; as can be seen from fig. 14, the preliminary fitting module 400 comprises: the contour extraction module 401, configured to perform contour extraction on each lens ROI; the ellipse fitting module 402, configured to perform an ellipse fitting operation on each extracted contour set to obtain the circle center, width and height of each ellipse; the screening module 403, configured to screen all ellipse sets obtained through fitting, specifically: if the width or height of an ellipse falls below a certain range, i.e. the ellipse is too small, it is filtered out; and if the center of an ellipse deviates too far from the center point of the ROI, for example an ellipse fitted to noise that could not be completely eliminated in the preprocessing stage, it is also filtered out; and the mapping module 404, configured to map each circle center coordinate back to the original image to obtain the rough circle center coordinates in the global image coordinate system, the mapping method being: adding the horizontal and vertical coordinates of each circle center to the horizontal and vertical coordinates of the ROI coordinate origin relative to the global image, thereby obtaining three global circle center coordinates for subsequent processing.
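The screening and mapping steps can be sketched as follows; the size and offset thresholds are illustrative placeholders, since the patent leaves the exact ranges open:

```python
def screen_and_map(ellipses, roi_origin, roi_size,
                   min_size=4.0, max_offset_ratio=0.35):
    """Filter fitted ellipses, then map centers to global coordinates.

    ellipses: list of ((cx, cy), w, h) in ROI coordinates;
    roi_origin: (x, y) of the ROI origin in the global image;
    roi_size: (w, h) of the ROI.
    """
    ox, oy = roi_origin
    rw, rh = roi_size
    centers = []
    for (cx, cy), w, h in ellipses:
        if w < min_size or h < min_size:
            continue                      # too small: likely noise
        if (abs(cx - rw / 2) > rw * max_offset_ratio or
                abs(cy - rh / 2) > rh * max_offset_ratio):
            continue                      # center deviates too far from ROI center
        # Map back: add the ROI origin offsets to the local center.
        centers.append((ox + cx, oy + cy))
    return centers
```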
Fig. 15 is a schematic structural diagram of the adjacent point determining module of a camera module positioning system based on computer vision according to the second embodiment of the present invention; as can be seen from fig. 15, the adjacent point determining module 500 comprises: the pixel determining module 501, used for obtaining the proportion of the actual lens distance to the image pixels according to the resolution of the current image; and the adjacent point distance determining module 502, used for obtaining the distance between adjacent points according to the obtained image proportion and the detection precision requirement, and expanding the original circle center into a set according to the obtained rough circle center coordinates and the obtained adjacent point distance.
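The expansion of a rough center into a set of adjacent candidate points can be sketched as follows; the pixels-per-millimetre conversion and all parameter names are illustrative assumptions, not from the patent:

```python
def candidate_centers(center, image_width, sensor_width_mm, precision_mm):
    """Expand a rough center into a set of neighboring candidates.

    Pixels per millimetre follow from the current resolution; the
    neighbor step is the required detection precision converted
    to pixels (at least one pixel).
    """
    px_per_mm = image_width / sensor_width_mm
    step = max(1, round(precision_mm * px_per_mm))
    cx, cy = center
    return {(cx + dx * step, cy + dy * step)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
```

At 1000 px across a 10 mm field, a 0.02 mm precision gives a 2 px step, expanding one center into a 3×3 set of nine candidates.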
The loss function of the optimization loss module is Loss = m·(1 − |sin(A)|) + n·(L(ABC) − L(standard)), where m and n are constants tuned for the currently acquired image, L denotes the perimeter, and L(standard) is the perimeter of the pattern under the standard template.
The camera module positioning system based on computer vision provided by the embodiment of the invention achieves the purpose of reducing image errors and further improves the accuracy of the module position. Specifically, the denoising module smooths and filters the image to eliminate noise; after the binarization module the image no longer involves multi-level pixel values, so processing becomes simple and the amount of data to process and compress is small; the filling and noise-preventing module removes the small number of white and black spots generated by slight differences in the lighting conditions of each image, reducing the influence of noise on the image; and the straight line fitting module performs fitting using the shape characteristics of the image, after which the optimization loss module sets a loss function according to the detection precision requirement, allowing the positions of the fitted points to be adjusted so that the coordinates of the minimum-loss position are obtained, thereby reducing the error.
It should be noted that the division into the functional modules described above is merely illustrative of how the positioning method may be implemented by the system; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the system embodiment and the method embodiments provided above belong to the same concept; the specific implementation process of the system is described in detail in the method embodiments and is not repeated here.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A camera module positioning method based on computer vision is characterized by comprising the following steps:
collecting the image of the camera module through collection equipment;
performing a preprocessing operation on the image;
segmenting the preprocessed image according to the module template parameters to obtain each lens ROI;
performing preliminary fitting on each lens ROI to obtain respective circle centers, and mapping the circle centers back to the original image;
determining the coordinate of the obtained fitted central point, and determining the error of the point according to the rule of adjacent points;
and determining a loss function according to the standard position of the camera module, traversing according to the error according to the loss function, and adjusting the coordinate point to be the position with the minimum loss which can be obtained currently.
2. The method of claim 1, wherein the pre-processing the image comprises:
carrying out graying processing on the acquired image to simplify information, and then eliminating noise by using Gaussian filtering;
performing a threshold-based binarization operation on the image, and segmenting object and background in the image at the pixel level through the thresholding operation;
filling holes in the image and removing noise through morphological opening and closing operations.
3. The method of claim 1, wherein the segmenting the pre-processed image according to the module template parameters to obtain the ROI of each shot comprises:
for the preprocessed image, performing straight line fitting on the edge of a camera module in the preprocessed image to obtain an ROI of the camera module;
and according to the template parameters of the camera module, dividing the obtained module ROI according to the proportion of the module ROI to the template, and obtaining the approximate ROI position of each lens.
4. The method of claim 1, wherein the performing the preliminary fitting on each lens ROI to obtain a respective center of circle and mapping back to the original image comprises:
extracting the outline of each ROI;
carrying out ellipse fitting operation on each extracted contour set to obtain the circle center, width and height of each ellipse;
screening all ellipse sets obtained through fitting;
and mapping each circle center coordinate back to the original image respectively to obtain the initial circle center coordinate in the global image coordinate system.
5. The method of claim 1, wherein determining coordinates of the center point of the obtained fit and determining the error of the point according to the regularity of adjacent points comprises:
obtaining the corresponding proportion of the actual distance of the lens on the image pixel according to the resolution of the current image;
obtaining the distance between adjacent points according to the obtained image proportion and the detection precision requirement;
and expanding the original circle center into a set according to the acquired initial coordinates of the circle center and the distance between adjacent points.
6. A camera module positioning system based on computer vision, the system comprising:
the image acquisition module is used for acquiring the image of the camera module through acquisition equipment;
the image preprocessing module is used for preprocessing the image;
the image segmentation module is used for segmenting the preprocessed image according to the module template parameters to obtain the ROI of each lens;
the preliminary fitting module is used for carrying out preliminary fitting on each lens ROI respectively to obtain respective circle centers and mapping the circle centers back to the original image;
an adjacent point determining module, which is used for determining the coordinates of the obtained fitted central point and determining the error of the point according to the rule of the adjacent points;
and the optimization loss module is used for determining a loss function, traversing according to the loss function and the error, and adjusting the coordinate point to be the position with the minimum loss which can be obtained currently.
7. The system of claim 6, wherein the image pre-processing module comprises:
the de-noising module is used for carrying out graying processing on the acquired image to simplify information and then eliminating noise by using Gaussian filtering;
the binarization module is used for performing a threshold-based binarization operation on the image and segmenting object and background in the image at the pixel level through the thresholding operation;
and the filling and noise-preventing module is used for filling holes in the image and removing noise through morphological opening and closing operations.
8. The system of claim 7, wherein the image segmentation module comprises:
the linear fitting module is used for performing linear fitting on the edge of the camera module in the preprocessed image to obtain an ROI of the camera module;
and the position determining module is used for segmenting the obtained module ROI according to the proportion of the module ROI and the template according to the template parameters of the camera module, and obtaining the position of each lens ROI.
9. The system of claim 7, wherein the preliminary fitting module comprises:
the outline extraction module is used for extracting the outline of each lens ROI;
the ellipse fitting module is used for carrying out ellipse fitting operation on each extracted contour set to obtain the circle center, the width and the height of each ellipse;
the screening module is used for screening all ellipse sets obtained through fitting;
and the mapping module is used for mapping each circle center coordinate back to the original image respectively to obtain the initial circle center coordinate in the global image coordinate system.
10. The system of claim 7, wherein the determine neighboring points module comprises:
the pixel determining module is used for obtaining the corresponding proportion of the actual distance of the lens on the image pixel according to the resolution of the current image;
and the adjacent point distance determining module is used for obtaining the distance between adjacent points according to the obtained image proportion and the detection precision requirement, and expanding the original circle center into a set according to the obtained preliminary coordinate of the circle center and the distance between the adjacent points.
CN201811577406.0A 2018-12-20 2018-12-20 Computer vision-based camera module positioning method and system Active CN111354047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811577406.0A CN111354047B (en) 2018-12-20 2018-12-20 Computer vision-based camera module positioning method and system


Publications (2)

Publication Number Publication Date
CN111354047A true CN111354047A (en) 2020-06-30
CN111354047B CN111354047B (en) 2023-11-07

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111882530A (en) * 2020-07-15 2020-11-03 苏州佳智彩光电科技有限公司 Sub-pixel positioning map generation method, positioning method and device
CN113267502A (en) * 2021-05-11 2021-08-17 江苏大学 Micro-motor friction plate defect detection system and detection method based on machine vision
CN116258838A (en) * 2023-05-15 2023-06-13 青岛环球重工科技有限公司 Intelligent visual guiding method for duct piece mold clamping system
CN116309799A (en) * 2023-02-10 2023-06-23 四川戎胜兴邦科技股份有限公司 Target visual positioning method, device and system
CN117808770A (en) * 2023-12-29 2024-04-02 布劳宁(上海)液压气动有限公司 Check valve surface quality detecting system based on machine vision

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103033126A (en) * 2011-09-29 2013-04-10 鸿富锦精密工业(深圳)有限公司 Annular object location method and system
US20140285676A1 (en) * 2011-07-25 2014-09-25 Universidade De Coimbra Method and apparatus for automatic camera calibration using one or more images of a checkerboard pattern
CN106204544A (en) * 2016-06-29 2016-12-07 南京中观软件技术有限公司 A kind of automatically extract index point position and the method and system of profile in image
CN108332681A (en) * 2018-01-03 2018-07-27 东北大学 A kind of determination method of the big plastic bending sectional profile curve lin of thin-wall pipes


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20210305
Address after: 200333 room 808, 8th floor, No.6 Lane 600, Yunling West Road, Putuo District, Shanghai
Applicant after: Jingrui vision intelligent technology (Shanghai) Co.,Ltd.
Address before: 409-410, building A1, Fuhai information port, Fuyong street, Bao'an District, Shenzhen, Guangdong 518000
Applicant before: RISEYE INTELLIGENT TECHNOLOGY (SHENZHEN) Co.,Ltd.
GR01 Patent grant