CN112634140B - High-precision full-size visual image acquisition system and method - Google Patents

High-precision full-size visual image acquisition system and method

Info

Publication number
CN112634140B
CN112634140B (application CN202110248456.XA)
Authority
CN
China
Prior art keywords
image
visual
translation mechanism
coordinate system
rectangular coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110248456.XA
Other languages
Chinese (zh)
Other versions
CN112634140A (en)
Inventor
戴江松
韩广良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Science And Technology Intelligent Technology Guangzhou Co ltd
Guangzhou Songhe Intelligent Technology Co ltd
Original Assignee
China Science And Technology Intelligent Technology Guangzhou Co ltd
Guangzhou Songhe Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Science And Technology Intelligent Technology Guangzhou Co ltd, Guangzhou Songhe Intelligent Technology Co ltd filed Critical China Science And Technology Intelligent Technology Guangzhou Co ltd
Priority to CN202110248456.XA priority Critical patent/CN112634140B/en
Publication of CN112634140A publication Critical patent/CN112634140A/en
Application granted granted Critical
Publication of CN112634140B publication Critical patent/CN112634140B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a high-precision full-size visual image acquisition system comprising a translation mechanism control module, a visual imaging module and a visual image splicing processing module, wherein the translation mechanism control module is connected with the visual image splicing processing module. The translation mechanism control module controls the translation mechanism to drive the visual camera so that the camera translates relative to the target to be measured and traverses every position of the full-size visual image to be measured; the visual imaging module acquires the scene image within the current field of view and sends it to the visual image splicing processing module; and the visual image splicing processing module completes the splicing of the high-precision full-size visual image. The invention is scientific, reasonable and convenient to use; by adopting the two-layer operation of coarse positioning and fine positioning, it expands the applicability of the system and reduces the cost of the system while guaranteeing its real-time performance.

Description

High-precision full-size visual image acquisition system and method
Technical Field
The invention relates to the technical field of image data processing, in particular to a high-precision full-size visual image acquisition system and method.
Background
In machine-vision measurement of high-precision workpieces or objects, the visual imaging resolution is high, usually 10 to 50 micrometers and sometimes only a few micrometers. With existing imaging chip technology, a single field of view at 6K resolution covers a measuring range of roughly 60 to 300 millimeters, while the workpiece to be measured is often larger than a single field of view, so images from several fields of view generally have to be combined in order to measure a given dimension or contour. Two classes of techniques currently address the acquisition of full-size scene images. In the first, a translation mechanism moves the equipment so that the visual camera and the workpiece to be measured are in relative motion; an image is acquired in each field of view during the motion, the position of each visual image is converted from the travel of the translation mechanism, and the required full-size image is formed by splicing. The full-size splicing precision of this approach depends on the precision of the motion mechanism and cannot reach pixel-level accuracy, and achieving high precision greatly increases the cost of the translation mechanism. In the second, several vision cameras are combined: according to the size distribution of the workpiece to be measured, the cameras acquire visual images from different positions, the positions of the visual images are converted using the relationship between the cameras, and the required full-size image is formed by splicing. Therefore, a high-precision full-size visual image acquisition system and method are urgently needed.
Disclosure of Invention
The present invention is directed to a high-precision full-size visual image acquisition system and method, so as to solve the problems raised in the background art.
In order to solve the technical problems, the invention provides the following technical scheme: a high-precision full-size visual image acquisition system comprises a translation mechanism control module, a visual imaging module and a visual image splicing processing module, wherein the translation mechanism control module is connected with the visual image splicing processing module;
the translation mechanism control module is used for controlling the translation mechanism to drive the visual camera to enable the visual camera to translate relative to a target to be measured, the visual camera traverses each position of a full-size visual image to be measured, meanwhile, the real-time position of the translation mechanism is obtained to be used for the visual image splicing processing module to perform coarse positioning, the visual imaging module is used for obtaining a scene image in a visual field and sending the scene image to the visual image splicing processing module, and the visual image splicing processing module is used for completing splicing of the high-precision full-size visual image.
The invention uses the translation mechanism control module to control the translation mechanism to drive the vision camera through each position of the full-size visual image to be measured. While the translation mechanism moves the vision camera relative to the measured target, the visual image splicing processing module calculates the offset relationship between adjacent images in real time: it coarsely positions adjacent images according to the real-time position of the translation mechanism given by the translation mechanism control module, then performs local image feature matching between the two adjacent images on the basis of the coarse positioning and corrects their offset, thereby improving the calculation speed and reducing the system cost.
Furthermore, the vision camera is fixed on the translation mechanism, the translation mechanism is connected with the translation mechanism control module, and the translation mechanism is connected with the vision imaging module.
According to the invention, the vision camera is fixed on the translation mechanism by a fixing support; a single vision camera cooperating with the translation mechanism produces a relative translation between the camera and the measured target and traverses each position of the full-size visual image to be measured. The acquired adjacent images are coarsely spliced by position according to the positional offset of the translation mechanism; the precision of this coarse splicing serves only as the coarse positioning accuracy for the subsequent algorithm and does not affect the splicing precision of the finally obtained full-size visual image.
A high-precision full-size visual image acquisition method comprises the following steps:
step S1: starting the translation mechanism and the visual camera, wherein the translation mechanism drives the visual camera to acquire a local image of the current view field in real time, and turning to the step S2;
step S2: the translation mechanism control module acquires the position of the translation mechanism corresponding to the current field of view in real time, the visual image splicing processing module performs coarse positioning on the series of acquired local images of the current field of view according to the position of the current-field-of-view translation mechanism and coarsely splices the adjacent images according to the coarse positioning, and the process turns to step S3;
step S3: on the basis of rough splicing, the visual image splicing processing module performs adjacent image pixel relation detection by using image correlation characteristic matching to obtain a high-precision pixel-level relative position relation of adjacent images, and then the step S4 is executed;
step S4: and according to the high-precision pixel-level relative position relation of the adjacent images, the visual image splicing processing module performs high-precision image splicing to form a high-precision full-size visual image.
The invention uses the translation mechanism to drive the vision camera through the positions corresponding to all required fields of view. While the translation mechanism moves the vision camera relative to the measured target, an image processing algorithm calculates the offset relationship between adjacent images in real time: the algorithm coarsely positions adjacent images according to the real-time position of the translation mechanism given by the translation mechanism control module and coarsely splices the adjacent images on that basis; after the coarse splicing, the visual image splicing processing module detects the pixel relationship between adjacent images on the basis of the coarse splicing, performs local image feature matching between the two adjacent images and corrects their offset to obtain the accurate offset of the adjacent images. Because the matching is carried out within the small area defined by the coarse positioning, the calculation speed is improved and real-time, rapid calculation is possible. Adjacent images are spliced using coarse positioning followed by fine positioning, yielding the full-size visual image required for measuring a large target to be measured; on the basis of guaranteeing the real-time performance of the system, the applicability of the system is expanded and the cost of the system is reduced.
Further, the step S2 of performing coarse positioning on adjacent images of the acquired series of local images of the current field of view according to the position of the current-field-of-view translation mechanism further includes:
step S201: establishing a rectangular coordinate system XOY, wherein the origin of the rectangular coordinate system XOY is a fixed position, the X axis of the rectangular coordinate system XOY is parallel to the visual image, and the Y axis of the rectangular coordinate system XOY is perpendicular to the X axis; the translation mechanism drives the visual camera to move in the rectangular coordinate system XOY; turning to step S202;
step S202: the translation mechanism control module obtains the coordinates O1(X1, Y1) of the first position of the translation mechanism in the rectangular coordinate system XOY and the coordinates O2(X2, Y2) of the second position of the translation mechanism in the rectangular coordinate system XOY; the visual image splicing processing module establishes a rectangular coordinate system X1O1Y1 with the first position O1 of the translation mechanism as origin, the X1 axis of the rectangular coordinate system X1O1Y1 being parallel to the X axis of the rectangular coordinate system XOY and the Y1 axis of the rectangular coordinate system X1O1Y1 being parallel to the Y axis of the rectangular coordinate system XOY; the visual image splicing processing module establishes a rectangular coordinate system X2O2Y2 with the second position O2 as origin, the X2 axis of the rectangular coordinate system X2O2Y2 being parallel to the X axis of the rectangular coordinate system XOY and the Y2 axis of the rectangular coordinate system X2O2Y2 being parallel to the Y axis of the rectangular coordinate system XOY; the translation mechanism drives the visual camera to acquire a first image at the first position and a second image at the second position; turning to step S203;
step S203: calculating a horizontal offset X and a vertical offset Y of the first image and the second image according to the coordinates O1(X1, Y1) of the first position and the coordinates O2(X2, Y2) of the second position in the rectangular coordinate system XOY, and turning to step S204; the calculation formulas of the horizontal offset X and the vertical offset Y of the first image and the second image are respectively as follows:
X = X2 - X1
Y = Y2 - Y1
wherein X is a horizontal offset of the first image from the second image, X2 is an abscissa of the second location, X1 is an abscissa of the first location, Y is a vertical offset of the first image from the second image, Y2 is an ordinate of the second location, and Y1 is an ordinate of the first location;
step S204: and carrying out coarse positioning on the first image and the second image according to the horizontal offset X and the vertical offset Y of the first image and the second image, and carrying out coarse splicing on the first image and the second image according to the coarse positioning to obtain a coarse splicing image.
According to the invention, the translation position corresponding to the current field of view is obtained in real time from the translation mechanism control module; its precision is that of the translation mechanism, which is sufficient to relate the measured target in the current field of view to the visual camera. On this basis the visual image splicing processing module can quickly perform positional coarse splicing of the series of acquired adjacent images, and this coarse splicing speeds up the acquisition of the full-size visual image.
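As a purely illustrative sketch of steps S201 to S204 (not part of the patent text), the function below derives the coarse horizontal and vertical offsets of two adjacent images from the translation-mechanism positions. The name coarse_offsets and the units_per_pixel scale factor are assumptions made here for clarity: the patent applies X = X2 - X1 directly, which presumes that the mechanism positions are already expressed in image-pixel units.

```python
# Minimal sketch of the coarse positioning of steps S201-S204 (illustrative, not from the patent).
def coarse_offsets(pos1, pos2, units_per_pixel=1.0):
    """Coarse offsets (X, Y) of the second image relative to the first image,
    derived from the translation-mechanism positions O1(X1, Y1) and O2(X2, Y2)
    in the fixed rectangular coordinate system XOY.

    `units_per_pixel` is a hypothetical scale factor: the patent uses
    X = X2 - X1 and Y = Y2 - Y1 directly, i.e. it assumes the mechanism
    positions are already expressed in image-pixel units.
    """
    x1, y1 = pos1
    x2, y2 = pos2
    x_off = (x2 - x1) / units_per_pixel   # horizontal offset X
    y_off = (y2 - y1) / units_per_pixel   # vertical offset Y
    return x_off, y_off


# Values from the embodiment: o1(790, 680), o2(880, 760) -> (90.0, 80.0)
print(coarse_offsets((790, 680), (880, 760)))
```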
Further, the first image and the second image are both generated by the visual imaging module, the first image and the second image are the same size, the first image is located in the rectangular coordinate system X1O1Y1, and the second image is located in the rectangular coordinate system X2O2Y2.
The images generated by the visual imaging module are the same size, which facilitates the splicing of adjacent images; establishing the rectangular coordinate systems makes it convenient to observe the offset between the first image and the second image, so the coarse splicing can be performed rapidly.
Further, after the first image and the second image are coarsely spliced, an overlapping area appears between the first image and the second image; the overlapping area is denoted as the first region in the first image and as the second region in the second image, and the overlapping area corresponds to the same part of the detected target.
After the coarse positioning, the invention can determine that the sub-image in the first region and the sub-image in the second region come from the same part of the detected target, which narrows the detection range of the fine positioning, speeds up the fine-positioning calculation and improves the working efficiency of the system.
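For illustration only, the following sketch derives the first and second regions (the overlapping rectangles used as the fine-positioning search area) from the coarse offsets. The function name and the sign convention (image 2 displaced by (dx, dy) pixels relative to image 1, either component possibly negative) are assumptions made here, not taken from the patent.

```python
def overlap_regions(img_w, img_h, dx, dy):
    """Overlap rectangles of two equally sized images when image 2 is coarsely
    positioned at (dx, dy) pixels relative to image 1 (dx, dy may be negative).

    Returns (region_in_image1, region_in_image2) as (x0, y0, x1, y1) tuples,
    i.e. the 'first region' and the 'second region' of the coarse splicing.
    """
    x0, y0 = max(0, dx), max(0, dy)                 # overlap in image-1 coordinates
    x1, y1 = min(img_w, img_w + dx), min(img_h, img_h + dy)
    if x1 <= x0 or y1 <= y0:
        return None, None                           # the coarse offsets imply no overlap
    region1 = (x0, y0, x1, y1)
    region2 = (x0 - dx, y0 - dy, x1 - dx, y1 - dy)  # same rectangle in image-2 coordinates
    return region1, region2
```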
Further, the detection in step S3, by the visual image stitching processing module on the basis of the coarse stitching, of the pixel relationship between adjacent images using image correlation feature matching further includes obtaining the accurate offset between the first image and the second image according to a standard image correlation matching algorithm, where the accurate offset includes the accurate horizontal offset X1 of the second image relative to the first image and the accurate vertical offset Y1 of the second image relative to the first image.
After the coarse positioning, the invention can determine that the second region necessarily contains a sub-image coming from the same part of the detected target as the first region, so the accurate offset between the first image and the second image can be obtained; because this offset comes from precise image correlation matching, its accuracy is better than one pixel.
Further, the image correlation matching algorithm in step S3 further includes:
step S301: the visual image splicing processing module selects a sub-image T (m, n) as a matching template T in a first region of the first image, wherein m and n are the length and width of the matching template T, and the step S302 is executed;
step S302: the visual image stitching processing module searches a matching region of the matching template T in a second region of the second image, records the matching region as a search graph S (w, h), wherein w and h are the length and width of the search graph S, and then the step S303 is executed;
step S303: searching the region covered by the matching template T (m, n) in the graph S (w, h) as a subgraph SijSequentially calculating the normalized correlation coefficients of the matching template T (m, n) and all the search areas, and turning to the step S304; the calculation formula of the normalized correlation coefficient is as follows:
R(i, j) = Σm Σn [Sij(m, n) · T(m, n)] / √( Σm Σn [Sij(m, n)]² · Σm Σn [T(m, n)]² )
wherein R (i, j) is the correlation coefficient, T (m, n) is the matching template, and Sij is the region of the search graph S covered by the matching template T (m, n);
step S304: after matching search is carried out on all second areas of the second image, the maximum value R of R is found outmax(im,ij) Its corresponding sub-diagram SijPosition (i) ofm,ij) I.e., the matching position of the two, thereby obtaining the precise offset amount of the first image and the second image, which comprises the precise horizontal offset amount X1 of the second image relative to the first image and the precise vertical offset amount Y1 of the second image relative to the first image.
The precise offset between the first image and the second image is calculated using the normalized correlation coefficient; the more similar the two images, the closer the correlation coefficient is to 1. The precise offset obtained in this way comes from precise image correlation matching, so its accuracy is better than one pixel.
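A minimal NumPy sketch of the template matching in steps S301 to S304 is given below, using the un-centered normalized correlation coefficient written above; the exhaustive double loop and the function name ncc_match are illustrative choices made here, not the patent's implementation.

```python
import numpy as np

def ncc_match(search_img, template):
    """Slide `template` (the sub-image T(m, n) taken from the first region) over
    `search_img` (the search graph S(w, h) taken from the second region) and
    return the position with the largest normalized correlation coefficient
    R(i, j), together with that maximum value Rmax."""
    search = np.asarray(search_img, dtype=np.float64)
    tmpl = np.asarray(template, dtype=np.float64)
    th, tw = tmpl.shape
    sh, sw = search.shape
    t_energy = np.sum(tmpl * tmpl)

    best_score, best_pos = -1.0, (0, 0)
    for i in range(sh - th + 1):
        for j in range(sw - tw + 1):
            sub = search[i:i + th, j:j + tw]           # sub-graph Sij covered by T
            denom = np.sqrt(np.sum(sub * sub) * t_energy)
            score = np.sum(sub * tmpl) / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score                        # ((im, jm), Rmax)
```

In practice the same peak can be obtained with an optimized routine such as OpenCV's cv2.matchTemplate with the cv2.TM_CCORR_NORMED method; to push the offset accuracy below one pixel, as the patent attributes to the precise correlation matching, the correlation peak would additionally be interpolated (for example parabolically) around Rmax.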
Further, in step S4, according to the high-precision pixel-level relative position relationship between the adjacent images, the performing, by the visual image stitching processing module, high-precision image stitching further includes:
step S401: the original image size of the first image and the second image is M × N; the accurate horizontal offset X1 and the accurate vertical offset Y1 of the second image relative to the first image obtained in step S304 are taken, and a blank image of size (M + X1) × (N + Y1) is set; go to step S402;
step S402: and copying the first image to the (0, Y1) coordinate position of the blank image, copying the second image to the (X1,0) coordinate position, finishing high-precision splicing of the first image and the second image, and obtaining a high-precision full-size visual image.
According to the invention, once the high-precision pixel-level relative position relationship of adjacent images has been obtained, the visual image splicing processing module can begin high-precision image splicing; the splicing precision at this point is at the pixel level. Fine positioning is carried out on the basis of the coarse positioning, and this two-layer operation of coarse and fine positioning expands the applicability of the system and reduces the cost of the system while guaranteeing its real-time performance.
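A short sketch of steps S401 and S402 follows (illustrative only; grayscale images are assumed and stitch_pair is a hypothetical name): the two images are copied onto a blank canvas of size (M + X1) × (N + Y1) at the positions the patent specifies.

```python
import numpy as np

def stitch_pair(img1, img2, x1_off, y1_off):
    """Pixel-level splicing of two equally sized grayscale images.

    x1_off / y1_off are the precise horizontal / vertical offsets X1, Y1 of the
    second image relative to the first image. Following steps S401-S402, the
    first image is copied to coordinate (0, Y1) and the second image to (X1, 0)
    of a blank image of size (M + X1) x (N + Y1)."""
    rows, cols = img1.shape                                   # N rows, M columns
    canvas = np.zeros((rows + y1_off, cols + x1_off), dtype=img1.dtype)
    canvas[y1_off:y1_off + rows, 0:cols] = img1               # first image at (0, Y1)
    canvas[0:rows, x1_off:x1_off + cols] = img2               # second image at (X1, 0)
    return canvas
```

Where the two copies overlap, the second image simply overwrites the first, matching the plain copy-based placement described in the patent.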
Compared with the prior art, the invention has the following beneficial effects: the translation mechanism drives the vision camera through the positions corresponding to all required fields of view; while the translation mechanism and the measured target are in relative motion, an image processing algorithm calculates the offset relationship between adjacent images in real time, coarsely positions adjacent images according to the real-time position given by the translation mechanism, then performs local image feature matching between the two adjacent images on the basis of the coarse positioning and corrects their offset to obtain the accurate offset of the adjacent images; because the matching is carried out within the small area defined by the coarse positioning, the calculation speed is improved and real-time, rapid operation is possible; the local-area matching avoids interference of similar shapes or texture regions with the feature matching and improves the robustness of the fine positioning; in a common vision measurement system with a motion translation mechanism, the invention acquires high-precision spliced images without depending on the precision of the translation motion; a single vision camera cooperating with the translation mechanism accurately acquires a large-range full-size visual image, reducing the cost of use; the invention does not depend on the precision of the translation mechanism, which is only responsible for providing coarse positioning information to the image processing algorithm; and the two-layer operation of coarse positioning and fine positioning expands the applicability of the system and reduces the cost of the system while guaranteeing its real-time performance.
Drawings
FIG. 1 is a schematic diagram of the overall structure of a high precision full-scale visual image acquisition system;
FIG. 2 is a schematic process flow diagram of a high precision full scale visual image acquisition system;
FIG. 3 is a flow chart diagram of a high precision full-scale visual image acquisition method;
FIG. 4 is a schematic flow chart of coarse positioning in a high-precision full-scale visual image acquisition method;
FIG. 5 is a schematic diagram of coarse positioning in a high-precision full-scale visual image acquisition method;
FIG. 6 is a schematic flow chart of fine positioning in a high-precision full-scale visual image acquisition method;
FIG. 7 is a schematic diagram of fine positioning in a high-precision full-scale visual image acquisition method.
In the figure: 1. a first image; 2. a second image; 3. an overlap region.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1 to 7, the present invention provides a technical solution:
as shown in fig. 1, a high-precision full-size visual image acquisition system comprises a translation mechanism control module, a visual imaging module and a visual image stitching processing module, wherein the translation mechanism control module is connected with the visual image stitching processing module, and the visual imaging module is connected with the visual image stitching processing module;
the translation mechanism control module is used for controlling the translation mechanism to drive the vision camera so that the vision camera translates relative to the target to be measured and traverses each position of the full-size vision image to be measured, while the real-time position of the translation mechanism is acquired for coarse positioning by the vision image splicing processing module; the vision imaging module is used for acquiring the scene image within the field of view and sending it to the vision image splicing processing module; and the vision image splicing processing module is used for completing the splicing of the high-precision full-size vision image.
As shown in fig. 2, the translation mechanism drives the vision camera to obtain the vision image of the current field of view corresponding to the real-time position of the translation mechanism; the visual image splicing processing module performs coarse positioning of adjacent images on the series of acquired local images, obtains the accurate offset of the adjacent images on the basis of the coarse positioning using a standard image correlation matching algorithm, performs fine positioning of the adjacent images according to their accurate offset, and splices the adjacent images according to the fine positioning to obtain the high-precision full-size vision image.
As shown in fig. 3, a high precision full-scale visual image acquisition method comprises the following steps:
step S1: starting the translation mechanism and the visual camera, wherein the translation mechanism drives the visual camera to acquire a local image of the current view field in real time, and turning to the step S2;
step S2: the translation mechanism control module acquires the position of the translation mechanism corresponding to the current field of view in real time, the visual image splicing processing module performs coarse positioning on the series of acquired local images of the current field of view according to the position of the current-field-of-view translation mechanism and coarsely splices the adjacent images according to the coarse positioning, and the process turns to step S3;
step S3: on the basis of rough splicing, the visual image splicing processing module performs adjacent image pixel relation detection by using image correlation characteristic matching to obtain a high-precision pixel-level relative position relation of adjacent images, and then the step S4 is executed;
step S4: and according to the high-precision pixel-level relative position relation of the adjacent images, the visual image splicing processing module performs high-precision image splicing to form a high-precision full-size visual image.
In step S3, the detection by the visual image stitching processing module, on the basis of the coarse stitching, of the pixel relationship between adjacent images using image correlation feature matching further includes obtaining the accurate offset of the first image 1 and the second image 2 according to a standard image correlation matching algorithm, where the accurate offset includes the accurate horizontal offset X1 of the second image relative to the first image and the accurate vertical offset Y1 of the second image relative to the first image.
As shown in fig. 4 to 5, the step in which, in step S2, the visual image stitching processing module performs coarse positioning on adjacent images of the acquired local images of the current field of view according to the position of the current-field-of-view translation mechanism further includes:
step S201: establishing a rectangular coordinate system XOY, wherein the origin of the rectangular coordinate system XOY is a fixed position, the X axis of the rectangular coordinate system XOY is parallel to the visual image, and the Y axis of the rectangular coordinate system XOY is perpendicular to the X axis; the translation mechanism drives the visual camera to move in the rectangular coordinate system XOY; turning to step S202;
step S202: the translation mechanism control module obtains the coordinates O1(X1, Y1) of the first position of the translation mechanism in the rectangular coordinate system XOY and the coordinates O2(X2, Y2) of the second position of the translation mechanism in the rectangular coordinate system XOY; the visual image stitching processing module establishes a rectangular coordinate system X1O1Y1 with the first position O1 of the translation mechanism as origin, the X1 axis of the rectangular coordinate system X1O1Y1 being parallel to the X axis of the rectangular coordinate system XOY and the Y1 axis of the rectangular coordinate system X1O1Y1 being parallel to the Y axis of the rectangular coordinate system XOY; the visual image stitching processing module establishes a rectangular coordinate system X2O2Y2 with the second position O2 as origin, the X2 axis of the rectangular coordinate system X2O2Y2 being parallel to the X axis of the rectangular coordinate system XOY and the Y2 axis of the rectangular coordinate system X2O2Y2 being parallel to the Y axis of the rectangular coordinate system XOY; the translation mechanism drives the vision camera to acquire the first image 1 at the first position and the second image 2 at the second position; turning to step S203;
step S203: calculating a horizontal offset X and a vertical offset Y of the first image 1 and the second image 2 according to the coordinates O1(X1, Y1) of the first position and the coordinates O2(X2, Y2) of the second position in the rectangular coordinate system XOY, and turning to step S204; the calculation formulas of the horizontal shift amount X and the vertical shift amount Y of the first image 1 and the second image 2 are respectively:
X = X2 - X1
Y = Y2 - Y1
wherein X is a horizontal offset of the first image 1 and the second image 2, X2 is an abscissa of the second position, X1 is an abscissa of the first position, Y is a vertical offset of the first image 1 and the second image 2, Y2 is an ordinate of the second position, and Y1 is an ordinate of the first position;
step S204: and carrying out coarse positioning on the first image 1 and the second image 2 according to the horizontal offset X and the vertical offset Y of the first image 1 and the second image 2, and carrying out coarse splicing on the first image 1 and the second image 2 according to the coarse positioning to obtain a coarse spliced image.
The first image 1 and the second image 2 are both generated by the vision imaging module, the first image 1 and the second image 2 are the same size, the first image 1 is located in the rectangular coordinate system X1O1Y1, and the second image 2 is located in the rectangular coordinate system X2O2Y2.
After the first image 1 and the second image 2 are coarsely spliced, an overlapping area 3 appears between the first image 1 and the second image 2; the overlapping area 3 is denoted as the first region in the first image 1 and as the second region in the second image 2, and the overlapping area 3 corresponds to the same part of the detected target.
As shown in fig. 6 to 7, in step S3 the detection by the visual image stitching processing module, on the basis of the coarse stitching, of the pixel relationship between adjacent images using image correlation feature matching further includes obtaining the precise offsets of the first image 1 and the second image 2 according to a standard image correlation matching algorithm, where the precise offsets include the precise horizontal offset X1 of the second image 2 relative to the first image 1 and the precise vertical offset Y1 of the second image 2 relative to the first image 1.
The standard image correlation matching algorithm in step S3 further includes:
step S301: the visual image stitching processing module selects a sub-image T (m, n) as a matching template T in a first region of the first image 1, wherein m and n are the length and width of the matching template T, and then the step S302 is executed;
step S302: the visual image stitching processing module searches a matching region of the matching template T in a second region of the second image 2, records the matching region as a search graph S (w, h), wherein w and h are the length and width of the search graph S, and then goes to step S303;
step S303: searching the region covered by the matching template T (m, n) in the graph S (w, h) as a subgraph SijSequentially calculating the normalized correlation coefficients of the matching template T (m, n) and all the search areas, and turning to the step S304; the calculation formula of the normalized correlation coefficient is as follows:
R(i, j) = Σm Σn [Sij(m, n) · T(m, n)] / √( Σm Σn [Sij(m, n)]² · Σm Σn [T(m, n)]² )
wherein R (i, j) is the correlation coefficient, T (m, n) is the matching template, and Sij is the region of the search graph S covered by the matching template T (m, n);
step S304: after matching search is carried out on all second areas of the second image 2, the maximum value R of R is foundmax(im,ij) Its corresponding sub-diagram SijPosition (i) ofm,ij) I.e., the matching position of the two, thereby obtaining the precise offset amounts of the first image 1 and the second image 2, including the precise horizontal offset amount X1 of the second image 2 with respect to the first image 1 and the precise vertical offset amount Y1 of the second image 2 with respect to the first image 1.
In step S4, according to the high-precision pixel-level relative position relationship between the adjacent images, the process of stitching high-precision images by the visual image stitching module further includes:
step S401: the original image size of the first image 1 and the second image 2 is M × N; the accurate horizontal offset X1 and the accurate vertical offset Y1 of the second image 2 relative to the first image 1 obtained in step S304 are taken, and a blank image of size (M + X1) × (N + Y1) is set; go to step S402;
step S402: and copying the first image 1 to the (0, Y1) coordinate position of the blank image, copying the second image 2 to the (X1,0) coordinate position, completing high-precision splicing of the first image 1 and the second image 2, and obtaining a high-precision full-size visual image.
The first embodiment is as follows:
the translation mechanism drives the vision camera to acquire a first image 1 at a first position o1(790, 680) in the rectangular coordinate system XOY, the translation mechanism drives the vision camera to acquire a second image 2 at a second position o2(880, 760) in the rectangular coordinate system XOY, wherein the first image 1 and the second image 2 have the same size, and a horizontal offset X and a vertical offset Y of the second position o2 relative to the first position o1 are calculated:
X = X2 - X1 = 880 - 790 = 90
Y = Y2 - Y1 = 760 - 680 = 80
wherein X is the horizontal offset and Y the vertical offset of the first image 1 and the second image 2, X2 is the abscissa of the second position, X1 is the abscissa of the first position, Y2 is the ordinate of the second position, and Y1 is the ordinate of the first position. The first image 1 and the second image 2 are coarsely positioned according to the horizontal offset X and the vertical offset Y; the coarsely positioned first image 1 and second image 2 already form an approximate spatial position relationship, and they are coarsely spliced according to the coarse positioning to form an overlapping region 3, which is the same part of the detected target. The overlapping region 3 is denoted as the first region in the first image 1 and as the second region in the second image 2. The visual image stitching processing module selects a sub-image in the first region as the matching template and obtains the accurate horizontal offset X1 and the accurate vertical offset Y1 of the first image 1 and the second image 2 according to the standard image correlation matching method. A blank image is set whose length is the length of the first image 1 plus the accurate horizontal offset of the second image 2 relative to the first image 1 and whose width is the width of the first image 1 plus the accurate vertical offset of the second image 2 relative to the first image 1; the first image 1 is copied to the position with coordinates (0, Y1) in the blank image and the second image 2 is copied to the position with coordinates (X1, 0), giving the finely spliced image of the first image 1 and the second image 2.
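Chained together, the embodiment above might look like the following sketch. The helper functions (coarse_offsets, overlap_regions, ncc_match, stitch_pair) are the illustrative ones sketched earlier, the random arrays merely stand in for real camera frames, and the coarse offsets are reused as the final offsets only because random data cannot be matched meaningfully.

```python
import numpy as np

# Stand-in local images of two adjacent fields of view (real camera frames in practice).
img1 = np.random.randint(0, 256, (200, 240), dtype=np.uint8)
img2 = np.random.randint(0, 256, (200, 240), dtype=np.uint8)

# Coarse positioning from the mechanism positions o1(790, 680) and o2(880, 760).
dx, dy = coarse_offsets((790, 680), (880, 760))        # -> (90.0, 80.0)
dx, dy = int(dx), int(dy)

# First / second regions implied by the coarse splicing.
region1, region2 = overlap_regions(img1.shape[1], img1.shape[0], dx, dy)
x0, y0, _, _ = region1
sx0, sy0, sx1, sy1 = region2

# Fine positioning: match a small template from the first region inside the second region.
template = img1[y0:y0 + 40, x0:x0 + 40]                # sub-image T(m, n)
(peak_i, peak_j), r_max = ncc_match(img2[sy0:sy1, sx0:sx1], template)

# High-precision splicing; with real images the offsets refined by the match would be used.
panorama = stitch_pair(img1, img2, x1_off=dx, y1_off=dy)
print(panorama.shape)                                  # (280, 330)
```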
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (8)

1. A high-precision full-size visual image acquisition method, characterized in that the visual image acquisition method comprises the following steps:
step S1: starting the translation mechanism and the visual camera, wherein the translation mechanism drives the visual camera to acquire a local image of the current view field in real time, and turning to the step S2;
step S2: the translation mechanism control module acquires the position of the translation mechanism corresponding to the current field of view in real time, the visual image splicing processing module performs coarse positioning on the series of acquired local images of the current field of view according to the position of the current-field-of-view translation mechanism and coarsely splices the adjacent images according to the coarse positioning, and the process turns to step S3;
step S3: on the basis of rough splicing, the visual image splicing processing module performs adjacent image pixel relation detection by using image correlation characteristic matching to obtain a high-precision pixel-level relative position relation of adjacent images, and then the step S4 is executed;
step S4: the translation mechanism drives the vision camera to acquire adjacent images at different positions and records the adjacent images as a first image (1) and a second image (2), the original size of the images is M multiplied by N, the accurate horizontal offset X1 and the accurate vertical offset Y1 of the second image (2) relative to the first image (1) are acquired according to the high-accuracy pixel-level relative position relation of the adjacent images, a blank image is set, the size of the blank image is (M + X1) X (N + Y1), the first image (1) is copied to the (0, Y1) coordinate position of the blank image, the second image (2) is copied to the (X1,0) coordinate position of the blank image, the high-accuracy splicing of the first image (1) and the second image (2) is completed, and the high-accuracy full-size vision image is formed.
2. The method for acquiring the high-precision full-size visual image according to claim 1, wherein the method comprises the following steps: the step of performing, in step S2, coarse positioning on adjacent images of the acquired series of local images of the current field of view according to the position of the current-field-of-view translation mechanism further includes:
step S201: establishing a rectangular coordinate system XOY, wherein the origin of the rectangular coordinate system XOY is a fixed position, the X axis of the rectangular coordinate system XOY is parallel to the visual image, and the Y axis of the rectangular coordinate system XOY is perpendicular to the X axis; the translation mechanism drives the visual camera to move in the rectangular coordinate system XOY; turning to step S202;
step S202: the translation mechanism control module obtains the coordinates O1(X1, Y1) of the first position of the translation mechanism in the rectangular coordinate system XOY and the coordinates O2(X2, Y2) of the second position of the translation mechanism in the rectangular coordinate system XOY; the visual image stitching processing module establishes a rectangular coordinate system X1O1Y1 with the first position O1 of the translation mechanism as origin, the X1 axis of the rectangular coordinate system X1O1Y1 being parallel to the X axis of the rectangular coordinate system XOY and the Y1 axis of the rectangular coordinate system X1O1Y1 being parallel to the Y axis of the rectangular coordinate system XOY; the visual image stitching processing module establishes a rectangular coordinate system X2O2Y2 with the second position O2 as origin, the X2 axis of the rectangular coordinate system X2O2Y2 being parallel to the X axis of the rectangular coordinate system XOY and the Y2 axis of the rectangular coordinate system X2O2Y2 being parallel to the Y axis of the rectangular coordinate system XOY; the translation mechanism drives the vision camera to acquire a first image (1) at the first position, the translation mechanism drives the vision camera to acquire a second image (2) at the second position, and the step S203 is executed;
step S203: calculating a horizontal offset X and a vertical offset Y of the first image (1) and the second image (2) according to the coordinates O1(X1, Y1) of the first position and the coordinates O2(X2, Y2) of the second position in the rectangular coordinate system XOY, and turning to the step S204; the calculation formulas of the horizontal offset X and the vertical offset Y of the first image (1) and the second image (2) are respectively as follows:
X = X2 - X1
Y = Y2 - Y1
wherein X is the horizontal offset of the first image (1) from the second image (2), X2 is the abscissa of the second location, X1 is the abscissa of the first location, Y is the vertical offset of the first image (1) from the second image (2), Y2 is the ordinate of the second location, and Y1 is the ordinate of the first location;
step S204: and roughly positioning the first image (1) and the second image (2) according to the horizontal offset X and the vertical offset Y of the first image (1) and the second image (2), and roughly splicing the first image (1) and the second image (2) according to the rough positioning to obtain a roughly spliced image.
3. The method for acquiring the high-precision full-size visual image according to claim 1, wherein the method comprises the following steps: the first image (1) and the second image (2) are both generated by a visual imaging module, the first image (1) and the second image (2) are the same size, the first image (1) is located in a rectangular coordinate system X1O1Y1, and the second image (2) is located in a rectangular coordinate system X2O2Y2.
4. The method for acquiring the high-precision full-size visual image according to claim 1, wherein the method comprises the following steps: after the first image (1) and the second image (2) are subjected to rough splicing, an overlapped area (3) appears between the first image (1) and the second image (2), the overlapped area (3) is marked as a first area in the first image (1), the overlapped area (3) is marked as a second area in the second image (2), and the overlapped areas (3) are the same part of a detected target.
5. The method for acquiring the high-precision full-size visual image according to claim 1, wherein the method comprises the following steps: the step S3, the visual image stitching processing module based on the rough stitching, using image correlation feature matching to detect the pixel relationship between adjacent images, further includes obtaining the precise offset of the first image (1) and the second image (2) according to a standard image correlation matching algorithm, where the precise offset includes the precise horizontal offset X1 of the second image (2) relative to the first image (1) and the precise vertical offset Y1 of the second image (2) relative to the first image (1).
6. The method for acquiring the high-precision full-size visual image according to claim 1, wherein the method comprises the following steps: the standard image correlation matching algorithm in step S3 further includes:
step S301: the visual image splicing processing module selects a sub-image T (m, n) as a matching template T in a first region of the first image (1), wherein m and n are the length and width of the matching template T, and the step S302 is turned;
step S302: the visual image splicing processing module searches a matching region of the matching template T in a second region of the second image (2) and records the matching region as a search graph S (w, h), wherein w and h are the length and width of the search graph S, and the step S303 is executed;
step S303: searching for the region of the map S (w, h) covered by the matching template T (m, n)Is a subgraph SijSequentially calculating the normalized correlation coefficients of the matching template T (m, n) and all the search areas, and turning to the step S304; the calculation formula of the normalized correlation coefficient is as follows:
R(i, j) = Σm Σn [Sij(m, n) · T(m, n)] / √( Σm Σn [Sij(m, n)]² · Σm Σn [T(m, n)]² )
wherein R (i, j) is the correlation coefficient, T (m, n) is the matching template, and Sij is the region of the search graph S covered by the matching template T (m, n);
step S304: after matching search is carried out on all second areas of the second image (2), the maximum value R of R is foundmax(im,ij) Its corresponding sub-diagram SijPosition (i) ofm,ij) I.e. the matching position of the two, to obtain the exact offset of the first image (1) from the second image (2), including the exact horizontal offset X1 of the second image (2) from the first image (1) and the exact vertical offset Y1 of the second image (2) from the first image (1).
7. A high-precision full-size visual image acquisition system applying the high-precision full-size visual image acquisition method of any one of claims 1 to 6, wherein the visual image acquisition system comprises a translation mechanism control module, a visual imaging module and a visual image splicing processing module, the translation mechanism control module is connected with the visual image splicing processing module, and the visual imaging module is connected with the visual image splicing processing module;
the translation mechanism control module is used for controlling the translation mechanism to drive the visual camera to enable the visual camera to translate relative to a target to be measured, the visual camera traverses each position of a full-size visual image to be measured, meanwhile, the real-time position of the translation mechanism is obtained to be used for the visual image splicing processing module to perform coarse positioning, the visual imaging module is used for obtaining a scene image in a visual field and sending the scene image to the visual image splicing processing module, and the visual image splicing processing module is used for completing splicing of the high-precision full-size visual image.
8. The high precision full scale visual image acquisition system according to claim 7, wherein: the vision camera is fixed on the translation mechanism, the translation mechanism is connected with the translation mechanism control module, and the translation mechanism is connected with the vision imaging module.
CN202110248456.XA 2021-03-08 2021-03-08 High-precision full-size visual image acquisition system and method Active CN112634140B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110248456.XA CN112634140B (en) 2021-03-08 2021-03-08 High-precision full-size visual image acquisition system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110248456.XA CN112634140B (en) 2021-03-08 2021-03-08 High-precision full-size visual image acquisition system and method

Publications (2)

Publication Number Publication Date
CN112634140A CN112634140A (en) 2021-04-09
CN112634140B true CN112634140B (en) 2021-08-13

Family

ID=75297606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110248456.XA Active CN112634140B (en) 2021-03-08 2021-03-08 High-precision full-size visual image acquisition system and method

Country Status (1)

Country Link
CN (1) CN112634140B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658147B (en) * 2021-08-23 2024-03-29 宁波棱镜空间智能科技有限公司 Workpiece size measuring device and method based on deep learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102157004A (en) * 2011-04-18 2011-08-17 东华大学 Automatic image mosaicking method for high-accuracy image measuring apparatus of super-view field part
CN102506830A (en) * 2011-11-21 2012-06-20 奇瑞汽车股份有限公司 Vision-based positioning method and device
CN103337068A (en) * 2013-06-04 2013-10-02 华中科技大学 A multiple-subarea matching method constraint by a space relation
CN111899174A (en) * 2020-07-29 2020-11-06 北京天睿空间科技股份有限公司 Single-camera rotation splicing method based on deep learning
CN112325778A (en) * 2020-12-02 2021-02-05 广东省科学院智能制造研究所 Full-size detection device and method for over-the-field workpiece based on machine vision

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205796A (en) * 2014-06-30 2015-12-30 华为技术有限公司 Wide-area image acquisition method and apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102157004A (en) * 2011-04-18 2011-08-17 东华大学 Automatic image mosaicking method for high-accuracy image measuring apparatus of super-view field part
CN102506830A (en) * 2011-11-21 2012-06-20 奇瑞汽车股份有限公司 Vision-based positioning method and device
CN103337068A (en) * 2013-06-04 2013-10-02 华中科技大学 A multiple-subarea matching method constraint by a space relation
CN111899174A (en) * 2020-07-29 2020-11-06 北京天睿空间科技股份有限公司 Single-camera rotation splicing method based on deep learning
CN112325778A (en) * 2020-12-02 2021-02-05 广东省科学院智能制造研究所 Full-size detection device and method for over-the-field workpiece based on machine vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于机器视觉的轴类零件尺寸测量系统的研制 (Development of a machine-vision-based dimension measurement system for shaft parts); 徐兴波; 《中国优秀硕士学位论文全文数据库》 (China Master's Theses Full-text Database); 2017-05-15; full text *

Also Published As

Publication number Publication date
CN112634140A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
CN112629441B (en) 3D curved surface glass contour scanning detection method and system
EP1343332B1 (en) Stereoscopic image characteristics examination system
US8600192B2 (en) System and method for finding correspondence between cameras in a three-dimensional vision system
US6751338B1 (en) System and method of using range image data with machine vision tools
US11488322B2 (en) System and method for training a model in a plurality of non-perspective cameras and determining 3D pose of an object at runtime with the same
WO2017126060A1 (en) Three-dimensional measurement device and measurement assistance processing method for same
CN111707187B (en) Measuring method and system for large part
CN108470356A (en) A kind of target object fast ranging method based on binocular vision
CN111292241B (en) Large-diameter optical element regional scanning splicing method
CN112017248B (en) 2D laser radar camera multi-frame single-step calibration method based on dotted line characteristics
CN112634140B (en) High-precision full-size visual image acquisition system and method
CN109190612A (en) Image acquisition and processing equipment and image acquisition and processing method
CN110044266B (en) Photogrammetry system based on speckle projection
CN111598177A (en) Self-adaptive maximum sliding window matching method facing low-overlapping image matching
JP4101478B2 (en) Human body end point detection method and apparatus
JP3221384B2 (en) 3D coordinate measuring device
JP2000171214A (en) Corresponding point retrieving method and three- dimensional position measuring method utilizing same
JP4097255B2 (en) Pattern matching apparatus, pattern matching method and program
JPH1096607A (en) Object detector and plane estimation method
JP3340599B2 (en) Plane estimation method
CN114485479B (en) Structured light scanning and measuring method and system based on binocular camera and inertial navigation
CN116630164B (en) Real-time splicing method for massive microscopic images
JP3247305B2 (en) Feature region extraction method and apparatus
Zheng et al. Study of binocular parallax estimation algorithms with different focal lengths
CN118135033A (en) RGBD sensor-assisted laser radar and camera combined calibration method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant