CN113139900B - Method for acquiring complete surface image of bar

Method for acquiring complete surface image of bar

Info

Publication number
CN113139900B
CN113139900B (application CN202110365113.1A)
Authority
CN
China
Prior art keywords
bar
image
camera
area
cameras
Prior art date
Legal status
Active
Application number
CN202110365113.1A
Other languages
Chinese (zh)
Other versions
CN113139900A (en)
Inventor
石杰
邓能辉
杨朝霖
吴昆鹏
Current Assignee
USTB Design and Research Institute Co Ltd
Original Assignee
USTB Design and Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by USTB Design and Research Institute Co Ltd filed Critical USTB Design and Research Institute Co Ltd
Priority to CN202110365113.1A priority Critical patent/CN113139900B/en
Publication of CN113139900A publication Critical patent/CN113139900A/en
Application granted granted Critical
Publication of CN113139900B publication Critical patent/CN113139900B/en

Classifications

    • G06T3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images (G06T3/00 Geometric image transformation in the plane of the image)
    • G06T5/70
    • G06T7/11: Region-based segmentation (G06T7/00 Image analysis; G06T7/10 Segmentation; edge detection)
    • G06T7/194: Segmentation; edge detection involving foreground-background segmentation
    • G06T7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G06T2200/32: Indexing scheme involving image mosaicing
    • G06T2207/10004: Still image; photographic image (image acquisition modality)
    • G06T2207/20032: Median filtering
    • G06T2207/30108: Industrial image inspection
    • G06T2207/30164: Workpiece; machine component
    • Y02P90/30: Computing systems specially adapted for manufacturing (Y02P: climate change mitigation technologies in the production or processing of goods)

Abstract

The invention provides a method for acquiring an image of the complete surface of a bar, and belongs to the technical field of machine vision detection. In the method, bar images are acquired from different angles by four linear array cameras, the bar areas are extracted with a segmentation algorithm, and skeleton searching is performed. A deformation correction algorithm transforms each bar area to a vertical state centered in the image, which effectively avoids the distortion of the captured image caused by shaking during bar production and truly reflects the actual state of the bar. From the positional relation between the cameras and the bar, the conversion between each camera's pixel coordinates and physical coordinates is calculated, the overlap areas of the cameras' images are obtained and removed, and the retained images of the cameras are spliced side by side to obtain the complete surface image of the bar. This complete surface image better reflects the surface quality of the bar and provides better basic image data for subsequent surface-defect detection.

Description

Method for acquiring complete surface image of bar
Technical Field
The invention relates to the technical field of machine vision detection, in particular to a method for acquiring an image of the complete surface of a bar.
Background
As customers' requirements on steel quality grow, the detection of surface defects becomes increasingly important, and machine-vision surface inspection is approaching maturity in fields such as hot rolling and medium plate. In round-bar production, however, high line speed, strong shaking interference, and irregular surface imaging mean that the acquired bar images cannot provide an effective image basis for subsequent detection, so the defect recognition rate is low and field requirements cannot be met.
The invention therefore designs a method for acquiring the complete surface image of a bar: the images acquired by the cameras undergo bar-area extraction, bar-image straightening, and overlapping-pixel removal to produce a complete unrolled image of the bar surface, providing a more reliable image basis for subsequent defect detection. Adjusting the irregular cylindrical surface into a planar view also makes further defect detection more convenient.
Disclosure of Invention
The invention aims to provide a method for acquiring an image of the complete surface of a bar.
In the method, four linear array cameras acquire bar surface images from different angles; a segmentation model extracts the bar pixel areas in the images; a bending correction algorithm corrects the bar images so that the bars in the corrected images are completely vertical and centered in the image; the conversion between each camera's image pixels and object positions is calculated; the overlap areas between cameras are obtained and removed; and finally the images retained by the cameras are spliced side by side to obtain the complete surface image of the bar.
The method specifically comprises the following steps:
(1) Acquire bar surface images from different angles using four linear array cameras;
(2) Extract the bar pixel area in each image with a segmentation model, obtain the skeleton form of the bar area with a bending correction algorithm, and then transform the bar image using a deformation correction formula to obtain a corrected image in which the bar is completely vertical and centered;
(3) Calculate the conversion between each camera's image pixels and object positions, and obtain and remove the overlap areas between cameras;
(4) Splice the images retained by each camera in step (3) side by side to obtain the complete surface image of the bar.
In step (1), the four linear array cameras acquire images of the bar surface at 90-degree intervals around the circumference of the bar cross-section; each camera shoots one third of the bar surface, so the images of adjacent cameras overlap.
In step (2), a Unet semantic segmentation model is selected as the segmentation model. The bending correction algorithm first extracts the actual bar area with the segmentation model; then, for each row, it averages the start and stop positions whose gray value is greater than 0 and median-filters the resulting data to remove noise, yielding the skeleton form of the bar area. The center position of the bar area in each row of the image is:

p_c(i) = [max(p_{f(i,j)>0}(i)) + min(p_{f(i,j)>0}(i))] / 2

where p_{f(i,j)>0}(i) denotes the pixel positions in row i whose gray values are greater than 0, and f(i,j) is the gray value at row i, column j of the image.
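The skeleton search above can be sketched as follows. This is a minimal illustration assuming the segmentation output is a binary NumPy mask; the function name `row_centers` and the fallback for empty rows are assumptions, and a simple width-k window stands in for the patent's median filtering step:

```python
import numpy as np

def row_centers(mask, k=5):
    """Per-row bar center p_c(i) = (max_j + min_j) / 2 over the columns j
    where the mask is nonzero, followed by a width-k median filter to
    suppress noisy rows.  Rows with no bar pixels fall back to the image
    center (an assumption; the patent does not say how empty rows are
    handled)."""
    h, w = mask.shape
    centers = np.full(h, w / 2.0)
    for i in range(h):
        cols = np.flatnonzero(mask[i] > 0)
        if cols.size:
            centers[i] = (cols.max() + cols.min()) / 2.0
    pad = k // 2                                  # replicate edge rows
    padded = np.pad(centers, pad, mode="edge")
    return np.array([np.median(padded[i:i + k]) for i in range(h)])
```

A single stray bright pixel in one row shifts that row's raw center; the median filter pulls it back toward the values of its neighbouring rows, which is exactly the noise suppression the text describes.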
The input of the Unet semantic segmentation model is the picture collected by a camera; the output is a binary image in which pixels with value 0 are background and pixels with value 1 are bar.
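For concreteness, the binary output described above might be derived from a two-channel Unet score map as sketched below; the 2 x H x W logits layout and the function name are assumptions, since the patent only specifies the input picture and the 0/1 output:

```python
import numpy as np

def to_binary_mask(logits):
    """Collapse a 2-class segmentation output (2 x H x W scores) into the
    binary image the text describes: 0 = background, 1 = bar."""
    return (logits.argmax(axis=0) == 1).astype(np.uint8)
```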
In step (2), the image is transformed using the deformation correction formula:

f_d(i, j) = f(i, j + ε),  ε = p_c(i) - w/2

where f_d(i, j) is the gray value of the corresponding point after deformation correction; f(i, j) is the gray value at row i, column j of the image; w is the image width; ε is an intermediate variable; and p_c(i) is the center position of the bar area in row i of the image.
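Under the stated definitions, applying the correction amounts to shifting every row by its own ε so the bar center lands on column w/2. The sketch below is a hedged reconstruction (the exact per-pixel formula is taken here as f_d(i, j) = f(i, j + ε), an assumption consistent with the stated variables; the zero-fill boundary rule is also assumed):

```python
import numpy as np

def straighten(img, centers):
    """Shift row i by eps = p_c(i) - w/2 so the bar is centered at column
    w/2, i.e. f_d(i, j) = f(i, j + eps); pixels shifted in from outside
    the image are set to 0 (an assumed boundary rule)."""
    h, w = img.shape
    out = np.zeros_like(img)
    cols = np.arange(w)
    for i in range(h):
        eps = int(round(centers[i] - w / 2))
        src = cols + eps                # source column for each output j
        valid = (src >= 0) & (src < w)
        out[i, valid] = img[i, src[valid]]
    return out
```

A row whose bar center sits left of the image midline (ε < 0) is shifted right, and vice versa, so the straightened bar ends up vertical and centered.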
In step (3), the mapping between camera coordinates and physical coordinates is calculated from the camera positions, and the overlapping pixel portions between cameras are analyzed and cut away. Let the effective range over which each camera shoots the bar be 90 degrees, the distance from the camera center to the bar center be d, the camera focal length be f, and the bar radius be r.
α is the included angle between the camera's two tangent lines to the outer edges of the photographed bar;
w_b is the width of the bar area in the captured image;
w_b' is the width of the effective bar area in the image.
The difference between w_b and w_b' is the overlap between cameras; filtering out the four cameras' overlap areas yields each camera's effective bar-area coordinates.
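The patent's formula images for α, w_b and w_b' are not reproduced in this text. Under a pinhole-camera assumption they can be reconstructed from the stated geometry (the tangent lines from the camera to a circle of radius r at distance d give sin(α/2) = r/d, and the effective 90-degree arc spans ±45 degrees around the point facing the camera). The expressions below are therefore a hedged reconstruction, not a quotation of the patent:

```python
import math

def overlap_geometry(d, f, r):
    """Reconstruction under a pinhole model (assumption).
    alpha : full angle between the two tangent lines from the camera to the
            bar silhouette, alpha = 2*asin(r/d).
    w_b   : imaged width of the whole visible silhouette,
            w_b = 2*f*tan(alpha/2) = 2*f*r / sqrt(d**2 - r**2).
    w_bp  : imaged width of the effective 90-degree arc, whose endpoints
            sit at lateral offset r/sqrt(2) and depth d - r/sqrt(2) from
            the camera.
    The difference w_b - w_bp is the overlap trimmed away per camera."""
    alpha = 2.0 * math.asin(r / d)
    w_b = 2.0 * f * r / math.sqrt(d * d - r * r)
    w_bp = 2.0 * f * (r / math.sqrt(2)) / (d - r / math.sqrt(2))
    return alpha, w_b, w_bp, w_b - w_bp
```

Because the silhouette edges lie beyond the ±45-degree points of the effective arc, w_b is always larger than w_b', so a positive overlap width remains to be cut from each strip.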
In step (4), the effective bar areas of the cameras are spliced side by side in counter-clockwise order to obtain the complete unrolled image of the bar surface.
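Step (4) can be sketched as trimming each camera's straightened strip and concatenating the strips in counter-clockwise order. The even split of the overlap between a strip's two sides and the helper names are assumptions:

```python
import numpy as np

def trim_overlap(strip, overlap_px):
    """Drop half of the overlap from each lateral side of one camera's
    straightened strip (the overlap is shared with the two neighbouring
    cameras; the even split is an assumption)."""
    side = overlap_px // 2
    return strip[:, side:strip.shape[1] - side] if side else strip

def stitch_counterclockwise(strips):
    """Splice the trimmed strips side by side, in counter-clockwise camera
    order, into the unrolled bar-surface image."""
    if len({s.shape[0] for s in strips}) != 1:
        raise ValueError("strips must share the same height")
    return np.hstack(strips)
```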
The technical scheme of the invention has the following beneficial effects:
This scheme solves the problems that bar shaking and the irregular surface distort the captured images, so that they cannot reflect the true shape and size of the bar and tend to produce many false alarms during detection. The complete unrolled bar-surface image obtained by the method can be fed directly to existing defect-detection methods for medium plate, hot rolling and the like, facilitating reuse of detection algorithms.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a diagram showing a position distribution of an acquisition camera according to the present invention;
FIG. 3 is a schematic diagram of the bar shape correction process according to the present invention;
FIG. 4 is a schematic diagram of the calculation of the camera image overlap region according to the present invention.
Detailed Description
To make the technical problems to be solved, the technical solutions and the advantages clearer, the invention is described in detail below with reference to the accompanying drawings and specific embodiments.
The invention provides a method for acquiring an image of the complete surface of a bar.
As shown in FIG. 1, the method first acquires bar surface images from different angles with four linear array cameras; it then extracts the bar pixel areas in the images with a segmentation model and corrects the bar images with a bending correction algorithm, so that the bars in the corrected images are completely vertical and centered in the image; next, it calculates the conversion between each camera's image pixels and object positions, and obtains and removes the overlap areas between cameras; finally, it splices the images retained by the cameras side by side to obtain the complete surface image of the bar.
The method specifically comprises the following steps:
(1) Acquire bar surface images from different angles using four linear array cameras;
(2) Extract the bar pixel area in each image with a segmentation model, obtain the skeleton form of the bar area with a bending correction algorithm, and then transform the bar image using a deformation correction formula to obtain a corrected image in which the bar is completely vertical and centered;
(3) Calculate the conversion between each camera's image pixels and object positions, and obtain and remove the overlap areas between cameras;
(4) Splice the images retained by each camera in step (3) side by side to obtain the complete surface image of the bar.
The following describes specific embodiments.
Example 1
The invention provides a method for acquiring an image of the complete surface of a bar.
Bar surface images are acquired from different angles with four linear array cameras; the segmentation model extracts the bar pixel area in each image, and a bending correction algorithm corrects the bar image so that the bar in the corrected image is completely vertical and centered. The conversion between each camera's image pixels and object positions is calculated, the overlap areas between cameras are obtained and removed, and the images retained by the cameras are spliced side by side to obtain the complete surface image of the bar. As shown in FIG. 2, the four linear array cameras acquire bar surface images at 45, 135, 225 and 315 degrees around the circumference; each camera shoots one third of the bar surface, and the images of adjacent cameras overlap. The camera resolution is 1024 x 1024, and the bar diameter ranges from 80 mm to 145 mm.
First, the bar bending correction algorithm extracts the actual bar area with the Unet semantic segmentation model; then, for each row, it averages the start and stop positions whose gray value is greater than 0 and median-filters the resulting data to remove noise, yielding the skeleton form of the bar area. The center position of the bar area in each row of the image is:

p_c(i) = [max(p_{f(i,j)>0}(i)) + min(p_{f(i,j)>0}(i))] / 2

where p_{f(i,j)>0}(i) denotes the pixel positions in row i whose gray values are greater than 0, and f(i,j) is the gray value at row i, column j of the image.
As shown in FIG. 3, the image is transformed with the deformation correction formula to obtain a straightened bar image centered horizontally in the image, i.e., the bar area is distributed near pixel column 512. The deformation correction formula is:

f_d(i, j) = f(i, j + ε),  ε = p_c(i) - w/2

where f_d(i, j) is the gray value of the corresponding point after deformation correction; f(i, j) is the gray value at row i, column j of the image; w is the image width; ε is an intermediate variable; and p_c(i) is the center position of the bar area in row i of the image.
The mapping between camera coordinates and physical coordinates is calculated from the camera positions, and the overlapping pixel portions between cameras are analyzed and cut away. As shown in FIG. 4, let the effective range over which each camera shoots the bar be 90 degrees, the distance from the camera center to the bar center be d, the camera focal length be f, and the bar radius be r.
α is the included angle between the camera's two tangent lines to the outer edges of the photographed bar;
w_b is the width of the bar area in the captured image;
w_b' is the width of the effective bar area in the image.
The difference between w_b and w_b' is the overlap between cameras; filtering out the four cameras' overlap areas yields each camera's effective bar-area coordinates.
The effective bar areas of the cameras are then spliced side by side in counter-clockwise order to obtain the complete unrolled view of the bar surface.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the present invention.

Claims (4)

1. A method for acquiring the complete surface image of a bar, characterized by comprising the following steps:
(1) Acquire bar surface images from different angles using four linear array cameras;
(2) Extract the bar pixel area in each image with a segmentation model, obtain the skeleton form of the bar area with a bending correction algorithm, and then transform the bar image using a deformation correction formula to obtain a corrected image in which the bar is completely vertical and centered;
(3) Calculate the conversion between each camera's image pixels and object positions, and obtain and remove the overlap areas between cameras;
(4) Splice the images retained by each camera in step (3) side by side to obtain the complete surface image of the bar;
the segmentation model in the step (2) is a Unet semantic segmentation model, a bending correction algorithm firstly adopts the segmentation model to extract an actual area of the bar, then calculates a starting position and a stopping position with a transverse gray value larger than 0, averages the transverse positions, and carries out median filtering on the data to filter noise data, so as to obtain a skeleton form of the bar area; center position of bar area of each row in the image:
p c (i)=[max(p f(i,j)>0 (i))+min(p f(i,j)>0 (i))]/2
wherein p is f(i,j)>0 (i) Representing the pixel positions of the ith row of the image, wherein all gray values of the pixel positions are larger than 0; f (i, j) is the gray value of the ith row and jth column on the image;
in step (2), the image is transformed using the deformation correction formula:

f_d(i, j) = f(i, j + ε),  ε = p_c(i) - w/2

where f_d(i, j) is the gray value of the corresponding point after deformation correction; f(i, j) is the gray value at row i, column j of the image; w is the image width; ε is an intermediate variable; and p_c(i) is the center position of the bar area in row i of the image;
in step (3), the mapping between camera coordinates and physical coordinates is calculated from the camera positions, and the overlapping pixel portions between cameras are analyzed and cut away; the effective range over which each camera shoots the bar is set to 90 degrees, the distance from the camera center to the bar center is d, the camera focal length is f, and the bar radius is r;
α is the included angle between the camera's two tangent lines to the outer edges of the photographed bar;
w_b is the width of the bar area in the captured image;
w_b' is the width of the effective bar area in the image;
the difference between w_b and w_b' is the overlap between cameras, and filtering out the four cameras' overlap areas yields each camera's effective bar-area coordinates.
2. The method for acquiring a complete surface image of a bar according to claim 1, characterized in that: in step (1), the four linear array cameras acquire images of the bar surface at 90-degree intervals around the circumference of the bar cross-section; each camera shoots one third of the bar surface, and the images of adjacent cameras overlap.
3. The method for acquiring a complete surface image of a bar according to claim 1, characterized in that: the input of the Unet semantic segmentation model is the picture collected by a camera, and the output is a binary image in which pixels with value 0 are background and pixels with value 1 are bar.
4. The method for acquiring a complete surface image of a bar according to claim 1, characterized in that: in step (4), the effective bar areas of the cameras are spliced side by side in counter-clockwise order to obtain a complete unfolded view of the bar surface.
CN202110365113.1A 2021-04-01 2021-04-01 Method for acquiring complete surface image of bar Active CN113139900B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110365113.1A CN113139900B (en) 2021-04-01 2021-04-01 Method for acquiring complete surface image of bar


Publications (2)

Publication Number Publication Date
CN113139900A CN113139900A (en) 2021-07-20
CN113139900B 2023-09-01

Family

ID=76811722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110365113.1A Active CN113139900B (en) 2021-04-01 2021-04-01 Method for acquiring complete surface image of bar

Country Status (1)

Country Link
CN (1) CN113139900B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113776437B (en) * 2021-08-17 2022-06-07 北京科技大学 High-precision medium plate width measuring method based on machine vision
CN116645476B (en) * 2023-07-12 2023-10-24 小羽互联智能科技(长沙)有限公司 Rod three-dimensional data model reconstruction method and system based on multi-view vision

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101499166A (en) * 2009-03-16 2009-08-05 北京中星微电子有限公司 Image splicing method and apparatus
CN102410811A (en) * 2011-07-27 2012-04-11 北京理工大学 Method and system for measuring parameters of bent pipe
CN103369192A (en) * 2012-03-31 2013-10-23 深圳市振华微电子有限公司 Method and device for Full-hardware splicing of multichannel video images
CN107424118A (en) * 2017-03-28 2017-12-01 天津大学 Based on the spherical panorama mosaic method for improving Lens Distortion Correction
CN111223114A (en) * 2020-01-09 2020-06-02 北京达佳互联信息技术有限公司 Image area segmentation method and device and electronic equipment
CN111369439A (en) * 2020-02-29 2020-07-03 华南理工大学 Panoramic view image real-time splicing method for automatic parking stall identification based on panoramic view

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103888679A (en) * 2014-03-13 2014-06-25 北京智谷睿拓技术服务有限公司 Image collection method and device


Also Published As

Publication number Publication date
CN113139900A (en) 2021-07-20

Similar Documents

Publication Publication Date Title
CN105894499B (en) A kind of space object three-dimensional information rapid detection method based on binocular vision
CN109190628A (en) A kind of plate camber detection method based on machine vision
CN113139900B (en) Method for acquiring complete surface image of bar
CN111192198B (en) Pipeline panoramic scanning method based on pipeline robot
CN101276465B (en) Method for automatically split-jointing wide-angle image
Aubailly et al. Automated video enhancement from a stream of atmospherically-distorted images: the lucky-region fusion approach
CN109632808B (en) Edge defect detection method and device, electronic equipment and storage medium
CN111179170B (en) Rapid panoramic stitching method for microscopic blood cell images
CN106657789A (en) Thread panoramic image synthesis method
CN103679672B (en) Panorama image splicing method based on edge vertical distance matching
CN108921819B (en) Cloth inspecting device and method based on machine vision
CN113160339A (en) Projector calibration method based on Samm's law
CN107392849A (en) Target identification and localization method based on image subdivision
CN107462182B (en) A kind of cross section profile deformation detecting method based on machine vision and red line laser
CN111667470B (en) Industrial pipeline flaw detection inner wall detection method based on digital image
CN107358628A (en) Linear array images processing method based on target
CN112419212A (en) Infrared and visible light image fusion method based on side window guide filtering
CN104318583A (en) Visible light broadband spectrum image registration method
CN108961155B (en) High-fidelity fisheye lens distortion correction method
CN112330613A (en) Method and system for evaluating quality of cytopathology digital image
CN116681879A (en) Intelligent interpretation method for transition position of optical image boundary layer
CN116596987A (en) Workpiece three-dimensional size high-precision measurement method based on binocular vision
CN113592953B (en) Binocular non-cooperative target pose measurement method based on feature point set
CN114004770B (en) Method and device for accurately correcting satellite space-time diagram and storage medium
CN115760893A (en) Single droplet particle size and speed measuring method based on nuclear correlation filtering algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant