CN113888408A - Multi-camera image acquisition method and device - Google Patents
- Publication number
- CN113888408A (application number CN202111128099.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- detected
- images
- overlapping area
- point set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/13—Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06T2207/10004—Still image; Photographic image
Abstract
The invention provides a multi-camera image acquisition method and device, applied to the image acquisition stage of automated product inspection. The multi-camera image acquisition method comprises the following steps: conveying an object to be detected to an image acquisition area; capturing images of the object to be detected with two or more cameras mounted on a camera frame; performing image preprocessing on the multiple captured images; and stitching the images and fusing the overlapping regions of the stitched images to obtain the image of the object to be detected. By capturing images of the object with multiple cameras, the invention improves image precision while ensuring that the complete object is covered.
Description
Technical Field
The invention relates to the field of image acquisition, in particular to a multi-camera image acquisition method and device.
Background
With the continuous development of computer technology, more and more enterprises use machine-vision-based online detection to inspect products for surface defects. In this process, a camera captures an image of the object to be detected, and image processing then extracts the information used to judge whether the object is defective. Image acquisition is a critical step in the overall quality inspection process: the higher the precision of the acquired image, the higher the accuracy of product inspection.
When high-precision images are acquired, the object to be detected may not fit entirely within a single frame. Capturing images of the object with multiple cameras and stitching the captured images together improves image precision while ensuring that the complete object is covered. The image stitching method in common use today matches feature points between images; it requires many matchable features, and for objects with little feature information the feature-point matching is error-prone.
Disclosure of Invention
Addressing the problems of the image acquisition stage in current enterprise quality inspection, the invention provides a multi-camera image acquisition method and device applied to automated product inspection. The multi-camera image acquisition method comprises the following steps: conveying an object to be detected to an image acquisition area; capturing images of the object to be detected with two or more cameras mounted on a camera frame; performing image preprocessing on the multiple captured images; and stitching the images and fusing the overlapping regions of the stitched images to obtain the image of the object to be detected.
The technical solution adopted by the invention is as follows:
a multi-camera image acquisition method, comprising:
transmitting an object to be detected to an image acquisition area;
capturing an image of the object to be detected with two or more cameras mounted at equal height directly above the image acquisition area, wherein the images captured by the cameras are of equal size and the portions of the object to be detected in images captured by adjacent cameras partially overlap;
performing image preprocessing on the acquired images;
and stitching the images captured by adjacent cameras in sequence, and fusing the overlapping regions of the stitched images to obtain a complete image of the object to be detected, wherein the images captured by adjacent cameras are stitched as follows: in each image, the set of intersection points between the contour of the object to be detected and the image edge is obtained; for two images i and j captured by adjacent cameras, image j is searched, according to the heights of the intersection points in image i, for the corresponding points lying on the object to be detected; the set of point correspondences between image i and image j is obtained from the pixel contour distance between each of these points and the point of approximately equal height in image j; the rotation matrix and translation matrix of the stitching are computed from the intersection point set and its corresponding point set; and image j is rotated and translated by the rotation matrix and translation matrix to complete the stitching of images i and j.
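The height-based search for corresponding points can be sketched as follows. This is an illustrative reading of the step above, not the patent's implementation; the function name `match_by_height` and the pixel tolerance `tol` are assumptions:

```python
import numpy as np

def match_by_height(points_i, points_j, tol=3):
    """For each contour/edge intersection height in image i, find the
    intersection in image j with approximately equal height.

    points_i, points_j: heights (ordinates, in pixels) of the intersection
    points of the object contour with the image edge in each image.
    Returns a list of (height_in_i, height_in_j) pairs.
    """
    pairs = []
    for h in points_i:
        # index of the candidate in image j closest in height to h
        k = int(np.argmin(np.abs(np.asarray(points_j) - h)))
        if abs(points_j[k] - h) <= tol:  # accept only near-equal heights
            pairs.append((h, points_j[k]))
    return pairs
```

Candidates farther than `tol` pixels are discarded, so an intersection visible in only one image simply produces no pair.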
Further, image preprocessing is performed on the acquired image, including:
performing adaptive filtering on the image to remove noise while preserving edge detail;
performing morphological processing on the image, including erosion and dilation, and extracting an image of the object to be detected using a threshold-based image segmentation algorithm;
and extracting the edge of the object to be detected using an edge detection algorithm.
Further, the intersection point set comprises at least the intersection points of the contour of the object to be detected with the image edge at the maximum and minimum heights.
Further, image fusion is performed on the overlapping region of the stitched images, specifically: the pixel values of the overlapping region are computed using a weighted fusion method, in which the weight of each source image's pixel is determined by its distance from that image's boundary of the overlapping region; the farther a pixel lies from that boundary, the larger the weight of that image's pixel, and the closer, the smaller. The weighted pixel values of the two images' overlapping regions are summed to give the pixel values of the stitched image, making the transition at the seam more natural and achieving the image fusion.
A multi-camera image acquisition device, comprising:
the transmission module is used for transmitting the object to be detected to an image acquisition area;
the image acquisition module comprises two or more cameras mounted at equal height directly above the image acquisition area and is used for capturing images of the object to be detected;
the image processing module is used for preprocessing the multiple captured images;
and the image stitching module is used for stitching the images captured by adjacent cameras in sequence and fusing the overlapping regions of the stitched images to obtain the image of the object to be detected, wherein the images captured by adjacent cameras are stitched as follows: in each image, the set of intersection points between the contour of the object to be detected and the image edge is obtained; for two images i and j captured by adjacent cameras, image j is searched, according to the heights of the intersection points in image i, for the corresponding points lying on the object to be detected; the set of point correspondences between image i and image j is obtained from the pixel contour distance between each of these points and the point of approximately equal height in image j; the rotation matrix and translation matrix of the stitching are computed from the intersection point set and its corresponding point set; and image j is rotated and translated by the rotation matrix and translation matrix to complete the stitching of images i and j.
Further, the image acquisition module also comprises a camera frame for fixing the cameras; the cameras are mounted horizontally at equal intervals on the frame so that the images captured by adjacent cameras partially overlap, and each camera is at the same height above the object to be detected and perpendicular to its surface.
Furthermore, light shielding plates are mounted outside the camera frame, and LED light sources are mounted inside the shielding plates.
Further, the image processing module preprocesses the multiple acquired images to be detected as follows:
performing adaptive filtering on the image;
performing morphological processing on the image, including erosion and dilation, and extracting an image of the object to be detected using a threshold-based image segmentation algorithm;
and extracting the edge of the image of the object to be detected using an edge detection algorithm.
Further, the intersection point set comprises at least the intersection points of the contour of the object to be detected with the image edge at the maximum and minimum heights.
Further, in the image stitching module, the image fusion specifically comprises: computing the pixel values of the overlapping region of the stitched images using a weighted fusion method, in which the weight of each source image's pixel is determined by its distance from that image's boundary of the overlapping region; the farther a pixel lies from that boundary, the larger the weight of that image's pixel, and the closer, the smaller. The weighted pixel values of the two images' overlapping regions are summed to give the pixel values of the stitched image, making the transition at the seam more natural and achieving the image fusion.
The beneficial effects of the invention are as follows: multiple cameras capture images of the object to be detected, improving image precision while ensuring that the complete object is covered. The captured images are stitched by an image-edge stitching method, which reduces the number of matching points and speeds up image matching. For objects with little feature information, the corresponding point sets can still be determined accurately, reducing the false-match rate of feature points and making the stitching more precise. Acquiring high-precision images in the quality inspection stage improves the accuracy of product inspection and the efficiency of enterprise quality control, thereby advancing the industrialization of quality inspection and the intelligent-manufacturing development of enterprises.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification and constitute a part of it, illustrate exemplary embodiments and, together with the description, serve to explain the specification; they do not limit it.
FIG. 1 is a flow chart of an image acquisition method of the present invention;
FIG. 2 is a diagram of an image stitching method according to an embodiment of the present invention;
FIG. 3 is an image fusion graph provided by an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of the image acquisition device of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the invention clearer, the technical solutions of the invention are described fully below with reference to specific embodiments and the accompanying drawings. The embodiments described are only some, not all, of the embodiments of the present disclosure; all other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of this specification.
Based on an understanding of the above problems, an embodiment of the present disclosure provides a multi-camera image acquisition method and device for garment quality inspection. Fig. 1 is a flow chart of the multi-camera image acquisition method, which comprises the following steps:
and S100, transmitting the garment to be detected to an image acquisition area.
Preferably, the garment to be detected is laid flat on a conveyor belt, which conveys it to the image acquisition area, that is, the area of the garment photographed by the cameras.
Step S200: arrange two or more cameras on the camera frame to capture images of the garment to be detected.
Preferably, the camera frame is built from European-standard 3030 aluminium profile. Light shielding plates are mounted on five sides of the frame to prevent uneven exposure caused by interference from external light sources, and an LED light source is mounted inside the shielding plates so that the lighting is identical for every shot. Three Mars4072S-24uc industrial cameras are mounted on the frame to capture images of the garment to be detected. The cameras are mounted horizontally at equal intervals, and the height of their mounting rail is adjustable; the camera heights are adjusted so that the images captured by adjacent cameras partially overlap, each camera is at the same height above the garment, and each camera is perpendicular to the garment surface. When the garment to be detected is conveyed into the shooting area, the cameras capture a group of pictures of it.
Step S300: perform image preprocessing on the multiple captured images.
Preferably, the preprocessing comprises: applying adaptive filtering to each image to remove noise while preserving edge detail; applying morphological processing, in which the image is converted from the RGB color space to the HSV color space, the HSV distribution of the garment background is obtained, and the garment image is separated from the background; applying an opening operation to the garment image, first eroding it to eliminate small speckles and then dilating it to fill small holes and small concavities at the image edge; and finally extracting the image of the object to be detected using a threshold-based image segmentation algorithm. Edge detection with the Canny operator then accurately extracts the edge contour of the garment to be detected.
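The thresholding and opening steps described above can be sketched in pure NumPy as follows. This is a minimal illustration, not the patent's implementation; in practice OpenCV routines such as `cv2.erode`, `cv2.dilate`, and `cv2.Canny` would typically be used, and the threshold value and structuring-element size here are assumptions:

```python
import numpy as np

def binary_erode(mask, k=1):
    """Erosion with a (2k+1)x(2k+1) square structuring element."""
    p = np.pad(mask, k, constant_values=0)
    out = np.ones_like(mask)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def binary_dilate(mask, k=1):
    """Dilation with a (2k+1)x(2k+1) square structuring element."""
    p = np.pad(mask, k, constant_values=0)
    out = np.zeros_like(mask)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def segment_object(gray, threshold=128):
    """Threshold-based segmentation followed by an opening (erosion then
    dilation), which removes small speckles before edge extraction."""
    mask = (gray > threshold).astype(np.uint8)
    return binary_dilate(binary_erode(mask))
```

The opening removes isolated noise pixels while restoring the object to its original extent, which keeps the subsequent contour extraction clean.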
Step S400: stitch the images, and fuse the overlapping regions of the stitched images to obtain the image of the garment to be detected.
Preferably, the image stitching stitches along the image edges of the partially overlapping region, specifically as follows. As shown in fig. 2, the image captured by camera 1 is image 1, the image captured by camera 2 is image 2, and the image captured by camera 3 is image 3; the images captured by adjacent cameras are stitched in sequence. Taking the stitching of image 1 and image 2 as an example, with the lower-left corner of each image as the coordinate origin: let A be the highest intersection point of the left edge of image 2 with the contour of the garment to be detected, and B the highest intersection point of the right edge of image 1 with that contour. According to the height (i.e. the ordinate) of A, its corresponding point A' on the garment contour is found in image 1; the pixel distance d along the contour between A' and B in image 1 is obtained, and the corresponding point B' of B is then found in image 2 at the same contour distance d from A. Similarly, taking C as the lowest intersection point of the left edge of image 2 with the garment contour and D as the lowest intersection point of the right edge of image 1 with the garment contour, the corresponding point C' of C can be found in image 1 and the corresponding point D' of D can be found in image 2. From the four corresponding point pairs formed by A, B, C, D and A', B', C', D' between image 1 and image 2, the rotation matrix and translation matrix of the stitching are solved, and applying the corresponding rotation and translation to all pixels of image 2 completes the stitching. The stitched image is then stitched and fused with image 3 by the same method to obtain the complete image of the garment to be detected.
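The rotation matrix and translation matrix can be solved from the four point pairs by least-squares rigid alignment. The sketch below uses the standard SVD-based (Kabsch) solution as one possible way to perform this step; the patent does not specify the solver, and the function name is an assumption:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rotation matrix R and translation vector t such that
    R @ src[i] + t approximates dst[i], for N x 2 arrays of point pairs
    (Kabsch/Procrustes solution in 2-D)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    src_c = src.mean(axis=0)               # centroids of each point set
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)    # 2x2 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

With the pairs (A, A'), (B, B'), (C, C'), (D, D') as `src`/`dst`, the returned R and t are applied to every pixel coordinate of image 2 to realize the stitching.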
Further, image fusion is performed on the overlapping region of the stitched images: the pixel values of the overlapping region are computed using a weighted fusion method, in which the weight of each source image's pixel is determined by its distance from that image's boundary of the overlapping region; the farther a pixel lies from that boundary, the larger the weight of that image's pixel, and the closer, the smaller. The weighted pixel values of the two images' overlapping regions are summed to give the pixel values of the stitched image, making the transition at the seam more natural and achieving the image fusion. As shown in FIG. 3, the fused image of the garment to be detected shows that the method of the invention achieves a good stitching result.
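The distance-based weighting described above can be sketched as a linear feathering blend across the overlap. The purely horizontal overlap and the function name `blend_overlap` are illustrative assumptions:

```python
import numpy as np

def blend_overlap(left, right):
    """Weighted fusion of the overlap region of two stitched images
    (H x W arrays covering the same overlap). Each column's weight for
    the left image decreases with distance from the left image's side of
    the overlap, so the seam fades smoothly from one image to the other."""
    w = left.shape[1]                      # overlap width in pixels
    alpha = np.linspace(1.0, 0.0, w)       # per-column weight of the left image
    return alpha[None, :] * left + (1.0 - alpha)[None, :] * right
```

At the left boundary of the overlap the output equals the left image exactly, at the right boundary it equals the right image, and in between the two weighted contributions sum to the fused pixel value.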
Fig. 4 shows a multi-camera image acquisition device, comprising: a transmission module for conveying the garment to be detected to the image acquisition area; an image acquisition module comprising two or more cameras mounted at equal height directly above the image acquisition area, for capturing images of the object to be detected; an image processing module for preprocessing the multiple captured garment images; and an image stitching module for stitching the garment images by the method disclosed above and fusing the overlapping regions of the stitched images to obtain the complete image of the garment to be detected.
Preferably, the image acquisition module further comprises a camera frame on which the cameras are arranged, with light shielding plates mounted outside the frame and LED light sources inside the plates. The cameras capture the images of the garment to be detected and are mounted horizontally at equal intervals on the frame so that the images captured by adjacent cameras partially overlap; each camera is at the same height above the garment to be detected and perpendicular to its surface.
Preferably, the image processing module preprocesses the multiple captured garment images as follows: applying adaptive filtering to each image; applying morphological processing, including erosion and dilation, and extracting the garment image to be detected using a threshold-based image segmentation algorithm; and extracting the edge of the garment image using an edge detection algorithm.
Preferably, the image fusion of the image stitching module computes the pixel values of the overlapping region of the stitched images using a weighted fusion method: the weight of each source image's pixel is determined by its distance from that image's boundary of the overlapping region; the farther a pixel lies from that boundary, the larger the weight, and the closer, the smaller. The weighted pixel values of the two images' overlapping regions are summed to give the pixel values of the stitched image, achieving the image fusion.
With the multi-camera image acquisition method and device, multiple cameras capture images of the object to be detected, improving image precision, and the captured images are stitched by an image-edge stitching method, ensuring that the complete object is covered. Acquiring high-precision images in the quality inspection stage improves the accuracy of product inspection and the efficiency of enterprise quality control, thereby advancing the industrialization of quality inspection and the intelligent-manufacturing development of enterprises.
The above description is for illustration only and is not intended to limit the scope of the invention, which is to be construed broadly; any modifications, equivalents, or improvements made within the spirit and scope of the invention are intended to be covered by it.
Claims (10)
1. A multi-camera image acquisition method, comprising:
transmitting an object to be detected to an image acquisition area;
capturing an image of the object to be detected with two or more cameras mounted at equal height directly above the image acquisition area, wherein the images captured by the cameras are of equal size and the portions of the object to be detected in images captured by adjacent cameras partially overlap;
performing image preprocessing on the acquired images;
and stitching the images captured by adjacent cameras in sequence, and fusing the overlapping regions of the stitched images to obtain a complete image of the object to be detected, wherein the images captured by adjacent cameras are stitched as follows: in each image, the set of intersection points between the contour of the object to be detected and the image edge is obtained; for two images i and j captured by adjacent cameras, image j is searched, according to the heights of the intersection points in image i, for the corresponding points lying on the object to be detected; the set of point correspondences between image i and image j is obtained from the pixel contour distance between each of these points and the point of approximately equal height in image j; the rotation matrix and translation matrix of the stitching are computed from the intersection point set and its corresponding point set; and image j is rotated and translated by the rotation matrix and translation matrix to complete the stitching of images i and j.
2. The multi-camera image acquisition method according to claim 1, wherein the image pre-processing of the acquired images comprises:
performing adaptive filtering on the image to remove noise while preserving edge detail;
performing morphological processing on the image, including erosion and dilation, and extracting an image of the object to be detected using a threshold-based image segmentation algorithm;
and extracting the edge of the object to be detected using an edge detection algorithm.
3. The multi-camera image acquisition method according to claim 1, wherein the intersection point set comprises at least the intersection points of the contour of the object to be detected with the image edge at the maximum and minimum heights.
4. The multi-camera image acquisition method according to claim 1, wherein image fusion is performed on the overlapping region of the stitched images, specifically: the pixel values of the overlapping region are computed using a weighted fusion method, in which the weight of each source image's pixel is determined by its distance from that image's boundary of the overlapping region; the farther a pixel lies from that boundary, the larger the weight of that image's pixel, and the closer, the smaller.
5. A multi-camera image acquisition device, comprising:
the transmission module is used for transmitting the object to be detected to an image acquisition area;
the image acquisition module comprises two or more cameras mounted at equal height directly above the image acquisition area and is used for capturing images of the object to be detected;
the image processing module is used for preprocessing the multiple captured images;
and the image stitching module is used for stitching the images captured by adjacent cameras in sequence and fusing the overlapping regions of the stitched images to obtain the image of the object to be detected, wherein the images captured by adjacent cameras are stitched as follows: in each image, the set of intersection points between the contour of the object to be detected and the image edge is obtained; for two images i and j captured by adjacent cameras, image j is searched, according to the heights of the intersection points in image i, for the corresponding points lying on the object to be detected; the set of point correspondences between image i and image j is obtained from the pixel contour distance between each of these points and the point of approximately equal height in image j; the rotation matrix and translation matrix of the stitching are computed from the intersection point set and its corresponding point set; and image j is rotated and translated by the rotation matrix and translation matrix to complete the stitching of images i and j.
6. The multi-camera image acquisition device according to claim 5, wherein the image acquisition module further comprises a frame for fixing the cameras; the cameras are mounted horizontally on the frame at equal intervals so that the images acquired by adjacent cameras have a partial overlapping area, and each camera is at the same height above the object to be detected with its optical axis perpendicular to the surface of the object to be detected.
7. The multi-camera image acquisition device according to claim 6, wherein a light shielding plate is installed outside the camera frame, and an LED light source is installed inside the light shielding plate.
8. The multi-camera image acquisition device according to claim 5, wherein the image processing module performs image preprocessing on the acquired images of the object to be detected as follows:
performing adaptive filtering on the images;
performing morphological processing on the images, including erosion and dilation, and extracting the image of the object to be detected using a threshold-based image segmentation algorithm;
and extracting the edges of the image of the object to be detected using an edge detection algorithm.
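The segmentation, morphology, and edge steps above can be illustrated with a minimal NumPy sketch. All names are assumptions, and the sketch substitutes simple stand-ins for the claimed operations: a fixed threshold for the threshold-based segmentation, hand-rolled 3x3 erosion and dilation for the morphological processing, and the mask minus its own erosion as a one-pixel edge map; the adaptive filtering step is omitted.

```python
import numpy as np

def erode3(mask):
    """3x3 binary erosion; pixels outside the image count as background."""
    p = np.pad(mask, 1, mode='constant', constant_values=False)
    out = np.ones_like(mask)
    for di in range(3):
        for dj in range(3):
            out &= p[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    return out

def dilate3(mask):
    """3x3 binary dilation, the dual of erode3."""
    p = np.pad(mask, 1, mode='constant', constant_values=False)
    out = np.zeros_like(mask)
    for di in range(3):
        for dj in range(3):
            out |= p[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    return out

def extract_object(img, thresh):
    """Threshold segmentation, then an opening (erosion followed by
    dilation) to suppress speckle noise, then an edge map obtained as
    the mask minus its own erosion."""
    mask = img > thresh
    mask = dilate3(erode3(mask))
    edge = mask & ~erode3(mask)
    return mask, edge

# A 9x9 test image: a bright 5x5 object plus one isolated noise pixel.
img = np.zeros((9, 9))
img[2:7, 2:7] = 255.0
img[0, 8] = 255.0
mask, edge = extract_object(img, thresh=128)
# The opening removes the lone noise pixel; the edge map is the 16-pixel
# one-pixel-wide ring around the 5x5 object.
```

A production version would instead use a library such as OpenCV (`cv2.threshold`, `cv2.erode`, `cv2.dilate`, `cv2.Canny`) and an Otsu or adaptive threshold rather than a fixed one.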
9. The multi-camera image acquisition device according to claim 5, wherein the intersection point set comprises at least the intersection points at which the height of the intersection between the contour of the object to be detected and the image edge is largest and smallest.
10. The multi-camera image acquisition device according to claim 5, wherein in the image stitching module, the image fusion is specifically: calculating the pixel values of the overlapping area of the stitched images using a weighted fusion method, and determining the weight of each pixel in the overlapping area according to its distance from the image boundary of the overlapping area, wherein the farther a pixel is from the boundary of the image it belongs to, the larger its weight, and conversely the smaller.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111128099.XA CN113888408A (en) | 2021-09-26 | 2021-09-26 | Multi-camera image acquisition method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113888408A (en) | 2022-01-04 |
Family
ID=79006824
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111128099.XA Pending CN113888408A (en) | 2021-09-26 | 2021-09-26 | Multi-camera image acquisition method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113888408A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114445688A (en) * | 2022-01-14 | 2022-05-06 | 北京航空航天大学 | Target detection method for distributed multi-camera spherical unmanned system |
CN114445688B (en) * | 2022-01-14 | 2024-06-04 | 北京航空航天大学 | Target detection method for spherical unmanned system of distributed multi-camera |
CN117333372A (en) * | 2023-11-28 | 2024-01-02 | 广东海洋大学 | Fusion splicing method of marine organism images |
CN117333372B (en) * | 2023-11-28 | 2024-03-01 | 广东海洋大学 | Fusion splicing method of marine organism images |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109900711A (en) | Workpiece, defect detection method based on machine vision | |
CN113888408A (en) | Multi-camera image acquisition method and device | |
JPH07325161A (en) | Method and equipment for package inspection | |
CN110567976B (en) | Mobile phone cover plate silk-screen defect detection device and detection method based on machine vision | |
CN110345877B (en) | Method for measuring aperture and pitch of tube plate | |
CN106780473A (en) | A kind of magnet ring defect multi-vision visual detection method and system | |
CN105865329A (en) | Vision-based acquisition system for end surface center coordinates of bundles of round steel and acquisition method thereof | |
CN110533654A (en) | The method for detecting abnormality and device of components | |
CN104103069B (en) | Image processing apparatus, image processing method and recording medium | |
CN115308222B (en) | System and method for identifying poor chip appearance based on machine vision | |
CN109900719A (en) | A kind of visible detection method of blade surface knife mark | |
CN106651849A (en) | Area-array camera-based PCB bare board defect detection method | |
CN107895362A (en) | A kind of machine vision method of miniature binding post quality testing | |
CN102879404B (en) | System for automatically detecting medical capsule defects in industrial structure scene | |
CN114136975A (en) | Intelligent detection system and method for surface defects of microwave bare chip | |
CN112345534B (en) | Defect detection method and system for particles in bubble plate based on vision | |
JP2017040600A (en) | Inspection method, inspection device, image processor, program and record medium | |
CN112964732A (en) | Spinning cake defect visual detection system and method based on deep learning | |
TW419634B (en) | Automatic detection system and method using bar code positioning | |
CN114518526A (en) | Automatic testing machine control system suitable for PCB board ICT | |
CN116091506B (en) | Machine vision defect quality inspection method based on YOLOV5 | |
CN116908185A (en) | Method and device for detecting appearance defects of article, electronic equipment and storage medium | |
CN114187269B (en) | Rapid detection method for surface defect edge of small component | |
CN112798608B (en) | Optical detection device and optical detection method for side wall of inner cavity of mobile phone camera support | |
Telljohann | Introduction to building a machine vision inspection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||