CN211403455U - Image acquisition and processing equipment - Google Patents


Info

Publication number
CN211403455U
CN211403455U (application CN201821855661.2U)
Authority
CN
China
Prior art keywords
positioning
camera
image
images
imaging camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201821855661.2U
Other languages
Chinese (zh)
Inventor
朱炳强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201821855661.2U priority Critical patent/CN211403455U/en
Application granted granted Critical
Publication of CN211403455U publication Critical patent/CN211403455U/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

An image acquisition and processing apparatus for acquiring and stitching a plurality of partial images of a photographed object, the apparatus comprising: an imaging camera for taking a plurality of partial images of the object; a reference body provided with at least one positioning mark and fixed relative to the imaging camera; a positioning camera fixed relative to the photographed object, for photographing the reference body and acquiring a positioning image containing at least part of the positioning marks; and a processing unit for stitching the partial images taken by the imaging camera according to the positioning images taken by the positioning camera. The utility model solves the positioning problem in image stitching with a simple structure.

Description

Image acquisition and processing equipment
Technical Field
The utility model relates to the field of image acquisition, and in particular to an image acquisition and processing apparatus.
Background
Applications such as machine vision and industrial inspection require high-resolution imaging over a wide range, yet the field-of-view size and the resolution of a camera (both optical resolution and the object-space size of a single pixel) are conflicting criteria. For example, surface-appearance inspection of a wafer in a semiconductor process demands high-definition inspection at 1 µm resolution over a wafer 300 mm in diameter. With a 2K-pixel camera, a single picture then covers only about 2 mm and cannot span the 300 mm imaging range.
A larger image can be obtained by stitching together multiple images taken at different positions, but the mutual spatial displacement and rotation angle between the images must be known. Moreover, this information generally needs sub-pixel precision to guarantee seamless, accurate stitching. Accurately and stably acquiring the misalignment information between images, and stitching the images on that basis to achieve large-scene, high-precision imaging, is a direction that those skilled in the art need to explore.
The existing large scene high-precision imaging schemes include the following:
Scheme 1. Use a higher-resolution camera and imaging system:
if the commonly used 2K-pixel camera is replaced with a 4K-pixel one, the size of the imaged scene can be doubled without loss of precision.
Scheme 2. Photograph with a plurality of cameras:
a plurality of cameras form a camera array that photographs simultaneously, and the multiple images are stitched to obtain a larger scene. The mutual misalignment between the images depends on the relative positions of the cameras, which can be measured and calibrated beforehand.
Scheme 3. A single camera on a motion platform photographs in a scanning mode, and the images are stitched using the information of one or more displacement sensors:
the principle is similar to that of a scanner. Each exposure covers the part of the area corresponding to the camera's imaging range, and a driving device moves the camera through the shooting positions one by one to acquire an image sequence. Stitching the image sequence yields a larger scene.
The misalignment information between the images can be obtained by converting the readings of one or more high-precision one-dimensional displacement sensors. Fig. 1 shows a scheme in which a camera is moved to different positions to photograph, and the images are stitched to obtain a large-scene image. Each position sensor (1-2 is the X-direction position sensor, 1-3 the Y-direction position sensor) can read the absolute or relative position along one linear direction in space, giving the coordinate of the photographing position on that axis, or read an absolute or relative angle about one rotation direction. The position and attitude of the camera at the moment of photographing are determined by combining the readings of the one-dimensional sensors, and the misalignment information between images is calculated from them. The one-dimensional sensor may be a linear grating encoder, a magnetic grating encoder, etc., for reading the XYZ axes in space, or a circular photoelectric encoder, a resolver, etc., for reading attitude angles.
A common feature of these methods is that the relative position and attitude of the camera and the photographed object change (1-1 in fig. 1 is the camera whose position changes relative to the object). In practice, the object can be fixed and the camera moved, or the camera fixed and the object moved by a driving device. Fig. 2 illustrates a conventional scheme in which the photographed object 2-4 is moved by a two-dimensional motion mechanism to realize in-plane scanning and stitching. In this scheme the camera position is kept constant, and the object is placed, relatively fixed, on the y-direction driving device. The y-direction driving device is mounted on the x-direction driving device and moves along a single direction relative to it via a motor, guide rail, lead screw or other motion mechanism; this motion is measured and read out by the y-direction sensor reading head 2-6. Similarly, the x-direction driving device is placed on the fixed carrying platform 2-1 and moves along the X direction, read out by the x-direction sensor reading head 2-3 (2-2 is the X-direction displacement sensor, 2-5 the X-direction motion platform, 2-7 the Y-direction motion platform, and 2-8 the Y-direction displacement sensor). Photographing at different x, y positions yields a series of images and the corresponding photographing-position information; the translation between images follows from the x, y readings, so the different images can be stitched into a larger scene.
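As a concrete illustration of scheme 3, the conversion from displacement-sensor readings to inter-image translation amounts to dividing the encoder displacement by the object-space pixel size. The names and numbers below are illustrative, not taken from the patent:

```python
# Illustrative sketch: converting X/Y displacement-sensor readings into the
# translational misalignment (in pixels) between two shots in scheme 3.
# The 1 um object-space pixel size is an assumed value.

PIXEL_SIZE_UM = 1.0  # object-space size of one pixel, assumed

def misalignment_px(pos_i_um, pos_j_um, pixel_size_um=PIXEL_SIZE_UM):
    """Translation (dx, dy) in pixels between shots i and j, computed
    from the (x, y) encoder readings (in micrometres) at each shot."""
    dx = (pos_j_um[0] - pos_i_um[0]) / pixel_size_um
    dy = (pos_j_um[1] - pos_i_um[1]) / pixel_size_um
    return dx, dy

# Camera moved 1500 um in X and 0 um in Y between shots:
print(misalignment_px((0.0, 0.0), (1500.0, 0.0)))  # (1500.0, 0.0)
```

This also makes the scheme's drawback concrete: any encoder error translates directly into a stitching error of the same object-space magnitude.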
However, the existing solutions all have various drawbacks:
the main technical drawback of solution 1 derives from the high requirements on camera and lens selection and cost. The photosensitive chip with large area puts very strict requirements on the yield of the manufacturing process, and the camera with high resolution greatly increases the cost and the selectable range; similarly, the lens aperture requirements corresponding to a large-target-surface high-resolution camera are very large, and under the same lens parameter requirements, the manufacturing and assembling processes of a large-aperture lens have very great challenges. The resolution of the current technical camera is higher than 8K, and the cost is high.
Scheme 2 has the same cost problem as scheme 1: the cost of the entire setup is proportional to the area of the imaging range. It also has an additional technical drawback. As shown in fig. 3, if for a single camera the field of view on the photographed object is smaller than the corresponding imaging size on the photosensitive chip, i.e. the optical magnification is greater than 1, two cameras placed side by side in close contact cannot photograph the region between their two images. That is, for applications with optical magnification of 1 or more, the multiple pictures obtained by a camera array cannot completely cover the entire field of view of the object.
Scheme 3 is a large-field-of-view, high-precision imaging mode widely used in industry, for example the common two-dimensional (video) measuring instrument. Its main problem is that the imaging precision places high demands on the driving device. Calculating the misalignment information from one or more high-precision displacement sensors is costly, and also demands high precision and stability from the driving device. For example, to guarantee micron-level measurement accuracy of the image misalignment information, a two-dimensional measuring instrument needs a high-performance grating ruler, high-accuracy guide rails and servo motors, together with a marble or even air-bearing platform to ensure stability; in addition, the ambient temperature generally must be controlled to avoid drift caused by thermal expansion and contraction.
If the driving device jitters during scanning and photographing, the stitched picture shows obvious deformations such as shake and stretching. A tiny motion perpendicular to the travel direction produces ripple-type distortion in the scanned picture; with timed triggering and non-uniform drive speed, the scanned picture is stretched or shrunk.
Disclosure of Invention
In view of the problems of the existing measurement methods described above, embodiments of the utility model provide an image acquisition and processing apparatus.
The image acquisition and processing equipment is used for acquiring and splicing a plurality of local images of a shot object, and comprises:
an imaging camera for taking a plurality of partial images of a subject;
the reference body is provided with at least one positioning mark and is relatively fixed with the imaging camera;
the positioning camera is fixed relative to the shot object and is used for shooting the reference body and acquiring a positioning image containing at least part of positioning marks; and
and the processing unit is used for splicing the local images shot by the imaging camera according to the positioning images shot by the positioning camera.
In an optional embodiment, the image acquisition and processing device further includes a synchronization triggering unit, configured to control the imaging camera and the positioning camera to capture images of the object to be captured and the reference object respectively at the same time.
In an optional embodiment, the processing unit is configured to:
receive the partial images taken by the imaging camera and the positioning-mark images taken by the positioning camera, and determine shooting pose information from the plurality of positioning images so as to stitch the plurality of partial images taken by the imaging camera.
In an optional embodiment, the processing unit is configured to:
obtain a global image (Pref) of the reference body;
at the i-th shooting, obtain a partial image (P[i]) of the photographed object and a partial image (Pref[i]) of the reference body;
search the global image (Pref) for the position of the partial image (Pref[i]), and obtain the coordinates (x, y), relative to the origin, of the image block in the global image (Pref) that matches the partial image (Pref[i]);
calculate from (x, y) the coordinates, relative to the origin, of the partial image (P[i]) in the global image (P) of the photographed object;
and fill the partial image (P[i]) into the global image (P) according to the calculated coordinates.
In an optional embodiment, the processing unit is configured to:
at the i-th shooting, a partial image (P [ i ] of the object to be shot is obtained]) And a local image (P) of the reference bodyref[i])。
At the j-th shooting, a partial image (P [ j ] of the object to be shot is obtained]) And a local image (P) of the reference bodyref[j]) Wherein the partial image (P)ref[j]) And the partial image (P)ref[i]) With a partial overlap region therebetween.
Searching for overlapping area image in each image (P)ref[i]) And picture (P)ref[j]) The relative coordinate offset (Δ x, Δ y) of the two positions is obtained.
-calculating a coordinate offset (au, av) in the local image (P [ j ]) relative to the local image (P [ i ]) from the relative coordinate offset (ax, ay);
-generating said partial image (P [ i ])]) And a partial image (P [ j ]]) Stitching according to the coordinate deviations (Deltau, Deltav) to obtain a composite image (P)ij)。
In an alternative embodiment, the reference body comprises a first reference body and a second reference body arranged at a fixed angle.
In an optional embodiment, the image acquisition and processing apparatus further comprises a driving device for controlling the synchronous motion of the imaging camera and the reference body, or for controlling the synchronous motion of the photographed object and the positioning camera.
In an optional embodiment, the imaging camera and the positioning camera comprise shutters, and the synchronization triggering unit is in signal connection with the shutters of the imaging camera and the positioning camera.
In an optional embodiment, the image acquisition and processing apparatus further comprises:
the synchronous trigger unit is connected with the light source through signals and controls the light source corresponding to the shot object and the light ray stroboscopic synchronization corresponding to the reference body.
In an optional embodiment, the image acquisition and processing apparatus further comprises:
a first fixing structure for fixing the imaging camera and the reference body;
and the second fixing structure is used for fixing the shot object and the positioning camera.
In an alternative embodiment, the positioning mark of the reference body comprises at least one of the following:
structural marks, texture marks, pattern marks.
In an alternative embodiment, the positioning mark is made by at least one of printing, plating and etching.
In an alternative embodiment, the shooting accuracy of the positioning camera is lower than that of the imaging camera.
In an alternative embodiment, the field of view of the positioning camera is greater than the field of view of the imaging camera.
In an optional embodiment, the image acquisition processing device further comprises at least one of a position sensor, a speed sensor and an acceleration sensor, and is configured to acquire shooting pose information of the positioning camera and/or the imaging camera and correct the shooting pose information of the imaging camera.
In an alternative embodiment, the positioning camera is at least one, and the imaging camera is also at least one.
An embodiment of the utility model further provides an image acquisition and processing apparatus for acquiring and stitching a plurality of partial images of a photographed object, comprising:
the first connecting device is used for fixing an imaging camera which is used for shooting a plurality of partial images of a shot object;
the reference body is provided with at least one positioning mark and is relatively fixed with the imaging camera;
the second connecting device is used for fixing a positioning camera, the positioning camera is fixed relative to the shot object and is used for shooting the reference body and acquiring a positioning image containing at least part of positioning marks; and
and the processing unit is used for splicing the local images shot by the imaging camera according to the positioning images shot by the positioning camera.
In an optional embodiment, the processing unit is configured to:
receive the partial images taken by the imaging camera and the positioning-mark images taken by the positioning camera, and determine shooting pose information from the plurality of positioning images so as to stitch the plurality of partial images taken by the imaging camera.
In summary, the image acquisition and processing apparatus provided by the utility model has at least the following advantages:
the embodiment of the utility model provides an image acquisition and processing equipment through relatively fixed imaging camera and reference body to and relatively fixed by shooting object and location camera, utilize the location mark that contains in the image of the reference body that the location camera was shot, confirm the concatenation method of the local image of shooing the object correspondingly. Because can obtain the required image dislocation information of image concatenation fast through the alignment mark, the utility model discloses based on the imaging means of telecontrol equipment and image concatenation, through the design of imaging system and algorithm, guaranteed in principle that image dislocation information can very accurate and stable acquisition to precision and the stability requirement to drive arrangement itself are very low, have that the system is simple, with low costs, the manufacturing accuracy requires low, advantages such as image concatenation stability height.
In addition, the utility model needs no high-precision displacement sensor, guide rail or servo system, so the demands on the motion system are low; it is not easily disturbed by ambient temperature, humidity and the like, and suits a wide range of environments. No restriction is placed on the surface appearance of the photographed object: accurate large-range stitching is achieved even for uniform, monotonous, texture-free regions, and the computation of the misalignment information between images is small.
Drawings
Fig. 1 shows a scheme of obtaining a large scene image by moving a camera to different positions to take pictures and splicing the images.
Fig. 2 illustrates a conventional scheme for moving a photographed object through a two-dimensional motion mechanism, so as to realize in-plane scanning stitching.
Fig. 3 is a schematic diagram illustrating a scenario in which the conventional scheme 2 is not applicable.
Fig. 4 is a schematic view of an image capturing and processing apparatus according to a first embodiment of the present invention.
Fig. 5 is a schematic diagram showing the embodiment of fig. 4 after the object to be photographed and the reference body move.
Wherein, 1-1 is a camera with position change relative to the object, 1-2 is an X-direction position sensor, 1-3 is a Y-direction position sensor, 2-1 is a carrying platform, 2-2 is an X-direction displacement sensor, 2-3 is an X-direction sensor reading head, 2-4 is a shot object, 2-5 is an X-direction motion platform, 2-6 is a Y-direction sensor reading head, 2-7 is a Y-direction motion platform, and 2-8 is a Y-direction displacement sensor.
Detailed Description
The image acquisition and processing apparatus of the utility model is described below through several embodiments.
First embodiment
The first embodiment of the utility model provides an image acquisition and processing apparatus for acquiring and stitching a plurality of partial images of a photographed object. Fig. 4 shows the apparatus of the first embodiment, which may include the following components:
an imaging camera 10, a reference body 20, a positioning camera 30 and a processing unit 40.
The imaging camera 10 may be a photographing device including a lens, an image sensor, a shutter and the like, and is configured to photograph the object 100 and obtain a plurality of partial images of it. Note that the imaging camera 10 refers broadly to an image-capturing device composed of an image sensor, a lens, a light source, a fixed connection structure and other auxiliary imaging modules, and is not limited to a commercially available camera.
The reference body 20 may be one or more; in the embodiment shown in fig. 4 there is one. It may be a plate, as shown, or take any other form. The reference body 20 carries a plurality of positioning marks 21 (structural marks, texture marks, pattern marks and the like), which contain the positioning information from which the positioning camera 30 captures a plurality of positioning images; the partial images taken by the imaging camera are subsequently stitched according to these positioning images.
A structural mark may be a hole, a projection, a recess, a slit, or a combination thereof; a texture mark may be a special texture provided on the reference body; pattern marks are marks such as dots, squares, lines, triangles and crosses. The positioning marks can be formed by one or more processes such as printing, etching and coating. The reference body 20 may be made of a material with stable properties that is little affected by temperature, such as glass or ceramic.
The positioning mark 21 may be plural, and in an embodiment, the positioning camera 30 may acquire a positioning image including at least a part of the positioning marks 21 in the plural positioning marks 21; in other embodiments, the positioning mark may be one (e.g., a narrow to wide band of marks), in which case the positioning camera 30 may acquire a positioning image containing a portion of the positioning mark.
In this embodiment, the image acquisition and processing apparatus may further include a first fixing structure and a second fixing structure. The first fixing structure fixes the imaging camera 10 and the reference body 20; the second fixing structure fixes the photographed object 100 and the positioning camera 30. In other embodiments, the apparatus may include only the second fixing structure for fixing the object 100 and the positioning camera 30, with the imaging camera 10 and the reference body 20 fixedly arranged in some other way, as long as the two remain relatively fixed. The fixing of the object 100 is not shown in fig. 4; those skilled in the art will appreciate that the object 100 and the positioning camera 30 may be fixed in various ways, and the utility model places no particular limitation on this.
In the present embodiment, the subject 100 is disposed in parallel with the reference body 20, and the imaging camera 10 and the positioning camera 30 are directed toward the subject 100 and the reference body 20, respectively. At the timing shown in fig. 4, the imaging camera 10 photographs the a1 area of the subject 100, and the positioning camera 30 photographs the a2 area of the reference body 20.
The positioning camera 30 is used for shooting the reference body 20 and acquiring a positioning image containing at least part of the positioning mark 21. The positioning camera 30 may have a lower shooting accuracy than the imaging camera 10 and the field of view of the positioning camera 30 may be larger than the field of view of the imaging camera 10.
Fig. 5 is a schematic view after the object 100 and the positioning camera 30 have moved. As shown in fig. 5, after the movement the imaging camera 10 photographs the B1 area of the object 100, and the positioning camera 30 photographs the B2 area of the reference body 20. The B1 region has moved some distance to the left compared with the A1 region, while the B2 region has moved some distance to the right relative to the A2 region. A reference midpoint may therefore be defined for the object 100 and the reference body 20, and the correspondence between the images of the object 100 and of the reference body 20 derived from each region's deviation from that midpoint.
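A minimal sketch of the relationship illustrated by figs. 4 and 5, under the simplifying assumptions (for illustration only, not stated in the patent) that the two shifts are equal in magnitude and opposite in direction and that both cameras share the same object-space pixel size:

```python
# Because the reference body moves with the imaging camera while the
# positioning camera moves with the object, a shift observed in the
# positioning image corresponds to an opposite shift of the imaging
# camera's footprint on the object (equal-scale assumption for brevity).

def object_shift_from_positioning_shift(dx_ref, dy_ref, scale=1.0):
    """Shift of the imaged region on the object, given the shift (in
    pixels) measured in the positioning image of the reference body."""
    return (-dx_ref * scale, -dy_ref * scale)

# Region B2 moved 10 px right / 4 px up in the positioning image, so the
# imaged region on the object moved 10 px left / 4 px down:
print(object_shift_from_positioning_shift(10.0, 4.0))  # (-10.0, -4.0)
```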
In an optional embodiment, the image acquisition processing device further comprises a synchronization triggering unit. The synchronous trigger unit is used for controlling the synchronous shooting of the imaging camera 10 and the positioning camera 30; specifically, a synchronization trigger unit may be connected to the shutters of the imaging camera 10 and the positioning camera 30 for synchronously controlling the shutters of the imaging camera 10 and the positioning camera 30 to ensure that the imaging camera 10 and the positioning camera 30 capture images simultaneously. The mechanical and electrical synchronous triggering units may be in various forms, and are not described herein again. The setting of the synchronous trigger unit ensures the synchronism of shooting, and facilitates the subsequent operation of splicing a plurality of local images shot by the imaging camera according to a plurality of positioning images shot by the positioning camera.
In the present embodiment, because of the synchronization triggering unit, the imaging camera 10 and the positioning camera 30 shoot at the same moment, so each partial image taken by the imaging camera 10 can be matched, by shooting time, with the positioning image taken by the positioning camera 30 and the offset from the midpoint. In other embodiments, however, the imaging camera 10 and the positioning camera 30 need not shoot synchronously. For example, the two may expose several milliseconds apart; it suffices that in each shot the partial image from the imaging camera 10 and the positioning image from the positioning camera 30 can be directly or indirectly correlated in position.
In one embodiment, if the imaging camera 10 and the positioning camera 30 do not shoot at the same moment, for example the imaging camera fires 30 ms after the positioning camera, the captured positions and images do not directly correspond. However, since the motion trajectory of the object 100 and the positioning camera 30 is known (for example, uniform motion), the position and image that the positioning camera 30 would record 30 ms later can be calculated or estimated, and via the offset distance these can be made to correspond to the position and image captured by the imaging camera 10. The goal of stitching the imaging camera's partial images according to the positioning camera's positioning images is thus still achieved, so the apparatus is not limited to including a synchronization triggering unit.
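The back-calculation described above can be sketched as follows; the constant-velocity assumption, units and values are illustrative:

```python
# Sketch: if the imaging camera fires 30 ms after the positioning camera
# and the stage moves at (approximately) constant velocity, the pose of
# the positioning camera at the imaging instant can be estimated by
# linear extrapolation of its last measured position.

def extrapolate_position(p0, velocity, dt_s):
    """Position dt_s seconds after a measurement at p0 = (x, y),
    assuming constant velocity (vx, vy) in the same units per second."""
    return (p0[0] + velocity[0] * dt_s, p0[1] + velocity[1] * dt_s)

# Stage moving at 2 mm/s in X; positioning image taken at x = 5.0 mm and
# the imaging camera fired 0.03 s later: estimated x is about 5.06 mm.
x, y = extrapolate_position((5.0, 0.0), (2.0, 0.0), 0.03)
```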
The processing unit 40 is configured to receive the partial images captured by the imaging camera 10 and the images of the positioning marks captured by the positioning camera 30, and splice the partial images captured by the imaging camera according to the positioning images captured by the positioning camera.
For example, the processing unit 40 determines the pose of the positioning camera 30 at each shooting instant from the corresponding positioning image. In this embodiment, since the positioning camera 30 and the object 100 are fixed to each other, the shooting pose information of the object 100 follows from that of the positioning camera 30, and the partial images taken by the imaging camera 10 can be stitched accordingly.
The above-described photographing pose information includes XYZ positions and pitch angles, rotation angles, and roll angles of the object 100 or the positioning camera 30 with respect to a given reference coordinate system, that is, six degrees of freedom known to those skilled in the art. The shooting pose information may also include a part of the above information, but not all of the information, and will not be described herein again.
Stitching the partial images uses misalignment information. For two images in digital storage format, I1(x, y) and I2(x, y), where x and y are the row and column coordinates of the two-dimensional image-sensor array, the misalignment information is a mathematical transformation (x, y) → (f(x, y), g(x, y)) such that the transformed image I1(f(x, y), g(x, y)) can be stitched together with I2(x, y) without error. In particular, the transformation may be chosen as a transmission (perspective) transformation:
f(x, y) = (h1·x + h2·y + h3) / (h7·x + h8·y + 1)
g(x, y) = (h4·x + h5·y + h6) / (h7·x + h8·y + 1)
By acquiring the pose information, the parameters h1 to h8 of the transformation can be determined; this is well known to those skilled in the art and is not described further here.
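A worked instance of the transmission (perspective) transformation with parameters h1 to h8; the parameter values below are chosen purely for illustration (h1 = h5 = 1, h3 = 3, h6 = -2 and all others zero give a pure translation by (3, -2)):

```python
def transform(x, y, h):
    """Map (x, y) through the perspective transform with h = (h1, ..., h8)."""
    h1, h2, h3, h4, h5, h6, h7, h8 = h
    w = h7 * x + h8 * y + 1.0          # shared denominator
    return ((h1 * x + h2 * y + h3) / w,
            (h4 * x + h5 * y + h6) / w)

h = (1.0, 0.0, 3.0, 0.0, 1.0, -2.0, 0.0, 0.0)  # pure-translation example
print(transform(10.0, 10.0, h))  # (13.0, 8.0)
```

With nonzero h7 or h8 the same function expresses perspective (keystone) distortion, which is why eight parameters suffice for general plane-to-plane alignment.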
Two methods by which the processing unit 40 may realize the stitching are given below as examples.
In one approach, the processing unit 40, or a storage unit connected to it, stores a global image Pref of the reference body 20. At the i-th shooting, a partial image P(i) of the object and a partial image Pref(i) of the reference body are obtained. The processing unit searches the global image Pref for the position of the partial image Pref(i), and from it determines the position information of the partial image P(i) within the global image of the photographed object. The processing unit also detects the scaling, rotation and distortion of Pref(i) relative to the corresponding region of the global image Pref, and corrects P(i) with the obtained coefficients. The processing unit 40 then splices the corrected partial image into the global image according to the position information.
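A minimal NumPy sketch of this first method; the scale/rotation/distortion correction is omitted, and exhaustive sum-of-absolute-differences matching stands in, purely for illustration, for whatever search the implementation actually uses:

```python
import numpy as np

def locate(template, image):
    """(row, col) of the best sum-of-absolute-differences match of
    `template` inside `image` (exhaustive search)."""
    th, tw = template.shape
    best, best_rc = None, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            score = np.abs(image[r:r + th, c:c + tw] - template).sum()
            if best is None or score < best:
                best, best_rc = score, (r, c)
    return best_rc

def fill(canvas, patch, rc):
    """Fill the partial image `patch` into the global image `canvas`."""
    r, c = rc
    canvas[r:r + patch.shape[0], c:c + patch.shape[1]] = patch
    return canvas

rng = np.random.default_rng(0)
Pref = rng.random((40, 40))          # stored global image of the reference body
Pref_i = Pref[12:20, 25:33].copy()   # local view of it captured at shot i
rc = locate(Pref_i, Pref)            # where shot i sits in the global image
print(rc)                            # (12, 25)
P = np.zeros((40, 40))               # global image of the object being built
P_i = rng.random((8, 8))             # object's partial image at shot i
fill(P, P_i, rc)                     # paste at the derived position
```

In practice the mapping from reference-body coordinates to object coordinates would also include the fixed offset and scale between the two cameras; here they are taken as identity for brevity.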
In another approach, the processing unit 40 does not store a global image Pref of the reference body. At the i-th shooting it obtains a partial image P(i) of the object and a partial image Pref(i) of the reference body; at the j-th shooting, a partial image P(j) of the object and a partial image Pref(j) of the reference body. The partial images Pref(i) and Pref(j) share a partially overlapping region; from the matching of this region the processing unit calculates the relative position of Pref(i) and Pref(j), and from that correspondence the relative position of the partial images P(i) and P(j). The processing unit also determines the scaling, rotation and distortion coefficients of the two acquisitions from the overlapping region and corrects P(i) and P(j) accordingly, then completes the stitching of the two corrected partial images from their relative position. Proceeding in the same way pair by pair, the processing unit 40 completes the stitching of the entire image.
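The second method can be sketched similarly; pure translation is assumed, and the brute-force offset search below is an illustrative stand-in for whatever matching the implementation actually uses:

```python
import numpy as np

def relative_offset(img_i, img_j, max_shift=10):
    """Best (dr, dc) aligning img_j to img_i by minimizing the mean
    absolute difference over their overlapping region."""
    best, best_shift = None, (0, 0)
    h, w = img_i.shape
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            ri, rj = max(0, dr), max(0, -dr)
            ci, cj = max(0, dc), max(0, -dc)
            oh, ow = h - abs(dr), w - abs(dc)
            diff = np.abs(img_i[ri:ri + oh, ci:ci + ow]
                          - img_j[rj:rj + oh, cj:cj + ow]).mean()
            if best is None or diff < best:
                best, best_shift = diff, (dr, dc)
    return best_shift

# Two reference-body views cut from one synthetic scene, with shot j
# displaced 6 columns to the right of shot i:
rng = np.random.default_rng(1)
scene = rng.random((30, 60))
Pref_i, Pref_j = scene[:, 0:30], scene[:, 6:36]
print(relative_offset(Pref_i, Pref_j))  # (0, 6)
```

The recovered (Δx, Δy) on the reference body would then be converted to the object-image offset (Δu, Δv) before pasting, as the text describes.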
As can be seen from the above description, the first embodiment of the present utility model provides an image acquisition and processing apparatus in which the subject and the positioning camera are fixed relative to each other, and the imaging camera and the reference body are fixed relative to each other; the positioning marks contained in the images of the reference body captured by the positioning camera are used to determine how the local images of the subject are stitched. Because the image misalignment information required for stitching can be obtained quickly from the positioning marks, this imaging scheme, based on a motion device and image stitching, guarantees in principle through the design of the imaging system and algorithm that the misalignment information is acquired accurately and stably, while placing very low accuracy and stability requirements on the drive device itself. The system is therefore simple, low in cost, tolerant of low manufacturing accuracy, and highly stable in image stitching.
In addition, the utility model requires no high-precision displacement sensor, guide rail or servo system, and thus places low demands on the motion system; it is not easily disturbed by ambient temperature, humidity and the like, and is applicable in a wide range of environments. The utility model places no restriction on the surface appearance of the subject: accurate large-area stitching can be achieved even for object regions that are uniform, monotonous and textureless. Moreover, the amount of computation needed for the misalignment information between images is small.
In an optional embodiment, the image capturing and processing apparatus may further include a driving device for controlling the synchronous motion of the object 100 and the positioning camera 30, or for controlling the synchronous motion of the imaging camera 10 and the reference body 20. The detailed structure of the driving device is known to those skilled in the art and will not be described herein.
In one embodiment, there is no relative movement between the imaging camera 10 and the reference body 20, while the subject 100 and the positioning camera 30 can be driven by the driving device to perform the aforementioned six-degree-of-freedom movement, for example translation or rotation. In other embodiments, the driving device may instead control the synchronous movement of the imaging camera 10 and the reference body 20.
In an embodiment, the imaging camera and the positioning camera may each comprise an image sensor and a processing unit. The image sensors can be ordinary commercially available parts, and their detailed structure is not repeated here. The processing unit 40 receives the images from the image sensors and stitches the local images. Specifically, after receiving a positioning image containing the positioning mark 21 captured by the positioning camera 30, the processing unit 40 may compare the positioning marks in that image with those in a reference image captured in advance in a stationary state, calculate the translation and rotation of the positioning camera 30 at the moment of capture, deduce from them the translation and rotation of the imaging camera 10, determine the misalignment information accordingly, and stitch the several local images. The processing unit 40 may be implemented in software or hardware, without limitation.
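The comparison with the pre-captured reference image can be made concrete. As a hedged sketch (assuming the marker centers have already been detected in both images and that a 2D rigid model suffices; `rigid_transform_2d` and all coordinates are illustrative, not from the patent), the positioning camera's translation and rotation could be recovered with a Kabsch/Procrustes fit:

```python
import numpy as np

def rigid_transform_2d(ref_pts, cur_pts):
    """Least-squares rotation R and translation t with cur ≈ R @ ref + t
    (Kabsch/Procrustes on N x 2 arrays of detected marker centers)."""
    rc, cc = ref_pts.mean(axis=0), cur_pts.mean(axis=0)
    H = (ref_pts - rc).T @ (cur_pts - cc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cc - R @ rc
    return R, t

theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
marks = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [4.0, 3.0]])
moved = marks @ R_true.T + np.array([2.0, -1.0])   # camera moved between shots
R, t = rigid_transform_2d(marks, moved)
print(np.degrees(np.arctan2(R[1, 0], R[0, 0])))  # recovered rotation, ~10 degrees
```

The recovered (R, t) is exactly the per-shot "translation and rotation of the positioning camera" from which the imaging camera's motion, and hence the misalignment information, is deduced.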
In an embodiment, the imaging camera 10 and the positioning camera 30 may further include shutters, and the synchronous trigger unit is in signal connection with the shutters of both cameras and controls them to open synchronously. Optionally, the imaging camera 10 and the positioning camera 30 may also include light sources. A light source may be constantly on or stroboscopic; if the imaging camera 10 and the positioning camera 30 use stroboscopic light sources, the synchronous trigger unit can trigger the strobes in step with each camera's exposure, ensuring that the imaging image and the positioning image are acquired simultaneously.
It should be noted that, in some embodiments, since the imaging camera and the positioning camera are triggered synchronously, the simultaneously triggered imaging image and positioning image are paired with each other when stored.
The processing unit 40 may include two parts, namely an attitude estimation module and an image stitching module, which may be implemented in software or hardware; the present utility model is not limited in this respect.
The attitude estimation module analyses the captured patterns in the positioning camera's images; by comparing the patterns with positioning images taken at other times, it identifies changes such as translation, scaling and rotation, computes the changes in relative position and angle between the positioning camera and the reference body, and thereby estimates the shooting attitude of the positioning camera relative to the reference body. Since the relative position of the positioning camera and the imaging camera is fixed, and the relative position of the reference body and the subject is fixed, the shooting attitude of the imaging camera 10 relative to the subject 100 can be deduced from the shooting attitude of the positioning camera 30 relative to the reference body 20.
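In the planar case, this back-deduction is a composition of fixed and measured rigid transforms. A minimal sketch under assumed, purely illustrative extrinsics (all frame names and numbers are hypothetical, not from the patent):

```python
import numpy as np

def se2(theta, tx, ty):
    """3x3 homogeneous 2D rigid transform (rotation theta, translation tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]])

# Fixed extrinsics, calibrated once (illustrative numbers only):
T_subj_pos = se2(0.0, 0.10, 0.0)   # positioning-camera frame -> subject frame
T_img_ref = se2(0.0, -0.05, 0.02)  # reference-body frame -> imaging-camera frame

# Measured at shot i by the attitude estimation module:
T_pos_ref = se2(np.deg2rad(2.0), 0.30, -0.12)  # reference body seen by positioning camera

# Chain: subject <- positioning cam <- reference body <- imaging camera.
T_subj_img = T_subj_pos @ T_pos_ref @ np.linalg.inv(T_img_ref)
angle = np.degrees(np.arctan2(T_subj_img[1, 0], T_subj_img[0, 0]))
print(round(angle, 6))  # imaging camera's rotation relative to the subject
```

Because the two extrinsic transforms are constant, only T_pos_ref changes from shot to shot, which is why the single measured pose of the positioning camera fully determines the imaging camera's pose.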
Optionally, the attitude estimation module may also be connected to external sensors such as a position sensor, a velocity sensor or an acceleration sensor, so that the current attitude can be calculated more accurately and reliably by combining the external sensor information with the positioning image.
The image stitching module obtains from the attitude estimation module the attitude of the imaging camera at the moment each picture was taken, and accordingly stitches the sequence of images and information acquired during the motion scan into a result with a large field of view and high precision.
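The stitching step can be illustrated with a toy compositor that pastes each local image into a canvas at its estimated integer offset and averages overlapping pixels (a hedged sketch only; a real system would blend seams and resample sub-pixel offsets):

```python
import numpy as np

def composite(tiles):
    """Paste local images into one canvas at estimated integer (row, col)
    offsets; pixels covered by several tiles are averaged."""
    h = max(r + img.shape[0] for img, (r, c) in tiles)
    w = max(c + img.shape[1] for img, (r, c) in tiles)
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for img, (r, c) in tiles:
        acc[r:r + img.shape[0], c:c + img.shape[1]] += img
        cnt[r:r + img.shape[0], c:c + img.shape[1]] += 1
    return acc / np.maximum(cnt, 1)   # avoid division by zero off-tile

a = np.ones((4, 6))        # corrected local image P(i)
b = 3 * np.ones((4, 6))    # corrected local image P(j), offset 4 columns
mosaic = composite([(a, (0, 0)), (b, (0, 4))])
print(mosaic.shape)  # -> (4, 10): the canvas spans both tiles
```

The offsets fed to `composite` are exactly the per-shot positions derived from the attitude information described above.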
The synchronous trigger unit here refers only to the functional module that realizes synchronous control; physically it may be independent of the processing unit, or integrated with it on the same circuit board or even in the same component.
Similarly, the attitude estimation module and the image stitching module in the processing unit may be physically independent of each other, or integrated on the same circuit board or in the same component.
In an embodiment, the image acquisition processing device further includes at least one of a position sensor, a velocity sensor, and an acceleration sensor, and is configured to acquire shooting pose information of the positioning camera and/or the imaging camera and correct the shooting pose information of the imaging camera.
In one embodiment, there are two or more reference bodies 20, which may include, for example, a first reference body and a second reference body arranged at a fixed angle; that is, another reference body may be provided in addition to the reference body 20 shown in Figs. 4 and 5. Using more than two reference bodies can make it easier for the positioning camera to obtain additional coordinates: for example, the reference body 20 of Fig. 4 or 5 provides X-Y plane positioning marks, while the second reference body provides Z-axis positioning marks. A plurality of positioning cameras 30 may each photograph a different reference body 20, allowing the processing unit to compute the shooting position information more quickly and conveniently. Alternatively, the positioning marks of several reference bodies 20 captured by the positioning camera 30 can jointly reflect the shooting position, and the reference bodies can be arranged so that the processing unit computes the shooting position information more quickly and easily than with a single reference body.
The scheme of the utility model therefore differs from prior-art scheme 1 at least in that the image is formed by stitching images obtained at a plurality of shooting positions. It differs from prior-art scheme 2 at least in that the stitched images are obtained at different times by one imaging camera moving over the subject, so accurate motion-attitude estimation is required; in scheme 2, by contrast, the images come from several cameras whose attitudes are constant and can be calibrated in advance, with no motion-attitude estimation. It differs from prior-art scheme 3 at least in that the misalignment information between the images to be stitched is obtained by the positioning camera photographing the reference body: the raw information comes from the image sensor inside the positioning camera rather than from one or more external one-dimensional displacement sensors, and the image information of the imaging camera is not used to calculate the inter-image misalignment. Consequently, the utility model requires no high-precision displacement sensor, high-stability loading platform or high-precision motion mechanism, and is low in cost. Moreover, the appearance and pattern of the reference body can be specially designed, and a reference body generally carries abundant marks and texture information, which overcomes the problem that regions of uniform appearance cannot be stitched from the image information alone.
Second embodiment
The second embodiment of the present utility model provides an image acquisition and processing apparatus for acquiring and stitching a plurality of local images of a subject, comprising:
a first connecting device for fixing an imaging camera, the imaging camera being used to capture a plurality of local images of the subject;
a reference body provided with at least one positioning mark and fixed relative to the imaging camera;
a second connecting device for fixing a positioning camera, the positioning camera being fixed relative to the subject and used to photograph the reference body and acquire a positioning image containing at least part of the positioning marks; and
a processing unit for stitching the local images captured by the imaging camera according to the positioning images captured by the positioning camera.
The second embodiment is similar to the first, and reference may be made to the first embodiment for the related content. The difference is that the image acquisition and processing apparatus of the second embodiment does not itself include the imaging camera and the positioning camera (or includes only one of them); instead, it comprises a first connecting device and a second connecting device for attaching an external imaging camera and an external positioning camera.
The image acquisition and processing apparatus provided by this embodiment of the utility model fixes the reference body relative to an external imaging camera and fixes the positioning camera relative to the subject, and uses the positioning marks contained in the images of the reference body captured by the positioning camera to determine how the local images of the subject are stitched. Because the image misalignment information required for stitching can be obtained quickly from the positioning marks, this imaging scheme, based on a motion device and image stitching, guarantees in principle through the design of the imaging system and algorithm that the misalignment information is acquired accurately and stably, while placing very low accuracy and stability requirements on the drive device itself. The system is therefore simple, low in cost, tolerant of low manufacturing accuracy, and highly stable in image stitching.
In addition, the utility model requires no high-precision displacement sensor, guide rail or servo system, and thus places low demands on the motion system; it is not easily disturbed by ambient temperature, humidity and the like, and is applicable in a wide range of environments. The utility model places no restriction on the surface appearance of the subject: accurate large-area stitching can be achieved even for object regions that are uniform, monotonous and textureless. Moreover, the amount of computation needed for the misalignment information between images is small.
The image acquisition and processing apparatus provided by the present application has been described in detail above. Specific examples have been used herein to explain the principle and implementation of the application, and the description of the above embodiments is only intended to help understand the method and core idea of the application. A person skilled in the art may, following the idea of the present application, vary the specific embodiments and the scope of application; in summary, the content of this specification should not be construed as limiting the present application.

Claims (8)

1. An image acquisition and processing device for acquiring and stitching a plurality of partial images of a photographed object, comprising:
an imaging camera for capturing a plurality of local images of the subject;
a reference body provided with at least one positioning mark and fixed relative to the imaging camera;
a positioning camera fixed relative to the subject, for photographing the reference body and acquiring a positioning image containing at least part of the positioning marks; and
a processing unit for splicing the local images captured by the imaging camera according to the positioning images captured by the positioning camera.
2. The apparatus according to claim 1, further comprising a synchronization trigger unit configured to control the imaging camera and the positioning camera to capture images of the subject and the reference body, respectively, at the same time.
3. The image acquisition processing apparatus according to claim 1,
the processing unit is configured to:
and receiving the local images shot by the imaging camera and the images shot by the positioning camera and containing the positioning marks, and determining shooting pose information by utilizing the plurality of positioning images so as to splice the plurality of local images shot by the imaging camera.
4. The image acquisition and processing device according to claim 1, wherein the reference body comprises a first reference body and a second reference body arranged at a fixed angle.
5. The apparatus according to claim 1, further comprising a driving device for controlling the synchronous movement of the imaging camera and the reference body, or for controlling the synchronous movement of the subject and the positioning camera.
6. The apparatus according to claim 3, further comprising at least one of a position sensor, a velocity sensor, and an acceleration sensor for acquiring shooting pose information of the positioning camera and/or the imaging camera and correcting the shooting pose information of the imaging camera.
7. An image acquisition and processing device for acquiring and stitching a plurality of partial images of a photographed object, comprising:
a first connecting device for fixing an imaging camera, the imaging camera being used to capture a plurality of local images of the subject;
a reference body provided with at least one positioning mark and fixed relative to the imaging camera;
a second connecting device for fixing a positioning camera, the positioning camera being fixed relative to the subject and used to photograph the reference body and acquire a positioning image containing at least part of the positioning marks; and
a processing unit for splicing the local images captured by the imaging camera according to the positioning images captured by the positioning camera.
8. The image acquisition processing apparatus according to claim 7,
the processing unit is configured to:
and receiving the local images shot by the imaging camera and the images shot by the positioning camera and containing the positioning marks, and determining shooting pose information by utilizing the plurality of positioning images so as to splice the plurality of local images shot by the imaging camera.
CN201821855661.2U 2018-11-12 2018-11-12 Image acquisition and processing equipment Active CN211403455U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201821855661.2U CN211403455U (en) 2018-11-12 2018-11-12 Image acquisition and processing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201821855661.2U CN211403455U (en) 2018-11-12 2018-11-12 Image acquisition and processing equipment

Publications (1)

Publication Number Publication Date
CN211403455U 2020-09-01

Family

ID=72211775

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201821855661.2U Active CN211403455U (en) 2018-11-12 2018-11-12 Image acquisition and processing equipment

Country Status (1)

Country Link
CN (1) CN211403455U (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190612A (en) * 2018-11-12 2019-01-11 朱炳强 Image acquisition and processing equipment and image acquisition and processing method


Similar Documents

Publication Publication Date Title
CN108769530B (en) Image acquisition processing device and image acquisition processing method
KR101458991B1 (en) Optical measurement method and measurement system for determining 3D coordinates on a measurement object surface
US8619144B1 (en) Automatic camera calibration
CN109190612A (en) Image acquisition and processing equipment and image acquisition and processing method
CN106871787B (en) Large space line scanning imagery method for three-dimensional measurement
JP3728900B2 (en) Calibration method and apparatus, and calibration data generation method
US20090067706A1 (en) System and Method for Multiframe Surface Measurement of the Shape of Objects
CN111024047B (en) Six-degree-of-freedom pose measurement device and method based on orthogonal binocular vision
CN101825431A (en) Reference image techniques for three-dimensional sensing
JP2012132739A (en) Stereo camera calibrating device and calibrating method
CN103475820B (en) PI method for correcting position and system in a kind of video camera
CN103679693A (en) Multi-camera single-view calibration device and calibration method thereof
CN112082480A (en) Method and system for measuring spatial orientation of chip, electronic device and storage medium
KR20210117959A (en) System and method for three-dimensional calibration of a vision system
CN109191527A (en) A kind of alignment method and device based on minimum range deviation
US6819789B1 (en) Scaling and registration calibration especially in printed circuit board fabrication
CN211403455U (en) Image acquisition and processing equipment
CN108955642B (en) Large-breadth equivalent center projection image seamless splicing method
CN116743973A (en) Automatic correction method for noninductive projection image
CN113330487A (en) Parameter calibration method and device
Ju et al. Multi-camera calibration method based on minimizing the difference of reprojection error vectors
CN113781579B (en) Geometric calibration method for panoramic infrared camera
JPH11101640A (en) Camera and calibration method of camera
CN115079727A (en) Method for adjusting cradle head of inspection robot
US8885051B2 (en) Camera calibration method and camera calibration apparatus

Legal Events

Date Code Title Description
GR01 Patent grant