CN107135336A - Camera array - Google Patents

Camera array

Info

Publication number: CN107135336A
Application number: CN201610113510.9A
Authority: CN (China)
Prior art keywords: camera, support, plate, support plate
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN107135336B
Inventors: 田勇, 谢清鹏, 张维
Current Assignee: Huawei Technologies Co Ltd
Original Assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority applications: CN201610113510.9A (granted as CN107135336B); PCT/CN2016/095899 (published as WO2017148108A1)

Classifications

    • H04N 23/50: Cameras or camera modules comprising electronic image sensors; control thereof; constructional details
    • G03B 35/08: Stereoscopic photography by simultaneous recording
    • H04N 23/66: Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N 23/958: Computational photography systems, e.g. light-field imaging systems, for extended depth of field imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to the field of multimedia technology and discloses a camera array. The camera array includes a supporting structure and a plurality of cameras arranged in an array; each camera in the array can slide along its optical-axis direction and have its position adjusted in the horizontal direction, so that the optical centers of the cameras in the array are coplanar or collinear. In this technical solution, the supporting structure enables the camera positions to be adjusted so that the optical centers of the cameras are coplanar or collinear, which improves the stitching quality of the pictures captured by the cameras.

Description

Camera array
Technical Field
The invention relates to the technical field of multimedia, in particular to a camera array.
Background
With the development of multimedia technology, a single camera can no longer meet users' requirements. Applications such as panoramic video, stereoscopic shooting, augmented reality, vision measurement, three-dimensional reconstruction and synthetic aperture imaging all require multiple cameras to work cooperatively; these cameras, combined in a specific pose, together form a camera array.
The camera array must be positioned in a way that facilitates video registration as much as possible, i.e., it must reach an optimized positioning pose. The videos acquired by the array must in turn be registered so as to obtain spatial consistency that is as close to perfect as possible, i.e., an optimized registration state.
Video stitching is an important application based on a camera array. It provides content with higher resolution and a larger viewing angle than ordinary video, brings an immersive visual experience, and has broad application prospects and significant research value.
The positioning and registration involved in video stitching aim to obtain spatial consistency that is as close to perfect as possible, i.e., to limit parallax offset as much as possible, since parallax offset causes artifacts such as ghosting that degrade video quality.
Parallax offset can be eliminated only if the cameras in the array share a common optical center or the scene is planar. As long as at least one of these two conditions is met, zero parallax offset can be achieved via an appropriate homography transformation. Unfortunately, real-world scenes are rarely planar, and a common optical center is usually impossible because the camera bodies would physically conflict. Although the parallax offset cannot be eliminated, an optimized registration state can still be reached through optimized positioning and registration.
The prior art provides a device capable of positioning a camera array. It uses a structural unit comprising a support plate and two cameras arranged in the vertical direction; the end of each camera near the support plate is rotatably connected with it, the support plate is provided with a chute, the cameras carry locking screws that pass through the chutes, and the cameras are positioned by tightening the locking screws. A plurality of such structural units are arranged in the horizontal direction to form a two-dimensional camera array. The device then finds several groups of corresponding points in the overlapping area of the videos collected by two adjacent cameras and uses them to solve the homography matrix expressing the registration relationship, thereby registering the two cameras. However, this camera array can only adjust the rotation of the cameras; when adjusting them toward a common center, the positioning process is difficult to control, and the array cannot be positioned to the optimal spatial pose. Moreover, the registration depends entirely on how ideal the acquired video is, so the calculation accuracy is poor.
Disclosure of Invention
The invention provides a camera array that makes it easier to adjust the array to an optimal pose and improves the control precision of that adjustment.
In order to solve the above technical problem, an embodiment of the present invention provides a camera array, including:
a support plate;
supports for at least two cameras, arranged on the support plate, wherein any one of the at least two camera supports can rotate relative to the support plate and can be locked at a set position on the support plate, and the rotation axis about which any support rotates is perpendicular to the support plate;
at least two cameras, whose optical axes converge and whose optical centers are collinear; wherein,
any one of the at least two cameras can slide on its support along the optical-axis direction of the camera and can be locked at a set position on the support.
In this technical solution, the support plate and the supports allow each camera to rotate in the horizontal direction and to slide along its optical-axis direction, so that the optical centers of the cameras lie on the same straight line in the same plane, which improves the quality of the stitched pictures from the cameras.
In a specific embodiment, an arc-shaped chute corresponding to each support is arranged on the support plate, and the at least two arc-shaped chutes on the support plate are concentric; a first locking member is arranged on each support, passes through the arc-shaped chute and is threadedly connected with the support, so that the support plate is slidably connected with the support.
A linear chute is formed in the support of each camera, and a second locking member is arranged on each camera; the second locking member passes through the linear chute and is then threadedly connected with the camera, so that the support of the camera is slidably connected with the camera. In this specific structure, the movement of the support is controlled by the first locking member, which can be a bolt or a screw: a threaded hole is provided on the corresponding support, and the bolt or screw is connected with the threaded hole after passing through the arc-shaped chute. When the support needs to slide, the bolt or screw is loosened and the support can slide; when locking is needed, the bolt or screw is tightened, and its head (or nut) then presses against the support plate and locks the support. The principle of sliding and locking the camera is the same as that of the support, i.e. the sliding and locking are controlled by the second locking member, which is not described again here.
In a preferred embodiment, the arc-shaped chute is provided with angle scale values and/or the linear chute is provided with length scale values. Marking graduations on the arc-shaped chute makes it convenient to control the rotation angle of the support; similarly, length graduations on the linear chute make it convenient to control the telescoping distance of the camera.
In a specific embodiment, each support comprises: a cylindrical shell open at one end, and two connecting plates symmetrically and rotatably connected to the two sides of the open end of the cylindrical shell;
the support plate is provided with at least two fixing plates, and each support is located in the space formed by two fixing plates, among the at least two fixing plates, that are arranged perpendicularly and spaced apart. Each connecting plate of a support is slidably connected with one of the two fixing plates and can be locked at a set position on that fixing plate, and the support can rotate relative to the support plate by sliding its connecting plates relative to the fixing plates.
When the camera array is specifically arranged, it further comprises a third locking member, and a linear chute is formed in the fixing plate; the connecting plate is slidably connected with the fixing plate through the third locking member passing through the linear chute.
When connecting plates are provided on both sides of a fixing plate, the connecting plates on the two sides of the fixing plate are slidably connected with the fixing plate through third locking members passing through the linear chute.
In a more specific embodiment, the support plate is rotatably connected to another support plate at the end near the cameras. This rotatable connection between the two support plates further improves the adjustment of the camera angle and thus the quality of the pictures captured by the cameras.
In order to facilitate placement of the camera array, the camera array further comprises a base on which a support is arranged. One support plate is slidably connected with the support at the end opposite to its camera end and can be locked at a set position on the support; the other support plate is connected with the support at the end opposite to its camera end. The sliding direction of the first support plate relative to the support is the same as the direction in which that support plate rotates relative to the other support plate. The two support plates are carried by the support, and the rotated positions of the support plates are locked by means of chutes formed in the support and locking members fitted in the chutes.
In order to solve the above technical problem, the present invention also provides a camera array, including: two first support plates, two cameras and two supports, the two cameras being respectively fixed in the two supports, wherein,
the two first supporting plates are rotatably connected;
the two supports are respectively slidably connected with the two first support plates and can be locked at set positions on the two first support plates, wherein the sliding direction of either support relative to the first support plate with which it is slidably connected is parallel to the rotation axis about which that first support plate rotates.
In this technical solution, the cameras are adjusted through the rotation between the two first support plates and the sliding of the supports on the first support plates, so that the optical centers of the cameras become collinear, which improves the image-capturing quality of the camera array and hence the video stitching result.
In a particular embodiment, the camera array further comprises two second support plates which are rotatably connected, the rotation axis about which the two second support plates rotate being perpendicular to the rotation axis about which the two first support plates rotate. For either of the two second support plates, one of the first support plates is slidably connected with it and can be locked at a set position on it, and the other first support plate is connected with it; the end of each first support plate that is connected with the second support plate is the end opposite to the end at which the two first support plates are rotatably connected to each other, and the sliding direction of the first support plate relative to the second support plate is the same as the direction in which it rotates relative to the other first support plate. The second support plates make it convenient to adjust the cameras so that their optical centers are collinear.
When specifically connected, the two first support plates located on the same second support plate are each rotatably connected with a first mounting plate, and the rotation axis about which a first support plate rotates relative to its first mounting plate is parallel to the rotation axis about which the two first support plates rotate relative to each other. One of the first mounting plates is fixedly connected with the second support plate, and the other is slidably connected with the second support plate and can be locked at a set position. In this way, when the two first support plates rotate relative to each other, they can remain parallel to the first mounting plates and the second support plate, which facilitates the sliding between them.
In a specific sliding assembly, each second support plate is provided with a first linear chute; the length direction of the first linear chute is parallel to the length direction of the rotation axis by which the two second support plates are rotatably connected, and the other first mounting plate is provided with a locking member slidably fitted in the first linear chute.
Each second support plate is rotatably connected with a second mounting plate, and the rotation axis about which the second mounting plate rotates relative to the second support plate is parallel to the rotation axis about which the two second support plates rotate;
in order to facilitate the placement of the camera array, the camera array further comprises a fixing plate, any one of the second supporting plates is connected with the fixing plate in a sliding mode and can be locked at a set position on the fixing plate, and the other one of the two second supporting plates is connected with the fixing plate.
When the first mounting plate and the second mounting plate are connected specifically, one of the second mounting plates is fixedly connected with the fixing plate, and the other second mounting plate is connected with the fixing plate in a sliding manner and can be locked at a set position;
A second linear chute is provided on the fixing plate, and the length direction of the second linear chute is perpendicular to the length direction of the rotation axis by which the two first support plates are rotatably connected; the other second mounting plate is provided with a locking member slidably fitted in the second linear chute.
In order to control the adjustment amplitude when adjusting the cameras, in a specific embodiment a length scale value is provided on any one of the first support plates for indicating the position of a support relative to that first support plate; and/or,
a length scale value is arranged on any one of the second supporting plates and used for marking the position of any one of the first supporting plates relative to any one of the second supporting plates;
and/or,
the fixing plate is provided with a length scale value for indicating the position of any one second support plate relative to the fixing plate. The relative positions of the parts can be observed intuitively through the length scales, which makes it convenient to adjust the positions of the cameras.
In order to facilitate the placement of the camera array, in a more specific embodiment, the camera array further comprises a tripod support for supporting the camera array, and the camera array is mounted on the tripod support so as to be rotatable about a vertical axis and to be adjustable in both an overall downward view and an upward view.
Drawings
FIG. 1 is a top view of a camera array provided by an embodiment of the present invention;
FIG. 2 is an exploded view of a camera array provided by an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an arc-shaped chute of a camera array according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a linear chute according to an embodiment of the present invention;
FIG. 5 is a side view of a camera array provided in accordance with another embodiment of the present invention;
FIG. 6 is a top view of a camera array provided in accordance with another embodiment of the present invention;
fig. 7 is a schematic view of a camera arrangement of a camera array according to another embodiment of the present invention;
FIG. 8 is a schematic diagram of a mount for a camera array according to another embodiment of the present invention;
fig. 9 is a schematic structural diagram of a support plate of a camera array according to another embodiment of the present invention;
fig. 10 is a schematic structural diagram of a base of a camera array according to another embodiment of the present invention;
fig. 11 is a schematic structural diagram of a base of a camera array according to a third embodiment of the present invention;
fig. 12 is a schematic structural view of a fixing plate of a camera array according to a third embodiment of the present invention;
fig. 13 is a schematic structural view of a second support plate of the camera array according to the third embodiment of the present invention;
fig. 14 is a schematic structural diagram of a first support plate of a camera array according to a third embodiment of the present invention;
fig. 15 is a schematic structural view of a mount for a camera array according to a third embodiment of the present invention;
fig. 16 is an exploded view of a tripod mount for a camera array according to a third embodiment of the present invention;
FIG. 17 is a diagram illustrating a circumferential distribution of optical centers of a camera array according to an embodiment of the present invention;
FIG. 18 is a diagram illustrating an alignment distribution of optical centers of a camera array according to an embodiment of the present invention;
FIG. 19 is a diagram illustrating an alignment distribution of optical centers of a camera array according to an embodiment of the present invention;
FIG. 20 is a schematic view of the optical center position of a camera provided by an embodiment of the present invention;
fig. 21 is a schematic lens diagram of a camera according to an embodiment of the present invention;
FIG. 22 is a diagram illustrating an alignment distribution of optical centers of a camera array according to an embodiment of the present invention;
FIG. 23 is a diagram illustrating an alignment distribution of optical centers of a camera array according to an embodiment of the present invention;
FIG. 24 is a schematic diagram of a re-projection of a camera provided by an embodiment of the present invention;
FIG. 25 is a top view of a re-projection of a camera provided by an embodiment of the present invention;
fig. 26 is a view showing a state in which the camera array is horizontally aligned according to the embodiment of the present invention;
fig. 27 is a state diagram of the camera according to equation 25 after transformation according to the present invention;
fig. 28 is a schematic diagram of X coordinate offset sequence clustering according to an embodiment of the present invention.
Reference numerals:
1-support plate 11-arc chute 2-camera
3-support 31-linear chute 10-base
101-support 1011-chute 20-support plate
201-fixed plate 2011-sliding chute 30-support
301-connection board 302-housing 40-camera
100-base 200-fixed plate 2001-second straight sliding groove
300-second support plate 3001-first linear chute 400-second mounting plate
500-first support plate 5001-sliding groove 600-first mounting plate
700-support 800-triangular support
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make it easier to adjust the camera array to the optimal pose and to improve the control precision, the embodiment of the invention provides a camera array that facilitates adjusting the poses of the cameras in the array, thereby benefiting video registration and improving the result of video stitching.
In a specific embodiment, as shown in fig. 1, which is a schematic structural diagram of an array of cameras 2 in a one-dimensional arrangement, the array of cameras 2 can be placed in either the horizontal or the vertical direction; the following description takes the horizontal placement shown in fig. 1 as an example.
In this embodiment, the array of cameras 2 comprises:
a support plate 1;
supports 3 for at least two cameras 2, arranged on the support plate 1, wherein any one of the supports 3 can rotate relative to the support plate 1 and can be locked at a set position on the support plate 1, and the rotation axis about which any support 3 rotates is perpendicular to the support plate 1;
at least two cameras 2, whose optical axes converge and whose optical centers are collinear; wherein,
any one camera 2 of the at least two cameras 2 is slidable on the mount 3 of any one camera 2 in the direction of the optical axis of any one camera 2 and lockable in a set position on the mount 3 of any one camera 2.
In the above embodiment, the support plate 1 and the supports 3 allow the camera 2 to rotate in the horizontal direction and to slide along the direction of its optical axis, so that the optical centers of the cameras 2 in the same plane can be placed on the same straight line, which improves the quality of the stitched pictures from the cameras 2.
In order to facilitate understanding of the camera 2 array provided by the embodiment of the present invention, the following detailed description is made with reference to the accompanying drawings.
As shown in fig. 1 and 2, an arc-shaped chute 11 corresponding to each support 3 is arranged on the support plate 1, and the at least two arc-shaped chutes 11 on the support plate 1 share a common circle center; a first locking member is arranged on each support 3 and passes through the arc-shaped chute 11 to be threadedly connected with the support 3, so that the support plate 1 is slidably connected with the support 3. Specifically, the movement of the support 3 is controlled by the first locking member, which can be a bolt or a screw: a threaded hole is provided on the corresponding support 3, and the bolt or screw is connected with the threaded hole after passing through the arc-shaped chute 11. When the support 3 needs to slide, the bolt or screw is loosened and the support 3 can slide; when locking is needed, the bolt or screw is tightened, and its head then presses against the support plate 1 and locks the support 3. In a specific embodiment, when the number of supports 3 is odd, the middle support 3 is fixedly connected with the support plate 1; in this case the first locking member connected to that support 3 passes through a through hole in the support plate 1 and is then fixedly connected with the support 3, while the supports 3 on both sides are slidably fitted in their corresponding arc-shaped chutes 11 in the support plate 1. With this structure, the arc-shaped chutes 11 and the through hole lie on the same circumference. In the above technical solution, in order to ensure the adjustment effect of each support 3, preferably two arc-shaped chutes 11 are provided for each support 3, and the support 3 is provided with a first locking member corresponding to each arc-shaped chute 11.
In addition, a linear chute 31 is formed in the support 3 of each camera 2, and a second locking member is arranged on each camera 2; the second locking member passes through the linear chute 31 and is then threadedly connected with the camera 2, so that the support 3 of the camera 2 is slidably connected with the camera 2. In this specific structure, the principle of sliding and locking the camera 2 is the same as that of sliding and locking the support 3, i.e. the sliding and locking are controlled by the second locking member, and the details are not repeated here.
In order to facilitate control of the adjustment angle of the camera 2, the arc-shaped chute 11 is provided with angle scale values and/or the linear chute 31 is provided with length scale values. Marking graduations on the arc-shaped chute 11 makes it convenient to control the rotation angle of the support 3; similarly, length graduations on the linear chute 31 make it convenient to control the telescoping distance of the camera 2. Referring to fig. 3 and 4, the arc-shaped chute 11 on the support plate 1 is provided with angle values, so the angle through which the support 3 has rotated can be read directly: the mark at the position of the first locking member gives the angle of the support 3 relative to the center position. For example, if the support 3 needs to be set to 50°, the support 3 is slid until its first locking member is at the 50° mark on the arc-shaped chute 11; the support 3 is then at the required position, which makes adjustment convenient. Similarly, when the camera 2 needs to extend by 3 cm, the second locking member on the camera 2 is adjusted until it is at the 3 cm mark. Through these adjustments the position of the camera 2 is controlled so that the optical centers of the cameras 2 can be collinear, which improves the quality of the video captured by the cameras 2 and thus the video stitching result.
In order to facilitate understanding of the effects of the camera 2 array provided in the present embodiment, the following is a detailed description of the operation principle of the camera 2 array provided in the present embodiment.
It should be noted that video stitching is an important application based on the camera 2 array, and the present embodiment will be described by taking the video stitching as an example to illustrate the positioning and registration of the camera 2 array. For this purpose, first an optimized pose of the camera 2 array for video stitching and an optimized registration state definition are proposed.
The optimal pose definition for the camera 2 array for video stitching is: under the condition of specifying the array scale and the resolution, the total array visual angle is maximized and no blind area exists; and an optimized registration state is achieved through an optimized registration method.
The optimal registration state definition for the camera 2 array for video stitching is: through global registration, the length of parallax offset of all corresponding points of the videos acquired by the adjacent cameras 2 is integrally minimized, the directions are limited to the specified directions, and the parallax offset of the target corresponding point is zero.
The present embodiment includes two parts: positioning and registration.
The positioning comprises four steps: determining the array form, calculating the deflection angle, positioning the deflection direction, and positioning the telescopic position; the last two steps embody the key features of the invention.
In the first step, the array morphology is determined.
In the case where the total array view angle is not required to be close to or greater than 180 degrees, the optimized pose preferably uses a convergent array as the primary positioning mode. A convergent array means that the optical axes of the cameras 2 intersect in front of the array, so that the lines of sight cross; taking two cameras 2 as an example, the field of view of the left camera 2 is deviated to the right and that of the right camera 2 to the left. The divergent array model is the opposite, and the convergent array model has at least the following advantages over it:
the view angle of the convergent array of cameras 2 is not wasted, and if the view angle of a single camera 2 is a, the total view angle of the N cameras 2 can reach Na. While the total viewing angle of the divergent array must be less than Na;
secondly, the convergent array almost has no blind area, the divergent array has a blind area with a certain depth range, and if the total visual angle is required to be closer to Na, the depth range of the blind area is larger;
thirdly, since the lens is mounted in front of the camera 2, the position of the optical center in the camera 2 is usually shifted forward. The convergent array can therefore achieve a smaller optical centre distance than the divergent array, so that it is possible to achieve a minimum length of the parallax offset vectors of adjacent cameras 2.
The more general case of a two-dimensional array spatially distributed in M rows and N columns, referred to as an M × N array, is discussed next. The horizontally distributed one-dimensional array mentioned above and the vertically distributed one-dimensional array can be regarded as special cases of the two-dimensional array, namely the cases where M = 1 or N = 1 respectively.
The two-dimensional array positioning has two positioning modes, one is a row-preferred positioning mode, namely, the one-dimensional array positioning in the horizontal direction is performed on each row of the array, and the relative position of each row of the one-dimensional array in the vertical direction is determined on the basis, so that the two-dimensional array positioning is completed; the other is a column-preferred positioning mode, namely, each column of the array is firstly subjected to one-dimensional array positioning in the vertical direction, and the relative position of each column of one-dimensional array in the horizontal direction is determined on the basis, so that the two-dimensional array positioning is completed.
Taking the line-first positioning mode as an example, the process may be considered to regard the one-dimensional array after each line positioning as a single virtual camera 2, and then perform vertical positioning on these virtual cameras 2.
It should be noted here that the cameras 2 in the array may all be mounted in their normal orientation or all rotated onto their sides, or whole rows or columns may be rotated, for example all the cameras 2 in the first row placed on their sides. When a camera 2 is placed on its side, its original horizontal-direction parameters, such as the horizontal view angle and the horizontal sensor length, should be treated as vertical-direction parameters, and its original vertical-direction parameters, such as the vertical view angle and the vertical sensor length, should be treated as horizontal-direction parameters.
In a second step, the deflection angle is determined.
In order to maximize the total viewing angle of the array without creating blind spots, the present invention proposes the following scheme for determining the deflection angle. For simplicity of description, it is assumed that the internal parameters such as the focal length, the sensor size, and the like of each camera 2 in the array are the same.
The deflection angle for the first dimension of the two-dimensional array is determined first. For each row in the row-first positioning mode, including the horizontally distributed one-dimensional array, the horizontal deflection angle between adjacent cameras 2 should be equal to the horizontal view angle of a single camera 2; for each column in the column-first positioning mode, including the vertically distributed one-dimensional array, the vertical deflection angle between adjacent cameras 2 should be equal to the vertical view angle of a single camera 2. The horizontal view angle and the vertical view angle of a single camera 2 are given by formula (1).
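The source shows formula (1) only as an image. For a pinhole camera with focal length f and sensor dimensions sX × sY, it is presumably the standard field-of-view relation:

\[ a_X = 2\arctan\frac{s_X}{2f}, \qquad a_Y = 2\arctan\frac{s_Y}{2f} \tag{1} \]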
In formula (1), aX and aY are the horizontal and vertical view angles of the single camera 2, sX and sY are the physical lengths of the sensor in the horizontal and vertical directions, and f is the lens focal length. Strictly speaking, f in formula (1) should be the imaging object distance, but since that is difficult to measure and its value is very close to the lens focal length, it is approximated by the lens focal length.
Next, the deflection angle for the second dimension of the two-dimensional array is determined. To this end, the concepts of the central view angle and the edge view angle are introduced: the central view angles are the horizontal view angle measured at the vertical center of the field of view of the camera 2 and the vertical view angle measured at the horizontal center of the field of view, i.e. the angles of formula (1); the edge view angles are measured at the edges of the field of view and are given by formula (2).
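Formula (2) is likewise shown only as an image. A plausible reconstruction, obtained by measuring the horizontal (respectively vertical) half-width of the sensor against the distance from the optical center to the top/bottom (respectively left/right) edge row of the sensor, is:

\[ a_X' = 2\arctan\frac{s_X/2}{\sqrt{f^2 + (s_Y/2)^2}}, \qquad a_Y' = 2\arctan\frac{s_Y/2}{\sqrt{f^2 + (s_X/2)^2}} \tag{2} \]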
In formula (2), aX' is the horizontal edge view angle, i.e. the horizontal view angle at the upper and lower edges of the field of view of the camera 2, and aY' is the vertical edge view angle, i.e. the vertical view angle at the left and right edges of the field of view of the camera 2.
For the row-first positioning mode, if the number N of cameras 2 in each row is odd, the vertical deflection angle between rows should be aY; otherwise it should be aY'. For the column-first positioning mode, if the number M of cameras 2 in each column is odd, the horizontal deflection angle between columns should be aX; otherwise it should be aX'.
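As a worked illustration of the two formulas and of the odd/even rule above, the following Python sketch (not part of the patent; the edge-angle expression is the reconstruction assumed above) computes the view angles of a single camera and the inter-row deflection angle for the row-first mode:

    import math

    def view_angles(f, sX, sY):
        # Central view angles of a single camera (formula (1), pinhole model).
        aX = 2 * math.atan(sX / (2 * f))
        aY = 2 * math.atan(sY / (2 * f))
        # Edge view angles (assumed form of formula (2)): the horizontal
        # (vertical) view angle at the top/bottom (left/right) edge of the
        # field of view, where the effective viewing distance grows from f
        # to sqrt(f^2 + (s/2)^2).
        aX_edge = 2 * math.atan((sX / 2) / math.hypot(f, sY / 2))
        aY_edge = 2 * math.atan((sY / 2) / math.hypot(f, sX / 2))
        return aX, aY, aX_edge, aY_edge

    def inter_row_deflection(N, aY, aY_edge):
        # Row-first mode: the vertical deflection between rows is aY when the
        # number N of cameras per row is odd, otherwise the edge angle aY'.
        return aY if N % 2 == 1 else aY_edge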
In the third step, the deflection direction is positioned.
And the positioning deflection direction is to position the array to the posture of the circumferential distribution of the optical center on the basis of determining the deflection angle of the array. The circumferential distribution of the optical centers is short for compact circumferential distribution of the optical centers, and at the moment, the cameras 2 are arranged at equal angular intervals, and the front ends of the lenses of the adjacent cameras 2 are just abutted.
With the device described here, the deflection angle can be conveniently adjusted under the constraint of the circumferential distribution of the optical centers, and the pose can be read quantitatively. A specific physical realization is as follows.
Firstly, a support 3 is provided, the support 3 is a U-shaped support 3, the camera 2 can be conveniently placed, and the camera 2 can be fixed in the support 3 by using screw holes at the upper end and the lower end of the camera 2, as shown in fig. 2. Secondly, it is necessary to open a sliding groove (i.e. an arc-shaped sliding groove 11) on the supporting plate 1, as shown in fig. 2. The support 3 can be fixed on the support plate 1 by using screw holes on the bottom surface of the support 3. Under the constraint of the circumferential distribution of the optical centers, the motion track of the screw hole on the bottom surface of the U-shaped support 3 can be calculated within a certain parameter adjusting range, and the shape and the position of the sliding chute depend on the track.
Knowing the lens diameter l and the coordinates (x0, y0) of the screw hole with respect to the camera 2's own coordinate system, the coordinates (x, y) of the screw hole with respect to the common coordinate system can be calculated for a given deflection angle a. When the deflection angle a varies within a limited range, all the corresponding coordinates (x, y) form the motion track of the screw hole. The formulas for the coordinates (x, y) are given below for the 1 × 3 array and the 1 × 4 array, respectively.
First, a 1 × 3 array of cameras 2 is discussed, as in fig. 17. The array takes the coordinate system with point O as the origin as the common coordinate system, and the middle point of the front end of the lens of the middle camera 2 is fixed at that point.
For the middle camera 2, the coordinate system of the camera 2 takes the middle point of the front end of its lens as the origin, with the right side of the camera 2 as the positive X direction and downward as the positive Y direction. Since this coordinate system coincides with the common coordinate system, the common coordinates (x, y) of the screw hole P (x0, y0) defined in the camera 2's own coordinate system are given by formula (3).
When a changes, the intermediate camera 2 is stationary, so the trajectory formed by (x, y) is a fixed point.
For the right-end camera 2, the coordinate system of the camera 2 takes the left vertex of the front end of its lens as the origin, with the right side of the camera 2 as the positive X direction and downward as the positive Y direction. For the screw hole P (x0, y0) defined in the camera 2's own coordinate system, its common coordinates (x, y) are given by formula (4).
In formula (4), l is the lens diameter; when a changes, the locus formed by (x, y) is an arc centered at the right-side point A.
For the left camera 2, the coordinate system of the camera 2 itself takes the right vertex of the front end of the lens as the origin, and the left direction relative to the camera 2 is the positive direction of X, and the down direction is the positive direction of Y. The formula for calculating the common coordinates (x, y) of the screw holes P (x0, y0) defined in the coordinate system of the camera 2 itself is the same as the formula (4). When a is changed, the locus formed by (x, y) is an arc with the left point A as the center.
The 1 × 4 array will now be discussed, as in fig. 18. The array takes the coordinate system with point O as the origin as the common coordinate system, and the abutting point of the front ends of the lenses of the two middle cameras 2 is fixed at that point.
For the right one of the two middle cameras 2, the coordinate system of the camera 2 takes the left vertex at the front end of its lens as the origin, with the right side of the camera 2 as the positive X direction and downward as the positive Y direction. For the screw hole P (x0, y0) defined in the camera 2's own coordinate system, its common coordinates (x, y) are given by formula (5).
When a is changed, the track formed by (x, y) is an arc with the O point as the center.
For the left-side camera 2 in the middle, the coordinate system of the camera 2 takes the right vertex at the front end of the lens as the origin, and the left side relative to the camera 2 is the positive X direction, and the down side is the positive Y direction. The formula for calculating the common coordinates (x, y) of the screw holes P (x0, y0) defined in the coordinate system of the camera 2 itself is the same as the formula (5). When a is changed, the track formed by (x, y) is an arc with the O point as the center.
For the right-end camera 2, the coordinate system of the camera 2 takes the left vertex of the front end of its lens as the origin, with the right side of the camera 2 as the positive X direction and downward as the positive Y direction. For the screw hole Q (x0, y0) defined in the camera 2's own coordinate system, its common coordinates (x, y) are given by formula (6).
When a is changed, the track formed by (x, y) is a relatively complex curve shape.
For the left camera 2, the coordinate system of the camera 2 itself takes the right vertex of the front end of the lens as the origin, and the left direction relative to the camera 2 is the positive direction of X, and the down direction is the positive direction of Y. The formula for calculating the common coordinates (x, y) of the screw hole Q (x0, y0) defined in the coordinate system of the camera 2 itself is the same as the formula (6). When a is changed, the track formed by (x, y) is a relatively complex curve shape.
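The bodies of formulas (3)-(6) are shown only as images in the source and are not reproduced here. Their common structure, however, is a planar rigid transform of the screw-hole point by the deflection angle a about the abutment point of the lens front ends. The following Python sketch samples such a track under that assumption; the pivot location and the rotation sign are assumptions, and for the outer cameras of the 1 × 4 array the pivot itself moves with the inner camera, giving the more complex curve mentioned above.

    import numpy as np

    def screw_hole_track(x0, y0, pivot, angles):
        # Common coordinates of a screw hole defined at (x0, y0) in a camera's
        # own frame, as the camera turns by each deflection angle a about a
        # fixed pivot point (px, py) -- a simplified model of formulas (3)-(6).
        px, py = pivot
        track = []
        for a in angles:
            c, s = np.cos(a), np.sin(a)
            track.append((px + c * x0 - s * y0, py + s * x0 + c * y0))
        return track  # sampled chute trajectory, e.g. for marking graduations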
The motion track of the screw hole derived above satisfies the constraint of the circumferential distribution of the optical centers and determines the shape and position of the chute. Scales should be marked along the chute to indicate the position of the screw hole on the bottom surface of the U-shaped support 3, i.e. the deflection angle a of the camera 2. With the device of the embodiment shown in fig. 2, the camera 2 can be quickly positioned to the required deflection angle a in the positioning module, and the deflection angle a of the camera 2 can be quickly read in the registration module.
In the fourth step, the telescopic position is positioned.
Positioning the telescopic position means that, on the basis of the circumferential distribution of the optical centers formed by the array, the optical centers of the cameras 2 are moved back and forth along the optical-axis direction of each camera 2 so that they reach a designated distribution, thereby limiting the direction of the parallax offset vectors of adjacent cameras 2 to a designated direction. In general, the optical centers of all the cameras 2 are made coplanar and arranged in a regular matrix, a state that the invention calls the optical-center alignment distribution, as shown in fig. 19. In this state, after appropriate homography transformations are applied to the images acquired by all the cameras 2, the parallax offset vectors of left-right adjacent cameras 2 are limited to the horizontal direction, and those of vertically adjacent cameras 2 to the vertical direction.
The telescopic position can be positioned by adding to the positioning device a degree of freedom of back-and-forth movement along the optical-axis direction. Under this constraint, achieving the optical-center alignment distribution requires calculating the length of the back-and-forth movement, and for this the position of the optical center inside the camera 2 needs to be known. One way of obtaining it is as follows.
The optical center should lie on the axis of the lens, so only its depth position needs to be estimated. One way, as shown in fig. 20, is to take the point where the light passing through the sensor converges forward, i.e. point O', as the optical center; it is located at a depth f in front of the sensor. Another way is to take the point where the light passing through the front end of the lens converges backward, i.e. point O, as the optical center. Since a real lens generally does not fit the pinhole model exactly, point O and point O' generally do not coincide, and it is more reasonable in practice to use point O as the optical center position.
When the position of point O cannot be obtained accurately, it can be estimated by the following criterion: the light that reaches point O through the filter at the front of the lens just spans the required view angle, which is equal to the view angle subtended at the sensor through point O'. The depth of point O is then estimated by formula (7).
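Formula (7) is shown only as an image in the source. Equating the angle subtended by the filter (diameter l0) at point O with the diagonal view angle subtended by the sensor (dimensions w × h) at point O' gives, presumably,

\[ \frac{l_0}{2e} = \frac{\sqrt{w^2 + h^2}}{2f} \;\;\Rightarrow\;\; e = \frac{f\, l_0}{\sqrt{w^2 + h^2}} \tag{7} \]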
In formula (7), e is the estimated physical distance from the optical center to the front of the lens, w and h are the horizontal and vertical effective physical dimensions of the sensor, and l0 is the diameter of the filter, which is usually slightly smaller than the lens diameter l, as shown in fig. 20 and 21. The distance from the actual optical center position to the front end of the lens may be slightly less than e, but will not be greater than e.
In addition, a camera 2 calibration method can be used: for example, shoot an object of known physical length, measure the physical distance from the object to the front end of the lens and the imaged length in pixels, and compute e from the imaging geometry.
Next, a 1 × 3 array is taken as an example to illustrate how to position the telescopic position, i.e. how to reach the optical-center alignment distribution state. As shown in fig. 22, with the optical centers circumferentially distributed, a coordinate system is established with the central point of the front end of the lens of the central camera 2 as the origin and the forward direction along its optical axis as the positive Y direction. Fig. 23 shows the optical-center alignment distribution state. To reach it, assume the central camera 2 needs to be moved forward along its optical axis by t0, the right-side camera 2 by t1, and the left-side camera 2 by t2. The telescoping amounts t0, t1 and t2 can be positive or negative; a negative value indicates backward movement.
It is generally required that the optical-center plane of the array after the movement be perpendicular to the optical axis of the central camera 2, and by symmetry t2 = t1, so only t0 and t1 need to be calculated. Obviously, as long as t0 and t1 are not both positive, no physical conflict occurs between the camera bodies in the array.
Based on the circumferential distribution state of the optical centers, after the middle camera 2 is moved forward by t0, the Y coordinate of its optical center is
y0 = t0 - e        (8)
Based on the circumferential distribution state of the optical centers, after the right-side camera 2 is moved forward by t1, the Y coordinate of its optical center is given by formula (9).
To achieve the optical-center alignment distribution, it is only required that y0 = y1, which yields formula (10).
As long as the telescoping amounts t0 and t1 satisfy formula (10), the optical-center alignment state is achieved. In this state the images acquired by the middle camera 2 need no processing, and after an appropriate homography transformation is applied to the images acquired by the right-side camera 2, the parallax offset vectors of the two cameras 2 are limited to the horizontal direction.
Further, in order to minimize the length of the parallax offset vector, t1 should be 0, i.e. the right-side camera 2 does not telescope; the required forward movement of the middle camera 2 is then given by formula (11).
The t0 given by formula (11) is necessarily positive, i.e. the middle camera 2 should telescope forward. In this way no physical conflict occurs between adjacent cameras 2, nor, in most practical cases, any view occlusion. However, for certain combinations of internal and external parameters the middle camera 2 may still block the view of the right-side camera 2. In that case the telescoping amounts of the middle camera 2 and the right-side camera 2 should be matched with each other, and the backward movement of the right-side camera 2 made as small as possible while still satisfying formula (10) and avoiding view occlusion, so that the length of the parallax offset vector is minimized. This situation rarely occurs in practice, so the invention does not give a specific formula for it.
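Formulas (9)-(11) appear only as images in the source. Under the geometry described above (the right-side camera 2 rotated by a about the abutment point at distance l/2 from the origin, with its optical center a distance e behind the lens front along its own axis), a plausible reconstruction is:

\[ y_1 = \frac{l}{2}\sin a + (t_1 - e)\cos a \tag{9} \]

\[ t_0 = e\,(1-\cos a) + \frac{l}{2}\sin a + t_1\cos a \tag{10} \]

\[ t_0 = e\,(1-\cos a) + \frac{l}{2}\sin a \tag{11} \]

With this form, the t0 of formula (11) is positive for any deflection angle 0 < a < 180°, consistent with the statement above.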
The physical realization of positioning the telescopic position can use the side walls of the support 3 in fig. 2, which are provided with linear chutes 31; supported in these chutes, the camera 2 can move back and forth along the optical-axis direction. The chute should be marked with scales to indicate the position of the screw hole of the camera 2, i.e. the telescopic position t of the camera 2, as shown in fig. 2. Here t = 0 is the initial telescopic position, and the screw-hole coordinates (x0, y0) of the bottom surface of the U-shaped support, defined in the coordinate system of the camera 2 itself, refer to this initial telescopic position. With such an arrangement, the camera 2 can be quickly positioned to the desired telescopic position t in the positioning module, and the telescopic position t of the camera 2 can be quickly read in the registration module.
Variations of the implementation of the positioning module.
The implementation methods for positioning the deflection angle and the telescopic position are given above; more specific implementations can be flexibly designed as required. For example, the moving part can be a chute, a lead screw, a hinge, a wheel axle and the like; the control mode can be manual, electric, program-controlled and the like; and the pose reading can be obtained from scales, a screen display, instrument measurement and the like.
The above embodiments of the localized yaw angle and the localized telescopic position are both given for a one-dimensional array, where each camera 2 in the array has only one dimension of the magnitude of the yaw angle. The following second and third embodiments will give examples of the positioning yaw angle and the positioning telescopic position for a two-dimensional array, when each camera 2 in the array has a yaw angle magnitude of two dimensions.
The above embodiments of the positioning deflection angle and the positioning telescopic position are given based on the optimized positioning posture defined above, and the positioning posture can be redefined and decomposed according to actual requirements. For example for visual measurement (depth estimation), the minimization of the parallax offset length is not required, and therefore the optimized pose is necessarily different from the aforementioned definition.
Abstraction of the positioning module.
The deflection angle and the telescopic position can each be abstracted as the magnitude of one independent dimension of the spatial pose, and the function provided by a moving part such as a chute can be abstracted as the limiting track of the corresponding independent dimension. The positioning module can then be abstracted as follows:
for each camera 2 of the array, the possible space posture is decomposed into a plurality of independent dimensional limiting tracks, and each track can be independently positioned and read; for each camera 2 of the array, the camera is positioned to the required magnitude of the corresponding dimension of each limiting track along the limiting track, so that each camera 2 is positioned to the required space posture.
Part two: registration
An optimized registration method based on an optimized positioning of the array of cameras 2 is set forth below. The method comprises three steps: modeling correction, direction alignment and target registration. The following will be described by taking a 1 × 3 array as an example, in which case the directional alignment step will be embodied as horizontal alignment.
Assuming optimized array positioning, the optical centers of the three cameras 2 are collinear, and the linear direction is parallel to the horizontal direction (imaging plane X direction) of the middle camera 2; in addition, the postures of the three cameras 2 can be quantitatively read from the positioning devices. According to the optimized registration concept proposed by the present invention, the registration state is: the images acquired by the middle camera 2 need not be processed, and after appropriate homographic transformation is performed on the images acquired by the left and right cameras 2, the parallax offset vectors of all corresponding points of the adjacent cameras 2 are limited to the horizontal direction (horizontal alignment), and the parallax offset of the target corresponding point is zero (target registration). This state is referred to as an optimum state because an ideal state in which all corresponding points of the images acquired by the two cameras 2 are shifted by zero parallax is almost impossible in practical use, and the optimum state is the state which is closest to the ideal state and most advantageous for subsequent processing.
In the first step, the correction is modeled.
Modeling correction means mathematically deriving an initial homography transformation for the images acquired by the left and right cameras 2, based on the internal and external parameters of the cameras 2 that can be conveniently obtained after the optimized array positioning, to obtain an initial correction result close to the optimized registration. This step requires no image acquisition at all, avoids the risk in the prior art of depending entirely on how ideal the acquired images are and of local overfitting with large global error, and yields a usable result even if the two subsequent steps are not performed.
The modeling correction can be divided into two steps: deflection correction and parallax correction. The parameters required for the deflection correction step are: the focal length f of the lens, the effective horizontal physical size w of the sensor, the number W of horizontal pixels of the collected image, the number H of vertical pixels of the collected image, and the horizontal deflection angle a of the adjacent camera 2. The parallax correction step additionally requires: the diameter l of the lens, the physical distance e from the optical center to the front end of the lens, the telescopic length t of the left and right cameras 2 relative to the circumferential distribution of the optical centers, and the estimate g of the physical distance, along the optical-axis direction, from the optical center of the middle camera 2 to a typical target. Some of the parameters are indicated in fig. 24. The deflection angle a is positive for a rightward deflection and negative for a leftward deflection, i.e. a positive value is taken for the left camera 2, a negative value for the right camera 2 and zero for the middle camera 2; the telescopic length t is positive for forward movement and negative for backward movement; all other parameters take positive values. Among these parameters, f, w, W and H are easily obtained through the interface or the specifications of the camera 2; a and t can be read from the positioning device carrying the array, and e can be measured or estimated (see the positioning module for details); g, the typical depth, can be measured or given as a rough estimate, e.g. 5 meters for an indoor scene and 50 meters for an outdoor scene.
Deflection correction is performed first. Deflection correction computes the homographic transformation produced by a purely horizontal deflection, assuming the optical center of the camera 2 stays fixed. The principle of pixel re-projection used in deflection correction is shown in fig. 24, with its top view in fig. 25. Point C in the figure is the optical center of camera 2; CO is the principal optical axis of the array, i.e. the optical-axis direction of the middle camera 2; CO1 is the current optical-axis direction of camera 2, deflected from CO by the angle a. OXY is the coordinate system of the reference imaging plane S, whose orientation coincides with that of the image collected by the middle camera 2 and whose origin is the image center point; O1X1Y1 is the coordinate system of the current imaging plane S1 of camera 2, whose orientation coincides with that of the image acquired by the current camera 2 and whose origin is the image center point. The lengths of the vectors CO and CO1 are both F, where F is the focal length converted to pixel units (equation 12).
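A plausible reconstruction of equation 12, assuming the standard conversion of the focal length to pixel units using the sensor's effective horizontal physical size w and the horizontal pixel count W, is:

F = f × W / w    (equation 12, assumed form)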
Under this setting, the length units of the coordinate systems OXY and O1X1Y1 are both pixels.
The re-projection projects any point P1 on the current imaging plane S1 onto the reference imaging plane to obtain P: given the coordinates (X1, Y1) of P1 in the coordinate system O1X1Y1, the coordinates (X, Y) of P in the coordinate system OXY are sought, and they can be derived from the geometry (equation 13).
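A plausible reconstruction of equation 13, assuming a pure horizontal deflection by the angle a about the fixed optical center C, is:

X = F × (X1·cos a + F·sin a) / (F·cos a − X1·sin a)
Y = F × Y1 / (F·cos a − X1·sin a)    (equation 13, assumed form)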
The transformation relationship of equation 13 can be expressed by a homographic transformation matrix, i.e.
P~HRP1
(14)
In equation 14, the sign ~ indicates equality up to a constant factor, the variables P and P1 are homogeneous coordinate column vectors, and HR is the 3 × 3 matrix derived from equation 13 (equation 15).
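Written as a 3 × 3 matrix consistent with the assumed form of equation 13 above (rows separated by semicolons), HR would be:

HR = [cos a, 0, F·sin a; 0, 1, 0; −sin a / F, 0, cos a]    (equation 15, assumed form)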
Since the actually captured image has its upper left corner, rather than the center point, as its origin, when the coordinate system O1X1Y1 is replaced by the captured-image coordinates, equation 14 needs to be modified to
P~HRHCP1
(16)
In equation 16, HC converts the captured-image coordinates, whose origin is the upper left corner, into the center-origin coordinates used above (equation 17).
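A plausible form of HC in equation 17, assuming it merely moves the origin of a W × H image from the upper left corner to the image center (any axis-direction flip would add sign changes), is:

HC = [1, 0, −(W − 1)/2; 0, 1, −(H − 1)/2; 0, 0, 1]    (equation 17, assumed form)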
Equation 16 gives the coordinate transformation from the current captured image to the reference image. The formula is also applicable to the left, right and middle cameras 2, except that the deflection angle a values used therein are positive, negative and zero values, respectively.
Subsequently, parallax correction is performed. Parallax correction gives the homographic transformation relationship at a typical depth plane, taking the shift of the optical-center position into account. It is first necessary to calculate the physical length of the horizontal offset between the optical center of the current camera 2 and the optical center of the middle camera 2, i.e. C0C1 in fig. 24, which is given by equation 18.
In equation 18, d is the horizontal offset of the optical center of the current camera 2 with respect to the optical center of the middle camera 2, taking a negative value for the left camera 2 and a positive value for the right camera 2; for the middle camera 2, d is taken as 0. From this offset, the number of horizontally offset pixels at the typical depth is derived as
In equation 19, D is positive for a rightward offset and negative for a leftward offset, F is given by equation 12, and g is the typical depth estimate.
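A plausible form of equation 19, assuming the usual pinhole relation between a physical baseline offset d and its pixel disparity at depth g, is:

D = F × d / g    (equation 19, assumed form)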
Considering the parallax correction, equation 16 will be further modified to
P~HDHRHCP1
(20)
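A plausible form of the parallax-correction matrix HD of equation 21, assuming it is a pure horizontal pixel translation by the offset D of equation 19 (the sign depending on the offset-direction convention), is:

HD = [1, 0, D; 0, 1, 0; 0, 0, 1]    (equation 21, assumed form)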
In equation 20, HD is the parallax-correction matrix of equation 21, while HR is still given by equation 15 and HC by equation 17. Equation 20 gives the coordinate transformation from the current captured image to the reference image and applies to the center, left and right cameras 2 alike; numbering them 0, 1 and 2 respectively, equation 20 can be rewritten as
P~HDiHRiHCiPi,i=0,1,2
(22)
From equation 22, it can be deduced that if the image captured by the middle camera 2 is used as the reference image, the transformation relationship between the coordinates of the images captured by the left and right cameras 2 to the coordinates of the reference image is
P0~(HD0HR0HC0)-1HDiHRiHCiPi,i=1,2
(23)
I.e. at a typical depth plane, the homographic transformation matrices of the acquired images from the left and right cameras 2 to the reference image are respectively
Hi=(HD0HR0HC0)-1HDiHRiHCi,i=1,2
(24)
In equation 24, HR0 is equivalent to the identity matrix in the homography sense and HD0 is the identity matrix, so equation 24 can be equivalently simplified to
Hi=HC0 -1HDiHRiHCi,i=1,2
(25)
If the parameters involved in equation 25 were exact and the scene lay entirely on the typical depth plane, then with the image acquired by the middle camera 2 kept unchanged as the reference image, the images acquired by the left and right cameras 2, after homographic transformation according to equation 25, would align ideally with the reference image, i.e. the parallax offset of all corresponding points would be zero. In practice the scene cannot lie entirely on the typical depth plane, so the parallax offsets of the corresponding points cannot all be zero; they can, however, be confined entirely to the horizontal direction, i.e. the horizontally aligned state shown in fig. 26. In fact, owing to theoretical model error, imaging error of the camera 2, assembly error of the positioning device, parameter reading and measurement error, and so on, the actual state after the homographic transformation of equation 25 is usually close to, but does not completely reach, horizontal alignment, as shown in fig. 27, so the next processing step is needed.
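For illustration, the modeling-correction chain of equations 12 to 25 can be put into code. The sketch below is a minimal example only: it relies on the assumed forms of F, HR, HC and HD given above rather than on the patent's own formula images, and all function names and numerical values are illustrative.

    import numpy as np

    def pixel_focal_length(f_mm, sensor_width_mm, width_px):
        # Assumed form of equation 12: focal length converted to pixel units.
        return f_mm * width_px / sensor_width_mm

    def H_C(width_px, height_px):
        # Assumed form of equation 17: move the origin from the image corner to its center.
        return np.array([[1.0, 0.0, -(width_px - 1) / 2.0],
                         [0.0, 1.0, -(height_px - 1) / 2.0],
                         [0.0, 0.0, 1.0]])

    def H_R(a_rad, F):
        # Assumed form of equation 15: yaw-only reprojection homography.
        return np.array([[np.cos(a_rad), 0.0, F * np.sin(a_rad)],
                         [0.0, 1.0, 0.0],
                         [-np.sin(a_rad) / F, 0.0, np.cos(a_rad)]])

    def H_D(D_px):
        # Assumed form of equation 21: horizontal translation by the pixel offset D.
        return np.array([[1.0, 0.0, D_px],
                         [0.0, 1.0, 0.0],
                         [0.0, 0.0, 1.0]])

    def modeling_correction(f_mm, sensor_w_mm, W, H, a_deg, d_m, g_m):
        # Equation 25: Hi = HC0^-1 * HDi * HRi * HCi for a side camera i
        # (HC0 equals HCi when all cameras share the same resolution).
        F = pixel_focal_length(f_mm, sensor_w_mm, W)
        D = F * d_m / g_m                      # assumed form of equation 19
        HC = H_C(W, H)
        return np.linalg.inv(HC) @ H_D(D) @ H_R(np.deg2rad(a_deg), F) @ HC

    # Example: left camera of a 1 x 3 array (positive deflection angle, negative
    # optical-center offset), typical depth 5 m; all numbers are made up.
    H_left = modeling_correction(f_mm=8.0, sensor_w_mm=6.4, W=1920, H=1080,
                                 a_deg=30.0, d_m=-0.12, g_m=5.0)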
Second step: horizontal alignment.
The horizontal alignment step adjusts the result of the modeling correction to achieve full-image horizontal alignment. This is done by modifying equation 25: the correction matrix HA is inserted at the leftmost position on the right-hand side of its equal sign, i.e.
Hi=HAHC0 -1HDiHRiHCi,i=1,2
(26)
In fact HA could also be inserted at other positions in equation 25, but equation 26 has the advantage of being equivalent to first transforming a point Pi of the image acquired by camera 2 i (i = 1, 2) into Pi' with the Hi of equation 25, and then transforming Pi' into Pi'' with the HA of equation 26, which is computationally convenient. In addition, equation 26 still transforms only the images captured by the left and right cameras 2 while the image captured by the middle camera 2 remains unchanged as the reference image, which is again more convenient than transforming all three images simultaneously.
The degrees of freedom of the correction matrix HA need to be considered next. A homography matrix generally has 8 degrees of freedom, and the solution design should balance degrees of freedom against constraints for the practical situation. Analysis shows that a suitable embodiment retains only 3 degrees of freedom, namely uniform scaling and translation in the X and Y directions, in which case HA takes the form of equation 27.
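A plausible form of HA in equation 27, assuming the three retained degrees of freedom enter in the usual similarity-like way, is:

HA = [s, 0, u; 0, s, v; 0, 0, 1]    (equation 27, assumed form)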
In equation 27, s is the uniform scaling ratio, u is the translation in the X direction, and v is the translation in the Y direction. Note that only s and v need to be determined in the horizontal alignment step; u is determined in the target registration step.
The implementation of the horizontal alignment step is described below taking the horizontal alignment of the middle image and the right image as an example, where the middle image refers to the image captured by the middle camera 2 and the right image refers to the image captured by the left camera 2 after transformation by equation 25; the current state is shown in fig. 28.
First, the regions of interest are extracted from the middle and right images; the region of interest may be the overlap region, the full image, or the overlap region expanded appropriately inward (to the left in the middle image and to the right in the right image). The region of interest may be pre-processed, for example by filtering, enhancement, scaling or binarization; if scaling or another operation involving a coordinate transformation is applied, a matching transformation may be required later.
Feature points are then detected in the two regions of interest, for example with the SIFT, SURF or Harris operator, and the feature points from the middle and right images are matched, for example with Brute Force or FLANN matching.
To improve robustness, the set of matched point pairs may be screened: cross checking, testing the saliency of the best match within its candidate set, limiting the allowable range of coordinate offsets of matched points, and so on; alternatively, a series of subsets may be extracted from the set of matched point pairs, a consistency parameter computed from each subset, and the most significant subset (largest number of elements and/or highest consistency) retained.
Assume that N pairs of matching points remain after screening. The points from the middle image are denoted Pn(xn, yn), n = 0, 1, …, N−1, shown as black dots in fig. 27; the points from the right image are denoted Pn'(xn', yn'), shown as white dots in fig. 27. If transforming the right image with the correction matrix HA of equation 27 is to achieve the horizontally aligned state of fig. 26, the following condition must actually be satisfied:
yn=syn'+v,n=0,1,...,N-1
(28)
Equation 28 is a linear system of equations in s and v; written more explicitly it becomes the stacked system of equation 29.
The system can be solved when N ≥ 2; in practice N should be as large as possible, in which case equation 29 is an over-determined linear system that can be solved by eigenvalue decomposition, SVD decomposition, QR decomposition, RANSAC, Hough transformation and the like, giving s and v in the least-squares or another optimization sense.
The s and v solved from equation 29 are substituted into equation 27 to obtain the correction matrix HA, which is then substituted into equation 26 to obtain the homography transformation matrices Hi (i = 1, 2); u may take an arbitrary value at this point and is temporarily set to zero. With the image captured by the middle camera 2 kept unchanged as the reference image, the images captured by the left and right cameras 2, after transformation by Hi (i = 1, 2), are horizontally aligned with the reference image, i.e. the state of fig. 26 is reached.
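A minimal sketch of this least-squares solve is given below; it assumes the matched y-coordinate lists have already been screened as described above, and the function names are illustrative only.

    import numpy as np

    def solve_s_v(ys_mid, ys_right):
        # Over-determined system yn = s*yn' + v (equations 28-29), solved in the
        # least-squares sense; ys_mid are yn from the middle image, ys_right are yn'.
        A = np.column_stack([np.asarray(ys_right, dtype=float),
                             np.ones(len(ys_right))])
        b = np.asarray(ys_mid, dtype=float)
        (s, v), *_ = np.linalg.lstsq(A, b, rcond=None)
        return s, v

    def correction_matrix(s, v, u=0.0):
        # Assumed form of equation 27: uniform scaling plus X/Y translation,
        # with u left at zero until the target registration step.
        return np.array([[s, 0.0, u],
                         [0.0, s, v],
                         [0.0, 0.0, 1.0]])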
Third step: target registration.
The target registration step makes a further adjustment on the basis of full-image horizontal alignment to achieve a coordinate-aligned state at the target, i.e. zero parallax offset there. The target may be a single target or multiple targets, a strong or a weak perspective, automatically recognized or manually selected, and may even be the entire scene.
The embodiment given below still applies the homographic transformation of equation 26 to the images acquired by the left and right cameras 2; only the value of u in equation 27 remains to be calculated on the basis of the previous step.
Taking the registration of the middle and right images as an example, with u temporarily set to zero, the sequence of matching points from the right image is transformed by equation 26 into Pn''(xn'', yn''), n = 0, 1, …, N−1, while the sequence of matching points from the middle image is still Pn(xn, yn). The X-coordinate differences of the matching point pairs are then calculated pair by pair, i.e.
un=xn-xn",n=0,1,...,N-1
(30)
Here un is the X-coordinate difference of the nth matching point pair from the right image to the middle image, i.e. the X-coordinate offset from the white dot to the black dot in fig. 26. Taking the nth un as u compensates the X-coordinate difference of the nth matching point pair, so that this pair becomes coordinate-aligned. In practice u should be computed from the whole un sequence by some method; a simple method is to take the mean or median of the un sequence as u, which has the effect of generalizing the whole scene into the target for target registration.
Another method is to cluster the un sequence, extract the most significant subset (largest number of elements and/or highest consistency), and take its mean or median as u. The clustering may use a general algorithm such as K-means, or the following method.
The un sequence is sorted in ascending or descending order and connected into a curve, as shown in fig. 28. Steps are then detected in the curve, a step being defined as a subset of consecutive elements whose adjacent differences do not exceed a difference threshold and whose number of elements is not less than a length threshold; one step corresponds to one target region. For example, with a difference threshold of 0.5 and a length threshold of 10, there are two such subsets, of lengths 18 and 20, in fig. 28. One method is then to take the longest subset, i.e. the one of length 20; another is to take the subset with the smallest average absolute value of un, i.e. the one of length 18, the latter being appropriate only when the target depth estimate g is sufficiently accurate. After the subset is chosen, its mean or median is taken as u, which has the effect of registering on the target corresponding to that step.
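A minimal sketch of this step-detection variant is shown below; the threshold values, the fallback to the full sequence, and the function name are assumptions for illustration.

    import numpy as np

    def u_from_steps(un, diff_th=0.5, len_th=10, prefer_smallest_offset=False):
        # Sort the un sequence, find "steps" (runs whose adjacent differences stay
        # within diff_th and whose length reaches len_th), then take the median of
        # the chosen step as u.
        un = np.sort(np.asarray(un, dtype=float))
        steps, start = [], 0
        for i in range(1, len(un) + 1):
            if i == len(un) or un[i] - un[i - 1] > diff_th:
                if i - start >= len_th:
                    steps.append(un[start:i])
                start = i
        if not steps:                      # no step found: fall back to the whole sequence
            return float(np.median(un))
        if prefer_smallest_offset:         # assumes the typical-depth estimate g is accurate
            best = min(steps, key=lambda s: np.abs(s).mean())
        else:                              # default: the longest step
            best = max(steps, key=len)
        return float(np.median(best))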
Variations of the registration implementation.
The implementations above perform target registration after horizontal alignment has been completed; another method performs the two steps together. The procedure is the same as above up to the point where equation 28 would appear, and equation 28 is replaced by equation 31.
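A plausible form of equation 31, assuming the X condition is simply added alongside the Y condition of equation 28 (with xn', yn' the coordinates of the matched points from the right image after the equation-25 transform), is:

xn = s·xn' + u,  yn = s·yn' + v,  n = 0, 1, …, N−1    (equation 31, assumed form)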
The aim is to align the X and Y directions together, which has the effect of generalizing the whole scene into the target for target registration. Equation 31 is a linear system of equations in s, u and v; written more explicitly over all matching point pairs it becomes the stacked system of equation 32.
Solving the system of equations of equation 32 accomplishes horizontal alignment and target registration in one pass, but when the parallax offsets are large this method is not as robust.
The implementations of horizontal alignment and target registration above are feature-based. Another approach is area-based: for example, a cross-correlation is computed between regions of interest from adjacent captured images and the extreme point of the cross-correlation function is located (or the extreme point of its second derivative, etc.), from which the registration parameters are calculated. Yet another approach detects a target in adjacent captured images by moving-target detection, face detection, saliency detection, focus-region detection and the like, and then registers on that target.
The horizontal alignment and target registration implementations may produce a single set of registration parameters or, according to actual requirements, several sets.
The implementations of horizontal alignment and target registration above are automatic; another option is manual registration, or a combination of manual and automatic registration. In that case a corresponding human-computer interaction interface is provided that lets the user enter the registration parameters in some form and view the registration result once the parameters are applied, so that they can be adjusted repeatedly. Combining manual and automatic registration means that the user further adjusts the parameters on the basis of the automatic registration, or selects one of several sets of candidate parameters provided by the system, and so on.
The optimized registration method based on the optimized positioning of the camera 2 array has been explained above using a 1 × 3 array as an example; arrays of other forms can be handled according to the same concept of the present invention.
Abstraction of the registration module.
The registration module can be abstracted as: registering the cameras 2 according to the limiting-track readings and/or other information of each camera 2 of the array. It includes at least the following two types of embodiments.
The modeling correction step requires no video acquisition at all, and a registration result can be obtained from this step alone; it can therefore be abstracted as: registering each camera 2 according to its limiting-track readings and internal parameters.
Modeling correction plus horizontal alignment, or modeling correction plus horizontal alignment plus target registration, all require video acquisition and can be abstracted as: registering each camera 2 according to its limiting-track readings, internal parameters and acquired video.
Three: the multi-camera 2 system.
A system using the camera 2 array is a multi-camera 2 system; on the basis of the optimized positioning and the optimized registration, the present invention provides a design for such a system.
The multi-camera 2 system may be offline or online, i.e. the input original video and the output panoramic video may be video or image files, or video streams. The basic flow of the system framework is as follows: the optimized positioning device carrying the camera 2 array acquires an original video combination through the array on the one hand, and outputs the attitude parameters on the other; the original video combination and the attitude parameters pass together through the optimized registration module to obtain a registered video combination; the registered video combination passes through the video processing module to obtain the output result. If the multi-camera 2 system is a video stitching system, the video processing is embodied as video fusion and the output result is a panoramic video.
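A minimal sketch of this basic flow as a function composition is given below; the four callables stand for the modules named above and are purely illustrative placeholders.

    def run_once(capture, read_pose, register, process):
        # One pass of the system framework described above.
        frames = capture()                   # original video combination from the camera 2 array
        pose = read_pose()                   # attitude parameters from the optimized positioning device
        registered = register(frames, pose)  # optimized registration module
        return process(registered)           # video processing, e.g. fusion into a panoramic video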
The multi-camera 2 system provided by the invention can be varied and extended on the basis of the above flow, for example as follows.
The optimized registration module may feedback-control the camera 2 array. For example, the HA matrix of the right image is calculated during optimized registration, and the left camera 2 is controlled through the value of s to adjust its focusing depth: if s > 1 the depth of focus is increased appropriately, and if s < 1 it is decreased appropriately. The camera 2 array keeps capturing video, the adjusted original video combination is fed to the optimized registration module again, and the calculation and feedback adjustment continue until s = 1.
The optimized registration module may feedback-control the optimized positioning device. For example, the HA matrix of the right image is calculated during optimized registration, and the left camera 2 is controlled through the value of v to adjust its pitch angle: if v > 0 it is rotated upward by an appropriate angle, and if v < 0 downward. The camera 2 array keeps capturing video, the adjusted original video combination is fed to the optimized registration module again, and the calculation and feedback adjustment continue until v = 0.
Another example of the optimized registration module feedback-controlling the optimized positioning device: the fundamental matrix of the right image relative to the middle image is calculated during optimized registration, from which all epipolar lines of the right image relative to the middle image are obtained, and the left camera 2 is controlled through the epipolar direction to adjust its telescopic position: if the epipolar lines converge to the right, the camera is moved forward appropriately; if they converge to the left, it is moved backward appropriately. The camera 2 array keeps capturing video, the adjusted original video combination is fed to the optimized registration module again, and the calculation and feedback adjustment continue until the epipolar lines are all horizontal.
Feedback control of the camera 2 array can be realized through the control interface provided by the camera 2; feedback control of the camera 2 array and of the optimized positioning device can also be realized with control equipment such as stepping motors.
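A minimal sketch of the s- and v-driven feedback rules from the examples above is given below; adjust_focus_depth and adjust_pitch are hypothetical callables (for example wrappers around a camera control interface or a stepping-motor driver) and are not part of any real API.

    def feedback_step(s, v, adjust_focus_depth, adjust_pitch, eps=1e-2):
        # Apply one round of feedback; returns True once s is close to 1 and v close to 0.
        settled = True
        if s > 1 + eps:            # s > 1: increase the depth of focus slightly
            adjust_focus_depth(+1)
            settled = False
        elif s < 1 - eps:          # s < 1: decrease the depth of focus slightly
            adjust_focus_depth(-1)
            settled = False
        if v > eps:                # v > 0: rotate upward by a small angle
            adjust_pitch(+1)
            settled = False
        elif v < -eps:             # v < 0: rotate downward by a small angle
            adjust_pitch(-1)
            settled = False
        return settled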
The optimized registration module may include a mechanism for deciding whether the parameters need to be updated; the decision may be manual or automatic. The registration parameters only need to be recalculated when the system is initialized, when the camera 2 array or the positioning device is adjusted manually or automatically, when the shooting position or the scene itself changes, when the registration target is reset, and so on; otherwise the registration parameters calculated and saved last time can be reused directly each time a video combination is acquired, without recalculation.
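A minimal sketch of such an update decision is shown below; the trigger event names and the cache handling are illustrative assumptions, not from the original text.

    TRIGGER_EVENTS = {"system_init", "array_adjusted", "positioner_adjusted",
                      "scene_changed", "target_reset"}

    def registration_params(events, cached, recompute):
        # Recalculate only when a trigger event occurred or nothing is cached yet;
        # otherwise reuse the parameters calculated and saved last time.
        if cached is None or TRIGGER_EVENTS.intersection(events):
            return recompute()
        return cached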
From the description of the working principle above it can be seen that the camera array provided by this embodiment can also adopt a two-dimensional arrangement. The details of the invention are explained below through specific examples.
As shown in fig. 5, 6, and 7, fig. 5 is a side view showing a two-dimensional arrangement of the camera 40 array, fig. 6 is a top view showing a two-dimensional arrangement of the camera 40 array, and fig. 7 is a schematic diagram showing a two-dimensional arrangement of the cameras 40.
In this embodiment two support plates 20 are used, one support plate 20 being rotatably connected to the other support plate 20 at the end close to the cameras. The position of the cameras 40 in the vertical direction is adjusted by means of this rotational connection between the two support plates 20; the details are set forth below with reference to the accompanying drawings.
As shown in fig. 5 and 10, the camera array provided by this embodiment further includes a base 10 on which a bracket 101 is disposed. One support plate 20 is slidably connected to the bracket 101 at the end opposite its camera end and can be locked at a set position on the bracket 101; the other support plate 20 is connected to the bracket 101 at the end opposite its camera end. The sliding direction of the support plate 20 relative to the bracket 101 is the same as the rotating direction of the support plate 20 relative to the other support plate 20. The two support plates 20 are supported by the bracket 101, and the rotated position of the support plate 20 is locked by means of the sliding groove 1011 provided in the bracket 101 and a locking piece in the sliding groove 1011. Specifically, the bracket 101 is provided with a vertical sliding groove 1011 and a through hole; the lower support plate 20 is fixedly connected to the bracket 101, while the upper support plate 20 is slidably assembled in the sliding groove 1011, can slide in the vertical direction and can be locked at a set position. Adjusting the position of the upper support plate 20 thus adjusts the position of the cameras 40 in the vertical direction.
Referring to fig. 6 and 8, the support 30 includes a cylindrical housing 302 with one open end and two connecting plates 301 symmetrically and rotatably connected to the two sides of the open end of the cylindrical housing 302. The camera 40 is inserted into the housing 302 and fixed, and the two connecting plates 301 are each rotatably connected to one side of the open end of the housing 302.
Referring to fig. 5, 6 and 9, one end of each of the two support plates 20 is hinged to the other, this end being the end at which the cameras are located. Taking the example of two cameras 40 arranged on each support plate 20: each support plate 20 is provided with at least two fixing plates 201, and any support 30 is located in the space formed by two fixing plates 201, of the at least two fixing plates 201, that are arranged perpendicular to each other at an interval; either connecting plate 301 of any support 30 is slidably connected to either of the two fixing plates 201 and can be locked at a set position on it, so that any support 30 can rotate relative to the support plate 20 by sliding its connecting plate 301 relative to the fixing plate 201. As shown in fig. 6, the camera 40 array further includes a third locking member, and the fixing plate 201 is provided with a linear sliding groove 2011; each connecting plate 301 is slidably connected to the fixing plate 201 by a third locking member passing through the linear sliding groove 2011, and when there are connecting plates 301 on both sides of a fixing plate 201, the connecting plates 301 on both sides are slidably connected to the fixing plate 201 by a third locking member passing through the linear sliding groove 2011. That is, as shown in fig. 6, the fixing plate 201 located in the middle has connecting plates 301 on both sides; the third locking member is a bolt and a nut, the bolt passes through the two connecting plates 301 and the sliding groove 2011 and is connected with the nut, and the sliding of the support 30 is controlled by tightening or loosening the nut.
When adjustment in the horizontal direction is needed, the rotation of the support 30 is adjusted by adjusting the relative position between the connecting plate 301 and the fixing plate 201. By the adjustment method described above, the optical centers of the cameras 40 on the same support plate 20 are brought onto the same straight line and the optical centers of the cameras 40 on the two support plates 20 are adjusted into the same plane, with the effect shown in fig. 19. The video stitching effect is then adjusted according to the principle described above, improving the overall quality of the video.
It should be understood that the above embodiments are only described by taking the example of two cameras 40 disposed on each support plate 20, and the principle is the same when three, four or other numbers of cameras 40 are disposed on each support plate 20, and will not be described in detail herein.
In addition, to facilitate adjustment of the cameras 40, an embodiment of the present invention provides another camera 40 array, whose positioning and registration principles are the same as in the above embodiment and are not described again here; only the structural changes of the camera 40 array provided in this embodiment are described below.
Reference is also made to fig. 11 to 15, which are schematic structural views of respective components of the camera 40 array provided in the present embodiment.
The present invention also provides a camera array comprising: two first support plates 500, two cameras, and two holders 700, the two cameras being fixed in the two holders 700, respectively, wherein,
the two first support plates 500 are rotatably connected;
the two supports 700 are respectively connected with the two first support plates 500 in a sliding manner and can be locked at set positions on the two first support plates 500, wherein the sliding direction of any one support 700 in the two supports 700 relative to any one first support plate 500 is parallel to the rotating shaft around which any one first support plate 500 rotates, and any one support 700 is connected with any one first support plate 500 in a sliding manner.
In the above technical solution, the position of the cameras is adjusted by rotating the first support plates 500 and sliding the two supports 700, so that the optical centers of the cameras become collinear, which improves the image capture and video stitching effect of the camera array.
The structure of the camera array and the principle of positioning adjustment are described in detail below with reference to the accompanying drawings.
As shown in fig. 15, the camera array of this embodiment is provided with holders 700. The holder 700 is folded from a rectangular steel plate into the side faces of a rectangular parallelepiped, has a square opening matching the size of the front face of the camera, and fits tightly around the camera. Near the optical-center end of the camera it has two groups of screw holes with front-to-back spacing d1, namely F1, G1 and F2, G2.
As shown in fig. 14, the camera array provided in this embodiment is additionally provided with a second support plate 300 for adjusting the position of the camera array in the horizontal direction. The two first support plates 500 are located on the same second support plate 300; each first support plate 500 is rotatably connected to one first mounting plate 600, and the rotation axis about which the first support plate 500 connected to the first mounting plate 600 rotates is parallel to the rotation axis about which the two first support plates 500 rotate. One of the first mounting plates 600 is fixedly connected to the second support plate 300, and the other first mounting plate 600 is slidably connected to the second support plate 300 and can be locked at a set position. Specifically, the first support plate 500 is formed by hinging 4 rectangular steel plates of the same width, and the joints can rotate freely. The two middle steel plates have the same length and are called movable plates; the two outer steel plates are called mounting plates. Each movable plate has a group of sliding grooves 5001 with front-to-back spacing d1, namely H and I. Each mounting plate has a group of screw holes with left-to-right spacing d2, namely J1, J2 and K1, K2.
As shown in fig. 13, the camera array further includes two second support plates 300 which are rotatably connected; the rotation axis about which the two second support plates 300 rotate is perpendicular to the rotation axis about which the two first support plates 500 rotate. For either of the two second support plates 300: one first support plate 500 is slidably connected to it and can be locked at a set position on it, and the other of the two first support plates 500 is connected to it; the ends of the two first support plates 500 connected to the second support plate 300 are the ends opposite to their mutually hinged ends, and the sliding direction of a first support plate 500 relative to the second support plate 300 is the same as its rotating direction relative to the other first support plate 500. In addition, second mounting plates 400 are disposed on the two sides of the second support plates 300. Specifically, a first linear sliding groove 3001 is provided on each second support plate 300; the length direction of the first sliding groove is parallel to the rotation axis about which the two second support plates 300 rotate, and the other first mounting plate is provided with a locking member slidably fitted in the first linear sliding groove 3001.
In specific manufacture, 4 rectangular steel plates of the same width are connected by hinges, and the joints can rotate freely. The two middle steel plates have the same length and are likewise called movable plates; the two outer steel plates are called connecting plates. Each middle steel plate has a group of first linear sliding grooves 3001 with left-to-right spacing d2, namely M1 and M2, and in addition a group of screw holes with left-to-right spacing d2 aligned with them, namely L1 and L2. Each outer steel plate has two groups of screw holes with spacing d3, namely N1, N3 and N2, N4, and P1, Q1 and P2, Q2, where the left-to-right spacing of the first two groups is d4. Scales are marked beside the sliding groove M1 or M2 to indicate the rotation angle of the two movable plates of the first support plate 500 relative to their completely overlapped state, i.e. the vertical deflection angle.
As shown in fig. 12, the camera array is provided with a fixing plate 200; either of the second support plates 300 is slidably connected to the fixing plate 200 and can be locked at a set position on the fixing plate 200, and the other of the two second support plates 300 is connected to the fixing plate 200.
During specific connection, one of the second mounting plates 400 is fixedly connected with the fixing plate 200, and the other second mounting plate 400 is slidably connected with the fixing plate 200 and can be locked at a set position;
the fixing plate 200 is provided with a second linear sliding groove 2001, the length direction of the second linear sliding groove 2001 is perpendicular to the length direction of the rotating shaft rotatably connected with the two first supporting plates 500, and the other second mounting plate 400 is provided with a locking member slidably fitted in the second linear sliding groove 2001. The fixing plate 200 is formed by cutting a rectangular steel plate into a square hole in the middle thereof, and is shaped like a Chinese character 'hui'. The left side has 4 up-down intervals d3The left-right distance is d4Rectangularly distributed screw holes, i.e. R1、R2、R3、R4. The right side is provided with a group of upper and lower spacing of d3T and S, respectively. In addition, scales are marked beside the sliding groove T or S to indicate the rotation angle of the two movable plates of the second supporting plate 300 relative to the fully-unfolded state, which is a horizontal deflection angle.
As shown in fig. 11, the fixing plate 200 is fixed to a base 100. The base 100 is formed by connecting a steel plate U perpendicular to the lower edge of the whole square steel plate and a steel plate V perpendicular to the lower edge of the square hole, reinforced by three support rods. The steel plate U constitutes the base 100 proper, and the steel plate V constitutes a supporting plate that bears part of the weight of the second support plate 300.
In addition, in order to control the adjustment range of the cameras during adjustment, in a specific embodiment a length scale value is provided on any one of the first support plates 500 for identifying the position of any one of the supports 700 relative to that first support plate 500; and/or,
a length scale value is arranged on any one of the second support plates 300 and used for marking the position of any one of the first support plates 500 relative to any one of the second support plates 300;
and/or,
the fixing plate 200 is provided with a length scale value for identifying a position of any one of the second support plates 300 with respect to the fixing plate 200. The relative position of each part can be observed visually through the set length scale value, and the position of the camera is convenient to adjust.
To facilitate placement of the camera array, in a more specific embodiment, as shown in fig. 16, the camera array further includes a tripod 800 for supporting it; mounted on the tripod 800, the camera array can rotate about a vertical axis and its overall elevation angle can be adjusted. The tripod is connected to the base 100 so as to connect the camera array to the tripod; it should be understood that the tripod can also be used with the camera arrays of the configurations shown in figs. 1 and 5.
In a specific installation, the first step is to connect the second support plate 300 to the structural unit E while positioning the horizontal deflection angle. First, screws are passed through the hole pairs N1R1, N2R2, N3R3 and N4R4 to connect the left connecting plate of the second support plate 300 to the left side of the structural unit E. Then the two groups of screw holes P1, Q1 and P2, Q2 are fastened at the correct positions in the sliding grooves S and T, thereby connecting the right side of the second support plate 300 to the right side of the fixing plate 200. The correct position is one at which the lower-row scale reading at Q1, or the upper-row scale reading at Q2, beside the sliding groove T equals the desired horizontal deflection angle which, according to the deflection-angle calculation rules, should equal the horizontal viewing angle of a single camera.
The second step is to connect the first support plate 500 to the second support plate 300 while positioning the vertical deflection angle. Since the first support plate 500 has two entities, it suffices to describe how the left entity is connected to the left movable plate of the second support plate 300; the connection between the right entity and the right movable plate of the second support plate 300 is obtained symmetrically. First, screws are passed through the hole pairs J1L1 and J2L2 to connect the lower mounting plate of the first support plate 500 to the upper side of the second support plate 300. Then the screw holes K1, K2 are fastened at the correct position in the sliding-groove combination M1, M2, thereby coupling the upper mounting plate of the first support plate 500 to the upper plate of the second support plate 300. The correct position is one at which the scale reading of K2 at M2 equals the desired vertical deflection angle which, according to the deflection-angle calculation rules, should equal the vertical viewing angle of a single camera. These two steps complete the angular positioning of the positioning device. A gap still remains between the two entities of the first support plate 500; in all cases where the array needs to be supported, the structure should be designed to ensure that neither entity intrudes into the space of the other.
The third step is to connect the support 700 to the first support plate 500. Since the support 700 has four entities, it suffices to describe how the upper-left entity of the support 700 is connected to the upper movable plate of the left entity of the first support plate 500; the connection of the lower-left entity of the support 700 to the lower movable plate of the left entity of the first support plate 500, and of the upper-right and lower-right entities of the support 700 to the upper and lower movable plates of the right entity of the first support plate 500, are obtained symmetrically. The two groups of screw holes F1, G1 and F2, G2 are fastened at the optimum position in the sliding grooves H and I, such that the innermost vertex of the holder 700 is exactly aligned with the central axis of the second support plate 300. Connected in this way, the upper-left and upper-right entities face each other across the central axis and the lower-left and lower-right entities face each other across the central axis, so the combination of entities is left-right symmetrical and connected symmetrically up and down.
After the assembly of the positioning device is completed, installing the camera array is very simple: it is only necessary to load the four camera bodies into the four entities of the holder 700 and to align the front end of each camera with the front end of the holder 700. At this point the screw holes F1, G1 and F2, G2 serve not only to position the holder 700 but also to fasten the cameras.
During adjustment, adjusting the rotation angle of the two rotatably connected first support plates adjusts the position of the camera lenses in the vertical direction, and adjusting the rotation angle of the two second support plates adjusts the position of the camera lenses in the horizontal direction; in addition, when a camera needs to slide, the position of its support on the first support plate can be adjusted. Through the several sliding adjustments mentioned above, the camera positions are adjusted so that the optical centers of the cameras are coplanar, which improves the quality of the video captured by the cameras and hence the quality of the adjusted video.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (11)

1. A camera array, characterized in that the camera array comprises:
a support plate;
at least two camera supports on the support plate, wherein any one of the at least two camera supports can rotate relative to the support plate and can be locked at a set position on the support plate, and the rotating shaft around which any one camera support rotates is perpendicular to the support plate;
at least two cameras, the at least two cameras being arranged to converge; wherein,
any one of the at least two cameras can slide on the support of the camera along the optical axis direction of the camera and can be locked at a set position on the support of the camera.
2. The camera array according to claim 1, wherein the support plate is provided with an arc-shaped sliding groove corresponding to any one of the supports, and at least two arc-shaped sliding grooves on the support plate are concentric arc-shaped sliding grooves; a first locking piece is arranged on any one of the supports, penetrates through the arc-shaped sliding groove and is in threaded connection with any one of the supports, so that the supporting plate is in sliding connection with any one of the supports;
the camera comprises a camera body and is characterized in that a linear sliding groove is formed in the support of any camera, a second locking piece is arranged on any camera, and the second locking piece penetrates through the linear sliding groove and then is in threaded connection with any camera, so that the support of any camera is in sliding connection with any camera.
3. The camera array according to claim 2, characterized in that the arc-shaped chute is provided with an angle scale value and/or the straight chute is provided with a length scale value.
4. The camera array of claim 1, wherein any mount comprises: the device comprises a cylindrical shell with an opening at one end, and two connecting plates symmetrically and rotatably connected to two sides of the opening end of the cylindrical shell;
the support plate is provided with at least two fixing plates, any support is located in a space formed by two fixing plates which are perpendicular to each other in the at least two fixing plates at intervals, any connecting plate of any support is connected with any fixing plate in the two fixing plates in a sliding mode and can be locked at a set position on any fixing plate, and any support can rotate relative to the support plate through the fact that any connecting plate slides relative to any fixing plate.
5. The camera array of claim 4, further comprising a third latch member, the stationary plate having a linear slide slot; the connecting plate is connected with the fixing plate in a sliding manner through the third locking piece penetrating through the linear sliding groove;
when the two sides of the fixed plate are provided with the connecting plates, the connecting plates on the two sides of the fixed plate are in sliding connection with the fixed plate through the third locking piece penetrating through the linear sliding groove.
6. The camera array of claim 4 or 5, wherein the support plate is pivotally connected to the other support plate at an end adjacent the camera head.
7. The camera array of claim 6, further comprising a base, wherein a bracket is disposed on the base, the support plate is slidably connected to the bracket at the other end of the support plate opposite to the end near the camera and is lockable at a predetermined position on the bracket, the other support plate is connected to the bracket at the other end of the support plate opposite to the end near the camera, and wherein the support plate slides relative to the bracket in the same direction as the support plate rotates relative to the other support plate.
8. A camera array, characterized in that the camera array comprises: two first support plates, two cameras and two supports, the two cameras being respectively fixed in the two supports, wherein,
the two first supporting plates are rotatably connected;
the two supports respectively with two first backup pad sliding connection and lockable set for the position on two first backup pads, wherein, in two supports arbitrary support is relative the slip direction of arbitrary first backup pad, with arbitrary first backup pad rotates the pivot of winding and is parallel, arbitrary support with arbitrary first backup pad sliding connection.
9. The camera array of claim 8, further comprising two second support plates rotatably connected to each other, wherein the rotation axis about which the two second support plates rotate is perpendicular to the rotation axis about which the two first support plates rotate; wherein, for any one of the two second support plates, the any one first support plate is slidably connected to the any one second support plate and can be locked at a set position on the any one second support plate, and the other of the two first support plates is connected to the any one second support plate; wherein the ends of the two first support plates respectively connected to the any one second support plate are the ends opposite to their rotatably connected ends, and the direction in which the any one first support plate slides relative to the any one second support plate is the same as the direction in which the any one first support plate rotates relative to the other first support plate.
10. The camera array of claim 9, further comprising a fixed plate, wherein any one of the second support plates is slidably connected to the fixed plate and lockable in a set position on the fixed plate, and wherein the other of the two second support plates is connected to the fixed plate.
11. The camera array of claim 10, wherein said any first support plate has a length scale value disposed thereon for identifying a position of said any support relative to said any first support plate; and/or,
a length scale value is arranged on any one of the second supporting plates and used for marking the position of any one of the first supporting plates relative to any one of the second supporting plates;
and/or,
and the fixing plate is provided with a length scale value for marking the position of any one second supporting plate relative to the fixing plate.
CN201610113510.9A 2016-02-29 2016-02-29 A kind of video camera array Active CN107135336B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610113510.9A CN107135336B (en) 2016-02-29 2016-02-29 A kind of video camera array
PCT/CN2016/095899 WO2017148108A1 (en) 2016-02-29 2016-08-18 Camera array

Publications (2)

Publication Number Publication Date
CN107135336A true CN107135336A (en) 2017-09-05
CN107135336B CN107135336B (en) 2019-11-29

Family

ID=59721604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610113510.9A Active CN107135336B (en) 2016-02-29 2016-02-29 A kind of video camera array

Country Status (2)

Country Link
CN (1) CN107135336B (en)
WO (1) WO2017148108A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI767523B (en) * 2021-01-20 2022-06-11 佳世達科技股份有限公司 Electronic device
CN115988338B (en) * 2022-07-29 2024-09-03 南京理工大学 Far-field signal inversion reconstruction method based on compound-eye camera array
CN117930568A (en) * 2022-10-14 2024-04-26 昆山扬皓光电有限公司 Projection device
CN115396586B (en) * 2022-10-26 2023-04-07 浙江华智新航科技有限公司 Camera shell

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6701081B1 (en) * 2000-06-06 2004-03-02 Air Controls, Inc. Dual camera mount for stereo imaging
CN105262946A (en) * 2015-09-23 2016-01-20 上海大学 Three-dimensional binocular camera platform experimental device

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060012673A1 (en) * 2004-07-16 2006-01-19 Vision Robotics Corporation Angled axis machine vision system and method
CN101872111A (en) * 2009-04-21 2010-10-27 鸿富锦精密工业(深圳)有限公司 Image capture device
CN102103320A (en) * 2009-12-22 2011-06-22 鸿富锦精密工业(深圳)有限公司 Three-dimensional imaging camera module
CN101865668A (en) * 2010-04-29 2010-10-20 北京航空航天大学 Three-dimensional ice form detection instrument
CN202252685U (en) * 2011-10-26 2012-05-30 青岛海信网络科技股份有限公司 Turning bracket and semi-spherical camera
CN202563240U (en) * 2012-05-18 2012-11-28 深圳市维尚视界立体显示技术有限公司 Dual-camera rotary shooting device for three-dimensional imaging
KR20140088801A (en) * 2013-01-03 2014-07-11 구해원 Position control apparatus of three-dimensional imaging camera
CN105334682A (en) * 2015-11-30 2016-02-17 常州信息职业技术学院 Dual-mode fine-tuning 3D image shooting support

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108471495A (en) * 2018-02-02 2018-08-31 上海大学 The object multi-angle image acquisition system and method for machine learning and deep learning training
CN108471495B (en) * 2018-02-02 2020-09-08 上海大学 Object multi-angle image acquisition system and method for machine learning and deep learning training

Also Published As

Publication number Publication date
WO2017148108A1 (en) 2017-09-08
CN107135336B (en) 2019-11-29

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant