CN116761065A - Image capturing apparatus, control method, and medium - Google Patents

Image capturing apparatus, control method, and medium

Publication number: CN116761065A
Authority: CN (China)
Prior art keywords: unit, image capturing, image, driving, distortion
Legal status: Pending
Application number: CN202310216295.5A
Other languages: Chinese (zh)
Inventors: Natsuko Sato (佐藤夏子), Ryo Kawasaki (川崎谅)
Current Assignee: Canon Inc
Original Assignee: Canon Inc
Priority claimed from JP2022184973A (published as JP2023134346A)
Application filed by Canon Inc
Publication of CN116761065A


Abstract

An image capturing apparatus, a control method, and a medium are disclosed. An image capturing apparatus includes: an image capturing unit including an imaging optical system and an image sensor; a rotation driving unit configured to drive the image capturing unit in a panning direction and/or a tilting direction; an offset driving unit configured to drive at least one of the imaging optical system and the image sensor in a plane parallel to the imaging plane; and a synchronization control unit configured to synchronize rotational driving of the image capturing unit with offset driving of at least one of the imaging optical system and the image sensor so as not to change an imaging range of the image capturing unit during correction of distortion of a subject in a captured image captured by the image capturing unit.

Description

Image capturing apparatus, control method, and medium
Technical Field
The present invention relates generally to an image capturing apparatus, a control method, and a medium, and in particular, to image capturing control for reducing distortion of an object.
Background
When the subject does not directly face the camera, for example when a high-rise building is photographed from the ground, the subject in the captured image is distorted. As a technique for solving this, a technique is known in which the camera and the subject are made to directly face each other by translationally moving (shifting) the image sensor (or the optical system) within a plane parallel to the imaging plane, after which the subject is photographed (hereinafter referred to as "shift control").
Japanese Patent Laid-Open No. 2003-185902 (patent document 1) discloses a technique of detecting the inclination of a subject with respect to the camera using distance measurement data obtained through image capturing and performing correction so that the camera directly faces the subject. Patent document 1 also discloses performing this correction by rotation control of a lens unit and offset control of the image sensor. Japanese Patent Laid-Open No. 2011-059283 (patent document 2) discloses a technique of detecting the distortion (angle) of an edge of a subject in the vertical direction and performing offset control of the optical system to compensate for the detected distortion.
In patent document 1, however, the inclination of the subject is detected based on the differences between a plurality of distance measurement data obtained at different image height positions on the image sensor, so when the distance measurement accuracy is low, the distortion of the subject cannot be corrected appropriately. In patent document 2, offset control of the optical system is performed to compensate for the distortion of the edge, but the fluctuation of the imaging range caused by the offset control is not considered; during the offset control, the position of the subject in the captured image may change, degrading image quality.
Disclosure of Invention
According to one aspect of the present invention, an image capturing apparatus is provided. The image capturing apparatus includes:
an image capturing unit including an imaging optical system and an image sensor;
a rotation driving unit configured to drive the image capturing unit in a panning direction and/or a tilting direction;
an offset driving unit configured to drive at least one of the imaging optical system and the image sensor in a plane parallel to an imaging plane; and
a synchronization control unit configured to synchronize a rotational drive of the image capturing unit with an offset drive of at least one of the imaging optical system and the image sensor so as not to change an imaging range of the image capturing unit during correction of distortion of a subject in a captured image captured by the image capturing unit.
According to another aspect of the present invention, a method of controlling an image capturing apparatus is provided. The image capturing apparatus includes:
an image capturing unit including an imaging optical system and an image sensor;
a rotation driving unit configured to drive the image capturing unit in a panning direction and/or a tilting direction; and
an offset driving unit configured to drive at least one of the imaging optical system and the image sensor in a plane parallel to an imaging plane, and
the control method includes synchronizing a rotational drive of the image capturing unit with an offset drive of at least one of the imaging optical system and the image sensor so as not to change an imaging range of the image capturing unit during correction of distortion of a subject in a captured image captured by the image capturing unit.
According to still another aspect of the present invention, there is provided a medium storing a program for causing a computer to execute a method of controlling an image capturing apparatus. The image capturing apparatus includes:
an image capturing unit including an imaging optical system and an image sensor;
a rotation driving unit configured to drive the image capturing unit in a panning direction and/or a tilting direction; and
an offset driving unit configured to drive at least one of the imaging optical system and the image sensor in a plane parallel to an imaging plane, and
the control method includes synchronizing a rotational drive of the image capturing unit with an offset drive of at least one of the imaging optical system and the image sensor so as not to change an imaging range of the image capturing unit during correction of distortion of a subject in a captured image captured by the image capturing unit.
The present invention performs more appropriate distortion correction while improving the quality of a captured image.
Other features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the accompanying drawings).
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a block diagram for explaining an arrangement of an image capturing system;
fig. 2 is a view for explaining object distortion;
fig. 3 is a view for explaining offset control;
fig. 4 is a view for explaining control for reducing the distortion of the subject while maintaining the imaging range;
fig. 5 is a view for explaining calculation of driving amounts of rotation control and offset control;
fig. 6 is a view showing time variation of a captured image in distortion correction;
fig. 7 is a view illustrating a distortion correction user interface (UI);
fig. 8 is a graph for explaining a control method in the first synchronization control;
fig. 9 is a flowchart of a process in the first synchronization control;
fig. 10 is a graph for explaining a control method in the second synchronous control;
fig. 11A and 11B are flowcharts of processing in the second synchronization control;
fig. 12 is a flowchart of a process performed by the image capturing apparatus according to the modification;
fig. 13 is a view for explaining distortion shape determination based on depth information;
fig. 14 is a view for explaining distortion shape determination based on edge detection;
fig. 15 is a view for explaining distortion shape determination when there are a plurality of subjects;
fig. 16 is a view for explaining correction target determination when there are a plurality of subjects;
fig. 17 is a table showing a relationship between rotation control and offset control for each distortion shape;
fig. 18 is a view for explaining calculation of an amount of distortion based on depth information;
fig. 19 is a view for explaining calculation of an amount of distortion based on edge detection; and
fig. 20 is a block diagram for explaining a hardware arrangement of the image capturing apparatus.
Detailed Description
Hereinafter, embodiments will be described in detail with reference to the accompanying drawings. Note that the following embodiments are not intended to limit the scope of the claimed invention. A plurality of features are described in the embodiments, but not all of these features are necessarily essential to the invention, and a plurality of such features may be combined as appropriate. In the drawings, the same reference numerals denote the same or similar configurations, and redundant description thereof is omitted.
(first embodiment)
An image capturing system serving as an image capturing apparatus according to the first embodiment of the present invention will now be described as an example.
< System layout >
Fig. 1 is a block diagram for explaining an arrangement of an image capturing system. The image capturing system includes an image capturing device and a monitoring device 111. The image capturing apparatus includes a camera unit 100 serving as an image capturing unit, an Automatic Gain Controller (AGC) 107, an analog-to-digital (a/D) converter 108, a camera signal processor 109, and a communication unit 110. The image capturing apparatus further includes a driving amount determining unit 112, an offset control unit 113, a rotation control unit 114, an offset driving unit 115, and a rotation driving unit 116.
The camera unit 100 includes an imaging optical system, a band-pass filter (BPF) 104, a color filter 105, and an image sensor 106. The imaging optical system includes a zoom lens 101 that moves along an optical axis to change a focal length, a focus lens 102 that moves along the optical axis to adjust a focus, and a diaphragm unit 103 that adjusts an amount of light. Note that the imaging optical system may be integrated with or detachable from the image capturing apparatus.
The light having passed through the imaging optical system forms a subject image as an optical image on the image sensor 106 via the BPF 104 and the color filter 105. The BPF 104 may be removed from the optical path of the imaging optical system. The subject image is photoelectrically converted by the image sensor 106.
The analog electric signal (image capturing signal) output from the image sensor 106 is gain-adjusted by the AGC 107 and converted into a digital signal by the a/D converter 108. The digital signal is input to the camera signal processor 109. The camera signal processor 109 performs various image processing on the digital image capturing signal, thereby generating a video signal.
The video signal is output to a monitoring device 111 connected to the image capturing device by wired or wireless communication via a communication unit 110. The monitoring device 111 receives an instruction from a user, and outputs a control signal such as a command to the offset control unit 113 and the rotation control unit 114 via the communication unit 110.
The drive amount determination unit 112 determines one driving amount from the other: given either the rotation driving amount used in rotation control of the camera unit 100 or the offset driving amount used in offset control of the image sensor 106, it determines the remaining one. The rotation control is posture control that rotationally moves the camera unit in the pan and/or tilt direction. The driving amount determined here is the amount the other control (rotation control or offset control) needs in order to perform a correction operation that compensates for the change in the imaging range produced by the first control. That is, the other driving amount is determined so as to suppress a change in the position of the subject in the captured image. Details of how the driving amount determination unit 112 determines the driving amounts will be described later with reference to figs. 2 to 5.
The offset control unit 113 instructs the image sensor 106 to perform offset driving based on the driving instruction (offset driving amount) determined by the driving amount determination unit 112 or accepted via the communication unit 110. The rotation control unit 114 instructs the camera unit 100 to perform rotation driving based on the rotation driving amount determined by the driving amount determination unit 112 or specified via the communication unit 110.
The offset driving unit 115 drives the image sensor 106 based on the offset driving specified from the offset control unit 113. The offset driving unit 115 is, for example, a motor. The rotation driving unit 116 drives the camera unit 100 based on the rotation driving specified from the rotation control unit 114. The rotation driving unit 116 is, for example, a motor. The offset control unit 113 and the offset driving unit 115 perform specification and driving of offset driving on the image sensor 106, but may perform specification and driving of offset driving on the optical system.
< Offset control for compensation (correction) of object distortion >
Fig. 2 is a view for explaining object distortion. Fig. 2 is a schematic diagram of a photographer looking up at a building from the ground and photographing it, together with an illustration of the image captured at this time. When the camera unit does not directly face the subject 201 (i.e., the subject 201, a wall of a building, is not perpendicular to the optical axis 204), the building is distorted like a trapezoid (object distortion), as illustrated in the captured image 200.
Fig. 3 is a view for explaining offset control. Fig. 3 is a schematic diagram when the image capturing apparatus directly faces the same subject 201 (building) as that in fig. 2 by offset control and photographs it, and illustrates an image captured at this time. More specifically, the imaging plane 203 (with respect to the optical system 202) is shifted downward such that the optical axis 204 becomes perpendicular to the subject 201 (wall of a building), and the subject 201 falls within an imaging range. By this offset control, as represented by the captured image 300, the subject distortion is compensated (corrected).
In order to achieve image capturing that compensates (corrects) the distortion of the subject like the captured image 300, the camera unit 100 and the image sensor 106 need to be appropriately controlled. This requires information about the elevation angle 205 and the object distance 206.
Fig. 4 is a view for explaining control for reducing the distortion of the subject while maintaining the imaging range. More specifically, for the shooting state in fig. 2, fig. 4 shows the following states: while rotation control (rotation amount α) of the camera unit is performed so that the optical axis 204 is close to being perpendicular to the object 201 (wall of a building), offset control (offset amount x) for maintaining the imaging range is performed.
That is, when only the rotation (tilt) control of the camera unit 100 by the small rotation amount α is performed, the imaging range changes from the original imaging range 400 to the imaging range 401 that results from rotating the camera unit by α. For example, in a captured image within the imaging range 401, the ground occupies the bottom third of the image while the top of the building is lost (not captured). To suppress such a change in the imaging range (i.e., to maintain the imaging range 400), the image sensor 106 is shifted downward within the imaging plane by the offset x.
Fig. 5 is a view for explaining the calculation of the driving amounts of the rotation control and the offset control. More specifically, a method of calculating the offset x of the image sensor 106 from the rotation amount α of the camera unit 100 to maintain the imaging range as shown in fig. 4 will be described.
The triangle OAB shown in fig. 5 is an enlarged view of the triangle OAB shown in fig. 4, where α is the rotation amount of the camera unit 100, α₀ is half the vertical viewing angle, x is the offset of the image sensor 106, x₀ is half the vertical sensor size, and l is the imaging position. At this time, the offset x of the image sensor when the camera unit 100 rotates by α is given by formula (1):

x = l·tan(α + α₀) − x₀ ... (1)

In this case, formula (2) holds:

tan α₀ = x₀ / l ... (2)

From formulas (1) and (2), when the camera unit 100 rotates by α, the offset x of the image sensor that suppresses the change in the imaging range can be given, using the vertical sensor size and the imaging position, by formula (3):

x = l·tan(α + arctan(x₀ / l)) − x₀ ... (3)
that is, by performing rotation control and offset control using the relationship of expression (3), object distortion can be reduced. In addition, a change in the imaging range during correction control (rotation control and offset control) can be suppressed (i.e., the imaging range can be maintained).
Fig. 6 is a view showing time variation of a captured image in distortion correction. Fig. 6 shows a change in captured image displayed on the display unit of the monitor device 111 from the start of correction control (rotation control and offset control) to completion, for example.
At the start of image capturing (before the start of correction control), distortion in which the upper portion of the building serving as the subject appears smaller than the lower portion (hereinafter referred to as "upper distortion") has occurred. In this state, offset control of the image sensor 106 synchronized with rotation control of the camera unit 100 is started using the relationship of expression (3). Then, while the relative position of the subject in the captured image is maintained, the distortion generated in the subject is gradually corrected over time. Correction of upper distortion will hereinafter be referred to as "upper correction". Similarly, corrections of lower distortion, right distortion, and left distortion will be referred to as lower correction, right correction, and left correction, respectively.
Fig. 7 is a view illustrating a distortion correction User Interface (UI). For example, in the monitoring apparatus 111, the UI may be provided as a GUI or physical buttons on the display unit. The user determines distortion correction (upper correction, lower correction, right correction, or left correction) to be performed according to the distortion state of the subject in the captured image displayed on the display unit of the monitoring apparatus 111, and presses a corresponding button on the UI. Then, user instructions (a drive instruction, a drive start instruction, and a drive stop instruction) including information of the corresponding offset direction are transmitted.
For the captured image shown in fig. 6, "upper correction" is performed. Based on equation (3), control of the camera unit 100 and the image sensor 106 starts when the user presses the button and stops when the user releases it.
By pressing the buttons while checking the captured image as it changes over time, as shown in fig. 6, the user can correct the distortion of the subject with high accuracy. For example, if the user presses the "upper correction" button too long and the correction becomes excessive, he/she can press the "lower correction" button.
< Effect >
The calculation of the offset given by equation (3) does not require information such as the elevation angle of the image capturing apparatus and the subject distance. More specifically, pieces of information required to calculate the offset amount based on the equation (3) are the rotation amount of the camera unit 100, the vertical sensor size, and the imaging position. These pieces of information are information known to each image capturing apparatus or information obtained from control information. The above-described offset control is useful because it is not affected by the detection accuracy of information such as the elevation angle and the object distance.
As described above with reference to fig. 6, by synchronously performing the rotation control of the camera unit 100 and the offset control of the image sensor 106, variation in the imaging range during the correction control can be suppressed. That is, during the correction control, the position of the object with respect to the imaging range does not change, and only the distortion of the object is corrected, so the quality is high.
< Operation of the image capturing apparatus >
The synchronization control between the rotation control of the camera unit 100 and the offset control of the image sensor 106 will be further described. In particular, two different synchronous control methods will be described.
< First synchronization control >
Fig. 8 is a graph for explaining a control method in the first synchronization control. In the first synchronization control, rotation control instructions of one step (a predetermined amount) each for the camera unit 100 are sequentially accepted to sequentially determine the offset amount of the image sensor 106 and perform the synchronization control. That is, the camera unit 100 and the image sensor 106 are synchronously driven while finely specifying their target control positions.
Fig. 8 shows a relationship between the rotation amount of the camera unit 100 and the offset amount of the image sensor 106 under a given image capturing condition. In this case, the rotation amount of one step of the camera unit 100 is α', the offset x of the image sensor 106 according to the rotation amount from the reference position is calculated, and control is performed based on the calculation result.
Fig. 9 is a flowchart of processing in the first synchronization control. The processing corresponding to the flowchart may be implemented by, for example, executing a corresponding program (stored in the auxiliary memory device 213 or the like) by the CPU 211 operating in the imaging device. The respective units of the image capturing apparatus perform the following operations.
In step S900, the image capturing apparatus obtains x₀ (half the vertical sensor size), which is necessary to calculate the offset amount of the image sensor 106 based on formula (3).
In step S901, the image capturing apparatus determines whether a distortion correction button (fig. 7) is pressed by a user operation. For example, the determination is made based on whether or not the control signal transmitted from the monitoring device 111 and input via the communication unit 110 indicates the pressing of the distortion correction button. If the distortion correction button is pressed, the process advances to step S902; if not, the determination continues.
Among the four buttons shown in fig. 7, the button corresponding to the shape of the distortion to be corrected is pressed by a user operation. The driving directions of the camera unit 100 and the image sensor 106 (the offset direction) are uniquely determined by the type of correction selected. For example, when upper correction is selected, the rotation control is driven in the downward tilt direction and the offset control is driven in the downward direction. When lower correction is selected, the rotation control and the offset control are driven in the directions opposite to those for upper correction. When left correction is selected, the rotation control is driven in the rightward pan direction and the offset control is driven in the rightward direction. When right correction is selected, the rotation control and the offset control are driven in the directions opposite to those for left correction, as summarized in the sketch below.
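This mapping between the selected correction type and the drive directions can be written as a small lookup table. A sketch only; the string labels are illustrative, not identifiers from the patent.

```python
# Correction type -> (rotation direction, sensor offset direction),
# following the relationships stated above (also summarized in fig. 17).
DRIVE_DIRECTIONS = {
    "upper": ("tilt_down", "shift_down"),
    "lower": ("tilt_up",   "shift_up"),
    "left":  ("pan_right", "shift_right"),
    "right": ("pan_left",  "shift_left"),
}

rotation_dir, offset_dir = DRIVE_DIRECTIONS["upper"]
```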
In step S902, the image capturing apparatus obtains an imaging position l. The imaging position l is an imaging distance between O and B shown in fig. 5, and represents an imaging distance between the optical system 202 and the imaging plane 203. The imaging position l may be obtained from a focus lens position used in the control of the focus lens. It is sufficient to obtain the imaging position l only once between the time when the user presses the button and the time when the user stops pressing the distortion correction button.
In step S903, the driving amount determination unit 112 sets the one-step rotation amount α' of the camera unit 100. The smaller the one-step rotation amount α', the better the imaging range is maintained.
In step S904, the driving amount determination unit 112 sets the target rotation position α of the camera unit 100 in the rotation control unit 114. The target rotation position α is set based on the one-step rotation amount α' set in step S903: it is obtained by adding α' at each step, with the rotation position of the camera unit 100 at the start of distortion correction as the reference.
In step S905, the driving amount determination unit 112 determines the target offset amount x of the image sensor 106 based on the target rotation position α of the camera unit 100 set in step S904. This determination is made based on formula (3), using the values obtained or set in steps S900, S902, and S904 as x₀, l, and α, respectively.
In step S906, the driving amount determination unit 112 sets the target offset amount x of the image sensor 106 calculated in step S905 in the offset control unit 113.
In step S907, the rotation control unit 114 and the offset control unit 113 control the rotation driving unit 116 and the offset driving unit 115, respectively, and drive the camera unit 100 and the image sensor 106. That is, the camera unit 100 and the image sensor 106 are driven based on the target rotational position α of the camera unit 100 set in step S904 and the target offset x of the image sensor 106 set in step S906.
In step S908, the rotation control unit 114 and the offset control unit 113 respectively determine whether the control position of the rotation drive of the camera unit 100 or of the offset drive of the image sensor 106 has reached its mechanical drive end. If the control position of either drive has reached its drive end, the distortion correction control ends. If neither control position has reached its drive end, the process advances to step S909.
In step S909, the image capturing apparatus determines whether the pressing of the distortion correction button has ended. Similar to step S901, the determination is made based on whether the control signal transmitted from the monitoring device 111 and input via the communication unit 110 indicates the pressing of the distortion correction button. If the pressing has ended, the distortion correction control ends; if not, the process returns to step S904 to continue the distortion correction control.
If the distortion correction button is kept pressed, the processing in steps S904 to S908 is repeated, and distortion correction can be performed while maintaining the relative position of the subject with respect to the imaging range. Here, the offset driving amount of the image sensor 106 is calculated from the one-step rotation control of the camera unit 100. However, it is also possible to set a one-step offset driving amount of the image sensor 106 and calculate the rotation driving amount from that offset driving amount. The offset control may also target the optical system 202 instead of the image sensor 106.
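The repeated loop of steps S904 to S909 might look like the following sketch. The driver objects `camera`, `sensor`, and `ui` are hypothetical stand-ins for the rotation control unit, the offset control unit, and the button state; they are not interfaces defined in the patent.

```python
import math

def first_sync_control(camera, sensor, ui, step: float, x0: float, l: float) -> None:
    """Step-wise synchronization: advance the target rotation by one step
    (S904), derive the matching sensor offset from formula (3) (S905-S906),
    drive both axes (S907), and stop at a mechanical drive end (S908) or on
    button release (S909)."""
    alpha0 = math.atan(x0 / l)
    alpha = 0.0  # rotation relative to the position at the start of correction
    while True:
        alpha += step                                  # S904: next target rotation
        x = l * math.tan(alpha + alpha0) - x0          # S905: formula (3)
        camera.rotate_to(alpha)                        # S907: synchronized drive
        sensor.shift_to(x)
        if camera.at_drive_end() or sensor.at_drive_end():  # S908
            break
        if not ui.button_pressed():                    # S909
            break
```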
As described above, in the first synchronization control, the camera unit 100 and the image sensor 106 can be driven in synchronization while finely specifying their target control positions.
< Second synchronization control >
Fig. 10 is a graph for explaining a control method in the second synchronization control. In the second synchronization control, the camera unit 100 and the image sensor 106 are synchronously controlled by designating the driving speeds of the rotation control of the camera unit 100 and the offset control of the image sensor 106 at the start of the control.
Fig. 11A and 11B are flowcharts of processing in the second synchronization control. The processing corresponding to the flowchart may be implemented by, for example, executing a corresponding program (stored in the auxiliary memory device 213 or the like) by the CPU 211 operating in the imaging device. The respective units of the image capturing apparatus perform the following operations. Note that the processing in steps S900 to S902, S907, and S909 is similar to that in the first synchronization control, and the description thereof will not be repeated.
In step S1100, the image capturing apparatus obtains the offset movable amount x_lim of the image sensor 106. The offset movable amount x_lim is the distance from the current position of the image sensor 106 to its mechanical drive end.
In step S1101, the image capturing apparatus obtains the rotation movable amount α_lim of the camera unit 100. The rotation movable amount α_lim is the angle from the current position of the camera unit 100 to its mechanical drive end.
In step S1102, the drive amount determination unit 112 calculates the maximum offset driving amount x_max of the image sensor 106. The maximum offset driving amount x_max is the offset driving amount of the image sensor 106 required to maintain the relative position of the subject with respect to the imaging range when the camera unit 100 is rotationally driven by the rotation movable amount α_lim obtained in step S1101. The maximum offset driving amount x_max is calculated based on formula (3).
In step S1103, the driving amount determination unit 112 compares the offset movable amount x_lim obtained in step S1100 with the maximum offset driving amount x_max calculated in step S1102. If x_lim > x_max, the process advances to step S1104. When x_lim > x_max, even if the camera unit 100 is driven up to its mechanical drive end and the offset control of the image sensor 106 required to maintain the relative position of the subject is performed, the image sensor 106 does not hit its mechanical drive end. If x_lim ≤ x_max, the process advances to step S1106. When x_lim ≤ x_max and the camera unit 100 is driven up to its mechanical drive end, the image sensor 106 hits its mechanical drive end and the relative position of the subject cannot be maintained.
In step S1104, the drive amount determination unit 112 sets the target offset position of the image sensor 106 to the position obtained by adding the maximum offset driving amount x_max calculated in step S1102 to the current position x_now of the image sensor 106.
In step S1105, the drive amount determination unit 112 sets the target rotation position of the camera unit 100 to the position obtained by adding the rotation movable amount α_lim obtained in step S1101 to the current position α_now of the camera unit 100.
In step S1106, the drive amount determination unit 112 calculates the maximum rotation driving amount α_max of the camera unit 100. The maximum rotation driving amount α_max is the rotation driving amount of the camera unit 100 required to maintain the relative position of the subject with respect to the imaging range when the image sensor 106 is offset-driven by the offset movable amount x_lim obtained in step S1100. The maximum rotation driving amount α_max is calculated based on formula (3).
In step S1107, the drive amount determination unit 112 sets the target offset position of the image sensor 106 to the position obtained by adding the offset movable amount x_lim obtained in step S1100 to the current position x_now of the image sensor 106.
In step S1108, the drive amount determination unit 112 sets the target rotation position of the camera unit 100 to the position obtained by adding the maximum rotation driving amount α_max calculated in step S1106 to the current position α_now of the camera unit 100.
In step S1109, the drive amount determination unit 112 sets the rotation speed v_r of the camera unit 100 in the rotation control unit 114. The lower the rotation speed v_r, the more easily the user can make fine adjustments when specifying the start and end of distortion correction.
In step S1110, the driving amount determination unit 112 sets the offset speed v_s of the image sensor 106 in the offset control unit 113. The offset speed v_s is set so that the camera unit 100 and the image sensor 106 reach their target positions at the same time (i.e., so that the driving times become equal).
In step S1111, the rotation control unit 114 and the offset control unit 113 determine whether the camera unit 100 and the image sensor 106 have reached the target positions, respectively. If both the camera unit 100 and the image sensor 106 have reached the target positions, the distortion correction control ends. If at least either one of the camera unit 100 and the image sensor 106 has not reached the target position, the process advances to step S909.
In step S1112, the image capturing apparatus executes distortion correction stop processing. In the stop processing, the image sensor 106 is driven to the offset position that maintains the imaging range for the orientation of the camera unit 100 at the timing when the distortion correction end instruction was accepted, and then the control ends.
When the linearity between the rotation amount of the camera unit 100 and the offset amount of the image sensor 106 is low, the relative position of the subject with respect to the imaging range may not be fully maintained at points before the camera unit 100 and the image sensor 106 reach their target positions. Therefore, when distortion correction is ended by a user operation before the target control position is reached, the imaging range is adjusted by the stop processing in step S1112.
As described above, in the second synchronization control, by designating the driving speeds of the control of the camera unit 100 and the image sensor 106 at the start of the control, the camera unit 100 and the image sensor 106 can be driven in synchronization.
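A sketch of the planning performed in steps S1102 to S1110 follows, under the simplifying assumption that each drive moves at constant speed; all names are illustrative, and `x_for`/`alpha_for` are assumed helpers that evaluate formula (3) and its inverse.

```python
def plan_second_sync(x_lim: float, alpha_lim: float, x_now: float,
                     alpha_now: float, v_r: float, x_for, alpha_for):
    """Pick the axis that hits its mechanical drive end first (S1103), set
    the target positions (S1104-S1108), and choose the sensor offset speed
    v_s so that both drives finish at the same time (S1110)."""
    x_max = x_for(alpha_lim)                 # S1102: offset needed for full rotation
    if x_lim > x_max:                        # S1103: camera side limits first
        target_x = x_now + x_max             # S1104
        target_alpha = alpha_now + alpha_lim # S1105
    else:                                    # sensor side limits first
        alpha_max = alpha_for(x_lim)         # S1106
        target_x = x_now + x_lim             # S1107
        target_alpha = alpha_now + alpha_max # S1108
    drive_time = (target_alpha - alpha_now) / v_r  # rotation duration at speed v_r
    v_s = (target_x - x_now) / drive_time          # S1110: equal driving times
    return target_alpha, target_x, v_s
```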
In both the first synchronization control and the second synchronization control described above, the camera unit 100 and the image sensor 106 can be synchronized. The first synchronization control is superior to the second in that it maintains the relative position of the subject better during the distortion correction control. In contrast, the second synchronization control is superior to the first under shooting conditions that require high-speed control, because it involves fewer calculations and fewer drive operations (one each).
< Hardware arrangement of the image capturing apparatus >
Next, an example of a hardware arrangement of the image capturing apparatus will be described with reference to the block diagram of fig. 20. Note that the arrangement shown in fig. 20 is merely an example of an arrangement suitable for an image capturing apparatus, and the arrangement of the image capturing apparatus may be modified/changed as appropriate.
The CPU 211 performs processing using computer programs and data stored in the main memory 212. The CPU 211 controls the operation of the entire image capturing apparatus, and performs or controls each of the processes performed by the image capturing apparatus in the above description. For example, the CPU 211 performs processing using computer programs and data stored in the main memory 212, thereby realizing the functions of the respective functional units of the camera signal processor 109, the drive amount determination unit 112, the offset control unit 113, and the rotation control unit 114 shown in fig. 1.
Main memory 212 is a memory such as Random Access Memory (RAM). The main memory 212 has an area for storing computer programs and data loaded from the auxiliary memory device 213, images captured by the camera unit 100, and various data received from the monitoring apparatus 111 via the communication unit 110. In addition, the main memory 212 has a work area used when the CPU 211 executes various processes. In this way, the main memory 212 can appropriately provide various areas.
The auxiliary memory device 213 is a mass memory such as a Hard Disk Drive (HDD), a Read Only Memory (ROM), or a Solid State Drive (SSD). The auxiliary memory device 213 holds an Operating System (OS) and computer programs and data for causing the CPU 211 to execute or control each process performed by the image capturing apparatus in the above description. The auxiliary memory device 213 also holds data received from the monitoring apparatus 111 via the communication unit 110. The computer programs and data stored in the auxiliary memory device 213 are appropriately loaded into the main memory 212 under the control of the CPU 211 and processed by the CPU 211.
The driving unit 214 drives the camera unit 100 based on the image capturing parameters received from the monitoring device 111. For example, the driving unit 214 corresponds to the offset driving unit 115 that performs offset control of the image sensor 106 and the rotation driving unit 116 that performs rotation control of the camera unit 100. Note that the control target of the driving unit 214 is not limited to a specific target, and may be other targets (for example, the position of the camera unit 100).
The camera unit 100 includes an image sensor and an optical system, and forms an image of a subject on the image sensor using an intersection point between an optical axis of the optical system and the image sensor as a center of image capturing. The image sensor is, for example, a Complementary Metal Oxide Semiconductor (CMOS) or a Charge Coupled Device (CCD). The communication unit 110 performs data communication with the monitoring apparatus 111.
As described above, according to the first embodiment, when the offset control of the image sensor is performed, the rotation control (tilting or panning) of the camera unit is performed in synchronization with the offset control of the image sensor. More specifically, the rotation control is performed in synchronization to compensate for the variation in the imaging range caused by the offset control. With this synchronization control, a variation in imaging range during correction control can be suppressed, and a high-quality image can be provided.
(modification)
In this modification, an example will be described in which the image capturing apparatus determines the shape of the distortion generated in the subject based on the captured image and performs rotation control of the camera unit 100 and offset control of the image sensor 106 to correct that distortion shape.
Fig. 12 is a flowchart of a process performed by the image capturing apparatus according to the modification. The processing corresponding to the flowchart may be implemented by, for example, executing a corresponding program (stored in the auxiliary memory device 213 or the like) by the CPU 211 operating in the imaging device.
In step S1200, the image capturing apparatus determines whether a distortion correction execution instruction is received from the user. If the image capturing apparatus determines that the distortion correction instruction from the user has been received, the process advances to step S1201. If the image capturing apparatus determines that the instruction is not received, it continues to wait for the distortion correction instruction.
In step S1201, the image capturing apparatus determines whether a subject suffering from distortion (hereinafter referred to as a distorted subject) is present in the captured image. If the image capturing apparatus determines that a distorted subject exists, the process advances to step S1202. If it determines that no distorted subject exists, the process advances to step S1203. As methods for determining the presence/absence of a distorted subject, a method based on depth information and a method based on edge detection will be described below. However, the presence/absence of a distorted subject may also be determined by other methods.
Fig. 13 is a view for explaining distortion shape determination based on depth information. In the depth-information-based method (using, e.g., a depth image in which each pixel holds the depth value at its position), when the subject distance changes monotonically across a given subject region, it is determined that the subject does not directly face the image capturing apparatus and is a distorted subject.
Fig. 14 is a view for explaining distortion shape determination based on edge detection. In the edge-detection-based method, when the interval between a pair of edges detected in a given subject region is not constant in the vertical (or horizontal) direction, the subject is determined to be a distorted subject.
In step S1202, the image capturing apparatus sets the subject determined as the distortion subject in step S1201 as a distortion correction target. In some scenes, there may be a plurality of distorted subjects in the captured image. In this case, the correction target object is set by accepting a selection of the correction target object from the user.
Fig. 15 is a view for explaining distortion shape determination when there are a plurality of subjects. Fig. 16 is a view for explaining correction target determination when there are a plurality of subjects. More specifically, fig. 16 shows a screen on which the selection of the correction target subject is accepted from the user: polygonal regions (e.g., 1600 and 1601) delimiting the respective subjects determined to be distorted subjects in step S1201 are superimposed on the captured image, and the selection of one polygonal region is accepted from the user.
In step S1203, the image capturing apparatus notifies the user that there is no distorted subject (correction target subject) in the captured image.
In step S1204, the image capturing apparatus determines the type of distortion shape (upper distortion, lower distortion, right distortion, or left distortion) of the correction target object. For the distortion shape determination, determination based on depth information and determination based on edge detection are possible.
For example, in the depth information corresponding to the captured image 1300 in fig. 13, the depth (subject distance) of the building region increases from near to far in the direction from the lower end to the upper end of the image. Because of perspective, a part of the subject at a short distance appears large and a part at a long distance appears small. It can therefore be determined that the distortion shape of the building in the captured image 1300 is "upper distortion" (the lower portion in the image is large and the upper portion is small).
Similarly, in the depth information corresponding to the captured image 1301, the depth (subject distance) of the building region increases from near to far in the direction from the left end to the right end of the image. It can be determined that the distortion shape of the building in the captured image 1301 is "right distortion" (the left portion in the image is large and the right portion is small). In this way, using the depth information, the distortion shape can be determined based on the direction of the depth change of the subject (building) within the captured image.
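A sketch of this depth-based classification, assuming the depth information is available as a 2-D array and the subject region is given as a hypothetical bounding box:

```python
import numpy as np

def classify_distortion_from_depth(depth: np.ndarray, box) -> str:
    """box = (top, bottom, left, right) row/column indices of the subject.
    Depth increasing from the lower end toward the upper end of the region
    indicates upper distortion; a left-to-right increase indicates right
    distortion, and so on."""
    top, bottom, left, right = box
    roi = depth[top:bottom, left:right]
    rows = roi.mean(axis=1)      # mean subject distance per row
    cols = roi.mean(axis=0)      # mean subject distance per column
    dv = rows[0] - rows[-1]      # top-row depth minus bottom-row depth
    dh = cols[-1] - cols[0]      # right-column depth minus left-column depth
    if abs(dv) >= abs(dh):       # the dominant change decides the type
        return "upper" if dv > 0 else "lower"
    return "right" if dh > 0 else "left"
```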
In the captured image 1400 of fig. 14, the interval between the pair of edges detected for the subject decreases in the direction from the lower end to the upper end of the image. In this case, it can be determined that the distortion shape of the subject is "upper distortion". In the captured image 1401, the interval between the pair of edges detected for the subject decreases in the direction from the left end to the right end of the image. In this case, it can be determined that the distortion shape of the subject is "right distortion". In this way, the distortion shape of the subject can be determined based on the direction in which the edge interval between the detected pair of edges changes.
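The edge-based variant can be sketched the same way for the vertical case, given the interval between the detected pair of edges sampled at several rows (a hypothetical input format, not one specified by the patent):

```python
def classify_vertical_distortion(intervals_top_to_bottom: list[float]) -> str | None:
    """intervals_top_to_bottom[i] is the horizontal interval between the
    pair of edges at the i-th sample row, ordered from image top to bottom.
    A pair that narrows toward the top indicates upper distortion."""
    top_w, bottom_w = intervals_top_to_bottom[0], intervals_top_to_bottom[-1]
    if top_w < bottom_w:
        return "upper"
    if top_w > bottom_w:
        return "lower"
    return None  # constant interval: no vertical distortion detected
```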
In step S1205, based on the distortion shapes determined in step S1204, the image capturing apparatus sets the driving directions of the rotation control of the camera unit 100 and the offset control of the image sensor 106 suitable for each distortion shape.
Fig. 17 is a table showing a relationship between rotation control and offset control for each distortion shape. As shown in fig. 17, the driving directions of the camera unit 100 and the image sensor 106 for correcting each distortion shape may be uniquely determined for each distortion shape.
In step S1206, the image capturing apparatus performs distortion correction driving. It is assumed that the distortion correction driving (step-by-step driving) is performed every predetermined amount, and steps S1206 to S1208 are repeated. Note that this process is similar to the process described with reference to fig. 9 (steps S902 to S907), and the description thereof will not be repeated.
In step S1207, the image capturing apparatus calculates the current amount of distortion generated in the correction target subject. As with the distortion shape determination, the amount of distortion can be calculated based on depth information or based on edge detection.
Fig. 18 is a view for explaining calculation of an amount of distortion based on depth information. Fig. 18 illustrates the change in the captured image during the distortion correction driving in step S1206. In the captured image 1800, the difference in depth (subject distance) between the upper portion and the lower portion of the correction target subject is large, so the amount of distortion is calculated as a large value. In the captured image 1801 obtained after one or more steps of distortion correction driving, the difference in depth between the upper portion and the lower portion becomes smaller than in the captured image 1800, so the amount of distortion is calculated as a smaller value. In the captured image 1802 obtained after further distortion correction driving, the depths at the upper portion and the lower portion of the correction target subject are substantially equal, so the amount of distortion is calculated as a value close to 0. When the amount of distortion is calculated based on depth information, the difference in depth between a plurality of areas (for example, the upper and lower portions) of the correction target subject can thus be evaluated as the amount of distortion.
Fig. 19 is a view for explaining calculation of an amount of distortion based on edge detection. Fig. 19 likewise illustrates the change in the captured image during the distortion correction driving in step S1206. In the captured image 1900, the edge interval between the pair of edges differs greatly between the upper portion and the lower portion of the correction target subject, so the amount of distortion is calculated as a large value. In the captured image 1901 obtained after one or more steps of distortion correction driving, the difference in edge interval between the upper portion and the lower portion becomes smaller than in the captured image 1900, so the amount of distortion is calculated as a smaller value. In the captured image 1902 obtained after further distortion correction driving, the edge intervals at the upper portion and the lower portion of the correction target subject are substantially equal, so the amount of distortion is calculated as a value close to 0. When the amount of distortion is calculated based on edge detection, the difference in edge interval between a plurality of areas (for example, the upper and lower portions) of the correction target subject can thus be evaluated as the amount of distortion.
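Steps S1207 and S1208 can then be sketched as follows for the edge-based case; the threshold value is an assumed placeholder, since the text leaves the threshold unspecified.

```python
THRESHOLD = 2.0  # illustrative threshold, e.g. pixels of edge-interval difference

def distortion_amount(upper_interval: float, lower_interval: float) -> float:
    """Residual distortion, evaluated as the difference in edge interval
    (or, analogously, in depth) between the upper and lower parts of the
    correction target (figs. 18 and 19)."""
    return abs(upper_interval - lower_interval)

def correction_done(upper_interval: float, lower_interval: float) -> bool:
    """S1208: stop the step-by-step driving once the residual amount of
    distortion is at or below the threshold."""
    return distortion_amount(upper_interval, lower_interval) <= THRESHOLD
```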
In step S1208, the image capturing apparatus determines whether the distortion amount calculated in step S1207 is equal to or smaller than a predetermined threshold. If the distortion amount is greater than the predetermined threshold, the distortion correction is insufficient and the process returns to step S1206. If the amount of distortion is equal to or smaller than the predetermined threshold, the process advances to step S1209.
In step S1209, the image capturing apparatus determines whether the distortion shape determination (step S1204) has been performed twice. This is because a captured image sometimes contains both vertical distortion (upper or lower distortion) and horizontal distortion (right or left distortion). Since one round of correction based on a single distortion shape determination (steps S1204 to S1208) corrects only the vertical distortion or only the horizontal distortion, it is determined whether the distortion shape determination has been performed twice.
As described above, the image capturing apparatus can correct distortion generated in a subject of a captured image based on depth information corresponding to the captured image or a result of edge detection of the captured image.
Other embodiments
The embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer-executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a "non-transitory computer-readable storage medium") to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., an application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer-executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., a central processing unit (CPU), a micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer-executable instructions. The computer-executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disc (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
Embodiment(s) of the present invention can also be realized by supplying software (a program) that implements the functions of the above-described embodiments to a system or apparatus via a network or various storage media, and causing a computer (or a central processing unit (CPU), micro processing unit (MPU), or the like) of the system or apparatus to read out and execute the program.
While the invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (20)

1. An image capturing apparatus, the image capturing apparatus comprising:
an image capturing unit including an imaging optical system and an image sensor;
a rotation driving unit configured to drive the image capturing unit in a panning direction and/or a tilting direction;
an offset driving unit configured to drive at least one of the imaging optical system and the image sensor in a plane parallel to an imaging plane; and
a synchronization control unit configured to synchronize a rotational drive of the image capturing unit with an offset drive of at least one of the imaging optical system and the image sensor so as not to change an imaging range of the image capturing unit during correction of distortion of a subject in a captured image captured by the image capturing unit.
2. The image capturing apparatus according to claim 1, wherein the offset driving unit drives at least one of the imaging optical system and the image sensor in a plane parallel to the imaging plane to correct distortion of a subject in a captured image captured by the image capturing unit.
3. The image capturing apparatus according to claim 2, wherein the synchronization control unit determines the second driving amount in one of the rotation drive and the offset drive based on the first driving amount in the other of the rotation drive and the offset drive.
4. The image capturing apparatus according to claim 3, wherein the synchronization control unit further determines the second driving amount based on a size of the image sensor and an imaging distance from the imaging optical system to the image sensor.
5. The image capturing apparatus according to claim 4, further comprising an acquisition unit configured to acquire a user instruction for correction of distortion of a subject in the captured image.
6. The image capturing apparatus according to claim 5, wherein the acquisition unit is configured to sequentially acquire a drive instruction to drive at least one of the imaging optical system and the image sensor by a predetermined amount, and
the synchronization control unit drives at least one of the imaging optical system and the image sensor by the predetermined amount based on the driving instruction if the acquisition unit acquires the driving instruction, sequentially determines a driving amount of the image capturing unit corresponding to the driving of the predetermined amount, and drives the image capturing unit.
7. The image capturing apparatus according to claim 5, wherein the acquisition unit is configured to acquire a driving instruction to drive at least one of the imaging optical system and the image sensor, and
the synchronization control unit determines the driving speeds of both the rotation driving unit and the offset driving unit and starts driving if the acquisition unit acquires the driving instruction.
8. The image capturing apparatus according to claim 7, wherein if the acquisition unit acquires the drive instruction, the synchronization control unit determines the drive speed so that drive times of both the rotation drive unit and the offset drive unit are equal.
9. The image capturing apparatus according to claim 8, wherein the acquisition unit is configured to also acquire a drive stop instruction to stop driving of at least one of the imaging optical system and the image sensor, and
if the acquisition unit acquires the drive stop instruction when the rotation drive unit and the offset drive unit are driven, the synchronization control unit adjusts one of the orientation of the image capturing unit and the offset position of the image sensor based on the timing at which the drive stop instruction is acquired.
10. The image capturing apparatus according to claim 9, wherein the drive instruction includes information on the offset direction in which the offset driving unit drives the image sensor.
11. The image capturing apparatus according to claim 1, further comprising an acquisition unit configured to acquire a drive start instruction to repeatedly drive the offset driving unit in increments of a predetermined amount,
wherein the offset driving unit starts driving if the acquisition unit acquires the drive start instruction.
12. The image capturing apparatus according to claim 11, further comprising:
a determination unit configured to determine an amount of distortion of a subject in a captured image obtained by the image capturing unit; and
a depth obtaining unit configured to obtain depth information corresponding to the captured image obtained by the image capturing unit,
wherein the determination unit determines the amount of distortion based on a difference in depth in a subject region included in the captured image.
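One plausible reading of claim 12's depth-difference measure, sketched with NumPy; the depth-map and mask representations, and the use of robust percentiles, are assumptions for illustration:

```python
import numpy as np

def distortion_amount_from_depth(depth_map: np.ndarray, subject_mask: np.ndarray) -> float:
    """Distortion amount as the depth spread inside the subject region:
    a subject squarely facing the camera has near-uniform depth, while a
    tilted one shows a large near/far difference."""
    subject_depths = depth_map[subject_mask]
    if subject_depths.size == 0:
        return 0.0  # no subject pixels detected
    # Robust percentiles suppress depth-map outliers better than min/max.
    near, far = np.percentile(subject_depths, [5, 95])
    return float(far - near)
```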
13. The image capturing apparatus according to claim 11, further comprising:
a determination unit configured to determine an amount of distortion of a subject in a captured image obtained by the image capturing unit; and
an edge detection unit configured to detect edges of the subject included in the captured image obtained by the image capturing unit,
wherein the determination unit determines the amount of distortion based on a difference in edge interval between a pair of edges detected for the subject by the edge detection unit.
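Claim 13's edge-interval difference can be illustrated as a simple keystone measure; the pixel units and the two-interval (top/bottom) formulation are assumptions:

```python
def distortion_from_edge_interval(top_interval_px: float, bottom_interval_px: float) -> float:
    """Normalized keystone measure from the spacing between a pair of subject
    edges at the top and bottom of the frame: 0 means the edges are parallel;
    larger values mean stronger trapezoidal distortion."""
    widest = max(top_interval_px, bottom_interval_px)
    if widest == 0:
        return 0.0  # degenerate input; no measurable interval
    return abs(top_interval_px - bottom_interval_px) / widest
```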
14. The image capturing apparatus according to claim 1, further comprising a determination unit configured to determine a type of distortion shape of a subject in a captured image obtained by the image capturing unit,
wherein the driving directions of the rotation driving unit and the offset driving unit are determined based on the type of distortion shape determined by the determination unit.
15. The image capturing apparatus according to claim 14, further comprising a depth obtaining unit configured to obtain depth information corresponding to a captured image obtained by the image capturing unit,
wherein the determination unit determines the distortion shape type based on a direction of change of depth in a subject region included in the captured image.
16. The image capturing apparatus according to claim 14, further comprising an edge detection unit configured to detect an edge of a subject included in a captured image obtained by the image capturing unit,
wherein the determination unit determines the distortion shape type based on a direction of change of an edge interval between a pair of edges detected for the subject by the edge detection unit.
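A sketch of the shape-type determination in claims 14 and 16; the trapezoid taxonomy, tolerance value, and function names are assumptions for illustration:

```python
from enum import Enum

class DistortionShape(Enum):
    NONE = 0
    NARROWER_AT_TOP = 1     # e.g. a tall building shot from the ground
    NARROWER_AT_BOTTOM = 2  # e.g. a subject shot from above

def classify_distortion_shape(top_interval_px: float, bottom_interval_px: float,
                              tolerance_px: float = 2.0) -> DistortionShape:
    """Determine the distortion shape type from the direction in which the
    edge interval changes between the top and bottom of the subject."""
    diff = top_interval_px - bottom_interval_px
    if abs(diff) <= tolerance_px:
        return DistortionShape.NONE
    return DistortionShape.NARROWER_AT_TOP if diff < 0 else DistortionShape.NARROWER_AT_BOTTOM
```

The returned type would then select the signs of the rotational and offset drives (e.g. tilt up versus tilt down), per claim 14.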
17. The image capturing apparatus according to claim 14, further comprising an acquisition unit configured to acquire, if a plurality of subjects suffering from distortion are present in the captured image, a selection of the one subject to be subjected to the determination by the determination unit.
18. The image capturing apparatus according to claim 1, further comprising a determination unit configured to sequentially determine an amount of distortion of a subject in a captured image obtained by the image capturing unit,
wherein the synchronization control unit stops driving if the distortion amount determined by the determination unit is not greater than a predetermined threshold value.
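Putting the pieces together, claim 18's stop condition suggests a closed loop like the following sketch, which reuses on_shift_instruction from the claim 6 sketch; the capture and measurement methods, step size, and threshold are hypothetical:

```python
def correct_until_within_threshold(camera, step_ratio: float = 0.01,
                                   threshold: float = 0.05, max_steps: int = 200) -> None:
    """Repeat small synchronized shift+rotation steps, re-measuring the
    distortion amount each time, and stop once it no longer exceeds the
    threshold."""
    for _ in range(max_steps):
        frame = camera.capture()
        if camera.measure_distortion(frame) <= threshold:
            break  # distortion is small enough; stop driving
        on_shift_instruction(camera, step_ratio)  # one synchronized step
```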
19. A method of controlling an image capturing apparatus, the image capturing apparatus comprising:
an image capturing unit including an imaging optical system and an image sensor;
a rotation driving unit configured to drive the image capturing unit in a panning direction and/or a tilting direction; and
an offset driving unit configured to drive at least one of the imaging optical system and the image sensor in a plane parallel to an imaging plane, and
the control method comprising:
synchronizing rotational driving of the image capturing unit with offset driving of at least one of the imaging optical system and the image sensor so as not to change an imaging range of the image capturing unit during correction of distortion of a subject in a captured image captured by the image capturing unit.
20. A medium storing a program that causes a computer to execute a method of controlling an image capturing apparatus, the image capturing apparatus comprising:
an image capturing unit including an imaging optical system and an image sensor;
a rotation driving unit configured to drive the image capturing unit in a panning direction and/or a tilting direction; and
an offset driving unit configured to drive at least one of the imaging optical system and the image sensor in a plane parallel to an imaging plane, and
the control method comprising:
synchronizing rotational driving of the image capturing unit with offset driving of at least one of the imaging optical system and the image sensor so as not to change an imaging range of the image capturing unit during correction of distortion of a subject in a captured image captured by the image capturing unit.
CN202310216295.5A 2022-03-14 2023-03-08 Image capturing apparatus, control method, and medium Pending CN116761065A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2022-039559 2022-03-14
JP2022-184973 2022-11-18
JP2022184973A JP2023134346A (en) 2022-03-14 2022-11-18 Imaging apparatus and method for controlling the same

Publications (1)

Publication Number Publication Date
CN116761065A (en) 2023-09-15

Family

ID=87950216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310216295.5A Pending CN116761065A (en) 2022-03-14 2023-03-08 Image capturing apparatus, control method, and medium

Country Status (1)

Country Link
CN (1) CN116761065A (en)

Similar Documents

Publication Publication Date Title
US9830947B2 (en) Image-capturing device
US9344634B2 (en) Imaging apparatus having subject detection function, method for controlling the imaging apparatus, and storage medium
KR101034109B1 (en) Image capture apparatus and computer readable recording medium storing with a program
JP2024040284A (en) Imaging device, computer program, and imaging control method
JP6823469B2 (en) Image blur correction device and its control method, image pickup device, program, storage medium
US11582394B2 (en) Control apparatus, control method, and storage medium for providing tilt control
JP2006162991A (en) Stereoscopic image photographing apparatus
JP5868061B2 (en) Imaging device
JP7433848B2 (en) Imaging device, computer program, storage medium, and imaging control method
KR100850466B1 (en) Apparatus for photography and method for controlling auto focus thereof
US9274402B2 (en) Imaging device that executes auto focus control by referring to distance image data regarding a depth of a region
JP2013110754A (en) Camera device, and photographing method and program of the same
JP4925168B2 (en) Imaging method and apparatus
EP4246991A1 (en) Image capturing apparatus, method of controlling the same, and program
CN116761065A (en) Image capturing apparatus, control method, and medium
EP4002830A2 (en) Image pick up apparatus, image pick up method, and storage medium
US11330179B2 (en) Imaging device and control method thereof
JP4981955B2 (en) Imaging apparatus and control method thereof
JP6778340B2 (en) Imaging device, imaging method, and program
JP2023134346A (en) Imaging apparatus and method for controlling the same
JPH08190113A (en) Camera with swing function
US11778321B2 (en) Image capturing apparatus capable of performing omnifocal photographing, method of controlling same, and storage medium
JP2015118338A (en) Image capturing device, control method therefor, program, and storage medium
JP2018106763A (en) Imaging apparatus
JP6525710B2 (en) Imaging device, control method therefor, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination