WO2022198534A1 - Installation method for camera module, and mobile platform - Google Patents
Installation method for camera module, and mobile platform
- Publication number
- WO2022198534A1 (PCT/CN2021/082852, CN2021082852W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- camera module
- center
- coordinate system
- image sensor
- positive direction
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B17/00—Details of cameras or camera bodies; Accessories therefor
- G03B17/02—Bodies
- G03B17/12—Bodies with means for supporting objectives, supplementary lenses, filters, masks, or turrets
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B30/00—Camera modules comprising integrated lens units and imaging units, specially adapted for being embedded in other devices, e.g. mobile phones or vehicles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/54—Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/55—Optical parts specially adapted for electronic image sensors; Mounting thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/57—Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R11/00—Arrangements for holding or mounting articles, not otherwise provided for
- B60R11/04—Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
Definitions
- the present application relates to the field of sensor technology, and in particular, to an installation method for a camera module and to a mobile platform.
- the lens group and the image sensor are two important components of the camera module.
- the lens group may be a group of convex (concave) lenses
- the image sensor is the imaging surface.
- the image sensor converts the light transmitted from the lens group into electrical signals, and then internally converts the electrical signals into digital signals through analog-digital conversion to form an image.
- the camera module can be used to detect the surrounding environment, and then perform three-dimensional reconstruction of the surrounding environment, so as to achieve target positioning.
- the functions of the camera module can mainly include the detection and recognition of targets such as surrounding vehicles, pedestrians, general obstacles, lane lines, road markers, and traffic signs; the distance and speed measurement of the detected targets; and the estimation of the camera motion (that is, the ego-vehicle motion, including rotation and translation), with three-dimensional reconstruction of the surrounding environment on this basis, so as to achieve target localization.
- the present application provides an installation method for a camera module and a mobile platform, which are used to solve the problem that an installation method chosen to meet the required detection range affects the accuracy and robustness of camera motion estimation, resulting in a large target positioning error.
- the present application provides a method for installing a camera module, which may specifically include: installing the camera module on a mobile platform, the camera module being parallel to the ground on which the mobile platform is located; wherein the camera module includes a lens group and an image sensor, and the lens group includes at least one lens; the projection of the center of the lens group on the plane of the image sensor is a first position; the distance between the first position and the center of the image sensor is greater than a first threshold, and the first threshold is greater than 0.
- because the camera module is parallel to the ground where the mobile platform is located, the motion direction is guaranteed to be perpendicular to the image sensor plane, which improves the accuracy and robustness of camera motion estimation and thereby reduces the target positioning error; and because the distance between the first position (the projection of the center of the lens group on the plane of the image sensor) and the center of the image sensor is greater than the first threshold, the perception capability of the part above or below the center of the lens group is increased, so that the camera module can meet the required detection range without being installed at an angle.
- the optical axis of the lens group and the normal of the image sensor plane are both parallel to the ground on which the moving platform is located. This ensures that the motion direction is perpendicular to the image sensor plane, which improves the accuracy and robustness of camera motion estimation.
- the line connecting the first position and the center of the image sensor is perpendicular to the horizontal axis of the first coordinate system, where the first coordinate system is a rectangular coordinate system that takes the center of the image sensor as the origin, horizontal rightward as the positive direction of the horizontal axis, and vertical downward as the positive direction of the vertical axis. In this way, the offset between the first position and the center of the image sensor is only in the vertical direction, with no offset in the horizontal direction, so that the detection performance of the camera module can be guaranteed.
- the distance between the first position and the center of the image sensor is related to a first angle
- the first angle is the angle between the angle bisector of the vertical field of view (VFOV) of the camera module and the optical axis of the lens group
- the first angle can represent the direction of the VFOV of the camera module, and further can indicate the position of the actual detection range, and the required detection range can be obtained by adjusting the first angle. In this way, the distance between the first position and the center of the image sensor can be determined according to actual detection requirements.
- the first angle is greater than 0 degrees and less than or equal to half of the supplementary angle of the VFOV of the camera module.
- the degree of the first angle can be determined according to the actually required detection range, so as to determine the direction of the VFOV of the camera module, and then the distance between the first position and the center of the image sensor can be determined to meet the actual detection needs.
- the first position is higher than the center of the image sensor, so as to increase the perception capability of the part below the center of the lens group; that is, information from areas below that originally could not be imaged on the image sensor can now be received by the image sensor through the lens group. Alternatively, the first position is lower than the center of the image sensor, so as to increase the perception capability of the part above the center of the lens group; that is, information from areas above that originally could not be imaged on the image sensor can now be received by the image sensor through the lens group.
- the absolute value of the ordinate y0 of the optical center or focus of expansion (FOE) corresponding to the camera module in the first coordinate system conforms to the following formula: |y0| = f_y · h(θ)
- f_y is the focal length of the center of the camera module
- θ is the first angle
- h(θ) may be tan(θ), or h(θ) may be a unary function of θ of degree N, where N is an integer greater than 0
- |·| denotes the absolute value of the parameter
- tan(·) represents the tangent function
- the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, horizontal rightward as the positive direction of the horizontal axis, and vertical downward as the positive direction of the vertical axis.
- the ordinate y 0 of the optical center or expansion focus FOE corresponding to the camera module in the first coordinate system conforms to the following formula:
- f y is the focal length of the center of the camera module
- VFOV is the VFOV of the camera module
- the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin
- the horizontal rightward is the positive direction of the horizontal axis
- the vertical downward is the positive direction of the vertical axis.
- the ordinate y 0 of the optical center or expansion focus FOE corresponding to the camera module in the first coordinate system conforms to the following formula:
- f y is the focal length of the center of the camera module
- ⁇ d is related to ⁇ 1
- ⁇ d can be a function g( ⁇ 1) related to ⁇ 1
- the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin
- the horizontal rightward is the positive direction of the horizontal axis
- the vertical downward is the positive direction of the vertical axis.
- the ordinate y 0 of the optical center or expansion focus FOE corresponding to the camera module in the first coordinate system conforms to the following formula:
- f y is the focal length of the center of the camera module
- ⁇ 0 is the preset degree
- VFOV is the VFOV of the camera module
- the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin
- the horizontal rightward is the positive direction of the horizontal axis
- the vertical downward is the positive direction of the vertical axis.
- the ordinate y 0 of the optical center or expansion focus FOE corresponding to the camera module in the first coordinate system conforms to the following formula:
- f y is the focal length of the center of the camera module
- ⁇ s is related to ⁇ 2
- ⁇ s can be a function g( ⁇ 2) related to ⁇ 2
- ⁇ 0 is a preset degree
- the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin
- the horizontal rightward is the positive direction of the horizontal axis
- the vertical downward is the positive direction of the vertical axis
- the present application provides a mobile platform, on which a camera module is installed parallel to the ground on which the mobile platform is located; the camera module includes a lens group and an image sensor, and the lens group includes at least one lens; wherein the projection of the center of the lens group on the image sensor plane is the first position, the distance between the first position and the center of the image sensor is greater than a first threshold, and the first threshold is greater than 0.
- because the camera module is parallel to the ground where the mobile platform is located, the motion direction is guaranteed to be perpendicular to the image sensor plane, which improves the accuracy and robustness of camera motion estimation and thereby reduces the target positioning error; and because the distance between the first position (the projection of the center of the lens group on the plane of the image sensor) and the center of the image sensor is greater than the first threshold, the perception capability of the part above or below the center of the lens group is increased, so that the camera module can meet the required detection range without being installed at an angle.
- the optical axis of the lens group and the normal of the image sensor plane are both parallel to the ground on which the moving platform is located. This ensures that the motion direction is perpendicular to the image sensor plane, which improves the accuracy and robustness of camera motion estimation.
- the line connecting the center of the lens group and the center of the image sensor is perpendicular to the horizontal axis direction in the first coordinate system, where the first coordinate system is a rectangular coordinate system that takes the center of the image sensor as the origin, horizontal rightward as the positive direction of the horizontal axis, and vertical downward as the positive direction of the vertical axis.
- the distance between the first position and the center of the image sensor is related to a first angle
- the first angle is the angle between the angle bisector of the vertical field of view (VFOV) of the camera module and the optical axis of the lens group
- the first angle can represent the direction of the VFOV of the camera module, and further can indicate the position of the actual detection range, and the required detection range can be obtained by adjusting the first angle. In this way, the distance between the first position and the center of the image sensor can be determined according to actual detection requirements.
- the first angle is greater than 0 degrees and less than or equal to half of the supplementary angle of the VFOV of the camera module.
- the degree of the first angle can be determined according to the actually required detection range, so as to determine the direction of the VFOV of the camera module, and then the distance between the first position and the center of the image sensor can be determined to meet the actual detection needs.
- the first position is higher than the center of the image sensor, so as to increase the perception capability of the part below the center of the lens group; that is, information from areas below that originally could not be imaged on the image sensor can now be received by the image sensor through the lens group. Alternatively, the first position is lower than the center of the image sensor, so as to increase the perception capability of the part above the center of the lens group; that is, information from areas above that originally could not be imaged on the image sensor can now be received by the image sensor through the lens group.
- the absolute value of the ordinate y0 of the optical center or focus of expansion (FOE) corresponding to the camera module in the first coordinate system conforms to the following formula: |y0| = f_y · h(θ)
- f_y is the focal length of the center of the camera module
- θ is the first angle
- h(θ) may be tan(θ), or h(θ) may be a unary function of θ of degree N, where N is an integer greater than 0
- the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin, horizontal rightward as the positive direction of the horizontal axis, and vertical downward as the positive direction of the vertical axis.
- the ordinate y 0 of the optical center or expansion focus FOE corresponding to the camera module in the first coordinate system conforms to the following formula:
- f y is the focal length of the center of the camera module
- VFOV is the VFOV of the camera module
- the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin
- the horizontal rightward is the positive direction of the horizontal axis
- the vertical downward is the positive direction of the vertical axis.
- the ordinate y 0 of the optical center or expansion focus FOE corresponding to the camera module in the first coordinate system conforms to the following formula:
- f y is the focal length of the center of the camera module
- ⁇ d is related to ⁇ 1
- ⁇ d can be a function g( ⁇ 1) related to ⁇ 1
- the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin
- the horizontal rightward is the positive direction of the horizontal axis
- the vertical downward is the positive direction of the vertical axis.
- the ordinate y 0 of the optical center or expansion focus FOE corresponding to the camera module in the first coordinate system conforms to the following formula:
- f y is the focal length of the center of the camera module
- ⁇ 0 is the preset degree
- VFOV is the VFOV of the camera module
- the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin
- the horizontal rightward is the positive direction of the horizontal axis
- the vertical downward is the positive direction of the vertical axis.
- the ordinate y 0 of the optical center or expansion focus FOE corresponding to the camera module in the first coordinate system conforms to the following formula:
- f y is the focal length of the center of the camera module
- ⁇ s is related to ⁇ 2
- ⁇ s can be a function g( ⁇ 2) related to ⁇ 2
- ⁇ 0 is a preset degree
- the first coordinate system is a rectangular coordinate system established with the center of the image sensor as the origin
- the horizontal rightward is the positive direction of the horizontal axis
- the vertical downward is the positive direction of the vertical axis
- FIG. 1 is a schematic diagram of an exploded view of a camera module provided by the application.
- FIG. 2 is a schematic diagram of an assembly position of a lens group and an image sensor in the prior art
- FIG. 3 is a schematic diagram of a perceptible VFOV and a horizontal field of view of a camera module in the prior art
- FIG. 4 is a schematic diagram of the installation of a front-view camera in a vehicle-mounted surround view perception system in the prior art
- FIG. 5 is a schematic diagram of the installation of a front-view camera in a vehicle-mounted front-view perception system in the prior art
- FIG. 6 is a schematic diagram of the installation of a camera module provided by the application.
- FIG. 7 is a side view of a camera module provided by the application.
- FIG. 10 is a schematic diagram of the installation of another camera module provided by the application.
- FIG. 11 is a schematic diagram of the installation of another camera module provided by the application.
- FIG. 12 is a schematic diagram of estimating camera motion by matching feature points in two frames of images captured by a camera module provided by the application;
- FIG. 13 is a schematic diagram of a solution result provided by the application.
- FIG. 14 is a schematic diagram of the influence of the translation direction of the camera module on the translation vector estimation provided by the present application.
- Embodiments of the present application provide an installation method for a camera module and a mobile platform, which are used to solve the problem that an installation method chosen to meet the required detection range affects the accuracy and robustness of camera motion estimation, thereby causing a large target positioning error.
- At least one refers to one or more; multiple refers to two or more.
- the camera module may mainly include a lens group and an image sensor.
- the lens group may include at least one lens; for example, the lens group in FIG. 1 may include a lens 1, a lens 2 and a lens 3.
- At least one lens included in the lens group may be a convex (concave) lens.
- the lens group can be fixed by a lens barrel and a lens holder, and there is a filter between the lens group and the lens holder.
- the image sensor may be a semiconductor chip, and the semiconductor chip includes a chip photosensitive area. Specifically, the image sensor can convert the light transmitted from the lens group into an electrical signal, and then internally convert the electrical signal into a digital signal through analog-digital conversion to form an image.
- the camera module may further include a circuit board, and the circuit board is a support body for electronic components and a carrier for electrical interconnection of electronic components.
- the number of lenses included in the lens group shown in FIG. 1 is only an example, and in practice, the lens group may include more or less lenses, which is not limited in this application.
- in the prior art, the center of the lens group is aligned with the center of the image sensor. As can be seen from the schematic diagram of the assembly position of the lens group and the image sensor shown in Figure 2 and the projection schematic diagram of the camera module shown in Figure 3, the projection of the center of the lens group on the plane of the image sensor coincides with the center of the image sensor (that is, the centers are aligned).
- as a result, the vertical field of view (VFOV) that the camera module can actually perceive is small.
- the perceptible VFOV and horizontal field of view (HFOV) of the camera module can be shown in FIG. 3, where the VFOV is the opening angle subtended by the lens group over the sensor height, and the HFOV is the opening angle subtended by the lens group over the sensor width; only the height and width are illustrated in Figure 3.
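As a hedged illustration of the VFOV/HFOV relationship just described (not part of the patent text), the two opening angles can be computed from the sensor's physical dimensions and the focal length under a simple pinhole model; the sensor and lens values below are hypothetical.

```python
import math

def field_of_view_deg(sensor_dim_mm: float, focal_length_mm: float) -> float:
    """Opening angle (degrees) subtended by one sensor dimension
    under a pinhole model: FOV = 2 * atan(d / (2 * f))."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_length_mm)))

# Hypothetical sensor 4.8 mm wide by 3.6 mm high, behind a 6 mm lens
hfov = field_of_view_deg(4.8, 6.0)  # opening angle over the sensor width
vfov = field_of_view_deg(3.6, 6.0)  # opening angle over the sensor height
print(f"HFOV = {hfov:.1f} deg, VFOV = {vfov:.1f} deg")
```

With these hypothetical dimensions the HFOV comes out wider than the VFOV, matching the usual landscape sensor orientation.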
- the camera module can be widely used in various scenarios that require pose estimation and 3D reconstruction of the target, so as to detect the surrounding environment through the camera module, and perform 3D reconstruction of the surrounding environment to achieve target positioning.
- the camera module can be applied to a mobile platform, etc., to realize the detection of the surrounding environment such as the mobile platform.
- the camera module can be applied to a vehicle, and specifically can be applied to an in-vehicle perception system.
- the camera module can be used for the front-view camera of the vehicle-mounted forward-view perception system, and can also be applied to the front-view camera, side-view camera or rear-view camera of the vehicle-mounted surround-view perception system.
- the field of view (FOV) of the front-view camera, side-view camera or rear-view camera may be 40 to 60 degrees in a general specification, a narrower 23 to 40 degrees, or a wider 100 to 180 degrees, and so on.
- the front-view camera, the side-view camera, or the rear-view camera may specifically be a type of camera such as a monocular camera, a binocular camera, or a fish-eye camera.
- the functions of the camera module may mainly include the detection and recognition of objects such as vehicles, pedestrians, general obstacles, lane lines, road markers, and traffic signs around the vehicle; the distance and speed measurement of the detected targets; and the estimation of the camera motion (including rotation and translation), with three-dimensional reconstruction of the surrounding environment on this basis, so as to achieve target localization.
- when the camera module is applied to the vehicle surround view perception system, the camera module detects the environment around the vehicle body. In order to ensure that the blind area around the vehicle body is small enough, the camera module is tilted downward toward the ground during installation to ensure that the lower edge of the VFOV is close to the edge of the vehicle body.
- the camera module can be tilted downward by 30 degrees (that is, the optical axis of the camera module is inclined downward by 30 degrees) to ensure that the lower edge of its VFOV is close to the front cover of the vehicle.
- when the camera module is applied to the vehicle front-view perception system, objects such as traffic signs and traffic lights are detected based on the camera module. For example, as shown in the installation diagram of the front-view camera in the vehicle-mounted front-view perception system in Figure 5, assuming that the VFOV of the front-view camera is 40 degrees and is limited by the hood of the vehicle, its field of view below the horizontal is about 12 degrees. In order to meet the detection range, the camera module can be tilted upward by 8 degrees during installation (that is, the optical axis of the camera module can be tilted upward by 8 degrees).
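The 8-degree figure in the example follows from simple arithmetic: a 40-degree VFOV extends 20 degrees below the optical axis, but the hood leaves only about 12 degrees of usable downward view, so the axis must be raised by the difference. A minimal sketch of that reasoning (not from the patent):

```python
def required_upward_tilt(vfov_deg: float, available_down_fov_deg: float) -> float:
    """Upward tilt needed so the lower VFOV edge clears an obstruction:
    half the VFOV lies below the optical axis, and whatever part of it
    is blocked (here, by the hood) must be recovered by tilting up."""
    return vfov_deg / 2 - available_down_fov_deg

# Values from the example above: 40-degree VFOV, 12 degrees of downward view
tilt = required_upward_tilt(40, 12)
print(tilt)
```

A non-positive result would mean the obstruction does not intrude into the VFOV and no upward tilt is needed.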
- in other words, in order to satisfy the detection range, the camera module is installed obliquely. In this way, the motion direction and the image sensor plane are no longer perpendicular, which affects the accuracy and robustness of camera motion estimation and leads to a large target positioning error.
- the present application proposes an installation method and a mobile platform for a camera module, so as to improve the accuracy and robustness of camera motion estimation, thereby reducing target positioning errors.
- the embodiment of the present application provides a method for installing a camera module, which may specifically include: installing the camera module on a mobile platform, with the camera module parallel to the ground where the mobile platform is located; a schematic diagram of the installation of such a camera module can be shown in FIG. 6.
- the shape of the camera module and the installation position on the mobile platform in FIG. 6 are just an example, which is not limited in this application.
- the camera module may include a lens group and an image sensor, and the lens group includes at least one lens; the projection of the center of the lens group on the plane of the image sensor is a first position; the distance between the first position and the center of the image sensor is greater than a first threshold, and the first threshold is greater than 0.
- the side view of the camera module may be as shown in FIG. 7( a ) or FIG. 7( b ).
- without considering assembly errors, the center of the lens group in an existing camera module and the center of the image sensor are exactly aligned; at this time, the projection of the center of the lens group on the image sensor plane coincides with the center of the image sensor, that is, the distance between the projection position and the center of the image sensor is 0. That is to say, without considering assembly errors, the first threshold in the camera module involved in the present application is greater than 0.
- in practice, when the center of the lens group in an existing camera module is aligned with the center of the image sensor, there is also an error value: when the distance between the projection of the center of the lens group on the image sensor plane and the center of the image sensor is within this error value, the center of the lens group and the center of the image sensor are still considered aligned. That is to say, when the assembly error is considered, the first threshold in the camera module involved in the present application is greater than the error value.
- when the camera module is installed on the mobile platform and is parallel to the ground on which the mobile platform is located, the optical axis of the lens group and the normal of the image sensor plane are both parallel to the ground on which the mobile platform is located.
- the mobile platform may be a motor vehicle, an unmanned aerial vehicle, a rail car, a bicycle, a signal light, a speed measuring device, or a network device (such as a base station, terminal device in various systems), and the like.
- the camera module can be installed on transport equipment, household equipment, robots, PTZs and other movable equipment. This application does not limit the type of terminal equipment on which the camera module is installed and the function of the camera module.
- the line connecting the first position and the center of the image sensor may be perpendicular to the horizontal axis direction in the first coordinate system, where the first coordinate system is a Cartesian coordinate system that takes the center of the image sensor as the origin, horizontal rightward as the positive direction of the horizontal axis, and vertical downward as the positive direction of the vertical axis. That is, the projection position of the center of the lens group on the image sensor plane is offset from the center of the image sensor only in the vertical direction, with no offset in the horizontal direction.
- the first position may be higher than the center of the image sensor.
- the projection diagram of the camera module can be as shown in FIG. 8, in which it can be seen that the projection position of the lens group on the image sensor plane (that is, the first position) is above the center of the image sensor.
- the camera module shown in FIG. 8 can increase the perception capability of the part below the center of the lens group; that is, information from areas that originally could not be imaged on the image sensor can now be received by the image sensor through the lens group.
- the first position may be lower than the center of the image sensor.
- the projection diagram of the camera module can be as shown in FIG. 9, in which it can be seen that the projection position of the lens group on the image sensor plane (that is, the first position) is below the center of the image sensor.
- the camera module shown in FIG. 9 can increase the perception capability for the part above the center of the lens group; that is, information from the region that originally could not be imaged on the image sensor can now be received by the image sensor through the lens group.
- the distance between the first position and the center of the image sensor may be related to a first angle, where the first angle may be the angle between the bisector of the VFOV of the camera module and the optical axis of the lens group.
- the first angle may be greater than 0 degrees and less than or equal to half of the supplementary angle of the VFOV of the camera module.
- the distance between the first position and the center of the image sensor is the absolute value of the ordinate, in the first coordinate system, of the optical center or focus of expansion (FOE) corresponding to the camera module discussed below.
- the absolute value of the ordinate y0 of the optical center or FOE corresponding to the camera module in the first coordinate system may conform to the following formula 1: |y0| = |fy·h(θ)|, where fy is the focal length of the camera module center, θ is the first angle, h(θ) = tan(θ) or h(θ) is a unary N-th-degree function of θ with N an integer greater than 0, |*| denotes taking the absolute value of its argument, and tan(*) denotes the tangent function.
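As an illustrative sketch (not part of the publication itself), the pinhole case of formula 1, |y0| = |fy·tan(θ)|, can be computed as follows; the focal length and angle values are assumed examples:

```python
import math

def optical_center_offset(f_y: float, theta_deg: float) -> float:
    """Pinhole case of formula 1: |y0| = |f_y * tan(theta)|.

    f_y: focal length of the camera module center (in pixels).
    theta_deg: the first angle, between the VFOV bisector and the
    optical axis, in degrees (0 < theta <= half the VFOV supplement).
    """
    theta = math.radians(theta_deg)
    return abs(f_y * math.tan(theta))

# Assumed example: f_y = 1000 px, first angle = 12 degrees.
offset_px = optical_center_offset(1000.0, 12.0)
```

The sign of y0 (formulas 2 and 3 below) then only depends on whether the calibrated optical center lies in the upper or lower half of the sensor plane.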
- the FOE may be the convergence point of the optical flow on the stationary target when the camera module moves based on the moving platform.
- the first angle may be an angle to be adjusted; that is, on the basis of satisfying the VFOV, the detection range of the camera module needs to be adjusted by the detection range corresponding to the first angle.
- the optical center coordinates calibrated during camera-intrinsic calibration of the camera module shown in FIG. 7(a) and FIG. 8 are located in the upper half of the image sensor plane.
- the coordinates of the FOE are also in the upper half of the image sensor plane.
- y0 is then a negative value, so based on the above formula 1, y0 can conform to the following formula 2: y0 = -fy·h(θ).
- the optical center coordinates calibrated during camera-intrinsic calibration of the camera module shown in FIG. 7(b) and FIG. 9 are located in the lower half of the image sensor plane.
- the coordinates of the FOE are also in the lower half of the image sensor plane.
- y0 is then a positive value, so based on the above formula 1, y0 can conform to the following formula 3: y0 = fy·h(θ).
- because the imaging models of camera modules differ, the function h(θ) of θ may also differ: when the imaging model of the camera module is a pinhole imaging model, h(θ) = tan(θ); when the imaging model is a fisheye imaging model, h(θ) may be a unary N-th-degree function of θ, for example h(θ) = θ(1 + k1·θ^2 + k2·θ^4 + k3·θ^6 + k4·θ^8), where k1, k2, k3 and k4 are the four coefficients of the fisheye imaging model.
- for example, the values of k1, k2, k3, k4 can be: -1.2101823606265119, 2.348159905176264, -2.8413822488946474, 1.3818466241138192; for another example: -1.1529851704803267, 2.114443595798193, -2.458009210238794, 1.1606670303240054; for yet another example: -1.1741024894366126, 2.1870282871688733, -2.5272904743180695, 1.170976436497773.
- the values of k 1 , k 2 , k 3 , and k 4 may also be other values, which will not be listed one by one in this application.
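As a hedged illustration (not from the publication itself), the fisheye polynomial h(θ) = θ(1 + k1·θ^2 + k2·θ^4 + k3·θ^6 + k4·θ^8) and the resulting offset |y0| = |fy·h(θ)| can be sketched as follows; the focal length is an assumed value:

```python
def h_fisheye(theta: float, k1: float, k2: float, k3: float, k4: float) -> float:
    """h(theta) = theta*(1 + k1*theta^2 + k2*theta^4 + k3*theta^6 + k4*theta^8),
    the odd 9th-degree polynomial of the fisheye imaging model (Horner form)."""
    t2 = theta * theta
    return theta * (1.0 + t2 * (k1 + t2 * (k2 + t2 * (k3 + t2 * k4))))

# First example coefficient set listed above.
k = (-1.2101823606265119, 2.348159905176264,
     -2.8413822488946474, 1.3818466241138192)
offset_px = abs(1000.0 * h_fisheye(0.21, *k))  # |y0| with an assumed f_y = 1000 px
```

For small angles h(θ) ≈ θ, so the fisheye and pinhole variants agree near the optical axis.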
- the distance between the first position and the center of the image sensor may be related to the VFOV of the camera module.
- the distance between the first position and the center of the image sensor is the absolute value of the ordinate, in the first coordinate system, of the optical center or FOE corresponding to the camera module discussed below.
- the ordinate y 0 of the optical center or FOE corresponding to the camera module in the first coordinate system may conform to the following formula 4:
- fy is the focal length of the camera module center; the other parameter is the VFOV of the camera module.
- the imaging model of the camera module is a pinhole imaging model
- the method of the above formula 4 may be used.
- the ordinate y 0 of the optical center or FOE corresponding to the camera module in the first coordinate system may conform to the following formula 5:
- f y is the focal length of the center of the camera module
- ⁇ d is related to ⁇ 1
- ⁇ d can be a function g( ⁇ 1) related to ⁇ 1
- VFOV is the VFOV of the camera module.
- the imaging model of the camera module is a fisheye imaging model
- the method of the above formula 5 may be used.
- ⁇ d ⁇ 1 (1+k 1 ⁇ 1 2 +k 2 ⁇ 1 4 +k 3 ⁇ 1 6 +k 4 ⁇ 1 8 ), k 1 , k 2 , k 3 , and k 4 are in the fisheye imaging model of the four coefficients.
- for example, the values of k1, k2, k3, k4 can be: -1.2101823606265119, 2.348159905176264, -2.8413822488946474, 1.3818466241138192; for another example: -1.1529851704803267, 2.114443595798193, -2.458009210238794, 1.1606670303240054; for yet another example: -1.1741024894366126, 2.1870282871688733, -2.5272904743180695, 1.170976436497773.
- the values of k 1 , k 2 , k 3 , and k 4 may also be other values, which will not be listed one by one in this application.
- the camera module is the camera module shown in (a) in FIG. 7 and FIG. 8
- the methods in the above formula 4 and formula 5 can be used.
- the ordinate y 0 of the optical center or FOE corresponding to the camera module in the first coordinate system may conform to the following formula 6:
- f y is the focal length of the center of the camera module
- ⁇ 0 is the preset degree
- VFOV is the VFOV of the camera module.
- ⁇ 0 may be 0.21 radian (rad), etc., or ⁇ 0 may be 12 degrees, etc. Of course, ⁇ 0 may also be other degrees, which is not limited in this application.
- the imaging model of the camera module is a pinhole imaging model
- the method of formula 6 above may be used.
- the ordinate y 0 of the optical center or FOE corresponding to the camera module in the first coordinate system may conform to the following formula 7:
- f y is the focal length of the center of the camera module
- ⁇ s is related to ⁇ 2, for example, ⁇ s can be a function g( ⁇ 2) related to ⁇ 2, is the VFOV of the camera module.
- the imaging model of the camera module is a fisheye imaging model
- the method of the above formula 7 may be used.
- for example, the values of k1, k2, k3, k4 can be: -1.2101823606265119, 2.348159905176264, -2.8413822488946474, 1.3818466241138192; for another example: -1.1529851704803267, 2.114443595798193, -2.458009210238794, 1.1606670303240054; for yet another example: -1.1741024894366126, 2.1870282871688733, -2.5272904743180695, 1.170976436497773.
- the values of k 1 , k 2 , k 3 , and k 4 may also be other values, which will not be listed one by one in this application.
- the camera module is the camera module shown in (b) in FIG. 7 and FIG. 9
- the methods in the above formula 6 and formula 7 can be used.
- the mobile platform may be a vehicle or the like.
- the camera module shown in FIG. 7(a) and FIG. 8 can be applied to the vehicle-mounted surround-view perception system, and the camera module shown in FIG. 7(b) and FIG. 9 can be applied to the vehicle-mounted forward-view perception system.
- when the above camera module is installed on a vehicle using the installation method provided by the embodiments of this application, the camera module does not need to be installed obliquely, and the detection needs can still be satisfied.
- when the mobile platform is a vehicle and the camera module installed on the vehicle is applied to the vehicle-mounted surround-view perception system, the camera module does not need to be tilted downward toward the ground during installation, and the blind spot around the vehicle body can still be kept small enough.
- the installation schematic diagram of the camera module may be as shown in FIG. 10 .
- the optical axis of the camera module is parallel to the ground on which the vehicle is located, which ensures that the motion direction is perpendicular to the image sensor plane, thereby improving the accuracy and robustness of camera motion estimation and reducing the target positioning error.
- when the mobile platform is a vehicle and the camera module installed on the vehicle is applied to the vehicle-mounted forward-view perception system, the camera module does not need to be installed tilted upward, and the perception range for objects such as traffic signs and traffic lights can still be increased.
- when the camera module is applied to the vehicle forward-view perception system, assuming the VFOV of the camera module is 40 degrees, the installation schematic of the camera module may be as shown in FIG. 11.
- the optical axis of the camera module is parallel to the ground on which the vehicle is located, which ensures that the motion direction is perpendicular to the image sensor plane, thereby improving the accuracy and robustness of camera motion estimation and reducing the target positioning error.
- the movement direction can be guaranteed to be perpendicular to the image sensor plane, which improves the accuracy and robustness of camera movement estimation, thereby reducing the target positioning error; and the required detection range can be met without tilting the camera module.
- an embodiment of the present application further provides a mobile platform, including a camera module, the camera module including a lens group and an image sensor, the lens group including at least one lens; the camera module is parallel to the ground on which the mobile platform is located; the projection of the center of the lens group on the plane of the image sensor is a first position; and the distance between the first position and the center of the image sensor is greater than a first threshold, the first threshold being greater than 0.
- the mobile platform may be, but is not limited to, a vehicle or the like.
- with the camera module installation method provided by the embodiments of this application, the camera module is installed on a mobile platform for target positioning; estimating the camera motion from feature points matched between two frames of images captured by the camera module is an important link in spatial 3D reconstruction algorithms. Taking FIG. 12 as an example, suppose the motion between two frames of images I1 and I2 is to be obtained, that is, the rotation R and translation t of the camera from the first frame I1 to the second frame I2.
- the centers of the lens groups corresponding to the two frames of images are O1 and O2 respectively.
- a feature point p1 in I1 corresponds to a feature point p2 in I2.
- matching or corresponding feature points in the two frames indicate that they are projections of the same three-dimensional point P in space onto the two frames.
- the pixel positions of the pixel points p1 and p2 of point P in the two frames I1 and I2 can conform to the following formula 8: s1·p1 = K·P, s2·p2 = K(R·P + t).
- K is the camera internal parameter matrix.
- in a homogeneous coordinate system, a vector equals itself multiplied by any non-zero constant; this is usually used to express a projection relation. For example, p1 and s1·p1 are in a projection relation: they are equal in the sense of the homogeneous coordinate system, or equal up to scale, which can be written as s1·p1 ~ p1.
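The scale-equivalence relation s1·p1 ~ p1 can be checked numerically; this is an illustrative sketch, not part of the publication:

```python
import numpy as np

def equal_up_to_scale(a, b, tol=1e-9) -> bool:
    """True if homogeneous vectors a and b differ only by a non-zero scalar,
    i.e. they are equal in the homogeneous (scale) sense, a ~ b."""
    m = np.stack([np.asarray(a, float), np.asarray(b, float)])
    return np.linalg.matrix_rank(m, tol=tol) == 1  # parallel <=> rank 1

p1 = np.array([2.0, 3.0, 1.0])
same = equal_up_to_scale(p1, 5.0 * p1)   # s1*p1 ~ p1
```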
- E can be estimated using n ≥ 8 pairs of feature points; putting all the points into one equation turns it into the system of linear equations shown in formula 10: A·e = 0.
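A minimal eight-point sketch of this linear system (illustrative only; the synthetic camera motion below is an assumed example, not data from the publication):

```python
import numpy as np

def estimate_E(x1, x2):
    """Eight-point estimate of the essential matrix from matched,
    K-normalized points x1[i] <-> x2[i], each array of shape (n, 2), n >= 8.
    Builds the linear system A e = 0 (formula 10) and returns the right
    singular vector of A with the smallest singular value, reshaped 3x3."""
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    A = np.stack([u2*u1, u2*v1, u2, v2*u1, v2*v1, v2,
                  u1, v1, np.ones_like(u1)], axis=1)
    _, _, Vt = np.linalg.svd(A)          # nullspace direction of A
    return Vt[-1].reshape(3, 3)

# Synthetic check with a made-up motion: rotate 0.1 rad about Z, translate t.
rng = np.random.default_rng(0)
P = rng.uniform(-1.0, 1.0, (12, 3))
P[:, 2] += 6.0                            # keep points in front of the camera
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t = np.array([0.5, 0.2, 0.1])
Q = P @ R.T + t
x1 = P[:, :2] / P[:, 2:3]
x2 = Q[:, :2] / Q[:, 2:3]
E = estimate_E(x1, x2)
```

Each row of A is the [u2u1, u2v1, u2, v2u1, v2v1, v2, u1, v1, 1] vector of one correspondence, so the estimate minimizes ||A·e|| subject to ||e|| = 1.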
- Rz(π/2) represents the rotation matrix obtained by rotating 90° about the Z axis; therefore, when decomposing E into R and t, there are a total of 4 possible solutions, as shown in FIG. 13.
- ⁇ can conform to the following formula XI:
- the error of the eigenvalue ⁇ 1 can be
- the error of the eigenvector e can be given as
- the solution of matrix E is noise sensitive.
- the noise mainly comes from feature point detection errors, feature point matching errors, quantization errors, and camera internal parameter calibration errors.
- ⁇ 1 ⁇ 2 when the rank of the matrix A is less than 8, ⁇ 1 ⁇ 2 . It can be seen from the above formula 11 that the second term of ⁇ becomes infinite, which makes the estimation error of matrix E also infinite.
- the camera translation direction (movement direction) should be perpendicular to the image sensor plane.
- when the camera module is installed on the mobile platform, the camera module can be parallel to the ground on which the mobile platform is located; specifically, the optical axis of the lens group of the camera module and the normal of the image sensor plane can both be parallel to that ground, as shown in FIG. 10 and FIG. 11, so that the camera movement direction is perpendicular to the image sensor plane, thereby improving the accuracy and robustness of camera motion estimation.
- the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Abstract
A camera module installation method and a mobile platform, for use in autonomous or intelligent driving. The method includes: installing the camera module on a mobile platform such that the camera module is parallel to the ground on which the mobile platform is located, where the camera module includes a lens group and an image sensor, the lens group includes at least one lens, the projection of the center of the lens group on the image sensor plane is a first position, and the distance between the first position and the center of the image sensor is greater than a first threshold, the first threshold being greater than 0. Because the camera module can be parallel to the ground on which the mobile platform is located, the motion direction is guaranteed to be perpendicular to the image sensor plane, which improves the accuracy and robustness of camera motion estimation, reduces the target positioning error, and satisfies the required detection range without installing the camera module at a tilt. The method can be applied to the Internet of Vehicles, such as vehicle-to-everything (V2X), long-term evolution for vehicles (LTE-V), and vehicle-to-vehicle (V2V) communication.
Description
本申请涉及传感器技术领域,尤其涉及一种摄像头模组的安装方法及移动平台。
镜头组和图像传感器是摄像头模组的两个重要组成部分。其中,镜头组可以是一组凸(凹)透镜,图像传感器是成像面。图像传感器将从镜头组传导过来的光线转换为电信号,再内部通过模拟-数字转换将电信号转换为数字信号,形成图像。
目前,可以应用摄像头模组实现对周围环境的探测,进而对周围环境进行三维重建,从而实现目标定位。例如,在摄像头模组应用于车载感知系统时,摄像头模组的功能主要可以包括对自车周围车辆、行人、一般障碍物、车道线、路面标识物、交通标志牌等目标的检测和识别,也包括对上述检测到的目标的距离、速度测量,还包括对摄像头运动(也即自车运动)(包括旋转和平移)的估计、以及在此基础上的对周围环境的三维重建,从而实现目标定位。
摄像头模组在移动平台,例如,车辆等运动物体上的安装还需要考虑摄像头运动估计的准确性和鲁棒性,减少目标定位误差。
发明内容
本申请提供一种摄像头模组的安装方法及移动平台,用以解决为了满足所需的探测范围,安装方法会影响摄像头运动估计的准确性和鲁棒性,进而导致目标定位误差较大的问题。
第一方面,本申请提供了一种摄像头模组的安装方法,具体可以包括:将所述摄像头模组安装在移动平台上,所述摄像头模组与所述移动平台所在的地面平行;其中,所述摄像头模组包括镜头组和图像传感器,所述镜头组包括至少一个镜头;所述镜头组的中心在所述图像传感器平面的投影为第一位置;所述第一位置与所述图像传感器中心的距离大于第一阈值,所述第一阈值大于0。
通过上述安装方法,由于摄像头模组可以与所述移动平台所在的地面平行,可以保证运动方向与图像传感器平面垂直,这样可以提高摄像头运动估计的准确性和鲁棒性,进而减小目标定位误差,并且通过镜头组中心在图像传感器平面投影的第一位置与图像传感器中心的距离大于第一阈值,可以增大镜头组的中心以上部分或者以下部分的感知能力,从而无需将摄像头模组倾斜安装,就可以满足所需的探测范围。
在一个可能的设计中,所述镜头组的光轴和所述图像传感器平面的法线均与所述移动平台所在的地面平行。这样可以保证运动方向与图像传感器平面垂直,从而可以提高摄像头运动估计的准确性和鲁棒性。
在一个可能的设计中,所述第一位置与所述图像传感器中心的连线与第一坐标系中横轴方向垂直,所述第一坐标系为以所述图像传感器中心为原点,水平向右为横轴正方向,竖直向下为纵轴正方向建立的直角坐标系。这样可以使所述第一位置与所述图像传感器中心仅在竖直方向上存在距离,在水平方向上不存在距离,从而可以保证摄像头模组的探测性能。
在一个可能的设计中,所述第一位置和所述图像传感器中心的距离与第一角度相关,所述第一角度为所述摄像头模组的垂直视场(vertical field of view,VFOV)角的角平分线与所述镜头组的光轴之间的角度。具体的,所述第一角度可以表征所述摄像头模组的VFOV的方向,进而可以指示实际探测范围的位置,可以通过调整所述第一角度来获得所需的探测范围。这样可以根据实际探测需求来确定所述第一位置和所述图像传感器中心的距离。
在一个可能的设计中,所述第一角度大于0度且小于或者等于所述摄像头模组的VFOV补角的一半。可以根据实际所需的探测范围确定要采用的第一角度的度数,以确定所述摄像头模组的VFOV的方向,进而确定所述第一位置和所述图像传感器中心的距离,以满足实际探测需求。
在一个可能的设计中,所述第一位置高于所述图像传感器中心,从而可以增大所述镜头组的中心以下部分的感知能力,即该区域原来未能在所述图像传感器上成像的信息可以更多地通过所述镜头组被所述图像传感器接收到;或者,所述第一位置低于所述图像传感器中心,从而可以增大所述镜头组的中心以上部分的感知能力,即该区域原来未能在所述图像传感器上成像的信息可以更多地通过所述镜头组被所述图像传感器接收到。
在一个可能的设计中,所述摄像头模组对应的光心或扩张焦点FOE在第一坐标系中的纵坐标y0的绝对值符合以下公式:|y0|=|fy·h(θ)|。其中,fy为所述摄像头模组中心的焦距,θ为所述第一角度,h(θ)=tan(θ),或者h(θ)为θ的一元N次函数,N为大于0的整数,|*|为对其中参数取绝对值,tan(*)表示正切函数,所述第一坐标系为以所述图像传感器中心为原点,水平向右为横轴正方向,竖直向下为纵轴正方向建立的直角坐标系。
在一个可能的设计中,所述摄像头模组对应的光心或扩张焦点FOE在第一坐标系中的纵坐标y0符合以下公式:
在一个可能的设计中,所述摄像头模组对应的光心或扩张焦点FOE在第一坐标系中的纵坐标y0符合以下公式:y0=-fy·θd。其中,fy为所述摄像头模组中心的焦距,θd与θ1相关,θd可以为与θ1有关的函数g(θ1),为所述摄像头模组的VFOV,所述第一坐标系为以所述图像传感器中心为原点,水平向右为横轴正方向,竖直向下为纵轴正方向建立的直角坐标系。
在一个可能的设计中,所述摄像头模组对应的光心或扩张焦点FOE在第一坐标系中的纵坐标y0符合以下公式:
在一个可能的设计中,所述摄像头模组对应的光心或扩张焦点FOE在第一坐标系中的纵坐标y0符合以下公式:y0=fy·θs。其中,fy为所述摄像头模组中心的焦距,θs与θ2相关,θs可以为与θ2有关的函数g(θ2),为所述摄像头模组的VFOV,θ0为预设度数,所述第一坐标系为以所述图像传感器中心为原点,水平向右为横轴正方向,竖直向下为纵轴正方向建立的直角坐标系。
第二方面,本申请提供了一种移动平台,所述移动平台安装有与其所在的地面平行的摄像头模组,所述摄像头模组包括镜头组和图像传感器,所述镜头组包括至少一个镜头;其中:所述镜头组的中心在所述图像传感器平面的投影为第一位置;所述第一位置与所述图像传感器中心的距离大于第一阈值,所述第一阈值大于0。
这样,由于摄像头模组可以与移动平台所在的地面平行,可以保证运动方向与图像传感器平面垂直,这样可以提高摄像头运动估计的准确性和鲁棒性,进而减小目标定位误差,并且通过镜头组中心在图像传感器平面投影的第一位置与图像传感器中心的距离大于第一阈值,可以增大镜头组的中心以上部分或者以下部分的感知能力,从而无需将摄像头模组倾斜安装,就可以满足所需的探测范围。
在一个可能的设计中,所述镜头组的光轴和所述图像传感器平面的法线均与所述移动平台所在的地面平行。这样可以保证运动方向与图像传感器平面垂直,从而可以提高摄像头运动估计的准确性和鲁棒性。
在一个可能的设计中,所述镜头组的中心与所述图像传感器中心的连线与第一坐标系中横轴方向垂直,所述第一坐标系为以所述图像传感器中心为原点,水平向右为横轴正方向,竖直向下为纵轴正方向建立的直角坐标系。这样可以使所述第一位置与所述图像传感器中心仅在竖直方向上存在距离,在水平方向上不存在距离,从而可以保证摄像头模组的性能。
在一个可能的设计中,所述第一位置和所述图像传感器中心的距离与第一角度相关,所述第一角度为所述摄像头模组的垂直视场VFOV的角平分线与所述镜头组的光轴之间的角度。具体的,所述第一角度可以表征所述摄像头模组的VFOV的方向,进而可以指示实际探测范围的位置,可以通过调整所述第一角度来获得所需的探测范围。这样可以根据实际探测需求来确定所述第一位置和所述图像传感器中心的距离。
在一个可能的设计中,所述第一角度大于0度且小于或者等于所述摄像头模组的VFOV补角的一半。可以根据实际所需的探测范围确定要采用的第一角度的度数,以确定所述摄像头模组的VFOV的方向,进而确定所述第一位置和所述图像传感器中心的距离,以满足实际探测需求。
在一个可能的设计中,所述第一位置高于所述图像传感器中心,从而可以增大所述镜头组的中心以下部分的感知能力,即该区域原来未能在所述图像传感器上成像的信息可以更多地通过所述镜头组被所述图像传感器接收到;或者,所述第一位置低于所述图像传感器中心,从而可以增大所述镜头组的中心以上部分的感知能力,即该区域原来未能在所述图像传感器上成像的信息可以更多地通过所述镜头组被所述图像传感器接收到。
在一个可能的设计中,所述摄像头模组对应的光心或扩张焦点FOE在第一坐标系中的纵坐标y0的绝对值符合以下公式:|y0|=|fy·h(θ)|。其中,fy为所述摄像头模组中心的焦距,θ为所述第一角度,h(θ)=tan(θ),或者h(θ)为θ的一元N次函数,N为大于0的整数,所述第一坐标系为以所述图像传感器中心为原点,水平向右为横轴正方向,竖直向下为纵轴正方向建立的直角坐标系。
在一个可能的设计中,所述摄像头模组对应的光心或扩张焦点FOE在第一坐标系中的纵坐标y0符合以下公式:
在一个可能的设计中,所述摄像头模组对应的光心或扩张焦点FOE在第一坐标系中的纵坐标y0符合以下公式:y0=-fy·θd。其中,fy为所述摄像头模组中心的焦距,θd与θ1相关,例如θd可以为与θ1有关的函数g(θ1),为所述摄像头模组的VFOV,所述第一坐标系为以所述图像传感器中心为原点,水平向右为横轴正方向,竖直向下为纵轴正方向建立的直角坐标系。
在一个可能的设计中,所述摄像头模组对应的光心或扩张焦点FOE在第一坐标系中的纵坐标y0符合以下公式:
在一个可能的设计中,所述摄像头模组对应的光心或扩张焦点FOE在第一坐标系中的纵坐标y0符合以下公式:y0=fy·θs。其中,fy为所述摄像头模组中心的焦距,θs与θ2相关,θs可以为与θ2有关的函数g(θ2),为所述摄像头模组的VFOV,θ0为预设度数,所述第一坐标系为以所述图像传感器中心为原点,水平向右为横轴正方向,竖直向下为纵轴正方向建立的直角坐标系。
图1为本申请提供的一种摄像头模组爆炸图的示意图;
图2为现有技术中的镜头组和图像传感器的组装位置的示意图;
图3为现有技术中的摄像头模组的可以感知的VFOV和水平视场的示意图;
图4为现有技术中的一种车载环视感知系统中前视摄像头安装示意图;
图5为现有技术中的一种车载前视感知系统中前视摄像头安装示意图;
图6为本申请提供的一种摄像头模组的安装示意图;
图7为本申请提供的一种摄像头模组的侧视图;
图8为本申请提供的一种摄像头模组的投影图;
图9为本申请提供的另一种摄像头模组的投影图;
图10为本申请提供的另一种摄像头模组的安装示意图;
图11为本申请提供的另一种摄像头模组的安装示意图;
图12为本申请提供的一种通过摄像头模组拍摄的两帧图像中匹配的特征点估计摄像头运动的示意图;
图13为本申请提供的一种求解结果的示意图;
图14为本申请提供的一种摄像头模组平移方向对平移向量估计的影响示意图。
下面将结合附图,对本申请实施例进行详细描述。
本申请实施例提供一种摄像头模组的安装方法及移动平台,用以解决为了所需的探测范围,安装方法会影响摄像头运动估计的准确性和鲁棒性,进而导致目标定位误差较大的问题。
本申请中所涉及的至少一个是指一个或多个;多个,是指两个或两个以上。
在本申请的描述中,“第一”、“第二”等词汇,仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。
目前摄像头模组(camera compact module,CCM)的主要组成部分可以如图1摄像头模组爆炸图呈现的器件所示。在图1中,所述摄像头模组主要可以包括镜头组和图像传感器。所述镜头组可以包括至少一个镜头(镜片),例如图1中所述镜头组可以包括镜头1、镜头2和镜头3。其中所述镜头组包括的至少一个镜头可以为凸(凹)透镜。具体的,如图1中示出的,所述镜头组可以通过镜筒和镜座固定,在所述镜头组和所述镜座之间有滤光片。所述图像传感器可以是一个半导体芯片,所述半导体芯片上包含芯片感光区。具体的,所述图像传感器可以将从所述镜头组传导过来的光线转换为电信号,再内部通过模拟-数字转换将电信号转换为数字信号,形成图像。
示例性的,如图1所示,所述摄像头模组还可以包括电路板,所述电路板是电子元器件的支撑体,是电子元器件电气相互连接的载体。
需要说明的是,图1所示的镜头组包括的镜头的个数仅仅是示例,实际中镜头组可以包含更多或者更少的镜头,本申请对此不作限定。
目前,在摄像头模组的组装过程中,通常会保证镜头组的中心和图像传感器中心对齐,例如图2所示的镜头组和图像传感器的组装位置示意图和图3所示的摄像头模组的投影示意图中可以看出,所述镜头组的中心在所述图像传感器平面的投影与所述图像传感器中心重合(即中心对齐)。在这种组装方式下,受到图像传感器宽高比例的限制,通常摄像头模组实际可以感知的垂直视场(vertical field of view,VFOV)较小。示例性的,摄像头模组的可以感知的VFOV和水平视场(horizontal field of view,HFOV)可以如图3中所示,其中,VFOV为镜头组对高度的张角,HFOV为镜头组对宽度的张角,在图3中仅示例出该高度和宽度。
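作为一个示意性的补充(非本申请内容),在小孔成像模型假设下,视场角与传感器尺寸、焦距的常见几何关系 FOV = 2·atan(尺寸/(2·焦距)) 可以用如下 Python 片段示意,其中的传感器尺寸与焦距均为假设值:

```python
import math

def fov_deg(sensor_extent_mm: float, focal_mm: float) -> float:
    """Full field of view (degrees) for a centered pinhole model:
    FOV = 2 * atan(extent / (2 * f))."""
    return math.degrees(2.0 * math.atan(sensor_extent_mm / (2.0 * focal_mm)))

# Assumed values: sensor 5.37 mm x 4.04 mm, focal length 3.5 mm.
hfov = fov_deg(5.37, 3.5)   # horizontal FOV, from sensor width
vfov = fov_deg(4.04, 3.5)   # vertical FOV, from sensor height
```

由于图像传感器通常宽大于高,按同一焦距计算得到的 VFOV 小于 HFOV,这与上文所述 VFOV 受宽高比例限制而较小的现象一致。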
摄像头模组可以广泛应用于各种需要位姿估计和目标物三维重建的场景,以实现通过摄像头模组对周围环境的探测,对周围环境进行三维重建,从而实现目标定位。例如,摄像头模组可以应用于移动平台等,实现对移动平台等周围环境的探测。例如,摄像头模组可以应用于车辆,具体可以应用于车载感知系统。例如,摄像头模组可以用于车载前视感知系统的前视摄像头,也可以应用于车载环视感知系统的前视摄像头、侧视摄像头或者后视摄像头。其中,前视摄像头、侧视摄像头或者后视摄像头的视场(field of view,FOV)可以是一般规格的40~60度,也可以是较窄的23~40度,还可以是较宽的100~180度等。示例性的,前视摄像头、侧视摄像头或者后视摄像头具体可以为单目摄像头、双目摄像头或者鱼眼摄像头等类型的摄像头。
示例性的,当摄像头模组应用于车载感知系统时,摄像头模组的功能主要可以包括对自车周围车辆、行人、一般障碍物、车道线、路面标识物、交通标志牌等目标的检测和识别,也包括对上述检测到的目标的距离、速度测量,还包括对摄像头运动(包括旋转和平移)的估计,以及在此基础上对周围环境的三维重建,从而实现目标定位。
在一个具体的示例中,当摄像头模组应用于车载环视感知系统时,是基于摄像头模组探测车身周围环境。为了保证车身四周的盲区足够小,在安装时会将摄像头模组倾斜向下指向地面,以保证其VFOV下沿贴近自车车身边缘。例如,图4所示的车载环视感知系统中前视摄像头安装示意图所示,假设前视摄像头的VFOV为120度,为了满足探测范围,可以将摄像头模组向下倾斜30度(也即将摄像头模组的光轴向下倾斜30度),以保证其VFOV下沿贴近自车前保。
需要说明的是,在车载环视感知系统中侧视摄像头和后视摄像头的安装原理与上述前视摄像头的安装原理相同,可以互相参照,此处不再详细描述。
在另一个具体的示例中,当摄像头模组应用于车载前视感知系统时,是基于摄像头模组探测交通标志牌和交通信号灯等目标。例如,图5所示的车载前视感知系统中前视摄像头安装示意图所示,假设前视摄像头的VFOV为40度,受限于自车引擎盖,其水平向下部分的视场约有12度。为了满足探测范围,在安装时可以将摄像头模组向上倾斜8度(也即将摄像头模组的光轴向上倾斜8度)。
在图4和图5所示的两个示例中,为了满足探测范围均将摄像头模组倾斜安装。这样会导致运动方向与图像传感器平面之间并不是垂直关系,从而会影响摄像头运动估计的准确性和鲁棒性,进而会导致目标定位误差较大。基于此,本申请提出一种摄像头模组的安装方法及移动平台,以提高摄像头运动估计的准确性和鲁棒性,进而减小目标定位误差。
为了更加清晰地描述本申请实施例的技术方案,下面结合附图,对本申请实施例提供的摄像头模组的安装方法及移动平台进行详细说明。
本申请实施例提供了一种摄像头模组的安装方法,具体可以为:将所述摄像头模组安装在移动平台上,所述摄像头模组与所述移动平台所在的地面平行,例如摄像头模组的安装示意图可以如图6所示,需要说明的是,图6中摄像头模组的形状以及在移动平台上的安装位置仅仅是一种示例,本申请对此不作限定。其中,所述摄像头模组可以包括镜头组和图像传感器,所述镜头组包括至少一个镜头;所述镜头组的中心在所述图像传感器平面的投影为第一位置;所述第一位置与所述图像传感器中心的距离大于第一阈值,所述第一阈值大于0。示例性的,所述摄像头模组的侧视图可以如图7中(a)或图7中(b)所示。
在一种实施例中,在不考虑组装误差的情况下,现有的摄像头模组中的镜头组的中心和图像传感器中心是绝对对齐的,此时所述镜头组的中心在所述图像传感器平面的投影与所述图像传感器中心重合在一起,也即投影位置与所述图像传感器中心的距离为0。也就是说在不考虑组装误差的情况下,本申请涉及的所述摄像头模组中所述第一阈值大于0。
在另一种实施例中,在考虑组装误差的情况下,现有的摄像头模组中的镜头组的中心和图像传感器中心对齐,也会存在一个误差值,此时所述镜头组的中心在所述图像传感器平面的投影位置与所述图像传感器中心距离所述误差值即认为镜头组的中心和图像传感器中心对齐。也就是说在考虑组装误差的情况下,本申请涉及的所述摄像头模组中所述第一阈值大于所述误差值。
具体的,所述摄像头安装在所述移动平台上,所述摄像头模组与所述移动平台所在的地面平行时,所述镜头组的光轴和所述图像传感器平面的法线均与所述移动平台所在的地面平行。这样可以提高摄像头运动估计的准确性和鲁棒性,从而可以减小目标定位误差,提高目标定位的准确性。示例性的,所述移动平台可以是机动车辆、无人机、轨道车、自行车、信号灯、测速装置或网络设备(如各种系统中的基站、终端设备)等等。例如,摄像头模组可以安装在运输设备、家居设备、机器人、云台等可移动的设备上。本申请对安装摄像头模组的终端设备类型和摄像头模组的功能不做限定。
示例性的,所述第一位置与所述图像传感器中心的连线可以与第一坐标系中横轴方向垂直,所述第一坐标系为以所述图像传感器中心为原点,水平向右为横轴正方向,竖直向下为纵轴正方向建立的直角坐标系。也就是说,所述镜头组的中心在所述图像传感器平面上的投影位置与所述图像传感器中心在竖直方向上存在距离,在水平方向上不存在距离。
在一种可选的实施方式中,如图7中(a)所示,所述第一位置可以高于所述图像传感器中心。在这种情况下,所述摄像头模组的投影图可以如图8所示,在图8中可以看出所述镜头组在所述图像传感器平面上的投影位置(也即所述第一位置)高于所述图像传感器中心。
具体的,图8所示的摄像头模组相对于图3所示的摄像头模组,可以增大所述镜头组的中心以下部分的感知能力,即该区域原来未能在所述图像传感器上成像的信息可以更多地通过所述镜头组被所述图像传感器接收到。
在另一种可选的实施方式中,如图7中(b)所示,所述第一位置可以低于所述图像传感器中心。在这种情况下,所述摄像头模组的投影图可以如图9所示,在图9中可以看出所述镜头组在所述图像传感器平面上的投影位置(也即所述第一位置)低于所述图像传感器中心。
具体的,图9所示的摄像头模组相对于图3所示的摄像头模组,可以增大所述镜头组的中心以上部分的感知能力,即该区域原来未能在所述图像传感器上成像的信息可以更多地通过所述镜头组被所述图像传感器接收到。
在一种实施方式中,所述第一位置和所述图像传感器中心的距离可以与第一角度相关,其中,所述第一角度可以为所述摄像头模组的VFOV的角平分线与所述镜头组的光轴之间的角度。可选的,所述第一角度可以大于0度且小于或者等于所述摄像头模组的VFOV补角的一半。
其中,所述第一位置与所述图像传感器中心的距离为以下涉及的摄像头模组对应的光心或扩张焦点(focus of expansion,FOE)在所述第一坐标系中的纵坐标的绝对值。
具体的,所述摄像头模组对应的光心或FOE在所述第一坐标系中的纵坐标y0的绝对值可以符合以下公式一:|y0|=|fy·h(θ)|    公式一;其中,fy为所述摄像头模组中心的焦距,θ为所述第一角度,h(θ)=tan(θ),或者h(θ)为θ的一元N次函数,N为大于0的整数。其中,|*|为对其中参数取绝对值,tan(*)表示正切函数。
可选的,所述FOE可以为当摄像头模组在基于移动平台运动时,静止目标上光流的汇聚点。
在上述方法中,所述第一角度可以为待调整角度,也即通过所述摄像头模组探测的探测范围需要在满足所述VFOV的基础上调整所述第一角度对应的探测范围。
在一种可选的实施方式中,在实际成像时,如图7中(a)和图8所示的摄像头模组的摄像头内参标定时标定出的光心坐标位于所述图像传感器平面的上半部分,所述FOE的坐标同样在所述图像传感器平面的上半部分,此时在所述第一坐标系中y0为负值,则基于上述公式一可以得到y0可以符合以下公式二:y0=-fy·h(θ)    公式二。
在另一种可选的实施方式中,在实际成像时,如图7中(b)和图9所示的摄像头模组的摄像头内参标定时标定出的光心坐标位于所述图像传感器平面的下半部分,所述FOE的坐标同样在所述图像传感器平面的下半部分,此时在所述第一坐标系中y0为正值,则基于上述公式一可以得到y0可以符合以下公式三:y0=fy·h(θ)    公式三。
可选的,由于摄像头模组的成像模型不同,θ的函数h(θ)也可能不同。例如,当所述摄像头模组的成像模型为小孔成像模型时,h(θ)=tan(θ)。又例如,当所述摄像头模组的成像模型为鱼眼成像模型时,h(θ)可以为θ的一元N次函数。可选的,θ的一元N次函数可以为θ的一元9次函数,例如,h(θ)=θ(1+k1θ^2+k2θ^4+k3θ^6+k4θ^8),其中,k1,k2,k3,k4为所述鱼眼成像模型中的四个系数。例如,k1,k2,k3,k4的取值可以分别为:-1.2101823606265119,2.348159905176264,-2.8413822488946474,1.3818466241138192;又例如,k1,k2,k3,k4的取值可以分别为:-1.1529851704803267,2.114443595798193,-2.458009210238794,1.1606670303240054;又例如,k1,k2,k3,k4的取值可以分别为:-1.1741024894366126,2.1870282871688733,-2.5272904743180695,1.170976436497773。当然,k1,k2,k3,k4的取值还可以为其他取值,本申请此处不再一一列举。
需要说明的是,上述θ的一元9次函数仅仅是一个示例,不作为对h(θ)的限定。
需要说明的是,上述仅仅以两种成像模型为例示出了可能的h(θ),并不能对h(θ)进行限定,h(θ)还可以有其它多种函数表示,本申请在此不再一一列举。
在一种可选的实施方式中,所述第一位置和所述图像传感器中心的距离可以与所述摄像头模组的VFOV相关。同样的,所述第一位置和所述图像传感器中心的距离为以下涉及的摄像头模组对应的光心或FOE在所述第一坐标系中的纵坐标的绝对值。
在一种示例中,所述摄像头模组对应的光心或FOE在所述第一坐标系中的纵坐标y0可以符合以下公式四:
可选的,当所述摄像头模组的成像模型为小孔成像模型时可以采用上述公式四的方法。
在另一种示例中,所述摄像头模组对应的光心或FOE在所述第一坐标系中的纵坐标y0可以符合以下公式五:y0=-fy·θd    公式五;
可选的,当所述摄像头模组的成像模型为鱼眼成像模型时可以采用上述公式五的方法。此时,θd=θ1(1+k1θ1^2+k2θ1^4+k3θ1^6+k4θ1^8),k1,k2,k3,k4为所述鱼眼成像模型中的四个系数。例如,k1,k2,k3,k4的取值可以分别为:-1.2101823606265119,2.348159905176264,-2.8413822488946474,1.3818466241138192;又例如,k1,k2,k3,k4的取值可以分别为:-1.1529851704803267,2.114443595798193,-2.458009210238794,1.1606670303240054;又例如,k1,k2,k3,k4的取值可以分别为:-1.1741024894366126,2.1870282871688733,-2.5272904743180695,1.170976436497773。当然,k1,k2,k3,k4的取值还可以为其他取值,本申请此处不再一一列举。
需要说明的是,上述θd的计算方式仅仅是一个示例,还可以有其它多种方式,本申请对此不作限定。
在具体实施时,当所述摄像头模组为图7中(a)和图8中所示的摄像头模组时,可以采用上述公式四和公式五中的方法。
在又一种示例中,所述摄像头模组对应的光心或FOE在所述第一坐标系中的纵坐标y0可以符合以下公式六:
可选的,θ0可以为0.21弧度(rad)等,或者θ0可以为12度等,当然θ0还可以为其它度数,本申请对此不作限定。
可选的,当所述摄像头模组的成像模型为小孔成像模型时可以采用上述公式六的方法。
在又一种示例中,所述摄像头模组对应的光心或FOE在所述第一坐标系中的纵坐标y0可以符合以下公式七:
y0=fy·θs    公式七;
可选的,当所述摄像头模组的成像模型为鱼眼成像模型时可以采用上述公式七的方法。此时,θs=θ2(1+k1θ2^2+k2θ2^4+k3θ2^6+k4θ2^8),k1,k2,k3,k4为所述鱼眼成像模型中的四个系数。例如,k1,k2,k3,k4的取值可以分别为:-1.2101823606265119,2.348159905176264,-2.8413822488946474,1.3818466241138192;又例如,k1,k2,k3,k4的取值可以分别为:-1.1529851704803267,2.114443595798193,-2.458009210238794,1.1606670303240054;又例如,k1,k2,k3,k4的取值可以分别为:-1.1741024894366126,2.1870282871688733,-2.5272904743180695,1.170976436497773。当然,k1,k2,k3,k4的取值还可以为其他取值,本申请此处不再一一列举。
需要说明的是,上述θs的计算方式仅仅是一个示例,还可以有其它多种方式,本申请对此不作限定。
在具体实施时,当所述摄像头模组为图7中(b)和图9中所示的摄像头模组时,可以采用上述公式六和公式七中的方法。
示例性的,所述移动平台可以是车辆等等。
例如,当所述摄像头模组安装在车辆上时,图7中(a)和图8所示的摄像头模组可以应用于车载环视感知系统,图7中(b)和图9所示的摄像头模组可以应用于车载前视感知系统。在一种实施方式中,采用本申请实施例提供的摄像头模组的安装方法,将上述涉及的摄像头模组安装在车辆上时,在安装时可以不必将摄像头模组倾斜安装,即可以满足探测需求。
在一种可选的实施方式中,当所述移动平台为车辆,并且安装在车辆上的所述摄像头模组应用于所述车载环视感知系统时,在安装时不需要将摄像头模组倾斜向下指向地面,则可以保证车身四周的盲区足够小。例如,当所述摄像头模组应用于所述车载环视感知系统的前视摄像头时,假设所述摄像头模组的VFOV为120度时,所述摄像头模组的安装示意图可以如图10所示。在图10中可以看出,所述摄像头模组的光轴平行于车辆所在的地面,这样可以保证运动方向与图像传感器平面垂直,从而可以提高摄像头运动估计的准确性和鲁棒性,进而减小目标定位误差。
在又一种可选的实施方式中,当所述移动平台为车辆,并且安装在车辆上的所述摄像头模组应用于所述车载前视感知系统时,在安装时不需要将摄像头模组倾斜向上安装,则可以增大交通标志牌、交通信号灯等目标的感知范围。例如,当所述摄像头模组应用于所述车载前视感知系统时,假设所述摄像头模组的VFOV为40度时,所述摄像头模组的安装示意图可以如图11所示。在图11中可以看出,所述摄像头模组的光轴平行于车辆所在的地面,这样可以保证运动方向与图像传感器平面垂直,从而可以提高摄像头运动估计的准确性和鲁棒性,进而减小目标定位误差。
采用本申请实施例提供的摄像头模组的安装方法,由于摄像头模组可以与移动平台所在的地面平行,可以保证运动方向与图像传感器平面垂直,这样可以提高摄像头运动估计的准确性和鲁棒性,进而减小目标定位误差,并且无需将摄像头模组倾斜安装,就可以满足所需的探测范围。
基于以上描述,本申请实施例还提供了一种移动平台,包括摄像头模组,所述摄像头模组包括镜头组和图像传感器,所述镜头组包括至少一个镜头;其中:所述摄像头模组与所述移动平台所在的地面平行;所述镜头组的中心在所述图像传感器平面的投影为第一位置;所述第一位置与所述图像传感器中心的距离大于第一阈值,所述第一阈值大于0。具体的,所述摄像头模组的详细介绍可以参见上述实施例中涉及的相关描述,此处不再重复赘述。在一种可选的实施方式中,所述移动平台可以但不限于是车辆等等。
基于以上实施例,采用本申请实施例提供的摄像头模组的安装方法,将所述摄像头模组安装在移动平台上,以进行目标定位时,通过所述摄像头模组拍摄的两帧图像中匹配的特征点估计摄像头运动是空间三维重建算法中重要的一个环节。以图12为例,假设求取两帧图像I1、I2之间的运动,即求取从第一帧图像I1到第二帧图像I2摄像头的旋转R和平移t。两帧图像对应的镜头组的中心分别为O1、O2。考虑I1中的一个特征点p1在I2中对应着特征点p2。其中,两帧图像中特征点相匹配或相对应,表示两帧图像中特征点为同一个空间三维点P在这两帧图像上的投影。
从代数角度分析其中的几何关系,在第一帧图像的坐标系下,设P的空间位置为:P=[X,Y,Z]^T。
假设摄像头模组的成像模型为小孔成像模型时,根据小孔成像模型,P点在两帧图像I1、I2中的像素点p1、p2的像素位置可以符合以下公式八:s1·p1=KP,s2·p2=K(RP+t)    公式八;
其中,K是摄像头内参矩阵。
在齐次坐标系下,一个向量将等于它自身乘上任意的非零常数。这通常用于表达一个投影关系。例如p1和s1·p1成投影关系,p1和s1·p1在齐次坐标系的意义下是相等的,或称为尺度意义下相等,可以记作:s1·p1~p1。
那么,公式八中两个投影可以记作:p1~KP,p2~K(RP+t);
令x1=K^-1·p1,x2=K^-1·p2,代入上式则有:x2~Rx1+t;
由于等号左侧中向量t^x2和向量x2垂直,因此两者内积为0,因此有以下公式九:
可以称上述公式九为对极约束。定义本质矩阵E=t^R,是一个3×3的矩阵。
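作为一个示意性的代码草图(非本申请内容),本质矩阵E=t^R的构造可以表示如下,其中skew(t)即t的反对称矩阵t^:

```python
import numpy as np

def skew(t):
    """t^: the 3x3 skew-symmetric matrix with skew(t) @ x == np.cross(t, x)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def essential(R, t):
    """E = t^ R, the essential matrix defined after formula 9."""
    return skew(t) @ R

E = essential(np.eye(3), np.array([1.0, 2.0, 3.0]))
```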
将矩阵E展开写成向量的形式则有:e=[e1,e2,e3,e4,e5,e6,e7,e8,e9]^T;
那么对极约束可以写成如下线性形式:[u2u1,u2v1,u2,v2u1,v2v1,v2,u1,v1,1]·e=0。
通常,可以使用n≥8对特征点来估计E。把所有点放到一个方程中,可以变成如公式十所示的线性方程组:
Ae=0 公式十
易证明,上述公式十所示的线性方程组的解为矩阵A^T·A最小特征值对应的特征向量。
在求解得E之后,可以通过奇异值分解(singular value decomposition,SVD)恢复出摄像头的旋转R和平移t。设E的SVD为:E=UΣV^T,其中,U、V为正交阵,Σ为奇异值矩阵。对于任意一个E,存在两个可能的R、t与之对应可以如下:
其中,Rz(π/2)表示沿Z轴旋转90°得到的旋转矩阵。因此,从E分解到R、t时,一共存在4个可能的解,可以如图13所示。
从图13中可以看出,只有第一种解(图13中(a)所示)中的P在两个摄像头中都具有正的深度。因此,只要把任意一点代入4种解中,检测该点在两个摄像头下的深度,就可以确定哪个解是正确的了。
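作为一个示意性的代码草图(非本申请内容),"将一点代入4种解、检查其在两个摄像头下深度是否均为正"的筛选思路可以表示如下,其中采用常见的线性三角化方法:

```python
import numpy as np

def triangulate(x1, x2, R, t):
    """Linear triangulation of one normalized correspondence x1 <-> x2,
    camera 1 at [I|0] and camera 2 at [R|t]; returns P in camera-1 frame."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])
    A = np.stack([x1[0]*P1[2] - P1[0],
                  x1[1]*P1[2] - P1[1],
                  x2[0]*P2[2] - P2[0],
                  x2[1]*P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # homogeneous solution of A X = 0
    return X[:3] / X[3]

def positive_depth(x1, x2, R, t):
    """True if the triangulated point has positive depth in both cameras."""
    P = triangulate(x1, x2, R, t)
    return bool(P[2] > 0 and (R @ P + t)[2] > 0)
```

对于正确的(R,t),两个深度均为正;对于其余三个解,至少有一个深度为负,据此即可排除。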
令B=A^T·A,通过正交矩阵H对角化,即H^-1·B·H=diag{λ1,λ2,...,λn},其中λi,i=1,2,...,n为矩阵B的特征值。
不失一般性,设λ1为简单特征值,并令λ1<λ2≤λ3≤…≤λn,再令对应的特征向量张成H=[h1,h2,...,hn],其中,e为对应特征值λ1的特征向量,即e在h1张成的空间中。
因此,有|bij|≤1。记矩阵ΔB的最小特征值为λ1(∈),对应的特征向量为:e(∈)=e+δe。其中,δe在{h2,h3,...,hn}张成的空间中。
令H2=[h2,h3,...,hn]。易证明,存在(n-1)维向量g1,g2,g3,...,使得δe=∈H2g1+∈^2·H2g2+∈^3·H2g3+…,其中线性部分可以表示为:∈H2g1=HΔH^T·ΔB·e。其中,Δ可以符合以下公式十一:
Δ=diag{0,(λ1-λ2)^-1,...,(λ1-λn)^-1}    公式十一。
当矩阵A为非退化阵,即矩阵A的秩为8时,λ1=0。而当矩阵A的秩小于8时,矩阵E的求解是噪声敏感的。所述噪声主要来自于特征点检测误差、特征点匹配误差、量化误差,以及摄像头内参标定误差等。具体的,当矩阵A的秩小于8时,有λ1≈λ2。由上述公式十一可以看出,Δ的第二项变为无穷大,这使得矩阵E的估计误差也变得无穷大。
上述噪声影响在摄像头模组实际工作时体现如下:
由上述公式九以及上述R、t计算过程可以看出:t·(x2×Rx1)=0。即t和x2×Rx1互相垂直。
如图14中(a)所示,当平移向量t垂直于图像传感器平面XY时,x2×Rx1覆盖着较大的区域(见阴影区域);而如图14中(b)所示,当平移向量t平行于图像传感器平面XY时,x2×Rx1覆盖区域较小(见阴影区域)。当矩阵A受到噪声影响时,阴影区域将偏离其原有位置,从而在平移向量t的估计中引入误差。很显然,此时图14中(a)所示场景相比图14中(b)所示场景具有更高的鲁棒性。
基于上述分析,可以看出:为了保证摄像头运动估计的准确性和鲁棒性,应使得摄像头平移方向(运动方向)垂直于图像传感器平面。
基于上述分析,很显然,由于采用本申请实施例提供摄像头模组的安装方法,在将所述摄像头模组安装在移动平台上时,可以使所述摄像头模组与所述移动平台所在的地面平行,具体的,可以使摄像头模组的镜头组的光轴和图像传感器平面的法线均与移动平台所在的地面平行,例如图10和图11所示,也即使得摄像头运动方向垂直于所述图像传感器平面,从而提高摄像头运动估计的准确性和鲁棒性。也就是说,相对于现有技术中摄像头模组安装在移动平台上时,由于摄像头模组倾斜安装例如图4和图5所示,导致摄像头运动方向与图像传感器平面不能垂直,影响摄像头运动估计的准确性和鲁棒性,采用本申请实施例提供的摄像头模组的安装方法后,摄像头运动估计的准确性和鲁棒性明显可以提高。
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
尽管已描述了本申请的优选实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例做出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本申请范围的所有变更和修改。
显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的保护范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。
Claims (20)
- 一种摄像头模组的安装方法,其特征在于:将所述摄像头模组安装在移动平台上,所述摄像头模组与所述移动平台所在的地面平行;所述摄像头模组包括镜头组和图像传感器,所述镜头组包括至少一个镜头;所述镜头组的中心在所述图像传感器平面的投影为第一位置;所述第一位置与所述图像传感器中心的距离大于第一阈值,所述第一阈值大于0。
- 如权利要求1所述的安装方法,其特征在于,所述镜头组的光轴和所述图像传感器平面的法线均与所述移动平台所在的地面平行。
- 如权利要求1或2所述的安装方法,其特征在于,所述第一位置与所述图像传感器中心的连线与第一坐标系中横轴方向垂直,所述第一坐标系为以所述图像传感器中心为原点,水平向右为横轴正方向,竖直向下为纵轴正方向建立的直角坐标系。
- 如权利要求1-3任一项所述的安装方法,其特征在于,所述第一位置和所述图像传感器中心的距离与第一角度相关,所述第一角度为所述摄像头模组的垂直视场VFOV的角平分线与所述镜头组的光轴之间的角度。
- 如权利要求4所述的安装方法,其特征在于,所述第一角度大于0度且小于或者等于所述摄像头模组的VFOV补角的一半。
- 如权利要求4或5所述的安装方法,其特征在于,所述摄像头模组对应的光心或扩张焦点FOE在第一坐标系中的纵坐标y0的绝对值符合以下公式:|y0|=|fy·h(θ)|,其中,fy为所述摄像头模组中心的焦距,θ为所述第一角度,h(θ)=tan(θ),或者h(θ)为θ的一元N次函数,N为大于0的整数,所述第一坐标系为以所述图像传感器中心为原点,水平向右为横轴正方向,竖直向下为纵轴正方向建立的直角坐标系。
- 一种移动平台,其特征在于,包括摄像头模组,所述摄像头模组包括镜头组和图像传感器,所述镜头组包括至少一个镜头;其中:所述摄像头模组与所述移动平台所在的地面平行;所述镜头组的中心在所述图像传感器平面的投影为第一位置;所述第一位置与所述图像传感器中心的距离大于第一阈值,所述第一阈值大于0。
- 如权利要求11所述的移动平台,其特征在于,所述镜头组的光轴和所述图像传感器平面的法线均与所述移动平台所在的地面平行。
- 如权利要求11或12所述的移动平台,其特征在于,所述镜头组的中心与所述图像传感器中心的连线与第一坐标系中横轴方向垂直,所述第一坐标系为以所述图像传感器中心为原点,水平向右为横轴正方向,竖直向下为纵轴正方向建立的直角坐标系。
- 如权利要求11-13任一项所述的移动平台,其特征在于,所述第一位置和所述图像传感器中心的距离与第一角度相关,所述第一角度为所述摄像头模组的垂直视场VFOV的角平分线与所述镜头组的光轴之间的角度。
- 如权利要求14所述的移动平台,其特征在于,所述第一角度大于0度且小于或者等于所述摄像头模组的VFOV补角的一半。
- 如权利要求14或15所述的移动平台,其特征在于,所述摄像头模组对应的光心或扩张焦点FOE在第一坐标系中的纵坐标y0的绝对值符合以下公式:|y0|=|fy·h(θ)|,其中,fy为所述摄像头模组中心的焦距,θ为所述第一角度,h(θ)=tan(θ),或者h(θ)为θ的一元N次函数,N为大于0的整数,所述第一坐标系为以所述图像传感器中心为原点,水平向右为横轴正方向,竖直向下为纵轴正方向建立的直角坐标系。
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21932162.7A EP4319137A1 (en) | 2021-03-24 | 2021-03-24 | Camera module mounting method and mobile platform |
CN202180002256.4A CN113491106B (zh) | 2021-03-24 | 2021-03-24 | 一种摄像头模组的安装方法及移动平台 |
PCT/CN2021/082852 WO2022198534A1 (zh) | 2021-03-24 | 2021-03-24 | 一种摄像头模组的安装方法及移动平台 |
CN202211403956.7A CN115835031A (zh) | 2021-03-24 | 2021-03-24 | 一种摄像头模组的安装方法及移动平台 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/082852 WO2022198534A1 (zh) | 2021-03-24 | 2021-03-24 | 一种摄像头模组的安装方法及移动平台 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022198534A1 true WO2022198534A1 (zh) | 2022-09-29 |
Family
ID=77939958
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/082852 WO2022198534A1 (zh) | 2021-03-24 | 2021-03-24 | 一种摄像头模组的安装方法及移动平台 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4319137A1 (zh) |
CN (2) | CN113491106B (zh) |
WO (1) | WO2022198534A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113491106B (zh) * | 2021-03-24 | 2022-11-18 | Huawei Technologies Co., Ltd. | Camera module mounting method and mobile platform |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140152771A1 (en) * | 2012-12-01 | 2014-06-05 | Og Technologies, Inc. | Method and apparatus of profile measurement |
US20140226008A1 (en) * | 2013-02-08 | 2014-08-14 | Mekra Lang Gmbh & Co. Kg | Viewing system for vehicles, in particular commercial vehicles |
US20160021309A1 (en) * | 2014-07-21 | 2016-01-21 | Honeywell International Inc. | Image based surveillance system |
US20160044284A1 (en) * | 2014-06-13 | 2016-02-11 | Magna Electronics Inc. | Vehicle vision system with panoramic view |
US20160137126A1 (en) * | 2013-06-21 | 2016-05-19 | Magna Electronics Inc. | Vehicle vision system |
CN105991904A (zh) * | 2015-02-28 | 2016-10-05 | Fuzhou Rockchip Electronics Co., Ltd. | Portable electronic device with camera function, and camera module |
US20170302855A1 (en) * | 2016-04-19 | 2017-10-19 | Fujitsu Limited | Display controller and display control method |
CN209299385U (zh) * | 2019-01-25 | 2019-08-23 | Shenzhen Aiwei Intelligent Co., Ltd. | Eccentric fisheye camera |
WO2020178161A1 (en) * | 2019-03-02 | 2020-09-10 | Jaguar Land Rover Limited | A camera assembly and a method |
CN111866349A (zh) * | 2020-07-30 | 2020-10-30 | Shenzhen Dangzhi Technology Co., Ltd. | Camera for a projection device, and projection device |
CN113491106A (zh) * | 2021-03-24 | 2021-10-08 | Huawei Technologies Co., Ltd. | Camera module mounting method and mobile platform |
- 2021
- 2021-03-24 CN CN202180002256.4A patent/CN113491106B/zh active Active
- 2021-03-24 CN CN202211403956.7A patent/CN115835031A/zh active Pending
- 2021-03-24 EP EP21932162.7A patent/EP4319137A1/en active Pending
- 2021-03-24 WO PCT/CN2021/082852 patent/WO2022198534A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN113491106B (zh) | 2022-11-18 |
CN113491106A (zh) | 2021-10-08 |
EP4319137A1 (en) | 2024-02-07 |
CN115835031A (zh) | 2023-03-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2018282302B2 (en) | Integrated sensor calibration in natural scenes | |
US20180322658A1 (en) | Camera Calibration | |
US10909395B2 (en) | Object detection apparatus | |
CN107636679B (zh) | Obstacle detection method and apparatus | |
EP2665037B1 (en) | Onboard camera automatic calibration apparatus | |
JP6767998B2 (ja) | 画像の線からのカメラの外部パラメータ推定 | |
US11233983B2 (en) | Camera-parameter-set calculation apparatus, camera-parameter-set calculation method, and recording medium | |
CN113196007B (zh) | Camera system applied to a vehicle | |
US11255959B2 (en) | Apparatus, method and computer program for computer vision | |
US10885389B2 (en) | Image processing device, image processing method, learning device, and learning method | |
CN105205459B (zh) | Method and apparatus for identifying image feature point types | |
WO2020015748A1 (en) | Systems and methods for lidar detection | |
WO2022198534A1 (zh) | Camera module mounting method and mobile platform | |
CN114913290A (zh) | Multi-view-fusion scene reconstruction method, perception network training method, and apparatus | |
CN109754415A (zh) | Vehicle-mounted panoramic stereo perception system based on multiple sets of binocular vision | |
WO2016203989A1 (ja) | Image processing device and image processing method | |
Gandhi et al. | Motion based vehicle surround analysis using an omni-directional camera | |
Eichenseer et al. | Motion estimation for fisheye video sequences combining perspective projection with camera calibration information | |
JPH07218251A (ja) | ステレオ画像計測方法およびステレオ画像計測装置 | |
CN111860270A (zh) | Obstacle detection method and apparatus based on a fisheye camera | |
Geiger | Monocular road mosaicing for urban environments | |
US20240054656A1 (en) | Signal processing device, signal processing method, and signal processing system | |
CN115797405A (zh) | Multi-lens adaptive tracking method based on vehicle wheelbase | |
Tu et al. | Method of Using RealSense Camera to Estimate the Depth Map of Any Monocular Camera | |
CN112639864B (zh) | Method and apparatus for distance measurement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21932162 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 2021932162 Country of ref document: EP |
NENP | Non-entry into the national phase |
Ref country code: DE |
ENP | Entry into the national phase |
Ref document number: 2021932162 Country of ref document: EP Effective date: 20231024 |