WO2022017069A1 - Multi-camera camera module, camera system, electronic device and imaging method - Google Patents

Multi-camera camera module, camera system, electronic device and imaging method

Info

Publication number
WO2022017069A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
camera unit
area
processed
Prior art date
Application number
PCT/CN2021/100025
Other languages
English (en)
French (fr)
Inventor
戎琦
袁栋立
王启
Original Assignee
宁波舜宇光电信息有限公司
Priority date
Filing date
Publication date
Application filed by 宁波舜宇光电信息有限公司
Priority to CN202180059015.3A, published as CN116114243A
Publication of WO2022017069A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Definitions

  • the present application relates to the field of camera modules, and in particular, to a multi-camera camera module, a camera system, an electronic device and an imaging method.
  • In order to meet consumers' requirements for shooting functions and imaging quality, camera modules have in recent years evolved from single-camera camera modules to multi-camera camera modules; for example, some manufacturers structurally combine a wide-angle module and a telephoto module to form a dual-camera module.
  • in a dual-camera module composed of a wide-angle module and a telephoto module, the output image is synthesized from the images collected by the wide-angle module and the telephoto module.
  • however, because the focal length of the telephoto module is fixed and its field of view is small, its compensation for the image collected by the wide-angle camera module is limited, and it is difficult to improve the sharpness of the image.
  • An advantage of the present application is to provide a multi-camera camera module, a camera system, an electronic device, and an imaging method, wherein the multi-camera camera module is structurally configured so that it can perform optical zooming based on the distance between it and the subject, so that when the framing picture includes both near and distant views, the multi-camera camera module can capture clear images of the subject at different depths of field, so that the final synthesized image has a better imaging effect.
  • Another advantage of the present application is to provide a multi-camera camera module, a camera system, an electronic device, and an imaging method, wherein the multi-camera camera module is configured with a camera unit having an optical zoom function.
  • moreover, the relative positional relationship between the camera unit having the optical zoom function and the other camera units can be changed, so that the multi-camera camera module can capture clear images of the subject at different depths of field, so that the final synthesized image has a better imaging effect.
  • Another advantage of the present application is to provide a multi-camera camera module, a camera system, an electronic device and an imaging method, wherein the relative positional relationship between a camera unit with an optical zoom function and other camera units can be changed by a moving mechanism , so that the camera unit with the optical zoom function can better compensate the images collected by other camera units (or other processing methods), so that the final synthesized image has a better imaging effect.
  • Another advantage of the present application is to provide a multi-camera camera module, a camera system, an electronic device and an imaging method, wherein the optical axis set by the camera unit with optical zoom is inclined toward the optical axis set by the other camera units.
  • in this way, the imaging range of the camera unit with the optical zoom function can be better aligned with the to-be-processed part of the images collected by the other camera units, so that the camera unit with the optical zoom function can better compensate the images collected by the other camera units to improve the imaging effect of the final synthesized image.
  • a multi-camera camera module, which includes:
  • a first camera unit provided with a first optical axis;
  • a second camera unit with a zoom function, provided with a second optical axis; and
  • a moving mechanism configured to adjust the relative positional relationship between the first camera unit and the second camera unit.
  • the moving mechanism is configured to adjust the relative positional relationship between the first camera unit and the second camera unit based on an adjustment instruction, and the adjustment instruction is based on the The to-be-processed area is generated in the first image of the subject captured by the first camera unit.
  • the second optical axis is inclined in a direction toward the first optical axis to form an included angle with the first optical axis.
  • the included angle formed between the first optical axis and the second optical axis is 0.1° to 45°.
  • the angle formed between the first optical axis and the second optical axis ranges from 0.1° to 10°.
  • the second camera unit is mounted on the moving mechanism, so as to drive the second camera unit through the moving mechanism to change the relationship between the first camera unit and the The relative positional relationship between the second camera units.
  • the moving mechanism includes a housing, a carrier suspended in the housing and used to carry the second camera unit, and a coil-magnet pair disposed between the carrier and the housing and corresponding to each other.
  • the moving mechanism further includes a ball mounted between the carrier and the housing, so that the carrier is suspended inside the housing by the ball.
  • the moving mechanism further includes an elastic element extending between the inner side wall of the housing and the outer side wall of the carrier, so that the carrier is suspended in the housing by the elastic element.
  • the first angle of view of the first camera unit is greater than 60°, and the maximum second angle of view of the second camera unit is less than 30°.
  • a camera system comprising:
  • the multi-camera camera module as described above;
  • a processor communicatively connected to the multi-camera module, wherein the processor is configured to generate the adjustment instruction based on an area to be processed in a first image of a subject captured by the first camera unit.
  • the processor is further configured to fuse the first image of the object captured by the first camera unit and the second image of the object captured by the second camera unit to obtain a fused image.
  • an electronic device which includes the above-mentioned multi-camera camera module.
  • an imaging method of a camera system which includes:
  • the moving mechanism is driven to drive the second camera unit to map the second image captured by the second camera unit to the first image corresponding to the position of the to-be-processed area.
  • the moving mechanism is driven to drive the second camera unit, wherein, during the process of moving the second camera unit, at least the photographed object captured by the second camera unit is obtained. a zoomed second image;
  • the first image and the zoomed second image are fused to obtain a fused image.
  • determining at least one area to be processed in the first image includes: determining at least one area in the first image with relatively low imaging quality as the at least one area to be processed.
  • determining at least one area to be processed in the first image includes: receiving a to-be-processed area designation instruction; and, in response to the to-be-processed area designation instruction, determining the first image at least one area to be processed.
  • determining at least one area to be processed in the first image includes: determining at least one area to be processed in the first image based on a default setting.
  • generating a second adjustment instruction based on a relative positional relationship between a mapped image in which the zoomed second image is mapped to the first image and the to-be-processed area includes: determining The number of pixels Mx and My of the area to be processed in the X direction and the Y direction set by the first image; determine the number of pixels of the mapped image in the X direction and the Y direction set by the first image Nx and Ny; and, based on the Mx, My, Nx and Ny, generating the second adjustment instruction.
  • generating the second adjustment instruction based on the Mx, My, Nx and Ny includes: in response to Nx>Mx and Ny>My, generating the second adjustment instruction, wherein, The adjustment instruction is used to drive the moving mechanism to drive the second camera unit, so that the center of the mapped image is aligned with the center of the area to be processed.
  • generating the second adjustment instruction based on the Mx, My, Nx, and Ny includes: in response to Mx being greater than Nx, determining a first integer multiple relationship between the Mx and Nx ; in response to My being greater than Ny, determining a second integer multiple relationship between the My and Ny; and, based on the first integer multiple relationship and the second integer multiple relationship, generating the second adjustment instruction, wherein , the second adjustment instruction is used to drive the moving mechanism to drive the second camera unit to move along the X direction at least a first integer multiple times; and, drive the moving mechanism to drive the second camera The camera unit moves along the Y direction at least a second integer multiple times.
  • obtaining at least one zoomed second image of the subject captured by the second camera unit includes: obtaining, after each movement, a zoomed second image of the subject captured by the second camera unit, so as to obtain a plurality of zoomed second images.
  • fusing the first image and the zoomed second image to obtain a fused image includes: fusing the first image and the plurality of zoomed second images to obtain the fused image.
  • generating an adjustment instruction based on the relative positional relationship between the mapped image mapped from the second image to the first image and the to-be-processed area includes: determining the relative positional relationship between the center of the to-be-processed area and the center of the mapped image; and, based on a pre-calibrated correspondence table between the relative position of the center of the to-be-processed area and the mapped image and the translation position of the second camera unit, generating the adjustment instruction.
  • generating a second adjustment instruction based on a relative positional relationship between a mapped image in which the zoomed second image is mapped to the first image and the to-be-processed area includes: determining the relative positional relationship between the center of the to-be-processed area and the center of the mapped image; and based on the pre-calibrated relative position between the center of the to-be-processed area and the mapped image and the second camera A corresponding table of translation positions of units is used to generate the second adjustment instruction.
  • FIG. 1 illustrates a schematic diagram of a multi-camera camera module according to an embodiment of the present application.
  • FIG. 2 illustrates another schematic diagram of the multi-camera module according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram illustrating viewfinder frames of a first camera unit and a second camera unit of the multi-camera module according to an embodiment of the present application.
  • FIG. 4 illustrates yet another schematic diagram of the multi-camera module according to an embodiment of the present application.
  • FIG. 5 illustrates a schematic diagram of a second camera unit in the multi-camera camera module according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram illustrating a variant implementation of the second camera unit in the multi-camera module according to an embodiment of the present application.
  • FIG. 7 illustrates a flowchart of an imaging method according to an embodiment of the present application.
  • FIG. 8 illustrates a schematic diagram of a camera system according to an embodiment of the present application.
  • FIG. 9 illustrates a schematic diagram of an electronic device according to an embodiment of the present application.
  • As shown in FIGS. 1 and 2, a multi-camera camera module 10 according to an embodiment of the present application is illustrated, wherein the multi-camera camera module 10 is structurally configured such that it can perform optical zooming based on the distance between it and the subject, so that when the framing picture includes both near and distant views, the multi-camera camera module 10 can capture clear images of the subject at different depths of field, so that the final synthesized image has a better imaging effect.
  • the multi-camera camera module 10 is configured with a camera unit having an optical zoom function, and the relative positional relationship between the camera unit having the optical zoom function and the other camera units can be changed, so that the multi-camera camera module 10 can capture clear images of the subject at different depths of field, so that the final synthesized image has a better imaging effect.
  • the camera unit with the zoom function and the other camera units refer to different camera units structurally integrated in the multi-camera camera module 10, rather than structurally separate camera modules.
  • two or more camera units are integrally formed by a process such as molding to make the multi-camera camera module 10, and the multi-camera camera module 10 as a whole is connected to other peripheral devices, such as an image processor.
  • the multi-camera camera module 10 includes a first camera unit 11, a second camera unit 12, and a moving mechanism 13 configured to adjust the relative positional relationship between the first camera unit 11 and the second camera unit 12.
  • the first camera unit 11 is implemented as a conventional camera module with a fixed equivalent focal length
  • the second camera unit 12 has an optical zoom capability.
  • the second camera unit 12 includes a photosensitive chip 121, at least one lens group 122 located on the photosensitive path set by the photosensitive chip 121, and a drive assembly 123 used to drive at least some of the lenses in the at least one lens group 122 for optical zooming.
  • the at least one lens group 122 includes a first lens group 124 and a second lens group 125, and the driving assembly 123 includes a first driving element 127 and a second driving element 128, wherein the first driving element 127 is used to drive at least some of the lenses in the first lens group 124 to move for optical zooming, and the second driving element 128 is used to drive the second lens group 125 to move as a whole for optical focusing, so as to compensate for the degradation of image quality after optical zooming, so that the second camera unit 12 has relatively better imaging quality after optical zooming.
  • the at least one lens group 122 includes a compensation lens group (the second lens group 125 ) and a zoom lens group (the first lens group 124 ), and the driving component 123 A zoom driver (the first driving element 127 ) and a focus driver (the second driving element 128 ) are included.
  • the at least one lens group 122 may further include a larger number of lens groups, for example, a third lens group 126, the position of the third lens group 126 being fixed so that it serves as a fixed lens group; this is not limited by the present application.
  • the second camera unit 12 further includes a reflective element 129 (e.g., a prism or a mirror) disposed on the photosensitive path of the photosensitive chip 121 and used to fold the imaging light. That is, in the example illustrated in FIG. 1, the second camera unit 12 is implemented as a periscope camera module.
  • the second camera unit 12 may be implemented as a conventional vertical camera module, which is not limited by the present application.
  • the second camera unit 12 can also achieve optical zooming in other ways.
  • the optical lens of the second camera unit 12 is a liquid lens, and optical zooming is carried out by changing the surface shape of the liquid lens through an applied voltage, which is likewise not limited by the present application.
  • the first camera unit 11 has a relatively large field of view, that is, it has a larger imaging window (or, in other words, the first camera unit 11 has a larger framing picture and can capture scenes in a larger space), while the second camera unit 12 has a relatively smaller field of view than the first camera unit 11, that is, the imaging window of the second camera unit 12 is smaller.
  • when the first camera unit 11 and the second camera unit 12 capture the subject at the same time, the imaging window of the first camera unit 11 and the imaging window of the second camera unit 12 overlap at least partially.
  • more specifically, the imaging window of the second camera unit 12 is smaller than the imaging window of the first camera unit 11 and, if the distance between the two is appropriate, the imaging window of the second camera unit 12 is located within the imaging window of the first camera unit 11. Therefore, when a subject is captured by the multi-camera camera module 10, the images of the subject captured by the first camera unit 11 and the second camera unit 12 are related in content, so that a fused image with a better imaging effect can be obtained by synthesizing the first image of the subject captured by the first camera unit 11 and the second image of the subject captured by the second camera unit 12.
  • the first angle of view of the first camera unit 11 is greater than 60°, and the maximum second angle of view of the second camera unit 12 is less than 30°. It should be understood that during the optical zooming process of the second camera unit 12, the second field of view angle of the second camera unit 12 will change, but the maximum field angle will not exceed 30°.
  • further, although the first image of the subject captured by the first camera unit 11 and the second image of the subject captured by the second camera unit 12 are related in content when the image of the subject is captured by the multi-camera camera module 10, in the actual image synthesis process the to-be-processed area in the first image may not be related to the content of the second image.
  • for example, in one image fusion scheme, an area with low imaging quality in the first image is set as the to-be-processed area; ideally, the content of the second image should correspond to the to-be-processed area and have relatively high imaging quality, so that the first image and the second image can be fused to obtain an image of the subject with higher overall imaging quality.
  • in the actual imaging process, however, the correspondence between the second image and the to-be-processed area is determined by the physical positional relationship between the first camera unit 11 and the second camera unit 12 (that is, their relative positional relationship); when the relative positional relationship between the first camera unit 11 and the second camera unit 12 does not meet the preset requirements, the second image will not correspond to the to-be-processed area in the first image, so that a better visual effect cannot be obtained through image fusion processing.
  • the relative positional relationship between the first camera unit 11 and the second camera unit 12 may be adjusted.
  • the position change between the first camera unit 11 and the second camera unit 12 is realized by a moving mechanism 13 , as shown in FIG. 1 .
  • the second camera unit 12 is installed on the moving mechanism 13 , so that the moving mechanism 13 drives the second camera unit 12 to change the The relative positional relationship between the first camera unit 11 and the second camera unit 12.
  • the moving mechanism 13 is configured to adjust the relative positional relationship between the first camera unit 11 and the second camera unit 12 based on an adjustment instruction, and the adjustment instruction is generated based on the to-be-processed area in the first image of the subject captured by the first camera unit 11; that is, in the embodiment of the present application, the translation structure is configured to adjust the relative positional relationship between the first camera unit 11 and the second camera unit 12 based on the requirements of subsequent image processing.
  • FIG. 5 illustrates a schematic diagram of the second camera unit 12 in the multi-camera camera module 10 according to an embodiment of the present application.
  • the moving mechanism 13 includes: a casing 131 , a carrier 132 suspended in the casing 131 and used to carry the second camera unit 12 , and a coil-magnet pair 133 disposed between the carrier 132 and the housing 131 and corresponding to each other, wherein, after being turned on, the coil-magnet pair 133 can drive the carrier 132 to drive the The second camera unit 12 moves.
  • the moving mechanism 13 further includes a ball 134A installed between the carrier 132 and the housing 131, so that the carrier 132 is suspended inside the housing 131 by the ball 134A.
  • FIG. 6 is a schematic diagram illustrating a variant implementation of the second camera unit 12 in the multi-camera camera module 10 according to an embodiment of the present application.
  • the moving mechanism 13 further includes an elastic element 134B extending between the inner sidewall of the housing 131 and the outer sidewall of the carrier 132, so that the carrier 132 is suspended in the housing 131 by the elastic element 134B.
  • the elastic element 134B may be implemented as an elastic element 134B such as a leaf spring, a spring, an elastic sheet, or the like.
  • the position of the first camera unit 11 is kept fixed, and the position of the second camera unit 12 is adjusted by the moving mechanism 13, so as to change the relative positional relationship between the first camera unit 11 and the second camera unit 12.
  • the above technical purpose can also be achieved in other ways.
  • the position of the second camera unit 12 can be kept fixed, and the position of the first camera unit 11 can be set to be adjustable.
  • the positions of the first camera unit 11 and the second camera unit 12 are set to be adjustable at the same time.
  • the imaging window of the second camera unit 12 can tend toward the side of the imaging window of the first camera unit 11 away from the second camera unit 12, so that when the second camera unit 12 is moved multiple times, the imaging window of the second camera unit 12 can cover any part of the entire imaging window of the first camera unit 11.
  • the imaging window of the second camera unit 12 can be biased more toward the central area of the imaging window of the first camera unit 11, so that the second image of the subject captured by the second camera unit 12 and the first image captured by the first camera unit 11 have a higher degree of correlation in content. That is, in the embodiment of the present application, preferably, the second optical axis X2 set by the second camera unit 12 is inclined in a direction toward the first optical axis X1 set by the first camera unit 11, to form an included angle with the first optical axis X1, as shown in FIG. 4. Specifically, in the embodiment of the present application, the included angle formed between the first optical axis X1 and the second optical axis X2 is 0.1° to 45°, and more preferably, the included angle ranges from 0.1° to 10°.
  • the multi-camera camera module 10 according to the embodiment of the present application is clarified, wherein the structure and configuration of the multi-camera camera module 10 enables it to perform optical zooming based on the distance between it and the object to be photographed, In this way, when the framing picture includes both close-up and long-range views, the multi-camera module 10 can capture clear images of the subject at different depths of field, so that the final synthesized image has better imaging effects.
  • the multi-camera camera module 10 is configured with a camera unit with an optical zoom function, and the relative positional relationship between the camera unit with an optical zoom function and other camera units can be is changed, so that the multi-camera camera module 10 can capture clear images of the subject at different depths of field, so that the final synthesized image has a better imaging effect.
  • the moving mechanism 13 is configured to adjust the relative positional relationship between the first camera unit 11 and the second camera unit 12 based on an adjustment instruction, and the adjustment instruction It is generated based on the to-be-processed area in the first image of the subject captured by the first imaging unit 11 .
  • FIG. 7 illustrates a flowchart of an imaging method suitable for the multi-camera camera module 10 according to an embodiment of the present application.
  • the imaging method includes the steps of: S110, obtaining a first image of the subject captured by the first camera unit 11 and a second image of the subject captured by the second camera unit 12; S120, determining at least one to-be-processed area in the first image; S130, generating an adjustment instruction based on the relative positional relationship between the mapped image of the second image on the first image and the to-be-processed area; S140, based on the adjustment instruction, driving the moving mechanism 13 to move the second camera unit 12 until the mapped image of the second image captured by the second camera unit 12 on the first image corresponds to the position of the to-be-processed area; S150, controlling the second camera unit 12 to perform optical zooming and obtaining a zoomed second image of the subject; and S160, generating a second adjustment instruction based on the relative positional relationship between the mapped image of the zoomed second image on the first image and the to-be-processed area.
  • a first image of the subject captured by the first imaging unit 11 and a second image of the subject captured by the second imaging unit 12 are obtained.
  • when the first camera unit 11 and the second camera unit 12 capture the subject at the same time, the imaging window of the first camera unit 11 overlaps at least partially with the imaging window of the second camera unit 12.
  • therefore, the images of the subject captured by the first camera unit 11 and the second camera unit 12 are related in content.
  • a fusion image with better imaging effect is obtained by synthesizing the first image of the subject captured by the first imaging unit 11 and the second image of the subject captured by the second imaging unit 12 .
  • step S120 at least one area to be processed in the first image is determined.
  • the selection of the to-be-processed area is related to the final image synthesis effect.
  • the to-be-processed area may be set as an area in the first image whose imaging quality is to be compensated, that is, an area with a lower imaging quality in the first image is determined as the to-be-processed area.
  • for another example, when the final image synthesis effect is set to blur the background portion of the scene around the subject, the to-be-processed area may be set to be the middle area of the first image (the middle area usually corresponds to the main subject).
  • the at least one to-be-processed area in the first image may be determined in at least the following manner.
  • the process of determining at least one area to be processed in the first image includes: determining at least one area in the first image with relatively low imaging quality as the at least one area to be processed.
  • at least one region with relatively low imaging quality in the first image may be determined as the at least one region to be processed by using, for example, a Brenner gradient function, a Tenengrad gradient function, or a Laplacian gradient function.
  • the area of the image with lower imaging quality may represent the area with lower definition in the image.
  • the process of determining at least one area to be processed in the first image includes: first, receiving a to-be-processed area designation instruction; then, in response to the to-be-processed area designation instruction, determining the to-be-processed area designation instruction At least one area to be processed in the first image. That is, in this example, the to-be-processed area is manually set, specifically, determined by the user applying a specified instruction, wherein the specified instruction includes clicking on the corresponding area of the first image, double-clicking the The corresponding regions of the first image, etc., are not limited by this application.
  • determining at least one area to be processed in the first image includes: determining at least one area to be processed in the first image based on a default setting. That is, in this example, based on the default settings of the system, at least one area to be processed in the first image is determined.
  • when the to-be-processed area is selected by the user or set by the system default, the second camera unit 12 can perform automatic optical zooming based on the operation in step S150 described later, or can perform optical zooming at a zoom ratio selected by the user or at the system default zoom ratio.
  • step S130 an adjustment instruction is generated based on the relative positional relationship between the mapped image mapped from the second image to the first image and the to-be-processed area.
  • the adjustment instruction is used to drive the moving mechanism 13 to move the second camera unit 12 until the mapped image of the second image captured by the second camera unit 12 on the first image corresponds to the position of the to-be-processed area.
  • that is, the purpose of step S130 is to generate an adjustment instruction for driving the moving mechanism 13 to move the second camera unit 12 so that the content of the second image of the subject captured by the second camera unit 12 corresponds to the to-be-processed area.
  • the specific process of generating an adjustment instruction based on the relative positional relationship between the mapped image of the second image on the first image and the to-be-processed area appears again in step S160, so it is not expanded here.
  • in step S140, based on the adjustment instruction, the moving mechanism 13 is driven to move the second camera unit 12 until the mapped image of the second image captured by the second camera unit 12 on the first image corresponds to the position of the to-be-processed area. That is, based on the adjustment instruction, the moving mechanism 13 is driven to move the second camera unit 12 so that the content of the second image of the subject captured by the second camera unit 12 corresponds to the to-be-processed area.
  • step S150 the second imaging unit 12 is controlled to perform optical zooming and obtain a zoomed second image of the subject.
  • the second camera unit 12 has an optical zoom capability. Therefore, in the embodiment of the present application, the second camera unit 12 can perform optical zooming based on its distance from the subject or on the sharpness of the second image, so that the second camera unit 12 can capture a second image of the subject with relatively high imaging quality.
  • step S160 a second adjustment instruction is generated based on the relative positional relationship between the mapped image of the zoomed second image and the first image and the to-be-processed area, wherein the second adjustment The instruction is used to drive the moving mechanism 13 to drive the second camera unit 12 .
  • a process of generating a second adjustment instruction based on the relative positional relationship between the mapped image from the zoomed second image to the first image and the to-be-processed area first comprising: determining the pixel numbers Mx and My of the area to be processed in the X and Y directions set by the first image; then, determining the mapping image in the X direction set by the first image and the number of pixels Nx and Ny in the Y direction; then, based on the Mx, My, Nx and Ny, the second adjustment instruction is generated.
  • the process of generating the second adjustment instruction based on the Mx, My, Nx and Ny includes: in response to Nx>Mx and Ny>My, the second adjustment instruction is generated, wherein the adjustment instruction is used to drive the moving mechanism 13 to drive the second camera unit 12, so that the center of the mapped image is aligned with the to-be-processed the center of the area.
  • that is, when the imaging window of the second camera unit 12 can cover the to-be-processed area in the imaging window of the first camera unit 11, the second camera unit 12 is moved so that the center of the imaging window of the second camera unit 12 coincides with the center of the to-be-processed area in the imaging window of the first camera unit 11 (it is worth mentioning that, in a specific implementation, near coincidence is sufficient).
  • conversely, when Nx is less than Mx or Ny is less than My, the integer-multiple relationships between Mx and Nx and between My and Ny are calculated respectively (if there is a remainder, the multiple is increased by 1) to obtain the numbers of times the second camera unit 12 needs to be moved in the X direction and in the Y direction, so that the second camera unit 12 is moved multiple times and its multiple imaging windows can cover the to-be-processed area in the imaging window of the first camera unit 11.
  • the process of generating the second adjustment instruction further includes: in response to Mx being greater than Nx, determining a first integer multiple relationship between the Mx and Nx; in response to My is greater than Ny, determining a second integer multiple relationship between My and Ny; and generating the second adjustment instruction based on the first integer multiple relationship and the second integer multiple relationship, wherein the The second adjustment command is used to drive the moving mechanism 13 to drive the second camera unit 12 to move along the X direction at least a first integer multiple times; and to drive the moving mechanism 13 to drive the second camera unit 12 The camera unit 12 moves along the Y direction at least a second integer multiple times.
  • the displacement of the second camera unit 12 may also be determined in other ways.
  • the displacement of the second camera unit 12 may be determined according to the center position of the region to be processed in the first image captured by the first camera unit 11 .
  • specifically, the position of the center of the to-be-processed area may be set as (x1, y1); the required translation amount of the second camera unit 12 is then d(x, y) = k·(x1, y1), where k is a translation parameter that can be calculated from the parameters of the second camera unit 12 and the first camera unit 11; the relevant parameters include the included angle between the optical axes of the second camera unit 12 and the first camera unit 11, the field-of-view angles of the first camera unit 11 and the second camera unit 12, the image plane sizes, and the like.
  • the translation parameter k can be obtained by means of target calibration.
  • specifically, a target plate is set in front of the multi-camera camera module 10, the focal length of the second camera unit 12 is changed so that the zoom ratio of the second camera unit 12 changes, and the second camera unit 12 is translated while its translation amount is recorded, so as to obtain, at that zoom ratio, the translation amount on the first image of the mapped image of the second image captured by the second camera unit 12.
  • the zoom ratio of the second camera unit 12 is then changed to obtain, for multiple groups of different zoom ratios of the optical zoom module, the translation amount on the first image of the mapped image of the second image captured by the second camera unit 12, and the translation parameter k is calculated from the multiple groups of data.
  • the translation amount of the second camera unit 12 may be determined by constructing a zoom ratio-translation amount comparison table. Specifically, a target plate is set in front of the multi-camera camera module 10, the focal length of the second camera unit 12 is changed, the zoom ratio is changed, the second camera unit 12 is translated and the translation amount is recorded to obtain Under the zoom magnification, the translation amount of the mapped image on the first image of the second image captured by the second camera module on the first image. Change the zoom magnification of the second camera unit 12 to obtain multiple groups of different zoom magnifications of the optical zoom module, and the mapping image of the second image collected by the second camera module on the first image is in the The translation amount on an image is obtained, and the zoom ratio-translation amount comparison table is obtained. The second camera unit 12 can obtain the relationship between the translation amount and the screen translation amount under different zoom magnifications according to the comparison table.
  • generating an adjustment instruction based on the relative positional relationship between the mapped image mapped from the second image to the first image and the to-be-processed area includes: determining the to-be-processed area The relative positional relationship between the center of the area and the center of the mapped image; and, based on the pre-calibrated relative position between the center of the to-be-processed area and the mapped image and the translation of the second camera unit 12 The corresponding table of positions is generated, and the adjustment instruction is generated.
  • generating a second adjustment instruction based on the relative positional relationship between the zoomed second image and the mapped image of the first image and the to-be-processed area including: determining the relative positional relationship between the center of the area to be processed and the center of the mapping image; and, based on the pre-calibrated relative position between the center of the area to be processed and the mapping image and the second The correspondence table of the translational positions of the camera unit 12 to generate the second adjustment instruction
  • in step S170, based on the second adjustment instruction, the moving mechanism 13 is driven to move the second camera unit 12, and during the movement of the second camera unit 12, at least one zoomed second image of the subject captured by the second camera unit 12 is obtained.
  • specifically, each time the second camera unit 12 is moved, it can capture a zoomed second image of the subject, so as to obtain a plurality of zoomed second images.
  • the position of the second camera unit 12 is changed twice by the moving mechanism 13, wherein the purpose of the first change is to move the second camera unit 12 to a position roughly corresponding to the to-be-processed area so that optical zooming can be performed, and the purpose of the other change is to enable the imaging window of the second camera unit 12 to completely cover the to-be-processed area.
  • when the imaging window of the second camera unit 12 is larger than the to-be-processed area, the purpose of the other movement is achieved by moving the second camera unit 12 once; when the imaging window of the second camera unit 12 is smaller than the to-be-processed area, the purpose of the other movement is achieved by moving the second camera unit 12 multiple times so that the combined window formed by the movement of the imaging window of the second camera unit 12 completely covers the to-be-processed area.
  • step S180 the first image and the zoomed second image are fused to obtain a fused image.
  • fusing the first image and the zoomed second image to obtain a fusion image includes: fusing the first image and the plurality of zoomed second images to obtain the fused image.
  • the imaging method based on the embodiments of the present application is clarified, wherein the implementation of the imaging method relies on the optimization and improvement of the structure and configuration of the multi-camera module 10 . That is, the optimization at the structural configuration level of the multi-camera camera module 10 provides the necessary hardware basis for the implementation of the imaging method, so that the imaging method and the hardware configuration of the multi-camera camera module 10 can achieve the desired results. It can provide users with a better visual experience.
  • a camera system is also provided.
  • FIG. 8 illustrates a schematic diagram of the camera system according to an embodiment of the present application.
  • the camera system 30 includes the multi-camera camera module 10 described above and the processor 20 communicatively connected to the multi-camera camera module 10, wherein the processor 20 is configured to generate the adjustment instruction based on the to-be-processed area in the first image of the subject captured by the first camera unit 11.
  • the moving mechanism 13 adjusts the relative positional relationship between the first camera unit 11 and the second camera unit 12 based on the adjustment instruction.
  • the processor 20 is further configured to fuse the first image of the subject captured by the first camera unit 11 and the second image of the subject captured by the second camera unit 12 to obtain the fused image.
  • an electronic device 100 is also provided.
  • FIG. 9 illustrates a schematic perspective view of an electronic device 100 according to an embodiment of the present application.
  • the electronic device 100 includes an electronic device main body 90 and the above-described multi-camera camera module 10 assembled in the electronic device main body 90 .
  • the multi-camera camera module 10 is preferably disposed on the back of the electronic device main body 90 so as to be configured as a rear camera module; of course, it can also be disposed on the front of the electronic device main body 90 so as to be configured as a front camera module.
  • the electronic device main body 90 includes a screen and an integrated circuit, wherein the screen can be used to display image data collected by the multi-camera camera module 10 , and the integrated circuit The circuit can be used to process the image data collected by the multi-camera camera module 10 to control the multi-camera camera module 10 to realize its imaging function.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Cameras In General (AREA)

Abstract

Disclosed are a multi-camera camera module, a camera system, an electronic device and an imaging method. The multi-camera camera module includes: a first camera unit provided with a first optical axis; a second camera unit with a zoom function, provided with a second optical axis; and a moving mechanism configured to adjust the relative positional relationship between the first camera unit and the second camera unit. In this way, the structural configuration of the multi-camera camera module enables it to perform optical zooming based on the distance between it and the subject, so that when the framing picture contains both near and distant views, the multi-camera camera module can capture clear images of the subject at different depths of field, and the finally synthesized image has a better imaging effect.

Description

Multi-camera camera module, camera system, electronic device and imaging method

Technical Field
The present application relates to the field of camera modules, and in particular to a multi-camera camera module, a camera system, an electronic device and an imaging method.
Background
With the popularization of mobile electronic devices, technologies related to the camera modules applied to mobile electronic devices to help users capture images (for example, video or still images) have developed and advanced rapidly. Especially with the development of smartphones, consumers' demands for shooting functions have become increasingly diverse and their requirements for imaging quality increasingly high, which poses more challenges for camera modules.
In order to meet consumers' requirements for shooting functions and imaging quality, camera modules have in recent years evolved from single-camera camera modules to multi-camera camera modules; for example, some manufacturers structurally combine a wide-angle module and a telephoto module to form a dual-camera module.
In a dual-camera module composed of a wide-angle module and a telephoto module, the output image is synthesized from the images collected by the wide-angle module and the telephoto module. However, because the focal length of the telephoto module is fixed and its field of view is small, its compensation for the image collected by the wide-angle module is limited, and it is difficult to improve image sharpness.
To address the difficulty of improving overall image sharpness when the framing picture of a camera module contains both near and distant views, several camera module designs have been proposed. For example, an additional module (e.g., one with a moderate focal length and a moderate field of view) is added on top of the original dual-camera module; or an optical image stabilization structure is configured for the camera module, so that the position of the optical lens relative to the photosensitive chip is adjusted by the optical image stabilization structure to obtain multiple images, which are then synthesized to improve sharpness.
However, none of these schemes fundamentally solves the above technical problem. The reason is that in the above camera module designs the focal length of the camera module is fixed, while the distance between the subject and the camera device changes constantly; in the multiple pictures obtained, some parts of the subject will be unclear, so the imaging quality of the synthesized image is also difficult to improve.
Therefore, a new module structure design is needed to solve the above problems and provide users with a better shooting experience.
Summary of the Invention
An advantage of the present application is to provide a multi-camera camera module, a camera system, an electronic device and an imaging method, in which the structural configuration of the multi-camera camera module enables it to perform optical zooming based on the distance between it and the subject, so that when the framing picture contains both near and distant views, the multi-camera camera module can capture clear images of the subject at different depths of field, and the finally synthesized image has a better imaging effect.
Another advantage of the present application is to provide a multi-camera camera module, a camera system, an electronic device and an imaging method, in which the multi-camera camera module is configured with a camera unit having an optical zoom function, and the relative positional relationship between the camera unit having the optical zoom function and the other camera units can be changed, so that the multi-camera camera module can capture clear images of the subject at different depths of field and the finally synthesized image has a better imaging effect.
Another advantage of the present application is to provide a multi-camera camera module, a camera system, an electronic device and an imaging method, in which the relative positional relationship between the camera unit having the optical zoom function and the other camera units can be changed by a moving mechanism, so that the camera unit having the optical zoom function can better compensate (or otherwise process) the images collected by the other camera units, and the finally synthesized image has a better imaging effect.
Another advantage of the present application is to provide a multi-camera camera module, a camera system, an electronic device and an imaging method, in which the optical axis of the camera unit having the optical zoom function is inclined toward the optical axis of the other camera units, so that the imaging range of the camera unit having the optical zoom function can be better aligned with the to-be-processed portion of the images collected by the other camera units, and the camera unit having the optical zoom function can thus better compensate the images collected by the other camera units to improve the imaging effect of the finally synthesized image.
Other advantages and features of the present application will become apparent from the following description and can be realized by the means and combinations particularly pointed out in the claims.
To achieve at least one of the above objects or advantages, the present application provides a multi-camera camera module, which includes:
a first camera unit provided with a first optical axis;
a second camera unit with a zoom function, provided with a second optical axis; and
a moving mechanism configured to adjust the relative positional relationship between the first camera unit and the second camera unit.
In the multi-camera camera module according to the present application, the moving mechanism is configured to adjust the relative positional relationship between the first camera unit and the second camera unit based on an adjustment instruction, and the adjustment instruction is generated based on a to-be-processed area in a first image of a subject captured by the first camera unit.
In the multi-camera camera module according to the present application, the second optical axis is inclined in a direction toward the first optical axis to form an included angle with the first optical axis.
In the multi-camera camera module according to the present application, the included angle formed between the first optical axis and the second optical axis is 0.1° to 45°.
In the multi-camera camera module according to the present application, the included angle formed between the first optical axis and the second optical axis ranges from 0.1° to 10°.
In the multi-camera camera module according to the present application, the second camera unit is mounted on the moving mechanism, so that the moving mechanism drives the second camera unit to change the relative positional relationship between the first camera unit and the second camera unit.
In the multi-camera camera module according to the present application, the moving mechanism includes a housing, a carrier suspended in the housing and used to carry the second camera unit, and a coil-magnet pair disposed between the carrier and the housing and corresponding to each other.
In the multi-camera camera module according to the present application, the moving mechanism further includes a ball mounted between the carrier and the housing, so that the carrier is suspended in the housing by the ball.
In the multi-camera camera module according to the present application, the moving mechanism further includes an elastic element extending between the inner side wall of the housing and the outer side wall of the carrier, so that the carrier is suspended in the housing by the elastic element.
In the multi-camera camera module according to the present application, the first field-of-view angle of the first camera unit is greater than 60°, and the maximum second field-of-view angle of the second camera unit is less than 30°.
According to another aspect of the present application, a camera system is further provided, which includes:
the multi-camera camera module as described above; and
a processor communicatively connected to the multi-camera camera module, wherein the processor is configured to generate the adjustment instruction based on the to-be-processed area in the first image of the subject captured by the first camera unit.
In the camera system according to the present application, the processor is further configured to fuse the first image of the subject captured by the first camera unit and the second image of the subject captured by the second camera unit to obtain a fused image.
According to yet another aspect of the present application, an electronic device is further provided, which includes the multi-camera camera module as described above.
According to yet another aspect of the present application, an imaging method of a camera system is further provided, which includes:
obtaining a first image of a subject captured by the first camera unit and a second image of the subject captured by the second camera unit;
determining at least one to-be-processed area in the first image;
generating an adjustment instruction based on the relative positional relationship between the mapped image of the second image on the first image and the to-be-processed area;
based on the adjustment instruction, driving the moving mechanism to move the second camera unit until the mapped image of the second image captured by the second camera unit on the first image corresponds to the position of the to-be-processed area;
controlling the second camera unit to perform optical zooming and obtaining a zoomed second image of the subject;
generating a second adjustment instruction based on the relative positional relationship between the mapped image of the zoomed second image on the first image and the to-be-processed area;
based on the second adjustment instruction, driving the moving mechanism to move the second camera unit, wherein, during the movement of the second camera unit, at least one zoomed second image of the subject captured by the second camera unit is obtained; and
fusing the first image and the zoomed second image to obtain a fused image.
In the imaging method according to the present application, determining at least one to-be-processed area in the first image includes: determining at least one area of relatively low imaging quality in the first image as the at least one to-be-processed area.
In the imaging method according to the present application, determining at least one to-be-processed area in the first image includes: receiving a to-be-processed area designation instruction; and, in response to the to-be-processed area designation instruction, determining at least one to-be-processed area in the first image.
In the imaging method according to the present application, determining at least one to-be-processed area in the first image includes: determining at least one to-be-processed area in the first image based on a default setting.
In the imaging method according to the present application, generating a second adjustment instruction based on the relative positional relationship between the mapped image of the zoomed second image on the first image and the to-be-processed area includes: determining the numbers of pixels Mx and My of the to-be-processed area in the X direction and the Y direction set by the first image; determining the numbers of pixels Nx and Ny of the mapped image in the X direction and the Y direction set by the first image; and generating the second adjustment instruction based on Mx, My, Nx and Ny.
In the imaging method according to the present application, generating the second adjustment instruction based on Mx, My, Nx and Ny includes: in response to Nx > Mx and Ny > My, generating the second adjustment instruction, wherein the adjustment instruction is used to drive the moving mechanism to move the second camera unit so that the center of the mapped image is aligned with the center of the to-be-processed area.
In the imaging method according to the present application, generating the second adjustment instruction based on Mx, My, Nx and Ny includes: in response to Mx being greater than Nx, determining a first integer-multiple relationship between Mx and Nx; in response to My being greater than Ny, determining a second integer-multiple relationship between My and Ny; and generating the second adjustment instruction based on the first integer-multiple relationship and the second integer-multiple relationship, wherein the second adjustment instruction is used to drive the moving mechanism to move the second camera unit along the X direction at least a first integer multiple of times, and to drive the moving mechanism to move the second camera unit along the Y direction at least a second integer multiple of times.
In the imaging method according to the present application, obtaining, during the movement of the second camera unit, at least one zoomed second image of the subject captured by the second camera unit includes: obtaining, after each movement, a zoomed second image of the subject captured by the second camera unit, so as to obtain a plurality of zoomed second images; wherein fusing the first image and the zoomed second image to obtain a fused image includes: fusing the first image and the plurality of zoomed second images to obtain the fused image.
In the imaging method according to the present application, generating an adjustment instruction based on the relative positional relationship between the mapped image of the second image on the first image and the to-be-processed area includes: determining the relative positional relationship between the center of the to-be-processed area and the center of the mapped image; and generating the adjustment instruction based on a pre-calibrated correspondence table between the relative position of the center of the to-be-processed area and the mapped image and the translation position of the second camera unit.
In the imaging method according to the present application, generating a second adjustment instruction based on the relative positional relationship between the mapped image of the zoomed second image on the first image and the to-be-processed area includes: determining the relative positional relationship between the center of the to-be-processed area and the center of the mapped image; and generating the second adjustment instruction based on a pre-calibrated correspondence table between the relative position of the center of the to-be-processed area and the mapped image and the translation position of the second camera unit.
Further objects and advantages of the present application will be fully embodied through an understanding of the following description and the accompanying drawings.
These and other objects, features and advantages of the present application are fully embodied by the following detailed description, the accompanying drawings and the claims.
Brief Description of the Drawings
The above and other objects, features and advantages of the present application will become more apparent from the more detailed description of the embodiments of the present application in conjunction with the accompanying drawings. The accompanying drawings are used to provide a further understanding of the embodiments of the present application and constitute a part of the specification; they are used together with the embodiments of the present application to explain the present application and do not constitute a limitation of the present application. In the accompanying drawings, the same reference numerals generally represent the same components or steps.
FIG. 1 illustrates a schematic diagram of a multi-camera camera module according to an embodiment of the present application.
FIG. 2 illustrates another schematic diagram of the multi-camera camera module according to an embodiment of the present application.
FIG. 3 illustrates a schematic diagram of the viewfinder frames of the first camera unit and the second camera unit of the multi-camera camera module according to an embodiment of the present application.
FIG. 4 illustrates yet another schematic diagram of the multi-camera camera module according to an embodiment of the present application.
FIG. 5 illustrates a schematic diagram of the second camera unit in the multi-camera camera module according to an embodiment of the present application.
FIG. 6 illustrates a schematic diagram of a variant implementation of the second camera unit in the multi-camera camera module according to an embodiment of the present application.
FIG. 7 illustrates a flowchart of an imaging method according to an embodiment of the present application.
FIG. 8 illustrates a schematic diagram of a camera system according to an embodiment of the present application.
FIG. 9 illustrates a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description of the Embodiments
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application rather than all of them, and it should be understood that the present application is not limited by the exemplary embodiments described here.
Exemplary Multi-Camera Camera Module
As shown in FIG. 1 and FIG. 2, a multi-camera camera module 10 according to an embodiment of the present application is described. The structural configuration of the multi-camera camera module 10 enables it to perform optical zooming based on the distance between it and the subject, so that when the framing picture contains both near and distant views, the multi-camera camera module 10 can capture clear images of the subject at different depths of field, and the finally synthesized image has a better imaging effect. Specifically, the multi-camera camera module 10 according to the embodiment of the present application is configured with a camera unit having an optical zoom function, and the relative positional relationship between the camera unit having the optical zoom function and the other camera units can be changed, so that the multi-camera camera module 10 can capture clear images of the subject at different depths of field and the finally synthesized image has a better imaging effect.
It is worth noting that, in the embodiment of the present application, the camera unit with the zoom function and the other camera units refer to different camera units structurally integrated in the multi-camera camera module 10, rather than structurally separate camera modules. Specifically, in the multi-camera camera module 10, two or more camera units are integrally formed through a process such as molding to make the multi-camera camera module 10, and the multi-camera camera module 10 as a whole is connected to other peripheral devices, such as an image processor.
As shown in FIG. 1 and FIG. 2, the multi-camera camera module 10 according to the embodiment of the present application includes a first camera unit 11, a second camera unit 12, and a moving mechanism 13 configured to adjust the relative positional relationship between the first camera unit 11 and the second camera unit 12, wherein the second camera unit 12 is a camera unit having an optical zoom function (that is, the focal length of the second camera unit 12 can be adjusted). That is, the embodiment of the present application takes a multi-camera camera module 10 including two camera units as an example; of course, in other examples of the present application, a larger number of camera units may also be included, and the present application is not limited in this respect.
As shown in FIG. 1, in the multi-camera camera module 10, the first camera unit 11 is implemented as a conventional camera module with a fixed equivalent focal length, and the second camera unit 12 is a camera module with optical zoom capability. More specifically, as shown in FIG. 1, the second camera unit 12 includes a photosensitive chip 121, at least one lens group 122 located on the photosensitive path set by the photosensitive chip 121, and a drive assembly 123 used to drive at least some of the lenses in the at least one lens group 122 for optical zooming.
More specifically, in the example illustrated in FIG. 1, the at least one lens group 122 includes a first lens group 124 and a second lens group 125, and the drive assembly 123 includes a first driving element 127 and a second driving element 128, wherein the first driving element 127 is used to drive at least some of the lenses in the first lens group 124 to move for optical zooming, and the second driving element 128 is used to drive the second lens group 125 to move as a whole for optical focusing, so as to compensate for the degradation of image quality after optical zooming, so that the second camera unit 12 has relatively better imaging quality after optical zooming. That is, in the embodiment of the present application, the at least one lens group 122 includes a compensation lens group (the second lens group 125) and a zoom lens group (the first lens group 124), and the drive assembly 123 includes a zoom driver (the first driving element 127) and a focus driver (the second driving element 128).
It should be understood that, in the embodiment of the present application, the at least one lens group 122 may further include a larger number of lens groups, for example, a third lens group 126, where the position of the third lens group 126 is fixed so that it serves as a fixed lens group; the present application is not limited in this respect.
Further, in order to reduce the dimension of the second camera unit 12 in the height direction, in the example illustrated in FIG. 1, the second camera unit 12 further includes a reflective element 129 (for example, a prism or a mirror) disposed on the photosensitive path of the photosensitive chip 121 and used to fold the imaging light. That is, in the example illustrated in FIG. 1, the second camera unit 12 is implemented as a periscope camera module.
It is worth mentioning that, in the embodiment of the present application, the second camera unit 12 may also be implemented as a conventional upright camera module, and the present application is not limited in this respect. Meanwhile, the second camera unit 12 may also achieve optical zooming in other ways; for example, in other examples of the present application, the optical lens of the second camera unit 12 is a liquid lens, and optical zooming is carried out by changing the surface shape of the liquid lens through an applied voltage, which is likewise not limited by the present application.
In particular, as shown in FIG. 2, in the multi-camera camera module 10, the first camera unit 11 has a relatively large field-of-view angle, that is, it has a larger imaging window (in other words, the first camera unit 11 has a larger framing picture and can capture scenes within a larger spatial range), while the second camera unit 12 has a relatively smaller field-of-view angle than the first camera unit 11, that is, the imaging window of the second camera unit 12 is smaller. As shown in FIG. 3, when the first camera unit 11 and the second camera unit 12 capture the subject at the same time, the imaging window of the first camera unit 11 and the imaging window of the second camera unit 12 overlap at least partially. More specifically, the imaging window of the second camera unit 12 is smaller than the imaging window of the first camera unit 11, and if the two are arranged at an appropriate distance, the imaging window of the second camera unit 12 lies within the imaging window of the first camera unit 11. Therefore, when a subject is captured by the multi-camera camera module 10, the images of the subject captured by the first camera unit 11 and the second camera unit 12 are related in content, so that a fused image with a better imaging effect can be obtained by synthesizing the first image of the subject captured by the first camera unit 11 and the second image of the subject captured by the second camera unit 12.
Accordingly, in the embodiment of the present application, the first field-of-view angle of the first camera unit 11 is greater than 60°, while the maximum second field-of-view angle of the second camera unit 12 is less than 30°. It should be understood that during the optical zooming of the second camera unit 12, the second field-of-view angle of the second camera unit 12 will change, but its field-of-view angle will not exceed 30° at most.
Further, although the first image of the subject captured by the first camera unit 11 and the second image of the subject captured by the second camera unit 12 are related in content when the multi-camera camera module 10 captures images of the subject, in the actual image synthesis process the to-be-processed area in the first image may have no relation to the content of the second image. For example, in one image fusion scheme, an area of relatively low imaging quality in the first image is set as the to-be-processed area; ideally, the content of the second image should correspond to the to-be-processed area and have relatively high imaging quality, so that an image of the subject with relatively high overall imaging quality can be obtained by fusing the first image and the second image. In the actual imaging process, however, the correspondence between the second image and the to-be-processed area is determined by the physical positional relationship between the first camera unit 11 and the second camera unit 12 (that is, the relative positional relationship between the first camera unit 11 and the second camera unit 12); in other words, when the relative positional relationship between the first camera unit 11 and the second camera unit 12 does not meet the preset requirements, the second image will not correspond to the to-be-processed area in the first image, and a better visual effect cannot be obtained through image fusion processing.
In order to meet the requirements of subsequent image processing, in the embodiment of the present application, the relative positional relationship between the first camera unit 11 and the second camera unit 12 can be adjusted. In particular, the change of position between the first camera unit 11 and the second camera unit 12 is realized by a moving mechanism 13, as shown in FIG. 1.
Specifically, as shown in FIG. 1, in the embodiment of the present application, the second camera unit 12 is mounted on the moving mechanism 13, so that the moving mechanism 13 drives the second camera unit 12 to change the relative positional relationship between the first camera unit 11 and the second camera unit 12. In particular, in the embodiment of the present application, the moving mechanism 13 is configured to adjust the relative positional relationship between the first camera unit 11 and the second camera unit 12 based on an adjustment instruction, and the adjustment instruction is generated based on the to-be-processed area in the first image of the subject captured by the first camera unit 11; that is, in the embodiment of the present application, the translation structure is configured to adjust the relative positional relationship between the first camera unit 11 and the second camera unit 12 based on the requirements of subsequent image processing.
FIG. 5 illustrates a schematic diagram of the second camera unit 12 in the multi-camera camera module 10 according to an embodiment of the present application. As shown in FIG. 5, in the embodiment of the present application, the moving mechanism 13 includes: a housing 131; a carrier 132 suspended in the housing 131 and used to carry the second camera unit 12; and a coil-magnet pair 133 disposed between the carrier 132 and the housing 131 and corresponding to each other, wherein, after being energized, the coil-magnet pair 133 can drive the carrier 132 to move the second camera unit 12. In particular, as shown in FIG. 5, the moving mechanism 13 further includes a ball 134A mounted between the carrier 132 and the housing 131, so that the carrier 132 is suspended in the housing 131 by the ball 134A.
FIG. 6 illustrates a schematic diagram of a variant implementation of the second camera unit 12 in the multi-camera camera module 10 according to an embodiment of the present application. As shown in FIG. 6, in this variant implementation, the moving mechanism 13 further includes an elastic element 134B extending between the inner side wall of the housing 131 and the outer side wall of the carrier 132, so that the carrier 132 is suspended in the housing 131 by the elastic element 134B. In a specific implementation, the elastic element 134B may be implemented as, for example, a leaf spring, a coil spring or an elastic sheet.
It should be understood that, in the embodiment of the present application, the position of the first camera unit 11 is kept fixed and the position of the second camera unit 12 is adjusted by the moving mechanism 13, so as to change the relative positional relationship between the first camera unit 11 and the second camera unit 12. Of course, in other examples of the present application, the above technical purpose can also be achieved in other ways; for example, the position of the second camera unit 12 may be kept fixed while the position of the first camera unit 11 is made adjustable, or the positions of the first camera unit 11 and the second camera unit 12 may both be made adjustable.
It is worth mentioning that, in the embodiment of the present application, when the position of the first camera unit 11 is fixed and the position of the second camera unit 12 is adjustable, preferably, the imaging window of the second camera unit 12 can tend toward the side of the imaging window of the first camera unit 11 away from the second camera unit 12, so that when the second camera unit 12 is moved multiple times, the imaging window of the second camera unit 12 can cover any part of the entire imaging window of the first camera unit 11. In a specific example of the present application, the imaging window of the second camera unit 12 can be biased more toward the central area of the imaging window of the first camera unit 11, so that the second image of the subject captured by the second camera unit 12 and the first image captured by the first camera unit 11 have a higher degree of correlation in content. That is, in the embodiment of the present application, preferably, the second optical axis X2 set by the second camera unit 12 is inclined in a direction toward the first optical axis X1 set by the first camera unit 11 to form an included angle with the first optical axis X1, as shown in FIG. 4. Specifically, in the embodiment of the present application, the included angle formed between the first optical axis X1 and the second optical axis X2 is 0.1° to 45°, and more preferably, the included angle ranges from 0.1° to 10°.
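As a rough illustration of how the tilt between the two optical axes shifts the center of the second camera unit's imaging window on the first image, the following sketch (not from the patent) projects the angular offset through a simple pinhole model of the first camera unit; the focal length and pixel pitch are hypothetical values, and the baseline between the two units is ignored.

import math

def window_center_offset_px(tilt_deg, f1_mm=4.0, pixel_pitch_um=1.0):
    """Approximate shift (in first-image pixels) of the second unit's window
    center caused by tilting the second optical axis X2 by tilt_deg toward X1.

    A pure angular offset maps to roughly f1 * tan(theta) on the first
    camera unit's sensor, independent of object distance (baseline ignored).
    """
    offset_on_sensor_mm = f1_mm * math.tan(math.radians(tilt_deg))
    return offset_on_sensor_mm * 1000.0 / pixel_pitch_um  # mm -> pixels

if __name__ == "__main__":
    for angle in (0.1, 1.0, 10.0):
        print(angle, "deg ->", round(window_center_offset_px(angle)), "px")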
In summary, the multi-camera camera module 10 according to the embodiment of the present application has been described, in which the structural configuration of the multi-camera camera module 10 enables it to perform optical zooming based on the distance between it and the subject, so that when the framing picture contains both near and distant views, the multi-camera camera module 10 can capture clear images of the subject at different depths of field and the finally synthesized image has a better imaging effect.
In particular, in the embodiment of the present application, the multi-camera camera module 10 is configured with a camera unit having an optical zoom function, and the relative positional relationship between the camera unit having the optical zoom function and the other camera units can be changed, so that the multi-camera camera module 10 can capture clear images of the subject at different depths of field and the finally synthesized image has a better imaging effect.
As described above, in the embodiment of the present application, the moving mechanism 13 is configured to adjust the relative positional relationship between the first camera unit 11 and the second camera unit 12 based on an adjustment instruction, and the adjustment instruction is generated based on the to-be-processed area in the first image of the subject captured by the first camera unit 11.
To explain how the moving mechanism 13 moves (that is, how the relative position between the second camera unit 12 and the first camera unit 11 changes), an imaging method applicable to the multi-camera camera module 10 is described below.
Exemplary Imaging Method
FIG. 7 illustrates a flowchart of an imaging method applicable to the multi-camera camera module 10 according to an embodiment of the present application.
As shown in FIG. 7, the imaging method according to the embodiment of the present application includes the steps of: S110, obtaining a first image of the subject captured by the first camera unit 11 and a second image of the subject captured by the second camera unit 12; S120, determining at least one to-be-processed area in the first image; S130, generating an adjustment instruction based on the relative positional relationship between the mapped image of the second image on the first image and the to-be-processed area; S140, based on the adjustment instruction, driving the moving mechanism 13 to move the second camera unit 12 until the mapped image of the second image captured by the second camera unit 12 on the first image corresponds to the position of the to-be-processed area; S150, controlling the second camera unit 12 to perform optical zooming and obtaining a zoomed second image of the subject; S160, generating a second adjustment instruction based on the relative positional relationship between the mapped image of the zoomed second image on the first image and the to-be-processed area; S170, based on the second adjustment instruction, driving the moving mechanism 13 to move the second camera unit 12, wherein, during the movement of the second camera unit 12, at least one zoomed second image of the subject captured by the second camera unit 12 is obtained; and S180, fusing the first image and the zoomed second image to obtain a fused image.
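The step sequence S110 to S180 can be read as a simple control loop. The following sketch shows one possible orchestration under the assumption that capture, zooming, movement, region selection, planning and fusion are supplied as callables; all names here are illustrative placeholders rather than an interface defined by the patent.

from dataclasses import dataclass
from typing import Any, Callable, List, Tuple

Step = Tuple[float, float]  # (dx, dy) translation of the second camera unit

@dataclass
class ImagingPipeline:
    # Hypothetical callables injected by the caller; not the patent's API.
    capture_first: Callable[[], Any]          # first camera unit (wide)
    capture_second: Callable[[], Any]         # second camera unit (zoom)
    optical_zoom: Callable[[], None]
    move_by: Callable[[Step], None]           # moving mechanism
    find_roi: Callable[[Any], Any]            # S120, e.g. sharpness-based
    plan_move: Callable[[Any, Any], Step]     # S130/S160: mapped image vs. ROI
    plan_coverage: Callable[[Any, Any], List[Step]]
    fuse: Callable[[Any, List[Any], Any], Any]

    def run(self) -> Any:
        first = self.capture_first()                  # S110
        second = self.capture_second()
        roi = self.find_roi(first)                    # S120
        self.move_by(self.plan_move(second, roi))     # S130 + S140
        self.optical_zoom()                           # S150
        zoomed = self.capture_second()
        shots = []
        for step in self.plan_coverage(zoomed, roi):  # S160 + S170
            self.move_by(step)
            shots.append(self.capture_second())
        return self.fuse(first, shots, roi)           # S180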
In step S110, a first image of the subject captured by the first camera unit 11 and a second image of the subject captured by the second camera unit 12 are obtained. As described above, in the embodiment of the present application, when the first camera unit 11 and the second camera unit 12 capture the subject at the same time, the imaging window of the first camera unit 11 and the imaging window of the second camera unit 12 overlap at least partially. More specifically, the imaging window of the second camera unit 12 is smaller than the imaging window of the first camera unit 11, and if the two are arranged at an appropriate distance, the imaging window of the second camera unit 12 lies within the imaging window of the first camera unit 11. Therefore, when the subject is captured by the multi-camera camera module 10, the images of the subject captured by the first camera unit 11 and the second camera unit 12 are related in content, so that a fused image with a better imaging effect can be obtained by synthesizing the first image of the subject captured by the first camera unit 11 and the second image of the subject captured by the second camera unit 12.
In step S120, at least one to-be-processed area in the first image is determined. Here, in the embodiment of the present application, the selection of the to-be-processed area is related to the desired final synthesis effect. For example, when the final synthesis effect is set to generate an image of the subject with relatively high imaging quality throughout, the to-be-processed area may be set as an area of the first image whose imaging quality is to be compensated, that is, an area of relatively low imaging quality in the first image is determined as the to-be-processed area. For another example, when the final synthesis effect is set to blur the background portion of the scene around the subject, the to-be-processed area may be set as the middle area of the first image (the middle area usually corresponds to the main subject).
Further, after the selection criterion of the to-be-processed area is determined, the at least one to-be-processed area in the first image may be determined in at least the following ways.
In an example of the present application, the process of determining at least one to-be-processed area in the first image includes: determining at least one area of relatively low imaging quality in the first image as the at least one to-be-processed area. In a specific implementation, at least one area of relatively low imaging quality in the first image may be determined as the at least one to-be-processed area by using, for example, a Brenner gradient function, a Tenengrad gradient function or a Laplacian gradient function, as sketched below. It is worth mentioning that, in the embodiment of the present application, an area of low imaging quality in an image may refer to an area of low sharpness in the image.
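The text names the Brenner, Tenengrad and Laplacian gradient functions as possible sharpness measures but gives no implementation. A minimal sketch, assuming the first image is available as a NumPy/OpenCV array and scoring fixed-size blocks by the variance of the Laplacian (the block size is an arbitrary choice, not a value from the patent):

import numpy as np
import cv2  # OpenCV, used only for grayscale conversion and the Laplacian

def find_region_to_process(first_image_bgr, block=64):
    """Return (x, y, w, h) of the lowest-sharpness block of the first image.

    Sharpness is scored per block as the variance of the Laplacian; the
    Brenner or Tenengrad gradients mentioned in the text could be swapped in.
    """
    gray = cv2.cvtColor(first_image_bgr, cv2.COLOR_BGR2GRAY)
    lap = cv2.Laplacian(gray, cv2.CV_64F)
    h, w = gray.shape
    worst_score, worst_rect = np.inf, (0, 0, block, block)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            score = lap[y:y + block, x:x + block].var()
            if score < worst_score:
                worst_score, worst_rect = score, (x, y, block, block)
    return worst_rect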
在本申请另一示例中,确定所述第一图像中的至少一待处理区域的过程,包括:首先,接收待处理区域指定指令;然后,响应于所述待处理区域指定指令,确定所述第一图像中的至少一待处理区域。也就是,在该示例中,所述待处理区域由人为进行设定,具体地,通过使用者施加指定指令确定,其中,所述指定指令包括单击所述第一图像的相应区域、双击所述第一图像的相应区域等,对此,并不为本申请所局限。
在本申请又一示例中,确定所述第一图像中的至少一待处理区域,包括:基于默认设定,确定所述第一图像中的至少一待处理区域。也就是,在该示例中,基于系统的默认设定,确定所述第一图像中的至少一待处理区域。
值得一提的是,当所述待处理区域是使用者自己选择或者系统默认设定时,所述第二摄像单元12可以基于后续描述的步骤S150中的操作进行自动光学变焦,也可以是通过使用者选择变焦倍率或者是系统默认的变焦倍率进行光学变焦。
In step S130, an adjustment instruction is generated based on the relative positional relationship between the to-be-processed area and the mapped image obtained by mapping the second image onto the first image. Here, the adjustment instruction is used to drive the moving mechanism 13 to move the second camera unit 12 to a position at which the mapped image of the second image captured by the second camera unit 12 corresponds to the to-be-processed area.
That is, after the at least one to-be-processed area in the first image has been determined, the relative positional relationship between the first camera unit 11 and the second camera unit 12 is changed so that the content of the second image of the photographed target captured by the second camera unit 12 corresponds to the to-be-processed area. For example, in the above example of the present application the to-be-processed area is the area of the first image whose imaging quality needs compensation; accordingly, the purpose of step S130 is to generate an adjustment instruction for driving the moving mechanism 13 to move the second camera unit 12 so that the content of the second image of the photographed target captured by the second camera unit 12 corresponds to the to-be-processed area.
Here, the specific process of generating an adjustment instruction based on the relative positional relationship between the to-be-processed area and the mapped image of the second image on the first image appears again in step S160, so it is not expanded upon at this point.
In step S140, based on the adjustment instruction, the moving mechanism 13 is driven to move the second camera unit 12 to a position at which the mapped image of the second image captured by the second camera unit 12 corresponds to the to-be-processed area. That is, based on the adjustment instruction, the moving mechanism 13 is driven to move the second camera unit 12 so that the content of the second image of the photographed target captured by the second camera unit 12 corresponds to the to-be-processed area.
In step S150, the second camera unit 12 is controlled to perform optical zooming, and a zoomed second image of the photographed target is obtained. As described above, in the embodiments of the present application the second camera unit 12 has optical zoom capability; therefore the second camera unit 12 can perform optical zooming based on its distance from the photographed target or on the sharpness of the second image, so that it can capture a second image of the photographed target with relatively high imaging quality.
In step S160, a second adjustment instruction is generated based on the relative positional relationship between the to-be-processed area and the mapped image obtained by mapping the zoomed second image onto the first image, where the second adjustment instruction is used to drive the moving mechanism 13 to move the second camera unit 12.
Specifically, in one example of the present application, generating the second adjustment instruction based on the relative positional relationship between the to-be-processed area and the mapped image of the zoomed second image on the first image includes: first, determining the numbers of pixels Mx and My occupied by the to-be-processed area along the X direction and the Y direction defined in the first image; then, determining the numbers of pixels Nx and Ny occupied by the mapped image along the X direction and the Y direction defined in the first image; and finally, generating the second adjustment instruction based on Mx, My, Nx and Ny.
More specifically, in the embodiments of the present application, when Nx > Mx and Ny > My, generating the second adjustment instruction based on Mx, My, Nx and Ny includes: in response to Nx > Mx and Ny > My, generating the second adjustment instruction, where the adjustment instruction is used to drive the moving mechanism 13 to move the second camera unit 12 so that the center of the mapped image is aligned with the center of the to-be-processed area. That is, when the imaging window of the second camera unit 12 can cover the to-be-processed area within the imaging window of the first camera unit 11, the second camera unit 12 is moved so that the center of its imaging window coincides with the center of the to-be-processed area in the imaging window of the first camera unit 11 (it is worth mentioning that in a specific implementation approximate coincidence is sufficient).
Conversely, when Nx is smaller than Mx or Ny is smaller than My, the integer multiples of Mx over Nx and of My over Ny are calculated (adding one to the multiple when there is a remainder) to obtain the numbers of times the second camera unit 12 has to be moved along the X direction and the Y direction respectively, so that by moving the second camera unit 12 several times its multiple imaging windows together cover the to-be-processed area within the imaging window of the first camera unit 11. Accordingly, generating the second adjustment instruction based on Mx, My, Nx and Ny further includes: in response to Mx being greater than Nx, determining a first integer-multiple relationship between Mx and Nx; in response to My being greater than Ny, determining a second integer-multiple relationship between My and Ny; and generating the second adjustment instruction based on the first integer-multiple relationship and the second integer-multiple relationship, where the second adjustment instruction is used to drive the moving mechanism 13 to move the second camera unit 12 along the X direction at least the first integer multiple of times and to drive the moving mechanism 13 to move the second camera unit 12 along the Y direction at least the second integer multiple of times.
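A minimal sketch of the move-count logic just described follows: along each axis the number of positions is the integer multiple of Mx over Nx (respectively My over Ny), plus one when there is a remainder, i.e. a ceiling division. The row-by-row visiting order is an illustrative assumption, not something prescribed by the disclosure.

```python
import math

def second_adjustment_steps(mx, my, nx, ny):
    """Return the list of (x_step, y_step) grid positions the zoomed imaging
    window should visit so that its footprints jointly cover the area."""
    if nx > mx and ny > my:
        # The zoomed window already covers the area: a single centering move suffices.
        return [(0, 0)]
    steps_x = math.ceil(mx / nx)   # first integer-multiple relationship
    steps_y = math.ceil(my / ny)   # second integer-multiple relationship
    return [(ix, iy) for iy in range(steps_y) for ix in range(steps_x)]
```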
It is worth mentioning that, in other examples of the present application, the displacement of the second camera unit 12 may also be determined in other ways. For example, the displacement of the second camera unit 12 may be determined from the center position of the to-be-processed area in the first image captured by the first camera unit 11. Specifically, if the center of the to-be-processed area is located at (x1, y1), the translation required of the second camera unit 12 is d(x, y) = k(x1, y1), where k is a translation parameter that can be calculated from the parameters of the second camera unit 12 and the first camera unit 11, the relevant parameters including the angle between the optical axes of the two units, their fields of view, the image-plane sizes, and so on.
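A small sketch of the d(x, y) = k(x1, y1) relation is given below, assuming the translation parameter k is already known (computed from the camera parameters listed above, or calibrated as described next). Measuring (x1, y1) relative to the first image's center, and the numeric values in the usage line, are illustrative assumptions only.

```python
def translation_from_center(region_center, image_center, k):
    """d = k * (x1, y1): translation of the second camera unit, where
    (x1, y1) is the region center taken relative to the first image's center."""
    x1 = region_center[0] - image_center[0]
    y1 = region_center[1] - image_center[1]
    return (k * x1, k * y1)

# Example: region centered 300 px right and 120 px below the image center,
# with a hypothetical calibrated translation parameter k = 0.002 mm/px.
dx, dy = translation_from_center((1260, 660), (960, 540), k=0.002)
```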
Alternatively, the translation parameter k may be obtained by calibration with a test chart. A test chart is placed in front of the multi-camera camera module 10, the focal length of the second camera unit 12 is changed so that its zoom ratio varies, the second camera unit 12 is translated and its translation recorded, and, for that zoom ratio, the resulting translation on the first image of the mapped image of the second image captured by the second camera unit 12 is obtained. The zoom ratio of the second camera unit 12 is then changed and, for several different zoom ratios of the optical zoom module, the translation on the first image of the mapped image of the second image is obtained; from these data sets the translation parameter k is calculated.
As a further example, the translation of the second camera unit 12 may be determined by constructing a zoom-ratio-to-translation lookup table. Specifically, a test chart is placed in front of the multi-camera camera module 10, the focal length of the second camera unit 12 is changed so that its zoom ratio varies, the second camera unit 12 is translated and its translation recorded, and, for that zoom ratio, the translation on the first image of the mapped image of the second image captured by the second camera unit 12 is obtained. The zoom ratio of the second camera unit 12 is then changed and, for several different zoom ratios of the optical zoom module, the translation on the first image of the mapped image of the second image is obtained, yielding a zoom-ratio-to-translation lookup table. From this table the second camera unit 12 can obtain, for each zoom ratio, the relationship between its own translation and the translation of the picture.
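The sketch below shows one way such a zoom-ratio-to-translation table might be stored and queried, with linear interpolation between calibrated zoom ratios. The numbers in the table are made-up placeholders, not calibration data from the disclosure.

```python
import bisect

# Hypothetical calibration: zoom ratio -> image-plane shift (px) produced by
# 1 mm of camera translation, as would be measured on the test chart.
CALIBRATION = [(1.0, 80.0), (2.0, 155.0), (3.0, 240.0), (5.0, 410.0)]

def picture_shift_per_mm(zoom_ratio):
    """Look up (and linearly interpolate) the calibrated ratio between the
    translation of the second camera unit and the shift of its mapped image."""
    ratios = [z for z, _ in CALIBRATION]
    i = bisect.bisect_left(ratios, zoom_ratio)
    if i == 0:
        return CALIBRATION[0][1]
    if i == len(CALIBRATION):
        return CALIBRATION[-1][1]
    (z0, s0), (z1, s1) = CALIBRATION[i - 1], CALIBRATION[i]
    t = (zoom_ratio - z0) / (z1 - z0)
    return s0 + t * (s1 - s0)

def camera_translation_mm(shift_px, zoom_ratio):
    """Camera translation (mm) needed to shift the mapped image by shift_px."""
    return shift_px / picture_shift_per_mm(zoom_ratio)
```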
Accordingly, in the embodiments of the present application, generating the adjustment instruction based on the relative positional relationship between the to-be-processed area and the mapped image of the second image on the first image includes: determining the relative positional relationship between the center of the to-be-processed area and the center of the mapped image; and generating the adjustment instruction based on a pre-calibrated table that maps the relative position between the center of the to-be-processed area and the mapped image to the translation position of the second camera unit 12.
Accordingly, in the embodiments of the present application, generating the second adjustment instruction based on the relative positional relationship between the to-be-processed area and the mapped image of the zoomed second image on the first image includes: determining the relative positional relationship between the center of the to-be-processed area and the center of the mapped image; and generating the second adjustment instruction based on a pre-calibrated table that maps the relative position between the center of the to-be-processed area and the mapped image to the translation position of the second camera unit 12.
In step S170, based on the second adjustment instruction, the moving mechanism 13 is driven to move the second camera unit 12, wherein at least one zoomed second image of the photographed target is captured by the second camera unit 12 while it is being moved.
Specifically, in a specific implementation, a zoomed second image of the photographed target may be captured by the second camera unit 12 each time the second camera unit 12 is moved, thereby obtaining a plurality of zoomed second images.
Accordingly, it follows from steps S130 and S140 together with steps S160 and S170 that, in the imaging method according to the embodiments of the present application, the relative positional relationship between the second camera unit 12 and the first camera unit 11 is changed twice: once before the second camera unit 12 performs optical zooming, and once after. That is, the position of the second camera unit 12 is changed twice by the moving mechanism 13, where the purpose of the first change is to move the second camera unit 12 to a position roughly corresponding to the to-be-processed area so that it can perform optical zooming, and the purpose of the second change is to make the imaging window of the second camera unit 12 completely cover the to-be-processed area. In particular, when the imaging window of the second camera unit 12 is larger than the to-be-processed area, the second change is achieved by moving the second camera unit 12 once; when the imaging window of the second camera unit 12 is smaller than the to-be-processed area, the second change is achieved by moving the second camera unit 12 several times so that the combined window formed by the successive imaging windows of the second camera unit 12 completely covers the to-be-processed area.
In step S180, the first image and the zoomed second images are fused to obtain a fused image. Accordingly, in the embodiments of the present application, fusing the first image and the zoomed second images to obtain a fused image includes: fusing the first image and the plurality of zoomed second images to obtain the fused image.
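The disclosure does not fix a particular fusion algorithm, so the sketch below shows only the simplest form: each zoomed second image is warped into the first image's coordinates (via a homography assumed to be known, e.g. from the mapping/calibration above) and pasted over the area it covers. A practical implementation would additionally blend seams and match exposure.

```python
import cv2
import numpy as np

def fuse_naive(first_image, zoomed_shots, homographies):
    """Paste each zoomed second image into the first image's frame.
    homographies[i] maps zoomed_shots[i] pixels into first-image pixels
    (assumed known; not specified by the disclosure)."""
    h, w = first_image.shape[:2]
    fused = first_image.copy()
    for shot, H in zip(zoomed_shots, homographies):
        warped = cv2.warpPerspective(shot, H, (w, h))
        mask = cv2.warpPerspective(np.full(shot.shape[:2], 255, np.uint8), H, (w, h))
        fused[mask > 0] = warped[mask > 0]   # overwrite the covered area
    return fused
```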
In summary, the imaging method according to the embodiments of the present application has been described. Its implementation relies on the optimized and improved structural configuration of the multi-camera camera module 10; that is, the structural optimization of the multi-camera camera module 10 provides the hardware foundation necessary for carrying out the imaging method, and it is through the imaging method together with the hardware configuration of the multi-camera camera module 10 that a better visual experience can be provided to the user.
Exemplary Camera System
According to another aspect of the present application, a camera system is further provided.
Fig. 8 illustrates a schematic view of the camera system according to an embodiment of the present application.
As shown in Fig. 8, the camera system 30 includes the multi-camera camera module 10 as described above and a processor 20 communicably connected to the multi-camera camera module 10, wherein the processor 20 is configured to generate the adjustment instruction based on the to-be-processed area in the first image of the photographed target captured by the first camera unit 11. Accordingly, after receiving the adjustment instruction, the moving mechanism 13 adjusts the relative positional relationship between the first camera unit 11 and the second camera unit 12 based on the adjustment instruction.
Accordingly, in the embodiments of the present application, the processor 20 is further configured to fuse the first image of the photographed target captured by the first camera unit 11 and the second image of the photographed target captured by the second camera unit 12 to obtain a fused image.
Exemplary Electronic Device
According to another aspect of the present application, an electronic device 100 is further provided.
Fig. 9 illustrates a perspective schematic view of the electronic device 100 according to an embodiment of the present application.
As shown in Fig. 9, the electronic device 100 according to an embodiment of the present application includes an electronic device body 90 and the multi-camera camera module 10 as described above assembled to the electronic device body 90. In a specific implementation, the multi-camera camera module 10 is preferably arranged on the back of the electronic device body 90 to serve as a rear camera module; of course, it may also be arranged on the front of the electronic device body 90 to serve as a front camera module.
As shown in Fig. 9, in this embodiment the electronic device body 90 includes a screen and an integrated circuit, where the screen may be used to display the image data captured by the multi-camera camera module 10, and the integrated circuit may be used to process the image data captured by the multi-camera camera module 10 so as to control the multi-camera camera module 10 to realize its imaging function.
Those skilled in the art should understand that the embodiments of the present invention described above and shown in the accompanying drawings are given by way of example only and do not limit the present invention. The objectives of the present invention have been accomplished completely and effectively. The functional and structural principles of the present invention have been shown and explained in the embodiments, and the embodiments of the present invention may be varied or modified in any way without departing from these principles.

Claims (23)

  1. A multi-camera camera module, characterized by comprising:
    a first camera unit provided with a first optical axis;
    a second camera unit having a zoom function and provided with a second optical axis; and
    a moving mechanism configured to adjust the relative positional relationship between the first camera unit and the second camera unit.
  2. The multi-camera camera module according to claim 1, wherein the moving mechanism is configured to adjust the relative positional relationship between the first camera unit and the second camera unit based on an adjustment instruction, the adjustment instruction being generated based on a to-be-processed area in a first image of a photographed target captured by the first camera unit.
  3. The multi-camera camera module according to claim 1, wherein the second optical axis is tilted toward the first optical axis so as to form an included angle with the first optical axis.
  4. The multi-camera camera module according to claim 3, wherein the included angle between the first optical axis and the second optical axis is 0.1° to 45°.
  5. The multi-camera camera module according to claim 3, wherein the included angle between the first optical axis and the second optical axis is in the range of 0.1° to 10°.
  6. The multi-camera camera module according to claim 1, wherein the second camera unit is mounted on the moving mechanism, so that the relative positional relationship between the first camera unit and the second camera unit is changed by driving the second camera unit with the moving mechanism.
  7. The multi-camera camera module according to claim 6, wherein the moving mechanism comprises: a housing; a carrier suspended within the housing and configured to carry the second camera unit; and a mutually corresponding coil-magnet pair arranged between the carrier and the housing.
  8. The multi-camera camera module according to claim 7, wherein the moving mechanism further comprises balls mounted between the carrier and the housing, so that the carrier is suspended within the housing by means of the balls.
  9. The multi-camera camera module according to claim 7, wherein the moving mechanism further comprises an elastic element extending between the inner side wall of the housing and the outer side wall of the carrier, so that the carrier is suspended within the housing by means of the elastic element.
  10. The multi-camera camera module according to claim 1, wherein the first field of view of the first camera unit is greater than 60°, and the maximum second field of view of the second camera unit is less than 30°.
  11. A camera system, characterized by comprising:
    the multi-camera camera module according to any one of claims 1-10; and
    a processor communicably connected to the multi-camera camera module, wherein the processor is configured to generate the adjustment instruction based on a to-be-processed area in a first image of a photographed target captured by the first camera unit.
  12. The camera system according to claim 11, wherein the processor is further configured to fuse the first image of the photographed target captured by the first camera unit and a second image of the photographed target captured by the second camera unit to obtain a fused image.
  13. An electronic device, characterized by comprising: the multi-camera camera module according to any one of claims 1-10.
  14. An imaging method of a camera system, characterized by comprising:
    obtaining a first image of a photographed target captured by the first camera unit and a second image of the photographed target captured by the second camera unit;
    determining at least one to-be-processed area in the first image;
    generating an adjustment instruction based on the relative positional relationship between the to-be-processed area and a mapped image obtained by mapping the second image onto the first image;
    based on the adjustment instruction, driving a moving mechanism to move the second camera unit to a position at which the mapped image of the second image captured by the second camera unit on the first image corresponds to the to-be-processed area;
    controlling the second camera unit to perform optical zooming and obtaining a zoomed second image of the photographed target;
    generating a second adjustment instruction based on the relative positional relationship between the to-be-processed area and a mapped image obtained by mapping the zoomed second image onto the first image;
    based on the second adjustment instruction, driving the moving mechanism to move the second camera unit, wherein at least one zoomed second image of the photographed target is captured by the second camera unit while the second camera unit is being moved; and
    fusing the first image and the zoomed second image to obtain a fused image.
  15. The imaging method according to claim 14, wherein determining at least one to-be-processed area in the first image comprises: determining at least one area of relatively low imaging quality in the first image as the at least one to-be-processed area.
  16. The imaging method according to claim 14, wherein determining at least one to-be-processed area in the first image comprises:
    receiving a to-be-processed-area designation instruction; and
    in response to the to-be-processed-area designation instruction, determining the at least one to-be-processed area in the first image.
  17. The imaging method according to claim 14, wherein determining at least one to-be-processed area in the first image comprises: determining the at least one to-be-processed area in the first image based on a default setting.
  18. The imaging method according to claim 14, wherein generating a second adjustment instruction based on the relative positional relationship between the to-be-processed area and the mapped image obtained by mapping the zoomed second image onto the first image comprises:
    determining the numbers of pixels Mx and My of the to-be-processed area along the X direction and the Y direction defined in the first image;
    determining the numbers of pixels Nx and Ny of the mapped image along the X direction and the Y direction defined in the first image; and
    generating the second adjustment instruction based on Mx, My, Nx and Ny.
  19. The imaging method according to claim 18, wherein generating the second adjustment instruction based on Mx, My, Nx and Ny comprises:
    in response to Nx > Mx and Ny > My, generating the second adjustment instruction, wherein the adjustment instruction is used to drive the moving mechanism to move the second camera unit so that the center of the mapped image is aligned with the center of the to-be-processed area.
  20. The imaging method according to claim 18, wherein generating the second adjustment instruction based on Mx, My, Nx and Ny comprises:
    in response to Mx being greater than Nx, determining a first integer-multiple relationship between Mx and Nx;
    in response to My being greater than Ny, determining a second integer-multiple relationship between My and Ny; and
    generating the second adjustment instruction based on the first integer-multiple relationship and the second integer-multiple relationship, wherein the second adjustment instruction is used to drive the moving mechanism to move the second camera unit along the X direction at least the first integer multiple of times, and to drive the moving mechanism to move the second camera unit along the Y direction at least the second integer multiple of times.
  21. The imaging method according to claim 20, wherein capturing at least one zoomed second image of the photographed target by the second camera unit while the second camera unit is being moved comprises: capturing, each time the second camera unit is moved, a zoomed second image of the photographed target by the second camera unit so as to obtain a plurality of zoomed second images;
    wherein fusing the first image and the zoomed second image to obtain a fused image comprises: fusing the first image and the plurality of zoomed second images to obtain the fused image.
  22. The imaging method according to claim 14, wherein generating an adjustment instruction based on the relative positional relationship between the to-be-processed area and the mapped image obtained by mapping the second image onto the first image comprises:
    determining the relative positional relationship between the center of the to-be-processed area and the center of the mapped image; and
    generating the adjustment instruction based on a pre-calibrated table that maps the relative position between the center of the to-be-processed area and the mapped image to the translation position of the second camera unit.
  23. The imaging method according to claim 14, wherein generating a second adjustment instruction based on the relative positional relationship between the to-be-processed area and the mapped image obtained by mapping the zoomed second image onto the first image comprises:
    determining the relative positional relationship between the center of the to-be-processed area and the center of the mapped image; and
    generating the second adjustment instruction based on a pre-calibrated table that maps the relative position between the center of the to-be-processed area and the mapped image to the translation position of the second camera unit.
PCT/CN2021/100025 2020-07-23 2021-06-15 多摄摄像模组、摄像系统、电子设备和成像方法 WO2022017069A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202180059015.3A CN116114243A (zh) 2020-07-23 2021-06-15 多摄摄像模组、摄像系统、电子设备和成像方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010717497.4 2020-07-23
CN202010717497.4A CN113973171B (zh) 2020-07-23 2020-07-23 多摄摄像模组、摄像系统、电子设备和成像方法

Publications (1)

Publication Number Publication Date
WO2022017069A1 true WO2022017069A1 (zh) 2022-01-27

Family

ID=79585435

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/100025 WO2022017069A1 (zh) 2020-07-23 2021-06-15 多摄摄像模组、摄像系统、电子设备和成像方法

Country Status (2)

Country Link
CN (2) CN113973171B (zh)
WO (1) WO2022017069A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117135450A (zh) * 2023-01-30 2023-11-28 荣耀终端有限公司 一种对焦方法及电子设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110122268A1 (en) * 2008-07-10 2011-05-26 Akihiro Okamoto Imaging device
US20130022342A1 (en) * 2011-07-19 2013-01-24 Elmo Company, Limited Imaging device and control method thereof
CN105827932A (zh) * 2015-06-30 2016-08-03 维沃移动通信有限公司 一种图像合成方法和移动终端
CN106357990A (zh) * 2016-08-29 2017-01-25 昆山丘钛微电子科技有限公司 具防抖功能的双摄像头装置
CN109309796A (zh) * 2017-07-27 2019-02-05 三星电子株式会社 使用多个相机获取图像的电子装置和用其处理图像的方法
CN110650330A (zh) * 2018-06-26 2020-01-03 宁波舜宇光电信息有限公司 阵列摄像模组测试方法及其标板装置
CN111345029A (zh) * 2019-05-30 2020-06-26 深圳市大疆创新科技有限公司 一种目标追踪方法、装置、可移动平台及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3463612B2 (ja) * 1999-01-21 2003-11-05 日本電気株式会社 Image input method, image input apparatus, and recording medium
EP2018049B1 (en) * 2007-07-18 2013-05-01 Samsung Electronics Co., Ltd. Method of assembling a panoramic image and camera therefor
CN103379256A (zh) * 2012-04-25 2013-10-30 华为终端有限公司 Image processing method and apparatus
CN110460783B (zh) * 2018-05-08 2021-01-26 宁波舜宇光电信息有限公司 Array camera module and image processing system, image processing method and electronic device thereof


Also Published As

Publication number Publication date
CN113973171A (zh) 2022-01-25
CN113973171B (zh) 2023-10-10
CN116114243A (zh) 2023-05-12

Similar Documents

Publication Publication Date Title
US10341567B2 (en) Photographing apparatus
US6829011B1 (en) Electronic imaging device
JP6486656B2 (ja) Imaging device
WO2011068139A1 (ja) Stereoscopic video imaging device
CN103154816A (zh) Variable three-dimensional camera assembly for still photography
JP2002277736A (ja) Imaging device
US9635347B2 (en) Stereoscopic relay optics
WO2003067323A1 (en) Digital camera with viewfinder designed for improved depth of field photographing
JPH11341522A (ja) Stereoscopic image photographing device
WO2021017683A1 (zh) Optical image stabilization device and control method
CN112019734B (zh) Image acquisition method and apparatus, electronic device, and computer-readable storage medium
US9635242B2 (en) Imaging apparatus
WO2022017069A1 (zh) Multi-camera camera module, camera system, electronic device, and imaging method
KR101889275B1 (ko) Monocular stereoscopic camera
US8228373B2 (en) 3-D camera rig with no-loss beamsplitter alternative
JP2000056412A (ja) Attachment for stereoscopic image photographing
WO2019065820A1 (ja) Imaging device, control method therefor, and control program
CN209710207U (zh) Camera module and electronic device
JP5293282B2 (ja) Digital camera photographing method and digital camera
JPH0946729A (ja) Stereoscopic imaging device
US20050041133A1 (en) Video-camera unit and adapter for a video-camera unit
GB2336444A (en) Image forming apparatus with intermediate image surface
CN114070997A (zh) Multi-camera camera module, camera system, electronic device, and auto-zoom imaging method
JP2002182272A (ja) Binocular observation and photographing device
JPH0993481A (ja) Interchangeable-lens camera system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21845661

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21845661

Country of ref document: EP

Kind code of ref document: A1