CN112532839B - Camera module, imaging method, imaging device and mobile equipment


Info

Publication number
CN112532839B
CN112532839B (application CN202011341376.0A)
Authority
CN
China
Prior art keywords
image
lens
target
main
image sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011341376.0A
Other languages
Chinese (zh)
Other versions
CN112532839A (en
Inventor
傅琦 (Fu Qi)
Current Assignee
Realme Mobile Telecommunications Shenzhen Co Ltd
Original Assignee
Realme Mobile Telecommunications Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Realme Mobile Telecommunications Shenzhen Co Ltd filed Critical Realme Mobile Telecommunications Shenzhen Co Ltd
Priority to CN202011341376.0A priority Critical patent/CN112532839B/en
Publication of CN112532839A publication Critical patent/CN112532839A/en
Priority to PCT/CN2021/123293 priority patent/WO2022111084A1/en
Application granted granted Critical
Publication of CN112532839B publication Critical patent/CN112532839B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/55Optical parts specially adapted for electronic image sensors; Mounting thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Abstract

The application discloses a camera module, an imaging method, an imaging device, a mobile device and a computer-readable storage medium. The method is applied to the camera module, which comprises at least two lenses that share the same image sensor; each lens corresponds to one area of the image sensor, and the areas corresponding to different lenses do not overlap. The method comprises the following steps: determining at least one target lens among the at least two lenses in a current shooting mode; obtaining an image presented by each target lens based on a corresponding region of the image sensor; and outputting a target image according to the image. Through this scheme, the camera module can output higher-quality images while the size and cost of the camera module are reduced.

Description

Camera module, imaging method, imaging device and mobile equipment
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to a camera module, an imaging method, an imaging apparatus, a mobile device, and a computer-readable storage medium.
Background
The mobile device industry has widely adopted multi-camera schemes. The multi-camera scheme employed by most mobile device manufacturers pairs each lens with its own image sensor. Considering the size and cost of the mobile device, manufacturers choose to pair a high-resolution main camera lens with a medium-resolution wide-angle lens, or pair a telephoto lens with a low-resolution depth-of-field/macro lens, so that the overall height and size of the resulting camera module fit within the mobile device. Currently, the resolution of the main camera lens tends to be higher and higher, and in order to integrate a higher-resolution image sensor into a mobile device, the size of a single pixel must shrink continually through process improvements.
Disclosure of Invention
The application provides a camera module, an imaging method, an imaging device, a mobile device and a computer readable storage medium, which can output images with higher quality under the condition of reducing the size and the cost of the camera module.
In a first aspect, the present application provides a camera module, comprising:
at least two lenses, wherein the at least two lenses share the same image sensor, each lens corresponds to one area of the image sensor, and the areas of the image sensor corresponding to different lenses do not overlap.
In a second aspect, the present application provides an imaging method based on the above camera module, including:
determining at least one target lens in the at least two lenses in a current shooting mode;
obtaining an image presented by each target lens based on a corresponding area of the image sensor;
and outputting the target image according to the image.
In a third aspect, the present application provides an imaging device based on the above camera module, including:
a determining unit configured to determine at least one target lens among the at least two lenses in a current photographing mode;
an acquisition unit for acquiring an image presented by each target lens based on a corresponding region of the image sensor;
and the output unit is used for outputting the target image according to the image.
In a fourth aspect, the present application provides a mobile device, where the mobile device includes the camera module, a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the method according to the first aspect when executing the computer program.
In a fifth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect.
In a sixth aspect, the present application provides a computer program product comprising a computer program that when executed by one or more processors performs the steps of the method of the first aspect as described above.
Compared with the prior art, the application has the following beneficial effects: by not providing a separate image sensor for each lens and instead matching multiple types of lenses to a single image sensor, manufacturers of mobile devices can increase the resolution of the single image sensor while maximizing the size of a single pixel in it. This structure saves the cost of the image sensor and reduces the volume required by the camera module, thereby saving structural space in the mobile device. In addition, where a mobile device manufacturer selects a large-format, high-resolution image sensor, sharing the image sensor among a plurality of lenses improves the resolution of the lenses other than the main camera, achieving better lens performance. Besides, the scheme also makes it possible for multiple lenses to work simultaneously. It is understood that the beneficial effects of the second to sixth aspects can be seen from the description of the first aspect, and are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic view of a structure of a camera module according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a flow chart of an implementation of an imaging method provided by an embodiment of the present application;
fig. 3-1 is an exemplary diagram of a structure of a camera module according to an embodiment of the present disclosure;
fig. 3-2 is an exemplary diagram of another structure of a camera module according to an embodiment of the present disclosure;
FIG. 4-1 is a schematic diagram of a main shot image provided by an embodiment of the present application;
FIG. 4-2 is a schematic diagram of a wide-angle image provided by an embodiment of the present application;
FIG. 4-3 is a schematic diagram of the main shot image and the wide-angle image provided by an embodiment of the present application after fusion;
fig. 5 is a block diagram of a structure of an imaging apparatus provided in an embodiment of the present application;
fig. 6 is a schematic diagram of a structure of a mobile device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution proposed in the present application, the following description will be given by way of specific examples.
Given the trend of ever-higher main camera lens resolution, an embodiment of the present application first provides a camera module. The camera module includes at least two lenses that share the same image sensor; each lens corresponds to one area of the image sensor, and the areas corresponding to different lenses do not overlap. To accommodate current user shooting requirements, the image sensor may be a high-resolution image sensor with a large single-pixel size. Referring to fig. 1, fig. 1 shows a schematic diagram of a camera module according to an embodiment of the present application, taking four lenses as an example. Fig. 1 is explained below:
In fig. 1, four lenses are shown: lens 1, lens 2, lens 3, and lens 4, each surrounded by a lens frame. For example only, the four lenses may be a main camera lens, a wide-angle lens, a 5× telephoto lens, and a macro lens, respectively; alternatively, they may be a main camera lens, a wide-angle lens, a 2× telephoto lens, and a 5× telephoto lens, which is not limited herein.
All four lenses share the same image sensor 5. Specifically, the image sensor 5 is divided into five parts according to the projection area of each lens on the image sensor: imaging area 51, imaging area 52, imaging area 53, imaging area 54, and non-imaging area 55. Area 51 is the area on the image sensor 5 corresponding to lens 1, area 52 corresponds to lens 2, area 53 corresponds to lens 3, and area 54 corresponds to lens 4. The imaging process of the camera module is explained below:
When imaging is needed, the entire imaging surface of the image sensor 5 is read out, that is, the output of all pixels of the image sensor 5 is obtained; the output is then cropped according to the region of each lens, yielding the image of the area corresponding to each lens. Based on this, the imaging result of the pixels in area 51 is the image output by lens 1, that of area 52 is the image output by lens 2, that of area 53 is the image output by lens 3, and that of area 54 is the image output by lens 4. That is, theoretically, four images with different lens imaging effects can be obtained at once, and, given the high-resolution image sensor, the resolution of the four images is correspondingly high. In practical applications, depending on imaging requirements, multiple areas of the image sensor can output images simultaneously, or only a single area may output an image.
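The readout-then-crop process above can be sketched as follows. The region coordinates and the tiny 8×8 "sensor" are illustrative assumptions for the sketch, not values from the embodiment.

```python
# Minimal sketch of cropping one full readout of the shared sensor into
# per-lens images. Region geometry is hypothetical (not from the patent).

# Each lens maps to a fixed, non-overlapping rectangle on the sensor:
# (row_start, row_end, col_start, col_end), end indices exclusive.
LENS_REGIONS = {
    "lens1": (0, 4, 0, 4),  # e.g. area 51, top-left
    "lens2": (0, 4, 4, 8),  # e.g. area 52, top-right
    "lens3": (4, 8, 0, 4),  # e.g. area 53, bottom-left
    "lens4": (4, 8, 4, 8),  # e.g. area 54, bottom-right
}

def crop_lens_images(frame, target_lenses):
    """Cut one full-sensor readout (a list of pixel rows) into the image
    formed behind each requested lens."""
    images = {}
    for name in target_lenses:
        r0, r1, c0, c1 = LENS_REGIONS[name]
        images[name] = [row[c0:c1] for row in frame[r0:r1]]
    return images

# One 8x8 "readout" where each pixel records its own (row, col) coordinate.
frame = [[(r, c) for c in range(8)] for r in range(8)]
images = crop_lens_images(frame, ["lens1", "lens3"])
```

Reading the whole surface once and cropping afterwards is what lets several regions "output" simultaneously from a single exposure.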
In a zoom scenario, based on this camera module, the images presented by all lenses can be read out with low latency; therefore, compared with the current mainstream multi-camera scheme of pairing each lens with its own image sensor, the embodiment of the present application can achieve seamless, fast zooming while keeping the picture sharp and minimizing picture loss.
In a bokeh (background-blur) scenario, based on this camera module, the data of any two lenses can be used for depth calculation, while the data of the other two lenses is used for verification to improve the accuracy of the depth calculation. For example, lens 1 and lens 2 form one group and lens 3 and lens 4 form another, where lens 1 and lens 2 are used for depth calculation and lens 3 and lens 4 are used for verification. Specifically: positioning compensation is applied to the depth results of lens 1 and lens 2 according to the calculation result of lens 3, and again according to the calculation result of lens 4; the mean of the compensated depth results is then calculated, the background area in the image is identified based on this mean, and the background area is blurred. This procedure can reduce the error of the depth calculation by roughly two-thirds.
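A minimal numeric sketch of the compensate-then-average step above, under the simplifying assumption that the positioning compensation from each verification lens reduces to a per-pixel offset (the embodiment does not specify the compensation's exact form):

```python
# Hypothetical sketch: depth from the lens-1/lens-2 pair, positioning
# compensation from lens 3 and from lens 4, then the mean of the two
# compensated maps. Depth maps are flat lists for simplicity.

def fused_depth(depth_12, offset_from_3, offset_from_4):
    """Compensate the stereo depth map with each verification lens's
    offset, then average the two compensated results."""
    comp_3 = [d + offset_from_3 for d in depth_12]
    comp_4 = [d + offset_from_4 for d in depth_12]
    return [(a + b) / 2 for a, b in zip(comp_3, comp_4)]

def background_mask(depth, threshold):
    """Pixels deeper than the threshold are treated as background to blur."""
    return [d > threshold for d in depth]

depth = fused_depth([1.0, 2.0, 8.0], offset_from_3=0.2, offset_from_4=-0.2)
mask = background_mask(depth, threshold=5.0)  # blur only the far pixels
```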
The following describes in detail an imaging method proposed by the embodiment of the present application based on the camera module. Referring to fig. 2, the imaging method includes:
in step 201, in the current shooting mode, at least one target lens is determined in the at least two lenses.
In the embodiment of the application, the camera module makes it possible for multiple lenses to work simultaneously. The mobile device can select one, two, or more lenses in the camera module as target lenses according to the current shooting mode.
For example, the current shooting mode, and thus the target lens, may be determined according to the currently adopted zoom magnification and the current lighting conditions. For example only, under sufficient illumination, the imaging quality of each lens working independently is not greatly affected, so a single lens can be selected as the target lens according to the zoom magnification: at low magnification, the wide-angle lens may be determined to be the target lens; at normal magnification, the main camera lens; at high magnification, the telephoto lens. Under insufficient illumination, the imaging quality of a lens working independently may suffer, so two lenses can be selected as target lenses according to the zoom magnification: at low magnification, the wide-angle lens and the main camera lens; at normal magnification, the main camera lens and any other lens; at high magnification, the telephoto lens and the main camera lens.
As another example, a user may define a plurality of shooting modes on the mobile device in advance according to their shooting habits, and configure the lenses to be used for each shooting mode; that is, the user may customize at least one shooting mode and pre-configure the target lenses for each. During shooting, the target lens can then be determined directly from the current shooting mode. For example, if a user frequently shoots architectural details against a city background, a shooting mode 1 may be set whose configured target lenses are the main camera lens and the telephoto lens, so as to obtain images rich in detail.
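The two selection policies above (by zoom magnification and lighting, or by a user-defined mode) can be sketched as follows; the zoom thresholds and lens names are illustrative assumptions, not values given in the embodiment:

```python
# Hypothetical policy mapping (zoom magnification, lighting) to target
# lenses, following the examples in the text. Thresholds are made up.

def select_target_lenses(zoom, well_lit, custom_modes=None, mode=None):
    """Return the target lens list for the current shooting conditions.
    A matching user-defined mode, if given, wins outright."""
    if custom_modes and mode in custom_modes:
        return custom_modes[mode]
    if zoom < 1.0:
        primary = "wide"       # low magnification
    elif zoom < 3.0:
        primary = "main"       # normal magnification
    else:
        primary = "tele"       # high magnification
    if well_lit:
        return [primary]       # one lens suffices in good light
    # In low light, pair the primary with the main camera (or with the
    # wide-angle lens when the primary already is the main camera).
    helper = "main" if primary != "main" else "wide"
    return [primary, helper]

modes = {"mode1": ["main", "tele"]}  # e.g. city architectural details
lenses = select_target_lenses(5.0, well_lit=False)
```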
In step 202, an image presented by each target lens based on the corresponding area of the image sensor is obtained.
In the embodiment of the present application, as can be seen from the foregoing description of the imaging process of the camera module, the mobile device can obtain an image presented by each target lens based on the corresponding area of the image sensor. For example only, when the target lens includes a main shot lens, a corresponding image, that is, a main shot image, is output; when the target lens comprises a wide-angle lens, a corresponding image, namely a wide-angle image, is output; when the target lens comprises a telephoto lens, a corresponding image, namely a telephoto image, is output.
In some embodiments, to save system resources, the provision of system resources to the non-target lenses (i.e., the lenses of the at least two lenses of the camera module other than the target lenses) may be suspended after step 201, since the non-target lenses may not contribute to subsequent imaging.
For example, the column-select analog-to-digital converters (column ADCs) used by the areas of the image sensor corresponding to the non-target lenses may be turned off, while the column ADCs used by the areas corresponding to the target lenses remain on. Taking fig. 1 as an example, assume lens 1 is determined to be the target lens, so lenses 2, 3, and 4 are all non-target lenses. However, since the areas of lens 1 and lens 3 share column ADCs, only the column ADCs used by the areas of lens 2 and lens 4 are turned off, so that those used by the area of lens 1 stay on.
As another example, the buffer space corresponding to the non-target lenses may be reduced. Originally, each lens corresponds to a buffer space for caching its output. Since the non-target lenses may not influence subsequent imaging, their buffer space can be reduced and the freed space given to the target lenses. That is, buffer space is reallocated from an even split across lenses toward the target lenses.
It should be noted that the measures for suspending the provision of system resources to the non-target lenses may be applied together (both turning off the column ADCs used by the non-target lenses' areas of the image sensor and reducing the non-target lenses' buffer space) or individually (either turning off those column ADCs or reducing that buffer space), which is not limited herein.
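The column-ADC shutdown rule, including the shared-column constraint from the lens-1/lens-3 example, can be sketched as follows; the column spans are illustrative assumptions:

```python
# Hypothetical sketch: a column ADC may be powered down only if no target
# lens reads from any of its columns; regions stacked in the same columns
# (like the areas of lens 1 and lens 3 in fig. 1) share column ADCs.

REGION_COLS = {
    "lens1": (0, 4),  # left half of the sensor
    "lens2": (4, 8),  # right half
    "lens3": (0, 4),  # shares columns (hence column ADCs) with lens 1
    "lens4": (4, 8),
}

def columns_to_disable(targets):
    """Columns whose ADCs can be switched off for the given target lenses."""
    keep = set()
    for name in targets:
        c0, c1 = REGION_COLS[name]
        keep.update(range(c0, c1))
    all_cols = set()
    for c0, c1 in REGION_COLS.values():
        all_cols.update(range(c0, c1))
    return sorted(all_cols - keep)

# With lens 1 as the only target, the shared left columns must stay on,
# so only the right-half column ADCs (used by lenses 2 and 4) go off.
disabled = columns_to_disable(["lens1"])
```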
Step 203, outputting the target image according to the image.
In the embodiment of the present application, the mobile device may output the target image according to the image obtained in step 202. Specifically, if only one target lens is determined, step 202 obtains only one image, presented based on that target lens; the mobile device may then directly output that image to its preview interface for the user to review. Conversely, if two or more target lenses are determined, step 202 obtains the images respectively presented based on those target lenses, that is, two or more images; the mobile device may then first fuse these images and output the fused image to its preview interface. It should be noted that, in an application scenario where the camera module includes a main camera lens, a telephoto lens, a macro lens, and a wide-angle lens, the field angles of any three lenses differ too greatly, so the images presented by different lenses differ too greatly; this not only requires a large amount of computation for image fusion but also makes it difficult to obtain a fused image of good quality. Based on this, in such a scenario the number of target lenses is preferably no more than two.
In some embodiments, where there are exactly two target lenses and the images presented based on them need to be fused, the two target lenses may be recorded as a first target lens and a second target lens, and step 202 may be embodied as: obtaining a first image presented by the first target lens based on a corresponding area of the image sensor, and obtaining a second image presented by the second target lens based on a corresponding area of the image sensor. Accordingly, step 203 may be embodied as: fusing the first image and the second image to obtain and output the target image. Considering that the first image and the second image may cover different imaging areas due to the difference in field angle between the first target lens and the second target lens, a main image may be determined among the first image and the second image before fusion. There are two cases: in the first case, the image with the smaller field angle is determined to be the main image; the non-main image may then be cropped, and the cropped non-main image fused with the main image. In the second case, the image with the larger field angle is determined to be the main image; resolution-enhancement processing may then be performed on the non-main image, and the processed non-main image fused with the main image. Here, the non-main image is whichever of the first image and the second image is not determined to be the main image.
For example, suppose the first target lens is the main camera lens and the second target lens is the wide-angle lens; obviously, the field angle of the wide-angle lens is larger than that of the main camera lens. If the mobile device mainly needs the wide-angle lens at this time, for example when the current zoom magnification lies between 0.5x and 1x but closer to 0.5x, the wide-angle image output by the wide-angle lens is the main image and the main shot image output by the main camera lens is the non-main image; resolution-enhancement processing may then be performed on the edge areas of objects in the main shot image through a super-resolution algorithm, and the wide-angle image fused with the processed main shot image. Conversely, if the mobile device mainly needs the main camera lens, for example when the current zoom magnification lies between 0.5x and 1x but closer to 1x, the main shot image output by the main camera lens is the main image and the wide-angle image output by the wide-angle lens is the non-main image; the wide-angle image may then be cropped based on the shooting range of the main shot image, and the main shot image fused with the cropped wide-angle image.
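The two fusion cases can be reduced to a small decision helper; the field-angle values in the example are illustrative assumptions:

```python
# Hypothetical sketch of the pre-fusion decision: crop the non-main image
# when the main image has the smaller field angle; upscale (super-resolve)
# the non-main image when the main image has the larger field angle.

def prepare_non_main(main_fov_deg, non_main_fov_deg):
    """Return which operation the non-main image undergoes before fusion."""
    if main_fov_deg < non_main_fov_deg:
        return "crop"     # case 1: main image has the smaller field angle
    return "upscale"      # case 2: main image has the larger field angle

# Zoom near 1x: the main shot image (~80 deg) is the main image, so the
# wide-angle image (~120 deg) is cropped to the main shot's range.
near_1x = prepare_non_main(main_fov_deg=80, non_main_fov_deg=120)
# Zoom near 0.5x: the wide-angle image is the main image, so the main
# shot image is super-resolved before fusion.
near_half_x = prepare_non_main(main_fov_deg=120, non_main_fov_deg=80)
```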
In some embodiments, for a dark scene, a binned wide-angle output plus a full-size main shot output can be captured in a single efficient pass, yielding an image with high dynamic range and high detail restoration. For readout speed, the main camera lens and the wide-angle lens may be arranged one above the other, that is, corresponding to different regions in the same columns of the image sensor. As shown in fig. 3-1, lens 1 in the camera module may be the main camera lens and lens 3 the wide-angle lens, occupying the upper-left and lower-left positions of the camera module respectively; alternatively, as shown in fig. 3-2, lens 2 may be the main camera lens and lens 4 the wide-angle lens, occupying the upper-right and lower-right positions respectively. As long as the area of the image sensor corresponding to the main camera lens does not share rows with the area corresponding to the wide-angle lens, the main shot image and the wide-angle image can be output simultaneously; conversely, if the two areas share rows, such as upper-right and upper-left, the output requires two reads, which may affect the frame rate.
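The row-sharing constraint on simultaneous readout can be sketched as an overlap test on row spans; the spans below are illustrative assumptions:

```python
# Hypothetical sketch: two lens regions can be output in a single read
# only if their row spans do not overlap (vertically stacked layout);
# regions sharing rows (side by side) require two reads.

def needs_two_reads(rows_a, rows_b):
    """True if the two half-open row spans overlap, forcing a second read."""
    a0, a1 = rows_a
    b0, b1 = rows_b
    return a0 < b1 and b0 < a1

# Main camera on top, wide-angle below (fig. 3-1 layout): one read.
stacked = needs_two_reads((0, 4), (4, 8))
# Main camera upper-left, wide-angle upper-right: same rows, two reads.
side_by_side = needs_two_reads((0, 4), (0, 4))
```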
Referring to fig. 4-1, fig. 4-1 illustrates a high-resolution full-size main shot image presented based on the main camera lens; referring to fig. 4-2, fig. 4-2 shows a binned high-sensitivity wide-angle image presented based on the wide-angle lens; referring to fig. 4-3, fig. 4-3 shows that after the wide-angle image and the main shot image are fused, the image retains high brightness and good detail even in a dark scene. In general, fusing a main shot image with a wide-angle image can improve the edge-distortion correction of the final output image, while fusing a main shot image with a telephoto image can generate super-resolution image detail.
As can be seen from the above, in the embodiment of the present application a separate image sensor is not provided for each lens; instead, a single image sensor is matched to multiple types of lenses. On this basis, a mobile device manufacturer can increase the resolution of the single image sensor while maximizing the size of a single pixel in it. This structure saves the cost of the image sensor and reduces the volume required by the camera module, thereby saving structural space in the mobile device. Moreover, where the manufacturer selects a large-format, high-resolution image sensor, sharing it among a plurality of lenses improves the resolution of the lenses other than the main camera, achieving better lens performance. During imaging, multiple lenses can work simultaneously, and either the image presented by one lens is selected for output or the images presented by multiple lenses are fused and then output; alternatively, a single lens works, and the image it presents is obtained and output. That is, the embodiment of the application makes it possible for multiple cameras to work simultaneously.
The embodiment of the present application provides an imaging apparatus corresponding to the imaging method set forth above. The imaging device is also implemented based on the camera module, and can be integrated into a mobile device. Referring to fig. 5, an image forming apparatus 500 according to an embodiment of the present application includes:
a determining unit 501, configured to determine at least one target lens in the at least two lenses in a current shooting mode;
an obtaining unit 502, configured to obtain an image presented by each target lens based on a corresponding area of the image sensor;
an output unit 503, configured to output the target image according to the image.
Optionally, the imaging apparatus 500 further includes:
a management unit, configured to suspend the provision of system resources to a non-target lens after the determining unit 501 determines the target lens among the at least two lenses, where the non-target lens is a lens of the at least two lenses other than the target lens.
Optionally, the management unit is specifically configured to turn off the column-select analog-to-digital converters used by the area of the image sensor corresponding to the non-target lens; and/or reduce the buffer space corresponding to the non-target lens.
Optionally, if two or more target lenses are determined, the output unit 503 is specifically configured to fuse the images presented by each target lens based on a corresponding region of the image sensor, and obtain and output the target image.
Optionally, if two target lenses are determined, the two target lenses are respectively recorded as a first target lens and a second target lens; the output unit 503 includes:
a main-image determining sub-unit, configured to determine a main image among a first image and a second image, the first image being the image presented by the first target lens based on a corresponding area of the image sensor, and the second image being the image presented by the second target lens based on a corresponding area of the image sensor;
a first processing sub-unit, configured to, if the image with the smaller field angle among the first image and the second image is determined to be the main image, crop the non-main image and fuse the cropped non-main image with the main image;
a second processing sub-unit, configured to, if the image with the larger field angle among the first image and the second image is determined to be the main image, perform resolution-enhancement processing on the non-main image and fuse the processed non-main image with the main image;
wherein the non-main image is whichever of the first image and the second image is not determined to be the main image.
As can be seen from the above, in the embodiments of the present application a separate image sensor is not provided for each lens; instead, a single image sensor is matched to multiple types of lenses. On this basis, a mobile device manufacturer can increase the resolution of the single image sensor while maximizing the size of a single pixel in it. This structure saves the cost of the image sensor and reduces the volume required by the camera module, thereby saving structural space in the mobile device. Moreover, where the manufacturer selects a large-format, high-resolution image sensor, sharing it among a plurality of lenses improves the resolution of the lenses other than the main camera, achieving better lens performance. During imaging, multiple lenses can work simultaneously, and either the image presented by one lens is selected for output or the images presented by multiple lenses are fused and then output; alternatively, a single lens works, and the image it presents is obtained and output. That is, the embodiments of the application make it possible for multiple cameras to work simultaneously.
An embodiment of the present application further provides a mobile device. Referring to fig. 6, the mobile device 6 in this embodiment includes: a camera module 601, a memory 602, one or more processors 603 (only one is shown in fig. 6), and a computer program stored on the memory 602 and executable on the processors. The camera module 601 includes at least two lenses that share the same image sensor, each lens corresponds to one area of the image sensor, and the areas of the image sensor corresponding to different lenses do not overlap. The memory 602 is used for storing software programs and units, and the processor 603 executes various functional applications and performs data processing by running the software programs and units stored in the memory 602. Specifically, the processor 603 implements the following steps by running the above computer program stored in the memory 602:
determining at least one target lens in the at least two lenses in a current shooting mode;
obtaining an image presented by each target lens based on a corresponding area of the image sensor;
and outputting the target image according to the image.
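As an illustration of these three steps, the following toy model assumes a hypothetical mapping of shooting modes to target lenses and hypothetical non-overlapping sensor regions; the region coordinates, mode names, and averaging fusion are all invented for the sketch and are not taken from the embodiment.

```python
import numpy as np

# Hypothetical non-overlapping regions (top, left, height, width) on one sensor.
LENS_REGIONS = {
    "main": (0, 0, 4, 4),
    "wide": (0, 4, 4, 4),
    "tele": (4, 0, 4, 4),
    "macro": (4, 4, 4, 4),
}

# Assumed mapping from the current shooting mode to the target lenses.
MODE_TO_TARGETS = {
    "photo": ["main"],
    "zoom": ["main", "tele"],
}

def read_region(sensor, lens):
    """Read the sub-array of the shared sensor assigned to one lens."""
    top, left, h, w = LENS_REGIONS[lens]
    return sensor[top:top + h, left:left + w]

def capture(sensor, mode):
    """Obtain each target lens's image from its sensor region, then output."""
    targets = MODE_TO_TARGETS[mode]
    images = [read_region(sensor, lens) for lens in targets]
    if len(images) == 1:
        return images[0]          # single target lens: output its image directly
    # Multiple target lenses: fuse (here a simple average of same-sized regions).
    return np.mean(np.stack(images), axis=0)
```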
Assuming that the above is a first possible implementation, in a second possible implementation provided on the basis of the first, after determining the target lens among the at least two lenses, the processor 603 further implements the following step when running the above computer program stored in the memory 602:
pausing the provision of system resources to a non-target lens, wherein a non-target lens is a lens among the at least two lenses other than the target lens.
In a third possible implementation provided on the basis of the second, the pausing of the provision of system resources to the non-target lens includes:
turning off the column-select analog-to-digital converters used in the area of the image sensor corresponding to the non-target lens; and/or reducing the cache space allocated to the non-target lens.
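A toy model of this resource pausing is sketched below. The lens names, buffer sizes, and a boolean standing in for the column-ADC power state are all assumptions; a real driver would program sensor registers and reconfigure the ISP pipeline instead.

```python
class SensorRegionController:
    """Toy model of per-region resources on a sensor shared by several lenses."""

    def __init__(self, lenses, buffer_kb=512):
        # One ADC switch and one cache allocation per lens region.
        self.adc_on = {lens: True for lens in lenses}
        self.buffer_kb = {lens: buffer_kb for lens in lenses}

    def pause_non_targets(self, targets, reduced_kb=64):
        """Stop feeding resources to lenses that are not currently targeted."""
        for lens in self.adc_on:
            if lens not in targets:
                self.adc_on[lens] = False          # turn off the region's column ADCs
                self.buffer_kb[lens] = reduced_kb  # shrink its cache allocation
```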
In a fourth possible implementation provided on the basis of the first, if two or more target lenses are determined, the outputting of the target image according to the images includes:
fusing the images presented by each target lens on its corresponding area of the image sensor to obtain and output the target image.
In a fifth possible implementation provided on the basis of the fourth, if two target lenses are determined, the two target lenses being respectively denoted as a first target lens and a second target lens, the fusing of the images presented by each target lens on its corresponding area of the image sensor to obtain and output the target image includes:
determining a main image from a first image and a second image, wherein the first image is the image presented by the first target lens on its corresponding area of the image sensor, and the second image is the image presented by the second target lens on its corresponding area of the image sensor;
if the image with the smaller field angle of the first image and the second image is determined to be the main image, cropping the non-main image and fusing the cropped non-main image with the main image;
if the image with the larger field angle of the first image and the second image is determined to be the main image, performing resolution enhancement on the non-main image and fusing the enhanced non-main image with the main image;
wherein the non-main image is the one of the first image and the second image that is not determined to be the main image.
It should be understood that, in the embodiments of the present application, the processor 603 may be a central processing unit (CPU); the processor may also be another general-purpose processor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 602 may include read-only memory and random access memory, and provides instructions and data to the processor 603. Part or all of the memory 602 may also include non-volatile random access memory. For example, the memory 602 may also store information on the device type.
As can be seen from the above, in the embodiments of the present application, instead of providing a separate image sensor for each lens, a single image sensor is shared by multiple types of lenses. On this basis, a manufacturer of a mobile device can maximize both the resolution of that single image sensor and the size of its individual pixels. This structure saves the cost of additional image sensors and reduces the volume required by the camera module, thereby saving structural space in the mobile device. Moreover, when the manufacturer selects a large-format, high-resolution image sensor, sharing it among multiple lenses improves the resolution of the lenses other than the main camera, yielding better performance from every lens. During imaging, multiple lenses can work simultaneously, and either the image presented by one lens is selected for output or the images presented by the multiple lenses are fused and then output; alternatively, a single lens works, and the image it presents is obtained and output. In other words, the embodiments of the present application make simultaneous operation of multiple cameras possible.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the above-described modules or units is only one logical functional division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on such an understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer-readable memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer-readable storage medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (9)

1. A camera module, characterized in that it comprises: four lenses sharing the same image sensor, wherein each lens corresponds to one area of the image sensor, and the areas of the image sensor corresponding to different lenses do not overlap; the four lenses comprise a main lens, a wide-angle lens, a telephoto lens and a macro lens; the main lens and the wide-angle lens respectively correspond to different areas belonging to the same row of the image sensor; and in a blurring scene, the camera module performs depth calculation using a first lens and a second lens while data of a third lens and a fourth lens are used for checking, wherein the depth calculation result of the first lens and the depth calculation result of the second lens are each subjected to positioning compensation using the checking result of the third lens, the depth calculation result of the first lens and the depth calculation result of the second lens are each subjected to positioning compensation using the checking result of the fourth lens, a mean value of the positioning-compensated depth calculation results is then computed, and a background area in an image is identified based on the mean value so that the background area is blurred.
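For illustration only, the blurring pipeline described in claim 1 might be sketched as below. The additive form of the positioning compensation, the threshold-based background mask, and the box blur are assumptions of the sketch, not part of the claimed method.

```python
import numpy as np

def blur_background(image, depth_a, depth_b, comp_c, comp_d, thresh):
    """Average two compensated depth estimates, then blur far regions."""
    # Positioning compensation (assumed additive here) of each depth result
    # by the checking results of the third and fourth lenses.
    da = depth_a + comp_c + comp_d
    db = depth_b + comp_c + comp_d
    depth = (da + db) / 2.0             # mean of the compensated depth results
    background = depth > thresh         # assumed rule: far pixels are background
    blurred = image.copy().astype(np.float32)
    # Crude 3x3 box blur, applied only where the background mask is set.
    padded = np.pad(image.astype(np.float32), 1, mode="edge")
    box = sum(padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
              for dy in range(3) for dx in range(3)) / 9.0
    blurred[background] = box[background]
    return blurred, background
```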
2. An imaging method based on the camera module according to claim 1, comprising:
determining at least one target lens among the at least two lenses in a current shooting mode;
obtaining an image presented by each target lens based on a corresponding region of the image sensor;
and outputting a target image according to the image.
3. The imaging method of claim 2, wherein, after the determining of the target lens among the at least two lenses, the imaging method further comprises:
pausing provision of system resources to a non-target lens, wherein the non-target lens is a lens of the at least two lenses other than the target lens.
4. The imaging method of claim 3, wherein the pausing of the provision of system resources to the non-target lens comprises:
turning off a column-select analog-to-digital converter used in the area of the image sensor corresponding to the non-target lens; and/or reducing the cache space corresponding to the non-target lens.
5. The imaging method of claim 2, wherein, if two or more target lenses are determined, the outputting of the target image according to the image comprises:
and fusing images presented by each target lens based on the corresponding area of the image sensor to obtain and output the target image.
6. The imaging method according to claim 5, wherein, if two target lenses are determined, the two target lenses being respectively denoted as a first target lens and a second target lens, the fusing of the images presented by each target lens on its corresponding area of the image sensor to obtain and output the target image comprises:
determining a main image from a first image and a second image, wherein the first image is the image presented by the first target lens on its corresponding area of the image sensor, and the second image is the image presented by the second target lens on its corresponding area of the image sensor;
if the image with the smaller field angle of the first image and the second image is determined to be the main image, cropping the non-main image and fusing the cropped non-main image with the main image;
if the image with the larger field angle of the first image and the second image is determined to be the main image, performing resolution enhancement on the non-main image and fusing the enhanced non-main image with the main image;
wherein the non-main image refers to the one of the first image and the second image that is not determined to be the main image.
7. An imaging device based on the camera module according to claim 1, comprising:
a determining unit for determining at least one target lens among the at least two lenses in a current photographing mode;
an acquisition unit configured to acquire an image presented by each target lens based on a corresponding region of the image sensor;
and the output unit is used for outputting the target image according to the image.
8. A mobile device comprising a camera module according to claim 1, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method of any one of claims 2 to 6.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 2 to 6.
CN202011341376.0A 2020-11-25 2020-11-25 Camera module, imaging method, imaging device and mobile equipment Active CN112532839B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011341376.0A CN112532839B (en) 2020-11-25 2020-11-25 Camera module, imaging method, imaging device and mobile equipment
PCT/CN2021/123293 WO2022111084A1 (en) 2020-11-25 2021-10-12 Camera module, imaging method, imaging apparatus, and mobile device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011341376.0A CN112532839B (en) 2020-11-25 2020-11-25 Camera module, imaging method, imaging device and mobile equipment

Publications (2)

Publication Number Publication Date
CN112532839A CN112532839A (en) 2021-03-19
CN112532839B true CN112532839B (en) 2022-05-27

Family

ID=74993508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011341376.0A Active CN112532839B (en) 2020-11-25 2020-11-25 Camera module, imaging method, imaging device and mobile equipment

Country Status (2)

Country Link
CN (1) CN112532839B (en)
WO (1) WO2022111084A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112532839B (en) * 2020-11-25 2022-05-27 深圳市锐尔觅移动通信有限公司 Camera module, imaging method, imaging device and mobile equipment
WO2023236162A1 (en) * 2022-06-09 2023-12-14 北京小米移动软件有限公司 Camera module, image processing method and apparatus, terminal, electronic device and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104935915A (en) * 2015-07-17 2015-09-23 珠海康弘发展有限公司 Imaging device and three-dimensional imaging system and method
US9497367B1 (en) * 2015-07-22 2016-11-15 Ic Real Tech, Inc Maximizing effective surface area of a rectangular image sensor concurrently capturing image data from two lenses
CN107370933A (en) * 2017-09-19 2017-11-21 信利光电股份有限公司 A kind of multi-cam module
CN108419008A (en) * 2018-01-30 2018-08-17 努比亚技术有限公司 A kind of image pickup method, terminal and computer readable storage medium
CN109194881A (en) * 2018-11-29 2019-01-11 珠海格力电器股份有限公司 Image processing method, system and terminal
CN111147755A (en) * 2020-01-02 2020-05-12 普联技术有限公司 Zoom processing method and device for double cameras and terminal equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9696427B2 (en) * 2012-08-14 2017-07-04 Microsoft Technology Licensing, Llc Wide angle depth detection
US20160073094A1 (en) * 2014-09-05 2016-03-10 Microsoft Corporation Depth map enhancement
CN108234858B (en) * 2017-05-19 2020-05-01 深圳市商汤科技有限公司 Image blurring processing method and device, storage medium and electronic equipment
CN107222737B (en) * 2017-07-26 2019-05-17 维沃移动通信有限公司 A kind of processing method and mobile terminal of depth image data
CN111741283A (en) * 2019-03-25 2020-10-02 华为技术有限公司 Image processing apparatus and method
CN110336993B (en) * 2019-07-02 2021-07-09 Oppo广东移动通信有限公司 Depth camera control method and device, electronic equipment and storage medium
CN110675456B (en) * 2019-09-18 2020-06-16 深圳普罗米修斯视觉技术有限公司 Method and device for calibrating external parameters of multi-depth camera and storage medium
CN112532839B (en) * 2020-11-25 2022-05-27 深圳市锐尔觅移动通信有限公司 Camera module, imaging method, imaging device and mobile equipment


Also Published As

Publication number Publication date
WO2022111084A1 (en) 2022-06-02
CN112532839A (en) 2021-03-19

Similar Documents

Publication Publication Date Title
US10412298B2 (en) Control method, control device and electronic device
WO2021073331A1 (en) Zoom blurred image acquiring method and device based on terminal device
KR102306272B1 (en) Dual camera-based imaging method, mobile terminal and storage medium
KR102306283B1 (en) Image processing method and device
CN110505411B (en) Image shooting method and device, storage medium and electronic equipment
CN110536057B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110493525B (en) Zoom image determination method and device, storage medium and terminal
CN112532839B (en) Camera module, imaging method, imaging device and mobile equipment
KR102209066B1 (en) Method and apparatus for image composition using multiple focal length
US20110261993A1 (en) Well focused catadioptric image acquisition
US11184518B2 (en) Focusing method using compensated FV value, storage medium and mobile phone for performing the same
CN109923850B (en) Image capturing device and method
US20190253593A1 (en) Photographing Method for Terminal and Terminal
KR20200031169A (en) Image processing method and device
CN112261387B (en) Image fusion method and device for multi-camera module, storage medium and mobile terminal
US8929685B2 (en) Device having image reconstructing function, method, and recording medium
WO2022007851A1 (en) Image reconstruction method and apparatus
CN107454328B (en) Image processing method, device, computer readable storage medium and computer equipment
US20220392036A1 (en) Image fusion
CN110930440B (en) Image alignment method, device, storage medium and electronic equipment
CN108307114B (en) Image processing method and device, storage medium and electronic equipment
CN112351193A (en) Zooming method based on time sequence control, image acquisition equipment and storage medium
CN111385466A (en) Automatic focusing method, device, equipment and storage medium
CN110930340B (en) Image processing method and device
JP6896181B2 (en) Image sensor, image sensor, image data processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant