CN114827432A - Focusing method and system, mobile terminal and readable storage medium - Google Patents

Focusing method and system, mobile terminal and readable storage medium

Info

Publication number
CN114827432A
Authority
CN
China
Prior art keywords
focusing
image
shot
images
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110112384.6A
Other languages
Chinese (zh)
Inventor
Khurshid Ali
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oneplus Technology Shenzhen Co Ltd
Original Assignee
Oneplus Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oneplus Technology Shenzhen Co Ltd filed Critical Oneplus Technology Shenzhen Co Ltd
Priority to CN202110112384.6A
Publication of CN114827432A
Legal status: Pending (current)


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/675 Focus control based on electronic image sensor signals comprising setting of focusing regions
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation

Abstract

The invention discloses a focusing method and system, a mobile terminal and a readable storage medium, which can increase the number of in-focus objects in an image. The focusing method includes the following steps: acquiring focus shot images from at least two cameras, wherein at least two of the focus shot images have different focus objects; performing image registration on each focus shot image; and fusing the registered focus shot images into an output image, wherein the output image contains the focus objects of the focus shot images.

Description

Focusing method and system, mobile terminal and readable storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to a focusing method and system, a mobile terminal and a readable storage medium.
Background
Focusing, also called focus adjustment, refers to the process of adjusting the focal distance so that the image is sharp when shooting with a camera.
Existing focusing methods can focus on a target, such as an object or a human face, in the photographed area. Because a camera can focus on only one object at a time, if there are multiple objects in the scene, focusing on one of them leaves the other objects out of focus.
Disclosure of Invention
The embodiments of the present application provide a focusing method and system, a mobile terminal and a readable storage medium, which can increase the number of in-focus objects in an image.
In a first aspect, a focusing method is provided, which includes:
acquiring focus shot images from at least two cameras, wherein at least two of the focus shot images have different focus objects;
performing image registration on each focus shot image;
and fusing the registered focus shot images into an output image, wherein the output image contains the focus objects of the focus shot images.
In one embodiment, the focus shot images each contain at least two objects in common.
In one embodiment, the focus shot images of different cameras overlap in at least part of their image areas, and the step of performing image registration on each focus shot image includes: acquiring the one-to-one correspondence of the overlapping areas across the focus shot images;
selecting one focus shot image as the reference image and acquiring a homography matrix according to the one-to-one correspondence, wherein the homography matrix contains scaling, translation and/or rotation information of each remaining focus shot image relative to the reference image;
and performing a homography transformation on the remaining focus shot images using the homography matrix to obtain focus shot images with matched fields of view.
In one embodiment, the overlapping areas contain the same objects, and the step of acquiring the one-to-one correspondence of the overlapping areas across the focus shot images includes:
extracting feature points in each focus shot image, and estimating the size and position of each common object in each focus shot image from those feature points;
and acquiring the one-to-one correspondence of the feature points of the same objects across the focus shot images.
In one embodiment, the focus shot images each contain at least two objects in common, and the step of fusing the registered focus shot images into one output image includes:
determining the focus object and the blurred objects in each focus shot image;
if the focus object of one focus shot image appears as a blurred object in the other focus shot images, merging the focus objects of the focus shot images, or replacing each blurred object with the corresponding focus object, and merging the remaining areas to obtain the output image.
In one embodiment, the step of determining the focus object and the blurred objects in each focus shot image includes:
identifying an object in an image and acquiring the blur degree of the region where the object is located;
if the blur degree of the region where the object is located is greater than a first preset value, determining that the object is a blurred object, and if the blur degree is less than a second preset value, determining that the object is a focus object, where the first preset value is greater than or equal to the second preset value; or acquiring the focusing information of each camera, and determining the focus object and the blurred objects according to the focusing information.
In one embodiment, the step of acquiring the blur degree of the region where the object is located includes:
acquiring the area of the focus shot image, the area of the region where the object is located, and the singular value, singular value weight and area weight of that region;
and acquiring a first product of the singular value and the singular value weight, acquiring the ratio of the area of the region where the object is located to the area of the focus shot image, acquiring a second product of this area ratio and the area weight, and taking the sum of the first product and the second product as the blur degree of the region where the object is located.
In one embodiment, the sum of the singular value weight and the area weight is equal to 1.
In one embodiment, if an artifact is detected in the output image, the pixels of the artifact area are replaced with pixels adjacent to the artifact area, so that the artifact is removed.
In a second aspect, a focusing system is provided, which includes a processing unit, a storage unit and at least two cameras; the storage unit stores a computer program which, when executed by the processing unit, causes the processing unit to perform the steps of the method described in any of the embodiments above.
In one embodiment, the method further includes the steps of acquiring a plurality of pieces of focusing information, and controlling different cameras to focus on the corresponding objects according to the focusing information and perform focus shooting.
In one embodiment, the method further includes the step of determining the number of cameras to be turned on according to the number of focus objects.
A mobile terminal is also provided, which comprises the focusing system.
One or more non-transitory readable storage media are also provided, storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the method described in any of the embodiments above.
With the above focusing method and system, mobile terminal and readable storage medium, a plurality of focus objects are obtained by acquiring the focus shot images of at least two cameras; image registration is then performed on the focus shot images to match their fields of view; finally, the registered focus shot images are fused into one output image in which the focus objects of all the focus shot images are retained, so that the number of in-focus objects in the image is increased.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments will be briefly introduced below. It is to be understood that the drawings in the following description are illustrative only and are not restrictive of the invention.
FIG. 1 is a schematic structural diagram of a focusing system according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a focusing method according to an embodiment of the present application;
FIGS. 4a and 4b are schematic views of focus shot images according to an embodiment of the present application;
FIGS. 5a and 5b are schematic views of focus shot images according to another embodiment of the present application;
FIG. 6 is a diagram of a smart phone showing a preview window in an embodiment of the present application;
FIG. 7 is a schematic diagram of a smart phone showing a plurality of preview windows in an embodiment of the present application;
FIG. 8 is a flowchart illustrating an image registration method according to an embodiment of the present application;
FIG. 9 is a diagram illustrating the one-to-one correspondence between feature points of objects in the focus shot images according to an embodiment;
FIG. 10 is a schematic diagram of an output image in an embodiment of the present application;
FIG. 11 is a schematic diagram of an output image in another embodiment of the present application;
FIG. 12 is a schematic view of a focusing device in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
FIG. 1 shows a focusing system in one embodiment of the present application, which includes a processing unit 110, a storage unit 120 and at least two cameras. Each camera is used to capture a focus shot image, and the storage unit 120 stores a computer program which, when executed by the processing unit 110, causes the processing unit 110 to perform the focusing method provided in the embodiments below. In some examples, the shooting ranges of the at least two cameras at least partially coincide. Further, in some examples, the transverse distance between adjacent cameras is smaller than a preset value and the optical axes of the cameras are parallel to each other, which helps the shooting ranges coincide as much as possible. In other examples, the optical axes of the cameras form preset included angles and the shooting ranges of adjacent cameras partially overlap, so that the shooting ranges of the cameras partially coincide while the overall viewing angle range of the focusing system is expanded. Specifically, the at least two cameras may all be rear cameras or all be front cameras.
FIG. 2 is a schematic diagram of the internal structure of an electronic device according to an embodiment of the present application, where the electronic device includes the focusing system described above. As shown in FIG. 2, the electronic device includes a processor, a memory and a network interface connected by a system bus. The processor serves as the processing unit 110 and provides computing and control capabilities to support the operation of the entire electronic device. The memory serves as the storage unit 120 for storing data, programs and the like, and stores at least one computer program which can be executed by the processor to implement the focusing method provided in the embodiments of the present application. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, which can be executed by the processor to implement the focusing method provided in the following embodiments. The internal memory provides a cached execution environment for the operating system and computer programs in the non-volatile storage medium. The network interface may be an Ethernet card, a wireless network card or the like, and is used for communicating with external electronic devices.
The electronic devices described in the present application may include mobile terminals such as smart phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDAs), portable media players (PMPs), navigation devices, wearable devices, smart bands and pedometers, as well as fixed terminals such as digital TVs and desktop computers. The following description takes a mobile terminal as an example, but those skilled in the art will understand that, apart from elements used particularly for mobile purposes, the configurations according to the embodiments of the present application can also be applied to fixed terminals.
The following describes a focusing method of the present application, taking a smart phone as an example.
Please refer to FIG. 3, which is a flowchart of a focusing method according to an embodiment of the present application. The focusing method in this embodiment can increase the number of in-focus objects in an image and includes the following steps:
step 302, acquiring focus shot images from at least two cameras, wherein at least two of the focus shot images have different focus objects;
step 304, performing image registration on each focus shot image;
and step 306, fusing the registered focus shot images into an output image, wherein the output image contains the focus objects of the focus shot images.
In this focusing method, a plurality of focus objects are obtained by acquiring the focus shot images of at least two cameras; image registration is then performed on the focus shot images to match their fields of view; finally, the registered focus shot images are fused into one output image in which the focus objects of all the focus shot images are retained, so that the number of in-focus objects in the image is increased.
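As a concrete illustration, the acquisition of step 302 might look like the following Python/OpenCV sketch. It assumes two OpenCV-accessible cameras (device indices 0 and 1) that have each already been focused on a different object; the indices and the helper name acquire_focus_shots are illustrative, not part of the disclosed method.

```python
# Minimal sketch of step 302, assuming two cameras reachable through
# OpenCV device indices 0 and 1, each already focused on its own object.
import cv2

def acquire_focus_shots(camera_ids=(0, 1)):
    shots = []
    for cam_id in camera_ids:
        cap = cv2.VideoCapture(cam_id)
        ok, frame = cap.read()  # one focus shot image per camera
        cap.release()
        if ok:
            shots.append(frame)
    return shots
```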
In this embodiment, a focus shot image is a captured image that contains a focus object. The focus shot image captured by one camera usually contains one focus object, and the focus objects of the images obtained by different cameras may be the same or different.
The focus shot images may be acquired automatically in response to each camera completing focus shooting, or acquired according to user input information, such as touch input or voice input, after each camera completes its focus shot image.
The cameras may focus on the same object or on different objects. To increase the number of focus objects, in this embodiment at least two of the focus shot images have different focus objects. For example, with three cameras capturing focus shot images, two of the cameras may focus on the same object while the third focuses on a different object, or the three cameras may focus on three different objects.
Specifically, the focus shot images of different cameras overlap in at least part of their image areas. Keeping the transverse distance between different cameras smaller than a preset value, with their optical axes parallel to each other, helps the shooting ranges of the cameras coincide. Alternatively, the optical axes of different cameras may form preset included angles so that their shooting ranges partially overlap, which also further expands the overall viewing angle range.
The focus shot image of each camera obtained in step 302 may contain at least two objects in common; that is, at least two of the objects in the focus shot images of the cameras are the same, and all the objects may also be the same, both of which fall within the protection scope of the present application. For example, FIGS. 4a and 4b show focus shot images captured by two different cameras; most of their image areas overlap, and the two focus shot images contain the same two objects. In the focus shot image of FIG. 4a, object 1 is the focus object and object 2 is a blurred object, while in the focus shot image of FIG. 4b, object 2 is the focus object and object 1 is a blurred object.
For a scene that requires a large viewing angle and multiple focus objects, the acquired focus shot images of the cameras may focus on different objects while their shooting areas partially overlap. For example, FIGS. 5a and 5b show focus shot images captured by two different cameras; the focus shot image of FIG. 5a contains object 3 and that of FIG. 5b contains object 4, so the two focus shot images contain different objects but overlap in part of their image areas.
In some embodiments, before step 302, the method further includes the steps of acquiring a plurality of pieces of focusing information, and controlling different cameras to focus on the corresponding objects according to the focusing information and perform focus shooting. The focusing information may be user input information, such as touch input, click input, voice input or gesture input directed at a target object. The focusing information may also be automatically generated information, for example object detection information generated when an object is detected in the preview image, with one piece of object detection information generated for each detected object.
Specifically, when the focus shot images of the cameras contain the same objects, one preview window may be displayed and the focusing information obtained from that preview window. FIG. 6 is a schematic diagram of a smart phone showing one preview window whose preview image contains two objects; as shown in FIG. 6, when a target focus object is selected in the preview window, it is marked with a rectangular frame to improve interactivity.
Specifically, when the focus shot images of the cameras contain different objects, a plurality of preview windows may be displayed, one per camera. FIG. 7 is a schematic diagram of a smart phone showing a plurality of preview windows; as shown in FIG. 7, a target focus object selected in a preview window is likewise marked with a rectangular frame. In other embodiments, other marking manners may be used to mark the target focus object.
In some embodiments, the number of cameras that need to be turned on may be determined according to the number of focus objects. For example, if there are three objects to be focused, three cameras are turned on to focus on and shoot the three objects respectively. In a scene where the smart phone has more cameras than there are objects to be focused, determining the number of cameras to turn on from the number of focus objects reduces power consumption. The number of focus objects can be determined from the number of pieces of focusing information.
In some embodiments, the cameras may further be controlled to capture their focus shot images synchronously, and step 302 then acquires the synchronously captured focus shot images of the at least two cameras. This reduces the influence of changes in external factors on the focus shot images and improves their consistency.
In some embodiments, the camera at the corresponding position can be controlled to perform focus shooting according to the position of each object; in particular, the camera with the shortest vertical distance to an object can be selected to focus on and shoot it, which improves the focusing effect, reduces deformation of the captured images and lowers the registration difficulty. For example, suppose the shooting range contains three objects A, B and C and the terminal has three cameras whose optical centers lie on a horizontal line; if object A is on the left, the left camera is controlled to focus on and shoot object A.
For step 304, when the focus shot images of different cameras overlap in at least part of their image areas, the step of performing image registration on each focus shot image includes: acquiring the one-to-one correspondence of the overlapping areas across the focus shot images;
selecting one focus shot image as the reference image and acquiring a homography matrix according to the one-to-one correspondence, wherein the homography matrix contains scaling, translation and/or rotation information of each remaining focus shot image relative to the reference image;
and performing a homography transformation on the remaining focus shot images using the homography matrix to obtain focus shot images with matched fields of view.
Specifically, referring to FIG. 8, when the focus shot images contain the same objects, that is, when the overlapping areas contain the same objects, the step of performing image registration on the focus shot images to obtain focus shot images with matched fields of view includes:
step 802, extracting feature points in each focus shot image, and estimating the size and position of each common object in each focus shot image from those feature points;
step 804, acquiring the one-to-one correspondence of the feature points of the same objects across the focus shot images;
step 806, selecting one focus shot image as the reference image and acquiring a homography matrix according to the one-to-one correspondence, wherein the homography matrix contains scaling, translation and/or rotation information of each remaining focus shot image relative to the reference image;
and step 808, performing a homography transformation on the remaining focus shot images using the homography matrix to obtain focus shot images with matched fields of view.
In a specific implementation of step 802, a SIFT feature point extraction algorithm may be used to extract the feature points in each focus shot image; the size of an object may be estimated from the feature contour formed by its feature points, and the position of the object from where that contour lies. The one-to-one correspondence in step 804 refers to the correspondence of the positions, sizes and other attributes of the same features of an object; FIG. 9 shows the one-to-one correspondence of feature points of an object across the focus shot images in one embodiment, where the feature points at the two ends of each connecting line are the same feature. In step 808, since the remaining focus shot images are homography-transformed with the reference image as the standard, their transformed fields of view match the reference image, as does the field of view of the subsequent output image. Therefore, in step 806, the focus shot image of the camera in the middle position may be selected as the reference image, or the reference image may be selected according to the image quality and/or object size of each focus shot image, which helps improve the image quality and/or image detail of the subsequent output image. Also in step 806, the scaling, translation and/or rotation information of each remaining focus shot image relative to the reference image may be determined from the scaling, translation and/or rotation of the same object between that focus shot image and the reference image.
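A minimal Python/OpenCV sketch of steps 802 to 808 is given below. It assumes the reference image ref_img and another focus shot image mov_img are already loaded; the ratio-test threshold 0.75 and the RANSAC reprojection error 5.0 are illustrative parameter choices, not values fixed by this description.

```python
# Sketch of steps 802-808: SIFT feature points, one-to-one matching,
# homography estimation, and homography transformation (OpenCV >= 4.4).
import cv2
import numpy as np

def register_to_reference(ref_img, mov_img):
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref_img, None)  # step 802
    kp_mov, des_mov = sift.detectAndCompute(mov_img, None)

    # step 804: one-to-one correspondence via nearest-neighbour matching
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des_mov, des_ref, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp_mov[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # step 806: homography carrying the scaling/translation/rotation of
    # mov_img relative to the reference image
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # step 808: homography transformation so the fields of view match
    h, w = ref_img.shape[:2]
    return cv2.warpPerspective(mov_img, H, (w, h))
```

Applying register_to_reference to each remaining focus shot image yields the field-matched images used in step 306.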
In other embodiments, when the focus shot images contain different objects but overlap in part of their image areas, the step of performing image registration on the focus shot images to obtain focus shot images with matched fields of view includes: extracting the overlapping image areas in the focus shot images, extracting the feature points of the overlapping image areas, acquiring the one-to-one correspondence of those feature points across the focus shot images, selecting one focus shot image as the reference image, and acquiring a homography matrix according to the one-to-one correspondence, wherein the homography matrix contains scaling, translation and/or rotation information of each remaining focus shot image relative to the reference image; and performing a homography transformation on the remaining focus shot images using the homography matrix to obtain focus shot images with matched fields of view. The specific details are the same as above and are not repeated.
In some embodiments, the acquired focus shot images of the cameras contain at least two objects in common, and step 306 superimposes the field-matched focus shot images into one output image that contains all the focus objects. For example, FIGS. 4a and 4b contain the same two objects; since the fields of view of the two cameras differ, FIGS. 4a and 4b must be registered and then superimposed, and the superimposed output image (shown in FIG. 10) contains the focus object 1 of FIG. 4a and the focus object 2 of FIG. 4b, realizing multi-object focus shooting.
In other embodiments, when the acquired focus shot images of the cameras contain different objects but their shooting ranges partially overlap, step 306 stitches the field-matched focus shot images into an output image in which the overlapping image areas coincide and which contains all the focus objects. These embodiments expand the shooting range while realizing multi-object focus shooting. For example, the two focus shot images of FIGS. 5a and 5b contain different objects but overlap in part of their image areas, and the output image (shown in FIG. 11) contains the focus objects of both FIGS. 5a and 5b, thereby expanding the shooting range and realizing multi-object focus shooting.
In step 306, since the registered images have overlapping areas of consistent size and corresponding position, they can be fused into one output image. In some embodiments, when the focus shot images each contain at least two objects in common, the step of fusing the registered focus shot images into one output image includes:
a first substep: determining the focus object and the blurred objects in each focus shot image;
a second substep: if the focus object of one focus shot image appears as a blurred object in the other focus shot images, merging the focus objects of the focus shot images and merging the remaining areas to obtain the output image.
Specifically, merging the focus objects of the focus shot images includes copying the focus object region of each focus shot image onto a solid-color image, replacing the corresponding solid-color region; merging the remaining areas of the focus shot images likewise copies them into the solid-color image, replacing the corresponding solid-color regions. The size of the solid-color image is the same as that of one of the registered focus shot images.
Specifically, merging the remaining areas of the focus shot images may include averaging the pixels of the overlapping parts of the remaining areas before merging, while non-overlapping parts, such as the non-overlapping edge regions in FIG. 10, are retained directly in the output image.
More specifically, taking FIGS. 4a and 4b as an example, the average pixel can be calculated as f(x1, y1) = f1(x1, y1)/2 + f2(x1, y1)/2, where f(x1, y1) denotes the average pixel value at coordinates (x1, y1), f1(x1, y1) denotes the pixel value at coordinates (x1, y1) in FIG. 4a, and f2(x1, y1) denotes the pixel value at coordinates (x1, y1) in FIG. 4b.
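A minimal sketch of this averaging rule, assuming img_a and img_b are registered images of identical size and overlap_mask is a boolean mask of the overlapping part of the remaining areas (all names illustrative):

```python
# Average the pixels of the overlapping region; keep img_a elsewhere.
import numpy as np

def average_overlap(img_a, img_b, overlap_mask):
    out = img_a.copy()
    mean = (img_a.astype(np.float32) + img_b.astype(np.float32)) / 2
    out[overlap_mask] = mean[overlap_mask].astype(img_a.dtype)
    return out
```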
In some embodiments, determining the focus object and the blurred objects in each focus shot image includes:
a first substep: identifying an object in the image and acquiring the blur degree of the region where the object is located;
a second substep: if the blur degree of the region where the object is located is greater than a first preset value, determining that the object is a blurred object, and if the blur degree is less than a second preset value, determining that the object is a focus object, where the first preset value is greater than or equal to the second preset value.
In other embodiments, the focusing information of each camera can be obtained, and the focus object and the blurred objects in the captured image determined from it. When a camera focuses, it usually determines the focus object from the focusing information and then performs focus shooting with the focusing parameters, so the focus object and the blurred objects in the captured image can be determined from the focusing information, which contains at least focus object information and may further contain focusing parameters and the like.
Specifically, the blur degree is acquired as follows: acquire the area of the focus shot image, the area of the region where the object is located, and the singular value, singular value weight and area weight of that region, where the sum of the singular value weight and the area weight equals 1; then acquire a first product of the singular value and the singular value weight, acquire the ratio of the area of the region where the object is located to the area of the focus shot image, acquire a second product of this area ratio and the area weight, and take the sum of the first product and the second product as the blur degree of the region where the object is located.
Specifically, the blur degree can be obtained according to the following formula:
m = p1 · α1 + p2 · (D / Dk)
where m denotes the blur degree of the region where the object is located, p1 and p2 denote the singular value weight and the area weight respectively, α1 denotes the singular value of the region where the object is located, and D and Dk denote the area of the region where the object is located and the area of the focus shot image respectively.
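The following sketch illustrates one way this formula could be computed. The description does not fix how the singular value term α1 is normalized, so the sketch assumes it is the energy share of the top-k singular values of the grayscale object region (a common SVD-based sharpness measure); the weights p1, p2 and the value of k are likewise illustrative.

```python
# Sketch of the blur-degree formula m = p1*a1 + p2*(D/Dk).
import cv2
import numpy as np

def blur_degree(image, obj_box, p1=0.7, p2=0.3, k=10):
    x, y, w, h = obj_box  # region where the object is located
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    region = gray[y:y + h, x:x + w].astype(np.float32)

    s = np.linalg.svd(region, compute_uv=False)  # singular values, descending
    a1 = s[:k].sum() / max(s.sum(), 1e-6)        # assumed singular-value term

    D = float(w * h)                              # area of the object region
    Dk = float(image.shape[0] * image.shape[1])   # area of the focus shot image
    return p1 * a1 + p2 * (D / Dk)
```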
In other embodiments, when the focus shot images each contain at least two objects in common, the step of fusing the registered focus shot images into one output image includes: determining the focus object and the blurred objects in each focus shot image; if the focus object of one focus shot image appears as a blurred object in the other focus shot images, replacing the blurred object with that focus object, and merging the remaining areas, thereby superimposing the focus shot images. In a specific implementation, each feature of the focus object is assigned a weight greater than a preset value, each feature of the corresponding blurred object is assigned a weight less than the preset value, for example a weight of 0; the features of the focus object and of the corresponding blurred object are then weighted according to the assigned weights, and the remaining areas are merged. The merging of the remaining areas is described above and is not repeated.
For example, suppose object 1 is detected to be in focus in FIG. 4a but blurred in FIG. 4b, while object 2 is blurred in FIG. 4a but in focus in FIG. 4b. A weight of 0.8 is then assigned to each feature of object 1 in FIG. 4a and a weight of 0.2 to each feature of object 1 in FIG. 4b; likewise, a weight of 0.2 is assigned to each feature of object 2 in FIG. 4a and a weight of 0.8 to each feature of object 2 in FIG. 4b. Objects 1 and 2 are then weighted accordingly across FIGS. 4a and 4b and the remaining areas are merged, thereby superimposing FIGS. 4a and 4b.
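A sketch of this weighted superposition, assuming reg_a and reg_b are the registered images of FIGS. 4a and 4b and mask_1, mask_2 are boolean masks of the regions of object 1 and object 2 (the masks and the 0.8/0.2 weights follow the example above):

```python
# Weighted merge: each object is dominated by the image where it is sharp,
# and the remaining areas are averaged as described earlier.
import numpy as np

def weighted_merge(reg_a, reg_b, mask_1, mask_2, w_focus=0.8, w_blur=0.2):
    a = reg_a.astype(np.float32)
    b = reg_b.astype(np.float32)
    out = (a + b) / 2  # remaining areas: average the pixels
    out[mask_1] = w_focus * a[mask_1] + w_blur * b[mask_1]  # object 1: sharp in reg_a
    out[mask_2] = w_blur * a[mask_2] + w_focus * b[mask_2]  # object 2: sharp in reg_b
    return out.astype(reg_a.dtype)
```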
The inventor also found that fusing the focus shot images into one output image may produce artifacts that affect the image quality. Accordingly, in some embodiments, after step 306, if an artifact is detected in the output image, it is removed: specifically, the pixels of the artifact region may be replaced with pixels adjacent to the artifact region, and the artifact may be further weakened by reducing the chrominance difference, texture difference and the like between the artifact region and its neighborhood. This helps achieve seamless fusion and improves the image quality of the output image.
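As an illustration, OpenCV's inpainting is one readily available way to realize "replace the pixels of the artifact region with adjacent pixels", assuming the detected artifacts are available as a binary mask; the description does not prescribe this particular function.

```python
# Fill artifact pixels from their neighbours via inpainting.
import cv2

def remove_artifacts(output_image, artifact_mask):
    # artifact_mask: uint8 mask, non-zero where artifacts were detected
    return cv2.inpaint(output_image, artifact_mask, 3, cv2.INPAINT_TELEA)
```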
In some embodiments, if the acquired images of the cameras are detected to contain several of the same objects, the shooting ranges of the cameras can be adjusted so that the image areas of their focus shot images coincide over more than a preset area, that is, coincide as much as possible, which improves the fusion of the registered focus shot images and reduces artifacts.
It should be understood that, although the steps in the flowchart of FIG. 3 are shown in sequence as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in FIG. 3 may include multiple substeps or stages that are not necessarily performed at the same moment but may be performed at different moments, and the order of their execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the substeps or stages of other steps.
FIG. 12 is a block diagram of a focusing device according to an embodiment. As shown in FIG. 12, the focusing device includes:
a focus shot image acquisition module 1210, configured to acquire focus shot images from at least two cameras, wherein at least two of the focus shot images have different focus objects;
an image registration module 1220, configured to perform image registration on each focus shot image;
and an image fusion module 1230, configured to fuse the registered focus shot images into an output image, wherein the output image contains the focus objects of the focus shot images.
The division of the modules in the focusing device is for illustration only; in other embodiments, the focusing device may be divided into different modules as needed to complete all or part of its functions.
For the specific definition of the focusing device, reference may be made to the definition of the focusing method above, which is not repeated here. The modules of the focusing device can be realized in whole or in part by software, hardware or a combination thereof. The modules can be embedded in hardware in, or independent of, the processor of a computer device, or stored in software form in the memory of the computer device, so that the processor can call and execute the operations corresponding to each module.
The modules of the focusing device provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server, and the program modules it constitutes may be stored in the memory of the terminal or server. When the computer program is executed by a processor, the steps of the methods described in the embodiments of the present application are performed.
The embodiments of the present application also provide a computer-readable storage medium: one or more non-transitory computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to perform the steps of the focusing method.
A computer program product containing instructions is also provided which, when run on a computer, causes the computer to perform the focusing method.
Any reference to memory, storage, database or other medium used herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus DRAM (RDRAM), and Direct Rambus DRAM (DRDRAM).
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but they are not to be construed as limiting the scope of the present application. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within its protection scope. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (12)

1. A focusing method, comprising:
acquiring focus shot images from at least two cameras, wherein at least two of the focus shot images have different focus objects;
performing image registration on each focus shot image;
and fusing the registered focus shot images into an output image, wherein the output image contains the focus objects of the focus shot images.
2. The focusing method according to claim 1, wherein the focus shot images of different cameras overlap in at least part of their image areas, and the step of performing image registration on each focus shot image comprises:
acquiring the one-to-one correspondence of the overlapping areas across the focus shot images;
selecting one focus shot image as the reference image and acquiring a homography matrix according to the one-to-one correspondence, wherein the homography matrix contains scaling, translation and/or rotation information of each remaining focus shot image relative to the reference image;
and performing a homography transformation on the remaining focus shot images using the homography matrix to obtain focus shot images with matched fields of view.
3. The focusing method according to claim 2, wherein the overlapping areas contain the same objects, and the step of acquiring the one-to-one correspondence of the overlapping areas across the focus shot images comprises:
extracting feature points in each focus shot image, and estimating the size and position of each common object in each focus shot image from those feature points;
and acquiring the one-to-one correspondence of the feature points of the same objects across the focus shot images.
4. The focusing method according to claim 1, wherein the focus shot images each contain at least two objects in common, and the step of fusing the registered focus shot images into one output image comprises:
determining the focus object and the blurred objects in each focus shot image;
if the focus object of one focus shot image appears as a blurred object in the other focus shot images, merging the focus objects of the focus shot images, or replacing each blurred object with the corresponding focus object, and merging the remaining areas to obtain the output image.
5. The focusing method according to claim 4, wherein the step of determining the focus object and the blurred objects in each focus shot image comprises:
identifying an object in an image and acquiring the blur degree of the region where the object is located; if the blur degree of the region where the object is located is greater than a first preset value, determining that the object is a blurred object, and if the blur degree is less than a second preset value, determining that the object is a focus object, the first preset value being greater than or equal to the second preset value; or acquiring the focusing information of each camera, and determining the focus object and the blurred objects according to the focusing information.
6. The focusing method according to claim 5, wherein the step of acquiring the blur degree of the region where the object is located comprises:
acquiring the area of the focus shot image, the area of the region where the object is located, and the singular value, singular value weight and area weight of that region;
and acquiring a first product of the singular value and the singular value weight, acquiring the ratio of the area of the region where the object is located to the area of the focus shot image, acquiring a second product of this area ratio and the area weight, and taking the sum of the first product and the second product as the blur degree of the region where the object is located.
7. The focusing method according to claim 1, wherein, if an artifact is detected in the output image, the pixels of the artifact area are replaced with pixels adjacent to the artifact area so as to remove the artifact.
8. The focusing method according to claim 1, further comprising the steps of acquiring a plurality of pieces of focusing information, and controlling different cameras to focus on the corresponding objects according to the focusing information and perform focus shooting.
9. The focusing method according to claim 8, further comprising the step of determining the number of cameras to be turned on according to the number of focus objects.
10. A focusing system, comprising a processing unit, a storage unit and at least two cameras, wherein the storage unit stores a computer program which, when executed by the processing unit, causes the processing unit to carry out the steps of the method according to any one of claims 1 to 9.
11. A mobile terminal characterized by comprising the focusing system of claim 10.
12. One or more non-transitory readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the method according to any one of claims 1 to 9.
CN202110112384.6A 2021-01-27 2021-01-27 Focusing method and system, mobile terminal and readable storage medium Pending CN114827432A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110112384.6A CN114827432A (en) 2021-01-27 2021-01-27 Focusing method and system, mobile terminal and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110112384.6A CN114827432A (en) 2021-01-27 2021-01-27 Focusing method and system, mobile terminal and readable storage medium

Publications (1)

Publication Number Publication Date
CN114827432A true CN114827432A (en) 2022-07-29

Family

ID=82523820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110112384.6A Pending CN114827432A (en) 2021-01-27 2021-01-27 Focusing method and system, mobile terminal and readable storage medium

Country Status (1)

Country Link
CN (1) CN114827432A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination