CN113014820A - Processing method and device and electronic equipment - Google Patents

Processing method and device and electronic equipment

Info

Publication number
CN113014820A
Authority
CN
China
Prior art keywords
image
camera
focus position
preview
focus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110276812.9A
Other languages
Chinese (zh)
Inventor
Ma Binqiang (马彬强)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202110276812.9A
Publication of CN113014820A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/67: Focus control based on electronic image sensor signals
    • H04N 23/675: Focus control based on electronic image sensor signals comprising setting of focusing regions
    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951: Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a processing method, a processing device and electronic equipment, wherein the method comprises the following steps: acquiring a first operation aiming at a preview image; determining a first focus position and a second focus position in the preview image based on the first operation; focusing the first focus position by adopting a first camera to obtain a first image, and focusing the second focus position by adopting a second camera to obtain a second image; synthesizing the first image and the second image to obtain a synthesized image, wherein the synthesized image comprises a first image area corresponding to the first focus position in the first image and a second image area corresponding to the second focus position in the second image; and outputting the composite image to a preview display area. According to this scheme, in an image preview mode and without performing a shooting action, images with different focuses acquired in real time by different cameras can be synthesized, output and displayed, so that the user can clearly view the current multi-focus focusing effect in the image preview mode.

Description

Processing method and device and electronic equipment
Technical Field
The present application relates to data processing technologies, and in particular, to a processing method and apparatus, and an electronic device.
Background
In some scenarios, a user needs multiple subjects in an image to each meet a certain sharpness requirement, that is, the image needs multiple focal points. An example is a promotional picture for advertising, where one focus is on the model's eyes and another focus is on the product being promoted.
Current smartphones cannot meet this actual requirement.
Disclosure of Invention
In view of this, the present application provides the following technical solutions:
a method of processing, comprising:
acquiring a first operation aiming at a preview image;
determining a first focus position and a second focus position in the preview image based on the first operation;
focusing the first focus position by adopting a first camera to obtain a first image, and focusing the second focus position by adopting a second camera to obtain a second image;
synthesizing the first image and the second image to obtain a synthesized image, wherein the synthesized image comprises a first image area corresponding to the first focus position in the first image and a second image area corresponding to the second focus position in the second image;
and outputting the composite image to a preview display area.
Optionally, the synthesizing the first image and the second image to obtain a synthesized image includes:
synthesizing the first image acquired in real time and the second image acquired in real time to obtain a real-time synthesized image;
the outputting the composite image to a preview display area includes:
and outputting the real-time composite image in real time in an image preview mode.
Optionally, the method further includes:
acquiring a second operation, wherein the second operation is an operation of determining a third focal position in the preview image;
focusing the third focal position by adopting a third camera based on the second operation to obtain a third image; or, alternatively,
calculating and processing to obtain a third image with a focusing effect at the third focal position based on the third focal position and the first image and the second image;
then, the synthesizing the first image and the second image to obtain a synthesized image includes:
and synthesizing the first image, the second image and the third image to obtain a synthesized image, wherein the synthesized image comprises a first image area corresponding to the first focus position in the first image, a second image area corresponding to the second focus position in the second image and a third image area corresponding to the third focus position in the third image.
Optionally, a distance between the first camera and the second camera is smaller than a first value, and focal lengths of the first camera and the second camera are the same or different.
Optionally, when the focal length of the first camera is greater than the focal length of the second camera, the acquiring a first operation for a preview image includes:
and acquiring a first operation aiming at the preview image acquired by the first camera.
Optionally, before the acquiring the first operation on the preview image, the method further includes:
performing a zoom-in or zoom-out operation on a preview image acquired by a first camera;
after the preview image is zoomed in by a first multiple, outputting, as the preview image, an image acquired by another camera whose focal length is greater than that of the first camera;
and after the preview image is zoomed out by a second multiple, outputting, as the preview image, an image acquired by another camera whose focal length is smaller than that of the first camera.
Optionally, in a case where the subject corresponding to the first focus position is farther away than the subject corresponding to the second focus position, the focal length of the first camera is greater than the focal length of the second camera;
and in a case where the subject corresponding to the first focus position is closer than the subject corresponding to the second focus position, the focal length of the first camera is smaller than that of the second camera.
Optionally, before the acquiring the first operation on the preview image, the method further includes:
determining a focus mode of image acquisition based on the acquired third operation, wherein the focus mode is used for indicating the number of focuses contained in the composite image;
based on the focus pattern, the number of cameras that need to be used is determined.
The application also discloses a processing apparatus, includes:
the operation acquisition module is used for acquiring a first operation aiming at the preview image;
a focus determination module configured to determine a first focus position and a second focus position in the preview image based on the first operation;
the focusing processing module is used for focusing the first focus position by adopting a first camera to obtain a first image and focusing the second focus position by adopting a second camera to obtain a second image;
an image synthesis module, configured to synthesize the first image and the second image to obtain a synthesized image, where the synthesized image includes a first image area in the first image corresponding to the first focus position and a second image area in the second image corresponding to the second focus position;
and the image output module is used for outputting the composite image to a preview display area.
Further, the present application also discloses an electronic device, including:
a processor;
a memory for storing executable instructions of the processor;
wherein the executable instructions comprise: acquiring a first operation aiming at a preview image; determining a first focus position and a second focus position in the preview image based on the first operation; focusing the first focus position by adopting a first camera to obtain a first image, and focusing the second focus position by adopting a second camera to obtain a second image; synthesizing the first image and the second image to obtain a synthesized image, wherein the synthesized image comprises a first image area corresponding to the first focus position in the first image and a second image area corresponding to the second focus position in the second image; and outputting the composite image to a preview display area.
Compared with the prior art, the embodiment of the application discloses a processing method, a processing device and electronic equipment, and the method comprises the following steps: acquiring a first operation aiming at a preview image; determining a first focus position and a second focus position in the preview image based on the first operation; focusing the first focus position by adopting a first camera to obtain a first image, and focusing the second focus position by adopting a second camera to obtain a second image; synthesizing the first image and the second image to obtain a synthesized image, wherein the synthesized image comprises a first image area corresponding to the first focus position in the first image and a second image area corresponding to the second focus position in the second image; and outputting the composite image to a preview display area. According to this scheme, in an image preview mode and without performing a shooting action, images with different focuses acquired in real time by different cameras can be synthesized, output and displayed, so that the user can clearly view the current multi-focus focusing effect in the image preview mode; once the user decides to shoot, a multi-focus image meeting the user's requirements is obtained directly. The whole process is simple, convenient and quick, which helps improve the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a flow chart of a processing method disclosed in an embodiment of the present application;
fig. 2 is a schematic flowchart of a process of synthesizing a first image and a second image according to an embodiment of the present disclosure;
FIG. 3 is a partial flow diagram of a processing method disclosed in an embodiment of the present application;
FIG. 4 is a partial flow diagram of another process disclosed in an embodiment of the present application;
FIG. 5 is a schematic diagram of the arrangement positions of a plurality of cameras on the electronic device;
fig. 6 is a schematic view of a preview image at the same position of cameras with different focal lengths, disclosed in an embodiment of the present application;
FIG. 7 is a partial flow chart of yet another method of processing disclosed in embodiments of the present application;
fig. 8 is a schematic structural diagram of a processing apparatus according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiments of the application can be applied to an electronic device. The product form of the electronic device is not limited by the application and may include, but is not limited to, a smartphone, a tablet computer, a wearable device, a personal computer (PC), a netbook and the like, which can be selected according to application requirements.
Fig. 1 is a flowchart of a processing method disclosed in an embodiment of the present application, and the processing method shown in fig. 1 is applicable to an electronic device including at least two cameras. Because the processing method is used for acquiring the multi-focus image and different cameras are used for focusing at different positions, the at least two cameras can be arranged close to each other to reduce imaging errors caused by shooting angles. Referring to fig. 1, in one implementation, a processing method may include:
step 101: a first operation for a preview image is acquired.
The preview image refers to the image within the viewfinder range acquired by a camera of the electronic device when the user opens a shooting application; the electronic device displays this image in a preview display area to form the preview image, so that the user can adjust the shooting angle and shooting distance based on the preview image.
Of course, the user can also perform a corresponding trigger operation on the preview image, so that the system controls the display effect of the preview image based on that operation and the user can find the display effect considered most suitable. The first operation described above may be an operation of determining the focus of an image.
Step 102: determining a first focus position and a second focus position in the preview image based on the first operation.
The first operation may be a focus selection operation, and it may be a single action or a set of at least two sequential actions. For example, the user can directly determine two focus positions by simultaneously touching two positions on the preview image displayed on a touch display screen with an index finger and a middle finger; the user can also use an index finger to tap and select three focus positions one after another. Of course, the specific implementation form of the first operation is not limited in this application; for example, the first operation may also be an operation performed through an input device such as a mouse or a stylus.
Based on the operation position of the first operation, the first focus position and the second focus position in the preview image can be determined, and the system can subsequently select appropriate cameras to focus on the first focus position and the second focus position respectively, based on the depths of field of the two positions.
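For illustration only (the application contains no code), the mapping from a two-point touch operation to two focus positions in the preview image might be sketched as follows; the names used here (FocusPosition, the coordinate tuples) are assumptions introduced for clarity, not terms from the application.

```python
# A minimal sketch, assuming a touch screen that reports preview-space pixel
# coordinates; FocusPosition and the example touch tuples are illustrative.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FocusPosition:
    x: int  # pixel column in the preview image
    y: int  # pixel row in the preview image

def focus_positions_from_first_operation(touches: List[Tuple[int, int]]):
    """Interpret the first operation (e.g. two simultaneous touch points) as two focus positions."""
    if len(touches) < 2:
        raise ValueError("a dual-focus operation needs at least two touch points")
    (x1, y1), (x2, y2) = touches[:2]
    return FocusPosition(x1, y1), FocusPosition(x2, y2)

# Example: index finger and middle finger touching the preview at the same time.
first_focus, second_focus = focus_positions_from_first_operation([(320, 240), (960, 620)])
```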
Step 103: and focusing the first focus position by adopting a first camera to obtain a first image, and focusing the second focus position by adopting a second camera to obtain a second image.
On the premise that the first focus position and the second focus position are determined, the system can select a suitable camera for focusing based on the depth of field corresponding to each focus position. A suitable camera is one whose focusing result at the determined focus position is clearer than the result obtained by focusing on the same focus position with another camera. In this implementation, the focal lengths of the first camera and the second camera are different, and therefore their focusing effects on the same focus position are different.
Of course, in some implementations, the focal lengths of the first camera and the second camera are the same, in which case the focusing effect of the two cameras is the same for the same focal position; the two cameras are only used for focusing different focus positions at the same time.
After the first camera focuses on the first focus position, the definition of an area at the first focus position in an obtained first image is higher than that of other areas; similarly, in the second image, the sharpness is higher at the second focus position than in other regions.
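As an illustrative sketch only, camera assignment by subject distance, following the rule stated later in this description that the farther subject gets the camera with the larger focal length, could look like the following; the Camera dataclass, the millimetre units and the example focal lengths are assumptions, not values from the application.

```python
# A minimal sketch of picking a "suitable" camera per focus position from an
# estimated subject distance (depth); all names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Camera:
    name: str
    focal_length_mm: float

def assign_cameras(depth_first_mm, depth_second_mm, cam_a, cam_b):
    """Return (camera for the first focus position, camera for the second focus position)."""
    long_cam, short_cam = (cam_a, cam_b) if cam_a.focal_length_mm >= cam_b.focal_length_mm else (cam_b, cam_a)
    if depth_first_mm >= depth_second_mm:
        return long_cam, short_cam   # the farther subject gets the larger focal length
    return short_cam, long_cam

wide = Camera("short-focus", 4.0)
tele = Camera("middle-focus", 8.0)
cam_for_first, cam_for_second = assign_cameras(3500.0, 800.0, wide, tele)
# cam_for_first is the middle-focus camera, cam_for_second the short-focus camera.
```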
Step 104: and synthesizing the first image and the second image to obtain a synthesized image, wherein the synthesized image comprises a first image area corresponding to the first focus position in the first image and a second image area corresponding to the second focus position in the second image.
After the first image and the second image are obtained, the first image and the second image are synthesized. Because the cameras shooting the two images are arranged close to each other, the shooting angles and the shooting contents of the two images are basically identical, so the two images can be synthesized and the obtained synthesized image looks visually natural and realistic.
From the user's requirement perspective, the image that the user wants to obtain is clearer at the first focus position and the second focus position, and therefore, the composite image needs to include a first image area corresponding to the first focus position in the first image and a second image area corresponding to the second focus position in the second image.
Fig. 2 is a schematic flow chart of a process of synthesizing a first image and a second image, which is disclosed in an embodiment of the present application, and with reference to fig. 2, a definition of display content at a first focus position in the first image is better than that of other areas, and a definition of display content at a second focus position in the second image is better than that of other areas.
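Purely as an illustration of one possible compositing rule (the application does not specify how pixels are selected), a minimal sketch could take each pixel from whichever source image's focus position is nearer, so the region around each focus keeps its sharp source; a real implementation would typically also register the frames and feather the seam.

```python
# A minimal, assumed compositing rule: nearest-focus selection per pixel.
import numpy as np

def composite_two_focus(img1, img2, focus1, focus2):
    """img1/img2: HxWx3 arrays registered to the same view; focus1/focus2: (x, y) pixel positions."""
    h, w = img1.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    d1 = (xs - focus1[0]) ** 2 + (ys - focus1[1]) ** 2
    d2 = (xs - focus2[0]) ** 2 + (ys - focus2[1]) ** 2
    take_first = (d1 <= d2)[..., None]      # True where the first focus position is nearer
    return np.where(take_first, img1, img2)

# Example with dummy frames of the same size.
frame1 = np.zeros((720, 1280, 3), dtype=np.uint8)
frame2 = np.full((720, 1280, 3), 255, dtype=np.uint8)
preview = composite_two_focus(frame1, frame2, (320, 240), (960, 620))
```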
Step 105: and outputting the composite image to a preview display area.
And after the composite image is obtained through processing, outputting the obtained composite image to a preview display area for displaying so that a user can view an image display effect obtained after focusing the first focus position and the second focus position.
It should be noted that, in the implementation process of the embodiment of the present application, the electronic device has not performed a "photographing" operation, that is, the composite image obtained by the foregoing composite processing is only used for showing the multi-focus focusing effect of the image, and is not actually obtained and stored locally.
According to the processing method, in an image preview mode and without performing a shooting action, images with different focuses acquired in real time by different cameras can be synthesized, output and displayed, so that the user can clearly view the current multi-focus focusing effect in the image preview mode; once the user decides to shoot, a multi-focus image meeting the user's requirements is obtained directly. The whole process is simple, convenient and quick, which helps improve the user experience.
In the above embodiment, the synthesizing the first image and the second image to obtain a synthesized image may include: and synthesizing the first image acquired in real time and the second image acquired in real time to obtain a real-time synthesized image. The outputting the composite image to a preview display area may include: and outputting the real-time composite image in real time in an image preview mode.
Because the scene shot by the cameras may be dynamic (for example, in an advertisement shooting scene, a commodity may be placed on a display stand and rotated), the first camera and the second camera can also acquire images in real time so that the user can better watch a real-time preview image; the first image and the second image collected in real time are then synthesized to obtain a real-time synthesized image for preview display.
It should be noted that the real-time composite image is also an image used for preview display and may be stored in a cache; only when the user subsequently triggers the "photographing" action does the system store locally the real-time composite image output at the moment corresponding to that action. The processing methods in the embodiments of the present application are all executed in the image preview mode.
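As a rough sketch only, the real-time preview loop described above might look like the following; capture_frame, composite, show_preview and stop_requested are assumed callables standing in for the device's camera and display interfaces, and the 30 fps pacing is an arbitrary placeholder.

```python
# A minimal sketch, assuming illustrative helpers passed in by the caller;
# none of these names appear in the application.
import time

def preview_loop(cam1, cam2, focus1, focus2, capture_frame, composite, show_preview, stop_requested):
    """Composite the two real-time camera streams and push the result to the preview area."""
    last_composite = None
    while not stop_requested():
        frame1 = capture_frame(cam1)   # first camera, kept focused on focus1
        frame2 = capture_frame(cam2)   # second camera, kept focused on focus2
        last_composite = composite(frame1, frame2, focus1, focus2)
        show_preview(last_composite)   # preview only; nothing is stored yet
        time.sleep(1 / 30)             # roughly 30 preview frames per second (placeholder)
    # The frame shown at the moment the user triggers "photographing" is what gets stored.
    return last_composite
```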
Fig. 3 is a partial flowchart of a processing method disclosed in an embodiment of the present application, and with reference to fig. 3, in addition to the steps of the processing method shown in fig. 1, before performing a combining process on the first image and the second image, the method may further include:
step 301: and acquiring a second operation, wherein the second operation is an operation of determining a third focal position in the preview image.
In some scenarios, the user may need to select more focal points; for example, when multiple people are included in the preview image, the user may want to focus on the face positions of three or even more people, so that the faces of all of them are clearly visible in the image.
Step 302: and focusing the third focus position by adopting a third camera based on the second operation to obtain a third image.
Since one camera can focus at only one position at a time, in the foregoing case, the third camera can focus at the third focus position determined by the user. All images currently having a single focus may then be composited according to the logic for compositing the first and second images as previously described.
Then, the synthesizing the first image and the second image to obtain a synthesized image may include: and synthesizing the first image, the second image and the third image to obtain a synthesized image, wherein the synthesized image comprises a first image area corresponding to the first focus position in the first image, a second image area corresponding to the second focus position in the second image and a third image area corresponding to the third focus position in the third image.
Fig. 4 is a partial flowchart of another processing method disclosed in the embodiment of the present application, and referring to fig. 4, in another implementation, in addition to the processing method steps shown in fig. 1, before the combining the first image and the second image, the method may further include:
step 401: and acquiring a second operation, wherein the second operation is an operation of determining a third focal position in the preview image.
Step 402: and calculating and processing to obtain a third image with a focusing effect at the third focal position based on the third focal position and the first image and the second image.
In some scenarios, the electronic device may be configured with only two cameras. In this case, when the user wants to acquire an image containing three or even more focuses, after the first image corresponding to the first focus position and the second image corresponding to the second focus position are acquired, the third image with a focusing effect at the third focus position may be obtained by performing a corresponding calculation on the region corresponding to the third focus position based on the first image and the second image. The content of the third image may be the same as that of the first image or the second image, or may be partial content of the first image or the second image, which is not limited in the present application; but it is necessary to ensure that the third image contains the region corresponding to the third focus position.
The subsequent image composition also includes: and synthesizing the first image, the second image and the third image to obtain a synthesized image, wherein the synthesized image comprises a first image area corresponding to the first focus position in the first image, a second image area corresponding to the second focus position in the second image and a third image area corresponding to the third focus position in the third image.
It should be noted that, in the case where there are only two cameras and the user selects more than two focus positions, the system may automatically assign, according to the difference between the focal lengths of the two cameras, the focus positions on which the two cameras need to focus; for the remaining focus positions on which no camera focuses, the focusing effect is obtained by computation from the first image and the second image collected by the two cameras.
For example, suppose the three focus positions are A, B and C, where position A is farthest from the camera and position C is closest. When only two cameras are currently provided, namely one middle-focus camera and one short-focus camera, the middle-focus camera is automatically assigned to focus on position B and the short-focus camera is used to focus on position A; the image focused at position C is then obtained by calculation from the focused images acquired by the two cameras.
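As an illustrative sketch of that adaptation (not code from the application), the assignment of the two cameras and the position left to computation could be expressed as follows; plan_three_focus and fuse_for_focus are hypothetical names, and the fusion call merely stands in for the unspecified calculation.

```python
# A minimal sketch: two physical cameras, three selected positions ordered
# far-to-near as A, B, C; the mapping below mirrors the worked example above.
def plan_three_focus(positions_far_to_near):
    """Return (camera assignment, position whose image must be computed)."""
    a, b, c = positions_far_to_near
    assignment = {
        "middle-focus camera": b,   # focuses on position B
        "short-focus camera": a,    # focuses on position A
    }
    computed_position = c           # no camera left for C; its image is computed
    return assignment, computed_position

assignment, computed_position = plan_three_focus(["A", "B", "C"])
# The image focused at `computed_position` would then be derived from the two
# captured images, e.g. third = fuse_for_focus(img_middle, img_short, computed_position),
# where fuse_for_focus is a hypothetical helper for the unspecified calculation.
```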
In the foregoing embodiment, a distance between the first camera and the second camera is smaller than a first value, and focal lengths of the first camera and the second camera are the same or different.
As mentioned above, the distance between the first camera and the second camera is smaller than the first value because the two cameras need to be arranged close to each other in order to reduce the imaging error caused by the shooting angle; the same is true when more than two cameras are used. The first value may be a small value, such as 0.5 cm. Fig. 5 shows a schematic diagram of an arrangement of a plurality of cameras.
In addition, the focal lengths of the first camera and the second camera may be the same or different. When the focal lengths of the two cameras are the same, the two cameras can respectively focus on two objects that are close to each other in distance; when the focal lengths of the two cameras are different, they can respectively focus on two objects that are far apart, meaning that the difference between the distances from the two objects to the camera is large.
In one implementation, when the focal length of the first camera is greater than the focal length of the second camera, the acquiring the first operation for the preview image may include: and acquiring a first operation aiming at the preview image acquired by the first camera.
On the premise that the position and shooting direction of the cameras are fixed, the picture range captured by a camera with a larger focal length is smaller than that captured by a camera with a smaller focal length. If, when the photographing application is opened, the image acquired by the camera with the smaller focal length is used as the preview image, and the user selects a point at the edge of the preview image as a focus position whose adapted focal length corresponds to the camera with the larger focal length, it may happen that the viewing range of the camera with the larger focal length does not include that focus position, so that focusing on it cannot be achieved. Fig. 6 is a schematic view of preview images of cameras with different focal lengths at the same position disclosed in the embodiment of the present application; referring to fig. 6, if the focus position selected by the user is as shown in the figure, the camera with the larger focal length cannot capture that position and focusing on it cannot be achieved, which is very unfriendly to the user experience.
Therefore, when the focal length of the first camera is larger than that of the second camera, the preview image acquired by the first camera with a larger focal length needs to be acquired and displayed. Because the framing range of the second camera is larger than that of the first camera, the condition that the focus position in the preview image cannot be shot by the second camera does not occur, and therefore the focusing operation can be carried out smoothly.
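As a trivial illustrative sketch (an assumed helper, not from the application), the preview-source rule amounts to always previewing from the camera with the largest focal length among those in use, since its narrower view is contained in the wider views:

```python
# A minimal sketch: preview from the camera with the largest focal length so
# that any focus position picked in the preview is visible to every camera.
def choose_preview_camera(cameras):
    """cameras: iterable of (name, focal_length_mm); return the name to preview from."""
    return max(cameras, key=lambda cam: cam[1])[0]

print(choose_preview_camera([("short-focus", 4.0), ("middle-focus", 8.0)]))  # "middle-focus"
```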
Of course, in a scenario where the electronic device is configured with three cameras with different focal lengths, considering that the user usually does not place the position to be focused at the boundary of the viewing range, and considering practical needs, the initially displayed preview image can be acquired by the camera with the middle focal length.
Based on the foregoing embodiment, before the acquiring the first operation on the preview image, the method may further include: performing a zoom-in or zoom-out operation on the preview image acquired by the first camera; after the preview image is zoomed in by a first multiple, outputting, as the preview image, an image acquired by another camera whose focal length is greater than that of the first camera; and after the preview image is zoomed out by a second multiple, outputting, as the preview image, an image acquired by another camera whose focal length is smaller than that of the first camera.
When the preview image is zoomed in, this indicates that the user wants to shoot a distant object, and the system automatically switches to a camera that can capture the distant object and whose focal length is greater than that of the camera currently acquiring the preview image, and outputs its image as the new preview image; when the preview image is zoomed out, this indicates that the user probably wants to shoot a near object or a wider scene, and the system automatically switches to a camera that can capture the near object, has a wider viewing range and has a focal length smaller than that of the camera currently acquiring the preview image, and outputs its image as the new preview image.
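For illustration only, the zoom-driven switching could be sketched as below; the three camera names and the threshold values are placeholders, since the application fixes neither the multiples nor the number of cameras.

```python
# A minimal sketch of zoom-driven camera switching with assumed thresholds.
def camera_after_zoom(current, zoom_factor, zoom_in_threshold=2.0, zoom_out_threshold=0.5):
    """Return the camera to use for the preview after a zoom operation."""
    order = ["short-focus", "middle-focus", "long-focus"]   # ascending focal length
    idx = order.index(current)
    if zoom_factor >= zoom_in_threshold and idx + 1 < len(order):
        return order[idx + 1]   # user wants a distant object: switch to a larger focal length
    if zoom_factor <= zoom_out_threshold and idx > 0:
        return order[idx - 1]   # user wants a near object or a wider scene
    return current

new_preview_source = camera_after_zoom("middle-focus", 2.5)   # -> "long-focus"
```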
In practical application, when the subject corresponding to the first focus position is farther away than the subject corresponding to the second focus position, the focal length of the first camera is greater than that of the second camera; when the subject corresponding to the first focus position is closer than the subject corresponding to the second focus position, the focal length of the first camera is smaller than that of the second camera.
Fig. 7 is a partial flowchart of another processing method disclosed in an embodiment of the present application, and as shown in fig. 7, before the acquiring the first operation for the preview image, the processing method may further include:
step 701: determining a focus mode for image acquisition based on the acquired third operation, the focus mode being indicative of a number of foci included in the composite image.
Step 702: based on the focus pattern, the number of cameras that need to be used is determined.
For example, the user may select a single-focus (background blur) mode or a multi-focus mode. In the single-focus mode, only one camera is used to focus on one focus position; in the multi-focus mode, two or more cameras are enabled to achieve focusing on multiple focus positions.
For example, in one scenario, the user selects the "multi-focus" mode in the camera application; the camera application automatically switches to a dual-focus mode and uses the preview picture of the middle-focus lens. The user presses a certain point on the screen to select the first focus position, and the camera application focuses on it with the middle-focus lens; then the user presses another point on the screen to select the second focus position, and the camera application judges whether this focus is closer or farther than the previous focus. If it is closer, the camera application focuses on it with the middle-focus lens while focusing on the first focus with the short-focus lens; if it is farther than the previous focus, it is focused directly with the short-focus lens. After both lenses have focused, the user can press the shutter button to shoot. Of course, since most existing terminal devices have no physical shutter and the image sensor transmits a real-time image stream to the camera application, which displays the real-time synthesized image for the user to preview, the user can also press the shutter during focusing; that is, the user does not have to wait for both lenses to finish focusing before shooting.
In another scenario, the user selects the "multi-focus" mode in the camera application; the camera application automatically switches to a dual-focus mode and uses the preview picture of the middle-focus lens. When the user performs a zoom-in operation on the default preview image and the zoom-in switches the camera to the long-focus lens, the camera application automatically switches to a long-focus + middle-focus mode; when a zoom-out operation on the preview image switches the camera to the short-focus lens, the camera application automatically switches to a middle-focus + short-focus mode.
For electronic devices containing more cameras, the user may increase or decrease the number of focal points in the camera application. The camera application may also split the aforementioned multi-focus mode into a dual-focus mode, a tri-focus mode, a quad-focus mode, and so on.
Certainly, the camera application can also add a "full-focus" mode: all cameras are used by default and different cameras use different focal lengths, so that both the near and the distant parts of the final composite image are as completely sharp as possible and the user no longer needs to select a focus. When the user zooms in or out on the preview image, the number of cameras used is automatically increased or decreased.
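For illustration only, the mapping from the selected focus mode to the number of cameras to enable could be sketched as follows; the mode names and counts mirror the examples above and are assumptions rather than a claimed mapping.

```python
# A minimal sketch of deriving the number of cameras from the focus mode.
def cameras_needed(mode, total_cameras):
    """Map a selected focus mode to the number of cameras to enable."""
    table = {
        "single-focus": 1,
        "dual-focus": 2,
        "tri-focus": 3,
        "quad-focus": 4,
        "full-focus": total_cameras,   # every camera, each using a different focal length
    }
    return min(table.get(mode, 2), total_cameras)   # unknown modes fall back to dual-focus

print(cameras_needed("dual-focus", 3))   # 2
print(cameras_needed("full-focus", 3))   # 3
```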
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present application is not limited by the order of acts or acts described, as some steps may occur in other orders or concurrently with other steps in accordance with the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
The method is described in detail in the embodiments disclosed in the present application, and the method of the present application can be implemented by various types of apparatuses, so that an apparatus is also disclosed in the present application, and the following detailed description is given of specific embodiments.
Fig. 8 is a schematic structural diagram of a processing device disclosed in an embodiment of the present application, and referring to fig. 8, the processing device 80 may include:
an operation acquiring module 801, configured to acquire a first operation for the preview image.
A focus determination module 802, configured to determine a first focus position and a second focus position in the preview image based on the first operation.
The focusing processing module 803 is configured to focus the first focus position by using a first camera to obtain a first image, and focus the second focus position by using a second camera to obtain a second image.
An image synthesizing module 804, configured to synthesize the first image and the second image to obtain a synthesized image, where the synthesized image includes a first image area in the first image corresponding to the first focus position and a second image area in the second image corresponding to the second focus position.
An image output module 805, configured to output the composite image to a preview display area.
The processing device can, in an image preview mode and without performing a shooting action, synthesize, output and display images with different focuses acquired in real time by different cameras, so that the user can clearly view the current multi-focus focusing effect in the image preview mode; once the user decides to shoot, a multi-focus image meeting the user's requirements is obtained directly. The whole process is simple, convenient and quick, which helps improve the user experience.
In one implementation, the image composition module is specifically operable to: and synthesizing the first image acquired in real time and the second image acquired in real time to obtain a real-time synthesized image. The image output module is specifically configured to: and outputting the real-time composite image in real time in an image preview mode.
In one implementation, the operation obtaining module is further configured to: obtaining a second operation, where the second operation is an operation of determining a third focus position in the preview image, and the focus processing module is further configured to: focusing the third focal position by adopting a third camera based on the second operation to obtain a third image; or, a third image with a focusing effect at the third focal position is obtained through calculation processing based on the third focal position, the first image and the second image. The image synthesis module is specifically configured to: and synthesizing the first image, the second image and the third image to obtain a synthesized image, wherein the synthesized image comprises a first image area corresponding to the first focus position in the first image, a second image area corresponding to the second focus position in the second image and a third image area corresponding to the third focus position in the third image.
In one implementation, a distance between the first camera and the second camera is smaller than a first value, and the focal lengths of the first camera and the second camera are the same or different.
In one implementation, when the focal length of the first camera is greater than the focal length of the second camera, the operation acquisition module is specifically configured to: and acquiring a first operation aiming at the preview image acquired by the first camera.
In one implementation, the processing device further includes: a proportion control module, configured to perform a zoom-in or zoom-out operation on the preview image acquired by the first camera; and a preview adjusting module, configured to output, as the preview image, an image acquired by another camera whose focal length is greater than that of the first camera after the preview image is zoomed in by a first multiple, and to output, as the preview image, an image acquired by another camera whose focal length is smaller than that of the first camera after the preview image is zoomed out by a second multiple.
In one implementation, after the preview image is zoomed in by a first multiple, an image acquired by another camera whose focal length is greater than that of the first camera is output as the preview image; and after the preview image is zoomed out by a second multiple, an image acquired by another camera whose focal length is smaller than that of the first camera is output as the preview image.
In one implementation, the processing device further includes: a mode determination module, configured to determine a focus mode of image acquisition based on an acquired third operation before the operation acquisition module acquires the first operation on the preview image, where the focus mode is used to indicate a number of focuses included in the composite image; based on the focus pattern, the number of cameras that need to be used is determined.
Further this application still discloses an electronic equipment, includes:
a processor;
a memory for storing executable instructions of the processor;
wherein the executable instructions comprise: acquiring a first operation aiming at a preview image; determining a first focus position and a second focus position in the preview image based on the first operation; focusing the first focus position by adopting a first camera to obtain a first image, and focusing the second focus position by adopting a second camera to obtain a second image; synthesizing the first image and the second image to obtain a synthesized image, wherein the synthesized image comprises a first image area corresponding to the first focus position in the first image and a second image area corresponding to the second focus position in the second image; and outputting the composite image to a preview display area.
Any one of the processing devices in the above embodiments includes a processor and a memory, the operation acquisition module, the focus determination module, the focusing processing module, the image synthesis module, the image output module, and the like in the above embodiments are stored in the memory as program modules, and the processor executes the program modules stored in the memory to implement corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program module from the memory. One or more kernels may be provided, and the processing described above is implemented by adjusting kernel parameters.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM), and the memory includes at least one memory chip.
An embodiment of the present application provides a storage medium on which a program is stored, which when executed by a processor implements the processing method described in the above embodiment.
The embodiment of the present application provides a processor, where the processor is configured to execute a program, where the program executes the processing method described in the foregoing embodiment when running.
Further, the present embodiment provides an electronic device, which includes a processor and a memory. Wherein the memory is used for storing executable instructions of the processor, and the processor is configured to execute the processing method described in the above embodiment via executing the executable instructions.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of processing, comprising:
acquiring a first operation aiming at a preview image;
determining a first focus position and a second focus position in the preview image based on the first operation;
focusing the first focus position by adopting a first camera to obtain a first image, and focusing the second focus position by adopting a second camera to obtain a second image;
synthesizing the first image and the second image to obtain a synthesized image, wherein the synthesized image comprises a first image area corresponding to the first focus position in the first image and a second image area corresponding to the second focus position in the second image;
and outputting the composite image to a preview display area.
2. The processing method according to claim 1, wherein the synthesizing the first image and the second image to obtain a synthesized image comprises:
synthesizing the first image acquired in real time and the second image acquired in real time to obtain a real-time synthesized image;
the outputting the composite image to a preview display area includes:
and outputting the real-time composite image in real time in an image preview mode.
3. The processing method of claim 1, further comprising:
acquiring a second operation, wherein the second operation is an operation of determining a third focal position in the preview image;
focusing the third focal position by adopting a third camera based on the second operation to obtain a third image; or, alternatively,
calculating and processing to obtain a third image with a focusing effect at the third focal position based on the third focal position and the first image and the second image;
then, the synthesizing the first image and the second image to obtain a synthesized image includes:
and synthesizing the first image, the second image and the third image to obtain a synthesized image, wherein the synthesized image comprises a first image area corresponding to the first focus position in the first image, a second image area corresponding to the second focus position in the second image and a third image area corresponding to the third focus position in the third image.
4. The processing method according to claim 1, wherein a distance between the first camera and the second camera is smaller than a first value, and focal lengths of the first camera and the second camera are the same or different.
5. The processing method according to claim 4, wherein when the focal length of the first camera is larger than the focal length of the second camera, the acquiring a first operation for a preview image comprises:
and acquiring a first operation aiming at the preview image acquired by the first camera.
6. The processing method according to claim 1, further comprising, before the acquiring a first operation for a preview image:
performing a zoom-in or zoom-out operation on a preview image acquired by a first camera;
after the preview image is zoomed in by a first multiple, outputting, as the preview image, an image acquired by another camera whose focal length is greater than that of the first camera;
and after the preview image is zoomed out by a second multiple, outputting, as the preview image, an image acquired by another camera whose focal length is smaller than that of the first camera.
7. The processing method according to claim 1, wherein in a case where the subject corresponding to the first focus position is farther than the subject corresponding to the second focus position, the focal length of the first camera is larger than the focal length of the second camera;
and in a case where the subject corresponding to the first focus position is closer than the subject corresponding to the second focus position, the focal length of the first camera is smaller than that of the second camera.
8. The processing method according to any one of claims 1 to 7, further comprising, before the acquiring the first operation for the preview image:
determining a focus mode of image acquisition based on the acquired third operation, wherein the focus mode is used for indicating the number of focuses contained in the composite image;
based on the focus pattern, the number of cameras that need to be used is determined.
9. A processing apparatus, comprising:
the operation acquisition module is used for acquiring a first operation aiming at the preview image;
a focus determination module configured to determine a first focus position and a second focus position in the preview image based on the first operation;
the focusing processing module is used for focusing the first focus position by adopting a first camera to obtain a first image and focusing the second focus position by adopting a second camera to obtain a second image;
an image synthesis module, configured to synthesize the first image and the second image to obtain a synthesized image, where the synthesized image includes a first image area in the first image corresponding to the first focus position and a second image area in the second image corresponding to the second focus position;
and the image output module is used for outputting the composite image to a preview display area.
10. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the executable instructions comprise: acquiring a first operation aiming at a preview image; determining a first focus position and a second focus position in the preview image based on the first operation; focusing the first focus position by adopting a first camera to obtain a first image, and focusing the second focus position by adopting a second camera to obtain a second image; synthesizing the first image and the second image to obtain a synthesized image, wherein the synthesized image comprises a first image area corresponding to the first focus position in the first image and a second image area corresponding to the second focus position in the second image; and outputting the composite image to a preview display area.
CN202110276812.9A 2021-03-15 2021-03-15 Processing method and device and electronic equipment Pending CN113014820A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110276812.9A CN113014820A (en) 2021-03-15 2021-03-15 Processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110276812.9A CN113014820A (en) 2021-03-15 2021-03-15 Processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113014820A true CN113014820A (en) 2021-06-22

Family

ID=76407342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110276812.9A Pending CN113014820A (en) 2021-03-15 2021-03-15 Processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113014820A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001223874A (en) * 2000-02-04 2001-08-17 Kiyoharu Aizawa Arbitrary focused image composite device and camera for simultaneously picking up a plurality of images, which is used for the same
CN104349063A (en) * 2014-10-27 2015-02-11 东莞宇龙通信科技有限公司 Method, device and terminal for controlling camera shooting
CN106060386A (en) * 2016-06-08 2016-10-26 维沃移动通信有限公司 Preview image generation method and mobile terminal
CN106973227A (en) * 2017-03-31 2017-07-21 努比亚技术有限公司 Intelligent photographing method and device based on dual camera
CN108513069A (en) * 2018-03-30 2018-09-07 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108900763A (en) * 2018-05-30 2018-11-27 Oppo(重庆)智能科技有限公司 Filming apparatus, electronic equipment and image acquiring method
CN110266957A (en) * 2019-07-09 2019-09-20 维沃移动通信有限公司 Image shooting method and mobile terminal
CN111866370A (en) * 2020-05-28 2020-10-30 北京迈格威科技有限公司 Method, device, equipment, medium, camera array and assembly for synthesizing panoramic deep image
CN111654629A (en) * 2020-06-11 2020-09-11 展讯通信(上海)有限公司 Camera switching method and device, electronic equipment and readable storage medium
CN112492212A (en) * 2020-12-02 2021-03-12 维沃移动通信有限公司 Photographing method and device, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114157810A (en) * 2021-12-21 2022-03-08 西安维沃软件技术有限公司 Shooting method, shooting device, electronic equipment and medium
CN114157810B (en) * 2021-12-21 2023-08-18 西安维沃软件技术有限公司 Shooting method, shooting device, electronic equipment and medium

Similar Documents

Publication Publication Date Title
JP6263623B2 (en) Image generation method and dual lens apparatus
KR101873668B1 (en) Mobile terminal photographing method and mobile terminal
KR101665130B1 (en) Apparatus and method for generating image including a plurality of persons
KR20150120317A (en) Method and electronic device for implementing refocusing
CN110661977B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN112887609B (en) Shooting method and device, electronic equipment and storage medium
CN112017137B (en) Image processing method, device, electronic equipment and computer readable storage medium
JP6907274B2 (en) Imaging device and imaging method
CN114125179B (en) Shooting method and device
JP2022103020A (en) Photographing method and device, terminal, and storage medium
KR20120002834A (en) Image pickup apparatus for providing reference image and method for providing reference image thereof
CN116109922A (en) Bird recognition method, bird recognition apparatus, and bird recognition system
CN108521862A (en) Method and apparatus for track up
CN114390201A (en) Focusing method and device thereof
CN113014820A (en) Processing method and device and electronic equipment
CN107392850B (en) Image processing method and system
JP6645711B2 (en) Image processing apparatus, image processing method, and program
CN114025100B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114785969A (en) Shooting method and device
CN112399077B (en) Shooting method and device and electronic equipment
JP6213470B2 (en) Image processing apparatus, imaging apparatus, and program
JP6157238B2 (en) Image processing apparatus, image processing method, and image processing program
JP2015002476A (en) Image processing apparatus
CN115802171A (en) Image processing method and device
CN117692756A (en) Shooting method, shooting device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210622