CN113873160A - Image processing method, image processing device, electronic equipment and computer storage medium - Google Patents

Image processing method, image processing device, electronic equipment and computer storage medium

Info

Publication number
CN113873160A
CN113873160A
Authority
CN
China
Prior art keywords
image
target
depth information
target object
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111166409.7A
Other languages
Chinese (zh)
Other versions
CN113873160B (en)
Inventor
Dong Wei (董巍)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202111166409.7A priority Critical patent/CN113873160B/en
Publication of CN113873160A publication Critical patent/CN113873160A/en
Application granted granted Critical
Publication of CN113873160B publication Critical patent/CN113873160B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N 23/958 Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
    • H04N 23/959 Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics

Abstract

The application discloses an image processing method, an image processing device, an electronic device and a computer storage medium, and belongs to the technical field of communication. The image processing method includes: when a first image has been obtained by shooting a target object based on a first focusing position, controlling an automatic focusing motor to move to a second focusing position and shooting the target object to obtain a second image; and determining depth information of a target image according to the first image and the second image, wherein the target image includes the target object.

Description

Image processing method, image processing device, electronic equipment and computer storage medium
Technical Field
The present application belongs to the field of communication technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a computer storage medium.
Background
With the increasing popularization of electronic devices and the continuous progress of camera technology, users often take photos with electronic devices. Photo blurring is a common photographing function of electronic devices: the subject of the captured image is kept sharp while the background is blurred.
In practical application, a software algorithm is often used to simulate the blurring effect: the depth information of different objects in the shot scene is estimated to obtain the depth information of the captured image. However, current shooting methods suffer from low depth-information calculation accuracy and outright calculation errors, so the blurring effect of subsequent images is poor and the user's shooting experience is degraded.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, an electronic device, and a computer storage medium, which can solve the problem of low depth information calculation accuracy.
In a first aspect, an embodiment of the present application provides an image processing method, including:
under the condition that a target object is shot based on a first focusing position to obtain a first image, controlling an automatic focusing motor to move to a second focusing position to shoot the target object to obtain a second image;
and determining depth information of a target image according to the first image and the second image, wherein the target image comprises a target object.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the automatic focusing control system comprises an acquisition module, a focusing module and a focusing module, wherein the acquisition module is used for controlling an automatic focusing motor to move to a second focusing position to shoot a target object to obtain a second image under the condition that the target object is shot based on a first focusing position to obtain a first image;
and the processing module is used for determining the depth information of the target image according to the first image and the second image, and the target image comprises a target object.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, when a target object is shot, the automatic focusing motor is controlled to move to different positions so that a first image and a second image of the target object at different focusing positions are obtained respectively. To improve the accuracy of the obtained depth-of-field information, the depth-of-field information of the target image is then determined according to the first image and the second image at the different focusing positions.
Drawings
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic hardware structure diagram of another electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second" and the like are generally used in a generic sense and do not limit the number of objects; for example, a first object may be one object or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
In order to solve the problems in the background art, embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, and a computer storage medium. When the electronic device shoots a target object, the automatic focusing motor is controlled to move to different positions so that a first image and a second image of the target object at different focusing positions are obtained respectively. To improve the accuracy of the obtained depth-of-field information, the depth-of-field information of the target image is then determined according to the first image and the second image at the different focusing positions.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application, including steps 110 to 120.
Step 110: when a first image is obtained by shooting the target object based on the first focusing position, controlling the automatic focusing motor to move to the second focusing position to shoot the target object to obtain a second image.
In the embodiment of the application, the target object refers to the scene that can be captured in the shooting preview interface of the electronic device. After the electronic device starts the shooting function, it shoots the target object through the camera to obtain a shot image. The electronic device may include one camera or a plurality of cameras, and also includes an automatic focusing motor, such as an AF (Auto Focus) motor. In the process of shooting the target object with automatic focusing, images with different spatial depth information can be obtained at different focusing positions.
In some embodiments, the electronic device may provide multiple image capturing modes. Acquiring the depth information of the target object may be part of the image processing procedure in a default image capturing mode, or in an image capturing mode such as a portrait mode, a blurring mode or a large-aperture mode, which is not specifically limited herein.
Specifically, in step 110 of this embodiment of the present application, a user starts a shooting function of an electronic device, and after the electronic device receives a shooting instruction, the electronic device executes the shooting function, automatically focuses at a first focusing position, and shoots a target object to obtain a first image. And then, the electronic equipment controls the automatic focusing motor to move to the second focusing position without user operation, automatically focuses at the second focusing position, and shoots the target object to obtain a second image, so that the shooting process is simplified, and the shooting efficiency is improved.
In some embodiments, a focusing position may refer to the distance between the focus and the lens. The distance corresponding to the second focusing position may be greater than that corresponding to the first focusing position, in which case the automatic focusing motor moves away from the lens when the electronic device controls it to move to the second focusing position; or the distance corresponding to the second focusing position may be smaller than that corresponding to the first focusing position, in which case the automatic focusing motor moves toward the lens. This is not limited herein.
In some embodiments, in order to improve the shooting quality of the target image, step 110 in this embodiment may specifically include the following steps:
when the target object is shot based on the first focusing position to obtain N frames of third images, controlling the automatic focusing motor to move to the second focusing position to shoot the target object to obtain M frames of fourth images, where N and M are integers greater than or equal to 1;
and performing preset fusion noise-reduction processing on the N frames of third images and the M frames of fourth images respectively to obtain the first image and the second image.
Specifically, the user starts the shooting function of the electronic device. During shooting, the electronic device executes the shooting function, automatically focuses at the first focusing position and shoots the target object; meanwhile, the electronic device can keep grabbing frames, that is, when shooting is triggered, the electronic device automatically shoots multiple times to obtain the N frames of third images.
In some embodiments, to reduce the shooting delay and improve the user's shooting experience, when shooting is triggered the electronic device may calculate the actual shooting time and obtain multiple frames from the preview buffer as the N frames of third images shot of the target object based on the first focusing position. After the N frames of third images are obtained, fusion noise-reduction processing is performed on them according to a preset fusion noise-reduction algorithm to obtain a high-definition first image.
In the embodiment of the application, after the shooting at the first focusing position is completed, the electronic device controls the automatic focusing motor to move to the second focusing position without user operation, the automatic focusing is performed at the second focusing position, and the target object is shot to obtain the M-frame fourth image. And after the M frames of fourth images are obtained, performing fusion noise reduction processing on the M frames of fourth images according to a preset fusion noise reduction algorithm to obtain a high-definition second image.
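The disclosure does not specify the preset fusion noise-reduction algorithm. As a minimal sketch, assuming the N frames are already aligned, plain temporal averaging is one common fusion step (the function name is illustrative):

```python
import numpy as np

def fuse_denoise(frames):
    """Fuse N aligned frames of the same scene by temporal averaging.

    For zero-mean sensor noise, averaging N frames lowers the noise
    standard deviation by roughly a factor of sqrt(N).
    """
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)
```

A production pipeline would add motion compensation and ghost (outlier) rejection on top of the bare average.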
In order to make the blurring effect perceptible during shooting and improve the user's visual experience, a clear preview image can be displayed on the shooting preview interface at the first focusing position; as the electronic device controls the automatic focusing motor to move to the second focusing position and completes focusing there, the preview image displayed on the shooting preview interface shows a visual change from clear to blurred. To help the user judge the shooting quality, after the electronic device finishes shooting based on the second focusing position, the automatic focusing motor can be controlled to return to the first focusing position, so that the preview image shows a visual change from blurred back to clear. This improves the user's visual experience and makes the degree of image blurring easy to perceive.
After the first image and the second image are obtained, step 120 may be performed next.
Step 120: determining depth information of the target image according to the first image and the second image.
Specifically, the first image and the second image are obtained by shooting the target object at different focusing positions, so they contain different spatial depth information. Therefore, high-accuracy depth information of the target image can be determined from the first image and the second image; this is especially significant for an electronic device that includes only a single camera. Moreover, because the depth-of-field information of the target image is accurate, the blurring effect of subsequently blurring the target image is improved, which improves the user's shooting experience.
In some embodiments, determining the depth information of the target image may specifically include the following steps: firstly, carrying out image registration processing on a first image and a second image; next, determining edge depth information of the target image according to the first image and the second image after the image registration processing; and then, determining the depth information of the target image according to the edge depth information.
In some embodiments, since the electronic device may shake when shooting, in order to improve the accuracy of the edge depth information of the target image, the first image and the second image are subjected to image registration processing before specifically calculating the edge depth information of the target image.
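The registration method is left open in the disclosure. One hedged sketch, assuming the dominant misalignment between the two exposures is a global translation caused by slight hand shake, is phase correlation (the function below is an illustration, not the disclosed method):

```python
import numpy as np

def estimate_shift(ref, mov):
    """Estimate the integer (dy, dx) circular shift of `mov` relative to
    `ref` by phase correlation: the normalized cross-power spectrum of
    the two images has an impulse at the translation offset.
    """
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # map wrap-around peak positions to signed shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

The second image can then be shifted back by the estimated offset before sharpness comparison; sub-pixel registration or a full homography would be needed for rotation or parallax.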
In the embodiment of the present application, the edge of the target image may be determined accurately according to different spatial depth information included in the first image and the second image, and then the edge depth information of the edge may be obtained.
Illustratively, the sharpness of the same target object may differ between the first image and the second image. Taking the first image as an example, gradient information of the first image may be determined from its grayscale map and used to characterize the sharpness of the target object in the first image. The sharpness may also be characterized and calculated using other physical quantities, which is not specifically limited herein. Gradient information of the second image is determined by the same processing to characterize the sharpness of the target object in the second image. The edge of the target image can thus be determined from the first image and the second image, and the edge depth information of the edge can be calculated.
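As one concrete realization of the gradient-based sharpness measure described above (the disclosure allows other physical quantities), the per-pixel gradient magnitude can serve as a sharpness map:

```python
import numpy as np

def sharpness_map(gray):
    """Per-pixel sharpness proxy: gradient magnitude of a grayscale image.

    In-focus regions keep strong local intensity gradients; defocus
    smooths the image, so the gradient magnitude drops there.
    """
    gy, gx = np.gradient(np.asarray(gray, dtype=np.float64))
    return np.hypot(gx, gy)
```

Comparing the two maps pixel by pixel indicates which image is better focused at each location.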
In some embodiments, to improve the accuracy of the calculation, the following steps may be included in determining the edge depth information:
respectively acquiring pixel position information of a target object in a first image and a second image;
and determining edge depth information of the target image according to the first focus position, the second focus position and pixel position information of the target object in the first image and the second image.
Specifically, after the registration processing is performed on the first image and the second image, the pixel position information of the target object in the two images can be accurately determined. For example, with the first image as the reference, the second image can be registered and aligned to the first image by an image registration method, yielding the pixel position information of the target object in the first image and the second image. The edge depth information of the target image is then calculated from the first focusing position, the second focusing position and this pixel position information, which improves the accuracy of the depth-information calculation and effectively reduces calculation errors.
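The disclosure gives no closed form for combining the two focusing positions with the per-pixel measurements. One illustrative depth-from-defocus heuristic, assuming sharpness maps are available for both registered images and treating each focusing position as a focus distance, blends the two distances by relative sharpness (both the names and the linear blend are assumptions):

```python
import numpy as np

def edge_depth(sharp1, sharp2, d1, d2):
    """Blend two focus distances per edge pixel by relative sharpness.

    A pixel that is much sharper in the image focused at distance d1 is
    assumed to lie near d1, and symmetrically for d2; intermediate
    sharpness ratios interpolate between the two distances.
    """
    s1 = np.asarray(sharp1, dtype=np.float64)
    s2 = np.asarray(sharp2, dtype=np.float64)
    w = s1 / (s1 + s2 + 1e-12)  # weight toward the first focus distance
    return w * d1 + (1.0 - w) * d2
```

In practice this estimate is only reliable at edge pixels, which is why the method then propagates it to the rest of the image.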
As a specific example, determining the depth information of the target image according to the edge depth information may include: calculating the depth information of the target image based on the first image, the edge depth information of the target image, and a preset guide function.
For example, the first image may be used as a reference image, that is, as a guide map. The first image and the edge depth information of the target image are input into a preset guide function to obtain the depth information of the target image, i.e. the depth information corresponding to each pixel point. Illustratively, the depth information of the target image may be obtained according to formula (1).
Dfull = F(Dedge, I1)    (1)
where I1 is the first image, Dedge is the edge depth information of the target image, and Dfull is the depth information of the target image.
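The preset guide function F in formula (1) is not specified in the disclosure. A guided filter (He et al.) is one plausible realization: it makes the output depth locally a linear function of the guide image I1, so the propagated depth follows the image's edges. The sketch below is a minimal single-channel version; the window radius and eps are illustrative:

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1)x(2r+1) window with edge padding (slow but clear)."""
    pad = np.pad(a, r, mode='edge')
    out = np.zeros_like(a, dtype=np.float64)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

def guided_filter(guide, src, r=4, eps=1e-3):
    """Guided filter: fit src as a local linear function of guide in each
    window, so the output is smooth where the guide is smooth and keeps
    the guide's edges."""
    I = np.asarray(guide, dtype=np.float64)
    p = np.asarray(src, dtype=np.float64)
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov = box_mean(I * p, r) - mI * mp
    var = box_mean(I * I, r) - mI * mI
    a = cov / (var + eps)
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)
```

Here `guide` plays the role of I1 and `src` a densified Dedge map; real implementations use an O(1)-per-pixel box filter rather than the explicit window loop.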
According to the image processing method provided by the embodiment of the application, when the target object is shot, the automatic focusing motor is controlled to move to different positions so that a first image and a second image of the target object at different focusing positions are obtained respectively. The depth-of-field information of the target image is then determined according to the first image and the second image at the different focusing positions. Because this depth-of-field information is highly accurate, the quality of the image blurring effect is improved, and so is the user's shooting experience.
As a specific example, after the electronic device finishes capturing the target image, the electronic device may save the target image in a preset storage address, for example, save the target image in an album. The user can find the target image, view and edit the target image. For example, a user may enter an image editing mode by clicking an editing control, a first input of the user may be received at an image editing interface, and the electronic device may determine a blurring region of the target image in response to the first input. Therefore, the blurring effect of the target image can be obviously improved, and the accuracy of the boundary between the shooting subject and the blurring area in the target image is improved.
As another specific example, during shooting, a first input of the user may be received at the image shooting preview interface, and the electronic device may determine a target blurring region of the target image in response to the first input. After obtaining the depth information of the target image according to the image processing method provided in this embodiment of the present application, the electronic device may directly blur the target blurring region, and after the blurring processing is completed, display the blurred target image.
The first input may be a click input of the user on the shooting preview interface of the electronic device, a slide-and-circle selection input of the user on the screen of the electronic device, or another feasible input, which may be determined according to actual use requirements and is not specifically limited herein.
According to the image processing method provided by the embodiment of the application, during shooting the user only needs to trigger the electronic device to shoot and determine the target blurring region through the first input; the blurred target image is then obtained according to the depth-of-field information of the target image. This effectively simplifies the shooting process and improves the user's shooting experience.
As another specific example, during the shooting process, the electronic device may further determine a target blurring region in the target image according to the depth information, perform blurring on the target blurring region, and then display the blurred target image. That is, the electronic device may automatically determine the focusing subject according to the depth of field information, and perform blurring processing on the region of the target image outside the focusing subject. The blurring effect can be automatically displayed on the target image displayed after shooting, the shooting process is simplified, post-processing is not needed, and the shooting experience of a user is improved.
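The blurring step above can be sketched as a depth-gated blur. This is a crude stand-in: the disclosure does not prescribe a blur kernel, and a real bokeh renderer would scale the kernel with the depth difference:

```python
import numpy as np

def blur_with_depth(img, depth, focus_depth, tol, r=2):
    """Keep pixels whose depth is within `tol` of `focus_depth` sharp and
    replace the rest with a (2r+1)x(2r+1) box blur (background bokeh)."""
    img = np.asarray(img, dtype=np.float64)
    pad = np.pad(img, r, mode='edge')
    blurred = np.zeros_like(img)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            blurred += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= (2 * r + 1) ** 2
    in_focus = np.abs(np.asarray(depth) - focus_depth) <= tol
    return np.where(in_focus, img, blurred)
```

Picking `focus_depth` from the depth histogram's dominant foreground mode is one way to automate the choice of focusing subject described above.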
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in the embodiment of the present application is described with an example in which an image processing apparatus executes an image processing method.
Fig. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 2, the image processing apparatus 200 may include:
in some embodiments, the acquisition module 210 is configured to control the auto-focus motor to move to the second focus position to capture a second image when the first image is obtained by capturing the target object based on the first focus position;
the processing module 220 is configured to determine depth information of a target image according to the first image and the second image, where the target image includes a target object.
In this way, when a target object is shot, the automatic focusing motor is controlled to move to different positions so that a first image and a second image of the target object at different focusing positions are obtained respectively; to improve the accuracy of the obtained depth-of-field information, the depth-of-field information of the target image is then determined according to the first image and the second image at the different focusing positions.
In some embodiments, the processing module 220 is further configured to perform an image registration process on the first image and the second image;
the processing module 220 is further configured to determine edge depth information of the target image according to the first image and the second image after the image registration processing;
the processing module 220 is further configured to determine depth information of the target image according to the edge depth information.
Therefore, the edge of the target image can be accurately determined, and the edge depth information of the edge can be obtained. The blurring effect of blurring the target image subsequently is improved, and the shooting experience of the user is improved.
In some embodiments, the processing module 220 is further configured to obtain pixel position information of the target object in the first image and the second image respectively;
the processing module 220 is further configured to determine edge depth information of the target image according to the first focus position, the second focus position, and pixel position information of the target object in the first image and the second image.
Therefore, the accuracy of the edge depth information of the target image can be improved, the accuracy of the depth information calculation can be improved, and the calculation error can be effectively reduced.
In some embodiments, the processing module 220 is further configured to calculate depth information of the target image based on the first image, the edge depth information of the target image, and a preset guidance function.
Thus, the accuracy of the depth information of the target image can be improved.
In some embodiments, the apparatus further includes a receiving module, configured to receive a first input;
the processing module 220 is further configured to determine a target blurring region in the target image in response to the first input;
the processing module 220 is further configured to blur the target blurring region according to the depth-of-field information;
and a display module, configured to display the blurred target image.
Therefore, the blurring effect of the target image can be obviously improved, and the accuracy of the boundary between the shooting subject and the blurring area in the target image is improved.
In some embodiments, the processing module 220 is further configured to determine a target blurring region in the target image according to the depth information;
the processing module 220 is further configured to blur the target blurring region of the target image;
and the display module is further configured to display the blurred target image.
Therefore, the blurring effect can be automatically displayed on the target image displayed after shooting, the shooting process is simplified, post-processing is not needed, and the shooting experience of a user is improved.
In some embodiments, the acquisition module 210 is further configured to control the automatic focusing motor to move to the second focusing position to shoot the target object to obtain M frames of fourth images when the target object is shot at the first focusing position to obtain N frames of third images, where N and M are integers greater than or equal to 1;
and the processing module 220 is further configured to perform the preset fusion noise-reduction processing on the N frames of third images and the M frames of fourth images respectively to obtain the first image and the second image.
Thus, the shooting quality of the target image can be improved.
The image processing apparatus in the embodiment of the present application may be a standalone apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine or a self-service machine, which is not specifically limited in the embodiments of the present application.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 3, an electronic device 300 is further provided in this embodiment of the present application, and includes a processor 301, a memory 302, and a program or an instruction stored in the memory 302 and capable of being executed on the processor 301, where the program or the instruction is executed by the processor 301 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, it is not described here again.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 4 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: radio unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, and processor 410.
Those skilled in the art will appreciate that the electronic device 400 may further include a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
In some embodiments, the input unit 404 is configured to, when a target object is captured based on a first focus position to obtain a first image, control the auto-focus motor to move to a second focus position to capture the target object to obtain a second image;
a processor 410 for determining depth information of a target image from the first image and the second image, the target image comprising a target object.
In this way, when the target object is shot, the auto-focus motor is controlled to move to different positions, and the first image and the second image of the target object are obtained at the different focus positions respectively; the depth information of the target image is then determined from the first image and the second image at the different focus positions, which improves the accuracy of the obtained depth of field information of the target image.
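The patent does not disclose the concrete depth-recovery algorithm, but the underlying depth-from-focus cue can be illustrated with a minimal sketch (the function names and the Laplacian sharpness measure are illustrative assumptions, not the claimed method): a pixel that is sharper in the near-focused capture than in the far-focused one is labeled as near.

```python
def laplacian(img, y, x):
    # Discrete Laplacian magnitude as a simple local sharpness measure.
    return abs(4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
               - img[y][x - 1] - img[y][x + 1])

def coarse_depth_map(img_near, img_far):
    """Label each interior pixel 0 (near) when the near-focused capture is
    sharper there, and 1 (far) otherwise."""
    h, w = len(img_near), len(img_near[0])
    depth = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            depth[y][x] = 0 if laplacian(img_near, y, x) >= laplacian(img_far, y, x) else 1
    return depth
```

In practice the comparison would be made over windows rather than single pixels, and the two captures would first be registered, as the embodiment itself goes on to describe.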
In some embodiments, the processor 410 is further configured to perform an image registration process on the first image and the second image;
the processor 410 is further configured to determine edge depth information of the target image according to the first image and the second image after the image registration processing;
the processor 410 is further configured to determine depth information of the target image according to the edge depth information.
In this way, the edges in the target image can be accurately determined and their edge depth information obtained. This improves the blurring effect when the target image is subsequently blurred, and improves the user's shooting experience.
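The embodiment does not specify the registration method. A minimal translational registration by exhaustive search (the function name and the sum-of-absolute-differences cost are assumptions for illustration) conveys the idea of aligning the second image to the first before the two are compared:

```python
def best_shift(ref, mov, max_shift=2):
    """Brute-force translational registration: return the (dy, dx) offset
    that minimizes the sum of absolute differences between ref[y][x] and
    mov[y + dy][x + dx] over the overlapping region."""
    h, w = len(ref), len(ref[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cost = 0
            for y in range(max(0, -dy), min(h, h - dy)):
                for x in range(max(0, -dx), min(w, w - dx)):
                    cost += abs(ref[y][x] - mov[y + dy][x + dx])
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best
```

A production implementation would use sub-pixel, possibly non-rigid registration; the brute-force search is only meant to show what "image registration processing" accomplishes here.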
In some embodiments, the processor 410 is further configured to obtain pixel position information of the target object in the first image and the second image, respectively;
the processor 410 is further configured to determine edge depth information of the target image according to the first focus position, the second focus position, and pixel position information of the target object in the first image and the second image.
Therefore, the accuracy of the edge depth information of the target image can be improved, the accuracy of the depth information calculation can be improved, and the calculation error can be effectively reduced.
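The patent does not give the formula relating the two focus positions and pixel positions to depth. The standard optical relation behind mapping a focus position to an object distance is the thin-lens equation, sketched here as background (a textbook relation, not the claimed computation itself):

```python
def object_distance(f_mm, image_dist_mm):
    """Thin-lens relation 1/f = 1/u + 1/v, solved for the object
    distance u given the focal length f and image distance v (mm)."""
    return 1.0 / (1.0 / f_mm - 1.0 / image_dist_mm)

# Example: a 50 mm lens focused with the sensor plane 75 mm behind it
# is focused on an object 150 mm away.
```

Each auto-focus motor position corresponds to an image distance, so the two focus positions bracket the depths at which edges of the target object appear sharp.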
In some embodiments, the processor 410 is further configured to calculate depth information of the target image based on the first image, the edge depth information of the target image, and a preset guidance function.
Thus, the accuracy of the depth information of the target image can be improved.
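The "preset guidance function" is not disclosed. The sketch below shows the general idea of guided propagation: spreading sparse edge depths across the image, weighted by similarity of intensities in the guidance (first) image. All names and the Gaussian weighting are illustrative assumptions, not the patented function.

```python
import math

def propagate_depth(guide, edge_depth, sigma=10.0):
    """Fill unknown depths (None) by averaging the known edge depths,
    weighted by similarity of guidance-image intensities - a crude 1-D
    stand-in for a guided filter."""
    known = [(g, d) for g, d in zip(guide, edge_depth) if d is not None]
    out = []
    for g, d in zip(guide, edge_depth):
        if d is not None:
            out.append(d)
            continue
        wsum = vsum = 0.0
        for gk, dk in known:
            w = math.exp(-((g - gk) ** 2) / (2 * sigma ** 2))
            wsum += w
            vsum += w * dk
        out.append(vsum / wsum)
    return out
```

Pixels whose intensities resemble a known edge inherit that edge's depth, which is how a dense depth map can be recovered from edge depth information plus the first image.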
In some embodiments, a user input unit 407 for receiving a first input;
a processor 410, further for determining a target blurring region in the target image in response to a first input;
the processor 410 is further configured to blur the target blurring region according to the depth of field information;
and a display unit 406, configured to display the blurred target image.
Therefore, the blurring effect of the target image can be obviously improved, and the accuracy of the boundary between the shooting subject and the blurring area in the target image is improved.
In some embodiments, the processor 410 is further configured to determine a target blurring region in the target image according to the depth information;
the processor 410 is further configured to perform blurring processing on the target blurring region in the target image;
the display unit 406 is further configured to display the blurred target image.
Therefore, the blurring effect can be automatically displayed on the target image displayed after shooting, the shooting process is simplified, post-processing is not needed, and the shooting experience of a user is improved.
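As a rough illustration of depth-based blurring (the actual bokeh rendering is not disclosed; the 3x3 box blur, the threshold, and all names are assumptions), pixels whose depth departs from the subject depth are smoothed while subject pixels stay sharp:

```python
def blur_background(img, depth, subject_depth, tol=0.5):
    """Box-blur (3x3 mean) every pixel whose depth differs from the
    subject depth by more than tol; subject pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if abs(depth[y][x] - subject_depth) <= tol:
                continue  # part of the shooting subject: keep sharp
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, w and x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out
```

An accurate depth map matters precisely at the subject boundary: a wrong depth value there either blurs the subject or leaves a sharp halo in the background.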
In some embodiments, the input unit 404 is further configured to, when N frames of third images of the target object are captured based on the first focus position, control the auto-focus motor to move to the second focus position to capture the target object to obtain M frames of fourth images, where N and M are integers greater than or equal to 1;
and the processor 410 is further configured to perform preset fusion and noise reduction processing on the N frames of third images and the M frames of fourth images, respectively, to obtain the first image and the second image.
Thus, the shooting quality of the target image can be improved.
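The "preset fusion noise reduction processing" is not specified; its simplest instance is a temporal mean over aligned frames, which attenuates zero-mean sensor noise (a generic sketch, not the claimed processing):

```python
def fuse_frames(frames):
    """Per-pixel mean across aligned frames: averaging N frames reduces
    the standard deviation of zero-mean noise by roughly sqrt(N)."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]
```

Fusing the N and M bursts into one clean image per focus position improves both the depth estimate and the final rendered photograph.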
It should be understood that, in the embodiment of the present application, the input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042; the graphics processor 4041 processes image data of a still picture or a video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 409 may be used to store software programs as well as various data, including, but not limited to, application programs and an operating system. The processor 410 may integrate an application processor, which primarily handles the operating system, user interfaces, and applications, and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor may not be integrated into the processor 410.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An image processing method, comprising:
under the condition that a target object is shot based on a first focusing position to obtain a first image, controlling an automatic focusing motor to move to a second focusing position to shoot the target object to obtain a second image;
determining depth information of a target image according to the first image and the second image, wherein the target image comprises the target object.
2. The method of claim 1, wherein determining depth information for a target image from the first image and the second image comprises:
performing image registration processing on the first image and the second image;
determining edge depth information of the target image according to the first image and the second image after image registration processing;
and determining the depth information of the target image according to the edge depth information.
3. The method of claim 2, wherein determining edge depth information of the target image from the first and second images after image registration processing comprises:
respectively acquiring pixel position information of the target object in the first image and the second image;
and determining the edge depth information of the target image according to the first focus position, the second focus position and the pixel position information of the target object in the first image and the second image.
4. The method of claim 2, wherein determining depth information for the target image from the edge depth information comprises:
and calculating to obtain the depth of field information of the target image based on the first image, the edge depth information of the target image and a preset guide function.
5. The method of claim 1, further comprising:
receiving a first input;
determining a target blurring region in the target image in response to the first input;
blurring the target blurring region according to the depth of field information;
and displaying the blurred target image.
6. The method of claim 1, further comprising:
determining a target blurring area in the target image according to the depth information;
performing blurring processing on the target blurring region in the target image;
and displaying the blurred target image.
7. The method of claim 1, wherein controlling the auto-focus motor to move to a second focus position to capture a second image of the target object while capturing the first image based on the first focus position comprises:
under the condition that a target object is shot based on a first focusing position to obtain N frames of third images, controlling an automatic focusing motor to move to a second focusing position to shoot the target object to obtain M frames of fourth images, wherein N and M are integers greater than or equal to 1;
and respectively carrying out preset fusion noise reduction processing on the N frames of third images and the M frames of fourth images to obtain the first images and the second images.
8. An image processing apparatus, characterized in that the apparatus comprises:
the automatic focusing control system comprises an acquisition module, a focusing module and a control module, wherein the acquisition module is used for controlling an automatic focusing motor to move to a second focusing position to shoot a target object to obtain a second image under the condition that the target object is shot based on a first focusing position to obtain a first image;
and the processing module is used for determining the depth information of a target image according to the first image and the second image, wherein the target image comprises the target object.
9. An electronic device comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 7.
10. A readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the image processing method according to any one of claims 1 to 7.
CN202111166409.7A 2021-09-30 2021-09-30 Image processing method, device, electronic equipment and computer storage medium Active CN113873160B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111166409.7A CN113873160B (en) 2021-09-30 2021-09-30 Image processing method, device, electronic equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111166409.7A CN113873160B (en) 2021-09-30 2021-09-30 Image processing method, device, electronic equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN113873160A true CN113873160A (en) 2021-12-31
CN113873160B CN113873160B (en) 2024-03-05

Family

ID=79001606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111166409.7A Active CN113873160B (en) 2021-09-30 2021-09-30 Image processing method, device, electronic equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN113873160B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103973978A (en) * 2014-04-17 2014-08-06 华为技术有限公司 Method and electronic device for achieving refocusing
US20150146994A1 (en) * 2013-11-28 2015-05-28 Canon Kabushiki Kaisha Method, system and apparatus for determining a depth value of a pixel
CN106303202A (en) * 2015-06-09 2017-01-04 联想(北京)有限公司 A kind of image information processing method and device
CN107493432A (en) * 2017-08-31 2017-12-19 广东欧珀移动通信有限公司 Image processing method, device, mobile terminal and computer-readable recording medium
CN108076286A (en) * 2017-11-30 2018-05-25 广东欧珀移动通信有限公司 Image weakening method, device, mobile terminal and storage medium
CN108419009A (en) * 2018-02-02 2018-08-17 成都西纬科技有限公司 Image definition enhancing method and device
CN112950721A (en) * 2021-02-09 2021-06-11 深圳市汇顶科技股份有限公司 Depth information determination method and device and binocular vision system
CN113438388A (en) * 2021-07-06 2021-09-24 Oppo广东移动通信有限公司 Processing method, camera assembly, electronic device, processing device and medium


Also Published As

Publication number Publication date
CN113873160B (en) 2024-03-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant