CN111246093B - Image processing method, image processing device, storage medium and electronic equipment


Info

Publication number
CN111246093B
Authority
CN
China
Prior art keywords
image
camera
lens
subject
shooting
Legal status
Active
Application number
CN202010048794.4A
Other languages
Chinese (zh)
Other versions
CN111246093A (en)
Inventor
王会朝
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010048794.4A
Publication of CN111246093A
Application granted
Publication of CN111246093B


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working
    • H04N7/141: Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/142: Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N7/144: Constructional details of the terminal equipment, e.g. arrangements of the camera and the display; camera and display on the same optical axis, e.g. optically multiplexing the camera and display for eye-to-eye contact

Abstract

The application discloses an image processing method, an image processing apparatus, a storage medium and an electronic device. The method is applied to an electronic device that includes at least a first camera and a second camera, and includes the following steps: acquiring, by using the first camera, a first image in which a shooting subject is imaged sharply; performing image segmentation on the first image to obtain a subject image, where the subject image is the image area corresponding to the shooting subject in the first image; acquiring, by using the second camera, a second image in which the shooting subject is out of focus; performing image fusion processing on the subject image and the second image to obtain a target image; and saving the target image as a video frame to obtain a target video. Embodiments of the application can improve the blurring effect of video images.

Description

Image processing method, image processing device, storage medium and electronic equipment
Technical Field
The present application belongs to the field of image technologies, and in particular, to an image processing method, an image processing apparatus, a storage medium, and an electronic device.
Background
Image blurring is widely used in image processing. For example, an electronic device may blur the background of an image to highlight the shooting subject. An image with a prominent subject and a blurred background is highly expressive.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, a storage medium and an electronic device, which can improve the blurring effect of a video image.
In a first aspect, an embodiment of the present application provides an image processing method, which is applied to an electronic device, where the electronic device includes at least a first camera and a second camera, and the method includes:
acquiring a first image with clear imaging of a shooting subject by using the first camera;
performing image segmentation on the first image to obtain a subject image, wherein the subject image is the image area corresponding to the shooting subject in the first image;
acquiring a second image by using the second camera, wherein the shooting subject in the second image is out of focus;
performing image fusion processing on the subject image and the second image to obtain a target image;
and storing the target image as a video frame to obtain a target video.
In a second aspect, an embodiment of the present application provides an image processing apparatus, which is applied to an electronic device, where the electronic device includes at least a first camera and a second camera, and the apparatus includes:
the first acquisition module is used for acquiring a first image with clear imaging of a shooting subject by using the first camera;
the image segmentation module is used for performing image segmentation on the first image to obtain a subject image, wherein the subject image is the image area corresponding to the shooting subject in the first image;
the second acquisition module is used for acquiring a second image by using the second camera, wherein the shooting subject in the second image is out of focus;
the image fusion module is used for performing image fusion processing on the subject image and the second image to obtain a target image;
and the storage module is used for storing the target image as a video frame so as to obtain a target video.
In a third aspect, an embodiment of the present application provides a storage medium, on which a computer program is stored, which, when executed on a computer, causes the computer to execute a flow in an image processing method provided by an embodiment of the present application.
In a fourth aspect, an embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is configured to execute the flow in the image processing method provided in the embodiment of the present application by calling a computer program stored in the memory.
In the embodiment of the present application, since the second image used for the fusion is an image in which the subject is out of focus, the blurring in the second image is real, natural blurring, not blurring generated by simulation. Therefore, compared with a scheme of directly simulating and generating a blurring effect on an original image in the related art, the image processing method provided by the embodiment of the application can obtain an image with a real blurring effect, and a video image generated by using the image with the real blurring effect also has the real blurring effect, that is, the blurring effect and the imaging quality of the video image are improved.
Drawings
The technical solutions and advantages of the present application will become apparent from the following detailed description of specific embodiments of the present application when taken in conjunction with the accompanying drawings.
Fig. 1 is a first schematic flowchart of an image processing method according to an embodiment of the present application.
Fig. 2 is a second schematic flowchart of an image processing method according to an embodiment of the present application.
Fig. 3 is a third schematic flowchart of an image processing method according to an embodiment of the present application.
Fig. 4 to fig. 7 are scene schematic diagrams of an image processing method according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Fig. 10 is a schematic structural diagram of an image processing circuit according to an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
It can be understood that the execution subject of the embodiment of the present application may be an electronic device such as a smartphone or a tablet computer with a camera.
Referring to fig. 1, fig. 1 is a first flowchart illustrating an image processing method according to an embodiment of the present disclosure. The image processing method can be applied to electronic equipment, and the electronic equipment at least comprises a first camera and a second camera. The flow of the image processing method may include:
101. Acquire, by using the first camera, a first image in which the shooting subject is imaged sharply.
Image blurring is widely used in image processing. For example, an electronic device may blur the background of an image to highlight the shooting subject; an image with a prominent subject and a blurred background is highly expressive. In the related art, image blurring is often applied in shooting scenes such as portrait shooting and macro shooting. However, the blurring effect in the related art is poor. Taking background blurring as an example, the related art generally captures a single image, segments the subject from that image, and blurs the background region outside the subject. The blurring in the related art is therefore generated by simulation on the original image, which is not real and natural enough, so the blurring effect is poor.
In the embodiment of the present application, an example in which an electronic device has two cameras is described. Of course, in other embodiments, the electronic device may also have more than two cameras, such as three cameras or four cameras, which is not specifically limited in this embodiment.
For example, the electronic device may first acquire, by using the first camera, a first image in which the shooting subject (e.g., a person) is imaged sharply. That is, the shooting subject is in sharp focus in the first image.
102. Perform image segmentation on the first image to obtain a subject image, where the subject image is the image area corresponding to the shooting subject in the first image.
For example, after acquiring the first image, the electronic device may segment a subject image from the first image, where the subject image is an image of an image area corresponding to the subject in the first image.
For example, if the subject is a person, the electronic device may segment an image of the person from the first image.
103. Acquire a second image by using the second camera, where the shooting subject in the second image is out of focus.
For example, the electronic device may also take a second image with a second camera, where the subject is out of focus. That is, the subject is blurred in the second image.
104. Perform image fusion processing on the subject image and the second image to obtain a target image.
For example, after the subject image is segmented from the first image and a second image of the subject out of focus is captured by the second camera, the electronic device may fuse the subject image and the second image to obtain the target image.
It is understood that since the subject image used for the fusion is a sharp image of the subject, the subject in the target image is also imaged sharply. Further, since the other region outside the subject is also blurred due to the subject being out of focus in the second image, the other region outside the subject in the target image is blurred.
105. Save the target image as a video frame to obtain a target video.
For example, after obtaining the target image, the electronic device may save the target image as a frame of video frame image. It can be understood that a video segment can be generated by storing a plurality of different video frames, so as to obtain the target video. Each frame of image in the target video may be an image with a real blurring effect obtained by processing with the image processing method provided by the embodiment of the application, so that the image of the target video has the real blurring effect.
It is understood that, in the embodiment of the present application, since the second image used for the fusion is an image in which the subject is out of focus, the blurring in the second image is real, natural blurring, not blurring generated by simulation. Therefore, compared with a scheme of directly simulating and generating a blurring effect on an original image in the related art, the image processing method provided by the embodiment of the application can obtain an image with a real blurring effect, and a video image generated by using the image with the real blurring effect also has the real blurring effect, that is, the blurring effect and the imaging quality of the video image are improved.
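For illustration only, the flow 101 to 105 can be sketched in Python as follows, assuming OpenCV and NumPy are available. capture_frame, segment_subject, the camera handles and the dummy central-region mask are hypothetical placeholders, not part of the patent; any real segmentation method from step 102 would replace the dummy mask.

    import cv2
    import numpy as np

    def capture_frame(camera: cv2.VideoCapture) -> np.ndarray:
        """Grab one frame (placeholder for the real camera driver path)."""
        ok, frame = camera.read()
        if not ok:
            raise RuntimeError("camera read failed")
        return frame

    def segment_subject(image: np.ndarray) -> np.ndarray:
        """Return a 0/255 subject mask; a dummy central region stands in
        for whatever segmentation algorithm step 102 actually uses."""
        h, w = image.shape[:2]
        mask = np.zeros((h, w), np.uint8)
        mask[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = 255
        return mask

    def process_video_frame(cam_first, cam_second, writer: cv2.VideoWriter):
        first = capture_frame(cam_first)      # 101: subject in sharp focus
        mask = segment_subject(first)         # 102: subject image/mask
        second = capture_frame(cam_second)    # 103: subject out of focus
        # 104: fuse the sharp subject onto the naturally blurred frame
        target = np.where(mask[..., None] > 0, first, second)
        writer.write(target)                  # 105: save as a video frame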
Referring to fig. 2, fig. 2 is a second flowchart illustrating an image processing method according to an embodiment of the present disclosure. The image processing method can be applied to electronic equipment, and the electronic equipment at least comprises a first camera and a second camera.
When light is present in a shooting scene, blurring the area where a light is located turns the light into a light spot. Light spots give an image a hazy feeling and therefore better expressive force. In the related art, however, an image is generally captured first, and an algorithm is then used to blur the non-subject region containing the light so as to generate the spot. The light spots in the related art are thus generated by simulation on the original image, and such simulated spots do not look real and natural.
The image processing method provided by the embodiment of the application can obtain the video with real and natural light spots. The flow of the image processing method provided by the embodiment of the application can include:
201. The electronic device acquires, by using the first camera, a first image in which the shooting subject is imaged sharply, where light is present in the shooting scene corresponding to the first image.
For example, the electronic device may acquire, with the first camera, a first image in which the shooting subject is sharply imaged, and light is present in the shooting scene corresponding to the first image.
In one embodiment, the first camera may be a main camera of the electronic device.
202. The electronic equipment carries out image segmentation on the first image to obtain a subject image, wherein the subject image is an image area corresponding to a shooting subject in the first image.
For example, after a first image with a sharp image of a subject is captured, the electronic device may perform image segmentation on the first image by using a preset image segmentation algorithm to segment a subject image from the first image. The subject image is an image of an image area corresponding to the subject in the first image.
It should be noted that image segmentation divides an image into several specific regions with unique properties. In some embodiments, the image may be segmented with a threshold-based method, a region-based method, an edge-based method, a method based on a specific theory, and the like. From a mathematical point of view, image segmentation is the process of dividing a digital image into mutually disjoint regions. It is also a labeling process: pixels belonging to the same region are assigned the same number.
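As a concrete illustration of the threshold-based family mentioned above, the following minimal Python sketch uses Otsu thresholding plus connected components, assuming OpenCV is available; it is only one possible choice, not the specific algorithm of this application.

    import cv2
    import numpy as np

    def threshold_segment(image_bgr: np.ndarray) -> np.ndarray:
        """Label pixels so that pixels of the same region share a number.

        Otsu's method picks the global threshold automatically; connected
        components then assign one label per region.
        """
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        num_labels, labels = cv2.connectedComponents(binary)
        return labels  # int32 map: same region -> same number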
203. The electronic device determines, as a first position, the position of the lens of the first camera when the first camera captures the first image.
204. The electronic device detects a distance between the shooting subject and the first camera.
For example, 203 and 204 may include:
in this embodiment, the electronic device may determine, as the first position, a position where a lens of the first camera is located when the first camera captures the first image.
Thereafter, the electronic device may detect a distance between the photographic subject and the first camera.
If the distance between the shooting subject and the first camera is smaller than the preset threshold, the shooting subject can be considered to be close. At this time, the flow proceeds to 205.
If the distance between the shooting subject and the first camera is greater than or equal to the preset threshold, the shooting subject can be considered to be far away. At this time, the flow proceeds to 207.
In one embodiment, the electronic device may detect a distance between the subject and the first camera according to a first position where a lens of the first camera is located when the first camera takes the first image.
Note that the component controlling lens focusing in the electronic device is a voice coil motor (VCM). A voice coil motor converts current into mechanical force, and its positioning and force control are determined by an external controller. The voice coil motor in the electronic device has a corresponding voice coil motor driver circuit (VCM Driver IC). The driver circuit can precisely control the moving distance and direction of the coil in the voice coil motor, thereby driving the lens to move and achieve focusing.
The VCM operates on the Ampere force law: when the coil in the VCM conducts current, the force generated by the current pushes the lens fixed on the carrier to move, changing the focus distance. The control of the focus distance by the voice coil motor is therefore actually achieved by controlling the current in the coil. In short, the driver circuit of the VCM provides a source of current; after the current flows through the coil of the VCM, the magnetic field in the VCM generates a force that drives the coil (and the lens).
The voice coil motor driver circuit is essentially a DAC circuit with a control algorithm. A DAC code value containing digital position information, uploaded over the I2C bus, is converted into a corresponding output current (the output current corresponding to that DAC code value); the output current is then converted into a focusing distance by the voice coil motor. Different output currents flowing through the voice coil motor generate different Ampere forces, which push the lens on the voice coil motor to move. Thus, after focusing is completed, the camera stays at a sharply focused position with a corresponding digital-to-analog conversion code value (DAC code).
As mentioned above, the lens is driven to different positions corresponding to different DAC code values. When the distance between the shooting subject and the first camera is different, the lens is driven to different positions for clear imaging. Therefore, the distance between the subject and the first camera can be detected according to the position of the lens when the subject is clearly imaged.
For example, suppose the DAC code value of the first camera has a value range of [S1, S3], where S1 < S2 < S3. The electronic device may preset that when the current DAC code value is within [S1, S2], the distance between the shooting subject and the first camera is greater than or equal to the preset threshold, i.e., the shooting subject is far away; and when the current DAC code value is within (S2, S3], the distance between the shooting subject and the first camera is less than the preset threshold, i.e., the shooting subject is close.
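The range check described above can be sketched as follows; the concrete S1, S2, S3 values are made-up placeholders, and the near/far convention follows the example in the preceding paragraph.

    # Calibration constants of the first camera's lens travel; the values
    # are made-up placeholders (the text only requires S1 < S2 < S3).
    S1, S2, S3 = 0, 512, 1023

    def subject_is_near(dac_code: int) -> bool:
        """Classify subject distance from the focused DAC code value.

        Per the example above: DAC in [S1, S2] -> distance >= threshold
        (subject far); DAC in (S2, S3] -> distance < threshold (near).
        """
        if not S1 <= dac_code <= S3:
            raise ValueError("DAC code outside the lens travel range")
        return dac_code > S2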
Of course, the electronic device may also detect the distance between the shooting subject and the first camera in other ways, so as to determine whether the shooting subject is near or far. For example, the electronic device may calculate the distance between the photographic subject and the first camera according to the time difference between the outgoing laser detection signal and the receipt of the returned laser signal, so as to determine whether the photographic subject is near or far, and so on.
205. When the distance between the shooting main body and the first camera is smaller than a preset threshold value, the electronic equipment selects a corresponding lens position from a plurality of lens positions according to a preset first strategy to serve as a second position of the lens of the first camera, wherein the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the second position is larger than the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the first position.
206. And according to the mapping relation between the lens position of the first camera and the lens position of the second camera, the electronic equipment drives the lens of the second camera to a third position and shoots a second image, wherein in the mapping relation between the lens position of the first camera and the lens position of the second camera, the second position where the lens of the first camera is located corresponds to the third position where the lens of the second camera is located.
For example, 205 and 206 may include:
the electronic equipment detects that the distance between the shooting main body and the first camera is smaller than a preset threshold value, namely the shooting main body is close to the first camera. In this case, the electronic device may select a corresponding lens position from a plurality of lens positions as a second position of the lens of the first camera according to a preset first policy, wherein a distance between the lens and the image sensor when the lens of the first camera is at the second position is greater than a distance between the lens and the image sensor when the lens of the first camera is at the first position. Then, according to a preset mapping relationship between the lens position of the first camera and the lens position of the second camera, the electronic device may drive the lens of the second camera to a third position, and when the lens of the second camera moves to the third position, the electronic device may capture a second image. In the mapping relationship between the lens position of the first camera and the lens position of the second camera, the second position where the lens of the first camera is located corresponds to the third position where the lens of the second camera is located. For example, when the first camera and the second camera are cameras of the same specification, the DAC code value corresponding to the first camera when the lens of the first camera is at the second position is the same as the DAC code value corresponding to the second camera when the lens of the second camera is at the third position.
In some embodiments, the preset first policy may be to randomly select one lens position as the second position of the lens of the first camera as long as the distance between the lens and the image sensor when the lens of the first camera is at the second position is greater than the distance between the lens and the image sensor when the lens of the first camera is at the first position.
It should be noted that because the lens-sensor distance at the second position is greater than the lens-sensor distance at the first position, the second camera with its lens at the third position focuses sharply on scenery closer than the shooting subject. The shooting subject in the second image captured by the second camera is therefore out of focus, that is, blurred, and the scenery in the background area of the second image is blurred as well. In other words, the second image is in far-focus defocus. In this case, lights in the shooting scene form light spots in the second image. These spots arise naturally from the far-focus defocus, so they are real light spots.
In one embodiment, the lens-sensor distance at the second position is greater than the lens-sensor distance at any other lens position of the first camera. That is, the second position is the lens position of the first camera farthest from the image sensor (of the first camera), and correspondingly the third position is the outermost position to which the lens of the second camera can be driven. In this case, objects in the background area of the captured second image have the strongest blurring effect, and lights in the shooting scene form light spots with the best blurring effect in the second image.
207. When the distance between the shooting main body and the first camera is larger than or equal to a preset threshold value, the electronic equipment selects a corresponding lens position from a plurality of lens positions according to a preset second strategy to serve as a fourth position of the lens of the first camera, wherein the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the fourth position is smaller than the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the first position.
208. According to the mapping relation between the lens position of the first camera and the lens position of the second camera, the electronic equipment drives the lens of the second camera to a fifth position and shoots a second image, wherein in the mapping relation between the lens position of the first camera and the lens position of the second camera, the fourth position where the lens of the first camera is located corresponds to the fifth position where the lens of the second camera is located.
For example, 207 and 208 may include:
the electronic equipment detects that the distance between the shooting main body and the first camera is larger than or equal to a preset threshold value, namely the shooting main body is far away. In this case, the electronic device may select a corresponding lens position from a plurality of lens positions according to a preset second policy as a fourth position of the lens of the first camera, where a distance between the lens and the image sensor when the lens of the first camera is at the fourth position is smaller than a distance between the lens and the image sensor when the lens of the first camera is at the first position. Then, according to the mapping relationship between the lens position of the first camera and the lens position of the second camera, the electronic device may drive the lens of the second camera to a fifth position, and when the lens of the second camera moves to the fifth position, the electronic device may capture a second image. In the mapping relationship between the lens position of the first camera and the lens position of the second camera, the fourth position where the lens of the first camera is located corresponds to the fifth position where the lens of the second camera is located. For example, when the first camera and the second camera are cameras of the same specification, the DAC code value corresponding to the first camera when the lens of the first camera is at the fourth position is the same as the DAC code value corresponding to the second camera when the lens of the second camera is at the fifth position.
In some embodiments, the preset second policy may be to randomly select a lens position as the fourth position of the lens of the first camera, as long as the distance between the lens and the image sensor when the lens of the first camera is at the fourth position is smaller than the distance between the lens and the image sensor when the lens of the first camera is at the first position.
It should be noted that because the lens-sensor distance at the fourth position is smaller than the lens-sensor distance at the first position, the second camera with its lens at the fifth position focuses sharply on scenery farther than the shooting subject. The shooting subject in the second image captured by the second camera is therefore out of focus, that is, blurred, and the foreground region of the second image is blurred as well. In other words, the second image is in near-focus defocus. Lights in the shooting scene then form light spots in the second image; these spots arise naturally from the near-focus defocus, so they are real light spots.
In one embodiment, the lens-sensor distance at the fourth position is smaller than the lens-sensor distance at any other lens position of the first camera. That is, the fourth position is the lens position of the first camera closest to the image sensor (of the first camera), and correspondingly the fifth position is the innermost position to which the lens of the second camera can be driven. In this case, objects in the foreground region of the captured second image have the strongest blurring effect, and lights in the shooting scene form light spots with the best blurring effect in the second image.
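Steps 205 to 208 can be sketched together as follows, assuming (as in the examples above) cameras of the same specification so that DAC code values map one-to-one between the two modules, and assuming a larger DAC code corresponds to a larger lens-sensor distance. drive_lens and the travel range are hypothetical placeholders.

    # Hypothetical travel range of the VCM, assuming a larger DAC code
    # pushes the lens farther from the image sensor (matching the near/far
    # example above). Same-spec cameras: codes map one-to-one across modules.
    DAC_MIN, DAC_MAX = 0, 1023

    def choose_defocus_dac(subject_near: bool) -> int:
        """Pick the lens position that leaves the shooting subject defocused.

        Near subject -> outermost position (far-focus defocus, strongest
        background/spot blur); far subject -> innermost position
        (near-focus defocus, strongest foreground blur).
        """
        return DAC_MAX if subject_near else DAC_MIN

    def drive_lens(camera: str, dac_code: int) -> None:
        """Placeholder: write the DAC code to the camera's VCM driver (I2C)."""
        ...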
209. The electronic device calculates a defocus coefficient from the first image and the second image.
210. According to the defocus coefficient, the electronic device adjusts the scale of the second image to obtain a scale-adjusted second image.
For example, 209 and 210 may include:
after capturing the second image of the subject out of focus, the electronic device may calculate an out-of-focus coefficient from the first image and the second image.
It should be noted that, in this embodiment, the defocus coefficient may refer to the magnification (deformation) of an object in the second image relative to the same object in the first image. Since the shooting subject in the second image is out of focus, objects in the second image are blurred, and the blur may deform, i.e., enlarge, them. For example, if a light spot formed by a defocused light has a diameter of 15 pixels in the first image and 30 pixels in the second image, the objects in the second image are magnified by a factor of 2 relative to the first image.
To keep the first image and the second image at a consistent scale and thereby facilitate image fusion, this embodiment may adjust the scale of the second image according to the calculated defocus coefficient to obtain a scale-adjusted second image. For example, if objects in the second image are enlarged by a factor of 2 relative to the first image, the second image needs to be reduced to half its original size before image fusion.
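A minimal sketch of steps 209 and 210, following the simplified model above in which the defocus coefficient is read off from the diameters of a matched light spot; the function name is an assumption, and re-aligning or cropping the resized image against the first image is left to the fusion step.

    import cv2
    import numpy as np

    def rescale_second_image(second: np.ndarray,
                             spot_diameter_first: float,
                             spot_diameter_second: float) -> np.ndarray:
        """Undo the defocus magnification of the second image.

        The defocus coefficient k is the magnification of a matched
        reference object between the two images, e.g. a light spot of
        30 px in the second image vs 15 px in the first gives k = 2,
        so the second image is reduced to 1/2 of its original size.
        """
        k = spot_diameter_second / spot_diameter_first  # defocus coefficient
        h, w = second.shape[:2]
        new_size = (round(w / k), round(h / k))  # cv2 wants (width, height)
        return cv2.resize(second, new_size, interpolation=cv2.INTER_AREA)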
211. The electronic device performs image fusion processing on the subject image and the scale-adjusted second image to obtain a target image, where the shooting subject in the target image is imaged sharply and the areas other than the shooting subject are blurred.
For example, after obtaining the scale-adjusted second image, the electronic device may perform image fusion processing on the subject image and the scale-adjusted second image to obtain the target image. The shooting subject in the target image is imaged sharply, and the areas other than the shooting subject are blurred.
It can be understood that the embodiment can generate real and natural light spots and fuse the real light spots into the target image, so that the light spots in the target image are also real and natural, thereby improving the imaging quality of the image.
It is understood that since the subject image used for the fusion is a sharp image of the subject, the subject in the target image is also imaged sharply. Further, since the subject is out of focus in the second image and the other region outside the subject is blurred, the other region except the subject in the target image is blurred.
212. The electronic equipment saves the target image as a video frame to obtain a target video.
For example, after obtaining the target image, the electronic device may save the target image as a frame of video frame image. It can be understood that a video segment can be generated by storing a plurality of different video frames, so as to obtain the target video. Each frame of image in the target video may be an image with a real blurring effect obtained by processing with the image processing method provided by the embodiment of the application, so that the image of the target video has the real blurring effect.
It is to be understood that, in the embodiment of the present application, since the second image used for the fusion is an image in which the subject is out of focus, the blurring in the second image is real blurring, not blurring generated by simulation. Therefore, compared with a scheme of directly simulating and generating a blurring effect on an original image in the related art, the image processing method provided by the embodiment of the application can obtain an image with a real blurring effect, and a video image generated by using the image with the real blurring effect also has the real blurring effect, that is, the blurring effect and the imaging quality of the video image are improved.
In another embodiment, the electronic device may also detect whether light exists in the shooting scene, and if light exists in the shooting scene, the image processing method provided by the present application may be used to obtain an image with real light spots. For example, the electronic device may detect whether there is a light in a shooting scene by means of scene recognition. Such a way of scene recognition may be, for example, scene recognition implemented based on artificial intelligence techniques.
Besides detecting whether light exists in the shooting scene in a scene recognition mode, the embodiment can also detect whether light exists in the shooting scene in the following mode, and when the light exists in the shooting scene, an image with real light spots is obtained by using the image processing method provided by the embodiment: for example, the electronic device may first acquire a first image with a sharp image of a subject by using a first camera. Then, the electronic device may obtain brightness distribution information of the first image, and detect whether the number of pixels in the first image having brightness values greater than a preset brightness threshold is greater than a preset value according to the brightness distribution information of the first image. If it is detected that the number of the pixel points of which the brightness values are greater than the preset brightness threshold value in the first image is greater than the preset value, it can be considered that an overexposure area exists in the first image, and the overexposure area is likely to be formed by a lamp or light. In this case, it can be considered that there is light in the shooting scene, and the image processing method provided by the present embodiment is required to obtain the target image with real light spots.
In one embodiment, the luminance distribution information of the first image may be a luminance histogram of the first image.
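A sketch of the histogram-based light detection described above; brightness_threshold and min_pixels stand in for the "preset brightness threshold" and "preset value", and the concrete numbers are illustrative placeholders.

    import cv2
    import numpy as np

    def scene_has_light(first_bgr: np.ndarray,
                        brightness_threshold: int = 240,
                        min_pixels: int = 500) -> bool:
        """Detect lights via the luminance histogram of the first image.

        Counts pixels brighter than the preset brightness threshold; if
        more than the preset value are found, an overexposed area (likely
        a lamp or light) is assumed to exist in the shooting scene.
        """
        gray = cv2.cvtColor(first_bgr, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
        bright_pixels = int(hist[brightness_threshold:].sum())
        return bright_pixels > min_pixels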
In another embodiment, since the electronic device has at least two cameras, the electronic device may further obtain depth information of the first image by using the first camera and the second camera, and perform image segmentation on the first image according to the depth information, so as to segment the image of the subject:
the electronic device determines, as a first position, the position of the lens of the first camera when the first camera captures the first image;
according to the mapping relation between the lens position of the first camera and the lens position of the second camera, the electronic equipment drives the lens of the second camera to a sixth position and shoots a third image, wherein in the mapping relation between the lens position of the first camera and the lens position of the second camera, the first position where the lens of the first camera is located corresponds to the sixth position where the lens of the second camera is located;
the electronic equipment acquires the parallax information of the first image and the third image and calculates the depth information of the first image according to the parallax information.
For example, after capturing the first image, the electronic device may acquire a third image by using the second camera, acquire parallax information of the first image and the third image, and calculate depth information of the first image according to the parallax information.
For example, the electronic device may acquire a position of a first camera when the first camera captures a first image, and determine the position as a first position. Then, the electronic device may drive the lens of the second camera to a sixth position according to a preset mapping relationship between the lens position of the first camera and the lens position of the second camera, and capture a third image when the lens of the second camera is located at the sixth position. In the mapping relation between the lens position of the first camera and the lens position of the second camera, the first position where the lens of the first camera is located corresponds to the sixth position where the lens of the second camera is located.
For example, the first camera and the second camera are cameras of the same specification. When the lens of the first camera is located at the first position, the electronic device may obtain a first digital-to-analog conversion code value (DAC code) corresponding to the first camera at this time. Then, the electronic device may drive the lens of the second camera to move to the sixth position according to the first digital-to-analog conversion code value. It can be understood that, since the DAC code value corresponding to the sixth position is the same as the DAC code value corresponding to the first position, the sixth position where the lens of the second camera is located corresponds to the first position where the lens of the first camera is located.
After the third image is acquired, the electronic device may acquire parallax information of the first image and the third image, and calculate depth information of the first image according to the parallax information.
After the depth information of the first image is obtained through calculation, the electronic device can segment the main body image from the first image according to the depth information.
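A sketch of the disparity-to-depth computation, assuming OpenCV's block matcher as the disparity estimator (the application does not mandate a particular one) and treating the first camera as the left view; the focal length and baseline are calibration inputs.

    import cv2
    import numpy as np

    def depth_from_disparity(first_gray: np.ndarray, third_gray: np.ndarray,
                             focal_px: float, baseline_m: float) -> np.ndarray:
        """Depth map of the first image from its disparity to the third image.

        Block matching is one standard disparity estimator. Depth follows
        the usual stereo relation: depth = focal_length * baseline / disparity.
        """
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        # StereoBM returns fixed-point disparities scaled by 16
        disparity = matcher.compute(first_gray, third_gray).astype(np.float32) / 16.0
        disparity[disparity <= 0] = np.nan  # mask invalid matches
        return focal_px * baseline_m / disparity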
In addition to calculating the depth information of the first image by using the parallax information of the images captured by the first camera and the second camera, in other embodiments, the second camera of the electronic device may also be a depth sensing camera, for example, the second camera may be a TOF camera or a 3D structured light camera. Then, the electronic device may obtain the depth information of the first image according to the second camera, and further segment the first image according to the depth information of the first image, so as to obtain a subject image of the subject.
It should be noted that a TOF (Time of Flight) camera mainly consists of an infrared light projector and a receiving module. The projector emits infrared light outward; the light is reflected when it meets the measured object and is then received by the receiving module. By recording the time from emission to reception of the infrared light, the depth information of the illuminated object can be calculated and 3D modeling completed. That is, the depth information of the photographed object can be acquired by the TOF camera.
The basic principle of 3D structured light is that light with certain structural characteristics is projected onto the photographed object by a near-infrared laser and then collected by a dedicated infrared camera. Because regions of the photographed object lie at different depths, the structured light picks up different image phase information, and an arithmetic unit converts this change of structure into depth information, yielding the three-dimensional structure. In short, the three-dimensional structure of the photographed object is acquired by optical means, and the acquired information is then put to further use. That is, a 3D structured light camera projects a lattice of light spots onto the object with a dot-matrix projector, captures a three-dimensional light image of the object with an infrared camera, and calculates the depth information of the object via a processing system.
In the present application, calculating the depth information of the first image from the disparity information of the images captured by the first camera and the second camera has low hardware cost and simple computation, and the depth information can be calculated quickly, which speeds up segmenting the subject image from the first image. If the second camera in the electronic device is a TOF camera, its long recognition distance allows the depth information of the first image to be calculated accurately even when the shooting subject is far from the camera, improving the precision of segmenting the subject image from the first image. If the second camera is a 3D structured light camera, its high recognition precision improves the accuracy of the calculated depth information and thus the precision of segmenting the subject image from the first image.
Referring to fig. 3, fig. 3 is a third flowchart illustrating an image processing method according to an embodiment of the present disclosure. The image processing method can be applied to electronic equipment, and the electronic equipment at least comprises a first camera and a second camera. The flow of the image processing method may include:
301. When the electronic device detects a plurality of face images, it prompts, on the video call side where it is located, whether to enter a privacy video call mode, where the privacy video call mode indicates that the video call side where the electronic device is located provides a sharply imaged video call only for the current user of the electronic device.
For example, the image processing method provided by the embodiment can be applied to a video call scene. When a video call is performed, at the video call side where the electronic device is located, when the electronic device detects a plurality of face images, that is, a plurality of face images exist in a video call picture of the electronic device, it can be considered that a plurality of people exist in a current scene. At this time, the electronic device may prompt the user whether to enter the private video call mode. The privacy video call mode represents that only video calls with clear images are provided for the current user using the electronic equipment on the video call side where the electronic equipment is located.
If the user chooses not to enter the privacy video call mode, the electronic device may perform other operations.
If the user chooses to enter the privacy video call mode, the flow proceeds to 302.
302. Upon receiving an instruction to enter the privacy video call mode, the electronic device obtains the identity authentication information of the current user of the electronic device.
For example, if the user selects to enter the privacy video call mode, the electronic device may obtain the authentication information of the current user using the electronic device when receiving an instruction sent by the user to instruct to enter the privacy video call mode.
In an embodiment, the process of obtaining the identity authentication information of the current user upon receiving the instruction to enter the privacy video call mode may include:
when a voice instruction for indicating entering a privacy video call mode is received, the electronic equipment acquires the identity authentication information of the current user, wherein the identity authentication information is the voiceprint information of the current user.
For example, since the video call is being performed, the user may send a voice instruction for instructing to enter the private video call mode in a voice control manner. When receiving the voice instruction, the electronic device may extract voiceprint information of the current user from the voice and determine the voiceprint information as the authentication information of the current user.
It should be noted that the voiceprint information refers to voiceprint characteristics, and the voiceprint characteristics of each person are different, so that the voiceprint information can be used as the authentication information.
It can be understood that, in this embodiment, the electronic device may extract voiceprint information of the current user as the authentication information in the video call scene, and this way may enable the user not to additionally input other authentication information, and the operation mode is simple and effective, and may improve user experience.
In other embodiments, the authentication information may also be fingerprint information or password information of the user, etc. which may be used to identify the user.
303. And according to the preset mapping relation between the identity authentication information and the face image, the electronic equipment acquires a target face image corresponding to the identity authentication information of the current user.
304. The electronic equipment determines a user corresponding to the target face image in the shooting scene as a shooting subject for video call.
For example, 303 and 304 may include:
after the identity authentication information of the current user is obtained, the electronic device can obtain a target face image corresponding to the identity authentication information of the current user according to a preset mapping relation between the identity authentication information and the face image, and determines a user corresponding to the target face image in a shooting scene as a shooting subject for video call.
Therefore, in the embodiment of the application, the electronic device can automatically recognize the shooting subject in a video call scene.
305. By using the first camera, the electronic device acquires a first image in which the shooting subject is imaged sharply, where the first image is one frame in the video image data stream acquired by the first camera for the video call.
For example, after determining the photographic subject, the electronic device may acquire a first image with clear imaging of the photographic subject by using the first camera. The first image is a frame image in a video image data stream for video call acquired by the first camera.
306. The electronic device performs image segmentation on the first image to obtain a subject image, where the subject image is the image area corresponding to the shooting subject in the first image.
For example, after a first image with a sharp image of a subject is captured, the electronic device may perform image segmentation on the first image to segment a subject image from the first image. The subject image is an image of an image area corresponding to the subject in the first image.
307. Using the second camera, the electronic device obtains a second image in which the subject is out of focus, the second image being one frame of image in a video image data stream for a video call obtained by the second camera.
For example, the electronic device may also take a second image with a second camera, where the subject is out of focus. That is, the subject is blurred in the second image. And the second image is a frame image in the video image data stream for the video call acquired by the second camera.
308. The electronic device performs image fusion processing on the subject image and the second image to obtain a target image.
For example, after the subject image is segmented from the first image and a second image of the subject out of focus is captured by the second camera, the electronic device may fuse the subject image and the second image to obtain the target image.
It is understood that since the subject image used for the fusion is a sharp image of the subject, the subject in the target image is also imaged sharply. Moreover, since the other regions outside the subject are also blurred due to the out-of-focus of the subject in the second image, the other regions except the subject in the target image are blurred, and in particular, other persons (i.e., non-subject) in the scene are blurred.
In some embodiments, the process of the electronic device performing fusion processing on the subject image and the second image to obtain the target image may include:
the electronic equipment determines an area corresponding to the shooting subject in the second image as a target area, and the image of the target area is matched with the subject image;
the electronic equipment replaces the image of the target area in the second image with the main image, and determines the second image after completing the image replacement as the target image.
For example, the electronic device may first determine a region corresponding to the subject from the second image, and determine the region as a target region whose image matches the subject image divided from the first image. For example, the electronic device may perform image alignment (matching) of the subject image and the second image, and after the image alignment, the electronic device may determine a region in the second image corresponding to the subject image as the target region.
After the target area is determined, the electronic device may directly replace the image of the target area in the second image with the subject image, and determine the second image after the image replacement as the target image, thereby completing the fusion processing of the second image and the subject image.
It can be understood that fusing the second image and the subject image by the direct replacement method above gives the subject the best sharpness in the final target image, because the subject portion of the target image is exactly the sharply imaged subject from the first image.
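A minimal sketch of the direct replacement fusion above, assuming the subject image and the second image are already aligned and the target area is given as a binary mask; the function name is an assumption.

    import numpy as np

    def fuse_by_replacement(subject_img: np.ndarray,
                            subject_mask: np.ndarray,
                            second: np.ndarray) -> np.ndarray:
        """Directly replace the target area of the second image.

        subject_img and second must share the same geometry (already
        aligned); subject_mask is a 0/255 mask of the target area.
        """
        target = second.copy()
        region = subject_mask > 0
        target[region] = subject_img[region]  # paste the sharp subject
        return target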
Alternatively, in another embodiment, the process of the electronic device performing fusion processing on the subject image and the second image to obtain the target image may include:
the electronic equipment determines an area corresponding to the shooting subject in the second image as a target area, and the image of the target area is matched with the subject image;
and the electronic equipment performs image fusion processing on the image of the target area in the second image and the main body image, and determines the fused image as the target image.
For example, the electronic device may first determine a region corresponding to the subject from the second image, and determine the region as a target region whose image matches the subject image divided from the first image. For example, the electronic device may perform image alignment (matching) of the subject image and the second image, and after the image alignment, the electronic device may determine a region in the second image corresponding to the subject image as the target region.
After the target area is determined, the electronic device may perform image pixel fusion processing on the image of the target area in the second image and the subject image, and determine the image after the fusion as the target image.
It can be understood that a target image obtained by pixel-level fusion of the target-area image and the subject image blends the subject region smoothly into the second image as a whole.
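A sketch of the pixel-level fusion variant, assuming an aligned pair as before; the feathered alpha matte is one simple way to blend the subject region into the second image, and the feather width is an illustrative choice, not mandated by the application.

    import cv2
    import numpy as np

    def fuse_by_blending(subject_img: np.ndarray,
                         subject_mask: np.ndarray,
                         second: np.ndarray,
                         feather_px: float = 15.0) -> np.ndarray:
        """Pixel-level fusion of the target area with the subject image.

        The binary mask is feathered into a soft alpha matte so the sharp
        subject blends smoothly into the naturally blurred second image.
        """
        alpha = cv2.GaussianBlur(subject_mask.astype(np.float32) / 255.0,
                                 (0, 0), sigmaX=feather_px)[..., None]
        fused = (alpha * subject_img.astype(np.float32)
                 + (1.0 - alpha) * second.astype(np.float32))
        return np.clip(fused, 0, 255).astype(np.uint8)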
309. The electronic equipment saves the target image as a video frame to obtain a target video.
For example, after obtaining the target image, the electronic device may save the target image as a frame of video frame image. It can be understood that a video segment can be generated by storing a plurality of different video frames, so as to obtain the target video. Each frame of image in the target video may be an image with a real blurring effect obtained by processing with the image processing method provided by the embodiment of the application, so that the image of the target video has the real blurring effect.
It can be understood that, in the present embodiment, in a video call scene, the electronic device may automatically identify a shooting subject performing a video call when the user selects the privacy video call mode. Then, the electronic device may generate a video in which the photographic subject is imaged clearly, rather than the photographic subject being blurred, and perform a video call based on such video. In this way, non-photographic subjects in the current scene (i.e., other persons in the current scene) are blurred and blurred in the video, so that the privacy of those non-photographic subjects can be protected.
Referring to fig. 4 to 7, fig. 4 to 7 are schematic scene diagrams of an image processing method according to an embodiment of the present application.
For example, the electronic device includes two cameras, a first camera and a second camera. When the user aims the camera at a scene to be shot and presses the video recording button, the electronic device may capture an image X1 in which the shooting subject is imaged clearly, using the first camera. For example, the image X1 may be as shown in fig. 4, where the shooting subject is the seat of a bicycle. The electronic device may record the position of the lens of the first camera at the moment image X1 is captured as the first position.
Thereafter, the electronic device may perform image segmentation on the image X1 by using a preset image segmentation algorithm, so as to segment a subject image (i.e., a seat of a bicycle) from the image X1.
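The patent does not name a particular segmentation algorithm; as one hedged stand-in, a GrabCut-based sketch, where `subject_rect` (a bounding box around the subject) is a hypothetical input from a prior detection step:

```python
import cv2
import numpy as np

def segment_subject(image: np.ndarray, subject_rect: tuple) -> np.ndarray:
    """Return a binary subject mask for the region in `subject_rect`
    (x, y, w, h). GrabCut is only an illustrative stand-in for whatever
    preset segmentation algorithm the device actually uses."""
    mask = np.zeros(image.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, subject_rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return fg.astype(np.uint8)
```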
Thereafter, the electronic device may detect the distance of the shooting subject from the first camera; that is, it may detect whether the subject is near or far. In this example, the electronic device detects that the shooting subject (the seat of the bicycle) is near.
In this case, the electronic device may select a corresponding lens position from the plurality of lens positions as the second position of the lens of the first camera. The distance between the lens and the image sensor at the second position is greater than at the first position; that is, relative to the first position, the lens at the second position is farther from the image sensor, i.e., extended forward. In this example, when the lens of the first camera is at the second position its distance from the image sensor is greater than at any other position, so the second position is the lens position farthest from the image sensor.
Then, the electronic device may drive the lens of the second camera to a third position according to a mapping relationship between the lens position of the first camera and the lens position of the second camera, where the second position where the lens of the first camera is located corresponds to the third position where the lens of the second camera is located in the mapping relationship between the lens position of the first camera and the lens position of the second camera. For example, when the specifications of the first camera and the second camera are the same, the DAC code value corresponding to the third position is the same as the DAC code value corresponding to the second position.
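A sketch of the lens-position mapping; the calibration table below is hypothetical, and when the two cameras share the same specification the mapping reduces to the identity on DAC codes, as the text notes:

```python
# Hypothetical calibration: DAC code of camera 1 -> DAC code of camera 2.
LENS_POSITION_MAP = {0: 0, 256: 260, 512: 518, 768: 775, 1023: 1023}

def map_lens_position(dac_code_cam1: int) -> int:
    """Return the second camera's DAC code for a first-camera lens position,
    linearly interpolating between calibrated codes. Assumes the input lies
    within the calibrated range."""
    if dac_code_cam1 in LENS_POSITION_MAP:
        return LENS_POSITION_MAP[dac_code_cam1]
    keys = sorted(LENS_POSITION_MAP)
    lo = max(k for k in keys if k <= dac_code_cam1)
    hi = min(k for k in keys if k >= dac_code_cam1)
    t = (dac_code_cam1 - lo) / (hi - lo)
    return round((1 - t) * LENS_POSITION_MAP[lo] + t * LENS_POSITION_MAP[hi])
```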
When the lens of the second camera has been moved to the third position, the electronic device may capture an image Y1 with the second camera. It is understood that in the image Y1 the shooting subject is out of focus. For example, the image Y1 is shown in fig. 5. From fig. 5 it can be seen that the image of the bicycle seat is blurred, as is the image of the bicycle's surroundings.
After capturing the image Y1 in which the shooting subject is out of focus, the electronic device may calculate a defocus coefficient from image X1 and image Y1. The electronic device may then adjust the scale of image Y1 according to the calculated defocus coefficient, obtaining a scale-adjusted image Y1.
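The patent does not give the defocus-coefficient formula. One plausible reading, offered purely as an assumption, is that moving the lens slightly changes the effective magnification (focus breathing), so the coefficient acts as a scale ratio used to resize Y1 back onto X1's geometry:

```python
import cv2
import numpy as np

def rescale_out_of_focus_image(img_x1: np.ndarray, img_y1: np.ndarray,
                               defocus_coeff: float) -> np.ndarray:
    """Resize the out-of-focus image Y1 by the estimated defocus (scale)
    coefficient so it overlays the sharp image X1. How the coefficient is
    estimated (e.g. from matched features) is an assumption here; the
    padding case (coefficient < 1) is omitted for brevity."""
    h, w = img_y1.shape[:2]
    scaled = cv2.resize(img_y1, (round(w * defocus_coeff),
                                 round(h * defocus_coeff)))
    H, W = img_x1.shape[:2]
    y0 = max((scaled.shape[0] - H) // 2, 0)
    x0 = max((scaled.shape[1] - W) // 2, 0)
    return scaled[y0:y0 + H, x0:x0 + W]  # center-crop back to X1's size
```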
Thereafter, the electronic device may perform image fusion processing on the subject image and the scale-adjusted image Y1, thereby obtaining a target image Z1. In the target image Z1 the shooting subject is imaged clearly while the areas other than the subject are blurred. For example, the target image Z1 is shown in fig. 6, from which it can be seen that the bicycle seat, i.e., the shooting subject, is imaged clearly and the areas other than the seat are blurred.
Assuming that the user does not move the electronic device within a certain period of time, i.e. the camera is not moved, the electronic device may capture an image data stream at a preset frame rate when the lens of the first camera is located at the first position, for example, as shown in fig. 7, the images in the image data stream are X2, X3, X4, and the like, respectively. Meanwhile, the electronic device may capture an image data stream at a preset frame rate when the lens of the second camera is located at the third position, for example, images in the image data stream are Y2, Y3, Y4, and the like, respectively. Among them, X2 and Y2 can be regarded as images obtained by synchronized photographing, X3 and Y3 can be regarded as images obtained by synchronized photographing, and X4 and Y4 can be regarded as images obtained by synchronized photographing.
After the first camera captures the image X2, the electronic device may segment the subject image from X2 and fuse it with the image Y2 captured by the second camera, obtaining, for example, an image Z2. The electronic device may likewise save the image Z2 as one video frame.
Similarly, after the first camera captures the image X3, the electronic device may segment the subject image from X3 and fuse it with the image Y3 captured by the second camera, obtaining an image Z3, which is likewise saved as one video frame.
After the first camera captures the image X4, the electronic device may segment the subject image from X4 and fuse it with the image Y4 captured by the second camera, obtaining an image Z4, which is likewise saved as one video frame.
The electronic device can generate a piece of video from the saved video frame images Z1, Z2, Z3, Z4 and so on; this is the video the user shot during that period of time. Since each frame of the video has a real blurring effect, the video as a whole also has a real blurring effect.
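Putting the per-frame steps together, a hedged outline of the recording loop; `segment_subject_mask` stands in for the segmentation step, and the other helpers are the sketches introduced earlier:

```python
def record_target_video(cam1_stream, cam2_stream, writer, defocus_coeff):
    """For each synchronized pair (Xi, Yi): segment the subject from the
    sharp frame, rescale the defocused frame, fuse the two, and write the
    result as one frame of the target video."""
    for frame_x, frame_y in zip(cam1_stream, cam2_stream):
        mask = segment_subject_mask(frame_x)  # placeholder for segmentation
        frame_y = rescale_out_of_focus_image(frame_x, frame_y, defocus_coeff)
        target = fuse_with_feathering(frame_y, frame_x, mask)
        writer.write(target)
```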
It is understood that, in the embodiments of the present application, the electronic device can provide images with real blurring and, in particular, with light spots produced by real optical defocus. Compared with the related-art technique of simulating light spots with an algorithm, the light spots in this embodiment are collected directly by the second camera while the shooting subject is out of focus, so they are natural and real. That is, the light spots in the video generated by this embodiment are true and natural.
In addition, generating light spots by algorithmic simulation requires a great deal of time and computing power for rendering. By collecting the light spots directly with the second camera, this embodiment saves the rendering time that simulated light spots would require.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. The image processing apparatus can be applied to an electronic device including at least a first camera and a second camera. The image processing apparatus 300 may include: a first acquisition module 301, an image segmentation module 302, a second acquisition module 303, an image fusion module 304, and a saving module 305.
The first obtaining module 301 is configured to obtain a first image with clear imaging of a shooting subject by using the first camera.
An image segmentation module 302, configured to perform image segmentation on the first image to obtain a subject image, where the subject image is an image area corresponding to the shooting subject in the first image.
A second obtaining module 303, configured to obtain a second image by using the second camera, where the shooting subject is out of focus in the second image.
And an image fusion module 304, configured to perform image fusion processing on the main image and the second image to obtain a target image.
A saving module 305, configured to save the target image as a video frame to obtain a target video.
In one embodiment, the application scene of the image processing method is a video call scene; the first image is a frame of image in a video image data stream for the video call acquired by the first camera, and the second image is a frame of image in a video image data stream for the video call acquired by the second camera.
The first obtaining module 301 may further be configured to: at a video call side where the electronic equipment is located, when the electronic equipment detects a plurality of face images, acquiring identity authentication information of a current user using the electronic equipment; acquiring a target face image corresponding to the identity authentication information of the current user according to a preset mapping relation between the identity authentication information and the face image; and determining the user corresponding to the target face image in the shooting scene as a shooting subject for carrying out video call.
In one embodiment, the first obtaining module 301 may be configured to: when the electronic equipment detects a plurality of face images, prompting whether to enter a privacy video call mode, wherein the privacy video call mode indicates that only video calls with clear images are provided for a current user using the electronic equipment on a video call side where the electronic equipment is located; and when an instruction for indicating to enter a privacy video call mode is received, acquiring the authentication information of the current user.
In one embodiment, the first obtaining module 301 may be configured to: and when a voice instruction for indicating to enter a privacy video call mode is received, acquiring the identity authentication information of the current user, wherein the identity authentication information is the voiceprint information of the current user.
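A hedged sketch of the privacy-mode subject selection: a registry mapping authentication info (e.g. a voiceprint ID) to a stored face embedding is assumed, and the detected face closest to the authenticated user's embedding becomes the shooting subject; the registry, the embeddings, and the distance threshold are all illustrative:

```python
import numpy as np

# Hypothetical registry: authentication ID -> registered face embedding.
FACE_REGISTRY = {}  # e.g. {"voiceprint_user_01": 128-d np.ndarray}

def pick_shooting_subject(detected_faces, auth_id, threshold=0.6):
    """`detected_faces` is a list of (face_box, embedding) pairs from the
    preview frame; return the box best matching the authenticated user,
    or None when no face is close enough."""
    registered = FACE_REGISTRY[auth_id]
    best_box, best_dist = None, float("inf")
    for box, emb in detected_faces:
        dist = float(np.linalg.norm(emb - registered))
        if dist < best_dist:
            best_box, best_dist = box, dist
    return best_box if best_dist < threshold else None
```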
In one embodiment, a light source is present in the shooting scene corresponding to the first image and the second image.
In one embodiment, the second obtaining module 303 may be configured to:
determining the position of a lens of the first camera as a first position when the first camera shoots the first image;
detecting the distance between the shooting subject and the first camera;
when the distance between the shooting subject and the first camera is smaller than a preset threshold value, selecting a corresponding lens position from a plurality of lens positions according to a preset first strategy as a second position of the lens of the first camera, wherein the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the second position is larger than the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the first position;
and driving the lens of the second camera to a third position according to the mapping relation between the lens position of the first camera and the lens position of the second camera, and shooting a second image, wherein in the mapping relation between the lens position of the first camera and the lens position of the second camera, the second position where the lens of the first camera is located corresponds to the third position where the lens of the second camera is located.
In an embodiment, the second obtaining module 303 may be further configured to:
when the distance between the shooting subject and the first camera is larger than or equal to a preset threshold value, selecting a corresponding lens position from a plurality of lens positions according to a preset second strategy as a fourth position of the lens of the first camera, wherein the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the fourth position is smaller than the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the first position;
and driving the lens of the second camera to a fifth position according to a mapping relation between the lens position of the first camera and the lens position of the second camera, and shooting a second image, wherein in the mapping relation between the lens position of the first camera and the lens position of the second camera, the fourth position where the lens of the first camera is located corresponds to the fifth position where the lens of the second camera is located.
In one embodiment, the distance from the image sensor when the lens of the first camera is at the second position is greater than the distance from the image sensor when the lens of the first camera is at any other position.
In one embodiment, the distance from the image sensor when the lens of the first camera is at the fourth position is less than the distance from the image sensor when the lens of the first camera is at any other position.
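A sketch of the near/far defocus policy described by these two strategies; the DAC code extremes and the distance threshold are illustrative values, not taken from the patent:

```python
MACRO_CODE = 1023   # hypothetical DAC code: lens farthest from the sensor
INFINITY_CODE = 0   # hypothetical DAC code: lens closest to the sensor

def choose_defocus_position(subject_distance_m: float,
                            threshold_m: float = 1.0) -> int:
    """Pick the first camera's defocus lens position: push the lens out
    (toward macro) for a near subject, pull it in for a far one."""
    if subject_distance_m < threshold_m:
        return MACRO_CODE      # "second position": farthest from the sensor
    return INFINITY_CODE       # "fourth position": closest to the sensor

# The second camera would then be driven to the mapped position, e.g.:
# drive_lens(camera2, map_lens_position(choose_defocus_position(d)))
```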
In one embodiment, the image fusion module 304 may be further configured to:
calculating an out-of-focus coefficient from the first image and the second image;
according to the defocus coefficient, adjusting the proportion of the second image to obtain a second image with the adjusted proportion;
and carrying out image fusion processing on the main image and the second image after the proportion adjustment to obtain a target image.
An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed on a computer, it causes the computer to execute the flow in the image processing method provided by this embodiment.
The embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is configured to execute the flow in the image processing method provided in this embodiment by calling the computer program stored in the memory.
For example, the electronic device may be a mobile terminal such as a tablet computer or a smartphone. Referring to fig. 9, fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
The electronic device 400 may include a camera module 401, a memory 402, a processor 403, and the like. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 9 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The camera module 401 may include at least a first camera and a second camera.
The memory 402 may be used to store applications and data. The memory 402 stores applications containing executable code. The application programs may constitute various functional modules. The processor 403 executes various functional applications and data processing by running an application program stored in the memory 402.
The processor 403 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing an application program stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device.
In this embodiment, the processor 403 in the electronic device loads the executable code corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 403 runs the application programs stored in the memory 402, so as to execute:
acquiring a first image with clear imaging of a shooting subject by using the first camera;
performing image segmentation on the first image to obtain a main image, wherein the main image is an image area corresponding to the shooting subject in the first image;
acquiring a second image by using the second camera, wherein the shooting subject in the second image is out of focus;
carrying out image fusion processing on the main image and the second image to obtain a target image;
and storing the target image as a video frame to obtain a target video.
In other embodiments, the electronic device may have components other than a camera module, memory, and processor, such as a touch screen display, speakers, a microphone, and a battery.
Touch screens, among other things, can be used to display information such as images, text, and the like. Also, the touch display screen may also have a function of an input-output unit. For example, a touch screen display may be used to receive input numbers, character information, or user characteristic information (e.g., a fingerprint), and to generate optical or trackball signal inputs related to user settings and function control. Touch screens may also be used to display information entered by or provided to a user as well as various graphical user interfaces of an electronic device, which may be composed of graphics, text, icons, video, and any combination thereof.
The speaker may be used to play sound signals. The microphone may then pick up sound signals from the surroundings. The battery may provide power to various components of the overall electronic device.
An embodiment of the present application further provides an electronic device. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an image signal processing (ISP) pipeline. The image processing circuit may include at least: a camera, an image signal processor (ISP processor), control logic, an image memory, and a display. The camera may comprise one or more lenses and an image sensor.
The image sensor may include a color filter array (e.g., a Bayer filter). The image sensor may acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the image signal processor.
The image signal processor may process the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the image signal processor may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision. The raw image data can be stored in an image memory after being processed by an image signal processor. The image signal processor may also receive image data from an image memory.
The image Memory may be part of a Memory device, a storage device, or a separate dedicated Memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
When image data is received from the image memory, the image signal processor may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to an image memory for additional processing before being displayed. The image signal processor may also receive processed data from the image memory and perform image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the image signal processor may also be sent to an image memory, and the display may read image data from the image memory. In one embodiment, the image memory may be configured to implement one or more frame buffers.
The statistical data determined by the image signal processor may be sent to the control logic. For example, the statistical data may include statistical information of the image sensor such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, lens shading correction, and the like.
The control logic may include a processor and/or microcontroller that executes one or more routines (e.g., firmware). One or more routines may determine camera control parameters and ISP control parameters based on the received statistics. For example, the control parameters of the camera may include camera flash control parameters, control parameters of the lens (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), etc.
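As a rough illustration of that statistics-driven control loop; the statistics fields and the update rules are hypothetical placeholders, not a real 3A implementation:

```python
def update_controls(stats: dict) -> dict:
    """Derive simple camera/ISP parameters from ISP statistics, in the
    spirit of the control logic described above."""
    controls = {}
    # Auto exposure: nudge gain toward a mid-gray target luminance.
    controls["gain"] = max(1.0, 118.0 / max(stats["mean_luma"], 1e-3))
    # Auto white balance: per-channel gains that equalize channel means.
    g = stats["mean_g"]
    controls["awb_gains"] = (g / stats["mean_r"], 1.0, g / stats["mean_b"])
    # Auto focus: drive the lens toward the DAC code with best contrast.
    controls["lens_dac"] = stats["best_focus_code"]
    return controls
```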
Referring to fig. 10, fig. 10 is a schematic structural diagram of the image processing circuit in the present embodiment. As shown in fig. 10, for convenience of explanation only the aspects of the image processing technique related to the embodiments of the present application are shown.
The image processing circuit may include: a first camera 510, a second camera 520, a first image signal processor 530, a second image signal processor 540, a control logic 550, an image memory 560, and a display 570. Among other things, the first camera 510 may include one or more first lenses 511 and a first image sensor 512. The second camera 520 may include one or more second lenses 521 and a second image sensor 522.
The first image collected by the first camera 510 is transmitted to the first image signal processor 530 for processing. After the first image signal processor 530 processes the first image, statistical data of the first image (e.g., image brightness, image contrast, image color, etc.) may be sent to the control logic 550. The control logic 550 may determine control parameters of the first camera 510 from the statistical data, so that the first camera 510 can perform operations such as auto-focus and auto-exposure according to those parameters. The first image may be stored in the image memory 560 after being processed by the first image signal processor 530, and the first image signal processor 530 may also read an image stored in the image memory 560 for processing. In addition, after being processed by the first image signal processor 530, the first image may be sent directly to the display 570 for display. The display 570 may also read images from the image memory 560 for display.
The second image collected by the second camera 520 is transmitted to the second image signal processor 540 for processing. After the second image signal processor 540 processes the second image, statistical data of the second image (e.g., image brightness, image contrast, image color, etc.) may be sent to the control logic 550. The control logic 550 may determine control parameters of the second camera 520 from the statistical data, so that the second camera 520 can perform operations such as auto-focus and auto-exposure according to those parameters. The second image may be stored in the image memory 560 after being processed by the second image signal processor 540, and the second image signal processor 540 may also read an image stored in the image memory 560 for processing. In addition, after being processed by the second image signal processor 540, the second image may be sent directly to the display 570 for display. The display 570 may also read images from the image memory 560 for display.
In other embodiments, the first image signal processor and the second image signal processor may be combined into a unified image signal processor that processes the data of both the first image sensor and the second image sensor.
In addition, although not shown in the figure, the electronic device may further include a CPU and a power supply module. The CPU is connected with the control logic, the first image signal processor, the second image signal processor, the image memory and the display, and implements global control. The power supply module supplies power to each of these modules.
Generally, in some shooting modes, both cameras of a dual-camera mobile phone work. In that case the CPU controls the power supply module to supply power to the first camera and the second camera, so that the image sensors in both cameras are powered up and image acquisition and conversion can be carried out. In other shooting modes, only one camera of the dual-camera module may work; for example, only the telephoto camera works. In such a case, the CPU may control the power supply module to supply power only to the image sensor of the corresponding camera.
The following is a flow for implementing the image processing method provided by this embodiment by using the image processing technique in fig. 10:
acquiring a first image with clear imaging of a shooting subject by using the first camera;
performing image segmentation on the first image to obtain a main image, wherein the main image is an image area corresponding to the shooting subject in the first image;
acquiring a second image by using the second camera, wherein the shooting subject in the second image is out of focus;
carrying out image fusion processing on the main image and the second image to obtain a target image;
and storing the target image as a video frame to obtain a target video.
In one embodiment, the application scene of the image processing method is a video call scene; the first image is a frame of image in a video image data stream for the video call acquired by the first camera, and the second image is a frame of image in a video image data stream for the video call acquired by the second camera;
the electronic device may further perform: at a video call side where the electronic equipment is located, when the electronic equipment detects a plurality of face images, acquiring identity authentication information of a current user using the electronic equipment; acquiring a target face image corresponding to the identity authentication information of the current user according to a preset mapping relation between the identity authentication information and the face image; and determining the user corresponding to the target face image in the shooting scene as a shooting subject for carrying out video call.
In one embodiment, when the electronic device performs the acquiring of the authentication information of the current user using the electronic device when the electronic device detects a plurality of face images, the following steps may be performed: when the electronic equipment detects a plurality of face images, prompting whether to enter a privacy video call mode, wherein the privacy video call mode indicates that only video calls with clear images are provided for a current user using the electronic equipment on a video call side where the electronic equipment is located; and when an instruction for indicating to enter a privacy video call mode is received, acquiring the authentication information of the current user.
In one embodiment, when the electronic device executes the obtaining of the authentication information of the current user when receiving the instruction for instructing to enter the private video call mode, the electronic device may execute: and when a voice instruction for indicating to enter a privacy video call mode is received, acquiring the identity authentication information of the current user, wherein the identity authentication information is the voiceprint information of the current user.
In one embodiment, a light source is present in the shooting scene corresponding to the first image and the second image.
In one embodiment, the electronic device may further perform: determining the position of a lens of the first camera as a first position when the first camera shoots the first image;
then, when the electronic device executes the acquiring of the second image by using the second camera, the electronic device may further execute: detecting the distance between the shooting subject and the first camera; when the distance between the shooting subject and the first camera is smaller than a preset threshold value, selecting a corresponding lens position from a plurality of lens positions according to a preset first strategy as a second position of the lens of the first camera, wherein the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the second position is larger than the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the first position; and driving the lens of the second camera to a third position according to the mapping relation between the lens position of the first camera and the lens position of the second camera, and shooting a second image, wherein in the mapping relation between the lens position of the first camera and the lens position of the second camera, the second position where the lens of the first camera is located corresponds to the third position where the lens of the second camera is located.
In an embodiment, when the electronic device executes the acquiring of the second image by using the second camera, the electronic device may further execute: when the distance between the shooting subject and the first camera is larger than or equal to a preset threshold value, selecting a corresponding lens position from a plurality of lens positions according to a preset second strategy as a fourth position of the lens of the first camera, wherein the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the fourth position is smaller than the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the first position; and driving the lens of the second camera to a fifth position according to a mapping relation between the lens position of the first camera and the lens position of the second camera, and shooting a second image, wherein in the mapping relation between the lens position of the first camera and the lens position of the second camera, the fourth position where the lens of the first camera is located corresponds to the fifth position where the lens of the second camera is located.
In one embodiment, the distance from the image sensor when the lens of the first camera is at the second position is greater than the distance from the image sensor when the lens of the first camera is at any other position.
In one embodiment, the distance from the image sensor when the lens of the first camera is at the fourth position is less than the distance from the image sensor when the lens of the first camera is at any other position.
In one embodiment, the electronic device may further perform: calculating an out-of-focus coefficient from the first image and the second image; according to the defocus coefficient, adjusting the proportion of the second image to obtain a second image with the adjusted proportion; the image fusion processing of the main image and the second image to obtain the target image comprises: and carrying out image fusion processing on the main image and the second image after the proportion adjustment to obtain a target image.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the image processing method, and are not described herein again.
The image processing apparatus provided in the embodiment of the present application and the image processing method in the above embodiment belong to the same concept, and any method provided in the embodiment of the image processing method may be run on the image processing apparatus, and a specific implementation process thereof is described in the embodiment of the image processing method in detail, and is not described herein again.
It should be noted that, for the image processing method described in the embodiment of the present application, it can be understood by those skilled in the art that all or part of the process of implementing the image processing method described in the embodiment of the present application can be completed by controlling the relevant hardware through a computer program, where the computer program can be stored in a computer-readable storage medium, such as a memory, and executed by at least one processor, and during the execution, the process of the embodiment of the image processing method can be included. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the image processing apparatus according to the embodiment of the present application, each functional module may be integrated into one processing chip, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, or the like.
The foregoing detailed description has provided an image processing method, an image processing apparatus, a storage medium, and an electronic device according to embodiments of the present application, and specific examples are applied herein to explain the principles and implementations of the present application, and the descriptions of the foregoing embodiments are only used to help understand the method and the core ideas of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (11)

1. An image processing method is applied to electronic equipment, and is characterized in that the electronic equipment at least comprises a first camera and a second camera, and the method comprises the following steps:
acquiring a first image with clear imaging of a shooting subject by using the first camera;
performing image segmentation on the first image to obtain a main image, wherein the main image is an image area corresponding to the shooting subject in the first image;
determining the position of a lens of the first camera as a first position when the first camera shoots the first image;
detecting the distance between the shooting main body and the first camera through the value range of the digital-to-analog conversion code value of the first camera when shooting the first image;
when the distance between the shooting subject and the first camera is smaller than a preset threshold value, selecting a corresponding lens position from a plurality of lens positions according to a preset first strategy as a second position of the lens of the first camera, wherein the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the second position is larger than the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the first position; driving the lens of the second camera to a third position according to a mapping relation between the lens position of the first camera and the lens position of the second camera, and shooting a second image, wherein in the mapping relation between the lens position of the first camera and the lens position of the second camera, the second position where the lens of the first camera is located corresponds to the third position where the lens of the second camera is located, and the shooting subject in the second image is out of focus;
when the distance between the shooting subject and the first camera is larger than or equal to a preset threshold value, selecting a corresponding lens position from a plurality of lens positions according to a preset second strategy as a fourth position of the lens of the first camera, wherein the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the fourth position is smaller than the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the first position; driving the lens of the second camera to a fifth position according to a mapping relation between the lens position of the first camera and the lens position of the second camera, and shooting a second image, wherein in the mapping relation between the lens position of the first camera and the lens position of the second camera, a fourth position where the lens of the first camera is located corresponds to a fifth position where the lens of the second camera is located, and the shooting subject in the second image is out of focus;
carrying out image fusion processing on the main image and the second image to obtain a target image;
and storing the target image as a video frame to obtain a target video.
2. The image processing method according to claim 1, wherein an application scene of the image processing method is a scene of a video call; the first image is a frame of image in a video image data stream for the video call acquired by the first camera, and the second image is a frame of image in a video image data stream for the video call acquired by the second camera;
before the acquiring, by the first camera, a first image in which a subject is clearly imaged, the method further includes:
at a video call side where the electronic equipment is located, when the electronic equipment detects a plurality of face images, acquiring identity authentication information of a current user using the electronic equipment;
acquiring a target face image corresponding to the identity authentication information of the current user according to a preset mapping relation between the identity authentication information and the face image;
and determining the user corresponding to the target face image in the shooting scene as a shooting subject for carrying out video call.
3. The image processing method according to claim 2, wherein the obtaining of the authentication information of the current user using the electronic device when the electronic device detects a plurality of face images comprises:
when the electronic equipment detects a plurality of face images, prompting whether to enter a privacy video call mode, wherein the privacy video call mode indicates that only video calls with clear images are provided for a current user using the electronic equipment on a video call side where the electronic equipment is located;
and when an instruction for indicating to enter a privacy video call mode is received, acquiring the authentication information of the current user.
4. The image processing method according to claim 3, wherein the obtaining of the authentication information of the current user upon receiving an instruction for instructing entry into a private video call mode comprises:
and when a voice instruction for indicating to enter a privacy video call mode is received, acquiring the identity authentication information of the current user, wherein the identity authentication information is the voiceprint information of the current user.
5. The image processing method according to claim 1, wherein a light source is present in the shooting scene corresponding to the first image and the second image.
6. The image processing method according to claim 1, wherein a distance from the image sensor when the lens of the first camera is at the second position is larger than a distance from the image sensor when the lens of the first camera is at any other position.
7. The image processing method according to claim 1, wherein a distance from the image sensor when the lens of the first camera is at the fourth position is smaller than a distance from the image sensor when the lens of the first camera is at any other position.
8. The image processing method according to claim 1, characterized in that the method further comprises:
calculating an out-of-focus coefficient from the first image and the second image;
according to the defocus coefficient, adjusting the proportion of the second image to obtain a second image with the adjusted proportion;
the image fusion processing of the main image and the second image to obtain the target image comprises: and carrying out image fusion processing on the main image and the second image after the proportion adjustment to obtain a target image.
9. An image processing apparatus applied to an electronic device, wherein the electronic device at least comprises a first camera and a second camera, the apparatus comprising:
the first acquisition module is used for acquiring a first image with clear imaging of a shooting subject by using the first camera;
the image segmentation module is used for carrying out image segmentation on the first image to obtain a main image, wherein the main image is an image area corresponding to the shooting subject in the first image;
the second acquisition module is used for determining the position of the lens of the first camera as a first position when the first camera shoots the first image; detecting the distance between the shooting main body and the first camera through the value range of the digital-to-analog conversion code value of the first camera when shooting the first image; when the distance between the shooting subject and the first camera is smaller than a preset threshold value, selecting a corresponding lens position from a plurality of lens positions according to a preset first strategy as a second position of the lens of the first camera, wherein the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the second position is larger than the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the first position; driving the lens of the second camera to a third position according to a mapping relation between the lens position of the first camera and the lens position of the second camera, and shooting a second image, wherein in the mapping relation between the lens position of the first camera and the lens position of the second camera, the second position where the lens of the first camera is located corresponds to the third position where the lens of the second camera is located, and the shooting subject in the second image is out of focus; when the distance between the shooting subject and the first camera is larger than or equal to a preset threshold value, selecting a corresponding lens position from a plurality of lens positions according to a preset second strategy as a fourth position of the lens of the first camera, wherein the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the fourth position is smaller than the distance between the lens of the first camera and the image sensor when the lens of the first camera is at the first position; driving the lens of the second camera to a fifth position according to a mapping relation between the lens position of the first camera and the lens position of the second camera, and shooting a second image, wherein in the mapping relation between the lens position of the first camera and the lens position of the second camera, a fourth position where the lens of the first camera is located corresponds to a fifth position where the lens of the second camera is located, and the shooting subject in the second image is out of focus;
the image fusion module is used for carrying out image fusion processing on the main image and the second image to obtain a target image;
and the storage module is used for storing the target image as a video frame so as to obtain a target video.
10. A computer-readable storage medium, on which a computer program is stored, which, when executed on a computer, causes the computer to carry out the method according to any one of claims 1 to 8.
11. An electronic device comprising a memory, a processor, wherein the processor is configured to perform the method of any one of claims 1 to 8 by invoking a computer program stored in the memory.
CN202010048794.4A 2020-01-16 2020-01-16 Image processing method, image processing device, storage medium and electronic equipment Active CN111246093B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010048794.4A CN111246093B (en) 2020-01-16 2020-01-16 Image processing method, image processing device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111246093A CN111246093A (en) 2020-06-05
CN111246093B true CN111246093B (en) 2021-07-20


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112598594A (en) * 2020-12-24 2021-04-02 Oppo(重庆)智能科技有限公司 Color consistency correction method and related device
CN112954286B (en) * 2021-02-25 2023-06-13 当趣网络科技(杭州)有限公司 Photographing processing method and system based on projector, electronic equipment and medium
CN112991248A (en) * 2021-03-10 2021-06-18 维沃移动通信有限公司 Image processing method and device
CN113038019B (en) * 2021-03-24 2023-04-07 Oppo广东移动通信有限公司 Camera adjusting method and device, electronic equipment and readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103856719A (en) * 2014-03-26 2014-06-11 深圳市金立通信设备有限公司 Photographing method and terminal
CN106791416A (en) * 2016-12-29 2017-05-31 努比亚技术有限公司 A kind of background blurring image pickup method and terminal
CN107203978A (en) * 2017-05-24 2017-09-26 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN107454332A (en) * 2017-08-28 2017-12-08 厦门美图之家科技有限公司 Image processing method, device and electronic equipment
CN107623817A (en) * 2017-09-11 2018-01-23 广东欧珀移动通信有限公司 video background processing method, device and mobile terminal
CN108900763A (en) * 2018-05-30 2018-11-27 Oppo(重庆)智能科技有限公司 Filming apparatus, electronic equipment and image acquiring method
CN109151329A (en) * 2018-11-22 2019-01-04 Oppo广东移动通信有限公司 Photographic method, device, terminal and computer readable storage medium
CN109618173A (en) * 2018-12-17 2019-04-12 深圳Tcl新技术有限公司 Video-frequency compression method, device and computer readable storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201320734A (en) * 2011-11-03 2013-05-16 Altek Corp Image processing method for producing background blurred image and image capturing device thereof
CN107680128B (en) * 2017-10-31 2020-03-27 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107945105B (en) * 2017-11-30 2021-05-25 Oppo广东移动通信有限公司 Background blurring processing method, device and equipment
CN108156368A (en) * 2017-12-05 2018-06-12 深圳市金立通信设备有限公司 A kind of image processing method, terminal and computer readable storage medium
CN110035218B (en) * 2018-01-11 2021-06-15 华为技术有限公司 Image processing method, image processing device and photographing equipment
CN110691192B (en) * 2019-09-03 2021-04-13 RealMe重庆移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant