CN111726526B - Image processing method and device, electronic equipment and storage medium - Google Patents

Image processing method and device, electronic equipment and storage medium

Info

Publication number
CN111726526B
CN111726526B (application CN202010573559.9A)
Authority
CN
China
Prior art keywords
depth
image
depth information
information
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010573559.9A
Other languages
Chinese (zh)
Other versions
CN111726526A (en)
Inventor
朱文波
方攀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010573559.9A priority Critical patent/CN111726526B/en
Publication of CN111726526A publication Critical patent/CN111726526A/en
Application granted granted Critical
Publication of CN111726526B publication Critical patent/CN111726526B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The embodiment of the application discloses an image processing method, which comprises the following steps: acquiring first depth information of an acquired first depth image; based on the first depth information, acquiring second depth information corresponding to the image area in which the first depth image is predicted to be the same as a second depth image; acquiring the second depth image, and acquiring third depth information corresponding to the image area in which the second depth image differs from the first depth image; and performing blurring processing on the second depth image based on the second depth information and the third depth information, and outputting the blurred image. The embodiment of the application also discloses an image processing apparatus, an electronic device, and a storage medium.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to, but not limited to, the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a storage medium.
Background
Current image blurring schemes depend heavily on the accuracy of the algorithm or of single-frame depth information. A single-lens blurring technique must obtain the depth information of an image through multiple exposures or similar means; a multi-lens blurring technique computes depth information from multi-camera parallax and relies on how accurately the depth map algorithm calculates the depth of different points in the image. In either case it is difficult to obtain an accurate depth map, so the blurring effect is poor.
Summary of the invention
The embodiments of the present application are intended to provide an image processing method, an image processing apparatus, an electronic device, and a storage medium, which solve the problem in the related art that an accurate depth map is difficult to obtain, resulting in a poor blurring effect. The depth information of the current frame is corrected based on the depth information of historical frames, improving the accuracy of the depth map; the image is then blurred based on the corrected depth information, yielding an image with an ideal blurring effect.
The technical scheme of the application is realized as follows:
a method of image processing, the method comprising:
acquiring first depth information of an acquired first depth image;
acquiring second depth information corresponding to an image area of the first depth image, which is predicted to be the same as the second depth image, based on the first depth information;
acquiring a second depth image, and acquiring third depth information corresponding to an image area of the second depth image different from the first depth image;
and performing blurring processing on the second depth image based on the second depth information and the third depth information, and outputting a blurred image.
Optionally, after acquiring the first depth information of the acquired first depth image, the method includes:
acquiring motion trend information of an image acquisition device;
correspondingly, the obtaining, based on the first depth information, second depth information corresponding to an image region where the predicted first depth image is the same as the second depth image includes:
and based on the motion trend information, carrying out segmentation processing on the first depth information to obtain the second depth information.
Optionally, the acquiring of the motion trend information of the image capturing device includes:
determining the motion trend information based on a signal which is acquired by a gyroscope and represents the motion trend of the image acquisition device and/or an analysis result obtained by analyzing a multi-frame historical depth image;
and the analysis result is obtained by inputting the multiple frames of historical depth images into a trained network model and analyzing the motion trend of the image acquisition state; the multiple frames of historical depth images are captured in the same scene as the second depth image and include the first depth image.
Optionally, the obtaining third depth information corresponding to an image area of the second depth image different from the image area of the first depth image includes:
reducing a resolution of an image region of the first depth image that is the same as the second depth image;
fusing the first depth image with the reduced resolution and the second depth image with the reduced resolution, and acquiring fourth depth information of the fused depth image, wherein the fourth depth information is used as updated depth information of the second depth image;
and acquiring the third depth information based on the fourth depth information.
Optionally, the blurring processing on the second depth image based on the second depth information and the third depth information includes:
acquiring the depth information variable quantity of an image area with the same first depth image and second depth image;
and performing blurring processing on the second depth image based on the depth information variation, the second depth information and the third depth information.
Optionally, the blurring processing on the second depth image based on the depth information variation, the second depth information, and the third depth information includes:
inputting the depth information variable quantity, the second depth information and the third depth information into a depth map synthesis algorithm model to obtain output fifth depth information;
blurring the second depth image based on the fifth depth information.
Optionally, the blurring processing on the second depth image based on the fifth depth information includes:
blurring a foreground portion or a background portion of the second depth image based on the fifth depth information.
An image processing apparatus, the image processing apparatus comprising:
the acquisition module is used for acquiring first depth information of the acquired first depth image;
the obtaining module is further configured to obtain second depth information corresponding to an image region where the predicted first depth image and the predicted second depth image are the same, based on the first depth information;
the image acquisition module is used for acquiring a second depth image;
the acquiring module is further configured to acquire third depth information corresponding to an image area of the second depth image different from the first depth image;
and the processing module is used for carrying out blurring processing on the second depth image based on the second depth information and the third depth information and outputting a blurred image.
An electronic device, the electronic device comprising: a processor, a memory, and a communication bus;
the communication bus is used for realizing communication connection between the processor and the memory;
the processor is used for executing the image processing program stored in the memory so as to realize the steps of the image processing method.
A storage medium storing one or more programs executable by one or more processors to implement the steps of the image processing method described above.
The image processing method, image processing apparatus, electronic device, and storage medium provided by the embodiments of the application acquire first depth information of an acquired first depth image; based on the first depth information, acquire second depth information corresponding to the image area in which the first depth image is predicted to be the same as a second depth image; acquire the second depth image and third depth information corresponding to the image area in which the second depth image differs from the first depth image; and blur the second depth image based on the second depth information and the third depth information, outputting the blurred image. In other words, during blurring the second depth information within the first depth information is retained in advance, i.e., the usable depth information in the historical frame is kept. After the second depth image is captured, the third depth information corresponding to the image area that differs from the first depth image is obtained, and the second depth image is then blurred using the second and third depth information. The depth information of the current frame is thus corrected based on the depth information of the historical frame, improving the accuracy of the depth map, and blurring the second depth image based on the corrected depth information yields an image with an ideal blurring effect.
Drawings
Fig. 1 is a first flowchart illustrating an image processing method according to an embodiment of the present application;
fig. 2 is a second flowchart illustrating an image processing method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a depth information principle provided by an embodiment of the present application;
fig. 4 is a second schematic diagram of a depth information principle provided by an embodiment of the present application;
fig. 5 is a third schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within its protection scope.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first/second/third" are used only to distinguish similar objects and do not denote a particular order; where permitted, the specific order or sequence may be interchanged, so that the embodiments of the application described herein can be practiced in an order other than that shown or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
The image blurring schemes commonly used at present optimize the image based on a depth map, and the depth map can be obtained in two ways: with one camera or with two cameras. With one camera, the depth information of the shooting region can be obtained through artificial intelligence (AI) or Time-of-Flight (TOF) approaches. The mainstream depth acquisition scheme builds a depth map (Depth Map) from two cameras and uses it to infer the spatial context of the scene and control the degree of blurring. The dual-camera method uses the viewing-angle difference between the two cameras to calculate the distance of each pixel from the focal plane, and the depth information is then derived from that distance.
It should be noted that image blurring schemes depend heavily on the accuracy of the algorithm or of single-frame depth information. A single-lens blurring technique must obtain the depth data of an image through means such as multiple exposures, or perform blurring through an AI algorithm; otherwise blurring can only be done by matting, so the result lacks a sense of depth layering and degrades user experience. A multi-lens blurring technique that computes depth from multi-camera parallax depends on the depth map algorithm to calculate the depth of different points accurately; in practice, these techniques cannot obtain a sufficiently accurate depth map, and the blurring effect is poor.
An embodiment of the present application provides an image processing method applied to an electronic device, and as shown in fig. 1, the method includes the following steps:
step 101, acquiring first depth information of the acquired first depth image.
In the embodiment of the present application, the electronic device may include a mobile terminal device such as a mobile phone, a tablet computer, a notebook computer, a Personal Digital Assistant (PDA), a camera, a wearable device, and a fixed terminal device such as a desktop computer.
In an embodiment of the present application, the first depth image includes at least one frame of image and serves as a historical reference image for a subsequently acquired second depth image. Depth information here characterizes, at each pixel of a depth image, the distance between the scene and the shooting point.
And 102, acquiring second depth information corresponding to the image area of the predicted first depth image, which is the same as the second depth image, based on the first depth information.
In the embodiment of the application, after the electronic device acquires the first depth image, the first depth information of the first depth image is acquired. Further, the electronic device may predict an image region where the first depth image is the same as the second depth image. Therefore, the electronic device obtains second depth information corresponding to an image region where the predicted first depth image is the same as the second depth image based on the first depth information, namely, the electronic device divides the depth map of the current frame based on the range of the same region, and at this time, the electronic device can also cache partial depth information of the current frame, namely, depth information corresponding to the same region.
And 103, acquiring a second depth image, and acquiring third depth information corresponding to an image area of the second depth image different from the first depth image.
In the embodiment of the application, under the condition that the electronic device acquires the second depth image, the depth information of the second depth image can be acquired, and third depth information corresponding to an image area of the second depth image different from that of the first depth image is acquired. That is, the third depth information is depth information corresponding to the change area.
And 104, performing blurring processing on the second depth image based on the second depth information and the third depth information, and outputting a blurred image.
In this embodiment, when the electronic device acquires the second depth information and the third depth information, the depth information of the current frame may be corrected based on the depth information of the history frame, for example, the second depth image may be blurred based on the depth information variation of the same region in the history frame, the depth information of the cached part of the history frame, and the depth information of the difference region, and the blurred image may be output.
The image processing method provided by this embodiment acquires first depth information of an acquired first depth image; based on the first depth information, acquires second depth information corresponding to the image area in which the first depth image is predicted to be the same as a second depth image; acquires the second depth image and third depth information corresponding to the image area in which the second depth image differs from the first depth image; and blurs the second depth image based on the second and third depth information, outputting the blurred image. In other words, during blurring the second depth information within the first depth information is retained in advance, i.e., the usable depth information in the historical frame is kept. After the second depth image is captured, the third depth information corresponding to the image area that differs from the first depth image is obtained, and the second depth image is then blurred using the second and third depth information. The depth information of the current frame is thus corrected based on the depth information of the historical frame, improving the accuracy of the depth map, and blurring based on the corrected depth information yields an image with an ideal blurring effect.
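Steps 101 to 104 can be sketched as follows. This is a hypothetical NumPy illustration, not the patent's implementation: `overlap_mask` stands for the image area predicted to be shared by both frames, and `focal_depth` for the depth of the focal plane; both names are assumptions.

```python
import numpy as np

def correct_depth(first_depth, second_depth, overlap_mask):
    """Steps 101-103: keep the historical depth (second depth information)
    in the overlap region and the freshly computed depth (third depth
    information) in the changed region."""
    return np.where(overlap_mask, first_depth, second_depth)

def blur_radius_map(depth, focal_depth, scale=1.0):
    """Step 104: blur strength grows with distance from the focal plane."""
    return scale * np.abs(depth - focal_depth)
```

On a real device the blur itself would be applied by a rendering pipeline; here only the corrected depth and the per-pixel blur radius are derived.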
An embodiment of the present application provides an image processing method applied to an electronic device, and as shown in fig. 2, the method includes the following steps:
step 201, acquiring first depth information of the acquired first depth image.
Here, the electronic device acquires first depth information of the acquired first depth image in a scene in which image blurring is required.
And 202, acquiring motion trend information of the image acquisition device.
In this embodiment of the application, the obtaining of the motion trend information of the image capturing device in step 202 may be implemented as follows:
the method comprises the steps of firstly, determining motion trend information based on signals representing the motion trend of an image acquisition device acquired by a gyroscope. The electronic equipment is provided with a gyroscope for acquiring signals representing the movement trend of the image acquisition device.
And secondly, determining motion trend information based on an analysis result obtained by analyzing the multi-frame historical depth image.
Here, the analysis result is a result of inputting a plurality of frames of historical depth images to the trained network model, and analyzing a motion trend of the image capture state, the plurality of frames of historical depth images are the same as the capture scene of the second depth image, and the plurality of frames of historical depth images include the first depth image.
In some embodiments, after acquiring multiple frames of historical depth images in the same acquisition scene, the electronic device inputs the multiple frames of historical depth images to the trained network model to obtain the analysis result.
And thirdly, determining motion trend information based on the signals representing the motion trend of the image acquisition device acquired by the gyroscope and analysis results obtained by analyzing the multi-frame historical depth images.
In the embodiment of the application, the electronic equipment can jointly determine the motion trend information based on the signals representing the motion trend of the image acquisition device acquired by the gyroscope and the analysis result obtained by analyzing the multi-frame historical depth image, so that the accuracy of the motion trend information is improved.
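The gyroscope-based branch can be sketched as integrating angular rates over one frame interval and mapping the resulting rotation to an image-plane shift. This is an illustrative small-angle pinhole approximation; the function and parameter names are assumptions, not from the patent.

```python
import numpy as np

def predict_pixel_shift(gyro_rates, dt, focal_px):
    """Estimate the camera's motion trend between frames from gyroscope
    angular-rate samples (wx, wy) in rad/s sampled at period dt, and map
    it to a pixel shift using focal length in pixels (focal_px)."""
    rates = np.asarray(gyro_rates, dtype=float)
    # Integrate angular rate over the frame interval -> rotation angles.
    angles = rates.sum(axis=0) * dt
    # Small-angle approximation: pixel shift ~ focal length * angle.
    dx = focal_px * angles[1]   # yaw shifts the image horizontally
    dy = focal_px * angles[0]   # pitch shifts the image vertically
    return dx, dy
```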
And 203, based on the motion trend information, performing segmentation processing on the first depth information to obtain second depth information.
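The segmentation of step 203 might look like the following sketch: given the predicted pixel shift, crop out the region of the first depth map expected to reappear in the next frame. The sign convention (positive shift drops leading rows/columns) is an assumption for illustration.

```python
import numpy as np

def split_by_shift(depth, dx, dy):
    """Segment a depth map according to the predicted motion trend,
    returning the region expected to be shared with the next frame
    (the 'second depth information' to cache)."""
    h, w = depth.shape
    dx, dy = int(round(dx)), int(round(dy))
    ys = slice(max(dy, 0), h + min(dy, 0))
    xs = slice(max(dx, 0), w + min(dx, 0))
    return depth[ys, xs]
```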
And step 204, acquiring a second depth image.
And 205, reducing the resolution of the image area of the first depth image, which is the same as the second depth image.
And step 206, fusing the first depth image with the reduced resolution and the second depth image with the reduced resolution, and acquiring fourth depth information of the fused depth image.
The fourth depth information is updated depth information of the second depth image.
Here, when the current frame depth information is calculated for the same image area of the current frame and the historical frame, the resolution of the depth map of the same area may be reduced, and then the calculation result and the depth map of the historical frame are fused to generate more accurate depth information of the current frame.
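A minimal sketch of this reduce-then-fuse step, assuming 2x2 block averaging for the resolution reduction and a fixed fusion weight (both choices are illustrative; the patent does not specify them):

```python
import numpy as np

def downsample2(depth):
    """Halve resolution by 2x2 block averaging (even dimensions assumed)."""
    h, w = depth.shape
    return depth.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def fuse_same_region(current_same, historical_same, w_hist=0.5):
    """Fuse the reduced-resolution current-frame depth of the shared
    region with the historical depth map, then restore resolution by
    nearest-neighbour repetition."""
    fused = w_hist * downsample2(historical_same) \
        + (1.0 - w_hist) * downsample2(current_same)
    return fused.repeat(2, axis=0).repeat(2, axis=1)
```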
It should be noted that, referring to fig. 3 and 4, the blurring effect on the image depends on the accuracy of the depth map, and theoretically, the farther away from the focal plane, the larger the circle of confusion, and the more blurred the image is. That is, the closer to the focal plane the image is. The blurring effect is ideally that the blur is outside the depth of field, but the blur is different, with the closer to the focal plane, the sharper the blur, and the farther away from the focal plane, the more blur. These effects are dependent on the accuracy of the depth information and the timeliness of the computation.
The image processing method provided by the application improves the accuracy of the depth information, provides reliable reference factors for the blurring processing of the image, and further improves the blurring effect.
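The relationship sketched around figs. 3 and 4 follows the standard thin-lens circle-of-confusion model from general optics (this formula is background knowledge, not stated in the patent):

```python
def circle_of_confusion(s, s_f, f, aperture):
    """Blur-circle diameter for an object at distance s when a lens of
    focal length f is focused at distance s_f; aperture is the
    entrance-pupil diameter (all in metres). The diameter is zero on
    the focal plane and grows with distance from it."""
    return aperture * (f / (s_f - f)) * abs(s - s_f) / s
```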
And step 207, acquiring third depth information based on the fourth depth information.
And step 208, acquiring the depth information variation of the same image area of the first depth image and the second depth image.
Here, the depth change of the same image area between the historical frame and the current frame may be calculated from the change in the absolute distance of a reference point in the image.
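One way to read this step is the following sketch: measure the depth change at one reference point and propagate it across the cached shared region (the single-point propagation is an assumption for illustration).

```python
import numpy as np

def update_shared_depth(hist_depth, cur_depth, ref):
    """Estimate the depth change (dep') of the shared area from a
    reference point ref = (row, col) visible in both frames, then
    shift the cached historical depth by that amount."""
    delta = cur_depth[ref] - hist_depth[ref]
    return hist_depth + delta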
And 209, blurring the second depth image based on the depth information variation, the second depth information and the third depth information.
In this embodiment of the application, step 209 performs blurring processing on the second depth image based on the depth information variation, the second depth information, and the third depth information, and may be implemented by the following steps:
and firstly, inputting the depth information variable quantity, the second depth information and the third depth information into a depth map synthesis algorithm model to obtain fifth output depth information.
And secondly, blurring the second depth image based on the fifth depth information.
In other embodiments of the present application, based on the fifth depth information, the blurring processing on the second depth image may include: blurring the foreground part or the background part of the second depth image based on the fifth depth information. That is to say, under the condition that accurate depth information is acquired, blurring processing can be flexibly selected to be performed on the foreground part or the background part of the second depth image according to the current application scene. For example, the electronic device detects input information of the operation object, determines the blurring object to be a foreground portion or a background portion based on the input information, and performs blurring processing on the blurring object. Here, the input information may be voice information; the input information may also be touch information to a screen of the electronic device through which the captured image may be viewed.
Therefore, the available depth information in the historical frame is reserved in advance through the judgment of the motion trend, the depth information of the historical frame is used for calculating the depth information of the current frame on a new frame, or the depth information of the current frame is corrected by using the depth information of the historical frame, and the purpose of improving the accuracy of the depth map is achieved. In addition, the depth information of the historical frame is utilized, so that the calculation power consumption of the current frame can be greatly reduced, and the purposes of improving the calculation speed of the depth map and reducing the system power consumption are achieved.
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
An embodiment of the present application provides an image processing method, which is applied to an electronic device provided with a camera, and as shown in fig. 5, the method includes the following steps:
step 301, turning on the camera, and entering a scene needing blurring processing.
Step 302, calculating and caching the depth information of the current image.
And 303, acquiring signals acquired by the gyroscope, and calculating a motion trend.
And step 304, estimating the same image area of the next frame image and the current image according to the calculation result of the motion trend.
And 305, segmenting and storing the depth information of the current frame according to the range of the same region.
Wherein depth information of the same image area of the historical multiframe can be stored.
Step 306, new image frames are acquired.
And 307, segmenting different areas of the historical frame, and calculating detailed depth information of the different areas.
Step 308, calculating the depth information change of the same image area.
Wherein the depth information change comprises a depth change amplitude of the new frame relative to the historical frame, e.g. an increasing or decreasing value of the distance from a certain point relative to the historical frame.
Wherein, the depth information change of the same image area is denoted by dep'.
And 309, inputting dep' and the depth information of the cached partial historical frame into a depth map synthesis algorithm model based on the depth information of the difference region obtained by the latest picture, and outputting the updated depth information.
And 310, blurring the image according to the updated depth information by using a blurring algorithm.
And 311, outputting the blurred image.
In the embodiment of the application, the synthesis can be performed according to the depth information of a plurality of historical frames, so that the effect of multi-frame synthesis is realized, and the accuracy of the depth information is improved, for example, the synthesis is performed by using the historical depth information of three frames or five frames. Or for the image area of the current frame and the historical frame, when the depth information of the current frame is calculated, the resolution of the depth map of the same area can be reduced, and then the calculation result and the depth map of the historical frame are fused to generate more accurate depth information of the current frame.
According to the method, the motion trend and similar multi-frame fusion technology is applied to the depth map calculation of the image, the depth information of the historical frame is firstly stored, then the relevant information of the camera motion trend is obtained through the information of the gyroscope, the depth information of the historical frame is segmented according to the motion trend parameters, for example, the predicted depth information of the same image area in the next frame is reserved, after a new frame comes, the depth information calculation is firstly carried out on the image areas outside the same area, then the depth areas of the whole image are synthesized according to the relative depth change degree of the same image area at different moments, and the depth information of the same area of the historical frame and the current frame and the depth information of the new area of the current frame generate the depth information of the whole image. The scheme can be synthesized according to the depth information of a plurality of historical frames, so that the accuracy of the depth information is improved; or for the image area of the current frame and the image area of the historical frame, when the depth information is calculated in the current frame, the resolution of the depth map of the same area can be reduced, then the depth map of the historical frame and the depth map calculated in the current frame are fused to generate more accurate depth information of the current frame, and then the image is blurred by utilizing the depth information of the current frame.
The image processing scheme provided by the application can be used for processing the depth information of an image and can also be applied to other tasks that need to process a whole image. In such application scenarios, the gyroscope information is used to predict the motion trend, the reusable portion of the historical data is retained in advance, and the historical data is used for fusion correction once the new data is obtained.
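A minimal sketch of predicting the motion trend from gyroscope readings; the pinhole projection, the single yaw axis, and the evenly spaced samples are assumptions made for illustration (a real device would also handle translation and other rotation axes):

```python
import math

def predict_pixel_shift(gyro_yaw_rates_rad_s, frame_dt_s, focal_length_px):
    """Integrate evenly spaced gyroscope yaw-rate samples over one frame
    interval and convert the resulting rotation to an approximate
    horizontal pixel shift using a pinhole camera model."""
    dt = frame_dt_s / len(gyro_yaw_rates_rad_s)
    angle = sum(rate * dt for rate in gyro_yaw_rates_rad_s)  # integrated yaw (rad)
    return focal_length_px * math.tan(angle)                  # shift in pixels
```

The predicted shift tells the pipeline which part of the historical depth map to retain before the next frame arrives.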
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
An embodiment of the present application provides an image processing apparatus, which can be applied to an image processing method provided in the embodiment corresponding to fig. 1 and 2, and as shown in fig. 6, the image processing apparatus 4 includes:
an obtaining module 41, configured to obtain first depth information of the acquired first depth image;
the obtaining module 41 is further configured to obtain, based on the first depth information, second depth information corresponding to the image region predicted to be the same in the first depth image and the second depth image;
an image acquisition module 42 for acquiring a second depth image;
the obtaining module 41 is further configured to obtain third depth information corresponding to an image area of the second depth image different from the first depth image;
and a processing module 43, configured to perform blurring processing on the second depth image based on the second depth information and the third depth information, and output a blurred image.
In other embodiments of the present application, the obtaining module 41 is further configured to obtain motion trend information of the image capturing device; and based on the motion trend information, performing segmentation processing on the first depth information to obtain second depth information.
In other embodiments of the present application, the obtaining module 41 is further configured to determine the motion trend information based on a signal, acquired by a gyroscope, representing the motion trend of the image capturing device, and/or an analysis result obtained by analyzing multiple frames of historical depth images;
the analysis result is obtained by inputting the multiple frames of historical depth images into a trained network model and analyzing the motion trend of the image acquisition state, wherein the multiple frames of historical depth images are acquired in the same scene as the second depth image and include the first depth image.
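Where no gyroscope signal is available, the text allows the trend to be derived by analyzing multi-frame historical depth images. As a simple stand-in for the trained network model (which the text does not specify), an exhaustive search over horizontal offsets illustrates the idea:

```python
import numpy as np

def estimate_shift_from_history(prev_depth, cur_depth, max_shift=8):
    """Estimate the horizontal camera shift between two depth frames by
    testing candidate column offsets and picking the one that minimises
    the mean absolute depth difference over the overlap. Assumes a
    horizontal pan and max_shift smaller than the image width."""
    h, w = prev_depth.shape
    best, best_err = 0, float("inf")
    for s in range(max_shift + 1):
        err = np.abs(prev_depth[:, s:] - cur_depth[:, :w - s]).mean()
        if err < best_err:
            best, best_err = s, err
    return best
```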
In other embodiments of the present application, the obtaining module 41 is further configured to reduce the resolution of an image area where the first depth image is the same as the second depth image;
fusing the first depth image with the reduced resolution and the second depth image with the reduced resolution, and acquiring fourth depth information of the fused depth image, wherein the fourth depth information is used as updated depth information of the second depth image;
and acquiring third depth information based on the fourth depth information.
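The reduce-then-fuse step can be sketched as follows; the 2x2 average downsampling and the equal-weight fusion rule are assumptions for illustration, since the text fixes neither a downsampling factor nor a fusion rule:

```python
import numpy as np

def downsample2(depth):
    """Halve the resolution by averaging 2x2 blocks (even dimensions assumed)."""
    h, w = depth.shape
    return depth[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def fuse_shared_region(hist_depth, cur_depth):
    """Reduce the resolution of the shared region in both depth maps,
    then fuse them with an equal-weight average to obtain the 'fourth'
    depth information used to update the current frame."""
    return 0.5 * (downsample2(hist_depth) + downsample2(cur_depth))
```

Lowering the resolution before fusion reduces the cost of the current-frame depth calculation for the region that history already covers.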
In other embodiments of the present application, the processing module 43 is further configured to obtain a depth information variation of an image region where the first depth image and the second depth image are the same;
and blurring the second depth image based on the depth information variation, the second depth information and the third depth information.
In other embodiments of the present application, the processing module 43 is further configured to input the depth information variation, the second depth information, and the third depth information into a depth map synthesis algorithm model to obtain output fifth depth information;
and performing blurring processing on the second depth image based on the fifth depth information.
In other embodiments of the present application, the processing module 43 is further configured to perform blurring on the foreground portion or the background portion of the second depth image based on the fifth depth information.
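A hedged sketch of the final depth-gated blurring; the fixed depth threshold, the box-blur kernel, and the choice to blur the background rather than the foreground are all illustrative assumptions:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple box blur with edge padding, standing in for a bokeh kernel."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def blur_background(img, depth, depth_threshold):
    """Blur only the pixels whose depth exceeds the threshold (background);
    foreground pixels are kept sharp."""
    blurred = box_blur(img)
    return np.where(depth > depth_threshold, blurred, img)
```

Blurring the foreground instead would simply invert the comparison in `np.where`.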
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
An embodiment of the present application provides an electronic device, which may be applied to an image processing method provided in the embodiment corresponding to fig. 1 and 2, and as shown in fig. 7, the electronic device 5 (the electronic device 5 in fig. 7 corresponds to the image processing apparatus 4 in fig. 6) includes: a processor 51, a memory 52, and a communication bus 53, wherein:
the communication bus 53 is used to realize a communication connection between the processor 51 and the memory 52.
The processor 51 is configured to execute an image processing program stored in the memory 52 to implement the steps of:
acquiring first depth information of an acquired first depth image;
acquiring, based on the first depth information, second depth information corresponding to the image area predicted to be the same in the first depth image and the second depth image;
acquiring a second depth image, and acquiring third depth information corresponding to different image areas of the second depth image and the first depth image;
and performing blurring processing on the second depth image based on the second depth information and the third depth information, and outputting the image after blurring processing.
In other embodiments of the present application, the processor 51 is configured to execute an image processing program stored in the memory 52 to implement the following steps:
acquiring motion trend information of an image acquisition device;
correspondingly, the acquiring, based on the first depth information, of the second depth information corresponding to the image area predicted to be the same in the first depth image and the second depth image includes:
and based on the motion trend information, performing segmentation processing on the first depth information to obtain second depth information.
In other embodiments of the present application, the processor 51 is configured to execute an image processing program stored in the memory 52 to implement the following steps:
determining the motion trend information based on a signal, acquired by a gyroscope, representing the motion trend of the image acquisition device, and/or an analysis result obtained by analyzing multiple frames of historical depth images;
the analysis result is obtained by inputting the multiple frames of historical depth images into a trained network model and analyzing the motion trend of the image acquisition state, wherein the multiple frames of historical depth images are acquired in the same scene as the second depth image and include the first depth image.
In other embodiments of the present application, the processor 51 is configured to execute an image processing program stored in the memory 52 to implement the following steps:
reducing the resolution of the same image area of the first depth image and the second depth image;
fusing the first depth image with the reduced resolution and the second depth image with the reduced resolution, and acquiring fourth depth information of the fused depth image, wherein the fourth depth information is used as updated depth information of the second depth image;
and acquiring third depth information based on the fourth depth information.
In other embodiments of the present application, the processor 51 is configured to execute an image processing program stored in the memory 52 to implement the following steps:
acquiring the depth information variation of the image area that is the same in the first depth image and the second depth image;
and blurring the second depth image based on the depth information variation, the second depth information and the third depth information.
In other embodiments of the present application, the processor 51 is configured to execute an image processing program stored in the memory 52 to implement the following steps:
inputting the depth information variation, the second depth information and the third depth information into a depth map synthesis algorithm model to obtain output fifth depth information;
and performing blurring processing on the second depth image based on the fifth depth information.
In other embodiments of the present application, the processor 51 is configured to execute an image processing program stored in the memory 52 to implement the following steps:
blurring the foreground part or the background part of the second depth image based on the fifth depth information.
It should be noted that, for a specific implementation process of the step executed by the processor in this embodiment, reference may be made to an implementation process in the image processing method provided in the embodiment corresponding to fig. 1 and 2, and details are not described here again.
Embodiments of the application provide a computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to perform the steps of:
acquiring first depth information of an acquired first depth image;
acquiring, based on the first depth information, second depth information corresponding to the image area predicted to be the same in the first depth image and the second depth image;
acquiring a second depth image, and acquiring third depth information corresponding to different image areas of the second depth image and the first depth image;
and performing blurring processing on the second depth image based on the second depth information and the third depth information, and outputting the image after blurring processing.
In other embodiments of the present application, the one or more programs are executable by the one or more processors and further implement the steps of:
acquiring motion trend information of an image acquisition device;
correspondingly, the acquiring, based on the first depth information, of the second depth information corresponding to the image area predicted to be the same in the first depth image and the second depth image includes:
and based on the motion trend information, performing segmentation processing on the first depth information to obtain second depth information.
In other embodiments of the present application, the one or more programs are executable by the one or more processors and further implement the steps of:
determining the motion trend information based on a signal, acquired by a gyroscope, representing the motion trend of the image acquisition device, and/or an analysis result obtained by analyzing multiple frames of historical depth images;
the analysis result is obtained by inputting the multiple frames of historical depth images into a trained network model and analyzing the motion trend of the image acquisition state, wherein the multiple frames of historical depth images are acquired in the same scene as the second depth image and include the first depth image.
In other embodiments of the present application, the one or more programs are executable by the one or more processors and further implement the steps of:
reducing the resolution of the same image area of the first depth image and the second depth image;
fusing the first depth image with the reduced resolution and the second depth image with the reduced resolution, and acquiring fourth depth information of the fused depth image, wherein the fourth depth information is used as updated depth information of the second depth image;
and acquiring third depth information based on the fourth depth information.
In other embodiments of the present application, the one or more programs are executable by the one or more processors and further implement the steps of:
acquiring the depth information variation of the image area that is the same in the first depth image and the second depth image;
and blurring the second depth image based on the depth information variation, the second depth information and the third depth information.
In other embodiments of the present application, the one or more programs are executable by the one or more processors and further implement the steps of:
inputting the depth information variation, the second depth information and the third depth information into a depth map synthesis algorithm model to obtain output fifth depth information;
and performing blurring processing on the second depth image based on the fifth depth information.
In other embodiments of the present application, the one or more programs are executable by the one or more processors and further implement the steps of:
blurring the foreground part or the background part of the second depth image based on the fifth depth information.
It should be noted that, for a specific implementation process of the step executed by the processor in this embodiment, reference may be made to an implementation process in the image processing method provided in the embodiment corresponding to fig. 1 and 2, and details are not described here again.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application.

Claims (8)

1. An image processing method, characterized in that the method comprises:
acquiring first depth information of an acquired first depth image;
acquiring motion trend information of an image acquisition device;
based on the motion trend information, performing segmentation processing on the first depth information to obtain second depth information corresponding to the image area predicted to be the same in the first depth image and the second depth image;
acquiring a second depth image, and acquiring third depth information corresponding to an image area of the second depth image different from the first depth image;
performing blurring processing on the second depth image based on the second depth information and the third depth information, and outputting a blurred image;
wherein the obtaining third depth information corresponding to an image region of the second depth image different from the first depth image includes:
reducing a resolution of an image region of the first depth image that is the same as the second depth image;
fusing the first depth image with the reduced resolution and the second depth image with the reduced resolution, and acquiring fourth depth information of the fused depth image, wherein the fourth depth information is used as updated depth information of the second depth image;
and acquiring the third depth information based on the fourth depth information.
2. The method according to claim 1, wherein the acquiring motion trend information of the image acquisition device comprises:
determining the motion trend information based on a signal which is acquired by a gyroscope and represents the motion trend of the image acquisition device and/or an analysis result obtained by analyzing a multi-frame historical depth image;
the analysis result is a result of inputting the multiple frames of historical depth images into a trained network model, and analyzing the motion trend of the image acquisition state, wherein the multiple frames of historical depth images have the same acquisition scene as the second depth image, and include the first depth image.
3. The method of claim 1, wherein the blurring the second depth image based on the second depth information and the third depth information comprises:
acquiring the depth information variation of the image area that is the same in the first depth image and the second depth image;
and performing blurring processing on the second depth image based on the depth information variation, the second depth information and the third depth information.
4. The method of claim 3, wherein the blurring the second depth image based on the depth information variation, the second depth information, and the third depth information comprises:
inputting the depth information variable quantity, the second depth information and the third depth information into a depth map synthesis algorithm model to obtain output fifth depth information;
blurring the second depth image based on the fifth depth information.
5. The method of claim 4, wherein blurring the second depth image based on the fifth depth information comprises:
blurring a foreground portion or a background portion of the second depth image based on the fifth depth information.
6. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring first depth information of the acquired first depth image;
the acquisition module is further used for acquiring the motion trend information of the image acquisition device, and for performing, based on the motion trend information, segmentation processing on the first depth information to obtain second depth information corresponding to the image area predicted to be the same in the first depth image and the second depth image;
the image acquisition module is used for acquiring a second depth image;
the acquiring module is further configured to acquire third depth information corresponding to an image area of the second depth image different from the first depth image;
the processing module is used for carrying out blurring processing on the second depth image based on the second depth information and the third depth information and outputting a blurred image;
the obtaining module is further configured to reduce a resolution of an image area where the first depth image and the second depth image are the same; fusing the first depth image with the reduced resolution and the second depth image with the reduced resolution, and acquiring fourth depth information of the fused depth image, wherein the fourth depth information is used as updated depth information of the second depth image; and acquiring the third depth information based on the fourth depth information.
7. An electronic device, characterized in that the electronic device comprises: a processor, a memory, and a communication bus;
the communication bus is used for realizing communication connection between the processor and the memory;
the processor is configured to execute an image processing program stored in the memory to implement the steps of the image processing method according to any one of claims 1 to 5.
8. A storage medium characterized in that the storage medium stores one or more programs executable by one or more processors to implement the steps of the image processing method according to any one of claims 1 to 5.
CN202010573559.9A 2020-06-22 2020-06-22 Image processing method and device, electronic equipment and storage medium Active CN111726526B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010573559.9A CN111726526B (en) 2020-06-22 2020-06-22 Image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010573559.9A CN111726526B (en) 2020-06-22 2020-06-22 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111726526A CN111726526A (en) 2020-09-29
CN111726526B true CN111726526B (en) 2021-12-21

Family

ID=72569968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010573559.9A Active CN111726526B (en) 2020-06-22 2020-06-22 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111726526B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113014806B (en) * 2021-02-07 2022-09-13 维沃移动通信有限公司 Blurred image shooting method and device
CN115134532A (en) * 2022-07-26 2022-09-30 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101682794A (en) * 2007-05-11 2010-03-24 皇家飞利浦电子股份有限公司 Method, apparatus and system for processing depth-related information
CN102821289A (en) * 2011-06-06 2012-12-12 索尼公司 Image processing apparatus, image processing method, and program
CN102842110A (en) * 2011-06-20 2012-12-26 富士胶片株式会社 Image processing device and image processing method
CN103037226A (en) * 2011-09-30 2013-04-10 联咏科技股份有限公司 Method and device for depth fusion
CN109889724A (en) * 2019-01-30 2019-06-14 北京达佳互联信息技术有限公司 Image weakening method, device, electronic equipment and readable storage medium storing program for executing

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7733412B2 (en) * 2004-06-03 2010-06-08 Canon Kabushiki Kaisha Image pickup apparatus and image pickup method
CN201937736U (en) * 2007-04-23 2011-08-17 德萨拉技术爱尔兰有限公司 Digital camera
US8279325B2 (en) * 2008-11-25 2012-10-02 Lytro, Inc. System and method for acquiring, editing, generating and outputting video data
JP6325841B2 (en) * 2014-02-27 2018-05-16 オリンパス株式会社 Imaging apparatus, imaging method, and program
EP3284061B1 (en) * 2015-04-17 2021-11-10 FotoNation Limited Systems and methods for performing high speed video capture and depth estimation using array cameras


Also Published As

Publication number Publication date
CN111726526A (en) 2020-09-29

Similar Documents

Publication Publication Date Title
CN108898567B (en) Image noise reduction method, device and system
KR102480245B1 (en) Automated generation of panning shots
US20190208125A1 (en) Depth Map Calculation in a Stereo Camera System
CN113129241B (en) Image processing method and device, computer readable medium and electronic equipment
CN111726526B (en) Image processing method and device, electronic equipment and storage medium
CN110766706A (en) Image fusion method and device, terminal equipment and storage medium
CN110062157B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN111553362A (en) Video processing method, electronic equipment and computer readable storage medium
CN110992395A (en) Image training sample generation method and device and motion tracking method and device
CN112258404A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114390201A (en) Focusing method and device thereof
CN112215877A (en) Image processing method and device, electronic equipment and readable storage medium
CN114096994A (en) Image alignment method and device, electronic equipment and storage medium
US9686470B2 (en) Scene stability detection
CN111583329B (en) Augmented reality glasses display method and device, electronic equipment and storage medium
CN113205011A (en) Image mask determining method and device, storage medium and electronic equipment
CN111833459A (en) Image processing method and device, electronic equipment and storage medium
JP2020136774A (en) Image processing apparatus for detecting motion vector, control method of the same, and program
EP3429186B1 (en) Image registration method and device for terminal
CN115496664A (en) Model training method and device, electronic equipment and readable storage medium
CN112954197B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN115134532A (en) Image processing method, image processing device, storage medium and electronic equipment
CN115049572A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN115439386A (en) Image fusion method and device, electronic equipment and storage medium
CN113703704A (en) Interface display method, head-mounted display device and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant