CN113766135B - Photographing method simulating depth of field effect and mobile terminal thereof

Photographing method simulating depth of field effect and mobile terminal thereof

Info

Publication number
CN113766135B
Authority
CN
China
Prior art keywords
bitmap data
blurring
area
depth
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111113346.9A
Other languages
Chinese (zh)
Other versions
CN113766135A (en)
Inventor
Zhang Xu (张旭)
Guo Bin (郭斌)
Luo Ran (罗然)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zhuoyi Technology Co Ltd
Original Assignee
Shanghai Zhuoyi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zhuoyi Technology Co Ltd filed Critical Shanghai Zhuoyi Technology Co Ltd
Priority to CN202111113346.9A priority Critical patent/CN113766135B/en
Publication of CN113766135A publication Critical patent/CN113766135A/en
Application granted granted Critical
Publication of CN113766135B publication Critical patent/CN113766135B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Abstract

The invention provides a photographing method that simulates a depth-of-field effect, and a mobile terminal using it. The method runs on an Android system terminal and comprises the following steps: S1, setting a gradient blurring area, a focus area, a gradient blurring coefficient and a blurring coefficient for the bitmap data; S2, when the simulated depth-of-field event is triggered, capturing a camera preview frame as both the bitmap data to be blurred and the original-image bitmap data; S3, configuring RenderScript to call the CPU to perform a Gaussian blur on the bitmap data to be blurred using the blurring coefficient, obtaining the blurred bitmap data; S4, replacing the pixels in the focus area of the blurred bitmap data with the pixels in the focus area of the original-image bitmap data; and S5, processing the gradient blurring area of the original-image bitmap data according to the gradient blurring coefficient to obtain gradient-blurred pixels, which replace the pixels in the gradient blurring area of the blurred bitmap data. In this way the depth-of-field shooting effect of a telephoto lens is simulated.

Description

Photographing method simulating depth of field effect and mobile terminal thereof
Technical Field
The invention relates to the technical field of camera shooting, in particular to a shooting method for simulating a depth of field effect and a mobile terminal thereof.
Background
As mobile phone performance continues to improve, phone photography keeps getting better: users can take satisfying photos without buying a professional camera, and many enjoy taking photos and selfies with their phones, so the phone serves as entertainment in addition to communication.
At present, mobile terminals such as mobile phones and tablet computers are mostly equipped with front and rear cameras, so mobile terminal users can easily take photos or selfies. However, a practical problem is that, for cost reasons, most existing low- and mid-range phones and tablets are not equipped with a telephoto lens, and front cameras with a telephoto lens are rarer still.
Therefore, when low- and mid-range mobile devices take photos, they cannot achieve the depth-of-field blurring effect that a telephoto lens provides, which creates a large gap in the shooting experience compared with mobile devices that do have a telephoto lens.
To address this problem, the prior art has proposed a method for achieving a depth-of-field effect based on OpenGL (patent publication No. CN102750726B), which mainly includes: acquiring the original color cache information of an image, including depth cache information and color cache information; computing new color cache information from the depth and color cache information; computing the Poisson-distribution circle diameter of each pixel of the image and deriving image level information from it; and computing the final color cache information from the original color cache information, the new color cache information, the Poisson-distribution circle diameter of the pixels and the image level information, then rendering the image. That scheme computes image scene information at different levels, combining OpenGL's mipmap principle with the Poisson distribution to realize depth of field, thereby improving the efficiency and quality of the result.
However, that prior art has a drawback: the scheme depends on GPU performance for its computation, yet the GPUs of existing low- and mid-range mobile devices are weak. Even with that scheme, the blurred preview may run at only about 10 fps, producing noticeable stutter during preview.
Disclosure of Invention
In view of this, in order to remedy the defects of the prior art, reduce the dependence on GPU performance, and still simulate the depth-of-field shooting effect with a good user experience, the present invention provides a photographing method simulating a depth-of-field effect and a corresponding mobile terminal.
To achieve the above object, a first aspect of the present invention provides a photographing method simulating a depth-of-field effect, used in an Android system terminal, comprising the steps of:
s1, setting a gradient blurring area, a focus area, a gradient blurring coefficient and a blurring coefficient for the bitmap data;
s2, when the simulated depth-of-field event is triggered, capturing a camera preview frame as the bitmap data to be blurred and the original-image bitmap data;
s3, configuring RenderScript to call the CPU to perform Gaussian blur processing on the bitmap data to be blurred using the blurring coefficient, obtaining the blurred bitmap data;
s4, replacing the pixels in the focus area of the blurred bitmap data with the pixels in the focus area of the original-image bitmap data;
and s5, processing the gradient blurring area in the original-image bitmap data according to the gradient blurring coefficient to obtain gradient-blurred pixels, which replace the pixels in the gradient blurring area of the blurred bitmap data.
In a possible preferred embodiment, the calculation for obtaining the gradient-blurred pixels is as follows: let A be the blurred bitmap data, B the original-image bitmap data and S the gradient blurring coefficient, and evaluate the formula F = A*S + B*(1-S) to obtain the gradient-blurred pixel F.
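Written per colour channel, the blend above is a standard linear interpolation between the blurred and the original pixel. As an illustrative restatement in LaTeX notation (the symbols d, r_in and r_out for the pixel's distance to the processing center and for the inner and outer radii are notation introduced here, not taken from the patent text):

S = \frac{d - r_{\text{in}}}{r_{\text{out}} - r_{\text{in}}} \in [0, 1], \qquad F_c = S \cdot A_c + (1 - S) \cdot B_c, \quad c \in \{R, G, B, \alpha\}

so that F coincides with the original pixel B at the inner edge of the ring (S = 0) and with the blurred pixel A at the outer edge (S = 1).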
In a possible preferred embodiment, the step of capturing the camera preview frame comprises:
s21, calling the open method of the Camera class to open the camera and obtain a Camera instance object;
s22, defining a TextureView and using it as the View of the Camera preview display;
s23, instantiating the SurfaceTextureListener class and setting it on the TextureView through the setSurfaceTextureListener method; when the TextureView view becomes available, the onSurfaceTextureAvailable method of the SurfaceTextureListener class is called back to obtain the SurfaceTexture texture object;
s24, passing the acquired texture object as the parameter of the setPreviewTexture method to the Camera object, so that the TextureView successfully displays the Camera preview data;
s25, calling the getBitmap method built into the TextureView class to acquire the real-time preview frame Bitmap data of the Camera.
In a possible preferred embodiment, the step of setting the gradient blurring area and the focus area of the bitmap data includes:
s11, defining an inner area and an outer area, where the outer area overlaps the inner area; the inner area has no blurring effect and is the focus area, the non-overlapping part of the outer and inner areas is the gradient blurring area, and the processing center is the center of both the focus area and the gradient blurring area;
s12, creating a custom class BlurInfo containing the X and Y coordinates of the processing center, the inner-area size and the outer-area size, and passing an instance of it as the parameter info to smoothRender.
In a possible preferred embodiment, the step of configuring RenderScript to call the CPU to perform the Gaussian blur calculation on the bitmap data to be blurred using the blurring coefficient includes:
s31, creating an Allocation input object from the bitmap to be blurred with the createFromBitmap method built into the Allocation class;
s32, creating an Allocation output object with the createTyped method built into the Allocation class, its type being the input type;
s33, calling the RenderScript built-in setRadius method to set the degree of blur, and passing the created Allocation input object as the parameter of the RenderScript built-in setInput method to set the blur input Allocation;
s34, passing the created Allocation output object as the parameter of the RenderScript built-in forEach method, so that the filter is applied to the input Allocation and the result is stored in the output Allocation;
s35, calling the copyTo method built into the Allocation class to copy from the Allocation output object into a Bitmap, obtaining the blurred Bitmap data.
To achieve the above object, a second aspect of the present invention provides a mobile terminal employing the Android system, wherein, when the mobile terminal starts a photographing program, it performs the steps of any of the above photographing methods simulating the depth-of-field effect.
To achieve the above object, a third aspect of the present invention provides a mobile terminal employing the Android system, comprising: a camera, a controller, a processor, a display and a memory, wherein the memory stores a preset gradient blurring area, focus area, gradient blurring coefficient S and blurring coefficient for the bitmap data. When the controller receives the trigger of the simulated depth-of-field event, it causes the processor to call the open method of the Camera class to open the camera and obtain a Camera instance object; to define a TextureView and use it as the View of the Camera preview display; to instantiate the SurfaceTextureListener class and set it on the TextureView through the setSurfaceTextureListener method, and, when the TextureView view becomes available, to call back the onSurfaceTextureAvailable method of the SurfaceTextureListener class to obtain the SurfaceTexture texture object; to pass the acquired texture object as the parameter of the setPreviewTexture method to the Camera object, so that the TextureView successfully displays the Camera preview data; to call the getBitmap method built into the TextureView class to acquire the real-time preview frame Bitmap data of the Camera, capturing a Camera preview frame as the bitmap data to be blurred and the original-image bitmap data B; to invoke a RenderScript command so that the processor performs the Gaussian blur calculation on the bitmap data to be blurred using the blurring coefficient in the memory, obtaining the blurred bitmap data A; and to replace the pixels in the focus area of the blurred bitmap data with the pixels in the focus area of the original-image bitmap data. Meanwhile, the processor evaluates the formula F = A*S + B*(1-S) to process the gradient blurring area in the original-image bitmap data, obtaining the gradient-blurred pixels F and replacing the pixels in the gradient blurring area of the blurred bitmap data, so as to obtain a depth-of-field preview frame that is passed to the display for presentation.
To achieve the above object, a fourth aspect of the present invention provides a mobile terminal employing the Android system, comprising: a camera, a controller, a processor, a display and a memory, wherein the memory stores a preset gradient blurring area, focus area, gradient blurring coefficient S and blurring coefficient for the bitmap data. When the simulated depth-of-field event is triggered, the controller causes the processor to capture a camera preview frame as the bitmap data to be blurred and the original-image bitmap data B, wherein the processor creates an Allocation input object from the bitmap to be blurred by calling the createFromBitmap method built into the Allocation class; creates an Allocation output object with the createTyped method built into the Allocation class, its type being the input type; calls the RenderScript built-in setRadius method to set the degree of blur; passes the created Allocation input object as the parameter of the RenderScript built-in setInput method to set the blur input Allocation; passes the created Allocation output object as the parameter of the RenderScript built-in forEach method, so that the filter is applied to the input Allocation and stored in the output Allocation; calls the copyTo method built into the Allocation class to copy from the Allocation output object into a Bitmap, obtaining the blurred Bitmap data A; and replaces the pixels in the focus area of the blurred bitmap data with the pixels in the focus area of the original-image bitmap data. Meanwhile, the processor evaluates the formula F = A*S + B*(1-S) to process the gradient blurring area in the original-image bitmap data, obtaining the gradient-blurred pixels F and replacing the pixels in the gradient blurring area of the blurred bitmap data, so as to obtain a depth-of-field preview frame that is passed to the display for presentation.
The photographing method for simulating the depth of field effect and the mobile terminal thereof have the following beneficial effects:
the method and the device simulate the depth of field effect of the camera by using the existing Gaussian blur technical means, so that the mobile terminal without the telephoto lens can also have the depth of field shooting effect, and the hardware threshold of the depth of field shooting effect is reduced.
In addition, in view of the defect that the depth of field effect is too much dependent on the GPU in the prior art, the scheme adopts a means of capturing a camera preview frame by TextureView to acquire the bitmap data with the blurring and the original bitmap data for subsequent combination, processes the blurring bitmap data first, and then fuses with the processed original bitmap data twice, so that the scheme can be suitable for processing by only calling a CPU, thereby reducing the dependency on the performance of the GPU, improving the fps value of the processed preview frame, and enabling a part of mobile terminals with weak GPU performance but surplus CPU performance to obtain better use experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram illustrating steps of a photographing method simulating a depth of field effect according to the present invention;
FIG. 2 is a schematic diagram of positions of a focus area and a gradual change blurring area in the photographing method simulating the depth of field effect according to the present invention;
FIG. 3 is a schematic diagram illustrating a step of capturing bitmap data by the photographing method simulating the depth of field effect according to the present invention;
FIG. 4 is a schematic view of the depth of field effect after being processed by the photographing method for simulating the depth of field effect according to the present invention;
fig. 5 is a schematic diagram of a basic hardware structure of a mobile terminal to which the photographing method simulating the depth of field effect is applied.
Detailed Description
The following describes embodiments of the present invention in detail. These examples will assist those skilled in the art in further understanding the invention, but are not intended to limit it in any way. It should be noted that persons of ordinary skill in the art can make variations and modifications without departing from the spirit of the invention, all of which fall within its scope. Furthermore, the embodiments in the present application and the features of those embodiments may be combined with each other provided there is no conflict.
In order to make those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions.
Referring to fig. 1, the photographing method simulating the depth-of-field effect provided by the present invention is applied to an Android system terminal and includes the steps of:
s1, setting a gradient blurring area, a focus area, a gradient blurring coefficient and a blurring coefficient for the bitmap data;
s2, when the simulated depth-of-field event is triggered, capturing a camera preview frame as the bitmap data to be blurred and the original-image bitmap data;
s3, configuring RenderScript to call the CPU to perform Gaussian blur processing on the bitmap data to be blurred using the blurring coefficient, obtaining the blurred bitmap data;
s4, replacing the pixels in the focus area of the blurred bitmap data with the pixels in the focus area of the original-image bitmap data;
and s5, processing the gradient blurring area in the original-image bitmap data according to the gradient blurring coefficient to obtain gradient-blurred pixels, which replace the pixels in the gradient blurring area of the blurred bitmap data.
Specifically, the step of setting the gradient blurring area and the focus area of the bitmap data includes:
s11, defining an inner area and an outer area, where the outer area overlaps the inner area; the inner area has no blurring effect and is the focus area, the non-overlapping part of the outer and inner areas is the gradient blurring area, and the processing center is the center of both the focus area and the gradient blurring area;
s12, creating a custom class BlurInfo containing the X and Y coordinates of the processing center, the inner-area size and the outer-area size, and passing an instance of it as the parameter info to smoothRender.
In the present embodiment, as shown in fig. 2, the circular focus area and the gradient blurring area of a conventional telephoto lens's depth-of-field scene are simulated. For this reason the inner and outer areas are circular, the gradient blurring area is annular, and the size parameters of the inner and outer areas are their radii.
Examples are as follows:
// BlurInfo is a custom class holding the X, Y coordinates of the processing center,
// the inner circle radius R1 (inRadius) and the outer circle radius R2 (outRadius)
BlurInfo info = new BlurInfo();
info.x = (int) (mOnSingleX / aspectScale);  // x coordinate, obtained from the focus position
info.y = (int) (mOnSingleY / aspectScale);  // y coordinate, obtained from the focus position
info.inRadius = (int) (IN_SHARPNESS_RADIUS * scale / aspectScale);   // inner circle radius; there is no blurring inside the inner circle
info.outRadius = (int) (OUT_SHARPNESS_RADIUS * scale / aspectScale); // outer circle radius; the ring between the two radii fades from sharp to blurred
Further, after the depth-of-field simulation parameters have been set, as shown in fig. 3, in a preferred embodiment the camera-preview-frame capturing step of step S2 includes:
s21, calling the open method of the Camera class to open the camera and obtain a Camera instance object;
s22, defining a TextureView and using it as the View of the Camera preview display;
s23, instantiating the SurfaceTextureListener class and setting it on the TextureView through the setSurfaceTextureListener method; when the TextureView view becomes available, the onSurfaceTextureAvailable method of the SurfaceTextureListener class is called back to obtain the SurfaceTexture texture object;
s24, passing the acquired texture object as the parameter of the setPreviewTexture method to the Camera object, so that the TextureView successfully displays the Camera preview data;
s25, calling the getBitmap method built into the TextureView class to capture the real-time preview frame Bitmap data of the Camera.
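For concreteness, a minimal Java sketch of steps S21 to S25 is given below. It assumes the legacy android.hardware.Camera API together with a TextureView, as described above; the class name PreviewCapture and the members mCamera and mTextureView are placeholders introduced here, not identifiers from the patent.

import android.graphics.Bitmap;
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import android.view.TextureView;

public class PreviewCapture implements TextureView.SurfaceTextureListener {
    private Camera mCamera;
    private final TextureView mTextureView;

    public PreviewCapture(TextureView textureView) {
        // S22/S23: use a TextureView as the preview surface and register this listener
        // so we are notified when its SurfaceTexture becomes available.
        mTextureView = textureView;
        mTextureView.setSurfaceTextureListener(this);
    }

    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        try {
            mCamera = Camera.open();             // S21: open the camera, obtain a Camera instance object
            mCamera.setPreviewTexture(surface);  // S24: hand the texture to the camera so TextureView shows the preview
            mCamera.startPreview();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // S25: grab the currently displayed preview frame as Bitmap data.
    public Bitmap captureFrame() {
        return mTextureView.getBitmap();
    }

    @Override public void onSurfaceTextureSizeChanged(SurfaceTexture s, int w, int h) { }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture s) {
        if (mCamera != null) {
            mCamera.stopPreview();
            mCamera.release();
        }
        return true;
    }

    @Override public void onSurfaceTextureUpdated(SurfaceTexture s) { }
}

The Bitmap returned by getBitmap() is a copy of the frame currently shown in the TextureView, which is what the subsequent blurring and fusion steps operate on.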
To facilitate the subsequent fusion of the two bitmaps, the idea of the present invention is to use an instantiated Bitmap object previewBitmap to store the real-time preview frame, and to blur the preview frame through a RenderScript-based blurBitmap() method.
A Bitmap object blurryBitmap is then instantiated to store the data obtained after blurring the preview frame, and the designated areas of blurryBitmap and previewBitmap are processed again by a custom algorithm so that the final blurred data is stored in the blurryBitmap object; the depth-of-field effect of the current photo can then be seen in the camera preview.
In this two-pass processing scheme, in order for the simulated depth-of-field processing to run smoothly on the CPU without depending on GPU performance, it is preferable to implement the Gaussian blur with RenderScript and force it onto the CPU, for example with the command:
adb shell setprop debug.rs.default-CPU-driver 1  // forces the RenderScript Gaussian blur processing to run on the CPU
Specifically, in a possible preferred embodiment, the present application configures RenderScript to call the CPU to perform the Gaussian blur calculation on the bitmap data to be blurred using the blurring coefficient, with the main steps of:
s31, creating an Allocation input object from the bitmap to be blurred with the createFromBitmap method built into the Allocation class;
s32, creating an Allocation output object with the createTyped method built into the Allocation class, its type being the input type;
s33, calling the RenderScript built-in setRadius method to set the degree of blur, and passing the created Allocation input object as the parameter of the RenderScript built-in setInput method to set the blur input Allocation;
s34, passing the created Allocation output object as the parameter of the RenderScript built-in forEach method, so that the filter is applied to the input Allocation and the result is stored in the output Allocation;
s35, calling the copyTo method built into the Allocation class to copy from the Allocation output object into a Bitmap, obtaining the blurred Bitmap data.
The example code for these steps appears in the original patent only as image figures (GDA0004019608570000111 through GDA0004019608570000131).
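Since that listing is not available as text, the following is a hedged Java reconstruction of steps S31 to S35 using the framework RenderScript API. Note that the setRadius, setInput and forEach calls described above are, in the framework API, methods of the ScriptIntrinsicBlur intrinsic; the helper name blurBitmap and the radius parameter are assumptions made for this sketch.

import android.content.Context;
import android.graphics.Bitmap;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicBlur;

public final class BlurHelper {
    // Gaussian-blurs the whole bitmap via RenderScript; the radius (blurring coefficient) must lie in (0, 25].
    public static Bitmap blurBitmap(Context context, Bitmap src, float radius) {
        Bitmap out = Bitmap.createBitmap(src.getWidth(), src.getHeight(), src.getConfig());
        RenderScript rs = RenderScript.create(context);
        try {
            Allocation input = Allocation.createFromBitmap(rs, src);          // S31: input Allocation from the bitmap to be blurred
            Allocation output = Allocation.createTyped(rs, input.getType());  // S32: output Allocation of the same type as the input
            ScriptIntrinsicBlur blur = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
            blur.setRadius(radius);   // S33: set the degree of blur
            blur.setInput(input);     // S33: set the blur input Allocation
            blur.forEach(output);     // S34: apply the filter to the input Allocation, storing into the output Allocation
            output.copyTo(out);       // S35: copy the blurred pixels back into a Bitmap
            return out;
        } finally {
            rs.destroy();
        }
    }
}

As noted above, setting the debug.rs.default-CPU-driver property keeps this processing on the CPU rather than the GPU.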
After the blurred bitmap data has been obtained through the above steps, a reprocessing loop can be entered through a custom algorithm. This processing serves two purposes: pixel replacement in the focus area and pixel replacement in the gradient blurring area.
First, the parameters of the custom algorithm are defined: the blurred bitmap data A, the original-image bitmap data B, the focus area coordinates, the outer circle radius outRadius and the inner circle radius inRadius.
Second, the algorithm uses the blurred bitmap data A as the base image, and while traversing the pixels of the blurred bitmap data A and the original-image bitmap data B it applies three processing rules:
1) inside the inner circle, the pixels in the focus area of the blurred bitmap data are replaced with the pixels in the focus area of the original-image bitmap data B, forming the sharp part of the depth-of-field effect;
2) the area outside the outer circle is not processed, and the pixels of the blurred bitmap data A are kept;
3) the non-overlapping part of the outer and inner circles, i.e. the ring, realizes a pixel-superposition gradient effect that depends on the distance to the circle center, as follows:
3.1) computing the linear gradient blurring coefficient S from the distance of the pixel to the center of the focus area: S = (temp - inRadius) / (float)(outRadius - inRadius), with S ranging over [0,1]; here temp is the distance from the pixel to the circle center, inRadius is the inner circle radius, outRadius is the outer circle radius, and S is the pixel gradient coefficient.
3.2) computing the linear superposition formula F = A*S + B*(1-S) to replace the pixels in the gradient blurring region of the blurred bitmap data, where A is a pixel of the blurred bitmap, B is the corresponding pixel of the original-image bitmap, S is the pixel gradient coefficient of the annular area between the inner and outer radii, and F is the resulting pixel blurred according to the gradient coefficient. (The picture is composed of pixels, each with a corresponding RGBA value; RGBA is a color representation of Red, Green, Blue and Alpha, where the alpha channel is typically used as the opacity parameter.)
Substituting each of the r, g, b and a channels into the formula:
F(r) = A(r)*S + B(r)*(1-S);
F(g) = A(g)*S + B(g)*(1-S);
F(b) = A(b)*S + B(b)*(1-S);
Alpha: F(a) = A(a)*S + B(a)*(1-S);
by way of example:
When a pixel in the ring lies close to the inner circle, such that (temp - inRadius)/(outRadius - inRadius) evaluates to 0.1, S is 0.1.
Take the pixel a1: rgba(100,100,100,100) from the blurred bitmap data A,
and the pixel b1: rgba(150,150,150,150) from the original-image bitmap data B.
By the linear formula F = A*0.1 + B*(1-0.1) = A*0.1 + B*0.9, the gradient-blurred pixel is f1: rgba(145,145,145,145). The rgba values of f1 are closer to b1, indicating that a ring pixel close to the inner circle is closer to the corresponding pixel of the original-image bitmap data B.
When a pixel in the ring lies close to the outer circle, such that the coefficient evaluates to 0.9, S is 0.9.
Take the pixel a2: rgba(50,50,50,50) from the blurred bitmap data A,
and the pixel b2: rgba(120,120,120,120) from the original-image bitmap data B.
By the linear formula F = A*0.9 + B*(1-0.9) = A*0.9 + B*0.1, the gradient-blurred pixel is f2: rgba(57,57,57,57).
The rgba values of f2 are closer to a2, showing that a ring pixel close to the outer circle is closer to the corresponding pixel of the blurred bitmap data A.
The final effect is shown in fig. 4: the closer a pixel is to the circle center, the closer it is to the corresponding pixel of the original-image bitmap data B; the farther from the center, the closer it is to the corresponding pixel of the blurred bitmap data A.
The example code for this custom algorithm appears in the original patent only as image figures (GDA0004019608570000151 through GDA0004019608570000181).
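Since that listing is likewise only available as figures, below is a hedged Java sketch of the two-stage replacement described in rules 1) to 3) above. The class and method names, and the minimal BlurInfo container, are assumptions that follow the field names used earlier in the text.

import android.graphics.Bitmap;

public final class DepthComposer {

    // Minimal container matching the BlurInfo fields used earlier in the text.
    public static class BlurInfo {
        public int x, y, inRadius, outRadius;
    }

    // A = blurred bitmap (base image), B = original bitmap.
    // Inside the inner circle the original pixel is kept, outside the outer circle the blurred
    // pixel is kept, and the ring in between is blended with F = A*S + B*(1-S).
    public static Bitmap composeDepthOfField(Bitmap blurred, Bitmap original, BlurInfo info) {
        int w = blurred.getWidth(), h = blurred.getHeight();
        int[] a = new int[w * h];
        int[] b = new int[w * h];
        blurred.getPixels(a, 0, w, 0, 0, w, h);
        original.getPixels(b, 0, w, 0, 0, w, h);

        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int i = y * w + x;
                float dist = (float) Math.hypot(x - info.x, y - info.y);
                if (dist <= info.inRadius) {
                    a[i] = b[i];                     // rule 1): focus area stays sharp
                } else if (dist < info.outRadius) {
                    float s = (dist - info.inRadius) / (float) (info.outRadius - info.inRadius);
                    a[i] = blend(a[i], b[i], s);     // rule 3): gradient ring
                }
                // rule 2): outside the outer circle the blurred pixel is left untouched
            }
        }
        Bitmap result = Bitmap.createBitmap(w, h, blurred.getConfig());
        result.setPixels(a, 0, w, 0, 0, w, h);
        return result;
    }

    // Per-channel F = A*S + B*(1-S) on an ARGB_8888 pixel.
    private static int blend(int pa, int pb, float s) {
        int aF = Math.round(((pa >>> 24) & 0xFF) * s + ((pb >>> 24) & 0xFF) * (1 - s));
        int rF = Math.round(((pa >> 16) & 0xFF) * s + ((pb >> 16) & 0xFF) * (1 - s));
        int gF = Math.round(((pa >> 8) & 0xFF) * s + ((pb >> 8) & 0xFF) * (1 - s));
        int bF = Math.round((pa & 0xFF) * s + (pb & 0xFF) * (1 - s));
        return (aF << 24) | (rF << 16) | (gF << 8) | bF;
    }
}

With the worked numbers above (S = 0.1, a1 = 100, b1 = 150), blend() yields 145 per channel, matching f1.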
In addition, the photographing method simulating the depth-of-field effect provided by the invention can be adapted to existing mobile terminals running the Android system: when such a mobile terminal starts its photographing program, it executes the steps of the photographing method simulating the depth-of-field effect described in the above embodiment.
Further, as shown in fig. 5, a structure of a mobile terminal to which the photographing method simulating the depth-of-field effect can be adapted is provided. Preferably the mobile terminal runs the Android system and comprises: a camera, a controller, a processor, a display and a memory, wherein the memory stores a preset gradient blurring area, focus area, gradient blurring coefficient S and blurring coefficient for the bitmap data. When the controller receives the trigger of the simulated depth-of-field event, it causes the processor to call the open method of the Camera class to open the camera and obtain a Camera instance object; to define a TextureView and use it as the View of the Camera preview display; to instantiate the SurfaceTextureListener class and set it on the TextureView through the setSurfaceTextureListener method, and, when the TextureView view becomes available, to call back the onSurfaceTextureAvailable method of the SurfaceTextureListener class to obtain the SurfaceTexture texture object; to pass the acquired texture object as the parameter of the setPreviewTexture method to the Camera object, so that the TextureView successfully displays the Camera preview data; to call the getBitmap method built into the TextureView class to acquire the real-time preview frame Bitmap data of the Camera, capturing a Camera preview frame as the bitmap data to be blurred and the original-image bitmap data B; to invoke a RenderScript command so that the processor performs the Gaussian blur calculation on the bitmap data to be blurred using the blurring coefficient in the memory, obtaining the blurred bitmap data A; and to replace the pixels in the focus area of the blurred bitmap data with the pixels in the focus area of the original-image bitmap data. Meanwhile, the processor evaluates the formula F = A*S + B*(1-S) to process the gradient blurring area in the original-image bitmap data, obtaining the gradient-blurred pixels F and replacing the pixels in the gradient blurring area of the blurred bitmap data, so as to obtain a depth-of-field preview frame that is passed to the display for presentation.
Further, in another example, a mobile terminal running the Android system comprises: a camera, a controller, a processor, a display and a memory, wherein the memory stores a preset gradient blurring area, focus area, gradient blurring coefficient S and blurring coefficient for the bitmap data. When the simulated depth-of-field event is triggered, the controller causes the processor to capture a camera preview frame as the bitmap data to be blurred and the original-image bitmap data B, wherein the processor creates an Allocation input object from the bitmap to be blurred by calling the createFromBitmap method built into the Allocation class; creates an Allocation output object with the createTyped method built into the Allocation class, its type being the input type; calls the RenderScript built-in setRadius method to set the degree of blur; passes the created Allocation input object as the parameter of the RenderScript built-in setInput method to set the blur input Allocation; passes the created Allocation output object as the parameter of the RenderScript built-in forEach method, so that the filter is applied to the input Allocation and stored in the output Allocation; calls the copyTo method built into the Allocation class to copy from the Allocation output object into a Bitmap, obtaining the blurred Bitmap data A; and replaces the pixels in the focus area of the blurred bitmap data with the pixels in the focus area of the original-image bitmap data. Meanwhile, the processor evaluates the formula F = A*S + B*(1-S) to process the gradient blurring area in the original-image bitmap data, obtaining the gradient-blurred pixels F and replacing the pixels in the gradient blurring area of the blurred bitmap data, so as to obtain a depth-of-field preview frame that is passed to the display for presentation.
In summary, the photographing method simulating the depth-of-field effect and the mobile terminal provided by the invention simulate the camera's depth-of-field effect using existing Gaussian blur techniques, so that a mobile terminal without a telephoto lens can still offer a depth-of-field shooting effect, lowering the hardware threshold for that effect. In addition, given that the prior-art depth-of-field effect depends too heavily on the GPU, this scheme captures the camera preview frame via TextureView to obtain the bitmap data to be blurred and the original-image bitmap data for subsequent combination, blurs that bitmap first, and then fuses it with the original-image bitmap in a second pass. The whole processing can therefore be done on the CPU alone, reducing the dependence on GPU performance, raising the fps of the processed preview frames, and giving a better experience on mobile terminals whose GPU is weak but whose CPU has capacity to spare.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed; obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to best understand and utilize the invention. The invention is limited only by the claims and their full scope and equivalents, and any modification, equivalent replacement or improvement made within the spirit and principle of the invention should be included in its protection scope.
It will be appreciated by those skilled in the art that, in addition to implementing the system, apparatus and various modules thereof provided by the present invention in the form of pure computer readable program code, the same procedures may be implemented entirely by logically programming method steps such that the system, apparatus and various modules thereof provided by the present invention are implemented in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
In addition, all or part of the steps of the method according to the above embodiments may be implemented by a program instructing the related hardware, the program being stored in a storage medium and including several instructions that enable a single-chip microcomputer, a chip or a processor to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In addition, any combination of the different implementations of the embodiments of the present invention is also possible, and such combinations should likewise be considered as disclosed by the present invention as long as they do not depart from its spirit.

Claims (7)

1. A photographing method simulating a depth-of-field effect, used for an Android system terminal, characterized by comprising the following steps:
s1, setting a gradient blurring area, a focus area, a gradient blurring coefficient and a blurring coefficient for the bitmap data;
s2, when the simulated depth-of-field event is triggered, capturing a camera preview frame as the bitmap data to be blurred and the original-image bitmap data;
s3, configuring RenderScript to call the CPU to perform Gaussian blur processing on the bitmap data to be blurred using the blurring coefficient, obtaining the blurred bitmap data;
s4, replacing the pixels in the focus area of the blurred bitmap data with the pixels in the focus area of the original-image bitmap data;
s5, processing the gradient blurring area in the original-image bitmap data according to the gradient blurring coefficient to obtain gradient-blurred pixels, which replace the pixels in the gradient blurring area of the blurred bitmap data, wherein the calculation for obtaining the gradient-blurred pixels comprises: letting A be the blurred bitmap data, B the original-image bitmap data and S the gradient blurring coefficient, and evaluating the formula F = A*S + B*(1-S) to obtain the gradient-blurred pixel F.
2. The photographing method simulating a depth-of-field effect of claim 1, wherein the step of capturing the camera preview frame comprises:
s21, calling the open method of the Camera class to open the camera and obtain a Camera instance object;
s22, defining a TextureView and using it as the View of the Camera preview display;
s23, instantiating the SurfaceTextureListener class and setting it on the TextureView through the setSurfaceTextureListener method; when the TextureView view becomes available, the onSurfaceTextureAvailable method of the SurfaceTextureListener class is called back to obtain the SurfaceTexture texture object;
s24, passing the acquired texture object as the parameter of the setPreviewTexture method to the Camera object, so that the TextureView successfully displays the Camera preview data;
s25, calling the getBitmap method built into the TextureView class to acquire the real-time preview frame Bitmap data of the Camera.
3. The photographing method simulating a depth-of-field effect of claim 1, wherein the step of setting the gradient blurring area and the focus area of the bitmap data comprises:
s11, defining an inner area and an outer area, where the outer area overlaps the inner area; the inner area has no blurring effect and is the focus area, the non-overlapping part of the outer and inner areas is the gradient blurring area, and the processing center is the center of both the focus area and the gradient blurring area;
s12, creating a custom class BlurInfo containing the X and Y coordinates of the processing center, the inner-area size and the outer-area size, and passing an instance of it as the parameter info to smoothRender.
4. The photographing method simulating a depth-of-field effect of claim 1, wherein the step of configuring RenderScript to call the CPU to perform the Gaussian blur calculation on the bitmap data to be blurred using the blurring coefficient comprises:
s31, creating an Allocation input object from the bitmap to be blurred with the createFromBitmap method built into the Allocation class;
s32, creating an Allocation output object with the createTyped method built into the Allocation class, its type being the input type;
s33, calling the RenderScript built-in setRadius method to set the degree of blur, and passing the created Allocation input object as the parameter of the RenderScript built-in setInput method to set the blur input Allocation;
s34, passing the created Allocation output object as the parameter of the RenderScript built-in forEach method, so that the filter is applied to the input Allocation and the result is stored in the output Allocation;
s35, calling the copyTo method built into the Allocation class to copy from the Allocation output object into a Bitmap, obtaining the blurred Bitmap data.
5. A mobile terminal adopting the Android system, characterized in that, when a photographing program is started, the mobile terminal executes the steps of the photographing method simulating the depth-of-field effect according to any one of claims 1 to 4.
6. A mobile terminal adopting the Android system, comprising: a camera, a controller, a processor, a display and a memory, characterized in that
the memory stores a preset gradient blurring area, focus area, gradient blurring coefficient S and blurring coefficient for the bitmap data;
the controller is configured, upon receiving the trigger of the simulated depth-of-field event, to cause the processor to call the open method of the Camera class to open the camera and obtain a Camera instance object; to define a TextureView and use it as the View of the Camera preview display; to instantiate the SurfaceTextureListener class and set it on the TextureView through the setSurfaceTextureListener method, and, when the TextureView view becomes available, to call back the onSurfaceTextureAvailable method of the SurfaceTextureListener class to obtain the SurfaceTexture texture object; to call the getBitmap method built into the TextureView class to acquire the real-time preview frame Bitmap data of the Camera, capturing a Camera preview frame as the bitmap data to be blurred and the original-image bitmap data B; to invoke a RenderScript command so that the processor uses the blurring coefficient in the memory to perform the Gaussian blur calculation on the bitmap data to be blurred, obtaining the blurred bitmap data A; and to replace the pixels in the focus area of the blurred bitmap data with the pixels in the focus area of the original-image bitmap data;
and meanwhile, the processor evaluates the formula F = A*S + B*(1-S) to process the gradient blurring area in the original-image bitmap data, obtaining the gradient-blurred pixels F and replacing the pixels in the gradient blurring area of the blurred bitmap data, so as to obtain a depth-of-field preview frame that is passed to the display for presentation.
7. A mobile terminal adopting the Android system, comprising: a camera, a controller, a processor, a display and a memory, characterized in that
the memory stores a preset gradient blurring area, focus area, gradient blurring coefficient S and blurring coefficient for the bitmap data;
the controller is configured, when the simulated depth-of-field event is triggered, to cause the processor to capture a camera preview frame as the bitmap data to be blurred and the original-image bitmap data B, wherein the processor creates an Allocation input object from the bitmap to be blurred by calling the createFromBitmap method built into the Allocation class; creates an Allocation output object with the createTyped method built into the Allocation class, its type being the input type; calls the RenderScript built-in setRadius method to set the degree of blur; passes the created Allocation input object as the parameter of the RenderScript built-in setInput method to set the blur input Allocation; passes the created Allocation output object as the parameter of the RenderScript built-in forEach method, so that the filter is applied to the input Allocation and stored in the output Allocation; calls the copyTo method built into the Allocation class to copy from the Allocation output object into a Bitmap, obtaining the blurred Bitmap data A; and replaces the pixels in the focus area of the blurred bitmap data with the pixels in the focus area of the original-image bitmap data;
and meanwhile, the processor evaluates the formula F = A*S + B*(1-S) to process the gradient blurring area in the original-image bitmap data, obtaining the gradient-blurred pixels F and replacing the pixels in the gradient blurring area of the blurred bitmap data, so as to obtain a depth-of-field preview frame that is passed to the display for presentation.
CN202111113346.9A 2021-09-23 2021-09-23 Photographing method simulating depth of field effect and mobile terminal thereof Active CN113766135B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111113346.9A CN113766135B (en) 2021-09-23 2021-09-23 Photographing method simulating depth of field effect and mobile terminal thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111113346.9A CN113766135B (en) 2021-09-23 2021-09-23 Photographing method simulating depth of field effect and mobile terminal thereof

Publications (2)

Publication Number Publication Date
CN113766135A CN113766135A (en) 2021-12-07
CN113766135B true CN113766135B (en) 2023-02-28

Family

ID=78796970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111113346.9A Active CN113766135B (en) 2021-09-23 2021-09-23 Photographing method simulating depth of field effect and mobile terminal thereof

Country Status (1)

Country Link
CN (1) CN113766135B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311481A (en) * 2018-12-12 2020-06-19 Tcl集团股份有限公司 Background blurring method and device, terminal equipment and storage medium
CN112631698A (en) * 2020-12-18 2021-04-09 平安普惠企业管理有限公司 Data display method and device, computer equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108156368A (en) * 2017-12-05 2018-06-12 深圳市金立通信设备有限公司 A kind of image processing method, terminal and computer readable storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311481A (en) * 2018-12-12 2020-06-19 Tcl集团股份有限公司 Background blurring method and device, terminal equipment and storage medium
CN112631698A (en) * 2020-12-18 2021-04-09 平安普惠企业管理有限公司 Data display method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN113766135A (en) 2021-12-07

Similar Documents

Publication Publication Date Title
US9961273B2 (en) Mobile terminal and shooting method thereof
TWI704524B (en) Method and device for image polishing
Wang et al. Seeing dynamic scene in the dark: A high-quality video dataset with mechatronic alignment
CN102946513B (en) A kind of method, device and terminal starting filming apparatus high dynamic range function
CN107948500A (en) Image processing method and device
CN108154465B (en) Image processing method and device
CN111127591B (en) Image hair dyeing processing method, device, terminal and storage medium
CN105991915A (en) Shooting method and apparatus, and terminal
CN110599410B (en) Image processing method, device, terminal and storage medium
EP3679513B1 (en) Techniques for providing virtual light adjustments to image data
WO2020019164A1 (en) Video processing method and device, and computer-readable storage medium
CN110266954A (en) Image processing method, device, storage medium and electronic equipment
KR101294735B1 (en) Image processing method and photographing apparatus using the same
CN104967778A (en) Focusing reminding method and terminal
CN110971841A (en) Image processing method, image processing device, storage medium and electronic equipment
CN112770042A (en) Image processing method and device, computer readable medium, wireless communication terminal
JP2022103020A (en) Photographing method and device, terminal, and storage medium
CN113766135B (en) Photographing method simulating depth of field effect and mobile terminal thereof
KR20230074136A (en) Salience-based capture or image processing
CN109447931A (en) Image processing method and device
CN110706162A (en) Image processing method and device and computer storage medium
WO2021145913A1 (en) Estimating depth based on iris size
CN108898650B (en) Human-shaped material creating method and related device
CN113691737B (en) Video shooting method and device and storage medium
CN110418056A (en) A kind of image processing method, device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20211207

Assignee: SHANGHAI ZHUO YOU NETWORK TECHNOLOGY Co.,Ltd.

Assignor: SHANGHAI DROI TECHNOLOGY Co.,Ltd.

Contract record no.: X2023310000083

Denomination of invention: A Photography Method for Simulating Depth of Field Effect and Its Mobile Terminal

Granted publication date: 20230228

License type: Common License

Record date: 20230608

EE01 Entry into force of recordation of patent licensing contract