CN108024057A - Background blurring processing method, device and equipment - Google Patents
Background blurring processing method, device and equipment
- Publication number: CN108024057A
- Application number: CN201711242157.5A
- Authority: CN (China)
- Prior art keywords
- area
- background area
- depth
- brightness
- background
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/71—Circuitry for evaluating the brightness variation
- H04N23/80—Camera processing pipelines; Components thereof
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
- H04N5/2621—Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
Abstract
The present application proposes a background blurring processing method, device and equipment. The method includes: obtaining depth-of-field information of a master image according to the master image captured by a main camera and a secondary image captured by a secondary camera; determining a foreground area and a background area according to the focusing area of the master image and the depth-of-field information; adjusting the luminance difference between the foreground area and the background area according to a preset strategy; and, when the luminance difference is detected to have been adjusted to meet a preset condition, blurring the background area to generate a target image. The method thereby solves the prior-art technical problem that the foreground area is not prominent enough after the background area is blurred: by adjusting the luminance difference between the background area and the foreground area, the foreground area of the blurred image is made more prominent, improving the blurring effect.
Description
Technical field
The present application relates to the technical field of image processing, and in particular to a background blurring processing method, device and equipment.
Background art
In general, to make the subject of a photograph stand out, the background area of the photograph can be blurred. However, if the foreground brightness is improper when the photograph is taken, the subject may still fail to stand out after blurring. For example, if the subject is positioned between the light source and the camera, the subject will be under-exposed and a backlight effect will appear. In a subject image shot in such a backlit scene, the brightness is very low and details are blurred, so even after the background area is blurred the subject still cannot be made prominent, and the visual effect of the processed image is poor.
Summary
The present application provides a background blurring processing method, device and equipment, to solve the prior-art technical problem that, when the brightness of the foreground area is low, the foreground area is not prominent enough after the background area is blurred.
An embodiment of the present application provides a background blurring processing method, including: obtaining depth-of-field information of a master image according to the master image captured by a main camera and a secondary image captured by a secondary camera; determining a foreground area and a background area according to the focusing area of the master image and the depth-of-field information; adjusting the luminance difference between the foreground area and the background area according to a preset strategy; and, when the luminance difference is detected to have been adjusted to meet a preset condition, blurring the background area to generate a target image.
Another embodiment of the present application provides a background blurring processing device, including: a computing module, configured to obtain depth-of-field information of a master image according to the master image captured by a main camera and a secondary image captured by a secondary camera; a determining module, configured to determine a foreground area and a background area according to the focusing area of the master image and the depth-of-field information; an adjustment module, configured to adjust the luminance difference between the foreground area and the background area according to a preset strategy; and a processing module, configured to blur the background area to generate a target image when the luminance difference is detected to have been adjusted to meet a preset condition.
A further embodiment of the present application provides a computer device, including a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the background blurring processing method described in the above embodiments of the present application.
A still further embodiment of the present application provides a non-transitory computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the background blurring processing method described in the above embodiments of the present application.
The technical solutions provided by the embodiments of the present application may include the following beneficial effects:
Depth-of-field information of a master image is obtained according to the master image captured by a main camera and a secondary image captured by a secondary camera; a foreground area and a background area are determined according to the focusing area of the master image and the depth-of-field information; the luminance difference between the foreground area and the background area is adjusted according to a preset strategy; and, when the luminance difference is detected to have been adjusted to meet a preset condition, the background area is blurred to generate a target image. This solves the prior-art technical problem that the foreground area is not prominent enough after the background area is blurred: by adjusting the luminance difference between the background area and the foreground area, the blurred foreground area is made more prominent and the blurring effect is improved.
Brief description of the drawings
The above-mentioned and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow chart of a background blurring processing method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of the principle of triangulation ranging according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the viewing-angle coverage of dual cameras according to an embodiment of the present application;
Fig. 4 is a schematic diagram of the process of calculating the depth of field with dual cameras according to an embodiment of the present application;
Fig. 5(a) is a schematic diagram of a portrait photograph according to an embodiment of the present application;
Fig. 5(b) is a schematic diagram of the image after background blurring according to the prior art;
Fig. 5(c) is a schematic diagram of the image after background blurring according to an embodiment of the present application;
Fig. 6 is a flow chart of a background blurring processing method according to another embodiment of the present application;
Fig. 7 is a flow chart of a background blurring processing method according to yet another embodiment of the present application;
Fig. 8 is a structural diagram of a background blurring processing device according to an embodiment of the present application;
Fig. 9 is a structural diagram of a background blurring processing device according to another embodiment of the present application; and
Fig. 10 is a schematic diagram of an image processing circuit according to another embodiment of the present application.
Detailed description
The embodiments of the present application are described in detail below, examples of which are shown in the accompanying drawings, where identical or similar reference numbers denote identical or similar elements, or elements with identical or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary and are intended to explain the present application; they shall not be construed as limiting the present application.
The background blurring processing method, device and equipment of the embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a flow chart of a background blurring processing method according to an embodiment of the present application. As shown in Fig. 1, the method includes:
Step 101: obtaining depth-of-field information of a master image according to the master image captured by a main camera and a secondary image captured by a secondary camera.
Here, after focusing on the subject of the shot, the spatial depth range before and behind the focus area within which the human eye perceives sharp imaging is the depth of field.
It should be noted that in practice the human eye resolves depth mainly through binocular vision, which is the same principle by which dual cameras resolve the depth of field; it is realized mainly by the triangulation ranging principle shown in Fig. 2. Fig. 2 depicts, in real space, an imaged object, the positions O_R and O_T of the two cameras, and the focal planes of the two cameras. The distance from the focal planes to the plane of the two cameras is f; the object is imaged on the focal plane of each of the two cameras, yielding two captured images.
P and P' are the positions of the same object in the two captured images respectively. The distance of point P from the left border of its image is X_R, and the distance of point P' from the left border of its image is X_T. O_R and O_T are the two cameras, which lie in the same plane at a distance B from one another.
Based on the triangulation principle, the distance Z between the object in Fig. 2 and the plane of the two cameras satisfies the following relation (reconstructed here by similar triangles from the surrounding definitions, the original figure being unavailable):
B / Z = (B − (X_R − X_T)) / (Z − f)
From this it can be derived that Z = B·f / d, where d = X_R − X_T is the difference between the positions of the same object in the two captured images. Since B and f are fixed values, the distance Z of the object can be determined from d.
It should be emphasized that the above formula assumes two identical, parallel cameras, but many problems arise in actual use; for example, some part of the scene may not be covered by both cameras in the figure above, so the depth cannot be computed there. For this reason, the two cameras used for depth calculation may actually have different FOV designs: the main camera is used to capture the master image of the actual scene, while the secondary image captured by the secondary camera is mainly used as a reference for calculating the depth of field. Based on the above analysis, the FOV of the secondary camera is generally larger than that of the main camera; even so, as shown in Fig. 3, a nearby object may still be imaged differently by the two cameras. The depth-of-field range is therefore calculated with an adjusted relation (the adjusted formula itself is not reproduced in the available text), and according to the adjusted formula the depth-of-field range of the master image, among other quantities, can be calculated.
Of course, besides triangulation ranging, the depth of field of the master image can also be calculated in other ways. For example, when the main camera and the secondary camera photograph the same scene, the distance of an object in the scene from the cameras bears a fixed proportional relationship to quantities such as the displacement difference and posture difference between the images formed by the main and secondary cameras; therefore, in one embodiment of the present application, the above distance Z can be obtained from this proportional relationship.
For example, as shown in Fig. 4, a map of the differences between the master image captured by the main camera and the secondary image captured by the secondary camera is calculated, represented here by a disparity map. This map represents the displacement differences between identical points in the two images; since in triangulation ranging the displacement difference stands in a fixed (inverse) proportion to Z, a disparity map is often used directly as a depth map.
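The disparity-to-depth conversion described above can be sketched as follows. This is a minimal illustration of the relation Z = B·f/d; the baseline, focal length and sample disparity values are hypothetical and not taken from the patent.

```python
import numpy as np

def disparity_to_depth(disparity, baseline_mm, focal_px):
    """Convert a disparity map to a depth map via Z = B * f / d.

    Pixels with zero disparity (no stereo match) are mapped to infinity.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0
    depth[valid] = baseline_mm * focal_px / disparity[valid]
    return depth

# A 2x2 toy disparity map: larger disparity means a closer object.
d = np.array([[20.0, 10.0],
              [5.0, 0.0]])
Z = disparity_to_depth(d, baseline_mm=50.0, focal_px=1000.0)
# Top-left pixel: 50 * 1000 / 20 = 2500 mm
```

Since B and f are constants of the camera rig, the conversion is a per-pixel division, which is why the disparity map can stand in for the depth map directly.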
Step 102: determining a foreground area and a background area according to the focusing area of the master image and the depth-of-field information.
It will be appreciated that the range of sharp imaging in front of the focusing area is the foreground depth of field, and the region corresponding to the foreground depth of field is the foreground area; the range of sharp imaging behind the focusing area is the background depth of field, and the region corresponding to the background depth of field is the background area. The foreground area contains the image of the photographed subject.
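The partition in step 102 can be sketched as a threshold on the depth map around the focus depth. This is a hedged illustration only: the near/far depth-of-field limits are hypothetical parameters, since the patent does not specify how they are derived.

```python
import numpy as np

def split_foreground_background(depth, focus_depth, dof_near, dof_far):
    """Partition pixels into foreground and background masks.

    Pixels between the near and far depth-of-field limits around the
    focus depth (the range that images sharply) count as foreground;
    everything beyond the far limit counts as background.
    """
    depth = np.asarray(depth, dtype=np.float64)
    foreground = (depth >= focus_depth - dof_near) & (depth <= focus_depth + dof_far)
    background = depth > focus_depth + dof_far
    return foreground, background

depth = np.array([[1.0, 2.0, 8.0],
                  [2.5, 3.0, 9.0]])
fg, bg = split_foreground_background(depth, focus_depth=2.0, dof_near=1.0, dof_far=1.5)
# fg selects depths in [1.0, 3.5]; bg selects depths beyond 3.5
```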
Step 103: adjusting the luminance difference between the foreground area and the background area according to a preset strategy.
Step 104: when the luminance difference is detected to have been adjusted to meet a preset condition, blurring the background area to generate a target image.
It will be appreciated that, to make the photographed subject prominent, the background area outside the focus area where the subject lies is generally blurred. However, in some scenes — for example, when the brightness of the foreground area is low because the scene is backlit, the light source illuminating the subject is blocked, or the photograph is taken at night — the subject may still fail to stand out even after the background area is blurred, resulting in a poor visual effect.
For example, as shown in Fig. 5(a), when the photographed subject is a person and the scene is backlit, the person cannot be made prominent even after the background area is blurred, as shown in Fig. 5(b), because the brightness of the person's image is low; the visual effect is poor.
As analyzed above, there are two main reasons why the foreground area is not prominent enough after the background area is blurred. On the one hand, the background area may be brighter than the foreground area, so that the low-brightness foreground area does not stand out after the background is blurred; on the other hand, the brightness of both the background area and the foreground area may be low, making it difficult for the subject image in the foreground area to stand out in an overall dark image after the background is blurred.
Therefore, to solve the above technical problem, in the embodiments of the present application the luminance difference between the foreground area and the background area is adjusted according to a preset strategy so that the foreground area stands out relative to the background area; when the luminance difference is detected to have been adjusted to meet a preset condition, the background area is blurred to generate a target image in which the foreground area is prominent and the visual effect is good.
Of course, in one embodiment of the present application, to reduce the processing load on the system, the luminance-difference adjustment strategy may be applied only in scenes where the foreground brightness is low. That is, the brightness of the foreground area is detected and compared with a preset first threshold used to judge whether the foreground brightness is low; this threshold may be calibrated from a large amount of experimental data or set according to the user's personal preference. If the brightness of the foreground area is detected to be lower than the preset first threshold, the luminance difference between the foreground area and the background area is adjusted according to the preset strategy.
In a practical implementation, the farther the background is from the photographed subject, the less relevant it may be to what the user currently wants to photograph. For example, when the user poses with a nearby flower, distant scenery may be largely irrelevant to the current shot. Therefore, to further meet the user's shooting needs, the blurring may also be performed according to the depth-of-field information of the foreground area and the background area.
Specifically, in one embodiment of the present application, first depth-of-field information of the foreground area and second depth-of-field information of the background area are calculated according to the focusing area of the master image and the depth-of-field information, and a baseline value of the blurring degree is obtained from the first and second depth-of-field information. For example, the smaller the first depth-of-field information and the larger the second, the more the current scene is unrelated to the background, and the larger the obtained baseline value of the blurring degree; conversely, the larger the first depth-of-field information and the smaller the second, the more the current scene may be related to the background, and the smaller the obtained baseline value. Gaussian blurring is then applied to the background area according to the baseline value of the blurring degree to generate the target image, so that the background area is blurred in accordance with the user's current shooting needs.
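The idea of scaling the blurring degree by the foreground/background depth relationship, then blurring only the background, can be sketched as follows. This is a simplified stand-in: a box blur replaces the patent's Gaussian blur, and the mapping from depths to blur radius is an invented illustration, not the patent's formula.

```python
import numpy as np

def blur_baseline(fg_depth, bg_depth, max_radius=5):
    """Map the foreground/background depth gap to a blur radius.

    A small foreground depth with a large background depth suggests the
    background is unrelated to the shot, so the radius grows with the
    ratio bg_depth / fg_depth (illustrative mapping only).
    """
    ratio = bg_depth / max(fg_depth, 1e-6)
    return int(min(max_radius, max(1, round(ratio))))

def blur_background(image, bg_mask, radius):
    """Box-blur the image, then paste blurred pixels only where the
    background mask is set; foreground pixels stay untouched."""
    img = np.asarray(image, dtype=np.float64)
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    blurred = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k
    out = img.copy()
    out[bg_mask] = blurred[bg_mask]
    return out

image = np.zeros((4, 4))
image[1, 1] = 100.0                  # a bright foreground pixel
bg = np.zeros((4, 4), dtype=bool)
bg[:, 2:] = True                     # right half is background
result = blur_background(image, bg, blur_baseline(fg_depth=1.0, bg_depth=3.0))
```

Masking after blurring keeps the foreground pixel values exactly as captured, which matches the patent's intent of blurring only the background area.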
The above preset condition includes the luminance difference between the foreground area and the background area to be reached under the current preset strategy. Of course, this preset condition may be calibrated from a large amount of experimental data or set according to the user's personal preference, and once it is met, the foreground area stands out after the background area is blurred.
It should be noted that the preset strategy differs according to the application scenario, as illustrated below.
As one example:
The preset strategy includes: when the brightness of the background area is not high, raising the brightness of the foreground area. That is, in the embodiments of the present application, a low background brightness indicates that the brightness of the whole image is low; even if the background brightness were reduced further to shrink the luminance difference between the background and foreground areas, the foreground area might still not be prominent enough in the blurred image because its own brightness is low. In such a scenario, the preset strategy is therefore to raise the brightness of the foreground area.
Specifically, in this example, as shown in Fig. 6, step 103 includes:
Step 201: detecting the brightness of the background area.
It should be noted that different implementations may be used to detect the brightness of the background area depending on the application scenario. As one possible implementation, the gray values of the image in the background area are calculated, and the brightness of the background area is determined from them.
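The gray-value brightness detection mentioned above can be sketched as follows. This is a minimal illustration; the RGB-to-gray weights used are the common BT.601 luma coefficients, which the patent does not itself specify.

```python
import numpy as np

def region_brightness(rgb, mask):
    """Mean gray value of the pixels selected by `mask`.

    The gray value per pixel uses the BT.601 luma weights
    0.299 R + 0.587 G + 0.114 B (an assumption; any gray
    conversion would serve the patent's purpose).
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    return float(gray[mask].mean())

# A 1x2 image: one pure-white pixel, one black pixel.
img = np.array([[[255.0, 255.0, 255.0], [0.0, 0.0, 0.0]]])
mask = np.array([[True, False]])
b = region_brightness(img, mask)   # white pixel only -> 255.0
```

The same function serves both strategies: with the background mask it implements steps 201/301, and with the foreground mask it implements the first-threshold check on foreground brightness.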
Step 202: if it is determined that the brightness of the background area is lower than or equal to a preset second threshold and higher than the first threshold, where the second threshold is higher than the first threshold, calculating a first adjustment brightness for the foreground area according to a preset algorithm.
The second threshold may be calibrated from a large amount of experimental data or set according to the user's personal preference, and is used to judge whether the background area is bright.
Step 203: raising the brightness of the foreground area according to the first adjustment brightness.
Specifically, if it is determined that the brightness of the background area is lower than or equal to the preset second threshold, the first adjustment brightness of the foreground area is calculated according to the preset algorithm, and the brightness of the foreground area is raised accordingly. After the brightness of the foreground area is raised to the first adjustment brightness, the luminance difference between the background area and the foreground area decreases, and the foreground area is more prominent in the target image generated after the background area is blurred.
It should be emphasized that the preset algorithm for calculating the first adjustment brightness of the foreground area may be implemented differently in different application scenarios. As one possible implementation, the first depth-of-field information of the foreground area and the second depth-of-field information of the background area are calculated according to the focusing area of the master image and the depth-of-field information, and then the depth-of-field ratio of the first to the second depth-of-field information is calculated. The larger this ratio, the closer the foreground is to the background; the smaller the ratio, the farther apart they are. The first adjustment brightness of the foreground area is then obtained by calculating over the brightness of the foreground area and the depth-of-field ratio according to the preset algorithm: the larger the depth-of-field ratio (the closer the foreground is to the background), the larger the calculated first adjustment brightness, so that the boundary between the nearby foreground area and the background area becomes distinct; conversely, the smaller the depth-of-field ratio (the farther the foreground is from the background), the smaller the calculated first adjustment brightness, which is already sufficient to make the boundary between the foreground area and the background area distinct.
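The behaviour described here — a larger foreground/background depth ratio yielding a larger brightness boost — can be sketched as follows. The linear mapping and the `gain` parameter are invented for illustration; the patent does not disclose the actual preset algorithm.

```python
def first_adjustment_brightness(fg_brightness, depth_ratio,
                                gain=80.0, max_brightness=255.0):
    """Raise the foreground brightness in proportion to the
    foreground/background depth ratio (illustrative formula only).

    depth_ratio = fg_depth / bg_depth; a ratio near 1 means foreground
    and background are close together, so a larger boost is applied to
    keep their boundary distinct. The result is clamped to the valid
    brightness range.
    """
    boost = gain * depth_ratio
    return min(max_brightness, fg_brightness + boost)

close = first_adjustment_brightness(100.0, depth_ratio=0.9)  # nearby background
far = first_adjustment_brightness(100.0, depth_ratio=0.2)    # distant background
```

The clamp at `max_brightness` is a practical guard the patent text implies but does not state: raising brightness beyond the sensor range would clip foreground detail instead of highlighting it.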
As another example:
The preset strategy includes: when the brightness of the background area is high, reducing the brightness of the background area. That is, in the embodiments of the present application, a high background brightness indicates that the brightness of the whole image is high, and the reason the foreground area is not prominent enough after blurring is probably that the background area is too bright. Reducing the brightness of the background area shrinks the luminance difference between the background and foreground areas, so that the foreground area can stand out in the blurred image. In such a scenario, the preset strategy is therefore to reduce the brightness of the background area.
Specifically, in this example, as shown in Fig. 7, step 103 includes:
Step 301: detecting the brightness of the background area.
Step 302: if it is determined that the brightness of the background area is higher than the second threshold, calculating a second adjustment brightness for the background area according to a preset algorithm.
Step 303: reducing the brightness of the background area according to the second adjustment brightness.
Specifically, if it is determined that the brightness of the background area is higher than the preset second threshold, the second adjustment brightness of the background area is calculated according to the preset algorithm, and the brightness of the background area is reduced accordingly. After the brightness of the background area is reduced to the second adjustment brightness, the luminance difference between the background area and the foreground area decreases, and the foreground area is more prominent in the target image generated after the background area is blurred.
It should be emphasized that the preset algorithm for calculating the second adjustment brightness of the background area may be implemented differently in different application scenarios. As one possible implementation, the brightness of the background area is adjusted according to the depth-of-field information and the brightness of the background area. In this implementation, to guarantee the blurring effect, the larger the depth-of-field information of the background area (i.e. the less related it is to the foreground currently being photographed), the larger the degree of brightness adjustment applied to the background area; likewise, the higher the brightness of the background area, the more easily it prevents the foreground area from standing out, and the larger the degree of brightness adjustment applied to it. Accordingly, a preset first adjustment factor corresponding to the depth-of-field information of the background area may be looked up, where, as analyzed above, the larger the depth-of-field information of the background area, the larger the value of the first adjustment factor; similarly, the higher the brightness of the background area, the larger the value of the corresponding second adjustment factor. The second adjustment brightness of the background area is then obtained by calculating over the brightness of the background area, the first adjustment factor and the second adjustment factor according to the preset algorithm.
The preset algorithm differs in different application scenarios. For example, it may include calculating the second adjustment brightness according to weighted values corresponding respectively to the background brightness, the first adjustment factor and the second adjustment factor, where the weights may be calibrated from experimental data.
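The weighted combination described above can be sketched as follows. The weights and the sample factor values are hypothetical placeholders; the patent only states that the weights are calibrated from experimental data.

```python
def second_adjustment_brightness(bg_brightness, first_factor, second_factor,
                                 weights=(0.6, 0.25, 0.15)):
    """Weighted combination of the background brightness and the two
    adjustment factors, yielding the target (reduced) brightness.

    first_factor grows with the background depth, second_factor grows
    with the background brightness; larger factors push the target
    brightness lower (illustrative formula only). The result is
    clamped at zero.
    """
    w_b, w_1, w_2 = weights
    reduction = w_1 * first_factor + w_2 * second_factor
    return max(0.0, w_b * bg_brightness - reduction)

# A bright, distant background is pulled down harder than a dim, nearby one.
bright_far = second_adjustment_brightness(220.0, first_factor=100.0, second_factor=80.0)
dim_near = second_adjustment_brightness(150.0, first_factor=20.0, second_factor=30.0)
```

Making the reduction additive in the two factors keeps the two stated monotonicity requirements independent: deeper backgrounds and brighter backgrounds each increase the adjustment on their own.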
Thus, the background blurring processing method of the embodiments of the present application does not blur the background area directly: the luminance difference between the background area and the foreground area is first adjusted so that the foreground area receives preliminary highlighting, and only then is the background area blurred, so that the foreground area of the blurred target image stands out.
For example, continuing with the person as the photographed subject: when the brightness of the foreground area is detected to be lower than the first threshold, the luminance difference between the foreground area and the background area is adjusted according to the relevant adjustment strategy before the background area is blurred, so that, as shown in Fig. 5(c), the foreground area in the target image generated by blurring the background area is more prominent.
In conclusion the background blurring processing method of the embodiment of the present application, the master image that is obtained according to main camera and
The sub-picture that secondary camera obtains, calculates the depth of view information of master image, is determined according to the focusing area of master image and depth of view information
Foreground area and background area, if detecting, the brightness of foreground area is less than default first threshold, according to preset strategy tune
Luminance difference between whole foreground area and background area, when detecting that luminance difference is adjusted to meet preset condition, to background area
Carry out virtualization processing generation target image.Thus, solve in the prior art, when the brightness of foreground area is relatively low, cause right
Not prominent enough the technical problem of foreground area after background area is blurred, by adjusting the bright of background area and foreground area
Degree is poor so that the foreground area after virtualization is more prominent, improves virtualization treatment effect.
To realize the above embodiments, the present application further proposes a background blurring processing device. Fig. 8 is a structural diagram of a background blurring processing device according to an embodiment of the present application. As shown in Fig. 8, the background blurring processing device includes: a computing module 100, a determining module 200, an adjustment module 300 and a processing module 400.
The computing module 100 is configured to obtain the depth-of-field information of the master image according to the master image captured by the main camera and the secondary image captured by the secondary camera.
The determining module 200 is configured to determine the foreground area and the background area according to the focusing area of the master image and the depth-of-field information.
The adjustment module 300 is configured to adjust the luminance difference between the foreground area and the background area according to the preset strategy.
In one embodiment of the present application, as shown in Fig. 9, which is a structural diagram of a background blurring processing device according to another embodiment of the present application, the adjustment module 300 includes a detection unit 310, a computing unit 320 and an adjustment unit 330.
The detection unit 310 is configured to detect the brightness of the background area.
The computing unit 320 is configured to calculate the first adjustment brightness of the foreground area according to the preset algorithm when it is determined that the brightness of the background area is lower than or equal to the preset second threshold and higher than the first threshold, where the second threshold is higher than the first threshold.
The adjustment unit 330 is configured to raise the brightness of the foreground area according to the first adjustment brightness.
The processing module 400 is configured to blur the background area to generate the target image when the luminance difference is detected to have been adjusted to meet the preset condition.
It should be noted that the foregoing description of the method embodiments also applies to the device of the embodiments of the present application; the realization principle is similar and is not repeated here.
The division of the above background blurring processing device into modules is for illustration only; in other embodiments, the background blurring processing device may be divided into different modules as required, to complete all or part of the functions of the above background blurring processing device.
In conclusion the background blurring processing unit of the embodiment of the present application, the master image that is obtained according to main camera and
The sub-picture that secondary camera obtains, calculates the depth of view information of master image, is determined according to the focusing area of master image and depth of view information
Foreground area and background area, the luminance difference between foreground area and background area is adjusted according to preset strategy, bright when detecting
Degree difference is adjusted to meet preset condition, and virtualization processing generation target image is carried out to background area.Thus, solves existing skill
In art, when the brightness of foreground area is relatively low, the technology that foreground area is not prominent enough after being blurred to background area is caused to be asked
Topic, by adjusting background area and the luminance difference of foreground area so that the foreground area after virtualization is more prominent, improves void
Change treatment effect.
In order to realize the above embodiments, the application also proposes a computer device. The computer device includes an image processing circuit, which may be implemented with hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Figure 10 is a schematic diagram of the image processing circuit in one embodiment. As shown in Figure 10, for ease of illustration, only the aspects of the image processing technique relevant to the embodiments of the application are shown.
As shown in Figure 10, the image processing circuit includes an ISP processor 640 and a control logic device 650. Image data captured by the imaging device 610 is first processed by the ISP processor 640, which analyzes the image data to capture image statistics usable for determining one or more control parameters of the imaging device 610. The imaging device 610 may include a camera with one or more lenses 612 and an image sensor 614. The image sensor 614 may include a color filter array (such as a Bayer filter); it can obtain the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that can be processed by the ISP processor 640. The sensor 620 supplies the raw image data to the ISP processor 640 according to the sensor 620 interface type. The sensor 620 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of such interfaces.
The ISP processor 640 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits. The ISP processor 640 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be carried out at the same or different bit-depth precision.
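As a hedged illustration of handling raw pixels at differing bit depths, the sketch below rescales 8-, 10-, 12- or 14-bit raw values to a common working precision. The linear rescaling scheme is an assumption for illustration, not the patent's disclosed implementation.

```python
import numpy as np

def normalize_bit_depth(raw, bit_depth, target_bits=8):
    """Rescale raw sensor values of a given bit depth to a target
    precision. A simple linear rescale is assumed for illustration."""
    if bit_depth not in (8, 10, 12, 14):
        raise ValueError("unsupported raw bit depth")
    max_in = (1 << bit_depth) - 1          # e.g. 4095 for 12-bit samples
    max_out = (1 << target_bits) - 1       # e.g. 255 for 8-bit output
    scaled = raw.astype(np.float64) * max_out / max_in
    return scaled.round().astype(np.uint16)

raw12 = np.array([0, 2048, 4095])          # 12-bit samples
print(normalize_bit_depth(raw12, 12))      # full-scale maps to 255
```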
The ISP processor 640 may also receive pixel data from the image memory 630. For example, raw pixel data may be sent from the sensor 620 interface to the image memory 630, where it is made available to the ISP processor 640 for processing. The image memory 630 may be part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
When receiving raw image data from the sensor 620 interface or from the image memory 630, the ISP processor 640 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 630 for further processing before being displayed. The ISP processor 640 receives the processed data from the image memory 630 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to the display 670 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 640 may also be sent to the image memory 630, and the display 670 may read image data from the image memory 630. In one embodiment, the image memory 630 may be configured to implement one or more frame buffers. The output of the ISP processor 640 may also be sent to an encoder/decoder 660 to encode/decode the image data; the encoded image data may be saved and decompressed before being displayed on the display 670. The encoder/decoder 660 may be implemented by a CPU, a GPU or a coprocessor.
The statistics determined by the ISP processor 640 may be sent to the control logic device 650. For example, the statistics may include image sensor 614 statistics such as automatic exposure, automatic white balance, automatic focusing, flicker detection, black level compensation, and lens 612 shading correction. The control logic device 650 may include a processor and/or microcontroller that executes one or more routines (such as firmware); the routines may determine the control parameters of the imaging device 610 and the ISP control parameters according to the received statistics. For example, the control parameters may include sensor 620 control parameters (such as gain and integration time for exposure control), camera flash control parameters, lens 612 control parameters (such as focusing or zoom focal length), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices used for automatic white balance and color adjustment (for example, during RGB processing), and lens 612 shading correction parameters.
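To illustrate how a control-logic routine might turn an automatic-exposure statistic into a sensor control parameter, the sketch below derives a sensor gain from a mean-luminance statistic. The target level and the gain clamping range are hypothetical; the patent does not specify the firmware routine.

```python
def exposure_gain_from_stats(mean_luma, target_luma=118.0,
                             min_gain=1.0, max_gain=8.0):
    """Compute a sensor gain that pushes the mean luminance statistic
    toward a target level; all constants are illustrative assumptions."""
    if mean_luma <= 0:
        return max_gain          # fully dark frame: apply maximum gain
    gain = target_luma / mean_luma
    return max(min_gain, min(max_gain, gain))

print(exposure_gain_from_stats(59.0))   # dim scene -> gain 2.0
print(exposure_gain_from_stats(236.0))  # bright scene -> clamped to 1.0
```

In a real pipeline such a gain would be split between analog gain and integration time, but that split is outside the scope of this sketch.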
The following are the steps of realizing the background blurring processing method with the image processing technique of Figure 10:
obtaining the depth-of-field information of the master image according to the master image captured by the main camera and the sub-image captured by the secondary camera;
determining a foreground area and a background area according to the focusing area of the master image and the depth-of-field information;
adjusting the luminance difference between the foreground area and the background area according to a preset strategy;
when it is detected that the luminance difference has been adjusted to meet a preset condition, blurring the background area to generate a target image.
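The steps above can be sketched end to end as follows. The depth map is assumed to come from an upstream dual-camera stereo stage (here a plain argument), the foreground/background split is a simple depth threshold around the focus point, and the blur is approximated by repeated neighbor averaging to keep the sketch dependency-free; all of these are illustrative stand-ins for the preset strategy and algorithms the patent leaves unspecified.

```python
import numpy as np

def approx_gaussian_blur(img, iters=4):
    """Repeated neighbor averaging approximates a Gaussian blur;
    used here so the sketch needs only numpy."""
    out = img.astype(np.float64)
    for _ in range(iters):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
                   + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out

def blur_background(image, depth_map, focus_xy, depth_margin=0.5):
    """Sketch of the claimed pipeline: split foreground/background by
    depth around the focus point, blur the background, keep the
    foreground sharp. depth_margin is an illustrative assumption."""
    fx, fy = focus_xy
    focus_depth = float(depth_map[fy, fx])
    # foreground = pixels within a depth margin of the focused depth
    foreground_mask = np.abs(depth_map - focus_depth) <= depth_margin
    blurred = approx_gaussian_blur(image)
    return np.where(foreground_mask, image.astype(np.float64), blurred)

# toy scene: left half near the camera (depth 1.0), right half far (5.0)
depth = np.where(np.arange(8)[None, :] < 4, 1.0, 5.0) * np.ones((8, 1))
img = np.arange(64, dtype=np.float64).reshape(8, 8)
out = blur_background(img, depth, focus_xy=(1, 1))
# left half (near the focus depth) is returned unblurred
```

The luminance-difference adjustment of the third step would run on the foreground/background masks before this blur; it is omitted here to keep the sketch short.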
In order to realize the above embodiments, the application also proposes a non-transitory computer-readable storage medium. When the instructions in the storage medium are executed by a processor, the background blurring processing method described in the above embodiments can be performed.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples" and the like means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict each other, those skilled in the art may combine different embodiments or examples described in this specification and the features of different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance, or as implicitly indicating the number of the technical features referred to. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the application, "multiple" means at least two, such as two or three, unless specifically defined otherwise.
Any process or method described in a flowchart or otherwise described herein may be understood as representing a module, fragment or portion of code including one or more executable instructions for realizing the steps of a custom logic function or process, and the scope of the preferred embodiments of the application includes other realizations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as should be understood by those skilled in the field to which the embodiments of the application belong.
The logic and/or steps represented in a flowchart or otherwise described herein, for example an ordered list of executable instructions considered to realize logic functions, may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, device or apparatus (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, device or apparatus). For the purposes of this specification, a "computer-readable medium" may be any device that can contain, store, communicate, propagate or transmit a program for use by, or in combination with, an instruction execution system, device or apparatus. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection portion (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or, if necessary, processing it in another suitable way, and then be stored in a computer memory.
It should be understood that each part of the application may be realized with hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be realized with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if realized with hardware, as in another embodiment, any one of the following techniques well known in the art, or a combination thereof, may be used: a discrete logic circuit with logic gate circuits for realizing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those skilled in the art will appreciate that all or part of the steps of the above embodiment methods may be completed by instructing the relevant hardware through a program, which may be stored in a computer-readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the application may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated module may be realized in the form of hardware or in the form of a software functional module. If the integrated module is realized in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc or the like. Although the embodiments of the application have been shown and described above, it should be understood that the above embodiments are exemplary and shall not be construed as limiting the application; those of ordinary skill in the art may change, modify, replace and vary the above embodiments within the scope of the application.
Claims (10)
- 1. A background blurring processing method, characterized by comprising: obtaining depth-of-field information of a master image according to the master image captured by a main camera and a sub-image captured by a secondary camera; determining a foreground area and a background area according to a focusing area of the master image and the depth-of-field information; adjusting a luminance difference between the foreground area and the background area according to a preset strategy; and when it is detected that the luminance difference has been adjusted to meet a preset condition, blurring the background area to generate a target image.
- 2. The method of claim 1, characterized in that adjusting the luminance difference between the foreground area and the background area according to the preset strategy comprises: detecting the brightness of the foreground area; and if it is detected that the brightness of the foreground area is less than a preset first threshold, adjusting the luminance difference between the foreground area and the background area according to the preset strategy.
- 3. The method of claim 1, characterized in that adjusting the luminance difference between the foreground area and the background area according to the preset strategy comprises: detecting the brightness of the background area; if it is determined that the brightness of the background area is less than or equal to a preset second threshold and greater than a first threshold, wherein the second threshold is greater than the first threshold, calculating a first adjustment brightness of the foreground area according to a preset algorithm; and raising the brightness of the foreground area according to the first adjustment brightness.
- 4. The method of claim 3, characterized in that calculating the first adjustment brightness of the foreground area according to the preset algorithm comprises: calculating first depth-of-field information of the foreground area and second depth-of-field information of the background area according to the focusing area of the master image and the depth-of-field information; calculating a depth-of-field ratio of the first depth-of-field information to the second depth-of-field information; and calculating the first adjustment brightness of the foreground area from the brightness of the foreground area and the depth-of-field ratio according to the preset algorithm.
- 5. The method of claim 3, characterized in that, after detecting the brightness of the background area, the method further comprises: if it is determined that the brightness of the background area is greater than the second threshold, calculating a second adjustment brightness of the background area according to a preset algorithm; and reducing the brightness of the background area according to the second adjustment brightness.
- 6. The method of claim 5, characterized in that calculating the second adjustment brightness of the background area according to the preset algorithm comprises: querying a preset first adjustment factor corresponding to the depth-of-field information of the background area and a second adjustment factor corresponding to the brightness of the background area; and calculating the second adjustment brightness of the background area from the brightness of the background area, the first adjustment factor and the second adjustment factor according to the preset algorithm.
- 7. The method of claim 1, characterized in that blurring the background area to generate the target image comprises: calculating first depth-of-field information of the foreground area and second depth-of-field information of the background area according to the focusing area of the master image and the depth-of-field information; obtaining a base value of the blurring degree according to the first depth-of-field information and the second depth-of-field information; and performing Gaussian blur processing on the background area according to the base value of the blurring degree to generate the target image.
- 8. A background blurring processing apparatus, characterized by comprising: a computing module configured to obtain depth-of-field information of a master image according to the master image captured by a main camera and a sub-image captured by a secondary camera; a determining module configured to determine a foreground area and a background area according to a focusing area of the master image and the depth-of-field information; an adjustment module configured to adjust a luminance difference between the foreground area and the background area according to a preset strategy; and a processing module configured to blur the background area to generate a target image when it is detected that the luminance difference has been adjusted to meet a preset condition.
- 9. A computer device, characterized by comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein, when the processor executes the program, the background blurring processing method of any one of claims 1-7 is realized.
- 10. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the background blurring processing method of any one of claims 1-7 is realized.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711242157.5A CN108024057B (en) | 2017-11-30 | 2017-11-30 | Background blurring processing method, device and equipment |
PCT/CN2018/116230 WO2019105254A1 (en) | 2017-11-30 | 2018-11-19 | Background blur processing method, apparatus and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711242157.5A CN108024057B (en) | 2017-11-30 | 2017-11-30 | Background blurring processing method, device and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108024057A true CN108024057A (en) | 2018-05-11 |
CN108024057B CN108024057B (en) | 2020-01-10 |
Family
ID=62077779
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711242157.5A Active CN108024057B (en) | 2017-11-30 | 2017-11-30 | Background blurring processing method, device and equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108024057B (en) |
WO (1) | WO2019105254A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019105254A1 (en) * | 2017-11-30 | 2019-06-06 | Oppo广东移动通信有限公司 | Background blur processing method, apparatus and device |
CN110198421A (en) * | 2019-06-17 | 2019-09-03 | Oppo广东移动通信有限公司 | Method for processing video frequency and Related product |
CN110677621A (en) * | 2019-09-03 | 2020-01-10 | RealMe重庆移动通信有限公司 | Camera calling method and device, storage medium and electronic equipment |
US11461910B2 (en) | 2018-08-08 | 2022-10-04 | Samsung Electronics Co., Ltd. | Electronic device for blurring image obtained by combining plural images based on depth information and method for driving the electronic device |
CN116668804A (en) * | 2023-06-14 | 2023-08-29 | 山东恒辉软件有限公司 | Video image analysis processing method, device and storage medium |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114286004A (en) * | 2021-12-28 | 2022-04-05 | 维沃移动通信有限公司 | Focusing method, shooting device, electronic equipment and medium |
CN114859581B (en) * | 2022-03-24 | 2023-10-24 | 京东方科技集团股份有限公司 | Backlight testing device and backlight testing method |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000196956A (en) * | 1998-12-28 | 2000-07-14 | Hitachi Software Eng Co Ltd | Device and method for synthesizing picture |
US6157733A (en) * | 1997-04-18 | 2000-12-05 | At&T Corp. | Integration of monocular cues to improve depth perception |
CN1926851A (en) * | 2004-01-16 | 2007-03-07 | 索尼电脑娱乐公司 | Method and apparatus for optimizing capture device settings through depth information |
CN105025229A (en) * | 2015-07-30 | 2015-11-04 | 广东欧珀移动通信有限公司 | Method for adjusting photo brightness and relevant device |
CN106060423A (en) * | 2016-06-02 | 2016-10-26 | 广东欧珀移动通信有限公司 | Bokeh photograph generation method and device, and mobile terminal |
CN106993112A (en) * | 2017-03-09 | 2017-07-28 | 广东欧珀移动通信有限公司 | Background-blurring method and device and electronic installation based on the depth of field |
CN107357500A (en) * | 2017-06-21 | 2017-11-17 | 努比亚技术有限公司 | A kind of picture-adjusting method, terminal and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10233919A (en) * | 1997-02-21 | 1998-09-02 | Fuji Photo Film Co Ltd | Image processor |
TWI524755B (en) * | 2008-03-05 | 2016-03-01 | 半導體能源研究所股份有限公司 | Image processing method, image processing system, and computer program |
CN105574866A (en) * | 2015-12-15 | 2016-05-11 | 努比亚技术有限公司 | Image processing method and apparatus |
CN108024057B (en) * | 2017-11-30 | 2020-01-10 | Oppo广东移动通信有限公司 | Background blurring processing method, device and equipment |
- 2017-11-30: CN application CN201711242157.5A filed; patent CN108024057B (en), status Active
- 2018-11-19: PCT application PCT/CN2018/116230 filed; WO2019105254A1 (en), Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6157733A (en) * | 1997-04-18 | 2000-12-05 | At&T Corp. | Integration of monocular cues to improve depth perception |
JP2000196956A (en) * | 1998-12-28 | 2000-07-14 | Hitachi Software Eng Co Ltd | Device and method for synthesizing picture |
CN1926851A (en) * | 2004-01-16 | 2007-03-07 | 索尼电脑娱乐公司 | Method and apparatus for optimizing capture device settings through depth information |
CN105025229A (en) * | 2015-07-30 | 2015-11-04 | 广东欧珀移动通信有限公司 | Method for adjusting photo brightness and relevant device |
CN106060423A (en) * | 2016-06-02 | 2016-10-26 | 广东欧珀移动通信有限公司 | Bokeh photograph generation method and device, and mobile terminal |
CN106993112A (en) * | 2017-03-09 | 2017-07-28 | 广东欧珀移动通信有限公司 | Background-blurring method and device and electronic installation based on the depth of field |
CN107357500A (en) * | 2017-06-21 | 2017-11-17 | 努比亚技术有限公司 | A kind of picture-adjusting method, terminal and storage medium |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019105254A1 (en) * | 2017-11-30 | 2019-06-06 | Oppo广东移动通信有限公司 | Background blur processing method, apparatus and device |
US11461910B2 (en) | 2018-08-08 | 2022-10-04 | Samsung Electronics Co., Ltd. | Electronic device for blurring image obtained by combining plural images based on depth information and method for driving the electronic device |
CN110198421A (en) * | 2019-06-17 | 2019-09-03 | Oppo广东移动通信有限公司 | Method for processing video frequency and Related product |
CN110198421B (en) * | 2019-06-17 | 2021-08-10 | Oppo广东移动通信有限公司 | Video processing method and related product |
CN110677621A (en) * | 2019-09-03 | 2020-01-10 | RealMe重庆移动通信有限公司 | Camera calling method and device, storage medium and electronic equipment |
CN110677621B (en) * | 2019-09-03 | 2021-04-13 | RealMe重庆移动通信有限公司 | Camera calling method and device, storage medium and electronic equipment |
CN116668804A (en) * | 2023-06-14 | 2023-08-29 | 山东恒辉软件有限公司 | Video image analysis processing method, device and storage medium |
CN116668804B (en) * | 2023-06-14 | 2023-12-22 | 山东恒辉软件有限公司 | Video image analysis processing method, device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108024057B (en) | 2020-01-10 |
WO2019105254A1 (en) | 2019-06-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107959778B (en) | Imaging method and device based on dual camera | |
US10997696B2 (en) | Image processing method, apparatus and device | |
JP7015374B2 (en) | Methods for image processing using dual cameras and mobile terminals | |
CN107977940A (en) | background blurring processing method, device and equipment | |
CN108024057A (en) | Background blurring processing method, device and equipment | |
CN107835372A (en) | Imaging method, device, mobile terminal and storage medium based on dual camera | |
EP3480783B1 (en) | Image-processing method, apparatus and device | |
CN108712608B (en) | Terminal equipment shooting method and device | |
CN108024056B (en) | Imaging method and device based on dual camera | |
CN108024054B (en) | Image processing method, device, equipment and storage medium | |
CN108154514B (en) | Image processing method, device and equipment | |
CN107948520A (en) | Image processing method and device | |
CN107493432A (en) | Image processing method, device, mobile terminal and computer-readable recording medium | |
CN107846556A (en) | imaging method, device, mobile terminal and storage medium | |
CN108024058B (en) | Image blurs processing method, device, mobile terminal and storage medium | |
CN108111749A (en) | Image processing method and device | |
CN107800971B (en) | Auto-exposure control processing method, device and the equipment of pan-shot | |
CN108156369A (en) | Image processing method and device | |
CN108053438A (en) | Depth of field acquisition methods, device and equipment | |
CN107872631A (en) | Image capturing method, device and mobile terminal based on dual camera | |
CN107396079A (en) | White balance adjustment method and device | |
CN108900785A (en) | Exposal control method, device and electronic equipment | |
CN108052883B (en) | User photographing method, device and equipment | |
CN110290325A (en) | Image processing method, device, storage medium and electronic equipment | |
CN107580205B (en) | White balance adjustment method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
CB02 | Change of applicant information |
Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong Applicant after: OPPO Guangdong Mobile Communications Co., Ltd. Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong Applicant before: Guangdong OPPO Mobile Communications Co., Ltd. |
|
GR01 | Patent grant | ||
GR01 | Patent grant |