CN108053363A - Background blurring processing method, device and equipment - Google Patents

Background blurring processing method, device and equipment

Info

Publication number
CN108053363A
CN108053363A (publication) · CN201711242134.4A (application)
Authority
CN
China
Prior art keywords
blurring
depth of field
subregions
main image
depth-of-field information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711242134.4A
Other languages
Chinese (zh)
Inventor
欧阳丹 (Ouyang Dan)
谭国辉 (Tan Guohui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority application: CN201711242134.4A
Publication: CN108053363A
PCT application: PCT/CN2018/116475 (published as WO2019105261A1)

Classifications

    • G06T3/04
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

This application discloses a background blurring processing method, device, and equipment. The method includes: obtaining a main image captured by a main camera and a secondary image captured by a secondary camera, and obtaining depth-of-field information of the main image from the main image and the secondary image; determining the original blurring intensity of different subregions in the background area of the main image according to the depth-of-field information and the focus area; determining the distribution orientation of the different subregions according to the display mode of the main image, and determining the blurring weight of the different subregions according to a weight-setting policy corresponding to that distribution orientation; determining the target blurring intensity of the different subregions from their original blurring intensities and corresponding blurring weights; and blurring the background area of the main image according to the target blurring intensities of the different subregions. A more natural blurring effect, closer to true optical defocus, is thereby achieved.

Description

Background blurring processing method, device and equipment
Technical field
This application relates to the technical field of image processing, and more particularly to a background blurring processing method, device, and equipment.
Background technology
With the progress of manufacturing technologies for terminal devices such as smartphones, many current terminal devices adopt a dual-camera system: depth-of-field information is computed from the two images simultaneously captured by the dual cameras, and that information is then used to perform blurring. When blurring, users usually expect the effect to approach true optical defocus, i.e. the greater the depth, the stronger the blur. However, the computational accuracy of depth is currently limited, and the depth beyond a certain distance may not be calculated accurately, so blurring of farther regions of the image cannot be realized according to depth alone, resulting in a poor visual blurring effect.
Content of the application
The application provides a background blurring processing method, device, and equipment, to solve the technical problem in the prior art that, because the depth beyond a certain distance cannot be accurately calculated, blurring of farther regions of the image cannot be realized according to depth, resulting in a poor visual blurring effect.
An embodiment of the present application provides a background blurring processing method, including: obtaining a main image captured by a main camera and a secondary image captured by a secondary camera, and obtaining depth-of-field information of the main image from the main image and the secondary image; determining the original blurring intensity of different subregions in the background area of the main image according to the depth-of-field information and the focus area; determining the distribution orientation of the different subregions according to the display mode of the main image, and determining the blurring weight of the different subregions according to a weight-setting policy corresponding to the distribution orientation; determining the target blurring intensity of the different subregions from their original blurring intensities and corresponding blurring weights; and blurring the background area of the main image according to the target blurring intensities of the different subregions.
Another embodiment of the application provides a background blurring processing device, including: a first acquisition module, configured to obtain a main image captured by a main camera and a secondary image captured by a secondary camera, and to obtain depth-of-field information of the main image from the main image and the secondary image; a first determining module, configured to determine the original blurring intensity of different subregions in the background area of the main image according to the depth-of-field information and the focus area; a second determining module, configured to determine the distribution orientation of the different subregions according to the display mode of the main image, and to determine the blurring weight of the different subregions according to a weight-setting policy corresponding to the distribution orientation; a third determining module, configured to determine the target blurring intensity of the different subregions from their original blurring intensities and corresponding blurring weights; and a processing module, configured to blur the background area of the main image according to the target blurring intensities of the different subregions.
A further embodiment of the application provides a computer device, including a memory and a processor. The memory stores computer-readable instructions which, when executed by the processor, cause the processor to perform the background blurring processing method described in the above embodiments of the application.
Yet another embodiment of the application provides a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the background blurring processing method described in the above embodiments of the application is implemented.
The technical solutions provided by the embodiments of the present application may include the following beneficial effects:
A main image captured by the main camera and a secondary image captured by the secondary camera are obtained, and the depth-of-field information of the main image is obtained from the two images. The original blurring intensity of different subregions in the background area of the main image is determined according to the depth-of-field information and the focus area. The distribution orientation of the different subregions is determined according to the display mode of the main image, and their blurring weights are determined according to a weight-setting policy corresponding to that orientation. The target blurring intensity of each subregion is then determined from its original blurring intensity and corresponding blurring weight, and finally the background area of the main image is blurred according to the target blurring intensities of the different subregions. A more natural blurring effect, closer to true optical defocus, is thereby achieved.
Description of the drawings
The above and/or additional aspects and advantages of the application will become apparent and readily understood from the following description of the embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a flowchart of a background blurring processing method according to an embodiment of the application;
Fig. 2 is a schematic diagram of the triangulation ranging principle according to an embodiment of the application;
Fig. 3 is a schematic diagram of the viewing-angle coverage of a dual camera according to an embodiment of the application;
Fig. 4 is a schematic diagram of depth acquisition by a dual camera according to an embodiment of the application;
Fig. 5(a) is a schematic diagram of the division of multiple subregions in the background area of the main image according to an embodiment of the application;
Fig. 5(b) is a schematic diagram of the division of multiple subregions in the background area of the main image according to another embodiment of the application;
Fig. 6 is a flowchart of a background blurring processing method according to a specific embodiment of the application;
Fig. 7 is a structural diagram of a background blurring processing device according to an embodiment of the application;
Fig. 8 is a structural diagram of a background blurring processing device according to another embodiment of the application;
Fig. 9 is a structural diagram of a background blurring processing device according to yet another embodiment of the application; and
Fig. 10 is a schematic diagram of an image processing circuit according to another embodiment of the application.
Detailed description of the embodiments
Embodiments of the application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout represent the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary; they are intended to explain the application and should not be construed as limiting it.
The background blurring processing method, device, and terminal device of the embodiments of the application are described in detail below with reference to the accompanying drawings.
The execution subject of the background blurring processing method and device of the embodiments of the application may be a terminal device, which may be a hardware device having dual cameras, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device. The wearable device may be a smart bracelet, a smartwatch, smart glasses, or the like.
Based on the above analysis, it can be understood that, in the prior art, since blurring is performed according to depth-of-field information, when that information cannot be obtained accurately due to limitations of precision, the blurring intensity of the corresponding region cannot be realized, which affects the blurring effect of the image.
To solve the above technical problem, this application provides a background blurring processing method that controls the regions corresponding to different depths to be blurred with different intensities based on the positional relationship between blurring intensity and those regions. As a result, even if the depth-of-field information cannot be accurately obtained, the regions corresponding to different depths can still be blurred with appropriate intensities.
Fig. 1 is a flowchart of a background blurring processing method according to an embodiment of the application. As shown in Fig. 1, the method includes the following steps:
Step 101: obtain a main image captured by the main camera and a secondary image captured by the secondary camera, and obtain the depth-of-field information of the main image from the main image and the secondary image.
Here, after focusing on the subject being photographed, the depth of field is the range of spatial depth, in front of and behind the focus area where the subject is located, within which the human eye perceives the image as sharp.
It should be noted that, in practice, the human eye distinguishes depth mainly through binocular vision, which is the same principle by which dual cameras distinguish depth; it is realized mainly by the triangulation ranging principle shown in Fig. 2. Fig. 2 depicts, in real space, the imaged object, the positions OR and OT of the two cameras, and the focal planes of the two cameras. The focal planes are at distance f from the plane where the two cameras are located, and each camera images at its focal plane position, thereby producing the two captured images.
P and P′ are the positions of the same object in the two different captured images. The distance from point P to the left border of its captured image is XR, and the distance from point P′ to the left border of its captured image is XT. OR and OT are the two cameras, which lie in the same plane at distance B from each other.
Based on the triangulation ranging principle, the distance Z between the object in Fig. 2 and the plane where the two cameras are located satisfies the following relation:
(B − d) / B = (Z − f) / Z, where d = XR − XT is the difference between the positions of the same object in the two captured images (the disparity).
From this it follows that Z = B · f / d. Since B and f are constants, the distance Z of the object can be determined from d.
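Under the ideal parallel-camera model above, Z = B · f / d can be computed directly from a point correspondence. A minimal sketch in Python (the function name, units, and sample values are illustrative, not taken from the patent):

```python
def depth_from_disparity(x_r: float, x_t: float, baseline: float, focal: float) -> float:
    """Triangulation ranging: Z = B * f / d.

    x_r, x_t : horizontal positions (pixels) of the same point in the two images
    baseline : distance B between the two camera centres
    focal    : focal length f, expressed in the same pixel units as x_r/x_t
    """
    d = abs(x_r - x_t)  # disparity: position difference of the same point
    if d == 0:
        raise ValueError("zero disparity: point is at infinity")
    return baseline * focal / d

# Example: 10 px disparity, B = 50, f = 1000 -> Z = 50 * 1000 / 10 = 5000
print(depth_from_disparity(110.0, 100.0, 50.0, 1000.0))
```

Note that depth grows as disparity shrinks, which is why distant objects (small d) are where depth precision degrades, as the background section observes.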
It should be emphasized that the above formula is based on two identical, parallel cameras, but actual use raises many practical issues. For example, some parts of the scene cannot be seen by both of the cameras in the figure above when computing the depth, so the actual FOV designs of the two cameras used for depth calculation may differ. The main camera captures the main image of the actual scene, while the secondary image obtained by the secondary camera is mainly used as a reference for calculating the depth. Based on the above analysis, the FOV of the secondary camera is generally larger than that of the main camera, yet even so, as shown in Fig. 3, a nearby object may still be imaged differently by the two cameras. The relation for calculating the field depth is therefore adjusted accordingly, and the field depth of the main image, among other quantities, can be calculated according to the adjusted formula.
Of course, besides triangulation ranging, the depth of the main image can also be calculated in other ways. For example, when the main camera and the secondary camera photograph the same scene, the distance from an object in the scene to the cameras is proportional to quantities such as the displacement difference and the posture difference between the images formed by the main and secondary cameras. Therefore, in one embodiment of the application, the above distance Z can be obtained from such a proportional relationship.
For example, as shown in Fig. 4, from the main image obtained by the main camera and the secondary image obtained by the secondary camera, a map of the differences between corresponding points is calculated, represented here by a disparity map, which represents the displacement difference of identical points in the two images. Since in triangulation this displacement difference is inversely proportional to Z, a disparity map is often used directly as a depth map.
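Because Z = B · f / d holds point by point, a disparity map converts to a depth map element-wise. A hedged sketch (the uniform B and f and the epsilon guard are illustrative assumptions, not details from the patent):

```python
def disparity_to_depth(disparity, baseline, focal, eps=1e-6):
    """Convert a disparity map (list of rows of pixel disparities) to a depth
    map by applying Z = B * f / d per pixel. eps guards against division by
    zero for pixels whose disparity could not be measured."""
    return [[baseline * focal / max(d, eps) for d in row]
            for row in disparity]

# A 1x2 disparity map: larger disparity -> smaller depth
print(disparity_to_depth([[10.0, 20.0]], 50.0, 1000.0))
```

This also illustrates why a disparity map can stand in for a depth map: the two are related by a fixed monotonic transform once B and f are known.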
Step 102: determine the original blurring intensity of the different subregions in the background area of the main image according to the depth-of-field information and the focus area.
It can be understood that the range imaged in front of the focus area is the foreground depth of field, whose corresponding region is the foreground area, while the range of sharp imaging behind the focus area is the background depth of field, whose corresponding region is the background area. The background area of the main image is determined according to the depth-of-field information and the focus area, and the original blurring intensity of the different subregions in the background area is then preliminarily determined; this original blurring intensity serves as the adjustment baseline for the subsequent blurring of each subregion of the background area.
As one possible implementation, the background area is divided into different subregions along the horizontal direction (parallel to the focus area).
The first depth-of-field information of the foreground area and the second depth-of-field information of the background area in the main image are determined according to the depth-of-field information and the focus area; the average depth-of-field information of the different subregions in the background area of the main image is obtained from the second depth-of-field information; and the original blurring intensity of the different subregions is then obtained from the first depth-of-field information and the average depth-of-field information of the different subregions.
Specifically, in this example, the background area is divided into multiple different subregions along the horizontal direction, where the size and shape of each subregion can be adjusted as needed, and the sizes and shapes of the multiple subregions may be the same or different. Then, for each subregion, the depth at the position closest to the focus area and the depth at the position farthest from the focus area are obtained and averaged; the averaged depth serves as the average depth-of-field information of the corresponding subregion, from which the original blurring intensity is determined: the higher the average depth-of-field information, the greater the original blurring intensity.
The size of the depth interval of each subregion can be adjusted as needed, and the depth intervals of the multiple subregions may be the same size or different sizes. As shown in Fig. 5(a), the multiple subregions may be divided into subregions of different sizes according to the shape of the background area; alternatively, for further convenience of blurring, as shown in Fig. 5(b), the multiple subregions may be horizontal strips of the same width as the image, distributed from the bottom of the image to the top.
It should be emphasized that if, due to limitations of depth computational accuracy, the depth at the position closest to the focus area or the depth at the position farthest from the focus area cannot be obtained for a subregion, the average depth of the subregion can be calculated, for example, from the probability distribution of depths within the depth-of-field information obtained in that subregion.
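The nearest/farthest-depth averaging of step 102 can be sketched as follows. The linear mapping from average depth to a strength in [0, max_strength] is an illustrative choice made for this sketch; the patent only requires that deeper subregions receive a larger original intensity:

```python
def original_blur_strength(subregion_depths, max_strength=1.0):
    """For each subregion (a list of per-pixel depths), take the mean of its
    nearest and farthest depth as the subregion's average depth, then map
    average depth linearly to an original blurring strength: deeper
    subregions blur more."""
    averages = [(min(depths) + max(depths)) / 2 for depths in subregion_depths]
    peak = max(averages)  # deepest subregion gets max_strength
    return [max_strength * avg / peak for avg in averages]

# Two subregions: average depths 2.0 and 4.0 -> strengths 0.5 and 1.0
print(original_blur_strength([[1.0, 3.0], [2.0, 6.0]]))
```

When the extreme depths are unreliable, the averaging step would instead draw on the depth probability distribution, as the paragraph above describes.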
Step 103: determine the distribution orientation of the different subregions according to the display mode of the main image, and determine the blurring weight of the different subregions according to a weight-setting policy corresponding to that distribution orientation.
Here, the above display mode includes the display direction of the photographed subject, its proportion relative to the whole image, and the like.
It can be understood that, in a first photographing scene, the bottom of the image is the ground, the top of the image is the sky, and the photographed subject stands on the ground; thus, the closer a subregion of the main image background area is to the top of the image, the less related it is to the current subject, and the closer it is to the bottom of the image, the more related it is. Alternatively, in a second photographing scene, the right side of the image is a portrait and the left side is a beach or the sea; then, if the current shooting mode is portrait mode, the closer to the left side of the image, the less related to the current subject, and the closer to the right side, the more related. Or, in a third photographing scene, i.e. under the portrait shooting mode, the portrait occupies a large proportion of the whole image and the background area lies only at the four corners of the image, far from the portrait region and thus unrelated to the current subject.
Thus, in embodiments of the application, the distribution orientation of the different subregions can be determined according to the display mode of the main image, and the blurring weights of the different subregions are then determined according to the weight-setting policy corresponding to that distribution orientation. For example, in the first scene above, subregions closer to the top are unrelated to the subject in the main image and subregions closer to the bottom are related to it, so the blurring weights of the different subregions can be determined according to a top-to-bottom decreasing weight policy. Here, "top" and "bottom" do not merely mean the conventional physical up and down; as analyzed above, they are related to the display mode of the main image. If the display mode of the main image is vertical, "top" and "bottom" denote the physical up and down; if the display mode is horizontal, they refer to the left and right directions.
For example, when the display mode of the main image is vertical, the proportion of the photographed subject relative to the whole image is small, and "top" and "bottom" denote the physical up and down, the blurring weight of subregions near the top of the picture is larger and that of subregions near the bottom is smaller. This makes the final blurring effect more natural and closer to true optical defocus.
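The top-to-bottom decreasing weight policy can be sketched as a simple linear ramp over the ordered subregions. The endpoint weights and the linear form are assumptions made for illustration; the patent only specifies that weights decrease from top to bottom:

```python
def top_down_weights(num_subregions, top_weight=1.0, bottom_weight=0.5):
    """Blurring weights that decrease linearly from the topmost subregion
    (index 0) to the bottommost one, matching the top-to-bottom decreasing
    policy for scenes where the subject stands on the ground."""
    if num_subregions == 1:
        return [top_weight]
    step = (top_weight - bottom_weight) / (num_subregions - 1)
    return [top_weight - i * step for i in range(num_subregions)]

# Three horizontal strips, top to bottom
print(top_down_weights(3))
```

For a horizontal display mode the same ramp would simply be applied left to right instead, per the orientation remapping described above.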
In practice, depending on the application scenario, different implementations can be used to infer the display mode of the main image. For example, the orientation of the terminal device can be known from the gyroscope information of the terminal device, from which the display mode of the main image at the time of shooting can be inferred.
In implementations of the application, there are many ways to determine the blurring weights of the different subregions according to the weight-setting policy corresponding to the distribution orientation. For example, the relation between distribution orientation and blurring effect can be obtained in advance from many experiments; a correspondence between distribution orientation and weight-setting policy is then established and stored according to that relation, so that after the display mode of the main image is determined, the correspondence can be queried to determine the applicable weight-setting policy.
For clearer explanation, the determination of the blurring weights of the different subregions according to a top-to-bottom decreasing weight policy is taken as an example below:
Mode one:
In this mode, a linear or nonlinear distribution curve mapping the positioning coordinates of a subregion to its blurring weight is preset, where the positioning coordinates may be the coordinates of any point in the central area of the subregion. The positioning coordinates of the different subregions are then obtained, and the blurring weights of the different subregions are determined by querying the preset linear or nonlinear weight distribution curve with those coordinates.
Mode two:
The correspondence between the display direction of an image and the blurring weights of subregions at different orientations is learned from a large amount of experimental data, and a deep neural network model is built from the learning result. The display direction of the main image and the orientations of the different subregions are then input into the model, and the blurring weight of each corresponding subregion is obtained from the model's output.
Step 104: determine the target blurring intensity of the different subregions from their original blurring intensities and corresponding blurring weights.
Step 105: blur the background area of the main image according to the target blurring intensities of the different subregions.
Specifically, after the original blurring intensities and corresponding blurring weights of the different subregions are determined, the target blurring intensities of the different subregions are determined from them, and the background area of the main image is blurred according to these target blurring intensities. Thus, different subregions are blurred with different intensities according to the relation between the orientation of each subregion and the display direction of the main image; without needing accurate depth-of-field information for every subregion, each subregion can still be blurred with an appropriate intensity, making the blurring result more natural.
That is, the original blurring intensities of the different subregions, determined only from the preliminarily obtained depth-of-field information, may deviate from the blurring intensities corresponding to true optical defocus because the depth-of-field information is inaccurate. In embodiments of the application, the blurring weights of the different subregions are further determined from the display direction of the main image and the orientations of the different subregions, and the original blurring intensity is corrected by these weights to determine the target blurring intensity, making the blurring result more natural.
It should be noted that, depending on the application scenario, the way in which the target blurring intensities of the different subregions are determined from their original blurring intensities and corresponding blurring weights differs, including but not limited to the following ways:
Mode one:
The target blurring intensity of each subregion is determined by taking the product of its original blurring intensity and its corresponding blurring weight. For example, if two subregions with original blurring intensities a and b have blurring weights of 80% and 90% respectively, then a*80% and b*90% can be used as the target blurring intensities of the two subregions.
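Mode one is a straight element-wise product, which can be sketched directly (reproducing the a*80%, b*90% example from the text):

```python
def target_blur_strengths(original, weights):
    """Mode one: the target blurring intensity of each subregion is the
    product of its original intensity and its blurring weight."""
    return [o * w for o, w in zip(original, weights)]

# Original intensities a=10, b=20 with weights 80% and 90%
print(target_blur_strengths([10.0, 20.0], [0.8, 0.9]))
```

Mode two would additionally scale the weights of adjacent subregions whose original intensities differ greatly, to smooth the transition between them.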
Mode two:
To make the blurring effect of adjacent subregions transition more naturally, the blurring weights of adjacent subregions whose original blurring intensities differ greatly are brought closer to some degree. For example, for adjacent subregions 1 and 2, if the original blurring intensity of subregion 1 is larger than that of subregion 2, the blurring weight of subregion 1 is multiplied by a coefficient less than 1 (the larger the original blurring intensity of subregion 1 compared with that of subregion 2, the smaller the coefficient), and the product of the original blurring intensity, blurring weight, and coefficient of subregion 1 is used as its target blurring intensity, while the product of the original blurring intensity and blurring weight of subregion 2 is used as its target blurring intensity. Alternatively, the blurring weight of subregion 2 is multiplied by a coefficient greater than 1 (the larger the original blurring intensity of subregion 1 compared with that of subregion 2, the larger the coefficient), and the product of the original blurring intensity, blurring weight, and coefficient of subregion 2 is used as its target blurring intensity, while the product of the original blurring intensity and blurring weight of subregion 1 is used as its target blurring intensity.
Further, the ways of blurring the background area of the main image according to the target blurring intensities of the different subregions include, but are not limited to, the following processing:
As one possible implementation:
The blurring coefficient of each pixel in the background area is determined from the target blurring intensities of the different subregions and the depth-of-field information of each pixel in the different subregions; Gaussian blur is then applied to the background area according to the blurring coefficient of each pixel to generate the blurred photo. In this way, the greater the background depth, the stronger the degree of blurring.
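The per-pixel step can be sketched in two parts: derive a blurring coefficient per pixel from the subregion's target intensity and the pixel's depth, then blend each sharp pixel with a pre-blurred (e.g. Gaussian-filtered) version according to that coefficient. Both the depth-normalized product and the linear blend are illustrative assumptions; the patent specifies only that the coefficient comes from the target intensity and per-pixel depth:

```python
def blur_coefficients(target_strength, pixel_depths):
    """Per-pixel blurring coefficient for one subregion: the subregion's
    target intensity scaled by each pixel's depth normalized to the farthest
    depth, so deeper pixels within a subregion blur more."""
    farthest = max(pixel_depths)
    return [target_strength * d / farthest for d in pixel_depths]

def blend(sharp, blurred, coeffs):
    """Mix each sharp pixel value with its Gaussian-blurred value according
    to the per-pixel coefficient in [0, 1]; coefficient 1 is fully blurred."""
    return [s * (1 - c) + b * c for s, b, c in zip(sharp, blurred, coeffs)]

coeffs = blur_coefficients(0.8, [5.0, 10.0])
print(coeffs)                                  # deeper pixel gets larger coefficient
print(blend([100.0, 100.0], [50.0, 50.0], coeffs))
```

In a full pipeline the `blurred` values would come from an actual Gaussian filter over the background area; a single-radius blur plus blending is one common approximation of per-pixel variable-radius blur.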
Of course, in this embodiment, if the depth-of-field information of each pixel cannot be accurately known, the depth-of-field information of the subregion can be obtained in the manner described in step 102.
To describe the background blurring processing of the application more clearly, an example is given below with reference to a specific application scenario:
As shown in Fig. 6, the main image captured by the main camera and the secondary image captured by the secondary camera are obtained, and the depth-of-field information of the main image is obtained from the main image and the secondary image. The original blurring intensity of the different subregions in the background area of the main image is then determined according to the depth-of-field information and the focus area, and each original blurring intensity is multiplied by the blurring weight of its subregion to obtain the target blurring intensity of the corresponding subregion. The background area of the main image is then blurred according to these target blurring intensities to obtain the blurred image.
Still referring to Fig. 6, the display orientation of the master image can be inferred from the gyroscope information of the terminal device; the up-down orientation of the different subregions is then determined according to the display orientation, and the blurring weights of the different subregions are determined according to a top-to-bottom weight decreasing strategy.
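A sketch of the top-to-bottom decreasing strategy, assuming (for illustration only) a linear falloff and that the gyroscope reports just the two orientations 0° and 180°:

```python
def top_down_weights(n_subregions, orientation_deg=0):
    """Linearly decreasing blurring weights from the top subregion to
    the bottom one, flipped when the device is held upside down."""
    weights = [1.0 - i / n_subregions for i in range(n_subregions)]
    return weights if orientation_deg == 0 else list(reversed(weights))

print(top_down_weights(4))       # [1.0, 0.75, 0.5, 0.25]
print(top_down_weights(4, 180))  # [0.25, 0.5, 0.75, 1.0]
```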
Based on the above embodiments, in addition to dividing the background area into subregions along the direction parallel to the focusing area, the background area may also be divided into different subregions along the vertical direction (the direction perpendicular to the focusing area).
The first depth information of the foreground area and the second depth information of the background area in the master image are determined according to the depth information and the focusing area, and the background area is divided into different subregions according to the magnitude of the second depth information, wherein a subregion closer to the focusing area has a smaller depth. The average depth information of the different subregions in the background area of the master image is obtained from the second depth information, and the original blurring intensities of the different subregions are then obtained according to the first depth information and the average depth information of the different subregions.

Specifically, in this example, the background area is divided into a plurality of different subregions according to depth. For each subregion, the depth at the position closest to the focusing area and the depth at the position farthest from the focusing area are obtained, and the average of these two depths is taken as the average depth information of the corresponding subregion. The original blurring intensity is then determined from the average depth information of the subregion, wherein the larger the average depth information, the larger the original blurring intensity.
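The averaging step above is simply the midpoint of each subregion's nearest and farthest depths, as in this sketch:

```python
def average_depths(subregions):
    """Average depth of each subregion, computed from the depth at its
    position closest to the focusing area and the depth at its
    position farthest from the focusing area."""
    return [(near + far) / 2 for near, far in subregions]

# Three subregions given as (nearest depth, farthest depth) pairs, in metres.
print(average_depths([(1.0, 3.0), (3.0, 5.0), (5.0, 9.0)]))  # [2.0, 4.0, 7.0]
```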
It should be emphasized that if, due to the limited accuracy of depth computation, the depth at the position closest to the focusing area or the depth at the position farthest from the focusing area cannot be obtained for a subregion, the average depth of that subregion may instead be calculated from the probability distribution of the depth information obtained within the subregion. Alternatively, if the depth information of one or more subregions far from the focusing area is obtained inaccurately, the average depth information of those subregions may be derived from the variation trend of the average depth information of the subregions closer to the focusing area, for which more accurate values are available.
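The trend-based derivation can be sketched as follows, under the simplifying assumptions that the trend is linear (the mean step of the known values) and that the unreliable subregions (`None`) sit at the far end of the list:

```python
def extrapolate_missing(avg_depths):
    """Fill in None entries at the far end of the list by continuing
    the variation trend (here, the mean step) of the accurately
    measured average depths closer to the focusing area."""
    known = [d for d in avg_depths if d is not None]
    step = (known[-1] - known[0]) / (len(known) - 1)  # mean increment
    filled, prev = [], None
    for d in avg_depths:
        prev = d if d is not None else prev + step
        filled.append(prev)
    return filled

print(extrapolate_missing([2.0, 4.0, 6.0, None, None]))  # [2.0, 4.0, 6.0, 8.0, 10.0]
```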
Furthermore, in the embodiments of the present application, the blurring weight of each subregion may be calculated in the same manner as shown in the above example, and the target blurring intensity of each subregion may then be obtained to perform the blurring processing.

In summary, the background blurring processing method of the present application obtains the master image captured by the main camera and the secondary image captured by the secondary camera, obtains the depth information of the master image from the master image and the secondary image, determines the original blurring intensities of the different subregions in the background area of the master image according to the depth information and the focusing area, determines the distribution orientation of the different subregions according to the display mode of the master image, determines the blurring weights of the different subregions according to the weight setting strategy corresponding to the distribution orientation, then determines the target blurring intensities of the different subregions according to the original blurring intensities and the corresponding blurring weights, and finally blurs the background area of the master image according to the target blurring intensities of the different subregions. In this way, a more natural blurring effect closer to true optical defocus is achieved.
In order to implement the above embodiments, the present application also proposes a background blurring processing apparatus. Fig. 7 is a schematic structural diagram of the background blurring processing apparatus according to an embodiment of the present application. As shown in Fig. 7, the background blurring processing apparatus includes: a first acquisition module 100, a first determining module 200, a second determining module 300, a third determining module 400 and a processing module 500.

The first acquisition module 100 is configured to obtain the master image captured by the main camera and the secondary image captured by the secondary camera, and to obtain the depth information of the master image from the master image and the secondary image.

The first determining module 200 is configured to determine the original blurring intensities of the different subregions in the background area of the master image according to the depth information and the focusing area.
In an embodiment of the present application, as shown in Fig. 8, on the basis of Fig. 7, the first determining module 200 includes a first determining unit 210, a first acquisition unit 220 and a second acquisition unit 230.

The first determining unit 210 is configured to determine the first depth information of the foreground area and the second depth information of the background area in the master image according to the depth information and the focusing area.

The first acquisition unit 220 is configured to obtain the average depth information of the different subregions in the background area of the master image according to the second depth information.

The second acquisition unit 230 is configured to obtain the original blurring intensities of the different subregions according to the first depth information and the average depth information of the different subregions.
The second determining module 300 is configured to determine the distribution orientation of the different subregions according to the display mode of the master image, and to determine the blurring weights of the different subregions according to the weight setting strategy corresponding to the distribution orientation.

In an embodiment of the present application, as shown in Fig. 9, on the basis of Fig. 7, the second determining module 300 includes a third acquisition unit 310 and a second determining unit 320.

The third acquisition unit 310 is configured to obtain the location coordinates of the different subregions.

The second determining unit 320 is configured to determine the blurring weights of the different subregions by querying a preset linear weight distribution curve or nonlinear weight distribution curve according to the location coordinates.
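The curve lookup can be illustrated as follows; the two curve shapes (linear falloff and a cosine falloff) are assumptions chosen for illustration, not curves disclosed by the application:

```python
import math

def weight_from_curve(y_norm, curve="linear"):
    """Blurring weight looked up from a preset distribution curve by the
    subregion's normalized vertical coordinate (0 = top, 1 = bottom).
    Both curves decrease from 1 toward 0."""
    if curve == "linear":
        return 1.0 - y_norm
    # Nonlinear example: cosine falloff, flatter near the top.
    return math.cos(y_norm * math.pi / 2)

print(weight_from_curve(0.5))                      # 0.5
print(round(weight_from_curve(0.5, "cosine"), 3))  # 0.707
```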
The third determining module 400 is configured to determine the target blurring intensities of the different subregions according to the original blurring intensities of the different subregions and the corresponding blurring weights.

The processing module 500 is configured to blur the background area of the master image according to the target blurring intensities of the different subregions.

It should be noted that the foregoing description of the method embodiments also applies to the apparatus of the embodiments of the present application; the implementation principles are similar and are not repeated here.

The division of the modules in the above background blurring processing apparatus is only for illustration. In other embodiments, the background blurring processing apparatus may be divided into different modules as required to complete all or part of the functions of the above background blurring processing apparatus.

In summary, the background blurring processing apparatus of the present application obtains the master image captured by the main camera and the secondary image captured by the secondary camera, obtains the depth information of the master image from the master image and the secondary image, determines the original blurring intensities of the different subregions in the background area of the master image according to the depth information and the focusing area, determines the distribution orientation of the different subregions according to the display mode of the master image, determines the blurring weights of the different subregions according to the weight setting strategy corresponding to the distribution orientation, then determines the target blurring intensities of the different subregions according to the original blurring intensities and the corresponding blurring weights, and finally blurs the background area of the master image according to the target blurring intensities of the different subregions. In this way, a more natural blurring effect closer to true optical defocus is achieved.
In order to implement the above embodiments, the present application also proposes a computer device, wherein the computer device is any device including a memory storing a computer program and a processor running the computer program, for example a smartphone or a personal computer. The above computer device further includes an image processing circuit, which may be implemented by hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 10 is a schematic diagram of the image processing circuit in one embodiment. As shown in Fig. 10, for ease of illustration, only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in Fig. 10, the image processing circuit includes an ISP processor 1040 and a control logic device 1050. The image data captured by the imaging device 1010 is first processed by the ISP processor 1040, which analyzes the image data to capture image statistics usable for determining one or more control parameters of the imaging device 1010. The imaging device 1010 (camera) may include a camera with one or more lenses 1012 and an image sensor 1014. In order to implement the background blurring processing method of the present application, the imaging device 1010 includes two sets of cameras, and may capture the scene image based on the main camera and the secondary camera simultaneously. The image sensor 1014 may include a color filter array (such as a Bayer filter); the image sensor 1014 may obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 1040. The sensor 1020 may provide the raw image data to the ISP processor 1040 based on the sensor 1020 interface type, wherein the ISP processor 1040 may calculate the depth information based on the raw image data obtained by the image sensor 1014 in the main camera and the raw image data obtained by the image sensor 1014 in the secondary camera, both provided by the sensor 1020. The sensor 1020 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above interfaces.
The ISP processor 1040 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits; the ISP processor 1040 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be carried out with the same or different bit depth precision.

The ISP processor 1040 may also receive pixel data from the image memory 1030. For example, raw pixel data may be sent from the sensor 1020 interface to the image memory 1030, and the raw pixel data in the image memory 1030 is then made available to the ISP processor 1040 for processing. The image memory 1030 may be part of a memory device, a storage device, or an independent dedicated memory within an electronic device, and may include DMA (Direct Memory Access) features.

Upon receiving raw image data from the sensor 1020 interface or from the image memory 1030, the ISP processor 1040 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 1030 for additional processing before being displayed. The ISP processor 1040 receives the processed data from the image memory 1030 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to the display 1070 for viewing by the user and/or further processed by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 1040 may also be sent to the image memory 1030, and the display 1070 may read image data from the image memory 1030. In one embodiment, the image memory 1030 may be configured to implement one or more frame buffers. Furthermore, the output of the ISP processor 1040 may be sent to an encoder/decoder 1060 to encode/decode the image data; the encoded image data may be saved and decompressed before being displayed on a display 1070 device. The encoder/decoder 1060 may be implemented by a CPU, a GPU or a coprocessor.

The statistics determined by the ISP processor 1040 may be sent to the control logic device 1050. For example, the statistics may include image sensor 1014 statistical information such as automatic exposure, automatic white balance, automatic focus, flicker detection, black level compensation, and lens 1012 shading correction. The control logic device 1050 may include a processor and/or microcontroller executing one or more routines (such as firmware), which determine the control parameters of the imaging device 1010 and the ISP control parameters based on the received statistics. For example, the control parameters may include sensor 1020 control parameters (such as gain and integration time for exposure control), camera flash control parameters, lens 1012 control parameters (such as focus or zoom focal length), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), as well as lens 1012 shading correction parameters.
The following are the steps of implementing the background blurring processing method with the image processing technology in Fig. 10:

obtaining the master image captured by the main camera and the secondary image captured by the secondary camera, and obtaining the depth information of the master image according to the master image and the secondary image;

determining the original blurring intensities of the different subregions in the background area of the master image according to the depth information and the focusing area;

determining the distribution orientation of the different subregions according to the display mode of the master image, and determining the blurring weights of the different subregions according to the weight setting strategy corresponding to the distribution orientation;

determining the target blurring intensities of the different subregions according to the original blurring intensities of the different subregions and the corresponding blurring weights;

blurring the background area of the master image according to the target blurring intensities of the different subregions.

In order to implement the above embodiments, the present application also proposes a non-transitory computer-readable storage medium. When the instructions in the storage medium are executed by a processor, the background blurring processing method of the above embodiments can be performed.
In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "an example", "a specific example" or "some examples" mean that the specific features, structures, materials or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present application. In this specification, the schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples and the features of the different embodiments or examples described in this specification, provided they do not contradict each other.

In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality of" means at least two, such as two, three, etc., unless otherwise specifically defined.

Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment or portion of code including one or more executable instructions for implementing the steps of a custom logic function or process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.

The logic and/or steps represented in the flowchart or otherwise described herein, for example, may be considered an ordered list of executable instructions for implementing logic functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transmit the program for use by, or in connection with, the instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) with one or more wirings, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or, if necessary, processing it in another suitable manner, and then stored in a computer memory.
It should be understood that each part of the present application may be implemented by hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented by hardware, as in another embodiment, any one or a combination of the following technologies well known in the art may be used: a discrete logic circuit with logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), etc.

Those skilled in the art can understand that all or part of the steps carried in the above method embodiments may be completed by instructing relevant hardware through a program, and the program may be stored in a computer-readable storage medium; when executed, the program includes one or a combination of the steps of the method embodiments.

In addition, the functional units in the embodiments of the present application may be integrated in one processing module, or each unit may exist physically alone, or two or more units may be integrated in one module. The above integrated module may be implemented in the form of hardware or in the form of a software function module. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.

The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, etc. Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and shall not be construed as limiting the present application; those of ordinary skill in the art may change, modify, replace and vary the above embodiments within the scope of the present application.

Claims (10)

1. A background blurring processing method, characterized by comprising:

obtaining a master image captured by a main camera and a secondary image captured by a secondary camera, and obtaining depth information of the master image according to the master image and the secondary image;

determining original blurring intensities of different subregions in a background area of the master image according to the depth information and a focusing area;

determining a distribution orientation of the different subregions according to a display mode of the master image, and determining blurring weights of the different subregions according to a weight setting strategy corresponding to the distribution orientation;

determining target blurring intensities of the different subregions according to the original blurring intensities of the different subregions and the corresponding blurring weights;

blurring the background area of the master image according to the target blurring intensities of the different subregions.
2. The method of claim 1, characterized in that determining the original blurring intensities of the different subregions in the background area of the master image according to the depth information and the focusing area comprises:

determining first depth information of a foreground area and second depth information of the background area in the master image according to the depth information and the focusing area;

obtaining average depth information of the different subregions in the background area of the master image according to the second depth information;

obtaining the original blurring intensities of the different subregions according to the first depth information and the average depth information of the different subregions.
3. The method of claim 1, characterized in that when the distribution orientation of the different subregions is an up-down orientation, determining the blurring weights of the different subregions according to the weight setting strategy corresponding to the distribution orientation comprises:

determining the blurring weights of the different subregions according to a top-to-bottom weight decreasing strategy.

4. The method of claim 3, characterized in that determining the blurring weights of the different subregions according to the top-to-bottom weight decreasing strategy comprises:

obtaining location coordinates of the different subregions;

determining the blurring weights of the different subregions by querying a preset linear weight distribution curve or nonlinear weight distribution curve according to the location coordinates.
5. The method of claim 1, characterized in that determining the target blurring intensities of the different subregions according to the original blurring intensities of the different subregions and the corresponding blurring weights comprises:

taking products of the original blurring intensities of the different subregions and the corresponding blurring weights as the target blurring intensities of the different subregions.

6. The method of claim 1, characterized in that blurring the background area of the master image according to the target blurring intensities of the different subregions comprises:

determining a blurring coefficient of each pixel in the background area according to the target blurring intensities of the different subregions and the depth information of each pixel in the different subregions;

performing Gaussian blur processing on the background area according to the blurring coefficient of each pixel in the background area to generate a blurred photo.
7. A background blurring processing apparatus, characterized by comprising:

a first acquisition module, configured to obtain a master image captured by a main camera and a secondary image captured by a secondary camera, and to obtain depth information of the master image according to the master image and the secondary image;

a first determining module, configured to determine original blurring intensities of different subregions in a background area of the master image according to the depth information and a focusing area;

a second determining module, configured to determine a distribution orientation of the different subregions according to a display mode of the master image, and to determine blurring weights of the different subregions according to a weight setting strategy corresponding to the distribution orientation;

a third determining module, configured to determine target blurring intensities of the different subregions according to the original blurring intensities of the different subregions and the corresponding blurring weights;

a processing module, configured to blur the background area of the master image according to the target blurring intensities of the different subregions.
8. The apparatus of claim 7, characterized in that the first determining module comprises:

a first determining unit, configured to determine first depth information of a foreground area and second depth information of the background area in the master image according to the depth information and the focusing area;

a first acquisition unit, configured to obtain average depth information of the different subregions in the background area of the master image according to the second depth information;

a second acquisition unit, configured to obtain the original blurring intensities of the different subregions according to the first depth information and the average depth information of the different subregions.

9. A computer device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the program, the background blurring processing method of any one of claims 1-5 is implemented.

10. A computer-readable storage medium on which a computer program is stored, characterized in that when the program is executed by a processor, the background blurring processing method of any one of claims 1-5 is implemented.
CN201711242134.4A 2017-11-30 2017-11-30 Background blurring processing method, device and equipment Pending CN108053363A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711242134.4A CN108053363A (en) 2017-11-30 2017-11-30 Background blurring processing method, device and equipment
PCT/CN2018/116475 WO2019105261A1 (en) 2017-11-30 2018-11-20 Background blurring method and apparatus, and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711242134.4A CN108053363A (en) 2017-11-30 2017-11-30 Background blurring processing method, device and equipment

Publications (1)

Publication Number Publication Date
CN108053363A true CN108053363A (en) 2018-05-18

Family

ID=62121994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711242134.4A Pending CN108053363A (en) 2017-11-30 2017-11-30 Background blurring processing method, device and equipment

Country Status (2)

Country Link
CN (1) CN108053363A (en)
WO (1) WO2019105261A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191469A (en) * 2018-08-17 2019-01-11 广东工业大学 A kind of image automatic focusing method, apparatus, equipment and readable storage medium storing program for executing
WO2019105261A1 (en) * 2017-11-30 2019-06-06 Oppo广东移动通信有限公司 Background blurring method and apparatus, and device
CN110555809A (en) * 2018-06-04 2019-12-10 瑞昱半导体股份有限公司 background blurring method based on foreground image and electronic device
WO2020192692A1 (en) * 2019-03-25 2020-10-01 华为技术有限公司 Image processing method and related apparatus
CN112040203A (en) * 2020-09-02 2020-12-04 Oppo(重庆)智能科技有限公司 Computer storage medium, terminal device, image processing method and device
CN112785487A (en) * 2019-11-06 2021-05-11 RealMe重庆移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN112889265A (en) * 2018-11-02 2021-06-01 Oppo广东移动通信有限公司 Depth image processing method, depth image processing device and electronic device
WO2021114061A1 (en) * 2019-12-09 2021-06-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Electric device and method of controlling an electric device
CN113066001A (en) * 2021-02-26 2021-07-02 华为技术有限公司 Image processing method and related equipment
CN114339071A (en) * 2021-12-28 2022-04-12 维沃移动通信有限公司 Image processing circuit, image processing method and electronic device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106993112A (en) * 2017-03-09 2017-07-28 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Depth-of-field-based background blurring method and apparatus, and electronic device
CN107395965A (en) * 2017-07-14 2017-11-24 Vivo Mobile Communication Co., Ltd. Image processing method and mobile terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108053363A (en) * 2017-11-30 2018-05-18 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Background blurring processing method, device and equipment

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019105261A1 (en) * 2017-11-30 2019-06-06 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Background blurring method and apparatus, and device
CN110555809A (en) * 2018-06-04 2019-12-10 Realtek Semiconductor Corp. Background blurring method based on foreground image and electronic device
CN110555809B (en) * 2018-06-04 2022-03-15 Realtek Semiconductor Corp. Background blurring method based on foreground image and electronic device
CN109191469A (en) * 2018-08-17 2019-01-11 Guangdong University of Technology Automatic image focusing method, apparatus, device and readable storage medium
CN112889265A (en) * 2018-11-02 2021-06-01 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Depth image processing method, depth image processing device and electronic device
CN112889265B (en) * 2018-11-02 2022-12-09 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Depth image processing method, depth image processing device and electronic device
US11562496B2 (en) 2018-11-02 2023-01-24 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Depth image processing method, depth image processing apparatus and electronic device
WO2020192692A1 (en) * 2019-03-25 2020-10-01 Huawei Technologies Co., Ltd. Image processing method and related apparatus
CN112785487A (en) * 2019-11-06 2021-05-11 Realme Chongqing Mobile Telecommunications Corp., Ltd. Image processing method and device, storage medium and electronic equipment
CN112785487B (en) * 2019-11-06 2023-08-04 Realme Chongqing Mobile Telecommunications Corp., Ltd. Image processing method and device, storage medium and electronic equipment
WO2021114061A1 (en) * 2019-12-09 2021-06-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Electric device and method of controlling an electric device
CN112040203A (en) * 2020-09-02 2020-12-04 Oppo (Chongqing) Intelligent Technology Co., Ltd. Computer storage medium, terminal device, image processing method and device
CN113066001A (en) * 2021-02-26 2021-07-02 Huawei Technologies Co., Ltd. Image processing method and related equipment
WO2022179581A1 (en) * 2021-02-26 2022-09-01 Huawei Technologies Co., Ltd. Image processing method and related device
CN114339071A (en) * 2021-12-28 2022-04-12 Vivo Mobile Communication Co., Ltd. Image processing circuit, image processing method and electronic device

Also Published As

Publication number Publication date
WO2019105261A1 (en) 2019-06-06

Similar Documents

Publication Publication Date Title
CN108053363A (en) Background blurring processing method, device and equipment
Abuolaim et al. Defocus deblurring using dual-pixel data
CN107977940A (en) background blurring processing method, device and equipment
US10997696B2 (en) Image processing method, apparatus and device
CN108055452A (en) Image processing method, device and equipment
CN108537155B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN108111749B (en) Image processing method and device
CN103945118B (en) Image blurring method, device and electronic equipment
KR20210139450A (en) Image display method and device
JP2020536457A (en) Image processing methods and devices, electronic devices, and computer-readable storage media
CN107509031B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108024054A (en) Image processing method, device and equipment
CN110717942B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108419028B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN107493432A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN107945105A (en) Background blurring processing method, device and equipment
CN108140130A (en) Edge-aware bilateral image processing
CN105049718A (en) Image processing method and terminal
CN107409166A (en) Automatic generation of panning shots
CN107959778A (en) Imaging method and device based on dual camera
JP2015231220A (en) Image processing apparatus, imaging device, image processing method, imaging method and program
US9100559B2 (en) Image processing apparatus, image pickup apparatus, image processing method, and image processing program using compound kernel
CN108154514A (en) Image processing method, device and equipment
CN108712608A (en) Terminal device image pickup method and device
CN109194877A (en) Image compensation method and device, computer readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180518