CN107945105A - Background blurring processing method, device and equipment - Google Patents

Background blurring processing method, device and equipment

Info

Publication number
CN107945105A
CN107945105A
Authority
CN
China
Prior art keywords
depth
area
view information
master image
virtualization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711242468.1A
Other languages
Chinese (zh)
Other versions
CN107945105B (en)
Inventor
欧阳丹
谭国辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711242468.1A priority Critical patent/CN107945105B/en
Publication of CN107945105A publication Critical patent/CN107945105A/en
Application granted granted Critical
Publication of CN107945105B publication Critical patent/CN107945105B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G06T3/04

Landscapes

  • Studio Devices (AREA)

Abstract

This application discloses a background blurring processing method, device and equipment. The method includes: obtaining a master image captured by a main camera and a sub-image captured by a secondary camera; detecting whether a preset target object is present in the master image; if the target object is detected, determining a target area corresponding to the target object; calculating first depth of field information of the target area according to the master image and the sub-image using a preset first depth of field algorithm; obtaining second depth of field information of the non-target area using a preset second depth of field algorithm, where the computational accuracy of the first depth of field algorithm is higher than that of the second; performing blurring processing on the background area of the target area according to the first depth of field information, and on the background area of the non-target area according to the second depth of field information. The target object is thereby protected from being blurred during blurring processing, and the visual effect of image processing is improved.

Description

Background blurring processing method, device and equipment
Technical field
This application relates to the field of image processing technology, and more particularly to a background blurring processing method, device and equipment.
Background technology
In general, to highlight the subject of a photograph, the background area of the photograph can be blurred. However, current terminal devices are limited by the processing capability of their processors, so that when some images are blurred, the image of the shooting subject itself may be blurred. For example, when a user makes a scissors-hand gesture while being photographed, the image region corresponding to the gesture may be blurred, resulting in a poor blurring effect.
Content of the application
The application provides a background blurring processing method, device and equipment, to solve the technical problem in the prior art that terminal devices are limited by the processing capability of their processors, so that when some images are blurred, the image of the shooting subject may be blurred.
An embodiment of the application provides a background blurring processing method, including: obtaining the master image captured by a main camera and the sub-image captured by a secondary camera; detecting whether a preset target object is present in the master image; if the target object is detected, determining the target area corresponding to the target object in the master image; calculating the first depth of field information of the target area according to the master image and the sub-image using a preset first depth of field algorithm; obtaining the second depth of field information of the non-target area in the master image using a preset second depth of field algorithm; performing blurring processing on the background area of the target area according to the first depth of field information; and performing blurring processing on the background area of the non-target area according to the second depth of field information.
Another embodiment of the application provides a background blurring processing device, including: a first acquisition module, configured to obtain the master image captured by the main camera and the sub-image captured by the secondary camera; a detection module, configured to detect whether a preset target object is present in the master image; a determining module, configured to determine, when the target object is detected, the target area corresponding to the target object in the master image; a second acquisition module, configured to calculate the first depth of field information of the target area according to the master image and the sub-image using a preset first depth of field algorithm, and to obtain the second depth of field information of the non-target area using a preset second depth of field algorithm; and a processing module, configured to perform blurring processing on the background area of the target area according to the first depth of field information, and on the background area of the non-target area according to the second depth of field information.
A further embodiment of the application provides a computer device, including a memory and a processor, wherein the memory stores computer-readable instructions which, when executed by the processor, cause the processor to perform the background blurring processing method described in the above embodiments of the application.
Yet another embodiment of the application provides a non-transitory computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the background blurring processing method described in the above embodiments of the application.
The technical solutions provided by the embodiments of the application may include the following beneficial effects:
The master image captured by the main camera and the sub-image captured by the secondary camera are obtained, and it is detected whether a preset target object is present in the master image. If the target object is detected, the target area corresponding to the target object is determined; the first depth of field information of the target area is calculated according to the master image and the sub-image using the preset first depth of field algorithm, and the second depth of field information of the non-target area is obtained using the preset second depth of field algorithm. Then blurring processing is performed on the background area of the target area according to the first depth of field information, and on the background area of the non-target area according to the second depth of field information. The target object is thereby protected from being blurred during blurring processing, and the visual effect of image processing is improved.
Brief description of the drawings
The above and/or additional aspects and advantages of the application will become apparent and readily understood from the following description of the embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a flow chart of a background blurring processing method according to one embodiment of the application;
Fig. 2 is a schematic diagram of the principle of triangulation according to one embodiment of the application;
Fig. 3 is a schematic diagram of dual-camera depth of field acquisition according to one embodiment of the application;
Fig. 4 is a flow chart of a background blurring processing method according to another embodiment of the application;
Fig. 5 is a flow chart of a background blurring processing method according to yet another embodiment of the application;
Fig. 6 is a flow chart of a background blurring processing method according to a specific embodiment of the application;
Fig. 7 (a) is a schematic diagram of a background blurring processing effect according to the prior art;
Fig. 7 (b) is a schematic diagram of a background blurring processing effect according to one embodiment of the application;
Fig. 8 is a structural diagram of a background blurring processing device according to one embodiment of the application;
Fig. 9 is a structural diagram of a background blurring processing device according to another embodiment of the application;
Fig. 10 is a structural diagram of a background blurring processing device according to yet another embodiment of the application; and
Fig. 11 is a schematic diagram of an image processing circuit according to another embodiment of the application.
Embodiment
Embodiments of the application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numbers throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary and are intended to explain the application; they are not to be construed as limiting the application.
The background blurring processing method, device and equipment of the embodiments of the application are described below with reference to the accompanying drawings.
Fig. 1 is a flow chart of the background blurring processing method according to one embodiment of the application. As shown in Fig. 1, the method includes:
Step 101: obtain the master image captured by the main camera and the sub-image captured by the secondary camera.
Step 102: detect whether a preset target object is present in the master image.
Specifically, a dual-camera system calculates depth of field information from the master image and the sub-image. The dual-camera system includes a main camera for capturing the master image of the shooting subject and a secondary camera for assisting in obtaining the depth of field information of the master image. The main camera and the secondary camera may be arranged horizontally, vertically, and so on. To describe more clearly how the dual cameras obtain depth of field information, the principle is explained below with reference to the accompanying drawings:
In practical applications, the human eye resolves depth mainly through binocular vision, and dual cameras resolve depth of field information on the same principle, chiefly the triangulation principle shown in Fig. 2. In Fig. 2, the imaged object in real space is depicted, together with the positions O_R and O_T of the two cameras and the focal plane of the two cameras. The distance from the focal plane to the plane of the two cameras is f, and the two cameras form images at the focal plane, thereby yielding two captured images.
P and P' are the positions of the same object in the two captured images, respectively. The distance from point P to the left border of its image is X_R, and the distance from point P' to the left border of its image is X_T. O_R and O_T are the two cameras; they lie in the same plane at a distance B from each other.
Based on the triangulation principle, the distance Z between the object in Fig. 2 and the plane of the two cameras satisfies the following relation:
(B − (X_R − X_T)) / B = (Z − f) / Z
From this it can be derived that Z = B·f / (X_R − X_T) = B·f / d, where d is the difference between the positions of the same object in the two captured images (the disparity). Since B and f are constants, the distance Z of the object can be determined from d.
Of course, besides the triangulation method, the depth of field information of the master image can also be calculated in other ways. For example, when the main camera and the secondary camera photograph the same scene, the distance between an object in the scene and the cameras is proportional to quantities such as the displacement difference and attitude difference between the images formed by the main camera and the secondary camera. Therefore, in one embodiment of the application, the above distance Z can be obtained from this proportional relationship.
For example, as shown in Fig. 3, a map of the position differences is calculated from the master image obtained by the main camera and the sub-image obtained by the secondary camera, represented here by a disparity map. This map represents the displacement difference between identical points in the two images; since in the triangulation this displacement difference determines Z directly (Z = B·f/d), the disparity map is often used directly as the depth of field information map.
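The disparity-to-depth relation Z = B·f/d can be sketched as follows; the baseline and focal-length values in the example are hypothetical, and a real system would use calibrated camera parameters.

```python
# Sketch of converting disparity values d to depths via Z = B * f / d.
def disparity_to_depth(disparities, baseline_mm, focal_px):
    """Return the depth for each disparity; zero disparity maps to infinity."""
    return [
        baseline_mm * focal_px / d if d > 0 else float("inf")
        for d in disparities
    ]
```

With a hypothetical 50 mm baseline and a 1000 px focal length, a disparity of 2 px corresponds to a depth of 25 000 in the same unit as B·f, illustrating why a disparity map can stand in for a depth of field map.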
When the dual-camera system performs blurring on the background area of an image, it may blur the images of some target objects that are not intended to be blurred. Therefore, to ensure that target objects the user does not intend to blur are not blurred, it is detected whether a target object is present in the master image. The target object may include a specific gesture action (such as a scissors-hand or a cheering gesture), a famous building (such as the Great Wall or Mount Huang), or objects of certain specific shapes (such as circular objects or triangles).
It should be appreciated that, depending on the application scenario, the detection of whether a preset target object is present in the master image can be realized in different ways, as exemplified below:
As one example:
In this example, template information containing the contour edge of the target object is preset. The contour edge of the photographed scene in the foreground area of the master image is detected, and the preset template information is matched against this contour edge; if the match succeeds, it is detected that the preset target object is present in the master image.
In this example, the contour edge in the preset template information may include the coordinate values of the contour edge of the target object, the positional relationships between pixels, and so on.
It can be understood that, in this example, whether the target object is present in the foreground area is identified only from the contour edge of the photographed scene, which improves detection efficiency and thereby further improves image processing efficiency.
As another example:
In this example, template information containing the shape information of the target object is preset; the shape information includes the outer contour information of the target object and its internal filling pattern information. The shape information of the photographed scene in the foreground area of the master image is detected, and the preset template information is matched against this shape information; if the match succeeds, it is detected that the preset target object is present in the master image.
It can be understood that, in this example, whether the target object is present is identified from the shape information of the photographed scene in the foreground area, which avoids misjudging shooting subjects whose outer contours are similar, and improves recognition accuracy.
In some examples, the above two examples may also be combined: the contour edge is identified first, and shape information recognition is then performed, to further improve recognition accuracy.
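A minimal sketch of the combined two-stage check is given below, assuming contours are stored as equal-length lists of (x, y) points and shape information as an opaque comparable value; the tolerance and all names are illustrative, not the patent's actual implementation.

```python
# Stage 1: match the detected contour edge against a preset template.
def edges_match(template_edge, detected_edge, tol=2.0):
    """True if every point of the detected contour lies within tol of the template."""
    if len(template_edge) != len(detected_edge):
        return False
    return all(
        abs(tx - dx) <= tol and abs(ty - dy) <= tol
        for (tx, ty), (dx, dy) in zip(template_edge, detected_edge)
    )

# Stage 2 (optional): also compare shape information to avoid contour-only misjudgement.
def detect_target(template_edge, detected_edge,
                  template_shape=None, detected_shape=None):
    """Contour match first; if shape info is provided, require it to match too."""
    if not edges_match(template_edge, detected_edge):
        return False
    if template_shape is not None:
        return template_shape == detected_shape
    return True
```

Passing only contours reproduces the faster first example; supplying shape information adds the second example's accuracy check.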
Step 103: if the target object is detected, determine the target area corresponding to the target object in the master image.
Step 104: calculate the first depth of field information of the target area according to the master image and the sub-image, using the preset first depth of field algorithm.
Step 105: obtain the second depth of field information of the non-target area in the master image using the preset second depth of field algorithm.
Specifically, if the target object is detected, then in order to avoid blurring it, the target area corresponding to the target object is determined; the first depth of field information of the target area is calculated according to the master image and the sub-image using the preset first depth of field algorithm, and the second depth of field information of the non-target area is obtained using the preset second depth of field algorithm, where the computational accuracy of the first depth of field algorithm is higher than that of the second. On the one hand, the depth of field of the background area not corresponding to the target object is calculated with the lower-accuracy second depth of field algorithm, whose computational load is lower than that of the first; this relieves the computing pressure on the terminal device and prevents the blurring processing from taking noticeably longer. On the other hand, the depth of field of the target area corresponding to the target object is calculated with the higher-accuracy first depth of field algorithm, which ensures that the target object area is not blurred; and since the first depth of field algorithm is applied only to that target area, the impact on the terminal device's processor is small and the image processing time is not significantly increased.
Of course, in a specific implementation, in order to meet individual user needs and achieve the image processing effect of interest, the computational accuracy of the first depth of field algorithm may also be equal to or lower than that of the second depth of field algorithm; this is not restricted here.
In one embodiment of the application, if no preset target object is detected in the master image, the third depth of field information of the master image is calculated using the second depth of field algorithm, and blurring processing is performed on the background area of the master image according to the third depth of field information, so as to relieve the processing pressure on the system.
Step 106: perform blurring processing on the background area of the target area according to the first depth of field information.
Step 107: perform blurring processing on the background area of the non-target area according to the second depth of field information.
Specifically, after the depth of field information of the target area and the non-target area is calculated with different computational accuracies, blurring processing is performed on the background area of the target area according to the first depth of field information and on the background area of the non-target area according to the second depth of field information, so that the target object is protected in the blurred image.
In practical applications, depending on the application scenario, performing blurring processing on the background area of the target area according to the first depth of field information and on the background area of the non-target area according to the second depth of field information can be realized in different ways, as illustrated below:
The first example:
As shown in Fig. 4, performing blurring processing on the background area of the target area according to the first depth of field information in step 106 may include:
Step 201: determine the first foreground area depth of field information and the first background area depth of field information of the target area according to the first depth of field information and the focus area of the master image.
It can be understood that the target area may include the foreground area where the target object is located as well as a background area beyond the target object. Therefore, in order to further process the region where the target object is located, the first foreground area depth of field information and the first background area depth of field information of the target area are determined according to the first depth of field information and the focus area of the master image, where the range of clear imaging before the focus area within the target area is the first foreground area, and the range of clear imaging after the focus area within the target area is the first background area.
It should be noted that, depending on the application scenario, the way of separating the first foreground and first background areas from the target area differs, as illustrated below:
The first example:
The relevant shooting parameters can be obtained, so that the depth of field information of the image region outside the focus area within the target area is calculated according to the imaging formulas of the shooting camera.
In this example, parameters of the shooting camera such as the permissible circle of confusion diameter, f-number, focal length and focus distance can be obtained, so that the first foreground area is calculated according to the formula
first foreground area depth of field information = (f-number × permissible circle of confusion diameter × focus distance²) / (focal length² + f-number × permissible circle of confusion diameter × focus distance),
the foreground is separated out according to the first foreground area, and the first background area depth of field information of the background of the target area is calculated according to the formula
first background area depth of field information = (f-number × permissible circle of confusion diameter × focus distance²) / (focal length² − f-number × permissible circle of confusion diameter × focus distance).
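The two depth-of-field formulas can be transcribed directly as follows; the sample parameter values and the choice of millimetres as the common length unit are assumptions for illustration.

```python
# Direct transcription of the foreground/background depth-of-field formulas.
# All quantities must share one length unit (millimetres assumed here).
def foreground_dof(f_number, coc, focal_len, focus_dist):
    """Depth of field in front of the focus plane (first foreground extent)."""
    return (f_number * coc * focus_dist ** 2) / (
        focal_len ** 2 + f_number * coc * focus_dist
    )

def background_dof(f_number, coc, focal_len, focus_dist):
    """Depth of field behind the focus plane (first background extent)."""
    return (f_number * coc * focus_dist ** 2) / (
        focal_len ** 2 - f_number * coc * focus_dist
    )
```

Note the only difference is the sign in the denominator, which is why the background extent is always slightly larger than the foreground extent for the same settings.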
The second example:
The depth of field map of the image region outside the focus area is determined according to the depth of field data of the current target area obtained by the dual cameras; the first foreground area before the focus area and the first background area after the focus area are then determined according to this depth of field map.
Specifically, in this example, since the positions of the two cameras differ, the two rear cameras have a certain angular difference and distance difference relative to the photographed target object, so the preview image data acquired by the two also exhibit a certain phase difference.
For example, for point A on the photographed target object, the pixel coordinates corresponding to point A are (30, 50) in the preview image data of camera 1 and (30, 48) in the preview image data of camera 2; the phase difference of the pixel corresponding to point A between the two preview image data is 50 − 48 = 2.
In this example, the relationship between depth of field information and phase difference can be established in advance from experimental data or camera parameters, and the depth of field information corresponding to each pixel in the target image can then be looked up according to its phase difference between the preview image data obtained by the two cameras.
For example, for the phase difference of 2 corresponding to point A above, if the corresponding depth of field found from the preset correspondence is 5 metres, then the depth of field information corresponding to point A in the target area is 5 metres. In this way the depth of field information of each pixel in the current target area can be obtained, i.e. the depth of field map of the image region outside the focus area is obtained.
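The lookup just described might be sketched as follows; the calibration table is hypothetical except for the 2 → 5 m pair taken from the point-A example.

```python
# Hypothetical calibration table mapping phase difference (px) to depth (m);
# the 2 -> 5.0 entry follows the point-A example, the rest are made up.
DEPTH_TABLE = {1: 10.0, 2: 5.0, 3: 3.3, 4: 2.5}

def depth_from_phase(x1, x2, table=DEPTH_TABLE):
    """Depth of a point imaged at column x1 by camera 1 and x2 by camera 2."""
    return table.get(abs(x1 - x2))
```

Applying this per pixel over the target area yields the depth of field map of the image region outside the focus area.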
Then, after the depth of field map of the image region outside the focus area is obtained, the first foreground area depth of field information of the image region before the focus area, and the first background area depth of field information after the focus area, can be further determined.
Step 202: obtain the baseline value of the first blurring degree according to the first foreground area depth of field information and the first background area depth of field information.
The baseline value of the first blurring degree can specify the intensity grade of the blurring, e.g. strong or weak. The larger the gap between the first foreground area depth of field information and the first background area depth of field information, the more distinct the foreground and background in the target area already are, so the blurring degree can be smaller and the baseline value of the first blurring degree is smaller; conversely, the smaller the gap, the less distinct the foreground and background in the target area, so the blurring degree can be larger and the baseline value of the first blurring degree is larger.
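The inverse relationship described above (a larger foreground/background depth gap yields a smaller baseline) could be modelled, for illustration only, as:

```python
# Hypothetical mapping from the foreground/background depth gap to the
# blurring-degree baseline; the patent does not specify a concrete formula.
def blur_baseline(fg_depth, bg_depth, scale=10.0):
    """Baseline value of the blurring degree, shrinking as the gap grows."""
    gap = abs(bg_depth - fg_depth)
    return scale / (1.0 + gap)
```

Any monotonically decreasing function of the gap would satisfy the behaviour described in the text; this reciprocal form is merely one simple choice.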
Step 203: determine the blurring coefficient of each pixel in the background area of the target area according to the baseline value of the first blurring degree and the first background area depth of field information.
The blurring coefficient corresponds to the baseline value of the first blurring degree: the higher the baseline value, the larger the blurring coefficient and the higher the blurring degree of the first background area. In the embodiments of the application, the blurring coefficient of each pixel in the background area of the target area is determined according to the baseline value of the first blurring degree and the first background area depth of field information.
Step 204: perform Gaussian blur processing on the background area of the target area according to the blurring coefficient of each pixel.
Specifically, Gaussian blur processing is performed on the background area of the target area according to the blurring coefficient of each pixel, so that the larger the depth of field information of the background area in the target area, the higher the blurring degree.
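One way to realize a per-pixel blurring coefficient and turn it into a Gaussian blur strength is sketched below; the linear coefficient formula and the mapping to an odd kernel size are assumptions, since the patent does not specify them.

```python
def blur_coefficient(base_degree, depth):
    """Per-pixel blurring coefficient: a deeper background yields a stronger blur."""
    return base_degree * depth

def kernel_size(coefficient, max_kernel=31):
    """Map a blurring coefficient to an odd Gaussian kernel size, capped."""
    k = min(max_kernel, max(1, int(coefficient)))
    return k if k % 2 == 1 else k + 1
```

The odd-size requirement mirrors common Gaussian-blur APIs, which expect odd kernel dimensions so the kernel has a well-defined centre pixel.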
Further, as shown in Fig. 5, performing blurring processing on the background area of the non-target area according to the second depth of field information in step 107 may include:
Step 301: determine the second foreground area depth of field information and the second background area depth of field information of the non-target area according to the second depth of field information and the focus area of the master image.
It can be understood that the non-target area may include a foreground area and a background area. Therefore, in order to further facilitate processing of the background area of the image, the second foreground area depth of field information and the second background area depth of field information of the non-target area are determined according to the second depth of field information and the focus area of the master image. The way in which these are determined is similar to the way in which the first foreground area depth of field information and the first background area depth of field information of the target area are determined according to the first depth of field information and the focus area of the master image, and is not repeated here.
Step 302: obtain the baseline value of the second blurring degree according to the second foreground area depth of field information and the second background area depth of field information.
The baseline value of the second blurring degree can specify the degree of the blurring. The larger the gap between the second foreground area depth of field information and the second background area depth of field information, the more distinct the foreground and background in the non-target area already are, so the blurring degree can be smaller and the baseline value of the second blurring degree is smaller; conversely, the smaller the gap, the less distinct the foreground and background in the non-target area, so the blurring degree can be larger and the baseline value of the second blurring degree is larger.
Step 303: perform Gaussian blur processing on the background area of the non-target area according to the baseline value of the second blurring degree.
Specifically, Gaussian blur processing is performed on the background area of the non-target area according to the baseline value of the second blurring degree, so that the larger the depth of field information of the background area in the non-target area, the higher the blurring degree.
To enable those skilled in the art to understand more clearly the implementation process and the processing effect of the background blurring processing of the application, an illustration is given below with reference to a specific application scenario:
Specifically, as shown in Fig. 6, when the preset target object is a preset gesture, then after the master image is obtained, it is detected whether the preset gesture image is present in the master image. If it is, the target area where the preset gesture image is located is processed in a refined manner using the background blurring processing method described in the above embodiments, i.e. background blurring is performed with a depth of field algorithm of higher accuracy than the algorithm the system defaults to, while other regions can be blurred normally with the default lower-accuracy depth of field algorithm. In this way the blurring effect in certain special scenes can be improved without adding much processing time.
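The branch logic of Fig. 6 can be sketched as follows, with the gesture detector, the two depth of field algorithms and the blur step passed in as callables; all names are hypothetical stand-ins for the patent's modules.

```python
def background_blur(master, secondary, detect, depth_precise, depth_fast, blur):
    """Run the precise depth algorithm only when a target is detected;
    otherwise a single fast pass handles the whole frame."""
    region = detect(master)
    if region is None:
        # No preset gesture found: default lower-accuracy path only.
        return blur(master, depth_fast(master, secondary))
    # Target found: refine its area with the precise algorithm first,
    # then blur the remaining regions with the fast algorithm.
    refined = blur(master, depth_precise(master, secondary))
    return blur(refined, depth_fast(master, secondary))
```

The key property is that the expensive `depth_precise` call is skipped entirely on the no-target path, which is how the method limits the extra processing time.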
Continuing with the above scenario: as shown in Fig. 7 (a), after background blurring with the prior-art processing method, the image region corresponding to the preset finger gesture may be blurred due to the limited accuracy of the terminal device's depth of field computation, resulting in a poor blurring effect. With the background blurring processing method of the application, as shown in Fig. 7 (b), refined background blurring is performed on the target area where the gesture image is located, so that the hand gesture is prominent and not blurred, and the image blurring effect is good.
In conclusion the background blurring processing method of the embodiment of the present application, obtain main camera shooting master image and The sub-picture of secondary camera shooting, and detect and whether there is default destination object in master image, if detection knows that there are target Object, it is determined that target area corresponding with destination object, and according to master image and sub-picture, calculated using default first depth of field Method calculates the first depth of view information of target area, and default second depth of field algorithm of application obtains the second scape of nontarget area Deeply convince breath, and then, virtualization processing is carried out to the background area of target area according to the first depth of view information, and according to second depth of field Information carries out virtualization processing to the background area of nontarget area.Hereby it is achieved that virtualization processing when protect destination object not by Virtualization, improves the visual effect of image procossing.
To implement the above embodiments, the present application further provides a background blurring processing apparatus. Fig. 8 is a structural diagram of a background blurring processing apparatus according to an embodiment of the present application. As shown in Fig. 8, the apparatus includes a first acquisition module 100, a detection module 200, a determining module 300, a second acquisition module 400 and a processing module 500.
The first acquisition module 100 is configured to obtain the master image captured by the main camera and the sub-image captured by the secondary camera.
The detection module 200 is configured to detect whether a preset target object is present in the master image.
In an embodiment of the present application, as shown in Fig. 9, the detection module 200 includes a detection unit 210 and a recognition unit 220.
The detection unit 210 is configured to detect the contour edges of the photographed scene in the foreground area of the master image.
The recognition unit 220 is configured to match preset template information against the contour edges; if the match succeeds, it is detected that the preset target object is present in the master image.
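A crude, self-contained sketch of this contour-matching step follows. A gradient-magnitude edge map and an intersection-over-union overlap score stand in for the edge detector and matching criterion, which the text does not specify; the 0.5 threshold is an assumed tuning value:

```python
import numpy as np

def edge_map(gray, thresh=30.0):
    # Crude gradient-magnitude edge detector (a stand-in for a full
    # contour-extraction step such as Canny).
    gy, gx = np.gradient(gray.astype(np.float64))
    return np.hypot(gx, gy) > thresh

def matches_template(gray, template_edges, min_overlap=0.5):
    """Return True when the detected edge map overlaps the stored template
    edge map strongly enough (IoU). `min_overlap` is an assumed tuning
    parameter, not a value from the patent."""
    edges = edge_map(gray)
    inter = np.logical_and(edges, template_edges).sum()
    union = np.logical_or(edges, template_edges).sum()
    return union > 0 and inter / union >= min_overlap
```

A production implementation would use a proper contour representation (e.g., shape moments) rather than raw pixel overlap, but the control flow, extract contour edges then compare against a stored template, matches the unit described above.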
The determining module 300 is configured to determine, when the target object is detected, the target area corresponding to the target object in the master image.
In an embodiment of the present application, as shown in Fig. 10, the determining module 300 includes a first determination unit 310, an acquiring unit 320, a second determination unit 330 and a processing unit 340.
The first determination unit 310 is configured to determine first foreground-area depth-of-field information and first background-area depth-of-field information of the target area according to the first depth-of-field information and the focusing area of the master image.
The acquiring unit 320 is configured to obtain the base value of the first blurring degree according to the first foreground-area depth-of-field information and the first background-area depth-of-field information.
The second determination unit 330 is configured to determine the blurring coefficient of each pixel in the background area of the target area according to the base value of the first blurring degree and the first background-area depth-of-field information.
The processing unit 340 is configured to perform Gaussian blur on the background area of the target area according to the blurring coefficient of each pixel.
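The per-pixel blurring-coefficient computation of the second determination unit 330 can be sketched as follows. The patent does not give the exact relation, so the linear form, the normalization by the maximum depth, and the clipping ceiling are all illustrative assumptions:

```python
import numpy as np

def blur_coefficients(bg_depth, base_value, max_coeff=1.0):
    """Per-pixel blurring coefficient for a background area: grows with the
    pixel's background depth, scaled by the base value of the blurring
    degree, and clipped to a ceiling. The linear form and the ceiling are
    assumptions for illustration, not the patent's formula."""
    coeff = base_value * (bg_depth / bg_depth.max())
    return np.clip(coeff, 0.0, max_coeff)
```

The resulting coefficient map would then drive the Gaussian blur, with deeper background pixels receiving stronger blurring, consistent with the behavior described for the processing unit.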
The second acquisition module 400 is configured to calculate, according to the master image and the sub-image, the first depth-of-field information of the target area using a preset first depth-of-field algorithm, and to obtain the second depth-of-field information of the non-target area using a preset second depth-of-field algorithm.
In an embodiment of the present application, the computational precision of the first depth-of-field algorithm is higher than that of the second depth-of-field algorithm.
The processing module 500 is configured to blur the background area of the target area according to the first depth-of-field information, and to blur the background area of the non-target area according to the second depth-of-field information.
It should be noted that the foregoing description of the method embodiments also applies to the apparatus of the embodiments of the present application; the implementation principles are similar and are not repeated here.
The division of the modules in the above background blurring processing apparatus is for illustration only. In other embodiments, the apparatus may be divided into different modules as needed to complete all or part of the functions of the apparatus.
In conclusion the background blurring processing unit of the embodiment of the present application, obtain main camera shooting master image and The sub-picture of secondary camera shooting, and detect and whether there is default destination object in master image, if detection knows that there are target Object, it is determined that target area corresponding with destination object, and according to master image and sub-picture, calculated using default first depth of field Method calculates the first depth of view information of target area, and default second depth of field algorithm of application obtains the second scape of nontarget area Deeply convince breath, and then, virtualization processing is carried out to the background area of target area according to the first depth of view information, and according to second depth of field Information carries out virtualization processing to the background area of nontarget area.Hereby it is achieved that virtualization processing when protect destination object not by Virtualization, improves the visual effect of image procossing.
To implement the above embodiments, the present application further provides a computer device, which is any device including a memory storing a computer program and a processor running the computer program, such as a smartphone or a personal computer. The computer device further includes an image processing circuit, which may be implemented with hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 11 is a schematic diagram of the image processing circuit in one embodiment. As shown in Fig. 11, for ease of illustration, only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in Fig. 11, the image processing circuit includes an ISP processor 1040 and control logic 1050. Image data captured by the imaging device 1010 is first processed by the ISP processor 1040, which analyzes the image data to capture image statistics usable for determining one or more control parameters of the imaging device 1010. The imaging device 1010 (camera) may include a camera with one or more lenses 1012 and an image sensor 1014; to implement the background blurring processing method of the present application, the imaging device 1010 includes two sets of cameras. With continued reference to Fig. 11, the imaging device 1010 can photograph a scene simultaneously with a main camera and a secondary camera. The image sensor 1014 may include a color filter array (e.g., a Bayer filter); the image sensor 1014 can obtain the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that can be processed by the ISP processor 1040. The sensor 1020 supplies the raw image data to the ISP processor 1040 based on the interface type of the sensor 1020, and the ISP processor 1040 can calculate depth-of-field information and the like based on the raw image data obtained by the image sensor 1014 of the main camera and the raw image data obtained by the image sensor 1014 of the secondary camera, both provided by the sensor 1020. The sensor 1020 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above interfaces.
The ISP processor 1040 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits; the ISP processor 1040 can perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed at the same or different bit-depth precisions.
The ISP processor 1040 can also receive pixel data from the image memory 1030. For example, raw pixel data is sent from the sensor 1020 interface to the image memory 1030, and the raw pixel data in the image memory 1030 is then provided to the ISP processor 1040 for processing. The image memory 1030 may be part of a memory device, a storage device, or an independent dedicated memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the sensor 1020 interface or from the image memory 1030, the ISP processor 1040 can perform one or more image processing operations, such as temporal filtering. The processed image data can be sent to the image memory 1030 for additional processing before being displayed. The ISP processor 1040 receives the processed data from the image memory 1030 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to the display 1070 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 1040 can also be sent to the image memory 1030, and the display 1070 can read image data from the image memory 1030. In one embodiment, the image memory 1030 can be configured to implement one or more frame buffers. Furthermore, the output of the ISP processor 1040 can be sent to the encoder/decoder 1060 to encode/decode the image data; the encoded image data can be saved and decompressed before being shown on a display 1070 device. The encoder/decoder 1060 can be implemented by a CPU, a GPU or a coprocessor.
The statistics determined by the ISP processor 1040 can be sent to the control logic 1050. For example, the statistics may include image sensor 1014 statistical information such as automatic exposure, automatic white balance, automatic focusing, flicker detection, black-level compensation, and lens 1012 shading correction. The control logic 1050 may include a processor and/or a microcontroller executing one or more routines (e.g., firmware) that determine, according to the received statistics, the control parameters of the imaging device 1010 and the ISP control parameters. For example, the control parameters may include sensor 1020 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 1012 control parameters (e.g., focus or zoom focal length), or combinations of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 1012 shading correction parameters.
The following are the steps of implementing the background blurring processing method with the image processing technology of Fig. 11:
obtaining a master image captured by a main camera and a sub-image captured by a secondary camera;
detecting whether a preset target object is present in the master image;
if the target object is detected, determining a target area corresponding to the target object in the master image;
calculating, according to the master image and the sub-image, first depth-of-field information of the target area using a preset first depth-of-field algorithm;
obtaining second depth-of-field information of a non-target area in the master image using a preset second depth-of-field algorithm;
blurring the background area of the target area according to the first depth-of-field information, and blurring the background area of the non-target area according to the second depth-of-field information.
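The steps above can be sketched end to end as follows. The stub depth maps and the kernel-free mean-mixing "blur" are illustrative assumptions standing in for the stereo depth algorithms and Gaussian blur that the method actually uses:

```python
import numpy as np

def background_blur(main_img, target_mask, fine_depth, coarse_depth, focus=0.0):
    """End-to-end sketch of the claimed flow: high-precision depth inside
    the detected target region, low-precision depth elsewhere, then a blur
    whose strength follows each pixel's depth behind the focus plane."""
    # Merge the two depth maps: fine where the target was detected,
    # coarse everywhere else.
    depth = np.where(target_mask, fine_depth, coarse_depth)
    # Background pixels lie beyond the focus plane; blur strength grows
    # with depth and is normalized to [0, 1].
    strength = np.clip(depth - focus, 0.0, None)
    strength /= max(strength.max(), 1e-9)
    # Mix each pixel toward the global mean in proportion to its strength
    # (a crude, kernel-free proxy for Gaussian blur).
    return (1.0 - strength) * main_img + strength * main_img.mean()
```

In-focus pixels (zero strength) pass through unchanged while the deepest background pixels are fully smoothed, mirroring the protection of the target object described above.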
To implement the above embodiments, the present application further provides a non-transitory computer-readable storage medium. When the instructions in the storage medium are executed by a processor, the background blurring processing method of the above embodiments can be performed.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict each other, those skilled in the art may combine different embodiments or examples described in this specification and the features of different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "multiple" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment or portion of code including one or more executable instructions for implementing the steps of a custom logic function or process. The scope of the preferred embodiments of the present application includes other implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in a flowchart or otherwise described herein, for example an ordered list of executable instructions for implementing a logic function, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that the parts of the present application may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one of the following technologies known in the art, or a combination thereof, may be used: a discrete logic circuit with logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art can understand that all or part of the steps carried by the above method embodiments can be completed by a program instructing relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist physically alone, or two or more units may be integrated in one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the present application.

Claims (10)

  1. A background blurring processing method, characterized by comprising:
    obtaining a master image captured by a main camera and a sub-image captured by a secondary camera;
    detecting whether a preset target object is present in the master image;
    if the target object is detected, determining a target area corresponding to the target object in the master image;
    calculating, according to the master image and the sub-image, first depth-of-field information of the target area using a preset first depth-of-field algorithm;
    obtaining second depth-of-field information of a non-target area in the master image using a preset second depth-of-field algorithm;
    blurring a background area of the target area according to the first depth-of-field information;
    blurring a background area of the non-target area according to the second depth-of-field information.
  2. The method according to claim 1, characterized in that detecting whether a preset target object is present in the master image comprises:
    detecting contour edges of the photographed scene in a foreground area of the master image;
    matching preset template information against the contour edges, and if the match succeeds, detecting that the preset target object is present in the master image.
  3. The method according to claim 1, characterized in that blurring the background area of the target area according to the first depth-of-field information comprises:
    determining first foreground-area depth-of-field information and first background-area depth-of-field information of the target area according to the first depth-of-field information and a focusing area of the master image;
    obtaining a base value of a first blurring degree according to the first foreground-area depth-of-field information and the first background-area depth-of-field information;
    determining a blurring coefficient of each pixel in the background area of the target area according to the base value of the first blurring degree and the first background-area depth-of-field information;
    performing Gaussian blur on the background area of the target area according to the blurring coefficient of each pixel.
  4. The method according to claim 3, characterized in that blurring the background area of the non-target area according to the second depth-of-field information comprises:
    determining second foreground-area depth-of-field information and second background-area depth-of-field information of the non-target area according to the second depth-of-field information and the focusing area of the master image;
    obtaining a base value of a second blurring degree according to the second foreground-area depth-of-field information and the second background-area depth-of-field information;
    performing Gaussian blur on the background area of the non-target area according to the base value of the second blurring degree.
  5. The method according to any one of claims 1-4, characterized in that, after detecting whether a preset target object is present in the master image, the method further comprises:
    if the target object is not detected, calculating third depth-of-field information of the master image using the second depth-of-field algorithm, wherein the computational precision of the second depth-of-field algorithm is lower than that of the first depth-of-field algorithm;
    blurring the background area of the master image according to the third depth-of-field information.
  6. A background blurring processing apparatus, characterized by comprising:
    a first acquisition module, configured to obtain a master image captured by a main camera and a sub-image captured by a secondary camera;
    a detection module, configured to detect whether a preset target object is present in the master image;
    a determining module, configured to determine, when the target object is detected, a target area corresponding to the target object in the master image;
    a second acquisition module, configured to calculate, according to the master image and the sub-image, first depth-of-field information of the target area using a preset first depth-of-field algorithm, and to obtain second depth-of-field information of a non-target area using a preset second depth-of-field algorithm;
    a processing module, configured to blur a background area of the target area according to the first depth-of-field information, and to blur a background area of the non-target area according to the second depth-of-field information.
  7. The apparatus according to claim 6, characterized in that the detection module comprises:
    a detection unit, configured to detect contour edges of the photographed scene in a foreground area of the master image;
    a recognition unit, configured to match preset template information against the contour edges and, if the match succeeds, to detect that the preset target object is present in the master image.
  8. The apparatus according to claim 6, characterized in that the processing module comprises:
    a first determination unit, configured to determine first foreground-area depth-of-field information and first background-area depth-of-field information of the target area according to the first depth-of-field information and a focusing area of the master image;
    an acquiring unit, configured to obtain a base value of a first blurring degree according to the first foreground-area depth-of-field information and the first background-area depth-of-field information;
    a second determination unit, configured to determine a blurring coefficient of each pixel in the background area of the target area according to the base value of the first blurring degree and the first background-area depth-of-field information;
    a processing unit, configured to perform Gaussian blur on the background area of the target area according to the blurring coefficient of each pixel.
  9. A computer device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the program, the background blurring processing method according to any one of claims 1-5 is implemented.
  10. A computer-readable storage medium on which a computer program is stored, characterized in that when the program is executed by a processor, the background blurring processing method according to any one of claims 1-5 is implemented.
CN201711242468.1A 2017-11-30 2017-11-30 Background blurring processing method, device and equipment Active CN107945105B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711242468.1A CN107945105B (en) 2017-11-30 2017-11-30 Background blurring processing method, device and equipment


Publications (2)

Publication Number Publication Date
CN107945105A true CN107945105A (en) 2018-04-20
CN107945105B CN107945105B (en) 2021-05-25

Family

ID=61948126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711242468.1A Active CN107945105B (en) 2017-11-30 2017-11-30 Background blurring processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN107945105B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109151314A (en) * 2018-09-10 2019-01-04 珠海格力电器股份有限公司 A kind of camera shooting virtualization processing method, device, storage medium and the terminal of terminal
CN110349080A (en) * 2019-06-10 2019-10-18 北京迈格威科技有限公司 A kind of image processing method and device
CN110363702A (en) * 2019-07-10 2019-10-22 Oppo(重庆)智能科技有限公司 Image processing method and Related product
CN110956577A (en) * 2018-09-27 2020-04-03 Oppo广东移动通信有限公司 Control method of electronic device, and computer-readable storage medium
CN111246093A (en) * 2020-01-16 2020-06-05 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111311482A (en) * 2018-12-12 2020-06-19 Tcl集团股份有限公司 Background blurring method and device, terminal equipment and storage medium
CN111803070A (en) * 2020-06-19 2020-10-23 浙江大华技术股份有限公司 Height measuring method and electronic equipment
CN112532882A (en) * 2020-11-26 2021-03-19 维沃移动通信有限公司 Image display method and device
WO2021114061A1 (en) * 2019-12-09 2021-06-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Electric device and method of controlling an electric device
CN113014791A (en) * 2019-12-20 2021-06-22 中兴通讯股份有限公司 Image generation method and device
CN114125296A (en) * 2021-11-24 2022-03-01 广东维沃软件技术有限公司 Image processing method, image processing device, electronic equipment and readable storage medium
WO2022198525A1 (en) * 2021-03-24 2022-09-29 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method of improving stability of bokeh processing and electronic device
CN115134532A (en) * 2022-07-26 2022-09-30 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751405A (en) * 2015-03-11 2015-07-01 百度在线网络技术(北京)有限公司 Method and device for blurring image
CN105303514A (en) * 2014-06-17 2016-02-03 腾讯科技(深圳)有限公司 Image processing method and apparatus
CN105578070A (en) * 2015-12-21 2016-05-11 深圳市金立通信设备有限公司 Image processing method and terminal
CN105979165A (en) * 2016-06-02 2016-09-28 广东欧珀移动通信有限公司 Blurred photos generation method, blurred photos generation device and mobile terminal
CN106060423A (en) * 2016-06-02 2016-10-26 广东欧珀移动通信有限公司 Bokeh photograph generation method and device, and mobile terminal
US20170070720A1 (en) * 2015-09-04 2017-03-09 Apple Inc. Photo-realistic Shallow Depth-of-Field Rendering from Focal Stacks
CN106993112A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 Background-blurring method and device and electronic installation based on the depth of field
CN107343144A (en) * 2017-07-10 2017-11-10 广东欧珀移动通信有限公司 Dual camera switching handling method, device and its equipment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIN XUETING ET AL: "Blur with depth: A depth cue method based on blur effect in augmented reality", 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) *
XIAO Jinsheng et al.: "Background blurring display based on depth information extraction from multi-focus images", Acta Automatica Sinica *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109151314A (en) * 2018-09-10 2019-01-04 珠海格力电器股份有限公司 A kind of camera shooting virtualization processing method, device, storage medium and the terminal of terminal
CN110956577A (en) * 2018-09-27 2020-04-03 Oppo广东移动通信有限公司 Control method of electronic device, and computer-readable storage medium
CN111311482A (en) * 2018-12-12 2020-06-19 Tcl集团股份有限公司 Background blurring method and device, terminal equipment and storage medium
CN110349080B (en) * 2019-06-10 2023-07-04 北京迈格威科技有限公司 Image processing method and device
CN110349080A (en) * 2019-06-10 2019-10-18 北京迈格威科技有限公司 A kind of image processing method and device
CN110363702A (en) * 2019-07-10 2019-10-22 Oppo(重庆)智能科技有限公司 Image processing method and Related product
CN110363702B (en) * 2019-07-10 2023-10-20 Oppo(重庆)智能科技有限公司 Image processing method and related product
CN114514735B (en) * 2019-12-09 2023-10-03 Oppo广东移动通信有限公司 Electronic apparatus and method of controlling the same
WO2021114061A1 (en) * 2019-12-09 2021-06-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Electric device and method of controlling an electric device
CN114514735A (en) * 2019-12-09 2022-05-17 Oppo广东移动通信有限公司 Electronic apparatus and method of controlling the same
CN113014791A (en) * 2019-12-20 2021-06-22 中兴通讯股份有限公司 Image generation method and device
CN113014791B (en) * 2019-12-20 2023-09-19 中兴通讯股份有限公司 Image generation method and device
CN111246093A (en) * 2020-01-16 2020-06-05 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111803070A (en) * 2020-06-19 2020-10-23 浙江大华技术股份有限公司 Height measuring method and electronic equipment
CN112532882B (en) * 2020-11-26 2022-09-16 维沃移动通信有限公司 Image display method and device
CN112532882A (en) * 2020-11-26 2021-03-19 维沃移动通信有限公司 Image display method and device
WO2022198525A1 (en) * 2021-03-24 2022-09-29 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method of improving stability of bokeh processing and electronic device
CN114125296A (en) * 2021-11-24 2022-03-01 广东维沃软件技术有限公司 Image processing method, image processing device, electronic equipment and readable storage medium
CN115134532A (en) * 2022-07-26 2022-09-30 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN107945105B (en) 2021-05-25

Similar Documents

Publication Publication Date Title
CN107945105A (en) Background blurring processing method, device and equipment
US10997696B2 (en) Image processing method, apparatus and device
CN107977940A (en) Background blurring processing method, device and equipment
JP6347675B2 (en) Image processing apparatus, imaging apparatus, image processing method, imaging method, and program
CN108076286A (en) Image blurring method, device, mobile terminal and storage medium
CN107948514B (en) Image blurring processing method, device, mobile device and computer storage medium
JP2020528700A (en) Methods and mobile terminals for image processing using dual cameras
JP2020528622A (en) Image processing methods, equipment and devices
CN108024058B (en) Image blurring processing method, device, mobile terminal and storage medium
CN108053363A (en) Background blurring processing method, device and equipment
CN107481186B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN108111749A (en) Image processing method and device
CN108024054A (en) Image processing method, device and equipment
CN107493432A (en) Image processing method, device, mobile terminal and computer-readable recording medium
JP2015197745A (en) Image processing apparatus, imaging apparatus, image processing method, and program
JP5779089B2 (en) Edge detection apparatus, edge detection program, and edge detection method
CN109194877A (en) Image compensation method and device, computer readable storage medium and electronic equipment
CN108053438A (en) Depth of field acquisition method, device and equipment
CN108024057A (en) Background blurring processing method, device and equipment
CN108093158A (en) Image blurring processing method, device and mobile equipment
CN107623814A (en) Sensitive information screening method and device for captured images
CN107563979A (en) Image processing method, device, computer-readable recording medium and computer equipment
CN107704798A (en) Image blurring method, device, computer-readable recording medium and computer equipment
CN107563329A (en) Image processing method, device, computer-readable recording medium and mobile terminal
CN108052883A (en) User photographing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong Province 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong Province 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant