CN107704798A - Image blurring method and device, computer-readable storage medium, and computer equipment - Google Patents
Image blurring method and device, computer-readable storage medium, and computer equipment
- Publication number
- Publication: CN107704798A (application CN201710676169.2A / CN201710676169A)
- Authority
- CN
- China
- Prior art keywords
- region
- image
- human face
- intensity
- virtualization
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 41
- 230000003313 weakening effect Effects 0.000 title claims abstract description 26
- 238000012545 processing Methods 0.000 claims abstract description 107
- 230000015654 memory Effects 0.000 claims description 24
- 230000008859 change Effects 0.000 claims description 14
- 238000001514 detection method Methods 0.000 claims description 4
- 238000003384 imaging method Methods 0.000 description 13
- 238000010586 diagram Methods 0.000 description 9
- 238000004040 coloring Methods 0.000 description 8
- 230000000694 effects Effects 0.000 description 4
- 238000012937 correction Methods 0.000 description 3
- 238000000605 extraction Methods 0.000 description 3
- 230000009471 action Effects 0.000 description 2
- 230000009977 dual effect Effects 0.000 description 2
- 239000000284 extract Substances 0.000 description 2
- 230000006870 function Effects 0.000 description 2
- 230000006641 stabilisation Effects 0.000 description 2
- 238000011105 stabilization Methods 0.000 description 2
- 230000008901 benefit Effects 0.000 description 1
- 230000033228 biological regulation Effects 0.000 description 1
- 239000000872 buffer Substances 0.000 description 1
- 238000004590 computer program Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 239000000539 dimer Substances 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 238000001228 spectrum Methods 0.000 description 1
- 239000000126 substance Substances 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
- 230000001960 triggered effect Effects 0.000 description 1
- 239000011800 void material Substances 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to an image blurring method and device, a computer-readable storage medium, and computer equipment. The method includes: obtaining an image to be processed; detecting a face region in the image to be processed, and obtaining physical distance information corresponding to the face region; obtaining a background blurring intensity according to the physical distance information, and blurring a background area in the image to be processed according to the background blurring intensity. The image blurring method and device, computer-readable storage medium, and computer equipment described above can improve the accuracy of image processing.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to an image blurring method and device, a computer-readable storage medium, and computer equipment.
Background
Photography has become an ever larger part of daily life, particularly as intelligent terminals have gained camera functions and photo-taking applications have become widespread. At the same time, whether for personal or commercial use, expectations for photo quality and user experience keep rising.
However, shooting scenes are often complex and changeable. To make the captured photo adapt to such scenes and highlight the subject so as to convey a sense of depth, a common processing method is to keep the subject sharp and blur the region outside the subject. Blurring means making the region other than the subject fuzzy so that the subject stands out. A traditional blurring method first identifies the subject in the image and then applies a fixed degree of blur to the region outside the subject, so that the background and the subject are displayed differently.
Summary of the invention
Embodiments of the present application provide an image blurring method and device, a computer-readable storage medium, and computer equipment that can improve the accuracy of image processing.
An image blurring method, the method including:
obtaining an image to be processed;
detecting a face region in the image to be processed, and obtaining physical distance information corresponding to the face region;
obtaining a background blurring intensity according to the physical distance information, and blurring a background area in the image to be processed according to the background blurring intensity.
An image blurring device, the device including:
an image obtaining module, configured to obtain an image to be processed;
an information obtaining module, configured to detect a face region in the image to be processed and obtain physical distance information corresponding to the face region;
a background blurring module, configured to obtain a background blurring intensity according to the physical distance information and to blur a background area in the image to be processed according to the background blurring intensity.
One or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processor(s) to perform the following steps:
obtaining an image to be processed;
detecting a face region in the image to be processed, and obtaining physical distance information corresponding to the face region;
obtaining a background blurring intensity according to the physical distance information, and blurring a background area in the image to be processed according to the background blurring intensity.
A computer device, including a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the following steps:
obtaining an image to be processed;
detecting a face region in the image to be processed, and obtaining physical distance information corresponding to the face region;
obtaining a background blurring intensity according to the physical distance information, and blurring a background area in the image to be processed according to the background blurring intensity.
The image blurring method and device, computer-readable storage medium, and computer equipment provided by the embodiments of the present application first detect the face region in the image to be processed, obtain the blurring intensity of the background area according to the physical distance information of the face region, and then blur the background area according to that blurring intensity. The physical distance information reflects the distance between the face and the lens, and different distances yield different blurring intensities, so the blurring is more accurate.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the internal structure of an electronic device in one embodiment;
Fig. 2 is a schematic diagram of the internal structure of a server in one embodiment;
Fig. 3 is a flowchart of an image blurring method in one embodiment;
Fig. 4 is a flowchart of an image blurring method in another embodiment;
Fig. 5 is a schematic diagram of obtaining physical distance information in one embodiment;
Fig. 6 is a schematic structural diagram of an image blurring device in one embodiment;
Fig. 7 is a schematic structural diagram of an image blurring device in another embodiment;
Fig. 8 is a schematic diagram of an image processing circuit in one embodiment.
Detailed description
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it.
It can be understood that the terms "first", "second", and the like used in the present invention may describe various elements, but these elements are not limited by the terms; the terms are only used to distinguish one element from another. For example, without departing from the scope of the present invention, a first client may be referred to as a second client, and similarly a second client may be referred to as a first client. Both the first client and the second client are clients, but they are not the same client.
Fig. 1 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in Fig. 1, the electronic device includes a processor, a non-volatile storage medium, an internal memory, a network interface, a display screen, and an input unit connected through a system bus. The non-volatile storage medium of the electronic device stores an operating system and computer-readable instructions. When executed by the processor, the computer-readable instructions implement an image blurring method. The processor provides computing and control capabilities to support the operation of the whole electronic device. The internal memory provides an environment for running the computer-readable instructions stored in the non-volatile storage medium. The network interface is used for network communication with a server, for example sending an image blurring request to the server and receiving the blurred image returned by the server. The display screen of the electronic device may be a liquid crystal display, an electronic-ink display, or the like; the input unit may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the housing of the electronic device, or an external keyboard, touchpad, or mouse. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like. A person skilled in the art will understand that the structure shown in Fig. 1 is only a block diagram of the part of the structure related to the solution of the present application and does not limit the electronic device to which the solution is applied; a specific electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
Fig. 2 is a schematic diagram of the internal structure of a server in one embodiment. As shown in Fig. 2, the server includes a processor, a non-volatile storage medium, an internal memory, and a network interface connected through a system bus. The non-volatile storage medium of the server stores an operating system and computer-readable instructions. When executed by the processor, the computer-readable instructions implement an image blurring method. The processor of the server provides computing and control capabilities to support the operation of the whole server. The network interface of the server communicates with an external terminal over a network connection, for example receiving an image blurring request sent by a terminal and returning the blurred image to the terminal. The server may be implemented as an independent server or as a server cluster composed of multiple servers. A person skilled in the art will understand that the structure shown in Fig. 2 is only a block diagram of the part of the structure related to the solution of the present application and does not limit the server to which the solution is applied; a specific server may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
Fig. 3 is a flowchart of an image blurring method in one embodiment. As shown in Fig. 3, the image blurring method includes steps 302 to 306, where:
Step 302: obtain an image to be processed.
In the embodiments provided by the present application, the image to be processed is an image that needs blurring, and it may be acquired by an image acquisition device. An image acquisition device is a device for collecting images, for example a camera, or the camera or video camera of a mobile terminal. After a user terminal receives an image blurring instruction, it may blur the image to be processed directly on the user terminal, or it may send an image blurring request to a server and have the image to be processed blurred on the server.
It can be understood that the image blurring instruction may be entered by the user or triggered automatically by the user terminal. For example, the user enters a photographing instruction through the user terminal; after detecting the photographing instruction, the mobile terminal captures the image to be processed through the camera, then automatically triggers generation of an image blurring instruction, and blurs the image to be processed. The photographing instruction may be triggered by a physical button of the mobile terminal, a touch operation, a voice instruction, or the like.
Step 304: detect the face region in the image to be processed, and obtain physical distance information corresponding to the face region.
In one embodiment, the face region is the region of the image to be processed where a face is located, and the physical distance information is a parameter representing the physical distance between the image acquisition device and the object corresponding to each pixel in the image to be processed. The physical distance information corresponding to the face region is therefore a parameter representing the physical distance between the image acquisition device and the face.
Specifically, feature points in the image to be processed can first be identified, and the extracted feature points are then matched against a preset face model. If the extracted feature points match the preset face model, the region where those feature points are located is taken as the face region.
In the embodiments provided by the present application, the image to be processed is composed of a number of pixels, and each pixel has corresponding physical distance information, which represents the physical distance from the object represented by that pixel to the image acquisition device.
It can be understood that the image to be processed may contain multiple face regions. After the face regions in the image to be processed are detected, the region area corresponding to each face region can be obtained, the physical distance information corresponding to the face region with the largest area can be obtained, and the background blurring intensity can be obtained according to the physical distance information corresponding to the face region with the largest area. The region area is the size of a face region; it may be expressed as the number of pixels contained in the face region, or as the ratio of the area occupied by the face region to the size of the image to be processed.
Generally, when measuring the physical distance of objects there is an effective measurement range: the physical distance information of objects within that range can be acquired accurately, while objects beyond it cannot be measured accurately. The span of the effective range differs with the hardware and can be adjusted by the hardware. Therefore, physical distance information within the effective range can be represented by an accurate value, while physical distance information beyond the effective range is represented by a fixed value.
In other words, only face regions within the effective range can be detected, and face regions beyond that range cannot be obtained. Step 304 may then include: detecting the face region within a preset distance range in the image to be processed, and obtaining the physical distance information corresponding to the face region. The preset distance range may be, but is not limited to, the effective span of the physical distance information.
Step 306: obtain a background blurring intensity according to the physical distance information, and blur the background area in the image to be processed according to the background blurring intensity.
In the embodiments provided by the present invention, blurring means applying fuzzy processing to the image according to a blurring intensity: a different blurring intensity produces a different degree of blur. The background area may be the part of the image to be processed other than the face region or the portrait region, where the portrait region is the region occupied by the whole person in the image to be processed.
The background blurring intensity is a parameter representing the degree to which the background area is blurred. The background blurring intensity is obtained according to the physical distance information of the face region, and the background area is then blurred according to that intensity, so the blurring result changes with the actual physical distance between the face and the image acquisition device. Generally, the larger the physical distance information, the smaller the background blurring intensity and the lighter the blur applied to the background area; the smaller the physical distance information, the larger the background blurring intensity and the heavier the blur applied to the background area.
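The description fixes only the direction of the relationship (the larger the physical distance, the weaker the background blur), not a concrete mapping. The sketch below assumes a simple inverse mapping and uses a Gaussian kernel size to stand in for the blurring intensity; both choices are assumptions, not part of the specification.

```python
import cv2
import numpy as np

def blur_background(image, background_mask, face_distance_m,
                    max_kernel=31, reference_distance_m=0.5):
    # Assumed mapping: blurring intensity falls as the face moves away from
    # the lens; only the monotonic direction comes from the description.
    intensity = min(1.0, reference_distance_m / max(face_distance_m, 1e-3))
    kernel = max(1, int(max_kernel * intensity)) | 1   # force an odd size
    blurred = cv2.GaussianBlur(image, (kernel, kernel), 0)
    # Keep the subject sharp and replace only the background pixels.
    mask3 = np.repeat(background_mask[:, :, None], 3, axis=2)
    return np.where(mask3, blurred, image)
```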
With the image blurring method described above, the face region in the image to be processed is detected first, the blurring intensity of the background area is obtained according to the physical distance information of the face region, and the background area is then blurred according to that intensity. The physical distance information reflects the distance between the face and the lens, different distances yield different blurring intensities, and the degree of blur changes with the physical distance information, so the blurring effect adapts to different shooting scenes and the blurring is more accurate.
Fig. 4 is a flowchart of an image blurring method in another embodiment. As shown in Fig. 4, the image blurring method includes steps 402 to 410, where:
Step 402: obtain an image to be processed.
In the embodiments provided by the present application, the image to be processed may be obtained directly from local storage or from a server. Specifically, after receiving an image blurring instruction, the user terminal may fetch the corresponding image to be processed according to the image storage address and the image identifier contained in the image blurring instruction. The image storage address may be local to the user terminal or on a server. After the image to be processed is obtained, it may be blurred locally or on the server.
Step 404: detect the face regions in the image to be processed, and obtain the physical distance information corresponding to each face region in the image to be processed.
In the embodiments provided by the present invention, a dual camera may be installed on the image acquisition device, and the physical distance information between the image acquisition device and the object is measured through the dual camera. Specifically, images of the object are shot by a first camera and a second camera respectively; a first angle and a second angle are obtained from those images, where the first angle is the angle at the first camera between the line to the object and the horizontal line from the first camera to the second camera, and the second angle is the angle at the second camera between the line to the object and the horizontal line from the second camera to the first camera. The physical distance information between the image acquisition device and the object is then obtained from the first angle, the second angle, and the distance between the first camera and the second camera.
Fig. 5 is a schematic diagram of obtaining physical distance information in one embodiment. As shown in Fig. 5, images of the object 506 are shot by the first camera 502 and the second camera 504 respectively, and the first angle A1 and the second angle A2 can be obtained from those images. Using the first angle A1, the second angle A2, and the distance T between the first camera 502 and the second camera 504, the physical distance D between the object 506 and the horizontal line through the first camera 502 and the second camera 504 can be obtained.
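Fig. 5 names only the two base angles and the camera baseline; the distance itself follows from elementary triangulation. The sketch below assumes A1 and A2 are measured between the baseline and the lines of sight to the object, as described above.

```python
import math

def distance_from_angles(baseline_t_m, angle_a1_deg, angle_a2_deg):
    # Triangulation over the baseline T between the two cameras: the
    # perpendicular distance D from the baseline to the object satisfies
    #   D = T / (cot(A1) + cot(A2)).
    a1 = math.radians(angle_a1_deg)
    a2 = math.radians(angle_a2_deg)
    return baseline_t_m / (1.0 / math.tan(a1) + 1.0 / math.tan(a2))

# For example, a 2 cm baseline with base angles of 88 degrees on both sides
# places the object roughly 0.29 m from the line through the two cameras.
```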
It can be understood that a scene often contains multiple people, so the image to be processed may also contain multiple face regions. Each face region in the image to be processed is extracted, and the physical distance information corresponding to it is obtained. Generally, when an image of a scene is captured, a depth map corresponding to that scene can be obtained at the same time. The depth map corresponds to the image pixel for pixel, and the values in the depth map represent the physical distance information of the corresponding pixels in the image. That is, the depth map corresponding to the image to be processed is obtained while the image itself is obtained; after the face regions in the image to be processed are detected, the physical distance information corresponding to the pixel coordinates of a face region can be read from the depth map.
In one embodiment, since every pixel has corresponding physical distance information and a face region contains multiple pixels, after the physical distance information of each pixel in the face region is obtained, the physical distance information of all pixels in the face region may be averaged, or the physical distance information of a particular pixel may be used, to represent the physical distance information of that face region. For example, the physical distance information corresponding to the centre pixel of the face region may be taken to represent the physical distance information of the face region.
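A minimal sketch of the two aggregation choices just described, assuming the depth map is aligned pixel for pixel with the image to be processed and that face regions are given as bounding boxes:

```python
import numpy as np

def face_distance(depth_map, face_box, use_mean=True):
    # face_box = (x, y, w, h), the bounding box of one detected face region.
    x, y, w, h = face_box
    region = depth_map[y:y + h, x:x + w]
    if use_mean:
        # Average the physical distance information over every pixel
        # of the face region.
        return float(np.mean(region))
    # Alternatively, let the centre pixel stand for the whole region.
    return float(depth_map[y + h // 2, x + w // 2])
```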
Step 406: obtain a background blurring intensity according to the physical distance information, and blur the background area in the image to be processed according to the background blurring intensity.
In one embodiment, the portrait and the face can be regarded as lying in the same vertical plane, so the physical distance from the portrait to the image acquisition device and the physical distance from the face to the image acquisition device fall within the same range. Therefore, after the physical distance information of the face region is obtained, the portrait region in the image to be processed can be obtained from the physical distance corresponding to the face region, and the background area can then be determined in the image to be processed from the portrait region.
Specifically, the face region in the image to be processed is detected, a portrait distance range is obtained from the physical distance information corresponding to the face region, the portrait region in the image to be processed is obtained from the portrait distance range, and the background area is then obtained from the portrait region. The portrait distance range is the span of physical distance information corresponding to the portrait region in the image to be processed. Since the physical distance from the image acquisition device to the face can be regarded as equal to the physical distance to the portrait, after the face region is detected and its physical distance information is obtained, the range of physical distance information corresponding to the portrait region can be determined from it: physical distance information within that range is regarded as belonging to the portrait region, and physical distance information outside that range is regarded as belonging to the background area.
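As a rough illustration of that split, the sketch below assumes a symmetric tolerance band around the face distance as the portrait distance range; the width of the band is an assumption, since the specification does not fix it:

```python
import numpy as np

def split_portrait_background(depth_map, face_distance_m, tolerance_m=0.3):
    # Pixels whose physical distance information falls inside the portrait
    # distance range are treated as portrait; everything else is background.
    portrait_mask = np.abs(depth_map - face_distance_m) <= tolerance_m
    return portrait_mask, ~portrait_mask
```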
Further, before step 406 the method may also include: obtaining the portrait distance range according to the physical distance information corresponding to the face region, and obtaining the image region of the image to be processed whose physical distance information falls within the portrait distance range; obtaining the color information of that image region, and obtaining the background area of the image to be processed other than the portrait region according to the color information. The image region extracted according to the portrait distance range is the region of the image to be processed occupied by objects within the same physical distance range as the face; if there are other objects beside the person, the extracted image region may contain objects other than the portrait region. In that case the portrait region can be further extracted according to the color information of the image region.
In the embodiments provided by the present invention, color information refers to parameters representing the color of the image, for example the hue, saturation, and lightness of colors in the image. The hue of a color is measured as an angle with a range of 0° to 360°, counted counterclockwise from red: red is 0°, green is 120°, and blue is 240°. Saturation refers to how close a color is to the pure spectral color; generally, the higher the saturation, the more vivid the color, and the lower the saturation, the duller the color. Lightness represents how bright the color is.
Different objects usually have different color characteristics, i.e. the color information they present in the image differs. For example, trees are green, the sky is blue, and the ground is largely yellow. Using the color information of the image region, the portrait region and the background area outside the portrait region can be extracted.
Specifically, the color components of the image region are obtained, and the region of the image region whose color components fall within a preset range is extracted as the portrait region. A color component is a component of the image obtained by converting the image to be processed into a particular color dimension; for example, the color components may be the RGB color components, CMY color components, or HSV color components of the image. It can be understood that RGB, CMY, and HSV color components can be converted into one another.
In one embodiment, the HSV color components of the image region are obtained, and the region of the image region whose HSV color components fall within preset ranges is extracted as the portrait region. The HSV color components are the hue (H), saturation (S), and lightness (V) components of the image; a preset range is set for each of the three components, and the region of the image region where all three components fall within their preset ranges is extracted as the portrait region.
For example, to obtain the portrait region through the HSV color components, the HSV color components of the image region may be obtained, and the region of the image region satisfying the condition "H between 20 and 25, S between 10 and 50, V between 50 and 85" may be taken as the portrait region.
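A sketch of that HSV-based extraction, reusing the example thresholds quoted above. Note that OpenCV stores H in 0-179 (degrees halved) and S, V in 0-255, so the example ranges are rescaled here; treating S and V as percentages is an assumption about the convention used in the description.

```python
import cv2
import numpy as np

def extract_portrait_mask(image_region):
    # Example thresholds from the description: H 20-25, S 10-50, V 50-85.
    hsv = cv2.cvtColor(image_region, cv2.COLOR_BGR2HSV)
    lower = np.array([20 // 2, int(0.10 * 255), int(0.50 * 255)], dtype=np.uint8)
    upper = np.array([25 // 2, int(0.50 * 255), int(0.85 * 255)], dtype=np.uint8)
    # Non-zero mask pixels are taken to belong to the portrait region.
    return cv2.inRange(hsv, lower, upper)
```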
In one embodiment, step 406 may include: obtaining the region area corresponding to each face region, obtaining the background blurring intensity according to the physical distance information and the region areas, and blurring the background area in the image to be processed according to the background blurring intensity.
If multiple face regions are obtained, each face region has corresponding physical distance information, and the background blurring intensity is obtained according to the obtained physical distance information. Further, the region area corresponding to each face region may be obtained first, and the background blurring intensity obtained according to the region areas and the physical distance information. For example, after multiple face regions are obtained, the background blurring intensity may be obtained according to the physical distance information corresponding to the face region with the largest or smallest area. Alternatively, the physical distance information corresponding to each face region may be obtained and the background blurring intensity obtained according to the average of the physical distance information corresponding to the face regions.
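A minimal sketch of the multi-face case, assuming the variant that keys the background blur on the face region with the largest area; the inverse mapping from distance to intensity is the same assumption used earlier and is not prescribed by the specification.

```python
def background_blur_intensity(face_boxes, face_distances_m,
                              reference_distance_m=0.5):
    # face_boxes[i] = (x, y, w, h); face_distances_m[i] is the physical
    # distance information of that face region.
    areas = [w * h for (_, _, w, h) in face_boxes]
    dominant = max(range(len(areas)), key=lambda i: areas[i])
    distance = face_distances_m[dominant]
    # Assumed mapping: the further the dominant face, the weaker the blur.
    return min(1.0, reference_distance_m / max(distance, 1e-3))
```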
In one embodiment, there is a correspondence between physical distance information and background blurring intensity: after the physical distance information is obtained, the background blurring intensity can be obtained according to the physical distance information and the correspondence, and the background area is then blurred according to the background blurring intensity.
Step 408: according to the physical distance information corresponding to each face region, obtain the portrait blurring intensity corresponding to each face region in the image to be processed.
In one embodiment, after multiple face regions are obtained, the portrait regions corresponding to the face regions can also be blurred. The portrait blurring intensity is obtained according to the physical distance information corresponding to the face region, and it indicates the degree to which the portrait region is blurred.
Step 410: blur the portrait region corresponding to the face region according to the portrait blurring intensity.
Further, the region area corresponding to each face region is obtained, the face region with the largest area is taken as the base region, and the face regions other than the base region are taken as face blurring regions. The portrait blurring intensity corresponding to each face blurring region is obtained according to the physical distance information corresponding to the base region and that face blurring region, and the portrait region corresponding to the face blurring region is blurred according to that portrait blurring intensity. At the same time, the background blurring intensity is obtained according to the physical distance information corresponding to the base region.
That is, the face regions are divided into a base region and face blurring regions according to their region areas, and different degrees of blur are applied to the base region and the face blurring regions. For example, the portrait region corresponding to the base region is not blurred, while the portrait regions corresponding to the face blurring regions are blurred. Taking the physical distance information corresponding to the base region as the reference, the portrait blurring intensity corresponding to each face blurring region is obtained.
As an example, assume the image to be processed contains three face regions A, B, and C with corresponding physical distance information Da, Db, and Dc, and that region A has the largest area. Region A is then taken as the base region, and regions B and C as face blurring regions. Because there is a correspondence between physical distance information and background blurring intensity, the background blurring intensity can be obtained once the physical distance information of region A is known. This background blurring intensity, denoted X, represents the intensity with which the background area is blurred; the portrait blurring intensities Xb and Xc of the portrait regions corresponding to regions B and C can then be calculated from X together with the physical distance information Da, Db, and Dc.
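The specific equation for Xb and Xc is not reproduced in this text. Purely as an illustration of the idea that faces further from the base region's distance receive a blur closer to the full background intensity, one assumed scaling is sketched below; it is not the patent's own formula.

```python
def face_blur_intensities(base_distance_m, face_distances_m, background_x):
    # Assumed scaling, for illustration only: a face at the same distance as
    # the base region gets no blur, and the blur approaches the background
    # intensity X as the distance gap to the base region widens.
    intensities = []
    for d in face_distances_m:
        gap = abs(d - base_distance_m) / max(d, base_distance_m)
        intensities.append(background_x * min(1.0, gap))
    return intensities

# For example, X = 0.8, Da = 0.5 m, Db = 1.0 m, Dc = 2.0 m gives
# Xb = 0.4 and Xc = 0.6 under this assumed scaling.
```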
With the image blurring method described above, the face region in the image to be processed is detected first, the blurring intensity of the background area is obtained according to the physical distance information of the face region, and the background area is then blurred according to that intensity. The physical distance information reflects the distance between the face and the lens, different distances yield different blurring intensities, and the degree of blur changes with the physical distance information, so the blurring effect adapts to different shooting scenes and the blurring is more accurate. Moreover, the face regions are divided into a base region and face blurring regions, and different face regions are blurred differently, which further improves the accuracy of the blurring.
Fig. 6 is a schematic structural diagram of an image blurring device in one embodiment. The image blurring device 600 includes an image obtaining module 602, an information obtaining module 604, and a background blurring module 606. Specifically:
The image obtaining module 602 is configured to obtain an image to be processed.
The information obtaining module 604 is configured to detect the face region in the image to be processed and obtain the physical distance information corresponding to the face region.
The background blurring module 606 is configured to obtain a background blurring intensity according to the physical distance information and to blur the background area in the image to be processed according to the background blurring intensity.
The image blurring device described above first detects the face region in the image to be processed, obtains the blurring intensity of the background area according to the physical distance information of the face region, and then blurs the background area according to that intensity. The physical distance information reflects the distance between the face and the lens, different distances yield different blurring intensities, and the degree of blur changes with the physical distance information, so the blurring effect adapts to different shooting scenes and the blurring is more accurate.
Fig. 7 is a schematic structural diagram of an image blurring device in another embodiment. The image blurring device 700 includes an image obtaining module 702, an information obtaining module 704, a background blurring module 706, a region obtaining module 708, an intensity obtaining module 710, and a portrait blurring module 712. Specifically:
The image obtaining module 702 is configured to obtain an image to be processed.
The information obtaining module 704 is configured to detect the face regions in the image to be processed and obtain the physical distance information corresponding to each face region in the image to be processed.
The background blurring module 706 is configured to obtain a background blurring intensity according to the physical distance information and to blur the background area in the image to be processed according to the background blurring intensity.
The region obtaining module 708 is configured to obtain the region area corresponding to each face region, take the face region with the largest area as the base region, and take the face regions other than the base region as face blurring regions.
The intensity obtaining module 710 is configured to obtain the portrait blurring intensity corresponding to each face blurring region according to the physical distance information corresponding to the base region and that face blurring region.
The portrait blurring module 712 is configured to blur the portrait region corresponding to each face blurring region according to the portrait blurring intensity.
The image blurring device described above first detects the face region in the image to be processed, obtains the blurring intensity of the background area according to the physical distance information of the face region, and then blurs the background area according to that intensity. The physical distance information reflects the distance between the face and the lens, different distances yield different blurring intensities, and the degree of blur changes with the physical distance information, so the blurring effect adapts to different shooting scenes and the blurring is more accurate. Moreover, the face regions are divided into a base region and face blurring regions, and different face regions are blurred differently, which further improves the accuracy of the blurring.
In another embodiment, the information obtaining module 704 is further configured to detect the face region in the image to be processed and obtain the physical distance information corresponding to the face region.
In the embodiments provided by the present application, the background blurring module 706 is further configured to obtain the region area corresponding to each face region, obtain the background blurring intensity according to the physical distance information and the region areas, and blur the background area in the image to be processed according to the background blurring intensity.
In one embodiment, the intensity obtaining module 710 is further configured to obtain, according to the physical distance information corresponding to each face region, the portrait blurring intensity corresponding to each face region in the image to be processed.
In one of the embodiments, the portrait blurring module 712 is configured to blur the portrait region corresponding to the face region according to the portrait blurring intensity.
The division into the modules above is only for illustration; in other embodiments, the image blurring device may be divided into different modules as needed to complete all or part of the functions of the image blurring device described above.
An embodiment of the present invention also provides a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processor(s) to perform the following steps:
obtaining an image to be processed;
detecting a face region in the image to be processed, and obtaining physical distance information corresponding to the face region;
obtaining a background blurring intensity according to the physical distance information, and blurring a background area in the image to be processed according to the background blurring intensity.
In one embodiment, detecting the face region in the image to be processed and obtaining the physical distance information corresponding to the face region, as executed by the processor, includes:
detecting the face regions in the image to be processed, and obtaining the physical distance information corresponding to each face region in the image to be processed.
In other embodiments provided by the present application, obtaining the background blurring intensity according to the physical distance information and blurring the background area in the image to be processed according to the background blurring intensity, as executed by the processor, includes:
obtaining the region area corresponding to each face region, obtaining the background blurring intensity according to the physical distance information and the region areas, and blurring the background area in the image to be processed according to the background blurring intensity.
In another embodiment, the method executed by the processor further includes:
obtaining, according to the physical distance information corresponding to each face region, the portrait blurring intensity corresponding to each face region in the image to be processed;
blurring the portrait region corresponding to the face region according to the portrait blurring intensity.
In one of the embodiments, the method executed by the processor further includes:
obtaining the region area corresponding to each face region, taking the face region with the largest area as the base region, and taking the face regions other than the base region as face blurring regions;
obtaining, according to the physical distance information corresponding to each face region, the portrait blurring intensity corresponding to each face region in the image to be processed includes:
obtaining the portrait blurring intensity corresponding to each face blurring region according to the physical distance information corresponding to the base region and that face blurring region;
and blurring the portrait region corresponding to the face region according to the portrait blurring intensity includes:
blurring the portrait region corresponding to the face blurring region according to the portrait blurring intensity.
An embodiment of the present invention also provides a computer device. The computer device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 8 is a schematic diagram of an image processing circuit in one embodiment. As shown in Fig. 8, for ease of description, only the aspects of the image processing technique related to the embodiments of the present invention are shown.
As shown in Fig. 8, the image processing circuit includes an ISP processor 840 and a control logic device 850. Image data captured by the imaging device 810 is first processed by the ISP processor 840, which analyzes the image data to capture image statistics that can be used to determine one or more control parameters of the imaging device 810. The imaging device 810 may include a camera with one or more lenses 812 and an image sensor 814. The image sensor 814 may include a color filter array (for example a Bayer filter); it can obtain the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that can be processed by the ISP processor 840. The sensor 820 (for example a gyroscope) may supply parameters for image processing (for example stabilization parameters) to the ISP processor 840 based on the interface type of the sensor 820. The sensor 820 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of these interfaces.
In addition, the image sensor 814 may also send the raw image data to the sensor 820; the sensor 820 may then provide the raw image data to the ISP processor 840 for processing based on the interface type of the sensor 820, or store the raw image data in the image memory 830.
The ISP processor 840 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 840 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed at the same or different bit-depth precision.
The ISP processor 840 may also receive pixel data from the image memory 830. For example, the sensor 820 interface sends the raw image data to the image memory 830, and the raw image data in the image memory 830 is then provided to the ISP processor 840 for processing. The image memory 830 may be part of a memory device, a storage device, or a separate dedicated memory within the electronic device, and may include DMA (Direct Memory Access) features.
When receiving raw image data from the image sensor 814 interface, from the sensor 820 interface, or from the image memory 830, the ISP processor 840 may perform one or more image processing operations, for example temporal filtering. The processed image data may be sent to the image memory 830 for further processing before being displayed. The ISP processor 840 receives the processed data from the image memory 830 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to the display 880 for viewing by the user and/or further processed by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 840 may also be sent to the image memory 830, and the display 880 may read image data from the image memory 830. In one embodiment, the image memory 830 may be configured to implement one or more frame buffers. The output of the ISP processor 840 may also be sent to an encoder/decoder 870 to encode/decode the image data; the encoded image data may be saved and decompressed before being displayed on the display 880 device.
The image data processed by the ISP may be sent to a blurring module 860 so that the image is blurred before being displayed. The blurring performed by the blurring module 860 on the image data may include obtaining a background blurring intensity according to the physical distance information and blurring the background area in the image data according to the background blurring intensity. After blurring the image data, the blurring module 860 may send the blurred image data to the encoder/decoder 870 to encode/decode the image data; the encoded image data may be saved and decompressed before being displayed on the display 880 device. It can be understood that the image data processed by the blurring module 860 may also be sent directly to the display 880 for display without passing through the encoder/decoder 870. The image data processed by the ISP processor 840 may also be processed by the encoder/decoder 870 first and then by the blurring module 860. The blurring module 860 or the encoder/decoder 870 may be a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or the like in the mobile terminal.
The statistics determined by the ISP processor 840 may be sent to the control logic device 850. For example, the statistics may include statistical information of the image sensor 814 such as automatic exposure, automatic white balance, automatic focusing, flicker detection, black level compensation, and shading correction of the lens 812. The control logic device 850 may include a processor and/or microcontroller that executes one or more routines (for example firmware); based on the received statistics, the routines may determine the control parameters of the imaging device 810 and the control parameters of the ISP processor 840. For example, the control parameters of the imaging device 810 may include sensor 820 control parameters (for example gain, integration time for exposure control, stabilization parameters), camera flash control parameters, lens 812 control parameters (for example focus or zoom focal length), or combinations of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example during RGB processing), as well as lens 812 shading correction parameters.
The following are the steps of implementing the image blurring method with the image processing technique of Fig. 8:
obtaining an image to be processed;
detecting a face region in the image to be processed, and obtaining physical distance information corresponding to the face region;
obtaining a background blurring intensity according to the physical distance information, and blurring a background area in the image to be processed according to the background blurring intensity.
In one embodiment, detecting the face region in the image to be processed and obtaining the physical distance information corresponding to the face region includes:
detecting the face regions in the image to be processed, and obtaining the physical distance information corresponding to each face region in the image to be processed.
In other embodiments provided by the present application, obtaining the background blurring intensity according to the physical distance information and blurring the background area in the image to be processed according to the background blurring intensity includes:
obtaining the region area corresponding to each face region, obtaining the background blurring intensity according to the physical distance information and the region areas, and blurring the background area in the image to be processed according to the background blurring intensity.
In another embodiment, the method further includes:
obtaining, according to the physical distance information corresponding to each face region, the portrait blurring intensity corresponding to each face region in the image to be processed;
blurring the portrait region corresponding to the face region according to the portrait blurring intensity.
In one of the embodiments, the method further includes:
obtaining the region area corresponding to each face region, taking the face region with the largest area as the base region, and taking the face regions other than the base region as face blurring regions;
obtaining, according to the physical distance information corresponding to each face region, the portrait blurring intensity corresponding to each face region in the image to be processed includes:
obtaining the portrait blurring intensity corresponding to each face blurring region according to the physical distance information corresponding to the base region and that face blurring region;
and blurring the portrait region corresponding to the face region according to the portrait blurring intensity includes:
blurring the portrait region corresponding to the face blurring region according to the portrait blurring intensity.
A person of ordinary skill in the art will appreciate that all or part of the flows in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware. The program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the flows of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or the like.
The embodiments described above express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that a person of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be determined by the appended claims.
Claims (12)
1. An image blurring method, characterized in that the method comprises:
obtaining an image to be processed;
detecting a face region in the image to be processed, and obtaining physical distance information corresponding to the face region;
obtaining a background blurring intensity according to the physical distance information, and blurring a background area in the image to be processed according to the background blurring intensity.
2. The image blurring method according to claim 1, characterised in that detecting the face region in the image to be processed and obtaining the physical distance information corresponding to the face region comprises:
detecting the face regions in the image to be processed, and obtaining the physical distance information corresponding to each face region in the image to be processed.
3. The image blurring method according to claim 2, characterised in that obtaining the background blurring intensity according to the physical distance information, and blurring the background region of the image to be processed according to the background blurring intensity, comprises:
obtaining the region area corresponding to each face region, obtaining the background blurring intensity according to the physical distance information and the region areas, and blurring the background region of the image to be processed according to the background blurring intensity.
4. The image blurring method according to claim 2, characterised in that the method further comprises:
obtaining, according to the physical distance information corresponding to each face region, a portrait blurring intensity corresponding to each face region in the image to be processed;
blurring the portrait region corresponding to the face region according to the portrait blurring intensity.
5. The image blurring method according to claim 4, characterised in that the method further comprises:
obtaining the region area corresponding to each face region, taking the face region with the largest region area as a base region, and taking the face regions other than the base region as face blurring regions;
wherein obtaining, according to the physical distance information corresponding to each face region, the portrait blurring intensity corresponding to each face region in the image to be processed comprises:
obtaining the portrait blurring intensity corresponding to a face blurring region according to the physical distance information corresponding to the base region and the face blurring region;
and blurring the portrait region corresponding to the face region according to the portrait blurring intensity comprises:
blurring the portrait region corresponding to the face blurring region according to the portrait blurring intensity.
6. An image blurring device, characterised in that the device comprises:
an image acquisition module, configured to obtain an image to be processed;
an information acquisition module, configured to detect a face region in the image to be processed and obtain physical distance information corresponding to the face region;
a background blurring module, configured to obtain a background blurring intensity according to the physical distance information, and blur a background region of the image to be processed according to the background blurring intensity.
7. The image blurring device according to claim 6, characterised in that the information acquisition module is further configured to detect the face regions in the image to be processed and obtain the physical distance information corresponding to each face region in the image to be processed.
8. The image blurring device according to claim 7, characterised in that the background blurring module is further configured to obtain the region area corresponding to each face region, obtain the background blurring intensity according to the physical distance information and the region areas, and blur the background region of the image to be processed according to the background blurring intensity.
9. The image blurring device according to claim 7, characterised in that the device further comprises:
an intensity acquisition module, configured to obtain, according to the physical distance information corresponding to each face region, a portrait blurring intensity corresponding to each face region in the image to be processed;
a portrait blurring module, configured to blur the portrait region corresponding to the face region according to the portrait blurring intensity.
10. The image blurring device according to claim 9, characterised in that the device further comprises:
a region acquisition module, configured to obtain the region area corresponding to each face region, take the face region with the largest region area as a base region, and take the face regions other than the base region as face blurring regions;
wherein the intensity acquisition module is further configured to obtain the portrait blurring intensity corresponding to a face blurring region according to the physical distance information corresponding to the base region and the face blurring region;
and the portrait blurring module is further configured to blur the portrait region corresponding to the face blurring region according to the portrait blurring intensity.
11. one or more includes the non-volatile computer readable storage medium storing program for executing of computer executable instructions, when the calculating
When machine executable instruction is executed by one or more processors so that the computing device such as any one of claim 1 to 5
Described image weakening method.
12. A computer device, comprising a memory and a processor, the memory storing computer-readable instructions, characterised in that, when the instructions are executed by the processor, the processor is caused to perform the image blurring method according to any one of claims 1 to 5.
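Claims 2 and 7 hinge on producing, for every detected face region, a physical distance; the sketches shown with the embodiments above assume that list already exists. The self-contained snippet below shows one way such a list could be built, using OpenCV's bundled Haar cascade and a per-pixel depth map (for example from a dual camera or depth sensor). The detector, the rectangular masks and the use of the median depth are assumptions of this sketch rather than details fixed by the claims.

```python
import cv2
import numpy as np

def detect_faces_with_distance(image_bgr, depth_map_m):
    """Return one record per detected face: pixel area, mask and physical distance.

    depth_map_m: per-pixel distance in metres with the same height and width as
    the image; how it is produced (dual camera, depth sensor, ...) is outside
    the scope of this sketch.
    """
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    records = []
    for (x, y, w, h) in faces:
        mask = np.zeros(gray.shape, dtype=np.uint8)
        mask[y:y + h, x:x + w] = 1
        records.append({
            'area': int(w * h),            # region area of the face region
            'mask': mask,                  # rectangular stand-in for the region
            # Median depth inside the rectangle stands in for the physical
            # distance information of this face region.
            'distance': float(np.median(depth_map_m[y:y + h, x:x + w])),
        })
    return records
```

The returned records use the same keys as the base-region sketch shown earlier, so the two illustrations can be chained; any face detector or depth source with equivalent output would serve.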
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710676169.2A CN107704798B (en) | 2017-08-09 | 2017-08-09 | Image blurring method and device, computer readable storage medium and computer device |
PCT/CN2018/099403 WO2019029573A1 (en) | 2017-08-09 | 2018-08-08 | Image blurring method, computer-readable storage medium and computer device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710676169.2A CN107704798B (en) | 2017-08-09 | 2017-08-09 | Image blurring method and device, computer readable storage medium and computer device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107704798A (en) | 2018-02-16 |
CN107704798B CN107704798B (en) | 2020-06-12 |
Family
ID=61170965
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710676169.2A Active CN107704798B (en) | 2017-08-09 | 2017-08-09 | Image blurring method and device, computer readable storage medium and computer device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107704798B (en) |
WO (1) | WO2019029573A1 (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6495122B2 (en) * | 2015-07-02 | 2019-04-03 | オリンパス株式会社 | Imaging apparatus and image processing method |
CN106875348B (en) * | 2016-12-30 | 2019-10-18 | 成都西纬科技有限公司 | Refocusing image processing method |
CN106952222A (en) * | 2017-03-17 | 2017-07-14 | 成都通甲优博科技有限责任公司 | Interactive image blurring method and device |
CN107704798B (en) * | 2017-08-09 | 2020-06-12 | Oppo广东移动通信有限公司 | Image blurring method and device, computer readable storage medium and computer device |
- 2017-08-09: CN application CN201710676169.2A (published as CN107704798B), status: Active
- 2018-08-08: WO application PCT/CN2018/099403 (published as WO2019029573A1), status: Application Filing
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100245598A1 (en) * | 2009-03-31 | 2010-09-30 | Casio Computer Co., Ltd. | Image composing apparatus and computer readable recording medium |
CN102843509A (en) * | 2011-06-14 | 2012-12-26 | 宾得理光映像有限公司 | Image processing device and image processing method |
CN103945118A (en) * | 2014-03-14 | 2014-07-23 | 华为技术有限公司 | Picture blurring method and device and electronic equipment |
CN103973977A (en) * | 2014-04-15 | 2014-08-06 | 联想(北京)有限公司 | Blurring processing method and device for preview interface and electronic equipment |
CN104333700A (en) * | 2014-11-28 | 2015-02-04 | 广东欧珀移动通信有限公司 | Image blurring method and image blurring device |
CN105389801A (en) * | 2015-10-20 | 2016-03-09 | 厦门美图之家科技有限公司 | Figure outline setting method, shooting terminal, figure image blurring method and system |
CN106331492A (en) * | 2016-08-29 | 2017-01-11 | 广东欧珀移动通信有限公司 | Image processing method and terminal |
CN106548185A (en) * | 2016-11-25 | 2017-03-29 | 三星电子(中国)研发中心 | Foreground region determination method and apparatus |
Non-Patent Citations (2)
Title |
---|
ERIC CRISTOFALO ET AL.: "Out-of-focus: Learning Depth from Image Bokeh for Robotic Perception", arXiv:1705.01152v1 *
XIAO JINSHENG ET AL.: "Background blurring display based on depth information extraction from multi-focus images", Acta Automatica Sinica *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019029573A1 (en) * | 2017-08-09 | 2019-02-14 | Oppo广东移动通信有限公司 | Image blurring method, computer-readable storage medium and computer device |
CN110099251A (en) * | 2019-04-29 | 2019-08-06 | 努比亚技术有限公司 | Monitoring video processing method and device, and computer-readable storage medium |
CN110971827A (en) * | 2019-12-09 | 2020-04-07 | Oppo广东移动通信有限公司 | Portrait mode shooting method and device, terminal equipment and storage medium |
CN110971827B (en) * | 2019-12-09 | 2022-02-18 | Oppo广东移动通信有限公司 | Portrait mode shooting method and device, terminal equipment and storage medium |
CN112217992A (en) * | 2020-09-29 | 2021-01-12 | Oppo(重庆)智能科技有限公司 | Image blurring method, image blurring device, mobile terminal, and storage medium |
CN113673474A (en) * | 2021-08-31 | 2021-11-19 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN113673474B (en) * | 2021-08-31 | 2024-01-12 | Oppo广东移动通信有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
CN115883958A (en) * | 2022-11-22 | 2023-03-31 | 荣耀终端有限公司 | Portrait shooting method |
WO2024164736A1 (en) * | 2023-02-06 | 2024-08-15 | Oppo广东移动通信有限公司 | Video processing method and apparatus, and computer-readable medium and electronic device |
CN117714893A (en) * | 2023-05-17 | 2024-03-15 | 荣耀终端有限公司 | Image blurring processing method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN107704798B (en) | 2020-06-12 |
WO2019029573A1 (en) | 2019-02-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107704798A (en) | Image weakening method, device, computer-readable recording medium and computer equipment | |
CN109218628B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
WO2020038028A1 (en) | Method for capturing images at night, apparatus, electronic device, and storage medium | |
WO2020038074A1 (en) | Exposure control method and apparatus, and electronic device | |
WO2020034737A1 (en) | Imaging control method, apparatus, electronic device, and computer-readable storage medium | |
CN107481186B (en) | Image processing method, image processing device, computer-readable storage medium and computer equipment | |
WO2020038087A1 (en) | Method and apparatus for photographic control in super night scene mode and electronic device | |
CN108322646A (en) | Image processing method, device, storage medium and electronic equipment | |
CN109348088A (en) | Image denoising method, device, electronic equipment and computer readable storage medium | |
CN107493432A (en) | Image processing method, device, mobile terminal and computer-readable recording medium | |
CN107395991A (en) | Image combining method, device, computer-readable recording medium and computer equipment | |
US11233948B2 (en) | Exposure control method and device, and electronic device | |
CN107465903B (en) | Image white balance method, device and computer readable storage medium | |
CN107563979B (en) | Image processing method, image processing device, computer-readable storage medium and computer equipment | |
CN109685853B (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN105611185A (en) | Image generation method and device and terminal device | |
CN114820405A (en) | Image fusion method, device, equipment and computer readable storage medium | |
CN108053438A (en) | Depth of field acquisition methods, device and equipment | |
CN109068060B (en) | Image processing method and device, terminal device and computer readable storage medium | |
CN110689565B (en) | Depth map determination method and device and electronic equipment | |
CN107454335A (en) | Image processing method, device, computer-readable recording medium and mobile terminal | |
CN107563329A (en) | Image processing method, device, computer-readable recording medium and mobile terminal | |
CN113159229B (en) | Image fusion method, electronic equipment and related products | |
CN107454317A (en) | Image processing method, device, computer-readable recording medium and computer equipment | |
CN107454328B (en) | Image processing method, device, computer readable storage medium and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860. Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd. Address before: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860. Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd. |
| GR01 | Patent grant | |