CN107864336B - Image processing method and mobile terminal - Google Patents
Image processing method and mobile terminal
- Publication number
- CN107864336B CN107864336B CN201711194076.2A CN201711194076A CN107864336B CN 107864336 B CN107864336 B CN 107864336B CN 201711194076 A CN201711194076 A CN 201711194076A CN 107864336 B CN107864336 B CN 107864336B
- Authority
- CN
- China
- Prior art keywords
- face region
- subject
- non-subject
- blurring
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
Abstract
Embodiments of the invention disclose an image processing method and a mobile terminal. The image processing method includes: obtaining the face sizes of N face regions in an image captured by a camera; determining a subject face region and non-subject face regions from the N face regions; and, according to the face size of the subject face region and the face sizes of the non-subject face regions, applying blurring of different degrees to the respective non-subject face regions. Blurring the non-subject face regions in the image data with different degrees of blur effectively improves the blur differentiation among multiple person subjects when shooting with a single camera.
Description
Technical field
Embodiments of the present invention relate to the field of communication technology, and in particular to an image processing method and a mobile terminal.
Background art
Photography has become an indispensable part of everyday life. In particular, as smart terminals have gained camera functions, taking photos has become ever more widespread, and in both personal and commercial use the demands on photo quality and user experience keep rising. Shooting scenes, however, are often complex and changeable. To adapt to such scenes and highlight the subject so as to convey a sense of depth, a common processing method is to keep the subject sharp while blurring the region outside the subject. Blurring the region outside the subject makes the subject stand out.
In the prior art, blurring of image data generally falls into dual-camera blurring and single-camera blurring. Dual-camera blurring uses depth-of-field information from an auxiliary camera to separate the subject from the background and then blurs the background. Single-camera blurring distinguishes the subject from the background and blurs the background without depth information from an auxiliary camera.
However, lacking depth information, traditional single-camera blurring can only crudely separate subject from background and blurs only the background. As a result, when the subjects of the image data are people, the person subjects are highlighted, but the degree of blur applied to each person subject is uniform. That is, when there is more than one person subject, the blur degree stays the same even if the person subjects are actually far apart from one another; the blur differentiation among person subjects is low and the sense of depth is lost.
Summary of the invention
Embodiments of the present invention provide an image processing method and a mobile terminal, to solve the problem of low blur differentiation among multiple person subjects when shooting with a single camera.
In order to solve the above-mentioned technical problem, the present invention is implemented as follows:
In a first aspect, an image processing method applied to a mobile terminal is provided. The method includes:
obtaining the face sizes of N face regions in an image captured by a camera;
determining a subject face region and non-subject face regions from the N face regions;
according to the face size of the subject face region and the face sizes of the non-subject face regions, applying blurring of different degrees to the respective non-subject face regions;
where N is an integer greater than 1.
In a second aspect, an embodiment of the invention further provides a mobile terminal, including:
a size acquisition module, configured to obtain the face sizes of N face regions in an image captured by a camera;
a region determination module, configured to determine a subject face region and non-subject face regions from the N face regions;
a blur processing module, configured to apply blurring of different degrees to the respective non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions; where N is an integer greater than 1.
In a third aspect, an embodiment of the invention further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and runnable on the processor, where the computer program, when executed by the processor, implements the steps of the above image processing method.
In a fourth aspect, an embodiment of the invention further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the above image processing method.
In the embodiments of the present invention, the face sizes of N face regions in an image captured by a camera are obtained; a subject face region and non-subject face regions are determined from the N face regions; and, according to the face size of the subject face region and the face sizes of the non-subject face regions, blurring of different degrees is applied to the respective non-subject face regions. The non-subject face regions in the image data are thus blurred with different degrees of blur, which effectively improves the blur differentiation among multiple person subjects when shooting with a single camera and gives the image data a stronger sense of depth.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention;
Fig. 2 is a flowchart of another image processing method according to an embodiment of the present invention;
Fig. 3 is a block diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 4 is a block diagram of another mobile terminal according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the hardware structure of a mobile terminal according to an embodiment of the present invention.
Detailed description of embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, a flowchart of an image processing method according to an embodiment of the present invention is shown. The method provided by this embodiment is executed by a mobile terminal and includes the following steps.
Step 101: obtain the face sizes of N face regions in the image captured by the camera.
Specifically, this applies when there are N persons in a scene shot with a single camera, where N is an integer greater than 1. The prior art cannot distinguish the near/far relationships among the persons. To remedy this defect, an embodiment of the present invention first performs face recognition on the captured image data to obtain the N face regions in the image captured by the camera, and then obtains the face sizes of the N face regions.
Specifically, the face size of a face region can be characterized by the area value of the face region, or by the width value of the face region. According to the principle of perspective, objects imaged on a plane follow the rule that near objects appear large and far objects appear small. Therefore, the front-to-back positional relationship of the face regions in the scene can be judged from the width value and/or area value of each face region.
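The size-based ordering described above can be sketched as follows. This is a minimal illustration, not part of the patent; the (x, y, w, h) bounding boxes are hypothetical outputs of any face detector, which the patent does not prescribe.

```python
# Rank detected face regions by apparent size. Per the perspective rule
# "near appears large, far appears small", a larger face region is
# assumed to be closer to the camera.

def face_size(box, use_area=False):
    """Characterize a face region by its width value (or area value)."""
    x, y, w, h = box
    return w * h if use_area else w

def order_near_to_far(boxes):
    """Sort face regions from assumed-nearest to assumed-farthest."""
    return sorted(boxes, key=face_size, reverse=True)

faces = [(300, 80, 60, 75), (40, 50, 120, 150), (500, 90, 90, 110)]
print(order_near_to_far(faces))  # widest face (w=120) comes first
```

Width is used here as the default size feature; as the description notes later, width tends to be more robust than height against hairstyles and headwear.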
In practical applications, the image data includes at least one of preview frame data and photographing frame data. Preview frame data refers to image data obtained in preview mode; photographing frame data refers to image data obtained in photographing mode. That is, the image data may be acquired while previewing the scene or after the shot is completed; the acquisition mode of the image data is not limited.
Step 102: determine a subject face region and non-subject face regions from the N face regions.
The N face regions include a subject face region and non-subject face regions. The subject face region is the region of the face that needs to be emphasized in the image data; the non-subject face regions are the relatively secondary face regions in the image data. By distinguishing the primary/secondary relationship among the face regions, the face regions in the image data can be given a stronger sense of depth.
Specifically, when shooting, the person closest to the camera is usually the one to be emphasized, and the face region corresponding to that person has the largest size in the captured image data. Therefore, when determining the subject face region from the N face regions, the face region with the largest face size among the N face regions can be determined as the subject face region, and every other face region among the N face regions can be determined as a non-subject face region; the subject and non-subject face regions are thereby determined automatically. Alternatively, a first input by the user on the image can be received, the face region corresponding to the first input determined as the subject face region, and every other face region among the N face regions determined as a non-subject face region. The first input is a preset operation by which the user selects the subject face region; the preset operation includes at least one of a touch operation on a face region in the picture and a selection operation on a label number of a face region. This makes the determined result better match the user's needs. For example, after the mobile terminal recognizes the N face regions, it can determine which face region the user wants as the subject face region by receiving the user's touch operation on that face region in the picture. After recognizing the N face regions, the mobile terminal can also number them and display the number of each face region to the user, so that by receiving the user's selection operation on a label number it can determine which face region the user wants as the subject face region. The selection operation can be touch selection, voice-control selection, or the like; for example, the user can select a numbered face region by speaking its label number.
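The two ways of choosing the subject can be sketched together. This is an assumed interface, not from the patent: `user_pick` stands in for the "first input" (touch or label selection), and the automatic path falls back to the widest face.

```python
# Determine the subject and non-subject face regions from a list of
# (x, y, w, h) boxes. With no user input, the widest face is the subject
# (automatic determination); user_pick is the index of a user-selected
# face region, standing in for the patent's "first input".

def split_subject(boxes, user_pick=None):
    """Return (subject_box, non_subject_boxes)."""
    if user_pick is None:
        user_pick = max(range(len(boxes)), key=lambda i: boxes[i][2])
    subject = boxes[user_pick]
    non_subjects = [b for i, b in enumerate(boxes) if i != user_pick]
    return subject, non_subjects

faces = [(40, 50, 120, 150), (300, 80, 60, 75), (500, 90, 90, 110)]
subject, others = split_subject(faces)           # automatic: widest face
subject2, _ = split_subject(faces, user_pick=1)  # user selected label 1
print(subject, subject2)
```

The automatic path encodes the perspective assumption; the `user_pick` path lets the user's choice override it, matching the "fully meets the user's needs" rationale.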
Step 103: according to the face size of the subject face region and the face sizes of the non-subject face regions, apply blurring of different degrees to the respective non-subject face regions.
Specifically, after the subject face region and the non-subject face regions are determined from the N face regions, the face size of the subject face region and the face sizes of the non-subject face regions can be determined, and from them the absolute value of the face-size difference between each non-subject face region and the subject face region. Based on these absolute values, blurring of different degrees is applied to the respective non-subject face regions. According to the principle of perspective, the larger the absolute value of the face-size difference between a non-subject face region and the subject face region, the farther that non-subject face region is from the subject face region, and the deeper the degree of blur that can be applied to it; this effectively improves the blur differentiation among multiple person subjects when shooting with a single camera.
In practical applications, the background region in the image can first be obtained and its blur degree determined. The blur degree of each non-subject face region is then determined from the absolute value of its face-size difference and the blur degree of the background region, and each non-subject face region is blurred according to its own blur degree. Here the background region is the entire image region other than the face regions, and the blur degree of each non-subject face region is positively correlated with the absolute value of the corresponding face-size difference.
In conclusion in embodiments of the present invention, the face of N number of human face region in the image by obtaining camera acquisition
Size, and from N number of human face region, main body human face region and nonbody human face region are determined, further according to the main body human face region
Facial size and nonbody human face region facial size, the void of different virtualization degree is carried out to nonbody human face region respectively
Change processing.Virtualization processing is carried out to nonbody human face region in image data respectively with different virtualization degree to realize, effectively
The virtualization discrimination in the case of single camera is shot to more personage's main bodys is promoted, so that image data has more stereovision.
Referring to Fig. 2, a flowchart of another image processing method according to an embodiment of the present invention is shown. The method provided by this embodiment is executed by a mobile terminal and includes the following steps.
Step 201: obtain the contour coordinates of N face regions in the image data.
After shooting with a single camera and obtaining the image data, the face regions in the image data can be determined by performing face recognition on the image. When N face regions are recognized in the image data, the image data captured by the single camera lacks depth-of-field information, that is, the front-to-back positions of the photographed subjects cannot be obtained directly, so it is difficult to distinguish the front-to-back positional relationship of the N face regions from depth information. In this case, the contour coordinates of the N face regions can be obtained and used to analyze their front-to-back positional relationship. That relationship can then be used to avoid blurring all face regions with the same degree of blur, enhancing the sense of depth of each face region.
The acquired image data may be image data captured in preview mode while the mobile terminal is taking a photo, image data obtained in photographing mode, or other image data; the acquisition mode of the image data is not limited. That is, even without depth information, the embodiment of the present invention can distinguish, from the contour coordinates of the face regions in the image plane, the front-to-back positional relationship of the face regions in the image data and how near or far they are from one another. Image data obtained in any of the above ways can therefore serve as the object of the method provided by the embodiment of the present invention, which gives it a wide scope of application.
Step 202: determine the face sizes of the N face regions according to the contour coordinates.
After the contour coordinates of each face region in the image data are obtained, the shape of the face region can be determined, and the face sizes of the N face regions determined from the features of that shape. For example, after face recognition, the obtained face region may be a rectangle, and features such as the area and width of the rectangle can all be used to characterize the size of the face region. In practical applications, factors such as a person's hairstyle, headwear, or hat may interfere with the accuracy of judging the height of a face region; it is therefore preferable to use the width of the face region as the feature for measuring the face-region size.
Specifically, among the contour coordinates in the rectangular coordinate system, the two contour points that share the same ordinate value and whose abscissa difference has the largest absolute value can be used to measure the width of the face region. If the width of the face region is used to characterize its size, then the absolute value of the difference between the abscissas of these two contour points can serve as the size of the face region.
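The width measurement just described can be sketched directly: group contour points by ordinate and take the largest abscissa spread. The rectangular contour below is a hypothetical example, not from the patent.

```python
# Measure a face region's width from its contour coordinates: among
# contour points sharing the same ordinate (y), the pair with the
# largest absolute abscissa (x) difference gives the width value.

from collections import defaultdict

def face_width(contour):
    """contour: iterable of (x, y) points; returns the width value."""
    rows = defaultdict(list)
    for x, y in contour:
        rows[y].append(x)
    # widest same-y spread across the contour
    return max(max(xs) - min(xs) for xs in rows.values() if len(xs) > 1)

# A hypothetical rectangular contour, 80 px wide:
contour = [(10, 0), (90, 0), (10, 100), (90, 100)]
print(face_width(contour))  # 80
```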
Step 203: determine a subject face region and non-subject face regions from the N face regions.
Specifically, the subject face region can first be determined from the N face regions as the emphasis that the image data needs to present, and the non-subject face regions then determined from it; the subject face region serves as the reference for blurring each non-subject face region.
When determining the subject face region, it can be chosen from the face regions by a preset selection rule. For example, when photographing several people, the person closer to the camera is usually the one to be emphasized, and by the principle of perspective the person at the very front usually corresponds to the largest face region in the image data. Therefore, the largest face region among the N face regions can be chosen as the subject face region, and the other face regions in the image data treated as non-subject face regions. As another embodiment, a first input by the user on the image can also be received and the face region corresponding to the first input determined as the subject face region. For example, each recognized face region can be displayed to the user in the display interface; after the face region selected by the user is received, that face region is taken as the subject face region and the other face regions in the image data as non-subject face regions. This fully meets the user's need for independent selection and enhances operability and practicality. In practical applications, the user can select one of the N face regions as the subject face region by touch, voice control, or the like. As another example, the N face regions can be labeled in order from left to right and displayed to the user; the user can select a face region as the subject face region by selecting its label, and every face region among the N face regions other than the one corresponding to that label is determined as a non-subject face region.
Step 204: determine the absolute value of the face-size difference between each non-subject face region and the subject face region.
After the subject face region and the non-subject face regions among the N face regions are determined, the face sizes of the subject face region and of each non-subject face region can be determined, and with them the absolute value of the face-size difference between each non-subject face region and the subject face region.
Specifically, the face-size difference between each non-subject face region and the subject face region can be characterized by the ratio of the width of the non-subject face region to the width of the subject face region. For example, the ratio a of the width of each non-subject face region to the width of the subject face region can first be computed; the absolute value of the difference between a and 1, |1 - a|, measures how far that non-subject face region is from the subject face region and is then used to determine its degree of blur. For example, if the width ratio of non-subject face region A to the subject face region is 0.8, the absolute value of the face-size difference between non-subject face region A and the subject face region is 0.2; if the width ratio of non-subject face region B to the subject face region is 1.3, the absolute value of the face-size difference between non-subject face region B and the subject face region is 0.3.
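The |1 - a| measure with the two worked examples above can be checked in a few lines (widths of 80, 130, and 100 px are hypothetical values chosen to reproduce the ratios 0.8 and 1.3):

```python
# Step 204: the ratio a of a non-subject face width to the subject face
# width gives |1 - a| as the absolute value of the face-size difference.

def size_difference(non_subject_width, subject_width):
    a = non_subject_width / subject_width
    return abs(1 - a)

subject_w = 100
print(round(size_difference(80, subject_w), 2))   # region A: a = 0.8 -> 0.2
print(round(size_difference(130, subject_w), 2))  # region B: a = 1.3 -> 0.3
```

Note that the measure is symmetric: a face region noticeably larger than the subject (a > 1) is treated as distant from it just as a noticeably smaller one is.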
Step 205: according to the absolute values of the face-size differences, apply blurring of different degrees to the respective non-subject face regions.
Specifically, when image data is blurred, the background region in the image data is usually blurred. In practical applications, since the image data shot with a single camera lacks depth-of-field information, a uniform blur degree is generally applied to the background region of the image data.
After the blur degree for the background region is determined, the blur degree of each non-subject face region can be determined from the absolute value of its face-size difference and the blur degree of the background region, and each non-subject face region blurred according to its own blur degree. Here the background region is the entire image region other than the face regions, and the blur degree of each non-subject face region is positively correlated with the absolute value of the corresponding face-size difference.
For example, if the blur degree of the background region is X and the absolute value of the face-size difference between a non-subject face region and the subject face region is m, the blur degree of that non-subject face region can be mX, and the non-subject face region is blurred accordingly. When there are two or more non-subject face regions in the image data and the absolute values of their face-size differences differ, the non-subject face regions are blurred with different degrees of blur, reflecting the different distances of the persons in the scene corresponding to the image data. The blur degree of a non-subject face region is positively correlated with the absolute value of its face-size difference; in practical applications, the method is not limited to the above proportional approach.
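The m·X rule above can be sketched as follows. The background blur degree X = 10.0 is an arbitrary illustrative value; how a blur degree maps onto an actual filter parameter (e.g. a Gaussian kernel radius) is an implementation choice the patent leaves open.

```python
# Step 205: with background blur degree X and a size-difference absolute
# value m per non-subject face region, each region gets blur degree m*X,
# positively correlated with its size difference.

def blur_degrees(size_diffs, background_degree):
    """Return the blur degree m*X for each non-subject face region."""
    return [m * background_degree for m in size_diffs]

X = 10.0            # background blur degree (arbitrary units)
diffs = [0.2, 0.3]  # |1 - a| for regions A and B from step 204
print(blur_degrees(diffs, X))  # farther region gets the deeper blur
```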
In conclusion in embodiments of the present invention, by obtaining N number of human face region in image data in same plane
Profile coordinate distinguishes the front-rear position relationship of each human face region in image data, and this method is allowed to lack depth of field letter
In the case where breath, remain to carry out virtualization processing to human face region in image data respectively with different virtualization degree, not only effectively
The virtualization discrimination in the case of single camera is shot to more personage's main bodys is promoted, so that image data has more stereovision.And
And there is the wider scope of application.In addition, determining main body human face region due to that can input according to user in the picture first
Therefore can sufficiently meet the needs of user independently selects with nonbody human face region, enhance operability and practicability.
Referring to Fig. 3, a block diagram of a mobile terminal according to an embodiment of the present invention is shown. The mobile terminal includes a size acquisition module 31, a region determination module 32, and a blur processing module 33.
The size acquisition module 31 is configured to obtain the face sizes of N face regions in the image captured by the camera. The region determination module 32 is configured to determine a subject face region and non-subject face regions from the N face regions. The blur processing module 33 is configured to apply blurring of different degrees to the respective non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions; where N is an integer greater than 1.
Referring to Fig. 4, in a preferred embodiment of the present invention, on the basis of Fig. 3, the blurring processing module 33 includes a difference determination submodule 331 and a blurring processing submodule 332, and the region determination module 32 includes an automatic determination submodule 321 and an input determination submodule 322.
The difference determination submodule 331 is configured to determine the absolute value of the face size difference between each non-subject face region and the subject face region.
The blurring processing submodule 332 is configured to apply blurring of different degrees to the non-subject face regions according to those absolute values; here, the face size of a face region includes the width value of the face region and/or the area value of the face region.
The automatic determination submodule 321 is configured to determine the face region with the largest face size among the N face regions as the subject face region, and to determine each of the remaining face regions as a non-subject face region.
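As an illustrative sketch (not part of the patent text), the automatic determination performed by submodule 321 can be expressed in Python. The function name and the (x, y, w, h) bounding-box representation are assumptions; face size is taken here as the region's width value, as the patent permits:

```python
def split_subject_faces(face_boxes):
    """Split N detected faces into one subject face (the largest) and
    non-subject faces, mirroring the automatic determination submodule.

    face_boxes: list of (x, y, w, h) tuples, one per detected face.
    Returns (subject_box, list_of_non_subject_boxes).
    """
    if len(face_boxes) < 2:
        raise ValueError("N must be an integer greater than 1")
    # Face size here is the region's width value; area (w * h) would
    # work equally well per the patent's definition of face size.
    subject = max(face_boxes, key=lambda box: box[2])
    non_subjects = [box for box in face_boxes if box is not subject]
    return subject, non_subjects
```

For example, given three faces of widths 80, 40, and 60 pixels, the width-80 face is returned as the subject and the other two as non-subject regions.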
The input determination submodule 322 is configured to receive a first input of the user in the image, determine the face region corresponding to the first input as the subject face region, and determine each of the other face regions among the N face regions as a non-subject face region.
Further, the blurring processing submodule 332 includes: a background region acquisition unit 3321, a background blurring degree determination unit 3322, a face blurring degree determination unit 3323, and a face blurring processing unit 3324.
The background region acquisition unit 3321 is configured to obtain the background region in the image.
The background blurring degree determination unit 3322 is configured to determine the blurring degree of the background region.
The face blurring degree determination unit 3323 is configured to determine the blurring degree of each non-subject face region according to the absolute value of its face size difference and the blurring degree of the background region.
The face blurring processing unit 3324 is configured to blur each non-subject face region according to its blurring degree. Here, the background region is the entire image region outside the face regions, and the blurring degree of each non-subject face region is positively correlated with the absolute value of the corresponding face size difference.
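To make the positive-correlation rule concrete, here is a hypothetical Python sketch. The linear mapping and the cap at the background's blurring degree are illustrative assumptions; the patent only requires that each non-subject face's blurring degree grow with the absolute face size difference:

```python
def blur_degrees(subject_size, non_subject_sizes, background_degree):
    """Assign each non-subject face a blurring degree that is positively
    correlated with |its size - subject size|, here capped at the
    blurring degree used for the background region.

    Sizes are width values (or area values); background_degree is the
    blur strength chosen for everything outside the face regions.
    """
    diffs = [abs(size - subject_size) for size in non_subject_sizes]
    largest = max(diffs) if max(diffs) > 0 else 1
    # Linear mapping: a larger size gap yields a stronger blur, never
    # exceeding the background blur under this illustrative choice.
    return [background_degree * d / largest for d in diffs]
```

With a subject width of 80 and non-subject widths of 40 and 60, the face farther in size from the subject receives the stronger blur, as required by the positive correlation.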
The mobile terminal provided by this embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of Fig. 1 to Fig. 2; to avoid repetition, the details are not repeated here. In this embodiment of the present invention, the size acquisition module 31 obtains the face sizes of the N face regions in the image captured by the camera, the region determination module 32 determines the subject face region and the non-subject face regions from the N face regions, and the blurring processing module 33 then applies blurring of different degrees to the non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions. Blurring the non-subject face regions in the image data with different degrees effectively improves the blur discrimination among multiple person subjects when shooting with a single camera, so that the image data gains a stronger sense of depth.
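The final blurring step can be sketched as well. This is a naive box blur over one face region of a grayscale image, purely illustrative; a real implementation would likely use a Gaussian kernel, and the function name and list-of-lists image format are assumptions:

```python
def blur_region(image, box, radius):
    """Apply a naive box blur of the given radius inside one face
    region of a grayscale image (list of lists of pixel values),
    leaving the rest of the image untouched. radius 0 means no blur,
    so the subject face region can simply be skipped.
    """
    x, y, w, h = box
    out = [row[:] for row in image]
    if radius <= 0:
        return out
    H, W = len(image), len(image[0])
    for j in range(y, min(y + h, H)):
        for i in range(x, min(x + w, W)):
            # Average the pixel's neighborhood, clipped at image edges.
            vals = [image[jj][ii]
                    for jj in range(max(0, j - radius), min(H, j + radius + 1))
                    for ii in range(max(0, i - radius), min(W, i + radius + 1))]
            out[j][i] = sum(vals) / len(vals)
    return out
```

Calling this once per non-subject face region, with a radius derived from that region's blurring degree, yields the per-face differential blur the embodiment describes.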
Fig. 5 is a schematic diagram of the hardware structure of a mobile terminal implementing each embodiment of the present invention.
The mobile terminal 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will understand that the structure shown in Fig. 5 does not limit the mobile terminal; the mobile terminal may include more or fewer components than shown, combine certain components, or arrange the components differently. In embodiments of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a laptop computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The radio frequency unit 501 is configured to obtain the face size of each of N face regions in an image captured by a camera, where N is an integer greater than 1.
The processor 510 is configured to determine a subject face region and non-subject face regions from the N face regions, and to apply blurring of different degrees to the non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions.
In summary, in this embodiment of the present invention, the face sizes of N face regions in an image captured by a camera are obtained; a subject face region and non-subject face regions are determined from the N face regions; and blurring of different degrees is then applied to the non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions. Blurring the non-subject face regions in the image data with different degrees effectively improves the blur discrimination among multiple person subjects when shooting with a single camera, so that the image data gains a stronger sense of depth.
It should be understood that, in this embodiment of the present invention, the radio frequency unit 501 may be used to send and receive signals during message transmission or a call; specifically, it delivers downlink data received from a base station to the processor 510 for processing, and sends uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 may also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides users with wireless broadband Internet access through the network module 502, for example helping users send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502, or stored in the memory 509, into an audio signal and output it as sound. Moreover, the audio output unit 503 may also provide audio output related to a specific function performed by the mobile terminal 500 (for example, a call signal reception sound or a message reception sound). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is configured to receive audio or video signals. It may include a graphics processing unit (GPU) 5041 and a microphone 5042. The GPU 5041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 506, stored in the memory 509 (or another storage medium), or sent via the radio frequency unit 501 or the network module 502. The microphone 5042 can receive sound and process it into audio data; in a telephone call mode, the processed audio data may be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 501.
The mobile terminal 500 further includes at least one sensor 505, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 5061 according to the ambient light, and the proximity sensor can turn off the display panel 5061 and/or the backlight when the mobile terminal 500 is moved close to the ear. As a kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used to recognize the posture of the mobile terminal (for example, landscape/portrait switching, related games, and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tapping). The sensor 505 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like; details are not described here.
The display unit 506 is configured to display information input by the user or information provided to the user. It may include a display panel 5061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 507 may be used to receive input numeric or character information and to generate key signal input related to user settings and function control of the mobile terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also referred to as a touch screen, collects the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 5071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 5071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 510, and receives and executes commands sent by the processor 510. The touch panel 5071 may be implemented as a resistive, capacitive, infrared, or surface-acoustic-wave panel, among other types. Besides the touch panel 5071, the user input unit 507 may also include other input devices 5072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick; details are not described here.
Further, the touch panel 5071 may cover the display panel 5061. When the touch panel 5071 detects a touch operation on or near it, it transmits the operation to the processor 510 to determine the type of the touch event, and the processor 510 then provides corresponding visual output on the display panel 5061 according to the type of the touch event. Although in Fig. 5 the touch panel 5071 and the display panel 5061 are shown as two separate components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 5071 and the display panel 5061 may be integrated to implement both functions; this is not limited here.
The interface unit 508 is an interface through which an external device is connected to the mobile terminal 500. For example, the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (for example, data information or power) from an external device and transfer the received input to one or more elements within the mobile terminal 500, or may be used to transmit data between the mobile terminal 500 and an external device.
The memory 509 may be used to store software programs and various data. It may mainly include a program storage area and a data storage area: the program storage area may store an operating system and an application required by at least one function (such as a sound playback function or an image playback function), while the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 509 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
The processor 510 is the control center of the mobile terminal. It connects all parts of the entire mobile terminal through various interfaces, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 509 and invoking the data stored in the memory 509, thereby monitoring the mobile terminal as a whole. The processor 510 may include one or more processing units; preferably, it may integrate an application processor, which mainly handles the operating system, user interface, and applications, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 510.
The mobile terminal 500 may further include a power supply 511 (such as a battery) supplying power to the components. Preferably, the power supply 511 may be logically connected to the processor 510 through a power management system, so that functions such as charging, discharging, and power-consumption management are implemented through the power management system.
In addition, the mobile terminal 500 includes some functional modules not shown, which are not described in detail here.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 510, a memory 509, and a computer program stored in the memory 509 and executable on the processor 510. When executed by the processor 510, the computer program implements each process of the image processing method embodiments described above and can achieve the same technical effect; to avoid repetition, the details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the image processing method embodiments described above and can achieve the same technical effect; to avoid repetition, the details are not repeated here. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. In the absence of further limitation, an element preceded by "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes to the prior art, may be embodied in the form of a software product. The software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to the specific embodiments above, which are merely illustrative rather than restrictive. Under the inspiration of the present invention, and without departing from the scope protected by the purpose of the present invention and the claims, those skilled in the art may devise many further forms, all of which fall within the protection of the present invention.
Claims (8)
1. An image processing method applied to a mobile terminal, comprising:
obtaining the face size of each of N face regions in an image captured by a camera;
determining, from the N face regions, a subject face region and non-subject face regions;
applying, according to the face size of the subject face region and the face sizes of the non-subject face regions, blurring of different degrees to the non-subject face regions;
wherein N is an integer greater than 1;
the step of applying blurring of different degrees to the non-subject face regions according to the face size of the subject face region and the face sizes of the non-subject face regions comprises:
determining the absolute value of the face size difference between each non-subject face region and the subject face region;
applying, according to the absolute values of the face size differences, blurring of different degrees to the non-subject face regions;
wherein the face size of a face region comprises the width value of the face region and/or the area value of the face region;
the step of applying, according to the absolute values of the face size differences, blurring of different degrees to the non-subject face regions comprises:
obtaining a background region in the image;
determining the blurring degree of the background region;
determining the blurring degree of each non-subject face region according to the absolute value of its face size difference and the blurring degree of the background region;
blurring each non-subject face region according to its blurring degree;
wherein the background region is the entire image region outside the face regions, and the blurring degree of each non-subject face region is positively correlated with the absolute value of the corresponding face size difference.
2. The method according to claim 1, wherein the step of determining, from the N face regions, a subject face region and non-subject face regions comprises:
determining the face region with the largest face size among the N face regions as the subject face region;
determining each face region among the N face regions other than the face region with the largest face size as a non-subject face region.
3. The method according to claim 1, wherein the step of determining, from the N face regions, a subject face region and non-subject face regions comprises:
receiving a first input of a user in the image;
determining the face region corresponding to the first input as the subject face region;
determining each face region among the N face regions other than the face region corresponding to the first input as a non-subject face region.
4. A mobile terminal, comprising:
a size acquisition module, configured to obtain the face size of each of N face regions in an image captured by a camera;
a region determination module, configured to determine, from the N face regions, a subject face region and non-subject face regions;
a blurring processing module, configured to apply, according to the face size of the subject face region and the face sizes of the non-subject face regions, blurring of different degrees to the non-subject face regions; wherein N is an integer greater than 1;
the blurring processing module comprising:
a difference determination submodule, configured to determine the absolute value of the face size difference between each non-subject face region and the subject face region;
a blurring processing submodule, configured to apply, according to the absolute values of the face size differences, blurring of different degrees to the non-subject face regions; wherein the face size of a face region comprises the width value of the face region and/or the area value of the face region;
the blurring processing submodule comprising:
a background region acquisition unit, configured to obtain a background region in the image;
a background blurring degree determination unit, configured to determine the blurring degree of the background region;
a face blurring degree determination unit, configured to determine the blurring degree of each non-subject face region according to the absolute value of its face size difference and the blurring degree of the background region;
a face blurring processing unit, configured to blur each non-subject face region according to its blurring degree; wherein the background region is the entire image region outside the face regions, and the blurring degree of each non-subject face region is positively correlated with the absolute value of the corresponding face size difference.
5. The mobile terminal according to claim 4, wherein the region determination module comprises:
an automatic determination submodule, configured to determine the face region with the largest face size among the N face regions as the subject face region, and to determine each face region among the N face regions other than the face region with the largest face size as a non-subject face region.
6. The mobile terminal according to claim 4, wherein the region determination module comprises:
an input determination submodule, configured to receive a first input of a user in the image, determine the face region corresponding to the first input as the subject face region, and determine each face region among the N face regions other than the face region corresponding to the first input as a non-subject face region.
7. A mobile terminal, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 3.
8. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711194076.2A CN107864336B (en) | 2017-11-24 | 2017-11-24 | A kind of image processing method, mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107864336A CN107864336A (en) | 2018-03-30 |
CN107864336B true CN107864336B (en) | 2019-07-26 |
Family
ID=61703437
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711194076.2A Active CN107864336B (en) | 2017-11-24 | 2017-11-24 | A kind of image processing method, mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107864336B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11893668B2 (en) | 2021-03-31 | 2024-02-06 | Leica Camera Ag | Imaging system and method for generating a final digital image via applying a profile to image information |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110198421B (en) * | 2019-06-17 | 2021-08-10 | Oppo广东移动通信有限公司 | Video processing method and related product |
CN112672102B (en) * | 2019-10-15 | 2023-03-24 | 杭州海康威视数字技术股份有限公司 | Video generation method and device |
CN112351204A (en) * | 2020-10-27 | 2021-02-09 | 歌尔智能科技有限公司 | Photographing method, photographing device, mobile terminal and computer readable storage medium |
CN113014830A (en) * | 2021-03-01 | 2021-06-22 | 鹏城实验室 | Video blurring method, device, equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009064188A (en) * | 2007-09-05 | 2009-03-26 | Seiko Epson Corp | Image processing apparatus, image processing method, and image processing system |
CN104751405A (en) * | 2015-03-11 | 2015-07-01 | 百度在线网络技术(北京)有限公司 | Method and device for blurring image |
CN104967786A (en) * | 2015-07-10 | 2015-10-07 | 广州三星通信技术研究有限公司 | Image selection method and device |
CN106971165A (en) * | 2017-03-29 | 2017-07-21 | 武汉斗鱼网络科技有限公司 | The implementation method and device of a kind of filter |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4935302B2 (en) * | 2006-11-02 | 2012-05-23 | 株式会社ニコン | Electronic camera and program |
JP4916355B2 (en) * | 2007-03-20 | 2012-04-11 | 三洋電機株式会社 | Aperture control apparatus and image processing apparatus |
JP5460173B2 (en) * | 2009-08-13 | 2014-04-02 | 富士フイルム株式会社 | Image processing method, image processing apparatus, image processing program, and imaging apparatus |
CN102932541A (en) * | 2012-10-25 | 2013-02-13 | 广东欧珀移动通信有限公司 | Mobile phone photographing method and system |
CN105303514B (en) * | 2014-06-17 | 2019-11-05 | 腾讯科技(深圳)有限公司 | Image processing method and device |
CN104794462B (en) * | 2015-05-11 | 2018-05-22 | 成都野望数码科技有限公司 | A kind of character image processing method and processing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||