CN108830901A - Image processing method and electronic device - Google Patents
- Publication number: CN108830901A
- Application number: CN201810651981.4A
- Authority
- CN
- China
- Prior art keywords
- image
- dimensional
- eye
- eye model
- processed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis; G06T7/70—Determining position or orientation of objects or cameras
- G06T3/06
- G06T5/00—Image enhancement or restoration
- G06T7/50—Depth or shape recovery
- Y02T10/00—Road transport of goods or passengers; Y02T10/10—Internal combustion engine [ICE] based vehicles; Y02T10/40—Engine management systems
Abstract
The present invention provides an image processing method and an electronic device. The method includes: obtaining eye location information from an image to be processed; extracting eye depth features from a depth image corresponding to the image to be processed, according to the eye location information; constructing a first three-dimensional eye model from the eye depth features; optimizing the first three-dimensional eye model to obtain a second three-dimensional eye model; and applying a dimensionality-reduction transform to the second three-dimensional eye model to obtain a processed image. With embodiments of the present invention, the eye transitions in a photograph are well harmonized and the beautification effect is good.
Description
Technical field
The present invention relates to the field of communication technology, and in particular to an image processing method and an electronic device.
Background art
With the rapid development of communication technology, electronic devices have become unprecedentedly widespread and an indispensable part of daily life. As device cameras improve, more and more users want to take flattering self-portraits, so beautification features have become a trend on electronic devices. Because the eyes convey a person's spirit and appearance, most users hope to capture a pair of pretty eyes when taking photos. Existing electronic devices implement eye beautification mainly through effects such as eye enlargement, and a photo processed in this way is visually perceived as having locally enlarged eyes. However, after such processing, the transition between the eye region and the rest of the photo is poorly harmonized, so the beautification effect is poor.
Summary of the invention
Embodiments of the present invention provide an image processing method and an electronic device, to solve the problem that poorly harmonized eye transitions in photos taken by an electronic device lead to a poor beautification effect.
To solve the above technical problem, the invention is realized as follows: an image processing method, applied to an electronic device, including:
obtaining eye location information from an image to be processed;
extracting eye depth features from a depth image corresponding to the image to be processed, according to the eye location information;
constructing a first three-dimensional eye model from the eye depth features;
optimizing the first three-dimensional eye model to obtain a second three-dimensional eye model; and
applying a dimensionality-reduction transform to the second three-dimensional eye model to obtain a processed image.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
obtaining eye location information from an image to be processed;
extracting eye depth features from a depth image corresponding to the image to be processed, according to the eye location information;
constructing a first three-dimensional eye model from the eye depth features;
optimizing the first three-dimensional eye model to obtain a second three-dimensional eye model; and
applying a dimensionality-reduction transform to the second three-dimensional eye model to obtain a processed image.
In a second aspect, an embodiment of the present invention further provides an electronic device, including:
an obtaining module, configured to obtain eye location information from an image to be processed;
an extraction module, configured to extract eye depth features from a depth image corresponding to the image to be processed, according to the eye location information;
a construction module, configured to construct a first three-dimensional eye model from the eye depth features;
an optimization module, configured to optimize the first three-dimensional eye model to obtain a second three-dimensional eye model; and
a conversion module, configured to apply a dimensionality-reduction transform to the second three-dimensional eye model to obtain a processed image.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, where the processor, when executing the computer program, implements the steps of the image processing method provided by the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the image processing method provided by the embodiments of the present invention.
In the embodiments of the present invention, eye location information is obtained from an image to be processed; eye depth features are extracted from the corresponding depth image according to the eye location information; a first three-dimensional eye model is constructed from the eye depth features; the first model is optimized to obtain a second three-dimensional eye model; and a dimensionality-reduction transform is applied to the second model to obtain a processed image. As a result, the eye transitions in photos are well harmonized and the beautification effect is good.
Brief description of the drawings
Fig. 1 is a flowchart of an image processing method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of another image processing method provided by an embodiment of the present invention;
Fig. 3 is a flowchart of another image processing method provided by an embodiment of the present invention;
Fig. 4 is a flowchart of an eyeglasses-removal method provided by an embodiment of the present invention;
Fig. 5 is a flowchart of another image processing method provided by an embodiment of the present invention;
Fig. 6 is a structural diagram of an electronic device provided by an embodiment of the present invention;
Fig. 7 is a structural diagram of another electronic device provided by an embodiment of the present invention;
Fig. 8 is a structural diagram of another electronic device provided by an embodiment of the present invention;
Fig. 9 is a structural diagram of another electronic device provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, Fig. 1 is a flowchart of an image processing method provided by an embodiment of the present invention. The method is applied to an electronic device and, as shown in Fig. 1, includes the following steps:
Step 101: obtain eye location information from an image to be processed.
Obtaining the eye location information from the image to be processed may include: shooting an original image and a corresponding depth map with the electronic device, obtaining the face location in the original image with a face-detection method, and then obtaining the eye location information from the face location with an eye-detection method.
The face-detection method may be based on template matching, singular value features, eigenfaces, integral-image features, and so on; the embodiments of the present invention impose no limitation here.
The eye-detection method may be based on Hough-transform circle detection, a bank of correlation filters, template matching, and so on; the embodiments of the present invention impose no limitation here.
The original image and the corresponding depth map may be shot with an electronic device equipped with a binocular camera, or with any other camera capable of capturing an original image together with a corresponding depth map; the present invention imposes no limitation here.
Step 102: extract eye depth features from a depth image corresponding to the image to be processed, according to the eye location information.
After the eye location information is obtained in step 101, the eye region of the depth map is extracted according to that information, yielding the eye depth features.
Step 103: construct a first three-dimensional eye model from the eye depth features.
Each pixel of the depth map indicates the distance of the imaged object from the lens. A depth-map pixel can be expressed as p(x, y, z), where x and y are the pixel's two-dimensional coordinates (abscissa and ordinate) and z is the object's depth. Constructing the first three-dimensional eye model from the eye depth information may include: using the spatial information of each pixel in the eye region of the depth map to form, point by point, the first three-dimensional eye model in three-dimensional space.
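The p(x, y, z) construction above can be sketched directly: every depth pixel in the eye region contributes one 3-D point. The (top, left, height, width) box layout is an assumption for illustration, not a format the patent specifies.

```python
import numpy as np

def depth_patch_to_points(depth: np.ndarray, box):
    """Lift an eye-region depth patch to 3-D points p = (x, y, z).

    Each depth pixel supplies its image coordinates (x, y) and its
    depth value z, matching the p(x, y, z) representation of step 103.
    `box` is (top, left, height, width) from the eye-location step.
    """
    top, left, h, w = box
    ys, xs = np.mgrid[top:top + h, left:left + w]
    zs = depth[top:top + h, left:left + w]
    return np.stack([xs.ravel(), ys.ravel(), zs.ravel()], axis=1)

depth = np.arange(20, dtype=float).reshape(4, 5)
pts = depth_patch_to_points(depth, (1, 2, 2, 2))
print(pts.shape)  # → (4, 3): four pixels, each an (x, y, z) point
print(pts[0])     # → [2. 1. 7.]: x=2, y=1, z=depth[1, 2]
```

The resulting point set is the raw "first three-dimensional eye model" that step 104 then optimizes.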
Step 104: optimize the first three-dimensional eye model to obtain a second three-dimensional eye model.
The first three-dimensional eye model obtained in step 103 is optimized, for example by machine-learning techniques such as a support vector machine, to obtain the second three-dimensional eye model; the embodiments of the present invention impose no limitation here.
Step 105: apply a dimensionality-reduction transform to the second three-dimensional eye model to obtain a processed image.
Applying the dimensionality-reduction transform to the second three-dimensional eye model to obtain the processed image may include: converting the second three-dimensional eye model into a two-dimensional eye model and using that two-dimensional model to adjust the eye region of the target image; the embodiments of the present invention impose no limitation here.
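The patent does not specify the dimensionality-reduction transform; the simplest reading is dropping the depth coordinate, sketched below as an orthographic projection. A real pipeline might instead use the camera intrinsics for a perspective projection.

```python
import numpy as np

def project_to_2d(points3d: np.ndarray) -> np.ndarray:
    """Drop the depth coordinate: a minimal dimensionality-reduction
    transform taking the optimized 3-D eye model back to the image
    plane (orthographic projection, assumed for illustration)."""
    return points3d[:, :2]

pts3d = np.array([[2.0, 1.0, 7.0],
                  [3.0, 1.0, 7.5]])
print(project_to_2d(pts3d))  # → [[2. 1.] [3. 1.]]
```

The projected points form the two-dimensional eye model that is then used to adjust the eye region of the photo.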
Through the above steps, the eye image is optimized using its depth features. Compared with the prior art, which beautifies the eye image using only its two-dimensional features, the embodiments of the present invention optimize the eye image using its three-dimensional features, so the eye transitions are natural and harmonious and the beautification effect is better.
In addition, the electronic device may be a mobile phone, a tablet computer (Tablet Personal Computer), a laptop computer (Laptop Computer), a personal digital assistant (PDA), a mobile Internet device (Mobile Internet Device, MID), a wearable device (Wearable Device), a desktop computer, a notebook computer, or the like.
In the embodiments of the present invention, eye location information is obtained from an image to be processed; eye depth features are extracted from the corresponding depth image according to the eye location information; a first three-dimensional eye model is constructed from the eye depth features; the first model is optimized to obtain a second three-dimensional eye model; and a dimensionality-reduction transform is applied to the second model to obtain a processed image, so that the eye transitions in photos are well harmonized and the beautification effect is good.
Referring to Fig. 2, Fig. 2 is a flowchart of an eye-image processing method provided by an embodiment of the present invention. The method is applied to an electronic device and, as shown in Fig. 2, includes the following steps:
Step 201: obtain eye location information from an image to be processed.
Step 201 is identical to step 101 in the first embodiment and is not repeated here.
Step 202: extract eye depth features from a depth image corresponding to the image to be processed, according to the eye location information.
Step 202 is identical to step 102 in the first embodiment and is not repeated here.
Step 203: construct a first three-dimensional eye model from the eye depth features.
Step 203 is identical to step 103 in the first embodiment and is not repeated here.
Step 204: detect the three-dimensional key-point features of the first three-dimensional eye model, and optimize those features with a preset neural network to obtain a second three-dimensional eye model.
Machine learning is performed on the three-dimensional key-point features by the preset neural network to obtain optimized eye features; the first three-dimensional eye model is then adjusted according to the three-dimensional key-point features and the optimized eye features, yielding the second three-dimensional eye model.
That is, an adjustment method is applied to the first three-dimensional eye model according to the three-dimensional key-point features and the optimized eye features to obtain the second three-dimensional eye model. The adjustment method may be a warp (triangular transformation) method or the like; the present invention imposes no specific limitation here.
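The warp (triangular transformation) named above maps each triangle of detected key points onto its optimized counterpart. One standard way to do this, sketched here under the assumption of a per-triangle affine map (the patent does not fix the warp's exact form), is to solve a 2x3 affine matrix from the three point pairs:

```python
import numpy as np

def affine_from_triangles(src, dst):
    """Solve the 2x3 affine map taking triangle `src` onto triangle `dst`.

    Per-triangle core of a warp ("triangular transformation") adjustment:
    each mesh triangle of detected key points is mapped onto its
    optimized counterpart.
    """
    src = np.asarray(src, float)              # 3 x 2 source key points
    dst = np.asarray(dst, float)              # 3 x 2 optimized key points
    A = np.hstack([src, np.ones((3, 1))])     # homogeneous source coords
    M = np.linalg.solve(A, dst).T             # exact solve: 2 x 3 affine matrix
    return M

src = [(0, 0), (1, 0), (0, 1)]
dst = [(1, 1), (3, 1), (1, 3)]                # translate by (1, 1), scale by 2
M = affine_from_triangles(src, dst)
p = M @ np.array([0.5, 0.5, 1.0])             # warp an interior point
print(np.round(p, 3))  # → [2. 2.]
```

Interior points are carried consistently with the triangle's vertices, which is what makes a triangulated warp produce smooth, harmonized eye adjustments.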
Optionally, the preset neural network may be obtained as follows: establish a neural network and train it with a training sample set, obtaining a preset neural network that optimizes the three-dimensional key-point features of the first three-dimensional eye model.
A neural network is an important branch of machine learning; it has an input layer, an output layer, and hidden layers. An input feature vector is transformed through the hidden layers to the output layer, where the classification result is obtained. Neural networks commonly used in machine learning include deep neural networks, convolutional neural networks, recurrent neural networks, and so on.
Taking a deep neural network as an example, the training sample set includes a first training sample subset and a second training sample subset, where the first subset contains images with deformed or distorted eyes and the second subset contains images whose eyes have been manually retouched. Using the deep neural network as the training model, both subsets are input to the network, which extracts the three-dimensional eye key information of the images and is trained to learn the mapping between deformed or distorted eye images and manually retouched ones, completing the optimization training of the deep neural network. An eye-distorted image to be tested is then input to the trained network to obtain the optimized eye image.
The embodiments of the present invention take a deep neural network as an example; other neural networks are also applicable, and the embodiments of the present invention impose no specific limitation here.
The preset neural network may be obtained in the above manner, or obtained directly from the cloud or a server; the embodiments of the present invention impose no specific limitation here.
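The training procedure described above (deformed inputs, manually repaired targets, a network learning the mapping between them) can be illustrated with a tiny NumPy network. Everything here is a toy stand-in: the synthetic key-point vectors, the linear "repair", and the network size are illustrative assumptions, not the patent's data or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the two training subsets: "deformed" eye key-point
# feature vectors (inputs) paired with their repaired versions (targets).
# The repair is simulated by a fixed linear correction.
X = rng.normal(size=(256, 6))
true_fix = np.eye(6) + 0.1 * rng.normal(size=(6, 6))
Y = X @ true_fix

# One-hidden-layer network, trained by gradient descent on squared error.
W1 = 0.1 * rng.normal(size=(6, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.normal(size=(16, 6)); b2 = np.zeros(6)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, P = forward(X)
loss_before = np.mean((P - Y) ** 2)
lr = 0.05
for _ in range(500):
    H, P = forward(X)
    G = 2 * (P - Y) / len(X)            # gradient of the squared error (per sample)
    gW2 = H.T @ G; gb2 = G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)      # backprop through tanh
    gW1 = X.T @ GH; gb1 = GH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, P = forward(X)
loss_after = np.mean((P - Y) ** 2)
print(loss_after < loss_before)  # True: the network learns the repair mapping
```

After training, feeding in a deformed feature vector yields an approximately repaired one, which is the role the preset neural network plays in step 204.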
Optionally, performing machine learning on the three-dimensional key-point features with the preset neural network to obtain the optimized eye features may include: inputting the three-dimensional key-point features into the trained preset neural network and obtaining the optimized eye features according to the mapping learned during training.
Optionally, the three-dimensional key-point features may be corner features, and the method for detecting them in the first three-dimensional eye model may be corner detection based on gray-level images, corner detection based on binary images, corner detection based on contour curves, and so on; the present invention imposes no specific limitation here.
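For the gray-image option just listed, a common choice is the Harris corner response; the patent does not name a specific detector, so the following is a minimal Harris sketch (3x3 box window instead of a Gaussian, toy image) for illustration only.

```python
import numpy as np

def harris_response(img: np.ndarray, k: float = 0.05) -> np.ndarray:
    """Gray-image corner (key-point) response: a minimal Harris sketch.

    Gradient products are summed over a 3x3 box window rather than a
    Gaussian, to keep the sketch dependency-free.
    """
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box_sum(a):
        # 3x3 box sum via zero padding and shifted slices
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box_sum(Ixx), box_sum(Iyy), box_sum(Ixy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

img = np.zeros((9, 9))
img[4:, 4:] = 1.0                       # a bright square: corner at (4, 4)
r = harris_response(img)
peak = np.unravel_index(np.argmax(r), r.shape)
print(peak)  # → (4, 4)
```

Edges score negatively (one dominant gradient direction) while the square's corner, where both directions vary, produces the peak response.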
Step 205: convert the second three-dimensional eye model into a two-dimensional eye model, and use the two-dimensional eye model to adjust the eye region of the image to be processed.
Optionally, in the image processing method shown in Fig. 3, the image to be processed and the corresponding depth map are obtained first; the face location in the target image is then detected with a face-detection method, and the eye locations within the face are located with an eye-location detection method. The eye depth features are extracted from the depth map according to the eye location information, and the three-dimensional eye model is constructed from them. The three-dimensional eye model is optimized, the optimized model is converted into a two-dimensional eye model, and the two-dimensional model is used to adjust the eye region of the image to be processed. The specific steps of this embodiment are implemented in the same way as the corresponding steps of the previous embodiments and are not repeated here.
Adjusting the eye region of the image to be processed with the two-dimensional eye model, as shown in Fig. 3, specifically includes: optimizing the two-dimensional eye model according to the optimized eye features and the three-dimensional key-point features to obtain a second two-dimensional eye model; and adjusting the eye region of the image to be processed with the second two-dimensional eye model.
Optimizing the two-dimensional eye model according to the optimized eye features and the three-dimensional key-point features to obtain the second two-dimensional eye model may include: converting the optimized eye features and the three-dimensional key-point features into corresponding two-dimensional features, and adjusting the two-dimensional eye model according to the corresponding two-dimensional information with an adjustment method to obtain the second two-dimensional eye model. The adjustment method may be a warp (triangular transformation) method or the like; the present invention imposes no specific limitation here.
Optionally, the image to be processed includes an eyeglasses image. Before obtaining the eye depth information of the image to be processed, the method further includes: identifying the distribution position of the eyeglasses image in the image to be processed. This implementation provides a way to remove eyeglasses for users who wear them, meeting the individual needs of different users.
The above steps can be implemented as in the flowchart of Fig. 4. Before the flow of Fig. 4 is run, a large training sample set containing eyeglasses is constructed and corner features are extracted to train a classifier, which may be a Haar classifier. After the Haar classifier is trained, the flow of Fig. 4 starts: the image to be processed is obtained, the trained Haar classifier extracts the corner features of the eyeglasses image in the image to be processed, and the spectacle frame of the eyeglasses image is detected. The frame inflection points among the corner features are discriminated by the classifier, and the position of the spectacle frame is obtained from those inflection points. A rough distribution position of the eyeglasses image is then estimated from the frame position. The image corresponding to that distribution position is input to a skin-color segmentation model; pixels in non-skin regions are labeled as lens regions, yielding a second distribution position of the eyeglasses, and the eyeglasses image is output. Finally, an image-restoration (inpainting) method is invoked according to the second distribution position to complete the removal of the eyeglasses image.
Optionally, skin-color information of different ethnic groups under many scenes is collected and used to build the skin-color segmentation model. Inputting any image pixel into the model yields the probability p that the pixel belongs to skin: the larger p is, the more likely the pixel is part of a person's skin region, while a low p indicates a non-skin region. Since eyeglasses belong to non-skin regions, the image pixels at the eyeglasses' distribution position are input to the skin-color segmentation model to obtain their probability values p; whether a pixel belongs to a skin region is judged from the magnitude of p, and pixels that do not belong to a skin region are marked as lens regions.
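The per-pixel probability p described above can be sketched with a single-Gaussian skin-color model. The color space, cluster mean, covariance, and 0.5 threshold below are illustrative assumptions; the patent only requires some model that maps pixels to a skin probability.

```python
import numpy as np

def skin_probability(pixels: np.ndarray, mean, cov) -> np.ndarray:
    """Per-pixel skin probability p from a Gaussian skin-color model.

    Minimal stand-in for the skin-color segmentation model: mean and
    covariance would normally be fit to skin samples collected across
    many scenes and skin tones.
    """
    d = pixels - mean
    inv = np.linalg.inv(cov)
    m2 = np.einsum('ij,jk,ik->i', d, inv, d)  # squared Mahalanobis distance
    return np.exp(-0.5 * m2)                   # unnormalized likelihood in (0, 1]

# hypothetical skin cluster in a chrominance space
mean = np.array([120.0, 150.0])
cov = np.array([[60.0, 10.0], [10.0, 40.0]])

pixels = np.array([[121.0, 149.0],   # close to the skin cluster
                   [30.0, 200.0]])   # far away, e.g. a dark eyeglass frame
p = skin_probability(pixels, mean, cov)
lens_mask = p < 0.5                  # low p => non-skin, mark as lens region
print(lens_mask)  # → [False  True]
```

Thresholding p and marking the low-probability pixels as lens regions yields the second distribution position that the inpainting step consumes.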
Optionally, the flow of the eye-image processing method when the user wears eyeglasses is shown in Fig. 5. The image to be processed and the corresponding depth map are obtained first, the eyeglasses image is removed from the image to be processed with the eyeglasses-removal method of Fig. 4, and the eye-image processing method of the above embodiments is then applied to the glasses-free image and its depth map, achieving eye beautification for users who wear eyeglasses.
In the embodiments of the present invention, the three-dimensional key-point features are optimized by a preset neural network to obtain the second three-dimensional eye model, the second three-dimensional eye model is converted into a two-dimensional eye model, and the two-dimensional eye model is used to adjust the eye region of the image to be processed, so that the eye transitions of photos are natural and harmonious and the beautification effect is better.
Referring to Fig. 6, Fig. 6 is a structural diagram of an electronic device provided by an embodiment of the present invention. As shown in Fig. 6, the electronic device 300 includes:
an obtaining module 301, configured to obtain eye location information from an image to be processed;
an extraction module 302, configured to extract eye depth features from a depth image corresponding to the image to be processed, according to the eye location information;
a construction module 303, configured to construct a first three-dimensional eye model from the eye depth features;
an optimization module 304, configured to optimize the first three-dimensional eye model to obtain a second three-dimensional eye model; and
a conversion module 305, configured to apply a dimensionality-reduction transform to the second three-dimensional eye model to obtain a processed image.
As shown in Fig. 7, the electronic device 300 further includes:
a training module 306, configured to establish a neural network and train it with a training sample set, obtaining a preset neural network that optimizes the three-dimensional key-point features of the first three-dimensional eye model.
Optionally, the optimization module 304 is configured to detect the three-dimensional key-point features of the first three-dimensional eye model and optimize them with a preset neural network, obtaining the second three-dimensional eye model.
Optionally, the conversion module 305 is configured to convert the second three-dimensional eye model into a two-dimensional eye model and use the two-dimensional eye model to adjust the eye region of the image to be processed.
Optionally, the optimization module 304 is configured to perform machine learning on the three-dimensional key-point features with the preset neural network to obtain optimized eye features, and to adjust the first three-dimensional eye model according to the three-dimensional key-point features and the optimized eye features, obtaining the second three-dimensional eye model.
Optionally, the conversion module 305 is configured to optimize the two-dimensional eye model according to the optimized eye features and the three-dimensional key-point features to obtain a second two-dimensional eye model, and to adjust the eye region of the image to be processed with the second two-dimensional eye model.
Optionally, the image to be processed includes an eyeglasses image and, as shown in Fig. 8, the electronic device further includes:
an eyeglasses-removal module 307, configured to identify the distribution position of the eyeglasses image in the image to be processed, and to remove the eyeglasses from the image to be processed according to that distribution position.
The electronic device provided by the embodiments of the present invention can implement each process implemented by the electronic device in the method embodiments of Fig. 1 to Fig. 5; to avoid repetition, details are not repeated here. The device makes the eye transitions of photos well harmonized and the beautification effect good.
Fig. 9 is a schematic diagram of the hardware structure of an electronic device implementing each embodiment of the present invention.
The electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, a processor 910, a power supply 911, and other components. Those skilled in the art will understand that the electronic device structure shown in Fig. 9 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different component arrangement. In embodiments of the present invention, the electronic device includes, but is not limited to, a mobile phone, tablet computer, laptop, palmtop computer, in-vehicle terminal, wearable device, pedometer, computer, and the like.
The processor 910 is configured to obtain eye location information from an image to be processed;
extract eye depth features from a depth image corresponding to the image to be processed, according to the eye location information;
construct a first three-dimensional eye model from the eye depth features;
optimize the first three-dimensional eye model to obtain a second three-dimensional eye model;
and perform a dimensionality-reduction transform on the second three-dimensional eye model to obtain a processed image.
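The five processing steps above can be sketched as the following minimal Python/NumPy skeleton. This is an illustrative sketch, not the patented implementation: the fixed eye locations, the patch-based depth sampling, and the identity placeholder for the neural-network optimization step are all assumptions.

```python
import numpy as np

def get_eye_locations(image):
    # Hypothetical stand-in for a face-landmark detector: return fixed
    # (row, col) eye-centre coordinates for illustration.
    h, w = image.shape[:2]
    return np.array([[h // 3, w // 3], [h // 3, 2 * w // 3]])

def extract_eye_depth_features(depth_image, eye_locations, radius=2):
    # Sample a small depth patch around each eye location and average it.
    feats = []
    for r, c in eye_locations:
        patch = depth_image[r - radius:r + radius + 1, c - radius:c + radius + 1]
        feats.append(patch.mean())
    return np.array(feats)

def build_first_3d_eye_model(eye_locations, depth_features):
    # Lift the 2-D eye positions into 3-D points using the sampled depth.
    return np.column_stack([eye_locations, depth_features])

def optimize_3d_eye_model(model_3d):
    # Placeholder for the preset-neural-network optimization step;
    # a trained network would refine the 3-D points here.
    return model_3d

def reduce_to_2d(model_3d):
    # Dimensionality-reduction transform: orthographic projection (drop z).
    return model_3d[:, :2]

image = np.zeros((90, 120, 3))
depth = np.full((90, 120), 50.0)
locs = get_eye_locations(image)
feats = extract_eye_depth_features(depth, locs)
model1 = build_first_3d_eye_model(locs, feats)
model2 = optimize_3d_eye_model(model1)
eyes_2d = reduce_to_2d(model2)
```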
Optionally, the processor 910 detects the three-dimensional key point features of the first three-dimensional eye model, and optimizes the three-dimensional key point features using a preset neural network to obtain the second three-dimensional eye model.
Optionally, the processor 910 establishes a neural network and performs optimization training on it using a training sample set, obtaining the preset neural network used to optimize the three-dimensional key point features of the first three-dimensional eye model.
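In the simplest case, establishing a network and performing optimization training on a sample set could look like the gradient-descent sketch below. The single linear layer, the synthetic sample set, and the assumed linear relation between noisy and ground-truth key points are illustrative assumptions; the patent does not specify a network architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training sample set: noisy 3-D key points (inputs) and
# their ground-truth positions (targets), flattened to 6-vectors
# (two key points x three coordinates).
noisy = rng.normal(size=(200, 6))
truth = noisy * 0.9 + 0.1          # assumed relation, for the sketch only

init_loss = float((truth ** 2).mean())  # loss of the untrained (zero) model

# "Establish a neural network": a single linear layer y = xW + b,
# trained by gradient descent on the sample set.
W = np.zeros((6, 6))
b = np.zeros(6)
for _ in range(500):
    pred = noisy @ W + b
    err = pred - truth
    W -= 0.01 * noisy.T @ err / len(noisy)
    b -= 0.01 * err.mean(axis=0)

final_loss = float(((noisy @ W + b - truth) ** 2).mean())
```

A real refinement network would be deeper and trained on annotated face data, but the loop above shows the shape of the "optimization training" step.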
Optionally, the processor 910 converts the second three-dimensional eye model into a two-dimensional eye model, and adjusts the eye image of the image to be processed using the two-dimensional eye model.
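Converting the second three-dimensional eye model into a two-dimensional eye model is a dimensionality-reduction transform; one plausible reading is a camera-style projection. The perspective model below is an assumption for illustration, not the patent's specified transform.

```python
import numpy as np

def project_to_2d(points_3d, focal=1.0):
    # Pinhole-style perspective projection: scale x and y by focal / z.
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    return np.column_stack([focal * x / z, focal * y / z])

eye_model_3d = np.array([[2.0, 4.0, 2.0],
                         [6.0, 4.0, 2.0]])
eye_model_2d = project_to_2d(eye_model_3d)
```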
The processor 910 performs machine learning on the three-dimensional key point features through the preset neural network to obtain optimized eye features, and adjusts the first three-dimensional eye model according to the three-dimensional key point features and the optimized eye features to obtain the second three-dimensional eye model.
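One way the first model could be adjusted "according to the three-dimensional key point features and the optimized eye features" is a weighted blend of the detected key points and the network-refined ones, with the resulting offset applied to the model. The blend rule below is purely an assumed illustration.

```python
import numpy as np

def adjust_eye_model(model_1, keypoints, optimized, weight=0.5):
    # Blend detected key points with network-refined ones, then apply
    # the resulting offset to the first model to get the second model.
    blended = (1 - weight) * keypoints + weight * optimized
    return model_1 + (blended - keypoints)

m1 = np.zeros((5, 3))          # first 3-D eye model (5 points)
kp = np.zeros((5, 3))          # detected 3-D key point features
opt = np.ones((5, 3))          # optimized eye features from the network
m2 = adjust_eye_model(m1, kp, opt, weight=0.4)
```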
The processor 910 is also configured to: optimize the two-dimensional eye model according to the optimized eye features and the three-dimensional key point features, obtaining a second two-dimensional eye model.
The processor 910 is also configured to: when the image to be processed includes a glasses image, identify the distribution position of the glasses image in the image to be processed before obtaining the eye depth information of the image to be processed, and remove the glasses from the image to be processed according to the distribution position.
The electronic device 900 enables the eye transitions in photos to be well coordinated when taking pictures, with a good beautification effect.
It should be understood that, in this embodiment of the present invention, the radio frequency unit 901 may be used for receiving and sending signals during message transmission or a call; specifically, after receiving downlink data from a base station, it passes the data to the processor 910 for processing, and it sends uplink data to the base station. In general, the radio frequency unit 901 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 901 may also communicate with networks and other devices through a wireless communication system.
Through the network module 902, the electronic device provides the user with wireless broadband Internet access, for example helping the user to send and receive e-mails, browse web pages, and access streaming video.
The audio output unit 903 may convert audio data received by the radio frequency unit 901 or the network module 902, or stored in the memory 909, into an audio signal and output it as sound. Moreover, the audio output unit 903 may also provide audio output related to a specific function performed by the electronic device 900 (for example, a call-signal reception sound or a message reception sound). The audio output unit 903 includes a loudspeaker, a buzzer, a receiver, and the like.
The input unit 904 is used to receive audio or video signals. It may include a graphics processing unit (GPU) 9041 and a microphone 9042. The graphics processor 9041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 906, stored in the memory 909 (or another storage medium), or sent via the radio frequency unit 901 or the network module 902. The microphone 9042 can receive sound and process it into audio data; in telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 901.
The electronic device 900 further includes at least one sensor 905, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensors include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 9061 according to the ambient light, and the proximity sensor can turn off the display panel 9061 and/or the backlight when the electronic device 900 is moved to the ear. As a kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used to identify the device posture (for example, landscape/portrait switching, related games, magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The sensor 905 may also include a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, infrared sensor, and the like; details are not repeated here.
The display unit 906 is used to display information input by the user or information provided to the user. The display unit 906 may include a display panel 9061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 907 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 907 includes a touch panel 9071 and other input devices 9072. The touch panel 9071, also referred to as a touch screen, collects touch operations performed by the user on or near it (for example, operations performed by the user with a finger, a stylus, or any other suitable object or accessory on or near the touch panel 9071). The touch panel 9071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 910, and receives and executes the commands sent by the processor 910. The touch panel 9071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 9071, the user input unit 907 may also include other input devices 9072, which may include, but are not limited to, a physical keyboard, function keys (such as a volume control key or a switch key), a trackball, a mouse, and a joystick; details are not repeated here.
Further, the touch panel 9071 may cover the display panel 9061. When the touch panel 9071 detects a touch operation on or near it, it transmits the operation to the processor 910 to determine the type of the touch event, and the processor 910 then provides a corresponding visual output on the display panel 9061 according to the type of the touch event. Although in Fig. 9 the touch panel 9071 and the display panel 9061 are two independent components implementing the input and output functions of the electronic device, in some embodiments the touch panel 9071 and the display panel 9061 may be integrated to implement the input and output functions of the electronic device; this is not specifically limited here.
The interface unit 908 is the interface through which an external device connects to the electronic device 900. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 908 may be used to receive input (for example, data information or electric power) from an external device and transmit the received input to one or more elements within the electronic device 900, or may be used to transmit data between the electronic device 900 and an external device.
The memory 909 may be used to store software programs and various data. The memory 909 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function (such as a sound-playing function or an image-playing function), and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book). In addition, the memory 909 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage component.
The processor 910 is the control center of the electronic device. Using various interfaces and lines, it connects the parts of the entire electronic device; by running or executing software programs and/or modules stored in the memory 909 and calling data stored in the memory 909, it performs the various functions of the electronic device and processes data, thereby monitoring the electronic device as a whole. The processor 910 may include one or more processing units; preferably, the processor 910 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 910.
The electronic device 900 may also include a power supply 911 (such as a battery) supplying power to each component. Preferably, the power supply 911 may be logically connected to the processor 910 through a power management system, thereby implementing functions such as charging management, discharging management, and power consumption management through the power management system.
In addition, the electronic device 900 includes some functional modules that are not shown; details are not repeated here.
Preferably, an embodiment of the present invention also provides an electronic device, including a processor 910, a memory 909, and a computer program stored in the memory 909 and runnable on the processor 910. When the computer program is executed by the processor 910, each process of the above image processing method embodiments is implemented, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, each process of the above image processing method embodiments is implemented, and the same technical effects can be achieved; to avoid repetition, details are not repeated here. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions that cause a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to the above specific embodiments. The above embodiments are merely illustrative rather than restrictive; inspired by the present invention, those skilled in the art can also devise many other forms without departing from the scope protected by the purpose of the present invention and the claims, all of which fall within the protection of the present invention.
Claims (14)
1. An image processing method applied to an electronic device, characterized by comprising:
obtaining eye location information from an image to be processed;
extracting eye depth features from a depth image corresponding to the image to be processed, according to the eye location information;
constructing a first three-dimensional eye model from the eye depth features;
optimizing the first three-dimensional eye model to obtain a second three-dimensional eye model;
and performing a dimensionality-reduction transform on the second three-dimensional eye model to obtain a processed image.
2. The image processing method according to claim 1, characterized in that optimizing the first three-dimensional eye model to obtain the second three-dimensional eye model comprises:
detecting the three-dimensional key point features of the first three-dimensional eye model, and optimizing the three-dimensional key point features using a preset neural network to obtain the second three-dimensional eye model.
3. The image processing method according to claim 2, characterized in that before obtaining the eye location information from the image to be processed, the method further comprises:
establishing a neural network and performing optimization training on the neural network using a training sample set, to obtain the preset neural network used to optimize the three-dimensional key point features of the first three-dimensional eye model.
4. The image processing method according to any one of claims 1-3, characterized in that performing the dimensionality-reduction transform on the second three-dimensional eye model to obtain the processed image comprises:
converting the second three-dimensional eye model into a two-dimensional eye model, and adjusting the eye image of the image to be processed using the two-dimensional eye model.
5. The image processing method according to claim 4, characterized in that optimizing the three-dimensional key point features using the preset neural network to obtain the second three-dimensional eye model comprises:
performing machine learning on the three-dimensional key point features through the preset neural network to obtain optimized eye features;
adjusting the first three-dimensional eye model according to the three-dimensional key point features and the optimized eye features to obtain the second three-dimensional eye model;
and that performing the dimensionality-reduction transform on the second three-dimensional eye model to obtain the processed image comprises:
optimizing the two-dimensional eye model according to the optimized eye features and the three-dimensional key point features to obtain a second two-dimensional eye model;
and adjusting the eye image of the image to be processed using the second two-dimensional eye model.
6. The image processing method according to claim 1, characterized in that the image to be processed includes a glasses image, and that before obtaining the eye location information from the image to be processed, the method further comprises:
identifying the distribution position of the glasses image in the image to be processed;
and removing the glasses image from the image to be processed according to the distribution position.
7. An electronic device, characterized by comprising:
an obtaining module, used to obtain eye location information from an image to be processed;
an extraction module, used to extract eye depth features from a depth image corresponding to the image to be processed, according to the eye location information;
a construction module, used to construct a first three-dimensional eye model from the eye depth features;
an optimization module, used to optimize the first three-dimensional eye model to obtain a second three-dimensional eye model;
and a conversion module, used to perform a dimensionality-reduction transform on the second three-dimensional eye model to obtain a processed image.
8. The electronic device according to claim 7, characterized in that the optimization module is used to detect the three-dimensional key point features of the first three-dimensional eye model, and to optimize the three-dimensional key point features using a preset neural network to obtain the second three-dimensional eye model.
9. The electronic device according to claim 8, characterized in that the electronic device further comprises:
a training module, used to establish a neural network and perform optimization training on the neural network using a training sample set, obtaining the preset neural network used to optimize the three-dimensional key point features of the first three-dimensional eye model.
10. The electronic device according to any one of claims 7-9, characterized in that the conversion module is used to convert the second three-dimensional eye model into a two-dimensional eye model, and to adjust the eye image of the image to be processed using the two-dimensional eye model.
11. The electronic device according to claim 10, characterized in that the optimization module is used to perform machine learning on the three-dimensional key point features through the preset neural network to obtain optimized eye features, and to adjust the first three-dimensional eye model according to the three-dimensional key point features and the optimized eye features to obtain the second three-dimensional eye model;
and the conversion module is used to optimize the two-dimensional eye model according to the optimized eye features and the three-dimensional key point features to obtain a second two-dimensional eye model, and to adjust the eye image of the image to be processed using the second two-dimensional eye model.
12. The electronic device according to claim 7, characterized in that the image to be processed includes a glasses image, and the electronic device further comprises:
a glasses removal module, used to identify the distribution position of the glasses image in the image to be processed, and to remove the glasses image from the image to be processed according to the distribution position.
13. An electronic device, characterized by comprising: a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the image processing method according to any one of claims 1-6.
14. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the image processing method according to any one of claims 1-6 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810651981.4A CN108830901B (en) | 2018-06-22 | 2018-06-22 | Image processing method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108830901A true CN108830901A (en) | 2018-11-16 |
CN108830901B CN108830901B (en) | 2020-09-25 |
Family
ID=64137692
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810651981.4A Active CN108830901B (en) | 2018-06-22 | 2018-06-22 | Image processing method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108830901B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111107281A (en) * | 2019-12-30 | 2020-05-05 | 维沃移动通信有限公司 | Image processing method, image processing apparatus, electronic device, and medium |
CN113902790A (en) * | 2021-12-09 | 2022-01-07 | 北京的卢深视科技有限公司 | Beauty guidance method, device, electronic equipment and computer readable storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105787884A (en) * | 2014-12-18 | 2016-07-20 | 联想(北京)有限公司 | Image processing method and electronic device |
CN107124548A (en) * | 2017-04-25 | 2017-09-01 | 深圳市金立通信设备有限公司 | A kind of photographic method and terminal |
CN107704813A (en) * | 2017-09-19 | 2018-02-16 | 北京飞搜科技有限公司 | A kind of face vivo identification method and system |
CN107977605A (en) * | 2017-11-08 | 2018-05-01 | 清华大学 | Ocular Boundary characteristic extraction method and device based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
CN108830901B (en) | 2020-09-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107835367A (en) | A kind of image processing method, device and mobile terminal | |
CN108712603B (en) | Image processing method and mobile terminal | |
CN107948499A (en) | A kind of image capturing method and mobile terminal | |
CN107833177A (en) | A kind of image processing method and mobile terminal | |
CN108076290A (en) | A kind of image processing method and mobile terminal | |
CN108491775A (en) | A kind of image correcting method and mobile terminal | |
CN107592459A (en) | A kind of photographic method and mobile terminal | |
CN110706179A (en) | Image processing method and electronic equipment | |
CN108989678A (en) | A kind of image processing method, mobile terminal | |
CN109461117A (en) | A kind of image processing method and mobile terminal | |
CN107786811B (en) | A kind of photographic method and mobile terminal | |
CN107845057A (en) | One kind is taken pictures method for previewing and mobile terminal | |
CN110490897A (en) | Imitate the method and electronic equipment that video generates | |
CN109685915A (en) | A kind of image processing method, device and mobile terminal | |
CN107566749A (en) | Image pickup method and mobile terminal | |
CN109461124A (en) | A kind of image processing method and terminal device | |
CN109167914A (en) | A kind of image processing method and mobile terminal | |
CN109272466A (en) | A kind of tooth beautification method and device | |
CN108462826A (en) | A kind of method and mobile terminal of auxiliary photo-taking | |
CN110213485A (en) | A kind of image processing method and terminal | |
CN108550117A (en) | A kind of image processing method, device and terminal device | |
CN108307110A (en) | A kind of image weakening method and mobile terminal | |
CN108881544A (en) | A kind of method taken pictures and mobile terminal | |
CN109448069A (en) | A kind of template generation method and mobile terminal | |
CN109816601A (en) | A kind of image processing method and terminal device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |