CN107665074A - Color temperature adjustment method and mobile terminal - Google Patents

Color temperature adjustment method and mobile terminal

Info

Publication number
CN107665074A
CN107665074A (application CN201710973108.2A)
Authority
CN
China
Prior art keywords
facial image
mobile terminal
characteristic
type
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710973108.2A
Other languages
Chinese (zh)
Inventor
Wang Qingpeng (王青鹏)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201710973108.2A
Publication of CN107665074A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 — based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0487 — using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 — Detection; localisation; normalisation
    • G06V 40/166 — using acquisition arrangements
    • G06V 40/174 — Facial expression recognition

Abstract

The invention provides a color temperature adjustment method and a mobile terminal, belonging to the technical field of mobile terminals. The mobile terminal can, while in a lit-screen state, capture a preview image through a camera; when a face region exists in the preview image, determine a target facial image according to the face region; then, based on the target facial image and at least one preset reference facial image, determine the emotion type of the target facial image; and finally adjust the color temperature of the mobile terminal screen according to the emotion type of the target facial image. No manual adjustment by the user is required: the mobile terminal adjusts the color temperature automatically according to the user's mood, which simplifies the color temperature adjustment operation and improves its convenience.

Description

Color temperature adjustment method and mobile terminal
Technical field
The embodiments of the present invention relate to the field of communication technology, and in particular to a color temperature adjustment method and a mobile terminal.
Background technology
With the continuous development of mobile terminal technology, mobile terminals are used more and more frequently. Users often use mobile terminals to watch videos, browse news, and so on. During use, the user faces the screen of the mobile terminal for long periods of time, so the display effect of the screen often has a significant impact on the user. Typically, the display effect of a mobile terminal can be adjusted by changing its display parameters; for example, it can be adjusted by changing the color temperature value of the mobile terminal screen.
In the prior art, when adjusting the color temperature of a mobile terminal screen, the user usually opens the settings menu of the mobile terminal manually and then adjusts the color temperature through a color temperature option in the settings menu. However, this manual adjustment procedure is relatively complicated and not very convenient.
Summary of the invention
The present invention provides a color temperature adjustment method and a mobile terminal, to solve the problem that, when adjusting the color temperature of a mobile terminal screen, the operation procedure is relatively complicated and inconvenient.
To solve the above technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a color temperature adjustment method, applied to a mobile terminal including a camera, the method comprising:
when the mobile terminal is in a lit-screen state, capturing a preview image through the camera;
if a face region exists in the preview image, determining a target facial image according to the face region;
determining the emotion type of the target facial image based on the target facial image and at least one preset reference facial image; and
adjusting the color temperature of the mobile terminal screen according to the emotion type of the target facial image.
In a second aspect, an embodiment of the present invention provides a mobile terminal, the mobile terminal comprising:
an acquisition module, configured to capture a preview image through the camera when the mobile terminal is in a lit-screen state;
a first determining module, configured to determine a target facial image according to a face region if the face region exists in the preview image;
a second determining module, configured to determine the emotion type of the target facial image based on the target facial image and at least one preset reference facial image; and
an adjustment module, configured to adjust the color temperature of the mobile terminal screen according to the emotion type of the target facial image.
In a third aspect, an embodiment of the present invention provides a mobile terminal, including a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the color temperature adjustment method described in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the color temperature adjustment method described in the first aspect.
In the embodiments of the present invention, the mobile terminal can capture a preview image through the camera while in a lit-screen state; when a face region exists in the preview image, determine a target facial image according to the face region; then determine the emotion type of the target facial image based on the target facial image and at least one preset reference facial image; and finally adjust the color temperature of the mobile terminal screen according to that emotion type. No manual adjustment by the user is required: the mobile terminal adjusts the color temperature automatically according to the user's mood, which simplifies the color temperature adjustment operation and improves its convenience.
Brief description of the drawings
Fig. 1 is a first flowchart of a color temperature adjustment method provided by an embodiment of the present invention;
Fig. 2-1 is a second flowchart of the color temperature adjustment method provided by an embodiment of the present invention;
Fig. 2-2 is a schematic diagram of preview image capture provided by an embodiment of the present invention;
Fig. 3 is a block diagram of a mobile terminal provided by an embodiment of the present invention;
Fig. 4-1 is a block diagram of another mobile terminal provided by an embodiment of the present invention;
Fig. 4-2 is a block diagram of a first calculation submodule provided by an embodiment of the present invention;
Fig. 4-3 is a block diagram of a second calculation submodule provided by an embodiment of the present invention;
Fig. 5 is a hardware structure diagram of a mobile terminal implementing various embodiments of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Fig. 1 is a first flowchart of a color temperature adjustment method provided by an embodiment of the present invention. As shown in Fig. 1, the method can be applied to a mobile terminal that includes a camera, and the method can include the following steps.
Step 101: when the mobile terminal is in a lit-screen state, capture a preview image through the camera.
In an actual scenario, the user typically watches the screen while the mobile terminal is in a lit-screen state. Therefore, in an embodiment of the present invention, the mobile terminal can capture a preview image through the camera while the screen is lit; that is, the current scene is captured by the camera to obtain a preview image. Further, when the user is watching the screen of the mobile terminal, the user is usually within the coverage of the front camera; it is therefore preferable that the camera is the front camera of the mobile terminal. Specifically, while the mobile terminal is in a lit-screen state, it can capture preview images at a fixed sampling period. For example, assume the period is 1 s and the mobile terminal is a mobile phone. The user Zhang San presses the power key of the phone to wake the screen, so the phone is in a lit-screen state; the phone can then capture one preview image through the front camera every second. The preview image can be taken from the preview data stream of the camera application of the mobile terminal; the amount of data captured from the preview data stream is relatively small, which reduces the subsequent computation performed on the preview image.
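As a rough illustration of the periodic capture described above, the loop below polls a preview source once per sampling period. The `capture_frame` callable and the frame count are placeholders for the camera application's preview-stream API, which the text does not specify; this is a sketch under those assumptions, not the patented implementation.

```python
import time

def collect_previews(capture_frame, period_s=1.0, max_frames=3):
    """Capture one preview frame per sampling period while the screen is lit.

    `capture_frame` is a hypothetical stand-in for the camera
    application's preview-data-stream API.
    """
    frames = []
    for _ in range(max_frames):
        frames.append(capture_frame())  # grab the current preview frame
        time.sleep(period_s)            # wait one sampling period, e.g. 1 s
    return frames
```

In practice the loop would run for as long as the screen stays lit; `max_frames` is bounded here only so the sketch terminates.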
Step 102: if a face region exists in the preview image, determine a target facial image according to the face region.
In an embodiment of the present invention, the target facial image can be a grayscale image corresponding to the face region in the preview image. Specifically, the mobile terminal can first determine whether a face exists in the preview image, for example by face recognition technology. If a face exists in the preview image, the face region is extracted from the preview image, and grayscale processing is then applied to the face region to obtain the target facial image. For example, assume the preview image consists of the face of user Zhang San and the shooting background. By performing face recognition on the preview image, the mobile terminal can determine that a face exists in it, extract the region where Zhang San's face is located, and apply grayscale processing to the extracted region to obtain the target facial image.
Step 103: determine the emotion type of the target facial image based on the target facial image and at least one preset reference facial image.
In an embodiment of the present invention, all human emotions can be summarized into seven moods, and each preset reference facial image can represent one of them. The mood of a preset reference facial image can be one of: neutral, surprised, afraid, disgusted, angry, happy, or sad. Sadness and fear can be grouped into a first type, anger into a second type, and neutral, disgusted, surprised, and happy into a third type; in this way, each preset reference facial image corresponds to an emotion type.
For example, assume 7 preset reference facial images are preset, whose corresponding moods are respectively "neutral", "surprised", "afraid", "disgusted", "angry", "happy", and "sad". Each preset reference facial image can be a standard facial image that represents the corresponding mood. For example, a standard facial image whose expression represents the mood "happy" can be selected as the preset reference facial image corresponding to "happy".
In an embodiment of the present invention, the mobile terminal can compare the target facial image with each preset reference facial image, determine the preset reference facial image most similar to the target facial image, and then take the emotion type of that most similar preset reference facial image as the emotion type of the target facial image. Specifically, the mobile terminal can compute the Euclidean distance between the feature vector of the target facial image and the feature vector of each preset reference facial image, and take the emotion type of the preset reference facial image with the smallest Euclidean distance as the emotion type of the target facial image. For example, assume the mood corresponding to the preset reference facial image with the smallest Euclidean distance is "sad"; since "sad" belongs to the first type, the emotion type of the target facial image can be determined to be the first type.
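The nearest-neighbor comparison described above can be sketched as follows. The reference vectors below are invented numbers used purely for illustration (real values would be measured on the preset reference facial images); the mood-to-type grouping follows the text.

```python
import math

# Hypothetical reference table: mood -> contrast feature vector.
# The distance values here are made up for illustration.
REFERENCE_VECTORS = {
    "neutral": (30.0, 12.0, 8.0, 10.0, 40.0),
    "sad":     (26.0, 11.0, 5.0,  9.0, 46.0),
    "happy":   (31.0, 13.0, 9.0, 14.0, 48.0),
}

# Grouping of moods into the three emotion types described in the text.
MOOD_TO_TYPE = {"sad": 1, "afraid": 1, "angry": 2,
                "neutral": 3, "disgusted": 3, "surprised": 3, "happy": 3}

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_emotion(target_vector):
    # Pick the reference mood whose vector is nearest to the target's.
    nearest = min(REFERENCE_VECTORS,
                  key=lambda mood: euclidean(target_vector, REFERENCE_VECTORS[mood]))
    return nearest, MOOD_TO_TYPE[nearest]
```

A target vector close to the "sad" reference would thus be classified as mood "sad", emotion type 1.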
Step 104: adjust the color temperature of the mobile terminal screen according to the emotion type of the target facial image.
In an embodiment of the present invention, color temperature is a physical quantity used in illumination optics to characterize the color of a light source. When a black body is heated to a temperature at which the color of the light it emits matches the color of the light emitted by a given light source, that heating temperature is called the color temperature of the light source (unit: K, kelvin). Color temperature can influence a person's mood: the lower the color temperature, the warmer and more comfortable the user feels; the higher the color temperature, the calmer the user feels. Typically, a color temperature below 3300 K makes the user feel warm and comfortable; a color temperature between 3300 K and 5300 K makes the user feel pleasant; and a color temperature above 5300 K makes the user feel calm and tranquil.
Therefore, in an embodiment of the present invention, when the emotion type of the target facial image is the first type, the color temperature value of the mobile terminal screen can be adjusted into a first interval, which can be (0, 3300 K); this makes the user feel warm and comfortable, improving the user's mood and easing sadness and fear. When the emotion type of the target facial image is the second type, the color temperature value can be adjusted into a second interval, which can be (5300 K, +∞); this helps the user calm down and relieves the mood. When the emotion type of the target facial image is the third type, the color temperature value can be adjusted into a third interval, which can be [3300 K, 5300 K]; this makes the user feel pleasant. If the user currently feels disgusted or surprised, the degree of disgust or surprise can be eased; if the user is currently in a neutral mood, the pleasant feeling brought by the adjusted color temperature can improve the user's mood; and if the user is currently happy, it can increase the user's degree of happiness. In this embodiment, adjusting the color temperature according to mood makes the adjusted color temperature value better match the user's current mood, thereby improving the accuracy of the color temperature adjustment.
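The three-interval mapping above can be sketched as a small function. The interval boundaries follow the text; the single concrete value chosen inside each interval is an assumption made for illustration, since the text only specifies the intervals.

```python
def target_color_temperature(emotion_type):
    """Pick a target color temperature (in kelvin) for an emotion type.

    The intervals follow the text; the values 3000/6000/4300 K are
    illustrative assumptions, not values given by the text.
    """
    if emotion_type == 1:   # sadness / fear -> warm interval (0, 3300 K)
        return 3000
    if emotion_type == 2:   # anger -> calming interval (5300 K, +inf)
        return 6000
    return 4300             # neutral/disgust/surprise/happiness -> [3300 K, 5300 K]
```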
In summary, with the color temperature adjustment method provided by the embodiment of the present invention, the mobile terminal can capture a preview image through the camera while in a lit-screen state; when a face region exists in the preview image, determine a target facial image according to the face region; then determine the emotion type of the target facial image based on the target facial image and at least one preset reference facial image; and finally adjust the color temperature of the mobile terminal screen according to the emotion type of the target facial image. No manual adjustment by the user is required: the mobile terminal adjusts the color temperature automatically according to the user's mood, which simplifies the color temperature adjustment operation and improves its convenience.
Fig. 2-1 is a second flowchart of the color temperature adjustment method provided by an embodiment of the present invention. As shown in Fig. 2-1, the method can include the following steps.
Step 201: when the mobile terminal is in a lit-screen state, capture a preview image through the camera.
For example, Fig. 2-2 is a schematic diagram of preview image capture provided by an embodiment of the present invention. As shown in Fig. 2-2, the figure includes a mobile terminal 01 and a user 02; when the mobile terminal 01 is in a lit-screen state, it can capture a preview image through a front camera 01a.
Step 202: if a face region exists in the preview image, determine a target facial image according to the face region.
In an embodiment of the present invention, the mobile terminal can first determine whether a face exists in the preview image, for example by face recognition technology. If a face is recognized in the preview image, it can be determined that a face region exists in the preview image, and the target facial image is then obtained from the preview image according to the face region.
Specifically, the step of determining the target facial image according to the face region can be achieved in the following two ways:
In a first achievable way, the step of determining the target facial image according to the face region can be realized through the following sub-steps (1) to (2):
Sub-step (1): if the number of face regions is 1, segment the face region from the preview image to obtain a first facial image.
When only one user is watching the screen of the mobile terminal, only one face region exists in the captured preview image; the mobile terminal can then segment the preview image and extract that face region to obtain the first facial image.
Sub-step (2): apply grayscale processing to the first facial image to obtain the target facial image.
In this step, grayscale processing is the process of converting a color image into a grayscale image. The amount of data in the resulting grayscale image is smaller than that of the color image, but the grayscale image still reflects the overall and local distribution of chromaticity and brightness levels of the whole image. In an actual scenario, the captured preview image is generally a color image, and accordingly the obtained first facial image is also a color image; therefore, in an embodiment of the present invention, the mobile terminal can apply grayscale processing to the first facial image to convert it from a color image into a grayscale image, which reduces the amount of subsequent computation performed on the first facial image.
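A minimal sketch of such a grayscale conversion is shown below. It uses the common ITU-R BT.601 luma weights; this is an assumption for illustration, since the text does not specify which conversion the mobile terminal uses.

```python
def to_grayscale(rgb_image):
    """Convert an RGB image to a grayscale image.

    `rgb_image` is a nested list of (r, g, b) tuples in [0, 255].
    The BT.601 weights (0.299, 0.587, 0.114) are an assumed choice.
    """
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]
```

Each pixel shrinks from three channel values to one, which is why the grayscale image carries less data than the color original.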
In a second achievable way, the step of determining the target facial image according to the face region can be realized through the following sub-steps (3) to (4):
Sub-step (3): if the number of face regions is greater than 1, segment the face region with the largest area from the preview image to obtain a second facial image.
In actual application scenarios, multiple users may watch one mobile terminal at the same time; in that case, the preview image captured by the mobile terminal can contain multiple face regions. The mobile terminal can then determine the face region with the largest area among the multiple face regions, segment the preview image, and extract the largest face region to obtain the second facial image.
For example, assume three face regions exist in a preview image a: face region 1, face region 2, and face region 3. The mobile terminal can compute the areas of these three face regions. Assume the area of face region 1 is 4, the area of face region 2 is 6, and the area of face region 3 is 3; since the area of face region 2 is the largest, the mobile terminal can extract the part of the preview image corresponding to face region 2 to obtain the second facial image.
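The largest-area selection can be sketched in a few lines. The `(x, y, width, height)` box format is a hypothetical representation of a detected face region, since the text does not specify how the detector reports regions.

```python
def largest_face_region(regions):
    """Select the face region with the largest area.

    Each region is an assumed (x, y, width, height) box;
    ties keep the first occurrence.
    """
    return max(regions, key=lambda box: box[2] * box[3])
```

With three boxes of areas 4, 6, and 3, as in the example above, the second box is selected.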
Sub-step (4): apply grayscale processing to the second facial image to obtain the target facial image.
The implementation of this step is similar to the implementation process of sub-step (2) above and is not described in detail again in this embodiment.
It should be noted that, in practical applications, normalization processing can also be applied after the grayscale image is obtained, that is, a series of standard processing transformations on the image, such as head pose correction, so that the image is transformed into a fixed canonical form; this can improve image quality, eliminate noise, and improve the quality of the image data.
Step 203: compute the feature vector of the target facial image to obtain a first feature vector.
Specifically, this step can be realized through the following sub-steps (5) to (7).
Sub-step (5): divide the target facial image into N first feature regions, using the same feature region division mode as used for the preset reference facial images.
In this step, the feature region division mode can be predefined by the developer; the embodiment of the present invention does not limit the specific division mode, as long as the same division mode is used for the target facial image and the preset reference facial images. For example, when a person makes expressions representing different moods, the distances between the facial features change accordingly; therefore, in an embodiment of the present invention, the division mode can be determined by the facial features in the facial image, and the mobile terminal can divide the target facial image according to this division mode to obtain multiple first feature regions. For example, the region where the eyebrows are located in the target facial image can be divided into first feature region 1; the region where the eyebrows and eyes are located into first feature region 2; the region where the eyes are located into first feature region 3; and the region where the mouth is located into first feature region 4.
Sub-step (6): determine at least two first feature points in each first feature region, using the same feature point selection mode as used for the preset reference facial images.
In this step, the feature point selection mode can be predefined by the developer; the embodiment of the present invention does not limit the specific selection mode, as long as the same selection mode is used for the target facial image and the preset reference facial images. For example, for first feature region 1, which represents the region where the eyebrows are located, the center point of the left eyebrow and the center point of the right eyebrow can be marked as the first feature points of first feature region 1. For first feature region 2, which represents the region where the eyebrows and eyes are located, the center point of the pupil and the center point of the eyebrow can be marked as the first feature points of first feature region 2. For first feature region 3, which represents the region where the eyes are located, the midpoint of the upper eyelid and the midpoint of the lower eyelid can be marked as the first feature points of first feature region 3. For first feature region 4, which represents the region where the mouth is located, the midpoint of the upper lip edge, the midpoint of the lower lip edge, the point at the left corner of the mouth, and the point at the right corner of the mouth can be marked as the first feature points of first feature region 4.
Sub-step (7): for each first feature region, determine the distance between the at least two first feature points as the feature value of that first feature region, obtaining N first feature values.
In this step, for first feature region 1, the mobile terminal can compute the distance x0 between the center point of the left eyebrow and the center point of the right eyebrow, and determine x0 as the feature value of first feature region 1. For first feature region 2, the mobile terminal can compute the distance x1 between the center point of the pupil and the center point of the eyebrow, and determine x1 as the feature value of first feature region 2. For first feature region 3, the mobile terminal can compute the distance x2 between the midpoint of the upper eyelid and the midpoint of the lower eyelid, and determine x2 as the feature value of first feature region 3. For first feature region 4, the mobile terminal can compute the distance x3 between the midpoint of the upper lip edge and the midpoint of the lower lip edge, and determine x3 as a feature value of first feature region 4; it can also compute the distance x4 between the point at the left corner of the mouth and the point at the right corner of the mouth, and determine x4 as a feature value of first feature region 4.
Denoting the feature vector of the target facial image by X, the first feature vector X = (x0, x1, x2, x3, x4) can now be obtained. In an actual scenario, when the user is in different moods, the distances between the facial features change accordingly. For example, when the user makes an expression of the "sad" mood, the user may frown, narrow the eyes, and pout; compared with the natural state, the distance x0 between the two eyebrows becomes smaller, the distance x2 between the upper and lower eyelids becomes smaller, and the mouth is stretched, that is, the distance x4 between the point at the left corner of the mouth and the point at the right corner of the mouth becomes larger. It can thus be seen that the first feature vector can reflect the mood of the face in the facial image.
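Assembling X = (x0, x1, x2, x3, x4) from landmark coordinates can be sketched as below. The landmark names and the `(x, y)` pixel-coordinate format are hypothetical stand-ins for the points the text describes (eyebrow centers, pupil and eyebrow centers, eyelid midpoints, lip-edge midpoints, mouth corners).

```python
import math

def dist(p, q):
    # Euclidean distance between two (x, y) points.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def build_feature_vector(lm):
    """Assemble X = (x0, x1, x2, x3, x4) from a landmark dict.

    `lm` maps hypothetical landmark names to (x, y) coordinates;
    the names are assumptions, not identifiers from the text.
    """
    x0 = dist(lm["left_brow_center"], lm["right_brow_center"])
    x1 = dist(lm["pupil_center"], lm["brow_center"])
    x2 = dist(lm["upper_eyelid_mid"], lm["lower_eyelid_mid"])
    x3 = dist(lm["upper_lip_mid"], lm["lower_lip_mid"])
    x4 = dist(lm["left_mouth_corner"], lm["right_mouth_corner"])
    return (x0, x1, x2, x3, x4)
```

The same routine, applied to each preset reference facial image, would produce the contrast feature vectors Y1 through Y7 described in step 204 below.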
Step 204: extract the pre-stored feature vector of each preset reference facial image to obtain at least one contrast feature vector.
In an embodiment of the present invention, the mobile terminal can precompute the feature vector of each preset reference facial image and pre-store the resulting contrast feature vectors in the mobile terminal. Assuming there are 7 preset reference facial images, the mobile terminal can precompute the feature vector of each of them and store the 7 resulting contrast feature vectors in the mobile terminal. The mobile terminal can then directly extract the pre-stored feature vector of each preset reference facial image to obtain at least one contrast feature vector.
Specifically, before the step in which the mobile terminal extracts the pre-stored feature vector of each preset reference facial image, the step of computing the feature vector of each preset reference facial image can include:
Step a, according to default characteristic area dividing mode, each preset reference facial image is divided into M contrast Characteristic area.
In this step, M is the positive integer not less than 2, and mobile terminal is precalculating each preset reference facial image During characteristic vector, preset reference facial image can be divided according to default characteristic area dividing mode, obtained multiple Contrast characteristic region, example, for preset reference facial image 1, mobile terminal can be in advance by preset reference facial image 1 Eyebrow region be divided into contrast characteristic region 11, the eyebrow of preset reference facial image 1 and eyes region are drawn It is divided into contrast characteristic region 12, the eyes region of preset reference facial image 1 is divided into contrast characteristic region 13, will be pre- If it is divided into contrast characteristic region 14 with reference to the face region of facial image 1.
Step b, mode is chosen according to characteristic point corresponding to default characteristic area, it is determined that in each contrast characteristic region 1 contrast characteristic's points.
Example, by taking preset reference image 1 as an example, mobile terminal can be by the central point of left side eyebrow and the right eyebrow Central point is labeled as contrast characteristic's point in contrast characteristic region 11;By the central point of the central point of pupil and eyebrow labeled as contrast Contrast characteristic's point of characteristic area 12;By the intermediate point of upper eyelid and the intermediate point of lower eyelid, labeled as contrast characteristic region 13 Contrast characteristic's point;By the point and the right mouth where the intermediate point at upper lip edge, the intermediate point at lower lip edge, the left side corners of the mouth Point where angle is labeled as contrast characteristic's point in contrast characteristic region 14.
Step c: for each contrast characteristic region, define the distance between the at least two contrast characteristic points as the characteristic value of that contrast characteristic region, obtaining M contrast characteristic values.

For example, taking preset reference facial image 1 as an example, the mobile terminal may calculate the distance y10 between the central point of the left eyebrow and the central point of the right eyebrow in contrast characteristic region 11, and define y10 as the characteristic value of contrast characteristic region 11; calculate the distance y11 between the central point of the pupil and the central point of the eyebrow, and define y11 as the characteristic value of contrast characteristic region 12; calculate the distance y12 between the middle point of the upper eyelid and the middle point of the lower eyelid, and define y12 as the characteristic value of contrast characteristic region 13; and calculate the distance y13 between the middle point of the upper lip edge and the middle point of the lower lip edge and the distance y14 between the point at the left mouth corner and the point at the right mouth corner, and define y13 and y14 as the characteristic values of contrast characteristic region 14. The contrast characteristic vector of preset reference facial image 1 can thus be obtained as Y1 = (y10, y11, y12, y13, y14). Accordingly, the following can be obtained in the same way: the contrast characteristic vector Y2 = (y20, y21, y22, y23, y24) of preset reference facial image 2; the contrast characteristic vector Y3 = (y30, y31, y32, y33, y34) of preset reference facial image 3; the contrast characteristic vector Y4 = (y40, y41, y42, y43, y44) of preset reference facial image 4; the contrast characteristic vector Y5 = (y50, y51, y52, y53, y54) of preset reference facial image 5; the contrast characteristic vector Y6 = (y60, y61, y62, y63, y64) of preset reference facial image 6; and the contrast characteristic vector Y7 = (y70, y71, y72, y73, y74) of preset reference facial image 7.
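Steps a to c above can be sketched in code as pairwise distances between labeled feature points. This is a minimal illustration only: the landmark names and pixel coordinates below are assumed for the example and are not values from this embodiment.

```python
import math

def point_distance(p, q):
    """Euclidean distance between two 2-D feature points (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def contrast_feature_vector(points):
    """Build the five-element contrast feature vector for one reference face.

    The five distances mirror y10..y14 in the text: brow-center to
    brow-center, pupil to brow, upper to lower eyelid, upper to lower
    lip, and mouth corner to mouth corner.
    """
    return [
        point_distance(points["left_brow_center"], points["right_brow_center"]),
        point_distance(points["pupil_center"], points["brow_center"]),
        point_distance(points["upper_eyelid_mid"], points["lower_eyelid_mid"]),
        point_distance(points["upper_lip_mid"], points["lower_lip_mid"]),
        point_distance(points["left_mouth_corner"], points["right_mouth_corner"]),
    ]

# Illustrative landmark coordinates (in pixels) for one reference image.
landmarks = {
    "left_brow_center": (60, 80), "right_brow_center": (140, 80),
    "pupil_center": (100, 100), "brow_center": (100, 80),
    "upper_eyelid_mid": (100, 95), "lower_eyelid_mid": (100, 105),
    "upper_lip_mid": (100, 150), "lower_lip_mid": (100, 160),
    "left_mouth_corner": (80, 155), "right_mouth_corner": (120, 155),
}
Y1 = contrast_feature_vector(landmarks)
print(Y1)  # [80.0, 20.0, 10.0, 10.0, 40.0]
```

In a real implementation the landmark coordinates would come from a face-landmark detector rather than a hand-written dictionary.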
In this way, the mobile terminal can directly extract the pre-stored characteristic vector of each preset reference facial image. For example, the mobile terminal can directly read the 7 stored contrast characteristic vectors, thereby obtaining contrast characteristic vectors Y1 to Y7.

In the embodiment of the present invention, by pre-calculating the characteristic vector of each preset reference facial image, a plurality of contrast characteristic vectors are obtained and pre-stored in the mobile terminal. The mobile terminal can then obtain the characteristic vector of each preset reference facial image merely by reading; the operation is relatively simple, which saves computation time and improves the response speed of the system.

It should be noted that, in practical application, the step of pre-calculating the characteristic vector of each preset reference facial image may be performed before step 201; the embodiment of the present invention does not limit this. Meanwhile, the mobile terminal may also, after calculating the characteristic vector of the target facial image, obtain the characteristic vector of each preset reference facial image by means of calculation, thereby obtaining the contrast characteristic vectors; the embodiment of the present invention does not limit this either.
Step 205: calculate the Euclidean distance between the first eigenvector and each contrast characteristic vector.

For example, the mobile terminal may respectively calculate the Euclidean distances between the first eigenvector X and contrast characteristic vectors Y1, Y2, Y3, Y4, Y5, Y6 and Y7, obtaining 7 Euclidean distances.

Further, the process of calculating a Euclidean distance can be realized by the following sub-steps (9) to (11):
Sub-step (9): calculate the difference between each first eigenvalue in the first eigenvector and the corresponding contrast characteristic value in the contrast characteristic vector, obtaining N feature differences.

Here, N is a positive integer not less than 2 and not more than M. For example, the mobile terminal may calculate the difference between each first eigenvalue and its corresponding contrast characteristic value. Take calculating the Euclidean distance between the first eigenvector X = (x0, x1, x2, x3, x4) and the contrast characteristic vector Y1 = (y10, y11, y12, y13, y14) as an example. Since the first eigenvalue x0 and the contrast characteristic value y10 both represent the distance between the central point of the left eyebrow and the central point of the right eyebrow in the eyebrow region, it can be determined that the contrast characteristic value corresponding to x0 is y10. Since x1 and y11 both represent the distance between the central point of the pupil and the central point of the eyebrow in the eyebrow-and-eye region, it can be determined that the contrast characteristic value corresponding to x1 is y11. Since x2 and y12 both represent the distance between the middle point of the upper eyelid and the middle point of the lower eyelid in the eye region, it can be determined that the contrast characteristic value corresponding to x2 is y12. Since x3 and y13 both represent the distance between the middle point of the upper lip edge and the middle point of the lower lip edge in the mouth region, it can be determined that the contrast characteristic value corresponding to x3 is y13. Since x4 and y14 both represent the distance between the point at the left mouth corner and the point at the right mouth corner in the mouth region, it can be determined that the contrast characteristic value corresponding to x4 is y14.

It can further be derived that the differences between each first eigenvalue in the first eigenvector X and the corresponding contrast characteristic values in the contrast characteristic vector Y1, that is, the 5 first feature differences, are: (y10 − x0), (y11 − x1), (y12 − x2), (y13 − x3) and (y14 − x4).

It should be noted that the differences between each first eigenvalue in the first eigenvector X and the corresponding contrast characteristic values in the contrast characteristic vector Y1 may also be denoted as: (x0 − y10), (x1 − y11), (x2 − y12), (x3 − y13) and (x4 − y14); the embodiment of the present invention does not limit this.
Sub-step (10): calculate the quadratic sum of the N feature differences, obtaining the target quadratic sum.

For example, denoting the target quadratic sum by T, the target quadratic sum is:

T = (y10 − x0)² + (y11 − x1)² + (y12 − x2)² + (y13 − x3)² + (y14 − x4)²

Sub-step (11): take the square root of the target quadratic sum, obtaining the Euclidean distance between the first eigenvector and the contrast characteristic vector.

For example, denoting the Euclidean distance by E, the Euclidean distance E1 between the first eigenvector X and the contrast characteristic vector Y1 is:

E1 = √T = √((y10 − x0)² + (y11 − x1)² + (y12 − x2)² + (y13 − x3)² + (y14 − x4)²)
Further, with reference to the above calculation process, the following can be calculated: the Euclidean distance E2 between the first eigenvector X and the contrast characteristic vector Y2, the Euclidean distance E3 between X and Y3, the Euclidean distance E4 between X and Y4, the Euclidean distance E5 between X and Y5, the Euclidean distance E6 between X and Y6, and the Euclidean distance E7 between X and Y7; that is, Euclidean distances E1 to E7 are obtained.
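Sub-steps (9) to (11) amount to the standard Euclidean distance between two vectors. A minimal sketch follows; the two example vectors are chosen only for illustration and are not values from this embodiment.

```python
import math

def euclidean_distance(x_vec, y_vec):
    """Sub-steps (9)-(11): pairwise differences, sum of squares, square root."""
    diffs = [y - x for x, y in zip(x_vec, y_vec)]  # sub-step (9)
    target_sum = sum(d * d for d in diffs)         # sub-step (10)
    return math.sqrt(target_sum)                   # sub-step (11)

X  = [80.0, 20.0, 10.0, 10.0, 40.0]   # hypothetical first eigenvector
Y1 = [83.0, 24.0, 10.0, 10.0, 40.0]   # hypothetical contrast characteristic vector
print(euclidean_distance(X, Y1))  # 5.0
```

Note that `zip` stops at the shorter vector, which also covers the N < M case discussed below: only the characteristic values present in the first eigenvector contribute to the distance.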
It should be noted that the above steps are described taking the case where the number of first characteristic regions equals the number of contrast characteristic regions, that is, N equals M, as an example. In practical application, the number of first characteristic regions may also be less than the number of contrast characteristic regions, that is, N is less than M. For example, suppose only first characteristic region 1 representing the eyebrow region and first characteristic region 3 representing the eye region are divided in the target facial image, that is, the first eigenvector X includes only the first eigenvalues x0 and x2. Then, when calculating the Euclidean distance between the first eigenvector X and the contrast characteristic vector Y1, only the difference between x0 and y10 and the difference between x2 and y12 need to be calculated; the quadratic sum of these two feature differences is then calculated, and finally the square root of the quadratic sum is taken, obtaining the Euclidean distance between X and Y1. When the number of characteristic values included in the first eigenvector is less than the number of characteristic values included in a contrast characteristic vector, the computation amount of the Euclidean distance is correspondingly reduced, which can improve calculation speed; correspondingly, however, the precision of the determined Euclidean distance is reduced, which may cause the mood determined for the target facial image to be inaccurate.
Step 206: define the type of emotion of the preset reference facial image corresponding to the minimum Euclidean distance as the type of emotion of the target facial image.

In this step, the mobile terminal may compare the magnitudes of the Euclidean distances to determine the minimum Euclidean distance, then determine the contrast characteristic vector corresponding to the minimum Euclidean distance, then determine the type of emotion of the preset reference facial image corresponding to that contrast characteristic vector, and finally define that type of emotion as the type of emotion of the target facial image.

For example, the mobile terminal may compare the magnitudes of Euclidean distances E1, E2, E3, E4, E5, E6 and E7. Suppose E3 is the smallest of these 7 Euclidean distances; the mobile terminal may then define the type of emotion of preset reference facial image 3 as the type of emotion of the target facial image. Suppose the mood corresponding to preset reference facial image 3 is sadness; since sadness belongs to the first type, the mobile terminal can determine that the type of emotion of the target facial image is the first type.
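Step 206 is a nearest-neighbor lookup over the reference vectors. A sketch under assumed data follows; the reference vectors and their emotion labels are invented for illustration.

```python
import math

def euclidean_distance(x_vec, y_vec):
    """Standard Euclidean distance between two feature vectors."""
    return math.sqrt(sum((y - x) ** 2 for x, y in zip(x_vec, y_vec)))

def classify_emotion(x_vec, references):
    """Return (emotion_type, distance) of the nearest reference vector."""
    return min(
        ((label, euclidean_distance(x_vec, y_vec)) for label, y_vec in references),
        key=lambda pair: pair[1],
    )

# Hypothetical reference vectors tagged with emotion types.
references = [
    ("first",  [80.0, 20.0, 10.0, 10.0, 40.0]),   # e.g. sadness
    ("second", [78.0, 22.0, 14.0, 18.0, 55.0]),   # e.g. joy
    ("third",  [79.0, 21.0, 11.0, 12.0, 42.0]),   # e.g. calm
]
X = [79.5, 21.0, 11.0, 12.5, 41.0]
emotion, dist = classify_emotion(X, references)
print(emotion)  # third
```

In the embodiment itself there would be seven reference vectors (Y1 to Y7) rather than three, but the lookup is identical.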
It should be noted that, in practical application, the characteristic vector of the target facial image may also be determined by methods such as convolutional neural networks or Bayesian classification; the embodiment of the present invention does not limit this.
Step 207: adjust the color temperature of the mobile terminal screen according to the type of emotion of the target facial image.

In this step, when the type of emotion of the target facial image is the first type, the color temperature value of the mobile terminal screen may be adjusted into a first interval, which may be (0, 3300K). Specifically, the mobile terminal may randomly select a color temperature value within the first interval, for example 3200K, and then adjust the color temperature value of the mobile terminal screen to 3200K. When the type of emotion of the target facial image is the second type, the color temperature value of the mobile terminal screen may be adjusted into a second interval, which may be (5300K, +∞). Specifically, the mobile terminal may randomly select a color temperature value within the second interval, for example 5400K, and then adjust the color temperature value of the mobile terminal screen to 5400K. When the type of emotion of the target facial image is the third type, the color temperature value of the mobile terminal screen may be adjusted into a third interval, which may be [3300K, 5300K]. Specifically, the mobile terminal may randomly select a color temperature value within the third interval, for example 4000K, and then adjust the color temperature value of the mobile terminal screen to 4000K.
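The random-selection variant of step 207 can be sketched as below. The interval bounds follow the text, except that the open upper bound (+∞) of the second interval is capped at an assumed 6500K so a value can actually be drawn; that cap is an assumption of this sketch, not part of the embodiment.

```python
import random

# Color-temperature intervals per emotion type, in kelvin.
INTERVALS = {
    "first":  (0, 3300),      # warm tones for the first type
    "second": (5300, 6500),   # cool tones for the second type (upper cap assumed)
    "third":  (3300, 5300),   # neutral tones for the third type
}

def pick_color_temperature(emotion_type, rng=random):
    """Randomly choose a color temperature within the type's interval."""
    low, high = INTERVALS[emotion_type]
    return rng.uniform(low, high)

k = pick_color_temperature("first")
print(0 <= k <= 3300)  # True
```

Passing the random generator as a parameter (`rng`) keeps the sketch testable with a seeded generator.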
Further, the mobile terminal may also first determine a target color temperature value according to a preset formula, and then adjust the color temperature value of the mobile terminal screen to the target color temperature value, where the preset formula can reflect the degree of the mood corresponding to the target facial image. Specifically, when determining the target color temperature value, if the type of emotion of the target facial image is the first type, the minimum Euclidean distance calculated in the above steps may be substituted into the formula C1 = −A1/E + B1 + 3300 to determine the target color temperature value. Here, E denotes the minimum Euclidean distance, A1 is a first preset adjustment parameter, and B1 is a first preset limit parameter used to keep C1 below 3300K. Suitable values of A1 and B1 can be selected through experiment; the embodiment of the present invention does not limit this.

When the type of emotion of the target facial image is the second type, the minimum Euclidean distance calculated in the above steps may be substituted into the formula C2 = A2/E + B2 + 5300 to determine the target color temperature value. Here, A2 is a second preset adjustment parameter, and B2 is a second preset limit parameter used to keep C2 above 5300K. Suitable values of A2 and B2 can be selected through experiment; the embodiment of the present invention does not limit this.

When the type of emotion of the target facial image is the third type, the minimum Euclidean distance calculated in the above steps may be substituted into the formula C3 = A3/E + B3 + (5300 + 3300)/2 to determine the target color temperature value. Here, A3 is a third preset adjustment parameter, and B3 is a third preset limit parameter used to keep C3 not less than 3300K and not more than 5300K. Suitable values of A3 and B3 can be selected through experiment; the embodiment of the present invention does not limit this.
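The three preset formulas can be sketched directly. The parameter values passed in below are illustrative only; the text leaves A and B to be chosen by experiment.

```python
def target_color_temperature(emotion_type, e, a, b):
    """Apply the preset formulas of the text.

    e is the minimum Euclidean distance E; (a, b) are the per-type
    adjustment and limit parameters (A, B), chosen by experiment.
    """
    if emotion_type == "first":
        return -a / e + b + 3300          # C1, intended to stay below 3300 K
    if emotion_type == "second":
        return a / e + b + 5300           # C2, intended to stay above 5300 K
    return a / e + b + (5300 + 3300) / 2  # C3, intended for [3300, 5300] K

print(target_color_temperature("first", e=2.0, a=200.0, b=-50.0))  # 3150.0
print(target_color_temperature("second", e=2.0, a=200.0, b=50.0))  # 5450.0
print(target_color_temperature("third", e=2.0, a=200.0, b=0.0))    # 4400.0
```

Because E appears in the denominator, a smaller distance (a closer match to the reference mood) pushes the result further from the interval boundary, which is how the formula reflects the "degree" of the mood.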
It should be noted that, in practical application, with different ways of dividing moods, the moods included in each type may differ from those in the embodiment of the present invention; likewise, different ways of naming moods may make the moods included in each type differ from those in the embodiment of the present invention. It should be understood that any method that determines the user's mood according to the user's facial features and then, in combination with the adjustment characteristics of the three color temperature value ranges for the mood, adjusts the color temperature value of the mobile terminal screen shall fall within the protection scope of the embodiment of the present invention.
Step 208: count the adjustment frequency of the color temperature of the mobile terminal screen.

In this step, the mobile terminal may record the time point of each color temperature adjustment and then, taking one hour as the unit of time, calculate the number of color temperature adjustments per hour, thereby obtaining the adjustment frequency of the color temperature. For example, suppose the mobile terminal adjusted the color temperature once at 9:00 on October 5, once at 9:05 on October 5, once at 9:35 on October 5, and once at 9:55 on October 5; it can be derived that the frequency is 4 times/hour.
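Counting adjustments over the past hour can be sketched as a filter over recorded timestamps. The log entries reproduce the example above; the year is assumed, since the text gives only month and day.

```python
from datetime import datetime, timedelta

def adjustments_per_hour(timestamps, now):
    """Count color-temperature adjustments in the hour ending at `now`."""
    cutoff = now - timedelta(hours=1)
    return sum(1 for t in timestamps if cutoff <= t <= now)

# The example from the text: adjustments at 9:00, 9:05, 9:35 and 9:55.
log = [datetime(2017, 10, 5, 9, m) for m in (0, 5, 35, 55)]
print(adjustments_per_hour(log, now=datetime(2017, 10, 5, 10, 0)))  # 4
```

A real implementation would likely prune old timestamps from the log as it goes rather than rescanning the full history.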
Step 209: adjust the period at which the camera collects preview images according to the adjustment frequency of the color temperature.

In this step, when the adjustment frequency is higher, that is, when the user's emotional changes are more frequent, the period at which the camera collects preview images can be shortened; when the adjustment frequency is lower, that is, when the user's mood is relatively stable, the period at which the camera collects preview images can be lengthened. Specifically, a correspondence between adjustment frequency and collection period can be established in advance; after the adjustment frequency is calculated, the collection period can be determined according to this correspondence, and the period at which the camera collects preview images is finally adjusted to that collection period. In the embodiment of the present invention, the mobile terminal can determine the collection period according to how frequently the user's mood changes, so it can capture the user's emotional changes in time and set different color temperatures for different moods in time.
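The pre-established frequency-to-period correspondence of step 209 can be sketched as a threshold table. All threshold and period values below are assumptions for illustration; the embodiment does not specify them.

```python
# Hypothetical mapping from adjustment frequency (times/hour) to the
# preview-capture period in seconds: the more often the mood changes,
# the shorter the period.
FREQUENCY_TO_PERIOD = [
    (8, 15),   # >= 8 adjustments/hour -> capture every 15 s
    (4, 30),   # >= 4 adjustments/hour -> every 30 s
    (1, 60),   # >= 1 adjustment/hour  -> every 60 s
]
DEFAULT_PERIOD = 120   # stable mood -> every 2 min

def capture_period(frequency):
    """Look up the preview-capture period for an adjustment frequency."""
    for threshold, period in FREQUENCY_TO_PERIOD:
        if frequency >= threshold:
            return period
    return DEFAULT_PERIOD

print(capture_period(4))  # 30
print(capture_period(0))  # 120
```

Keeping the table ordered from highest threshold to lowest lets the first matching row win, so each frequency maps to exactly one period.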
In summary, in another color temperature adjusting method provided by an embodiment of the present invention, the mobile terminal can collect a preview image through the camera when the screen is on; when a face region exists in the preview image, determine a target facial image according to the face region; then, based on the target facial image and at least one preset reference facial image, determine the type of emotion of the target facial image; and finally adjust the color temperature of the mobile terminal screen according to the type of emotion of the target facial image. Without manual adjustment by the user, the mobile terminal can automatically adjust the color temperature according to mood, which simplifies the operation of color temperature adjustment and improves its convenience. Meanwhile, the mobile terminal can also adjust the period of collecting preview images according to the adjustment frequency of the color temperature, thereby ensuring that different color temperatures can be set for different moods in time and guaranteeing the timeliness of color temperature adjustment.
Fig. 3 is a block diagram of a mobile terminal provided by an embodiment of the present invention. As shown in Fig. 3, the mobile terminal 30 may include:

an acquisition module 301, configured to collect a preview image through the camera when the mobile terminal is in a screen-on state;

a first determining module 302, configured to, if a face region exists in the preview image, determine a target facial image according to the face region;

a second determining module 303, configured to determine the type of emotion of the target facial image based on the target facial image and at least one preset reference facial image;

an adjustment module 304, configured to adjust the color temperature of the mobile terminal screen according to the type of emotion of the target facial image.

The mobile terminal provided by the embodiment of the present invention can realize each process realized by the mobile terminal in the method embodiment of Fig. 1; to avoid repetition, details are not repeated here. In the mobile terminal provided by the embodiment of the present invention, the acquisition module can collect a preview image through the camera when the screen is on; the first determining module can then, when a face region exists in the preview image, determine a target facial image according to the face region; the second determining module can then determine the type of emotion of the target facial image based on the target facial image and at least one preset reference facial image; and finally the adjustment module can adjust the color temperature of the mobile terminal screen according to the type of emotion of the target facial image. Without manual adjustment by the user, the mobile terminal can automatically adjust the color temperature according to mood, which simplifies the operation of color temperature adjustment and improves its convenience.
Fig. 4-1 is a block diagram of another mobile terminal provided by an embodiment of the present invention. As shown in Fig. 4-1, the mobile terminal 40 may include:

an acquisition module 401, configured to collect a preview image through the camera when the mobile terminal is in a screen-on state;

a first determining module 402, configured to, if a face region exists in the preview image, determine a target facial image according to the face region;

a second determining module 403, configured to determine the type of emotion of the target facial image based on the target facial image and at least one preset reference facial image;

an adjustment module 404, configured to adjust the color temperature of the mobile terminal screen according to the type of emotion of the target facial image.

Optionally, the above first determining module 402 may also be configured to:

if the number of face regions is 1, segment the face region from the preview image to obtain a first facial image;

perform grayscale processing on the first facial image to obtain the target facial image.

Optionally, the above first determining module 402 may also be configured to:

if the number of face regions is greater than 1, segment the face region with the largest area from the preview image to obtain a second facial image;

perform grayscale processing on the second facial image to obtain the target facial image.
Optionally, the above second determining module 403 may include:

a first calculating sub-module 4031, configured to calculate the characteristic vector of the target facial image to obtain a first eigenvector;

an extracting sub-module 4032, configured to extract the pre-stored characteristic vector of each preset reference facial image to obtain at least one contrast characteristic vector;

a second calculating sub-module 4033, configured to calculate the Euclidean distance between the first eigenvector and each contrast characteristic vector;

a first determination sub-module 4034, configured to define the type of emotion of the preset reference facial image corresponding to the minimum Euclidean distance as the type of emotion of the target facial image.

Here, a characteristic vector includes the characteristic value of each characteristic region in a facial image.
Optionally, the above second determining module 403 further includes:

a dividing sub-module, configured to divide each preset reference facial image into M contrast characteristic regions according to a preset characteristic-region dividing mode;

a second determination sub-module, configured to determine at least two contrast characteristic points in each contrast characteristic region according to the characteristic-point choosing mode corresponding to each preset characteristic region;

a third determination sub-module, configured to, for each contrast characteristic region, define the distance between the at least two contrast characteristic points as the characteristic value of that contrast characteristic region, obtaining M contrast characteristic values.

Here, M is a positive integer not less than 2.
Optionally, Fig. 4-2 is a block diagram of a first calculating sub-module provided by an embodiment of the present invention. As shown in Fig. 4-2, the first calculating sub-module 4031 may include:

a division unit 4031a, configured to divide the target facial image according to the preset dividing mode to obtain at least one first characteristic region;

a first determining unit 4031b, configured to determine at least two first characteristic points in each first characteristic region according to the same characteristic-point choosing mode as used for the preset reference facial images;

a second determining unit 4031c, configured to, for each first characteristic region, define the distance between the at least two first characteristic points as the characteristic value of that first characteristic region, obtaining N first eigenvalues.

Here, N is a positive integer not less than 2 and not more than M.
Optionally, Fig. 4-3 is a block diagram of a second calculating sub-module provided by an embodiment of the present invention. As shown in Fig. 4-3, the second calculating sub-module 4033 may include:

a first computing unit 4033a, configured to calculate the difference between each first eigenvalue in the first eigenvector and the corresponding contrast characteristic value in the contrast characteristic vector, obtaining N feature differences;

a second computing unit 4033b, configured to calculate the quadratic sum of the N feature differences, obtaining the target quadratic sum;

a square-root unit 4033c, configured to take the square root of the target quadratic sum, obtaining the Euclidean distance between the first eigenvector and the contrast characteristic vector.
Optionally, the above adjustment module 404 is configured to:

when the type of emotion of the target facial image is the first type, adjust the color temperature value of the mobile terminal screen into the first interval;

when the type of emotion of the target facial image is the second type, adjust the color temperature value of the mobile terminal screen into the second interval;

when the type of emotion of the target facial image is the third type, adjust the color temperature value of the mobile terminal screen into the third interval.
Optionally, the above adjustment module 404 is configured to:

when the type of emotion of the target facial image is the first type, determine the target color temperature value according to the formula C1 = −A1/E + B1 + 3300;

when the type of emotion of the target facial image is the second type, determine the target color temperature value according to the formula C2 = A2/E + B2 + 5300;

when the type of emotion of the target facial image is the third type, determine the target color temperature value according to the formula C3 = A3/E + B3 + (5300 + 3300)/2;

adjust the color temperature value of the mobile terminal screen to the target color temperature value.

Here, E denotes the minimum Euclidean distance; A1 is a first preset adjustment parameter, and B1 is a first preset limit parameter used to keep C1 no more than 3300K; A2 is a second preset adjustment parameter, and B2 is a second preset limit parameter used to keep C2 not less than 5300K; A3 is a third preset adjustment parameter, and B3 is a third preset limit parameter used to keep C3 not less than 3300K and not more than 5300K.
Optionally, the above mobile terminal 40 further includes:

a statistics module 405, configured to count the adjustment frequency of the color temperature of the mobile terminal screen;

an adjusting module 406, configured to adjust the period at which the camera collects preview images according to the adjustment frequency of the color temperature.

The mobile terminal provided by the embodiment of the present invention can realize each process realized by the mobile terminal in the method embodiment of Fig. 2-1; to avoid repetition, details are not repeated here. In another mobile terminal provided by the embodiment of the present invention, the acquisition module can collect a preview image through the camera when the screen is on; the first determining module can then, when a face region exists in the preview image, determine a target facial image according to the face region; the second determining module can then determine the type of emotion of the target facial image based on the target facial image and at least one preset reference facial image; and finally the adjustment module can adjust the color temperature of the mobile terminal screen according to the type of emotion of the target facial image. Without manual adjustment by the user, the mobile terminal can automatically adjust the color temperature according to mood, which simplifies the operation of color temperature adjustment and improves its convenience. Further, the statistics module can count the adjustment frequency of the color temperature, and the adjusting module can adjust the period of collecting preview images according to that adjustment frequency, thereby ensuring that different color temperatures can be set for different moods in time and guaranteeing the timeliness of color temperature adjustment.
Fig. 5 is a hardware structure diagram of a mobile terminal realizing each embodiment of the present invention.

The mobile terminal 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, a power supply 511, a camera 512, and other components. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 5 does not constitute a limitation on the mobile terminal; the mobile terminal may include more or fewer components than shown, combine some components, or use a different arrangement of components. In the embodiment of the present invention, mobile terminals include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, vehicle-mounted terminals, wearable devices, pedometers, and the like.

The camera 512 is configured to collect a preview image according to an instruction sent by the processor 510.
The processor 510 is configured to: collect a preview image through the camera when the mobile terminal is in a screen-on state;

the processor 510 is configured to: if a face region exists in the preview image, determine a target facial image according to the face region;

the processor 510 is configured to: determine the type of emotion of the target facial image based on the target facial image and at least one preset reference facial image;

the processor 510 is configured to: adjust the color temperature of the mobile terminal screen according to the type of emotion of the target facial image.

In summary, the mobile terminal can collect a preview image through the camera when the screen is on; when a face region exists in the preview image, determine a target facial image according to the face region; then, based on the target facial image and at least one preset reference facial image, determine the type of emotion of the target facial image; and finally adjust the color temperature of the mobile terminal screen according to the type of emotion of the target facial image. Without manual adjustment by the user, the mobile terminal can automatically adjust the color temperature according to mood, which simplifies the operation of color temperature adjustment and improves its convenience.
It should be understood that in the embodiment of the present invention, radio frequency unit 501 can be used for receiving and sending messages or communication process in, signal Reception and transmission, specifically, by from base station downlink data receive after, handled to processor 510;In addition, will be up Data are sent to base station.Generally, radio frequency unit 501 includes but is not limited to antenna, at least one amplifier, transceiver, coupling Device, low-noise amplifier, duplexer etc..In addition, radio frequency unit 501 can also by wireless communication system and network and other set Standby communication.
Mobile terminal has provided the user wireless broadband internet by mixed-media network modules mixed-media 502 and accessed, and such as helps user to receive Send e-mails, browse webpage and access streaming video etc..
Audio output unit 503 can be receiving by radio frequency unit 501 or mixed-media network modules mixed-media 502 or in memory 509 It is sound that the voice data of storage, which is converted into audio signal and exported,.Moreover, audio output unit 503 can also be provided and moved The audio output for the specific function correlation that dynamic terminal 500 performs is (for example, call signal receives sound, message sink sound etc. Deng).Audio output unit 503 includes loudspeaker, buzzer and receiver etc..
The input unit 504 is used to receive audio or video signals. The input unit 504 may include a graphics processing unit (GPU) 5041 and a microphone 5042. The graphics processing unit 5041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processing unit 5041 may be stored in the memory 509 (or another storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 can receive sound and process it into audio data. In a telephone call mode, the processed audio data may be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 501, and output accordingly.
The mobile terminal 500 also includes at least one sensor 505, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 5061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 5061 and/or the backlight when the mobile terminal 500 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally along three axes), and can detect the magnitude and direction of gravity when stationary; it can be used to recognize the posture of the mobile terminal (such as landscape/portrait switching, related games, and magnetometer pose calibration) and for vibration-recognition-related functions (such as a pedometer or tap detection). The sensor 505 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which will not be repeated here.
The display unit 506 is used to display information input by the user or information provided to the user. The display unit 506 may include a display panel 5061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 507 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also referred to as a touch screen, collects touch operations by the user on or near it (for example, operations performed by the user on or near the touch panel 5071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 5071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 5071, the user input unit 507 may also include other input devices 5072. Specifically, the other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key or a switch key), a trackball, a mouse, and a joystick, which will not be repeated here.
Further, the touch panel 5071 may cover the display panel 5061. After the touch panel 5071 detects a touch operation on or near it, it transmits the operation to the processor 510 to determine the type of the touch event; the processor 510 then provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in Figure 5 the touch panel 5071 and the display panel 5061 are implemented as two independent components to realize the input and output functions of the mobile terminal, in some embodiments the touch panel 5071 and the display panel 5061 may be integrated to realize the input and output functions of the mobile terminal, which is not specifically limited here.
The interface unit 508 is an interface for connecting an external device to the mobile terminal 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (for example, data information or power) from an external device and transfer the received input to one or more elements within the mobile terminal 500, or may be used to transmit data between the mobile terminal 500 and an external device.
The memory 509 may be used to store software programs and various data. The memory 509 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book). In addition, the memory 509 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The processor 510 is the control center of the mobile terminal. It connects the various parts of the entire mobile terminal through various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby monitoring the mobile terminal as a whole. The processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 510.
The mobile terminal 500 may also include a power supply 511 (such as a battery) for supplying power to each component. Preferably, the power supply 511 may be logically connected to the processor 510 through a power management system, so as to realize functions such as managing charging, discharging, and power consumption through the power management system.
In addition, the mobile terminal 500 includes some functional modules that are not shown, which will not be repeated here.
Preferably, an embodiment of the present invention also provides a mobile terminal, including a processor 510, a memory 509, and a computer program stored on the memory 509 and executable on the processor 510. When executed by the processor 510, the computer program implements each process of the above color temperature adjusting method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above color temperature adjusting method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, as used herein, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes that element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or, of course, by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the method described in each embodiment of the present invention.
The embodiments of the present invention are described above with reference to the accompanying drawings, but the invention is not limited to the above specific embodiments. The above embodiments are only illustrative rather than restrictive. Inspired by the present invention, those of ordinary skill in the art can also make many other forms without departing from the spirit of the invention and the scope of protection of the claims, all of which fall within the protection of the present invention.

Claims (22)

  1. A color temperature adjusting method, applied to a mobile terminal including a camera, characterized in that the method includes:
    when the mobile terminal is in a bright-screen state, capturing a preview image through the camera;
    if a face region is present in the preview image, determining a target facial image according to the face region;
    based on the target facial image and at least one preset reference facial image, determining the emotion type of the target facial image;
    according to the emotion type of the target facial image, adjusting the color temperature of the mobile terminal screen.
  2. The method according to claim 1, characterized in that the step of determining the target facial image according to the face region includes:
    if the number of face regions is 1, segmenting the face region from the preview image to obtain a first facial image;
    performing grayscale processing on the first facial image to obtain the target facial image.
  3. The method according to claim 1, characterized in that the step of determining the target facial image according to the face region includes:
    if the number of face regions is greater than 1, segmenting the face region with the largest area from the preview image to obtain a second facial image;
    performing grayscale processing on the second facial image to obtain the target facial image.
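Claims 2 and 3 above (segment the sole face region, or the largest one by area, then convert it to grayscale) can be illustrated with a minimal sketch. The (x, y, w, h) region format and the ITU-R BT.601 luma weights are assumptions made here for illustration; the patent does not specify them.

```python
def to_grayscale(rgb_image):
    """Grayscale via the ITU-R BT.601 luma weights (an assumed choice)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def segment(image, region):
    """Cut a rectangular face region (x, y, w, h) out of the preview image."""
    x, y, w, h = region
    return [row[x:x + w] for row in image[y:y + h]]

def target_face_image(preview, face_regions):
    """One region: take it; several: take the largest by area; then grayscale it."""
    if not face_regions:
        return None
    region = max(face_regions, key=lambda r: r[2] * r[3])  # area = w * h
    return to_grayscale(segment(preview, region))

preview = [[(255, 255, 255)] * 4 for _ in range(4)]  # a 4x4 all-white RGB image
target = target_face_image(preview, [(0, 0, 1, 1), (0, 0, 2, 2)])  # picks the 2x2 region
```

With two candidate regions, the sketch selects the larger 2-by-2 region and returns its grayscale pixels.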
  4. The method according to claim 1, characterized in that the step of determining the emotion type of the target facial image based on the target facial image and at least one preset reference facial image includes:
    calculating the feature vector of the target facial image to obtain a first feature vector;
    extracting the feature vector of each pre-stored preset reference facial image to obtain at least one contrast feature vector;
    calculating the Euclidean distance between the first feature vector and each contrast feature vector;
    determining the emotion type of the preset reference facial image corresponding to the minimum Euclidean distance as the emotion type of the target facial image;
    wherein a feature vector includes the feature value of each feature region in a facial image.
  5. The method according to claim 4, characterized in that before the step of extracting the feature vector of each pre-stored preset reference facial image, the method further includes:
    dividing each preset reference facial image into M contrast feature regions according to a preset feature-region division mode;
    determining at least two contrast feature points in each contrast feature region according to a feature-point selection mode corresponding to the preset feature regions;
    for each contrast feature region, determining the distance between the at least two contrast feature points as the feature value of that contrast feature region, to obtain M contrast feature values;
    wherein M is a positive integer not less than 2.
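Claim 5's per-region feature value, the distance between a region's two feature points, reduces to a short sketch. The point pairs below (e.g. mouth corners, eyelid points) are hypothetical inputs, not data from the patent.

```python
import math

def feature_value(point_a, point_b):
    """Claim 5: a region's feature value is the distance between its two feature points."""
    return math.dist(point_a, point_b)

def contrast_feature_vector(region_point_pairs):
    """One (point_a, point_b) pair per contrast feature region gives M feature values."""
    return [feature_value(a, b) for a, b in region_point_pairs]

# Two hypothetical regions: one 3-4-5 point pair and one vertical pair 2 apart.
vector = contrast_feature_vector([((0, 0), (3, 4)), ((0, 0), (0, 2))])  # [5.0, 2.0]
```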
  6. The method according to claim 5, characterized in that the step of calculating the feature vector of the target facial image to obtain the first feature vector includes:
    dividing the target facial image into N first feature regions according to the same feature-region division mode as for the preset reference facial images;
    determining at least two first feature points in each first feature region according to the same feature-point selection mode as for the preset reference facial images;
    for each first feature region, determining the distance between the at least two first feature points as the feature value of that first feature region, to obtain N first feature values;
    wherein N is a positive integer not less than 2 and not greater than M.
  7. The method according to claim 6, characterized in that the step of calculating the Euclidean distance between the first feature vector and each contrast feature vector includes:
    calculating the difference between each first feature value in the first feature vector and the corresponding contrast feature value in the contrast feature vector, to obtain N feature differences;
    calculating the sum of squares of the N feature differences to obtain a target sum of squares;
    taking the square root of the target sum of squares to obtain the Euclidean distance between the first feature vector and the contrast feature vector.
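The computation of claims 4, 6, and 7 (N feature differences, their sum of squares, its square root, then picking the reference at minimum distance, with N no greater than M) can be sketched directly; the emotion labels and vectors below are illustrative only.

```python
import math

def euclidean_distance(first_vector, contrast_vector):
    """Claim 7: N feature differences, their sum of squares, then the square root.

    zip() naturally compares only the first N contrast feature values when the
    first feature vector is shorter (N <= M).
    """
    differences = [f - c for f, c in zip(first_vector, contrast_vector)]
    return math.sqrt(sum(d * d for d in differences))

def nearest_emotion(first_vector, references):
    """Claim 4: the emotion type of the reference image at minimum Euclidean distance."""
    return min(references,
               key=lambda ref: euclidean_distance(first_vector, ref["vector"]))["emotion"]

d = euclidean_distance([1.0, 2.0], [4.0, 6.0])  # sqrt(9 + 16) = 5.0
```

For example, `nearest_emotion([1.0, 2.0], refs)` returns the label of whichever reference vector is closest to `[1.0, 2.0]`.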
  8. The method according to claim 1, characterized in that the step of adjusting the color temperature of the mobile terminal screen according to the emotion type of the target facial image includes:
    when the emotion type of the target facial image is a first type, adjusting the color temperature value of the mobile terminal screen into a first interval;
    when the emotion type of the target facial image is a second type, adjusting the color temperature value of the mobile terminal screen into a second interval;
    when the emotion type of the target facial image is a third type, adjusting the color temperature value of the mobile terminal screen into a third interval.
  9. The method according to claim 1, characterized in that the step of adjusting the screen color temperature of the mobile terminal according to the emotion type of the target facial image includes:
    when the emotion type of the target facial image is the first type, determining a target color temperature value according to the formula C1 = -A1/E + B1 + 3300;
    when the emotion type of the target facial image is the second type, determining a target color temperature value according to the formula C2 = A2/E + B2 + 5300;
    when the emotion type of the target facial image is the third type, determining a target color temperature value according to the formula C3 = A3/E + B3 + (5300 + 3300)/2;
    adjusting the color temperature value of the mobile terminal screen to the target color temperature value;
    wherein E represents the minimum Euclidean distance; A1 is a first preset adjustment parameter and B1 is a first preset limit parameter, the B1 being used to keep C1 less than 3300 K; A2 is a second preset adjustment parameter and B2 is a second preset limit parameter, the B2 being used to keep C2 greater than 5300 K; A3 is a third preset adjustment parameter and B3 is a third preset limit parameter, the B3 being used to keep C3 not less than 3300 K and not greater than 5300 K.
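The three formulas in claim 9 can be evaluated directly. The A and B parameter values below are illustrative placeholders only; the patent leaves them as presets chosen so that each result lands in its required interval.

```python
def target_color_temperature(emotion_type, e, a, b):
    """Claim 9: map the minimum Euclidean distance E to a target colour temperature.

    emotion_type: "first" (C1 < 3300 K), "second" (C2 > 5300 K), or
    "third" (3300 K <= C3 <= 5300 K); a and b are that type's preset
    adjustment and limit parameters.
    """
    if emotion_type == "first":
        return -a / e + b + 3300          # C1 = -A1/E + B1 + 3300
    if emotion_type == "second":
        return a / e + b + 5300           # C2 = A2/E + B2 + 5300
    return a / e + b + (5300 + 3300) / 2  # C3 = A3/E + B3 + 4300

c1 = target_color_temperature("first", 2.0, a=100.0, b=0.0)   # -50 + 3300 = 3250.0
c2 = target_color_temperature("second", 2.0, a=100.0, b=0.0)  #  50 + 5300 = 5350.0
c3 = target_color_temperature("third", 2.0, a=100.0, b=0.0)   #  50 + 4300 = 4350.0
```

Note how a smaller distance E (a stronger emotion match) pushes C1 further below 3300 K and C2 further above 5300 K.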
  10. The method according to claim 1, characterized in that after the step of adjusting the color temperature of the mobile terminal screen according to the emotion type of the target facial image, the method further includes:
    counting the adjustment frequency of the color temperature of the mobile terminal screen;
    adjusting the period at which the camera captures preview images according to the adjustment frequency of the color temperature.
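One plausible reading of claim 10 is sketched below: frequent adjustments suggest a quickly changing mood, so sample more often; rare adjustments allow a longer, power-saving capture period. The thresholds, bounds, and halving/doubling policy are assumptions introduced here, not taken from the patent.

```python
def next_capture_period(period_s, adjustments_per_min,
                        low=1.0, high=6.0, min_s=0.5, max_s=10.0):
    """Adapt the preview-capture period to the colour-temperature adjustment frequency."""
    if adjustments_per_min > high:
        return max(min_s, period_s / 2)   # mood changing quickly: sample more often
    if adjustments_per_min < low:
        return min(max_s, period_s * 2)   # mood stable: save power with a longer period
    return period_s                       # in-range frequency: keep the current period

fast = next_capture_period(4.0, 8.0)    # halved to 2.0 s
slow = next_capture_period(4.0, 0.5)    # doubled to 8.0 s
steady = next_capture_period(4.0, 3.0)  # unchanged, 4.0 s
```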
  11. A mobile terminal, characterized by including:
    an acquisition module, configured to capture a preview image through the camera when the mobile terminal is in a bright-screen state;
    a first determining module, configured to determine a target facial image according to a face region if the face region is present in the preview image;
    a second determining module, configured to determine the emotion type of the target facial image based on the target facial image and at least one preset reference facial image;
    an adjustment module, configured to adjust the color temperature of the mobile terminal screen according to the emotion type of the target facial image.
  12. The mobile terminal according to claim 11, characterized in that the first determining module is configured to:
    if the number of face regions is 1, segment the face region from the preview image to obtain a first facial image;
    perform grayscale processing on the first facial image to obtain the target facial image.
  13. The mobile terminal according to claim 11, characterized in that the first determining module is configured to:
    if the number of face regions is greater than 1, segment the face region with the largest area from the preview image to obtain a second facial image;
    perform grayscale processing on the second facial image to obtain the target facial image.
  14. The mobile terminal according to claim 11, characterized in that the second determining module includes:
    a first calculating submodule, configured to calculate the feature vector of the target facial image to obtain a first feature vector;
    an extracting submodule, configured to extract the feature vector of each pre-stored preset reference facial image to obtain at least one contrast feature vector;
    a second calculating submodule, configured to calculate the Euclidean distance between the first feature vector and each contrast feature vector;
    a first determining submodule, configured to determine the emotion type of the preset reference facial image corresponding to the minimum Euclidean distance as the emotion type of the target facial image;
    wherein a feature vector includes the feature value of each feature region in a facial image.
  15. The mobile terminal according to claim 14, characterized in that the second determining module further includes:
    a dividing submodule, configured to divide each preset reference facial image into M contrast feature regions according to a preset feature-region division mode;
    a second determining submodule, configured to determine at least two contrast feature points in each contrast feature region according to a feature-point selection mode corresponding to the preset feature regions;
    a third determining submodule, configured to, for each contrast feature region, determine the distance between the at least two contrast feature points as the feature value of that contrast feature region, to obtain M contrast feature values;
    wherein M is a positive integer not less than 2.
  16. The mobile terminal according to claim 15, characterized in that the first calculating submodule includes:
    a dividing unit, configured to divide the target facial image according to the preset division mode to obtain at least one first feature region;
    a first determining unit, configured to determine at least two first feature points in each first feature region according to the same feature-point selection mode as for the preset reference facial images;
    a second determining unit, configured to, for each first feature region, determine the distance between the at least two first feature points as the feature value of that first feature region, to obtain N first feature values;
    wherein N is a positive integer not less than 2 and not greater than M.
  17. The mobile terminal according to claim 16, characterized in that the second calculating submodule includes:
    a first computing unit, configured to calculate the difference between each first feature value in the first feature vector and the corresponding contrast feature value in the contrast feature vector, to obtain N feature differences;
    a second computing unit, configured to calculate the sum of squares of the N feature differences to obtain a target sum of squares;
    a square-root unit, configured to take the square root of the target sum of squares to obtain the Euclidean distance between the first feature vector and the contrast feature vector.
  18. The mobile terminal according to claim 11, characterized in that the adjustment module is configured to:
    when the emotion type of the target facial image is a first type, adjust the color temperature value of the mobile terminal screen into a first interval;
    when the emotion type of the target facial image is a second type, adjust the color temperature value of the mobile terminal screen into a second interval;
    when the emotion type of the target facial image is a third type, adjust the color temperature value of the mobile terminal screen into a third interval.
  19. The mobile terminal according to claim 11, characterized in that the adjustment module is configured to:
    when the emotion type of the target facial image is the first type, determine a target color temperature value according to the formula C1 = -A1/E + B1 + 3300;
    when the emotion type of the target facial image is the second type, determine a target color temperature value according to the formula C2 = A2/E + B2 + 5300;
    when the emotion type of the target facial image is the third type, determine a target color temperature value according to the formula C3 = A3/E + B3 + (5300 + 3300)/2;
    adjust the color temperature value of the mobile terminal screen to the target color temperature value;
    wherein E represents the minimum Euclidean distance; A1 is a first preset adjustment parameter and B1 is a first preset limit parameter, the B1 being used to keep C1 less than 3300 K; A2 is a second preset adjustment parameter and B2 is a second preset limit parameter, the B2 being used to keep C2 greater than 5300 K; A3 is a third preset adjustment parameter and B3 is a third preset limit parameter, the B3 being used to keep C3 not less than 3300 K and not greater than 5300 K.
  20. The mobile terminal according to claim 11, characterized in that the mobile terminal further includes:
    a statistics module, configured to count the adjustment frequency of the color temperature of the mobile terminal screen;
    an adjusting module, configured to adjust the period at which the camera captures preview images according to the adjustment frequency of the color temperature.
  21. 21. a kind of mobile terminal, it is characterised in that including processor, memory and be stored on the memory and can be in institute State the computer program run on processor, the computer program realized during the computing device as claim 1 to The step of color temperature adjusting method any one of 10.
  22. 22. a kind of computer-readable recording medium, it is characterised in that computer journey is stored on the computer-readable recording medium Sequence, the color temperature adjusting method as any one of claim 1 to 10 is realized when the computer program is executed by processor The step of.
CN201710973108.2A 2017-10-18 2017-10-18 Color temperature adjusting method and mobile terminal Pending CN107665074A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710973108.2A CN107665074A (en) 2017-10-18 2017-10-18 A kind of color temperature adjusting method and mobile terminal

Publications (1)

Publication Number Publication Date
CN107665074A true CN107665074A (en) 2018-02-06

Family

ID=61098346

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710973108.2A Pending CN107665074A (en) 2017-10-18 2017-10-18 A kind of color temperature adjusting method and mobile terminal

Country Status (1)

Country Link
CN (1) CN107665074A (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202434193U (en) * 2011-11-25 2012-09-12 北京京东方光电科技有限公司 Image display device
CN103996029A (en) * 2014-05-23 2014-08-20 安庆师范学院 Expression similarity measuring method and device
CN104036255A (en) * 2014-06-21 2014-09-10 电子科技大学 Facial expression recognition method
CN104102749A (en) * 2013-04-11 2014-10-15 华为技术有限公司 Terminal device
CN104503683A (en) * 2014-12-01 2015-04-08 小米科技有限责任公司 Eyesight protecting method and device
CN104867476A (en) * 2014-02-20 2015-08-26 联想(北京)有限公司 Color temperature adjusting method and electronic device
CN105095827A (en) * 2014-04-18 2015-11-25 汉王科技股份有限公司 Facial expression recognition device and facial expression recognition method
US9572232B2 (en) * 2014-05-15 2017-02-14 Universal Display Corporation Biosensing electronic devices
CN106446753A (en) * 2015-08-06 2017-02-22 南京普爱医疗设备股份有限公司 Negative expression identifying and encouraging system
CN106708257A (en) * 2016-11-23 2017-05-24 网易(杭州)网络有限公司 Game interaction method and device
CN106775360A (en) * 2017-01-20 2017-05-31 珠海格力电器股份有限公司 The control method of a kind of electronic equipment, system and electronic equipment


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020000211A1 (en) * 2018-06-26 2020-01-02 华为技术有限公司 Method for adjusting screen colour temperature, and terminal
WO2020029406A1 (en) * 2018-08-07 2020-02-13 平安科技(深圳)有限公司 Human face emotion identification method and device, computer device and storage medium
CN111182409A (en) * 2019-11-26 2020-05-19 广东小天才科技有限公司 Screen control method based on intelligent sound box, intelligent sound box and storage medium
CN111182409B (en) * 2019-11-26 2022-03-25 广东小天才科技有限公司 Screen control method based on intelligent sound box, intelligent sound box and storage medium
CN113591630A (en) * 2021-07-16 2021-11-02 中国图片社有限责任公司 Certificate photo automatic processing method, system, terminal equipment and storage medium

Similar Documents

Publication Publication Date Title
CN107256555B (en) Image processing method, device and storage medium
CN107835364A (en) One kind is taken pictures householder method and mobile terminal
CN107835367A (en) A kind of image processing method, device and mobile terminal
CN110740259A (en) Video processing method and electronic equipment
CN107613131A (en) A kind of application program disturbance-free method and mobile terminal
CN107633098A (en) A kind of content recommendation method and mobile terminal
CN107734251A (en) A kind of photographic method and mobile terminal
CN107621738A (en) The control method and mobile terminal of a kind of mobile terminal
CN107833177A (en) A kind of image processing method and mobile terminal
CN107817939A (en) A kind of image processing method and mobile terminal
CN107665074A (en) A kind of color temperature adjusting method and mobile terminal
CN109461117A (en) A kind of image processing method and mobile terminal
CN107895352A (en) A kind of image processing method and mobile terminal
CN107644396B (en) Lip color adjusting method and device
CN107845057A (en) One kind is taken pictures method for previewing and mobile terminal
CN109461124A (en) A kind of image processing method and terminal device
CN108600668A (en) A kind of record screen frame per second method of adjustment and mobile terminal
CN110072012A (en) A kind of based reminding method and mobile terminal for screen state switching
CN107483836A (en) A kind of image pickup method and mobile terminal
CN108198127A (en) A kind of image processing method, device and mobile terminal
CN108495036B (en) Image processing method and mobile terminal
CN109167914A (en) A kind of image processing method and mobile terminal
CN108037966A (en) A kind of interface display method, device and mobile terminal
CN109448069A (en) A kind of template generation method and mobile terminal
CN109671034A (en) A kind of image processing method and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180206