CN105431852A - Electronic apparatus for providing health status information, method of controlling the same, and computer-readable storage medium - Google Patents


Info

Publication number
CN105431852A
CN105431852A (application no. CN201580001465.1A)
Authority
CN
China
Prior art keywords: image, face, equipment, user, information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201580001465.1A
Other languages
Chinese (zh)
Inventor
金善爱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority claimed from PCT/KR2015/002535 external-priority patent/WO2015137788A1/en
Publication of CN105431852A publication Critical patent/CN105431852A/en
Pending legal-status Critical Current

Landscapes

  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

Provided are a device and method of providing health status information. The device includes: a storage configured to store a first image including a face of a user and first health status information extracted from the first image; an imager configured to capture an image; a controller configured to control the imager to capture a second image including the face of the user and to extract second health status information from the captured second image; and a display configured to output the second image and information, from among the extracted second health status information, other than the stored first health status information.
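The core flow of the abstract — store health status information extracted from an earlier face image, then surface only what is new in a later one — can be illustrated as a set difference. This is a minimal sketch under the assumption that each piece of health status information can be represented as a short text label; the patent does not specify a data representation, and the function and labels here are hypothetical.

```python
# Hypothetical sketch of the abstract's core idea: output only the items of
# the second image's health status information that are absent from the
# stored first health status information. The string labels are invented
# for illustration.

def new_health_info(first_info, second_info):
    """Return items in the second info set absent from the stored first info set."""
    return second_info - first_info

stored = {"mild eye redness", "dry skin"}                       # from the first image
extracted = {"mild eye redness", "dark circles", "pale lips"}   # from the second image

print(sorted(new_health_info(stored, extracted)))
# → ['dark circles', 'pale lips']
```

Only the two new items would be displayed alongside the second image; the already-stored "mild eye redness" is suppressed.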

Description

Electronic apparatus for providing health status information, method of controlling the same, and computer-readable recording medium
Technical field
Apparatuses and methods consistent with exemplary embodiments relate to an electronic apparatus for providing health status information, a method of controlling the same, and a computer-readable recording medium on which program code for performing the method is recorded.
Background Art
Recently, various electronic apparatuses have been providing health-related functions to users. Various types of equipment are used to measure health-related information. In particular, specialized devices are used to measure health-related information, for example, by imaging a subject using ultrasound, computed tomography (CT), or magnetic resonance imaging (MRI), by measuring blood pressure using a sphygmomanometer, or by measuring body weight using a scale. However, because such specialized devices are difficult for ordinary users to use, ordinary users may find it difficult to obtain health status information.
Technologies have been developed for detecting a facial region in an image obtained by capturing a user's face, and for recognizing the detected facial region.
Therefore, if the health status of a user can be determined based on an image, health status information can be provided to the user without the need for specialized devices.
Summary of the invention
Solution
An aspect of one or more exemplary embodiments provides health status information to a user by using face images, so that the user may easily obtain health status information.
An aspect of one or more exemplary embodiments provides health status information that is obtainable by easily collecting face images.
An aspect of one or more exemplary embodiments provides a user with health status information over time.
Additional aspects will be set forth in part in the description that follows and, in part, will become apparent from the description, or may be learned by practice of the exemplary embodiments.
Advantageous Effects
Ordinary users can easily obtain health status information without the need for specialized devices such as a CT apparatus or a magnetic resonance imaging (MRI) apparatus.
Brief Description of the Drawings
Fig. 1 is a diagram for describing a method of providing health status information of a user from a face image including the user's face, according to an exemplary embodiment;
Fig. 2 is a flowchart of a method, performed by a device, of obtaining health status information of a user based on a face image of the user, according to an exemplary embodiment;
Fig. 3 is a diagram for describing a method of registering a user's face, according to an exemplary embodiment;
Fig. 4 is a diagram for describing a method of registering information about a user, according to an exemplary embodiment;
Fig. 5 is a diagram for describing a method of setting, based on a user's selection, the information to be considered by a device during a health examination, according to an exemplary embodiment;
Fig. 6 is a flowchart of a method, performed by a device, of obtaining a face image, according to an exemplary embodiment;
Fig. 7 is a flowchart of a method, performed by a device, of obtaining a face image from an input image, according to an exemplary embodiment;
Fig. 8 is a flowchart of a method, performed by a device, of obtaining a face image in a capture mode, according to an exemplary embodiment;
Fig. 9 is a diagram for describing a method, performed by a device, of providing guide information in a capture mode, according to an exemplary embodiment;
Fig. 10 is a diagram for describing a method, performed by a device, of providing guide information in a capture mode, according to another exemplary embodiment;
Fig. 11 is a diagram for describing a method, performed by a device in a capture mode, of capturing a face within a specific distance from a camera, according to an exemplary embodiment;
Fig. 12 is a diagram for describing a method, performed by a device in a capture mode, of capturing a face such that the direction of the face in an input image directly faces the camera, according to an exemplary embodiment;
Fig. 13 is a diagram for describing a method, performed by a device in a capture mode, of obtaining an input image based on whether the user's face is made up, according to an exemplary embodiment;
Fig. 14 is a diagram for describing a method, performed by a device in a capture mode, of obtaining an input image according to a preset illumination value, according to an exemplary embodiment;
Figs. 15A and 15B are diagrams for describing a method, performed by a device, of obtaining a face image in a face authentication mode, according to an exemplary embodiment;
Fig. 16 is a diagram for describing a method, performed by a device, of obtaining a face image of a user while a video call is running, according to an exemplary embodiment;
Fig. 17 is a diagram for describing a method, performed by a device, of obtaining a face image of a user while an application is running, according to an exemplary embodiment;
Fig. 18 is a diagram for describing a method of obtaining a face image of a user by a device when the device is worn on a wrist, according to an exemplary embodiment;
Fig. 19 is a diagram for describing a method of obtaining a face image of a user by a device when the device is of a glasses type, according to an exemplary embodiment;
Fig. 20 is a table of capture environment information obtained when a device obtains a face image by using an imaging unit, according to an exemplary embodiment;
Fig. 21 is a flowchart of a method, performed by a device, of obtaining a face image from an image received from an external source, according to an exemplary embodiment;
Fig. 22 is a diagram for describing a method, performed by a device, of obtaining a face image from an external server, according to an exemplary embodiment;
Figs. 23A and 23B are diagrams for describing a method, performed by a device, of obtaining a face image of a user from an image selected by the user, according to an exemplary embodiment;
Fig. 24 is a diagram for describing a process, performed by a device, of obtaining a face image from images stored in the device, according to an exemplary embodiment;
Fig. 25 is a diagram for describing a method, performed by a device, of extracting a face image from an image selected by a user, according to an exemplary embodiment;
Fig. 26 is a diagram for describing a method, performed by a device, of extracting a face image from a moving image, according to an exemplary embodiment;
Fig. 27 is a diagram for describing a process, performed by a device, of assigning an identifier to a face image, according to an exemplary embodiment;
Fig. 28 is a diagram for describing a method, performed by a device, of recording health status information obtained from an image in the file of the image, according to an exemplary embodiment;
Fig. 29 is a table for describing a method, performed by a device, of storing face images, according to an exemplary embodiment;
Fig. 30 is a flowchart of a method, performed by a device, of normalizing a face image, according to an exemplary embodiment;
Fig. 31 is a diagram for describing a method, performed by a device, of normalizing the size of a face image, according to an exemplary embodiment;
Figs. 32A to 32C are diagrams for describing a method, performed by a device, of adjusting the effect of the color temperature of illumination on a face image, according to an exemplary embodiment;
Fig. 32D is a flowchart of a method, performed by a device, of normalizing the color of a face in an input image based on the color of a reference region, according to an exemplary embodiment;
Fig. 32E is a diagram for describing a method of normalizing the color of a face in an input image based on the color of a reference region, according to an exemplary embodiment;
Fig. 33 is a flowchart of a method, performed by a device, of extracting, from a normalized face image of a user, facial condition information indicating the condition of the user's face, according to an exemplary embodiment;
Fig. 34 is a flowchart of a method, performed by a device, of extracting facial condition information shown on a face from a normalized face image, according to an exemplary embodiment;
Figs. 35A and 35B are diagrams for describing a method, performed by a device, of determining the location of a diagnosis region, according to an exemplary embodiment;
Figs. 36A and 36B are diagrams for describing a method, performed by a device, of extracting facial condition information from a diagnosis region, according to an exemplary embodiment;
Fig. 37 is a flowchart of a method, performed by a device, of obtaining health status information related to the health of a user based on facial condition information, according to an exemplary embodiment;
Figs. 38A and 38B are tables for describing a method, performed by a device, of extracting health status information of a user based on facial condition information extracted from a face image, according to an exemplary embodiment;
Fig. 39 is a flowchart of a method, performed by a device, of obtaining health status information from facial condition information by considering capture environment information obtained while capturing a face, according to an exemplary embodiment;
Fig. 40 is a diagram for describing a method of displaying capture environment information obtained when an image is captured, together with the health status information of a user, according to an exemplary embodiment;
Figs. 41A and 41B are diagrams for describing a function, provided by a device, of selecting the capture environment information to be considered from among pieces of capture environment information while obtaining health status information from facial condition information, according to an exemplary embodiment;
Fig. 42 is a flowchart of a method, performed by a device, of obtaining health status information from facial condition information based on the point in time when a face image was captured, according to an exemplary embodiment;
Fig. 43 is a diagram for describing a process, performed by a device, of obtaining health status information, according to an exemplary embodiment;
Fig. 44A is a diagram for describing a method, performed by a device, of obtaining health status information by using a service server, according to an exemplary embodiment;
Fig. 44B illustrates a database about users stored in a service server, according to an exemplary embodiment;
Fig. 44C is a flowchart of a process, performed by a device, of obtaining health status information by using a service server, according to an exemplary embodiment;
Fig. 45 is a flowchart of a method, performed by a device, of displaying health status information of a user, according to an exemplary embodiment;
Fig. 46 is a diagram for describing a method, performed by a device, of providing a user interface for providing health status information calculated from the face of a user shown in a stored image, while the device displays the stored image, according to an exemplary embodiment;
Fig. 47 is a diagram for describing a method, performed by a device, of displaying health status information on a displayed image, according to an exemplary embodiment;
Fig. 48A is a diagram for describing a method, performed by a device, of providing a user interface for selecting, from among health status information about a plurality of people, the person whose information is to be displayed on a screen, according to an exemplary embodiment;
Fig. 48B is a diagram for describing a method, performed by a device, of displaying an input image of a person selected by a user and health status information corresponding to the input image, according to an exemplary embodiment;
Figs. 49A to 49C are diagrams for describing a method, performed by a device, of providing health status information about a period or disease selected by a user, according to an exemplary embodiment;
Fig. 50A shows a screen providing health status information, according to an exemplary embodiment;
Fig. 50B shows a screen providing health status information, according to another exemplary embodiment;
Fig. 51A shows a screen providing health status information, according to another exemplary embodiment;
Fig. 51B is a diagram for describing a method, performed by a device, of showing, from among pieces of facial condition information of a user, facial condition information that changes over time, according to an exemplary embodiment;
Fig. 52 shows a screen providing health status information, according to another exemplary embodiment;
Figs. 53A and 53B are diagrams for describing a method, performed by a device, of providing health status information of a user in calendar form, according to an exemplary embodiment;
Fig. 54 is a diagram for describing a method, performed by a device, of displaying health status information of a user while a social network application is running, according to an exemplary embodiment;
Fig. 55 is a flowchart of a method, performed by a device, of extracting a difference in the facial condition of a user by comparing a face image and a reference image;
Fig. 56 is a table of capture elements, according to an exemplary embodiment;
Fig. 57 is a diagram for describing a method, performed by a device, of determining a reference image, according to an exemplary embodiment;
Figs. 58A to 58C are diagrams for describing methods, performed by a device, of determining a reference image, according to one or more exemplary embodiments;
Fig. 59 is a diagram for describing a method, performed by a device, of generating a plurality of reference images according to environments by compensating one base image, according to an exemplary embodiment;
Figs. 60A and 60B are diagrams for describing a method, performed by a device, of determining a reference image based on the state values of the capture elements of a face image, and determining the health status of a user by comparing the face image and the reference image, according to an exemplary embodiment;
Figs. 61A to 61E are diagrams for describing methods, performed by a device, of obtaining health status information of a user from a face image of the user and providing hospital-related services based on the health status information, according to one or more exemplary embodiments;
Fig. 62 is a database of health status information extractable from facial condition information and prescription information according to the health status information, according to an exemplary embodiment;
Figs. 63A to 63C are diagrams for describing methods, performed by a device, of providing prescription information suitable for a user based on the health status information of the user, according to one or more exemplary embodiments;
Fig. 64 is a diagram for describing a method, performed by a device, of providing a user with services related to the health status information of the user by interacting with a plurality of third-party servers that provide health-related information, according to an exemplary embodiment;
Fig. 65 is a diagram for describing a method, performed by a device, of providing a user, through a service server, with services provided by third-party servers, according to an exemplary embodiment;
Fig. 66 is a diagram for describing a method, performed by a device, of providing a user with services provided by third-party servers by using a service server that interacts with an integration server of the third-party servers, according to an exemplary embodiment;
Fig. 67 is a diagram for describing a method, performed by a device, of providing services provided by third-party servers by using a service server when the service server operates as an integration server, according to an exemplary embodiment;
Fig. 68 is a block diagram of a device, according to an exemplary embodiment;
Fig. 69 is a block diagram of a device, according to another exemplary embodiment; and
Fig. 70 is a block diagram of a service server, according to an exemplary embodiment.
Best Mode
According to an aspect of an exemplary embodiment, there is provided a device including: a storage configured to store a first image including a face of a user and first health status information extracted from the first image; an imager configured to capture an image; a controller configured to control the imager to capture a second image including the face of the user, and to extract second health status information from the captured second image; and a display configured to output the second image and information, from among the extracted second health status information, that is different from the stored first health status information.
From among a plurality of applications in the device, when a first application that captures an image by using the imager is run, the controller may be configured to control the imager to capture the second image; and when a second application different from the first application is run, the controller may be configured to control the display to output the second image and the information, from among the extracted second health status information, that is different from the first health status information.
The device may further include a user interface configured to receive a user input for unlocking the device when the device is in a locked state, wherein the controller may be configured to, upon receiving the user input for unlocking the device in the locked state, control the imager to capture the second image and extract the second health status information from the captured second image.
The device may further include a user interface configured to receive a user input for running a video call application in the device, wherein the controller may be configured to, when the video call application is run according to the received user input, control the imager to capture the second image and extract the second health status information from the captured second image.
The first image may be a photograph received from a server at which the user is registered.
The controller may be configured to normalize the resolution of the face of the user in the captured second image by enlarging or reducing the size of the second image to a preset size, and to extract the second health status information from the normalized second image.
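The resolution-normalization step above can be sketched with simple arithmetic. This is an illustrative assumption, not the patent's implementation: it reads the claim as aspect-preserving scaling of the image so that its longer side matches a preset target size (the 256-pixel target is invented for the example).

```python
# Hypothetical sketch of resolution normalization: enlarge or reduce an
# image so its longer side equals a preset size, preserving aspect ratio.
# The target of 256 pixels is an assumed value.

def normalized_size(width, height, target=256):
    """Return the (width, height) after scaling the longer side to `target`."""
    scale = target / max(width, height)
    return round(width * scale), round(height * scale)

print(normalized_size(640, 480))   # reduction  → (256, 192)
print(normalized_size(100, 200))   # enlargement → (128, 256)
```

With faces brought to a common scale, pixel-level features extracted from different captures become comparable.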
The controller may be configured to obtain the color temperature of the illumination on the face of the user at the point in time when the second image is captured, to normalize the tone values of the face of the user by adjusting the tone values of the second image based on the obtained color temperature, and to extract the second health status information from the normalized second image.
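The tone-normalization idea above can be illustrated with per-channel gain correction: estimate the color cast of the illumination and rescale pixel values so faces captured under different lighting become comparable. The gray-world estimate used here is an assumed stand-in for the patent's color-temperature measurement, and all values are invented for the example.

```python
# Hypothetical sketch of tone normalization: derive per-channel gains from
# the average color of an illuminated region (gray-world assumption) and
# apply them to pixels, clamping to the 8-bit range.

def gray_world_gains(avg_rgb):
    """Per-channel gains that map the average color to neutral gray."""
    r, g, b = avg_rgb
    gray = (r + g + b) / 3
    return (gray / r, gray / g, gray / b)

def correct_pixel(pixel, gains):
    """Apply channel gains to one RGB pixel, clamped to [0, 255]."""
    return tuple(min(255, round(c * k)) for c, k in zip(pixel, gains))

gains = gray_world_gains((180, 150, 120))     # warm (low color temperature) cast
print(correct_pixel((180, 150, 120), gains))  # → (150, 150, 150)
```

After correction, a reddish cast from warm indoor lighting no longer masquerades as a change in skin tone.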
The display may be configured to output the second image by displaying an indicator on the facial region of the user in the second image from which the information different from the first health status information was extracted, so as to indicate the extracted information that is different from the first health status information.
The display may be configured to output, in chronological order, a plurality of second images captured during a predetermined period of time and the health status information extracted from the plurality of second images.
The display may be configured to output capture guide information that guides capturing of the face of the user according to a preset face-image acquisition condition; and the controller may be configured to determine whether the imager has captured the face of the user according to the preset face-image acquisition condition, and, upon determining that the face of the user has been captured according to the preset face-image acquisition condition, to determine the captured image of the face of the user to be the second image.
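The guidance-and-validation loop above can be sketched as a predicate over a candidate frame: the frame is accepted as the second image only if it satisfies the preset acquisition conditions. The specific conditions and thresholds below (face width, head yaw, illuminance) are assumptions for illustration; the patent only speaks of preset face-image acquisition conditions.

```python
# Hypothetical sketch of the preset face-image acquisition check. Threshold
# values and condition names are invented for illustration.

PRESET = {"min_face_width": 200,   # pixels: face close enough to the camera
          "max_yaw_deg": 10,       # degrees: face roughly toward the camera
          "min_lux": 100}          # illuminance: scene bright enough

def meets_conditions(face_width, yaw_deg, lux):
    """True if a candidate frame may be accepted as the second image."""
    return (face_width >= PRESET["min_face_width"]
            and abs(yaw_deg) <= PRESET["max_yaw_deg"]
            and lux >= PRESET["min_lux"])

print(meets_conditions(240, 3.0, 300))   # acceptable frame → True
print(meets_conditions(240, 25.0, 300))  # face turned too far → False
```

In practice a device would display guide text (e.g. "face the camera directly") for whichever condition fails, then retry on the next frame.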
The device may further include a sensor configured to obtain biometric information of the user at the point in time when the second image is captured, wherein the controller may be configured to determine the biological condition of the user at the point in time when the second image is captured based on the obtained biometric information, and to exclude some of the second facial condition information that is shown on the face of the user due to the biological condition.
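The exclusion step above can be sketched as removing condition items that a transient biological state would explain, so they are not mistaken for health signals. The mapping from biological state to explainable facial conditions is entirely hypothetical; the patent does not enumerate such a table.

```python
# Hypothetical sketch: drop facial condition items attributable to the
# user's current biological condition (e.g. flushing right after exercise).
# The EXPLAINED_BY mapping and labels are invented for illustration.

EXPLAINED_BY = {
    "elevated_heart_rate": {"flushed skin", "facial sweat"},
}

def filter_conditions(conditions, biological_state):
    """Remove condition items explained by the detected biological state."""
    return conditions - EXPLAINED_BY.get(biological_state, set())

print(sorted(filter_conditions({"flushed skin", "dark circles"},
                               "elevated_heart_rate")))
# → ['dark circles']
```

Here a post-exercise flush is discarded, while the unrelated dark circles remain available for health analysis.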
According to an aspect of another exemplary embodiment, there is provided a method, performed by a device, of providing health status information, the method including: obtaining first health status information extracted from a first image including a face of a user; capturing a second image including the face of the user; extracting second health status information from the captured second image; and outputting the second image and information, from among the extracted second health status information, that is different from the first health status information.
The capturing of the second image may include capturing the second image when a first application that captures an image by using an imager, from among a plurality of applications in the device, is run; and the outputting of the second image and the information may include outputting the second image and the information, from among the second health status information, that is different from the first health status information, when a second application different from the first application, from among the plurality of applications in the device, is run.
The method may further include receiving a user input for unlocking the device when the device is in a locked state, wherein the capturing of the second image may include capturing the second image upon receiving the user input.
The method may further include receiving a user input for running a video call application in the device, wherein the capturing of the second image may include capturing the second image upon receiving the user input.
The first image may be a photograph received from a server at which the user is registered.
The extracting of the second health status information may include: normalizing the resolution of the face of the user in the second image by enlarging or reducing the size of the second image to a preset size; and extracting the second health status information from the normalized second image.
The extracting of the second health status information may include: obtaining the color temperature of the illumination on the face of the user at the point in time when the second image is captured; normalizing the tone values of the face of the user by adjusting the tone values of the second image based on the obtained color temperature; and extracting the second health status information from the normalized second image.
The outputting of the second image and the information may include outputting the second image by displaying an indicator on the facial region of the user in the second image from which the information different from the first health status information was extracted, so as to indicate the extracted information that is different from the first health status information.
The outputting of the second image and the information may include outputting, in chronological order, a plurality of second images captured during a predetermined period of time and the health status information extracted from the plurality of second images.
The capturing of the second image may include: outputting capture guide information for guiding capturing of the face of the user according to a preset face-image acquisition condition; determining whether the imager has captured the face of the user according to the preset face-image acquisition condition; and, upon determining that the face of the user has been captured according to the preset face-image acquisition condition, determining the captured image of the face of the user to be the second image.
The method may further include detecting biometric information of the user at the point in time when the second image is captured, wherein the extracting of the second health status information may include: determining the biological condition of the user at the point in time when the second image is captured based on the detected biometric information; and excluding some of the second facial condition information that is shown on the face of the user due to the biological condition.
According to an aspect of another exemplary embodiment, there is provided a non-transitory computer-readable recording medium having recorded thereon a program that is executable by a computer to perform the method.
According to an aspect of another exemplary embodiment, there is provided a device including: a controller configured to obtain a second image including a face of a user, and to extract second health status information from the obtained second image; and an output device configured to output the second image and information, from among the extracted second health status information, that is different from first health status information obtained from a first image including the face of the user.
The device may further include an imager configured to capture an image, wherein the controller is configured to control the imager to capture the second image including the face of the user, and to extract the second health status information from the captured second image.
From among a plurality of applications in the device, when a first application that captures an image by using the imager is run, the controller may be configured to control the imager to capture the second image; and when a second application different from the first application is run, the controller may be configured to control the output device to output the second image and the information, from among the extracted second health status information, that is different from the first health status information.
The device may further include a user interface configured to receive a user input for unlocking the device when the device is in a locked state, wherein the controller may be configured to, upon receiving the user input for unlocking the device in the locked state, control the imager to capture the second image and extract the second health status information from the captured second image.
The device may further include a user interface configured to receive a user input for running a video call application in the device, wherein the controller may be configured to, when the video call application is run according to the received user input, control the imager to capture the second image and extract the second health status information from the captured second image.
The first image may be a photograph received from a server at which the user is registered.
The controller may be configured to normalize the resolution of the face of the user in the obtained second image by enlarging or reducing the size of the second image to a preset size, and to extract the second health status information from the normalized second image.
The controller may be configured to obtain the color temperature of the illumination on the face of the user at the point in time when the second image is captured, to normalize the tone values of the face of the user by adjusting the tone values of the second image based on the obtained color temperature, and to extract the second health status information from the normalized second image.
The output device may be configured to output the second image by displaying an indicator on the facial region of the user in the second image from which the information different from the first health status information was extracted, so as to indicate the extracted information that is different from the first health status information.
The output device may be configured to output, in chronological order, a plurality of second images captured or obtained during a predetermined period of time and the health status information extracted from the plurality of second images.
The output device may be configured to output capture guide information for guiding capturing of the face of the user according to a preset face-image acquisition condition; and the controller may be configured to determine whether the imager has captured the face of the user according to the preset face-image acquisition condition, and, upon determining that the face of the user has been captured according to the preset face-image acquisition condition, to determine the captured image of the face of the user to be the second image.
The device may further include a sensor configured to obtain biometric information of the user at the point in time when the second image is captured, wherein the controller may be configured to determine the biological condition of the user at the point in time when the second image is captured based on the obtained biometric information, and to exclude some of the second facial condition information that is shown on the face of the user due to the biological condition.
Mode for the Invention
This application claims priority from Korean Patent Application No. 10-2014-0030457, filed on March 14, 2014, and Korean Patent Application No. 10-2014-0098639, filed on July 31, 2014, in the Korean Intellectual Property Office (KIPO), the disclosures of which are incorporated herein by reference in their entirety.
Advantages and features of one or more exemplary embodiments, and methods of accomplishing the same, may be understood more readily by reference to the following detailed description of the exemplary embodiments and the accompanying drawings. In this regard, the exemplary embodiments may have different forms and should not be construed as being limited to the descriptions set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the embodiments to those of ordinary skill in the art; the present invention will only be defined by the appended claims. Throughout the specification, like reference numerals refer to like elements.
Hereinafter, will briefly define the term used in the description, and will embodiment be described in detail.
The whole terms comprising descriptive or technical term used herein should be interpreted as having and significantly look like to those of ordinary skill in the art.But according to the appearance of the intention of those of ordinary skill in the art, precedent or new technology, term can have the different meaning.In addition, some terms at random can be selected by applicant, and in this case, will describe the meaning of selected term in detail in the detailed description of exemplary embodiment.Therefore, term used herein must be defined based on the meaning of term together with the description running through instructions.
When parts " comprise " or " comprising " element time, unless there are the specific description contrary with it, otherwise these parts can also comprise other elements, do not get rid of other elements.And term " unit " is in an embodiment of the present invention meant to component software or nextport hardware component NextPort, such as field programmable gate array (FPGA) or special IC (ASIC), and perform specific function.But term " unit " is not limited to software or hardware." unit " can be formed in addressable storage medium, or can be formed to operate one or more processor.Therefore, such as, term " unit " can finger assembly, such as component software, OO component software, class component and task component, and process, function, attribute, process, subroutine, program code segments, driver, firmware, microcode, circuit, data, database, data structure, table, array or variable can be comprised.The function provided by assembly and " unit " can be associated with the assembly of smaller amounts and " unit ", or can be divided into extra assembly and " unit ".
As used herein, term "and/or" comprises one or more listd any and all combinations that are associated.When such as " ... at least one " such give expression to the list of present element after time, its modifies permutation element, instead of modifies the individual element of this list.
Throughout the specification, facial condition information may denote the condition of the face serving as a reference for determining health information. The facial condition information may include facial color, the number and size of blemishes and acne, inflamed eyes, eye color, pupil size, pupil movement, face size, the shape of the facial contour, lip color, cracked lips, the positions of facial organs such as the eyes, nose, lips, ears, and eyebrows, hair color, hair condition, and movement of the facial muscles. Throughout the specification, a "face image obtaining condition" may refer to a condition that a facial region in an input image satisfies so that the device can extract facial condition information from the facial region in the input image.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. In this regard, the exemplary embodiments may have different forms and should not be construed as being limited to the descriptions set forth herein. In the following description, well-known functions or constructions are not described in detail so as not to obscure the embodiments with unnecessary detail.
Fig. 1 is a diagram for describing a method of extracting health status information 40 of a user from a face image 20 including the face of the user, according to an exemplary embodiment.
Referring to Fig. 1, the device 100 may obtain the face image 20 from an input image 10. The input image 10 may be an image obtained via general photographing by the user. Alternatively, the input image 10 may be an image received from an external device, or an image obtained by capturing the face through a capture template (for example, face recognition unlocking).
The device 100 may obtain the face image 20 from the input image 10 that satisfies a preset (e.g., predetermined) face image obtaining condition. The preset face image obtaining condition may include at least one of a condition regarding whether a face is detected, a condition regarding a direction of the face, a condition regarding illuminance during photographing, a condition regarding shaking during photographing, a condition regarding whether the eyes are open, a condition regarding facial expression, a condition regarding whether the ears are visible, a condition regarding whether the face is located at the center, and a condition regarding the size of the face.
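As an illustration only (not part of the patent disclosure), the screening of a candidate frame against several preset face image obtaining conditions can be sketched as below. The field names and every threshold value are assumptions invented for the example; the patent leaves the concrete values unspecified.

```python
# Minimal sketch, assuming invented field names and thresholds: check a
# candidate frame against a few preset face image obtaining conditions.

def satisfies_obtaining_conditions(frame):
    """Return the names of the preset conditions the frame fails (empty if it passes)."""
    failed = []
    if not frame.get("face_detected"):
        failed.append("face_detected")
    # Illuminance must fall between a first and a second illuminance (assumed lux range).
    if not (100 <= frame.get("illuminance_lux", 0) <= 1000):
        failed.append("illuminance")
    # Shaking level must stay below an assumed threshold.
    if frame.get("shake_level", 0) > 0.2:
        failed.append("shaking")
    # The face must occupy at least an assumed base fraction of the image area.
    if frame.get("face_area_ratio", 0) < 0.15:
        failed.append("face_size")
    return failed

frame = {"face_detected": True, "illuminance_lux": 50,
         "shake_level": 0.05, "face_area_ratio": 0.3}
print(satisfies_obtaining_conditions(frame))  # → ['illuminance']
```

A frame that fails any condition would not be used as the face image 20; the returned list could also drive the shooting guide information described later.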
The face image 20 may denote an image including only (or consisting essentially of) the facial region within the entire region of the input image 10. For example, the face image 20 may be a rectangular region of the face of the person in the input image 10, having the region from the forehead to the chin as its vertical range and the region between the two ears as its horizontal range. According to another exemplary embodiment, the face image 20 may be a rectangular region of the face of the person in the input image 10, having as its vertical range a region extending from a predetermined distance above the forehead to a predetermined distance below the chin, and as its horizontal range a region extending from a predetermined distance beyond one ear to a predetermined distance beyond the other ear.
The device 100 may detect the facial region from the input image 10 and store the image of the facial region as the face image 20. One or more exemplary embodiments in which the device 100 obtains the input image 10 and obtains the face image 20 from the input image 10 are described in detail below with reference to Figs. 8 through 29.
When the face image 20 is obtained, the device 100 may normalize the face image 20 according to a preset standard. For example, the device 100 may change the size of the face image 20 to a preset size. Alternatively, the device 100 may compensate for the effect of the color temperature of the illumination on the face image 20, change the brightness of the face image 20 to a preset brightness, etc. One or more exemplary embodiments in which the device 100 normalizes the face image 20 based on the preset standard are described in detail below with reference to Figs. 30 through 32E.
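One instance of the normalization described above, brightness normalization, can be sketched as follows. This is a toy model, assuming a grayscale image represented as a flat list of 8-bit pixel values and an assumed target mean of 128; a real implementation would operate on full color images.

```python
# Minimal sketch, assuming a grayscale pixel list and an assumed preset
# brightness: shift pixel values so their mean matches the preset target.

TARGET_MEAN_BRIGHTNESS = 128  # assumed preset value

def normalize_brightness(pixels, target=TARGET_MEAN_BRIGHTNESS):
    """Shift all pixel values so that the mean brightness equals the target."""
    mean = sum(pixels) / len(pixels)
    offset = target - mean
    # Clamp to the valid 8-bit range after shifting.
    return [max(0, min(255, round(p + offset))) for p in pixels]

dark_face = [40, 60, 80, 100]  # mean 70: captured under dim lighting
print(normalize_brightness(dark_face))  # → [98, 118, 138, 158]
```

After such normalization, face images captured under different lighting can be compared on a common brightness scale, which is what allows condition information extracted at different times to be meaningfully contrasted.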
When the face image 20 is normalized, the device 100 may extract facial condition information from the face image 20. One or more exemplary embodiments in which the device 100 extracts the facial condition information from the face image 20 are described in detail below with reference to Figs. 33 through 36.
When the facial condition information is obtained, the device 100 may obtain the health status information 40 indicating the health status of the user by using the facial condition information. One or more exemplary embodiments in which the device 100 obtains the health status information 40 by using the facial condition information are described in detail below with reference to Figs. 37 through 44.
Also, the device 100 may provide the health status information 40 to the user according to any of various methods. One or more exemplary embodiments in which the device 100 provides the health status information 40 are described in detail below with reference to Figs. 45 through 54.
Fig. 2 is a flowchart of a method by which the device 100 obtains health status information of a user based on a face image of the user, according to an exemplary embodiment.
In operation S210, the device 100 may obtain first facial condition information for determining the health status of the user, extracted from an image that was previously captured and/or pre-stored and that includes the face of the user.
The previously captured images may be a plurality of images captured before the time point at which the input image is captured. The first facial condition information extracted from the previously captured images may be stored in the device 100.
The facial condition information may relate to the condition of the face serving as a reference for determining health information. The facial condition information may be determined from one input image.
In operation S220, when the face of the user is captured, the device 100 may obtain an input image including the face of the user.
For example, the device 100 may obtain the input image via general photographing by the user. Alternatively, the device 100 may obtain the input image from an external device.
Alternatively, for example, the device 100 may obtain the input image by capturing the face through a capture template (for example, face recognition unlocking). While the capture template is displayed, the device 100 may display shooting guide information that guides capturing the face of the user according to the preset face image obtaining condition.
When the input image is obtained, the device 100 may determine whether the input image satisfies the preset face image obtaining condition. For example, the device 100 may determine whether the illuminance at the time the input image was captured is within a base range. Alternatively, the device 100 may determine whether the camera was shaking when the input image was captured.
Also, the device 100 may determine whether the face in the facial region of the input image satisfies the preset face image obtaining condition. For example, the device 100 may determine whether the angle of the face is within a base angle from the front. Alternatively, the device 100 may determine whether the eyes of the face are open, whether the face in the input image has a certain facial expression, whether the ears of the face are shown in the input image, whether the face in the input image has at least a base size, etc.
The device 100 may use the input image only when the input image satisfies the preset face image obtaining condition. In operation S230, the device 100 may change the facial region of the user in the input image based on reference conditions, so as to remove effects on the input image caused by photographing conditions that differ from the photographing conditions of the previously captured images.
For example, the device 100 may normalize the resolution of the face of the user in the input image by enlarging or reducing the size of the facial region to a preset size. Alternatively, the device 100 may exclude the effect of the color temperature of the illumination from the input image. For example, the color temperature of the illumination on the face of the user at the time point of capturing the input image may be obtained, and based on the color temperature, the tone values of the face of the user in the input image may be normalized by adjusting the tone values of the facial region of the user in the input image. Alternatively, the device 100 may normalize the tone values of the face of the user by adjusting the tone values of the facial region such that the color of a base area of the facial region is changed to a base color, or may normalize the brightness of the face of the user in the input image by changing the brightness of the facial region to a preset brightness.
Accordingly, even when there are a plurality of images captured under different photographing conditions, the device 100 may normalize the face of the user in the plurality of images, thereby providing the effect of the faces appearing to have been captured under the same photographing conditions.
Accordingly, the device 100 may extract the facial condition information for determining the health status of the user from images as if they had been captured under the same photographing conditions.
In operation S240, the device 100 may extract second facial condition information for determining the health status of the user from the changed facial region.
By normalizing the facial region of the user in the input image, the device 100 may extract the second facial condition information from the normalized facial region.
For example, the device 100 may determine whether the regions below the eyes are swollen. Alternatively, the device 100 may determine whether the color of the regions below the eyes is darker than before.
In operation S250, the device 100 may display information indicating at least one difference between the second facial condition information and the first facial condition information.
For example, when the first facial condition information indicates that the regions below the eyes are swollen, and the second facial condition information indicates that the regions below the eyes are swollen and dark, the device 100 may display on the screen, from among the second facial condition information, the information that the regions below the eyes are dark. Accordingly, the device 100 may notify the user of a recent change compared to the original state of the user.
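The difference computation of operation S250 can be sketched as a set difference over condition labels. The label strings are invented for the example; the patent does not prescribe a representation for facial condition information.

```python
# Minimal sketch, assuming condition information is represented as a set of
# labels: report only items of the second condition info absent from the first.

def new_findings(first_info, second_info):
    """Return items present in the second condition info but not in the first."""
    return sorted(set(second_info) - set(first_info))

first = {"under_eye_swelling"}                         # stored baseline
second = {"under_eye_swelling", "under_eye_darkening"} # newly extracted
print(new_findings(first, second))  # → ['under_eye_darkening']
```

Only the recent change is reported, matching the behavior described above of showing the user the dark under-eye regions while suppressing the already-known swelling.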
Alternatively, the first facial condition information extracted from the previously captured images may be a feature of the shape of the face of the user rather than information shown on the face due to a disease, or may be information the user is already aware of due to a chronic disease or the like.
The device 100 may display (e.g., display only) the information indicating the at least one difference, thereby providing (e.g., providing only) information about a recent disease or a recently deteriorated organ to the user.
Also, the device 100 may display the information indicating the at least one difference on the region from which the at least one difference was extracted. For example, when the at least one difference is the lip color, the device 100 may display information about a disease or disorder related to the lips on the lips of the input image.
Also, the device 100 may obtain health status information indicating the health status of the user based on the information indicating the at least one difference, i.e., what differs at the time point of capturing the input image compared to the time points of capturing the previously captured images.
The health status information may include information about a disease of the user predicted or determined from the facial condition information, an organ with a deteriorated function, or the condition of the user.
For example, when it is determined that the regions below the eyes are swollen, the device 100 may determine that the user has hyperthyroidism or an allergic disease. Also, when it is determined that the regions below the eyes are dark, the device 100 may determine that the user has allergic rhinitis.
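The determinations in the examples above amount to mapping extracted condition items to candidate health status information. A toy lookup table illustrating this (using only the correspondences stated above; the table keys and structure are otherwise invented) might look like:

```python
# Minimal sketch, assuming a simple lookup table: map facial condition items
# to the candidate health statuses named in the examples above.

CONDITION_TO_HEALTH = {
    "under_eye_swelling": ["hyperthyroidism", "allergic disease"],
    "under_eye_darkening": ["allergic rhinitis"],
}

def health_status_information(condition_items):
    """Collect candidate health statuses for each extracted condition item."""
    findings = []
    for item in condition_items:
        findings.extend(CONDITION_TO_HEALTH.get(item, []))
    return findings

print(health_status_information(["under_eye_darkening"]))  # → ['allergic rhinitis']
```

An actual device would presumably combine many such correspondences with the biometric information and user profile described below, rather than a flat table.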
The device 100 may obtain the health status information of the user based on the second facial condition information.
In addition to the facial condition information, the device 100 may obtain the health status information by also considering biometric information of the user obtained while the input image was captured. For example, the device 100 may determine the biometric state of the user at the time point of capturing the input image based on the biometric information of the user, and obtain the health status information by excluding, from the second facial condition information, information shown on the face of the user due to the biometric state.
In this case, the device 100 may display a user interface for selecting at least one of a plurality of pieces of biometric information. Upon receiving a user input selecting biometric information, the device 100 may obtain the health status information by considering both the second facial condition information and the selected biometric information.
The device 100 may display the health status information on the screen according to any of various methods.
For example, the device 100 may display health status information of the user over time, based on the first facial condition information and the second facial condition information.
For example, the device 100 may display a facial region together with the health status information corresponding to the facial region. In this case, the device 100 may display, on the region of the facial region from which the facial condition information was extracted, an image indicating that the region is related to the health status information.
Alternatively, when displaying a plurality of facial regions, the device 100 may display the plurality of facial regions and the health status information corresponding to the plurality of facial regions in order of capture date and time.
Also, the device 100 may display a user interface for selecting at least one disease obtained or determined from the plurality of facial regions based on the second facial condition information. Upon receiving a user input selecting the at least one disease, the device 100 may display, in order of capture date and time, the images associated with the selected disease from among the plurality of facial regions.
Also, the device 100 may display a calendar showing the dates and days of the week of a certain period. Also, the device 100 may display, on the region of the calendar corresponding to a certain date, the health status information obtained according to the second facial condition information extracted from the facial region captured on that date from among the plurality of facial regions.
Fig. 3 is a diagram for describing a method of registering the face of a user, according to an exemplary embodiment.
Referring to Fig. 3, the device 100 may provide a user face registration interface 310 for capturing a face image of the user and registering the captured face image as the face of the user.
The user face registration interface 310 may include an interface for capturing face images of the user in various directions, such as the front and the sides.
Also, the user face registration interface 310 may include a guide image 320 for capturing the face image according to a preset size.
Upon receiving a user input for registering the captured face image, the device 100 may store the captured face image according to identification (ID) information of the user. When the face image is registered, the device 100 may extract features of the face based on the registered face image. The device 100 may store the extracted features according to the ID information of the user. Accordingly, the device 100 may extract features of a face from an arbitrary image, determine a similarity between the extracted features and the features stored in the device 100, and determine that the face in the arbitrary image is the face of the user when the similarity is equal to or higher than a threshold value.
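The threshold comparison described above can be sketched as below. Cosine similarity and the 0.8 threshold are assumptions chosen for the example; the patent does not specify the feature representation or the similarity measure.

```python
# Minimal sketch, assuming feature vectors and cosine similarity: decide
# whether a face in an arbitrary image belongs to the registered user.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_registered_user(stored_features, extracted_features, threshold=0.8):
    """True when similarity to the stored features meets the assumed threshold."""
    return cosine_similarity(stored_features, extracted_features) >= threshold

registered = [0.9, 0.1, 0.4]    # features stored at registration time
candidate = [0.85, 0.15, 0.38]  # features extracted from an arbitrary image
print(is_registered_user(registered, candidate))  # → True
```

Matching against stored features per user ID is what lets the device attribute health history to the correct user when multiple people use the same device.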
According to one or more exemplary embodiments, the device 100 may register the face of the user based on an image selected by the user from among the images stored in the device 100.
Fig. 4 is a diagram for describing a method of registering information about a user, according to an exemplary embodiment.
Referring to Fig. 4, the device 100 may provide an interface for registering information about the user.
The information may include biometric information about the user and information about the medical history of the user. For example, the information may include at least one of the age, weight, height, gender, current diseases, past diseases, facial color, etc. of the user.
The device 100 may receive the information from the user. Also, the device 100 may store the information according to the ID information of the user. By considering this information, the device 100 may accurately obtain the health status of the user from the facial condition information of the user.
Fig. 5 is a diagram for describing a method of setting, based on a selection of the user, information to be considered by the device 100 during a health examination, according to an exemplary embodiment.
Referring to Fig. 5, the device 100 may provide a user interface for selecting the information to be considered during a health examination.
The information to be considered during a health examination may include at least one of a capture time, a capture date, a capture location, biometric information, information about the user, etc. The biometric information may include at least one of a heart rate, a blood pressure, sleep duration, brain waves, etc. The biometric information may be obtained from a sensor included in the device 100. Alternatively, the biometric information may be recorded in the input image in the form of metadata.
When the information to be considered during a health examination is selected, the device 100 may obtain the health status information from the facial condition information of the user while considering the selected information.
Fig. 6 is a flowchart of a method by which the device 100 obtains a face image, according to an exemplary embodiment.
In operation S610, the device 100 may obtain a face image of the user.
The device 100 may obtain an input image. When the input image is obtained, the device 100 may detect a facial region in the input image. When the facial region is detected, the device 100 may determine whether the facial region satisfies a face image obtaining condition. When the facial region satisfies the face image obtaining condition, the device 100 may store the facial region as the face image.
Fig. 7 is a flowchart of a method by which the device 100 obtains a face image from an input image, according to an exemplary embodiment.
In operation S710, the device 100 may obtain an input image.
The device 100 may obtain the input image in any of various ways.
For example, the device 100 may receive an input image including the face of the user from an imaging unit (e.g., an imager, a camera, etc.) included in the device 100. In this case, the device 100 may provide a template capture interface for obtaining an input image that satisfies the preset face image obtaining condition. One or more exemplary embodiments of template capture are described in detail below with reference to Figs. 8 through 14.
Also, when the device 100 satisfies a condition for capturing the face of the user, the device 100 may capture the face in the absence of a user input. One or more exemplary embodiments of capturing the face are described in detail below with reference to Figs. 15 through 19.
Alternatively, the device 100 may obtain, as the input image, an image selected by the user from among a plurality of images stored in the device 100. One or more exemplary embodiments of selecting an image are described in detail below with reference to Figs. 21 through 25. Alternatively, the device 100 may obtain, as the input image, an image downloaded from an external device. One or more exemplary embodiments of obtaining the downloaded image are described in detail below with reference to Fig. 22.
In operation S720, the device 100 may detect a facial region in the input image.
The device 100 may detect the facial region in the input image according to any of various algorithms. For example, the device 100 may determine the location of the facial region in the input image according to a knowledge-based method, a feature-based method, a template-matching method, or an appearance-based method.
In operation S730, the device 100 may determine whether the facial region satisfies a face image obtaining condition.
The face image obtaining condition may be a condition satisfied by the facial region in the input image, such that the device 100 can extract facial condition information from the facial region. The face image obtaining condition may be a combination of various criteria, such as a condition regarding whether a face is detected, a condition regarding the direction of the face, a condition regarding illuminance during photographing, a condition regarding shaking during photographing, a condition regarding whether the eyes are open, a condition regarding facial expression, a condition regarding whether the ears are shown, a condition regarding whether the face is located at the center, a condition regarding the size of the face, and a condition regarding the focus range.
The condition regarding whether a face is detected is a condition in which, when a face is detected from the input image, the input image is selected as the face image, and when a face is not detected, the input image is not used as the face image. To detect a face image from the input image, the device 100 may use any of various face detection algorithms, such as the AdaBoost algorithm.
The condition regarding the direction of the face is a condition in which, when a face in a certain direction is detected from the input image, the input image is selected as the face image. For example, when an image of the front of the face is to be used, the device 100 selects the input image as the face image when the front of the face is detected from the input image. Alternatively, when an image of the side of the face is to be used, the device 100 selects the input image as the face image when the side of the face is detected from the input image.
The condition regarding illuminance during photographing may be a condition in which the illuminance during photographing is between a first illuminance and a second illuminance. When the input image does not satisfy the condition regarding illuminance, the device 100 may not select the input image as the face image. The illuminance during photographing may be measured by using an illuminance sensor included in the device 100. When the input image is stored in the device 100, the device 100 may use an illuminance condition stored in the image file of the input image.
The condition regarding shaking during photographing is a condition in which, when the input image is shaken by at least a certain (e.g., predetermined or threshold) level, the input image is not selected as the face image. According to an exemplary embodiment, the shaking during photographing may be measured by using a shake sensor. According to another exemplary embodiment, the shaking during photographing may be determined based on the shaking level of the input image.
The condition regarding whether the eyes are open is a condition in which, when a face is detected from the input image but the eyes are closed, the input image is not selected as the face image. For example, eyes may be detected from the input image, the black-and-white regions of the eyes may be detected, and when the areas of the black-and-white regions are less than or equal to a certain value, the device 100 may determine that the eyes are closed.
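The eye-open check just described reduces to an area-threshold comparison, which can be sketched as below. The area unit (pixels) and the threshold are assumptions; the patent states only that the comparison is against "a certain value".

```python
# Minimal sketch, assuming pixel areas and an assumed threshold: the frame is
# accepted only when both detected eye regions exceed the threshold area.

EYE_REGION_AREA_THRESHOLD = 120  # pixels; assumed value

def eyes_open(black_white_region_areas, threshold=EYE_REGION_AREA_THRESHOLD):
    """True when every detected black-and-white eye region exceeds the threshold."""
    return all(area > threshold for area in black_white_region_areas)

print(eyes_open([260, 245]))  # → True  (both eyes open)
print(eyes_open([260, 40]))   # → False (one eye closed: frame rejected)
```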
The condition regarding facial expression is a condition in which, when a certain facial expression is detected according to the health status information to be measured, the input image is selected as the face image. The device 100 may detect a face from the input image and perform expression recognition on the face to determine whether the certain facial expression is detected from the input image. The certain facial expression may be a smiling face, an expressionless face, or a face with closed eyes.
The condition regarding whether the ears are shown is a condition in which, when a face is detected from the input image and the ears are shown in the input image, the input image is selected as the face image.
In operation S740, when the facial region satisfies the face image obtaining condition, the device 100 may store the facial region as the face image.
When the facial region satisfies the face image obtaining condition, the device 100 may extract data of the facial region from the input image and store the extracted data as the face image.
Fig. 8 is a flowchart of a method by which the device 100 obtains a face image by capturing a template (i.e., capturing a template image), according to an exemplary embodiment.
In operation S810, the device 100 may display or output on the screen shooting guide information for guiding capture of the face of the user according to the preset face image obtaining condition.
For example, the device 100 may provide an application for diagnosing the health of the user by capturing a template. Upon receiving a user input selecting the application for capturing the template, the device 100 may enter a health examination mode. In the health examination mode, the device 100 may display shooting guide information for guiding face capture according to the preset face image obtaining condition.
According to an exemplary embodiment, the device 100 enters the health examination mode when the user manipulates a menu to run an album function for reproducing stored images, at which point the device 100 may perform an operation of capturing a face image. Alternatively, the device 100 may perform the operation of capturing the face image when the user runs a health examination application provided in the device 100.
According to an exemplary embodiment, the device 100 may provide shooting guidance for capturing the face image in the health examination mode. The shooting guidance may be provided on the screen, provided in the form of a voice, or provided by turning a light-emitting diode (LED) on and off.
According to an exemplary embodiment, the shooting guidance may be provided while a preview image is displayed in the health examination mode. The device 100 may continuously update the shooting guidance by determining whether the preview image satisfies the preset face image obtaining condition.
According to another exemplary embodiment, the shooting guidance may be provided when a control signal requesting the shooting guidance is received from the user. For example, the user may request the device 100 to provide the shooting guidance by selecting a certain menu or button in the health examination mode.
According to another exemplary embodiment, when the device 100 includes a shutter button for receiving a shutter release signal, and the shutter button can be half-pressed or fully pressed, the device 100 may calculate and provide the shooting guidance when the shutter button is half-pressed. Here, when the shutter button is half-pressed, a first (S1) signal corresponding to auto-focus (AF) may be generated, and when the shutter button is fully pressed, a second (S2) signal corresponding to the shutter release signal may be generated.
The shooting guide information may include a guide image or a guide phrase for capturing the face according to the preset face image obtaining condition. The shooting guide information is described in detail below with reference to Figs. 9 through 14.
In operation S820, the device 100 may receive an image including the face of the user from the imaging unit included in the device 100.
When the image is received from the imaging unit, the device 100 may display the image.
In operation S830, the device 100 may determine whether the facial region in the image satisfies the preset face image obtaining condition.
For example, the device 100 may determine whether a face is detected in the image based on a face detection algorithm.
Alternatively, for example, the device 100 may detect the direction of the face in the image and determine whether the direction satisfies a reference direction. For example, the device 100 may calculate the yaw direction, tilt direction, and rotation direction of the face from the image. The reference direction may be the direction captured when the face looks straight at the camera.
After the direction of the face in the image is detected, when the detected direction is within a preset range from the reference direction, the device 100 may determine that the detected direction satisfies the reference direction.
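The range check on the detected direction can be sketched as below. Representing the direction as yaw/tilt/rotation angles in degrees, taking the camera-facing reference as all zeros, and using a 15-degree tolerance are all assumptions for the example.

```python
# Minimal sketch, assuming (yaw, tilt, rotation) angles in degrees and an
# assumed tolerance: accept the frame when each component of the detected
# direction lies within the preset range of the reference direction.

REFERENCE = (0.0, 0.0, 0.0)  # assumed: a face looking straight at the camera
TOLERANCE_DEG = 15.0         # assumed preset range

def direction_ok(detected, reference=REFERENCE, tol=TOLERANCE_DEG):
    """True when every angle component is within tol degrees of the reference."""
    return all(abs(d - r) <= tol for d, r in zip(detected, reference))

print(direction_ok((5.0, -8.0, 2.0)))  # → True  (close enough to frontal)
print(direction_ok((40.0, 0.0, 0.0)))  # → False (face turned too far)
```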
Alternatively, for example, the device 100 may detect the illuminance value during photographing. The device 100 may obtain the illuminance value from the illuminance sensor included in the device 100. When the illuminance value is obtained, the device 100 may determine whether the illuminance value is within a preset range.
Alternatively, for example, the device 100 may determine whether the object or the camera shakes during photographing. For example, the device 100 may obtain a value indicating a shaking level from the shake sensor included in the device 100. The device 100 may determine whether the value indicating the shaking level is within a preset range.
Alternatively, for example, the device 100 may determine whether the size of the face in the image is equal to or greater than a preset base size.
When it is determined in operation S830 that the facial region in the image satisfies the preset face image obtaining condition, the device 100 may store the image as the input image in operation S840.
The device 100 may store only the facial region detected in the input image as the face image. Alternatively, the device 100 may store the input image as the face image.
When it is determined in operation S830 that the facial region in the image does not satisfy the preset face image obtaining condition, the device 100 may display shooting guide information indicating the preset face image obtaining condition that is not satisfied.
For example, when the angle of the face is not within the preset angle range, the device 100 may display the direction and angle of the face for the user on the screen.
Figure 9 is a diagram for describing a method of providing guide information by the device 100 while capturing an image, according to an exemplary embodiment.
When an image is captured for a health checkup, a preview image may be displayed as shown in Figure 9. According to the present exemplary embodiment, the preview image may provide guidance for satisfying the face image acquisition conditions.
From the preview image, the device 100 determines whether the face image acquisition conditions are satisfied, and may provide capturing guidance about whether the face image acquisition conditions are satisfied, or about how to capture an image so as to satisfy the face image acquisition conditions.
The capturing guidance may be displayed on the screen as shown in Figure 9. According to another exemplary embodiment, the capturing guidance may be output as audio.
As shown in Figure 9, the capturing guidance may include an instruction to bring the face closer, or to move it farther away, so as to adjust the face size. Alternatively, the capturing guidance may include guidance to increase or decrease the illumination based on the measured illumination, an instruction to show the ears when the ears are not visible, guidance about a facial expression or a face position, etc.
According to an exemplary embodiment, the device 100 may provide capturing guidance about one or more unsatisfied conditions from among the face image acquisition conditions, and end the capturing guidance when the one or more conditions are satisfied. According to another exemplary embodiment, the device 100 may display all of the face image acquisition conditions on a display unit (e.g., a display), and provide information about whether each face image acquisition condition is satisfied.
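Mapping each unsatisfied condition to a guide message, as described above, could be sketched as follows (the message strings and condition keys are illustrative, not the patent's exact wording):

```python
# Hypothetical guide-message lookup: given per-condition check results,
# return one message per unsatisfied face-image acquisition condition.
def shooting_guidance(checks):
    messages = {
        "size": "Move your face closer to or farther from the camera.",
        "illumination": "Move to a brighter or darker place.",
        "angle": "Turn your face toward the camera.",
        "shake": "Hold the device steady.",
    }
    # Guidance ends naturally once every condition is satisfied,
    # because the returned list becomes empty.
    return [messages[k] for k, ok in checks.items() if not ok]
```

An empty result corresponds to the case where the device 100 ends the capturing guidance because all conditions are satisfied.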
Figure 10 is a diagram for describing a method of providing guide information by the device 100 while capturing an image, according to another exemplary embodiment.
According to the current exemplary embodiment, the device 100 may provide a face frame 1010 on a preview screen according to the face image acquisition conditions, so as to inform the user about the size and position of the face. The user may easily capture a face image by placing the face within the face frame 1010.
According to an exemplary embodiment, the device 100 may detect the position and size of the face in the preview image, and output, via the face frame 1010, information about whether the face image acquisition conditions are satisfied, by comparing the detected position and size against the face image acquisition conditions. For example, the device 100 may determine whether the position and size of the face satisfy the face image acquisition conditions, and provide the user with information about whether the face image acquisition conditions are satisfied by changing at least one of the color, line style, and visibility of the face frame 1010.
When a face image is obtained, the device 100 may generate an image file of the face image and store the image file. The device 100 may add, to the image file, information indicating that the content of the image file is a face image for a health checkup.
According to an exemplary embodiment, the information indicating that the content of the image file is a face image may be stored in a header of the image file.
According to another exemplary embodiment, the device 100 may store and manage the information indicating that the content of the image file is a face image as separate data. For example, the device 100 may store the file name and storage path of the image file as a separate file.
According to an exemplary embodiment, the device 100 may add information related to the face image acquisition conditions to the image file. For example, the device 100 may store, in the image file together with the image data, information about at least one of the face position, the face size, the facial expression, the illumination during capturing, the face direction, and shaking during capturing. The information indicating that the content of the image file is a face image and the information related to the face image acquisition conditions may be stored in the header of the image file.
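The separate-data storage scheme described above could be sketched as a JSON record kept alongside the image file. All field names here are illustrative assumptions, not a format defined by the patent:

```python
import json

# Hypothetical sidecar record: marks a file as a health-checkup face
# image and keeps the acquisition-condition results with it, stored
# separately from the image data itself.
def face_image_record(path, conditions):
    return json.dumps({
        "file": path,              # file name / storage path of the image
        "is_face_image": True,     # "content of this file is a face image"
        "conditions": conditions,  # e.g. illumination, angle, shaking results
    }, sort_keys=True)
```

The same fields could instead be written into the image file's own header (e.g., as EXIF or maker-note metadata), which is the first scheme the text describes.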
Figure 11 is a diagram for describing a method of capturing, by the device 100 while capturing an image, a face within a specific distance from a camera, according to an exemplary embodiment.
Referring to Figure 11, the device 100 may display a user interface for capturing a face within a specific distance from the camera.
For example, the device 100 may display, on the screen, a guide image 1110 for capturing a face located at a preset distance from the camera. The guide image 1110 may be rectangular or elliptical (e.g., shaped like a face).
In addition, for example, the device 100 may display, together with the guide image 1110, a guide phrase 1120 for fitting the face into the guide image 1110.
The device 100 may determine whether the face of the user is located at the preset distance from the camera based on whether the face is within the region of the guide image 1110. For example, the device 100 may detect the contour of the face from an image received from the imaging unit, and determine whether the contour is within the region of the guide image 1110. When the contour of the face is within the region of the guide image 1110, the device 100 may determine that the face is located at the preset distance from the camera.
When the face is within the region of the guide image 1110, the device 100 may determine the image to be an input image. When the face is not within the region of the guide image 1110, the device 100 may display, based on the position of the face, a guide phrase for moving the face in a particular direction.
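The containment test just described can be sketched with axis-aligned bounding boxes. This is a simplification assumed for illustration; the actual check may use the full face contour rather than a box:

```python
# Hypothetical containment check: the detected face contour (here
# approximated by its bounding box) must lie entirely inside the
# on-screen guide image 1110 region. Boxes are (left, top, right, bottom).
def face_within_guide(face_box, guide_box):
    fl, ft, fr, fb = face_box
    gl, gt, gr, gb = guide_box
    return gl <= fl and gt <= ft and fr <= gr and fb <= gb
```

If the face box spills outside the guide region on one side, the sign of the overhang on that side could drive the "move your face" guide phrase.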
Figure 12 is a diagram for describing a method of capturing, by the device 100 while capturing an image, a face such that the direction of the face in the input image directly faces the camera, according to an exemplary embodiment.
Referring to Figure 12, the device 100 may determine whether the face is looking straight at the camera, instead of looking to the left or the right.
For example, the device 100 may detect a face region from an image received from the imaging unit. When the face region is detected, the device 100 may detect the positions of the eyes, nose, or lips in the face region. When the positions of the eyes are detected, the device 100 may determine a central point between the eyes. Here, when the central point is horizontally within a preset range of a vertical center line 1220 that divides a frame image 1210 in two, the device 100 may determine that the face is looking straight at the camera.
When the central point is not within the preset range of the vertical center line 1220, the device 100 may display a guide phrase instructing the user to turn the face left or right.
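The center-line test above can be sketched as follows. The tolerance ratio and the left/right wording of the guide phrase are assumptions for illustration (the actual turn direction depends on whether the preview is mirrored):

```python
# Hypothetical direct-view check: the midpoint between the eyes must
# fall within a preset horizontal range of the vertical center line
# that divides the frame image in two.
def facing_camera(left_eye_x, right_eye_x, frame_width, tol_ratio=0.05):
    center = frame_width / 2
    midpoint = (left_eye_x + right_eye_x) / 2
    return abs(midpoint - center) <= frame_width * tol_ratio

def turn_guidance(left_eye_x, right_eye_x, frame_width):
    # Direction wording is illustrative; a mirrored preview would flip it.
    if facing_camera(left_eye_x, right_eye_x, frame_width):
        return None
    midpoint = (left_eye_x + right_eye_x) / 2
    return "turn right" if midpoint < frame_width / 2 else "turn left"
```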
Figure 13 is a diagram for describing a method of obtaining an input image by the device 100 while capturing an image, based on whether the face of the user is wearing makeup, according to an exemplary embodiment.
Referring to Figure 13, the device 100 may detect a face region in an image received from the imaging unit, and determine whether the face of the user is wearing makeup based on color data of the detected face region.
The device 100 may determine whether the face is wearing makeup based on at least one of the color, brightness, and saturation of the color data.
For example, an eyelid region may be detected from the face region. When the color data of the eyelid region differs from the skin color of other regions of the face by at least a reference value, the device 100 may determine that the eyelid region is wearing makeup.
Alternatively, for example, the device 100 may detect a lip region from the face region. When the color data of the lip region has a brightness or saturation outside a base range, the device 100 may determine that the lip region does not show the original lip color of the user.
Alternatively, the device 100 may store a face image of the user without makeup as a base image. For example, the device 100 may store, as the base image, the face image including the most blemishes, pimples, or moles from among recently captured face images of the user.
When the face region is detected, the device 100 may detect the positions of blemishes, pimples, or moles in the face region. Then, when the number of blemishes, pimples, or moles detected from the face region differs from the number in the base image by at least a reference quantity, the device 100 may determine that the user is wearing makeup. Here, the device 100 may compare the face region and the base image region by region. Alternatively, the device 100 may compare only a preset specific region of the face region against the base image.
When it is determined that the user is wearing makeup, the device 100 may display a guide phrase instructing the user to remove the makeup.
On the other hand, when it is determined that the user is not wearing makeup, the device 100 may determine the image received from the imaging unit to be an input image. Alternatively, when it is determined that only a specific region is wearing makeup, the device 100 may determine the image received from the imaging unit to be an input image, obtain a face image from the input image, and obtain health status information from the face region excluding the made-up region.
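Both makeup cues described above (a region's color differing from the surrounding skin, and a drop in visible blemishes relative to the base image) can be sketched as simple threshold tests. The thresholds and color metric are assumptions for illustration:

```python
# Hypothetical makeup checks based on the two cues described in the text.

def color_distance(c1, c2):
    # Sum of absolute per-channel differences between two RGB colors.
    return sum(abs(a - b) for a, b in zip(c1, c2))

def region_made_up(region_mean_rgb, skin_mean_rgb, threshold=60):
    # An eyelid/lip region is judged made up when its mean color differs
    # from the surrounding skin color by at least a reference value.
    return color_distance(region_mean_rgb, skin_mean_rgb) >= threshold

def makeup_by_blemish_count(current_count, base_count, ref_diff=5):
    # Makeup is inferred when visible blemishes/pimples/moles drop by at
    # least a reference quantity relative to the stored base image.
    return (base_count - current_count) >= ref_diff
```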
Figure 14 is a diagram for describing a method of obtaining an input image by the device 100 while capturing an image, according to a preset illumination value, according to an exemplary embodiment.
Referring to Figure 14, the device 100 may determine whether an illumination value is within a base range when capturing the face of the user.
The device 100 may measure the illumination value by using an illumination sensor included in the device 100. The device 100 may determine whether the measured illumination value is within the base range.
When the measured illumination value is higher or lower than the base range, the device 100 may display a guide phrase requesting the user to move to a brighter or darker place.
Figures 15A and 15B are diagrams for describing a method of obtaining a face image by the device 100 in a face authentication mode while capturing an image, according to an exemplary embodiment.
Referring to Figure 15A, when the device 100 is unlocked by using face authentication, an image captured in a mode that captures the face of the user for face authentication may be used as a face image for a health checkup.
For example, when the device 100 receives, in a locked state, a user input for unlocking the device 100 by using face authentication, the device 100 may enter a face authentication mode. In the face authentication mode, the device 100 may receive an image including the face of the user from an image sensor. The device 100 may detect a face region from the image, and determine whether the face region satisfies the face image acquisition conditions. When the face region satisfies the face image acquisition conditions, the device 100 may obtain the data of the face region as a face image.
According to the current exemplary embodiment, the device 100 can easily obtain a face image without additional manipulation by the user. In addition, because an image captured for face authentication is captured under conditions similar to the face image acquisition conditions, face images satisfying the face image acquisition conditions can be easily collected. Furthermore, when an image for face authentication is captured, a face authentication process is performed on the captured image, and because ID information of the face in the captured image can be obtained as a result of the face authentication process, the face image and the ID information can be obtained easily without placing an extra burden on the device 100.
According to the current exemplary embodiment, as shown in Figure 15B, when face authentication succeeds in the face authentication mode, an initial screen displayed after unlocking the device 100 may provide health status information extracted from the face image used for face authentication.
Figure 16 is a diagram for describing a method of obtaining a face image of the user by the device 100 while running a video call, according to an exemplary embodiment.
Referring to Figure 16, the device 100 may obtain a face image of the user while running a video call application.
For example, the device 100 may determine whether a video call function is being performed in the device 100. When the video call function is being performed, the device 100 may receive an image including the face of the user from the image sensor. The device 100 may detect a face region from the image. When the face region is detected, the device 100 may determine whether the face region satisfies the face image acquisition conditions. When the face region satisfies the face image acquisition conditions, the device 100 may obtain the data of the face region as a face image.
The device 100 may continuously receive images including the face of the user during the video call. The device 100 may select images at regular time intervals from among the continuously received images, and determine whether the face regions in the selected images satisfy the face image acquisition conditions.
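The interval-based frame selection during a video call could be sketched as below; the interval value and the condition predicate are placeholders supplied by the caller:

```python
# Hypothetical sketch: from a continuous stream of frames, examine only
# every `interval`-th frame and keep those whose face region satisfies
# the acquisition conditions (predicate supplied by the caller).
def candidate_face_images(frames, interval, meets_conditions):
    return [f for i, f in enumerate(frames)
            if i % interval == 0 and meets_conditions(f)]
```

Sampling at a regular interval keeps the per-frame condition check cheap enough to run alongside the video call itself.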
Figure 17 is a diagram for describing a method of obtaining a face image of the user by the device 100 while running an application, according to an exemplary embodiment.
Referring to Figure 17, the device 100 may obtain a face image of the user at preset time intervals while running an application.
For example, the device 100 may determine whether a game, a moving image, or a web browser is running in the device 100. When a game, a moving image, or a web browser is running in the device 100, the device 100 may capture an image by driving the imaging unit included in the device 100. The device 100 may capture an image by driving the imaging unit when the application starts running, or may capture images by periodically driving the imaging unit.
Then, the device 100 may determine whether a face region is present in the captured image. When the face region is detected, the device 100 may determine whether the face region satisfies the face image acquisition conditions. When the face region satisfies the face image acquisition conditions, the device 100 may obtain the data of the face region as a face image.
Figure 18 is a diagram for describing a method of obtaining a face image of the user by the device 100 when the device 100 is worn on a wrist, according to an exemplary embodiment.
Referring to Figure 18, when the user lifts the wrist wearing the device 100, the device 100 may capture a face image of the user.
The device 100 may determine whether the device 100 is near the face of the user. For example, the device 100 may measure a moving direction or a moving distance of the device 100 by using a motion sensor included in the device 100. The moving direction may include the height of the wrist from the ground, and an upward, downward, leftward, rightward, or rotational direction of the wrist.
The device 100 may determine, based on the moving direction or the moving distance, whether the device 100 is within a reference distance from the face of the user.
When it is determined that the device 100 is within the reference distance from the face, the device 100 may capture an image by driving the imaging unit included in the device 100. Then, the device 100 may determine whether a face region is present in the image. When the face region is detected, the device 100 may determine whether the face region satisfies the face image acquisition conditions. When the face region satisfies the face image acquisition conditions, the device 100 may obtain the data of the face region as a face image.
Alternatively, when it is determined that the device 100 is within the reference distance from the face and the device 100 receives a user input, the device 100 may capture an image by driving the imaging unit.
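The wrist-lift trigger could be sketched as follows. The direction labels, distance unit, and threshold are entirely assumed for illustration; a real implementation would fuse accelerometer/gyroscope readings rather than consume pre-classified values:

```python
# Hypothetical check: the watch-type device is judged to be within the
# reference distance of the face when the measured motion raises the
# wrist in a face-ward direction by at least the reference distance.
def near_face(displacement_cm, direction, ref_distance_cm=30,
              face_directions=("up", "rotate")):
    return direction in face_directions and displacement_cm >= ref_distance_cm
```

Only when this returns true (optionally combined with a user input, as in the alternative above) would the device drive the imaging unit.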
Figure 19 is a diagram for describing a method of obtaining a face image of the user by the device 100 when the device 100 is a glasses type, according to an exemplary embodiment.
Referring to Figure 19, when the user looks at a mirror while wearing the glasses-type device 100, the device 100 may obtain a face image of the user.
The device 100 may capture the environment in the field of view of the user by using the imaging unit included in the device 100. When the user looks at a mirror while wearing the device 100, the device 100 may obtain an image reflected from the mirror.
The device 100 may determine whether a face is present in the image received through the imaging unit. When a face is present, the device 100 may detect a face region from the image, and determine whether the face region is the face of the user. When the face region is the face of the user, the device 100 may determine whether the face region satisfies the face image acquisition conditions. When the face region satisfies the face image acquisition conditions, the device 100 may obtain the data of the face region as a face image. According to another exemplary embodiment, the device 100 may capture one or more input images (e.g., regions of the face) by using one or more imaging units that directly capture the face (i.e., not via reflection from an external mirror).
Figure 20 is a table of capturing environment information obtained by the device 100 while obtaining a face image by using the imaging unit, according to an exemplary embodiment.
Referring to Figure 20, when obtaining a face image of the user through the imaging unit included in the device 100, the device 100 may obtain and store capturing environment information.
The capturing environment information may include at least one of the illumination during capturing, the capturing time, the capturing location, biometric information obtained from a biometric sensor, etc.
For example, the device 100 may obtain illumination information by using an illuminance sensor attached to the device 100. In addition, the device 100 may obtain time information when capturing the face image. Furthermore, the device 100 may obtain capturing location information by using a device or module such as a global positioning system (GPS) device.
In addition, the device 100 may receive information from a biometric sensor attached to the body of the user, so as to obtain biometric information of the user during capturing. For example, the device 100 may obtain information about the amount of movement of the user from a pedometer attached to the body of the user. In addition, the device 100 may receive information about the temperature of the user from a thermometer attached to the body of the user. Furthermore, the device 100 may receive information about the heart rate of the user from an electrocardiogram (ECG) device attached to the body of the user.
In addition, the device 100 may obtain information about the activity of the user at the capturing time point.
For example, activity information of the user may be estimated or determined based on the capturing location and the capturing time. For example, when the capturing location is a school and the capturing time is 10:00 a.m., the activity of the user may be estimated as school life.
Alternatively, for example, the activity information may be estimated by analyzing the image from which the face image was detected. For example, when the image includes food, the activity of the user may be estimated as having a meal.
Alternatively, the activity information of the user may be estimated by considering not only the image from which the face image was detected, but also images captured before and after the time point at which this image was captured.
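The location-and-time activity estimation described above can be sketched as a small rule table. The rules, labels, and hour ranges are illustrative assumptions; the patent only gives the school example:

```python
# Hypothetical activity estimation from capturing location and time.
# The (school, 10:00 a.m.) -> "school life" rule follows the text's
# example; the other rules and the hour ranges are assumed.
def estimate_activity(place, hour):
    if place == "school" and 8 <= hour <= 16:
        return "school life"
    if place == "restaurant":
        return "eating"
    return "unknown"
```

Image content (e.g., food detected in the frame) or neighboring captures could refine the estimate, as the surrounding text describes.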
The capturing environment information may be stored per input image. For example, the capturing environment information may be recorded in the image file in a metadata format. Alternatively, the capturing environment information may be stored in a separate storage space according to the ID information of the input image. When the device 100 obtains health status information of the user from face condition information, the capturing environment information may be used.
Figure 21 is a flowchart of a method of obtaining, by the device 100, a face image from an image received from an external source, according to an exemplary embodiment.
In operation S2110, the device 100 may receive an input image from a network (e.g., an external network) connected to the device 100.
When receiving a user input for selecting a photo, the device 100 may download the selected photo from an external server. The external server may be a social networking service (SNS) server, a cloud storage server, or another device connected to the device 100 via a wired connection, a wireless connection, or a local area network (LAN).
In operation S2120, the device 100 may detect face regions in the input image.
When the input image is received from the external network, the device 100 may detect the face regions in the input image. The device 100 may detect the face regions from the input image, and extract the positions of the face regions.
In operation S2130, the device 100 may extract, from among the face regions, a face region indicating the face of the user, based on pre-registered features of the face of the user.
For example, the device 100 may extract facial features from the face regions. Examples of methods of extracting facial features from a face region include a Gabor filter method and a local binary pattern (LBP) method.
The device 100 may determine the similarity between the facial features extracted from a face region and the pre-registered features of the face of the user, and determine a face region whose similarity is within a preset range to be the face region of the user.
In operation S2140, the device 100 determines whether the face region of the user satisfies the face image acquisition conditions. In operation S2150, when the face region of the user satisfies the face image acquisition conditions, the device 100 may obtain the face region of the user as a face image.
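The similarity test in operation S2130 could be sketched as below. Cosine similarity over feature vectors is a stand-in assumption; the patent names Gabor filters and LBP as feature extractors but does not fix the similarity measure or threshold:

```python
# Hypothetical stand-in for comparing extracted facial features (e.g.,
# Gabor or LBP descriptors) against the user's pre-registered features.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def is_registered_user(features, registered_features, threshold=0.9):
    # The face region belongs to the user when the similarity falls
    # within the preset range (here: at or above an assumed threshold).
    return cosine_similarity(features, registered_features) >= threshold
```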
Figure 22 is a diagram for describing a method of obtaining a face image by the device 100 from an external server, according to an exemplary embodiment.
Referring to Figure 22, the device 100 may exchange data, such as face images, face condition information, and health status information, with an external device or an external server.
The device 100 may obtain health status information by using the received data, or may transmit face images, face condition information, or health status information to the external device or the external server.
According to an exemplary embodiment, the device 100 may download a face image from the external server.
According to an exemplary embodiment, the device 100 may download a face image from a cloud server.
Alternatively, whenever an image including information indicating that the image is a face image is added to a cloud account (e.g., a pre-assigned cloud account), the device 100 may download the image and use it as a face image. Whether a new image has been added to the cloud server may be determined, for example, via a push notification from the cloud server or by periodically communicating with the cloud server.
According to an exemplary embodiment, the device 100 may download a face image from an SNS server. For example, when an image is added to a pre-assigned SNS account, the device 100 may determine whether the image satisfies the face image acquisition conditions, and use the image as a face image when the image satisfies the face image acquisition conditions.
Alternatively, when an image including information indicating that the image is a face image is added to the pre-assigned cloud account, the device 100 may download the image and use it as a face image.
According to another exemplary embodiment, the device 100 may extract a face image from images available through the pre-assigned SNS account. For example, the device 100 may extract a face image from photos registered to another SNS account registered as a friend of the pre-assigned SNS account, photos shared with users of other SNS accounts, and photos in which the user of the pre-assigned SNS account is tagged. The device 100 may determine whether an image available through the pre-assigned SNS account satisfies the face image acquisition conditions, and use an image satisfying the face image acquisition conditions as a face image.
Whether a new image has been registered to the pre-assigned SNS account or another SNS account may be determined, for example, via a push notification from the pre-assigned SNS account or the other SNS account, or by periodically communicating with the SNS server.
According to another exemplary embodiment, the device 100 may download a face image from a service server that provides a function of providing health status information. For example, face images may be stored in the service server according to user accounts, and the device 100 may download a face image from a certain account by accessing that account. According to an exemplary embodiment, the service server may provide the device 100 with a face image together with face condition information and/or health status information related to the face image.
According to an exemplary embodiment, the device 100 may adjust a face image according to a specific algorithm, extract face condition information from the adjusted face image, and obtain health status information from the extracted face condition information.
Figures 23A and 23B are diagrams for describing a method of obtaining a face image of the user by the device 100 from an image 2310 selected by the user, according to an exemplary embodiment.
Referring to Figures 23A and 23B, the device 100 may obtain a face image of the user from an image 2310 selected by the user.
The device 100 may display, on the screen, the image 2310 from among a plurality of images stored in the device 100. In addition, the device 100 may display a menu 2315 for providing health status information of a user from the face of the user in the image 2310.
When receiving a user input for selecting the menu 2315, the device 100 may determine the image 2310 to be an input image. When the image 2310 is determined to be the input image, face regions 2320 to 2340 may be detected from the image 2310. The device 100 may detect the face regions 2320 to 2340 from the image 2310, and extract the positions of the face regions 2320 to 2340.
When the positions of the face regions 2320 to 2340 are extracted, the device 100 may extract facial features from the face regions 2320 to 2340. The device 100 may compare the features extracted from the face regions 2320 to 2340 with pre-registered features of the faces of users, so as to determine the faces of the users from the face regions 2320 to 2340. For example, when the similarity between the features extracted from one of the face regions 2320 to 2340 and the pre-registered features is within a preset range, that face region may be determined to be the face region of the corresponding user.
When the users indicated by the face regions 2320 to 2340 are determined, the device 100 may display ID information of the users according to the face regions 2320 to 2340.
Then, when receiving a user input for selecting one of the face regions 2320 to 2340 (e.g., the face region 2330), the device 100 may determine whether the face region 2330 satisfies the face image acquisition conditions. When the face region 2330 satisfies the face image acquisition conditions, the face region 2330 may be obtained as a face image.
In addition, the device 100 may normalize the face image based on a preset standard. The device 100 may extract face condition information from the normalized face image. The device 100 may obtain health status information by using the face condition information, and provide the health status information.
Figure 24 is a diagram for describing a process of obtaining, by the device 100, a face image from images stored in the device 100, according to an exemplary embodiment.
Referring to Figure 24, the device 100 may obtain a face image from the images stored in the device 100. The device 100 may determine whether each stored image satisfies the face image acquisition conditions, and extract a stored image satisfying the face image acquisition conditions as a face image.
According to an exemplary embodiment, whenever a new image is captured and stored in the device 100, or whenever a new image is input from an external device and stored in the device 100, the device 100 may determine whether the new image satisfies the face image acquisition conditions.
According to another exemplary embodiment, the device 100 may periodically determine whether stored images that have not yet been evaluated satisfy the face image acquisition conditions. For example, the device 100 may determine, once a week, whether the new images stored during that week satisfy the face image acquisition conditions.
According to another exemplary embodiment, whenever the user inputs a request signal for requesting health status information, the device 100 may determine whether the images that have not yet been evaluated satisfy the face image acquisition conditions. For example, when the user inputs the request signal while 100 images are stored in the electronic apparatus and it has already been determined for 90 of them whether they satisfy the face image acquisition conditions, the device 100 may extract face images from the remaining 10 images, extract face condition information from the face images, and provide health status information.
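The "process only what has not been evaluated yet" step (e.g., 10 remaining out of 100) could be sketched as a simple set difference over image identifiers:

```python
# Hypothetical sketch: on a health-status request, only images not yet
# evaluated against the acquisition conditions need to be processed.
def pending_images(all_ids, checked_ids):
    checked = set(checked_ids)           # fast membership lookup
    return [i for i in all_ids if i not in checked]
```

Keeping the set of already-checked IDs persistent between requests is what makes the incremental evaluation described above possible.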
Figure 25 is a diagram for describing a method of extracting, by the device 100, a face image from images selected by the user, according to an exemplary embodiment.
Referring to Figure 25, the user may directly select the images to be used as face images from among the images stored in the device 100. For example, while a function or application such as an album or a gallery is running, the user may select the images to be used as face images. Here, as shown in Figure 25, while thumbnail images 120 of the images stored in the device 100 are displayed, the user may select the images to be used as face images.
According to an exemplary embodiment, the device 100 may use all of the images selected by the user as face images.
According to another exemplary embodiment, the device 100 may determine whether the images selected by the user satisfy the face image acquisition conditions, and extract only the images satisfying the face image acquisition conditions as face images.
According to an exemplary embodiment, as shown in Figure 25, when a plurality of images are selected and the user inputs a control signal for requesting health status information, the device 100 may extract face condition information by using the selected images, and extract and provide health status information. The control signal may be generated when the user presses the 'Health Care' button shown in Figure 25.
According to another exemplary embodiment, when the user inputs the control signal while a function such as an album or a gallery is running, the device 100 may automatically determine whether the images included in the album or gallery satisfy the face image acquisition conditions, extract face images, obtain face condition information from the face images, and provide health status information. In this case, the user can obtain health status information via a single selection signal, without having to select the images to be used as face images.
Figure 26 is a diagram for describing a method of extracting, by the device 100, a face image from a moving image, according to an exemplary embodiment.
Referring to Figure 26, the device 100 may extract a face image from a moving image. In this case, the device 100 may determine whether each moving image frame satisfies the face image acquisition conditions, and extract a moving image frame satisfying the face image acquisition conditions to be used as a face image. According to an exemplary embodiment, when consecutive moving image frames satisfy the face image acquisition conditions, the device 100 may extract one of the consecutive moving image frames (e.g., the first or a middle one of the consecutive moving image frames) as a face image.
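The consecutive-run selection just described can be sketched as follows, taking the first frame of each run of qualifying frames (a middle frame would be an equally valid choice under the text):

```python
# Hypothetical sketch: flags[i] is True when moving-image frame i meets
# the acquisition conditions. Take the first frame index of each
# consecutive qualifying run as the representative face image.
def representative_frames(flags):
    reps, in_run = [], False
    for i, ok in enumerate(flags):
        if ok and not in_run:   # a new qualifying run starts here
            reps.append(i)
        in_run = ok
    return reps
```

This avoids storing many near-duplicate face images when a face stays steady across adjacent frames.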
According to an exemplary embodiment, the user may select a health checkup function while a moving image is being reproduced, so that health status information is extracted from the moving image. When receiving an input for selecting the health checkup function from the user while the moving image is being reproduced, the device 100 may determine whether the moving image frames of the moving image satisfy the face image acquisition conditions, extract face images, extract face condition information, and then extract and provide health status information.
According to another exemplary embodiment, the device 100 may obtain face condition information and health status information by using the moving image. For example, the device 100 may extract face condition information, such as the movement of the facial muscles, the degree of eye blinking, and the flexibility of the neck, by using the moving image. Then, the device 100 may determine the condition of the user by using the movement of the facial muscles, the degree of eye blinking, and the flexibility of the neck.
Figure 27 be according to exemplary embodiment for describing by the figure of equipment 100 to the process of face-image distribution marker.
According to current exemplary embodiment, when input face image, equipment 100 can compare about registration the face of identifier and the information of face-image, to determine the id information of face-image.Such as, the representative face image of each identifier or the facial characteristics dot information of each identifier can be stored in device 100.In this case, equipment 100 can compare representative face image or facial characteristics dot information and face-image, and determines which identifier face-image belongs to.
According to current exemplary embodiment, the classified information of face-image can be stored together with face-image with identifier.Classified information is the information that instruction face-image is used to determine health status information.In addition, according to exemplary embodiment, additional information can be stored together with face-image and identifier.
Figure 28 be according to exemplary embodiment for describe by equipment 100 health status information be obtain from image information be recorded in the figure of the method in the file of image.
According to current exemplary embodiment, at least one in classified information, identifier and additional information or the pixel value combined as visible form can be recorded in face-image by equipment 100.Such as, as shown in Figure 28, the button 2810 of " health care " can be inserted on face-image as pixel value.Recorded information as the pixel value of visible form can mean, information is recorded in letter, icon, bar code or responds fast in (QR) code.According to current exemplary embodiment, when information is recorded on image as the pixel value of visible form, user can know that image is the face-image for health examination intuitively.In addition, by inserting icon, bar code or QR code to image, by using icon, bar code or QR code, user can ask more detailed facial condition information or health status information, or equipment 100 can enter health examination pattern.
According to exemplary embodiment, at least one in classified information, identifier and additional information or the pixel value combined as invisible form can be recorded on face-image by equipment 100.Such as, face-image is indicated to be used to the code of health examination function or mark can be recorded on face-image as the pixel value of invisible form.Pixel value recorded information as invisible form can mean, information is recorded in letter, icon, bar code or QR code.Such information can read for the algorithm reading invisible form by using.
According to the present example embodiment that at least one in classified information, identifier and additional information or combination are recorded as the pixel value of visible form or invisible form, customizing messages can be stored in face-image, and need not use extra storage space.And according to current exemplary embodiment, classified information, identifier and additional information can be stored and not be subject to the restriction of file layout.According to current exemplary embodiment, classified information, identifier or additional information can not be registered as pixel value, but can be recorded as pixel value about the store path of classified information, identifier or additional information or the information of link.
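One common way to record information as pixel values in an invisible form is least-significant-bit (LSB) embedding; the patent text does not name a specific scheme, so the following is only an illustrative sketch of the general idea. Each hidden bit changes a pixel value by at most 1, which is imperceptible but recoverable by software:

```python
def embed_bits(pixels, bits):
    """Hide a bit string in the least significant bit of each pixel value.
    The change (at most 1 per pixel) is invisible to the eye."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)
    return out

def read_bits(pixels, n):
    """Recover the first n hidden bits."""
    return ''.join(str(p & 1) for p in pixels[:n])

mark = '1011'  # e.g. a hypothetical code meaning "used for health examination"
stamped = embed_bits([200, 201, 202, 203, 204], mark)
print(read_bits(stamped, 4))  # → 1011
```

A real scheme would add framing and error detection, but the recoverability-without-visibility property is the same.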
According to an exemplary embodiment, the device 100 may store at least one of, or a combination of, the classification information, the identifier, and the additional information in the header of the file of the face image. For example, when the image file is stored in the exchangeable image file (Exif) format, at least one of, or a combination of, the classification information, the identifier, and the additional information may be stored in the header of the Exif file.

Figure 29 is a table for describing a method of storing face images, performed by the device 100, according to an exemplary embodiment.

According to an exemplary embodiment, the device 100 may store and manage, as separate data, at least one of, or a combination of, the classification information, the identifier, and the additional information of the file of each face image used for the health examination function. For example, the device 100 may store at least one of, or a combination of, the classification information, the identifier, and the additional information of the file of each face image as a separate management file.

According to an exemplary embodiment, such a separate management file may be stored in a storage space where the face images are stored, such as a photo album or gallery. According to another exemplary embodiment, the separate management file may be stored in a storage space of an application for providing the health examination function. As shown in Figure 29, the separate management file may store the storage path and identifier of each file.
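A separate management file holding the storage path and identifier of each face image, as in Figure 29, could be as simple as a two-column table. The sketch below uses CSV purely for illustration; the paths, identifiers, and file format are assumptions, not taken from the patent:

```python
import csv
import io

# One row per face image used for the health examination function.
records = [
    {"path": "/gallery/img_0001.jpg", "identifier": "user_kim"},
    {"path": "/gallery/img_0007.jpg", "identifier": "user_kim"},
]

# Write the management file (in-memory here; a real device would use storage).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["path", "identifier"])
writer.writeheader()
writer.writerows(records)

# Reading it back recovers the storage path and identifier of each file.
loaded = list(csv.DictReader(io.StringIO(buf.getvalue())))
print(loaded[0]["identifier"])  # → user_kim
```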
Figure 30 is a flowchart of a method of normalizing a face image, performed by the device 100, according to an exemplary embodiment.

In operation S3010, the device 100 may normalize the face image.

Normalization of a face image may mean that attribute values of the face image are adjusted to preset standards. The attribute values may be the size of the face image, the brightness of the face image, and tone values that vary according to lighting effects.

Because the shooting environment or the image capturing method may differ from input image to input image, the attribute values of face images obtained from input images may differ as well. Accordingly, the device 100 may normalize the face images so as to obtain health status information of the same user under the same or similar conditions.

In addition, because input images are captured under different shooting conditions, the accuracy of the health status information may decrease when facial condition information is erroneously extracted due to a shooting condition. In order to exclude, as far as possible, parameters attributable to the shooting conditions from the face image, the device 100 may adjust the face image so as to increase the accuracy of the health status information.

According to an exemplary embodiment, the device 100 may adjust the face image according to the shooting conditions. For example, the brightness of the face image may be adjusted by using information about the illumination at the time of shooting. The information about the illumination may be detected by an illumination sensor included in the electronic apparatus. Alternatively, the white balance of the face image may be adjusted by using information about the white balance at the time of shooting.

According to an exemplary embodiment, the device 100 may adjust the color of the face image based on the color of a certain organ or region of the face image. For example, because the color of the neck does not change much according to health status, the color of the face image may be adjusted based on the color of the neck. In this case, color information of the neck of the user corresponding to the face image may be pre-stored, and when a face image of the user is obtained, the color of the face image may be adjusted based on the pre-stored color information of the neck. The overall color of the face image may be adjusted such that the color of the neck in the face image matches the pre-stored color information of the neck.

According to an exemplary embodiment, when a red-eye effect is generated in the face image, the device 100 may perform red-eye compensation.

According to an exemplary embodiment, when the face image is shaken at the time of capturing, or when the face image is captured by an imaging unit that is being shaken, the device 100 may perform shake compensation.

According to an exemplary embodiment, when a face image is captured in a health examination mode, the device 100 may ask the user to capture a white object, so as to obtain information about the white balance at the time of shooting. For example, the white object may be a blank sheet of paper.

According to an exemplary embodiment, some or all of the method of adjusting the face image according to the shooting conditions, the method of adjusting the color of the face image based on the color of a certain organ or region, the method of performing red-eye compensation, and the method of performing shake compensation may be performed.

After adjusting the face image, the device 100 may extract facial condition information from the adjusted face image, obtain health status information, and provide the health status information.

Figure 31 is a diagram for describing a method of normalizing the size of a face image, performed by the device 100, according to an exemplary embodiment.

Referring to Figure 31, the device 100 may obtain a face image from an input image, and normalize the size of the face image based on a basic size.

The basic size may be stored in the device 100. For example, the basic size may be set in the device 100 to a resolution of 200x300.

When the size of the face image is smaller than the basic size, the device 100 may enlarge the face image to the basic size by applying an up-sampling or interpolation method to the pixel data of the face image. When the size of the face image is larger than the basic size, the device 100 may reduce the face image to the basic size by applying a sub-sampling or down-sampling method to the pixel data of the face image.

By normalizing the size of the face image to the basic size, the size of the facial region in the face image is also normalized. In addition, because the face image is extracted from an input image in which a facial region is detected, the size of the face image may be regarded as the size of the facial region. By adjusting the size of the face image to the basic size, the facial condition information shown on the face of the user may be extracted from face images under the same face-size condition.
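The up-/down-sampling to the basic size can be illustrated with the simplest resampling method, nearest-neighbour, on a 2D pixel grid. This is only one possible interpolation choice; the patent leaves the method open:

```python
def resize_nearest(img, out_w, out_h):
    """Resize a 2D pixel grid to out_w x out_h with nearest-neighbour
    sampling: up-sampling when the face image is smaller than the basic
    size, sub-sampling when it is larger."""
    in_h, in_w = len(img), len(img[0])
    return [[img[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]

small = [[1, 2],
         [3, 4]]
big = resize_nearest(small, 4, 4)  # enlarge 2x2 → 4x4
print(big[0])    # → [1, 1, 2, 2]
print(len(big))  # → 4
```

In practice the device would resize to the stored basic size (e.g. 200x300) with a smoother interpolation such as bilinear, but the normalization principle is the same.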
Figure 32A to Figure 32C are diagrams for describing a method of normalizing the tone values of a facial region based on a basic color temperature, performed by the device 100, according to an exemplary embodiment.

When the type of illumination on the face of the user differs, the color of the face in the face image of the user may change. A color temperature is a numerical value representing the color of the illumination on an object by using the temperature of a black body that emits light of the same color as the illumination on the object.

For example, as shown in Figure 32B, a fluorescent lamp with a color temperature of 4000K shows a white object as reddish, daytime sunlight with a color temperature of 5200K shows the white object as white, and a cloudy sky with a color temperature of 7000K shows the white object as bluish.

Accordingly, the same object may appear red under a fluorescent lamp but blue under sunlight. Therefore, the color of the face of the user in the face image may change depending on the illumination on the face at the time of shooting.

In operation S3210, the device 100 may obtain the color temperature of the illumination on the face of the user at the time the face of the user is captured.

For example, the color temperature of the illumination obtained from a color temperature detecting sensor may be recorded in the input image as metadata, such as Exif information. The device 100 may obtain the color temperature of the illumination recorded in the input image from the metadata of the input image.

Alternatively, the device 100 may extract, from the input image, the color temperature of the illumination at the point of time the input image is captured. For example, the device 100 may determine the brightest region of the input image as a region capturing an originally white object. By detecting the brightest region of the input image, the device 100 may detect the average tone value of the brightest region. Then, the device 100 may obtain, based on the tone value, the color temperature of the illumination corresponding to the tone value. The device 100 may determine the color temperature of the illumination corresponding to the tone value as the color temperature of the illumination at the point of time the input image is captured.

In operation S3220, the device 100 may adjust the tone values of the face image based on the color temperature of the illumination.

Upon obtaining the color temperature of the illumination, the device 100 may obtain, based on pre-stored information about color temperatures, the tone value shown when a white object is lit by illumination having the obtained color temperature. Because a tone value is determined by red, green, and blue gains, the device 100 may determine the red, green, and blue gains corresponding to the tone value shown when the white object is lit by the illumination having the obtained color temperature.

The device 100 may exclude the influence of the color temperature of the illumination from the face image based on the determined red, green, and blue gains. For example, the device 100 may adjust the determined gains such that the white object appears originally white. The device 100 may then exclude the influence of the color temperature of the illumination by adjusting the red, green, and blue gains of the pixel values of the face image with the adjusted gains.

In this manner, by excluding the influence of the color temperature of the illumination from the face image, tone values of the face of the user as if captured at the basic color temperature may be obtained. According to an exemplary embodiment, such a method may be referred to as a white balance adjustment method.
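The white balance adjustment can be sketched by scaling the red, green, and blue channels so that a region known to be white becomes neutral. This is a minimal gray-world-style illustration under the assumption that the white sample's average (r, g, b) is already known; a production pipeline would derive the gains from the color-temperature tables described above:

```python
def white_balance(pixels, white_sample):
    """Scale R, G, B channels so that a region known to be white
    (white_sample = its average (r, g, b)) becomes neutral, removing
    the tint cast by the illumination's color temperature."""
    r, g, b = white_sample
    gains = (255.0 / r, 255.0 / g, 255.0 / b)
    return [tuple(min(255, round(c * k)) for c, k in zip(px, gains))
            for px in pixels]

# Reddish illumination: a white patch reads (255, 220, 190).
corrected = white_balance([(255, 220, 190), (128, 110, 95)],
                          white_sample=(255, 220, 190))
print(corrected[0])  # → (255, 255, 255)
```

After correction the white reference maps back to neutral white, and every other pixel in the face image is shifted by the same channel gains.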
Referring to Figure 32C, the device 100 may normalize the tone values of the face image based on the basic color temperature.

For example, when the color temperature of the input image 3212 is 8400K and the basic color temperature is 6300K, the device 100 may normalize the color temperature of the input image 3212 to 6300K by changing the red, green, and blue gains of the pixel values of the input image 3212. By normalizing the color temperature of the input image 3212, the device 100 may determine the color of the facial region in the normalized input image 3214 as the facial color of the user.

Figure 32D is a flowchart of a method of normalizing the color of a face in an input image based on the color of a base region, performed by the device 100, according to an exemplary embodiment.

In operation S3205, the device 100 may detect the color of a base region from the facial region.

The base region may be an eyebrow, eyelashes, hair, the neck, or the white region of an eye, but is not limited thereto in one or more other exemplary embodiments. The device 100 may detect the area of the base region from the facial region in the input image, and detect the color of that area.

In operation S3215, the device 100 may adjust the tone values of the face image such that the detected color is changed to a base color.

Base colors may be pre-stored according to base regions. For example, the base color of an eyebrow, eyelashes, and hair may be black, and the base color of the white region of an eye may be white.

The device 100 may adjust the tone values of the facial region such that the color detected from the base region is changed to the base color. The device 100 may determine the color of the facial region having the adjusted tone values as the facial color of the user.

Figure 32E is a diagram for describing a method of normalizing the color of a face in an input image based on the color of a base region, performed by the device 100, according to an exemplary embodiment.

Referring to Figure 32E, when the base region is the white region of an eye and the base color is white, the device 100 may determine the area of the white region of the eye in the input image 3212, and determine the color of the determined area. When the color of the determined area is blue, the device 100 may adjust the tone values of the input image 3212 such that the color of the determined area becomes white.

By adjusting the tone values of the input image 3212, the facial color of the input image 3212 may be changed from dark blue to skin color.
Figure 33 is a flowchart of a method of extracting, from a normalized face image of a user, facial condition information indicating the condition of the face of the user, performed by the device 100, according to an exemplary embodiment.

In operation S3310, the device 100 may extract, from the normalized face image of the user, facial condition information indicating the condition of the face of the user.

The facial condition information may be base data about the condition of the face used to determine health information.

From the normalized face image, the device 100 may determine regions from which facial condition information is to be extracted. For example, the device 100 may determine the positions of the region below the eyes, the nose region, and the region around the mouth in the face image.

After determining the regions from which facial condition information is to be extracted, the device 100 may extract facial condition information from those regions. For example, the device 100 may determine whether the region below the eyes is swollen. Alternatively, the device 100 may determine whether the color of the region below the eyes is darker than before.

A method of determining the positions of the regions from which facial condition information is to be extracted, the symptoms to be checked, and methods of checking the symptoms may be stored in the device 100. Accordingly, the device 100 may determine whether a region from which facial condition information is to be extracted shows a symptom.

A specific region of the face may show a symptom that appears on the face when the body has a disorder. For example, when hyperthyroidism or an allergic disease or reaction occurs, the fat below the eyes may increase. Also, for example, when allergic rhinitis occurs, dark circles may be generated below the eyes.

The device 100 may store symptom information according to regions of the face. For example, the device 100 may store information about whether the region below the eyes is swollen or darker, as symptom information according to the region below the eyes.

Accordingly, the device 100 may determine the region below the eyes, from the entire region of the face, as a region from which facial condition information is to be extracted, and extract, from the region below the eyes, information about whether the region below the eyes is swollen or darker, as facial condition information.
Figure 34 is a flowchart of a method of extracting facial condition information shown on a face from a normalized face image, performed by the device 100, according to an exemplary embodiment.

In operation S3410, the device 100 may determine the positions of facial components in the facial region from the face image.

The facial components may include at least one of the eyes, the nose, and the mouth. The device 100 may determine the positions of the facial components from the face image in any of various ways.

For example, by using the feature that the eyes, eyebrows, and mouth are darker than other facial components, the device 100 may binarize the face image and determine the positions of the darker regions as the positions of the eyes, eyebrows, and mouth.

Alternatively, the device 100 may extract a skin color region from the face image, and determine the positions of the eyes, eyebrows, and mouth from the extracted skin color region.

Alternatively, because the positions of the eyes, eyebrows, nose, and mouth follow a certain pattern on a face, the device 100 may determine the positions of the facial components by using an active appearance model (AAM) method, in which the positions of facial components are determined based on a face pattern.
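The binarization approach mentioned first can be sketched directly: threshold a grayscale face image and keep the coordinates of dark pixels, where the eyes, eyebrows, and mouth cluster. The threshold value and the tiny synthetic patch below are illustrative assumptions:

```python
def dark_component_pixels(gray, threshold=80):
    """Binarize a grayscale image and return coordinates of pixels darker
    than the threshold; eyes, eyebrows and the mouth fall in these dark
    regions, which a later step would group into components."""
    return [(y, x)
            for y, row in enumerate(gray)
            for x, v in enumerate(row)
            if v < threshold]

# Tiny synthetic face patch: two dark "eye" pixels on bright skin.
patch = [[200, 200, 200, 200],
         [200,  40, 200,  45],
         [200, 200, 200, 200]]
print(dark_component_pixels(patch))  # → [(1, 1), (1, 3)]
```

A real pipeline would then run connected-component labeling on the dark pixels and classify each component as eye, eyebrow, or mouth by its position and shape.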
In operation S3420, the device 100 may determine, based on the positions of the facial components, the positions of diagnosis regions from which facial condition information is to be extracted.

The diagnosis regions may be pre-set in the device 100. For example, the forehead, the region below the eyes, the nose, the region around the mouth, or the chin region may be set as diagnosis regions in the device 100.

In addition, methods of determining the positions of the diagnosis regions may be pre-stored in the device 100. For example, the nose region may be determined as a triangular region connecting the two side points of the nose and the starting point of the nose. A method of determining the positions of the diagnosis regions is described in detail below with reference to Figure 35.

In operation S3430, the device 100 may extract facial condition information from the diagnosis regions.

The facial condition information may be extracted by analyzing the face image. For example, the device 100 may extract the facial condition information by using color information of the facial region in the face image, a face recognition algorithm, or a facial expression recognition algorithm. The types of facial condition information extractable according to the diagnosis regions and the methods of extracting the facial condition information may be pre-stored in the device 100.

The device 100 may extract information about the color of the face, the number of blemishes or acne, eye color, or lip color by using the color information of the facial region. In addition, for example, the device 100 may extract information about inflamed eyes, pupil size, movement of the pupils, facial size, the shape of the facial contour, cracked lips, the positions of the facial organs, and the movement of facial muscles by using a face recognition algorithm or a facial expression recognition algorithm. Any other algorithm or method may be used to extract the facial condition information.

A skin condition may be determined by using the color of the face, the number of blemishes or acne, and the shape of the facial contour. For example, when the color of the face is dark, the number of blemishes or acne is high, and the facial contour sags, it may be determined that the skin condition is bad.

An eye condition may be determined by using at least one of inflamed eyes, eye color, pupil size, and movement of the pupils. For example, when the eyes are inflamed, the white regions of the eyes are dark, and the reaction speed of the pupils is slow, the device 100 may determine that the eye condition is bad. In addition, based on the color of the white regions of the eyes, the device 100 may determine that the liver is bad or the eyes are infected.

A body weight change may be determined based on at least one of the facial size and the shape of the facial contour. For example, if the facial size has increased and the cheek portion of the facial contour has moved outward compared with a previously captured face image, it may be determined that the body weight has increased.

A lip condition may be determined by using at least one of the lip color and whether the lips are cracked. For example, when the lips are red with high saturation and are not cracked, it is determined that the lips are in good condition, and when the lips have low saturation or are blue and cracked, it is determined that the lip condition is poor. In addition, the device 100 may determine a disorder of the body based on whether the lips are dark red, blue, or skin-colored, and whether the saturation of the lip color is low.

A hair condition may be determined by using at least one of the hair color, the gloss of the hair, and whether the hair is damaged. For example, when the hair color is dark and the hair is glossy and undamaged, the device 100 may determine that the hair condition is good; otherwise, the device 100 may determine that the hair condition is poor. The gloss of the hair may be determined by detecting a bright region shown in a curved area of the hair region.

A body condition may be determined by collecting or considering at least one of the skin condition, the eye condition, the lip condition, the hair condition, and the movement of the facial muscles. For example, when the eye condition, the lip condition, the hair condition, and the movement of the facial muscles are good, the device 100 may determine that the body condition is good.
Figure 35A and Figure 35B are diagrams for describing a method of determining the positions of diagnosis regions, performed by the device 100, according to an exemplary embodiment.

Referring to Figure 35A, the device 100 may determine the positions of the facial components in the face image.
For example, the device 100 may determine the end points 3420 to 3426 of the eyes, the end points 3410 to 3416 of the eyebrows, the side points 3430 and 3432 of the nose, the end points 3440 and 3442 of the lips, the end point 3434 of the nose, and the starting point 3436 of the nose.
Referring to Figure 35B, the device 100 may determine the positions of the diagnosis regions based on the positions of the facial components.

For example, the forehead, the region below the eyes, the nose, the region around the mouth, and the chin region may be set as diagnosis regions in the device 100.

In addition, a method of determining the position of each diagnosis region may be set in the device 100. For example, the nose region may be determined as a triangular region 3450 connecting the side points 3430 and 3432 of the nose and the starting point 3436 of the nose. Also, the upper boundary of the region 3460 around the mouth may be determined as the Y-axis coordinate of the central point 3446 between the end point 3434 of the nose and the central point 3444 of the lips.
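Building the triangular nose region from three landmark points reduces to a point-in-triangle test over the pixel grid. The sketch below uses the standard cross-product sign test; the landmark coordinates are hypothetical stand-ins for the points 3430, 3432, and 3436:

```python
def in_triangle(p, a, b, c):
    """Test whether point p lies inside triangle (a, b, c); usable to build
    the triangular nose diagnosis region from the two side points of the
    nose and the nose's starting point."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)  # all same sign (or on an edge) = inside

# Hypothetical landmarks: nose side points and nose starting point.
side_l, side_r, start = (40, 100), (80, 100), (60, 60)
print(in_triangle((60, 90), side_l, side_r, start))   # → True
print(in_triangle((60, 120), side_l, side_r, start))  # → False
```

Iterating this test over the image yields the pixel mask of the nose diagnosis region; the other regions (forehead, below-eye, around-mouth) would be built analogously from their own landmarks.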
Figure 36A and Figure 36B are diagrams for describing a method of extracting facial condition information from a diagnosis region, performed by the device 100, according to an exemplary embodiment.

Referring to Figure 36A and Figure 36B, the device 100 may extract, from a diagnosis region, facial condition information corresponding to the diagnosis region.

At least one of the lip color and whether acne is generated in the region around the mouth may be set in the device 100 as symptoms corresponding to the mouth region.

Referring to Figure 36A, the device 100 may determine the position of the lip region, and extract the tone value or saturation value of the lips from the lip region as facial condition information. For example, the device 100 may extract the tone value of the lips by averaging the tone values of the pixels forming the lip region.

Alternatively, the device 100 may determine the position of the region around the mouth, and extract a skin trouble level from that region as facial condition information.

Referring to Figure 36B, the device 100 may use any of various image processing methods to detect acne in the skin.

For example, the device 100 may emphasize the acne relative to the skin by increasing the contrast of the region. Here, because acne or a skin trouble is darker than normal skin, the device 100 may binarize the region and determine the position of the acne or skin trouble in the binarized region. Alternatively, because acne or a skin trouble is red, the positions of pixels showing red may be determined as the positions of the acne or skin trouble.

Based on a change in the area in which acne or a skin trouble is generated, the device 100 may determine whether the amount of acne or skin trouble has increased or decreased.
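The red-pixel approach to acne detection, and the comparison of affected area across images, can be sketched as follows. The dominance margin and the synthetic pixel values are illustrative assumptions, not thresholds from the patent:

```python
def acne_pixel_count(region, margin=60):
    """Count pixels whose red channel dominates green and blue by at least
    `margin` -- a crude proxy for the reddish acne / skin-trouble spots.
    Comparing counts across images indicates whether the amount of
    trouble has increased or decreased."""
    return sum(1 for (r, g, b) in region
               if r - g >= margin and r - b >= margin)

skin = [(220, 180, 170)] * 6            # normal skin tones
spots = [(200, 90, 90), (190, 80, 85)]  # reddish trouble spots
before = acne_pixel_count(skin + spots)
after = acne_pixel_count(skin)
print(before, after)  # → 2 0
print("decreased" if after < before else "increased or same")  # → decreased
```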
Figure 37 is a flowchart of a method of obtaining health status information related to the health of a user based on facial condition information, performed by the device 100, according to an exemplary embodiment.

In operation S3710, the device 100 may obtain health status information indicating the health status of the user by using the facial condition information.

The device 100 may obtain the health status information by using the facial condition information extracted from the diagnosis regions in the face image.

The health status information may include information about a disease of the user predicted from the facial condition information, an organ with a deteriorated function, or the condition of the user. The health status information may be stored in the device 100.

The device 100 may obtain the health status information from the facial condition information by considering shooting environment information obtained while the face of the user is captured. The shooting environment information may include the shooting time, the shooting location, the activity of the user at the time of shooting, and biometric information of the user obtained at the time of shooting.

In addition, the device 100 may obtain the health status information from the facial condition information by considering at least one of a physical condition of the user, such as height, weight, age, or gender, and a medical history, such as a current disease or a past disease.

Figure 38A and Figure 38B are tables for describing a method of extracting health status information of a user based on facial condition information extracted from a face image, performed by the device 100, according to an exemplary embodiment.

Referring to Figure 38A, the device 100 may store information about predicted diseases according to facial colors.

Accordingly, the device 100 may determine that the liver has a problem when the face is dark blue, the kidneys have a problem when the face is black, the lungs have a problem when the face is white, and the heart has a problem when the face is red.

Referring to Figure 38B, the device 100 may store information about predicted diseases or causes of symptoms according to the symptoms extracted from the diagnosis regions.

Accordingly, the device 100 may determine a predicted disease or the cause of a symptom based on the symptom extracted from each diagnosis region. For example, when the facial condition information extracted from the eyes indicates inflamed eyes, the device 100 may determine that the liver and the heart have a problem.
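Both tables reduce to lookups keyed by facial color and by (region, symptom) pairs. The sketch below is a hypothetical encoding of the examples given for Figure 38A and Figure 38B; the dictionary values mirror only the mappings stated in the text:

```python
# Hypothetical encodings of the Figure 38A / 38B mappings as lookup tables.
DISEASE_BY_FACE_COLOR = {
    "dark blue": "liver problem",
    "black": "kidney problem",
    "white": "lung problem",
    "red": "heart problem",
}

CAUSE_BY_SYMPTOM = {
    ("eyes", "inflamed"): ["liver problem", "heart problem"],
}

def predict(face_color, symptoms):
    """Collect predicted problems from the facial color and from
    (region, symptom) pairs extracted from the diagnosis regions."""
    found = []
    if face_color in DISEASE_BY_FACE_COLOR:
        found.append(DISEASE_BY_FACE_COLOR[face_color])
    for pair in symptoms:
        found.extend(CAUSE_BY_SYMPTOM.get(pair, []))
    return list(dict.fromkeys(found))  # de-duplicate, keep order

print(predict("dark blue", [("eyes", "inflamed")]))
# → ['liver problem', 'heart problem']
```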
Figure 39 be according to exemplary embodiment by equipment 100 by considering that the shooting environmental information that obtains while capturing facial to secure good health from facial condition information the process flow diagram of method of status information.
In operation S3910, equipment 10 can obtain the shooting environmental information obtained while capturing facial.
Shooting environmental information can be stored in the file of input picture with the form of the metadata of input picture.Alternatively, equipment 100 can store shooting environmental information according to the id information of input picture while generating input picture.
Shooting environmental information can comprise the user during shooting time, spot for photography, shooting activity and shooting during obtain user biological information at least one.
In operation S3920, equipment 100 can consider that facial condition information obtains the health status information relevant to the health of user with while shooting environmental information together.
Such as, when the shooting time of face is 3:00am, equipment 100 can be determined, even if more much lower than the standard preset from the facial situation of face-image extraction, so facial situation is also because interim excessive exercise causes.In this case, the degree of extracted facial situation can be defined as lighter (lighter) by equipment 100, to secure good health status information from facial condition information simultaneously.
Alternatively, such as, when shooting time is dawn, equipment 100 can determine that face is not made up, and by the status information that secures good health that assigns weight compared with other times district.
Alternatively, such as, when when capturing facial, the length of one's sleep of user is when 2 hours, equipment 100 can be determined, even if the facial situation extracted from face-image is more much lower than the standard preset, so facial situation is also because interim excessive exercise causes.In this case, the degree of extracted facial situation can be defined as lighter by equipment 100, to secure good health status information from facial condition information simultaneously.
Alternatively, such as, when spot for photography be bar and also be flush (flush) from the symptom that face-image extracts time, equipment 100 can determine that flush causes owing to drinking temporarily, and by getting rid of the symptom of blushing and the status information that secures good health.
Alternatively, equipment 100 can determine the biological condition of the user at the time point catching input picture based on the biological information at the time point catching input picture, and gets rid of the information shown at the face of user because biological condition causes from facial condition information.
Such as, when the pulse frequency of user is higher than time normal, equipment 100 can determine that face catches after exercise immediately, and gets rid of the facial color of user from facial condition information.
Alternatively, for example, when the activity level or heart rate of the user was high when the face was captured, or when the symptom extracted from the face image is a flushed face, the device 100 may determine that the flushed face is caused by temporary exercise, and may obtain the health status information by excluding the flushed-face symptom from the facial condition information.
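The exclusion rules described above amount to filtering the facial condition information against the shooting context. A minimal sketch of such a rule-based filter follows; the field names, threshold values, and rule set are illustrative assumptions, not part of the disclosure.

```python
def filter_condition_info(condition_info, context):
    """Drop facial-condition entries likely caused by temporary context.

    condition_info: dict of symptom -> value, e.g. {"flushed_face": True}.
    context: capture-time metadata (location, heart rate, activity, ...).
    All names and thresholds here are illustrative assumptions.
    """
    filtered = dict(condition_info)
    # A flushed face at a bar is attributed to temporary drinking.
    if context.get("location") == "bar":
        filtered.pop("flushed_face", None)
    # A high heart rate or activity level suggests the photo was taken
    # right after exercise, so flushing/skin color is not diagnostic.
    if context.get("heart_rate", 0) > 100 or context.get("activity") == "high":
        filtered.pop("flushed_face", None)
        filtered.pop("face_color", None)
    return filtered

print(filter_condition_info(
    {"flushed_face": True, "dark_circles": True},
    {"location": "bar", "heart_rate": 72}))
# dark_circles survives; flushed_face is excluded
```

In this sketch, entries not explained by the context pass through unchanged, so only symptoms attributable to a temporary cause are withheld from the health status information.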
Figure 40 is a diagram for describing a method of displaying, together with the health status information of the user, the shooting environment information obtained when the image was captured, according to an exemplary embodiment.
Referring to Figure 40, the device 100 displays a face image on a screen, and displays the health status information of the user extracted from the displayed face image. In addition, the device 100 may display, together with the health status information, the shooting environment information that was considered when the health status information was extracted from the face image.
When extracting the health status information from the face image, the device 100 may consider at least one of the shooting location, the shooting time, the distance from the camera to the face, the heart rate during shooting, the activity level during shooting, and the sleep duration before shooting.
In addition, the device 100 may display the image processing method that was applied to the original input image in order to extract the health status information from it. For example, the device 100 may display information about whether the scaling ratio or the illumination was adjusted.
Figures 41A and 41B are diagrams for describing a function, provided according to an exemplary embodiment, of selecting which pieces of shooting environment information, from among many, are to be considered by the device 100 when obtaining the health status information from the facial condition information.
The device 100 may display a user interface for selecting the shooting environment information to be considered when obtaining the health status information from the facial condition information. Upon receiving a user input selecting at least one of the pieces of shooting information displayed on the screen, the device 100 may obtain the health status information from the facial condition information based on the selected shooting environment information.
Alternatively, the device 100 may display a user interface for selecting the image processing method to be applied to the face image when obtaining the health status information from the facial condition information. Based on a user input selecting at least one of the image processing methods displayed on the screen, the device 100 may adjust the face image according to the selected image processing method, and obtain the facial condition information from the adjusted face image.
Referring to Figure 41A, the device 100 may extract skin trouble around the mouth from the face image as facial condition information. In addition, the device 100 may determine that the facial color in the face image is dark. The device 100 may obtain deterioration of kidney function as health status information, based on the skin trouble around the mouth and the dark facial color.
Referring to Figure 41B, the device 100 may display a user interface for adjusting the color temperature of the illumination on the face image, and a user interface for selecting at least one of the heart rate, the activity level, and the sleep duration to be considered when obtaining the health status information.
Upon receiving user inputs adjusting the color temperature of the illumination and selecting the heart rate, the activity level, and the sleep duration, the device 100 may determine the tone values of the face image by taking the color temperature of the illumination into account.

By adjusting the tone values of the face image, the device 100 may change the facial color from dark to yellow. Then, considering the yellow facial color together with the fact that the sleep duration before shooting was 2 hours, the device 100 may obtain health status information indicating that the skin trouble around the mouth is caused by a hormonal imbalance due to lack of sleep, rather than by deterioration of kidney function.
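Compensating for the color temperature of the illumination before judging facial color resembles a white-balance correction. The following is a gray-world sketch of such a correction; the patent does not specify the actual algorithm, so the scaling rule below is an illustrative assumption.

```python
import numpy as np

def gray_world_balance(img):
    """Illustrative white-balance step: scale each channel so its mean
    matches the overall mean, countering a color cast from warm or cool
    lighting. img is an HxWx3 float array with values in [0, 255]."""
    means = img.reshape(-1, 3).mean(axis=0)
    scale = means.mean() / means
    return np.clip(img * scale, 0, 255)

# A reddish cast (e.g. from warm indoor lighting) is neutralized:
warm = np.full((2, 2, 3), [200.0, 100.0, 100.0])
balanced = gray_world_balance(warm)
print(balanced[0, 0])  # all channels pulled toward a common gray level
```

After such a correction, the tone values the device compares against its standards reflect the skin rather than the lighting.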
Figure 42 is a flowchart of a method, performed by the device 100 according to an exemplary embodiment, of obtaining current health status information from current facial condition information based on the time point when the current face image was captured.
In operation S4210, the device 100 may obtain the current shooting date from the file of the current face image.
For example, the current shooting date may be recorded in the current face image in the form of metadata. Accordingly, the device 100 may obtain the current shooting date from the metadata of the current face image.
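In practice such metadata is typically EXIF data embedded in the image file. A sketch of reading the shooting date with the Pillow library follows (assuming Pillow is available; tag 306 is the standard `DateTime` tag, and 36867 is `DateTimeOriginal` inside the Exif sub-IFD at 0x8769):

```python
import os
import tempfile
from PIL import Image

def shooting_date(path):
    """Return the EXIF date string ('YYYY:MM:DD HH:MM:SS') of an image,
    or None if the file carries no date tag."""
    exif = Image.open(path).getexif()
    date = exif.get(306)  # DateTime, stored in IFD0
    if date is None:
        date = exif.get_ifd(0x8769).get(36867)  # DateTimeOriginal
    return date

# Round-trip demo: stamp a date into a JPEG and read it back.
exif = Image.Exif()
exif[306] = "2015:03:14 09:26:53"
tmp = os.path.join(tempfile.mkdtemp(), "face.jpg")
Image.new("RGB", (8, 8)).save(tmp, exif=exif)
print(shooting_date(tmp))  # 2015:03:14 09:26:53
```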
In operation S4220, the device 100 may obtain the current health status information from the current facial condition information while considering previous health status information related to previous face images captured before the current shooting date.
The device 100 may extract previous facial condition information from a plurality of previously obtained face images of the user. When extracting the previous facial condition information from the plurality of previously obtained face images, the device 100 may store the previous health status information obtained from the previous facial condition information according to the previous shooting dates of those face images. Accordingly, the device 100 may obtain the previous health status information of the user according to the previous shooting dates.
The device 100 may obtain previous health status information related to face images captured before the current shooting date obtained in operation S4210. The device 100 may obtain the previous health status information of the previous face images based on the health status of the user according to the previous shooting dates.

Accordingly, the device 100 may obtain the diseases or abnormalities that the user had before the time point when the current face image was captured.

Having obtained the previous health status information from before the current shooting date, the device 100 may obtain the current health status information from the current facial condition information of the current face image while considering that previous health status information.

The device 100 may obtain the current health status information from the current facial condition information based on the diseases or abnormalities that the user had before the current shooting date.
When a symptom extracted from the current face image may be the result of any of a plurality of diseases or abnormalities, the device 100 may determine the current health status information based on the previous health status information extracted from the plurality of previously obtained face images.
For example, when the disease or abnormality extracted from a previous face image obtained one month before the current shooting date of the current face image is gastroenteritis, and gastroenteritis, excessive exercise, and drinking are all extracted as possible causes of the current facial condition information extracted from the current face image, the device 100 may determine that the cause of the current symptom is gastroenteritis.
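The disambiguation in this example can be sketched as preferring a candidate cause that already appears in the user's recent diagnoses; the function and data names below are illustrative assumptions.

```python
def pick_cause(candidates, prior_diagnoses):
    """When a symptom maps to several possible causes, prefer one the
    user was already diagnosed with recently; otherwise fall back to
    the first candidate. Illustrative logic only."""
    for cause in candidates:
        if cause in prior_diagnoses:
            return cause
    return candidates[0] if candidates else None

print(pick_cause(["gastroenteritis", "excessive exercise", "drinking"],
                 {"gastroenteritis"}))  # gastroenteritis
```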
In addition, when the number of times a disease or abnormality appeared before the current shooting time is lower than a reference count, the device 100 may determine the disease or abnormality as health status information.
In addition, the device 100 may determine, based on the previous health status information extracted from the previous face images, whether a specific disease has worsened or improved.
Health status information extracted from face images that are consecutive in time is likely to be similar. For example, health status information extracted from faces of the same person captured within the same month may indicate similar diseases or abnormalities. Accordingly, the device 100 may obtain accurate health status information from the facial condition information by using the diseases or abnormalities that the user had before the time point when the current face image was captured.
Figure 43 is a diagram for describing a process, performed by the device 100 according to an exemplary embodiment, of obtaining the current health status information.
According to an exemplary embodiment, the device 100 may obtain the current health status information by comparing a usual (e.g., reference) state with the current state. For example, the device 100 may extract previous facial condition information from previously captured and stored face images of the user, and may calculate the usual state of the user by averaging the previous facial condition information.
In addition, the device 100 may calculate the current state of the user by extracting the current facial condition information from the current face image. The usual state and the current state may be calculated per type of facial condition information. Examples of the types of facial condition information include at least one of the facial color, the number or size of blemishes or acne, inflamed eyes, eye color, pupil size, movement of the pupils, facial size, the shape of the facial contour, lip color, cracked lips, the positions of the facial organs (eyes, nose, mouth, ears, or eyebrows), hair color, the gloss of the hair, damaged hair, and the movement of the facial muscles.
Based on the result of comparing the usual state and the current state, the device 100 may evaluate the current health state per type of facial condition information, or from the result of aggregating a plurality of pieces of facial condition information.
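The usual-state comparison described above can be sketched as averaging past per-type metrics and flagging current metrics that deviate beyond a tolerance. The metric names and the 20% tolerance below are illustrative assumptions.

```python
def usual_state(history):
    """Average each facial-condition metric over past images to form
    the user's usual (reference) state."""
    keys = history[0].keys()
    return {k: sum(h[k] for h in history) / len(history) for k in keys}

def compare(current, baseline, tolerance=0.2):
    """Return the metrics deviating from the usual state by more than
    `tolerance` (relative); these drive the health evaluation."""
    return {k: current[k] for k in current
            if abs(current[k] - baseline[k]) > tolerance * max(baseline[k], 1e-9)}

history = [{"blemishes": 4, "dark_circle": 0.1},
           {"blemishes": 6, "dark_circle": 0.2}]
print(compare({"blemishes": 11, "dark_circle": 0.16}, usual_state(history)))
# blemishes (11 vs a usual 5) stands out; dark_circle is within tolerance
```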
According to another exemplary embodiment, the device 100 may obtain the current health status information by calculating the difference between a pre-stored normal state and the current state. For example, the normal state for facial condition information may be defined based on at least one of the age, sex, height, and weight of the user, the time zone, and the weather.
The device 100 may calculate the difference between the pre-stored normal state and the current state per type of facial condition information. Based on the result of comparing the pre-stored normal state and the current state, the device 100 may evaluate the current health state per type of facial condition information, or from the result of collecting (e.g., obtaining) a plurality of pieces of facial condition information.
According to an exemplary embodiment, the operations of obtaining and providing the current health status information may be performed while the photo album or gallery function of the device 100 is running. For example, when the user selects an image to be used as a face image from among the images stored in the photo album and runs the health examination function of the device 100, the device 100 may calculate the health status information from the selected face image and provide it.

Alternatively, when the user runs the health examination function while the photo album is running, the device 100 may extract the face images that satisfy a face-image acquisition condition from among the images stored in the photo album, and may calculate and provide the health status information by using the extracted face images. In this case, the face images may be classified according to user, and health status information may be provided for each user.

Alternatively, the device 100 may provide a list of the people included in the face images, and obtain and provide the health status information of a person selected by the user.
According to another exemplary embodiment, the operations of obtaining and providing the current health status information may be performed while a particular application is running on the device 100. For example, the user may obtain the current health status information by running an application that has a health examination function.
According to an exemplary embodiment, the current health status information may be calculated by the device 100. In this case, the device 100 may use a dedicated algorithm to calculate the current health status information.

After calculating the current health status information, the device 100 provides it to the user. The current health status information may be provided to the user via any of various methods, for example, at least one of displaying it, outputting it as audio, outputting it to an external device, and storing it in a memory.

According to one or more exemplary embodiments, the device 100 may display the current face image and the current health status information on one screen, or may display only the current health status information.
Figure 44A is a diagram for describing a method, performed by the device 100 according to an exemplary embodiment, of obtaining the health status information by using a service server 1000.
Referring to Figure 44A, the device 100 may obtain the health status information of the user from the face image by using the service server 1000.
At least one of the address information of the service server 1000 and the account information of the user registered in the service server 1000 may be stored in the device 100. The address information of the service server 1000 may include the Internet Protocol (IP) address of the service server 1000.
The device 100 may transmit the face image of the user to the service server 1000 based on the address information of the service server 1000. Here, the device 100 may extract only the facial region of the user from the input image and transmit only the facial region to the service server 1000, or alternatively, may transmit the input image including the face of the user. In addition, the device 100 may transmit the account information of the user registered in the service server 1000 to the service server 1000. The device 100 may transmit the face image to the service server 1000 periodically, or may transmit it based on a user input.
Upon receiving the face image and the account information from the device 100, the service server 1000 may obtain the characteristic information of the face of the user stored in correspondence with the account information, and determine the position of the facial region in the face image based on the characteristic information.
Having determined the position of the facial region in the face image, the service server 1000 may obtain the facial condition information of the user by analyzing the facial region in the face image. Here, the service server 1000 may obtain the facial condition information by comparing the face image with a reference image stored in correspondence with the account information of the user. The reference image may be a previous face image of the user captured before the time point when the face image was captured. In addition, the reference image may be selected automatically by the service server 1000, or may be determined based on a selection of the user.
After obtaining the facial condition information, the service server 1000 may obtain the health status information of the user based on the facial condition information.
The service server 1000 may store the facial condition information and the health status information according to the account information of the user. In addition, the service server 1000 may transmit the facial condition information and the health status information to the device 100.
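The server-side flow just described (look up stored features for the account, locate the face region, derive condition and health information, store both per account) can be sketched as a toy class. Every name, data shape, and rule below is invented for illustration; the disclosure does not specify the server's implementation.

```python
class ServiceServer:
    """Toy sketch of the server-side flow of Figure 44A."""

    def __init__(self):
        self.features = {}   # account -> stored facial characteristic info
        self.records = {}    # account -> (condition_info, health_info)

    def register(self, account, feature_info):
        self.features[account] = feature_info

    def analyze(self, account, image):
        feats = self.features[account]           # characteristic info lookup
        region = self.locate_face(image, feats)  # position of facial region
        condition = {"face_color": region["mean_color"]}
        health = ["check circulation"] if condition["face_color"] == "pale" else []
        self.records[account] = (condition, health)  # stored per account
        return condition, health

    def locate_face(self, image, feats):
        # Stand-in for real face localization against stored features.
        return {"mean_color": image.get("color", "normal")}

server = ServiceServer()
server.register("user-123", {"eye_distance": 62})
print(server.analyze("user-123", {"color": "pale"}))
# ({'face_color': 'pale'}, ['check circulation'])
```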
Figure 44B shows a database 4400 of users stored in the service server 1000, according to an exemplary embodiment.
Referring to Figure 44B, the service server 1000 may store information about a user in correspondence with the user information 4410 of that user.
The information about the user may include at least one of characteristic information 4420 of the face of the user used to extract the facial region from the face image, identification (ID) information 4430 of the reference image to be compared with the face image to extract the facial condition information 4440, the facial condition information 4440 of the user, and the health status information 4450 of the user, but is not limited thereto in one or more other exemplary embodiments.
Figure 44C is a flowchart of a process, performed by the device 100 according to an exemplary embodiment, of obtaining the health status information by using the service server 1000.
According to the present exemplary embodiment, the process of obtaining the health status information may be performed by the service server 1000 communicating with the device 100. For example, the service server 1000 may be a server that provides a service of obtaining health status information, and may be a medical service server, an application server, or a website server.
The device 100 obtains a face image in operation S4410. Then, the device 100 transmits the face image to the service server 1000 in operation S4420. The device 100 may access the service server 1000 via the Internet, a telephone network, or a wireless communication network. For example, when the device 100 is a mobile communication terminal, the device 100 may access a network, and thereby the service server 1000, by using a communication method such as WiFi, third-generation (3G) mobile communication, fourth-generation (4G) mobile communication, Long Term Evolution (LTE), or LTE-Advanced (LTE-A). In addition, according to one or more exemplary embodiments, the device 100 may transmit the face image to the service server 1000 via any of various methods, for example, transmitting the face image from the photo album, transmitting a face image within an application, transmitting the face image by Short Message Service (SMS), transmitting the face image by using a messenger application, or transmitting the face image via e-mail.
According to an exemplary embodiment, the device 100 may transmit an image file of the face image to the service server 1000. In this case, the service server 1000 may extract ID information of the face image included in the image file. The service server 1000 may store facial feature point information according to ID information, and may extract the ID information of the face image by using the stored facial feature point information. Also, according to an exemplary embodiment, the service server 1000 may search for information stored in the service server 1000 with respect to the extracted ID information. For example, the service server 1000 may store at least one of facial condition information, health status information, personal information, medical history information, and other face images with respect to the ID information, and may search for such information by using the ID information.
According to another exemplary embodiment, the device 100 may transmit the image file of the face image together with ID information to the service server 1000. For example, the ID information may include at least one or a combination of an ID, a telephone number, an IP address, and a media access control (MAC) address. The service server 1000 may use the ID information to search for the facial condition information, health status information, personal information, medical history information, and other face images stored in the service server 1000 in correspondence with the ID information.
According to another exemplary embodiment, the device 100 may transmit the image file of the face image, the ID information, and the facial condition information and health status information associated with the ID information to the service server 1000. According to yet another exemplary embodiment, the device 100 may transmit personal information and additional information associated with the ID information to the service server 1000 together with the image file. The personal information may be information about at least one of a name, a telephone number, and an occupation. The additional information may be information about at least one of medical records, medical history, height, weight, age, blood pressure, blood sugar, waist measurement, hip circumference, chest size, and the like.
Upon receiving the face image in operation S4420, the service server 1000 extracts facial condition information from the face image in operation S4430. The facial condition information may be extracted by analyzing the face image. For example, the service server 1000 may extract the facial condition information by using at least one of the color information of the facial region, a face recognition algorithm, and a facial expression recognition algorithm. For example, by using the color information of the facial region, the service server 1000 may extract information about at least one of the facial color, the number of blemishes or acne, the eye color, and the lip color. In addition, for example, by using a face recognition algorithm or a facial expression recognition algorithm, the service server 1000 may extract information about at least one of inflamed eyes, pupil size, movement of the pupils, facial size, the shape of the facial contour, cracked lips, the positions of the facial organs, and the movement of the facial muscles. Alternatively, any of various other algorithms and methods may be used to extract the facial condition information.
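The color-information branch of this extraction can be sketched very simply: classify the overall tone from mean channel values and count dark pixels as blemish candidates. The thresholds and category names below are assumptions made for the sketch, not values from the disclosure.

```python
import numpy as np

def color_condition(face_pixels):
    """Illustrative color-based condition extraction over an HxWx3
    RGB array: classify overall facial tone from the mean color and
    count dark pixels as possible blemishes."""
    flat = face_pixels.reshape(-1, 3)
    mean = flat.mean(axis=0)
    if mean[0] > mean[2] * 1.3:      # red strongly dominates blue
        tone = "flushed"
    elif mean.mean() < 90:           # overall dark complexion reading
        tone = "dark"
    else:
        tone = "normal"
    blemishes = int((flat.mean(axis=1) < 50).sum())  # very dark pixels
    return {"tone": tone, "blemish_pixels": blemishes}

face = np.full((4, 4, 3), [180.0, 120.0, 110.0])
face[0, 0] = [30, 30, 30]  # one dark blemish-like pixel
print(color_condition(face))
# {'tone': 'flushed', 'blemish_pixels': 1}
```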
Then, the service server 1000 extracts health status information by using the facial condition information in operation S4440. For example, the service server 1000 may extract the health status information by determining at least one of a skin change, an eye condition, a weight change, a lip condition, a hair condition, and a change of the body.
The methods of extracting the facial condition information and the health status information may be updated by a user (e.g., an administrator of the service server 1000 or a medical professional).
According to an exemplary embodiment, the service server 1000 may provide facial condition information and/or health status information extracted from the face image by a medical professional. In this case, the facial condition information and/or the health status information may be provided to the device 100 some hours or some days after the face image is transmitted to the service server 1000.
After obtaining the health status information in operation S4440, the service server 1000 transmits the facial condition information and/or the health status information to the device 100 in operation S4450. In operation S4450, the facial condition information and/or the health status information may be transmitted via any of various methods, for example, via an application message, an SMS message, a messenger application message, or an e-mail.
The device 100 provides the facial condition information and/or the health status information received from the service server 1000 to the user in operation S4460. For example, the device 100 may display the health status information on the screen of the device 100.
Figure 45 is a flowchart of a method, performed by the device 100 according to an exemplary embodiment, of displaying the health status information of the user.
In operation S4510, the device 100 may display the health status information of the user.
The device 100 may display the health status information obtained by using the facial condition information. In addition, the device 100 may display not only the health status information, but also at least one of information about the symptoms extracted from the face image, information about the causes of the symptoms, information about other symptoms that may appear together with the extracted symptoms, and information about actions for improving the symptoms.
Figure 46 is a diagram for describing a method of providing a user interface for presenting the health status information calculated from the face of the user shown on the device 100, while the device 100 displays an image stored in the device 100, according to an exemplary embodiment.
Referring to Figure 46, the device 100 may display an image selected by the user from among the images stored in the device 100.
Upon receiving a user input selecting at least one of the stored images, the device 100 may determine whether the selected image is an image from which facial condition information has been extracted. When the selected image is such an image, the device 100 may display a user interface 4610 for providing the health status information corresponding to the face image.
Upon receiving a user input selecting the "health care" button 2810 for providing the health status information, the device 100 may display the health status information stored in correspondence with the selected image.
Figure 47 is a diagram for describing a method, performed by the device 100 according to an exemplary embodiment, of displaying the health status information on a displayed image.
Referring to Figure 47, the device 100 may display the health status information on the displayed image.
The device 100 may obtain the ID information of the displayed image, and then obtain the health status information stored in correspondence with the obtained ID information. The device 100 may store, according to the ID information, a predicted disease and the position of the diagnosis region from which the predicted disease was extracted. Accordingly, the device 100 may display, on the displayed image, a phrase 4720 indicating the predicted disease and an image 4710 indicating the diagnosis region from which the predicted disease was extracted.
In addition, the device 100 may provide a user interface 4730 for choosing not to extract health status information from the displayed image. Upon receiving a user input selecting the user interface 4730, the device 100 may delete the health status information stored for the displayed image.
Accordingly, when the health status information of the user is distorted because the face of the user was not captured clearly due to the external environment, the distorted health status information may be deleted.
Figure 48A is a diagram for describing a method, performed by the device 100 according to an exemplary embodiment, of providing a user interface 4810 for selecting the person whose health status information is to be displayed on the screen, from among the health status information of a plurality of people.
Referring to Figure 48A, the device 100 may display the user interface 4810 for selecting the person whose health status information is to be displayed on the screen, from among the health status information of a plurality of people. The device 100 may extract, from a face image, the health status information of the plurality of people shown in the face image. In addition, the device 100 may store the ID information of the input image from which the facial condition information was extracted and the health status information obtained from the facial condition information, in correspondence with the ID information of each person. Upon receiving a user input selecting one of the plurality of people, the device 100 may display the input image and the health status information of the selected person.
Figure 48B is a diagram for describing a method, performed by the device 100 according to an exemplary embodiment, of displaying input images 4820 of the person selected by the user and health status information 4840 corresponding to the input images 4820.
Referring to Figure 48B, the device 100 may display the input images 4820 of the person selected by the user and the health status information 4840 corresponding to the input images 4820.
The device 100 may display the input images 4820 in chronological order of their shooting times. In addition, the device 100 may display date information 4830, indicating when each input image 4820 was generated, together with the input image 4820. Furthermore, the device 100 may display the health status information 4840 corresponding to each input image 4820.
Accordingly, the user may review the health status information 4840 of the selected person in chronological order.
Figures 49A to 49C are diagrams for describing a method, performed by the device 100 according to an exemplary embodiment, of providing health status information for a period or a disease selected by the user.
Referring to Figure 49A, the device 100 may display a user interface 4910 for selecting a period. The user interface 4910 may be a user interface for selecting a unit period counted back from the current time point.
Upon receiving a user input selecting a unit period counted back from the current time point, the device 100 may display the health status information extracted from the images captured during the selected unit period.
Referring to Figure 49B, the device 100 may display a user interface 4920 for selecting a disease. The device 100 may display the diseases of the user related to the images stored in the device 100. In addition, the device 100 may display the diseases of the user related to the images, from among the images stored in the device 100, that were captured during the unit period selected by the user.
Referring to Figure 49C, upon receiving user inputs selecting a unit period and a disease, the device 100 may display, in chronological order, the input images related to the selected disease that were captured during the selected unit period.
Figure 50A shows a screen providing health status information, according to an exemplary embodiment.
According to an exemplary embodiment, the health status information calculated (e.g., determined or obtained) by the device 100 may be displayed on the screen of the device 100. In addition, when the health status changes, the device 100 may provide information about the change to the user. For example, as shown in Figure 50A, a message indicating that the skin condition has worsened due to an increase in the number of blemishes may be displayed on the screen.
Figure 50B shows a screen providing health status information, according to another exemplary embodiment.
According to an exemplary embodiment, the health status information may be provided in a form that shows how the health status information has changed over time. When the face image is extracted from a captured image, the time may be determined based on the capture time of the captured image or the current time. When the face image is extracted from a stored image, the time may be determined based on the shooting date stored in the image file of the stored image. For example, as shown in Figure 50B, the change in the number of blemishes may be shown by date, to inform the user of the skin condition over time. Alternatively, the device 100 may provide the change in blood pressure, skin color, eye condition, or skin condition over time. According to the present exemplary embodiment, the user may easily identify the change in the health status information over time.
Figure 51A shows a screen providing health status information, according to another exemplary embodiment.
According to an exemplary embodiment, the device 100 may display the health status information together with the face image. For example, as shown in Figure 51A, the number of blemishes one week ago and today may be shown by using the face images. According to the present exemplary embodiment, the change in health status may be shown intuitively and visually.
Also, according to the present exemplary embodiment, the device 100 may provide face images and health status information comparing an optimal state with the current state. For example, the device 100 may display a face image showing the optimal skin condition and a face image showing the current skin condition. According to the present exemplary embodiment, the user may intuitively assess the current state by comparing the optimal state and the current state.
Also, according to the present exemplary embodiment, the device 100 may provide the health status information by showing a face image of the user illustrating the current state together with a base face image illustrating a normal state. Here, the base face image may not be a face image of the user. According to the present exemplary embodiment, the user may intuitively determine his or her health status.
Figure 51B is a diagram for describing a method, performed by the device 100 according to an exemplary embodiment, of displaying the facial condition information that has changed over time, from among a plurality of pieces of facial condition information of the user.
Referring to Figure 51B, the device 100 may display facial condition information that has changed over time by comparing a photo 5102 captured a first period (e.g., one week) ago, a photo 5104 captured a second period (e.g., three days) ago, and a current photo 5106. In addition, the device 100 may display, on the facial region of the input image of the user from which facial condition information different from the past facial condition information was extracted, an indicator indicating that different facial condition information was extracted.
For example, the device 100 may extract, from the photo 5102, information indicating that the number of acne spots on the cheeks is equal to or higher than a threshold number, as facial condition information. Accordingly, the device 100 may obtain information indicating deterioration of lung function from the photo 5102, as health status information.
When obtaining photo 5104 as input picture, equipment 100 can extract the quantity of acne cheek from photo 5104 and be equal to or greater than below number of thresholds and eyes and generate black-eyed information, as facial condition information.When difference between the quantity of the acne in the quantity and photo 5102 of the acne in photo 5104 is in threshold value, equipment 100 can not show separately the information about the acne on photo 5104.In addition, equipment 100 possibly cannot determine that acne worsen or improve.
Simultaneously, because the livid ring around eye below the eyes that detect from photo 5104 are not included in the facial condition information extracted from photo 5102, therefore in order to indicate about based on before one week before three days the information of the facial situation of the user of change, equipment 100 can show the image 5110 and 5120 of the such change on the black-eyed region of instruction.
In addition, when obtaining the current photo 5106 as input picture, equipment 100 can extract the quantity of acne cheek from photo current 5106 and be equal to or higher than number of thresholds and below eyes, generate black-eyed information, as facial condition information.When difference between the quantity of the acne in the quantity and photo 5104 of the acne in current photo 5106 is in threshold value, equipment 100 can not show separately the information about the acne on current photo 5106.In addition, equipment 100 possibly cannot determine that acne worsen or improve.
Simultaneously, the livid ring around eye be confirmed as below than the eyes detected from photo 5104 due to the livid ring around eye below the eyes that detect from current photo 5106 are larger and darker, therefore in order to indicate about current relative to three days before the information of facial situation of user of change, equipment 100 can show the image 5130 and 5140 of such change in the black-eyed region of instruction.
Like this, equipment 100 can provide the information about time dependent facial situation.
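The comparison above — reporting only those facial-condition features whose change between two capture dates exceeds a threshold — can be sketched as follows. This is a hypothetical illustration; the function and feature names (`detect_changes`, `acne`, `dark_circles`) are not from the specification.

```python
def detect_changes(previous, current, threshold=2):
    """Compare two facial-condition records and report only the
    features whose change meets or exceeds the threshold."""
    changes = {}
    for feature, value in current.items():
        old = previous.get(feature, 0)
        if abs(value - old) >= threshold:
            changes[feature] = value - old
    return changes

# e.g. comparing the photo from a week ago with the one from three days ago:
week_ago = {"acne": 8, "dark_circles": 0}
three_days_ago = {"acne": 7, "dark_circles": 3}
# acne changed by only 1 (within threshold), so only dark_circles is reported
changed = detect_changes(week_ago, three_days_ago)
```

A change within the threshold (the acne count here) produces no entry, matching the behavior where the device does not separately display information it cannot attribute to worsening or improvement.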
Also, the device 100 may display health status information of the user that differs between the time point when the current input image was captured and the time point when a past input image was captured, based on information about at least one difference between the user's current facial condition information and past facial condition information.
For example, based on the dark circles detected from the photo 5104, the device 100 may display a phrase 5150 indicating allergic rhinitis as health status information. Also, based on the darker and larger dark circles detected from the current photo 5106, the device 100 may display a phrase 5160 indicating that the allergic rhinitis has worsened as health status information.
In this manner, the device 100 may provide health status information that changes over time.
Figure 52 shows a screen for providing health status information, according to another exemplary embodiment.
According to an exemplary embodiment, the device 100 may provide suggestions to the user based on the health status information. For example, when the user's skin condition is poor, a suggestion to take vitamin C may be provided. Here, in order to guide the user to take vitamin C twice a day, a notification to take vitamin C may be provided as a pop-up message at two certain points in time. Alternatively, for example, the device 100 may provide a message or notification guiding the user to exercise at a certain point in time.
Alternatively, based on the health status information of the user, the device 100 may provide the user with information about food, lifestyle, or exercise that is required for, suggested for, or determined for the user.
Figures 53A and 53B are diagrams for describing a method, performed by the device 100, of providing health status information of the user in a calendar format, according to an exemplary embodiment.
Referring to Figure 53A, the device 100 may display a calendar on a screen, the calendar showing dates by week or by month. Also, the device 100 may display a user interface 5310 for displaying health status information corresponding to a date.
Referring to Figure 53B, upon receiving a user input selecting the user interface 5310 for displaying health status information corresponding to a date, the device 100 may display, on the region corresponding to each date, the health status information corresponding to the face image captured on that date.
The device 100 may store the health status information corresponding to the face image captured on each date, together with identification (ID) information of the face image. Here, the health status information corresponding to each date may be obtained from the facial condition information extracted from the face image captured on that date.
Also, upon receiving a user input selecting a date, the device 100 may display a face image 5340 captured on the selected date, and health status information 5330 obtained from the facial condition information extracted from the face image 5340.
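The date-keyed storage described above can be sketched as a simple mapping from capture date to an image ID and a status summary. This is a hypothetical sketch; the names (`health_log`, `record`, `status_for`) and the sample values are illustrative only.

```python
from datetime import date

# Hypothetical store mapping each capture date to the ID of the face
# image captured that day and the health status derived from it.
health_log = {}

def record(day, image_id, status):
    """Store the health status and face-image ID for a capture date."""
    health_log[day] = {"image_id": image_id, "status": status}

def status_for(day):
    """Return the status for a date, or None if nothing was captured."""
    entry = health_log.get(day)
    return entry["status"] if entry else None

record(date(2015, 3, 14), "img_5340", "fatigue suspected")
```

Selecting a date in the calendar would then amount to looking up that date's entry and rendering both the stored image and its status text.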
Figure 54 is a diagram for describing a method, performed by the device 100, of displaying health status information of the user when a social network application is run, according to an exemplary embodiment.
Referring to Figure 54, when a social network application is run, the device 100 may display the health status information of the user.
For example, the device 100 may store health status information corresponding to face images. The device 100 may store the health status information corresponding to the most recently captured face image as current health status information. In this case, the current health status information may be stored in a certain location. In this manner, the social network application running on the device 100 can extract the health status information from that location.
Also, the social network application running on the device 100 may display an image 5410 showing the state of the user based on the current health status information. For example, the social network application running on the device 100 may display an image of a patient when the health status of the user is equal to or lower than a reference value.
Figure 55 is a flowchart of a method, performed by the device 100, of extracting a difference in the facial condition of the user by comparing a face image with a reference image.
In operation S5510, the device 100 may obtain a face image of the user.
For example, the device 100 may receive an input image including the face of the user from an imaging unit (e.g., an imager) included in the device 100. At this time, the device 100 may provide a template capture interface for obtaining an input image that satisfies a preset face-image obtaining condition.
Also, when the input image satisfies the preset face-image obtaining condition, the device 100 may capture the face of the user even without a user input.
Alternatively, the device 100 may obtain, as the input image, an image selected by the user from among the images stored in the device 100. Alternatively, the device 100 may obtain, as the input image, an image downloaded from an external device.
The device 100 may extract a facial region from the input image, and store the facial region as a face image.
In operation S5520, the device 100 may extract, from the face image, state values of shooting elements indicating a shooting environment.
The shooting elements indicating the shooting environment may include at least one of illumination, place, background, time slot, angle of the face, and facial expression, as shown in Figure 56, but are not limited thereto in one or more other exemplary embodiments.
For example, the device 100 may extract the brightness, direction, and color temperature of light at the time of shooting from the face image.
In operation S5530, the device 100 may determine, from among a plurality of reference images, a reference image to be compared with the face image, based on the state values.
The device 100 may store the plurality of reference images in correspondence with ranges of state values. For example, a reference image captured at a place where the brightness of light is 60 lux may be stored as the reference image corresponding to 50 to 70 lux. Also, a reference image captured at a place where the brightness of light is 80 lux may be stored as the reference image corresponding to 70 to 90 lux.
Accordingly, the device 100 may determine the reference image corresponding to the state values extracted from the face image. For example, when the brightness of light extracted from the face image is 60 lux, the device 100 may determine the reference image corresponding to 50 to 70 lux from among the plurality of reference images as the reference image to be compared with the face image.
Reference images may be predetermined not only in correspondence with a plurality of state values of one shooting element, but also in correspondence with combinations of a plurality of shooting elements. For example, a reference image may be predetermined for each of the 18 combinations of: the brightness of light being 50 to 70 lux, 70 to 90 lux, or 90 to 110 lux; the place being home, school, or office; and the direction of the face being left or right.
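The range-based selection in operation S5530 can be sketched as a lookup over stored reference images, each tagged with the brightness range it represents. This is a hypothetical sketch; the file names and the `pick_reference` helper are illustrative, not from the specification.

```python
# Hypothetical table of stored reference images, each covering a
# brightness range of the shooting environment (in lux).
REFERENCE_IMAGES = [
    {"file": "ref_60lux.jpg", "lux_range": (50, 70)},
    {"file": "ref_80lux.jpg", "lux_range": (70, 90)},
    {"file": "ref_100lux.jpg", "lux_range": (90, 110)},
]

def pick_reference(measured_lux):
    """Return the reference image whose range contains the brightness
    measured from the face image, or None if no range matches."""
    for ref in REFERENCE_IMAGES:
        low, high = ref["lux_range"]
        if low <= measured_lux < high:
            return ref["file"]
    return None
```

Combinations of several shooting elements (brightness, place, face direction) could be handled the same way by keying the table on tuples of ranges rather than a single range.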
In operation S5540, the device 100 may extract a difference in the facial condition by comparing the face image with the reference image.
The device 100 may compare the face image with the reference image to extract the difference from the face image. For example, information may be extracted about whether, compared with the reference image, the number of acne spots in the facial region has increased, the facial color has darkened, or the lips have become drier.
In operation S5550, the device 100 may determine a change in the health status of the user based on the difference.
For example, when the number of acne spots has increased, the device 100 may determine that the user's hormone secretion is unstable. Also, when the facial color has darkened, the device 100 may determine that the user's blood circulation is poor.
In this manner, the device 100 may compare the face image with a reference image reflecting the shooting environment of the face image, so as to accurately determine the health status of the user.
Figure 56 is a table of shooting elements according to an exemplary embodiment.
Referring to Figure 56, the device 100 may determine, as shooting elements, not only elements of the user but also elements other than the user.
A shooting element may be an element that affects the facial condition information extracted from the face image of the user.
The shooting elements may include at least one of the brightness of light, the direction of light, and the color temperature of light. The darkness of the user's facial color may change based on the brightness of light. Also, the shadows in the user's facial region may change based on the direction of light. Also, the user's facial color may change based on the color temperature of light.
Also, the shooting elements may include at least one of the place and information about whether the user is indoors or outdoors. The place may include not only information about an absolute position, such as latitude and longitude, but also information about a relative position, such as home, office, or school. The color temperature of the face image may change based on whether the user is indoors or outdoors. Also, the degree of the user's makeup may change based on whether the user is at home, at the office, or at school. Also, based on the user's life pattern, the user's activities may differ at home, at the office, or at school.
Also, the shooting elements may include the background color around the user's face in the face image, or objects around the user's face in the face image. The device 100 may determine whether it is day or night based on whether the background color around the face is dark. Also, when the color around the face is a skin color, the accuracy of locating the facial region may be affected.
Also, the shooting elements may include the time slot. The user's condition may show a consistent pattern based on the time slot, and the user's facial condition may change according to the condition. The pattern of the condition may be determined on a daily or monthly basis. Alternatively, the condition may show a consistent pattern based on the day of the week.
Also, the degree of the user's makeup may show a consistent pattern based on the time slot. For example, the user may wear no makeup from 00:00 to 07:00 and may wear makeup from 10:00 to 17:00.
Also, the shooting elements may include the angle of the user's face relative to the camera at the time of shooting. The region of the face exposed in the face image may differ based on the angle of the face. Also, the shooting elements may include the user's facial expression at the time of shooting. The facial region from which facial condition information is extracted may differ based on the facial expression.
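The shooting elements listed above can be gathered into one record per capture, with each field optional since a given element may be unmeasurable or absent from the file metadata. This is a hypothetical sketch; the class and field names are illustrative only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ShootingElements:
    """Hypothetical record of the shooting elements of one face image;
    any field may be None if it could not be measured or read."""
    brightness_lux: Optional[float] = None
    light_direction: Optional[str] = None   # e.g. "left", "front", "right"
    color_temp_k: Optional[float] = None
    place: Optional[str] = None             # e.g. "home", "office", "school"
    indoors: Optional[bool] = None
    time_of_day: Optional[str] = None       # e.g. "17:33"
    face_angle_deg: Optional[float] = None
    expression: Optional[str] = None        # e.g. "neutral", "smile"

# A capture where only brightness and face angle were determined:
elements = ShootingElements(brightness_lux=200, face_angle_deg=25.0)
```

Keeping unknown elements as `None` lets later steps select reference images using only the elements that were actually determined.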
Figure 57 is a diagram for describing a method, performed by the device 100, of determining a reference image, according to an exemplary embodiment.
As shown in Figure 57, the device 100 may provide a user interface for selecting an image to be used as a reference image from among a plurality of images stored in the device 100.
For example, when a button for selecting a reference image is selected, the device 100 may display images 5710 to 5740 that include the face of the user from among the plurality of images stored in the device 100. Also, the device 100 may display, on each of the images 5710 to 5740, a toggle button for selecting the image.
Upon receiving a user input selecting an image, the device 100 may store the selected image as the reference image.
Then, the device 100 may extract, from the reference image, state values of the shooting elements indicating the shooting environment of the reference image. For example, the device 100 may extract at least one of the time, the place, and the direction, angle, and color temperature of light from the metadata of the reference image file. Also, the device 100 may analyze the reference image to obtain at least one of the angle and direction of the user's face and the facial expression in the face image.
Also, the device 100 may determine the reference image automatically. For example, the device 100 may determine, as the reference image, a pre-registered image including the face of the user from among the plurality of images stored in the device 100. The device 100 may determine, as the reference image, an image in which the resolution of the user's face (e.g., the number of pixels representing the face of the user) is equal to or higher than a reference value.
Figures 58A to 58C are diagrams for describing methods, performed by the device 100, of determining a reference image, according to one or more exemplary embodiments.
Referring to Figure 58A, the device 100 may determine reference images according to place from among a plurality of images.
For example, the device 100 may determine reference images according to longitude and latitude, based on the longitude and latitude stored as metadata in the image files. Alternatively, an image file may store a relative position, such as home or office, instead of an absolute position. Accordingly, the device 100 may determine a reference image 5810 corresponding to home and a reference image 5815 corresponding to the office.
Referring to Figure 58B, the device 100 may determine reference images according to the brightness of light from among a plurality of images.
For example, the device 100 may determine reference images according to the brightness of light, based on the brightness of light stored as metadata in the image files. For example, the device 100 may determine a reference image 5820 for when the brightness of light at the time of shooting was 200 lux, a reference image 5822 for when it was 100 lux, and a reference image 5824 for when it was 50 lux.
Referring to Figure 58C, the device 100 may determine reference images according to the position of light from among a plurality of images.
The device 100 may determine the position of light by analyzing an image. For example, when the right side of the user's face in an image is darker than the left side by at least a threshold, it may be determined that the light was positioned on the left side at the time of shooting. Also, when the brightness is uniform across the facial region, it may be determined that the light was positioned in the same direction as the camera. Also, when the top of the user's face in a face image is darker than the bottom by at least a threshold, it may be determined that the light was positioned directly below the user's face at the time of shooting.
Accordingly, the device 100 may determine a reference image 5830 for when the light was positioned to the user's left at the time of shooting, a reference image 5832 for when the light was positioned to the user's right, a reference image 5834 for when the light was positioned above the camera lens, and a reference image 5836 for when the light was positioned directly below the user's face.
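The brightness-asymmetry test described for Figure 58C can be sketched by comparing the mean brightness of the halves of the facial region. This is a hypothetical sketch operating on precomputed half-region means; the function name and threshold value are illustrative only.

```python
def estimate_light_direction(left_mean, right_mean, top_mean, bottom_mean,
                             threshold=20):
    """Guess the light position from the average brightness of the
    left/right and top/bottom halves of the facial region."""
    if left_mean - right_mean >= threshold:
        return "left"    # right side darker => light from the user's left
    if right_mean - left_mean >= threshold:
        return "right"   # left side darker => light from the user's right
    if bottom_mean - top_mean >= threshold:
        return "below"   # top darker => light directly below the face
    return "front"       # roughly uniform => light in the camera direction
```

A real implementation would compute these means from pixel data of the detected facial region; here they are passed in directly to keep the comparison logic visible.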
Figure 59 is a diagram for describing a method, performed by the device 100, of generating a plurality of reference images 5921, 5923, 5925, 5931, 5933, and 5935 according to environment by compensating one base image 5910, according to an exemplary embodiment.
The base image 5910 may be selected by the user, or the device 100 may automatically select, as the base image 5910, the image that most closely satisfies the face-image obtaining condition.
The device 100 may display a plurality of shooting elements, and may display toggle buttons 5920, 5930, and 5940 for selecting whether to generate reference images according to each shooting element.
Upon receiving a user input selecting the toggle button 5920 for generating reference images according to the brightness of light, the device 100 may generate reference images under different brightnesses by compensating the base image 5910. For example, by adjusting the brightness of the pixels in the base image 5910, the device 100 may generate reference images 5921, 5923, and 5925 corresponding to 200 lux, 100 lux, and 50 lux, respectively.
Also, upon receiving selection of the toggle button 5930 for generating reference images according to the angle of the face, the device 100 may generate reference images at different angles by compensating the base image 5910. For example, the device 100 may generate reference images 5931, 5933, and 5935 in which the face is turned to the left, faces the front, and is turned to the right, respectively. Also, the device 100 may generate reference images in which the face is tilted up or down. Also, the device 100 may generate reference images in which the face is rotated to the left or to the right. In this manner, the device 100 may generate reference images according to various angles of the face.
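The brightness compensation above — deriving several reference images at target illuminance levels from one base image — can be sketched as a per-pixel intensity scaling. This is a simplified hypothetical sketch; a real implementation would work on image data in a linear color space rather than on a flat list of clamped 8-bit values.

```python
def adjust_to_lux(pixels, current_lux, target_lux):
    """Scale pixel intensities so a base image captured at current_lux
    approximates the same scene lit at target_lux (clamped to 8-bit)."""
    scale = target_lux / current_lux
    return [min(255, round(p * scale)) for p in pixels]

# Base image (a few sample intensities) captured at 100 lux;
# derive the 200/100/50 lux variants as in Figure 59.
base = [40, 80, 120]
refs = {lux: adjust_to_lux(base, 100, lux) for lux in (200, 100, 50)}
```

Generating the angle variants (5931, 5933, 5935) would require a geometric transform or face model rather than a per-pixel scale, so only the brightness case is sketched here.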
Figures 60A and 60B are diagrams for describing a method, performed by the device 100, of determining a reference image based on state values of shooting elements of a face image, and determining the health status of the user by comparing the face image with the reference image, according to an exemplary embodiment.
Referring to Figure 60A, the device 100 may determine state values of the shooting elements of a face image 6010.
For example, by analyzing the face image 6010, the device 100 may calculate that the user's face in the face image 6010 is turned by 25°. Also, the device 100 may determine that the brightness of light at the time of shooting was 200 lux, based on the metadata in the file of the face image 6010. Also, the device 100 may determine that the shooting place was home, based on the shooting location information in the file of the face image 6010. Also, by analyzing the face image 6010, the device 100 may determine that the light directly illuminated the user. Also, by analyzing the face image 6010, the device 100 may determine that the color temperature of the light is 4000 K. Also, the device 100 may determine that the light source is a fluorescent lamp based on the determined color temperature, and determine that the shooting place was indoors. Also, the device 100 may determine that the shooting time was 17:33, and that the facial expression of the user in the face image 6010 is expressionless.
When the state values of the shooting elements of the face image 6010 are determined, the device 100 may determine one reference image based on the determined state values.
For example, the device 100 may obtain one reference image reflecting the determined state values. For example, the device 100 may obtain one reference image in which the face is turned by 25°, the brightness of light is 200 lux, the angle of light is frontal, the color temperature of light is 4000 K, the shooting place is home, the shooting time is 17:33, and the facial expression is expressionless. Here, the reference image may be a representative image in which the angle of the face is 15° to 30°, the brightness of light is 150 to 200 lux, the angle of light is frontal, the color temperature of light is 3500 to 4500 K, the shooting place is home, the shooting time is 16:00 to 18:00, and the facial expression is expressionless.
According to an exemplary embodiment, the device 100 may determine the reference image based only on those shooting elements, from among the plurality of shooting elements, whose state values are outside a basic range. For example, when the angle of the face relative to the front is equal to or greater than a basic value, that is, equal to or greater than 20°, the device 100 may determine the angle of the face as a shooting element for determining the reference image. Alternatively, when the brightness is equal to or greater than 150 lux or equal to or less than 50 lux, the device 100 may determine the brightness of light as a shooting element for determining the reference image. Alternatively, when the color temperature is equal to or less than 3000 K or equal to or greater than 8000 K, the device 100 may determine the color temperature of light as a shooting element for determining the reference image. Alternatively, when the shooting time is before 06:00 or after 23:00, the device 100 may determine the shooting time as a shooting element for determining the reference image. Alternatively, when the facial expression is a smiling face or a frowning face, the device 100 may determine the facial expression as a shooting element for determining the reference image.
Accordingly, with respect to the face image 6010 of Figure 60A, the device 100 may select the angle of the face or the brightness of light as the shooting elements for determining the reference image. The device 100 may then determine, from among the plurality of reference images, the reference image 6020 in which the face is turned by 25° and the brightness of light is 200 lux, as the reference image to be compared with the face image 6010.
Alternatively, according to an exemplary embodiment, the device 100 may not only determine a pre-stored image as the reference image according to the state values of the shooting elements of the face image 6010, but may also generate or obtain the reference image by compensation.
For example, the device 100 may obtain an image in which the face is turned by 25°, and generate the reference image to be compared with the face image 6010 by adjusting the brightness of the obtained image to 200 lux.
When the reference image 6020 is obtained or generated, the device 100 may compare the face image 6010 with the reference image 6020 to determine the differing facial condition information, and determine current health status information based on the differing facial condition information.
Referring to Figure 60B, the device 100 may determine a plurality of reference images based on the state values of the shooting elements of the face image 6010.
For example, the device 100 may obtain a reference image 6030 in which the face is turned by 25°, a reference image 6040 in which the brightness of light is 200 lux, a reference image in which the angle of light is frontal, and a reference image in which the color temperature of light is 4000 K. Also, the device 100 may obtain a reference image 6050 in which the shooting place is home, a reference image in which the shooting time is about 17:00, and a reference image in which the facial expression is expressionless.
When the plurality of reference images corresponding to the shooting elements of the face image 6010 are determined, the device 100 may compare the face image 6010 with each of the plurality of reference images to determine a plurality of pieces of facial condition information, each corresponding to one of the plurality of reference images. Also, the device 100 may determine one piece of facial condition information by averaging or taking a weighted average of the plurality of pieces of facial condition information.
According to an exemplary embodiment, the device 100 may determine the reference images based only on those shooting elements, from among the plurality of shooting elements, whose state values are outside a basic range.
For example, the device 100 may determine three pieces of facial condition information by comparing the face image 6010 with the reference image 6030 in which the face is turned by 25°, the reference image 6040 in which the brightness of light is 200 lux, and the reference image 6050 in which the shooting place is home, and determine the health status information by averaging or taking a weighted average of the three pieces of facial condition information.
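The averaging step described for Figure 60B — merging the per-reference-image comparison results into one value — can be sketched as a plain or weighted mean. This is a hypothetical sketch; the function name and the numeric scores are illustrative, and in practice each score would itself come from an image comparison.

```python
def combine_condition_scores(scores, weights=None):
    """Weighted mean of the condition scores obtained by comparing the
    face image against each selected reference image; equal weights
    reduce this to a plain average."""
    if weights is None:
        weights = [1.0] * len(scores)
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total

# e.g. scores from the face-angle, brightness, and place references:
combined = combine_condition_scores([1.0, 2.0, 3.0])
```

Weights could, for instance, favor reference images whose shooting elements match the face image most closely.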
Figures 61A to 61E are diagrams for describing methods, performed by the device 100, of obtaining health status information of the user from a face image of the user, and providing hospital-related services based on the health status information, according to one or more exemplary embodiments.
Referring to Figure 61A, the device 100 may provide hospital-related services based on health status information 6130.
For example, upon receiving a user input for displaying the health status information 6130 obtained from a face image 6110, the device 100 may display or output the face image 6110, facial condition information 6120 obtained from the face image 6110, and the health status information 6130 obtained based on the facial condition information 6120. For example, the facial condition information 6120 may be protruding eyes and swelling below the eyes, and the health status information 6130 may be hyperthyroidism.
Also, the device 100 may display or output a user interface for providing hospital-related services together with the facial condition information 6120 and the health status information 6130. For example, the device 100 may display a button 6140 for providing a reservation service, a button 6150 for displaying a list of nearby hospitals, and a button 6160 for talking with a doctor.
Referring to Figure 61B, upon receiving a user input selecting the button 6140 for providing a reservation service, the device 100 may display a user interface for making a reservation at a hospital related to the health status information of the user.
For example, when the health status information is hyperthyroidism, the device 100 may display a list of hospitals related to the thyroid. Here, the device 100 may receive the list of hospitals related to the thyroid from a server (e.g., a third-party server), or receive the list of hospitals related to the thyroid from the service server 1000 connected to the third-party server.
Referring to Figure 61C, upon receiving a user input selecting a hospital from the list of hospitals, the device 100 may provide a user interface for making a reservation at the selected hospital.
The user interface for making a reservation may include an item 6170 for selecting a date and time, an item 6172 for inputting the name of the user, an item 6174 for inputting contact information, and an item 6176 for inputting a memo.
When the items 6170 to 6176 are filled in and a user input selecting a reservation button 6178 is received, the device 100 may transmit the information in the items 6170 to 6176 to the service server 1000 or the third-party server. Upon receiving, from the service server 1000 or the third-party server, information indicating that the reservation has been made, the device 100 may display a phrase indicating that the reservation has been made.
Referring to Figure 61D, upon receiving a user input selecting the button 6150 for displaying the list of nearby hospitals in Figure 61A, the device 100 may provide location information about hospitals near the current location of the user, based on location information of the device 100.
For example, the device 100 may obtain the current latitude and longitude of the device 100 by using a GPS included in the device 100. Based on the current longitude and latitude, the device 100 may present the hospitals related to the health status information of the user from among the hospitals near the current location of the user. Here, the device 100 may display a map 6180 showing the locations of the hospitals.
For example, the device 100 may display, on the map 6180, locations 6181 to 6186 of endocrinology hospitals specializing in hyperthyroidism from among the hospitals near the current location of the user.
Referring to Figure 61E, upon receiving a user input selecting the button 6160 for talking with a doctor in Figure 61A, the device 100 may provide a list of doctors.
For example, upon receiving a user input selecting the button 6160, the device 100 may provide a list of doctors related to the health status information of the user. Here, the device 100 may transmit the health status information to the service server 1000 or the third-party server, receive the list of doctors from the service server 1000 or the third-party server, and display the list of doctors.
Upon receiving a user input selecting a doctor from the list of doctors, the device 100 may transmit ID information of the selected doctor to the service server 1000 or the third-party server, and display a chat window for talking with the selected doctor.
Figure 62 is a database 6200 of health status information 6230 and 6240 extractable from facial condition information 6220, and prescription information 6250 according to the health status information 6230 and 6240, according to an exemplary embodiment.
Referring to Figure 62, the device 100 or the service server 1000 may store, in the form of the database 6200, the health status information 6230 and 6240 extractable from the facial condition information 6220, and the prescription information 6250 according to the health status information 6230 and 6240. Here, the device 100 or the service server 1000 may store the facial condition information 6220, the health status information 6230 and 6240, and the prescription information 6250 according to regions 6210 of the face.
The device 100 or the service server 1000 may extract the health status information 6230 and 6240 from the facial condition information 6220, and obtain the prescription information 6250 from the health status information 6230 and 6240 based on the database 6200.
For example, when the facial condition information is inflamed eyes, the device 100 or the service server 1000 may determine problems with the user's liver and heart as the health status information. Also, the device 100 may determine that the symptoms are dizziness, headache, cold sores in the mouth, tinnitus, and dandruff. Also, the device 100 or the service server 1000 may determine abstaining from alcohol and exercising regularly as the prescription information.
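A slice of the database 6200 described above can be sketched as a mapping from a facial-condition key to the associated face region, health status, symptoms, and prescription. This is a hypothetical sketch; the keys and values are paraphrased from the example in the text, not the actual schema of the specification.

```python
# Hypothetical slice of the Figure 62 database: facial condition info
# mapped to health status, symptoms, and prescription information.
CONDITION_DB = {
    "inflamed_eyes": {
        "region": "eyes",
        "health_status": ["liver problem", "heart problem"],
        "symptoms": ["dizziness", "headache", "cold sores in the mouth",
                     "tinnitus", "dandruff"],
        "prescription": "abstain from alcohol; exercise regularly",
    },
}

def lookup(condition):
    """Return the database row for a facial condition, or None."""
    return CONDITION_DB.get(condition)
```

The device 100 or the service server 1000 would consult such a table twice: once to map facial condition information to health status information, and once to map health status information to prescription information.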
Figures 63A to 63C are diagrams for describing methods, performed by the device 100, of providing prescription information suitable for the user based on the health status information of the user, according to an exemplary embodiment.
Referring to Figure 63A, the device 100 may provide prescription information suitable for the user based on the health status information.
The device 100 may display information for changing the user's lifestyle based on the health status information.
For example, when the health status information indicates a blood circulation problem, the device 100 may display information for guiding the user to sleep sufficiently. Also, the device 100 may display symptoms caused by lack of sleep, or suggestions for changing sleep habits.
Referring to Figure 63B, the device 100 may display information for changing the user's eating habits based on the health status information.
For example, when the health status information indicates a blood circulation problem, the device 100 may suggest that the user mainly eat grains and vegetables. The device 100 may recommend grains or vegetables suitable for the user to improve the health status.
Referring to Figure 63C, the device 100 may display exercise information helpful to the user based on the health status information.
For example, when the health status information indicates a blood circulation problem, the device 100 may display a page 6310 for providing information about exercising in various circumstances. Upon receiving a user input selecting the item "stretching at work" on the page 6310, the device 100 may display stretches that the user can do at work.
Figure 64 is a diagram for describing a method of providing, to a user, a service related to the health status information of the user, by the device 100 interacting with a plurality of third-party servers 2000a to 2000c that provide health-related information, according to an exemplary embodiment.
Referring to Figure 64, the third-party servers 2000a to 2000c may be servers operated by different service providers.
For example, the third-party server 2000a may be a server that provides information about a lifestyle suitable for the user. The third-party server 2000b may be a server that provides a diet suitable for the user. In addition, the third-party server 2000c may be a server for recommending exercise suitable for the user.
The device 100 may transmit the health status information of the user, obtained from a face image of the user, to one of the third-party servers 2000a to 2000c based on pre-stored address information of the third-party servers 2000a to 2000c, and receive prescription information suitable for the user from the one of the third-party servers 2000a to 2000c.
The one of the third-party servers 2000a to 2000c may determine the prescription information based on the health status information received from the device 100, and provide the prescription information to the device 100. Here, the prescription information may be provided in the form of text, an image, or a moving image, or may be provided in the form of a web page including health information.
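The exchange above can be sketched in a minimal, hedged form. The server registry, response fields, and the stubbed third-party behavior are all illustrative assumptions; a real deployment would use network requests against the pre-stored addresses rather than an in-process stub:

```python
# Pre-stored address information for the third-party servers (assumed URLs).
THIRD_PARTY_SERVERS = {
    "lifestyle": "http://2000a.example.com",
    "diet": "http://2000b.example.com",
    "exercise": "http://2000c.example.com",
}

def stub_third_party(address: str, health_status: dict) -> dict:
    """Stand-in for a third-party server: derives prescription info (as text)
    from the health status information it receives."""
    if "blood circulation problem" in health_status.get("problems", []):
        return {"format": "text",
                "body": "Get sufficient sleep; eat grains and vegetables."}
    return {"format": "text", "body": "No specific prescription."}

def request_prescription(kind: str, health_status: dict) -> dict:
    """Send health status info to the selected server, receive prescription info."""
    address = THIRD_PARTY_SERVERS[kind]
    return stub_third_party(address, health_status)

print(request_prescription("diet", {"problems": ["blood circulation problem"]})["body"])
```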
Figure 65 is a diagram for describing a method of providing, by the service server 1000, services provided by third-party servers 3000a to 3000c to a user through the device 100, according to an exemplary embodiment.
Referring to Figure 65, the service server 1000 may be connected to the third-party servers 3000a to 3000c that provide health-related information.
The third-party server 3000a may be a server operated by a hospital to provide a reservation service. The third-party server 3000b may be a server operated by a service provider to provide a map service. In addition, the third-party server 3000c may be a server operated by a service provider to provide a messenger service.
The service server 1000 may store address information of each of the third-party servers 3000a to 3000c. Upon receiving a request for a service from the device 100, the service server 1000 may determine a third-party server that provides the requested service, and request information needed for the requested service from the determined third-party server. Upon receiving the requested information from the third-party server, the service server 1000 may provide the requested service to the device 100 based on the received information.
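The routing step described above can be sketched as a simple dispatch table. The service names and the three-server split are taken from this example; the keys and error handling are assumptions, since the patent does not specify a concrete protocol:

```python
# Minimal sketch of the service server's routing table: requested service ->
# the third-party server that provides it.
ROUTES = {
    "reservation": "3000a",    # hospital reservation server
    "hospital_map": "3000b",   # map service server
    "messenger": "3000c",      # messenger service server
}

def route_request(service: str) -> str:
    """Determine which third-party server provides the requested service."""
    server = ROUTES.get(service)
    if server is None:
        raise ValueError(f"no third-party server registered for {service!r}")
    return server

print(route_request("reservation"))
```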
For example, upon receiving a request for a reservation service regarding a specific medical department from the device 100, the service server 1000 may request, from the third-party server 3000a, information about available dates and times and information about doctors of the specific medical department, and receive the requested information. Upon receiving the requested information from the third-party server 3000a, the service server 1000 may transmit the received information to the device 100.
Upon receiving, from the device 100, a reservation request regarding at least one of a date, a time, and a doctor selected by the user, the service server 1000 may request the third-party server 3000a to make an appointment with the doctor selected by the user at the date and time selected by the user.
In addition, for example, upon receiving a request for a list of hospitals near the user from the device 100, the service server 1000 may request the list of hospitals from the third-party server 3000b. At this time, the service server 1000 may transmit longitude and latitude information of the device 100 to the third-party server 3000b. In addition, the service server 1000 may transmit identification (ID) information of a specific medical department to the third-party server 3000b together with the longitude and latitude information.
Upon receiving the list of hospitals from the third-party server 3000b, the service server 1000 may transmit the received list to the device 100. Here, the service server 1000 may transmit, to the device 100, information about the locations of the hospitals, the medical departments of the hospitals, or ratings of the hospitals.
In addition, upon receiving a request to talk with a doctor of a specific medical department from the device 100, the service server 1000 may determine a doctor corresponding to the specific medical department from among a plurality of pre-stored doctors, based on the ID information of the specific medical department. The service server 1000 may request the third-party server 3000c to set up a chat with the determined doctor, based on pre-stored messenger IDs of the plurality of doctors. The service server 1000 may transmit a message received from the device 100 to the third-party server 3000c, and transmit a message received from the third-party server 3000c to the device 100, thereby providing a messenger service to the device 100.
Figure 66 is a diagram for describing a method of providing, by the device 100, services provided by third-party servers 4000a to 4000c to a user, by using the service server 1000 that interacts with an integrated server 5000 of the third-party servers 4000a to 4000c, according to an exemplary embodiment.
Referring to Figure 66, the service server 1000 may provide the services provided by the third-party servers 4000a to 4000c to the device 100 by interacting with the integrated server 5000.
The integrated server 5000 may be a server that transmits information to and receives information from the third-party servers 4000a to 4000c. For example, when the third-party servers 4000a to 4000c are hospital servers operated by different hospitals, the integrated server 5000 may transmit information to and receive information from the third-party servers 4000a to 4000c. In this case, the integrated server 5000 may store address information of the hospitals operating the third-party servers 4000a to 4000c.
Upon receiving a request for a reservation service regarding a specific medical department from the device 100, the service server 1000 may request, from the integrated server 5000, information about available dates and times and information about doctors of the specific medical department. Upon receiving the reservation request from the service server 1000, the integrated server 5000 may request information needed for a reservation in the specific medical department from the third-party servers 4000a to 4000c registered in the integrated server 5000.
Upon receiving the information about the available dates and times and the information about the doctors from the third-party servers 4000a to 4000c, the integrated server 5000 may transmit the received information to the service server 1000. At this time, the integrated server 5000 may transmit the received information to the service server 1000 together with ID information of the hospitals.
Upon receiving the information needed for the reservation from the integrated server 5000, the service server 1000 may transmit the received information to the device 100.
Upon receiving a user input selecting a hospital and selecting a date, a time, and a doctor, the device 100 may transmit information about the user input to the service server 1000. Upon receiving the information about the user input from the device 100, the service server 1000 may request the integrated server 5000 to make a reservation, and the integrated server 5000 may make the reservation based on the received information about the user input.
Figure 67 is a diagram for describing a method of providing, by the device 100, the services provided by the third-party servers 4000a to 4000c by using the service server 1000, when the service server 1000 operates as an integrated server, according to an exemplary embodiment.
Referring to Figure 67, the service server 1000 may perform the functions of the integrated server 5000 of Figure 66.
For example, a service provider operating the service server 1000 may operate the service server 1000 and the integrated server 5000 together. In this case, the service server 1000 may be a single server that simultaneously performs the functions of the service server 1000 and the integrated server 5000. Alternatively, the service server 1000 may be divided into a plurality of servers, for example, a server performing the functions of the service server 1000 and a server performing the functions of the integrated server 5000.
Figure 68 is a block diagram of the device 100 according to an exemplary embodiment.
As shown in Figure 68, the device 100 according to an exemplary embodiment may include a display unit 110 (e.g., a display), a sensing unit 190 (e.g., a sensor), a communication unit 130 (e.g., a communicator), an imaging unit 155 (e.g., an imager), a memory 120, and a controller 170. However, it is understood that in one or more other exemplary embodiments, the device 100 may include more or fewer components than those shown in Figure 68. For example, according to another exemplary embodiment, the device 100 may not include the display unit 110 and/or may include an outputter (e.g., an output device) configured to output information and/or an image for display (e.g., to output the information and/or image to an external display).
The device 100 may obtain an input image showing the face of the user. For example, the device 100 may obtain the input image by capturing the face of the user according to a preset face-image obtaining condition. At this time, the display unit 110 may display, on a screen, guide information for capturing the face according to the preset face-image obtaining condition. When the face of the user approaches the imaging unit 155, the device 100 may receive the input image through the imaging unit 155. According to another exemplary embodiment, the device 100 may obtain one or more images and/or the input image from a storage (e.g., an internal storage device or an external storage device).
The imaging unit 155 generates an electrical imaging signal by performing photoelectric conversion on incident light. The imaging unit 155 may include at least one of a lens, an iris, and an image pickup device. Also, the imaging unit 155 may be of a mechanical shutter type or an electronic shutter type.
According to the present exemplary embodiment, the input image may be captured by using the imaging unit 155. The input image may include, for example, at least one of a captured image, a preview image, and a moving image captured by the imaging unit 155.
According to an exemplary embodiment, when the user captures an image by using the imaging unit 155 and the captured image satisfies a preset face-image obtaining condition, the controller 170 may use the captured image as a face image. According to another exemplary embodiment, when the captured image satisfies the preset face-image obtaining condition, a message asking the user whether to use the captured image as a face image for a health examination may be output, and the captured image may be used as the face image based on a selection of the user. According to an exemplary embodiment, even when a face image is captured in a normal mode, face images for a health examination may be collected without performing a special operation.
According to another exemplary embodiment, the face image may be captured in a specific mode provided by the device 100. For example, the specific mode may be at least one of a face authentication mode for unlocking the device 100 via face authentication, a self-portrait mode, and a person photographing mode.
Preset capturing parameters may be set in the imaging unit 155. For example, the preset capturing parameters may include at least one of an aperture value, whether a flash is used, and a white balance setting.
When a preview image in a health examination mode satisfies the preset face-image obtaining condition, the controller 170 may automatically capture an image and use the captured image as the face image.
When an image is captured according to an input of a shutter release signal in the health examination mode, the controller 170 may determine whether the captured image satisfies the preset face-image obtaining condition, and the display unit 110 may provide information about whether the preset face-image obtaining condition is satisfied. Alternatively, when the captured image satisfies the face-image obtaining condition, the controller 170 may use the captured image as the face image.
Alternatively, the device 100 may receive the input image through the communication unit 130 from an external network connected to the device 100.
Upon receiving the input image, the controller 170 may store the input image in the memory 120. Also, the controller 170 may obtain the face image from the input image.
The controller 170 may obtain the face image from the input image by using an image processor 185. The image processor 185 may include a user face detector 181, a face-image obtaining condition determiner 182, and a face normalizer 183.
The user face detector 181 may detect a face region of a person from the input image and extract the location of the face region. Upon extracting the location of the face region, the user face detector 181 may extract features of the face from the face region. A method of extracting the features of the face from the face region may be a Gabor filter method or a local binary pattern (LBP) method.
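To make the LBP method mentioned above concrete, here is a minimal sketch: each pixel is encoded as an 8-bit code by comparing its eight neighbors against the center value. This is a pure-Python toy on a 3×3 grid; a real detector would compute these codes over the whole face region and compare histograms of them, and the clockwise bit ordering is a conventional choice rather than anything mandated by the patent:

```python
# Neighbor offsets, clockwise starting from the top-left pixel.
NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
             (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, y, x):
    """Return the 8-bit local binary pattern code of the pixel at (y, x):
    bit i is set when the i-th neighbor is >= the center value."""
    center = img[y][x]
    code = 0
    for bit, (dy, dx) in enumerate(NEIGHBORS):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
print(lbp_code(img, 1, 1))  # neighbors 60, 90, 80, 70 >= 50 -> bits 3..6 -> 120
```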
When the similarity between the extracted features and the features of the face of a pre-registered user is within a preset range, the user face detector 181 may determine the face region extracted from the input image to be the face region of the user.
Upon determining the face region of the user, the face-image obtaining condition determiner 182 may determine whether the input image satisfies the preset face-image obtaining condition. For example, the device 100 may determine whether the illuminance of the input image is within a base range. Alternatively, the device 100 may determine whether the camera was shaken while the input image was captured.
In addition, the face-image obtaining condition determiner 182 may determine whether the face in the face region extracted from the input image satisfies the preset face-image obtaining condition.
For example, the face-image obtaining condition determiner 182 may determine whether the angle of the face is within a base angle range from the front. Alternatively, the face-image obtaining condition determiner 182 may determine whether the eyes of the face in the input image are open. Alternatively, the face-image obtaining condition determiner 182 may determine a facial expression in the input image. Alternatively, the face-image obtaining condition determiner 182 may determine whether the ears are visible in the input image. Alternatively, the face-image obtaining condition determiner 182 may determine whether the size of the face in the input image is equal to or less than a base size.
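A hedged sketch of these checks follows. The field names and threshold values (15 degrees, 200 pixels) are illustrative assumptions; the patent only says the conditions exist, not what the base ranges are:

```python
def check_obtaining_conditions(face: dict) -> list:
    """Return the names of preset face-image obtaining conditions that a
    candidate image fails; an empty list means the image is usable."""
    failures = []
    if abs(face.get("angle_deg", 0)) > 15:        # roughly facing front
        failures.append("face angle out of range")
    if not face.get("eyes_open", True):
        failures.append("eyes closed")
    if not face.get("ears_visible", True):
        failures.append("ears not visible")
    if face.get("face_size_px", 0) < 200:         # assumed base size
        failures.append("face too small")
    return failures

sample = {"angle_deg": 3, "eyes_open": True,
          "ears_visible": True, "face_size_px": 320}
print(check_obtaining_conditions(sample))  # [] -> all conditions satisfied
```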
When the input image satisfies the preset face-image obtaining condition, the controller 170 may obtain an image of the face region from the input image as the face image.
When the image of the face region is obtained from the input image as the face image, the face normalizer 183 may normalize the face image to a preset standard.
For example, the face normalizer 183 may change the size of the face image to a preset size. Alternatively, the face normalizer 183 may adjust an effect of the color temperature of illumination on the face image. Alternatively, the face normalizer 183 may change the brightness of the face image to a preset brightness.
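The brightness step of this normalization can be sketched as shifting every pixel so that the image mean matches a preset target, clamping to the valid range. The target value of 128 is an assumption for illustration, not a value from the patent:

```python
def normalize_brightness(img, target_mean=128):
    """Shift all pixels so the image mean equals target_mean, clamped to [0, 255]."""
    pixels = [p for row in img for p in row]
    shift = target_mean - sum(pixels) / len(pixels)
    return [[max(0, min(255, round(p + shift))) for p in row] for row in img]

img = [[100, 110], [120, 130]]     # mean 115 -> every pixel shifted by +13
print(normalize_brightness(img))
```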
After the face image is normalized, the controller 170 may determine health status information based on the face image by using a health information determiner 189. The health information determiner 189 may include a diagnosis region extractor 186, a facial condition information extractor 187, and a health status information extractor 188.
The diagnosis region extractor 186 may determine, from the face image, a diagnosis region from which preset facial condition information is to be extracted.
For example, the diagnosis region extractor 186 may determine the locations of facial components from the face image. A facial component may be at least one of the eyes, the nose, and the mouth. The diagnosis region extractor 186 may binarize the face image and, based on the fact that the eyes, eyebrows, and mouth are darker than the skin color, determine dark regions in the binarized face image to be the eyes, eyebrows, and mouth. Alternatively, the diagnosis region extractor 186 may extract regions that are not of a skin color from the face image by using skin color information, and determine the extracted regions to be the locations of the eyes, nose, and mouth. Since the locations of the eyes, eyebrows, nose, and mouth show a certain pattern, the diagnosis region extractor 186 may determine the locations of the facial components by using an active appearance model (AAM) method, in which the locations of facial components are determined based on a face pattern.
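The binarization step above can be sketched in a few lines: pixels darker than a threshold (eyes, eyebrows, mouth) become 1 and skin pixels become 0. The threshold value is an illustrative assumption; a practical system would derive it from the image, for example with Otsu's method:

```python
def binarize(img, threshold=80):
    """Mark pixels darker than threshold as 1 (candidate facial components),
    lighter pixels as 0 (skin)."""
    return [[1 if p < threshold else 0 for p in row] for row in img]

face = [[200, 200, 200],
        [200,  40, 200],   # dark pixel, e.g. part of an eye
        [200, 200, 200]]
print(binarize(face))
```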
After determining the locations of the facial components, the diagnosis region extractor 186 may determine the location of the diagnosis region based on the locations of the facial components.
After the location of the diagnosis region is determined, the facial condition information extractor 187 may extract facial condition information from the diagnosis region.
The facial condition information may indicate a condition of the face referenced to determine health information.
The types of facial condition information extractable from each diagnosis region, and methods of extracting the facial condition information, may be pre-stored in the memory 120.
The facial condition information may be extracted by analyzing the face image. For example, the facial condition information extractor 187 may extract the facial condition information by using color information of the face region, a face recognition algorithm, or a facial expression recognition algorithm.
After the facial condition information is extracted, the health status information extractor 188 may obtain health status information related to the health of the user by using the facial condition information.
The health status information may be information about a disease or lifestyle of the user predicted from the facial condition information. For example, when it is determined from the face image that the regions below the eyes are swollen, the device 100 may determine that the user has hyperthyroidism or an allergy problem. Alternatively, when it is determined from the face image that the regions below the eyes are dark, the device 100 may determine that the user has allergic rhinitis.
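The facial-condition-to-health-status step above amounts to a rule lookup. A hedged sketch follows; the rule table mirrors only the two under-eye examples given in the text and is in no way an exhaustive or medically validated mapping:

```python
# (diagnosis region, observed condition) -> predicted health status items.
RULES = {
    ("under_eye", "swollen"): ["hyperthyroidism", "allergy problem"],
    ("under_eye", "dark"): ["allergic rhinitis"],
}

def predict_health_status(region: str, condition: str) -> list:
    """Return the health status items the rules associate with a facial
    condition, or an empty list when no rule applies."""
    return RULES.get((region, condition), [])

print(predict_health_status("under_eye", "dark"))
```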
The controller 170 may provide the health status information through the display unit 110.
The controller 170 may obtain a face image from an image file stored in the memory 120. In addition, the controller 170 may provide a user interface for browsing image files stored in the memory 120 via an album or gallery function.
At least one image file may be stored in the memory 120. An image file may be a still image file or a moving image file. The memory 120 may store image files generated from captured images and/or image files received from an external device. The image files may be in a Joint Photographic Experts Group (JPEG) format, a Moving Picture Experts Group (MPEG) format, an MP4 format, an Audio Video Interleave (AVI) format, or an Advanced Streaming Format (ASF).
The device 100 according to an exemplary embodiment may include the sensing unit 190. In this case, the device 100 may obtain facial condition information and health status information by using bio-parameters detected by the sensing unit 190. Examples of the bio-parameters include at least one of blood pressure, heart rate, blood sugar, acidity level, hemoglobin concentration in blood, and oxygen saturation. For example, the sensing unit 190 may be at least one of a sensor for detecting a heart rate, a blood pressure measuring sensor, and an acidity level measuring sensor.
According to an exemplary embodiment, the controller 170 may receive facial condition information and health status information from the user through a user input unit (e.g., a user input device such as a physical button, a joystick, a touch pad, a track pad, a mouse device, a peripheral, or a jog dial; an audio input device such as a microphone; a visual input device such as a camera or a gesture detector; etc.), and obtain new facial condition information and new health status information by using the received facial condition information and health status information together with the face image. The facial condition information received from the user may include the height, weight, age, or blood pressure of the user. The health status information received from the user may include chronic diseases, medical history, family medical history, and current condition information.
The facial condition information and health status information received from the user may be stored and managed in separate files, and may be managed by an application or by a service server. Alternatively, the facial condition information and health status information received from the user may be managed per user.
The communication unit 130 may communicate with an external device. The communication unit 130 may communicate with the external device via a wired or wireless connection (e.g., a transceiver, a network adapter, a wireless interface, a universal serial bus, a wired interface, etc.). According to an exemplary embodiment, the communication unit 130 may communicate with a cloud server, an SNS server, or a service server providing a health examination service. Alternatively, the communication unit 130 may communicate with another electronic device, such as a smart phone, a camera, a tablet personal computer (PC), a smart device, a wearable device, a personal digital assistant (PDA), a laptop computer, a mobile phone, etc.
The device 100 may be in the form of a smart phone, a tablet PC, a mobile phone, a camera, a laptop computer, a PDA, a smart device, a wearable device, etc.
Figure 69 is a block diagram of the device 100 according to another exemplary embodiment.
As shown in Figure 69, the device 100 may be applied to any of various devices, such as a camera, a mobile phone, a tablet PC, a PDA, an MP3 player, a kiosk, a digital photo frame, a navigation device, a digital television, a watch, a head-mounted display (HMD), etc.
Referring to Figure 69, the device 100 may include at least one of the display unit 110, the controller 170, the memory 120, a global positioning system (GPS) chip 125, the communication unit 130, a video processor 135, an audio processor 140, a user input unit 145, a microphone unit 150 (e.g., a microphone), the imaging unit 155, a speaker unit 160, and a motion detector 165.
In addition, the display unit 110 may include a display panel 111 and a controller that controls the display panel 111. The display panel 111 may be implemented as any type of display, such as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an active-matrix OLED (AM-OLED) display, or a plasma display panel (PDP). The display panel 111 may be at least one of flexible, transparent, and wearable. The display unit 110 may be provided as a touch screen by being combined with a touch panel 147 of the user input unit 145. For example, the touch screen may include an integrated module in which the display panel 111 and the touch panel 147 are combined in a stacked structure.
The memory 120 may include at least one of an internal memory and an external memory.
Examples of the internal memory include volatile memory (e.g., dynamic random access memory (DRAM), static RAM (SRAM), and synchronous DRAM (SDRAM)), nonvolatile memory (e.g., one-time programmable read-only memory (OTPROM), programmable ROM (PROM), erasable and programmable ROM (EPROM), electrically erasable and programmable ROM (EEPROM), mask ROM, and flash ROM), a hard disk drive (HDD), a solid-state drive (SSD), etc. According to an exemplary embodiment, the controller 170 may load a command or data received from at least one of the nonvolatile memory and other components into the volatile memory, and process the command or data. In addition, the controller 170 may store data received from or generated by other components in the nonvolatile memory.
Examples of the external memory include CompactFlash (CF) memory, Secure Digital (SD) memory, micro-SD memory, mini-SD memory, extreme Digital (xD) memory, a memory stick, etc.
The memory 120 may store various programs and data for operating the device 100. For example, the memory 120 may temporarily or semi-permanently store at least a portion of content to be displayed on a lock screen.
The controller 170 may control the display unit 110 such that a portion of the content stored in the memory 120 is displayed on the display unit 110. In other words, the controller 170 may display a portion of the content stored in the memory 120 on the display unit 110. Alternatively, when a user gesture is performed on a region of the display unit 110, the controller 170 may perform a control operation corresponding to the user gesture.
The controller 170 may include at least one of a RAM 171, a ROM 172, a central processing unit (CPU) 173, a graphics processing unit (GPU) 174, and a bus 175. The RAM 171, the ROM 172, the CPU 173, and the GPU 174 may be interconnected via the bus 175.
The CPU 173 accesses the memory 120 and performs booting by using an operating system (OS) stored in the memory 120. In addition, the CPU 173 performs various operations by using various programs, content, and data stored in the memory 120.
A command set for system booting is stored in the ROM 172. For example, when power is supplied to the device 100 upon input of a turn-on command, the CPU 173 may copy the OS stored in the memory 120 to the RAM 171 according to the commands stored in the ROM 172, and run the OS to boot the system. When the system booting is completed, the CPU 173 copies various programs stored in the memory 120 to the RAM 171 and runs the copied programs in the RAM 171 to perform various operations. Also, when the system booting is completed, the GPU 174 displays a user interface screen on a region of the display unit 110. In detail, the GPU 174 may generate a screen displaying an electronic document that includes various objects, such as content, icons, and menus. The GPU 174 calculates attribute values of each object, such as coordinate values, shape, size, and color, according to the layout of the screen. Then, the GPU 174 may generate a screen having any of various layouts based on the calculated attribute values. The screen generated by the GPU 174 may be provided to the display unit 110 and displayed on each region of the display unit 110.
The GPS chip 125 may receive a GPS signal from a GPS satellite and calculate the current location of the device 100. When a navigation program is used or the current location of the user is needed, the controller 170 may calculate the location of the device 100 by using the GPS chip 125.
The communication unit 130 may communicate with an external device by using any of various communication methods. The communication unit 130 may include at least one of a Wi-Fi chip 131, a Bluetooth chip 132, a wireless communication chip 133, and a near field communication (NFC) chip 134. The controller 170 may communicate with any of various external devices by using the communication unit 130.
The Wi-Fi chip 131 and the Bluetooth chip 132 may perform communication by using a Wi-Fi method and a Bluetooth method, respectively. When the Wi-Fi chip 131 or the Bluetooth chip 132 is used, various types of connection information, such as a service set identifier (SSID) or a session key, are first transmitted, and then various types of information are transmitted by using the connection information. The wireless communication chip 133 is a chip that performs communication according to any of various communication standards, such as IEEE, ZigBee, third generation (3G), 3rd Generation Partnership Project (3GPP), and Long Term Evolution (LTE). The NFC chip 134 is a chip that operates by using an NFC method, which uses the 13.56 MHz band from among various radio frequency identification (RFID) frequency bands, such as 135 kHz, 13.56 MHz, 433 MHz, 860 to 960 MHz, and 2.45 GHz.
The video processor 135 may process video data included in content received through the communication unit 130 or in content stored in the memory 120. The video processor 135 may perform various image processing operations on the video data, such as decoding, scaling, noise filtering, frame rate changing, and resolution changing.
The audio processor 140 may process audio data included in content received through the communication unit 130 or in content stored in the memory 120. The audio processor 140 may perform various processing operations on the audio data, such as at least one of decoding, amplifying, and noise filtering.
When a playback program for multimedia content is run, the controller 170 may reproduce the multimedia content by driving the video processor 135 and the audio processor 140. The speaker unit 160 (e.g., a speaker) may output audio data generated by the audio processor 140.
The user input unit 145 may receive various commands from the user. The user input unit 145 may include at least one of a key 146, the touch panel 147, and a pen recognition panel 148.
The key 146 may include various types of keys, such as mechanical buttons and wheels, formed on various regions of the exterior body of the device 100, such as a front surface region, a side surface region, and a rear surface region.
The touch panel 147 may detect a touch input of the user and output a touch event value corresponding to the touch input. When the touch panel 147 forms a touch screen by being combined with the display panel 111, the touch screen may include any type of touch sensor, such as an electrostatic type, a pressure type, or a piezoelectric type. An electrostatic touch sensor calculates touch coordinates by detecting, via a dielectric coated on the surface of the touch screen, a micro-current induced by the body of the user when part of the body of the user touches the surface of the touch screen. A pressure-type touch sensor calculates touch coordinates by detecting a current generated when upper and lower electrode plates included in the touch screen contact each other as the user touches the touch screen. A touch event on the touch screen may be generated mainly by a finger of the user, but may alternatively be generated by an object formed of a conductive material capable of generating a change in electrostatic capacitance.
The pen recognition panel 148 may detect a proximity input or a touch input of a stylus pen, such as a stylus or a digitizer pen, and output a pen proximity event or a pen touch event. The pen recognition panel 148 may use an electromagnetic resonance (EMR) method, and detect the proximity input or touch input based on a change in the intensity of an electromagnetic field caused by the approach or touch of the stylus pen. In detail, the pen recognition panel 148 may include an electromagnetic induction coil sensor having a grid structure, and an electronic signal processor sequentially providing an alternating signal having a certain frequency to each loop coil of the electromagnetic induction coil sensor. When a pen including a resonance circuit is near a loop coil of the pen recognition panel 148, the magnetic field transmitted from the loop coil generates a current in the resonance circuit based on mutual electromagnetic induction. Then, based on this current, an induction field is generated from a coil forming the resonance circuit, and the pen recognition panel 148 detects the induction field from the loop coil in a signal reception state, thereby detecting the proximity or touch location of the pen. The pen recognition panel 148 may have an area covering a certain area below the display panel 111 (e.g., the display region of the display panel 111).
The microphone unit 150 may receive a voice of the user or other sounds and convert them into audio data. The controller 170 may use the voice of the user for a call operation, or may store the audio data in the memory 120.
The imaging unit 155 may capture a still image or a moving image under the control of the user. The imaging unit 155 may include a plurality of cameras, such as a front camera and a rear camera.
When the imaging unit 155 and the microphone unit 150 are provided, the controller 170 may perform a control operation according to a voice of the user input through the microphone unit 150 or a motion of the user recognized by the imaging unit 155. For example, the device 100 may operate in a motion control mode or a voice control mode. When the device 100 is in the motion control mode, the controller 170 may activate the imaging unit 155 to photograph the user and perform a control operation by tracking changes in the user's motion. When the device 100 is in the voice control mode, the controller 170 may analyze the voice of the user input through the microphone unit 150 and perform a control operation based on the analyzed voice.
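The two control modes amount to a small input dispatcher: in motion control mode the controller consumes tracked motion changes, and in voice control mode it consumes recognized utterances. The command vocabulary and action names below are illustrative placeholders, not part of this disclosure.

```python
MOTION_MODE, VOICE_MODE = "motion", "voice"

def control_command(mode, motion_delta=None, voice_text=None):
    """Map a raw input to a control action depending on the active mode.

    motion_delta: (dx, dy) change in the tracked user position.
    voice_text: recognized utterance from the microphone unit.
    The returned action strings are hypothetical placeholders.
    """
    if mode == MOTION_MODE and motion_delta is not None:
        dx, dy = motion_delta
        # Dominant axis of the tracked motion decides the action.
        if abs(dx) >= abs(dy):
            return "scroll_right" if dx > 0 else "scroll_left"
        return "scroll_down" if dy > 0 else "scroll_up"
    if mode == VOICE_MODE and voice_text is not None:
        return {"capture": "take_photo", "stop": "idle"}.get(
            voice_text.lower(), "idle")
    return "idle"
```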
The motion detector 165 may detect movement of the main body of the device 100. The device 100 may be rotated or tilted in various directions. The motion detector 165 may detect movement characteristics such as a rotation direction, a rotation angle, or a tilt angle by using at least one of various sensors, such as a geomagnetic sensor, a gyro sensor, and an acceleration sensor.
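As one concrete example of deriving a movement characteristic, the tilt of the device relative to gravity can be recovered from a single three-axis accelerometer sample. This is the standard computation, shown here as a sketch rather than anything prescribed by the specification.

```python
import math

def tilt_angle_degrees(ax, ay, az):
    """Angle between the device's z-axis and gravity, in degrees.

    (ax, ay, az) is a three-axis accelerometer sample in any consistent
    unit (e.g. m/s^2): a device lying flat reads roughly (0, 0, g) and
    yields 0 degrees, while one standing on edge yields about 90.
    """
    horizontal = math.hypot(ax, ay)  # gravity component in the x-y plane
    return math.degrees(math.atan2(horizontal, az))
```

Combining this with gyroscope and geomagnetic readings (e.g. in a complementary filter) would additionally give the rotation direction and angle mentioned above.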
In addition, according to one or more exemplary embodiments, the device 100 may further include a USB port to which a universal serial bus (USB) connector may be connected, various external input ports to which various external terminals, such as an earphone, a mouse, or a LAN cable, may be connected, a digital multimedia broadcasting (DMB) chip that receives and processes a DMB signal, and various other sensors.
FIG. 70 is a block diagram of a service server 1000 according to an exemplary embodiment.
Referring to FIG. 70, the service server 1000 may include a storage unit 1100 (e.g., a storage), a communication unit 1200 (e.g., a communicator), and a controller 1300. However, it is understood that, in one or more other exemplary embodiments, the service server 1000 may include more or fewer components than those shown in FIG. 70.
The storage unit 1100 may store information for extracting health status information of a user from a face image of the user.
The storage unit 1100 may store account information of the user. In addition, the storage unit 1100 may store a face image of the user in correspondence with the account information of the user. The storage unit 1100 may also store characteristic information of the face of the user and a reference image in correspondence with the account information, and may store facial condition information and health status information in correspondence with the account information.
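The per-account layout described above can be sketched as a simple keyed store: everything the server keeps (face images, facial characteristic information, the reference image, and the derived condition and health status information) hangs off the account identifier. The field names are illustrative assumptions.

```python
class AccountStore:
    """Minimal in-memory sketch of storage unit 1100: each record is
    keyed by the user's account information."""

    def __init__(self):
        self._records = {}

    def _record(self, account_id):
        return self._records.setdefault(account_id, {
            "face_images": [],        # captured face images
            "face_features": None,    # characteristic information of the face
            "reference_image": None,  # baseline image for comparison
            "facial_condition": {},   # per-analysis facial condition info
            "health_status": {},      # derived health status info
        })

    def add_face_image(self, account_id, image):
        self._record(account_id)["face_images"].append(image)

    def set_reference(self, account_id, image):
        self._record(account_id)["reference_image"] = image

    def get(self, account_id):
        return self._records.get(account_id)
```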
The communication unit 1200 may transmit data to, and receive data from, an external device or an external server through a network to which the service server 1000 is connected.
The communication unit 1200 may receive the account information and a face image of the user from the device 100. In addition, the communication unit 1200 may transmit health status information obtained from the face image to the device 100.
The communication unit 1200 may also receive, from the device 100, a request for prescription information suitable for the health status of the user, as well as a request for a hospital-related service.
In addition, the communication unit 1200 may request prescription information suitable for the user from a third-party server, and may request from the third-party server the hospital-related information requested by the device 100. In this case, the communication unit 1200 may transmit the health status information of the user to the third-party server.
When the prescription information based on the health status information, or the hospital-related information, is received from the third-party server, the communication unit 1200 may transmit the prescription information or the hospital-related information to the device 100.
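The relay behavior just described is essentially a pass-through: the device's request and the user's health status information go out to a third-party server, and whatever prescription or hospital information comes back is forwarded to the device. In this sketch the server and device are stub callables standing in for real network endpoints, and the request labels are hypothetical.

```python
def relay_request(request_kind, health_status, third_party_server, device):
    """Forward a device request to a third-party server and return the
    answer to the device.

    request_kind: "prescription" or "hospital" (illustrative labels).
    third_party_server: callable (kind, health_status) -> response dict.
    device: callable that accepts the response; stands in for sending
    it back to device 100 over the network.
    """
    if request_kind not in ("prescription", "hospital"):
        raise ValueError("unknown request kind: %r" % request_kind)
    response = third_party_server(request_kind, health_status)
    device(response)
    return response
```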
The controller 1300 may control the overall operation of the service server 1000, including the storage unit 1100 and the communication unit 1200.
The controller 1300 may authenticate the user based on the account information received from the device 100. When the user is authenticated, the controller 1300 may obtain facial condition information by analyzing the face image received from the device 100. In this case, the controller 1300 may obtain the facial condition information by comparing the face image with a reference image stored in the service server 1000. In addition, the controller 1300 may obtain health status information based on the facial condition information.
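The server-side flow of the controller can be summarized as: authenticate, compare the received face image against the stored reference, then map the detected facial condition to health status information. The comparison and mapping below are deliberately simplistic placeholders for whatever image analysis the embodiment actually uses; the metric names and rule table are hypothetical.

```python
def analyze_face(account_id, face_metrics, accounts, reference_metrics, rules):
    """Sketch of the flow of controller 1300.

    account_id / accounts: authentication reduced to membership in a
    set of known accounts.
    face_metrics / reference_metrics: dicts of numeric per-feature
    measurements (hypothetical stand-ins for image-analysis output).
    rules: maps a facial-condition label to a health-status string.
    Returns (facial_condition, health_status); raises on auth failure.
    """
    if account_id not in accounts:
        raise PermissionError("authentication failed")
    # Facial condition = features that drifted noticeably from the reference.
    facial_condition = [
        name for name, value in face_metrics.items()
        if abs(value - reference_metrics.get(name, value)) > 0.2
    ]
    health_status = [rules[name] for name in facial_condition if name in rules]
    return facial_condition, health_status
```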
It is understood that the names of the components of the device 100 described above may be changed. In addition, the device 100 may include at least one of the components described above, may omit some of them, or may further include other components.
One or more exemplary embodiments may also be embodied as computer-readable code on a computer-readable recording medium. The computer-readable recording medium is any data storage device that can store data which can thereafter be read by a computer system.
The computer-readable code is configured to perform operations for implementing a method of controlling an electronic apparatus according to one or more exemplary embodiments when the code is read from the computer-readable recording medium and executed by a processor. The computer-readable code may be implemented in various programming languages. In addition, functional programs, code, and code segments for implementing one or more exemplary embodiments may be easily construed by programmers skilled in the art to which the one or more exemplary embodiments pertain.
Examples of the computer-readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The computer-readable recording medium may also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion.
It should be understood, however, that the exemplary embodiments described herein are to be considered in a descriptive sense only and not for purposes of limitation. Descriptions of features or aspects within each exemplary embodiment should typically be considered as available for other similar features or aspects in other exemplary embodiments. While one or more exemplary embodiments have been described above with reference to the accompanying drawings, those of ordinary skill in the art will understand that various changes in form and details may be made therein without departing from the spirit and scope defined by the following claims.

Claims (15)

1. A device comprising:
a storage configured to store a first image including a face of a user and first health status information extracted from the first image;
an imager configured to capture an image;
a controller configured to control the imager to capture a second image including the face of the user and to extract second health status information from the captured second image; and
a display configured to output the second image and information, from among the extracted second health status information, other than the stored first health status information.
2. The device of claim 1, wherein the controller is configured to control the imager to capture the second image when a first application, from among a plurality of applications executable on the device, for capturing an image by using the imager is executed, and to control the display to output the second image and the information other than the first health status information from among the extracted second health status information when a second application different from the first application is executed.
3. The device of claim 1, further comprising:
a user interface configured to receive a user input for unlocking the device when the device is in a locked state,
wherein the controller is configured to, when the user input for unlocking the device in the locked state is received, control the imager to capture the second image and extract the second health status information from the captured second image.
4. The device of claim 1, further comprising:
a user interface configured to receive a user input for executing a video call application on the device,
wherein the controller is configured to, when the video call application is executed according to the received user input, control the imager to capture the second image and extract the second health status information from the captured second image.
5. The device of claim 1, wherein the display is configured to output the second image with an indicator displayed on a region of the face of the user in the second image from which the information other than the first health status information is extracted, so as to indicate the extracted information other than the first health status information.
6. The device of claim 1, wherein the display is configured to output, in chronological order, a plurality of second images captured during a predetermined period of time and health status information extracted from the plurality of second images.
7. The device of claim 1, wherein:
the display is configured to output photographing guide information for guiding capture of the face of the user according to a preset face image obtaining condition; and
the controller is configured to determine whether the imager has captured the face of the user according to the preset face image obtaining condition, and, when it is determined that the face of the user has been captured according to the preset face image obtaining condition, to determine the captured image of the face of the user as the second image.
8. A method of providing health status information, performed by a device, the method comprising:
obtaining first health status information extracted from a first image including a face of a user;
capturing a second image including the face of the user;
extracting second health status information from the captured second image; and
outputting the second image and information, from among the extracted second health status information, other than the first health status information.
9. The method of claim 8, wherein:
the capturing of the second image comprises capturing the second image when a first application, from among a plurality of applications executable on the device, for capturing an image by using an imager is executed; and
the outputting of the second image and the information comprises outputting the second image and the information other than the first health status information from among the second health status information when a second application, from among the plurality of applications, different from the first application is executed.
10. The method of claim 8, further comprising:
receiving a user input for unlocking the device when the device is in a locked state,
wherein the capturing of the second image comprises capturing the second image when the user input is received.
11. The method of claim 8, further comprising:
receiving a user input for executing a video call application on the device,
wherein the capturing of the second image comprises capturing the second image when the user input is received.
12. The method of claim 8, wherein the outputting of the second image and the information comprises outputting the second image with an indicator displayed on a region of the face of the user in the second image from which the information other than the first health status information is extracted, so as to indicate the extracted information other than the first health status information.
13. The method of claim 8, wherein the capturing of the second image comprises:
outputting photographing guide information for guiding capture of the face of the user according to a preset face image obtaining condition;
determining whether an imager has captured the face of the user according to the preset face image obtaining condition; and
when it is determined that the face of the user has been captured according to the preset face image obtaining condition, determining the captured image of the face of the user as the second image.
14. The method of claim 8, further comprising:
detecting bio-information of the user at a time point at which the second image is captured,
wherein the extracting of the second health status information comprises:
determining a biological state of the user at the time point at which the second image is captured, based on the detected bio-information; and
excluding some of the facial condition information that appears on the face of the user due to the biological state.
15. A non-transitory computer-readable recording medium having recorded thereon a program which, when executed by a computer, performs the method of claim 8.
CN201580001465.1A 2014-03-14 2015-03-16 Electronic apparatus for providing health status information, method of controlling the same, and computer-readable storage medium Pending CN105431852A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
KR20140030457 2014-03-14
KR10-2014-0030457 2014-03-14
KR1020140098639A KR102420100B1 (en) 2014-03-14 2014-07-31 Electronic apparatus for providing health status information, method for controlling the same, and computer-readable storage medium
KR10-2014-0098639 2014-07-31
PCT/KR2015/002535 WO2015137788A1 (en) 2014-03-14 2015-03-16 Electronic apparatus for providing health status information, method of controlling the same, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN105431852A true CN105431852A (en) 2016-03-23

Family

ID=54246037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201580001465.1A Pending CN105431852A (en) 2014-03-14 2015-03-16 Electronic apparatus for providing health status information, method of controlling the same, and computer-readable storage medium

Country Status (3)

Country Link
KR (1) KR102420100B1 (en)
CN (1) CN105431852A (en)
AU (1) AU2015201759B2 (en)


Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10945637B2 (en) 2016-12-28 2021-03-16 Ajou University Industry-Academic Cooperation Foundation Image based jaundice diagnosing method and apparatus and image based jaundice diagnosis assisting apparatus
KR102199397B1 (en) * 2018-04-23 2021-01-06 (주)블라우비트 System and method for diagnosing and analyzing disease
WO2020218754A1 (en) * 2019-04-22 2020-10-29 주식회사 엠티에스컴퍼니 Physical examination method and system using health check query performed by chatbot, and physical examination method and system
KR102293824B1 (en) * 2019-05-21 2021-08-24 (의) 삼성의료재단 Method and system for monitoring related disease using face perception of mobile phone
CN111477307A (en) * 2019-08-13 2020-07-31 上海新约信息技术有限公司 Hospital diagnosis and treatment management system and method
KR102381549B1 (en) * 2019-11-29 2022-03-31 건국대학교 글로컬산학협력단 Product information providing system and product information providing kiosk device
KR102540755B1 (en) 2021-04-30 2023-06-07 성균관대학교산학협력단 Method of estimating hemoglobin concentration using skin image, or health information and body information, and hemoglobin concentration estimating device performing method
WO2023277548A1 (en) * 2021-06-30 2023-01-05 주식회사 타이로스코프 Method for acquiring side image for eye protrusion analysis, image capture device for performing same, and recording medium
WO2023277622A1 (en) 2021-06-30 2023-01-05 주식회사 타이로스코프 Method for guiding hospital visit for treating active thyroid ophthalmopathy and system for performing same
WO2023277589A1 (en) 2021-06-30 2023-01-05 주식회사 타이로스코프 Method for guiding visit for active thyroid eye disease examination, and system for performing same
US20230353885A1 (en) * 2022-04-27 2023-11-02 Sonic Star Global Limited Image processing system and method for processing images
CN117909002A (en) * 2022-10-17 2024-04-19 抖音视界有限公司 Method, apparatus, device and storage medium for content presentation

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060133607A (en) * 2005-06-21 2006-12-27 주식회사 팬택 Mobile communication terminal for self-checking user's health, system and method for offering multimedia contents using mobile communication terminal for self-checking user's health
KR20070063195A (en) * 2005-12-14 2007-06-19 주식회사 케이티 Health care system and method thereof
WO2008054162A1 (en) * 2006-11-03 2008-05-08 Min Hwa Lee Method, apparatus, and system for diagnosing health status of mobile terminal users
CN102096810A (en) * 2011-01-26 2011-06-15 北京中星微电子有限公司 Method and device for detecting fatigue state of user before computer
CN102985007A (en) * 2010-06-30 2013-03-20 泰尔茂株式会社 Health-monitoring device
CN103197825A (en) * 2011-11-09 2013-07-10 索尼公司 Image processor, display control method and program
CN103546627A (en) * 2012-07-11 2014-01-29 Lg电子株式会社 Mobile terminal and control method thereof

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7972266B2 (en) * 2007-05-22 2011-07-05 Eastman Kodak Company Image data normalization for a monitoring system
JP5435250B2 (en) * 2007-08-24 2014-03-05 日本電気株式会社 Image display device
JP2009172181A (en) * 2008-01-25 2009-08-06 Seiko Epson Corp Health checkup method and health checkup apparatus
JP5571633B2 (en) * 2011-08-31 2014-08-13 東芝テック株式会社 Health level notification device, program, and health level notification method
KR20130100806A (en) * 2012-01-31 2013-09-12 삼성전자주식회사 Method for managing quantity of exercise, display apparatus, and server thereof


Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108133757A (en) * 2016-12-01 2018-06-08 三星电子株式会社 For providing the device and method thereof of health management service
CN108334801A (en) * 2017-01-20 2018-07-27 杭州海康威视系统技术有限公司 A kind of method for recognizing fire disaster, device and fire alarm system
CN107115100A (en) * 2017-04-28 2017-09-01 苏州商信宝信息科技有限公司 A kind of Intelligent mirror device detected for medical treatment & health and its detection method
CN110832472A (en) * 2017-06-07 2020-02-21 智能节奏利润有限公司 Database construction method
CN107705245A (en) * 2017-10-13 2018-02-16 北京小米移动软件有限公司 Image processing method and device
CN108198618A (en) * 2018-01-02 2018-06-22 联想(北京)有限公司 A kind of health analysis method and electronic equipment
CN111670004A (en) * 2018-03-07 2020-09-15 三星电子株式会社 Electronic device and method for measuring heart rate
CN108509905A (en) * 2018-03-30 2018-09-07 百度在线网络技术(北京)有限公司 Health state evaluation method, apparatus, electronic equipment and storage medium
CN108766548A (en) * 2018-05-17 2018-11-06 安徽昱康智能科技有限公司 Physical-examination machine auxiliary operation method and system
CN108932493A (en) * 2018-06-29 2018-12-04 东北大学 A kind of facial skin quality evaluation method
CN108968920A (en) * 2018-07-12 2018-12-11 维沃移动通信有限公司 A kind of data detection method and electronic equipment
CN109544551A (en) * 2018-12-06 2019-03-29 合肥鑫晟光电科技有限公司 The detection method and device of health status
CN110248450A (en) * 2019-04-30 2019-09-17 广州富港万嘉智能科技有限公司 A kind of combination personage carries out the method and device of signal light control
CN110248450B (en) * 2019-04-30 2021-11-12 广州富港生活智能科技有限公司 Method and device for controlling light by combining people
CN111325746A (en) * 2020-03-16 2020-06-23 维沃移动通信有限公司 Skin detection method and electronic equipment
WO2021185210A1 (en) * 2020-03-16 2021-09-23 维沃移动通信有限公司 Skin quality test method, and electronic device
CN111627519A (en) * 2020-05-22 2020-09-04 江苏妈咪家健康科技有限公司 Postpartum rehabilitation monitoring and management system and method based on mobile terminal
CN111627519B (en) * 2020-05-22 2023-08-22 江苏妈咪家健康科技有限公司 Postpartum rehabilitation monitoring management system and method based on mobile terminal
CN113749614A (en) * 2020-06-05 2021-12-07 华为技术有限公司 Skin detection method and apparatus
CN113749614B (en) * 2020-06-05 2023-03-10 华为技术有限公司 Skin detection method and apparatus
CN112603834A (en) * 2020-12-18 2021-04-06 南通市第一人民医院 Control system and method for enteral nutrition input of patient

Also Published As

Publication number Publication date
KR102420100B1 (en) 2022-07-13
KR20150107565A (en) 2015-09-23
AU2015201759B2 (en) 2017-01-12
AU2015201759A1 (en) 2015-10-01

Similar Documents

Publication Publication Date Title
EP2919142B1 (en) Electronic apparatus and method for providing health status information
AU2015201759B2 (en) Electronic apparatus for providing health status information, method of controlling the same, and computer readable storage medium
EP3335412B1 (en) Method and electronic apparatus for providing service associated with image
US9886454B2 (en) Image processing, method and electronic device for generating a highlight content
KR102349428B1 (en) Method for processing image and electronic device supporting the same
CN108513060B (en) Photographing method using external electronic device and electronic device supporting the same
KR102423184B1 (en) Method for Outputting Screen and Electronic Device supporting the same
CN110084153B (en) Smart camera for automatically sharing pictures
JP7020626B2 (en) Makeup evaluation system and its operation method
EP4020374A1 (en) Image processing method and electronic apparatus
US11058209B2 (en) Beauty counseling information providing device and beauty counseling information providing method
CN109101873A (en) For providing the electronic equipment for being directed to the characteristic information of external light source of object of interest
JP2017211970A (en) Care information acquisition method, care information sharing method, and electronic device for those methods
KR20160037074A (en) Image display method of a apparatus with a switchable mirror and the apparatus
JP7097721B2 (en) Information processing equipment, methods and programs
JP7238902B2 (en) Information processing device, information processing method, and program
KR20180080140A (en) Personalized skin diagnosis and skincare
JP7148624B2 (en) Image proposal device, image proposal method, and image proposal program
JP2023026630A (en) Information processing system, information processing apparatus, information processing method, and program
JP6905775B1 (en) Programs, information processing equipment and methods
CN110673737B (en) Display content adjusting method and device based on intelligent home operating system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160323

RJ01 Rejection of invention patent application after publication