CN108511066A - Information generating method and device - Google Patents

Information generating method and device

Info

Publication number
CN108511066A
Authority
CN
China
Prior art keywords
health
information
image
face
facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810270481.6A
Other languages
Chinese (zh)
Inventor
路双
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810270481.6A
Publication of CN108511066A
Legal status: Pending (current)


Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics, for calculating health indices; for individual health risk assessment
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H15/00 - ICT specially adapted for medical reports, e.g. generation or transmission thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application disclose an information generating method and device. One specific implementation of the method includes: obtaining a facial image of a target user; inputting the acquired facial image into a pre-trained health information recognition model to obtain a recognition result, where the health information recognition model is used to characterize the correspondence between facial images and recognition results; determining health status information of the target user based on the determined recognition result; determining health guidance information corresponding to the determined health status information based on a preset correspondence between health status information and health guidance information; and generating a health information report based on the determined health status information and health guidance information. This embodiment allows the user to examine and monitor his or her own health status at any time, thereby helping to improve the user's health.

Description

Information generating method and device
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to an information generating method and device.
Background art
With the development of artificial intelligence technology, people can gain a wide range of experiences through it. Artificial intelligence has brought many conveniences to people's lives. Providing users with health analysis and health guidance by means of artificial intelligence technology gives the users' health a certain degree of assurance.
Existing health analysis of the human body using artificial intelligence technology generally includes detecting information such as the body fat percentage of the human body.
Summary of the invention
Embodiments of the present application propose an information generating method and device.
In a first aspect, an embodiment of the present application provides an information generating method, the method including: obtaining a facial image of a target user; inputting the acquired facial image into a pre-trained health information recognition model to obtain a recognition result, where the health information recognition model is used to characterize the correspondence between facial images and recognition results; determining health status information of the target user based on the determined recognition result; determining health guidance information corresponding to the determined health status information based on a preset correspondence between health status information and health guidance information; and generating a health information report based on the determined health status information and health guidance information.
In some embodiments, the recognition result includes health scores of preset facial parts corresponding to the acquired facial image. Determining the health status information of the target user based on the determined recognition result includes: computing a weighted average of the health scores of the preset facial parts based on preset weight values of the preset facial parts to obtain a health score of the target user; comparing the health score of the target user with preset health score thresholds to determine a preset health score interval corresponding to the health score of the target user; and determining the health status information of the target user based on a preset correspondence between preset health score intervals and health status information.
In some embodiments, the recognition result includes probabilities of health status information of each category in a preset health status information category set corresponding to the acquired facial image. Determining the health status information of the target user based on the determined recognition result includes: selecting a preset number of probabilities, in descending order of value, from the probabilities of the health status information of each category in the preset health status information category set corresponding to the acquired facial image; fusing the health status information of the categories corresponding to the selected probabilities; and determining the health status information of the target user based on the fusion result.
In some embodiments, the health information recognition model is trained through the following steps: obtaining a training sample set, where a training sample includes a facial sample image and annotation information for the facial sample image, the annotation information including annotation information indicating each facial part of the facial sample image and annotation information of a health score corresponding to each facial part; and training a convolutional neural network using the facial sample image of each training sample in the training sample set as input and the annotation information corresponding to the input facial sample image as output, to obtain the health information recognition model.
In some embodiments, the health information recognition model may alternatively be trained through the following steps: obtaining a training sample set, where a training sample includes a facial sample image and annotation information for the facial sample image, the annotation information indicating health status information of each category in a preset health status information category set and a probability corresponding to the health status information of each category; and training a recurrent neural network using the facial sample image of each training sample in the training sample set as input and the annotation information corresponding to the input facial sample image as output, to obtain the health information recognition model.
In a second aspect, an embodiment of the present application provides an information generating device, the device including: an acquiring unit configured to obtain a facial image of a target user; a recognition unit configured to input the acquired facial image into a pre-trained health information recognition model to obtain a recognition result, where the health information recognition model is used to characterize the correspondence between facial images and recognition results; a first determination unit configured to determine health status information of the target user based on the determined recognition result; a second determination unit configured to determine health guidance information corresponding to the determined health status information based on a preset correspondence between health status information and health guidance information; and a generation unit configured to generate a health information report based on the determined health status information and health guidance information.
In some embodiments, the recognition result includes health scores of preset facial parts corresponding to the acquired facial image, and the first determination unit is further configured to: compute a weighted average of the health scores of the preset facial parts based on preset weight values of the preset facial parts to obtain a health score of the target user; compare the health score of the target user with preset health score thresholds to determine a preset health score interval corresponding to the health score of the target user; and determine the health status information of the target user based on a preset correspondence between preset health score intervals and health status information.
In some embodiments, the recognition result includes probabilities of health status information of each category in a preset health status information category set corresponding to the acquired facial image, and the first determination unit is further configured to: select a preset number of probabilities, in descending order of value, from the probabilities of the health status information of each category in the preset health status information category set corresponding to the acquired facial image; fuse the health status information of the categories corresponding to the selected probabilities; and determine the health status information of the target user based on the fusion result.
In some embodiments, the health information recognition model is trained through the following steps: obtaining a training sample set, where a training sample includes a facial sample image and annotation information for the facial sample image, the annotation information including annotation information indicating each facial part of the facial sample image and annotation information of a health score corresponding to each facial part; and training a convolutional neural network using the facial sample image of each training sample in the training sample set as input and the annotation information corresponding to the input facial sample image as output, to obtain the health information recognition model.
In some embodiments, the health information recognition model may alternatively be trained through the following steps: obtaining a training sample set, where a training sample includes a facial sample image and annotation information for the facial sample image, the annotation information indicating health status information of each category in a preset health status information category set and a probability corresponding to the health status information of each category; and training a recurrent neural network using the facial sample image of each training sample in the training sample set as input and the annotation information corresponding to the input facial sample image as output, to obtain the health information recognition model.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage device for storing one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the method described in any implementation of the first aspect.
The information generating method and device provided by the embodiments of the present application obtain a facial image of a target user, determine the recognition result corresponding to the acquired facial image using a pre-trained health information recognition model, then determine health status information and the corresponding health guidance information according to the recognition result, and finally generate a health information report, so that the user can examine and monitor his or her own health status at any time, which helps to improve the user's health.
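Taken together, the first aspect describes a linear pipeline: facial image in, recognition result, health status, guidance, report out. The following Python sketch only illustrates that flow; the model object, the status-derivation step and the guidance table are hypothetical placeholders and are not part of the claimed method.

```python
# Illustrative sketch of the first-aspect pipeline. `model` and
# `guidance_by_status` are assumed placeholders, not structures defined
# by the application.
def generate_health_report(face_image, model, guidance_by_status):
    recognition_result = model.predict(face_image)   # pre-trained recognition model
    status = model.to_status(recognition_result)     # derive health status information
    guidance = guidance_by_status[status]            # preset status -> guidance mapping
    return {                                         # the health information report
        "health_status": status,
        "health_guidance": guidance,
        "detail": recognition_result,
    }
```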
Description of the drawings
Other features, objects and advantages of the present application will become more apparent by reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the information generating method according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the information generating method according to the present application;
Fig. 4 is a flowchart of another embodiment of the information generating method according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the information generating device according to the present application;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed description of the embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention, rather than to limit the invention. It should also be noted that, for convenience of description, only the parts related to the invention are shown in the accompanying drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the information generating method or the information generating device of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
The terminal devices 101, 102, 103 interact with the server 105 through the network 104 to receive or send messages and the like.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices supporting image capture or video capture functions, including but not limited to cameras, video cameras, webcams, smartphones and tablet computers. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module, which is not specifically limited here.
The server 105 may be a server providing various services, for example an image processing server that processes images uploaded by the terminal devices 101, 102, 103. The image processing server may perform processing such as analysis on the received facial image of the target user and generate a processing result (such as a health information report).
It should be noted that the server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (for example, for providing distributed services), or as a single piece of software or software module, which is not specifically limited here.
It should be noted that the information generating method provided by the embodiments of the present application is generally executed by the server 105; when a terminal device has data analysis and processing capabilities, it may also be executed by the terminal devices 101, 102, 103. Correspondingly, the information generating device is generally arranged in the server 105 or in the terminal devices 101, 102, 103.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks and servers may be provided according to implementation needs. In the case where the image to be recognized is stored locally in the server 105, the system architecture 100 may not include the terminal devices 101, 102, 103.
With continued reference to Fig. 2, a flow 200 of one embodiment of the information generating method according to the present application is shown. The information generating method includes the following steps:
Step 201: obtain a facial image of a target user.
In this embodiment, the executing body of the information generating method (such as the server 105 shown in Fig. 1) may obtain the facial image of the target user locally or from a terminal device in communication connection with it (such as the terminal devices 101, 102, 103 shown in Fig. 1). The terminal device may be any of various electronic devices supporting image capture or video capture functions, including but not limited to cameras, video cameras, webcams, smartphones and tablet computers. Here, the facial image of the target user generally includes a color image (RGB image), a depth image (Depth image), and the like. The facial image contains the contours of the facial parts of the target user (such as the nose, eyes, cheeks, etc.), and may also contain the colors of those parts. Preferably, the facial image of the target user may be a color depth image. Setting the facial image of the user to a color depth image gives the facial image better depth-of-field information, so that the facial contour and the facial parts in the facial image are clearer.
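For illustration only (the application does not prescribe a storage format), a color facial image paired with its depth map could be represented roughly as follows; the class and field names are assumptions.

```python
# Hypothetical container for the acquired facial image; the embodiment only
# states that a color (RGB) image and a depth image may both be involved.
from dataclasses import dataclass
import numpy as np

@dataclass
class FaceImage:
    rgb: np.ndarray    # H x W x 3 color image
    depth: np.ndarray  # H x W depth map aligned with the color image

def pair_face_image(rgb: np.ndarray, depth: np.ndarray) -> FaceImage:
    # Keep the depth-of-field information alongside the color data so that
    # facial contours remain distinguishable downstream.
    if rgb.shape[:2] != depth.shape[:2]:
        raise ValueError("color and depth images must be spatially aligned")
    return FaceImage(rgb=rgb, depth=depth)
```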
In this embodiment, the facial image of the target user may include a single facial image or multiple facial images. Here, the multiple facial images may include, for example, an image of the user with the mouth closed and an image of the user with the mouth open and the tongue extended.
Step 202: input the acquired facial image into a pre-trained health information recognition model to obtain a recognition result.
In this embodiment, for the facial image of the target user acquired in step 201, the executing body may input the acquired facial image into the pre-trained health information recognition model to obtain a recognition result. Here, the recognition result may include health scores of preset facial parts corresponding to the acquired facial image. The preset facial parts may include, for example, the eye area, cheek area, forehead area, chin area, lip area and tongue area. Since each facial part corresponds to a different organ of the body, the executing body can determine the health status of each organ in the target user's body by determining the health score of each facial part.
In this embodiment, the health information recognition model may be used to characterize the correspondence between facial images and recognition results. In other words, the health information recognition model may be used to characterize the correspondence between a facial image and the health scores of the preset facial parts corresponding to that facial image.
In some optional implementations of this embodiment, those skilled in the art may perform statistical analysis on a large number of facial sample images, so as to determine the health status of the preset facial parts corresponding to each facial sample image, and determine the health scores of the preset facial parts according to that health status. The health information recognition model may be a correspondence table between facial sample images and the health scores of the preset facial parts in the facial sample images. In this case, the executing body may calculate the similarity between the preset facial parts in the facial image of the target user and the preset facial parts in each facial sample image in the correspondence table. The similarity may include, for example, the similarity of the contour of each preset facial part, and may also include the similarity of the color of each preset facial part. Based on the similarity calculation results, the health scores of the preset facial parts corresponding to the facial image of the target user are obtained from the correspondence table. For example, the facial sample image with the highest similarity to the facial image of the target user in terms of cheek color, forehead contour (for example, a forehead with acne has a different contour from a forehead without acne), and brow contour is determined first. Then, the health scores corresponding to the cheek color, forehead contour, brow contour, etc. of that facial sample image are found in the correspondence table, and the determined health scores of the preset facial parts are taken as the health scores of the cheek, forehead, brow and other facial parts in the facial image of the target user.
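A rough sketch of this correspondence-table variant follows; the per-part feature vectors, the similarity measure and the table layout are assumptions made for illustration and are not specified by the application.

```python
# Hypothetical sketch of the correspondence-table variant: pick the most
# similar annotated sample per facial part and reuse its health score.
import numpy as np

def part_similarity(part_a: np.ndarray, part_b: np.ndarray) -> float:
    # Assumed similarity: negative Euclidean distance between feature vectors
    # (e.g. contour points and mean color) describing the same facial part.
    return -float(np.linalg.norm(part_a - part_b))

def lookup_part_scores(user_parts: dict, table: list) -> dict:
    """user_parts: {part_name: feature_vector}
    table: list of (sample_parts, sample_scores) pairs with the same keys."""
    scores = {}
    for name, feat in user_parts.items():
        best = max(table, key=lambda entry: part_similarity(feat, entry[0][name]))
        scores[name] = best[1][name]  # reuse the annotated health score
    return scores
```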
In some optional implementations of this embodiment, the health information recognition model may be obtained by supervised training of an existing machine learning model (such as various artificial neural networks) using various machine learning methods and training samples. Here, the training samples may include a facial sample image set, which contains a large number of facial sample images and the annotation information of each facial sample image.
In practice, the health information recognition model may also be obtained by training a preset convolutional neural network (Convolutional Neural Network, CNN). The convolutional neural network may be an untrained or incompletely trained multi-layer convolutional neural network, and may include, for example, convolutional layers, pooling layers, fully connected layers and a loss layer. In addition, a non-first convolutional layer in the convolutional neural network may be connected to at least one convolutional layer before it. For example, the non-first convolutional layer may be connected to all convolutional layers before it, or to some of the convolutional layers before it. Specifically, the health information recognition model may be obtained by training through the following steps:
First, a facial sample image set is obtained. The facial sample image set contains a large number of facial sample images and the annotation information of each facial sample image. Here, the annotation information may include annotation information indicating each facial part of the facial sample image and annotation information of the health score corresponding to each facial part. The annotation information of each facial part may include annotation information of the contour of each facial part, annotation information of the color of each facial part, and annotation information of the position of each facial part on the face; from the annotation information of the positions of the facial parts, the preset facial parts in the facial image of the target user can be determined. The annotation information of the health score corresponding to each facial part may include annotation information of a health score generated based on weights of the contour and the color of that part.
Then, using the facial sample images in the facial sample set as input and the annotation information of the health scores of the facial parts corresponding to each facial sample image as output, an initial convolutional neural network is trained to obtain the health information recognition model.
Here, the initial convolutional neural network may be an untrained convolutional neural network or a convolutional neural network whose training has not been completed. The initial convolutional neural network may be provided with initial network parameters (such as different small random numbers), and the network parameters can be adjusted continuously during the training of the health information recognition model, until a health information recognition model that can characterize the correspondence between a facial image and the health scores of the facial parts corresponding to the facial image is obtained. For example, a BP (Back Propagation) algorithm or an SGD (Stochastic Gradient Descent) algorithm may be used to adjust the network parameters of the convolutional neural network.
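A minimal training sketch for this variant is shown below using PyTorch; the network depth, the regression loss over per-part scores, and the fixed list of six facial parts are illustrative assumptions rather than requirements stated in the embodiment.

```python
# Hypothetical PyTorch sketch: train a small CNN to regress one health score
# per preset facial part from a facial sample image.
import torch
import torch.nn as nn

NUM_PARTS = 6  # e.g. eyes, cheeks, forehead, chin, lips, tongue (assumed)

class HealthScoreCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, NUM_PARTS)
        )

    def forward(self, x):
        return self.head(self.features(x))  # one score per facial part

def train(model, loader, epochs=10):
    # SGD adjusts the initially random network parameters, as in the embodiment.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for images, part_scores in loader:  # annotated health scores per part
            optimizer.zero_grad()
            loss = loss_fn(model(images), part_scores)
            loss.backward()
            optimizer.step()
```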
Step 203: determine the health status information of the target user based on the determined recognition result.
In this embodiment, the executing body may determine the health status information of the target user according to the health scores of the preset facial parts corresponding to the facial image of the target user determined in step 202.
As an example, weight values of the preset facial parts may be set in advance in the executing body, for example a weight of 0.1 for the cheeks, a weight of 0.6 for the eyes, and a weight of 0.3 for the forehead. The executing body may compute a weighted average of the health scores of the preset facial parts according to the preset weight values, so as to obtain the health score of the user. For example, if the health score of the target user's cheeks is 50, the health score of the eyes is 70 and the health score of the forehead is 60, the health score of the target user is 50*0.1+70*0.6+60*0.3=65, i.e. the health score of the target user is 65. A correspondence table between health score intervals and health status information may also be set in the executing body. For example, when the health score interval is 0-40, the corresponding health status information is "poor health"; when the interval is 41-60, it is "relatively poor health"; when the interval is 60-80, it is "good health"; and when the interval is 80-100, it is "excellent health". The executing body can thus compare the health score of the target user with the preset health score thresholds to determine the preset health score interval corresponding to the health score of the target user, and finally determine the health status information of the target user according to the preset correspondence between health score intervals and health status information.
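The example just given maps directly to code. The sketch below reuses the illustrative weights and score intervals from the example (with the interval boundaries made non-overlapping); none of these values are prescribed by the application.

```python
# Sketch of step 203 using the illustrative weights and score intervals
# from the example above (boundaries adjusted to be non-overlapping).
PART_WEIGHTS = {"cheeks": 0.1, "eyes": 0.6, "forehead": 0.3}
STATUS_BY_INTERVAL = [
    ((0, 40), "poor health"),
    ((41, 60), "relatively poor health"),
    ((61, 80), "good health"),
    ((81, 100), "excellent health"),
]

def overall_health_score(part_scores: dict) -> float:
    # Weighted average of the per-part health scores.
    return sum(PART_WEIGHTS[p] * s for p, s in part_scores.items())

def health_status(score: float) -> str:
    for (low, high), status in STATUS_BY_INTERVAL:
        if low <= score <= high:
            return status
    return "unknown"

# e.g. overall_health_score({"cheeks": 50, "eyes": 70, "forehead": 60}) == 65.0
# and health_status(65.0) == "good health"
```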
Step 204: determine the health guidance information corresponding to the determined health status information based on a preset correspondence between health status information and health guidance information.
In this embodiment, a correspondence between health status information and health guidance information may be set in advance in the executing body. The health guidance information is used to provide reasonable health guidance according to the health status information. According to the health status information of the user determined in step 203, the executing body can determine the health guidance information corresponding to that health status information. For example, when the executing body determines that the user's health status is "good", the corresponding health guidance information may be: "Your physical condition is fairly good. It is suggested to exercise more to maintain a good constitution, avoid greasy food, and keep a regular schedule, so as to maintain a good state of energy."
Step 205: generate a health information report based on the determined health status information and health guidance information.
In this embodiment, the executing body may generate a health information report according to the health status information of the target user determined in step 203 and the health guidance information determined in step 204. Through the health information report, the target user can see his or her own health status at a glance, which improves the user's monitoring of his or her own health and thus helps the user improve his or her health. Optionally, the health information report may also include the health scores corresponding to the preset facial parts of the target user and the organ information corresponding to those facial parts, so as to help the user understand the health status of his or her organs according to the health scores of the facial parts.
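Steps 204 and 205 together amount to a table lookup followed by report assembly. The sketch below uses assumed guidance strings and report fields purely for illustration.

```python
# Hypothetical sketch of steps 204-205: look up guidance for the determined
# status and assemble a health information report.
GUIDANCE_BY_STATUS = {
    # Placeholder guidance text; the application only requires that a preset
    # status -> guidance correspondence exists.
    "good health": "Exercise regularly, avoid greasy food, keep a regular schedule.",
    "poor health": "Consider adjusting rest and diet, and consult a professional.",
}

def build_report(status: str, part_scores: dict, organ_by_part: dict) -> dict:
    guidance = GUIDANCE_BY_STATUS.get(status, "No guidance available.")
    return {
        "health_status": status,
        "health_guidance": guidance,
        # Optional detail: per-part scores and the organs they correspond to.
        "facial_part_scores": part_scores,
        "related_organs": {p: organ_by_part.get(p, "unknown") for p in part_scores},
    }
```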
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the information generating method according to the present application. In the application scenario of Fig. 3, the capture device 301 sends the acquired facial image 302 of the target user 307 to the server 303. The server 303 inputs the acquired facial image 302 into the pre-trained health information recognition model and obtains the recognition result 304: "brow health score: 60", "nose wing health score: 70", "chin health score: 80". Then, according to the recognition result, the server 303 can determine that the health status information 305 of the target user is "good health". Next, according to the preset correspondence between health status information and health guidance information, the server 303 can determine that the health guidance information 306 corresponding to "good health" is "keep up the good rest habits". Finally, the server 303 can generate a health information report based on the determined "good health" and "keep up the good rest habits".
The information generating method provided by the embodiments of the present application obtains a facial image of a target user, determines the recognition result corresponding to the acquired facial image using a pre-trained health information recognition model, then determines health status information and the corresponding health guidance information according to the recognition result, and finally generates a health information report, so that the user can examine and monitor his or her own health status at any time, which helps to improve the user's health.
With further reference to Fig. 4, a flow 400 of another embodiment of the information generating method according to the present application is shown. The flow 400 of the information generating method includes the following steps:
Step 401: obtain a facial image of a target user.
In this embodiment, the executing body of the information generating method (such as the server 105 shown in Fig. 1) may obtain the facial image of the target user locally or from a terminal device in communication connection with it (such as the terminal devices 101, 102, 103 shown in Fig. 1). Here, the facial image of the target user generally includes a color image (RGB image), a depth image (Depth image), and the like. The facial image contains the contours of the facial parts of the target user (such as the nose, eyes, cheeks, etc.), and may also contain the colors of those parts.
Step 402: input the acquired facial image into a pre-trained health information recognition model to obtain a recognition result.
In this embodiment, for the facial image of the target user acquired in step 401, the executing body may input the acquired facial image into the pre-trained health information recognition model to obtain a recognition result. Here, the recognition result may include the probabilities of the health status information of each category in a preset health status information category set corresponding to the acquired facial image. The health status information of each category may include, for example, health status information of endocrine disorder, health status information of heavy psychological pressure, health status information of pharyngitis, and health status information of good physical condition. Thus, the executing body can obtain, through the health information recognition model, the probabilities of the health status information of each category corresponding to the facial image of the target user. It should be noted that the sum of the probabilities in the recognition result may be equal to 1.
In this embodiment, the health information recognition model may be used to characterize the correspondence between facial images and recognition results. In other words, the health information recognition model may be used to characterize the correspondence between a facial image and the probabilities of the health status information of each category in the preset health status information category set corresponding to the acquired facial image.
In some optional implementations of this embodiment, those skilled in the art may perform statistical analysis on facial sample images covering each category in the preset health status information category set, so as to produce a correspondence table between facial images and the health status information of each category in the preset health status information category set, and use the correspondence table as the health information recognition model. The executing body may calculate the similarity between the facial image of the target user and the facial sample images of each category in the correspondence table. The similarity may include, for example, the similarity of the contour of each preset facial part, and may also include the similarity of the color of each facial part. Here, the method of calculating the similarity between facial images may include, for example, existing feature extraction techniques, i.e. extracting the facial contour points of each facial image and calculating the Euclidean distance between corresponding facial contour points. Then, based on the similarity calculation results, the probabilities of the health status information of each category in the preset health status information category set corresponding to the facial image of the target user are obtained from the correspondence table.
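The contour-point comparison described here can be sketched as follows; treating similarity as the inverse of the mean Euclidean distance and normalizing over categories to obtain probabilities are assumptions made for illustration.

```python
# Hypothetical sketch: similarity between two facial images as the inverse
# mean Euclidean distance between corresponding facial contour points, then
# normalized into per-category probabilities.
import numpy as np

def contour_similarity(contour_a: np.ndarray, contour_b: np.ndarray) -> float:
    # contour_a, contour_b: (N, 2) arrays of corresponding landmark points.
    distances = np.linalg.norm(contour_a - contour_b, axis=1)
    return 1.0 / (1.0 + float(distances.mean()))

def category_probabilities(user_contour: np.ndarray, samples_by_category: dict) -> dict:
    # samples_by_category: {category_name: representative contour array}
    sims = {c: contour_similarity(user_contour, s) for c, s in samples_by_category.items()}
    total = sum(sims.values())
    return {c: v / total for c, v in sims.items()}  # probabilities sum to 1
```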
In some optional implementations of this embodiment, the health information recognition model may be obtained by supervised training of an existing machine learning model (such as various artificial neural networks) using various machine learning methods and training samples. Here, the training samples may include a facial sample image set, which contains a large number of facial sample images and the annotation information of each facial sample image.
In practice, the health information recognition model may also be obtained by training a preset recurrent neural network. A recurrent neural network is an artificial neural network in which nodes are connected in a directed cycle. The essential feature of this network is that there are both internal feedback connections and feedforward connections between processing units, and its internal state can exhibit dynamic temporal behavior. Here, an initial recurrent neural network may be trained using training samples to obtain the health information recognition model. Specifically, the health information recognition model may be obtained by training through the following steps:
First, a facial sample image set is obtained. The facial sample image set contains a large number of facial sample images and the annotation information of each facial sample image. Here, the annotation information may include information indicating the health status information corresponding to each facial sample image and the probability corresponding to that health status information.
Then, using the facial sample images in the facial sample set as input and the annotation information of the health status information corresponding to each facial sample image and the probabilities corresponding to the health status information as output, the initial recurrent neural network is trained to obtain the health information recognition model.
Here, the initial recurrent neural network may be an untrained recurrent neural network or a recurrent neural network whose training has not been completed. The initial recurrent neural network may be provided with initial network parameters (such as different small random numbers), and the network parameters can be adjusted continuously during the training of the health information recognition model, until a health information recognition model that can characterize the correspondence between a facial image and the probabilities of the health status information of each category in the preset health status information category set corresponding to the acquired facial image is obtained.
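One way to realize this recurrent variant is sketched below in PyTorch: the rows of the facial image are fed to an LSTM as a sequence and the final state is trained against the annotated category probabilities. The row-as-sequence encoding and the KL-divergence loss are assumptions, not choices stated in the embodiment.

```python
# Hypothetical PyTorch sketch: an LSTM over image rows, trained to output
# per-category probabilities matching the annotated probability labels.
import torch
import torch.nn as nn

NUM_CATEGORIES = 4  # e.g. endocrine disorder, heavy pressure, pharyngitis, good (assumed)

class HealthStatusRNN(nn.Module):
    def __init__(self, image_width=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=image_width, hidden_size=hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, NUM_CATEGORIES)

    def forward(self, images):
        # images: (batch, height, width) grayscale; each row is one time step.
        _, (h_n, _) = self.lstm(images)
        return torch.log_softmax(self.classifier(h_n[-1]), dim=-1)

def train(model, loader, epochs=10):
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.KLDivLoss(reduction="batchmean")  # compare to annotated probabilities
    for _ in range(epochs):
        for images, target_probs in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), target_probs)
            loss.backward()
            optimizer.step()
```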
Step 403: select a preset number of probabilities, in descending order of value, from the probabilities of the health status information of each category in the preset health status information category set corresponding to the acquired facial image.
In this embodiment, based on the probabilities of the health status information of each category in the preset health status information category set corresponding to the facial image of the target user determined in step 402, the executing body may select the probability with the largest value, or may select a preset number of probabilities in descending order of value. Here, when the difference between the largest probability and each of the other probabilities exceeds a preset threshold, only the largest probability may be selected; when the differences between every two of the preset number of probabilities are all smaller than the preset threshold, the preset number of probabilities may be selected.
Step 404: fuse the health status information of the categories corresponding to the selected probabilities.
In this embodiment, for the preset number of probabilities selected in step 403, the executing body may fuse the health status information corresponding to the selected probabilities according to preset association relationships between the health status information of the various categories. As an example, when the determined health status information includes "heavy pressure" and "endocrine disorder", and "heavy pressure" usually leads to "endocrine disorder", the two can be fused into "stress-related problems".
Step 405: determine the health status information of the target user based on the fusion result.
In this embodiment, the executing body may determine the health status information of the target user according to the fusion result of step 404; that is, the fusion result may be taken as the health status information of the target user.
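Steps 403 to 405 can be illustrated with a short sketch; the selection threshold, the association table and the fused label below are assumed values chosen for the example.

```python
# Hypothetical sketch of steps 403-405: pick the top probabilities, fuse
# associated categories, and take the fusion result as the health status.
def select_categories(probs: dict, k: int = 2, gap_threshold: float = 0.3) -> list:
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    top_name, top_p = ranked[0]
    # If the best category clearly dominates, keep only that one.
    if all(top_p - p > gap_threshold for _, p in ranked[1:]):
        return [top_name]
    return [name for name, _ in ranked[:k]]

# Assumed association table: categories that commonly co-occur are fused.
FUSION_RULES = {frozenset({"heavy pressure", "endocrine disorder"}): "stress-related problems"}

def fuse(categories: list) -> str:
    fused = FUSION_RULES.get(frozenset(categories))
    return fused if fused is not None else ", ".join(categories)

# e.g. fuse(select_categories({"heavy pressure": 0.45, "endocrine disorder": 0.40,
#                              "good condition": 0.15})) -> "stress-related problems"
```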
Step 406: determine the health guidance information corresponding to the determined health status information based on the preset correspondence between health status information and health guidance information.
In this embodiment, a correspondence between health status information and health guidance information may be set in advance in the executing body. The health guidance information is used to provide reasonable health guidance according to the health status information. According to the health status information of the user determined in step 405, the executing body can determine the health guidance information corresponding to that health status information.
Step 407: generate a health information report based on the determined health status information and health guidance information.
In this embodiment, the executing body may generate a health information report according to the health status information of the target user determined in step 405 and the health guidance information determined in step 406.
As can be seen from Fig. 4, unlike the embodiment shown in Fig. 2, this embodiment trains the health information recognition model using a different training method, thereby enriching the output results of the health information recognition model.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides one embodiment of an information generating device. The device embodiment corresponds to the method embodiment shown in Fig. 2, and the device may be specifically applied to various electronic devices.
As shown in Fig. 5, the information generating device 500 of this embodiment may include: an acquiring unit 501, a recognition unit 502, a first determination unit 503, a second determination unit 504 and a generation unit 505. The acquiring unit 501 is configured to obtain a facial image of a target user. The recognition unit 502 is configured to input the acquired facial image into a pre-trained health information recognition model to obtain a recognition result, where the health information recognition model is used to characterize the correspondence between facial images and recognition results. The first determination unit 503 is configured to determine the health status information of the target user based on the determined recognition result. The second determination unit 504 is configured to determine the health guidance information corresponding to the determined health status information based on a preset correspondence between health status information and health guidance information. The generation unit 505 is configured to generate a health information report based on the determined health status information and health guidance information.
In this embodiment, for the specific processing of the acquiring unit 501, the recognition unit 502, the first determination unit 503, the second determination unit 504 and the generation unit 505 in the information generating device 500 and the technical effects they bring, reference may be made to the related descriptions of step 201, step 202, step 203, step 204 and step 205 in the corresponding embodiment of Fig. 2, respectively, which are not repeated here.
In some optional implementations of this embodiment, the recognition result includes health scores of preset facial parts corresponding to the acquired facial image, and the first determination unit 503 is further configured to: compute a weighted average of the health scores of the preset facial parts based on preset weight values of the preset facial parts to obtain a health score of the target user; compare the health score of the target user with preset health score thresholds to determine the preset health score interval corresponding to the health score of the target user; and determine the health status information of the target user based on a preset correspondence between preset health score intervals and health status information.
In some optional implementations of this embodiment, the recognition result includes the probabilities of the health status information of each category in a preset health status information category set corresponding to the acquired facial image, and the first determination unit 503 is further configured to: select a preset number of probabilities, in descending order of value, from the probabilities of the health status information of each category in the preset health status information category set corresponding to the acquired facial image; fuse the health status information of the categories corresponding to the selected probabilities; and determine the health status information of the target user based on the fusion result.
In some optional implementations of this embodiment, the health information recognition model is trained through the following steps: obtaining a training sample set, where a training sample includes a facial sample image and annotation information for the facial sample image, the annotation information including annotation information indicating each facial part of the facial sample image and annotation information of a health score corresponding to each facial part; and training a convolutional neural network using the facial sample image of each training sample in the training sample set as input and the annotation information corresponding to the input facial sample image as output, to obtain the health information recognition model.
In some optional implementations of this embodiment, the health information recognition model may alternatively be trained through the following steps: obtaining a training sample set, where a training sample includes a facial sample image and annotation information for the facial sample image, the annotation information indicating health status information of each category in a preset health status information category set and a probability corresponding to the health status information of each category; and training a recurrent neural network using the facial sample image of each training sample in the training sample set as input and the annotation information corresponding to the input facial sample image as output, to obtain the health information recognition model.
Below with reference to Fig. 6, it illustrates suitable for for realizing that the electronic equipment of the embodiment of the present application is (such as shown in FIG. 1 Electronic equipment or server) computer system 600 structural schematic diagram.Electronic equipment shown in Fig. 6 is only an example, Any restrictions should not be brought to the function and use scope of the embodiment of the present application.
As shown in fig. 6, computer system 600 includes central processing unit (CPU) 601, it can be read-only according to being stored in Program in memory (ROM) 602 or be loaded into the program in random access storage device (RAM) 603 from storage section 608 and Execute various actions appropriate and processing.In RAM 603, also it is stored with system 600 and operates required various programs and data. CPU 601, ROM 602 and RAM 603 are connected with each other by bus 604.Input/output (I/O) interface 605 is also connected to always Line 604.
It is connected to I/O interfaces 605 with lower component:Importation 606 including keyboard, mouse etc.;It is penetrated including such as cathode The output par, c 607 of spool (CRT), liquid crystal display (LCD) etc. and loud speaker etc.;Storage section 608 including hard disk etc.; And the communications portion 609 of the network interface card including LAN card, modem etc..Communications portion 609 via such as because The network of spy's net executes communication process.Driver 610 is also according to needing to be connected to I/O interfaces 605.Detachable media 611, such as Disk, CD, magneto-optic disk, semiconductor memory etc. are mounted on driver 610, as needed in order to be read from thereon Computer program be mounted into storage section 608 as needed.
Particularly, in accordance with an embodiment of the present disclosure, it may be implemented as computer above with reference to the process of flow chart description Software program.For example, embodiment of the disclosure includes a kind of computer program product comprising be carried on computer-readable medium On computer program, which includes the program code for method shown in execution flow chart.In such reality It applies in example, which can be downloaded and installed by communications portion 609 from network, and/or from detachable media 611 are mounted.When the computer program is executed by central processing unit (CPU) 601, limited in execution the present processes Above-mentioned function.It should be noted that computer-readable medium described herein can be computer-readable signal media or Computer readable storage medium either the two arbitrarily combines.Computer readable storage medium for example can be --- but Be not limited to --- electricity, magnetic, optical, electromagnetic, infrared ray or semiconductor system, device or device, or arbitrary above combination. The more specific example of computer readable storage medium can include but is not limited to:Electrical connection with one or more conducting wires, Portable computer diskette, hard disk, random access storage device (RAM), read-only memory (ROM), erasable type may be programmed read-only deposit Reservoir (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), light storage device, magnetic memory Part or above-mentioned any appropriate combination.In this application, computer readable storage medium can any be included or store The tangible medium of program, the program can be commanded the either device use or in connection of execution system, device.And In the application, computer-readable signal media may include the data letter propagated in a base band or as a carrier wave part Number, wherein carrying computer-readable program code.Diversified forms may be used in the data-signal of this propagation, including but not It is limited to electromagnetic signal, optical signal or above-mentioned any appropriate combination.Computer-readable signal media can also be computer Any computer-readable medium other than readable storage medium storing program for executing, the computer-readable medium can send, propagate or transmit use In by instruction execution system, device either device use or program in connection.Include on computer-readable medium Program code can transmit with any suitable medium, including but not limited to:Wirelessly, electric wire, optical cable, RF etc., Huo Zheshang Any appropriate combination stated.
The calculating of the operation for executing the application can be write with one or more programming languages or combinations thereof Machine program code, described program design language include object-oriented programming language-such as Java, Smalltalk, C+ +, further include conventional procedural programming language-such as " C " language or similar programming language.Program code can Fully to execute on the user computer, partly execute, executed as an independent software package on the user computer, Part executes or executes on a remote computer or server completely on the remote computer on the user computer for part. In situations involving remote computers, remote computer can pass through the network of any kind --- including LAN (LAN) Or wide area network (WAN)-is connected to subscriber computer, or, it may be connected to outer computer (such as utilize Internet service Provider is connected by internet).
Flow chart in attached drawing and block diagram, it is illustrated that according to the system of the various embodiments of the application, method and computer journey The architecture, function and operation in the cards of sequence product.In this regard, each box in flowchart or block diagram can generation A part for a part for one module, program segment, or code of table, the module, program segment, or code includes one or more uses The executable instruction of the logic function as defined in realization.It should also be noted that in some implementations as replacements, being marked in box The function of note can also occur in a different order than that indicated in the drawings.For example, two boxes succeedingly indicated are actually It can be basically executed in parallel, they can also be executed in the opposite order sometimes, this is depended on the functions involved.Also it to note Meaning, the combination of each box in block diagram and or flow chart and the box in block diagram and or flow chart can be with holding The dedicated hardware based system of functions or operations as defined in row is realized, or can use specialized hardware and computer instruction Combination realize.
Being described in unit involved in the embodiment of the present application can be realized by way of software, can also be by hard The mode of part is realized.Described unit can also be arranged in the processor, for example, can be described as:A kind of processor packet Include acquiring unit, recognition unit, the first determination unit, the second determination unit and generation unit.Wherein, the title of these units exists The restriction to the unit itself is not constituted in the case of certain, for example, acquiring unit is also described as " obtaining target user Face-image unit ".
As on the other hand, present invention also provides a kind of computer-readable medium, which can be Included in electronic equipment described in above-described embodiment;Can also be individualism, and without be incorporated the electronic equipment in. Above computer readable medium carries one or more program, when said one or multiple programs are held by the electronic equipment When row so that the electronic equipment:Obtain the face-image of target user;Acquired face-image is input to training in advance Health and fitness information identification model, obtains recognition result, wherein health and fitness information identification model is for characterizing face-image and recognition result Between correspondence;Based on identified recognition result, the health information of target user is determined;Based on pre-set Correspondence between health information and health guidance information determines that the identified corresponding health of health information refers to Lead information;Based on identified health information, health guidance information, health and fitness information report is generated.
The above description is only a description of the preferred embodiments of the present application and of the technical principles applied. Those skilled in the art should appreciate that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, but also covers, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalents, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present application.

Claims (12)

1. An information generating method, comprising:
obtaining a face image of a target user;
inputting the acquired face image into a pre-trained health information recognition model to obtain a recognition result, wherein the health information recognition model is used to characterize a correspondence between face images and recognition results;
determining health status information of the target user based on the obtained recognition result;
determining, based on a preset correspondence between health status information and health guidance information, the health guidance information corresponding to the determined health status information; and
generating a health information report based on the determined health status information and the health guidance information.
2. The method according to claim 1, wherein the recognition result comprises health scores of preset facial parts corresponding to the acquired face image;
and determining the health status information of the target user based on the obtained recognition result comprises:
computing a weighted average of the health scores of the preset facial parts, based on preset weights of the preset facial parts, to obtain a health score of the target user;
comparing the health score of the target user with preset health score thresholds to determine the preset health score interval corresponding to the health score of the target user; and
determining the health status information of the target user based on a preset correspondence between preset health score intervals and health status information.
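As a concrete illustration of the scoring flow in claim 2, the following sketch computes a weighted average of per-facial-part health scores and maps it onto a preset score interval. The part names, weights, intervals and status labels are illustrative assumptions, not values given in the application.

```python
# Hypothetical scoring flow for claim 2; all concrete numbers are assumptions.
PART_WEIGHTS = {"eyes": 0.3, "skin": 0.3, "lips": 0.2, "tongue": 0.2}

# Preset score intervals and the health status each maps to (illustrative).
SCORE_INTERVALS = [
    (80.0, 100.0, "good"),
    (60.0, 80.0, "sub-healthy"),
    (0.0, 60.0, "poor"),
]

def user_health_score(part_scores):
    """Weighted average of per-part health scores; weights are normalised to sum to 1."""
    total_weight = sum(PART_WEIGHTS[p] for p in part_scores)
    return sum(PART_WEIGHTS[p] * s for p, s in part_scores.items()) / total_weight

def health_status(part_scores):
    score = user_health_score(part_scores)
    for low, high, status in SCORE_INTERVALS:
        if low <= score <= high:
            return status
    raise ValueError(f"score {score} outside all preset intervals")

# Example recognition result with per-part scores in [0, 100]:
# weighted score 79.5, which falls in the 60-80 interval.
print(health_status({"eyes": 85, "skin": 70, "lips": 90, "tongue": 75}))  # -> "sub-healthy"
```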
3. The method according to claim 1, wherein the recognition result comprises probabilities of health status information of respective categories in a preset health status information category set, corresponding to the acquired face image;
and determining the health status information of the target user based on the obtained recognition result comprises:
selecting, from the probabilities of the health status information of the respective categories in the preset health status information category set corresponding to the acquired face image, a preset number of probabilities in descending order of value;
fusing the health status information of the categories corresponding to the selected probabilities; and
determining the health status information of the target user based on the fusion result.
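A minimal sketch of the category-probability flow in claim 3 follows: select the preset number of largest probabilities and fuse the corresponding categories. The category set, the value of the preset number and the simple concatenation-style fusion are assumptions; the application does not prescribe a particular fusion rule.

```python
# Hypothetical flow for claim 3; the category set, k, and the fusion rule are assumptions.

def top_k_categories(category_probs, k=3):
    """Return the k (category, probability) pairs with the largest probabilities."""
    return sorted(category_probs.items(), key=lambda kv: kv[1], reverse=True)[:k]

def fuse(selected):
    """Fuse the selected categories into a single health status description."""
    return " + ".join(category for category, _ in selected)

def determine_status(category_probs, k=3):
    return fuse(top_k_categories(category_probs, k))

# Example recognition result: probabilities over a preset category set.
probs = {
    "fatigue": 0.42,
    "sub-healthy skin": 0.31,
    "dehydration": 0.15,
    "normal": 0.08,
    "anaemia risk": 0.04,
}
print(determine_status(probs))  # -> "fatigue + sub-healthy skin + dehydration"
```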
4. The method according to claim 2, wherein the health information recognition model is trained by the following steps:
obtaining a training sample set, wherein a training sample comprises a facial sample image and annotation information for the facial sample image, the annotation information comprising annotation information indicating each facial part of the facial sample image and annotation information of a health score corresponding to each facial part; and
training a convolutional neural network, taking the facial sample image of each training sample in the training sample set as input and the annotation information corresponding to the input facial sample image as output, to obtain the health information recognition model.
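The following is a minimal PyTorch sketch of the kind of supervised training claim 4 describes: a convolutional network regressing per-facial-part health scores from face images. The framework, network layout, image size, number of facial parts and loss function are assumptions made here for illustration; the application only specifies that a convolutional neural network is trained on annotated facial sample images.

```python
# Hypothetical training sketch for claim 4 (assumed architecture, sizes and loss).
import torch
import torch.nn as nn

NUM_PARTS = 4  # e.g. eyes, skin, lips, tongue; an assumption, not from the application

class HealthScoreCNN(nn.Module):
    """Convolutional network mapping a face image to one health score per facial part."""
    def __init__(self, num_parts=NUM_PARTS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_parts)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train(model, loader, epochs=5, lr=1e-3):
    """loader yields (images, part_scores): images (B, 3, H, W), scores (B, NUM_PARTS)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # regression onto the annotated per-part health scores
    for _ in range(epochs):
        for images, scores in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), scores)
            loss.backward()
            opt.step()
    return model

# Smoke test on random data standing in for annotated facial sample images.
dummy = [(torch.randn(8, 3, 64, 64), torch.rand(8, NUM_PARTS) * 100) for _ in range(3)]
train(HealthScoreCNN(), dummy)
```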
5. The method according to claim 3, wherein the health information recognition model is also trained by the following steps:
obtaining a training sample set, wherein a training sample comprises a facial sample image and annotation information for the facial sample image, the annotation information indicating health status information of respective categories in the preset health status information category set and probabilities corresponding to the health status information of the respective categories; and
training a recurrent neural network, taking the facial sample image of each training sample in the training sample set as input and the annotation information corresponding to the input facial sample image as output, to obtain the health information recognition model.
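Claim 5 trains a recurrent network to output per-category probabilities from facial sample images. One common way to feed an image to a recurrent network is to treat its rows as a sequence; that encoding choice, together with the sizes and the loss, is an assumption of this sketch and is not specified in the application.

```python
# Hypothetical training sketch for claim 5: a recurrent network over image rows
# producing per-category probabilities. The row-as-sequence encoding, sizes and
# loss are assumptions, not details from the application.
import torch
import torch.nn as nn

NUM_CATEGORIES = 5   # size of the preset health status category set (assumed)
IMG_SIZE = 64        # assumed square grayscale input

class HealthCategoryRNN(nn.Module):
    def __init__(self, num_categories=NUM_CATEGORIES, hidden=128):
        super().__init__()
        # Each of the IMG_SIZE rows of the image is one step of the sequence.
        self.rnn = nn.GRU(input_size=IMG_SIZE, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_categories)

    def forward(self, images):            # images: (B, IMG_SIZE, IMG_SIZE)
        _, h = self.rnn(images)           # h: (1, B, hidden)
        return self.head(h[-1])           # logits: (B, NUM_CATEGORIES)

def train(model, loader, epochs=5, lr=1e-3):
    """loader yields (images, target_probs); each row of target_probs sums to 1."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()       # accepts class-probability targets (PyTorch >= 1.10)
    for _ in range(epochs):
        for images, target_probs in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), target_probs)
            loss.backward()
            opt.step()
    return model

# Smoke test with random stand-in data.
targets = torch.softmax(torch.randn(8, NUM_CATEGORIES), dim=1)
dummy = [(torch.randn(8, IMG_SIZE, IMG_SIZE), targets) for _ in range(3)]
train(HealthCategoryRNN(), dummy)
```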
6. An information generating apparatus, comprising:
an acquiring unit configured to obtain a face image of a target user;
a recognition unit configured to input the acquired face image into a pre-trained health information recognition model to obtain a recognition result, wherein the health information recognition model is used to characterize a correspondence between face images and recognition results;
a first determination unit configured to determine health status information of the target user based on the obtained recognition result;
a second determination unit configured to determine, based on a preset correspondence between health status information and health guidance information, the health guidance information corresponding to the determined health status information; and
a generation unit configured to generate a health information report based on the determined health status information and the health guidance information.
7. The apparatus according to claim 6, wherein the recognition result comprises health scores of preset facial parts corresponding to the acquired face image;
and the first determination unit is further configured to:
compute a weighted average of the health scores of the preset facial parts, based on preset weights of the preset facial parts, to obtain a health score of the target user;
compare the health score of the target user with preset health score thresholds to determine the preset health score interval corresponding to the health score of the target user; and
determine the health status information of the target user based on a preset correspondence between preset health score intervals and health status information.
8. The apparatus according to claim 6, wherein the recognition result comprises probabilities of health status information of respective categories in a preset health status information category set, corresponding to the acquired face image;
and the first determination unit is further configured to:
select, from the probabilities of the health status information of the respective categories in the preset health status information category set corresponding to the acquired face image, a preset number of probabilities in descending order of value;
fuse the health status information of the categories corresponding to the selected probabilities; and
determine the health status information of the target user based on the fusion result.
9. The apparatus according to claim 7, wherein the health information recognition model is trained by the following steps:
obtaining a training sample set, wherein a training sample comprises a facial sample image and annotation information for the facial sample image, the annotation information comprising annotation information indicating each facial part of the facial sample image and annotation information of a health score corresponding to each facial part; and
training a convolutional neural network, taking the facial sample image of each training sample in the training sample set as input and the annotation information corresponding to the input facial sample image as output, to obtain the health information recognition model.
10. The apparatus according to claim 8, wherein the health information recognition model is also trained by the following steps:
obtaining a training sample set, wherein a training sample comprises a facial sample image and annotation information for the facial sample image, the annotation information indicating health status information of respective categories in the preset health status information category set and probabilities corresponding to the health status information of the respective categories; and
training a recurrent neural network, taking the facial sample image of each training sample in the training sample set as input and the annotation information corresponding to the input facial sample image as output, to obtain the health information recognition model.
11. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-5.
12. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-5.
CN201810270481.6A 2018-03-29 2018-03-29 information generating method and device Pending CN108511066A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810270481.6A CN108511066A (en) 2018-03-29 2018-03-29 information generating method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810270481.6A CN108511066A (en) 2018-03-29 2018-03-29 information generating method and device

Publications (1)

Publication Number Publication Date
CN108511066A true CN108511066A (en) 2018-09-07

Family

ID=63379265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810270481.6A Pending CN108511066A (en) 2018-03-29 2018-03-29 information generating method and device

Country Status (1)

Country Link
CN (1) CN108511066A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104346503A (en) * 2013-07-23 2015-02-11 广州华久信息科技有限公司 Human face image based emotional health monitoring method and mobile phone
CN105868561A (en) * 2016-04-01 2016-08-17 乐视控股(北京)有限公司 Health monitoring method and device
CN106339576A (en) * 2016-07-20 2017-01-18 美的集团股份有限公司 Health management method and system
CN107844780A (en) * 2017-11-24 2018-03-27 中南大学 A kind of the human health characteristic big data wisdom computational methods and device of fusion ZED visions
US20190216333A1 (en) * 2018-01-12 2019-07-18 Futurewei Technologies, Inc. Thermal face image use for health estimation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
黄岑宇: "Design of an Intelligent Health Terminal System" (智能健康终端系统设计), Equipment Manufacturing Technology *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109448852A (en) * 2018-11-29 2019-03-08 平安科技(深圳)有限公司 Health control method, device and computer equipment based on prediction model
CN111227789A (en) * 2018-11-29 2020-06-05 百度在线网络技术(北京)有限公司 Human health monitoring method and device
CN112672119A (en) * 2019-10-15 2021-04-16 许桂林 Intelligent projector with camera shooting function and personal health data generation method
CN112672119B (en) * 2019-10-15 2022-09-09 许桂林 Intelligent projector with camera shooting function and personal health data generation method
CN112420141A (en) * 2020-11-19 2021-02-26 张磊 Traditional Chinese medicine health assessment system and application thereof
CN112420141B (en) * 2020-11-19 2024-01-26 张磊 Traditional Chinese medicine health evaluation system and application thereof
CN114392457A (en) * 2022-03-25 2022-04-26 北京无疆脑智科技有限公司 Information generation method, device, electronic equipment, storage medium and system

Similar Documents

Publication Publication Date Title
CN108537152A (en) Method and apparatus for detecting live body
CN108511066A (en) information generating method and device
CN108416324A (en) Method and apparatus for detecting live body
CN109145781B (en) Method and apparatus for processing image
CN108898185A (en) Method and apparatus for generating image recognition model
CN107644209A (en) Method for detecting human face and device
CN108898186A (en) Method and apparatus for extracting image
CN108830235A (en) Method and apparatus for generating information
CN108446651A (en) Face identification method and device
CN108776786A (en) Method and apparatus for generating user's truth identification model
CN109919079A (en) Method and apparatus for detecting learning state
CN108491808B (en) Method and device for acquiring information
CN111868742A (en) Machine implemented facial health and beauty aid
CN107910060A (en) Method and apparatus for generating information
CN108509892A (en) Method and apparatus for generating near-infrared image
CN109086719A (en) Method and apparatus for output data
CN109829432A (en) Method and apparatus for generating information
CN108197618A (en) For generating the method and apparatus of Face datection model
CN109308490A (en) Method and apparatus for generating information
CN108491823A (en) Method and apparatus for generating eye recognition model
CN109858444A (en) The training method and device of human body critical point detection model
CN108985228A (en) Information generating method and device applied to terminal device
CN109241934A (en) Method and apparatus for generating information
CN108510466A (en) Method and apparatus for verifying face
CN110009059A (en) Method and apparatus for generating model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination