WO2023048153A1 - Information processing method, computer program, and information processing device - Google Patents

Information processing method, computer program, and information processing device

Info

Publication number
WO2023048153A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature points
shadow
facial feature
information processing
facial
Prior art date
Application number
PCT/JP2022/035045
Other languages
French (fr)
Japanese (ja)
Inventor
俊彦 西村
康之 本間
雄太 吉田
Original Assignee
テルモ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by テルモ株式会社 filed Critical テルモ株式会社
Publication of WO2023048153A1 publication Critical patent/WO2023048153A1/en

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/107: Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/30: ICT for calculating health indices; for individual health risk assessment

Definitions

  • the present invention relates to an information processing method, a computer program, and an information processing apparatus for handling an image of a person's face taken with a camera or the like.
  • Devices equipped with cameras, such as smartphones and tablet terminals, are widely used.
  • In recent years, research and development have been conducted on techniques for determining a person's health condition based on an image of the person captured by a camera.
  • In Patent Literature 1, an image of the user captured by a user terminal is transmitted to a server, the server calculates a health level indicating the user's health condition based on the face image included in the image and transmits the health level to the user terminal, and the user terminal outputs the health level; such a health condition determination system has been proposed.
  • In this health condition determination system, the server creates graphs showing the health level, skin condition, and facial beauty rate in chronological order, correlation images showing correlations with the health level, advice information, face image videos, and the like, and provides them to the user terminal.
  • the present invention has been made in view of such circumstances, and its object is to provide an information processing method, a computer program, and an information processing apparatus that can be expected to suppress deterioration in the accuracy of determining health conditions and the like due to shadows contained in photographed face images.
  • An information processing method according to one embodiment includes an information processing apparatus acquiring a facial image, extracting facial feature points from the acquired facial image, determining whether a shadow is present in the facial image, and correcting the facial feature points in the shadowed portion when it is determined that a shadow is present.
  • According to one embodiment, deterioration in the accuracy of determining the health condition and the like due to shadows contained in the photographed face image can be expected to be suppressed.
  • FIG. 1 is a schematic diagram for explaining an overview of the information processing system according to the present embodiment.
  • FIG. 2 is a block diagram showing the configuration of the server device according to the present embodiment.
  • FIG. 3 is a block diagram showing the configuration of the terminal device according to the present embodiment.
  • FIG. 4 is a flowchart showing the procedure of the health level determination processing performed by the server device according to the present embodiment.
  • FIG. 5 is a flowchart showing the procedure of the shadow correction processing relating to the contour of the face performed by the server device according to the present embodiment.
  • FIG. 6 is a schematic diagram for explaining the shadow correction processing relating to the contour of the face.
  • FIG. 7 is a flowchart showing the procedure of the shadow correction processing relating to the nasolabial folds or nose performed by the server device according to the present embodiment.
  • FIG. 8 is a schematic diagram for explaining the shadow correction processing relating to the nasolabial folds or nose.
  • FIG. 9 is a schematic diagram for explaining the shadow correction processing relating to the nasolabial folds or nose.
  • FIG. 10 is a flowchart showing the procedure of the shadow correction processing relating to the eyes or mouth performed by the server device according to the present embodiment.
  • FIG. 11 is a schematic diagram for explaining the shadow correction processing relating to the eyes or mouth.
  • FIG. 12 is a schematic diagram for explaining the shadow correction processing relating to the eyes or mouth.
  • FIG. 1 is a schematic diagram for explaining an outline of an information processing system according to this embodiment.
  • the information processing system according to the present embodiment is a system that analyzes a subject's face image to determine the health condition, and includes a server device 1, a terminal device 3, and the like.
  • a user who uses this system takes a picture of a person's face using a terminal device 3 such as a smart phone or a tablet-type terminal device, and transmits the taken face image from the terminal device 3 to the server device 1 .
  • in FIG. 1, the user uses the terminal device 3 to photograph his or her own face, but the present invention is not limited to this; the user may use the terminal device 3 to photograph another person's face.
  • the server device 1 receives the face image transmitted by the terminal device 3, analyzes the features of the face image, determines the subject's health level, and transmits the determined health level and related information to the terminal device 3. Based on the face image, the server device 1 determines, for example, the subject's fatigue level, stress level, positive (or negative) emotion, presence and degree of facial paralysis, or presence and degree of signs of stroke. Note that the health level determined by the server device 1 is not limited to these examples; various indices related to the subject's health may be employed.
  • the server device 1 extracts facial feature points (so-called keypoints or landmarks) from the facial image using, for example, a facial feature point extraction model machine-learned in advance, and determines the health level based on the extracted facial feature points.
  • to suppress deterioration of the health level determination accuracy caused by shadows in the face image that the user captures with the terminal device 3, the server device 1 performs processing to correct the facial feature point extraction result for the shadowed portion.
  • the server device 1 determines, for example, whether the facial image includes a shadowed portion; if it does, the server device 1 identifies the position and range of the shadowed portion and corrects the facial feature points included in it. This correction suppresses deterioration in the accuracy of the health level determination based on the facial feature points.
  • FIG. 2 is a block diagram showing the configuration of the server device 1 according to this embodiment.
  • the server device 1 according to the present embodiment includes a processing unit 11, a storage unit (storage) 12, a communication unit (transceiver) 13, and the like.
  • although the explanation here assumes that the processing is performed by one server device, the processing may be performed by a plurality of server devices in a distributed manner.
  • the processing unit 11 is configured using an arithmetic processing unit such as a CPU (Central Processing Unit), MPU (Micro-Processing Unit), GPU (Graphics Processing Unit), or quantum processor, together with a ROM (Read Only Memory), a RAM (Random Access Memory), and the like. The processing unit 11 reads out and executes the server program 12a stored in the storage unit 12 to perform various processes: acquiring the subject's facial image from the terminal device 3, extracting facial feature points from the acquired image, correcting the facial feature points for the shadowed portion, and judging the subject's health level from the facial feature points.
  • the storage unit 12 is configured using a large-capacity storage device such as a hard disk.
  • the storage unit 12 stores various programs executed by the processing unit 11 and various data required for processing by the processing unit 11 .
  • the storage unit 12 stores a server program 12a executed by the processing unit 11.
  • the storage unit 12 stores a facial feature point extraction model 12b for extracting facial feature points from a facial image, and is provided with a facial feature point DB (database) 12c for storing information on the extracted facial feature points.
  • the server program (program product) 12a is provided in a form recorded in a recording medium 99 such as a memory card or an optical disk, and the server device 1 reads the server program 12a from the recording medium 99 and stores it in the storage unit 12.
  • the server program 12a may be written in the storage unit 12 during the manufacturing stage of the server device 1, for example.
  • the server program 12a may be delivered by another remote server device or the like, and the server device 1 may acquire the program through communication.
  • the server program 12a may be recorded in the recording medium 99, read by a writing device, and written into the storage unit 12 of the server device 1.
  • the server program 12a may be provided in the form of distribution via a network, or in the form of being recorded on the recording medium 99.
  • the facial feature point extraction model 12b is a learning model that has undergone machine learning in advance so as to extract and output the facial feature points of the target person in response to the input of the target person's face image.
  • the facial feature point extraction model 12b is a trained model that extracts facial feature points using, for example, the OpenPose technique.
  • however, the facial feature point extraction model 12b is not limited to an OpenPose learning model, and various learning models that extract facial feature points using other techniques may be employed. Since extraction of facial feature points using a machine-learned model is an existing technique, details such as the configuration and generation method of the learning model are omitted.
  • the facial feature point DB 12c is a database that stores and accumulates information on facial feature points extracted from the facial image of the subject.
  • the facial feature point DB 12c stores, in association with one another, identification information such as the subject's name or ID, the subject's facial image, the date and time the image was taken, information on the facial feature points extracted from the image, and the result of the health level determination based on those feature points.
  • the server device 1 compares, for example, the facial feature points extracted from the latest facial image with those from a predetermined period earlier, such as one month or one year ago, and can determine the health level based on changes in the facial feature points over that period.
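As a hedged illustration of this time-series comparison (the patent does not specify a distance measure), the sketch below computes a mean per-point displacement between a stored landmark set and the latest one; the function name and the choice of metric are assumptions, not part of the patent.

```python
# Illustrative sketch only: comparing the latest facial feature points with
# points stored a predetermined period earlier. The patent does not define
# a specific change metric; mean per-point displacement is an assumption.
import numpy as np

def landmark_change(latest: np.ndarray, past: np.ndarray) -> float:
    """Mean displacement between two (N, 2) landmark arrays.

    Assumes both arrays hold the same N feature points in the same order,
    e.g. in coordinates already normalized by image size.
    """
    if latest.shape != past.shape:
        raise ValueError("landmark sets must align point-for-point")
    return float(np.linalg.norm(latest - past, axis=1).mean())
```

A caller could, for instance, flag the record for a health-level review when the change over one month exceeds a threshold fixed at system design time.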
  • the communication unit 13 communicates with various devices via a network N including a mobile phone communication network, a wireless LAN (Local Area Network), the Internet, and the like. In the present embodiment, the communication unit 13 communicates with one or more terminal devices 3 via the network N. The communication unit 13 transmits the data given from the processing unit 11 to other devices, and gives the data received from other devices to the processing unit 11.
  • the storage unit 12 may be an external storage device connected to the server device 1.
  • the server device 1 may be a multicomputer including a plurality of computers, or may be a virtual machine virtually constructed by software.
  • the server device 1 is not limited to the above configuration, and may further include, for example, a reading unit that reads information stored in a portable storage medium, an input unit that receives operation inputs, or a display unit that displays images.
  • the server program 12a stored in the storage unit 12 is read out and executed by the processing unit 11, so that a facial image acquisition unit 11a, a facial feature point extraction unit 11b, a shadow determination unit 11c, a shadow portion specifying unit 11d, a shadow correction unit 11e, a health level determination unit 11f, a notification unit 11g, and the like are implemented in the processing unit 11 as software functional units.
  • among the functional units of the processing unit 11, those related to determining the health level from the face image are illustrated here; functional units related to other processes are omitted.
  • the face image acquisition unit 11a performs processing for acquiring a face image of the target person's face from the terminal device 3.
  • the facial image acquisition unit 11 a communicates with the terminal device 3 through the communication unit 13 , receives facial image data transmitted from the terminal device 3 , and stores the data in the storage unit 12 .
  • the face image obtaining section 11a may perform a process of extracting an image region corresponding to the face of the subject from the image.
  • for example, the face image acquisition unit 11a performs face detection processing on the image acquired from the terminal device 3 and cuts out an image area including the detected face from the original image, thereby obtaining the subject's face image. Since face detection processing is an existing technology, a detailed description thereof is omitted.
  • the facial feature point extraction unit 11b extracts a plurality of points indicating facial features as feature points (keypoints, landmarks, etc.) from the facial image acquired by the facial image acquisition unit 11a.
  • the facial feature point extraction unit 11b uses the facial feature point extraction model 12b stored in the storage unit 12 to extract facial feature points from the facial image.
  • the facial feature point extracting unit 11b inputs the facial image of the subject to the facial feature point extraction model 12b, and obtains the information of the facial feature points output by the facial feature point extraction model 12b in response to the facial image.
  • the facial feature point extraction model 12b outputs, for example, the coordinates of the facial feature points in the input facial image; the number of facial feature points output by the model is, for example, several to several hundred.
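For orientation, the sketch below shows the "face image in, coordinates out" interface such a model presents. MediaPipe Face Mesh is used purely as a stand-in for the patent's OpenPose-style facial feature point extraction model 12b; the file path is a placeholder.

```python
# Stand-in sketch: MediaPipe Face Mesh plays the role of the facial feature
# point extraction model 12b (the patent itself assumes an OpenPose-style
# model). Input: a face image; output: landmark pixel coordinates.
import cv2
import mediapipe as mp

def extract_feature_points(path: str) -> list[tuple[float, float]]:
    image = cv2.imread(path)
    h, w = image.shape[:2]
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                         max_num_faces=1) as mesh:
        result = mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return []  # no face detected in the image
    # Landmarks are returned normalized to [0, 1]; convert to pixels.
    return [(lm.x * w, lm.y * h)
            for lm in result.multi_face_landmarks[0].landmark]
```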
  • the shadow determination unit 11c determines whether the face image acquired by the face image acquisition unit 11a contains a shadow. For example, the shadow determination unit 11c compares the luminance (or brightness) of each pixel constituting the face image with a predetermined threshold, and judges that the face image contains a shadow when any pixel's luminance is lower (darker) than the threshold.
  • the threshold used for shadow determination may be fixed at the design stage of the present system, or may be calculated, for example, based on the average luminance of the entire face image.
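A minimal sketch of this shadow presence check follows, assuming the adaptive variant where the threshold is derived from the mean luminance; the 0.6 scale factor is an assumption, not a value from the patent.

```python
# Minimal sketch of the shadow determination described above: any pixel
# whose luminance falls below a threshold derived from the image's mean
# luminance counts as evidence of a shadow. The scale factor is assumed.
import cv2
import numpy as np

def has_shadow(face_bgr: np.ndarray, scale: float = 0.6) -> bool:
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    threshold = scale * float(gray.mean())  # adaptive threshold
    return bool((gray < threshold).any())
```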
  • the shadow portion specifying unit 11d identifies the image area containing a shadow in the face image as the shadow portion. Like the shadow determination unit 11c, the shadow portion specifying unit 11d compares the brightness of each pixel of the face image with a predetermined threshold and identifies the pixels whose brightness is lower than the threshold. The threshold used for this process may be the same as that used by the shadow determination unit 11c, or a different threshold may be used.
  • for example, the shadow portion specifying unit 11d identifies a rectangular image region containing the pixels whose brightness is below the threshold and treats this region as the shadow portion. Note that the shadow portion specifying unit 11d may identify a plurality of shadow portions from one face image.
  • the shadow portion specifying unit 11d further specifies which part of the subject's face the shadow portion included in the face image corresponds to.
  • for example, the shadow portion specifying unit 11d compares the coordinates of the facial feature points extracted by the facial feature point extraction unit 11b with the coordinate range of the shadow portion, and determines which part of the face the shadow portion corresponds to from which facial features the feature points within or near the shadow portion belong to.
  • in the present embodiment, the shadow portion specifying unit 11d specifies whether the shadow portion corresponds to the contour of the face, the nasolabial folds, the nose, the eyes, or the mouth, or corresponds to none of these parts.
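The sketch below combines the two steps just described, under stated assumptions: the shadow portion is taken as the bounding rectangle of below-threshold pixels, and its face part is chosen by majority vote over the feature points falling inside it. The part_of_point mapping is hypothetical and depends on the landmark model's index layout.

```python
# Sketch of shadow portion identification and part labeling. The mapping
# from landmark index to face part ('contour', 'nose', 'eye', 'mouth',
# 'nasolabial') is hypothetical; it depends on the extraction model used.
import numpy as np

def shadow_bbox(gray: np.ndarray, threshold: float):
    ys, xs = np.where(gray < threshold)
    if xs.size == 0:
        return None  # no pixels darker than the threshold
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

def shadow_part(bbox, points, part_of_point):
    x0, y0, x1, y1 = bbox
    hits = [part_of_point[i] for i, (x, y) in enumerate(points)
            if x0 <= x <= x1 and y0 <= y <= y1]
    # Majority vote over the feature points inside the shadow rectangle.
    return max(set(hits), key=hits.count) if hits else None
```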
  • the shadow correction unit 11e performs processing for correcting facial feature points included in the shadow portion specified by the shadow portion specifying unit 11d.
  • the shadow correction section 11e corrects the shadow portion using a correction method determined according to which part of the face the shadow portion corresponds to.
  • in the present embodiment, three correction methods are prepared: a method for a shadow portion corresponding to the contour of the face, a method for a shadow portion corresponding to the nasolabial folds or nose, and a method for a shadow portion corresponding to the eyes or mouth.
  • the shadow correction section 11e selects an appropriate method according to the shadow portion from the three types of correction methods and performs correction. Details of each correction method will be described later.
  • the health level determination unit 11f performs processing for determining the health level of the subject based on a plurality of facial feature points extracted from the face image and corrected for shadow portions.
  • the health degree determination unit 11f determines, for example, the presence or absence or degree of facial paralysis of the subject.
  • for example, a designer of the information processing system according to the present embodiment collects information about the facial feature points of people with symptoms of facial paralysis, determines in advance conditions for judging whether a person has facial paralysis, and stores the determination conditions in the server device 1.
  • the health degree determination unit 11f judges whether the subject's facial feature points have the characteristics of facial paralysis based on the pre-stored determination conditions, and outputs the presence or absence of facial paralysis, or its degree, as the determination result.
  • the health level determination unit 11f outputs, for example, a decimal value ranging from 0 (no facial paralysis characteristic) to 1 (facial paralysis characteristic) as a determination result.
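Since the patent deliberately leaves the determination condition unspecified (see below), the following is only a toy illustration of how such a 0-to-1 score could be produced, here from left-right landmark asymmetry; nothing in it is taken from the patent.

```python
# Toy example, NOT the patent's determination condition: score facial
# asymmetry in [0, 1] by mirroring right-side landmarks across the face
# midline and measuring their mismatch with the left-side landmarks.
import numpy as np

def asymmetry_score(left: np.ndarray, right: np.ndarray,
                    midline_x: float, face_width: float) -> float:
    """left/right: (N, 2) arrays of corresponding landmark pairs."""
    mirrored = right.copy()
    mirrored[:, 0] = 2.0 * midline_x - right[:, 0]  # reflect across midline
    err = np.linalg.norm(left - mirrored, axis=1).mean() / face_width
    return float(min(1.0, err * 10.0))  # scale factor chosen arbitrarily
```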
  • note that the information processing system according to the present embodiment is aimed at accurately extracting facial feature points, and the extracted feature points may be used to determine various health levels. For this reason, a detailed description of the health determination method itself, such as how the health degree determination unit 11f judges the presence or absence and degree of facial paralysis, is omitted in the present embodiment.
  • the health degree determination unit 11f may also determine the presence or degree of facial paralysis from the subject's facial feature points using a learning model machine-learned in advance.
  • such a learning model is trained using, for example, teacher data that associates facial feature point information with the presence or absence of facial paralysis, and outputs the presence or absence of facial paralysis, or its degree, in response to input facial feature points.
  • the health level determination unit 11f may also determine the degree of progression or improvement of, for example, facial paralysis symptoms based on the facial feature point information stored in the facial feature point DB 12c of the storage unit 12. Further, the health degree determination unit 11f may predict the degree of facial paralysis at a future point, such as one month or one year ahead, based on the time-series feature point information stored in the facial feature point DB 12c. In the present embodiment the health degree determination unit 11f determines the presence or absence or degree of the subject's facial paralysis, but the determination is not limited to this; for example, the presence or absence of a stroke or the like may be determined as the health level.
  • the notification unit 11g performs a process of notifying the user of the result of health level determination by the health level determination unit 11f.
  • the notification unit 11g may notify, for example, when an abnormality is detected in the subject's health level, or may notify, for example, when an improvement in the subject's health level is detected.
  • alternatively, the judgment result may be notified regardless of the presence or absence of an abnormality or improvement.
  • in any of these cases, the notification unit 11g notifies the user to that effect.
  • the notification unit 11g transmits a message for notifying the determination result to one or a plurality of notification destinations preset in association with the target person.
  • the notification unit 11g notifies the terminal device 3 of the user who has transmitted the face image.
  • note that the notification destination is not limited to the terminal device 3 of the user who transmitted the face image; it may be the terminal device 3 of a different user (for example, the user's family member or doctor in charge).
  • FIG. 3 is a block diagram showing the configuration of the terminal device 3 according to this embodiment.
  • the terminal device 3 includes a processing unit 31, a storage unit (storage) 32, a communication unit (transceiver) 33, a display unit (display) 34, an operation unit 35, and the like.
  • the terminal device 3 is an information processing apparatus such as a smartphone, tablet terminal, or personal computer used by a user, for example the subject who wants a health level determination, or a related person such as the subject's family member or doctor in charge.
  • the terminal device 3 does not have to be a portable device, and may be a device such as an AI speaker or a surveillance camera installed in the subject's house. The terminal device 3 is desirably equipped with a camera for photographing the subject's face, although a device without a camera may be used.
  • the processing unit 31 is configured using an arithmetic processing unit such as a CPU or MPU, a ROM, and the like.
  • the processing unit 31 reads out and executes the program 32a stored in the storage unit 32 to perform various processes: processing related to photographing the subject, transmitting the photographed image to the server device 1 to request a health level determination, and acquiring the determination result from the server device 1 and displaying it to the user.
  • the storage unit 32 is configured using, for example, a non-volatile memory device such as a flash memory or a storage device such as a hard disk.
  • the storage unit 32 stores various programs executed by the processing unit 31 and various data required for processing by the processing unit 31 .
  • the storage unit 32 stores a program 32a executed by the processing unit 31.
  • in the present embodiment, the program (program product) 32a is distributed by a remote server device or the like, and the terminal device 3 acquires it through communication and stores it in the storage unit 32.
  • the program 32a may be written in the storage unit 32 during the manufacturing stage of the terminal device 3, for example.
  • the program 32a may be stored in the storage unit 32 after the terminal device 3 reads the program 32a recorded in the recording medium 98 such as a memory card or an optical disk.
  • alternatively, the program 32a recorded in the recording medium 98 may be read by a writing device and written into the storage unit 32 of the terminal device 3.
  • the program 32a may be provided in the form of distribution via a network, or in the form of being recorded on the recording medium 98.
  • the communication unit 33 communicates with various devices via the network N, which includes a mobile phone communication network, a wireless LAN, the Internet, and the like. In the present embodiment, the communication unit 33 communicates with the server device 1 via the network N. The communication unit 33 transmits data received from the processing unit 31 to other devices, and provides the processing unit 31 with data received from other devices.
  • the display unit 34 is configured using a liquid crystal display or the like, and displays various images, characters, etc. based on the processing of the processing unit 31.
  • the operation unit 35 receives a user's operation and notifies the processing unit 31 of the received operation.
  • the operation unit 35 receives a user's operation using an input device such as mechanical buttons or a touch panel provided on the surface of the display unit 34 .
  • the operation unit 35 may be an input device such as a mouse and a keyboard, and these input devices may be detachable from the terminal device 3 .
  • the camera 36 is configured using an image sensor such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor and optical elements such as lenses, and provides the captured image data to the processing unit 31.
  • the program 32a stored in the storage unit 32 is read out and executed by the processing unit 31, so that a photographing processing unit 31a, a display processing unit 31b, and the like are implemented in the processing unit 31 as software functional units.
  • the program 32a may be a program dedicated to the information processing system according to the present embodiment, or may be a general-purpose program such as a web browser.
  • the photographing processing unit 31a performs processing related to photographing of the subject's face image by the camera 36.
  • the photographing processing unit 31a performs photographing by the camera 36 according to, for example, the user's operation on the operation unit 35, and acquires data of the photographed image.
  • the photographing processing unit 31a may perform, for example, a message display or a photographing guide display to assist the user in photographing an appropriate face image.
  • the imaging processing unit 31a transmits an image (face image) obtained by imaging to the server device 1, and requests determination of the health level.
  • the display processing unit 31b performs processing for displaying information on the health level determination result received from the server device 1 on the display unit 34.
  • the display processing unit 31b may display, for example, the presence or absence of facial paralysis determined by the server device 1, or its degree, and may issue a push notification to that effect when a determination result indicating characteristics of facial paralysis is obtained.
  • the time-series change in the degree of facial paralysis may be displayed as a graph or the like, or various displays other than these may be performed.
  • FIG. 4 is a flow chart showing the procedure of health level determination processing performed by the server device 1 according to the present embodiment.
  • the facial image acquisition unit 11a of the processing unit 11 of the server device 1 according to the present embodiment communicates with the terminal device 3 through the communication unit 13 and receives the facial image data transmitted by the terminal device 3, thereby acquiring the subject's face image (step S1).
  • the facial feature point extraction unit 11b of the processing unit 11 inputs the facial image acquired in step S1 to the facial feature point extraction model 12b stored in the storage unit 12 and obtains the feature point information that the model outputs, thereby extracting the feature points from the subject's face image (step S2).
  • the shadow determination unit 11c of the processing unit 11 determines whether the face image acquired in step S1 includes a shadow (step S3). At this time, the shadow determination unit 11c compares the brightness of each pixel constituting the face image with the threshold and judges that the image contains a shadow if any pixel's brightness is below the threshold. If it is determined that the face image does not contain a shadow (S3: NO), the shadow determination unit 11c advances the process to step S7.
  • if it is determined that the face image contains a shadow (S3: YES), the shadow portion specifying unit 11d of the processing unit 11 identifies the image region containing the shadow as the shadow portion and specifies which part of the subject's face the shadow portion corresponds to (step S4). At this time, the shadow portion specifying unit 11d identifies the corresponding part of the face based on the positional relationship between the region identified as the shadow portion and the coordinates of the facial feature points extracted in step S2.
  • the shadow portion specifying unit 11d then determines whether the shadow portion specified in step S4 needs correction (step S5). In the present embodiment, the shadow portion specifying unit 11d judges that correction is necessary when the shadow portion corresponds to the subject's facial contour, nasolabial folds, nose, eyes, or mouth, and unnecessary when it corresponds to none of these parts. If it is determined that correction is not necessary (S5: NO), the shadow portion specifying unit 11d advances the process to step S7.
  • if it is determined that correction is necessary (S5: YES), the shadow correction unit 11e of the processing unit 11 performs correction processing on the shadow portion of the face image identified by the shadow portion specifying unit 11d (step S6).
  • at this time, the shadow correction unit 11e corrects the shadow portion using a correction method determined according to whether the shadow portion corresponds to the outline of the face, to the nasolabial folds or nose, or to the eyes or mouth.
  • the health level determination unit 11f of the processing unit 11 determines the subject's health level based on the facial feature points extracted from the subject's face image and corrected for the shadow portion as necessary (step S7).
  • in the present embodiment, the health degree determination unit 11f determines the presence or absence of facial paralysis and its degree based on the subject's facial feature points.
  • however, the determination is not limited to this; various health measures such as the subject's degree of fatigue, stress level, degree of positive (negative) emotions, or presence and degree of signs of stroke may be determined.
  • the health level determination unit 11f stores, in the facial feature point DB 12c of the storage unit 12, the information on the facial feature points extracted from the subject's face image and on the facial feature points corrected for the shadow portion as necessary, together with the health level determination result of step S7 (step S8).
  • the notification unit 11g of the processing unit 11 notifies the health level determination result by transmitting the information on the subject's health level determined in step S7 to the destination preset for the subject (step S9), and ends the process.
  • for example, the server device 1 stores, in a database or the like, identification information such as the subject's name or ID in association with information such as the e-mail address of the recipient of the health level determination result.
  • the terminal device 3 transmits the identification information of the target person to the server device 1 together with the face image.
  • the notification unit 11g of the server device 1 acquires from the database the destination information based on the subject's identification information received together with the face image, and may transmit the health level determination result to the acquired destination.
  • FIG. 5 is a flowchart showing the procedure of shadow correction processing concerning the contour of the face performed by the server device 1 according to the present embodiment.
  • the shadow correction processing shown in FIG. 5 is processing that can be performed in step S6 of the flowchart of FIG.
  • FIG. 6 is a schematic diagram for explaining the shadow correction processing regarding the contour of the face.
  • the shadow correction unit 11e of the processing unit 11 of the server device 1 performs correction to increase the brightness of the shadow portion of the face image identified in step S4 of the flowchart shown in FIG. 4 (step S21).
  • the upper part of FIG. 6 shows an example of a face image and a shaded portion 101 corresponding to the contour of the face specified from this face image.
  • the middle part of FIG. 6 shows an example of the image after the correction for increasing the luminance of the shaded portion 101 has been performed.
  • the shadow correction unit 11e performs luminance correction only on the shadow portion specified from the face image, and does not have to perform luminance correction on portions other than the shadow portion (although it may be performed).
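One plausible reading of step S21 is sketched below: gamma correction applied to the identified shadow rectangle only. The patent does not fix a brightening method, so gamma correction and its parameter are assumptions.

```python
# Sketch of step S21 under stated assumptions: raise the luminance of the
# shadow rectangle only, leaving the rest of the face image untouched.
# Gamma correction is one plausible choice, not mandated by the patent.
import numpy as np

def brighten_roi(face_bgr: np.ndarray, bbox, gamma: float = 0.5) -> np.ndarray:
    x0, y0, x1, y1 = bbox
    out = face_bgr.copy()
    roi = out[y0:y1 + 1, x0:x1 + 1].astype(np.float32) / 255.0
    # gamma < 1 lifts dark pixels proportionally more than bright ones
    out[y0:y1 + 1, x0:x1 + 1] = (np.clip(roi ** gamma, 0, 1) * 255).astype(np.uint8)
    return out
```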
  • next, the shadow correction unit 11e examines, for example, the color distribution of the brightness-corrected shadow portion and identifies image regions of similar color, thereby separating the region corresponding to human skin (skin region) from the remaining region (background region) (step S22).
  • the lower part of FIG. 6 shows an example in which the shaded area is separated into a skin area 101a and a background area 101b.
  • in the present embodiment, the shadow correction unit 11e separates the skin region and the background region by identifying regions of similar color, but the region separation method is not limited to this; another method may be used, such as detecting a boundary line by processing such as edge extraction and separating the regions based on this boundary line.
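As one possible concrete form of step S22 (the patent only says "regions with similar colors"), the sketch below clusters the brightened region's pixels into two color groups with k-means and picks the redder cluster as skin; both the clustering method and the skin heuristic are assumptions.

```python
# Sketch of step S22 under stated assumptions: split the brightened shadow
# region into skin and background by 2-cluster color k-means. Choosing the
# redder cluster as skin is a heuristic, not part of the patent.
import cv2
import numpy as np

def split_skin_background(roi_bgr: np.ndarray) -> np.ndarray:
    pixels = roi_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, 2, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    labels = labels.reshape(roi_bgr.shape[:2])
    skin = int(np.argmax(centers[:, 2]))  # BGR: channel 2 is red
    return labels == skin  # boolean mask, True where skin is assumed
```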
  • the shadow correction unit 11e acquires a portion corresponding to the boundary line between the skin area and the background area based on the separation result of the skin area and the background area in step S22 (step S23).
  • the shadow correction unit 11e then acquires some of the points on the acquired boundary line as feature points (step S24), and ends the shadow correction process.
  • for example, the shadow correction unit 11e completes the shadow correction processing by replacing the feature points included in the shadow portion, among those extracted in step S2 of the flowchart of FIG. 4, with the feature points acquired in step S24.
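Continuing the same hedged sketch, steps S23 and S24 can be read as tracing the skin/background boundary and sampling a handful of evenly spaced boundary points as the replacement contour feature points; the sampling density is an assumption.

```python
# Sketch of steps S23-S24: trace the boundary of the skin mask produced by
# the separation step and sample a few points on it as corrected feature
# points for the facial contour. The number of points is an assumption.
import cv2
import numpy as np

def boundary_feature_points(skin_mask: np.ndarray, n_points: int = 8):
    mask = skin_mask.astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return []
    border = max(contours, key=cv2.contourArea).reshape(-1, 2)
    step = max(1, len(border) // n_points)  # evenly spaced along the border
    return [tuple(int(v) for v in p) for p in border[::step][:n_points]]
```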
  • FIG. 7 is a flowchart showing a procedure for shadow correction processing for nasolabial folds or nose performed by the server device 1 according to the present embodiment.
  • the shadow correction processing shown in FIG. 7 is processing that can be performed in step S6 of the flowchart of FIG. 4. FIGS. 8 and 9 are schematic diagrams for explaining the shadow correction processing for the nasolabial folds or nose.
  • the shadow correction unit 11e of the processing unit 11 of the server device 1 performs correction to increase the brightness of the shadow portion of the face image identified in step S4 of the flowchart shown in FIG. 4 (step S31).
  • the upper part of FIG. 8 shows an example of a face image and a shadow portion 102 corresponding to the nasolabial folds identified from this face image. The lower part of FIG. 8 shows an example of the image after the correction to increase the luminance of the shadow portion 102.
  • the shadow correction unit 11e performs luminance correction only on the shadow portion specified from the face image, and does not have to perform luminance correction on portions other than the shadow portion (although it may be performed).
  • the shadow correction unit 11e compares the brightness of each pixel in the brightness-corrected shadow portion with a predetermined threshold, thereby extracting the regions whose brightness is lower than the threshold (step S32).
  • the upper part of FIG. 9 shows an example of the shadow portion 102 identified from the face image, and the lower part of FIG. 9 shows the regions extracted from this shadow portion whose brightness is lower than the threshold.
  • in the lower part of FIG. 9, the regions whose brightness is lower than the threshold are hatched; these regions can be estimated to correspond to the nasolabial folds of the face.
  • here the nasolabial folds are taken as an example, but the same applies to the nose: regions whose brightness is lower than the threshold correspond to the boundaries of the nose or to concave portions of its unevenness.
  • the shadow correction unit 11e acquires some points as feature points from the low-luminance regions extracted in step S32 (step S33), and ends the shadow correction process. For example, the shadow correction unit 11e completes the correction by replacing the feature points included in the shadow portion, among those extracted in step S2 of the flowchart of FIG. 4, with the feature points acquired in step S33.
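A short sketch of steps S32 and S33 under stated assumptions: pixels in the brightened region that remain below the threshold are treated as the fold, and a few of them, sampled top to bottom, become the corrected feature points. The sampling strategy is an assumption.

```python
# Sketch of steps S32-S33: residual dark pixels in the brightened shadow
# region approximate the nasolabial fold (or nose crease); sample a few of
# them as corrected feature points. Top-to-bottom sampling is assumed.
import numpy as np

def fold_feature_points(roi_gray: np.ndarray, threshold: float,
                        n_points: int = 5):
    ys, xs = np.where(roi_gray < threshold)
    if xs.size == 0:
        return []
    order = np.argsort(ys)  # order the dark pixels from top to bottom
    idx = order[np.linspace(0, ys.size - 1, n_points).astype(int)]
    return [(int(x), int(y)) for x, y in zip(xs[idx], ys[idx])]
```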
  • FIG. 10 is a flowchart showing a procedure for shadow correction processing for eyes or mouth performed by the server device 1 according to the present embodiment.
  • the shadow correction processing shown in FIG. 10 is processing that can be performed in step S6 of the flowchart of FIG. 4. FIGS. 11 and 12 are schematic diagrams for explaining the shadow correction processing for the eyes or mouth.
  • the shadow correction unit 11e of the processing unit 11 of the server device 1 performs correction to increase the brightness of the shadow portion of the face image identified in step S4 of the flowchart shown in FIG. 4 (step S41).
  • the upper part of FIG. 11 shows an example of a face image and a shadow portion 103 corresponding to the eyes identified from this face image. Further, the lower part of FIG. 11 shows an example of the image after the correction for increasing the brightness of the shaded portion 103 is performed.
  • the shadow correction unit 11e performs luminance correction only on the shadow portion specified from the face image, and does not have to perform luminance correction on portions other than the shadow portion (although it may be performed).
  • the shadow correction unit 11e extracts a closed curve surrounding the eyes or mouth from the brightness-corrected shadow portion (step S42). At this time, the shadow correction unit 11e can, for example, extract edge pixels from the image, find portions where the edge pixels connect into a curve, and extract the portions where that curve closes. The closed curve surrounding the eyes or mouth can also be extracted using active contour methods such as Snakes or Level Set; since the active contour method is an existing technique, a detailed description is omitted. Note that these closed-curve extraction methods are only examples, and the shadow correction unit 11e may extract the closed curve by any method.
  • next, the shadow correction unit 11e detects corners of the closed curve extracted in step S42 (step S43). At this time, the shadow correction unit 11e can detect bent portions of the closed curve and judge a portion to be a corner when its interior angle is smaller than a predetermined angle (e.g., 90° or 60°). Note that this corner detection method is an example, and the shadow correction unit 11e may detect corners by any method.
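The sketch below approximates steps S42 and S43 with standard OpenCV primitives rather than the active contour methods named above: the largest closed contour stands in for the eye or mouth outline, and a vertex whose interior angle falls below the limit is reported as a corner. This substitution and all parameter values are assumptions.

```python
# Sketch of steps S42-S43 with assumed parameters: find a closed outline in
# the brightened region (contours stand in for the Snakes/Level Set methods
# the text mentions) and flag vertices with sharp interior angles as corners.
import cv2
import numpy as np

def corner_points(roi_gray: np.ndarray, max_angle_deg: float = 90.0):
    edges = cv2.Canny(roi_gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    outline = max(contours, key=cv2.contourArea)
    pts = cv2.approxPolyDP(outline, 2.0, True).reshape(-1, 2).astype(np.float32)
    corners = []
    for i in range(len(pts)):
        a, b, c = pts[i - 1], pts[i], pts[(i + 1) % len(pts)]
        v1, v2 = a - b, c - b
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
        if angle < max_angle_deg:  # e.g. the canthi of an eye outline
            corners.append((float(b[0]), float(b[1])))
    return corners
```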
  • the upper part of FIG. 12 shows a closed curve 104 detected from the shadow portion 103 corresponding to the eye, and a corner portion 105 of this closed curve 104. The lower part of FIG. 12 shows a closed curve 104 detected from the shadow portion 103 corresponding to the mouth, and a corner portion 105 of this closed curve 104.
  • the shadow correction unit 11e acquires the point corresponding to the corner detected in step S43 as a feature point (step S44), and ends the shadow correction process.
  • for example, the shadow correction unit 11e completes the shadow correction processing by replacing the feature points included in the shadow portion, among those extracted in step S2 of the flowchart of FIG. 4, with the feature points acquired in step S44.
  • as described above, the server device 1 according to the present embodiment acquires a photographed face image of the subject from the terminal device 3, extracts facial feature points from the acquired image, determines whether a shadow is present in the face image, and corrects the facial feature points in the shadow portion when a shadow is present. As a result, the information processing system according to the present embodiment can be expected to suppress deterioration in the accuracy of extracting facial feature points caused by shadows in the photographed face image, and thereby to suppress deterioration in the accuracy of determining health conditions and the like based on the facial feature points.
  • the server device 1 also determines whether correction is necessary according to which part of the face the shadow portion corresponds to, that is, according to the position of the shadow portion with respect to the face. As a result, the server device 1 performs correction when the shadow portion corresponds to a part that affects facial feature point extraction and skips correction for parts that do not, which can be expected to reduce the load of the correction processing.
  • the server device 1 corrects facial feature points related to the outline of the face included in the shaded portion.
  • in this correction, the server device 1 separates the face portion (skin region) and the background portion (background region) included in the shadow portion, extracts the boundary line between them, and performs the correction by using points on the boundary line as facial feature points.
  • the server apparatus 1 can be expected to accurately extract facial feature points even from a facial image in which shadows are produced on the outline of the face and its surroundings.
  • the server device 1 corrects feature points related to nasolabial folds or noses of the face included in the shaded portion.
  • the server device 1 performs correction by extracting a portion in which the brightness of each pixel included in the shaded portion is smaller than a threshold, and using points included in the extracted portion as facial feature points.
  • the server device 1 can be expected to extract facial feature points with high accuracy even in a face image in which the nasolabial folds or the nose and its surroundings are shaded.
  • the server device 1 also corrects feature points related to the eyes or mouth included in the shadow portion. At this time, the server device 1 extracts a closed curve surrounding the eyes or mouth included in the shadow portion, detects the corners of the extracted closed curve, and performs the correction by using points on the detected corners as facial feature points. As a result, the server device 1 can be expected to accurately extract facial feature points even from a face image in which the eyes or mouth and their surroundings are shaded.
  • the server device 1 determines the health level of the subject depicted in the face image based on the feature points extracted from the face image and the feature points corrected as necessary.
  • as the health level to be determined, for example, the presence and degree of the subject's facial paralysis, the subject's degree of fatigue, stress level, degree of positive (negative) emotions, or the presence and degree of signs of stroke can be adopted.
  • the server device 1 notifies, for example, the terminal device 3 of the determination result of the health level. Accordingly, the user can obtain information about the health level of the target person, such as himself or his family, by a simple operation of photographing and transmitting a face image.
  • the server device 1 stores in the storage unit 12 the information on the facial feature points extracted from the subject's face image and on the facial feature points corrected for the shadow portion as necessary.
  • the server device 1 may, for example, compare past facial feature points stored in the storage unit 12 with the latest facial feature points extracted from the face image received from the terminal device 3, and determine the health level based on the presence or degree of change in the facial feature points.
  • the server apparatus 1 can be expected to perform more accurate health level determination based on the facial feature point extraction results obtained at a plurality of points in time.
  • in the present embodiment, the server device 1 determines the health level based on the subject's face image photographed by the camera 36 or the like of the terminal device 3, but information obtained from other sensors may be used together with the face image to correct the shadow portion and determine the health level.
  • for example, the terminal device 3 may be equipped with a sensor such as a depth sensor that measures the surface shape of the subject's face, and may transmit the measured surface shape information to the server device 1 together with the face image photographed by the camera 36.
  • the server apparatus 1 can correct the facial feature points related to the shadow portion based on information on the surface shape of the face measured by the sensor.
  • alternatively, the terminal device 3 may be equipped with a sensor that detects infrared light, detect infrared light from the subject's face, and transmit the detected infrared information to the server device 1 together with the face image photographed by the camera 36.
  • the server device 1 can correct the facial feature points related to the shadow portion based on the infrared light information detected by the sensor.
  • (Appendix 1) An information processing method in which an information processing device acquires a face image, extracts facial feature points from the acquired face image, determines whether a shadow is present in the face image, and corrects the facial feature points in the shadowed portion when it is determined that a shadow is present.
  • (Appendix 2) The information processing method according to Appendix 1, wherein whether correction is necessary is determined according to the position of the shadowed portion with respect to the face, and the facial feature points in the shadowed portion are corrected when correction is determined to be necessary.
  • (Appendix 3) The information processing method according to Appendix 1 or Appendix 2, wherein facial feature points related to the contour of the face included in the shadowed portion are corrected.
  • (Appendix 11) The information processing method according to Appendix 9 or Appendix 10, wherein the facial feature points are stored in a storage unit and the health level is determined based on changes over time in the plurality of stored feature points.
  • (Appendix 12) A computer program for causing a computer to execute processing of acquiring a face image, extracting facial feature points from the acquired face image, determining whether a shadow is present in the face image, and correcting the facial feature points in the shadowed portion when it is determined that a shadow is present.
  • An information processing apparatus comprising a correction unit that corrects the facial feature points in the shadowed portion when it is determined that a shadow is present.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Public Health (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Epidemiology (AREA)
  • Dentistry (AREA)
  • Primary Health Care (AREA)
  • Databases & Information Systems (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Image Processing (AREA)

Abstract

Provided are an information processing method, a computer program, and an information processing device which can be expected to suppress a drop in the determination accuracy of a health condition or the like due to a shadow included in an image which captures a face. By the information processing method according to this embodiment, an information processing device acquires a face image, extracts facial feature points from the acquired face image, determines whether there is a shadow on the face image, and corrects the facial feature points of the shadowed portion if a determination was made that there is a shadow. The information processing device determines whether a correction is necessary according to the position of the shadowed portion with respect to the face, and if a correction is determined to be necessary, may correct the facial feature points of the shadowed portion.

Description

情報処理方法、コンピュータプログラム及び情報処理装置Information processing method, computer program and information processing apparatus
 本発明は、カメラ等で撮影した人の顔の画像を扱う情報処理方法、コンピュータプログラム及び情報処理装置に関する。 The present invention relates to an information processing method, a computer program, and an information processing apparatus for handling an image of a person's face taken with a camera or the like.
 スマートフォン又はタブレット型端末装置等のようなカメラを搭載した機器が広く普及している。近年では、カメラで撮影した人の画像に基づいて、この人の健康状態等を判定する技術が研究、開発されている。 Devices equipped with cameras, such as smartphones or tablet terminals, are widely used. In recent years, research and development have been made on techniques for determining the health condition of a person based on an image of the person captured by a camera.
 特許文献1においては、ユーザ端末にて撮影したユーザの画像をサーバへ送信し、この画像に含まれる顔画像に基づいてサーバがユーザの健康状態を示す健康度を算出してユーザ端末へ送信し、ユーザ端末が健康度を出力する健康状態判定システムが提案されている。この健康状態判定システムでは、サーバが健康度、肌の状態及び美顔率を時系列に示したグラフ、健康度との相関関係を示す相関画像、アドバイス情報、並びに、顔画像の動画等を作成し、ユーザ端末に提供する。 In Patent Literature 1, an image of a user captured by a user terminal is transmitted to a server, and the server calculates a health level indicating the health condition of the user based on the face image included in the image and transmits the health level to the user terminal. , a health condition determination system in which a user terminal outputs a health level has been proposed. In this health condition determination system, the server creates graphs showing the health level, skin condition, and facial beauty rate in chronological order, correlation images showing the correlation with the health level, advice information, facial image videos, etc. , to the user terminal.
特開2020-52505号公報Japanese Patent Application Laid-Open No. 2020-52505
 ユーザの顔を撮影した画像に基づいて健康状態等を判定する技術では、撮影時の周囲の環境が判定精度に影響を与える虞がある。例えば、外来光又は室内照明等によりユーザの顔に陰影が生じる場合があり、顔を撮影した画像に含まれる陰影が健康状態等の判定精度を低下させる虞がある。 With technology that determines the health condition, etc., based on an image of the user's face, there is a risk that the surrounding environment at the time of shooting will affect the determination accuracy. For example, shadows may appear on the user's face due to external light, indoor lighting, or the like, and the shadows included in the photographed image of the face may reduce the accuracy of determining the health condition or the like.
 本発明は、斯かる事情に鑑みてなされたものであって、その目的とするところは、顔を撮影した画像に含まれる陰影による健康状態等の判定精度の低下を抑制することが期待できる情報処理方法、コンピュータプログラム及び情報処理装置を提供することにある。 The present invention has been made in view of such circumstances, and its object is to provide information that can be expected to suppress deterioration in the accuracy of determining health conditions, etc. due to shadows contained in images of faces. An object of the present invention is to provide a processing method, a computer program, and an information processing apparatus.
 一実施形態に係る情報処理方法は、情報処理装置が、顔画像を取得し、取得した前記顔画像から顔特徴点を抽出し、前記顔画像に陰影が生じているか否かを判定し、陰影が生じていると判定した場合に、陰影部分の顔特徴点を補正する。 An information processing method according to one embodiment includes an information processing apparatus that acquires a facial image, extracts facial feature points from the acquired facial image, determines whether or not a shadow is present in the facial image, and extracts a shadow. When it is determined that , the facial feature points in the shaded portion are corrected.
According to this embodiment, it can be expected that a drop in the accuracy of determining a health condition or the like due to shadows included in a photographed image of a face is suppressed.
FIG. 1 is a schematic diagram for explaining an overview of the information processing system according to the present embodiment.
FIG. 2 is a block diagram showing the configuration of the server device according to the present embodiment.
FIG. 3 is a block diagram showing the configuration of the terminal device according to the present embodiment.
FIG. 4 is a flowchart showing the procedure of the health level determination process performed by the server device according to the present embodiment.
FIG. 5 is a flowchart showing the procedure of the shadow correction process for the contour of the face performed by the server device according to the present embodiment.
FIG. 6 is a schematic diagram for explaining the shadow correction process for the contour of the face.
FIG. 7 is a flowchart showing the procedure of the shadow correction process for the nasolabial folds or the nose performed by the server device according to the present embodiment.
FIG. 8 is a schematic diagram for explaining the shadow correction process for the nasolabial folds or the nose.
FIG. 9 is a schematic diagram for explaining the shadow correction process for the nasolabial folds or the nose.
FIG. 10 is a flowchart showing the procedure of the shadow correction process for the eyes or the mouth performed by the server device according to the present embodiment.
FIG. 11 is a schematic diagram for explaining the shadow correction process for the eyes or the mouth.
FIG. 12 is a schematic diagram for explaining the shadow correction process for the eyes or the mouth.
A specific example of the information processing system according to an embodiment of the present invention will be described below with reference to the drawings. The present invention is not limited to these examples; it is defined by the scope of the claims and is intended to include all modifications within the meaning and scope of equivalents of the claims.
<System Configuration>
FIG. 1 is a schematic diagram for explaining an overview of the information processing system according to the present embodiment. The information processing system according to the present embodiment analyzes a subject's face image to determine the subject's health condition, and includes a server device 1, a terminal device 3, and the like. A user of this system photographs a person's face using a terminal device 3 such as a smartphone or a tablet terminal device, and transmits the captured face image from the terminal device 3 to the server device 1. In FIG. 1 the user photographs his or her own face with the terminal device 3, but this is not a limitation; the user may use the terminal device 3 to photograph another person's face.
Upon receiving the face image transmitted by the terminal device 3, the server device 1 determines the subject's health level by analyzing the features of the face image, and transmits the determined health level and related information to the terminal device 3. Based on the face image, the server device 1 can determine as the health level, for example, the subject's degree of fatigue, stress level, degree to which the emotion is positive (or negative), presence and degree of facial paralysis, or presence and degree of signs of stroke. The health level determined by the server device 1 is not limited to these examples, and various indices related to the subject's health may be adopted.
The server device 1 extracts facial feature points (so-called keypoints, landmarks, or the like) from the face image using, for example, a facial feature point extraction model trained in advance by machine learning, and determines the health level based on the extracted facial feature points. In the information processing system according to the present embodiment, the server device 1 corrects the facial feature point extraction results for shadowed portions of the face image so that shadows occurring in the face image photographed by the user with the terminal device 3 do not degrade the accuracy of the health level determination. For example, the server device 1 determines whether the face image contains a shadowed portion, identifies its position and extent when it does, and corrects the facial feature points included in the shadowed portion, thereby suppressing a drop in the accuracy of the health level determination based on the facial feature points.
<Device Configuration>
FIG. 2 is a block diagram showing the configuration of the server device 1 according to the present embodiment. The server device 1 according to the present embodiment includes a processing unit 11, a storage unit (storage) 12, a communication unit (transceiver) 13, and the like. In the present embodiment the processing is described as being performed by a single server device, but it may be distributed across a plurality of server devices.
The processing unit 11 is configured using an arithmetic processing device such as a CPU (Central Processing Unit), an MPU (Micro-Processing Unit), a GPU (Graphics Processing Unit), or a quantum processor, together with a ROM (Read Only Memory), a RAM (Random Access Memory), and the like. By reading and executing the server program 12a stored in the storage unit 12, the processing unit 11 performs various processes such as acquiring the subject's face image from the terminal device 3, extracting facial feature points from the acquired face image, correcting the facial feature points in shadowed portions, and determining the subject's health level from the facial feature points.
The storage unit 12 is configured using a large-capacity storage device such as a hard disk. The storage unit 12 stores various programs executed by the processing unit 11 and various data required for the processing of the processing unit 11. In the present embodiment, the storage unit 12 stores the server program 12a executed by the processing unit 11. The storage unit 12 also stores a facial feature point extraction model 12b for extracting facial feature points from a face image, and is provided with a facial feature point DB (database) 12c that stores information on the extracted facial feature points.
In the present embodiment, the server program (program product) 12a is provided in a form recorded on a recording medium 99 such as a memory card or an optical disc, and the server device 1 reads the server program 12a from the recording medium 99 and stores it in the storage unit 12. However, the server program 12a may, for example, be written into the storage unit 12 at the manufacturing stage of the server device 1. Alternatively, the server device 1 may obtain, by communication, a server program 12a distributed by another remote server device or the like, or a writing device may read the server program 12a recorded on the recording medium 99 and write it into the storage unit 12 of the server device 1. The server program 12a may thus be provided in the form of distribution via a network or in the form of being recorded on the recording medium 99.
The facial feature point extraction model 12b is a learning model trained in advance by machine learning so as to extract and output the subject's facial feature points in response to the input of the subject's face image. As the facial feature point extraction model 12b, for example, a trained model that extracts facial feature points using the Open Pose technique is used. The facial feature point extraction model 12b is not limited to an Open Pose learning model, and various learning models that extract facial feature points by other techniques may be adopted. Since the extraction of facial feature points using a machine-learned model is an existing technique, details such as the configuration and generation method of the learning model are omitted.
The facial feature point DB 12c is a database that stores and accumulates information on the facial feature points extracted from subjects' face images. The facial feature point DB 12c stores, in association with one another, for example, identification information such as the subject's name or ID, the subject's face image, the date and time the face image was taken, the information on the facial feature points extracted from the face image, and the health level determination result based on those facial feature points. The server device 1 can, for example, compare the facial feature points extracted from the latest face image with the facial feature points from a predetermined period earlier, such as one month or one year ago, and determine the health level based on the change in the facial feature points over that period.
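The embodiment does not specify a storage layout for the facial feature point DB 12c; the following is a minimal sketch of such a database using SQLite, in which the table name, column names, and value formats are all assumptions made for illustration.

    import json
    import sqlite3

    conn = sqlite3.connect("face_points.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS face_points (
        subject_id TEXT,   -- name or ID of the subject
        taken_at   TEXT,   -- date and time the face image was taken
        image      BLOB,   -- the face image itself
        points     TEXT,   -- extracted (and corrected) feature points as JSON
        health     REAL    -- health level determination result
    )""")

    def save_record(subject_id, taken_at, image_bytes, points, health):
        # points: list of (x, y) tuples; serialized as JSON for storage.
        conn.execute("INSERT INTO face_points VALUES (?, ?, ?, ?, ?)",
                     (subject_id, taken_at, image_bytes, json.dumps(points), health))
        conn.commit()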
The communication unit 13 communicates with various devices via a network N including a mobile phone communication network, a wireless LAN (Local Area Network), the Internet, and the like. In the present embodiment, the communication unit 13 communicates with one or more terminal devices 3 via the network N. The communication unit 13 transmits data given from the processing unit 11 to other devices and gives data received from other devices to the processing unit 11.
The storage unit 12 may be an external storage device connected to the server device 1. The server device 1 may be a multicomputer including a plurality of computers, or may be a virtual machine virtually constructed by software. The server device 1 is not limited to the above configuration, and may include, for example, a reading unit that reads information stored on a portable storage medium, an input unit that receives operation inputs, or a display unit that displays images.
In the server device 1 according to the present embodiment, the processing unit 11 reads and executes the server program 12a stored in the storage unit 12, whereby a face image acquisition unit 11a, a facial feature point extraction unit 11b, a shadow determination unit 11c, a shadow portion specifying unit 11d, a shadow correction unit 11e, a health level determination unit 11f, a notification unit 11g, and the like are implemented in the processing unit 11 as software functional units. In FIG. 2, only the functional units of the processing unit 11 related to determining the health level from the face image are illustrated, and functional units related to other processes are omitted.
The face image acquisition unit 11a performs a process of acquiring an image of the subject's face from the terminal device 3. The face image acquisition unit 11a communicates with the terminal device 3 through the communication unit 13, receives the face image data transmitted from the terminal device 3, and stores it in the storage unit 12. When the image acquired from the terminal device 3 captures, for example, the subject's whole body, the face image acquisition unit 11a may perform a process of extracting the image region corresponding to the subject's face from that image. In this case, the face image acquisition unit 11a can acquire the subject's face image by performing face detection on the image acquired from the terminal device 3 and cutting out the image region containing the detected face from the original image. Since face detection is an existing technique, a detailed description is omitted.
The facial feature point extraction unit 11b performs a process of extracting, from the face image acquired by the face image acquisition unit 11a, a plurality of points indicating facial features as feature points (keypoints, landmarks, or the like). In the present embodiment, the facial feature point extraction unit 11b extracts facial feature points from the face image using the facial feature point extraction model 12b stored in the storage unit 12. The facial feature point extraction unit 11b inputs the subject's face image to the facial feature point extraction model 12b and acquires the facial feature point information that the model outputs in response, thereby extracting a plurality of facial feature points from the face image. The facial feature point extraction model 12b outputs, for example, the coordinates of the facial feature points in the input face image, and the number of facial feature points it outputs is, for example, several to several hundred.
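The embodiment's own extraction model is not published; purely as a stand-in, the sketch below obtains facial landmark coordinates with MediaPipe FaceMesh, assuming an OpenCV BGR image as input. The model choice and the function name are illustrative assumptions, not the embodiment's actual model.

    import cv2
    import mediapipe as mp

    def extract_landmarks(face_img):
        # Returns a list of (x, y) pixel coordinates, one per detected landmark.
        h, w = face_img.shape[:2]
        with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                             max_num_faces=1) as mesh:
            result = mesh.process(cv2.cvtColor(face_img, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            return []
        return [(lm.x * w, lm.y * h)
                for lm in result.multi_face_landmarks[0].landmark]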
The shadow determination unit 11c performs a process of determining whether the face image acquired by the face image acquisition unit 11a contains a shadow. The shadow determination unit 11c, for example, compares the luminance (or brightness or the like) of each pixel constituting the face image with a predetermined threshold and determines whether any pixel has a luminance lower (darker) than the threshold, thereby determining whether the face image contains a shadow. The threshold used for the shadow determination may be set, for example, at the design stage of the system, or may be calculated based on, for example, the average luminance of the entire face image.
The shadow portion specifying unit 11d performs a process of specifying, as a shadow portion, the image region of the face image that contains a shadow. Like the shadow determination unit 11c, the shadow portion specifying unit 11d compares the luminance of each pixel of the face image with a predetermined threshold and identifies the pixels whose luminance is lower than the threshold. The threshold used for this process may be the same as the one used by the shadow determination unit 11c; alternatively, the shadow determination unit 11c may perform this process and the shadow portion specifying unit 11d may acquire and use its result. The shadow portion specifying unit 11d specifies a rectangular image region containing the pixels whose luminance is lower than the threshold, and specifies this image region as the shadow portion. The shadow portion specifying unit 11d may specify a plurality of shadow portions from the face image.
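A minimal sketch of the shadow determination and rectangle specification, assuming a grayscale luminance comparison and a threshold derived from the mean image luminance; the 0.6 factor is an arbitrary illustrative choice, not a value given in the embodiment.

    import cv2
    import numpy as np

    def find_shadow_rect(face_img):
        # Returns (x0, y0, x1, y1) of the rectangle enclosing below-threshold
        # pixels, or None when the image is judged to contain no shadow.
        gray = cv2.cvtColor(face_img, cv2.COLOR_BGR2GRAY)
        threshold = 0.6 * gray.mean()          # assumed threshold rule
        ys, xs = np.nonzero(gray < threshold)  # pixels darker than the threshold
        if xs.size == 0:
            return None
        return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())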
The shadow portion specifying unit 11d also specifies which part of the subject's face the shadow portion included in the face image corresponds to. For example, the shadow portion specifying unit 11d compares the coordinates of the facial feature points extracted by the facial feature point extraction unit 11b with the coordinate range of the shadow portion, and determines which facial features the feature points located within or near the shadow portion represent, thereby specifying which part of the face the shadow portion corresponds to. In the present embodiment, the shadow portion specifying unit 11d specifies whether the shadow portion corresponds to the contour of the face, the nasolabial folds, the nose, the eyes, or the mouth, or to none of these parts.
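One way to realize this mapping is to test which landmark indices fall inside the shadow rectangle. The index table below is a hypothetical placeholder loosely following a common 68-point layout (the nasolabial folds have no standard index range and are omitted); the actual numbering depends on the extraction model in use.

    # Hypothetical mapping from face part to landmark indices of the model in use.
    PART_INDICES = {"contour": range(0, 17), "nose": range(27, 36),
                    "eye": range(36, 48), "mouth": range(48, 68)}

    def part_of_shadow(rect, landmarks):
        # Returns the face part whose landmarks overlap the shadow rectangle,
        # or None when the shadow touches none of the listed parts.
        x0, y0, x1, y1 = rect
        inside = {i for i, (x, y) in enumerate(landmarks)
                  if x0 <= x <= x1 and y0 <= y <= y1}
        for part, indices in PART_INDICES.items():
            if inside & set(indices):
                return part
        return None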
The shadow correction unit 11e performs a process of correcting the facial feature points included in the shadow portion specified by the shadow portion specifying unit 11d. In the present embodiment, the shadow correction unit 11e corrects the shadow portion using a correction method determined according to which part of the face the shadow portion corresponds to. In the present embodiment there are three correction methods: one for a shadow portion corresponding to the contour of the face, one for a shadow portion corresponding to the nasolabial folds or the nose, and one for a shadow portion corresponding to the eyes or the mouth; the shadow correction unit 11e selects from these the method appropriate to the shadow portion and performs the correction. The details of each correction method will be described later.
The health level determination unit 11f performs a process of determining the subject's health level based on the plurality of facial feature points extracted from the face image and corrected for shadow portions. The health level determination unit 11f according to the present embodiment determines, for example, the presence or absence and degree of facial paralysis of the subject. The designer or the like of the information processing system according to the present embodiment collects information on the facial feature points of people with symptoms of facial paralysis, determines in advance the conditions for judging whether facial paralysis is present, and stores these judgment conditions in the server device 1. Based on the pre-stored judgment conditions, the health level determination unit 11f determines whether the subject's facial feature points exhibit the characteristics of facial paralysis, and outputs the presence or absence of facial paralysis or its degree as the determination result. The health level determination unit 11f outputs as the determination result, for example, a decimal value in the range from 0 (no facial paralysis characteristics) to 1 (facial paralysis characteristics present).
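The embodiment leaves the concrete judgment conditions open. Purely as an illustration, the sketch below scores left-right asymmetry, a common facial-paralysis cue, from the vertical offset of mirrored landmark pairs; the pair list and the normalization by 5% of the face height are assumptions, not the embodiment's stored conditions.

    import numpy as np

    # Hypothetical mirrored landmark pairs (left index, right index),
    # e.g. mouth corners and outer eye corners in a 68-point layout.
    MIRROR_PAIRS = [(48, 54), (36, 45)]

    def droop_score(landmarks, face_height):
        # 0.0 = symmetric, 1.0 = strongly asymmetric (clipped).
        offsets = [abs(landmarks[l][1] - landmarks[r][1])
                   for l, r in MIRROR_PAIRS]
        return float(np.clip(np.mean(offsets) / (0.05 * face_height), 0.0, 1.0))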
The information processing system according to the present embodiment aims at accurately extracting facial feature points that can be used for various health level determinations, and the extracted facial feature points may be used for any kind of health level determination. For this reason, a detailed description of health level determination methods, such as the method by which the health level determination unit 11f determines the presence or absence and degree of facial paralysis, is omitted in the present embodiment.
The health level determination unit 11f may determine the presence or absence or the degree of facial paralysis from the subject's facial feature points using a learning model trained in advance by machine learning. The learning model is trained, for example, on teacher data in which facial feature point information is associated with information on the presence or absence of facial paralysis, and outputs the presence or absence or the degree of facial paralysis in response to the input of facial feature points.
The health level determination unit 11f may also determine, for example, the degree of progression or improvement of the facial paralysis symptoms based on the facial feature point information stored in the facial feature point DB 12c of the storage unit 12. Furthermore, based on the time-series facial feature point information stored in the facial feature point DB 12c, the health level determination unit 11f may predict the degree of facial paralysis at a future point in time, for example one month or one year ahead. Although in the present embodiment the health level determination unit 11f determines the presence or absence or the degree of the subject's facial paralysis, this is not a limitation; it may, for example, determine the subject's degree of fatigue, stress level, emotion, or the presence or absence of stroke as the health level.
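The embodiment does not prescribe a prediction method. As one naive possibility, a linear least-squares trend could be fitted to the stored time-series determination results and extrapolated, as sketched below; the linear model, the day-based time axis, and the clipping to [0, 1] are all assumptions.

    import numpy as np

    def predict_future_score(days, scores, horizon_days=30):
        # days: elapsed days of past determinations; scores: results in [0, 1].
        slope, intercept = np.polyfit(days, scores, 1)  # linear trend
        predicted = slope * (days[-1] + horizon_days) + intercept
        return float(np.clip(predicted, 0.0, 1.0))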
The notification unit 11g performs a process of notifying the user of the health level determination result produced by the health level determination unit 11f. The notification unit 11g may notify, for example, when an abnormality is detected in the subject's health level, may notify when an improvement in the subject's health level is detected, or may notify the determination result regardless of the presence or absence of an abnormality or improvement. In the present embodiment, for example, when the health level determination unit 11f determines that the subject shows signs of facial paralysis, the notification unit 11g notifies the user to that effect. The notification unit 11g transmits a message conveying the determination result to one or more notification destinations preset in association with the subject. In the present embodiment, the notification unit 11g notifies the terminal device 3 of the user who transmitted the face image. However, the notification destination is not limited to that terminal device 3; it may be the terminal device 3 of a different user (for example, a family member of the user or the doctor in charge), or a device other than a terminal device 3.
FIG. 3 is a block diagram showing the configuration of the terminal device 3 according to the present embodiment. The terminal device 3 according to the present embodiment includes a processing unit 31, a storage unit (storage) 32, a communication unit (transceiver) 33, a display unit (display) 34, an operation unit 35, and the like. The terminal device 3 is a device used by a user such as a subject who wants a health level determination or a person related to the subject, such as a family member or the doctor in charge, and can be configured using an information processing device such as a smartphone, a tablet terminal device, or a personal computer. The terminal device 3 need not be a portable device and may be, for example, a device such as an AI speaker or a surveillance camera installed in the subject's home. It is desirable that the terminal device 3 be equipped with a camera for photographing the subject's face, but it may be a device without a camera, such as a personal computer that acquires, by communication or other means, a face image of the subject taken with a digital camera or the like.
The processing unit 31 is configured using an arithmetic processing device such as a CPU or an MPU, a ROM, a RAM, and the like. By reading and executing the program 32a stored in the storage unit 32, the processing unit 31 performs various processes such as photographing the subject, transmitting the captured image to the server device 1 to request a health level determination, and acquiring the health level determination result from the server device 1 and displaying it to the user.
The storage unit 32 is configured using, for example, a nonvolatile memory element such as a flash memory, or a storage device such as a hard disk. The storage unit 32 stores various programs executed by the processing unit 31 and various data required for the processing of the processing unit 31. In the present embodiment, the storage unit 32 stores the program 32a executed by the processing unit 31. In the present embodiment, the program (program product) 32a is distributed by a remote server device or the like, and the terminal device 3 acquires it by communication and stores it in the storage unit 32. However, the program 32a may, for example, be written into the storage unit 32 at the manufacturing stage of the terminal device 3. Alternatively, the terminal device 3 may read the program 32a recorded on a recording medium 98 such as a memory card or an optical disc and store it in the storage unit 32, or a writing device may read the program 32a recorded on the recording medium 98 and write it into the storage unit 32 of the terminal device 3. The program 32a may thus be provided in the form of distribution via a network or in the form of being recorded on the recording medium 98.
The communication unit 33 communicates with various devices via the network N, which includes a mobile phone communication network, a wireless LAN, the Internet, and the like. In the present embodiment, the communication unit 33 communicates with the server device 1 via the network N. The communication unit 33 transmits data given from the processing unit 31 to other devices and gives data received from other devices to the processing unit 31.
The display unit 34 is configured using a liquid crystal display or the like and displays various images, characters, and the like based on the processing of the processing unit 31. The operation unit 35 receives user operations and notifies the processing unit 31 of the received operations. The operation unit 35 receives user operations through, for example, an input device such as mechanical buttons or a touch panel provided on the surface of the display unit 34. The operation unit 35 may also be an input device such as a mouse and a keyboard, and these input devices may be detachable from the terminal device 3. The camera 36 is configured using an image sensor such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor and optical elements such as lenses, and gives the image data obtained by shooting to the processing unit 31.
In the terminal device 3 according to the present embodiment, the processing unit 31 reads and executes the program 32a stored in the storage unit 32, whereby a photographing processing unit 31a, a display processing unit 31b, and the like are implemented in the processing unit 31 as software functional units. The program 32a may be a program dedicated to the information processing system according to the present embodiment, or a general-purpose program such as an Internet browser or a web browser.
The photographing processing unit 31a performs processing related to photographing the subject's face image with the camera 36. The photographing processing unit 31a, for example, performs shooting with the camera 36 in response to the user's operation on the operation unit 35 and acquires the data of the captured image. At this time, the photographing processing unit 31a may display, for example, a message or a shooting guide to help the user capture an appropriate face image. The photographing processing unit 31a transmits the image (face image) obtained by shooting to the server device 1 and requests a health level determination.
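The transport between terminal and server is not specified in the embodiment. A simple possibility is an HTTP multipart upload, sketched below with an entirely hypothetical endpoint URL, form field names, and response format.

    import requests

    def request_health_determination(image_path, subject_id):
        # Uploads the captured face image and asks the server for a determination.
        with open(image_path, "rb") as f:
            response = requests.post(
                "https://example.com/api/health",  # hypothetical endpoint
                files={"image": ("face.jpg", f, "image/jpeg")},
                data={"subject_id": subject_id},
                timeout=30)
        response.raise_for_status()
        return response.json()  # e.g. {"health": 0.12, ...} (assumed format)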
The display processing unit 31b performs a process of displaying, on the display unit 34, the information on the health level determination result received from the server device 1. The display processing unit 31b may display, for example, the presence or absence or the degree of facial paralysis determined by the server device 1; it may notify the user by push notification or the like when a determination result indicating facial paralysis characteristics is obtained; it may display the time-series change in the degree of facial paralysis as a graph or the like; and it may perform various other displays.
<Shadow Correction Processing>
FIG. 4 is a flowchart showing the procedure of the health level determination process performed by the server device 1 according to the present embodiment. The face image acquisition unit 11a of the processing unit 11 of the server device 1 according to the present embodiment communicates with the terminal device 3 through the communication unit 13 and receives the face image data transmitted by the terminal device 3, thereby acquiring the subject's face image (step S1). The facial feature point extraction unit 11b of the processing unit 11 inputs the face image acquired in step S1 to the facial feature point extraction model 12b stored in the storage unit 12 and acquires the facial feature point information output by the facial feature point extraction model 12b, thereby extracting feature points from the subject's face image (step S2).
The shadow determination unit 11c of the processing unit 11 determines whether the face image acquired in step S1 contains a shadow (step S3). At this time, the shadow determination unit 11c determines whether the face image contains a shadow by comparing the luminance of each pixel constituting the face image with the threshold and determining whether any pixel has a luminance lower than the threshold. When it determines that the face image contains no shadow (S3: NO), the shadow determination unit 11c advances the process to step S7.
When it is determined that the face image contains a shadow (S3: YES), the shadow portion specifying unit 11d of the processing unit 11 specifies the image region of the face image containing the shadow as the shadow portion and specifies which part of the subject's face this shadow portion corresponds to (step S4). At this time, the shadow portion specifying unit 11d specifies which part of the face the shadow portion corresponds to based on, for example, the positional relationship between the region specified as the shadow portion and the coordinates of the facial feature points extracted in step S2.
The shadow portion specifying unit 11d determines whether the shadow portion specified in step S4 requires correction (step S5). In the present embodiment, the shadow portion specifying unit 11d determines that correction is necessary when the shadow portion of the face image corresponds to the subject's facial contour, nasolabial folds, nose, eyes, or mouth, and that correction is unnecessary when it corresponds to none of these parts. When it determines that correction is unnecessary (S5: NO), the shadow portion specifying unit 11d advances the process to step S7.
When it is determined that correction is necessary (S5: YES), the shadow correction unit 11e of the processing unit 11 performs correction processing on the shadow portion of the face image specified by the shadow portion specifying unit 11d (step S6). At this time, the shadow correction unit 11e corrects the shadow portion using the correction method determined according to whether the shadow portion corresponds to the contour of the face, to the nasolabial folds or the nose, or to the eyes or the mouth.
The health level determination unit 11f of the processing unit 11 performs a process of determining the subject's health level based on the facial feature points extracted from the subject's face image and, where applicable, corrected for the shadow portion (step S7). In the present embodiment, the health level determination unit 11f determines the presence or absence and degree of facial paralysis based on the subject's facial feature points, but this is not a limitation; the health level determination unit 11f may determine various health levels such as the subject's degree of fatigue, stress level, degree to which the emotion is positive (or negative), or the presence or absence and degree of signs of stroke. The health level determination unit 11f stores the information on the facial feature points extracted from the subject's face image and, where applicable, corrected for the shadow portion, together with the health level determination result of step S7, in the facial feature point DB 12c of the storage unit 12 (step S8).
The notification unit 11g of the processing unit 11 notifies the health level determination result by transmitting the information on the subject's health level determined in step S7 to the destination preset for that subject (step S9), and the process ends. For example, the server device 1 stores in a database or the like identification information such as the subject's name or ID in association with information such as the e-mail address to which the health level determination result is to be sent. When transmitting the subject's face image to the server device 1, the terminal device 3 transmits the subject's identification information together with the face image. Based on the subject's identification information received with the face image, the notification unit 11g of the server device 1 can obtain from the database the destination to which the health level determination result should be sent and transmit the result to that destination.
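Tying the flowchart together, the following sketch strings steps S1 to S9 into one server-side routine. It reuses the illustrative helpers sketched above (extract_landmarks, find_shadow_rect, part_of_shadow, droop_score, save_record); correct_shadow (see the three correction sketches below), now_iso, encode, and notify are likewise placeholder names, not the embodiment's API.

    def health_determination(face_img, subject_id, notify):
        # face_img is the image received from the terminal device (step S1).
        points = extract_landmarks(face_img)                 # step S2
        if not points:
            return None
        rect = find_shadow_rect(face_img)                    # step S3: shadow present?
        if rect is not None:
            part = part_of_shadow(rect, points)              # step S4: which face part?
            if part is not None:                             # step S5: correction needed?
                points = correct_shadow(face_img, rect, part, points)  # step S6
        ys = [y for _, y in points]
        score = droop_score(points, max(ys) - min(ys))       # step S7: health level
        save_record(subject_id, now_iso(), encode(face_img), points, score)  # step S8
        notify(subject_id, score)                            # step S9
        return score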
(1) Shadow Correction Processing for the Face Contour
FIG. 5 is a flowchart showing the procedure of the shadow correction process for the contour of the face performed by the server device 1 according to the present embodiment. The shadow correction process shown in FIG. 5 is a process that can be performed in step S6 of the flowchart of FIG. 4. FIG. 6 is a schematic diagram for explaining the shadow correction process for the contour of the face.
The shadow correction unit 11e of the processing unit 11 of the server device 1 according to the present embodiment performs a correction that increases the luminance of the shadow portion of the face image specified in step S4 of the flowchart shown in FIG. 4 (step S21). The upper part of FIG. 6 shows an example of a face image and a shadow portion 101, specified from this face image, that corresponds to the contour of the face. The middle part of FIG. 6 shows an example of the image after the correction increasing the luminance of the shadow portion 101 has been performed. The shadow correction unit 11e corrects the luminance only of the shadow portion specified from the face image, and need not (although it may) correct the luminance of the portions other than the shadow portion.
The shadow correction unit 11e examines, for example, the color distribution of the brightened image of the shadow portion and identifies image regions of similar color, thereby separating the portion into a region corresponding to human skin (skin region) and the remaining region (background region) (step S22). The lower part of FIG. 6 shows an example in which the shadow portion is separated into a skin region 101a and a background region 101b. In the present embodiment, the shadow correction unit 11e separates the skin region from the background region by identifying regions of similar color, but the separation method is not limited to this; the regions may be separated by another method, for example by detecting a boundary line from the shadow portion through processing such as edge extraction and separating the regions based on this boundary line.
Based on the result of separating the skin region and the background region in step S22, the shadow correction unit 11e obtains the portion corresponding to the boundary line between the skin region and the background region (step S23). The shadow correction unit 11e acquires several of the points on the obtained boundary line as feature points (step S24) and ends the shadow correction process. The shadow correction unit 11e completes the shadow correction by, for example, replacing the feature points included in the shadow portion among the feature points extracted in step S2 of the flowchart of FIG. 4 with the feature points acquired in step S24.
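A minimal sketch of steps S21 to S24, assuming a BGR image, a fixed brightening gain of 1.8, a two-cluster k-means color separation, and the heuristic that the brighter cluster is skin; none of these specific choices come from the embodiment.

    import cv2
    import numpy as np

    def contour_feature_points(img, rect, n_points=8):
        x0, y0, x1, y1 = rect
        roi = np.clip(img[y0:y1, x0:x1].astype(np.float32) * 1.8, 0, 255)  # step S21: brighten
        pixels = roi.reshape(-1, 3)
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
        _, labels, centers = cv2.kmeans(pixels, 2, None, criteria, 3,
                                        cv2.KMEANS_PP_CENTERS)             # step S22: split by color
        skin_label = int(centers.sum(axis=1).argmax())  # heuristic: brighter cluster = skin
        skin = (labels.reshape(roi.shape[:2]) == skin_label).astype(np.uint8)
        contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)              # step S23: boundary line
        if not contours:
            return []
        border = max(contours, key=cv2.contourArea).reshape(-1, 2)
        step = max(1, len(border) // n_points)                             # step S24: sample points
        return [(x0 + int(x), y0 + int(y)) for x, y in border[::step]]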
(2) Shadow Correction Processing for the Nasolabial Folds or the Nose
FIG. 7 is a flowchart showing the procedure of the shadow correction process for the nasolabial folds or the nose performed by the server device 1 according to the present embodiment. The shadow correction process shown in FIG. 7 is a process that can be performed in step S6 of the flowchart of FIG. 4. FIGS. 8 and 9 are schematic diagrams for explaining the shadow correction process for the nasolabial folds or the nose.
The shadow correction unit 11e of the processing unit 11 of the server device 1 according to the present embodiment performs a correction that increases the luminance of the shadow portion of the face image specified in step S4 of the flowchart shown in FIG. 4 (step S31). The upper part of FIG. 8 shows an example of a face image and a shadow portion 102, specified from this face image, that corresponds to a nasolabial fold. The lower part of FIG. 8 shows an example of the image after the correction increasing the luminance of the shadow portion 102 has been performed. The shadow correction unit 11e corrects the luminance only of the shadow portion specified from the face image, and need not (although it may) correct the luminance of the portions other than the shadow portion.
For the brightened image of the shadow portion, the shadow correction unit 11e compares the luminance of each pixel of the shadow portion with a predetermined threshold, thereby extracting the regions whose luminance is lower than the threshold (step S32). The upper part of FIG. 9 shows an example of the shadow portion 102 specified from the face image, and the lower part of FIG. 9 shows, as a graph, the change in luminance along the straight line A-A' drawn over the shadow portion 102. In the graph in the lower part of FIG. 9, the region whose luminance is lower than the threshold is hatched, and this region can be presumed to correspond to a nasolabial fold of the face. Although FIG. 9 takes a nasolabial fold as an example, the same applies to the nose, where a region whose luminance is lower than the threshold corresponds to, for example, the outline of the nose or a recessed part of the nose's relief.
The shadow correction unit 11e acquires several of the points included in the low-luminance regions extracted in step S32 as feature points (step S33) and ends the shadow correction process. The shadow correction unit 11e completes the shadow correction by, for example, replacing the feature points included in the shadow portion among the feature points extracted in step S2 of the flowchart of FIG. 4 with the feature points acquired in step S33.
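A minimal sketch of steps S31 to S33 under the same assumed brightening gain, with the dark-pixel threshold taken as one standard deviation below the mean; the threshold rule and the point sampling are illustrative choices only.

    import cv2
    import numpy as np

    def crease_feature_points(img, rect, n_points=5):
        x0, y0, x1, y1 = rect
        gray = cv2.cvtColor(img[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY).astype(np.float32)
        gray = np.clip(gray * 1.8, 0, 255)         # step S31: brighten
        dark = gray < gray.mean() - gray.std()     # step S32: below-threshold pixels
        ys, xs = np.nonzero(dark)
        if xs.size == 0:
            return []
        picks = np.linspace(0, xs.size - 1, n_points).astype(int)  # step S33: sample points
        return [(x0 + int(xs[i]), y0 + int(ys[i])) for i in picks]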
(3) Shadow Correction Processing for the Eyes or the Mouth
FIG. 10 is a flowchart showing the procedure of the shadow correction process for the eyes or the mouth performed by the server device 1 according to the present embodiment. The shadow correction process shown in FIG. 10 is a process that can be performed in step S6 of the flowchart of FIG. 4. FIGS. 11 and 12 are schematic diagrams for explaining the shadow correction process for the eyes or the mouth.
The shadow correction unit 11e of the processing unit 11 of the server device 1 according to the present embodiment performs a correction that increases the luminance of the shadow portion of the face image specified in step S4 of the flowchart shown in FIG. 4 (step S41). The upper part of FIG. 11 shows an example of a face image and a shadow portion 103, specified from this face image, that corresponds to an eye. The lower part of FIG. 11 shows an example of the image after the correction increasing the luminance of the shadow portion 103 has been performed. The shadow correction unit 11e corrects the luminance only of the shadow portion specified from the face image, and need not (although it may) correct the luminance of the portions other than the shadow portion.
The shadow correction unit 11e performs a process of extracting, from the brightened image of the shadow portion, a closed curve surrounding the eye or the mouth (step S42). At this time, the shadow correction unit 11e can extract a closed curve by, for example, extracting the pixels corresponding to edges from the image, extracting the portions where edge pixels are connected into curves, and extracting the portions where such a curve is closed. The extraction of a closed curve surrounding the eye or the mouth may also be performed using an active contour method such as Snakes or Level Set. Since active contour methods are existing techniques, a detailed description is omitted. These extraction methods are merely examples and are not limiting; the shadow correction unit 11e may extract the closed curve by any method.
The shadow correction unit 11e performs a process of detecting the corner portions of the closed curve extracted in step S42 (step S43). At this time, the shadow correction unit 11e can detect bent points on the closed curve and judge a point to be a corner portion when the interior angle at that point is smaller than a predetermined angle (for example, 90° or 60°). This corner detection method is merely an example and is not limiting; the shadow correction unit 11e may detect corners by any method. The upper part of FIG. 12 shows a closed curve 104 detected from a shadow portion 103 corresponding to an eye, and a corner portion 105 of this closed curve 104. The lower part of FIG. 12 shows a closed curve 104 detected from a shadow portion 103 corresponding to the mouth, and a corner portion 105 of this closed curve 104.
The shadow correction unit 11e acquires the points corresponding to the corners detected in step S43 as feature points (step S44) and ends the shadow correction process. The shadow correction unit 11e completes the shadow correction by, for example, replacing the feature points included in the shadow portion among the feature points extracted in step S2 of the flowchart of FIG. 4 with the feature points acquired in step S44.
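A minimal sketch of steps S41 to S44, in which Canny edge detection plus a polygonal approximation stands in for the active-contour step; the brightening gain, the Canny thresholds, the approximation tolerance, and the 90° corner limit are all illustrative assumptions.

    import cv2
    import numpy as np

    def corner_feature_points(img, rect, max_angle_deg=90.0):
        x0, y0, x1, y1 = rect
        gray = cv2.cvtColor(img[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)
        gray = np.clip(gray.astype(np.float32) * 1.8, 0, 255).astype(np.uint8)  # step S41
        edges = cv2.Canny(gray, 50, 150)                     # stand-in for step S42
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return []
        poly = cv2.approxPolyDP(max(contours, key=cv2.contourArea),
                                2.0, True).reshape(-1, 2).astype(np.float64)
        corners = []
        for i in range(len(poly)):                           # step S43: interior angles
            a, b, c = poly[i - 1], poly[i], poly[(i + 1) % len(poly)]
            v1, v2 = a - b, c - b
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
            if np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) < max_angle_deg:
                corners.append((x0 + int(b[0]), y0 + int(b[1])))
        return corners                                       # step S44: corner points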
<Summary>
In the information processing system according to the present embodiment configured as described above, the server device 1 acquires the face image of the subject taken by the terminal device 3, extracts facial feature points from the acquired face image, determines whether a shadow is present in the face image, and corrects the facial feature points of the shadowed portion when a shadow is present. As a result, the information processing system according to the present embodiment can suppress a drop in the extraction accuracy of facial feature points due to shadows included in the photographed image of the face, and can be expected to suppress a drop in the accuracy of determining the health condition or the like based on the facial feature points.
The server device 1 according to the present embodiment also determines whether correction is necessary according to which part of the face the shadow portion included in the face image corresponds to, that is, according to the position of the shadow portion with respect to the face. The server device 1 thus performs correction when, for example, the shadow portion corresponds to a part that affects the extraction of facial feature points, and skips correction when it corresponds to a part that does not, which can be expected to reduce the load of the correction processing.
The server device 1 according to the present embodiment also corrects the facial feature points related to the contour of the face included in the shadow portion. In doing so, the server device 1 separates the face portion (skin region of the face) and the background portion (background region) included in the shadow portion, extracts the boundary line between the face portion and the background portion, and performs the correction by taking points on the extracted boundary line as facial feature points. As a result, the server device 1 can be expected to accurately extract facial feature points even from a face image in which shadows occur on and around the contour of the face.
 The server device 1 according to the present embodiment also corrects feature points related to the nasolabial folds or the nose that fall within the shadow portion. In this case, the server device 1 extracts the part of the shadow portion in which pixel luminance is below a threshold, and performs the correction by taking points in the extracted part as facial feature points. The server device 1 can thereby be expected to extract facial feature points accurately even from a face image in which the nasolabial folds or the nose and their surroundings are shadowed.
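 A literal reading of the luminance rule is straightforward; the threshold value below is an assumption:

```python
import cv2
import numpy as np

def dark_points_in_shadow(image_bgr, shadow_mask, threshold=60):
    """Within the shadow portion, collect pixels darker than `threshold`
    as candidate feature points for the nasolabial folds or the nose."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    dark = (gray < threshold) & shadow_mask.astype(bool)
    ys, xs = np.nonzero(dark)
    return np.column_stack([xs, ys])  # (x, y) candidate points
```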
 The server device 1 according to the present embodiment also corrects feature points related to the eyes or the mouth that fall within the shadow portion. In this case, the server device 1 extracts a closed curve surrounding the eye or mouth contained in the shadow portion, detects corner portions of the extracted closed curve, and performs the correction by taking points on the detected corner portions as facial feature points. The server device 1 can thereby be expected to extract facial feature points accurately even from a face image in which the eyes or mouth and their surroundings are shadowed.
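 One plausible route to the closed curve is to binarize the shadowed eye/mouth region and take its outer contour, whose polygonal approximation exposes corner candidates that the internal-angle test sketched earlier could then filter; the binarization level and approximation tolerance are assumptions:

```python
import cv2
import numpy as np

def eye_mouth_corner_candidates(shadow_region_gray, level=50, eps_ratio=0.02):
    """Binarize a shadowed eye/mouth region, extract its outer closed curve,
    and approximate it to a polygon whose vertices are corner candidates."""
    _, binary = cv2.threshold(shadow_region_gray, level, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.empty((0, 2), dtype=int)
    curve = max(contours, key=cv2.contourArea)
    eps = eps_ratio * cv2.arcLength(curve, True)
    approx = cv2.approxPolyDP(curve, eps, True)
    return approx.reshape(-1, 2)  # polygon vertices as (x, y) corner candidates
```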
 The server device 1 according to the present embodiment also determines the health level of the subject shown in the face image based on the feature points extracted from the face image and, where applicable, the corrected feature points. Various health indicators may be adopted, for example the presence and degree of facial paralysis, the subject's fatigue level, stress level, how positive (or negative) the subject's emotional state is, or the presence and degree of signs of stroke. The server device 1 notifies the determination result by, for example, transmitting it to the terminal device 3. The user can thereby obtain information on the health level of a subject such as himself/herself or a family member through the simple operation of photographing and transmitting a face image.
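 The patent does not disclose how feature points map to a health score. Purely to illustrate the kind of measure such a judgement might rest on, here is a left-right asymmetry score of the sort sometimes used when screening for facial paralysis; it is not the disclosed method:

```python
import numpy as np

def asymmetry_score(left_pts, right_pts, midline_x):
    """Mirror the right-side feature points across the facial midline and
    return their mean distance to the corresponding left-side points."""
    mirrored = np.asarray(right_pts, dtype=float).copy()
    mirrored[:, 0] = 2.0 * midline_x - mirrored[:, 0]
    left = np.asarray(left_pts, dtype=float)
    return float(np.linalg.norm(mirrored - left, axis=1).mean())
```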
 The server device 1 according to the present embodiment also stores in the storage unit 12 information on the facial feature points extracted from the subject's face image and, where applicable, the facial feature points corrected for the shadow portion. The server device 1 may, for example, compare past facial feature points stored in the storage unit 12 with the latest facial feature points extracted from a face image received from the terminal device 3, and determine the health level based on whether and how much the facial feature points have changed. The server device 1 can thereby be expected to determine the health level more accurately based on feature point extraction results from multiple points in time.
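 The change-over-time comparison could be as simple as a per-point displacement between a stored extraction and the latest one; again, the disclosed system leaves the details open:

```python
import numpy as np

def feature_drift(past_pts, latest_pts):
    """Per-point displacement between a past extraction and the latest one;
    a sustained or asymmetric drift could prompt a closer health check."""
    past = np.asarray(past_pts, dtype=float)
    latest = np.asarray(latest_pts, dtype=float)
    return np.linalg.norm(latest - past, axis=1)
```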
 In the present embodiment, the server device 1 determines the health level based on the subject's face image captured by the camera 36 or the like of the terminal device 3, but information obtained from devices other than an imaging device such as the camera 36 may additionally be used for correcting the shadow portion, determining the health level, and so on. For example, the terminal device 3 may be equipped with a sensor such as a depth sensor to measure the surface shape of the subject's face, and may transmit the surface shape information measured by the sensor to the server device 1 together with the face image captured by the camera 36. When the face image contains a shadow portion, the server device 1 can correct the facial feature points in the shadow portion based on the facial surface shape information measured by the sensor.
 As another example, the terminal device 3 may be equipped with a sensor that detects infrared light, detect infrared light from the subject's face, and transmit the detected infrared information to the server device 1 together with the face image captured by the camera 36. When the face image contains a shadow portion, the server device 1 can correct the facial feature points in the shadow portion based on the infrared information detected by the sensor.
 The embodiments disclosed herein are illustrative in all respects and should not be considered restrictive. The scope of the present invention is defined by the claims rather than by the above description, and is intended to include all modifications within the meaning and scope equivalent to the claims.
<Appendix>
(Appendix 1)
An information processing method in which an information processing device:
acquires a face image;
extracts facial feature points from the acquired face image;
determines whether a shadow is present in the face image; and
corrects facial feature points in a shadow portion when it is determined that a shadow is present.
(Appendix 2)
The information processing method according to Appendix 1, wherein
whether correction is necessary is determined according to the position of the shadow portion relative to the face, and
the facial feature points in the shadow portion are corrected when it is determined that correction is necessary.
(Appendix 3)
The information processing method according to Appendix 1 or 2, wherein facial feature points related to a facial contour included in the shadow portion are corrected.
(Appendix 4)
The information processing method according to Appendix 3, wherein
a face portion and a background portion included in the shadow portion are separated,
a boundary line between the face portion and the background portion is extracted based on the separation result, and
points on the extracted boundary line are taken as the facial feature points.
(Appendix 5)
The information processing method according to any one of Appendices 1 to 4, wherein feature points related to nasolabial folds or a nose of the face included in the shadow portion are corrected.
(Appendix 6)
The information processing method according to Appendix 5, wherein
a part of the shadow portion whose luminance is below a threshold is extracted, and
points included in the extracted part are taken as the facial feature points.
(Appendix 7)
The information processing method according to any one of Appendices 1 to 6, wherein feature points related to eyes or a mouth of the face included in the shadow portion are corrected.
(Appendix 8)
The information processing method according to Appendix 7, wherein
a closed curve surrounding the eyes or mouth included in the shadow portion is extracted,
corner portions of the extracted closed curve are detected, and
points on the detected corner portions are taken as the facial feature points.
(Appendix 9)
The information processing method according to any one of Appendices 1 to 8, wherein a health level of the person shown in the face image is determined based on the facial feature points.
(Appendix 10)
The information processing method according to Appendix 9, wherein a user is notified of information on the determined health level.
(Appendix 11)
The information processing method according to Appendix 9 or 10, wherein
the facial feature points are stored in a storage unit, and
the health level is determined based on changes over time in the plurality of stored feature points.
(Appendix 12)
A computer program causing a computer to execute processing of:
acquiring a face image;
extracting facial feature points from the acquired face image;
determining whether a shadow is present in the face image; and
correcting facial feature points in a shadow portion when it is determined that a shadow is present.
(Appendix 13)
An information processing device comprising:
an acquisition unit that acquires a face image;
an extraction unit that extracts facial feature points from the acquired face image;
a determination unit that determines whether a shadow is present in the face image; and
a correction unit that corrects facial feature points in a shadow portion when it is determined that a shadow is present.
1 server device
3 terminal device
11 processing unit
11a face image acquisition unit
11b facial feature point extraction unit
11c shadow determination unit
11d shadow portion identification unit
11e shadow correction unit
11f health level determination unit
11g notification unit
12 storage unit
12a server program
12b facial feature point extraction model
12c facial feature point DB
13 communication unit
31 processing unit
31a photographing processing unit
31b display processing unit
32 storage unit
32a program
33 communication unit
34 display unit
35 operation unit
36 camera
98, 99 recording medium
N network

Claims (13)

  1.  An information processing method in which an information processing device:
      acquires a face image;
      extracts facial feature points from the acquired face image;
      determines whether a shadow is present in the face image; and
      corrects facial feature points in a shadow portion when it is determined that a shadow is present.
  2.  The information processing method according to claim 1, wherein
      whether correction is necessary is determined according to the position of the shadow portion relative to the face, and
      the facial feature points in the shadow portion are corrected when it is determined that correction is necessary.
  3.  The information processing method according to claim 1, wherein facial feature points related to a facial contour included in the shadow portion are corrected.
  4.  The information processing method according to claim 3, wherein
      a face portion and a background portion included in the shadow portion are separated,
      a boundary line between the face portion and the background portion is extracted based on the separation result, and
      points on the extracted boundary line are taken as the facial feature points.
  5.  The information processing method according to claim 1, wherein feature points related to nasolabial folds or a nose of the face included in the shadow portion are corrected.
  6.  The information processing method according to claim 5, wherein
      a part of the shadow portion whose luminance is below a threshold is extracted, and
      points included in the extracted part are taken as the facial feature points.
  7.  The information processing method according to claim 1, wherein feature points related to eyes or a mouth of the face included in the shadow portion are corrected.
  8.  The information processing method according to claim 7, wherein
      a closed curve surrounding the eyes or mouth included in the shadow portion is extracted,
      corner portions of the extracted closed curve are detected, and
      points on the detected corner portions are taken as the facial feature points.
  9.  The information processing method according to claim 1, wherein a health level of the person shown in the face image is determined based on the facial feature points.
  10. The information processing method according to claim 9, wherein a user is notified of information on the determined health level.
  11. The information processing method according to claim 9, wherein
      the facial feature points are stored in a storage unit, and
      the health level is determined based on changes over time in the plurality of stored feature points.
  12. A computer program causing a computer to execute processing of:
      acquiring a face image;
      extracting facial feature points from the acquired face image;
      determining whether a shadow is present in the face image; and
      correcting facial feature points in a shadow portion when it is determined that a shadow is present.
  13. An information processing device comprising:
      an acquisition unit that acquires a face image;
      an extraction unit that extracts facial feature points from the acquired face image;
      a determination unit that determines whether a shadow is present in the face image; and
      a correction unit that corrects facial feature points in a shadow portion when it is determined that a shadow is present.
PCT/JP2022/035045 2021-09-24 2022-09-21 Information processing method, computer program, and information processing device WO2023048153A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021155927 2021-09-24
JP2021-155927 2021-09-24

Publications (1)

Publication Number Publication Date
WO2023048153A1

Family

ID=85719495

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/035045 WO2023048153A1 (en) 2021-09-24 2022-09-21 Information processing method, computer program, and information processing device

Country Status (1)

Country Link
WO (1) WO2023048153A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010045770A (en) * 2008-07-16 2010-02-25 Canon Inc Image processor and image processing method
JP2020052505A (en) * 2018-09-25 2020-04-02 大日本印刷株式会社 Health condition determination system, health condition determination device, server, health condition determination method and program
WO2020230445A1 (en) * 2019-05-13 2020-11-19 パナソニックIpマネジメント株式会社 Image processing device, image processing method, and computer program
JP2021099749A (en) * 2019-12-23 2021-07-01 花王株式会社 Detection method of nasolabial folds

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KANBAYASHI, Toshiki; DIAGO, Luis; KITAOKA, Tetsuko; HAGIWARA, Ichiro: "Examination of Correction Method to Shadow in Face Image for Iyashi Expression Recognition System", The Journal of the Institute of Image Electronics Engineers of Japan, The Institute of Image Electronics Engineers of Japan, 30 January 2012, pages 28-35, XP093055944, Retrieved from the Internet <URL:https://www.jstage.jst.go.jp/article/iieej/41/1/41_28/_pdf/-char/ja> [retrieved on 20230620], DOI: 10.11371/iieej.41.28 *

Similar Documents

Publication Publication Date Title
US8819015B2 (en) Object identification apparatus and method for identifying object
CN112040834A (en) Eyeball tracking method and system
WO2019137038A1 (en) Method for determining point of gaze, contrast adjustment method and device, virtual reality apparatus, and storage medium
CN108428214B (en) Image processing method and device
US11232586B2 (en) Line-of-sight estimation device, line-of-sight estimation method, and program recording medium
US20120133753A1 (en) System, device, method, and computer program product for facial defect analysis using angular facial image
WO2019061659A1 (en) Method and device for removing eyeglasses from facial image, and storage medium
US10984281B2 (en) System and method for correcting color of digital image based on the human sclera and pupil
JP2014194617A (en) Visual line direction estimating device, visual line direction estimating method, and visual line direction estimating program
KR102657095B1 (en) Method and device for providing alopecia information
US20240005494A1 (en) Methods and systems for image quality assessment
JP2005149370A (en) Imaging device, personal authentication device and imaging method
KR101938361B1 (en) Method and program for predicting skeleton state by the body ouline in x-ray image
JP2019046239A (en) Image processing apparatus, image processing method, program, and image data for synthesis
JP6098133B2 (en) Face component extraction device, face component extraction method and program
WO2023048153A1 (en) Information processing method, computer program, and information processing device
US20230284968A1 (en) System and method for automatic personalized assessment of human body surface conditions
JP5272797B2 (en) Digital camera
JP5242827B2 (en) Face image processing apparatus, face image processing method, electronic still camera, digital image processing apparatus, and digital image processing method
US20160110886A1 (en) Information processing apparatus and clothes proposing method
JP2014044525A (en) Subject recognition device and control method thereof, imaging device, display device, and program
JP7103443B2 (en) Information processing equipment, information processing methods, and programs
JP4762329B2 (en) Face image processing apparatus and face image processing method
WO2024004789A1 (en) Information processing device, information processing method, information processing system, and recording medium
JP2015226154A (en) Tongue image capturing apparatus, method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22872905; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22872905; Country of ref document: EP; Kind code of ref document: A1)