WO2023048153A1 - Information processing method, computer program and information processing device - Google Patents

Information processing method, computer program and information processing device

Info

Publication number
WO2023048153A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature points
shadow
facial feature
information processing
facial
Prior art date
Application number
PCT/JP2022/035045
Other languages
English (en)
Japanese (ja)
Inventor
俊彦 西村
康之 本間
雄太 吉田
Original Assignee
テルモ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by テルモ株式会社
Publication of WO2023048153A1

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment

Definitions

  • The present invention relates to an information processing method, a computer program, and an information processing apparatus for handling an image of a person's face taken with a camera or the like.
  • Devices equipped with cameras, such as smartphones and tablet terminals, are in widespread use.
  • Research and development has been conducted on techniques for determining a person's health condition based on an image of the person captured by a camera.
  • In Patent Literature 1, an image of a user captured by a user terminal is transmitted to a server, and the server calculates a health level indicating the user's health condition based on the face image included in the image and transmits the health level to the user terminal; a health condition determination system in which the user terminal outputs the health level has thus been proposed.
  • The server transmits to the user terminal, for example, graphs showing the health level, skin condition, and facial beauty rate in chronological order, correlation images showing correlations with the health level, advice information, and facial image videos.
  • The present invention has been made in view of such circumstances, and its object is to provide an information processing method, a computer program, and an information processing apparatus that can be expected to suppress deterioration in the accuracy of determining health conditions and the like caused by shadows contained in photographed images of faces.
  • In an information processing method according to one aspect, an information processing apparatus acquires a facial image, extracts facial feature points from the acquired facial image, determines whether or not a shadow is present in the facial image, and, when it is determined that a shadow is present, corrects the facial feature points in the shadowed portion.
  • According to one aspect, deterioration in the accuracy of determining the health condition and the like caused by shadows contained in the photographed image of the face can be expected to be suppressed.
  • FIG. 1 is a schematic diagram for explaining an overview of an information processing system according to an embodiment.
  • FIG. 2 is a block diagram showing the configuration of a server device according to the embodiment.
  • FIG. 3 is a block diagram showing the configuration of a terminal device according to the embodiment.
  • FIG. 4 is a flowchart showing the procedure of health level determination processing performed by the server device according to the embodiment.
  • FIG. 5 is a flowchart showing the procedure of shadow correction processing relating to the contour of the face, performed by the server device according to the embodiment.
  • FIG. 6 is a schematic diagram for explaining shadow correction processing relating to the contour of the face.
  • FIG. 7 is a flowchart showing the procedure of shadow correction processing relating to the nasolabial folds or nose, performed by the server device according to the embodiment.
  • FIG. 8 is a schematic diagram for explaining shadow correction processing for the nasolabial folds or nose.
  • FIG. 9 is a schematic diagram for explaining shadow correction processing for the nasolabial folds or nose.
  • FIG. 10 is a flowchart showing the procedure of shadow correction processing for the eyes or mouth, performed by the server device according to the embodiment.
  • FIG. 11 is a schematic diagram for explaining shadow correction processing for the eyes or mouth.
  • FIG. 12 is a schematic diagram for explaining shadow correction processing for the eyes or mouth.
  • FIG. 1 is a schematic diagram for explaining an outline of an information processing system according to this embodiment.
  • the information processing system according to the present embodiment is a system that analyzes a subject's face image to determine the health condition, and includes a server device 1, a terminal device 3, and the like.
  • A user of this system photographs a person's face using a terminal device 3 such as a smartphone or tablet terminal, and transmits the captured face image from the terminal device 3 to the server device 1.
  • In the present embodiment, the user photographs his or her own face with the terminal device 3, but the present invention is not limited to this; the face of a person other than the user may be photographed.
  • The server device 1 receives the face image transmitted by the terminal device 3, analyzes its features, determines the subject's health level, and transmits the determined health level and related information to the terminal device 3. Based on the face image, the server device 1 can determine as the health level, for example, the subject's fatigue level, stress level, degree of positive (or negative) emotion, presence and degree of facial paralysis, or presence and degree of signs of stroke. Note that the health level determined by the server device 1 is not limited to these examples, and various indices related to the subject's health may be employed.
  • The server device 1 extracts facial feature points (so-called keypoints or landmarks) from the facial image using, for example, a facial feature point extraction model machine-learned in advance, and determines the health level based on the extracted facial feature points.
  • To suppress deterioration of the health level determination accuracy caused by shadows occurring in the face image captured by the user with the terminal device 3, the server device 1 performs processing to correct the facial feature point extraction results for the shadowed portion.
  • The server device 1 determines, for example, whether or not a shadowed portion is included in the facial image and, if so, identifies the position and range of the shadowed portion and corrects the facial feature points included in it.
  • This correction suppresses deterioration in the accuracy of the health level determination based on the facial feature points.
  • FIG. 2 is a block diagram showing the configuration of the server device 1 according to this embodiment.
  • the server device 1 according to the present embodiment includes a processing unit 11, a storage unit (storage) 12, a communication unit (transceiver) 13, and the like.
  • In the present embodiment, the explanation assumes that the processing is performed by a single server device, but the processing may be performed by a plurality of server devices in a distributed manner.
  • The processing unit 11 is configured using an arithmetic processing unit such as a CPU (Central Processing Unit), an MPU (Micro-Processing Unit), a GPU (Graphics Processing Unit), or a quantum processor, together with a ROM (Read Only Memory), a RAM (Random Access Memory), and the like. By reading out and executing the server program 12a stored in the storage unit 12, the processing unit 11 performs various processes such as acquiring a facial image of the subject from the terminal device 3, extracting facial feature points from the acquired facial image, correcting the facial feature points in the shadowed portion, and determining the subject's health level from the facial feature points.
  • the storage unit 12 is configured using a large-capacity storage device such as a hard disk.
  • the storage unit 12 stores various programs executed by the processing unit 11 and various data required for processing by the processing unit 11 .
  • the storage unit 12 stores a server program 12a executed by the processing unit 11.
  • the storage unit 12 stores a facial feature point extraction model 12b for extracting facial feature points from a facial image, and is provided with a facial feature point DB (database) 12c for storing information on the extracted facial feature points.
  • the server program (program product) 12a is provided in a form recorded in a recording medium 99 such as a memory card or an optical disk, and the server device 1 reads the server program 12a from the recording medium 99 and stores it in the storage unit 12.
  • the server program 12a may be written in the storage unit 12 during the manufacturing stage of the server device 1, for example.
  • the server program 12a may be delivered by another remote server device or the like, and the server device 1 may acquire the program through communication.
  • The server program 12a may also be recorded in the recording medium 99 by a writing device and written into the storage unit 12 of the server device 1.
  • That is, the server program 12a may be provided in the form of distribution via a network, or in the form of being recorded on the recording medium 99.
  • The facial feature point extraction model 12b is a learning model trained in advance by machine learning so as to extract and output the subject's facial feature points in response to the input of the subject's face image.
  • The facial feature point extraction model 12b is, for example, a trained model that extracts facial feature points using the OpenPose technique.
  • The facial feature point extraction model 12b is not limited to an OpenPose-based model, and various learning models that extract facial feature points using other techniques may be employed. Since extraction of facial feature points using a machine-learned model is an existing technique, details such as the configuration and training method of the model are omitted.
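As a concrete illustration of this extraction step, the sketch below obtains facial landmarks with MediaPipe FaceMesh; this library is only a stand-in for the facial feature point extraction model 12b, and the function name and coordinate conversion are assumptions, since the text identifies the model only as an OpenPose-style trained model.

```python
import cv2
import mediapipe as mp

def extract_facial_feature_points(image_bgr):
    """Return (x, y) pixel coordinates of facial landmarks in the image.

    MediaPipe FaceMesh is used here purely as a stand-in for the patent's
    facial feature point extraction model 12b.
    """
    face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True)
    h, w = image_bgr.shape[:2]
    results = face_mesh.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return []  # no face detected in the image
    landmarks = results.multi_face_landmarks[0].landmark
    # Landmarks are normalized to [0, 1]; convert them to pixel coordinates.
    return [(int(p.x * w), int(p.y * h)) for p in landmarks]
```

FaceMesh returns several hundred landmarks per face, consistent with the "several to several hundred" feature points mentioned below for the model 12b.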
  • the facial feature point DB 12c is a database that stores and accumulates information on facial feature points extracted from the facial image of the subject.
  • The facial feature point DB 12c stores, in association with one another, identification information such as the subject's name or ID, the subject's facial image, the date and time the facial image was taken, information on the facial feature points extracted from the facial image, and the determination result of the health level based on those facial feature points.
  • The server device 1 compares, for example, the facial feature points extracted from the latest facial image with facial feature points from a predetermined period earlier, such as one month or one year ago, and can determine the health level based on changes in the facial feature points during that period.
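A minimal sketch of such a comparison follows; the displacement metric is an assumption, since the text does not specify how changes over the predetermined period are quantified.

```python
import numpy as np

def mean_feature_point_change(past_points, latest_points):
    """Mean per-point displacement between two landmark sets of equal length.

    A real implementation would first align the two sets for scale and
    head pose; that normalization step is omitted in this sketch.
    """
    past = np.asarray(past_points, dtype=float)
    latest = np.asarray(latest_points, dtype=float)
    return float(np.linalg.norm(latest - past, axis=1).mean())
```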
  • The communication unit 13 communicates with various devices via a network N including a mobile phone communication network, a wireless LAN (Local Area Network), the Internet, and the like. In the present embodiment, the communication unit 13 communicates with one or more terminal devices 3 via the network N. The communication unit 13 transmits data given from the processing unit 11 to other devices, and gives data received from other devices to the processing unit 11.
  • the storage unit 12 may be an external storage device connected to the server device 1.
  • the server device 1 may be a multicomputer including a plurality of computers, or may be a virtual machine virtually constructed by software.
  • The server device 1 is not limited to the above configuration, and may include, for example, a reading unit that reads information stored in a portable storage medium, an input unit that receives operation inputs, or a display unit that displays images.
  • When the server program 12a stored in the storage unit 12 is read out and executed by the processing unit 11, a facial image acquisition unit 11a, a facial feature point extraction unit 11b, a shadow determination unit 11c, a shadow portion specifying unit 11d, a shadow correction unit 11e, a health level determination unit 11f, a notification unit 11g, and the like are implemented in the processing unit 11 as software functional units.
  • Of the functional units of the processing unit 11, only those related to determining the health level from the face image are illustrated; functional units related to other processes are omitted.
  • the face image acquisition unit 11a performs processing for acquiring a face image of the target person's face from the terminal device 3.
  • the facial image acquisition unit 11 a communicates with the terminal device 3 through the communication unit 13 , receives facial image data transmitted from the terminal device 3 , and stores the data in the storage unit 12 .
  • the face image obtaining section 11a may perform a process of extracting an image region corresponding to the face of the subject from the image.
  • For example, the face image acquisition unit 11a can obtain the subject's face image by performing face detection processing on the image acquired from the terminal device 3 and cutting out an image area including the detected face from the original image. Since face detection is an existing technology, detailed description thereof is omitted.
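For illustration, the face region can be cut out with OpenCV's bundled Haar cascade detector; the choice of this particular detector is an assumption, as the text does not name one.

```python
import cv2

def crop_face(image_bgr):
    """Detect the largest face and return the cropped face image, or None."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])  # keep largest detection
    return image_bgr[y:y + h, x:x + w]
```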
  • the facial feature point extraction unit 11b extracts a plurality of points indicating facial features as feature points (keypoints, landmarks, etc.) from the facial image acquired by the facial image acquisition unit 11a.
  • the facial feature point extraction unit 11b uses the facial feature point extraction model 12b stored in the storage unit 12 to extract facial feature points from the facial image.
  • the facial feature point extracting unit 11b inputs the facial image of the subject to the facial feature point extraction model 12b, and obtains the information of the facial feature points output by the facial feature point extraction model 12b in response to the facial image.
  • the facial feature point extraction model 12b outputs, for example, the coordinates of the facial feature points in the input facial image, and the number of facial feature points output by the facial feature point extraction model 12b is, for example, several to several hundred.
  • The shadow determination unit 11c performs processing for determining whether or not the face image acquired by the face image acquisition unit 11a contains a shadow. For example, the shadow determination unit 11c compares the luminance (or brightness, etc.) of each pixel forming the face image with a predetermined threshold, and determines whether the face image includes a shadow according to whether any pixel has luminance lower (darker) than the threshold.
  • the threshold used for shadow determination may be determined, for example, at the design stage of the present system, or may be calculated, for example, based on the average value of luminance of the entire face image.
  • The shadow portion identification unit 11d performs processing for identifying an image area containing a shadow in the face image as a shadow portion. Like the shadow determination unit 11c, the shadow portion identification unit 11d compares the brightness of each pixel of the face image with a predetermined threshold and identifies pixels whose brightness is lower than the threshold. The threshold used for this process may be the same as, or different from, the threshold used by the shadow determination unit 11c.
  • the shaded portion specifying unit 11d specifies a rectangular image region including pixels whose brightness is smaller than a threshold value, and specifies this image region as a shaded portion. Note that the shadow portion identifying section 11d may identify a plurality of shadow portions from the face image.
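The two steps above (shadow determination and shadow portion identification) can be sketched as follows; the fallback threshold derived from the mean luminance is one of the options the text mentions, and the 0.6 scaling factor is an assumed value.

```python
import cv2
import numpy as np

def find_shadow_region(face_bgr, threshold=None):
    """Return the bounding rectangle (x, y, w, h) of dark pixels, or None."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    if threshold is None:
        # One option from the text: derive the threshold from the
        # average luminance of the whole face image (factor assumed).
        threshold = 0.6 * gray.mean()
    mask = (gray < threshold).astype(np.uint8)
    pts = cv2.findNonZero(mask)
    if pts is None:
        return None  # no pixel darker than the threshold: no shadow
    # Rectangular image region containing all below-threshold pixels.
    return cv2.boundingRect(pts)
```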
  • the shadow part specifying unit 11d specifies which part of the subject's face corresponds to the shadow part included in the face image.
  • The shadow portion specifying unit 11d compares, for example, the coordinates of the facial feature points extracted by the facial feature point extraction unit 11b with the coordinate range of the shadow portion, and determines which part of the face the shadow portion corresponds to by determining which facial features the feature points within or near the shadow portion belong to.
  • In the present embodiment, the shadow portion specifying unit 11d specifies whether the shadow portion corresponds to the contour of the face, the nasolabial folds, the nose, the eyes, or the mouth, or does not correspond to any of these parts.
  • the shadow correction unit 11e performs processing for correcting facial feature points included in the shadow portion specified by the shadow portion specifying unit 11d.
  • the shadow correction section 11e corrects the shadow portion using a correction method determined according to which part of the face the shadow portion corresponds to.
  • In the present embodiment, a method for correcting a shadow portion corresponding to the contour of the face, a method for correcting a shadow portion corresponding to the nasolabial folds or nose, and a method for correcting a shadow portion corresponding to the eyes or mouth are provided.
  • the shadow correction section 11e selects an appropriate method according to the shadow portion from the three types of correction methods and performs correction. Details of each correction method will be described later.
  • the health level determination unit 11f performs processing for determining the health level of the subject based on a plurality of facial feature points extracted from the face image and corrected for shadow portions.
  • the health degree determination unit 11f determines, for example, the presence or absence or degree of facial paralysis of the subject.
  • For example, a designer of the information processing system according to the present embodiment collects information about the facial feature points of people with symptoms of facial paralysis, predetermines conditions for determining whether or not a person has facial paralysis, and stores the determination conditions in the server device 1.
  • The health degree determination unit 11f determines whether or not the subject's facial feature points have the characteristics of facial paralysis based on the pre-stored determination conditions, and outputs the presence or absence of facial paralysis, or its degree, as a determination result.
  • the health level determination unit 11f outputs, for example, a decimal value ranging from 0 (no facial paralysis characteristic) to 1 (facial paralysis characteristic) as a determination result.
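Because the actual determination conditions are not disclosed, the following is only a toy example of a rule-based score in the 0-to-1 range described above; the use of mouth-corner droop and the 0.2 scaling factor are invented purely for illustration.

```python
import math

def facial_asymmetry_score(left_mouth, right_mouth, left_eye, right_eye):
    """Toy 0-1 score from the vertical offset of the mouth corners,
    normalized by the inter-eye distance for scale invariance."""
    eye_dist = math.dist(left_eye, right_eye)
    droop = abs(left_mouth[1] - right_mouth[1])
    return min(droop / (0.2 * eye_dist), 1.0)  # 0.2 is an assumed scale factor
```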
  • The information processing system according to the present embodiment is intended to accurately extract facial feature points, and the extracted facial feature points may be used to determine various health levels. For this reason, in the present embodiment, a detailed description of health level determination methods, such as how the health degree determination unit 11f determines the presence or degree of facial paralysis, is omitted.
  • The health degree determination unit 11f may determine the presence or degree of facial paralysis from the subject's facial feature points using a learning model machine-learned in advance.
  • Such a learning model performs machine learning using, for example, teacher data in which information on facial feature points is associated with information on the presence or absence of facial paralysis, and outputs the presence or degree of facial paralysis in response to the input of facial feature points.
  • The health level determination unit 11f may determine the degree of progression or improvement of, for example, the symptoms of facial paralysis based on the facial feature point information stored in the facial feature point DB 12c of the storage unit 12. Further, the health degree determination unit 11f may predict the degree of facial paralysis at a future point in time, such as one month or one year ahead, based on the time-series facial feature point information stored in the facial feature point DB 12c. In the present embodiment, the health degree determination unit 11f determines the presence or degree of the subject's facial paralysis, but the determination is not limited to this; the presence or absence of cerebral apoplexy or the like may also be determined as the health level.
  • the notification unit 11g performs a process of notifying the user of the result of health level determination by the health level determination unit 11f.
  • The notification unit 11g may notify, for example, when an abnormality is detected in the subject's health level, or when an improvement in the subject's health level is detected.
  • The determination result may also be notified regardless of the presence or absence of an abnormality or improvement.
  • When notification is required, the notification unit 11g notifies the user to that effect.
  • the notification unit 11g transmits a message for notifying the determination result to one or a plurality of notification destinations preset in association with the target person.
  • the notification unit 11g notifies the terminal device 3 of the user who has transmitted the face image.
  • The notification destination is not limited to the terminal device 3 of the user who transmitted the face image, and may be the terminal device 3 of a different user (for example, the subject's family member or doctor in charge).
  • FIG. 3 is a block diagram showing the configuration of the terminal device 3 according to this embodiment.
  • the terminal device 3 includes a processing unit 31, a storage unit (storage) 32, a communication unit (transceiver) 33, a display unit (display) 34, an operation unit 35, and the like.
  • The terminal device 3 is an information processing apparatus, such as a smartphone, a tablet terminal, or a personal computer, used by a user such as a subject who wants his or her health level determined, or a related person such as the subject's family member or doctor in charge.
  • The terminal device 3 does not have to be a portable device, and may be a device such as an AI speaker or a surveillance camera installed in the subject's house. It is desirable that the terminal device 3 be equipped with a camera for photographing the subject's face, although a device without a camera may also be used.
  • the processing unit 31 is configured using an arithmetic processing unit such as a CPU or MPU, a ROM, and the like.
  • The processing unit 31 reads out and executes the program 32a stored in the storage unit 32 to perform various processes, such as photographing the subject, transmitting the photographed image to the server device 1 to request determination of the health level, and acquiring the determination result from the server device 1 and displaying it to the user.
  • the storage unit 32 is configured using, for example, a non-volatile memory device such as a flash memory or a storage device such as a hard disk.
  • the storage unit 32 stores various programs executed by the processing unit 31 and various data required for processing by the processing unit 31 .
  • the storage unit 32 stores a program 32a executed by the processing unit 31.
  • In the present embodiment, the program (program product) 32a is distributed by a remote server device or the like, and the terminal device 3 acquires it through communication and stores it in the storage unit 32.
  • the program 32a may be written in the storage unit 32 during the manufacturing stage of the terminal device 3, for example.
  • the program 32a may be stored in the storage unit 32 after the terminal device 3 reads the program 32a recorded in the recording medium 98 such as a memory card or an optical disk.
  • The program 32a may also be recorded in the recording medium 98 by a writing device and written into the storage unit 32 of the terminal device 3.
  • That is, the program 32a may be provided in the form of distribution via a network, or in the form of being recorded on the recording medium 98.
  • The communication unit 33 communicates with various devices via a network N including a mobile phone communication network, a wireless LAN, the Internet, and the like. In the present embodiment, the communication unit 33 communicates with the server device 1 via the network N. The communication unit 33 transmits data received from the processing unit 31 to other devices, and provides the processing unit 31 with data received from other devices.
  • the display unit 34 is configured using a liquid crystal display or the like, and displays various images, characters, etc. based on the processing of the processing unit 31.
  • the operation unit 35 receives a user's operation and notifies the processing unit 31 of the received operation.
  • the operation unit 35 receives a user's operation using an input device such as mechanical buttons or a touch panel provided on the surface of the display unit 34 .
  • the operation unit 35 may be an input device such as a mouse and a keyboard, and these input devices may be detachable from the terminal device 3 .
  • The camera 36 is configured using an image sensor such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) and optical elements such as lenses, and gives the captured image data to the processing unit 31.
  • When the program 32a stored in the storage unit 32 is read out and executed by the processing unit 31, a photographing processing unit 31a, a display processing unit 31b, and the like are implemented in the processing unit 31 as software functional units.
  • the program 32a may be a program dedicated to the information processing system according to the present embodiment, or may be a general-purpose program such as an Internet browser or web browser.
  • the photographing processing unit 31a performs processing related to photographing of the subject's face image by the camera 36.
  • the photographing processing unit 31a performs photographing by the camera 36 according to, for example, the user's operation on the operation unit 35, and acquires data of the photographed image.
  • the photographing processing unit 31a may perform, for example, a message display or a photographing guide display to assist the user in photographing an appropriate face image.
  • the imaging processing unit 31a transmits an image (face image) obtained by imaging to the server device 1, and requests determination of the health level.
  • the display processing unit 31b performs processing for displaying information on the health level determination result received from the server device 1 on the display unit 34.
  • The display processing unit 31b may display, for example, the presence or degree of facial paralysis determined by the server device 1; when a determination result indicating characteristics of facial paralysis is obtained, a push notification to that effect may be issued.
  • The time-series change in the degree of facial paralysis may also be displayed as a graph or the like, and various displays other than these may be performed.
  • FIG. 4 is a flow chart showing the procedure of health level determination processing performed by the server device 1 according to the present embodiment.
  • The facial image acquisition unit 11a of the processing unit 11 of the server device 1 according to the present embodiment communicates with the terminal device 3 through the communication unit 13 and receives the facial image data transmitted by the terminal device 3, thereby acquiring the subject's face image (step S1).
  • The facial feature point extraction unit 11b of the processing unit 11 inputs the facial image acquired in step S1 to the facial feature point extraction model 12b stored in the storage unit 12 and obtains the feature point information output by the facial feature point extraction model 12b, thereby extracting feature points from the subject's face image (step S2).
  • The shadow determination unit 11c of the processing unit 11 determines whether or not the face image acquired in step S1 includes a shadow (step S3). At this time, the shadow determination unit 11c compares the brightness of each pixel constituting the face image with a threshold and determines whether any pixel has brightness lower than the threshold, thereby determining whether the face image includes a shadow. If it is determined that the face image does not contain a shadow (S3: NO), the shadow determination unit 11c advances the process to step S7.
  • If it is determined that the face image includes a shadow (S3: YES), the shadow portion specifying unit 11d of the processing unit 11 specifies the image region containing the shadow as a shadow portion and identifies which part of the subject's face the shadow portion corresponds to (step S4). At this time, the shadow portion specifying unit 11d identifies the corresponding part of the face based on the positional relationship between the region specified as the shadow portion and the coordinates of the facial feature points extracted in step S2.
  • the shadow portion specifying unit 11d determines whether or not the shadow portion specified in step S4 needs to be corrected (step S5). In the present embodiment, the shadow portion specifying unit 11d determines that correction is necessary when the shadow portion of the face image corresponds to any part of the subject's facial contour, nasolabial fold, nose, eyes, or mouth. If it does not correspond to these parts, it is determined that there is no need for correction. If it is determined that correction is not necessary (S5: NO), the shadow portion specifying section 11d advances the process to step S7.
  • the shadow correction section 11e of the processing section 11 performs correction processing on the shadow portion of the face image identified by the shadow portion identification section 11d (step S6).
  • The shadow correction unit 11e corrects the shadow portion using a correction method determined according to whether the shadow portion corresponds to the contour of the face, to the nasolabial folds or nose, or to the eyes or mouth.
  • Subsequently, the health level determination unit 11f of the processing unit 11 determines the subject's health level based on the facial feature points extracted from the subject's facial image, corrected for the shadow portion as necessary (step S7).
  • In the present embodiment, the health degree determination unit 11f determines the presence and degree of facial paralysis based on the subject's facial feature points; however, various other health measures, such as the subject's fatigue level, stress level, degree of positive (or negative) emotion, or the presence and degree of signs of stroke, may also be determined.
  • The health level determination unit 11f stores the information on the facial feature points extracted from the subject's face image, corrected for the shadow portion as necessary, together with the health level determination result of step S7, in the facial feature point DB 12c of the storage unit 12 (step S8).
  • The notification unit 11g of the processing unit 11 notifies the determination result by transmitting the information about the subject's health level determined in step S7 to the destination preset for the subject (step S9), and ends the process.
  • the server device 1 stores, in a database or the like, identification information such as the subject's name or ID, and information such as the e-mail address of the recipient of the determination result of the health level in association with each other.
  • the terminal device 3 transmits the identification information of the target person to the server device 1 together with the face image.
  • The notification unit 11g of the server device 1 may acquire from the database the destination to which the health level determination result should be transmitted, based on the subject identification information received together with the face image, and send the determination result to the acquired destination.
  • FIG. 5 is a flowchart showing the procedure of shadow correction processing concerning the contour of the face performed by the server device 1 according to the present embodiment.
  • The shadow correction processing shown in FIG. 5 is processing that can be performed in step S6 of the flowchart of FIG. 4.
  • FIG. 6 is a schematic diagram for explaining the shadow correction processing regarding the contour of the face.
  • The shadow correction unit 11e of the processing unit 11 of the server device 1 first performs correction to increase the brightness of the shadow portion of the face image identified in step S4 of the flowchart shown in FIG. 4 (step S21).
  • the upper part of FIG. 6 shows an example of a face image and a shaded portion 101 corresponding to the contour of the face specified from this face image.
  • the middle part of FIG. 6 shows an example of the image after the correction for increasing the luminance of the shaded portion 101 has been performed.
  • the shadow correction unit 11e performs luminance correction only on the shadow portion specified from the face image, and does not have to perform luminance correction on portions other than the shadow portion (although it may be performed).
  • The shadow correction unit 11e examines, for example, the color distribution of the brightness-increased image of the shadow portion, and by identifying image regions with similar colors, separates the region corresponding to human skin (skin region) from the other region (background region) (step S22).
  • the lower part of FIG. 6 shows an example in which the shaded area is separated into a skin area 101a and a background area 101b.
  • In the present embodiment, the shadow correction unit 11e separates the skin region and the background region by identifying regions with similar colors, but the separation method is not limited to this; another method may be used, such as detecting a boundary line by processing such as edge extraction and separating the regions based on this boundary line.
  • the shadow correction unit 11e acquires a portion corresponding to the boundary line between the skin area and the background area based on the separation result of the skin area and the background area in step S22 (step S23).
  • the shadow correction unit 11e acquires some points as feature points from among the acquired plurality of points on the boundary line (step S24), and ends the shadow correction process.
  • For example, the shadow correction unit 11e replaces the feature points included in the shadow portion, among the feature points extracted in step S2 of the flowchart of FIG. 4, with the feature points acquired in step S24, thereby completing the shadow correction processing.
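Steps S21 to S24 can be sketched as follows with OpenCV; the brightness gain, the use of k-means for the colour-similarity grouping, and the even sampling of boundary pixels are assumptions, since the text leaves those details open.

```python
import cv2
import numpy as np

def correct_contour_points(face_bgr, rect, n_points=8, gain=2.0):
    """Replacement feature points on the skin/background boundary."""
    x, y, w, h = rect
    roi = face_bgr[y:y + h, x:x + w]
    bright = cv2.convertScaleAbs(roi, alpha=gain, beta=0)        # step S21
    samples = bright.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, _ = cv2.kmeans(samples, 2, None, criteria, 3,
                              cv2.KMEANS_PP_CENTERS)             # step S22
    # Which cluster is called "skin" is arbitrary; the boundary is the same.
    mask = (labels.reshape(h, w) == 0).astype(np.uint8) * 255
    edges = cv2.Canny(mask, 50, 150)                             # step S23
    ys, xs = np.nonzero(edges)
    if len(xs) == 0:
        return []
    # Step S24: take a few boundary pixels as the new feature points
    # (a full implementation would order them along the contour first).
    idx = np.linspace(0, len(xs) - 1, n_points).astype(int)
    return [(x + int(xs[i]), y + int(ys[i])) for i in idx]
```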
  • FIG. 7 is a flowchart showing a procedure for shadow correction processing for nasolabial folds or nose performed by the server device 1 according to the present embodiment.
  • The shadow correction processing shown in FIG. 7 is processing that can be performed in step S6 of the flowchart of FIG. 4. FIGS. 8 and 9 are schematic diagrams for explaining the shadow correction processing for the nasolabial folds or nose.
  • The shadow correction unit 11e of the processing unit 11 of the server device 1 first performs correction to increase the brightness of the shadow portion of the face image identified in step S4 of the flowchart shown in FIG. 4 (step S31).
  • The upper part of FIG. 8 shows an example of a face image and a shadow portion 102 corresponding to the nasolabial folds identified from this face image. The lower part of FIG. 8 shows an example of the image after the correction for increasing the luminance of the shadow portion 102 has been performed.
  • the shadow correction unit 11e performs luminance correction only on the shadow portion specified from the face image, and does not have to perform luminance correction on portions other than the shadow portion (although it may be performed).
  • The shadow correction unit 11e compares the brightness of each pixel in the brightness-increased image of the shadow portion with a predetermined threshold, thereby extracting regions where the brightness is lower than the threshold (step S32).
  • The upper part of FIG. 9 shows an example of the shadow portion 102 identified from the face image, and the lower part of FIG. 9 shows the region extracted in step S32 in which the brightness is lower than the threshold.
  • The region where the brightness is lower than the threshold is shown hatched, and this region can be estimated to correspond to the nasolabial folds of the face.
  • the nasolabial folds are taken as an example, but the same applies to the nose, and regions where the brightness is lower than the threshold value correspond to the boundaries of the nose or concave portions in the unevenness of the nose.
  • The shadow correction unit 11e acquires some points as feature points from among the points included in the low-luminance region extracted in step S32 (step S33), and ends the shadow correction process. For example, the shadow correction unit 11e replaces the feature points included in the shadow portion, among the feature points extracted in step S2 of the flowchart of FIG. 4, with the feature points acquired in step S33, thereby completing the shadow correction processing.
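A sketch of steps S31 to S33 under the same assumptions (brightness gain and residual-darkness threshold chosen arbitrarily):

```python
import cv2
import numpy as np

def correct_fold_points(face_bgr, rect, n_points=5, gain=2.0, dark=60):
    """Feature points on pixels that stay dark after brightening,
    i.e. the nasolabial fold (or nose boundary) itself."""
    x, y, w, h = rect
    roi = cv2.cvtColor(face_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    bright = cv2.convertScaleAbs(roi, alpha=gain, beta=0)    # step S31
    ys, xs = np.nonzero(bright < dark)                       # step S32
    if len(xs) == 0:
        return []
    idx = np.linspace(0, len(xs) - 1, n_points).astype(int)  # step S33
    return [(x + int(xs[i]), y + int(ys[i])) for i in idx]
```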
  • FIG. 10 is a flowchart showing a procedure for shadow correction processing for eyes or mouth performed by the server device 1 according to the present embodiment.
  • The shadow correction processing shown in FIG. 10 is processing that can be performed in step S6 of the flowchart of FIG. 4. FIGS. 11 and 12 are schematic diagrams for explaining the shadow correction processing for the eyes or mouth.
  • The shadow correction unit 11e of the processing unit 11 of the server device 1 first performs correction to increase the brightness of the shadow portion of the face image identified in step S4 of the flowchart shown in FIG. 4 (step S41).
  • the upper part of FIG. 11 shows an example of a face image and a shadow portion 103 corresponding to the eyes identified from this face image. Further, the lower part of FIG. 11 shows an example of the image after the correction for increasing the brightness of the shaded portion 103 is performed.
  • the shadow correction unit 11e performs luminance correction only on the shadow portion specified from the face image, and does not have to perform luminance correction on portions other than the shadow portion (although it may be performed).
  • The shadow correction unit 11e performs processing for extracting a closed curve surrounding the eyes or mouth from the brightness-increased image of the shadow portion (step S42). At this time, the shadow correction unit 11e can, for example, extract pixels corresponding to edges from the image, extract portions where the edge pixels connect to form a curve, and extract portions where the curve is closed. Extraction of the closed curve surrounding the eyes or mouth can also be performed using active contour methods such as Snakes or the Level Set method; since the active contour method is an existing technique, detailed description thereof is omitted. Note that this method of extracting a closed curve is only an example, and the shadow correction unit 11e may extract the closed curve by any method.
  • The shadow correction unit 11e performs processing for detecting corners of the closed curve extracted in step S42 (step S43). At this time, the shadow correction unit 11e detects bent portions of the closed curve, and if the interior angle at such a portion is smaller than a predetermined angle (e.g., 90° or 60°), that portion can be determined to be a corner. Note that this corner detection method is only an example, and the shadow correction unit 11e may detect corners by any method.
  • The upper part of FIG. 12 shows a closed curve 104 detected from a shadow portion 103 corresponding to the eye, and a corner portion 105 of this closed curve 104. The lower part of FIG. 12 shows a closed curve 104 detected from a shadow portion 103 corresponding to the mouth, and a corner portion 105 of this closed curve 104.
  • the shadow correction unit 11e acquires the point corresponding to the corner detected in step S43 as a feature point (step S44), and ends the shadow correction process.
  • For example, the shadow correction unit 11e completes the shadow correction processing by replacing the feature points included in the shadow portion, among the feature points extracted in step S2 of the flowchart of FIG. 4, with the feature points acquired in step S44.
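Steps S41 to S44 might be sketched as follows; plain contour extraction is used in place of the Snakes/Level Set methods named above, and the polygon-approximation tolerance and 90° corner threshold are assumptions.

```python
import cv2
import numpy as np

def correct_eye_mouth_points(face_bgr, rect, gain=2.0, max_angle=90.0):
    """Corner points of the closed curve around an eye or the mouth."""
    x, y, w, h = rect
    roi = cv2.cvtColor(face_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    bright = cv2.convertScaleAbs(roi, alpha=gain, beta=0)     # step S41
    edges = cv2.Canny(bright, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # step S42
    if not contours:
        return []
    curve = max(contours, key=cv2.contourArea)                # largest closed curve
    poly = cv2.approxPolyDP(curve, 0.01 * cv2.arcLength(curve, True), True)
    corners = []
    n = len(poly)
    for i in range(n):                                        # step S43
        p0, p1, p2 = poly[i - 1][0], poly[i][0], poly[(i + 1) % n][0]
        v1, v2 = p0 - p1, p2 - p1
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
        if angle < max_angle:                                 # interior-angle test
            corners.append((x + int(p1[0]), y + int(p1[1])))  # step S44
    return corners
```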
  • In the information processing system according to the present embodiment described above, the server device 1 acquires the photographed face image of the subject from the terminal device 3, extracts facial feature points from the acquired face image, determines whether or not a shadow is present in the face image, and corrects the facial feature points in the shadow portion when a shadow is present. As a result, deterioration in the accuracy of extracting facial feature points caused by shadows in the photographed face image can be suppressed, and deterioration in the accuracy of determining health conditions and the like based on the facial feature points can be expected to be suppressed.
  • The server device 1 also determines whether or not correction is necessary according to which part of the face the shadow portion in the face image corresponds to, that is, according to the position of the shadow portion with respect to the face. As a result, the server device 1 performs correction when the shadow portion corresponds to a part that affects the extraction of facial feature points and skips correction for parts that do not, which can be expected to reduce the load of the correction processing.
  • the server device 1 corrects facial feature points related to the outline of the face included in the shaded portion.
  • At this time, the server device 1 separates the face portion (skin region) and the background portion (background region) included in the shadow portion, extracts the boundary line between them, and performs correction by using points on the boundary line as facial feature points.
  • the server apparatus 1 can be expected to accurately extract facial feature points even from a facial image in which shadows are produced on the outline of the face and its surroundings.
  • the server device 1 corrects feature points related to nasolabial folds or noses of the face included in the shaded portion.
  • the server device 1 performs correction by extracting a portion in which the brightness of each pixel included in the shaded portion is smaller than a threshold, and using points included in the extracted portion as facial feature points.
  • the server device 1 can be expected to extract facial feature points with high accuracy even in a face image in which the nasolabial folds or the nose and its surroundings are shaded.
  • The server device 1 corrects feature points related to the eyes or mouth included in the shadow portion. At this time, the server device 1 extracts a closed curve surrounding the eyes or mouth included in the shadow portion, detects the corners of the extracted closed curve, and performs correction by using points on the detected corners as facial feature points. As a result, the server device 1 can be expected to accurately extract facial feature points even from a facial image in which the eyes or mouth and their surroundings are shaded.
  • the server device 1 determines the health level of the subject depicted in the face image based on the feature points extracted from the face image and the feature points corrected as necessary.
  • As the health level to be determined, for example, the presence and degree of the subject's facial paralysis, the subject's fatigue level, stress level, degree of positive (or negative) emotion, or the presence and degree of signs of stroke can be adopted.
  • the server device 1 notifies, for example, the terminal device 3 of the determination result of the health level. Accordingly, the user can obtain information about the health level of the target person, such as himself or his family, by a simple operation of photographing and transmitting a face image.
  • The server device 1 stores in the storage unit 12 the information on the facial feature points extracted from the subject's facial image, corrected for the shadow portion as necessary.
  • The server device 1 may, for example, compare past facial feature points stored in the storage unit 12 with the latest facial feature points extracted from the face image received from the terminal device 3, and determine the health level based on the presence or degree of change in the facial feature points.
  • the server apparatus 1 can be expected to perform more accurate health level determination based on the facial feature point extraction results obtained at a plurality of points in time.
  • In the present embodiment, the server device 1 determines the health level based on the subject's face image photographed by the camera 36 or the like of the terminal device 3; however, information obtained from sensors other than the camera may be used together with the face image to correct the shadow portion and determine the health level.
  • For example, the terminal device 3 may be equipped with a sensor such as a depth sensor that measures the surface shape of the subject's face, and may transmit the information about the measured surface shape to the server device 1 together with the face image photographed by the camera 36.
  • the server apparatus 1 can correct the facial feature points related to the shadow portion based on information on the surface shape of the face measured by the sensor.
  • Similarly, the terminal device 3 may be equipped with a sensor that detects infrared light, detect infrared light from the subject's face, and transmit the detected infrared information to the server device 1 together with the face image photographed by the camera 36.
  • the server device 1 can correct the facial feature points related to the shadow portion based on the infrared light information detected by the sensor.
  • (Appendix 1) An information processing method in which an information processing apparatus: acquires a face image; extracts facial feature points from the acquired face image; determines whether or not a shadow is present in the face image; and corrects the facial feature points in the shadowed portion when it is determined that a shadow is present.
  • (Appendix 2) The information processing method according to Appendix 1, wherein whether or not correction is necessary is determined according to the position of the shadowed portion with respect to the face, and the facial feature points in the shadowed portion are corrected when it is determined that correction is necessary.
  • (Appendix 3) The information processing method according to Appendix 1 or Appendix 2, wherein facial feature points related to the contour of the face included in the shadowed portion are corrected.
  • (Appendix 11) The information processing method according to Appendix 9 or Appendix 10, wherein the facial feature points are stored in a storage unit, and the health level is determined based on changes over time in the plurality of stored feature points.
  • (Appendix 12) A computer program for causing a computer to execute processing of: acquiring a face image; extracting facial feature points from the acquired face image; determining whether or not a shadow is present in the face image; and correcting the facial feature points in the shadowed portion when it is determined that a shadow is present.
  • An information processing apparatus comprising a correction unit that corrects facial feature points in a shadowed portion when it is determined that a shadow is present.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Public Health (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Epidemiology (AREA)
  • Dentistry (AREA)
  • Primary Health Care (AREA)
  • Databases & Information Systems (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to an information processing method, a computer program, and an information processing device that can be expected to suppress a drop in the accuracy of determining a health condition or the like caused by a shadow included in an image capturing a face. In the information processing method according to this embodiment, an information processing device acquires a face image, extracts facial feature points from the acquired face image, determines whether there is a shadow in the face image, and corrects the facial feature points of the shadowed portion if it is determined that there is a shadow. The information processing device determines whether correction is necessary according to the position of the shadowed portion with respect to the face and, if it is determined that correction is necessary, can correct the facial feature points of the shadowed portion.
PCT/JP2022/035045 2021-09-24 2022-09-21 Information processing method, computer program and information processing device WO2023048153A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-155927 2021-09-24
JP2021155927 2021-09-24

Publications (1)

Publication Number Publication Date
WO2023048153A1 (fr)

Family

ID=85719495

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/035045 WO2023048153A1 (fr) 2021-09-24 2022-09-21 Information processing method, computer program and information processing device

Country Status (1)

Country Link
WO (1) WO2023048153A1 (fr)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010045770A (ja) * 2008-07-16 2010-02-25 Canon Inc Image processing apparatus and image processing method
JP2020052505A (ja) * 2018-09-25 2020-04-02 Dai Nippon Printing Co., Ltd. Health condition determination system, health condition determination device, server, health condition determination method, and program
WO2020230445A1 (fr) * 2019-05-13 2020-11-19 Panasonic Intellectual Property Management Co., Ltd. Image processing device, image processing method, and computer program
JP2021099749A (ja) * 2019-12-23 2021-07-01 Kao Corporation Method for detecting nasolabial folds

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KANBAYASHI TOSHIKI, DIAGO LUIS, KITAOKA TETSUKO, HAGIWARA ICHIRO: "Examination of Correction Method to Shadow in Face Image for Iyashi Expression Recognition System", THE JOURNAL OF THE INSTITUTE OF IMAGE ELECTRONICS ENGINEERS OF JAPAN, THE INSTITUTE OF IMAGE ELECTRONICS ENGINEERS OF JAPAN, 30 January 2012 (2012-01-30), pages 28 - 35, XP093055944, Retrieved from the Internet <URL:https://www.jstage.jst.go.jp/article/iieej/41/1/41_28/_pdf/-char/ja> [retrieved on 20230620], DOI: 10.11371/iieej.41.28 *

Similar Documents

Publication Publication Date Title
US8819015B2 (en) Object identification apparatus and method for identifying object
CN112040834A (zh) Eyeball tracking method and system
WO2019137038A1 (fr) Gaze point determination method, contrast adjustment method and device, virtual reality apparatus, and storage medium
CN108428214B (zh) Image processing method and apparatus
US11232586B2 (en) Line-of-sight estimation device, line-of-sight estimation method, and program recording medium
US20120133753A1 (en) System, device, method, and computer program product for facial defect analysis using angular facial image
WO2019061659A1 (fr) Method and device for removing glasses from a face image, and storage medium
JP2014194617A (ja) Gaze direction estimation device, gaze direction estimation method, and gaze direction estimation program
KR102657095B1 (ko) Method and apparatus for providing hair loss state information
US20240005494A1 (en) Methods and systems for image quality assessment
JP2005149370A (ja) Image capturing device, personal authentication device, and image capturing method
KR101938361B1 (ko) Method and program for predicting skeletal state based on body contour in X-ray images
JP2019046239A (ja) Image processing device, image processing method, program, and image data for composition
US10984281B2 (en) System and method for correcting color of digital image based on the human sclera and pupil
JP6098133B2 (ja) Facial component extraction device, facial component extraction method, and program
WO2023048153A1 (fr) Information processing method, computer program, and information processing device
US20230284968A1 (en) System and method for automatic personalized assessment of human body surface conditions
JP5272797B2 (ja) Digital camera
JP5242827B2 (ja) Face image processing device, face image processing method, electronic still camera, digital image processing device, and digital image processing method
US20160110886A1 (en) Information processing apparatus and clothes proposing method
JP2014044525A (ja) Subject recognition device and control method thereof, imaging device, display device, and program
JP7103443B2 (ja) Information processing device, information processing method, and program
JP4762329B2 (ja) Face image processing device and face image processing method
CN116434328A (zh) Posture monitoring method, device, system, electronic equipment, and storage medium
JP2015226154A (ja) Tongue image capturing device, method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22872905

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE