WO2017209225A1 - State estimation device, state estimation method, and state estimation program - Google Patents

State estimation device, state estimation method, and state estimation program

Info

Publication number
WO2017209225A1
WO2017209225A1 (PCT/JP2017/020378)
Authority
WO
WIPO (PCT)
Prior art keywords
information
driver
state
face
subject
Prior art date
Application number
PCT/JP2017/020378
Other languages
English (en)
Japanese (ja)
Inventor
初美 青位
航一 木下
相澤 知禎
秀人 濱走
匡史 日向
芽衣 上谷
Original Assignee
オムロン株式会社
Priority date
Filing date
Publication date
Priority claimed from PCT/JP2017/007142 external-priority patent/WO2017208529A1/fr
Application filed by オムロン株式会社 filed Critical オムロン株式会社
Priority to US16/303,710 priority Critical patent/US20200334477A1/en
Priority to CN201780029000.6A priority patent/CN109155106A/zh
Priority to DE112017002765.9T priority patent/DE112017002765T5/de
Publication of WO2017209225A1 publication Critical patent/WO2017209225A1/fr



Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/18 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state for vehicle drivers or machine operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/16 Anti-collision systems

Definitions

  • The present invention relates to a state estimation device, a state estimation method, and a state estimation program.
  • Patent Document 1 proposes a concentration determination device that detects the line of sight of the driver of a vehicle and estimates that the driver's concentration has decreased when the detected line of sight stays in one place for a long time.
  • Patent Document 2 proposes an image analysis device that compares the face image on a driver's license with an image of the driver captured while driving to determine the driver's drowsiness and looking aside.
  • Patent Document 3 proposes a drowsiness detection device that detects the driver's eyelid movement and, immediately after the detection, determines the driver's drowsiness according to whether or not the driver's face angle has changed, thereby preventing erroneous determination.
  • Patent Document 4 proposes a drowsiness determination device that determines the drowsiness level of the driver based on the movement of the muscles around the driver's mouth.
  • Patent Document 5 proposes a face situation determination device that detects the driver's face in an image obtained by reducing and resizing a captured image, extracts specific parts of the face (eyes, nose, mouth), and determines a state such as dozing from the movement of each specific part.
  • Patent Document 6 proposes an image processing device that periodically and sequentially executes a plurality of processes such as determination of the driver's face orientation and gaze estimation.
  • The inventors of the present invention have found that the conventional methods for estimating the driver's state as described above have the following problems. In the conventional methods, the driver's state is estimated by paying attention only to partial changes that occur on the driver's face, such as the face orientation, the opening and closing of the eyes, and the line of sight. Therefore, for example, actions such as turning the face to check the surroundings when turning right or left, looking back for visual confirmation, or shifting the line of sight to check mirrors, meters, or the display of in-vehicle devices may be mistaken for looking aside or for a state of reduced concentration.
  • Conversely, a state in which the driver cannot concentrate on driving, such as eating, drinking, smoking, or making a call on a mobile phone while gazing at the front, may be mistaken for a normal state.
  • In short, because the conventional methods use only information capturing the partial changes that occur on the face, the present inventors have found that they cannot accurately estimate the driver's degree of concentration on driving in a way that reflects the various states the driver can take. A similar problem may also arise when estimating the state of subjects other than drivers.
  • the present invention has been made in view of such a situation, and an object thereof is to provide a technique capable of appropriately estimating various states that can be taken by the subject.
  • The state estimation device according to one aspect of the present invention includes an image acquisition unit that acquires a captured image from an imaging device arranged so as to capture a subject who may be present at a predetermined location; a first analysis unit that analyzes the behavior of the subject's face based on the captured image and acquires first information relating to the behavior of the subject's face; a second analysis unit that analyzes the body movement of the subject based on the captured image and acquires second information relating to the body movement of the subject; and an estimation unit that estimates the state of the subject based on the first information and the second information.
  • The state estimation device according to this configuration acquires the first information related to the behavior of the subject's face and the second information related to the body motion, and estimates the state of the subject based on the acquired first and second information. Therefore, not only local information such as the behavior of the subject's face but also global information such as the subject's body motion can be reflected in the analysis of the subject's state. Accordingly, the various states that the subject can take can be estimated.
  • In the state estimation device according to the above aspect, each of the first information and the second information may be expressed by one or a plurality of feature amounts, and the estimation unit may estimate the state of the subject based on the value of each feature amount. According to this configuration, by expressing each piece of information as a feature amount, the calculation process for estimating the various states that the subject can take can be set easily.
  • The state estimation device according to the above aspect may further include a weight setting unit that sets, for each feature amount, a weight that determines the degree of priority of that feature amount, and the estimation unit may estimate the state of the subject based on each feature amount to which the weight is applied. According to this configuration, by appropriately weighting each feature amount, the accuracy of estimating the state of the subject can be improved.
  • In the state estimation device according to the above aspect, the weight setting unit may determine the weight value based on the result of estimating the state of the subject in the past.
  • According to this configuration, the accuracy of estimating the state of the subject can be improved by reflecting the results estimated in the past. For example, when it is estimated that the subject is looking back, the next action that the subject can take is assumed to also be looking back. In such a case, the accuracy of estimating the state of the subject can be improved by making the weight of the feature amount related to looking back larger than those of the other feature amounts.
  • The state estimation device according to the above aspect may further include a resolution conversion unit that reduces the resolution of the captured image, and the second analysis unit may acquire the second information by analyzing the body movement with respect to the captured image whose resolution has been reduced. Compared with the behavior of the face, body movements can appear more prominently in the captured image. Therefore, when acquiring the second information related to body movement from the captured image, a captured image with a smaller amount of information, in other words a lower resolution, can be used than when acquiring the first information related to the behavior of the face. For this reason, in this configuration, a captured image with reduced resolution is used when acquiring the second information. Thereby, the amount of calculation required to acquire the second information can be reduced, and the load on the processor for estimating the state of the subject can be suppressed.
  • In the state estimation device according to the above aspect, the second analysis unit may acquire, as the second information, a feature amount related to at least one of an edge position, an edge strength, and a local frequency component extracted from the captured image with reduced resolution. According to this configuration, since the second information regarding body motion can be appropriately acquired from the reduced-resolution captured image, the state of the subject can be estimated accurately.
  • In the state estimation device according to the above aspect, the captured image may be composed of a plurality of frames, and the second analysis unit may acquire the second information by analyzing the body movement over two or more frames included in the captured image. According to this configuration, since body motion spanning two or more frames can be extracted, the accuracy of estimating the state of the subject can be improved.
  • In the state estimation device according to the above aspect, the first analysis unit may acquire, as the first information, information on at least one of whether or not the subject's face can be detected, the position of the face, the orientation of the face, the movement of the face, the direction of the line of sight, the positions of the facial organs, and the opening and closing of the eyes, by performing predetermined image analysis on the captured image. According to this configuration, the first information related to the behavior of the face can be acquired appropriately, so that the state of the subject can be estimated with high accuracy.
  • In the state estimation device according to the above aspect, the captured image may be composed of a plurality of frames, and the first analysis unit may acquire the first information by analyzing the behavior of the face with respect to the captured image in units of one frame. According to this configuration, by acquiring the first information in units of one frame, it becomes possible to detect minute changes in the behavior of the face and to estimate the state of the subject with high accuracy.
  • In the state estimation device according to the above aspect, the subject may be a driver who drives a vehicle, the image acquisition unit may acquire the captured image from an imaging device arranged so as to capture the driver seated in the driver's seat of the vehicle, and the estimation unit may estimate the state of the driver based on the first information and the second information.
  • In the state estimation device according to the above aspect, the estimation unit may estimate, as the state of the driver, at least one of forward gaze, drowsiness, looking aside, putting on and taking off clothes, telephone operation, leaning, driving disturbance by a passenger or a pet, onset of illness, facing backward, slumping over, eating, drinking, smoking, dizziness, abnormal behavior, operation of a car navigation system or audio, putting on or taking off glasses or sunglasses, and taking a photograph.
  • According to these configurations, a state estimation device that can estimate the driver's various states can be provided.
  • In the state estimation device according to the above aspect, the subject may be a worker in a factory, the image acquisition unit may acquire the captured image from an imaging device arranged so as to capture the worker who may be present at a predetermined work place, and the estimation unit may estimate the state of the worker based on the first information and the second information.
  • In the state estimation device according to the above aspect, the estimation unit may estimate, as the state of the worker, the degree of concentration on the work performed by the worker or the health state of the worker.
  • According to these configurations, a state estimation device that can estimate a worker's various states can be provided.
  • Note that the health state of the worker may be expressed by some health-related index, for example, an index such as physical condition or degree of fatigue.
  • As other aspects of the present invention, each of the above configurations may be realized as an information processing method, as a program, or as a storage medium that stores such a program and can be read by a computer, an apparatus, a machine, or the like.
  • Here, a computer-readable recording medium is a medium that stores information such as a program by electrical, magnetic, optical, mechanical, or chemical action.
  • For example, the state estimation method according to one aspect of the present invention is an information processing method in which a computer executes a step of acquiring a captured image from an imaging device arranged so as to capture a subject who may be present at a predetermined location; a step of analyzing the behavior of the subject's face based on the captured image; a step of acquiring, as a result of analyzing the behavior of the face, first information regarding the behavior of the subject's face; a step of analyzing the body motion of the subject based on the captured image; a step of acquiring, as a result of analyzing the body motion, second information relating to the body motion of the subject; and a step of estimating the state of the subject based on the first information and the second information.
  • Also, for example, the state estimation program according to one aspect of the present invention is a program for causing a computer to execute a step of acquiring a captured image from an imaging device arranged so as to capture a subject who may be present at a predetermined location; a step of analyzing the behavior of the subject's face based on the captured image; a step of acquiring, as a result of analyzing the behavior of the face, first information regarding the behavior of the subject's face; a step of analyzing the body motion of the subject based on the captured image; a step of acquiring, as a result of analyzing the body motion, second information relating to the body motion of the subject; and a step of estimating the state of the subject based on the first information and the second information.
  • FIG. 1 schematically illustrates an example of a usage scene of the state estimation device according to the embodiment.
  • FIG. 2 schematically illustrates an example of a hardware configuration of the state estimation device according to the embodiment.
  • FIG. 3A schematically illustrates an example of a functional configuration of the state estimation device according to the embodiment.
  • FIG. 3B schematically illustrates an example of a functional configuration of the facial organ state detection unit.
  • FIG. 4 illustrates an example of a combination of a driver's state and information used to estimate it.
  • FIG. 5 illustrates more specific estimation conditions of the driver's state.
  • FIG. 6 illustrates an example of a processing procedure of the state estimation device according to the embodiment.
  • FIG. 7 illustrates an example of a method of detecting the driver's face direction, line-of-sight direction, eye open / closed degree, etc. in a plurality of stages.
  • FIG. 8 illustrates an example of a process of extracting a feature amount related to the driver's physical movement.
  • FIG. 9 illustrates an example of a process for calculating each feature amount.
  • FIG. 10 illustrates a process of estimating the driver's state based on each feature quantity, and a process of changing the weighting of each feature quantity based on the estimation result.
  • FIG. 11 illustrates the weighting process performed after it is estimated that the driver is looking back.
  • FIG. 12 exemplifies each feature amount (time series information) detected when the driver falls down.
  • FIG. 13 illustrates each feature amount (time-series information) detected when the concentration of a driver whose attention drifts to the right decreases.
  • FIG. 14 illustrates a state estimation method for a subject according to another embodiment.
  • FIG. 15 illustrates the configuration of a state estimation device according to another embodiment.
  • FIG. 16 illustrates the configuration of a state estimation device according to another embodiment.
  • FIG. 17 illustrates a usage scene of the state estimation device according to another embodiment.
  • Hereinafter, this embodiment will be described with reference to the drawings.
  • However, the embodiment described below is merely an example of the present invention in every respect. It goes without saying that various improvements and modifications can be made without departing from the scope of the present invention. That is, in implementing the present invention, a specific configuration according to the embodiment may be adopted as appropriate.
  • Although the data appearing in this embodiment are described in natural language, more specifically they are specified by pseudo language, commands, parameters, machine language, or the like that can be recognized by a computer.
  • FIG. 1 schematically illustrates an example in which the state estimation device 10 according to an embodiment is applied to an automatic driving system 20.
  • the automatic driving system 20 includes a camera 21 (imaging device), a state estimation device 10, and an automatic driving support device 22, and monitors a driver D who drives the vehicle C.
  • the vehicle C is configured to perform automatic driving.
  • the type of the vehicle C is not particularly limited as long as an automatic driving system can be mounted, and may be, for example, an automobile.
  • the camera 21 corresponds to the “photographing device” of the present invention, and is appropriately arranged so that a place where the subject can exist can be photographed.
  • the driver D who arrives at the driver's seat of the vehicle C corresponds to the “subject” of the present invention, and the camera 21 is appropriately arranged so as to photograph the driver D.
  • For example, the camera 21 is installed above and in front of the driver's seat of the vehicle C, and continuously captures from the front the driver's seat where the driver D may be present. Thereby, a captured image that can contain substantially the whole upper body of the driver D can be acquired. The camera 21 then transmits the captured image obtained in this way to the state estimation device 10.
  • the captured image may be a still image or a moving image.
  • The state estimation device 10 is a computer that acquires a captured image from the camera 21 and estimates the state of the driver D by analyzing the acquired captured image. Specifically, the state estimation device 10 analyzes the behavior of the face of the driver D based on the captured image acquired from the camera 21 and acquires first information about the behavior of the face of the driver D (first information 122 described later). In addition, the state estimation device 10 analyzes the body motion of the driver D based on the captured image and acquires second information (second information 123 described later) regarding the body motion of the driver D. The state estimation device 10 then estimates the state of the driver D based on the first information and the second information.
  • The automatic driving support device 22 is a computer that controls the drive system and the control system of the vehicle C, and that executes a manual driving mode in which the driving operation is performed manually by the driver D and an automatic driving mode in which the driving operation is performed automatically regardless of the driver D.
  • the automatic driving support device 22 is configured to switch between the manual driving mode and the automatic driving mode in accordance with the estimation result of the state estimating device 10, the setting of the car navigation device, and the like.
  • As described above, in the present embodiment, the first information related to the behavior of the face of the driver D and the second information related to the body movement are acquired, and the state of the driver D is estimated based on the acquired first information and second information. Therefore, not only local information such as the behavior of the face of the driver D but also global information such as the body movement of the driver D can be reflected in the estimation of the state of the driver D. Accordingly, the various states that the driver D can take can be estimated. Further, by using the estimation result for the control of automatic driving, control of the vehicle C suitable for the various states that the driver D can take can be realized.
  • FIG. 2 schematically illustrates an example of a hardware configuration of the state estimation device 10 according to the present embodiment.
  • the state estimation device 10 is a computer in which a control unit 110, a storage unit 120, and an external interface 130 are electrically connected.
  • the external interface is described as “external I / F”.
  • the control unit 110 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and the like, which are hardware processors, and controls each component according to information processing.
  • the storage unit 120 includes, for example, a RAM, a ROM, and the like, and stores a program 121, first information 122, second information 123, and the like.
  • the storage unit 120 corresponds to “memory”.
  • the program 121 is a program for causing the state estimation device 10 to execute information processing (FIG. 6) for estimating the state of the driver D described later.
  • the first information 122 is obtained as a result of executing a process of analyzing the behavior of the face of the driver D on the captured image obtained by the camera 21.
  • the second information 123 is obtained as a result of executing a process of analyzing the body motion of the driver D on the captured image obtained by the camera 21. Details will be described later.
  • the external interface 130 is an interface for connecting to an external device, and is appropriately configured according to the external device to be connected.
  • the external interface 130 is connected to the camera 21 and the automatic driving support device 22 via, for example, CAN (Controller Area Network).
  • the camera 21 is arranged so as to photograph the driver D who has arrived at the driver's seat of the vehicle C as described above.
  • the camera 21 is disposed on the front upper side of the driver's seat.
  • the arrangement location of the camera 21 may not be limited to such an example, and may be appropriately selected according to the embodiment as long as the driver D sitting on the driver's seat can be photographed.
  • the camera 21 may be a general digital camera, a video camera, or the like.
  • the automatic driving support device 22 can be configured by a computer in which a control unit, a storage unit, and an external interface are electrically connected. In this case, a program and various data for supporting the driving operation of the vehicle C by switching between the automatic driving mode and the manual driving mode are stored in the storage unit. Moreover, the automatic driving assistance device 22 is connected to the state estimation device 10 via an external interface. Thereby, the automatic driving assistance device 22 is configured to be able to control the operation of the automatic driving of the vehicle C using the estimation result of the state estimation device 10.
  • external devices other than the above may be connected to the external interface 130.
  • a communication module for performing data communication via a network may be connected to the external interface 130.
  • the external device connected to the external interface 130 does not have to be limited to each of the above devices, and may be appropriately selected according to the embodiment.
  • the state estimation device 10 includes one external interface 130.
  • the number of external interfaces 130 can be appropriately selected according to the embodiment.
  • the external interface 130 may be provided for each external device to be connected.
  • the state estimation device 10 has the hardware configuration as described above.
  • the hardware configuration of the state estimation device 10 may not be limited to the above example, and may be determined as appropriate according to the embodiment.
  • the specific hardware configuration of the state estimation device 10 it is possible to omit, replace, and add components as appropriate according to the embodiment.
  • the control unit 110 may include a plurality of hardware processors.
  • the hardware processor may be configured by a microprocessor, an FPGA (field-programmable gate array), or the like.
  • the storage unit 120 may be configured by a RAM and a ROM included in the control unit 110.
  • the storage unit 120 may be configured by an auxiliary storage device such as a hard disk drive or a solid state drive.
  • the state estimation device 10 may be a general-purpose computer in addition to an information processing device designed exclusively for the service to be provided.
  • FIG. 3A schematically illustrates an example of a functional configuration of the state estimation device 10 according to the present embodiment.
  • The control unit 110 of the state estimation device 10 expands the program 121 stored in the storage unit 120 into the RAM.
  • the control unit 110 interprets and executes the program 121 developed in the RAM by the CPU, and controls each component.
  • Thereby, the state estimation device 10 functions as a computer including an image acquisition unit 11, a first analysis unit 12, a resolution conversion unit 13, a second analysis unit 14, a feature vector generation unit 15, a weight setting unit 16, and an estimation unit 17.
  • the image acquisition unit 11 acquires a captured image (hereinafter also referred to as “first image”) from the camera 21 arranged to capture the driver D. Then, the image acquisition unit 11 transmits the acquired first image to the first analysis unit 12 and the resolution conversion unit 13.
  • the first analysis unit 12 analyzes the behavior of the face of the driver D based on the acquired first image, and acquires first information regarding the behavior of the face of the driver D.
  • the first information is not particularly limited as long as it relates to the behavior of the face, and may be appropriately determined according to the embodiment.
  • The first information may be composed of, for example, at least one of whether or not the face of the driver D (subject) can be detected, the position of the face, the orientation of the face, the movement of the face, the direction of the line of sight, the positions of the facial organs, and the opening and closing of the eyes. Accordingly, the first analysis unit 12 can be configured as follows.
  • FIG. 3B schematically illustrates the configuration of the first analysis unit 12 according to the present embodiment.
  • the first analysis unit 12 according to the present embodiment includes a face detection unit 31, a facial organ point detection unit 32, and a facial organ state detection unit 33.
  • the facial organ state detection unit 33 includes an eye open / close detection unit 331, a gaze detection unit 332, and a face direction detection unit 333.
  • the face detection unit 31 detects the presence / absence of the face of the driver D and the position of the face in the first image by analyzing the image data of the first image.
  • the face organ point detector 32 detects the position of each organ (eye, mouth, nose, ear, etc.) included in the face of the driver D detected in the first image.
  • the facial organ point detector 32 may detect the outline of the entire face or a part of the face as a facial organ in an auxiliary manner.
  • the facial organ state detection unit 33 estimates the state of each organ of the face of the driver D whose position is detected in the first image. Specifically, the eye open / close detection unit 331 detects the eye open / closed degree of the driver D. The line-of-sight detection unit 332 detects the direction of the line of sight of the driver D. The face direction detection unit 333 detects the face direction of the driver D.
  • the configuration of the facial organ state detection unit 33 may not be limited to such an example.
  • the facial organ state detection unit 33 may be configured to detect information regarding the state of each facial organ other than these.
  • the facial organ state detection unit 33 may detect the movement of the face.
  • the analysis result of the first analysis unit 12 is sent to the feature vector generation unit 15 as first information (local information) regarding the behavior of the face. Note that, as shown in FIG. 3A, the analysis result (first information) of the first analysis unit 12 may be accumulated in the storage unit 120.
  • The resolution conversion unit 13 generates a captured image (hereinafter also referred to as the "second image") having a resolution lower than that of the first image by applying resolution reduction processing to the image data of the first image.
  • the second image may be temporarily stored in the storage unit 120.
  • The second analysis unit 14 acquires second information regarding the body movement of the driver D by analyzing the second image whose resolution has been reduced.
  • the second information is not particularly limited as long as it relates to the driver's physical movement, and may be appropriately determined according to the embodiment.
  • the second information may be configured to indicate, for example, the movement, posture, etc. of the driver D.
  • the analysis result of the second analysis unit 14 is sent to the feature vector generation unit 15 as second information (global information) regarding the body movement of the driver D. Note that the analysis result (second information) of the second analysis unit 14 may be accumulated in the storage unit 120.
  • the feature vector generation unit 15 receives the first information and the second information, and generates a feature vector indicating the behavior and body motion of the driver D.
  • the first information and the second information are each represented by a feature amount obtained from each detection result.
  • the feature amounts constituting the first information and the second information may be collectively referred to as “motion feature amounts”.
  • the motion feature amount includes both information related to the facial organ of the driver D and information related to the physical motion of the driver D.
  • the feature vector generation unit 15 generates a feature vector using each motion feature amount as an element.
  • the weight setting unit 16 sets, for each element (each feature amount) of the generated feature vector, a weight that determines the priority of each element.
  • the value of the weight may be determined as appropriate.
  • the weight setting unit 16 determines the weight value of each element based on the result of estimating the state of the driver D in the past by the estimation unit 17 described later.
  • the weighting data is appropriately stored in the storage unit 120.
  • the estimation unit 17 estimates the state of the driver D based on the first information and the second information. Specifically, the estimation unit 17 estimates the state of the driver D from a state vector obtained by applying a weight to the feature vector.
  • the state of the driver D to be estimated may be appropriately determined according to the embodiment.
  • The estimation unit 17 may estimate, as the state of the driver D, at least one of forward gaze, drowsiness, looking aside, putting on and taking off clothes, telephone operation, leaning on the window or armrest, driving disturbance by a passenger or pet, onset of illness, facing backward, slumping over, eating, drinking, smoking, dizziness, abnormal behavior, operation of a car navigation system or audio, putting on or taking off glasses or sunglasses, and taking a photograph.
  • FIG. 4 illustrates an example of a combination of the state of the driver D and information used to estimate it.
  • As shown in FIG. 4, by combining the first information (local information) and the second information (global information), the various states of the driver D can be appropriately estimated.
  • In FIG. 4, one mark indicates that the corresponding information is necessary for estimating the state of the driver, and the other mark indicates that the corresponding information is preferably used for estimating the state of the driver.
  • FIG. 5 illustrates an example of conditions for estimating the state of the driver D.
  • For example, the estimation unit 17 may determine whether or not the driver D is in a drowsy state by using the degree of eye opening and closing detected by the first analysis unit 12 as local information and the information about the movement of the driver D detected by the second analysis unit 14 as global information.
  • For example, the estimation unit 17 may determine whether or not the driver D is driving while looking aside by using the information on the face direction and the line-of-sight direction detected by the first analysis unit 12 as local information and the information on the posture of the driver D detected by the second analysis unit 14 as global information.
  • For example, the estimation unit 17 may determine whether or not the driver D is operating a mobile terminal by using the information on the face orientation detected by the first analysis unit 12 as local information and the information on the posture of the driver D detected by the second analysis unit 14 as global information.
  • For example, the estimation unit 17 may determine whether or not the driver D is leaning against the window side by using the position of the face detected by the first analysis unit 12 as local information and the information on the movement and posture of the driver D detected by the second analysis unit 14 as global information.
  • For example, the estimation unit 17 may determine whether or not the driving of the driver D is being disturbed by using the information on the face direction and the line-of-sight direction detected by the first analysis unit 12 as local information and the information on the movement and posture of the driver D detected by the second analysis unit 14 as global information.
  • For example, the estimation unit 17 may determine whether or not the driver D has developed a sudden illness by using the information on the degree of eye opening and closing, the face direction, and the line of sight detected by the first analysis unit 12 as local information and the information on the movement and posture of the driver D detected by the second analysis unit 14 as global information.
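  • As a concrete illustration of how such local and global information might be combined, the following minimal Python sketch implements two of the checks above as simple rules. The field names, thresholds, and rule structure are assumptions added for illustration; the patent does not prescribe specific values or a rule-based implementation.

```python
# Hypothetical sketch of rule-based state checks combining local (facial) and
# global (body) information, loosely following the combinations in FIG. 5.
# All field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LocalInfo:          # first information (facial behavior)
    eye_openness: float   # 0.0 (closed) .. 1.0 (open)
    face_yaw: float       # degrees, 0 = facing front
    gaze_yaw: float       # degrees, 0 = looking front

@dataclass
class GlobalInfo:         # second information (body motion)
    motion_level: float   # 0.0 (still) .. 1.0 (large movement)
    leaning: float        # 0.0 (upright) .. 1.0 (strongly leaning)

def looks_drowsy(local: LocalInfo, global_: GlobalInfo) -> bool:
    # Drowsiness: eyes nearly closed AND little body movement.
    return local.eye_openness < 0.2 and global_.motion_level < 0.1

def looks_aside(local: LocalInfo, global_: GlobalInfo) -> bool:
    # Looking aside: face or gaze turned away while posture stays roughly upright.
    return (abs(local.face_yaw) > 30 or abs(local.gaze_yaw) > 30) and global_.leaning < 0.3

print(looks_drowsy(LocalInfo(0.1, 0.0, 0.0), GlobalInfo(0.05, 0.0)))  # True
```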
  • each function of the state estimation device 10 will be described in detail in an operation example described later.
  • an example is described in which each function of the state estimation device 10 is realized by a general-purpose CPU.
  • part or all of the above functions may be realized by one or a plurality of dedicated processors.
  • functions may be omitted, replaced, and added as appropriate according to the embodiment.
  • FIG. 6 is a flowchart illustrating an example of a processing procedure of the state estimation device 10.
  • the processing procedure for estimating the state of the driver D described below corresponds to the “state estimation method” of the present invention.
  • the processing procedure described below is merely an example, and each processing may be changed as much as possible. Further, in the processing procedure described below, steps can be omitted, replaced, and added as appropriate according to the embodiment.
  • Step S11: First, in step S11, the control unit 110 functions as the image acquisition unit 11 and acquires a captured image from the camera 21 arranged so as to capture the driver D seated in the driver's seat of the vehicle C.
  • the captured image may be a moving image or a still image.
  • the control unit 110 continuously acquires image data of captured images from the camera 21. Thereby, the acquired captured image is composed of a plurality of frames.
  • Steps S12 to S14: In the next steps S12 to S14, the control unit 110 functions as the first analysis unit 12, performs predetermined image analysis on the acquired captured image (first image), analyzes the behavior of the face of the driver D based on the captured image, and acquires first information regarding the behavior of the face of the driver D.
  • the control unit 110 functions as the face detection unit 31 of the first analysis unit 12, and detects the face of the driver D included in the acquired captured image.
  • a known image analysis method may be used for the face detection.
  • Thereby, the control unit 110 acquires information regarding whether or not the face can be detected and the position of the face.
  • the control unit 110 determines whether or not a face is detected in the captured image in step S12. When the face is detected, the control unit 110 proceeds to the next step S14. On the other hand, when the face is not detected, the control unit 110 skips the process of step S14 and proceeds to the next step S15. In this case, the control unit 110 sets the detection result of the face direction, the eye open / closed degree, and the line-of-sight direction to 0.
  • In step S14, the control unit 110 functions as the facial organ point detection unit 32 and detects the position of each organ (eyes, mouth, nose, ears, etc.) included in the face of the driver D in the detected face image. A known image analysis method may be used for the detection of each organ. Thereby, the control unit 110 can acquire information regarding the position of each organ of the face. Further, the control unit 110 functions as the facial organ state detection unit 33 and detects the orientation of the face, the movement of the face, the degree of eye opening and closing, the line-of-sight direction, and the like by analyzing the detected state of each organ.
  • FIG. 7 schematically illustrates an example of a method for detecting a face orientation, an eye open / closed degree, and a line-of-sight direction.
  • For example, the control unit 110 functions as the face direction detection unit 333 and detects the face direction of the driver D in the captured image in three levels in the vertical direction and five levels in the horizontal direction, with respect to the two axes of the vertical direction and the horizontal direction.
  • Similarly, the control unit 110 functions as the line-of-sight detection unit 332 and detects the direction of the line of sight of the driver D, like the face direction, in three levels in the vertical direction and five levels in the horizontal direction. Further, the control unit 110 functions as the eye opening/closing detection unit 331 and detects the degree of eye opening and closing of the driver D in the captured image in ten levels.
  • In this way, the control unit 110 acquires, as the first information, whether or not the face of the driver D can be detected, the position of the face, the orientation of the face, the movement of the face, the direction of the line of sight, the position of each organ of the face, and the degree of eye opening and closing.
  • the acquisition of the first information is preferably performed for each frame. That is, since the acquired captured image is composed of a plurality of frames, the control unit 110 may acquire the first information by analyzing the behavior of the face with respect to the captured image in units of one frame. In this case, the control unit 110 may analyze the facial behavior for all the frames, or may analyze the facial behavior every predetermined number of frames.
  • the first information indicating the behavior of the face of the driver D in detail can be acquired.
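  • A minimal sketch of this per-frame facial-behavior analysis is given below, using OpenCV's Haar cascade face detector as a stand-in for the face detection unit 31. The head-pose and eye-openness values are only stubbed placeholders (a real system would derive them from the facial organ points), and the 3x5 / 10-level quantization mirrors the staged detection described above; everything here is an illustrative assumption, not the implementation prescribed by the patent.

```python
import cv2
import numpy as np

# Stand-in for the face detection unit 31 (Haar cascade shipped with OpenCV).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def quantize(value, low, high, levels):
    """Map a continuous value onto discrete levels (e.g. 3 vertical / 5 horizontal)."""
    value = np.clip(value, low, high)
    return int(round((value - low) / (high - low) * (levels - 1)))

def analyze_face(frame_bgr):
    """Return a dict of first-information features for one frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        # Face not detected: the remaining facial features are set to 0.
        return {"face_detected": 0, "face_x": 0, "face_y": 0,
                "face_dir_v": 0, "face_dir_h": 0, "eye_openness": 0}
    x, y, w, h = faces[0]
    # Hypothetical placeholders: a real system would estimate head pose and
    # eye openness from the facial organ points (facial organ point detection unit 32).
    pitch_deg, yaw_deg, eye_open_ratio = 0.0, 0.0, 1.0
    return {
        "face_detected": 1,
        "face_x": int(x + w / 2), "face_y": int(y + h / 2),
        "face_dir_v": quantize(pitch_deg, -30, 30, 3),            # 3 vertical levels
        "face_dir_h": quantize(yaw_deg, -60, 60, 5),              # 5 horizontal levels
        "eye_openness": quantize(eye_open_ratio, 0.0, 1.0, 10),   # 10 levels
    }
```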
  • the captured image (first image) acquired by the camera 21 is used as it is for the processing from step S12 to S14 according to the present embodiment.
  • Steps S15 and S16: In the next step S15, the control unit 110 functions as the resolution conversion unit 13 and lowers the resolution of the captured image acquired in step S11. Thereby, the control unit 110 forms a low-resolution captured image (second image) in units of frames.
  • the resolution reduction processing method is not particularly limited, and may be appropriately determined according to the embodiment.
  • the control unit 110 may form a low-resolution captured image by a technique such as a nearest neighbor method, a bilinear interpolation method, or a bicubic method.
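  • For reference, the resolution reduction of step S15 could be implemented with any standard down-sampling routine. The sketch below uses OpenCV's resize with the interpolation modes mentioned above; the scale factor is an arbitrary assumption.

```python
import cv2

def make_second_image(first_image, scale=0.25, method=cv2.INTER_NEAREST):
    """Form a low-resolution second image from the first image.
    method can be cv2.INTER_NEAREST (nearest neighbor), cv2.INTER_LINEAR
    (bilinear), or cv2.INTER_CUBIC (bicubic)."""
    return cv2.resize(first_image, None, fx=scale, fy=scale, interpolation=method)
```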
  • In the next step S16, the control unit 110 functions as the second analysis unit 14, analyzes the body movement of the driver D with respect to the reduced-resolution captured image (second image), and acquires second information related to the body movement of the driver D.
  • the second information may include, for example, information on the posture of the driver D, the movement of the upper body, the presence or absence of the driver D, and the like.
  • FIG. 8 schematically illustrates an example of a process of detecting the second information from the captured image with reduced resolution.
  • the control unit 110 extracts second information as an image feature amount from the second image.
  • Specifically, the control unit 110 extracts edges in the second image based on the luminance value of each pixel. For this edge extraction, a pre-designed image filter (for example, of 3×3 size) or a learning device (for example, a neural network) trained in advance may be used. That is, the control unit 110 can detect edges in the second image by inputting the luminance value of each pixel of the second image to the image filter or the learning device.
  • the controller 110 compares the luminance value and the information about the extracted edge with the luminance value of the second image of the previous frame and the information about the extracted edge, respectively, and obtains a difference between the frames.
  • the “previous frame” is a frame that is a predetermined number (for example, one) before the currently processed frame.
  • In this way, the control unit 110 can acquire, as image feature amounts (second information), four types of information: luminance value information of the current frame, edge information indicating the positions of the edges of the current frame, luminance value difference information relative to the previous frame, and edge difference information relative to the previous frame.
  • the luminance value information and the edge information mainly indicate the posture of the driver D and the presence or absence of the driver D.
  • the luminance value difference information and the edge difference information mainly indicate the movement of the driver D (upper body).
  • the control unit 110 may acquire the image feature amount related to the edge strength and the local frequency component of the image in addition to the edge position as described above.
  • the edge strength is the degree of change in luminance around the edge position included in the image.
  • the local frequency component of an image is an image feature amount obtained by performing image processing such as a Gabor filter, a Sobel filter, a Laplacian filter, a Canny edge detector, and a wavelet filter, for example. Further, the local frequency component of the image is not limited to the above-described image processing, and may be an image feature amount obtained by performing image processing using a filter designed in advance by machine learning.
  • the second information can be acquired.
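  • The four image feature amounts described above could be computed roughly as in the following sketch, which uses a 3x3 Sobel filter as one assumed choice of pre-designed edge filter and leaves frame caching to the caller; flattening each map into the feature block is likewise an illustrative choice rather than the patent's prescription.

```python
import cv2
import numpy as np

def extract_second_info(curr_gray, prev_gray=None):
    """Compute luminance, edge, and inter-frame difference features from the
    low-resolution second image (grayscale, uint8)."""
    # Edge map from a 3x3 Sobel filter (one possible pre-designed image filter).
    gx = cv2.Sobel(curr_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(curr_gray, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.magnitude(gx, gy)

    luminance = curr_gray.astype(np.float32)
    if prev_gray is None:
        # No previous frame yet: difference features are zero.
        lum_diff = np.zeros_like(luminance)
        edge_diff = np.zeros_like(edges)
    else:
        pgx = cv2.Sobel(prev_gray, cv2.CV_32F, 1, 0, ksize=3)
        pgy = cv2.Sobel(prev_gray, cv2.CV_32F, 0, 1, ksize=3)
        prev_edges = cv2.magnitude(pgx, pgy)
        lum_diff = cv2.absdiff(luminance, prev_gray.astype(np.float32))
        edge_diff = cv2.absdiff(edges, prev_edges)

    # Flatten each map into one feature block; a real system might pool or
    # histogram these instead.
    return np.concatenate([m.ravel() for m in (luminance, edges, lum_diff, edge_diff)])
```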
  • In the present embodiment, since the captured image (first image) is composed of a plurality of frames, the reduced-resolution captured image (second image) is also composed of a plurality of frames. Therefore, the control unit 110 acquires second information such as the luminance value difference information and the edge difference information by analyzing the body motion over two or more frames included in the second image.
  • the control unit 110 may store only the frame for calculating the difference in the storage unit 120 or the RAM. As a result, unnecessary frames need not be stored, and the memory capacity can be used efficiently.
  • the frames used for the analysis of the body motion may be adjacent to each other in time, but it is assumed that the change in the body motion of the driver D is slower than the change in each organ of the face. For this reason, it is preferable to use a plurality of frames with a predetermined time interval for analyzing the body movement.
  • The body movement of the driver D can appear more prominently in the captured image than the behavior of the face. Therefore, compared with the acquisition of the first information related to the behavior of the face in steps S12 to S14, a captured image having a lower resolution can be used when acquiring the second information related to the body movement in step S16. For this reason, in the present embodiment, the control unit 110 executes step S15 before step S16 to obtain a captured image (second image) by reducing the resolution of the captured image (first image) used for acquiring the first information, and acquires the second information regarding the body movement of the driver D from this second image. Thereby, the amount of calculation required to acquire the second information can be reduced, and the load on the processor for estimating the state of the driver D can be suppressed.
  • the steps S15 and S16 may be executed in parallel with the steps S12 to S14.
  • the steps S15 and S16 may be executed before the steps S12 to S14.
  • the steps S15 and S16 may be executed between the steps S12 to S14.
  • the step S15 may be executed before any of the steps S12 to S14, and the step S16 may be executed after the steps S12 to S14. That is, steps S15 and S16 may be executed without depending on steps S12 to S14.
  • Step S17: Returning to FIG. 6, in the next step S17, the control unit 110 functions as the feature vector generation unit 15 and generates a feature vector from the acquired first information and second information.
  • FIG. 9 schematically illustrates an example of a process of calculating each element (each feature amount) of the feature vector.
  • In steps S12 to S14, the control unit 110 functions as the first analysis unit 12 and analyzes the behavior of the face in units of one frame for the acquired first image.
  • Thereby, the control unit 110 calculates, as the first information, feature amounts (histograms) indicating whether or not the face of the driver D can be detected, the position of the face, the orientation of the face, the movement of the face, the direction of the line of sight, the position of each organ of the face, and the degree of eye opening and closing.
  • In step S15, the control unit 110 functions as the resolution conversion unit 13 and forms a second image by reducing the resolution of the first image.
  • In step S16, the control unit 110 functions as the second analysis unit 14 and extracts image feature amounts as second information from two or more frames included in the formed second image.
  • Then, the control unit 110 sets each feature amount acquired as the first information and the second information in each element of the feature vector. Thereby, the control unit 110 generates a feature vector having each motion feature amount as an element. A minimal sketch of this step is given below.
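```python
import numpy as np

def build_feature_vector(first_info: dict, second_info: np.ndarray) -> np.ndarray:
    """Concatenate facial-behavior features (first information) and body-motion
    features (second information) into one motion feature vector x.
    The input formats follow the hypothetical helpers in the earlier sketches."""
    local_part = np.array(list(first_info.values()), dtype=np.float32)
    return np.concatenate([local_part, second_info.astype(np.float32)])
```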
  • Steps S18 to S20: Returning to FIG. 6, in the next step S18, the control unit 110 functions as the weight setting unit 16 and sets, for each element (each feature amount) of the feature vector, a weight that determines the priority of that element. In the next step S19, the control unit 110 estimates the state of the driver D based on the state vector obtained by applying the set weights to the feature vector, that is, based on the value of each feature amount to which the set weight is applied.
  • As shown in FIGS. 4 and 5, the control unit 110 can thereby estimate, as the state of the driver D, at least one of, for example, forward gaze, drowsiness, looking aside, putting on and taking off clothes, telephone operation, leaning on the window or armrest, driving disturbance by a passenger or pet, onset of illness, facing backward, slumping over, eating, drinking, smoking, dizziness, abnormal behavior, operation of a car navigation system or audio, putting on or taking off glasses or sunglasses, and taking a photograph.
  • In the next step S20, the control unit 110 determines whether or not to continue estimating the state of the driver D in accordance with a command (not shown) from the automatic driving system 20.
  • When it is determined not to continue, the control unit 110 ends the processing according to this operation example. For example, when the vehicle C stops, the control unit 110 determines not to continue estimating the state of the driver D and ends the monitoring of the state of the driver D.
  • On the other hand, when it is determined to continue, the control unit 110 repeats the processing from step S11. For example, while the automatic driving of the vehicle C continues, the control unit 110 determines to continue estimating the state of the driver D and, by repeating the processing from step S11, continuously monitors the state of the driver D.
  • In step S18, the control unit 110 determines the weight value for each element based on the result of estimating the state of the driver D in the past in step S19. That is, based on the past estimation result of the state of the driver D, the control unit 110 determines the weight of each feature amount so that the items (facial organs, body movement, posture, and the like) that are emphasized when estimating the state of the driver D in the next cycle are given priority.
  • For example, the control unit 110 may perform the weighting so that the weight of the feature amount indicating the presence or absence of the face becomes large and the weights of the feature amounts indicating the line-of-sight direction and the degree of eye opening and closing become small.
  • Note that the control unit 110 may repeatedly perform the estimation process of step S19 until the estimation result of the state of the driver D reaches a predetermined accuracy.
  • the threshold for determining the accuracy of estimation may be set in advance and stored in the storage unit 120, or may be set by the user.
  • FIG. 10 illustrates a process of estimating the state of the driver based on each feature quantity and a process of changing the weighting of each feature quantity based on the estimation result.
  • FIG. 11 exemplifies the weighting process performed after it is estimated that the driver D is looking back.
  • As illustrated in FIG. 10, the control unit 110 acquires the feature vector x through the above step S17.
  • The feature vector x includes, as elements, feature quantities (first information) such as the presence or absence of the face, the face orientation, the line-of-sight direction, and the degree of eye opening and closing, and feature quantities (second information) such as body movement and posture.
  • In step S19, the control unit 110 estimates the state of the driver D based on the state vector y obtained by applying the weight vector W to the feature vector x.
  • the control unit 110 outputs the index (ArgMax (y (i))) of the element having the largest value among the elements of the state vector y as an estimation result.
  • each element of the state vector y is associated with the state of the driver D.
  • For example, the first element is associated with "forward gaze", the second element with "drowsiness", and the third element with "looking aside". In this case, ArgMax(y(i)) = 2 indicates an estimation result that the driver D is in a state of "drowsiness".
  • the control unit 110 changes the value of each element of the weight vector W used in the next cycle based on the estimation result.
  • the value of each element of the weight vector W corresponding to the estimation result may be appropriately determined according to the embodiment.
  • the value of each element of the weight vector W may be determined by a machine learning method such as reinforcement learning, for example. Note that when there is no past estimation result, the control unit 110 may appropriately perform weighting with an initial value or the like given in advance.
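  • One plausible reading of this processing is sketched below: an element-wise weight vector w is applied to the feature vector x, an assumed scoring matrix maps the weighted features to the state vector y, and the index of the largest element is output as the estimation result. The scoring matrix, the state list, and the weight-update rule are assumptions added for illustration; the patent only specifies that a weight is applied to the feature vector and that ArgMax(y(i)) is output.

```python
import numpy as np

STATES = ["forward_gaze", "drowsiness", "looking_aside"]  # illustrative subset

def estimate_state(x, w, scoring_matrix):
    """Apply weights to the feature vector, map to a state vector y, and
    return the index of the most likely state (ArgMax(y(i)))."""
    y = scoring_matrix @ (w * x)          # state vector
    return int(np.argmax(y)), y

def update_weights(w, estimated_index, relevance):
    """Hypothetical weight update: emphasize features relevant to the
    estimated state in the next cycle (relevance[state] is a 0/1 mask)."""
    mask = relevance[estimated_index]
    return 0.9 * w + 0.1 * mask           # smooth shift toward relevant features

# Example usage with assumed sizes (3 states, 5 features):
# A = np.random.rand(3, 5); w = np.ones(5); x = np.random.rand(5)
# idx, y = estimate_state(x, w, A); print(STATES[idx])
```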
  • For example, assume that ArgMax(y(i)) at a certain point in time indicates that the driver D is looking back.
  • In this case, the next action of the driver D is also predicted to be looking back. Therefore, until the face of the driver D is detected again in the captured image, the feature quantities related to the facial organs, such as the face direction, the line-of-sight direction, and the degree of eye opening and closing, are assumed to be unnecessary for estimating the state of the driver D.
  • Therefore, as illustrated in FIG. 11, in step S18 of the next and subsequent cycles, the control unit 110 may gradually reduce the weighting of each feature amount related to the facial organs, such as the face direction, the line-of-sight direction, and the degree of eye opening and closing.
  • Conversely, the control unit 110 may gradually increase the weight of the feature amount related to the presence or absence of the face.
  • Then, when the face of the driver D is detected again, the weighting of each feature amount related to the facial organs, such as the face direction, the line-of-sight direction, and the degree of eye opening and closing, may be increased again.
  • In addition, for a feature amount whose weight has become small, detection of that feature amount may be temporarily stopped. For example, when performing step S14, the control unit 110 may omit the detection of the face direction, the line-of-sight direction, and the degree of eye opening and closing. Thereby, the amount of calculation of the series of processes can be reduced, and the estimation process of the state of the driver D can be executed at high speed.
  • FIG. 12 illustrates each feature amount (time-series information) detected when the driver D falls down.
  • FIG. 13 exemplifies each feature amount (time-series information) detected when the concentration of the driver D who is distracted in the right direction decreases.
  • In the example of FIG. 12, the face of the driver D, which had been detected in the earlier frames, is no longer detected (invisible) in frames No. 4 to No. 5. The body of the driver D moves from frame No. 3 to frame No. 5, and at frame No. 6 the movement of the body has stopped. Furthermore, from frame No. 2 to frame No. 3, the posture of the driver D shifts from the normal driving posture to a forward-leaning posture. The control unit 110 captures this tendency based on the state vector y and may estimate that, from around frame No. 3, the driver D has fallen down.
  • FIG. 13 illustrates a scene in which the concentration of the driver D on driving decreases.
  • While concentrating on driving, the driver D gazes forward without moving the body.
  • On the other hand, when the concentration on driving is decreasing, the driver D turns the face or line of sight in a direction other than the front, or moves the body greatly. Therefore, by appropriately setting the weight vector W, in step S19 the control unit 110 may estimate the degree of concentration of the driver D on driving based on the feature amounts related to the face direction, the line-of-sight direction, and the body movement of the driver D.
  • In the example of FIG. 13, the face direction of the driver D changes from the front direction to the right direction. The line of sight of the driver D changes from frame No. 2 to frame No. 4, and turns to the right again after frame No. 7. The movement of the driver D increases from frame No. 4 to frame No. 5. The control unit 110 captures this tendency with the state vector y and may estimate that, from around frame No. 2, the driver D gradually turns to the right, the posture also gradually turns to the right, and the degree of concentration on driving is decreasing.
  • the control unit 110 transmits such an estimation result to the automatic driving support device 22.
  • the automatic driving support device 22 controls the operation of the automatic driving using the estimation result of the state estimation device 10. For example, when it is estimated that the driver D has developed a sudden illness, the automatic driving support device 22 switches the operation of the vehicle C from the manual driving mode to the automatic driving mode, and makes the vehicle C safe (for example, You may control to stop after moving to a nearby hospital, parking lot, etc.).
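  • A sketch of how the automatic driving support device 22 might act on the estimation result is shown below; the state names, the vehicle interface, and the pull-over behaviour are assumptions based on the example in the text, not an interface defined by the patent.

```python
HANDOVER_STATES = {"onset_of_illness", "drowsiness"}  # assumed trigger states

def on_estimation_result(state: str, vehicle) -> None:
    """Switch from manual to automatic driving and pull over when the driver
    is estimated to be unable to drive safely (hypothetical vehicle interface)."""
    if state in HANDOVER_STATES and vehicle.mode == "manual":
        vehicle.mode = "automatic"
        vehicle.navigate_to_safe_stop()   # e.g. nearby parking area or hospital
```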
  • the state estimation device 10 is based on the captured image (first image) acquired from the camera 21 that is set to capture the driver D in steps S12 to S14. First information on the behavior of the face of the driver D is acquired. Moreover, the state estimation apparatus 10 acquires the 2nd information regarding the driver
  • Thereby, not only the local information (first information), that is, the behavior of the face of the driver D, but also the global information (second information), that is, the body movement of the driver D, can be reflected in estimating the state of the driver D. Therefore, according to the present embodiment, the various states that the driver D can take can be estimated, as illustrated in FIGS. 4, 5, 12, and 13.
  • Furthermore, in step S18, the control unit 110 can change the value of each element of the weight vector W applied to the feature vector x, based on the estimation result of the past cycle, so as to be suitable for the estimation of the current cycle. Therefore, according to the present embodiment, the various states of the driver D can be estimated with high accuracy.
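  • A minimal sketch of how the weight vector W might be adapted from one estimation cycle to the next is shown below; the state names, the per-state weight tables, and the simple lookup rule are illustrative assumptions rather than the weighting procedure prescribed here:

      import numpy as np

      # Hypothetical per-state weight tables: once a state has been estimated in the
      # previous cycle, the current cycle emphasizes the feature amounts that best
      # confirm or refute that state (order: face direction, gaze direction, body motion).
      WEIGHTS_BY_PREVIOUS_STATE = {
          "normal":     np.array([0.3, 0.3, 0.4]),
          "distracted": np.array([0.5, 0.4, 0.1]),
          "slumped":    np.array([0.1, 0.1, 0.8]),
      }

      def weights_for_current_cycle(previous_estimate):
          """Pick the weight vector W for the current cycle from the past result."""
          return WEIGHTS_BY_PREVIOUS_STATE.get(previous_estimate,
                                               WEIGHTS_BY_PREVIOUS_STATE["normal"])

      print(weights_for_current_cycle("distracted"))  # [0.5 0.4 0.1]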
  • In the above embodiment, the captured image (first image) acquired from the camera 21 is used as it is for analyzing the behavior of the face, and a captured image whose resolution has been reduced (second image) is used for analyzing the body motion.
  • the first information includes feature amounts relating to whether or not the face of the driver D can be detected, the face position, the face orientation, the face movement, the line-of-sight direction, the position of each organ of the face, and the eye open / closed degree.
  • The second information includes feature amounts relating to the luminance value information of the current frame, edge information indicating the positions of edges in the current frame, luminance value difference information relative to the previous frame, and edge difference information relative to the previous frame.
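  • The following Python sketch, using only numpy, illustrates one hypothetical way of computing such second-information feature amounts from a pair of reduced-resolution frames; the block-averaging downscale factor and the gradient-based edge measure are assumptions made only for illustration:

      import numpy as np

      def downscale(img, factor=4):
          """Reduce resolution by block-averaging (factor is an illustrative choice)."""
          h, w = img.shape
          h, w = h - h % factor, w - w % factor
          return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

      def second_info_features(prev_frame, curr_frame):
          """Hypothetical second-information features from two grayscale frames."""
          prev_s, curr_s = downscale(prev_frame), downscale(curr_frame)
          gy, gx = np.gradient(curr_s)                    # simple edge (gradient) map
          pgy, pgx = np.gradient(prev_s)
          edges, prev_edges = np.hypot(gx, gy), np.hypot(pgx, pgy)
          return {
              "mean_luminance": curr_s.mean(),                   # luminance of the current frame
              "mean_edge_strength": edges.mean(),                # edge information of the current frame
              "luminance_diff": np.abs(curr_s - prev_s).mean(),  # difference from the previous frame
              "edge_diff": np.abs(edges - prev_edges).mean(),    # edge difference from the previous frame
          }

      rng = np.random.default_rng(0)
      f0, f1 = rng.random((64, 64)), rng.random((64, 64))  # two stand-in frames
      print(second_info_features(f0, f1))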
  • the number of feature amounts included in each of the first information and the second information may be appropriately determined according to the embodiment.
  • Each of the first information and the second information may be expressed by one or a plurality of feature amounts (motion feature amounts).
  • The specific content of each of the first information and the second information may be appropriately determined according to the embodiment.
  • For example, the first information may be composed of information related to at least one of whether or not the face of the driver D can be detected, the position of the face, the direction of the face, the movement of the face, the direction of the line of sight, the position of each organ of the face, and the degree of eye opening/closing.
  • the second information may be constituted by a feature amount related to at least one of the position of the edge extracted from the second image, the strength of the edge, and a local frequency component of the image.
  • Each of the first information and the second information may be composed of feature amounts, information, and the like that are different from those in the above embodiment.
  • In the above embodiment, the control unit 110 analyzes the body motion of the driver D based on the second image obtained by reducing the resolution of the captured image acquired from the camera 21.
  • the analysis of the body motion is not limited to such a form, and may be performed on the first image acquired from the camera 21.
  • the resolution conversion unit 13 may be omitted.
  • step S15 may be omitted.
  • Each of the analysis of the facial behavior and the analysis of the body motion may be performed by a learned learning device (for example, a neural network) constructed by machine learning. Since a captured image is used for each of these analyses, a convolutional neural network having a structure in which convolution layers and pooling layers are alternately connected may be used as the learning device.
  • FIG. 14 shows an example in which the second analysis unit 14 is configured using a recursive neural network.
  • the recursive neural network constituting the second analysis unit 14 is a multilayered neural network used for so-called deep learning.
  • For example, the output of the intermediate layer at time t1 is used as an input to the intermediate layer at time t1+1.
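  • The following numpy sketch illustrates, under assumed layer sizes and random weights, the kind of recurrent structure described here, in which the intermediate-layer output at one time step is fed back for use at the next time step; it is a toy illustration, not a reproduction of FIG. 14:

      import numpy as np

      rng = np.random.default_rng(42)
      n_in, n_hidden, n_out = 6, 8, 3            # illustrative sizes: features, hidden units, states
      W_in  = rng.normal(scale=0.1, size=(n_hidden, n_in))
      W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # carries h(t) into step t+1
      W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))

      def run_sequence(feature_sequence):
          """feature_sequence: iterable of per-frame feature vectors of length n_in."""
          h = np.zeros(n_hidden)                 # intermediate-layer state carried across time
          outputs = []
          for x in feature_sequence:
              h = np.tanh(W_in @ x + W_rec @ h)  # output at time t feeds the step at t+1
              outputs.append(W_out @ h)          # per-frame scores for the candidate states
          return outputs

      frames = [rng.random(n_in) for _ in range(5)]
      print(run_sequence(frames)[-1])            # scores after seeing the whole sequence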
  • In the above embodiment, the estimated state of the driver D includes forward gaze, drowsiness, looking aside, putting on and taking off clothes, telephone operation, leaning against the window or armrest, driving disturbance by a passenger or pet, onset of illness, facing backward (looking back), slumping forward, eating and drinking, smoking, dizziness, abnormal behavior, operation of a car navigation system or audio device, putting on and taking off glasses or sunglasses, and photography.
  • the state of the driver D to be estimated may not be limited to such an example, and may be appropriately selected according to the embodiment.
  • For example, the control unit 110 may include other states, such as falling asleep or watching a monitor screen, as candidates for the state estimation of the driver D.
  • the state estimation apparatus 10 may present a candidate for a state to be estimated on a display (not shown) or the like, and accept the designation of the state to be estimated.
  • In the above embodiment, the control unit 110 detects the face of the driver D and its organs in steps S12 to S14, and thereby detects the driver D's face direction, gaze direction (change of gaze), degree of eye opening/closing, and the like.
  • the behavior of the face to be detected need not be limited to such an example, and may be appropriately selected according to the embodiment.
  • the control unit 110 may acquire information other than the above, such as the number of blinks of the driver D and the speed of breathing. Further, for example, the control unit 110 may estimate the driver's state using biological information such as a pulse other than the first information and the second information.
  • the state estimation device 10 may be applied to a vehicle system 200 that does not have the automatic driving support device 22.
  • FIG. 15 schematically illustrates an example in which the state estimation device 10 is applied to a vehicle system 200 that does not have the automatic driving support device 22.
  • This modification is configured in the same manner as in the above embodiment, except that the automatic driving support device 22 is not provided.
  • the vehicle system 200 according to this modification may appropriately issue a warning or the like based on the estimation result of the state of the driver D.
  • the vehicle system 200 may automatically issue a warning to the driver D when a state involving a danger such as falling asleep or dangerous driving is estimated.
  • Further, the vehicle system 200 may perform communication for requesting an ambulance. Thereby, even in the vehicle system 200 that is not provided with the automatic driving support device 22, the estimation result of the state estimation device 10 can be utilized effectively.
  • In the above embodiment, the control unit 110 changes the value of each element of the weight vector W applied to the feature vector x based on the estimation result of the state of the driver D. However, this weighting process may be omitted.
  • the first information and the second information may be expressed in a form other than the feature amount.
  • FIG. 16 schematically illustrates the state estimation device 100 according to this modification.
  • the state estimation device 100 is configured in the same manner as the state estimation device 10 according to the above embodiment, except that the feature vector generation unit 15 and the weight setting unit 16 are not provided.
  • The state estimation device 100 detects the first information related to the behavior of the face of the driver D based on the first image, and detects the second information related to the body motion of the driver D based on the second image obtained by reducing the resolution of the first image. Then, the state estimation device 100 estimates the state of the driver D based on the first information and the second information.
  • In the above embodiment, the state of the driver D is estimated by using captured images, continuously photographed by one camera 21 installed in the vehicle C, of the driver's seat where the driver D can be present.
  • the number of cameras 21 for acquiring a captured image is not limited to one, and may be a plurality.
  • a plurality of cameras 21 may be appropriately installed around the driver D so as to photograph the driver D from various angles.
  • the state estimation device 10 may estimate the state of the driver D using a captured image acquired from each camera 21. As a result, it is possible to obtain a photographed image at an angle that could not be photographed by one camera, so that the state of the driver D can be estimated more accurately.
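  • Where a plurality of cameras 21 are used, one simple, purely illustrative way of combining their views is to compute a per-camera estimate and fuse the results, for example by adopting the most confident view; the scoring function, field names, and fusion rule in the sketch below are assumptions, not part of the present embodiment:

      def estimate_from_camera(features):
          """Hypothetical per-camera estimate returning (state label, confidence)."""
          if features["face_visible"] < 0.5 and features["body_motion"] > 0.6:
              return "slumped", 0.9
          return "normal", 0.6

      def fuse_estimates(per_camera_features):
          """Adopt the estimate from whichever camera is most confident."""
          estimates = [estimate_from_camera(f) for f in per_camera_features]
          return max(estimates, key=lambda e: e[1])

      cameras = [
          {"face_visible": 0.9, "body_motion": 0.1},  # front camera still sees the face
          {"face_visible": 0.2, "body_motion": 0.8},  # side camera catches the large movement
      ]
      print(fuse_estimates(cameras))  # ('slumped', 0.9)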
  • the subject whose state is to be estimated is the driver D of the vehicle C.
  • FIG. 1 shows an example of an automobile as the type of the vehicle C.
  • the type of the vehicle C is not limited to the automobile, and may be a truck, a bus, a ship, various work vehicles, a bullet train, a train, or the like.
  • the target person whose state is to be estimated does not have to be limited to the driver of various vehicles, and may be appropriately selected according to the embodiment.
  • the target person whose state is to be estimated may be a worker who performs work in a facility such as a factory, a care recipient who enters a care facility, or the like.
  • In these cases, the camera 21 may simply be arranged so as to photograph the target person who may be present at the predetermined location.
  • FIG. 17 schematically illustrates a scene in which the state estimation device 101 is applied to a system that estimates the state of the worker L in the factory F.
  • The state estimation device 101 is configured similarly to the state estimation device 10 according to the above embodiment, except that the subject whose state is to be estimated is the worker L of the factory F, that the state of the worker L is estimated, and that the device is not connected to the automatic driving support device 22. In this case, the camera 21 is appropriately arranged so as to photograph the worker L who may be present at a predetermined work place.
  • The state estimation device 101 acquires the first information related to the behavior of the face of the worker L based on the captured image (first image) acquired from the camera 21, as in the above embodiment. Moreover, the state estimation device 101 acquires the second information regarding the body motion of the worker L based on the captured image (second image) obtained by reducing the resolution of the captured image acquired from the camera 21. Then, the state estimation device 101 estimates the state of the worker L based on the first information and the second information. At this time, the state estimation device 101 can estimate, as the state of the worker L, the concentration level and health state of the worker L (for example, the physical condition or fatigue level of the worker). Further, for example, when applied to a care recipient who has moved into a care facility, the abnormal behavior of the care recipient can be estimated.
  • the captured image is composed of a plurality of frames
  • In the above embodiment, the control unit 110 analyzes the behavior of the face in units of frames in steps S12 to S14, and analyzes the body motion over two or more frames in step S16.
  • the captured image and each analysis method need not be limited to such an example.
  • the control unit 110 may perform an analysis of body motion on a captured image configured with one frame.
  • As described above, the state estimation device has the effect of being able to estimate the states of a wide variety of subjects with higher accuracy than before, and thus can be widely applied as a device for estimating the state of a subject.
  • (Appendix 1) A state estimation device comprising: a hardware processor; and a memory holding a program to be executed by the hardware processor, wherein the hardware processor is configured to execute the program to: acquire a photographed image from a photographing device arranged to photograph a subject who may be present at a predetermined location; analyze the behavior of the subject's face based on the captured image and acquire first information regarding the behavior of the subject's face; analyze the physical motion of the subject based on the captured image and acquire second information relating to the physical motion of the subject; and estimate the state of the subject based on the first information and the second information.
  • (Appendix 2) A state estimation method comprising: acquiring, by a hardware processor, a photographed image from a photographing device arranged to photograph a subject who may be present at a predetermined location; analyzing, by the hardware processor, the behavior of the subject's face based on the captured image and obtaining first information regarding the behavior of the subject's face; analyzing, by the hardware processor, the physical motion of the subject based on the captured image and obtaining second information relating to the physical motion of the subject; and estimating, by the hardware processor, the state of the subject based on the first information and the second information.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Child & Adolescent Psychology (AREA)
  • Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Theoretical Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

According to one aspect, the present invention relates to a state estimation apparatus comprising: an image acquisition unit that acquires a photographed image from a photographing device arranged to photograph a subject who may be present in a prescribed location; a first analysis unit that analyzes the facial movement of the subject based on the photographed image and acquires first information concerning the facial movement of the subject; a second analysis unit that analyzes the body motion of the subject based on the photographed image and acquires second information concerning the body motion of the subject; and an estimation unit that estimates the state of the subject based on the first information and the second information.
PCT/JP2017/020378 2016-06-02 2017-06-01 Dispositif d'estimation d'état, procédé d'estimation d'état et programme d'estimation d'état WO2017209225A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US16/303,710 US20200334477A1 (en) 2016-06-02 2017-06-01 State estimation apparatus, state estimation method, and state estimation program
CN201780029000.6A CN109155106A (zh) 2016-06-02 2017-06-01 状态推定装置、状态推定方法和状态推定程序
DE112017002765.9T DE112017002765T5 (de) 2016-06-02 2017-06-01 Zustandsabschätzungsvorrichtung, Zustandsabschätzungsverfahren und Zustandsabschätzungsprogramm

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2016-111108 2016-06-02
JP2016111108 2016-06-02
JPPCT/JP2017/007142 2017-02-24
PCT/JP2017/007142 WO2017208529A1 (fr) 2016-06-02 2017-02-24 Dispositif d'estimation d'état de conducteur, système d'estimation d'état de conducteur, procédé d'estimation d'état de conducteur, programme d'estimation d'état de conducteur, dispositif d'estimation d'état de sujet, procédé d'estimation d'état de sujet, programme d'estimation d'état de sujet et support d'enregistrement

Publications (1)

Publication Number Publication Date
WO2017209225A1 true WO2017209225A1 (fr) 2017-12-07

Family

ID=60477526

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/020378 WO2017209225A1 (fr) 2016-06-02 2017-06-01 Dispositif d'estimation d'état, procédé d'estimation d'état et programme d'estimation d'état

Country Status (1)

Country Link
WO (1) WO2017209225A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109472253A (zh) * 2018-12-28 2019-03-15 华人运通控股有限公司 行车安全智能提醒方法、装置、智能方向盘和智能手环
JP2020123334A (ja) * 2019-01-30 2020-08-13 株式会社ストラドビジョン 自律走行モードとマニュアル走行モードとの間の走行モードを変更するために自律走行の安全を確認するためのrnnの学習方法及び学習装置、そしてテスト方法及びテスト装置
JP2020184128A (ja) * 2019-05-05 2020-11-12 Assest株式会社 疲労度判別プログラム
WO2021111567A1 (fr) 2019-12-04 2021-06-10 日本電気株式会社 Système de détermination d'état physique anormal, procédé de détermination d'état physique anormal et programme informatique

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012230535A (ja) * 2011-04-26 2012-11-22 Nikon Corp 電子機器および電子機器の制御プログラム
JP2016045714A (ja) * 2014-08-22 2016-04-04 株式会社デンソー 車載制御装置

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012230535A (ja) * 2011-04-26 2012-11-22 Nikon Corp 電子機器および電子機器の制御プログラム
JP2016045714A (ja) * 2014-08-22 2016-04-04 株式会社デンソー 車載制御装置

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109472253A (zh) * 2018-12-28 2019-03-15 华人运通控股有限公司 行车安全智能提醒方法、装置、智能方向盘和智能手环
CN109472253B (zh) * 2018-12-28 2024-04-16 华人运通(上海)云计算科技有限公司 行车安全智能提醒方法、装置、智能方向盘和智能手环
JP2020123334A (ja) * 2019-01-30 2020-08-13 株式会社ストラドビジョン 自律走行モードとマニュアル走行モードとの間の走行モードを変更するために自律走行の安全を確認するためのrnnの学習方法及び学習装置、そしてテスト方法及びテスト装置
JP2020184128A (ja) * 2019-05-05 2020-11-12 Assest株式会社 疲労度判別プログラム
WO2021111567A1 (fr) 2019-12-04 2021-06-10 日本電気株式会社 Système de détermination d'état physique anormal, procédé de détermination d'état physique anormal et programme informatique

Similar Documents

Publication Publication Date Title
JP6245398B2 (ja) 状態推定装置、状態推定方法、及び状態推定プログラム
JP6264492B1 (ja) 運転者監視装置、運転者監視方法、学習装置及び学習方法
WO2017209225A1 (fr) Dispositif d'estimation d'état, procédé d'estimation d'état et programme d'estimation d'état
CN111566612A (zh) 基于姿势和视线的视觉数据采集系统
US8411171B2 (en) Apparatus and method for generating image including multiple people
EP1589485B1 (fr) Procédé de poursuite d'objet et d'identification de l'état d'un oeuil
CN106165391B (zh) 增强的图像捕获
JP6668942B2 (ja) 撮影制御装置、プログラム及び方法
US9687189B2 (en) Automatic visual remote assessment of movement symptoms in people with parkinson's disease for MDS-UPDRS finger tapping task
JP5001930B2 (ja) 動作認識装置及び方法
JP2007074033A (ja) 撮像装置及びその制御方法及びプログラム及び記憶媒体
US11455810B2 (en) Driver attention state estimation
US11084424B2 (en) Video image output apparatus, video image output method, and medium
JP2010191793A (ja) 警告表示装置及び警告表示方法
JP6043933B2 (ja) 眠気レベルの推定装置、眠気レベルの推定方法および眠気レベルの推定処理プログラム
JP2024505633A (ja) 画像処理システム
WO2018168038A1 (fr) Dispositif de détermination de siège de conducteur
JP2017091013A (ja) 運転支援装置
JP2021037216A (ja) 閉眼判定装置
JP2009244944A (ja) 画像回復装置および撮影装置
JP2020194227A (ja) 顔遮蔽判定装置、顔遮蔽判定方法、顔遮蔽判定プログラム及び乗員監視システム
JP6087615B2 (ja) 画像処理装置およびその制御方法、撮像装置、および表示装置
JP2020149499A (ja) 乗員観察装置
JP7019394B2 (ja) 視認対象検知装置、視認対象検知方法、およびプログラム
WO2020263277A1 (fr) Lissage temporel de points de repère

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17806775

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17806775

Country of ref document: EP

Kind code of ref document: A1