WO2024095362A1 - Information processing system, information processing device, information processing method, and recording medium


Info

Publication number: WO2024095362A1
Authority: WO (WIPO, PCT)
Application number: PCT/JP2022/040864
Other languages: French (fr), Japanese (ja)
Inventor: 竜一 赤司
Original assignee: 日本電気株式会社
Prior art keywords: information, camera, information processing, degradation, authentication
Application filed by 日本電気株式会社
Priority to PCT/JP2022/040864
Publication of WO2024095362A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/68: Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations

Description

  • This disclosure relates to an information processing system, an information processing device, an information processing method, and a recording medium.
  • Patent Document 1 describes measuring the actual distance between the subject and the camera from the distance between facial components, and acquiring an eye image from an image of a subject confirmed to be within the iris photography space. Patent Document 1 further describes measuring the quality of the acquired eye image and acquiring an image for iris recognition that meets a standard quality level.
  • Patent Document 2 describes evaluating the quality of an iris image before it is processed for iris recognition, and providing an evaluation of the iris image according to blur, defocus, eye closure, obscuration, and the like.
  • Patent Document 3 describes walk-through authentication in which the focus is fixed and burst imaging is performed on a subject passing through the fixed focus position, while for re-authentication the focus is scanned and burst imaging is performed on a subject standing still. In this way, the technology of Patent Document 3 makes it possible to extract an iris image focused on the subject's iris.
  • This disclosure aims to improve upon the techniques described in the prior art documents mentioned above.
  • According to one aspect of this disclosure, there is provided an information processing system comprising: a camera capable of capturing an image of a target; an acquisition means for acquiring state information indicating at least a state of the target; an estimation means for generating, using the state information, degradation information relating to degradation estimated to occur in a target image obtained by capturing the target with the camera; and a control means for outputting control information corresponding to the degradation information at least one of before the target is imaged by the camera and when the target is imaged by the camera.
  • According to another aspect, there is provided an information processing device comprising the acquisition means, the estimation means, and the control means described above.
  • According to another aspect, there is provided an information processing method executed by one or more computers, comprising: acquiring state information indicating at least a state of a target; generating, using the state information, degradation information relating to degradation estimated to occur in a target image obtained by capturing the target with a camera; and outputting control information corresponding to the degradation information at least one of before the target is imaged by the camera and when the target is imaged by the camera.
  • According to another aspect, there is provided a computer-readable recording medium on which is recorded a program that causes a computer to function as the acquisition means, the estimation means, and the control means described above.
  • FIG. 1 is a diagram showing an overview of an information processing system according to a first embodiment.
  • FIG. 2 is a diagram showing an overview of an information processing device according to the first embodiment.
  • FIG. 3 is a diagram showing an overview of an information processing method according to the first embodiment.
  • FIG. 4 is a diagram illustrating an example of a usage environment of the information processing system according to the first embodiment.
  • FIG. 5 is a block diagram illustrating a functional configuration of the information processing system according to the first embodiment.
  • FIG. 6 is a flowchart illustrating the flow of processing performed by the information processing system according to the first embodiment.
  • FIG. 7 is a diagram illustrating an example of a degradation estimation model used by an estimation unit to generate degradation information.
  • FIG. 8 is a diagram illustrating a computer for realizing the information processing device.
  • FIG. 9 is a diagram illustrating a functional configuration of an information processing system according to a second embodiment.
  • FIG. 10 is a flowchart illustrating the flow of processing executed by an information processing system according to a third embodiment.
  • FIG. 11 is a diagram illustrating an example of a quality estimation model used by a control unit to generate a quality score.
  • FIG. 12 is a block diagram illustrating a functional configuration of an information processing system according to a fourth embodiment.
  • FIG. 13 is a diagram for explaining a method in which an estimation unit according to a sixth embodiment generates degradation information.
  • FIG. 14 is a block diagram illustrating a functional configuration of an information processing system according to a seventh embodiment.
  • FIG. 15 is a diagram illustrating an example of a usage environment of the information processing system according to the seventh embodiment.
  • FIG. 16 is a block diagram illustrating a functional configuration of an information processing system according to an eighth embodiment.
  • FIG. 1 is a diagram showing an overview of an information processing system 50 according to a first embodiment.
  • The information processing system 50 includes a camera 20, an acquisition unit 110, an estimation unit 130, and a control unit 150.
  • The camera 20 is capable of capturing an image of a target.
  • The acquisition unit 110 acquires state information indicating at least a state of the target.
  • The estimation unit 130 generates degradation information using the state information.
  • The degradation information is information on degradation estimated to occur in a target image obtained by capturing an image of the target with the camera 20.
  • The control unit 150 outputs control information corresponding to the degradation information at least one of before the target is imaged by the camera 20 and when the target is imaged by the camera 20.
  • This information processing system 50 can estimate the degradation of the target image before the camera 20 captures it, and take measures to obtain a good image (see the sketch below).
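  • To make the roles of these units concrete, the following Python sketch chains acquisition, estimation, and control with toy stand-ins. All names, thresholds, and formulas here are hypothetical illustrations; the disclosure specifies the flow, not an implementation.

```python
from dataclasses import dataclass

@dataclass
class StateInfo:
    """Hypothetical container for state information about the target."""
    face_yaw_deg: float    # face orientation relative to the camera axis
    speed_m_per_s: float   # movement speed toward the camera

@dataclass
class DegradationInfo:
    """Estimated degree of degradation, 0..1, per degradation factor."""
    motion_blur: float
    off_angle: float

def estimate_degradation(state: StateInfo) -> DegradationInfo:
    """Toy stand-in for the estimation means (the disclosure uses a
    trained model; see the degradation estimation model later)."""
    return DegradationInfo(
        motion_blur=min(1.0, state.speed_m_per_s / 2.0),
        off_angle=min(1.0, abs(state.face_yaw_deg) / 45.0),
    )

def build_control(deg: DegradationInfo) -> dict:
    """Toy stand-in for the control means: map estimated degradation to
    control information output before or at the moment of capture."""
    control = {}
    if deg.motion_blur >= 0.5:            # illustrative threshold
        control["exposure_time_ms"] = 2   # shorten exposure
    if deg.off_angle >= 0.5:
        control["notification"] = "Look ahead"
    return control

# Acquire -> estimate -> control, all before the camera captures.
state = StateInfo(face_yaw_deg=20.0, speed_m_per_s=1.4)
print(build_control(estimate_degradation(state)))
```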
  • FIG. 2 is a diagram showing an overview of the information processing device 10 according to this embodiment.
  • The information processing device 10 includes an acquisition unit 110, an estimation unit 130, and a control unit 150.
  • The acquisition unit 110 acquires state information indicating at least the state of the target.
  • The estimation unit 130 generates degradation information using the state information.
  • The degradation information is information about degradation estimated to occur in a target image obtained by capturing an image of the target with the camera 20.
  • The control unit 150 outputs control information corresponding to the degradation information at least one of before the target is imaged by the camera 20 and when the target is imaged by the camera 20.
  • This information processing device 10 can estimate the degradation of the target image before the camera 20 captures it, and take measures to obtain a good image.
  • The information processing system 50 according to this embodiment can be configured to include the information processing device 10 according to this embodiment.
  • FIG. 3 is a diagram showing an overview of the information processing method according to this embodiment.
  • The information processing method according to this embodiment is executed by one or more computers.
  • The information processing method according to this embodiment includes steps S10, S20, and S30.
  • In step S10, state information indicating at least the state of the target is acquired.
  • In step S20, degradation information is generated using the state information.
  • The degradation information is information about degradation estimated to occur in the target image obtained by capturing an image of the target with the camera 20.
  • In step S30, control information corresponding to the degradation information is output at least one of before the target is imaged by the camera 20 and when the target is imaged by the camera 20.
  • This information processing method makes it possible to estimate the degradation of the target image before capturing an image of the target with the camera 20, and to take measures to obtain a good image.
  • The information processing method according to this embodiment can be executed by the information processing device 10 according to this embodiment.
  • FIG. 4 is a diagram illustrating an example of a usage environment of the information processing system 50 according to this embodiment.
  • The information processing system 50 is, for example, a walk-through authentication system.
  • The target image captured by the camera 20 is an image used to authenticate the target 90.
  • The target 90 is moving, for example, so as to approach point P2, which is the point where the target image is captured.
  • The state of the target 90 at point P1, a position farther from the camera 20 than point P2, is measured using the state measurement unit 30.
  • The estimation unit 130 of the information processing device 10 estimates the degradation of the target image that would be obtained by capturing an image of the target 90 with the camera 20 at point P2. Based on the estimation result, the control unit 150 of the information processing device 10 then outputs control information for suppressing the degradation of the target image.
  • The techniques of Patent Documents 1 to 3 cannot estimate image degradation and take countermeasures before an image is captured.
  • In contrast, the control unit 150 outputs control information corresponding to the degradation information at least one of before the target is imaged by the camera 20 and when the target is imaged by the camera 20. Therefore, appropriate control for suppressing degradation is possible, and a good image can be obtained.
  • It is particularly beneficial to use the information processing system 50, information processing device 10, and information processing method according to this embodiment to implement measures that improve the quality of the target image before or at the time the target image is captured.
  • Here, point P2 is, for example, the focal position of the camera 20, and point P1 is a point farther away from the camera 20 than point P2. However, these points are not limited to the example of the walk-through system shown in FIG. 4.
  • FIG. 5 is a block diagram illustrating the functional configuration of the information processing system 50 according to this embodiment.
  • FIG. 6 is a flowchart illustrating the flow of processing performed by the information processing system 50 according to this embodiment. Detailed examples of the information processing system 50, information processing device 10, and information processing method according to this embodiment will be described with reference to FIGS. 5 and 6.
  • The target 90 is, for example, a human. However, the target 90 may be a living thing other than a human, or a non-living object. It is preferable that the target 90 moves closer to the camera 20 before the target image is captured.
  • The target image captured by the camera 20 is the image used to authenticate the target 90.
  • The camera 20 is focused on approximately point P2.
  • The information processing system 50 is capable of performing, for example, iris authentication of the target 90.
  • The camera 20 is, for example, an iris imaging camera for imaging the iris of the target 90.
  • In that case, the target image is an image used for iris authentication.
  • The camera 20 is provided so as to be able to image the iris of the target 90 located at point P2.
  • The target image is an image including an eye.
  • The target image may be an image including both eyes, or an image including only the right eye or the left eye.
  • The target image may be an image including not only the eye but also the area around the eye.
  • However, the authentication performed by the information processing system 50 is not limited to iris authentication.
  • The information processing system 50 may be a system capable of performing facial authentication of the target 90.
  • In that case, the camera 20 may be a camera for capturing an image to be used for facial authentication, and the target image may be an image to be used for facial authentication.
  • The camera 20 is arranged to be capable of capturing an image of the face of the target 90 located at point P2.
  • When the target image is an image to be used for facial authentication, the target image is an image that includes a face.
  • The information processing system 50 further includes one or more state measurement units 30.
  • State information is generated based on the measurement results of the state measurement units 30.
  • The state information indicates at least the state of the target 90 at a time when the target 90 is located at a point farther away than the focal point of the camera 20.
  • The control unit 150 outputs control information at least one of before the target 90 reaches the focal point of the camera 20 and when the target 90 reaches the focal point of the camera 20. In this way, imaging that takes degradation into account can be performed without prior imaging by the camera 20, and a good image of the target can be obtained.
  • Examples of the state measurement unit 30 include a camera, a LiDAR, and an infrared sensor.
  • The camera is not particularly limited, and may be, for example, a depth camera, a wide-angle camera, a visible light camera, or a near-infrared camera.
  • If the state measurement unit 30 is a depth camera or a LiDAR, the distance from the state measurement unit 30 to the target 90 and three-dimensional information about the target 90 can be obtained.
  • If the state measurement unit 30 is a wide-angle camera, it can capture a wider range than the camera 20, for example, the entire target 90.
  • In step S101, the state measurement unit 30 measures the state of the target 90 at point P1, a point farther from the camera 20 than point P2. Note that if the state measurement unit 30 is a camera of some type, "measuring the state of the target 90" includes capturing an image of the target 90.
  • When there are multiple state measurement units 30, point P1 may or may not be the same for each of them. In other words, point P1 can be set independently for each state measurement unit 30.
  • Point P1 is a point on the near side of point P2 along the path of the target 90.
  • The distance between points P1 and P2 is not particularly limited, but is, for example, 50 cm or more and 2 m or less.
  • In step S102, state information is generated using the measurement results of the state measurement unit 30.
  • The state information includes information related to the target 90 or to items worn by the target 90.
  • For example, the state information indicates one or more of the face direction, body posture, gaze direction, movement speed, and whether or not glasses are worn.
  • However, the state information is not limited to these examples as long as it includes information related to the target 90 or to items worn by the target 90. By using such state information, degradation factors can be estimated accurately.
  • The face direction of the target 90 can be determined by analyzing information obtained from, for example, at least one of a depth camera, a LiDAR, and a wide-angle camera using existing techniques.
  • In this case, the state information can include, for example, the angles of the face direction (left/right angle and up/down angle) relative to a specified direction.
  • The body posture of the target 90 can be identified by analyzing information obtained from, for example, at least one of a depth camera, a LiDAR, and a wide-angle camera using existing methods.
  • In this case, the state information can include, for example, a body posture label indicating the body posture, such as "walking", "standing upright", or "walking with a hunched back".
  • The body posture label can be a predetermined numerical value.
  • The gaze direction of the target 90 can be determined by analyzing information obtained from, for example, at least one of a depth camera and a wide-angle camera using existing techniques.
  • In this case, the state information can include, for example, the angles of the gaze direction (left/right angle and up/down angle) relative to a specified direction.
  • The movement speed of the target 90 can be determined by analyzing information obtained from, for example, at least one of a depth camera, a LiDAR, and an infrared sensor using existing methods. For example, if the state measurement units 30 are infrared sensors, multiple infrared sensors detect the passing of the target 90 at multiple points, and the speed of the target 90 can be calculated from the distance between those points and the difference in passing times (see the sketch below).
  • In this case, the state information can include, for example, information indicating the direction and speed of movement of the target 90.
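  • The speed calculation just described reduces to distance over time. A minimal sketch follows; the sensor spacing and timestamps are illustrative values, not taken from this disclosure.

```python
# Two infrared sensors placed 0.5 m apart along the walking path each
# record the time at which the target 90 passes (illustrative values).
sensor_spacing_m = 0.5
t_pass_sensor1_s = 10.00
t_pass_sensor2_s = 10.40

# Speed = distance between detection points / difference in passing times.
speed_m_per_s = sensor_spacing_m / (t_pass_sensor2_s - t_pass_sensor1_s)
print(f"movement speed: {speed_m_per_s:.2f} m/s")  # 1.25 m/s
```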
  • Whether or not the target 90 is wearing glasses can be determined by, for example, analyzing information obtained from at least one of a depth camera, a LiDAR, and a wide-angle camera using existing techniques.
  • In this case, the state information can include, for example, information indicating whether or not the target 90 is wearing glasses.
  • When the state information includes multiple types of information, the state information can be, for example, a vector whose elements are each of those types of information, as in the sketch below.
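  • The following snippet assembles such a state vector. The element ordering and the numeric encodings (posture codes, 0/1 glasses flag) are assumptions made for illustration; the disclosure does not fix a particular encoding.

```python
import numpy as np

# Illustrative encoding: angles in degrees, the body posture label as a
# numeric code (0 = walking, 1 = standing upright, 2 = hunched walking),
# glasses as 0/1.
face_yaw, face_pitch = 5.0, -3.0   # face direction
gaze_yaw, gaze_pitch = 2.0, -1.0   # gaze direction
posture_label = 0.0                # "walking"
speed_m_per_s = 1.25
wearing_glasses = 1.0

state_vector = np.array([
    face_yaw, face_pitch,
    gaze_yaw, gaze_pitch,
    posture_label,
    speed_m_per_s,
    wearing_glasses,
], dtype=np.float32)
```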
  • However, the method of generating state information is not limited to the above example.
  • For example, a trained neural network may be used to generate state information.
  • The process of generating state information from the measurement results of the state measurement unit 30 may be performed by the state measurement unit 30, by the information processing device 10, or by an analysis device separate from both.
  • In the latter cases, the analysis device or the information processing device 10 acquires, from one or more state measurement units 30, the measurement results and any information necessary for generating the state information, and uses them to generate the state information.
  • When the state information includes multiple types of information, the devices generating those types of information do not all need to be the same.
  • For example, some of the information may be generated by the state measurement unit 30, and other information may be generated by the information processing device 10.
  • In that case, the information processing device 10 can generate the state information by combining the multiple types of information.
  • In step S103, the acquisition unit 110 acquires the state information.
  • The acquisition unit 110 may acquire the state information from the state measurement unit 30 or from another analysis device.
  • Alternatively, the acquisition unit 110 may acquire state information generated by the information processing device 10 using information acquired from one or more state measurement units 30 and analysis devices.
  • In step S104, the estimation unit 130 generates degradation information using the state information acquired by the acquisition unit 110.
  • The degradation information indicates, for example, one or more degradation factors occurring in the target image and the degree of degradation for each of those factors.
  • In other words, the degradation information indicates one or more combinations of a degradation factor and a degree of degradation.
  • The one or more degradation factors include, for example, one or more of focus blur, motion blur, occlusion, and off-angle. If the target image is an image that includes an iris, the one or more degradation factors preferably include one or more of focus blur, motion blur, eyelid occlusion, lighting reflection occlusion, and off-angle. By targeting such degradation factors, the degradation of the target image can be estimated accurately.
  • A degradation factor can also be described as the type of degradation predicted to occur in the target image.
  • Focus blur is blurring of the target 90, or of a specific part of the target 90, caused by a shift in focus.
  • The specific part of the target 90 is, for example, a part of the target image used for authentication.
  • Motion blur is blurring caused by a change in the relative position between the camera 20 and the target 90 or a specific part of the target 90.
  • For these factors, the degree of degradation may indicate the state of the specific part in the target image. For example, if the target image is an image that includes an iris, the specific part is the iris, and the greater the blurring of the iris region in the target image, the higher the degree of degradation may be.
  • Occlusion means that at least a part of the target 90, or of a predetermined part of the target 90, is not visible in the target image.
  • For example, eyelid occlusion means that at least a part of the iris is hidden by the eyelid.
  • Lighting reflection occlusion means that the brightness of at least a part of the iris region of the target image increases due to lighting being reflected off the pupil. If a part of the iris region is too bright (for example, a blown-out highlight), the iris pattern used for iris authentication cannot be detected.
  • When the target image is an image that includes an iris, the greater the proportion of the iris that is hidden, the higher the degree of degradation due to occlusion.
  • Off-angle means that the orientation of the target 90, or of a specific part of the target 90, deviates from the optical axis of the camera 20. When the target image is an image that includes an iris, off-angle can occur, for example, when the gaze of the target 90 is not directed toward the camera 20.
  • The degradation information may, for example, include a numerical value indicating the degree of degradation for each degradation factor.
  • The degradation information may be a vector whose elements are the multiple degrees of degradation.
  • The method by which the estimation unit 130 generates the degradation information is not particularly limited. For example, the estimation unit 130 can generate the degradation information using a neural network trained with state information and ground-truth degradation information. In this way, degradation information can be generated with high estimation accuracy.
  • FIG. 7 is a diagram illustrating the degradation estimation model 131 that the estimation unit 130 uses to generate degradation information.
  • The input data of the degradation estimation model 131 includes state information, and its output data includes degradation information.
  • The degradation estimation model 131 includes a neural network. This neural network is a trained network that has undergone machine learning in advance using state information and ground-truth degradation information as training data.
  • The state information used in this machine learning is preferably state information obtained in the information processing system 50 according to this embodiment or in a similar system.
  • The ground-truth degradation information used in this machine learning is preferably information on degradation that actually occurred in an image of the target 90 obtained by the camera 20 in the information processing system 50 according to this embodiment or in a similar system.
  • In particular, the ground-truth degradation information is information on degradation that occurred in an image obtained without any notification to, or control of, the target 90 or the camera 20 between the measurement by the state measurement unit 30 and the image capture by the camera 20.
  • The types and forms of information included in the ground-truth degradation information are the same as those of the degradation information described above (a training sketch follows below).
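  • One plausible realization of such a model is a small regression network mapping the state vector to per-factor degrees of degradation. The following PyTorch sketch is an assumption for illustration; the disclosure does not specify an architecture, dimensions, or loss.

```python
import torch
import torch.nn as nn

STATE_DIM = 7   # e.g. the 7-element state vector sketched earlier (assumption)
N_FACTORS = 5   # focus blur, motion blur, eyelid occlusion,
                # lighting reflection occlusion, off-angle

# Hypothetical degradation estimation model 131: state vector in,
# per-factor degrees of degradation in [0, 1] out.
model = nn.Sequential(
    nn.Linear(STATE_DIM, 32),
    nn.ReLU(),
    nn.Linear(32, N_FACTORS),
    nn.Sigmoid(),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(states: torch.Tensor, correct_degradation: torch.Tensor) -> float:
    """One supervised step: state information as input, the degradation
    measured in the actually captured image as the correct answer."""
    optimizer.zero_grad()
    loss = loss_fn(model(states), correct_degradation)
    loss.backward()
    optimizer.step()
    return loss.item()

# Synthetic batch, for shape checking only.
print(train_step(torch.rand(16, STATE_DIM), torch.rand(16, N_FACTORS)))
```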
  • The degradation estimation model 131 may be a model prepared for specific imaging conditions.
  • In that case, the imaging conditions under which the state information and the ground-truth degradation information used for training the degradation estimation model 131 were obtained are the same as the imaging conditions under which degradation information is estimated using the degradation estimation model 131.
  • The imaging conditions are, for example, one or more imaging parameters of the camera 20, the lighting state for the target 90, or a combination of these.
  • In this example, the information processing system 50 includes a lighting unit 40, which can be controlled by the information processing device 10.
  • The lighting unit 40 is, for example, a lighting device capable of adjusting the direction and intensity of light.
  • A degradation estimation model 131 may be prepared for each of multiple imaging conditions and stored in advance in a storage device accessible to the estimation unit 130.
  • In that case, the estimation unit 130 selects, from the multiple degradation estimation models 131, the model corresponding to the imaging condition in effect at the time the state information was obtained, and uses it (see the sketch below).
  • The information processing device 10 can obtain information identifying that imaging condition from the camera 20 and the lighting unit 40.
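  • The per-condition selection can be as simple as a lookup table keyed by an imaging-condition identifier. In this sketch, the tuple key (exposure setting, lighting mode) and the helper make_model are illustrative assumptions.

```python
import torch.nn as nn

def make_model(state_dim: int = 7, n_factors: int = 5) -> nn.Module:
    """Same shape as the degradation estimation sketch above; each
    instance would be trained on data from one imaging condition."""
    return nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(),
                         nn.Linear(32, n_factors), nn.Sigmoid())

# One model 131 per imaging condition.
degradation_models = {
    ("short_exposure", "front_light"): make_model(),
    ("long_exposure", "side_light"): make_model(),
}

def select_model(imaging_condition: tuple) -> nn.Module:
    """Pick the model matching the condition in effect when the state
    information was obtained."""
    return degradation_models[imaging_condition]
```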
  • The estimation unit 130 inputs the state information to the degradation estimation model 131.
  • The estimation unit 130 then obtains the degradation information output from the degradation estimation model 131.
  • In step S105, the control unit 150 outputs control information corresponding to the degradation information generated by the estimation unit 130.
  • In this embodiment, the control unit 150 outputs control information for controlling the imaging conditions of the camera 20 based on the degradation information. In this way, a target image with less degradation can be obtained.
  • That is, the control information output by the control unit 150 is information for controlling the imaging conditions of the camera 20 so as to suppress the degradation indicated by the degradation information.
  • The control unit 150 outputs the control information to at least one of the camera 20 and the lighting unit 40.
  • In this embodiment, the distance between the above-mentioned points P1 and P2 is not particularly limited, but is preferably, for example, 50 cm or more and 70 cm or less.
  • By making the distance between points P1 and P2 50 cm or more, time to control the imaging conditions can be secured.
  • By making the distance between points P1 and P2 70 cm or less, the accuracy of estimating the degradation can be improved.
  • An example of the processing performed by the control unit 150 in this embodiment is described below (a control sketch follows these examples).
  • For example, degradation due to motion blur can be reduced by shortening the exposure time of the camera 20 or by adjusting the focus position of the camera 20.
  • Specifically, when the degree of degradation due to motion blur indicated in the degradation information is equal to or greater than a predetermined standard a, the control unit 150 outputs control information to the camera 20 for shortening the exposure time used to obtain the target image. At this time, the control unit 150 may output control information that shortens the exposure time further as the degree of degradation due to motion blur increases.
  • Alternatively, the control unit 150 may output control information to the camera 20 for changing the focal length while capturing the target image. Specifically, the control unit 150 outputs control information that causes the camera 20 to capture the target image while reducing the focal length.
  • If degradation due to focus blur is indicated in the degradation information, the control unit 150 outputs control information for, for example, narrowing the lens aperture of the camera 20 to increase the depth of field.
  • Alternatively, the control unit 150 outputs control information for changing the focal position of the camera 20.
  • If the degree of degradation due to lighting reflection occlusion indicated in the degradation information is equal to or greater than a predetermined standard c, the control unit 150 outputs control information for changing at least one of the position of the camera 20, the orientation of the camera 20, the position of the lighting relative to the target 90, and the orientation of the lighting, so that lighting reflection does not occur on the target 90 or a predetermined part of the target 90.
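  • The decision logic above can be sketched as a simple mapping from degradation degrees to control information. The text names standards a and c; all numeric thresholds and parameter values below are illustrative assumptions.

```python
def control_for(deg: dict) -> list:
    """Map estimated degrees of degradation (0..1) to control
    information for the camera 20 and the lighting unit 40."""
    controls = []
    if deg.get("motion_blur", 0.0) >= 0.5:                 # standard a
        # Shorten the exposure more as motion blur grows.
        exposure_ms = max(1, round(8 * (1 - deg["motion_blur"])))
        controls.append({"target": "camera", "exposure_time_ms": exposure_ms})
    if deg.get("focus_blur", 0.0) >= 0.5:
        # Narrow the aperture to deepen depth of field, or refocus.
        controls.append({"target": "camera", "f_number": 8.0})
    if deg.get("reflection_occlusion", 0.0) >= 0.5:        # standard c
        # Reorient the lighting so no reflection falls on the iris.
        controls.append({"target": "lighting", "pan_deg": 15.0})
    return controls

print(control_for({"motion_blur": 0.8, "reflection_occlusion": 0.6}))
```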
  • In step S106, the camera 20 generates the target image by capturing an image of the target 90 located at point P2.
  • At this time, the camera 20 may capture the image based on a control signal output from the control unit 150. Also, as described above, if the control unit 150 controls the imaging parameters by outputting control information to the camera 20, it may output that control information at the time the target image is captured; in other words, steps S105 and S106 may be performed simultaneously.
  • Because the imaging conditions are controlled in this way, the degree of degradation of the target image actually obtained in step S106 is expected to be lower than the degree of degradation indicated in the degradation information, that is, the estimated degree of degradation.
  • Each functional component of the information processing device 10 (the acquisition unit 110, the estimation unit 130, and the control unit 150) may be realized by hardware that implements that component (e.g., hardwired electronic circuitry), or by a combination of hardware and software (e.g., electronic circuitry and a program that controls it).
  • FIG. 8 is a diagram illustrating a computer 1000 for realizing the information processing device 10.
  • The computer 1000 is an arbitrary computer.
  • For example, the computer 1000 is an SoC (System on Chip), a personal computer (PC), a server machine, a tablet terminal, or a smartphone.
  • The computer 1000 may be a dedicated computer designed to realize the information processing device 10, or a general-purpose computer.
  • The information processing device 10 may be realized by one computer 1000 or by a combination of multiple computers 1000.
  • The computer 1000 has a bus 1020, a processor 1040, a memory 1060, a storage device 1080, an input/output interface 1100, and a network interface 1120.
  • The bus 1020 is a data transmission path through which the processor 1040, the memory 1060, the storage device 1080, the input/output interface 1100, and the network interface 1120 transmit and receive data to and from one another.
  • However, the method of connecting the processor 1040 and the other components to one another is not limited to a bus connection.
  • The processor 1040 is one of various processors such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or an FPGA (Field-Programmable Gate Array).
  • The memory 1060 is a main storage device realized using a RAM (Random Access Memory) or the like.
  • The storage device 1080 is an auxiliary storage device realized using a hard disk, an SSD (Solid State Drive), a memory card, a ROM (Read Only Memory), or the like.
  • The input/output interface 1100 is an interface for connecting the computer 1000 to input/output devices.
  • For example, an input device such as a keyboard and an output device such as a display are connected to the input/output interface 1100.
  • The input/output interface 1100 may be connected to these devices by a wireless or wired connection.
  • The network interface 1120 is an interface for connecting the computer 1000 to a network.
  • This network is, for example, a LAN (Local Area Network) or a WAN (Wide Area Network).
  • The network interface 1120 may connect to the network by a wireless or wired connection.
  • The camera 20, the one or more state measurement units 30, and the lighting unit 40 are each connected to the information processing device 10 via the input/output interface 1100 or the network interface 1120, and can communicate with the information processing device 10.
  • The storage device 1080 stores program modules that realize the functional components of the information processing device 10.
  • The processor 1040 reads each of these program modules into the memory 1060 and executes it, thereby realizing the function corresponding to each module.
  • As described above, in this embodiment the control unit 150 outputs control information corresponding to the degradation information at least one of before the target is imaged by the camera 20 and when the target is imaged by the camera 20. Therefore, the degradation of the target image can be estimated before the target is imaged by the camera 20, and measures can be taken to obtain a good image.
  • Second Embodiment: FIG. 9 is a diagram illustrating the functional configuration of an information processing system 50 according to the second embodiment.
  • The information processing system 50, the information processing device 10, and the information processing method according to this embodiment are the same as those according to the first embodiment, except for the points described below.
  • In this embodiment, the control unit 150 outputs control information for executing a notification when the degradation information satisfies a predetermined condition C. By doing so, the target 90 can be encouraged to take action to avoid the estimated degradation.
  • The information processing system 50 according to this embodiment further includes one or more notification units 70.
  • The control unit 150 outputs control information to a notification unit 70 to cause it to execute a notification.
  • Examples of the notification unit 70 include a display, a speaker, and a light-emitting device. If the notification unit 70 is a display, the notification may be, for example, the display of a message, a diagram, or the like. If the notification unit 70 is a speaker, the notification may be a sound output such as a message or an alarm. If the notification unit 70 is a light-emitting device, the notification may be light emission.
  • The notification unit 70 is connected to the information processing device 10 via the input/output interface 1100 or the network interface 1120.
  • Each notification unit 70 is provided so that its notification can be recognized by a target 90 located between points P1 and P2.
  • The control unit 150 may output this notification control information instead of, or in addition to, the control information for controlling the imaging environment output to at least one of the camera 20 and the lighting unit 40.
  • An example of the processing performed by the control unit 150 according to this embodiment is described below (a notification sketch follows these examples). Note that in the following examples, a degree of degradation being equal to or greater than the predetermined standard set for it corresponds to satisfying the predetermined condition C.
  • For example, if the degree of degradation due to motion blur indicated in the degradation information satisfies condition C, the control unit 150 outputs control information causing one or more notification units 70 to issue a notification urging the target 90 to slow down its moving speed.
  • In this way, for example, a message such as "Please slow down your walking speed" or "Please stop once at the specified position" is displayed on the display or emitted from the speaker.
  • Similarly, if degradation attributable to the face direction or gaze direction satisfies condition C, the control unit 150 outputs control information causing one or more notification units 70 to issue a notification urging the target 90 to change the direction of his or her face or gaze.
  • In this way, for example, a message such as "Look ahead" may be displayed on the display or emitted from the speaker.
  • Alternatively, the gaze of the target 90 may be guided by emitting light from a light-emitting device.
  • If the degree of degradation due to eyelid occlusion indicated in the degradation information is equal to or greater than a predetermined standard e, the control unit 150 outputs control information causing one or more notification units 70 to issue a notification urging the target 90 to open his or her eyes. In this way, for example, a message such as "Please open your eyes wide" may be displayed on the display or emitted from the speaker.
  • If the degree of degradation due to off-angle satisfies condition C, the control unit 150 outputs control information causing one or more notification units 70 to notify the target 90 to change at least one of the face orientation, position, and gaze direction, so that at least one of the face and the gaze is directed toward the camera 20.
  • In this way, for example, a message such as "Look ahead" or "Move a little more to the right and direct your gaze toward us" may be displayed on the display or emitted from the speaker.
  • Alternatively, a light-emitting device may emit light to guide the gaze of the target 90.
  • The control unit 150 may also output control information based on the state information. For example, if the state information indicates that the target 90 is wearing glasses, control information is output to cause one or more notification units 70 to issue a notification urging the target 90 to remove the glasses. In this way, a message such as "Please remove your glasses" may be displayed on the display or emitted from the speaker.
  • In this embodiment, the distance between points P1 and P2 is not particularly limited, but is preferably, for example, 1 m or more and 2 m or less. By making the distance 1 m or more, time for the target 90 to change its state in response to the notification can be secured. By making the distance 2 m or less, sufficient accuracy in estimating the degradation can be ensured.
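  • The notification logic above amounts to a lookup from degradation factors (and state information) to guidance messages. In this sketch, the text names standard e for eyelid occlusion; the other thresholds are illustrative assumptions.

```python
def notifications_for(deg: dict, state: dict) -> list:
    """Map degradation estimates and state information to guidance
    messages to be shown or spoken by the notification units 70."""
    messages = []
    if deg.get("motion_blur", 0.0) >= 0.5:
        messages.append("Please slow down your walking speed")
    if deg.get("eyelid_occlusion", 0.0) >= 0.5:   # standard e
        messages.append("Please open your eyes wide")
    if deg.get("off_angle", 0.0) >= 0.5:
        messages.append("Look ahead")
    if state.get("wearing_glasses", False):
        messages.append("Please remove your glasses")
    return messages

print(notifications_for({"eyelid_occlusion": 0.7}, {"wearing_glasses": True}))
```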
  • As described above, in this embodiment the control unit 150 outputs control information for executing a notification when the degradation information satisfies the predetermined condition C. Therefore, the target 90 can be encouraged to take action to avoid the estimated degradation.
  • Third Embodiment: FIG. 10 is a flowchart illustrating the flow of processing executed by the information processing system 50 according to the third embodiment.
  • The information processing system 50, the information processing device 10, and the information processing method according to this embodiment are the same as those according to at least one of the first and second embodiments, except for the points described below.
  • In this embodiment, the control unit 150 uses the degradation information to estimate the quality of the target image, which indicates its suitability as an image to be used for authentication.
  • The control unit 150 then outputs control information according to the quality. In this way, a target image suitable for authentication processing can be obtained.
  • The success or failure of authentication depends on the quality of the image used for authentication. For example, a target 90 that should be authenticated may fail to be recognized due to low image quality. Furthermore, the image quality required for authentication may differ depending on the performance and characteristics of the authentication device.
  • The control unit 150 in this embodiment therefore generates, for example, a quality score as an indicator of the suitability of the image for use in authentication.
  • Steps S101 to S104 and step S106 are as described in the first embodiment.
  • In step S204, the control unit 150 estimates the quality using the degradation information. For example, the control unit 150 generates a quality score: the lower the suitability of the target image for use in authentication, the lower its quality score.
  • The method by which the control unit 150 generates the quality score is not particularly limited, and may be a method based on linear regression or a method using a neural network; both are described in detail later.
  • The control unit 150 then outputs control information according to the quality score.
  • For example, the control unit 150 may output control information for controlling the imaging conditions of the camera 20, as described in the first embodiment, only if the quality score is equal to or less than a predetermined standard g.
  • Alternatively, the control unit 150 may output control information for executing a notification, as described in the second embodiment, only if the quality score is equal to or less than the predetermined standard g.
  • The following describes first and second examples of how the control unit 150 generates the quality score.
  • <First Example> In the first example, the control unit 150 calculates the quality score by linear regression.
  • Specifically, the quality score is calculated as a weighted sum of the degrees of degradation for the one or more degradation factors.
  • The weight for each degree of degradation can be determined in advance, for example, as follows.
  • First, multiple images with different degradation states are prepared.
  • The targets captured in these images may be the same or different.
  • Next, an authentication process is performed using these images to confirm whether authentication succeeds.
  • In addition, the degree of degradation for each degradation factor is identified for each image. These degrees of degradation are consistent with (i.e., comparable to) the content of the degradation information generated by the estimation unit 130.
  • Then, using multiple combinations of the authentication result and the one or more degrees of degradation, a weight for each degree of degradation is determined.
  • A formula that computes the weighted sum using these weights is defined as the formula for calculating the quality score.
  • Instead of confirming the success or failure of authentication, an authentication score indicating the likelihood of identity for each image may be determined by other methods.
  • The authentication score may be assigned to each image by a specific authentication algorithm, or by human judgment.
  • In that case, a weight for each degree of degradation is determined so that a quality score equivalent to the authentication score is obtained, using multiple combinations of the authentication score and the one or more degrees of degradation (see the sketch below).
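  • The weighted sum and the weight fitting can be written compactly with least squares. In this sketch, the degradation matrix and authentication scores are fabricated toy values used only to show the shapes involved.

```python
import numpy as np

# Rows: prepared images; columns: degrees of degradation per factor.
# y: authentication scores for the same images (toy values).
D = np.array([[0.1, 0.0, 0.2],
              [0.6, 0.3, 0.1],
              [0.2, 0.8, 0.4]])
y = np.array([0.95, 0.40, 0.55])

# Fit weights so the weighted sum of degradation degrees reproduces
# the authentication score (least squares).
w, *_ = np.linalg.lstsq(D, y, rcond=None)

def quality_score(degradation: np.ndarray) -> float:
    """Quality score as a weighted sum of degradation degrees."""
    return float(degradation @ w)

print(quality_score(np.array([0.3, 0.2, 0.1])))
```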
  • An example of an authentication algorithm is described below.
  • In iris authentication, for example, an iris code extracted from an iris image is used as a feature.
  • An authentication algorithm can then be used that determines whether or not a person is the registered person based on the Hamming distance between the pre-registered feature of that person and the feature extracted from the image captured by the camera 20, as in the toy example below.
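  • A minimal sketch of such a Hamming-distance comparison follows. The 8-bit codes and the 0.32 acceptance threshold are illustrative only (real iris codes are far longer, and the threshold depends on the system).

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of differing bits between two binary iris codes."""
    return float(np.count_nonzero(code_a != code_b)) / code_a.size

enrolled = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # toy code
probe    = np.array([1, 0, 1, 0, 0, 0, 1, 0], dtype=np.uint8)

# Accept as the registered person if the distance is below a threshold.
is_match = hamming_distance(enrolled, probe) < 0.32
print(is_match)
```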
  • There is no particular limitation on the authentication algorithm, and various algorithms can be used. However, the multiple authentication scores prepared for determining one quality score formula are preferably prepared using the same algorithm.
  • Alternatively, the one or more weights used in the formula may be generated by a neural network. That is, the degrees of degradation for the one or more degradation factors and the authentication score are used as the input data of the neural network, and the one or more weights are its output data. The neural network is trained so that the quality score calculated using the output weights approaches the authentication score serving as the correct answer. The one or more weights output from the trained neural network obtained in this way are used in the formula for calculating the quality score.
  • The control unit 150 calculates the quality score by substituting the degrees of degradation indicated in the degradation information generated by the estimation unit 130 into the predetermined formula described above.
  • <Second Example> FIG. 11 is a diagram illustrating a quality estimation model 151 used by the control unit 150 to generate a quality score.
  • The input data of the quality estimation model 151 includes degradation information, and its output data includes a quality score.
  • The quality estimation model 151 includes a neural network. This neural network is a trained network obtained by machine learning using multiple combinations of an authentication score (corresponding to the correct quality score) and one or more degrees of degradation, as described in the first example.
  • The control unit 150 inputs the degrees of degradation indicated in the degradation information generated by the estimation unit 130 to the quality estimation model 151, and then obtains the quality score output from the quality estimation model 151 (see the sketch below).
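  • Inference with such a model is a single forward pass. The architecture below is an assumption for illustration; FIG. 11 does not fix one.

```python
import torch
import torch.nn as nn

N_FACTORS = 5
# A plausible shape for the quality estimation model 151: per-factor
# degrees of degradation in, one quality score out.
quality_model = nn.Sequential(nn.Linear(N_FACTORS, 16), nn.ReLU(),
                              nn.Linear(16, 1))

deg = torch.tensor([[0.2, 0.1, 0.0, 0.3, 0.1]])  # degrees per factor
with torch.no_grad():
    score = quality_model(deg).item()  # untrained here, so arbitrary
print(score)
```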
  • As described above, in this embodiment the control unit 150 uses the degradation information to estimate the quality of the target image, which indicates its suitability as an image to be used for authentication. Therefore, a target image suitable for authentication processing can be obtained.
  • Fourth Embodiment: The information processing system 50, information processing device 10, and information processing method according to the fourth embodiment are the same as those according to at least any one of the first to third embodiments, except for the points described below.
  • In this embodiment, the control unit 150 stops the camera 20 from capturing an image of the target 90 when the degradation information satisfies a predetermined condition D. This makes it possible to reduce unnecessary processing when the probability of obtaining a good target image is low.
  • Condition D is a condition indicating that a good target image is unlikely to be obtained.
  • Condition D may be a condition on the degradation information itself, or a condition on the quality (quality score) obtained from the degradation information as in the third embodiment. In the latter case, if it is estimated that the target image will not meet the quality required for authentication, the imaging of the target 90 by the camera 20 is stopped.
  • For example, condition D is that, of the one or more degrees of degradation indicated in the degradation information, a predetermined number or more are equal to or higher than the predetermined standard set for each of them.
  • Alternatively, condition D is that the quality score obtained based on the degradation information is equal to or lower than a predetermined score.
  • However, condition D is not limited to these examples (an illustrative check follows below).
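  • Both variants of condition D reduce to a simple predicate. This sketch combines them; the counts, per-factor standards, and score threshold are illustrative assumptions.

```python
from typing import Optional

def satisfies_condition_d(deg: dict, standards: dict,
                          min_exceeded: int = 2,
                          quality_score: Optional[float] = None,
                          min_score: float = 0.5) -> bool:
    """Illustrative check for condition D: either a predetermined
    number of degradation degrees reach their per-factor standards,
    or the quality score is at or below a predetermined score."""
    exceeded = sum(1 for factor, degree in deg.items()
                   if degree >= standards.get(factor, 1.0))
    if exceeded >= min_exceeded:
        return True
    return quality_score is not None and quality_score <= min_score

print(satisfies_condition_d({"motion_blur": 0.9, "focus_blur": 0.7},
                            {"motion_blur": 0.5, "focus_blur": 0.5}))
```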
  • FIG. 12 is a block diagram illustrating the functional configuration of an information processing system 50 according to this embodiment.
  • The information processing system 50 according to this embodiment includes one or more notification units 70, like the information processing system 50 according to the second embodiment.
  • In one example, the control unit 150 according to this embodiment stops the camera 20 from capturing an image of the target 90 and outputs control information for executing a notification.
  • In that case, the control unit 150 according to this embodiment does not need to output control information for controlling the imaging conditions of the camera 20.
  • Specifically, if the degradation information satisfies the predetermined condition D, the control unit 150 outputs control information to cause one or more notification units 70 to issue a notification encouraging the target 90, for example, to go back and start moving again. In this way, a message such as "Please go back a little and repeat the route" may be displayed on the display or emitted from the speaker.
  • Such a notification is expected to cause the target 90 to try moving again.
  • The information processing system 50 can then repeat the measurement at point P1, the generation of state information, the generation of degradation information, and so on. According to this example, these repetitions can be performed without capturing an image of the target 90 with the camera 20, so a good image of the target can be obtained in a shorter time than if the target 90 were first imaged by the camera 20 and then prompted to try again.
  • In another example, the information processing system 50 further includes an alternative camera 22.
  • The alternative camera 22 is a camera provided separately from the camera 20, for capturing an image of the target 90 in place of the camera 20 to generate the target image.
  • The hardware configuration of the computer that realizes the information processing device 10 according to this embodiment is illustrated in FIG. 8, for example, as with the information processing device 10 according to the first embodiment.
  • The alternative camera 22 is connected to the information processing device 10 via the input/output interface 1100 or the network interface 1120.
  • In this example, if the degradation information satisfies the predetermined condition D, the control unit 150 outputs control information to cause one or more notification units 70 to issue a notification urging the target 90, for example, to stop in the imaging area of the alternative camera 22. In this way, a message such as "Please stop in front of the camera on the right side" is displayed on the display or emitted from the speaker.
  • This notification is expected to cause the target 90 to stop at a position where the alternative camera 22 can capture a good image of the target 90.
  • The information processing system 50 then captures an image of the target 90 using the alternative camera 22 and obtains a good image of the target.
  • According to this example, imaging with the alternative camera 22 can be performed without capturing an image of the target 90 with the camera 20, so a good image of the target can be obtained in a shorter time than if the camera 20 first imaged the target 90 and then a re-capture were prompted.
  • As described above, in this embodiment the control unit 150 stops the camera 20 from capturing an image of the target 90 when the degradation information satisfies the predetermined condition D. Therefore, unnecessary processing can be reduced when the possibility of obtaining a good target image is low.
  • Fifth Embodiment: The information processing system 50, information processing device 10, and information processing method according to the fifth embodiment are the same as those according to at least any one of the first to fourth embodiments, except for the points described below.
  • In this embodiment, the state information further indicates the imaging conditions of the camera 20.
  • The imaging conditions include the imaging parameters of the camera 20 and the environmental conditions.
  • For example, the state information indicates one or more of the exposure time of the camera 20, the lighting conditions for the target 90, the focal position of the camera 20, the lens aperture of the camera 20, and the brightness of the imaging area captured by the camera 20.
  • In the first embodiment, the degradation estimation model 131 was a model trained on the premise of specific imaging conditions.
  • However, the actual imaging conditions do not necessarily match the imaging conditions assumed by the degradation estimation model 131.
  • For example, the imaging parameters of the camera 20 may not be fixed, and may be automatically adjusted according to the brightness of the imaging area or the like.
  • In this embodiment, therefore, the state information itself carries such imaging conditions in addition to the state of the target 90.
  • Examples of the lighting conditions for the target 90 include at least one of the direction and the intensity of the lighting by the lighting unit 40.
  • Examples of the focal position of the camera 20 include at least one of the focal length and the coordinates of the focal position.
  • The information processing device 10 can obtain information indicating the exposure time, focal position, and lens aperture of the camera 20 from the camera 20.
  • The information processing device 10 can obtain information indicating the lighting state for the target 90 from the lighting unit 40.
  • The information processing device 10 can obtain information indicating the brightness of the area captured by the camera 20 from, for example, an illuminance sensor provided in the imaging area.
  • In this embodiment, the information processing device 10 acquires one or more pieces of information indicating the imaging conditions of the camera 20 as described above, and generates state information that includes both the information indicating the state of the target 90 and the information indicating the imaging conditions of the camera 20.
  • In this case as well, the state information can be a vector whose elements are each of the multiple types of information (face direction, body posture, exposure time of the camera 20, focal position of the camera 20, etc.), as in the sketch below. The acquisition unit 110 then acquires the generated state information.
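  • Extending the earlier state vector with imaging conditions is a matter of concatenation. The values, units, and ordering below are illustrative assumptions.

```python
import numpy as np

# State of the target 90, as in the earlier sketch.
target_state = [5.0, -3.0, 2.0, -1.0, 0.0, 1.25, 1.0]

# Imaging conditions of the camera 20 (fifth embodiment).
imaging_conditions = [
    4.0,    # exposure time (ms)
    0.8,    # lighting intensity (normalized)
    2.0,    # focal position (m)
    5.6,    # lens aperture (f-number)
    350.0,  # brightness of the imaging area (lux)
]

state_vector = np.array(target_state + imaging_conditions,
                        dtype=np.float32)
```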
  • the estimation unit 130 can generate degradation information using the degradation estimation model 131.
  • the degradation information includes information indicating the imaging conditions of the camera 20, so it can be said that the input of the degradation estimation model 131 also includes conditions indicating the imaging conditions of the camera 20.
  • the degradation estimation model 131 according to this embodiment can be trained in the same way as the degradation estimation model 131 according to the first embodiment.
  • the state information used in the machine learning of the degradation estimation model 131 according to this embodiment includes information indicating the imaging conditions of the camera 20, namely the imaging conditions under which the image that is the source of the correct degradation information was captured.
  • as a result, the degradation estimation model 131 is a model that can be used regardless of the imaging conditions.
  • the estimation unit 130 generates degradation information in the same manner as described in the first embodiment, using state information including information indicating the imaging conditions.
  • the control unit 150 outputs control information in the same manner as the control unit 150 according to at least any one of the first to fourth embodiments.
  • the status information further indicates the imaging conditions of the camera 20. Therefore, the cause of deterioration and the degree of deterioration can be estimated with high accuracy.
  • Sixth Embodiment: FIG. 13 is a diagram for explaining a method in which the estimation unit 130 according to the sixth embodiment generates degradation information.
  • the information processing system 50, the information processing device 10, and the information processing method according to this embodiment are the same as the information processing system 50, the information processing device 10, and the information processing method according to at least any one of the first embodiment to the fifth embodiment, except for the points described below.
  • the estimation unit 130 uses the state information to generate image-capture state information indicating the result of estimating the state of the target 90 at the time when the target 90 reaches the focus of the camera 20.
  • the estimation unit 130 then generates degradation information using the image-capture state information.
  • the estimation unit 130 according to this embodiment performs degradation estimation in two stages: a first stage in which the state at the time of image capture is estimated from the state at point P1, and a second stage in which degradation of the target image is estimated from the state at the time of image capture. This allows the information processing system 50 to adapt to changes in the state between point P1 and point P2, caused by changes in the usage environment of the information processing system 50, without changing the processing of the second stage (e.g., the estimation model).
  • the configuration of the image capture state information is the same as the configuration of the state information. That is, the image capture state information indicates at least the state of the object.
  • the image capture state information may further indicate the image capture conditions of the camera 20.
  • the image capture state information may be a vector whose elements are each of multiple types of information.
  • the estimation unit 130 generates degradation information from state information using, for example, an image-capture state estimation model 132 and a degradation estimation model 133.
  • the image-capture state estimation model 132 and the degradation estimation model 133 are each trained models including a neural network.
  • the image-capture state estimation model 132 is a model for estimating the state at the time of image capture based on the state at point P1.
  • the degradation estimation model 133 is a model for estimating the degradation of the target image based on the state at the time of image capture.
  • the input data of the image-capture state estimation model 132 includes state information, and its output data includes state information at the time of image capture.
  • the image-capture state estimation model 132 can be prepared in advance by performing machine learning using state information and correct state information as learning data.
  • the correct state information is information obtained by measuring the state (at point P2) at the time of image capture. In other words, the state at point P2 can be measured and correct state information can be generated in the same manner as the information processing system 50 generates state information at point P1.
  • the state information used for the machine learning of the image-capturing state estimation model 132 may be the same as the state information used for the machine learning of the degradation estimation model 131. It is preferable that the configuration of the correct state information is the same as the state information.
  • the input data of the degradation estimation model 133 includes state information at the time of imaging, and the output data of the degradation estimation model 133 includes degradation information.
  • the degradation estimation model 133 can be prepared in advance by performing machine learning using the above-mentioned correct state information and the above-mentioned correct degradation information as learning data.
  • the estimation unit 130 inputs the state information acquired by the acquisition unit 110 to the image-capture state estimation model 132, and obtains the image-capture state information output from the image-capture state estimation model 132.
  • the estimation unit 130 further inputs the image capture state information to the degradation estimation model 133, and obtains degradation information output from the degradation estimation model 133.
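The two-stage flow just described can be sketched as follows. The random linear stubs stand in for the trained models 132 and 133, and the vector dimensions are assumptions; only the two-stage control flow reflects the embodiment.

```python
import numpy as np

class LinearStub:
    """Stand-in for a trained neural network (a single random linear layer)."""
    def __init__(self, in_dim, out_dim, seed):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(in_dim, out_dim)).astype(np.float32)

    def __call__(self, x):
        return x @ self.w

capture_state_model = LinearStub(9, 9, seed=0)  # plays the role of model 132
degradation_model = LinearStub(9, 5, seed=1)    # plays the role of model 133

def estimate_degradation(status_vector):
    # First stage: estimate the state at the time of image capture (point P2)
    # from the state measured at point P1.
    capture_state = capture_state_model(status_vector)
    # Second stage: estimate degradation of the target image from that state.
    return degradation_model(capture_state)

degradation = estimate_degradation(np.zeros(9, dtype=np.float32))
```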
  • the control unit 150 outputs control information, similar to the control unit 150 according to at least any one of the first to fourth embodiments.
  • the status information may or may not include information indicating the imaging conditions of the camera 20. If the status information includes information indicating the imaging conditions of the camera 20, the image-capture state information and the correct state information also include information indicating the imaging conditions of the camera 20.
  • an image-capture state estimation model 132 and a degradation estimation model 133 may be prepared for each of multiple imaging conditions.
  • in this case, the image-capture state estimation model 132 and the degradation estimation model 133 for each of the multiple imaging conditions are stored in advance in a storage device accessible by the estimation unit 130. Then, as in the first embodiment, the estimation unit 130 uses the image-capture state estimation model 132 and the degradation estimation model 133 that correspond to the imaging conditions at the time the status information was obtained.
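A minimal sketch of keeping one model pair per imaging condition follows. The dictionary plays the role of the storage device accessible by the estimation unit 130; the condition keys and the random linear stubs are assumptions standing in for the trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_stub(in_dim, out_dim):
    """Random linear map standing in for a trained model."""
    w = rng.normal(size=(in_dim, out_dim)).astype(np.float32)
    return lambda x: x @ w

MODEL_STORE = {
    "bright_scene": (make_stub(9, 9), make_stub(9, 5)),
    "dark_scene":   (make_stub(9, 9), make_stub(9, 5)),
}

def estimate_for_condition(status_vector, condition_key):
    state_model, degradation_model = MODEL_STORE[condition_key]
    capture_state = state_model(status_vector)   # first stage (model 132)
    return degradation_model(capture_state)      # second stage (model 133)

print(estimate_for_condition(np.ones(9, dtype=np.float32), "dark_scene"))
```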
  • the estimation unit 130 may generate image capture state information based on a predetermined rule.
  • the estimation unit 130 can generate image capture state information from state information based on the following rules.
  • time T1: the time when the state measurement unit 30 measures the state of the object 90
  • time T2: the time when the object 90 reaches the focus of the camera 20
  • the estimation unit 130 estimates the body posture and moving speed at time T2, for example, based on the body posture and moving speed of the subject 90 at time T1.
  • for this estimation, the walking model described in "Comprehensive Analysis Model and Simulation of Bipedal Walking" by Yamazaki Nobuhisa (Biomechanism, Vol. 3, 1975, pp. 261-269) can be used, for example.
  • the estimation unit 130 may estimate a speed at time T2 that is lower than the speed at time T1. The amount of speed reduction can be determined in advance based on the results of prior research or experiments (for example, as an average over multiple subjects 90).
  • the estimation unit 130 takes the facial direction and gaze direction of the target 90 at time T1 as the estimated results of the facial direction and gaze direction of the target 90 at time T2. However, if guidance regarding the face and gaze direction is always performed, the estimation unit 130 may take the direction toward the guidance destination as the estimated results of the facial direction and gaze direction of the target 90 at time T2.
  • the estimation unit 130 uses the presence or absence of glasses at time T1 as the estimation result of the presence or absence of glasses at time T2. However, if the subject 90 is constantly guided to remove his or her glasses, the estimation unit 130 may estimate that the subject 90 is not wearing glasses at time T2.
  • the estimation unit 130 can generate image capture state information from state information based on these rules. Then, by inputting the generated image capture state information to the deterioration estimation model 133, deterioration information can be obtained.
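The rules above can be sketched as follows. The State fields, the guidance flags, and the constant SPEED_REDUCTION_MPS are assumptions; in the embodiment, the amount of speed reduction would come from prior research or experiments.

```python
from dataclasses import dataclass, replace
from typing import Optional

SPEED_REDUCTION_MPS = 0.2  # assumed speed reduction between T1 and T2

@dataclass
class State:
    face_yaw_deg: float
    gaze_yaw_deg: float
    moving_speed_mps: float
    wearing_glasses: bool

def estimate_capture_state(state_t1: State,
                           guided_direction_deg: Optional[float] = None,
                           guided_to_remove_glasses: bool = False) -> State:
    s = replace(state_t1)  # copy; the state at T1 is the default estimate for T2
    # The moving speed at time T2 may be estimated as lower than at time T1.
    s.moving_speed_mps = max(0.0, s.moving_speed_mps - SPEED_REDUCTION_MPS)
    # Face and gaze direction carry over, unless guidance is always performed.
    if guided_direction_deg is not None:
        s.face_yaw_deg = guided_direction_deg
        s.gaze_yaw_deg = guided_direction_deg
    # Glasses carry over, unless the subject is guided to remove them.
    if guided_to_remove_glasses:
        s.wearing_glasses = False
    return s

print(estimate_capture_state(State(10.0, 8.0, 1.4, True), guided_direction_deg=0.0))
```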
  • the estimation unit 130 uses the state information to generate image capture state information indicating the result of estimating the state of the object 90 at the time when the object 90 reaches the focus of the camera 20. Then, the estimation unit 130 uses the image capture state information to generate degradation information. Therefore, it is possible to respond to changes in state between point P1 and point P2 with minor changes.
  • Seventh Embodiment: FIG. 14 is a block diagram illustrating a functional configuration of an information processing system 50 according to the seventh embodiment.
  • FIG. 15 is a diagram illustrating a usage environment of the information processing system 50 according to the present embodiment.
  • the information processing system 50, the information processing device 10, and the information processing method according to the present embodiment are the same as the information processing system 50, the information processing device 10, and the information processing method according to at least any one of the first embodiment to the sixth embodiment, except for the points described below.
  • the information processing device 10 further includes a first authentication unit 170. That is, the information processing system 50 according to this embodiment further includes a first authentication unit 170.
  • the first authentication unit 170 performs authentication using a target image of the target 90 generated by the camera 20.
  • the authentication process performed by the first authentication unit 170 is, for example, iris authentication or face authentication.
  • the authentication process performed by the first authentication unit 170 is not limited to these examples and can be any authentication process.
  • the first authentication unit 170 can perform authentication processing using an existing method. For example, the first authentication unit 170 obtains feature information by detecting a predetermined area from the target image and extracting features of the detected area. When the first authentication unit 170 performs iris authentication processing, the predetermined area is an area corresponding to the iris. When the first authentication unit 170 performs face authentication processing, the predetermined area is an area corresponding to the face.
  • the authentication information storage unit 100 holds multiple pieces of authentication information in advance, in which identification information and feature information are associated with each other.
  • the identification information is information for individually identifying multiple targets 90. If the target 90 is a person, the identification information is, for example, personal identification information.
  • the authentication information storage unit 100 may be included in the information processing device 10, or may be provided outside the information processing device 10. However, the first authentication unit 170 can access the authentication information storage unit 100. The first authentication unit 170 compares the feature information obtained from the target image with each piece of feature information stored in the authentication information storage unit 100. The first authentication unit 170 then identifies the feature information that has the highest degree of match with the feature information obtained from the target image from among the multiple pieces of feature information stored in the authentication information storage unit 100. The first authentication unit 170 then identifies the identification information associated with the identified feature information as the identification information of the target 90 captured by the camera 20.
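A minimal sketch of this 1:N comparison follows. Cosine similarity as the "degree of match" and the 0.8 threshold are assumptions; the embodiment only requires selecting the stored feature information with the highest degree of match and applying a predetermined standard.

```python
import numpy as np

def identify(query_feature, authentication_info, threshold=0.8):
    """authentication_info: iterable of (identification_info, feature_vector)."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    best_id, best_score = None, -1.0
    for ident, feature in authentication_info:
        score = cosine(query_feature, feature)
        if score > best_score:
            best_id, best_score = ident, score
    # Authentication fails when even the best match is below the standard.
    if best_score < threshold:
        return None, best_score
    return best_id, best_score

db = [("person-001", np.array([1.0, 0.0])), ("person-002", np.array([0.6, 0.8]))]
print(identify(np.array([0.9, 0.1]), db))
```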
  • the first authentication unit 170 outputs the identification information of the identified target 90.
  • the first authentication unit 170 may, for example, display the identification information of the target 90 on a display, or may transmit it to another device.
  • the first authentication unit 170 may output information to the effect that authentication has not been successful.
  • the first authentication unit 170 may also control the passage of the target 90 according to the identification information of the target 90.
  • the information processing system 50 further includes a gate 80.
  • the authentication information stored in the authentication information storage unit 100 includes information associated with the identification information indicating whether or not the target 90 is allowed to pass through.
  • when the first authentication unit 170 identifies the identification information of the target 90, it reads out the information, associated with that identification information, indicating whether or not the target 90 is allowed to pass through. If information indicating that the target 90 is allowed to pass through is associated with the identification information, the first authentication unit 170 puts the gate 80 in a state where the target 90 can pass through. On the other hand, if no such information is associated with the identification information, the first authentication unit 170 does not put the gate 80 in a state where the target 90 can pass through.
  • the first authentication unit 170 also does not put the gate 80 in a state where the target 90 can pass through if there is no feature information among the multiple pieces of feature information stored in the authentication information storage unit 100 that matches the feature information obtained from the target image to a degree that exceeds a predetermined standard. In this way, the information processing system 50 can control the passage of the target 90.
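The passage-control logic above can be sketched as follows. The Gate API and the "passage_allowed" record field are hypothetical names introduced for illustration.

```python
class Gate:
    """Stand-in for the gate 80."""
    def open(self):
        print("gate 80: open")
    def keep_closed(self):
        print("gate 80: closed")

def control_gate(identified_id, authentication_records, gate):
    # identified_id is None when no stored feature matched above the standard.
    if identified_id is None:
        gate.keep_closed()
        return
    record = authentication_records.get(identified_id, {})
    if record.get("passage_allowed", False):
        gate.open()
    else:
        gate.keep_closed()

control_gate("person-001", {"person-001": {"passage_allowed": True}}, Gate())
```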
  • the hardware configuration of the computer that realizes the information processing device 10 according to this embodiment is shown in FIG. 8, for example, similar to the information processing device 10 according to the first embodiment.
  • the storage device 1080 of the computer 1000 that realizes the information processing device 10 according to this embodiment further stores a program module that realizes the functions of the first authentication unit 170 of this embodiment.
  • if the authentication information storage unit 100 is provided inside the information processing device 10, the authentication information storage unit 100 is realized using, for example, the storage device 1080.
  • the gate 80 is connected to the information processing device 10 via an input/output interface 1100 or a network interface 1120.
  • the information processing device 10 according to this embodiment further includes a first authentication unit 170. Therefore, authentication can be performed using a target image.
  • Eighth Embodiment: FIG. 16 is a block diagram illustrating a functional configuration of an information processing system 50 according to the eighth embodiment.
  • the information processing system 50, the information processing device 10, and the information processing method according to this embodiment are the same as the information processing system 50, the information processing device 10, and the information processing method according to at least any one of the first embodiment to the seventh embodiment, except for the points described below.
  • the information processing system 50 includes a first authentication unit 170.
  • the first authentication unit 170 performs authentication using a target image.
  • the information processing system 50 according to this embodiment further includes a second authentication unit 190.
  • the second authentication unit 190 performs authentication using information different from the target image. If the quality based on the degradation information satisfies a predetermined condition A, the control unit 150 increases the importance of authentication by the second authentication unit 190. This can increase the possibility of authentication by the second authentication unit 190 even if a good target image cannot be obtained by the camera 20.
  • the first authentication unit 170 performs, for example, iris authentication processing using an image of the target obtained by the camera 20.
  • the second authentication unit 190 performs, for example, face authentication using an image obtained by a second authentication camera 60 different from the camera 20.
  • the camera 20 and the second authentication camera 60 are controlled independently to obtain an image of the target 90.
  • the authentication processing performed by the first authentication unit 170 and the second authentication unit 190 is as described for the first authentication unit 170 in the seventh embodiment.
  • the information processing system 50 can obtain both the authentication result by the first authentication unit 170 and the authentication result by the second authentication unit 190.
  • the information processing system 50 may output both of these authentication results, or may output only the authentication result with the higher reliability.
  • the authentication result with the higher authentication score may be output as the authentication result with the higher reliability.
  • Examples of authentication scores include the degree of match with the feature information in the authentication information described above, and a score based on the Hamming distance described above.
  • the information processing system 50 may also determine and output a final authentication result based on the result obtained by integrating these two authentication results and the authentication scores, or based on the authentication score. For example, the authentication result may be determined and output based on the sum, product, or average of the two authentication scores.
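A sketch of combining the two authentication results follows. The assumption that both scores are normalized to a comparable range, and the fallback when the two units identify different targets, are choices not specified by the embodiment.

```python
def fuse_results(iris_id, iris_score, face_id, face_score, mode="average"):
    if mode == "higher":
        # Output only the result with the higher reliability (score).
        return (iris_id, iris_score) if iris_score >= face_score else (face_id, face_score)
    if iris_id == face_id:
        # Integrate the two scores by sum, product, or average.
        if mode == "sum":
            return iris_id, iris_score + face_score
        if mode == "product":
            return iris_id, iris_score * face_score
        return iris_id, (iris_score + face_score) / 2.0
    # When the two units identify different targets, fall back to the
    # higher-scoring result (a policy choice not specified by the embodiment).
    return (iris_id, iris_score) if iris_score >= face_score else (face_id, face_score)

print(fuse_results("person-001", 0.92, "person-001", 0.88, mode="average"))
```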
  • control unit 150 in this embodiment uses the degradation information to estimate the quality score of the target image, which indicates its suitability as an image to be used for authentication.
  • if the quality based on the degradation information satisfies the predetermined condition A, the control unit 150 increases the importance of authentication by the second authentication unit 190. That is, the control unit 150 controls the second authentication camera 60 to improve the quality of the image of the target 90 obtained by the second authentication camera 60. For example, the control unit 150 outputs control information for increasing the resolution of the second authentication camera 60 when imaging the target 90. Alternatively, the control unit 150 outputs control information for causing the second authentication camera 60 to image the target 90 at a timing when the second authentication camera 60 obtains an image in which a predetermined part of the target 90 is captured larger.
  • the predetermined part of the target 90 is a part that the second authentication unit 190 uses for authentication, such as the face.
  • if the quality does not satisfy the predetermined condition A, the control unit 150 does not increase the importance of authentication by the second authentication unit 190.
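The condition-A branch can be sketched as follows. The quality threshold and the second-camera API are assumptions; raising the resolution and retiming the capture are the concrete controls named above.

```python
QUALITY_THRESHOLD = 0.5  # condition A: estimated quality at or below this value

class SecondCameraStub:
    """Stand-in for the second authentication camera 60."""
    def set_resolution(self, mode):
        print(f"second authentication camera 60: resolution -> {mode}")
    def capture_when_part_is_largest(self, part):
        print(f"second authentication camera 60: retimed for largest {part}")

def control_second_authentication(quality_score, second_camera):
    if quality_score <= QUALITY_THRESHOLD:
        # Increase the importance of authentication by the second unit 190.
        second_camera.set_resolution("high")
        second_camera.capture_when_part_is_largest("face")
        return True
    return False  # a good target image is expected; leave the camera as-is

control_second_authentication(0.3, SecondCameraStub())
```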
  • the hardware configuration of the computer that realizes the information processing device 10 according to this embodiment is shown in FIG. 8, for example, similar to the information processing device 10 according to the first embodiment.
  • the storage device 1080 of the computer 1000 that realizes the information processing device 10 according to this embodiment further stores program modules that realize the functions of the first authentication unit 170 and the second authentication unit 190 of this embodiment.
  • if the authentication information storage unit 100 is provided inside the information processing device 10, the authentication information storage unit 100 is realized using, for example, the storage device 1080.
  • the second authentication camera 60 is connected to the information processing device 10 via an input/output interface 1100 or a network interface 1120.
  • control unit 150 increases the importance of authentication by the second authentication unit 190 when the quality based on the degradation information satisfies a predetermined condition A. Therefore, when it is estimated that a good target image cannot be obtained by the camera 20, it is possible to increase the possibility of authentication by the second authentication unit 190.
  • This modification is a modification of the eighth embodiment.
  • the information processing system 50, the information processing device 10, and the information processing method according to this modification are the same as the information processing system 50, the information processing device 10, and the information processing method according to the eighth embodiment, respectively, except for the points described below.
  • the information processing system 50 includes a first authentication unit 170 and a second authentication unit 190, similar to the information processing system 50 according to the eighth embodiment.
  • the control unit 150 increases the importance of authentication by the second authentication unit 190 when the degradation information satisfies a predetermined condition B.
  • Condition B is a condition for the degradation information itself.
  • Condition B is, for example, that of one or more degrees of degradation indicated in the degradation information, a predetermined number or more degrees of degradation are equal to or above a predetermined standard set for each degree of degradation.
  • in this case, the control unit 150 increases the importance of authentication by the second authentication unit 190.
  • An example of the method in which the control unit 150 increases the importance of authentication by the second authentication unit 190 is as described in the eighth embodiment.
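A minimal sketch of checking condition B follows. The factor names, per-factor standards, and required count are illustrative; the embodiment only specifies that a predetermined number or more of the degradation degrees must be at or above the standard set for each degree of degradation.

```python
PER_FACTOR_STANDARD = {
    "focus_blur": 0.6,
    "motion_blur": 0.6,
    "eyelid_occlusion": 0.5,
    "lighting_reflection": 0.5,
    "off_angle": 0.4,
}

def condition_b_satisfied(degradation_info, required_count=2):
    """degradation_info: mapping from degradation factor to degree in [0, 1]."""
    exceeded = sum(
        1 for factor, degree in degradation_info.items()
        if degree >= PER_FACTOR_STANDARD.get(factor, 1.0)
    )
    return exceeded >= required_count

# Example: two factors at or above their standards, so condition B holds.
print(condition_b_satisfied({"focus_blur": 0.7, "motion_blur": 0.65,
                             "eyelid_occlusion": 0.1}))
```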
Some or all of the above-described embodiments can also be described as the following supplementary notes, but are not limited to them.

1-1. An information processing system comprising: a camera capable of capturing an image of a target; an acquisition means for acquiring status information indicating at least a status of the target; an estimation means for generating deterioration information relating to deterioration that is estimated to occur in an object image obtained by capturing an image of the object with the camera, using the state information; and a control means for outputting control information corresponding to the degradation information at least one of before the object is imaged by the camera and when the object is imaged by the camera.
1-2. The information processing system wherein the state information indicates at least a state of the object at a time when the object is located at a point farther away than the focal point of the camera, and the control means outputs the control information at least one of before the object reaches the focus of the camera and when the object reaches the focus of the camera.
1-3. The information processing system wherein the estimation means generates image capture state information indicating an estimation result of a state of the object at a time when the object reaches the focal point of the camera using the state information, and generates the deterioration information using the image capture state information.
1-4. The information processing system wherein the degradation information indicates one or more degradation factors occurring in the target image and a degree of degradation for each of the one or more degradation factors.
1-5. The information processing system wherein the target image is an image including an iris, and the one or more degradation factors include one or more of focus blur, motion blur, eyelid occlusion, lighting reflection occlusion, and off-angle.
1-6. The information processing system wherein the control means estimates a quality of the target image, which indicates suitability as an image to be used for authentication, using the degradation information, and outputs the control information according to the quality.
1-9. The information processing system wherein the state information indicates one or more of a face direction, a body posture, a gaze direction, a moving speed, and whether or not glasses are worn.
1-10. The information processing system wherein the status information further indicates an imaging condition of the camera.
1-12. The information processing system wherein the estimation means generates the degradation information using a trained neural network trained using the state information and correct degradation information.
1-13. The information processing system wherein the control means outputs the control information for executing a notification when the deterioration information satisfies a predetermined condition C.
1-14. The information processing system wherein the control means stops the camera from capturing an image of the target when the deterioration information satisfies a predetermined condition D.
1-15. The information processing system wherein the control means outputs the control information for controlling the imaging conditions of the camera based on the deterioration information.
2-2. The information processing device wherein the state information indicates at least a state of the object at a time when the object is located at a point farther away than the focal point of the camera, and the control means outputs the control information at least one of before the object reaches the focus of the camera and when the object reaches the focus of the camera.
2-3. The information processing device wherein the estimation means generates image capture state information indicating an estimation result of a state of the object at a time when the object reaches the focal point of the camera using the state information, and generates the degradation information using the image capture state information.
2-4. The information processing device wherein the degradation information indicates one or more degradation factors occurring in the target image and a degree of degradation of each of the one or more degradation factors.
2-5. The information processing device wherein the target image is an image including an iris, and the one or more degradation factors include one or more of focus blur, motion blur, eyelid occlusion, lighting reflection occlusion, and off-angle.
2-6. The information processing device wherein the control means estimates a quality of the target image, which indicates suitability as an image to be used for authentication, using the degradation information, and outputs the control information according to the quality.
2-9. The information processing device wherein the state information indicates one or more of a face direction, a body posture, a gaze direction, a moving speed, and whether or not glasses are worn.
2-10. The information processing device wherein the status information further indicates an imaging condition of the camera.
2-11. The information processing device wherein the status information indicates one or more of the exposure time of the camera, the lighting condition for the target, the focal position of the camera, the lens aperture of the camera, and the brightness of the area captured by the camera.
2-12. The information processing device wherein the estimation means generates the degradation information using a trained neural network trained using the state information and correct degradation information.
2-13. The information processing device wherein the control means outputs the control information for executing a notification when the deterioration information satisfies a predetermined condition C.
2-14. The information processing device wherein the control means stops the camera from capturing an image of the target when the deterioration information satisfies a predetermined condition D.
2-15. The information processing device wherein the control means outputs the control information for controlling the imaging conditions of the camera based on the deterioration information.
3-1. An information processing method in which one or more computers: acquire status information indicating at least a status of a target; generate, using the status information, deterioration information regarding deterioration that is estimated to occur in a target image obtained by capturing an image of the target with a camera; and output control information corresponding to the deterioration information at least one of before the target is imaged by the camera and when the target is imaged by the camera.
3-2. The information processing method wherein the state information indicates at least a state of the object at a time when the object is located at a point farther away than the focal point of the camera, and the one or more computers output the control information at least one of before the object reaches the focus of the camera and when the object reaches the focus of the camera.
3-3. In the information processing method described in 3-2., the one or more computers generate image capture state information indicating an estimation result of a state of the object at a time when the object reaches the focal point of the camera using the state information, and generate the degradation information using the image capture state information.
3-4. The information processing method wherein the degradation information indicates one or more degradation factors occurring in the target image and a degree of degradation for each of the one or more degradation factors.
3-5. The information processing method wherein the target image is an image including an iris, and the one or more degradation factors include one or more of focus blur, motion blur, eyelid occlusion, lighting reflection occlusion, and off-angle.
3-6. The information processing method wherein the one or more computers estimate a quality of the target image, which indicates suitability as an image to be used for authentication, using the degradation information, and output the control information according to the quality.
3-7. The information processing method wherein the one or more computers further perform authentication using the target image and perform authentication using information different from the target image, and the one or more computers increase the importance of the authentication using information different from the target image when the quality satisfies a predetermined condition A.
3-8. The information processing method wherein the one or more computers further perform authentication using the target image and perform authentication using information different from the target image, and, when the degradation information satisfies a predetermined condition B, the one or more computers increase the importance of the authentication using information different from the target image.
3-9. The information processing method wherein the state information indicates one or more of a face direction, a body posture, a gaze direction, a moving speed, and whether or not glasses are worn.
3-10. The information processing method wherein the status information further indicates an imaging condition of the camera.
3-11. The information processing method wherein the status information indicates one or more of the exposure time of the camera, the lighting condition for the target, the focus position of the camera, the lens aperture of the camera, and the brightness of the area captured by the camera.
3-13. The information processing method wherein the one or more computers output the control information for executing a notification when the degradation information satisfies a predetermined condition C.
3-15. The information processing method wherein the one or more computers output the control information for controlling an imaging condition of the camera based on the degradation information.
4-1. A computer-readable recording medium having a program recorded thereon, the program causing a computer to function as: an acquisition means for acquiring status information indicating at least a status of a target; an estimation means for generating, using the status information, degradation information relating to degradation that is estimated to occur in a target image obtained by imaging the target with a camera; and a control means for outputting control information corresponding to the degradation information at least one of before the target is imaged with the camera and when the target is imaged with the camera.
4-2. The recording medium wherein the state information indicates at least a state of the object at a time when the object is located at a point farther away than the focal point of the camera, and the control means outputs the control information at least one of before the object reaches the focus of the camera and when the object reaches the focus of the camera.
4-3. The recording medium wherein the estimation means generates image capture state information indicating an estimation result of a state of the object at a time when the object reaches the focal point of the camera using the state information, and generates the deterioration information using the image capture state information.
4-4. The recording medium wherein the degradation information indicates one or more degradation factors occurring in the target image and the degree of degradation of each of the one or more degradation factors.
4-5. The recording medium wherein the target image is an image including an iris, and the one or more degradation factors include one or more of focus blur, motion blur, eyelid occlusion, lighting reflection occlusion, and off-angle.
4-6. The recording medium wherein the control means estimates a quality of the target image, which indicates suitability as an image to be used for authentication, using the degradation information, and outputs the control information according to the quality.
4-7. The recording medium wherein the program further causes the computer to function as a first authentication means for performing authentication using the target image and a second authentication means for performing authentication using information different from the target image, and the control means increases the importance of authentication by the second authentication means when the quality satisfies a predetermined condition A.
4-8. In the recording medium according to any one of 4-1. to 4-7., the program further causes the computer to function as a first authentication means for performing authentication using the target image and a second authentication means for performing authentication using information different from the target image, and the control means increases the importance of authentication by the second authentication means when the degradation information satisfies a predetermined condition B.
4-9. The recording medium wherein the state information indicates one or more of a face direction, a body posture, a gaze direction, a moving speed, and whether or not glasses are worn.
4-11. In the recording medium according to 4-10., the status information indicates one or more of the exposure time of the camera, the lighting conditions for the target, the focal position of the camera, the lens aperture of the camera, and the brightness of the area captured by the camera.
4-12. The recording medium wherein the estimation means generates the degradation information using a trained neural network trained using the state information and correct degradation information.
4-13. In the recording medium according to any one of 4-1. to 4-12., the control means outputs the control information for executing a notification when the deterioration information satisfies a predetermined condition C.
4-14. In the recording medium according to any one of 4-1. to 4-13., the control means stops the camera from capturing an image of the target when the deterioration information satisfies a predetermined condition D.
5-2. The program wherein the state information indicates at least a state of the object at a time when the object is located at a point farther away than the focal point of the camera, and the control means outputs the control information at least one of before the object reaches the focus of the camera and when the object reaches the focus of the camera.
5-3. The program wherein the estimation means generates image capture state information indicating an estimation result of a state of the object at a time when the object reaches the focal point of the camera using the state information, and generates the deterioration information using the image capture state information.
5-4. The program wherein the degradation information indicates one or more degradation factors occurring in the target image and the degree of degradation of each of the one or more degradation factors.
5-5. The program wherein the target image is an image including an iris, and the one or more degradation factors include one or more of focus blur, motion blur, eyelid occlusion, lighting reflection occlusion, and off-angle.
5-6. The program wherein the control means estimates a quality of the target image, which indicates suitability as an image to be used for authentication, using the degradation information, and outputs the control information according to the quality.
5-7. The program further causing the computer to function as a first authentication means for performing authentication using the target image and a second authentication means for performing authentication using information different from the target image, wherein the control means increases the importance of authentication by the second authentication means when the quality satisfies a predetermined condition A.
5-8. The program further causing the computer to function as a first authentication means for performing authentication using the target image and a second authentication means for performing authentication using information different from the target image, wherein the control means increases the importance of authentication by the second authentication means when the degradation information satisfies a predetermined condition B.
5-9. The program wherein the state information indicates one or more of face direction, body posture, gaze direction, movement speed, and whether or not glasses are worn.
5-10. The program wherein the status information further indicates imaging conditions of the camera.
5-11. The program wherein the status information indicates one or more of the exposure time of the camera, the lighting conditions for the target, the focus position of the camera, the lens aperture of the camera, and the brightness of the area captured by the camera.
5-12. The program wherein the estimation means generates the degradation information using a trained neural network that has been trained using the state information and correct degradation information.
5-13. The program wherein the control means outputs the control information for executing a notification when the deterioration information satisfies a predetermined condition C.
5-14. The program wherein the control means stops the camera from capturing an image of the target when the deterioration information satisfies a predetermined condition D.
5-15. The program wherein the control means outputs the control information for controlling the imaging conditions of the camera based on the deterioration information.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

An information processing system (50) comprises a camera (20), an acquisition unit (110), an estimation unit (130), and a control unit (150). The camera (20) is capable of capturing an image of a target. The acquisition unit (110) acquires state information indicating at least a state of the target. The estimation unit (130) generates deterioration information by using the state information. The deterioration information relates to deterioration that is estimated to occur in the target image obtained by capturing the image of the target with the camera (20). The control unit (150) outputs control information corresponding to the deterioration information at least one of before the image of the target is captured by the camera (20) and when the image of the target is captured by the camera (20).

Description

Information processing system, information processing device, information processing method, and recording medium

This disclosure relates to an information processing system, an information processing device, an information processing method, and a recording medium.

Various authentication systems that use images of a target have been developed. In such systems, it is necessary to obtain a good image of the target.

Patent Document 1 describes measuring the actual distance between the subject and the camera from the facial component distance, and acquiring an eye image from a person image of a subject who has been confirmed to be in the iris photography space. Patent Document 1 further describes measuring the quality of the acquired eye image, and acquiring an image for iris recognition that meets a standard quality level.

Patent Document 2 describes evaluating the quality of an iris image before it is processed for iris recognition. Patent Document 2 also describes that an evaluation of the iris image is provided according to blur, defocus, eye closure, obscuration, and the like.

Patent Document 3 describes that, in walk-through authentication, burst imaging is performed with the focus fixed on a subject passing through a fixed focal position, and that, in the case of re-authentication, burst imaging is performed while scanning the focus on a subject who is standing still. In this way, the technology of Patent Document 3 makes it possible to extract an iris image focused on the subject's iris.
Patent Document 1: JP 2017-503276 A
Patent Document 2: JP 2009-529200 A
Patent Document 3: International Publication No. 2021/199188
This disclosure aims to improve upon the techniques described in the prior art documents mentioned above.
According to one aspect of this disclosure, there is provided an information processing system comprising: a camera capable of capturing an image of a target; an acquisition means for acquiring status information indicating at least a status of the target; an estimation means for generating deterioration information relating to deterioration that is estimated to occur in a target image obtained by capturing an image of the target with the camera, using the status information; and a control means for outputting control information corresponding to the deterioration information at least one of before the target is imaged by the camera and when the target is imaged by the camera.
According to one aspect of this disclosure, there is provided an information processing device comprising: an acquisition means for acquiring status information indicating at least a status of a target; an estimation means for generating deterioration information relating to deterioration that is estimated to occur in a target image obtained by capturing an image of the target with a camera, using the status information; and a control means for outputting control information corresponding to the deterioration information at least one of before the target is imaged by the camera and when the target is imaged by the camera.
According to one aspect of this disclosure, there is provided an information processing method in which one or more computers: acquire status information indicating at least a status of a target; generate, using the status information, deterioration information regarding deterioration that is estimated to occur in a target image obtained by capturing an image of the target with a camera; and output control information according to the deterioration information at least one of before the target is imaged by the camera and when the target is imaged by the camera.
According to one aspect of this disclosure, there is provided a computer-readable recording medium having a program recorded thereon, the program causing a computer to function as: an acquisition means for acquiring status information indicating at least a status of a target; an estimation means for generating, using the status information, degradation information relating to degradation that is estimated to occur in a target image obtained by imaging the target with a camera; and a control means for outputting control information corresponding to the degradation information at least one of before the target is imaged with the camera and when the target is imaged with the camera.
FIG. 1 is a diagram showing an overview of an information processing system according to the first embodiment.
FIG. 2 is a diagram showing an overview of an information processing device according to the first embodiment.
FIG. 3 is a diagram showing an overview of an information processing method according to the first embodiment.
FIG. 4 is a diagram illustrating a usage environment of the information processing system according to the first embodiment.
FIG. 5 is a block diagram illustrating a functional configuration of the information processing system according to the first embodiment.
FIG. 6 is a flowchart illustrating the flow of processing performed by the information processing system according to the first embodiment.
FIG. 7 is a diagram illustrating a degradation estimation model used by the estimation unit to generate degradation information.
FIG. 8 is a diagram illustrating a computer for implementing the information processing device.
FIG. 9 is a diagram illustrating a functional configuration of an information processing system according to the second embodiment.
FIG. 10 is a flowchart illustrating the flow of processing executed by an information processing system according to the third embodiment.
FIG. 11 is a diagram illustrating a quality estimation model used by the control unit to generate a quality score.
FIG. 12 is a block diagram illustrating a functional configuration of an information processing system according to the fourth embodiment.
FIG. 13 is a diagram for explaining a method in which the estimation unit according to the sixth embodiment generates degradation information.
FIG. 14 is a block diagram illustrating a functional configuration of an information processing system according to the seventh embodiment.
FIG. 15 is a diagram illustrating a usage environment of an information processing system according to the seventh embodiment.
FIG. 16 is a block diagram illustrating a functional configuration of an information processing system according to the eighth embodiment.
Hereinafter, embodiments of this disclosure will be described with reference to the drawings. In all drawings, similar components are given similar reference numerals, and descriptions thereof will be omitted as appropriate.
(First Embodiment)
FIG. 1 is a diagram showing an overview of an information processing system 50 according to the first embodiment. The information processing system 50 includes a camera 20, an acquisition unit 110, an estimation unit 130, and a control unit 150. The camera 20 is capable of capturing an image of a target. The acquisition unit 110 acquires state information indicating at least a state of the target. The estimation unit 130 generates degradation information using the state information. The degradation information is information on degradation estimated to occur in a target image obtained by capturing an image of the target with the camera 20. The control unit 150 outputs control information according to the degradation information at least one of before the target is captured by the camera 20 and when the target is captured by the camera 20.
This information processing system 50 can estimate the degradation of the target image before capturing the image of the target with the camera 20, and take measures to obtain a good image.
FIG. 2 is a diagram showing an overview of the information processing device 10 according to this embodiment. The information processing device 10 includes an acquisition unit 110, an estimation unit 130, and a control unit 150. The acquisition unit 110 acquires status information indicating at least the status of the target. The estimation unit 130 generates degradation information using the status information. The degradation information is information about degradation that is estimated to occur in a target image obtained by capturing an image of the target with the camera 20. The control unit 150 outputs control information corresponding to the degradation information at least either before the target is captured by the camera 20 or when the target is captured by the camera 20.

This information processing device 10 can estimate the degradation of the target image before capturing the image of the target with the camera 20, and take measures to obtain a good image.

The information processing system 50 according to this embodiment can be configured to include the information processing device 10 according to this embodiment.

FIG. 3 is a diagram showing an overview of the information processing method according to this embodiment. The information processing method according to this embodiment is executed by one or more computers. The information processing method according to this embodiment includes steps S10, S20, and S30. In step S10, status information indicating at least the status of the target is acquired. In step S20, degradation information is generated using the status information. The degradation information is information about degradation estimated to occur in the target image obtained by capturing an image of the target with the camera 20. In step S30, control information corresponding to the degradation information is output at least either before the target is captured by the camera 20 or when the target is captured by the camera 20.
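A minimal sketch of steps S10 to S30 follows, with trivial stand-ins for the measurement source, the estimator, and the control sink; only the step order is taken from the method described above.

```python
def information_processing_method(measure_state, estimate_degradation, output_control):
    state_info = measure_state()                          # S10: acquire state information
    degradation_info = estimate_degradation(state_info)   # S20: generate degradation information
    output_control(degradation_info)                      # S30: output control information
    return degradation_info

# Usage with trivial stand-ins:
result = information_processing_method(
    measure_state=lambda: {"face_yaw_deg": 5.0, "moving_speed_mps": 1.2},
    estimate_degradation=lambda s: {"motion_blur": min(1.0, s["moving_speed_mps"] / 2.0)},
    output_control=lambda d: print("control:", d),
)
```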
This information processing method makes it possible to estimate the degradation of the target image before capturing an image of the target with the camera 20, and to take measures to obtain a good image.

The information processing method according to this embodiment can be executed by the information processing device 10 according to this embodiment.

FIG. 4 is a diagram illustrating an example of a usage environment of the information processing system 50 according to this embodiment. The information processing system 50 relates to, for example, a walk-through authentication system. The target image captured by the camera 20 is an image used to authenticate the target 90.

In the example of FIG. 4, the target 90 is moving, for example, so as to approach point P2, which is the point where the target image is captured. In the information processing system 50, the state of the target 90 at point P1, which is a position farther from the camera 20 than point P2, is measured using the state measurement unit 30. Based on the state of the target 90 at point P1, the estimation unit 130 of the information processing device 10 estimates the deterioration of the target image obtained by capturing an image of the target 90 with the camera 20 at point P2. Then, based on the estimation result, the control unit 150 of the information processing device 10 outputs control information for suppressing the deterioration of the target image.
With the techniques described in Patent Documents 1 to 3, it was not possible to estimate image degradation in advance, before capturing the image, and to take countermeasures.
 本実施形態に係る情報処理システム50、情報処理装置10、および情報処理方法によれば、制御部150は、劣化情報に応じた制御情報を、カメラ20で対象が撮像される前およびカメラ20で対象が撮像される時の少なくとも一方に、出力する。したがって、劣化を抑制するための適切な制御が可能であり、良好な画像を得られる。 According to the information processing system 50, information processing device 10, and information processing method according to this embodiment, the control unit 150 outputs control information corresponding to the degradation information at least either before the object is imaged by the camera 20 or when the object is imaged by the camera 20. Therefore, appropriate control for suppressing degradation is possible, and a good image can be obtained.
 特にウォークスルーシステムのように対象画像の撮像機会が原則一度しかなく、再撮像に対する労力や障害が大きいシステムにおいては、撮像の失敗を減らすことが重要である。したがって、本実施形態に係る情報処理システム50、情報処理装置10、および情報処理方法により、対象画像の品質向上のための対策を対象画像の撮像前や撮像時に実施することが特に好適である。 It is particularly important to reduce imaging failures in systems such as walk-through systems, where there is in principle only one opportunity to capture a target image and re-capturing requires a lot of effort and obstacles. Therefore, it is particularly preferable to use the information processing system 50, information processing device 10, and information processing method according to this embodiment to implement measures to improve the quality of the target image before or when capturing the target image.
Note that, hereinafter, the focal position of the camera 20 is referred to as "point P2" and a point farther from the camera 20 than point P2 is referred to as "point P1"; these points are not limited to the walk-through system example shown in FIG. 4.
FIG. 5 is a block diagram illustrating the functional configuration of the information processing system 50 according to this embodiment. FIG. 6 is a flowchart illustrating the flow of processing performed by the information processing system 50 according to this embodiment. Detailed examples of the information processing system 50, the information processing device 10, and the information processing method according to this embodiment will be described with reference to FIGS. 5 and 6.
The target 90 is, for example, a person. However, the target 90 may be a living thing other than a person, or an object other than a living thing. It is preferable that the target 90 is moving toward the camera 20 before the target image is captured.
As described above, the target image captured by the camera 20 is an image used to authenticate the target 90. The camera 20 is focused approximately on point P2.
The information processing system 50 can, for example, perform iris authentication of the target 90. The camera 20 is, for example, an iris imaging camera for imaging the iris of the target 90, and the target image is an image used for iris authentication. In that case, the camera 20 is arranged so that it can image the iris of the target 90 located at point P2, and the target image is an image including an eye. The target image may include both eyes or only one of the right eye and the left eye, and it may include not only the eye but also the area around the eye.
However, the authentication performed by the information processing system 50 is not limited to iris authentication. The information processing system 50 may be a system capable of performing face authentication of the target 90, the camera 20 may be a camera for capturing images used for face authentication, and the target image may be an image used for face authentication. In that case, the camera 20 is arranged so that it can image the face of the target 90 located at point P2, and the target image is an image including a face.
In the example of FIG. 5, the information processing system 50 further includes one or more state measurement units 30. The state information is generated based on the measurement results of the state measurement units 30 and indicates at least the state of the target 90 at a time when the target 90 is located at a point farther from the camera 20 than its focal point. The control unit 150 outputs the control information at least either before the target 90 reaches the focal point of the camera 20 or when the target 90 reaches it. In this way, imaging that takes the estimated degradation into account can be performed without capturing a preliminary image with the camera 20, and a good target image can be obtained.
Examples of the state measurement unit 30 include a camera, a LiDAR, and an infrared sensor. The camera here is not particularly limited and may be, for example, a depth camera, a wide-angle camera, a visible-light camera, or a near-infrared camera. When the state measurement unit 30 is a depth camera or a LiDAR, the distance from the state measurement unit 30 to the target 90 and three-dimensional information about the target 90 can be obtained. When the state measurement unit 30 is a wide-angle camera, it can image a wider range than the camera 20, for example the whole of the target 90.
In step S101, the state measurement unit 30 measures the state of the target 90 at a point farther from the camera 20 than point P2. When the state measurement unit 30 is some type of camera, measuring the state of the target 90 includes capturing an image of the target 90.
When the information processing system 50 includes a plurality of state measurement units 30, point P1 may or may not be the same for all of them; that is, point P1 can be set independently for each state measurement unit 30.
Point P1 is a point that the target 90 passes before reaching point P2. The distance between point P1 and point P2 is not particularly limited, but is, for example, 50 cm or more and 2 m or less.
In step S102, state information is generated using the measurement results of the state measurement unit 30. The state information includes information about the target 90 or something worn by the target 90. In this embodiment, the state information indicates one or more of the face direction, body posture, gaze direction, movement speed, and whether glasses are worn. However, the state information is not limited to these examples as long as it includes information about the target 90 or something worn by the target 90. Using such state information, the degradation factors can be estimated accurately.
The face direction of the target 90 can be identified by analyzing information obtained from at least one of a depth camera, a LiDAR, and a wide-angle camera using an existing method. The state information can include, for example, the angles of the face direction (horizontal angle and vertical angle) relative to a predetermined direction.
The body posture of the target 90 can likewise be identified by analyzing information obtained from at least one of a depth camera, a LiDAR, and a wide-angle camera using an existing method. The state information can be, for example, a body posture label indicating the posture, such as "walking", "standing upright", or "walking hunched over". The body posture label can be a predetermined numerical value.
The gaze direction of the target 90 can be identified by analyzing information obtained from at least one of a depth camera and a wide-angle camera using an existing method. The state information can include, for example, the angles of the gaze direction (horizontal angle and vertical angle) relative to a predetermined direction.
The movement speed of the target 90 can be identified by analyzing information obtained from at least one of a depth camera, a LiDAR, and an infrared sensor using an existing method. For example, when the state measurement units 30 are infrared sensors, a plurality of infrared sensors detect the passage of the target 90 at a plurality of points, and the speed of the target 90 can be calculated from the distance between those points and the difference in passage timing. The state information can include, for example, information indicating the direction and speed of movement of the target 90.
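With two infrared sensors a known distance apart, this speed estimate reduces to distance divided by the difference in passage times. A minimal sketch, in which the sensor record type and the example numbers are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class PassageEvent:
    position_m: float   # sensor position along the walkway, in meters
    timestamp_s: float  # time at which the target passed that sensor, in seconds

def movement_speed(first: PassageEvent, second: PassageEvent) -> float:
    """Speed from two passage detections: distance between sensors / time difference."""
    dt = second.timestamp_s - first.timestamp_s
    if dt <= 0:
        raise ValueError("second event must occur after the first")
    return abs(second.position_m - first.position_m) / dt

# e.g. sensors 1.0 m apart, passed 0.8 s apart -> 1.25 m/s
speed = movement_speed(PassageEvent(0.0, 10.0), PassageEvent(1.0, 10.8))
```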
Whether the target 90 is wearing glasses can be identified by analyzing information obtained from at least one of a depth camera, a LiDAR, and a wide-angle camera using an existing method. The state information can include, for example, information indicating whether the target 90 is wearing glasses.
The state information can be a vector whose elements are these multiple types of information.
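As an illustration, such a vector could be assembled as below. The ordering and the numeric encodings are assumptions made for this sketch; the embodiment only requires that each type of information becomes an element of a vector:

```python
import numpy as np

def build_state_vector(face_yaw, face_pitch, posture_label, gaze_yaw, gaze_pitch,
                       speed_mps, wearing_glasses):
    """Pack the measured quantities into one state vector (illustrative encoding)."""
    return np.array([
        face_yaw, face_pitch,            # face direction angles (degrees)
        float(posture_label),            # posture as a predetermined numeric label
        gaze_yaw, gaze_pitch,            # gaze direction angles (degrees)
        speed_mps,                       # movement speed (m/s)
        1.0 if wearing_glasses else 0.0, # glasses worn or not
    ], dtype=np.float32)
```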
Note that the method of generating the state information is not limited to the above examples. For example, a trained neural network may be used to generate the state information.
The process of generating state information from the measurement results of the state measurement unit 30 may be performed by the state measurement unit 30 itself, by the information processing device 10, or by an analysis device separate from both. When the analysis device or the information processing device 10 generates the state information, it acquires the measurement results and any information necessary for generating the state information from the one or more state measurement units 30 and uses them to generate the state information.
When the state information includes multiple types of information as described above, these types of information need not all be generated by the same device. For example, some of the information may be generated by the state measurement unit 30 and other information by the information processing device 10. The information processing device 10 can generate the state information by combining the multiple types of information.
In step S103, the acquisition unit 110 acquires the state information. The acquisition unit 110 may acquire the state information from the state measurement unit 30 or from another analysis device. Alternatively, the acquisition unit 110 may acquire state information generated in the information processing device 10 using information acquired from one or more state measurement units 30 and analysis devices.
In step S104, the estimation unit 130 generates degradation information using the state information acquired by the acquisition unit 110. The degradation information indicates, for example, one or more degradation factors arising in the target image and the degree of degradation for each factor. That is, the degradation information indicates one or more combinations of a degradation factor and a degree of degradation. Using information on the degradation factors and their degrees, control for obtaining a good target image can be performed appropriately.
Here, the one or more degradation factors include, for example, one or more of focus blur, motion blur, occlusion, and off-angle. When the target image is an image including an iris, the one or more degradation factors preferably include one or more of focus blur, motion blur, eyelid occlusion, illumination reflection occlusion, and off-angle. By targeting such degradation factors, the degradation of the target image can be estimated accurately.
A degradation factor can also be described as a type of degradation that is expected to occur in the target image.
Focus blur is blurring of the target 90 or a predetermined part of the target 90 caused by defocus. The predetermined part of the target 90 is, for example, the part of the target image used for authentication. Motion blur is blurring caused by a change in the relative position between the camera 20 and the target 90 or its predetermined part. For focus blur and motion blur, the stronger the blurring of the target 90 or its predetermined part, the higher the degree of degradation. The degree of degradation may indicate the state of the predetermined part in the target image. For example, when the target image is an image including an iris, the predetermined part is the iris, and the degree of degradation may be set higher the stronger the blurring of the iris region in the target image.
Occlusion means that at least part of the target 90 or its predetermined part does not appear in the target image. When the target image is an image including an iris, eyelid occlusion means that at least part of the iris is hidden by the eyelid, and illumination reflection occlusion means that the brightness of at least part of the iris region of the target image becomes high due to illumination reflected off the eye. If the iris region contains a part whose brightness is too high (for example, a blown-out highlight), the iris pattern used for iris authentication cannot be detected.
For occlusion, the larger the proportion of the target 90 or its predetermined part that is hidden, the higher the degree of degradation. For example, when the target image is an image including an iris, the higher the proportion of the iris that is hidden, the higher the degree of degradation.
Off-angle means that the direction of the target 90 or of its predetermined part deviates from the optical axis of the camera 20. For off-angle, the larger the angle by which that direction deviates from the optical axis of the camera 20, the higher the degree of degradation. For example, when the target image is an image including an iris, the larger the angle by which the direction of the iris (for example, the gaze direction) deviates from the optical axis of the camera 20, the higher the degree of degradation.
The degradation information includes, for example, a numerical value indicating the degree of degradation for each degradation factor. The degradation information can be a vector whose elements are the respective degrees of degradation.
The method by which the estimation unit 130 generates the degradation information is not particularly limited. For example, the estimation unit 130 can generate the degradation information using a neural network trained with state information and ground-truth degradation information. In this way, degradation information with high estimation accuracy can be generated.
FIG. 7 is a diagram illustrating a degradation estimation model 131 that the estimation unit 130 uses to generate the degradation information. The input data of the degradation estimation model 131 includes the state information, and its output data includes the degradation information. The degradation estimation model 131 includes a neural network that has been trained in advance by machine learning using state information and ground-truth degradation information as training data.
The state information used for this machine learning is preferably state information obtained with the information processing system 50 according to this embodiment or with a similar system. The ground-truth degradation information used for this machine learning is preferably information on the degradation that occurred in images of the target 90 obtained by the camera 20 in the information processing system 50 according to this embodiment or in a similar system. Note, however, that the ground-truth degradation information is information on degradation occurring in images obtained without any notification to, or control of, the target 90 and the camera 20 between the measurement by the state measurement unit 30 and the imaging by the camera 20. The types and forms of information included in the ground-truth degradation information are the same as those of the degradation information described above.
The degradation estimation model 131 according to this embodiment may be a model prepared for a specific imaging condition. That is, the imaging condition under which the state information and ground-truth degradation information used for training the degradation estimation model 131 are prepared is preferably the same as the imaging condition under which degradation information is estimated using the degradation estimation model 131. Here, the imaging condition is, for example, one or more imaging parameters of the camera 20, the state of illumination of the target 90, or a combination of these. In the example of FIG. 5, the information processing system 50 includes an illumination unit 40, which can be controlled by the information processing device 10. The illumination unit 40 is, for example, an illumination device whose light direction and intensity can be adjusted.
A degradation estimation model 131 may be prepared for each of a plurality of imaging conditions and held in advance in a storage device accessible from the estimation unit 130. In that case, the estimation unit 130 selects, from the plurality of degradation estimation models 131, the one corresponding to the imaging condition at the time the state information was obtained. The information processing device 10 can acquire information for identifying that imaging condition from the camera 20 and the illumination unit 40.
The estimation unit 130 inputs the state information into the degradation estimation model 131 and obtains the degradation information output from the degradation estimation model 131.
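A minimal sketch of such a model, assuming the seven-element state vector from the earlier sketch. The layer sizes, factor list, and loss are illustrative assumptions, not details disclosed by the embodiment:

```python
import torch
import torch.nn as nn

DEGRADATION_FACTORS = ["focus_blur", "motion_blur", "eyelid_occlusion",
                       "reflection_occlusion", "off_angle"]

class DegradationEstimationModel(nn.Module):
    """Maps a state vector to one degree of degradation per factor."""
    def __init__(self, state_dim: int = 7, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, len(DEGRADATION_FACTORS)),
            nn.Sigmoid(),  # degrees of degradation normalized to [0, 1]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# Training would pair each state vector with ground-truth degradation measured
# on images captured with no notification or control between measurement and capture.
model = DegradationEstimationModel()
loss_fn = nn.MSELoss()  # regression against the ground-truth degradation vector
```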
In step S105, the control unit 150 outputs control information corresponding to the degradation information generated by the estimation unit 130.
In this embodiment, the control unit 150 outputs control information for controlling the imaging conditions of the camera 20 based on the degradation information. In this way, a target image with less degradation can be obtained.
The control information output by the control unit 150 is information for controlling the imaging conditions of the camera 20 so as to suppress the degradation indicated in the degradation information. The control unit 150 outputs the control information to at least one of the camera 20 and the illumination unit 40.
In this case, the distance between point P1 and point P2 described above is not particularly limited, but is preferably, for example, 50 cm or more and 70 cm or less. Setting the distance to 50 cm or more secures time for controlling the imaging conditions; setting it to 70 cm or less improves the accuracy of the degradation estimation.
Examples of the processing performed by the control unit 150 according to this embodiment are described below.
For example, when the movement speed of the target 90 is high, degradation due to motion blur is expected. This degradation can be reduced, for example, by shortening the exposure time of the camera 20 or by adjusting the focal position of the camera 20.
Accordingly, when the degree of degradation due to motion blur indicated in the degradation information is equal to or greater than a predetermined criterion a, the control unit 150 outputs control information for causing the camera 20 to shorten the exposure time when obtaining the target image. The control unit 150 may output control information that shortens the exposure time further the higher the degree of degradation due to motion blur.
Alternatively, when the degree of degradation due to motion blur indicated in the degradation information is equal to or greater than the predetermined criterion a, the control unit 150 may output control information for causing the camera 20 to change its focal distance while capturing the target image. Specifically, the control unit 150 outputs control information for causing the camera 20 to capture the target image while shortening the focal distance.
In addition, when the degree of degradation due to focus blur indicated in the degradation information is equal to or greater than a predetermined criterion b, the control unit 150 outputs, for example, control information for narrowing the aperture of the lens of the camera 20 to widen the depth of field, or control information for changing the focal position of the camera 20.
When the degree of degradation due to illumination reflection occlusion indicated in the degradation information is equal to or greater than a predetermined criterion c, the control unit 150 outputs, for example, control information for changing at least one of the position of the camera 20, the orientation of the camera 20, the position of the illumination relative to the target 90, and the orientation of the illumination, so that illumination reflection does not occur on the target 90 or its predetermined part.
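The rules above amount to threshold checks over the degradation vector. The sketch below shows one way to turn them into control commands; the threshold values, the exposure scaling rule, and the command tuples are all illustrative assumptions:

```python
# Hypothetical values for criteria a, b, and c (degrees of degradation in [0, 1]).
CRITERION_A = 0.5  # motion blur
CRITERION_B = 0.5  # focus blur
CRITERION_C = 0.5  # illumination reflection occlusion

def control_commands(degradation: dict) -> list:
    """Translate estimated degradation into imaging-condition control commands."""
    commands = []
    if degradation["motion_blur"] >= CRITERION_A:
        # Shorten exposure more the stronger the expected motion blur.
        scale = 1.0 - 0.5 * degradation["motion_blur"]
        commands.append(("camera", "scale_exposure", scale))
    if degradation["focus_blur"] >= CRITERION_B:
        commands.append(("camera", "narrow_aperture", None))  # widen depth of field
    if degradation["reflection_occlusion"] >= CRITERION_C:
        commands.append(("illumination", "change_orientation", None))
    return commands
```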
In step S106, the camera 20 generates the target image by imaging the target 90 located at point P2. The camera 20 may perform the imaging based on a control signal output from the control unit 150. When the control unit 150 controls the imaging parameters by outputting control information to the camera 20 as described above, the control unit 150 may output the control information at the time the target image is captured; that is, steps S105 and S106 may be simultaneous.
The degree of degradation of the target image actually obtained in step S106 is expected to be lower than the degree of degradation indicated in the degradation information, that is, the estimated degree of degradation.
The hardware configuration of the information processing device 10 is described below. Each functional component of the information processing device 10 (the acquisition unit 110, the estimation unit 130, and the control unit 150) may be realized by hardware that implements it (for example, a hard-wired electronic circuit) or by a combination of hardware and software (for example, a combination of an electronic circuit and a program that controls it). The case where each functional component of the information processing device 10 is realized by a combination of hardware and software is described further below.
FIG. 8 is a diagram illustrating a computer 1000 for realizing the information processing device 10. The computer 1000 is any computer, for example an SoC (System on Chip), a personal computer (PC), a server machine, a tablet terminal, or a smartphone. The computer 1000 may be a dedicated computer designed to realize the information processing device 10 or a general-purpose computer. The information processing device 10 may be realized by one computer 1000 or by a combination of a plurality of computers 1000.
The computer 1000 has a bus 1020, a processor 1040, a memory 1060, a storage device 1080, an input/output interface 1100, and a network interface 1120. The bus 1020 is a data transmission path through which the processor 1040, the memory 1060, the storage device 1080, the input/output interface 1100, and the network interface 1120 exchange data. However, the method of connecting the processor 1040 and the other components is not limited to a bus connection. The processor 1040 is any of various processors such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or an FPGA (Field-Programmable Gate Array). The memory 1060 is a main storage device realized using a RAM (Random Access Memory) or the like. The storage device 1080 is an auxiliary storage device realized using a hard disk, an SSD (Solid State Drive), a memory card, a ROM (Read Only Memory), or the like.
The input/output interface 1100 is an interface for connecting the computer 1000 to input/output devices. For example, an input device such as a keyboard and an output device such as a display are connected to the input/output interface 1100. The input/output interface 1100 may connect to the input and output devices by a wireless or a wired connection.
The network interface 1120 is an interface for connecting the computer 1000 to a network, for example a LAN (Local Area Network) or a WAN (Wide Area Network). The network interface 1120 may connect to the network by a wireless or a wired connection.
The camera 20, the one or more state measurement units 30, and the illumination unit 40 are each connected to the information processing device 10 via the input/output interface 1100 or the network interface 1120 and can communicate with the information processing device 10.
The storage device 1080 stores program modules that realize the functional components of the information processing device 10. The processor 1040 reads each of these program modules into the memory 1060 and executes it, thereby realizing the function corresponding to each program module.
As described above, according to this embodiment, the control unit 150 outputs control information corresponding to the degradation information at least either before the target is imaged by the camera 20 or when the target is imaged by the camera 20. Therefore, the degradation of the target image can be estimated before the target is imaged by the camera 20, and measures can be taken to obtain a good image.
(Second Embodiment)
FIG. 9 is a diagram illustrating the functional configuration of an information processing system 50 according to the second embodiment. The information processing system 50, information processing device 10, and information processing method according to this embodiment are the same as those according to the first embodiment, except for the points described below.
When the degradation information satisfies a predetermined condition C, the control unit 150 according to this embodiment outputs control information for executing a notification. In this way, the target 90 can be prompted to act so as to avoid the estimated degradation.
In the example of FIG. 9, the information processing system 50 further includes one or more notification units 70. The control unit 150 outputs, to a notification unit 70, control information for causing that notification unit 70 to execute a notification. Examples of the notification unit 70 include a display, a speaker, and a light-emitting device. When the notification unit 70 is a display, the notification can be, for example, a message or a figure shown on the display. When the notification unit 70 is a speaker, the notification can be a sound output such as a message or an alarm. When the notification unit 70 is a light-emitting device, the notification can be light emission. The notification unit 70 is connected to the information processing device 10 via the input/output interface 1100 or the network interface 1120.
Each notification unit 70 is arranged so that its notification can be recognized by a target 90 located between point P1 and point P2.
The control unit 150 may output the notification control information instead of outputting, as described in the first embodiment, the control information for controlling the imaging environment to at least one of the camera 20 and the illumination unit 40, or may output the notification control information in addition to that control information.
Examples of the processing performed by the control unit 150 according to this embodiment are described below. In the following examples, the predetermined condition C is satisfied when a degree of degradation is equal to or greater than the criterion set for that degree of degradation.
For example, when the movement speed of the target 90 is high, degradation due to motion blur is expected, and this degradation can be reduced, for example, by lowering the movement speed of the target 90. Accordingly, when the degree of degradation due to motion blur indicated in the degradation information is equal to or greater than a predetermined criterion d, the control unit 150 outputs control information for causing one or more notification units 70 to issue a notification prompting the target 90 to slow down. As a result, a message such as "Please slow down" or "Please stop once at the designated position" is, for example, shown on the display or output as sound from the speaker.
When the degree of degradation due to illumination reflection occlusion indicated in the degradation information is equal to or greater than a predetermined criterion e, the control unit 150 outputs, for example, control information for causing one or more notification units 70 to issue a notification prompting the target 90 to change the direction of the face or the gaze. As a result, a message such as "Please look ahead" is, for example, shown on the display or output from the speaker. Alternatively, the gaze of the target 90 may be guided by light emitted from the light-emitting device.
When the degree of degradation due to eyelid occlusion indicated in the degradation information is equal to or greater than a predetermined criterion e, the control unit 150 outputs, for example, control information for causing one or more notification units 70 to issue a notification prompting the target 90 to open the eyes wide. As a result, a message such as "Please open your eyes wide" is, for example, shown on the display or output from the speaker.
When the degree of degradation due to off-angle indicated in the degradation information is equal to or greater than a predetermined criterion f, the control unit 150 outputs, for example, control information for causing one or more notification units 70 to issue a notification prompting the target 90 to change at least one of the face direction, the position, and the gaze direction so that at least one of the face and the gaze is directed toward the camera 20. As a result, a message such as "Please look ahead" or "Please move a little to the right and then direct your gaze this way" is, for example, shown on the display or output from the speaker. Alternatively, the gaze of the target 90 may be guided by light emitted from the light-emitting device.
The control unit 150 may further output control information based on the state information. For example, when the state information indicates that the target 90 is wearing glasses, the control unit 150 outputs control information for causing one or more notification units 70 to issue a notification prompting the target 90 to remove the glasses. As a result, a message such as "Please remove your glasses" is, for example, shown on the display or output from the speaker.
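Like the imaging-condition control in the first embodiment, these notification rules can be sketched as threshold checks that select messages. The thresholds (criteria d, e, and f), factor names, and message texts below are illustrative assumptions:

```python
# Hypothetical values for criteria d, e, and f (degrees of degradation in [0, 1]).
THRESHOLDS = {"motion_blur": 0.5, "reflection_occlusion": 0.5,
              "eyelid_occlusion": 0.5, "off_angle": 0.5}

MESSAGES = {
    "motion_blur": "Please slow down.",
    "reflection_occlusion": "Please look ahead.",
    "eyelid_occlusion": "Please open your eyes wide.",
    "off_angle": "Please direct your gaze toward the camera.",
}

def notifications(degradation: dict, state: dict) -> list:
    """Select notification messages for degradation factors exceeding their criteria."""
    msgs = [MESSAGES[factor] for factor, threshold in THRESHOLDS.items()
            if degradation.get(factor, 0.0) >= threshold]
    if state.get("wearing_glasses"):  # rule driven by the state information itself
        msgs.append("Please remove your glasses.")
    return msgs
```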
In this embodiment as well, the distance between point P1 and point P2 described above is not particularly limited, but is preferably, for example, 1 m or more and 2 m or less. Setting the distance to 1 m or more secures time for the target 90 to change its state in response to the notification; setting it to 2 m or less secures sufficient accuracy of the degradation estimation.
Next, the operation and effects of this embodiment are described. This embodiment provides the same operation and effects as the first embodiment. In addition, when the degradation information satisfies the predetermined condition C, the control unit 150 according to this embodiment outputs control information for executing a notification. Therefore, the target 90 can be prompted to act so as to avoid the estimated degradation.
(Third Embodiment)
FIG. 10 is a flowchart illustrating the flow of processing executed by an information processing system 50 according to the third embodiment. The information processing system 50, information processing device 10, and information processing method according to this embodiment are the same as those according to at least one of the first and second embodiments, except for the points described below.
The control unit 150 according to this embodiment uses the degradation information to estimate the quality of the target image, that is, its suitability as an image used for authentication, and outputs control information corresponding to that quality. In this way, a target image suitable for the authentication process can be obtained.
In authentication processes such as iris authentication and face authentication, the success or failure of authentication depends on the quality of the image used. For example, a target 90 that should be authenticated may fail authentication because the image quality is low. Moreover, the image quality required for authentication can differ depending on the performance and characteristics of the authentication engine.
The control unit 150 according to this embodiment generates, for example, a quality score as the quality indicating suitability as an image used for authentication.
In the example of FIG. 10, steps S101 to S104 and step S106 are as described in the first embodiment.
In this embodiment, when the estimation unit 130 generates the degradation information, the control unit 150 estimates the quality using that degradation information in step S204, for example by generating a quality score. The less suitable the target image is as an image used for authentication, the lower its quality score. The method by which the control unit 150 generates the quality score is not particularly limited and may be a method based on linear regression or a method using a neural network; it is described in detail later.
Then, in step S205, the control unit 150 outputs control information corresponding to the quality score. For example, the control unit 150 may output the control information for controlling the imaging conditions of the camera 20 described in the first embodiment only when the quality score is equal to or less than a predetermined criterion g. Likewise, the control unit 150 may output the control information for executing a notification described in the second embodiment only when the quality score is equal to or less than the predetermined criterion g.
A first and a second example of how the control unit 150 generates the quality score are described below.
<First Example>
In the first example, the control unit 150 calculates the quality score by linear regression. In this example, the quality score is calculated as a weighted sum of the degrees of degradation for the one or more degradation factors. The weight for each degree of degradation can be determined in advance, for example, as follows.
First, a plurality of images with different degradation states are prepared. The subjects of these images may be the same as or different from one another. Authentication processing is then performed using these images, and the success or failure of the authentication is confirmed. For each image, the degree of degradation for each degradation factor is also identified; these degrees of degradation are made consistent with (that is, comparable to) the content of the degradation information generated by the estimation unit 130. Using the resulting combinations of authentication success or failure and one or more degrees of degradation, a weight is determined for each degree of degradation. A formula for the weighted sum using these weights is then defined as the formula for calculating the quality score.
Instead of the success or failure of authentication, an authentication score indicating the likelihood that each image shows the genuine person may be determined by another method. The authentication score may be assigned to each image by a specific authentication algorithm or by human judgment. In that case, the weights for the degrees of degradation are determined, using combinations of the authentication score and the one or more degrees of degradation, so that a quality score corresponding to the authentication score is obtained. As an example of an authentication algorithm, when the authentication is iris authentication, an iris code extracted from the iris image can be used as the feature, and whether the person is genuine can be judged from the Hamming distance between the pre-registered feature of the person and the feature extracted from the image captured by the camera 20. The authentication algorithm is not particularly limited, and various algorithms can be used. However, the plurality of authentication scores prepared for defining one formula for calculating the quality score are preferably prepared using the same algorithm.
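A minimal sketch of the iris-code matching mentioned above. The normalized-distance formulation and the threshold value are illustrative assumptions, not parameters disclosed here:

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of differing bits between two binary iris codes of equal length."""
    assert code_a.shape == code_b.shape
    return float(np.count_nonzero(code_a != code_b)) / code_a.size

def is_genuine(enrolled: np.ndarray, probe: np.ndarray,
               threshold: float = 0.32) -> bool:
    """Accept as the genuine person when the distance falls below a threshold.

    0.32 is a commonly cited ballpark for normalized iris-code distances,
    used here only as an example.
    """
    return hamming_distance(enrolled, probe) < threshold
```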
The one or more weights used in the formula may be generated by a neural network. That is, the degrees of degradation for the one or more degradation factors and the authentication score are used as the input data of the neural network, and the one or more weights are used as its output data. The neural network is then trained so that the quality score calculated using the output weights approaches the authentication score serving as the ground truth. The one or more weights output from the trained neural network obtained in this way are used as the weights in the formula for calculating the quality score.
In this example, the control unit 150 calculates the quality score by substituting the degrees of degradation indicated in the degradation information generated by the estimation unit 130 into the formula determined in advance as described above.
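A minimal sketch of the first example, fitting the weights by ordinary least squares against authentication scores and then evaluating the weighted sum. The data shapes and the added bias term are assumptions made for this sketch:

```python
import numpy as np

def fit_weights(degradations: np.ndarray, auth_scores: np.ndarray) -> np.ndarray:
    """Fit one weight per degradation factor (plus a bias) by least squares.

    degradations: (n_images, n_factors) degrees of degradation per image
    auth_scores:  (n_images,) authentication scores serving as ground truth
    """
    X = np.hstack([degradations, np.ones((degradations.shape[0], 1))])  # add bias
    weights, *_ = np.linalg.lstsq(X, auth_scores, rcond=None)
    return weights

def quality_score(degradation_vector: np.ndarray, weights: np.ndarray) -> float:
    """Weighted sum of the estimated degrees of degradation."""
    return float(np.dot(np.append(degradation_vector, 1.0), weights))
```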
<Second Example>
FIG. 11 is a diagram illustrating a quality estimation model 151 that the control unit 150 uses to generate the quality score. The input data of the quality estimation model 151 includes the degradation information, and its output data includes the quality score. The quality estimation model 151 includes a neural network that has been trained in advance by machine learning using a plurality of combinations of an authentication score (corresponding to the ground truth of the quality score) and one or more degrees of degradation, as described in the first example.
In this example, the control unit 150 inputs the degrees of degradation indicated in the degradation information generated by the estimation unit 130 into the quality estimation model 151 and obtains the quality score output from the quality estimation model 151.
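A minimal sketch of the second example, reusing the five-factor degradation vector from the earlier model sketch; the architecture and sizes are again illustrative assumptions:

```python
import torch
import torch.nn as nn

class QualityEstimationModel(nn.Module):
    """Maps a degradation vector to a scalar quality score."""
    def __init__(self, n_factors: int = 5, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_factors, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # trained so its output approaches the authentication score
        )

    def forward(self, degradation: torch.Tensor) -> torch.Tensor:
        return self.net(degradation).squeeze(-1)
```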
Next, the operation and effects of this embodiment are described. This embodiment provides the same operation and effects as at least one of the first and second embodiments. In addition, according to this embodiment, the control unit 150 uses the degradation information to estimate the quality of the target image, which indicates its suitability as an image used for authentication. Therefore, a target image suitable for the authentication process can be obtained.
(Fourth Embodiment)
The information processing system 50, information processing device 10, and information processing method according to the fourth embodiment are the same as those according to at least one of the first to third embodiments, except for the points described below.
When the degradation information satisfies a predetermined condition D, the control unit 150 according to this embodiment causes the camera 20 to cancel imaging of the target 90. In this way, wasteful processing can be reduced when a good target image is unlikely to be obtained.
Condition D is a condition indicating that a good target image is unlikely to be obtained. Condition D may be a condition on the degradation information itself, or a condition on the quality (quality score) obtained from the degradation information as in the third embodiment. In the latter case, imaging of the target 90 by the camera 20 is canceled when it is estimated that the target image cannot satisfy the quality required for authentication.
As an example, condition D is that, among the one or more degrees of degradation indicated in the degradation information, a predetermined number or more are equal to or greater than the criterion set for each degree of degradation. As another example, condition D is that the quality score obtained based on the degradation information is equal to or less than a predetermined score. However, condition D is not limited to these examples.
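Both example forms of condition D reduce to simple checks over the degradation vector or the quality score. A sketch with hypothetical thresholds:

```python
def condition_d(degradation: dict, quality_score: float,
                per_factor_criteria: dict, min_exceeded: int = 2,
                min_quality: float = 0.5) -> bool:
    """True when imaging should be canceled because a good image is unlikely."""
    # First example: enough individual degrees of degradation exceed their criteria.
    n_exceeded = sum(1 for factor, criterion in per_factor_criteria.items()
                     if degradation.get(factor, 0.0) >= criterion)
    if n_exceeded >= min_exceeded:
        return True
    # Second example: the quality score falls to or below a predetermined score.
    return quality_score <= min_quality
```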
FIG. 12 is a block diagram illustrating the functional configuration of the information processing system 50 according to this embodiment. Like the information processing system 50 according to the second embodiment, it includes one or more notification units 70. The control unit 150 according to this embodiment causes the camera 20 to cancel imaging of the target 90 and outputs control information for executing a notification. The control unit 150 according to this embodiment need not output control information for controlling the imaging conditions of the camera 20.
When the degradation information satisfies the predetermined condition D, the control unit 150 outputs, for example, control information for causing one or more notification units 70 to issue a notification prompting the target 90 to go back and redo the approach. As a result, a message such as "Please go back a little and walk the route again." is, for example, shown on the display or output from the speaker.
Such a notification is expected to cause the target 90 to redo the approach, so the information processing system 50 can again perform the measurement at point P1, the generation of the state information, the generation of the degradation information, and so on. According to this example, these retries can be performed without imaging the target 90 with the camera 20, so a good target image can be obtained in a shorter time than when a retry is prompted after the camera 20 has imaged the target 90.
 また、図12の例において、本実施形態に係る情報処理システム50は、代替カメラ22をさらに備える。代替カメラ22は、カメラ20とは別途設けられたカメラであり、カメラ20の代わりに対象90を撮像して対象画像を生成するためのカメラである。 In the example of FIG. 12, the information processing system 50 according to this embodiment further includes an alternative camera 22. The alternative camera 22 is a camera provided separately from the camera 20, and is a camera for capturing an image of the target 90 in place of the camera 20 to generate a target image.
 本実施形態に係る情報処理装置10を実現する計算機のハードウエア構成は、第1の実施形態に係る情報処理装置10と同様に、例えば図8によって表される。代替カメラ22は、入出力インタフェース1100またはネットワークインタフェース1120を介して情報処理装置10と接続される。 The hardware configuration of the computer that realizes the information processing device 10 according to this embodiment is shown in FIG. 8, for example, similar to the information processing device 10 according to the first embodiment. The alternative camera 22 is connected to the information processing device 10 via an input/output interface 1100 or a network interface 1120.
 劣化情報が所定の条件Dを満たす場合、制御部150は、たとえば対象90に対し、代替カメラ22の撮像領域に停止することを促す報知を、一以上の報知部70にさせるための制御情報を出力する。そうすることで、たとえば「右側のカメラの前に立ち止まって下さい」といったメッセージがディスプレイに表示されたり、スピーカーから発せられたりする。 If the degradation information satisfies a predetermined condition D, the control unit 150 outputs control information to cause one or more notification units 70 to issue a notification to, for example, the target 90, urging the target 90 to stop in the imaging area of the alternative camera 22. By doing so, a message such as "Please stop in front of the camera on the right side" is displayed on the display or emitted from the speaker.
 このような報知により、代替カメラ22で対象90を良好に撮像可能な位置に、対象90が停止することが期待される。そこで情報処理システム50は代替カメラ22によって対象90を撮像し、良好な対象画像を得られる。本例によれば、代替カメラ22による撮像を、カメラ20で対象90を撮像することなく実施できるため、カメラ20で対象90を撮像した後に再撮像を促す場合に比べて、短時間で良好な対象画像を得られる。 This notification is expected to cause the target 90 to stop at a position where the alternative camera 22 can capture a good image of the target 90. The information processing system 50 then captures an image of the target 90 using the alternative camera 22, and obtains a good image of the target. According to this example, imaging using the alternative camera 22 can be performed without capturing an image of the target 90 using the camera 20, so a good image of the target can be obtained in a short time compared to the case where the camera 20 captures an image of the target 90 and then prompts the user to capture the image again.
 次に、本実施形態の作用および効果について説明する。本実施形態においては第1の実施形態および第2の実施形態の少なくとも一方と同様の作用および効果が得られる。くわえて、本実施形態に係る制御部150は、劣化情報が所定の条件Dを満たす場合、カメラ20による対象90の撮像を中止させる。したがって、良好な対象画像が得られる可能性が低い場合に、無駄な処理を減らすことができる。 Next, the action and effect of this embodiment will be described. In this embodiment, the same action and effect as at least one of the first and second embodiments can be obtained. In addition, the control unit 150 according to this embodiment stops the camera 20 from capturing an image of the target 90 when the degradation information satisfies a predetermined condition D. Therefore, it is possible to reduce unnecessary processing when there is a low possibility of obtaining a good target image.
(Fifth Embodiment)
The information processing system 50, information processing device 10, and information processing method according to the fifth embodiment are the same as those according to at least any one of the first to fourth embodiments, except for the points described below.
In the information processing system 50 according to this embodiment, the state information further indicates the imaging conditions of the camera 20. The imaging conditions include the imaging parameters of the camera 20 and the environmental conditions. The state information indicates, for example, one or more of the exposure time of the camera 20, the state of the illumination on the target 90, the focal position of the camera 20, the lens aperture of the camera 20, and the brightness of the area imaged by the camera 20. By estimating the degradation of the target image using state information that further indicates the imaging conditions of the camera 20, the degradation factors and the degrees of degradation can be estimated with high accuracy.
In the first embodiment, the degradation estimation model 131 was trained under the assumption of specific imaging conditions. However, when the imaging conditions fluctuate, for example under the influence of outside light, the actual imaging conditions do not necessarily match the imaging conditions assumed by the degradation estimation model 131. Moreover, in an imaging environment where the brightness of the imaging area changes, the imaging parameters of the camera 20 may not be fixed but instead be automatically adjusted according to the brightness of the imaging area and the like.
By generating the degradation information from state information that includes the imaging conditions, the degradation can be estimated accurately even when the imaging conditions change. There is also no need to prepare a degradation estimation model 131 in advance for each of many different imaging conditions.
As described above, in this embodiment the state information indicates, for example, one or more of the exposure time of the camera 20, the state of the illumination on the target 90, the focal position of the camera 20, the lens aperture of the camera 20, and the brightness of the area imaged by the camera 20.
Examples of the state of the illumination on the target 90 include at least one of the direction and the intensity of the illumination by the illumination unit 40. Examples of the focal position of the camera 20 include at least one of the focal length and the coordinates of the focal position.
The information processing device 10 can acquire information indicating the exposure time of the camera 20, the focal position of the camera 20, and the lens aperture of the camera 20 from the camera 20. The information processing device 10 can acquire information indicating the state of the illumination on the target 90 from the illumination unit 40. The information processing device 10 can also acquire information indicating the brightness of the area imaged by the camera 20 from an illuminance sensor or the like provided in the imaging area.
In addition to the information indicating the state of the target 90 described in the first embodiment, the information processing device 10 acquires one or more pieces of information indicating the imaging conditions of the camera 20 as described above. The information processing device 10 then generates state information that includes both the information indicating the state of the target 90 and the information indicating the imaging conditions of the camera 20. The state information can be a vector whose elements are the multiple types of information (face direction, body posture, exposure time of the camera 20, focal position of the camera 20, and so on). The acquisition unit 110 then acquires the generated state information.
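For illustration only (the embodiments do not prescribe any particular data layout; every field name below is an assumption), such a state vector could be assembled roughly as follows:

```python
import numpy as np

def build_state_vector(subject_state: dict, imaging_conditions: dict) -> np.ndarray:
    """Concatenate subject-state and imaging-condition features into one vector."""
    features = [
        *subject_state["face_direction"],         # e.g. (yaw, pitch) in degrees
        *subject_state["gaze_direction"],         # e.g. (yaw, pitch) in degrees
        subject_state["speed"],                   # walking speed in m/s
        float(subject_state["wearing_glasses"]),  # 1.0 if glasses are worn
        imaging_conditions["exposure_time"],      # seconds
        imaging_conditions["focus_position"],     # distance from the camera in m
        imaging_conditions["aperture"],           # f-number
        imaging_conditions["scene_brightness"],   # e.g. lux from an illuminance sensor
    ]
    return np.asarray(features, dtype=np.float32)
```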
In this embodiment, the estimation unit 130 can generate the degradation information using the degradation estimation model 131. However, since the state information in this embodiment includes information indicating the imaging conditions of the camera 20, the input to the degradation estimation model 131 also includes the imaging conditions of the camera 20.
The degradation estimation model 131 according to this embodiment can be trained in the same way as the degradation estimation model 131 according to the first embodiment. However, the state information used in the machine learning of the degradation estimation model 131 according to this embodiment includes information indicating the imaging conditions of the camera 20, namely the imaging conditions under which the images serving as the source of the correct degradation information were captured.
The degradation estimation model 131 according to this embodiment is thus a model that can be used regardless of the imaging conditions.
Using the state information including the information indicating the imaging conditions, the estimation unit 130 according to this embodiment generates the degradation information in the same manner as described in the first embodiment. The control unit 150 according to this embodiment then outputs control information in the same manner as the control unit 150 according to at least any one of the first to fourth embodiments.
Next, the operation and effects of this embodiment will be described. This embodiment provides the same operation and effects as at least any one of the first to fourth embodiments. In addition, in the information processing system 50 according to this embodiment, the state information further indicates the imaging conditions of the camera 20. The degradation factors and the degrees of degradation can therefore be estimated with high accuracy.
(Sixth Embodiment)
FIG. 13 is a diagram for explaining the method by which the estimation unit 130 according to the sixth embodiment generates the degradation information. The information processing system 50, information processing device 10, and information processing method according to this embodiment are the same as those according to at least any one of the first to fifth embodiments, except for the points described below.
The estimation unit 130 according to this embodiment uses the state information to generate image-capture state information, which indicates the result of estimating the state of the target 90 at the time the target 90 reaches the focal point of the camera 20. The estimation unit 130 then generates the degradation information using the image-capture state information. The estimation unit 130 according to this embodiment thus splits the degradation estimation into a first stage, which estimates the state at the time of imaging from the state at point P1, and a second stage, which estimates the degradation of the target image from the state at the time of imaging. This allows the information processing system 50 to accommodate changes in how the state evolves between point P1 and point P2, for example due to changes in the usage environment, without modifying the second-stage processing (for example, its estimation model).
The structure of the image-capture state information is the same as that of the state information. That is, the image-capture state information indicates at least the state of the target. It may further indicate the imaging conditions of the camera 20, and it can be a vector whose elements are the multiple types of information.
The estimation unit 130 according to this embodiment generates the degradation information from the state information using, for example, a capture-time state estimation model 132 and a degradation estimation model 133. The capture-time state estimation model 132 and the degradation estimation model 133 are each trained models including a neural network. The capture-time state estimation model 132 estimates the state at the time of imaging from the state at point P1. The degradation estimation model 133 estimates the degradation of the target image from the state at the time of imaging.
The input data of the capture-time state estimation model 132 includes the state information, and its output data includes the image-capture state information. The capture-time state estimation model 132 can be prepared in advance by machine learning that uses state information and correct state information as training data. The correct state information is obtained by measuring the state at the time of imaging (at point P2); that is, the state at point P2 can be measured and the correct state information generated in the same way that the information processing system 50 generates the state information at point P1. The state information used for the machine learning of the capture-time state estimation model 132 may be the same as the state information used for the machine learning of the degradation estimation model 131. The structure of the correct state information is preferably the same as that of the state information.
The input data of the degradation estimation model 133 includes the image-capture state information, and its output data includes the degradation information. The degradation estimation model 133 can be prepared in advance by machine learning that uses the above-described correct state information and the above-described correct degradation information as training data.
The estimation unit 130 according to this embodiment inputs the state information acquired by the acquisition unit 110 into the capture-time state estimation model 132 and obtains the image-capture state information output from it. The estimation unit 130 further inputs the image-capture state information into the degradation estimation model 133 and obtains the degradation information output from it. The control unit 150 according to this embodiment then outputs control information in the same manner as the control unit 150 according to at least any one of the first to fourth embodiments.
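A minimal sketch of this two-stage inference, assuming PyTorch and assuming the two trained models were saved under the illustrative file names below (the embodiments do not specify a framework or storage format), might look like:

```python
import torch

# Load the two trained models (file names are illustrative assumptions).
capture_state_model = torch.load("capture_time_state_model_132.pt")  # stage 1
degradation_model = torch.load("degradation_model_133.pt")           # stage 2

def estimate_degradation(state_info: torch.Tensor) -> torch.Tensor:
    """Stage 1: state at point P1 -> estimated state at the focal point (time T2).
    Stage 2: estimated capture-time state -> per-factor degradation degrees."""
    with torch.no_grad():
        capture_time_state = capture_state_model(state_info)
        return degradation_model(capture_time_state)
```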
In this embodiment, the state information may or may not include information indicating the imaging conditions of the camera 20. When it does, the image-capture state information and the correct state information also include information indicating the imaging conditions of the camera 20.
When the state information does not include information indicating the imaging conditions of the camera 20, a capture-time state estimation model 132 and a degradation estimation model 133 may be prepared for each of a plurality of imaging conditions. These per-condition models are held in advance in a storage device accessible from the estimation unit 130. Then, as in the first embodiment, the estimation unit 130 uses the capture-time state estimation model 132 and the degradation estimation model 133 that correspond to the imaging conditions at the time the state information was obtained.
Instead of using the capture-time state estimation model 132, the estimation unit 130 may generate the image-capture state information based on predetermined rules. For example, the estimation unit 130 can generate the image-capture state information from the state information based on rules such as the following (see the sketch after these examples). Hereinafter, the time at which the state measurement unit 30 measures the state of the target 90 is called time T1, and the time at which the target 90 reaches the focal point of the camera 20 is called time T2.
The estimation unit 130 estimates, for example, the body posture and moving speed at time T2 from the body posture and moving speed of the target 90 at time T1. For this estimation, the walking model described in "A comprehensive analysis model and simulation of bipedal walking" by Yamazaki Nobuhisa (Biomechanism, Vol. 3, 1975, pp. 261-269) can be used, for example. When a notification encouraging deceleration is constantly issued, the estimation unit 130 may calculate a lower speed at time T2; the amount of the reduction can be determined in advance from prior surveys or experiments (for example, an average over a plurality of targets 90).
The estimation unit 130, for example, takes the face direction and gaze direction of the target 90 at time T1 as the estimates of the face direction and gaze direction of the target 90 at time T2. However, when guidance concerning the face or gaze direction is constantly provided, the estimation unit 130 may instead take the direction toward the guidance destination as the estimate of the face direction and gaze direction of the target 90 at time T2.
The estimation unit 130, for example, takes the presence or absence of glasses at time T1 as the estimate of the presence or absence of glasses at time T2. However, when the target 90 is constantly guided to remove the glasses, the estimation unit 130 may estimate that the target 90 is not wearing glasses at time T2.
Based on these rules, the estimation unit 130 can generate the image-capture state information from the state information, and degradation information can then be obtained by inputting the generated image-capture state information into the degradation estimation model 133.
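As a toy illustration of such rule-based propagation (the 0.8 slow-down factor and the field names are assumptions; the embodiments determine the reduction from prior surveys or experiments), the rules above might be coded as:

```python
def estimate_capture_time_state(state_t1: dict, deceleration_notice: bool,
                                glasses_removal_guidance: bool) -> dict:
    """Propagate the state measured at time T1 (point P1) to time T2 (the focal point)."""
    state_t2 = dict(state_t1)
    if deceleration_notice:
        # Assumed average slow-down when a deceleration notice is always shown.
        state_t2["speed"] = state_t1["speed"] * 0.8
    if glasses_removal_guidance:
        state_t2["wearing_glasses"] = False
    # Face direction and gaze direction at T1 are carried over as the T2 estimates.
    return state_t2
```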
Next, the operation and effects of this embodiment will be described. This embodiment provides the same operation and effects as at least any one of the first to fifth embodiments. In addition, the estimation unit 130 according to this embodiment uses the state information to generate image-capture state information indicating the result of estimating the state of the target 90 at the time the target 90 reaches the focal point of the camera 20, and generates the degradation information using that image-capture state information. Changes in how the state evolves between point P1 and point P2 can therefore be accommodated with minor modifications.
(Seventh Embodiment)
FIG. 14 is a block diagram illustrating the functional configuration of an information processing system 50 according to the seventh embodiment. FIG. 15 is a diagram illustrating a usage environment of the information processing system 50 according to this embodiment. The information processing system 50, information processing device 10, and information processing method according to this embodiment are the same as those according to at least any one of the first to sixth embodiments, except for the points described below.
The information processing device 10 according to this embodiment further includes a first authentication unit 170; that is, the information processing system 50 according to this embodiment further includes the first authentication unit 170. The first authentication unit 170 performs authentication using a target image of the target 90 generated by the camera 20. The authentication processing executed by the first authentication unit 170 is, for example, iris authentication or face authentication, but it is not limited to these examples and may be any authentication processing.
The first authentication unit 170 can perform the authentication processing using an existing method. For example, the first authentication unit 170 detects a predetermined region from the target image and obtains feature information by extracting features of the detected region. When the first authentication unit 170 performs iris authentication processing, the predetermined region is a region corresponding to the iris; when it performs face authentication processing, the predetermined region is a region corresponding to the face.
The authentication information storage unit 100 holds in advance a plurality of pieces of authentication information in which identification information and feature information are associated with each other. The identification information is information for individually identifying a plurality of targets 90. When the target 90 is a person, the identification information is, for example, personal identification information.
The authentication information storage unit 100 may be included in the information processing device 10 or may be provided outside it; in either case, the first authentication unit 170 can access the authentication information storage unit 100. The first authentication unit 170 compares the feature information obtained from the target image with each piece of feature information held in the authentication information storage unit 100, identifies, from among them, the feature information with the highest degree of match with the feature information obtained from the target image, and identifies the identification information associated with that feature information as the identification information of the target 90 imaged by the camera 20.
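A minimal sketch of such 1:N matching (the cosine similarity, the acceptance threshold, and all names below are illustrative assumptions, not the method prescribed by the embodiments):

```python
import numpy as np

def identify(query: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.8) -> str | None:
    """Return the enrolled ID whose features best match the query features,
    or None when even the best match does not reach the threshold."""
    best_id, best_score = None, -1.0
    for person_id, enrolled in gallery.items():
        score = float(np.dot(query, enrolled) /
                      (np.linalg.norm(query) * np.linalg.norm(enrolled)))
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None
```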
The first authentication unit 170 outputs the identification information of the identified target 90. The first authentication unit 170 may, for example, display the identification information of the target 90 on a display or transmit it to another device.
When none of the pieces of feature information held in the authentication information storage unit 100 matches the feature information obtained from the target image to a degree exceeding a predetermined reference, the first authentication unit 170 may output information indicating that authentication failed.
The first authentication unit 170 may also control the passage of the target 90 according to the identification information of the target 90. In the examples of FIG. 14 and FIG. 15, the information processing system 50 further includes a gate 80, and the authentication information held in the authentication information storage unit 100 includes information, associated with the identification information, indicating whether passage is permitted.
When the first authentication unit 170 identifies the identification information of the target 90, it reads the information indicating whether passage is permitted that is associated with that identification information. When information indicating that passage is permitted is associated with the identification information, the first authentication unit 170 puts the gate 80 into a state in which the target 90 can pass through; otherwise it does not. The first authentication unit 170 also does not put the gate 80 into a passable state when none of the pieces of feature information held in the authentication information storage unit 100 matches the feature information obtained from the target image to a degree exceeding the predetermined reference. In this way, the information processing system 50 can control the passage of the target 90.
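Combined with the matching sketch above, the gate decision reduces to a small predicate (again an illustrative sketch; the `permissions` mapping of IDs to pass/deny flags is an assumed data structure):

```python
def gate_should_open(person_id: str | None, permissions: dict[str, bool]) -> bool:
    """Open the gate only for an identified target whose ID is marked passable."""
    if person_id is None:          # no enrolled features matched well enough
        return False
    return permissions.get(person_id, False)
```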
The hardware configuration of the computer that realizes the information processing device 10 according to this embodiment is represented by, for example, FIG. 8, as with the information processing device 10 according to the first embodiment. However, the storage device 1080 of the computer 1000 that realizes the information processing device 10 according to this embodiment further stores a program module that realizes the functions of the first authentication unit 170 of this embodiment.
When the authentication information storage unit 100 is provided inside the information processing device 10, the authentication information storage unit 100 is realized using, for example, the storage device 1080. The gate 80 is connected to the information processing device 10 via the input/output interface 1100 or the network interface 1120.
Next, the operation and effects of this embodiment will be described. This embodiment provides the same operation and effects as at least any one of the first to sixth embodiments. In addition, the information processing device 10 according to this embodiment further includes the first authentication unit 170, so authentication using the target image can be performed.
(Eighth Embodiment)
FIG. 16 is a block diagram illustrating the functional configuration of an information processing system 50 according to the eighth embodiment. The information processing system 50, information processing device 10, and information processing method according to this embodiment are the same as those according to at least any one of the first to seventh embodiments, except for the points described below.
The information processing system 50 according to this embodiment includes a first authentication unit 170, which performs authentication using the target image, and further includes a second authentication unit 190, which performs authentication using information different from the target image. When the quality based on the degradation information satisfies a predetermined condition A, the control unit 150 raises the importance of the authentication by the second authentication unit 190. This increases the likelihood that the second authentication unit 190 can authenticate the target even when the camera 20 cannot obtain a good target image.
In the example of FIG. 16, the first authentication unit 170 according to this embodiment performs, for example, iris authentication processing using the target image obtained by the camera 20, while the second authentication unit 190 performs, for example, face authentication using an image obtained by a second authentication camera 60 that is distinct from the camera 20. The camera 20 and the second authentication camera 60 are controlled independently to obtain images of the target 90. The authentication processing performed by each of the first authentication unit 170 and the second authentication unit 190 is as described for the first authentication unit 170 in the seventh embodiment.
The information processing system 50 can obtain both the authentication result of the first authentication unit 170 and that of the second authentication unit 190. The information processing system 50 may output both results, or only the more reliable of the two; for example, the result with the higher authentication score may be output as the more reliable one. Examples of the authentication score include the degree of match with the feature information in the authentication information described above and a score based on the Hamming distance described above. The information processing system 50 may alternatively determine and output a final authentication result from the two authentication results, from a result obtained by integrating the authentication scores, or from the authentication scores themselves; for example, the authentication result may be determined from the sum, product, or average of the two authentication scores.
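A minimal sketch of such score fusion (the mode names are assumptions; the embodiments only state that the sum, product, or average may be used):

```python
def fuse_scores(iris_score: float, face_score: float, mode: str = "average") -> float:
    """Combine the iris and face authentication scores into one final score."""
    if mode == "sum":
        return iris_score + face_score
    if mode == "product":
        return iris_score * face_score
    return (iris_score + face_score) / 2.0  # default: average
```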
As described in the third embodiment, the control unit 150 according to this embodiment uses the degradation information to estimate the quality score of the target image, which indicates its suitability as an image to be used for authentication.
When the quality score satisfies the predetermined condition A (for example, when the quality score is equal to or lower than a predetermined score), the control unit 150 raises the importance of the authentication by the second authentication unit 190. That is, the control unit 150 controls the second authentication camera 60 so as to raise the quality of the image of the target 90 obtained by the second authentication camera 60. For example, the control unit 150 outputs control information for raising the resolution of the second authentication camera 60 when it images the target 90, or outputs control information for causing the second authentication camera 60 to image the target 90 at a timing at which a predetermined part of the target 90 appears larger in the image. Here, the predetermined part of the target 90 is the part the second authentication unit 190 uses for authentication, for example the face.
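Put together, the control flow might look like the following sketch (the camera interface, its `set_resolution` method, and the 0.4 threshold are all assumptions; the embodiments do not specify an API):

```python
QUALITY_FLOOR = 0.4  # assumed "predetermined score" for condition A

def maybe_raise_second_auth_importance(quality_score: float, second_camera) -> None:
    """When condition A holds (low predicted target-image quality), ask the
    second authentication camera for higher-quality face images."""
    if quality_score <= QUALITY_FLOOR:
        # Illustrative control action: capture at a higher resolution.
        second_camera.set_resolution(width=3840, height=2160)
```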
In this way, when it is estimated that the camera 20 cannot obtain a high-quality target image, the quality of the image obtained by the second authentication camera 60 can be raised, enabling highly accurate authentication.
On the other hand, when the quality does not satisfy the predetermined condition A, the control unit 150 does not raise the importance of the authentication by the second authentication unit 190.
The hardware configuration of the computer that realizes the information processing device 10 according to this embodiment is represented by, for example, FIG. 8, as with the information processing device 10 according to the first embodiment. However, the storage device 1080 of the computer 1000 that realizes the information processing device 10 according to this embodiment further stores program modules that realize the functions of the first authentication unit 170 and the second authentication unit 190 of this embodiment.
When the authentication information storage unit 100 is provided inside the information processing device 10, the authentication information storage unit 100 is realized using, for example, the storage device 1080. The second authentication camera 60 is connected to the information processing device 10 via the input/output interface 1100 or the network interface 1120.
Next, the operation and effects of this embodiment will be described. This embodiment provides the same operation and effects as the first embodiment. In addition, the control unit 150 according to this embodiment raises the importance of the authentication by the second authentication unit 190 when the quality based on the degradation information satisfies the predetermined condition A. The likelihood that the second authentication unit 190 can authenticate the target when it is estimated that the camera 20 cannot obtain a good target image can therefore be increased.
(Modification)
This modification is a modification of the eighth embodiment. The information processing system 50, information processing device 10, and information processing method according to this modification are the same as those according to the eighth embodiment, except for the points described below.
The information processing system 50 according to this modification includes a first authentication unit 170 and a second authentication unit 190, like the information processing system 50 according to the eighth embodiment. The control unit 150 according to this modification raises the importance of the authentication by the second authentication unit 190 when the degradation information satisfies a predetermined condition B.
The control unit 150 according to this modification does not need to estimate the quality. Condition B is a condition on the degradation information itself; for example, condition B is that, among the one or more degrees of degradation indicated in the degradation information, a predetermined number or more of them are equal to or higher than a predetermined reference set for each degree of degradation.
When the degradation information satisfies the predetermined condition B, the control unit 150 raises the importance of the authentication by the second authentication unit 190. Examples of how the control unit 150 raises the importance of the authentication by the second authentication unit 190 are as described in the eighth embodiment.
This modification also provides the same operation and effects as the eighth embodiment; that is, the likelihood that the second authentication unit 190 can authenticate the target when it is estimated that the camera 20 cannot obtain a good target image can be increased.
The embodiments and modifications of this disclosure have been described above with reference to the drawings, but these are examples of this disclosure, and various configurations other than the above can also be adopted.
In the flowcharts used in the above description, a plurality of steps (processes) are described in order, but the order in which the steps are executed in each embodiment and modification is not limited to the order described. In each embodiment and modification, the order of the illustrated steps can be changed to the extent that doing so does not cause any problem in substance. The embodiments and modifications described above can also be combined to the extent that their contents do not conflict.
A part or all of the above-described embodiments can be described as, but are not limited to, the following supplementary notes.
1-1. An information processing system comprising:
 a camera capable of capturing an image of a target;
 an acquisition means for acquiring state information indicating at least a state of the target;
 an estimation means for generating, using the state information, degradation information relating to degradation that is estimated to occur in a target image obtained by capturing an image of the target with the camera; and
 a control means for outputting control information corresponding to the degradation information at least one of before the target is imaged by the camera and when the target is imaged by the camera.
1-2. The information processing system according to 1-1., wherein
 the state information indicates at least the state of the target at a time when the target is located at a point farther away than the focal point of the camera, and
 the control means outputs the control information at least one of before the target reaches the focal point of the camera and when the target reaches the focal point of the camera.
1-3. The information processing system according to 1-2., wherein the estimation means
 generates, using the state information, image-capture state information indicating a result of estimating the state of the target at a time when the target reaches the focal point of the camera, and
 generates the degradation information using the image-capture state information.
1-4. The information processing system according to any one of 1-1. to 1-3., wherein the degradation information indicates one or more degradation factors occurring in the target image and a degree of degradation of each of the one or more degradation factors.
1-5. The information processing system according to 1-4., wherein
 the target image is an image including an iris, and
 the one or more degradation factors include one or more of focus blur, motion blur, eyelid occlusion, illumination-reflection occlusion, and off-angle.
1-6. The information processing system according to any one of 1-1. to 1-5., wherein the control means
 estimates, using the degradation information, a quality of the target image indicating its suitability as an image to be used for authentication, and
 outputs the control information corresponding to the quality.
1-7. The information processing system according to 1-6., further comprising:
 a first authentication means for performing authentication using the target image; and
 a second authentication means for performing authentication using information different from the target image,
 wherein the control means increases the importance of the authentication by the second authentication means when the quality satisfies a predetermined condition A.
1-8. The information processing system according to any one of 1-1. to 1-7., further comprising:
 a first authentication means for performing authentication using the target image; and
 a second authentication means for performing authentication using information different from the target image,
 wherein the control means increases the importance of the authentication by the second authentication means when the degradation information satisfies a predetermined condition B.
1-9. The information processing system according to any one of 1-1. to 1-8., wherein the state information indicates one or more of a face direction, a body posture, a gaze direction, a moving speed, and whether glasses are worn.
1-10. The information processing system according to any one of 1-1. to 1-9., wherein the state information further indicates an imaging condition of the camera.
1-11. The information processing system according to 1-10., wherein the state information indicates one or more of an exposure time of the camera, a state of illumination on the target, a focal position of the camera, a lens aperture of the camera, and a brightness of an area imaged by the camera.
1-12. The information processing system according to any one of 1-1. to 1-11., wherein the estimation means generates the degradation information using a trained neural network trained using the state information and correct degradation information.
1-13. The information processing system according to any one of 1-1. to 1-12., wherein the control means outputs the control information for executing a notification when the degradation information satisfies a predetermined condition C.
1-14. The information processing system according to any one of 1-1. to 1-13., wherein the control means stops the imaging of the target by the camera when the degradation information satisfies a predetermined condition D.
1-15. The information processing system according to any one of 1-1. to 1-14., wherein the control means outputs the control information for controlling an imaging condition of the camera based on the degradation information.
2-1. An information processing device comprising:
 an acquisition means for acquiring state information indicating at least a state of a target;
 an estimation means for generating, using the state information, degradation information relating to degradation that is estimated to occur in a target image obtained by capturing an image of the target with a camera; and
 a control means for outputting control information corresponding to the degradation information at least one of before the target is imaged by the camera and when the target is imaged by the camera.
2-2. The information processing device according to 2-1., wherein
 the state information indicates at least the state of the target at a time when the target is located at a point farther away than the focal point of the camera, and
 the control means outputs the control information at least one of before the target reaches the focal point of the camera and when the target reaches the focal point of the camera.
2-3. The information processing device according to 2-2., wherein the estimation means
 generates, using the state information, image-capture state information indicating a result of estimating the state of the target at a time when the target reaches the focal point of the camera, and
 generates the degradation information using the image-capture state information.
2-4. The information processing device according to any one of 2-1. to 2-3., wherein the degradation information indicates one or more degradation factors occurring in the target image and a degree of degradation of each of the one or more degradation factors.
2-5. The information processing device according to 2-4., wherein
 the target image is an image including an iris, and
 the one or more degradation factors include one or more of focus blur, motion blur, eyelid occlusion, illumination-reflection occlusion, and off-angle.
2-6. The information processing device according to any one of 2-1. to 2-5., wherein the control means
 estimates, using the degradation information, a quality of the target image indicating its suitability as an image to be used for authentication, and
 outputs the control information corresponding to the quality.
2-7. The information processing device according to 2-6., further comprising:
 a first authentication means for performing authentication using the target image; and
 a second authentication means for performing authentication using information different from the target image,
 wherein the control means increases the importance of the authentication by the second authentication means when the quality satisfies a predetermined condition A.
2-8. The information processing device according to any one of 2-1. to 2-7., further comprising:
 a first authentication means for performing authentication using the target image; and
 a second authentication means for performing authentication using information different from the target image,
 wherein the control means increases the importance of the authentication by the second authentication means when the degradation information satisfies a predetermined condition B.
2-9. The information processing device according to any one of 2-1. to 2-8., wherein the state information indicates one or more of a face direction, a body posture, a gaze direction, a moving speed, and whether glasses are worn.
2-10. The information processing device according to any one of 2-1. to 2-9., wherein the state information further indicates an imaging condition of the camera.
2-11. The information processing device according to 2-10., wherein the state information indicates one or more of an exposure time of the camera, a state of illumination on the target, a focal position of the camera, a lens aperture of the camera, and a brightness of an area imaged by the camera.
2-12. The information processing device according to any one of 2-1. to 2-11., wherein the estimation means generates the degradation information using a trained neural network trained using the state information and correct degradation information.
2-13. The information processing device according to any one of 2-1. to 2-12., wherein the control means outputs the control information for executing a notification when the degradation information satisfies a predetermined condition C.
2-14. The information processing device according to any one of 2-1. to 2-13., wherein the control means stops the imaging of the target by the camera when the degradation information satisfies a predetermined condition D.
2-15. The information processing device according to any one of 2-1. to 2-14., wherein the control means outputs the control information for controlling an imaging condition of the camera based on the degradation information.
3-1. An information processing method comprising, by one or more computers:
 acquiring state information indicating at least a state of a target;
 generating, using the state information, degradation information relating to degradation that is estimated to occur in a target image obtained by capturing an image of the target with a camera; and
 outputting control information corresponding to the degradation information at least one of before the target is imaged by the camera and when the target is imaged by the camera.
3-2. The information processing method according to 3-1., wherein
 the state information indicates at least the state of the target at a time when the target is located at a point farther away than the focal point of the camera, and
 the one or more computers output the control information at least one of before the target reaches the focal point of the camera and when the target reaches the focal point of the camera.
3-3. The information processing method according to 3-2., wherein the one or more computers
 generate, using the state information, image-capture state information indicating a result of estimating the state of the target at a time when the target reaches the focal point of the camera, and
 generate the degradation information using the image-capture state information.
3-4. The information processing method according to any one of 3-1. to 3-3., wherein the degradation information indicates one or more degradation factors occurring in the target image and a degree of degradation of each of the one or more degradation factors.
3-5. The information processing method according to 3-4., wherein
 the target image is an image including an iris, and
 the one or more degradation factors include one or more of focus blur, motion blur, eyelid occlusion, illumination-reflection occlusion, and off-angle.
3-6. The information processing method according to any one of 3-1. to 3-5., wherein the one or more computers
 estimate, using the degradation information, a quality of the target image indicating its suitability as an image to be used for authentication, and
 output the control information corresponding to the quality.
3-7. The information processing method according to 3-6., wherein the one or more computers further
 perform authentication using the target image, and
 perform authentication using information different from the target image, and
 the one or more computers increase the importance of the authentication using the information different from the target image when the quality satisfies a predetermined condition A.
3-8. The information processing method according to any one of 3-1. to 3-7., wherein the one or more computers further
 perform authentication using the target image, and
 perform authentication using information different from the target image, and
 the one or more computers increase the importance of the authentication using the information different from the target image when the degradation information satisfies a predetermined condition B.
3-9. The information processing method according to any one of 3-1. to 3-8., wherein the state information indicates one or more of a face direction, a body posture, a gaze direction, a moving speed, and whether glasses are worn.
3-10. The information processing method according to any one of 3-1. to 3-9., wherein the state information further indicates an imaging condition of the camera.
3-11. The information processing method according to 3-10., wherein the state information indicates one or more of an exposure time of the camera, a state of illumination on the target, a focal position of the camera, a lens aperture of the camera, and a brightness of an area imaged by the camera.
3-12. The information processing method according to any one of 3-1. to 3-11., wherein the one or more computers generate the degradation information using a trained neural network trained using the state information and correct degradation information.
3-13. The information processing method according to any one of 3-1. to 3-12., wherein the one or more computers output the control information for executing a notification when the degradation information satisfies a predetermined condition C.
3-14. The information processing method according to any one of 3-1. to 3-13., wherein the one or more computers stop the imaging of the target by the camera when the degradation information satisfies a predetermined condition D.
3-15. The information processing method according to any one of 3-1. to 3-14., wherein the one or more computers output the control information for controlling an imaging condition of the camera based on the degradation information.
4-1. A computer-readable recording medium on which a program is recorded, the program causing a computer to function as:
 an acquisition means for acquiring state information indicating at least a state of a target;
 an estimation means for generating, using the state information, degradation information relating to degradation that is estimated to occur in a target image obtained by capturing an image of the target with a camera; and
 a control means for outputting control information corresponding to the degradation information at least one of before the target is imaged by the camera and when the target is imaged by the camera.
4-2. The recording medium according to 4-1., wherein
 the state information indicates at least the state of the target at a time when the target is located at a point farther away than the focal point of the camera, and
 the control means outputs the control information at least one of before the target reaches the focal point of the camera and when the target reaches the focal point of the camera.
4-3. The recording medium according to 4-2., wherein the estimation means
 generates, using the state information, image-capture state information indicating a result of estimating the state of the target at a time when the target reaches the focal point of the camera, and
 generates the degradation information using the image-capture state information.
4-4. The recording medium according to any one of 4-1. to 4-3., wherein the degradation information indicates one or more degradation factors occurring in the target image and a degree of degradation of each of the one or more degradation factors.
4-5. The recording medium according to 4-4., wherein
 the target image is an image including an iris, and
 the one or more degradation factors include one or more of focus blur, motion blur, eyelid occlusion, illumination-reflection occlusion, and off-angle.
4-6. The recording medium according to any one of 4-1. to 4-5., wherein the control means
 estimates, using the degradation information, a quality of the target image indicating its suitability as an image to be used for authentication, and
 outputs the control information corresponding to the quality.
4-7. The recording medium according to 4-6., wherein
 the program further causes the computer to function as a first authentication means for performing authentication using the target image and as a second authentication means for performing authentication using information different from the target image, and
 the control means increases the importance of the authentication by the second authentication means when the quality satisfies a predetermined condition A.
4-8. The recording medium according to any one of 4-1. to 4-7., wherein
 the program further causes the computer to function as a first authentication means for performing authentication using the target image and as a second authentication means for performing authentication using information different from the target image, and
 the control means increases the importance of the authentication by the second authentication means when the degradation information satisfies a predetermined condition B.
4-9. The recording medium according to any one of 4-1. to 4-8., wherein the state information indicates one or more of a face direction, a body posture, a gaze direction, a moving speed, and whether glasses are worn.
4-10. The recording medium according to any one of 4-1. to 4-9., wherein the state information further indicates an imaging condition of the camera.
4-11. The recording medium according to 4-10., wherein the state information indicates one or more of an exposure time of the camera, a state of illumination on the target, a focal position of the camera, a lens aperture of the camera, and a brightness of an area imaged by the camera.
4-12. The recording medium according to any one of 4-1. to 4-11., wherein the estimation means generates the degradation information using a trained neural network trained using the state information and correct degradation information.
4-13. The recording medium according to any one of 4-1. to 4-12., wherein the control means outputs the control information for executing a notification when the degradation information satisfies a predetermined condition C.
4-14. The recording medium according to any one of 4-1. to 4-13., wherein the control means stops the imaging of the target by the camera when the degradation information satisfies a predetermined condition D.
4-15. The recording medium according to any one of 4-1. to 4-14., wherein the control means outputs the control information for controlling an imaging condition of the camera based on the degradation information.
5-1. A program causing a computer to function as:
 an acquisition means for acquiring state information indicating at least a state of a target;
 an estimation means for generating, using the state information, degradation information relating to degradation that is estimated to occur in a target image obtained by capturing an image of the target with a camera; and
 a control means for outputting control information corresponding to the degradation information at least one of before the target is imaged by the camera and when the target is imaged by the camera.
5-2. The program according to 5-1., wherein
 the state information indicates at least the state of the target at a time when the target is located at a point farther away than the focal point of the camera, and
 the control means outputs the control information at least one of before the target reaches the focal point of the camera and when the target reaches the focal point of the camera.
5-3. The program according to 5-2., wherein the estimation means
 generates, using the state information, image-capture state information indicating a result of estimating the state of the target at a time when the target reaches the focal point of the camera, and
 generates the degradation information using the image-capture state information.
5-4. The program according to any one of 5-1. to 5-3., wherein the degradation information indicates one or more degradation factors occurring in the target image and a degree of degradation of each of the one or more degradation factors.
5-5. The program according to 5-4., wherein
 the target image is an image including an iris, and
 the one or more degradation factors include one or more of focus blur, motion blur, eyelid occlusion, illumination-reflection occlusion, and off-angle.
5-6. The program according to any one of 5-1. to 5-5., wherein the control means
 estimates, using the degradation information, a quality of the target image indicating its suitability as an image to be used for authentication, and
 outputs the control information corresponding to the quality.
5-7. The program according to 5-6., further causing the computer to function as a first authentication means for performing authentication using the target image and as a second authentication means for performing authentication using information different from the target image, wherein the control means increases the importance of the authentication by the second authentication means when the quality satisfies a predetermined condition A.
5-8. The program according to any one of 5-1. to 5-7., further causing the computer to function as a first authentication means for performing authentication using the target image and as a second authentication means for performing authentication using information different from the target image, wherein the control means increases the importance of the authentication by the second authentication means when the degradation information satisfies a predetermined condition B.
5-9. The program according to any one of 5-1. to 5-8., wherein the state information indicates one or more of a face direction, a body posture, a gaze direction, a moving speed, and whether glasses are worn.
5-10. The program according to any one of 5-1. to 5-9., wherein the state information further indicates an imaging condition of the camera.
5-11. The program according to 5-10., wherein the state information indicates one or more of an exposure time of the camera, a state of illumination on the target, a focal position of the camera, a lens aperture of the camera, and a brightness of an area imaged by the camera.
5-12. The program according to any one of 5-1. to 5-11., wherein the estimation means generates the degradation information using a trained neural network trained using the state information and correct degradation information.
5-13. The program according to any one of 5-1. to 5-12., wherein the control means outputs the control information for executing a notification when the degradation information satisfies a predetermined condition C.
5-14. The program according to any one of 5-1. to 5-13., wherein the control means stops the imaging of the target by the camera when the degradation information satisfies a predetermined condition D.
5-15. The program according to any one of 5-1. to 5-14., wherein the control means outputs the control information for controlling an imaging condition of the camera based on the degradation information.
A part or all of the above-described embodiments can be described as, but are not limited to, the following supplementary notes.
1-1. A camera capable of capturing an image of a target;
An acquisition means for acquiring status information indicating at least a status of the target;
an estimation means for generating deterioration information relating to deterioration that is estimated to occur in an object image obtained by capturing an image of the object with the camera, using the state information;
a control means for outputting control information corresponding to the degradation information at least one of before the object is imaged by the camera and when the object is imaged by the camera.
1-2. In the information processing system according to 1-1.,
The state information indicates at least a state of the object at a time when the object is located at a point farther away than the focal point of the camera;
An information processing system, wherein the control means outputs the control information at least one of before the object reaches the focus of the camera and when the object reaches the focus of the camera.
1-3. In the information processing system according to 1-2.,
The estimation means includes:
generating image capture state information indicating an estimation result of a state of the object at a time when the object reaches the focal point of the camera using the state information;
An information processing system that generates the deterioration information using the image capture state information.
1-4. In the information processing system according to any one of 1-1. to 1-3.,
An information processing system, wherein the degradation information indicates one or more degradation factors occurring in the target image and a degree of degradation for each of the one or more degradation factors.
1-5. In the information processing system according to 1-4.,
the target image is an image including an iris,
The one or more degradation factors include one or more of focus blur, motion blur, eyelid occlusion, lighting reflection occlusion, and off-angle.
1-6. In the information processing system according to any one of 1-1. to 1-5.,
The control means
Estimating a quality of the target image, which indicates suitability as an image to be used for authentication, using the degradation information;
An information processing system that outputs the control information according to the quality.
1-7. In the information processing system according to 1-6.,
A first authentication means for performing authentication using the target image;
A second authentication means for performing authentication using information different from the target image,
The control means increases the importance of authentication by the second authentication means when the quality satisfies a predetermined condition A.
1-8. In the information processing system according to any one of 1-1. to 1-7.,
A first authentication means for performing authentication using the target image;
A second authentication means for performing authentication using information different from the target image,
The control means increases the importance of authentication by the second authentication means when the degradation information satisfies a predetermined condition B.
1-9. In the information processing system according to any one of 1-1. to 1-8.,
The state information indicates one or more of a face direction, a body posture, a gaze direction, a moving speed, and whether or not glasses are worn.
1-10. In the information processing system according to any one of 1-1. to 1-9.,
The information processing system, wherein the status information further indicates an imaging condition of the camera.
1-11. In the information processing system according to 1-10.,
An information processing system in which the status information indicates one or more of the exposure time of the camera, the lighting conditions for the object, the focus position of the camera, the lens aperture of the camera, and the brightness of the area captured by the camera.
1-12. In the information processing system according to any one of 1-1. to 1-11.,
The estimation means generates the degradation information using a trained neural network that has been trained using the state information and ground-truth degradation information.
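For the trained neural network of 1-12., a small supervised regressor is enough to convey the idea: state information in, one estimated degree per degradation factor out, trained against ground-truth degradation information. The sketch below assumes PyTorch and an MLP; the disclosure fixes neither the framework nor the architecture.

```python
# Assumed sketch of clause 1-12. (PyTorch, MLP): map a state-information vector
# to per-factor degradation degrees, supervised by ground-truth labels.
import torch
import torch.nn as nn

class DegradationEstimator(nn.Module):
    def __init__(self, state_dim: int = 8, num_factors: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 32), nn.ReLU(),
            nn.Linear(32, num_factors), nn.Sigmoid(),  # degrees in [0, 1]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

model = DegradationEstimator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(state_batch: torch.Tensor, truth_batch: torch.Tensor) -> float:
    """One supervised step against ground-truth degradation information."""
    optimizer.zero_grad()
    loss = loss_fn(model(state_batch), truth_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```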
1-13. In the information processing system according to any one of 1-1. to 1-12.,
The control means outputs the control information for executing a notification when the degradation information satisfies a predetermined condition C.
1-14. In the information processing system according to any one of 1-1. to 1-13.,
The control means stops the camera from capturing an image of the target when the degradation information satisfies a predetermined condition D.
1-15. In the information processing system according to any one of 1-1. to 1-14.,
The control means outputs the control information for controlling the imaging conditions of the camera based on the degradation information.
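Clauses 1-13. to 1-15. together describe a control policy over the degradation information: notify the target, stop the capture, or adjust the imaging conditions before the target is imaged. The sketch below invents concrete thresholds for conditions C and D purely for illustration.

```python
# Assumed control policy for clauses 1-13. to 1-15. The thresholds standing in
# for conditions C and D, and the exposure value, are invented for this sketch.
def decide_control(degradation_info: dict[str, float]) -> dict:
    """Turn predicted degradation into control information."""
    worst = max(degradation_info.values())
    if worst > 0.9:    # "condition D" stand-in: capture judged hopeless
        return {"action": "stop_capture"}
    if worst > 0.6:    # "condition C" stand-in: ask the target to cooperate
        return {"action": "notify", "message": "Please look at the camera"}
    if degradation_info.get("motion_blur", 0.0) > 0.4:
        # adjust an imaging condition (clause 1-15.): shorter exposure vs. blur
        return {"action": "set_exposure", "exposure_ms": 2.0}
    return {"action": "proceed"}
```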
2-1. An information processing device comprising:
an acquisition means for acquiring state information indicating at least a state of a target;
an estimation means for generating, using the state information, degradation information relating to degradation that is estimated to occur in a target image obtained by capturing an image of the target with a camera; and
a control means for outputting control information corresponding to the degradation information at least one of before the target is imaged by the camera and when the target is imaged by the camera.
2-2. In the information processing device described in 2-1.,
The state information indicates at least a state of the target at a time when the target is located at a point farther from the camera than its focal point, and
the control means outputs the control information at least one of before the target reaches the focal point of the camera and when the target reaches the focal point of the camera.
2-3. In the information processing device described in 2-2.,
The estimation means:
generates, using the state information, image-capture-time state information indicating an estimated state of the target at the time when the target reaches the focal point of the camera; and
generates the degradation information using the image-capture-time state information.
2-4. In the information processing device according to any one of 2-1. to 2-3.,
The degradation information indicates one or more degradation factors occurring in the target image and a degree of degradation of each of the one or more degradation factors.
2-5. In the information processing device according to 2-4.,
the target image is an image including an iris,
The one or more degradation factors include one or more of focus blur, motion blur, eyelid occlusion, lighting reflection occlusion, and off-angle.
2-6. In the information processing device according to any one of 2-1. to 2-5.,
The control means:
estimates, using the degradation information, a quality of the target image that indicates its suitability as an image to be used for authentication; and
outputs the control information according to the quality.
2-7. In the information processing device according to 2-6.,
The device further comprises:
a first authentication means for performing authentication using the target image; and
a second authentication means for performing authentication using information different from the target image,
wherein the control means increases the importance of authentication by the second authentication means when the quality satisfies a predetermined condition A.
2-8. In the information processing device according to any one of 2-1. to 2-7.,
The device further comprises:
a first authentication means for performing authentication using the target image; and
a second authentication means for performing authentication using information different from the target image,
wherein the control means increases the importance of authentication by the second authentication means when the degradation information satisfies a predetermined condition B.
2-9. In the information processing device according to any one of 2-1. to 2-8.,
The state information indicates one or more of a face direction, a body posture, a gaze direction, a moving speed, and whether or not glasses are worn.
2-10. In the information processing device according to any one of 2-1. to 2-9.,
The state information further indicates an imaging condition of the camera.
2-11. In the information processing device according to 2-10.,
The state information indicates one or more of the exposure time of the camera, the illumination condition for the target, the focal position of the camera, the lens aperture of the camera, and the brightness of the area imaged by the camera.
2-12. In the information processing device according to any one of 2-1. to 2-11.,
The estimation means generates the degradation information using a trained neural network that has been trained using the state information and ground-truth degradation information.
2-13. In the information processing device according to any one of 2-1. to 2-12.,
The control means outputs the control information for executing a notification when the degradation information satisfies a predetermined condition C.
2-14. In the information processing device according to any one of 2-1. to 2-13.,
The control means stops the camera from capturing an image of the target when the degradation information satisfies a predetermined condition D.
2-15. In the information processing device according to any one of 2-1. to 2-14.,
The control means outputs the control information for controlling the imaging conditions of the camera based on the degradation information.
3-1. An information processing method in which one or more computers:
acquire state information indicating at least a state of a target;
generate, using the state information, degradation information relating to degradation that is estimated to occur in a target image obtained by capturing an image of the target with a camera; and
output control information corresponding to the degradation information at least one of before the target is imaged by the camera and when the target is imaged by the camera.
3-2. In the information processing method described in 3-1.,
The state information indicates at least a state of the target at a time when the target is located at a point farther from the camera than its focal point, and
the one or more computers output the control information at least one of before the target reaches the focal point of the camera and when the target reaches the focal point of the camera.
3-3. In the information processing method described in 3-2.,
The one or more computers:
generate, using the state information, image-capture-time state information indicating an estimated state of the target at the time when the target reaches the focal point of the camera; and
generate the degradation information using the image-capture-time state information.
3-4. In the information processing method according to any one of 3-1. to 3-3.,
The degradation information indicates one or more degradation factors occurring in the target image and a degree of degradation for each of the one or more degradation factors.
3-5. In the information processing method described in 3-4.,
the target image is an image including an iris,
The one or more degradation factors include one or more of focus blur, motion blur, eyelid occlusion, lighting reflection occlusion, and off-angle.
3-6. In the information processing method according to any one of 3-1. to 3-5.,
The one or more computers:
estimate, using the degradation information, a quality of the target image that indicates its suitability as an image to be used for authentication; and
output the control information according to the quality.
3-7. In the information processing method according to 3-6.,
The one or more computers further:
perform authentication using the target image;
perform authentication using information different from the target image; and
increase the importance of the authentication using the information different from the target image when the quality satisfies a predetermined condition A.
3-8. In the information processing method according to any one of 3-1. to 3-7.,
The one or more computers further:
perform authentication using the target image;
perform authentication using information different from the target image; and
increase the importance of the authentication using the information different from the target image when the degradation information satisfies a predetermined condition B.
3-9. In the information processing method according to any one of 3-1. to 3-8.,
The state information indicates one or more of a face direction, a body posture, a gaze direction, a moving speed, and whether or not glasses are worn.
3-10. In the information processing method according to any one of 3-1. to 3-9.,
The state information further indicates an imaging condition of the camera.
3-11. In the information processing method according to 3-10.,
The state information indicates one or more of the exposure time of the camera, the illumination condition for the target, the focal position of the camera, the lens aperture of the camera, and the brightness of the area imaged by the camera.
3-12. In the information processing method according to any one of 3-1. to 3-11.,
The one or more computers generate the degradation information using a trained neural network that has been trained using the state information and ground-truth degradation information.
3-13. In the information processing method according to any one of 3-1. to 3-12.,
The one or more computers output the control information for executing a notification when the degradation information satisfies a predetermined condition C.
3-14. In the information processing method according to any one of 3-1. to 3-13.,
The one or more computers stop the camera from capturing an image of the target when the degradation information satisfies a predetermined condition D.
3-15. In the information processing method according to any one of 3-1. to 3-14.,
The one or more computers output the control information for controlling an imaging condition of the camera based on the degradation information.
4-1. A computer-readable recording medium having a program recorded thereon,
The program causes a computer to function as:
an acquisition means for acquiring state information indicating at least a state of a target;
an estimation means for generating, using the state information, degradation information relating to degradation that is estimated to occur in a target image obtained by capturing an image of the target with a camera; and
a control means for outputting control information corresponding to the degradation information at least one of before the target is imaged by the camera and when the target is imaged by the camera.
4-2. In the recording medium according to 4-1.,
The state information indicates at least a state of the target at a time when the target is located at a point farther from the camera than its focal point, and
the control means outputs the control information at least one of before the target reaches the focal point of the camera and when the target reaches the focal point of the camera.
4-3. In the recording medium according to 4-2.,
The estimation means:
generates, using the state information, image-capture-time state information indicating an estimated state of the target at the time when the target reaches the focal point of the camera; and
generates the degradation information using the image-capture-time state information.
4-4. The recording medium according to any one of 4-1 to 4-3,
The degradation information indicates one or more degradation factors occurring in the target image and a degree of degradation for each of the one or more degradation factors.
4-5. In the recording medium according to 4-4.,
the target image is an image including an iris,
The one or more degradation factors include one or more of focus blur, motion blur, eyelid occlusion, lighting reflection occlusion, and off-angle.
4-6. The recording medium according to any one of 4-1 to 4-5,
The control means:
estimates, using the degradation information, a quality of the target image that indicates its suitability as an image to be used for authentication; and
outputs the control information according to the quality.
4-7. In the recording medium according to 4-6.,
The program further causes the computer to function as:
a first authentication means for performing authentication using the target image; and
a second authentication means for performing authentication using information different from the target image,
wherein the control means increases the importance of authentication by the second authentication means when the quality satisfies a predetermined condition A.
4-8. The recording medium according to any one of 4-1 to 4-7,
The program further causes the computer to function as:
a first authentication means for performing authentication using the target image; and
a second authentication means for performing authentication using information different from the target image,
wherein the control means increases the importance of authentication by the second authentication means when the degradation information satisfies a predetermined condition B.
4-9. The recording medium according to any one of 4-1. to 4-8.,
The state information indicates one or more of a face direction, a body posture, a gaze direction, a moving speed, and whether or not glasses are worn.
4-10. The recording medium according to any one of 4-1. to 4-9.,
The state information further indicates an imaging condition of the camera.
4-11. The recording medium according to 4-10.,
The state information indicates one or more of the exposure time of the camera, the illumination condition for the target, the focal position of the camera, the lens aperture of the camera, and the brightness of the area imaged by the camera.
4-12. The recording medium according to any one of 4-1. to 4-11.
The estimation means generates the degradation information using a trained neural network that has been trained using the state information and ground-truth degradation information.
4-13. The recording medium according to any one of 4-1. to 4-12.,
The control means outputs the control information for executing a notification when the degradation information satisfies a predetermined condition C.
4-14. The recording medium according to any one of 4-1. to 4-13.,
The control means stops the camera from capturing an image of the target when the degradation information satisfies a predetermined condition D.
4-15. The recording medium according to any one of 4-1. to 4-14.,
The control means outputs the control information for controlling the imaging conditions of the camera based on the degradation information.
5-1. A program causing a computer to function as:
an acquisition means for acquiring state information indicating at least a state of a target;
an estimation means for generating, using the state information, degradation information relating to degradation that is estimated to occur in a target image obtained by capturing an image of the target with a camera; and
a control means for outputting control information corresponding to the degradation information at least one of before the target is imaged by the camera and when the target is imaged by the camera.
5-2. In the program described in 5-1.,
The state information indicates at least a state of the target at a time when the target is located at a point farther from the camera than its focal point, and
the control means outputs the control information at least one of before the target reaches the focal point of the camera and when the target reaches the focal point of the camera.
5-3. In the program described in 5-2.,
The estimation means:
generates, using the state information, image-capture-time state information indicating an estimated state of the target at the time when the target reaches the focal point of the camera; and
generates the degradation information using the image-capture-time state information.
5-4. In the program according to any one of 5-1. to 5-3.,
The degradation information indicates one or more degradation factors occurring in the target image and the degree of degradation of each of the one or more degradation factors.
5-5. In the program described in 5-4.,
the target image is an image including an iris,
The one or more degradation factors include one or more of focus blur, motion blur, eyelid occlusion, lighting reflection occlusion, and off-angle.
5-6. In the program according to any one of 5-1. to 5-5.,
The control means:
estimates, using the degradation information, a quality of the target image that indicates its suitability as an image to be used for authentication; and
outputs the control information according to the quality.
5-7. In the program described in 5-6.,
The program further causes the computer to function as:
a first authentication means for performing authentication using the target image; and
a second authentication means for performing authentication using information different from the target image,
wherein the control means increases the importance of authentication by the second authentication means when the quality satisfies a predetermined condition A.
5-8. In the program according to any one of 5-1 to 5-7,
The program further causes the computer to function as:
a first authentication means for performing authentication using the target image; and
a second authentication means for performing authentication using information different from the target image,
wherein the control means increases the importance of authentication by the second authentication means when the degradation information satisfies a predetermined condition B.
5-9. In the program according to any one of 5-1. to 5-8.,
The state information indicates one or more of face direction, body posture, gaze direction, movement speed, and whether or not glasses are worn.
5-10. In the program according to any one of 5-1. to 5-9.,
The state information further indicates an imaging condition of the camera.
5-11. In the program described in 5-10.,
The state information indicates one or more of the exposure time of the camera, the illumination condition for the target, the focal position of the camera, the lens aperture of the camera, and the brightness of the area imaged by the camera.
5-12. In the program according to any one of 5-1. to 5-11.,
The estimation means generates the degradation information using a trained neural network that has been trained using the state information and ground-truth degradation information.
5-13. In the program according to any one of 5-1. to 5-12.,
The control means outputs the control information for executing a notification when the degradation information satisfies a predetermined condition C.
5-14. In the program according to any one of 5-1. to 5-13.,
The control means stops the camera from capturing an image of the target when the degradation information satisfies a predetermined condition D.
5-15. In the program according to any one of 5-1. to 5-14.,
The control means outputs the control information for controlling the imaging conditions of the camera based on the degradation information.
REFERENCE SIGNS LIST
10 Information processing device
20 Camera
22 Substitute camera
30 State measurement unit
40 Illumination unit
50 Information processing system
60 Second authentication camera
70 Notification unit
80 Gate
90 Target
100 Authentication information storage unit
110 Acquisition unit
130 Estimation unit
131, 133 Degradation estimation model
132 Image-capture-time state estimation model
150 Control unit
151 Quality estimation model
170 First authentication unit
190 Second authentication unit
1000 Computer
1020 Bus
1040 Processor
1060 Memory
1080 Storage device
1100 Input/output interface
1120 Network interface

Claims (18)

  1.  An information processing system comprising:
     a camera capable of capturing an image of a target;
     an acquisition means for acquiring state information indicating at least a state of the target;
     an estimation means for generating, using the state information, degradation information relating to degradation that is estimated to occur in a target image obtained by capturing an image of the target with the camera; and
     a control means for outputting control information corresponding to the degradation information at least one of before the target is imaged by the camera and when the target is imaged by the camera.

  2.  The information processing system according to claim 1, wherein
     the state information indicates at least a state of the target at a time when the target is located at a point farther from the camera than its focal point, and
     the control means outputs the control information at least one of before the target reaches the focal point of the camera and when the target reaches the focal point of the camera.

  3.  The information processing system according to claim 2, wherein the estimation means:
     generates, using the state information, image-capture-time state information indicating an estimated state of the target at the time when the target reaches the focal point of the camera; and
     generates the degradation information using the image-capture-time state information.

  4.  The information processing system according to any one of claims 1 to 3, wherein the degradation information indicates one or more degradation factors occurring in the target image and a degree of degradation for each of the one or more degradation factors.

  5.  The information processing system according to claim 4, wherein
     the target image is an image including an iris, and
     the one or more degradation factors include one or more of focus blur, motion blur, eyelid occlusion, lighting reflection occlusion, and off-angle.

  6.  The information processing system according to any one of claims 1 to 5, wherein the control means:
     estimates, using the degradation information, a quality of the target image that indicates its suitability as an image to be used for authentication; and
     outputs the control information according to the quality.

  7.  The information processing system according to claim 6, further comprising:
     a first authentication means for performing authentication using the target image; and
     a second authentication means for performing authentication using information different from the target image,
     wherein the control means increases the importance of authentication by the second authentication means when the quality satisfies a predetermined condition A.

  8.  The information processing system according to any one of claims 1 to 7, further comprising:
     a first authentication means for performing authentication using the target image; and
     a second authentication means for performing authentication using information different from the target image,
     wherein the control means increases the importance of authentication by the second authentication means when the degradation information satisfies a predetermined condition B.

  9.  The information processing system according to any one of claims 1 to 8, wherein the state information indicates one or more of a face direction, a body posture, a gaze direction, a moving speed, and whether or not glasses are worn.

  10.  The information processing system according to any one of claims 1 to 9, wherein the state information further indicates an imaging condition of the camera.

  11.  The information processing system according to claim 10, wherein the state information indicates one or more of the exposure time of the camera, the illumination condition for the target, the focal position of the camera, the lens aperture of the camera, and the brightness of the area imaged by the camera.

  12.  The information processing system according to any one of claims 1 to 11, wherein the estimation means generates the degradation information using a trained neural network that has been trained using the state information and ground-truth degradation information.

  13.  The information processing system according to any one of claims 1 to 12, wherein the control means outputs the control information for executing a notification when the degradation information satisfies a predetermined condition C.

  14.  The information processing system according to any one of claims 1 to 13, wherein the control means stops the camera from capturing an image of the target when the degradation information satisfies a predetermined condition D.

  15.  The information processing system according to any one of claims 1 to 14, wherein the control means outputs the control information for controlling the imaging conditions of the camera based on the degradation information.

  16.  An information processing device comprising:
     an acquisition means for acquiring state information indicating at least a state of a target;
     an estimation means for generating, using the state information, degradation information relating to degradation that is estimated to occur in a target image obtained by capturing an image of the target with a camera; and
     a control means for outputting control information corresponding to the degradation information at least one of before the target is imaged by the camera and when the target is imaged by the camera.

  17.  An information processing method in which one or more computers:
     acquire state information indicating at least a state of a target;
     generate, using the state information, degradation information relating to degradation that is estimated to occur in a target image obtained by capturing an image of the target with a camera; and
     output control information corresponding to the degradation information at least one of before the target is imaged by the camera and when the target is imaged by the camera.

  18.  A computer-readable recording medium having a program recorded thereon, the program causing a computer to function as:
     an acquisition means for acquiring state information indicating at least a state of a target;
     an estimation means for generating, using the state information, degradation information relating to degradation that is estimated to occur in a target image obtained by capturing an image of the target with a camera; and
     a control means for outputting control information corresponding to the degradation information at least one of before the target is imaged by the camera and when the target is imaged by the camera.
PCT/JP2022/040864 2022-11-01 2022-11-01 Information processing system, information processing device, information processing method, and recording medium WO2024095362A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/040864 WO2024095362A1 (en) 2022-11-01 2022-11-01 Information processing system, information processing device, information processing method, and recording medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/040864 WO2024095362A1 (en) 2022-11-01 2022-11-01 Information processing system, information processing device, information processing method, and recording medium

Publications (1)

Publication Number Publication Date
WO2024095362A1 true WO2024095362A1 (en) 2024-05-10

Family

ID=90930106

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/040864 WO2024095362A1 (en) 2022-11-01 2022-11-01 Information processing system, information processing device, information processing method, and recording medium

Country Status (1)

Country Link
WO (1) WO2024095362A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006157428A (en) * 2004-11-29 2006-06-15 Fuji Photo Film Co Ltd Photographic apparatus and photographing method
JP2008028434A (en) * 2006-07-18 2008-02-07 Matsushita Electric Ind Co Ltd Photographing apparatus, authentication apparatus, and photographing method



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22964384

Country of ref document: EP

Kind code of ref document: A1