WO2019244496A1 - Information processing device, wearable equipment, information processing method, and program - Google Patents
- Publication number
- WO2019244496A1 (PCT/JP2019/018523)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- mode
- image
- unit
- control unit
- information processing
- Prior art date
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/117—Identification of persons
- A61B5/1171—Identification of persons based on the shapes or appearances of their bodies or parts thereof
- A61B5/1172—Identification of persons based on the shapes or appearances of their bodies or parts thereof using fingerprinting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
Definitions
- the present disclosure relates to an information processing device, a wearable device, an information processing method, and a program.
- One object of the present disclosure is to provide an information processing device, a wearable device, an information processing method, and a program that can suppress unnecessary power consumption.
- The present disclosure is, for example, an information processing device including at least a control unit that selectively sets a first mode and a second mode in which processing that consumes more power than in the first mode is performed,
- wherein, in the first mode, the control unit determines whether or not an image obtained via a sensor unit includes biological information,
- changes the operation mode from the first mode to the second mode, triggered by the biological information being included in the image,
- and, in the second mode, performs at least a matching process using the biological information.
- The present disclosure is also, for example, a wearable device including a control unit that selectively sets at least a first mode and a second mode in which processing that consumes more power than in the first mode is performed, and a sensor unit for acquiring an image,
- wherein, in the first mode, the control unit determines whether or not the image obtained via the sensor unit includes biological information; changes the operation mode from the first mode to the second mode, triggered by the biological information being included in the image; and, in the second mode, performs at least a matching process using the biological information.
- The present disclosure is also, for example, an information processing method in which a control unit selectively sets at least a first mode and a second mode in which processing that consumes more power than in the first mode is performed,
- in the first mode, determines whether or not an image obtained via a sensor unit includes biological information,
- changes the operation mode from the first mode to the second mode, triggered by the biological information being included in the image,
- and, in the second mode, performs at least a matching process using the biological information.
- The present disclosure is also, for example, a program that causes a computer to execute an information processing method in which a control unit selectively sets at least a first mode and a second mode in which processing that consumes more power than in the first mode is performed,
- in the first mode, determines whether or not an image obtained via a sensor unit includes biological information,
- changes the operation mode from the first mode to the second mode, triggered by the biological information being included in the image,
- and, in the second mode, performs at least a matching process using the biological information.
- FIG. 1 is a diagram illustrating an example of an external appearance of a wristband electronic device according to an embodiment.
- FIG. 2 is a diagram illustrating an example of an internal structure of the wristband type electronic device according to the embodiment.
- FIG. 3 is a diagram illustrating a more specific example of the internal structure of the wristband type electronic device.
- FIG. 4 is a block diagram illustrating a circuit configuration example of the wristband type electronic device according to the embodiment.
- FIG. 5 is a functional block diagram for explaining a function example of the control unit according to the embodiment.
- FIG. 6 is a diagram for explaining feature points of a fingerprint.
- FIG. 7 is a functional block diagram for explaining a function example of the preprocessing unit according to the embodiment.
- FIGS. 8A to 8D are diagrams for explaining processing by the noise removal unit according to the embodiment.
- FIGS. 9A and 9B are diagrams referred to when describing an example of processing for detecting the direction and the main frequency of the flow of the fingerprint.
- FIGS. 10A and 10B are diagrams for explaining a process of estimating a fingerprint line to the outside of the imaging range.
- FIG. 11 is a diagram illustrating an example of the certainty factor map.
- FIG. 12 is a diagram illustrating an example of the certainty factor.
- FIG. 13A and FIG. 13B are diagrams for explaining a process of generating a ridge estimation image with a certainty factor map.
- FIG. 14A and FIG. 14B are diagrams for explaining a registration process according to the embodiment.
- FIGS. 15A and 15B are diagrams for explaining the matching process according to the embodiment.
- FIG. 16 is a state transition diagram for explaining an example of the transition of the operation mode.
- FIGS. 17A and 17B are diagrams for explaining an example of the trigger P.
- FIG. 18 is a diagram for explaining another example of the trigger P.
- FIGS. 19A and 19B are diagrams illustrating an example of an axial direction defined by a wristband type electronic device.
- FIGS. 20A to 20C are diagrams for explaining another example of the trigger P.
- FIGS. 21A and 21B are diagrams referred to when explaining another example of the trigger P.
- FIGS. 22A to 22D are diagrams illustrating an example of the trigger Q.
- FIG. 23 is a diagram for explaining an example of the trigger Q.
- FIGS. 24A to 24D are diagrams referred to when explaining another example of the trigger Q.
- FIG. 25 is a flowchart illustrating a flow of a process according to the second embodiment.
- FIG. 26 is a flowchart illustrating a flow of a process according to the second embodiment.
- FIG. 27 is a diagram for describing a modification.
- FIG. 1 shows an example of the external appearance of a wristband type electronic device (wristband type electronic device 1) according to the first embodiment.
- the wristband type electronic device 1 is used, for example, like a wristwatch. More specifically, the wristband-type electronic device 1 has a band portion 2 wound around the user's wrist WR and a main body portion 3. The main body 3 has a display 4. Although details will be described later, in the wristband type electronic device 1 according to the embodiment, by touching the display 4 with a fingertip, it is possible to perform biometric authentication using fingerprint information of the fingertip.
- FIG. 2 is a partial cross-sectional view illustrating an example of the structure inside the main body 3 of the wristband type electronic device 1.
- The main body 3 of the wristband type electronic device 1 includes, for example, the display 4 described above, the light guide plate 5, the light emitting unit 6, the touch sensor unit 7, the imaging element 8 as an example of the sensor unit, and the lens unit 9.
- a touch operation with the fingertip F is performed on the display 4, and the presence or absence of the touch is detected by the touch sensor unit 7.
- the main body 3 of the wristband type electronic device 1 has a structure in which a light guide plate 5, a display 4, a lens unit 9, and an imaging element 8 are sequentially stacked from the near side to the far side when viewed from the operation direction.
- the contact with the display 4 may include not only direct contact with the display 4 but also indirect contact via another member (for example, the light guide plate 5).
- the contact with the display 4 may include, for example, not only the fingertip F touching the display 4 but also bringing the fingertip F close to the display 4 such that a fingerprint image is obtained.
- The display 4 includes a liquid crystal display (LCD: Liquid Crystal Display), an OLED (Organic Light Emitting Diode) display, or the like.
- the light guide plate 5 is, for example, a light transmissive member that guides light from the light emitting unit 6 to an area AR where the fingertip F is in contact.
- the light guide plate 5 is not limited to a transparent one, and may be any as long as it transmits light to the extent that a fingerprint of the fingertip F can be photographed by the imaging element 8.
- the light emitting unit 6 is configured by an LED (Light Emitting Diode) or the like, and is provided at least partially around the light guide plate 5.
- the area AR is an area including a position corresponding to the image sensor 8, specifically, at least a position corresponding to a range of imaging by the image sensor 8.
- the light emitting unit 6 provides light required for photographing, for example, by being turned on when photographing a fingerprint.
- the touch sensor unit 7 is a sensor that detects contact of the fingertip F with the display 4.
- As the touch sensor unit 7, for example, a capacitive touch sensor is applied.
- a touch sensor of another type such as a resistive film type may be applied as the touch sensor unit 7.
- the touch sensor unit 7 is locally provided at a position near the lower part of the area AR.
- the touch sensor unit 7 may be provided over substantially the entire lower side of the display 4.
- The imaging element 8 is configured by a CCD (Charge Coupled Device) sensor, a CMOS (Complementary Metal Oxide Semiconductor) sensor, or the like.
- the imaging element 8 photoelectrically converts the subject light (reflected light from the object that has come into contact with the display 4) incident through the lens unit 9 to convert the light into an electric charge.
- Various processes in the subsequent stage are performed on the image signal obtained via the image sensor 8.
- The lens unit 9 is configured by lenses (microlenses) provided at intervals of several tens to several hundreds of pixels of the imaging element 8.
- FIG. 3 is a diagram illustrating a more specific example of the internal structure of the wristband type electronic device 1.
- the display 4 is described as a transparent panel unit having a plurality of transparent light emitting elements such as a transparent organic EL element and a quantum dot light emitting element.
- the display 4 has an effective area 4A and an outer frame 4B.
- the display 4 has a function as a display panel that displays an image in the effective area 4A by light emission of the plurality of transparent light emitting elements.
- the transparent light emitting elements are arranged in a matrix in the effective area 4A, for example.
- the display 4 has a function as a touch sensor that detects a touch state of an object such as a finger based on a value of capacitance between a plurality of wirings for a light emitting element, for example.
- a cover glass 50 is provided on the upper surface (operation side) of the display 4, and an imaging unit 60 including the imaging element 8 is arranged below a partial area of the display 4.
- the imaging unit 60 has a function of imaging an object that is in contact with or in proximity to a partial area of the display 4 via the display 4.
- the object imaged by the imaging unit 60 may be, for example, a part of a living body.
- the imaging unit 60 may have a function of a biometric authentication device that performs biometric authentication on a part of a living body based on a captured image of a part of the living body obtained by imaging a part of the living body.
- the function of the imaging unit 60 as a biometric authentication device can constitute, for example, a fingerprint sensor.
- the imaging unit 60 includes a microlens array module 61, an imaging unit outer frame 62, the above-described imaging device 8, and a substrate 63.
- the micro lens array module 61 is arranged in the effective area 4A of the display 4 when viewed from above.
- the imaging element 8 is arranged on the substrate 63.
- the microlens array module 61 is disposed between the image sensor 8 and the effective area 4A of the display 4.
- the microlens array module 61 includes a cover glass / light guide plate 65, a microlens array 66, and a light guide plate 67 in this order from the top.
- the microlens array 66 has a plurality of microlenses arranged in a matrix.
- the microlens array 66 condenses object light from an object such as a finger toward the image sensor 8 by each of the plurality of microlenses.
- the cover glass and light guide plate 65 has a role of protecting the surface of the microlens array 66. Further, the cover glass and light guide plate 65 has a role of guiding the object light transmitted through the effective area 4A of the display 4 to each of the plurality of microlenses.
- the cover glass and light guide plate 65 has a plurality of light guide paths provided at positions corresponding to each of the plurality of micro lenses.
- the light guide plate 67 has a plurality of light guide paths 68 as shown in FIG.
- the plurality of light guide paths 68 are provided at positions corresponding to the plurality of microlenses, respectively, and guide the light collected by each of the plurality of microlenses to the image sensor 8.
- FIG. 4 is a block diagram illustrating a circuit configuration example of the wristband type electronic device 1 and the like.
- The wristband type electronic device 1 includes, for example, in addition to the display 4, the touch sensor unit 7, and the imaging element 8 described above, a control unit 11, a wireless communication unit 12, an antenna 13 connected to the wireless communication unit 12, an NFC (Near Field Communication) communication unit 14, an antenna 15 connected to the NFC communication unit 14, a position sensor unit 16, an antenna 17 connected to the position sensor unit 16, a memory unit 18, a vibrator 19, a motion sensor 20, a voice processing unit 21, a microphone 22, and a speaker 23.
- The control unit 11 includes, for example, a CPU (Central Processing Unit) and controls each unit of the wristband type electronic device 1. For example, the control unit 11 performs various types of image processing on the fingerprint image of the fingertip F captured by the imaging element 8, and performs fingerprint authentication based on the fingerprint image, which is one type of biological information.
- the wireless communication unit 12 performs short-range wireless communication with another terminal based on, for example, the Bluetooth (registered trademark) standard.
- the wireless communication unit 12 performs modulation / demodulation processing, error correction processing, and the like in accordance with, for example, the Bluetooth (registered trademark) standard.
- the NFC communication unit 14 performs wireless communication with a nearby reader / writer based on the NFC standard. Although illustration is omitted, power is supplied from a battery such as a lithium ion secondary battery to each unit of the wristband type electronic device 1. The battery may be charged wirelessly based on the NFC standard.
- The position sensor unit 16 is a positioning unit that measures the current position using a system such as GNSS (Global Navigation Satellite System). Data obtained by the wireless communication unit 12, the NFC communication unit 14, and the position sensor unit 16 are supplied to the control unit 11, and the control unit 11 performs control based on the supplied data.
- The memory unit 18 is a general term for a ROM (Read Only Memory) in which programs executed by the control unit 11 are stored, a RAM (Random Access Memory) used as a work memory when the control unit 11 executes a program, a non-volatile memory for data storage, and the like.
- the memory unit 18 stores a feature amount of a fingerprint of an authorized user used for fingerprint authentication (hereinafter, appropriately referred to as a registered feature amount). This registered feature amount is initially registered, for example, when the wristband type electronic device 1 is used for the first time.
- the vibrator 19 is, for example, a member that vibrates the main body 3 of the wristband type electronic device 1. By vibrating the main body 3 by the vibrator 19, an incoming call, reception of an e-mail, or the like is notified.
- the motion sensor 20 detects the movement of the user wearing the wristband type electronic device 1.
- As the motion sensor 20, for example, an acceleration sensor, a gyro sensor, an electronic compass, a barometric pressure sensor, or a biosensor that detects blood pressure, pulse, and the like is used.
- a pressure sensor or the like for detecting whether or not the user wears the wristband type electronic device 1 may be provided on the back side (the side facing the wrist) of the band portion 2 or the main body portion 3.
- the microphone 22 and the speaker 23 are connected to the voice processing unit 21, and the voice processing unit 21 performs a call process with the other party connected by wireless communication in the wireless communication unit 12.
- the voice processing unit 21 can also perform a process for a voice input operation.
- Note that the wristband type electronic device 1 is not limited to the above configuration example; it may omit part of the above configuration, or may have another configuration added.
- FIG. 5 is a functional block diagram for explaining an example of a function of the control unit 11.
- the control unit 11 includes a preprocessing unit 11a, a feature point detection unit 11b, a feature amount extraction unit 11c, and a matching processing unit 11d.
- the preprocessing unit 11a performs various correction processes on the input fingerprint image. Details of the processing performed by the preprocessing unit 11a will be described later.
- the feature point detection unit 11b detects a feature point of a fingerprint from an image including the fingerprint by applying a known method.
- The characteristic points of a fingerprint are characteristic portions necessary for matching, such as the end points and branch points in the pattern drawn by the fingerprint lines as shown in FIG. 6, and the intersections and isolated points of the fingerprint lines described later.
- the fingerprint line is described as a ridge of a fingerprint, but may be at least one of a ridge and a valley of the fingerprint.
- the feature amount extraction unit 11c extracts a feature amount characterizing each feature point detected by the feature point detection unit 11b.
- The feature amount includes the position of the feature point, the direction of the fingerprint line (for example, a relative direction (vector) with respect to a predetermined direction defined by a ridge), and the like.
- The feature amount extraction unit 11c extracts the feature amount of each feature point based on a peripheral image including that feature point; for example, an image obtained by cutting out a 3 mm × 3 mm region around the feature point and normalizing its angle is applied.
- Extracting the feature amount after normalizing the angle has the effect that the extracted feature amount hardly changes even if the orientation of the finger differs between registration and verification, that is, the robustness with respect to the angle at which the finger is placed is improved.
- the relative position of the sweat gland with respect to the feature point may be included in the feature amount of the feature point.
- Note that the embodiment according to the present disclosure does not necessarily need to capture a fingerprint over a wide area of the finger, and can be said to be a method suited to fingerprint matching over a small area.
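- The patch-based feature amount described above can be sketched as follows; this is a minimal illustration, not the disclosed implementation. The pixel density (px_per_mm), the unit-norm normalization, and all function and parameter names are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import rotate

def feature_vector(img, point, angle_deg, px_per_mm=20, patch_mm=3.0):
    """Feature amount of one feature point: cut out a patch_mm x patch_mm
    image centered on the point, rotate it so that the local ridge
    direction is normalized to 0 degrees, and flatten it into a vector.
    Border handling is omitted for brevity."""
    half = int(patch_mm * px_per_mm / 2)
    y, x = point
    patch = img[y - half:y + half, x - half:x + half].astype(np.float32)
    # Angle normalization: robustness to how the finger is placed.
    patch = rotate(patch, -angle_deg, reshape=False, mode="nearest")
    v = patch.ravel()
    return v / (np.linalg.norm(v) + 1e-8)  # unit norm, suits inner-product matching
```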
- the matching processing unit 11d performs a matching process of comparing the feature amount extracted by the feature amount extracting unit 11c with a registered feature amount registered in advance, and outputs a matching score as a result of the matching process. If the collation score is equal to or more than the threshold, the fingerprint authentication is established, that is, it is determined that the user is an authorized user. Conversely, if the collation score is smaller than the threshold, the fingerprint authentication is not established.
- the result of the matching process may be notified to the user by display, sound, vibration, or the like. As a result of the matching process, when the authentication is established, the use according to the application becomes possible, for example, the use of the predetermined function of the wristband type electronic device 1 is permitted.
- the registered feature amount is described as being stored in the memory unit 18.
- the registered feature amount may be stored in an external device such as a server device on a cloud.
- the registered feature amount may be downloaded from an external device.
- the registered feature amount may be automatically deleted from the wristband type electronic device 1 after the matching process is completed.
- FIG. 7 is a functional block diagram illustrating an example of a function of the preprocessing unit 11a.
- the preprocessing unit 11a includes, for example, a noise removal unit 101, a ridge estimation image generation unit 102 as an image generation unit, and a certainty factor map generation unit 103 as a configuration that executes functions included in the correction processing.
- the noise removing unit 101 removes noise included in a fingerprint image.
- FIGS. 8A to 8D are diagrams for explaining the noise removal processing performed by the noise removal unit 101.
- The image on the left side of FIG. 8A shows a fingerprint image IM1A, in which dust NA is reflected.
- the noise removing unit 101 determines, for example, a region in which a change in the luminance value between adjacent pixels is equal to or greater than a predetermined value as dust, and removes the dust NA by performing an interpolation process using peripheral pixels of the dust NA.
- the ridge estimation image IM2A as shown on the right side of FIG. 8A is generated by the ridge estimation image generation unit 102.
- As the process for removing noise such as dust, other known processes can also be applied. The same applies to the processes for removing noise other than dust described below.
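- A minimal sketch of this kind of dust removal, assuming a grayscale numpy image: pixels whose luminance differs sharply from their neighborhood are treated as dust and replaced by an interpolated (here, local-median) value. The threshold and filter size are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_dust(img, diff_threshold=60.0):
    """Treat pixels whose luminance change versus adjacent pixels is at or
    above a threshold as dust, and interpolate them from peripheral pixels."""
    img = img.astype(np.float32)
    local_median = median_filter(img, size=3)       # proxy for the peripheral pixels
    dust_mask = np.abs(img - local_median) >= diff_threshold
    cleaned = img.copy()
    cleaned[dust_mask] = local_median[dust_mask]    # interpolation step
    return cleaned
```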
- the noise removing unit 101 removes fixed pattern noise that is noise other than dust, for example.
- the image on the left side of FIG. 8B shows the fingerprint image IM1B, and the fingerprint image IM1B includes, for example, a fixed pattern noise NB having a vertical stripe.
- Causes of the fixed pattern noise NB include, for example, the structure of the display 4, more specifically, the pattern of the display 4 itself.
- The imaging element 8 is disposed on the back side of the display 4 with respect to the operation direction. For this reason, a pattern of the display 4 may appear in an image obtained via the imaging element 8, that is, it may be reflected in the fingerprint image as fixed pattern noise NB.
- Since the noise removing unit 101 removes such fixed pattern noise NB and interpolates the location of the noise NB, a decrease in the accuracy of fingerprint authentication can be prevented even with the structure of the wristband type electronic device 1 according to the present embodiment.
- the ridge estimation image IM2B is generated by the ridge estimation image generation unit 102.
- The noise removing unit 101 also removes the boundaries between image sensors, which are noise other than dust.
- For example, the imaging element 8 has a configuration in which four image sensors, as a plurality of sub-sensor units, are combined.
- When an imaging element 8 of a certain size is required by the specifications, forming the required size by combining image sensors of an existing size is more advantageous in terms of manufacturing cost and the like than separately manufacturing an image sensor of a new size.
- However, when the imaging element 8 has a structure in which a plurality of image sensors are combined, the boundaries between them appear in the fingerprint image IM1C as noise NC, as shown on the left side of FIG. 8C. Since the noise removing unit 101 removes such noise NC and interpolates its location, a decrease in the accuracy of fingerprint authentication can be prevented even with this structure.
- a ridge estimation image IM2C is generated by the ridge estimation image generation unit 102.
- Further, the noise removing unit 101 determines that a pattern that does not correspond to the curved pattern of ridges is not a fingerprint, and removes that pattern.
- the image IM2D after the removal is shown on the right side of FIG. 8D.
- Such a process is useful, for example, when the user's clothes or the like touches the display 4 and fingerprint authentication would otherwise be attempted.
- Note that, for an image such as the image IM2D, the processing related to fingerprint authentication may simply not be performed.
- As described above, the correction processing by the noise removing unit 101 makes it possible to prevent the accuracy of fingerprint authentication from being reduced by the influence of noise, and to avoid giving the user failure feedback caused by such a reduction in accuracy.
- the ridge estimation image generation unit 102 generates a ridge estimation image in which a pattern based on a fingerprint line is estimated based on the image processed by the noise removal unit 101.
- A known method can be applied to generate the ridge estimation image; examples of methods according to the present embodiment are described below.
- (Example 1) The ridge estimation image generation unit 102 applies an FFT (Fast Fourier Transform) to the image processed by the noise removal unit 101, calculates the average period of the fingerprint lines (for example, 0.4 mm), and generates a ridge estimation image by applying a bandpass filter around the frequency corresponding to that period.
- (Example 2) The ridge estimation image generation unit 102 applies the FFT to each area of about 1 mm square, extracts the dominant frequency in the area (hereinafter referred to as the main frequency as appropriate) and the dominant angle (the flow direction of the fingerprint), and generates a ridge estimation image by applying a Gabor filter adapted to that frequency and angle. According to the above two examples, the main ridges and valleys are emphasized, and the influence of small noise can be reduced.
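- A sketch of Example 2 along these lines (per-block FFT to find the dominant frequency and angle, then a matching Gabor filter) might look as follows. The block size, kernel parameters, and function names are assumptions; a production implementation would also handle block boundaries and smooth the orientation field.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, size=17, sigma=4.0):
    """Real Gabor kernel tuned to a spatial frequency (cycles/pixel) and an
    orientation theta (radians, direction of the frequency vector)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the wave
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def estimate_ridges(img, block=32):
    """Per-block ridge enhancement: for each roughly 1 mm block, estimate the
    dominant frequency and angle via the FFT, then apply a matching Gabor."""
    out = np.zeros_like(img, dtype=np.float32)
    for by in range(0, img.shape[0] - block + 1, block):
        for bx in range(0, img.shape[1] - block + 1, block):
            patch = img[by:by + block, bx:bx + block].astype(np.float32)
            spec = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean())))
            cy, cx = np.unravel_index(np.argmax(spec), spec.shape)
            fy, fx = cy - block // 2, cx - block // 2
            freq = np.hypot(fy, fx) / block          # main frequency, cycles/pixel
            theta = np.arctan2(fy, fx)               # ridge-normal direction
            k = gabor_kernel(max(freq, 1e-3), theta)
            out[by:by + block, bx:bx + block] = fftconvolve(patch, k, mode="same")
    return out
```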
- Here, an example of the method used in Example 2 for detecting the flow direction and the main frequency of the fingerprint will be described with reference to FIGS. 9A and 9B.
- The image shown on the left side of FIG. 9A is a certain fingerprint image IM8.
- On the right side of FIG. 9A, a frequency spectrum obtained by applying the FFT to the image IM8 is shown.
- One of the radial lines superimposed on the frequency spectrum indicates a component having the largest integral value described later.
- FIG. 9B shows a frequency profile in the direction (principal direction) of the component in which the integrated value is the largest.
- As a first step, profiles are extracted for 16 directions of the frequency spectrum, and the direction with the largest integral value is determined; this is the main direction component of the wave. As a second step, a peak is detected in the frequency profile along the main direction, and the frequency corresponding to the peak is set as the main frequency. In this way, the flow direction and the main frequency of the fingerprint can be detected.
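- The two steps can be sketched as follows, assuming a grayscale numpy image; the sampling of radial profiles and the frequency conversion are simplified approximations rather than the disclosed algorithm.

```python
import numpy as np

def main_direction_and_frequency(img, n_dirs=16):
    """Step 1: integrate the FFT magnitude along each of n_dirs radial
    directions and keep the direction with the largest integral.
    Step 2: take the peak of the frequency profile in that direction."""
    h, w = img.shape
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    cy, cx = h // 2, w // 2
    rs = np.arange(1, min(cy, cx) - 1)
    best = (-1.0, 0.0, None)                      # (integral, theta, profile)
    for k in range(n_dirs):
        theta = np.pi * k / n_dirs                # 0..180 deg; the spectrum is symmetric
        ys = (cy + rs * np.sin(theta)).astype(int)
        xs = (cx + rs * np.cos(theta)).astype(int)
        profile = spec[ys, xs]
        if profile.sum() > best[0]:
            best = (profile.sum(), theta, profile)
    _, main_theta, profile = best
    peak_r = int(np.argmax(profile)) + 1          # radial bin of the peak
    main_freq = peak_r / max(h, w)                # approx. cycles per pixel
    return main_theta, main_freq
```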
- Further, the ridge estimation image generation unit 102 estimates the fingerprint pattern by extending it over a predetermined range outside the captured area. For example, based on the fingerprint image IM9A shown in FIG. 10A, a ridge estimation image IM9B enlarged by a predetermined size beyond the range of the fingerprint image IM9A is generated, as shown in FIG. 10B.
- the fingerprint line obtained by the original size (the size of the fingerprint image IM9A) is extended along the flow (direction) of the fingerprint line.
- By such processing, a branch point or an intersection of fingerprint lines, which are among the characteristic points of a fingerprint, may be obtained outside the original imaging range.
- Thus, even when the size of the imaging element 8 is small and the area of the image obtained by it is limited, more feature points can be obtained, and the accuracy of fingerprint authentication can be improved.
- the certainty map generation unit 103 generates a certainty map indicating the certainty of the estimation result in the area of the ridge estimation image that is the image obtained by estimating the pattern corresponding to the fingerprint.
- FIG. 11 shows a certainty factor map MA10, which is an example of the certainty factor map.
- the image area is divided into white and black areas.
- the white area is an area having a high degree of certainty, that is, an area where a fingerprint line pattern is accurately obtained.
- a black region is a region with low confidence.
- A predetermined threshold is set for the certainty factor, and an area whose certainty factor is equal to or larger than the threshold is treated as a high-confidence area.
- Specifically, an image of a predetermined size (for example, a rectangular image of 1 mm × 1 mm) is cut out from the image, and a luminance distribution indicating the distribution of the luminance values of its pixels is created for the cut-out image.
- FIG. 12 shows an example of the luminance distribution.
- The difference value D between the luminance value BV1 and the luminance value BV2 (for example, the luminance values corresponding to the two peaks of the distribution) is set as the certainty factor. Note that the variance of the luminance values of the cut-out image may be used as the certainty factor instead.
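- The certainty computation can be sketched as below; treating BV1 and BV2 as the dark (ridge) and bright (valley) modes of the luminance distribution is an assumption made for the example, and the bin count is arbitrary.

```python
import numpy as np

def certainty(patch, bins=32):
    """Certainty of a ~1 mm x 1 mm cut-out: use the spread between the low
    and high luminance modes of the distribution as the difference value D.
    Alternative: return float(np.var(patch)) to use the variance instead."""
    hist, edges = np.histogram(patch, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mid = bins // 2
    bv1 = centers[:mid][np.argmax(hist[:mid])]   # dark mode (assumed BV1)
    bv2 = centers[mid:][np.argmax(hist[mid:])]   # bright mode (assumed BV2)
    return float(bv2 - bv1)                      # difference value D
```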
- The function of the ridge estimation image generation unit 102 and the function of the certainty map generation unit 103 described above may be configured as one functional block that generates a ridge estimation image together with a certainty factor map.
- For example, a white area is an area where the ridge pattern can be recognized with an estimation error of ε or less, and a black area is an area where the error cannot be suppressed to ε or less.
- an accurate ridge image with respect to the input fingerprint image x is defined as a correct ridge image y.
- An estimation error between the correct ridge image y and the ridge estimation image f (x) is defined as an estimation error dy.
- One of the purposes of the process is to estimate an image f (x) that is close to y from x.
- Another object is to recognize regions that are likely to be estimated correctly, in other words, to determine whether or not a region is one where the estimation error can be reduced to ε or less.
- To this end, the control unit 11 simultaneously learns functions f and g that minimize the loss function shown in FIG. 13B (where 0 ≤ g(x) ≤ 1).
- The portion shown in parentheses in the loss function of FIG. 13B is the estimation error dy_i.
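- FIG. 13B itself is not reproduced in this text, so the exact loss is unknown; purely as a hedged reconstruction consistent with the description (g gates the per-sample estimation error dy_i, 0 ≤ g(x) ≤ 1, with a term that keeps g from collapsing to zero), it might take a form such as:

```latex
% Assumed form only -- the actual loss is the one shown in FIG. 13B.
L(f, g) = \sum_i \Big[\, g(x_i)\,\underbrace{\lVert y_i - f(x_i) \rVert^2}_{dy_i}
          \;+\; \lambda \,\bigl(1 - g(x_i)\bigr) \Big],
\qquad 0 \le g(x_i) \le 1 .
```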
- FIG. 14A is a diagram showing the flow of the registration process.
- FIG. 14B is a diagram showing an image and the like obtained in each process in association with each process.
- In step ST11, an image input process is performed. For example, a fingertip is brought into contact with the display 4, and a fingerprint image is obtained via the imaging element 8. When the fingerprint image is acquired, the light emitting unit 6 emits light. Then, the process proceeds to step ST12.
- In step ST12, preprocessing is performed by the preprocessing unit 11a. Specifically, noise is removed from the fingerprint image by the noise removing unit 101.
- the ridge estimation image generation unit 102 generates a ridge estimation image based on the fingerprint image from which noise has been removed. Further, the certainty map generation unit 103 generates a certainty map. In FIG. 14B, illustration of the certainty factor map is omitted. Then, the process proceeds to step ST13.
- In step ST13, the feature point detection unit 11b detects feature points of the fingerprint based on the ridge estimation image.
- the feature point detection unit 11b refers to the certainty factor map and detects a feature point from an area determined to have a certainty factor or more.
- FIG. 14B shows an example in which three feature points (the centers of circles) are detected. Then, the process proceeds to step ST14.
- In step ST14, the feature amount extraction unit 11c extracts a feature amount characterizing each feature point.
- the feature amount extraction unit 11c cuts out an image of a predetermined size centering on each feature point, and extracts a feature amount based on the cut out image. Then, the process proceeds to step ST15.
- In step ST15, the control unit 11 performs a template registration process of registering the feature amounts of the feature points extracted in step ST14.
- The feature amount of each feature point is stored in, for example, the memory unit 18.
- The feature amount stored in the memory unit 18 is used as the registered feature amount in the matching process described below.
- FIG. 15A is a diagram illustrating a flow of the matching process.
- FIG. 15B is a diagram illustrating an example of a feature amount acquired in each process and a diagram referred to when describing the process content, in association with each process.
- In step ST21, a fingertip is placed on the display 4, a fingerprint image is obtained, and a feature amount extraction process for extracting feature amounts is performed.
- the feature amount extraction process in step ST21 is a process including the above-described steps ST11 to ST14. Through the processing in step ST21, a feature amount for collation for performing fingerprint authentication is obtained.
- FIG. 15B shows feature amounts corresponding to five feature points. Then, the process proceeds to step ST22.
- In step ST22, the control unit 11 reads the registered feature amounts from the memory unit 18.
- FIG. 15B shows an example of a registered feature amount. Then, the process proceeds to step ST23.
- In step ST23, the matching processing unit 11d performs a matching process of comparing the feature amounts acquired in step ST21 with the registered feature amounts read in step ST22.
- the matching processing unit 11d obtains a similarity score between the feature amount for matching and the registered feature amount by an inner product operation, and generates a similarity score matrix shown in FIG. 15B based on the result. “A” in the similarity score matrix indicates a registered feature point, and “B” indicates a feature point for comparison.
- The (i, j) component of the matrix is the similarity score between Ai and Bj.
- the matching processing unit 11d calculates a matching score based on the similarity score matrix. If the collation score is equal to or greater than the threshold, fingerprint authentication is established. If the collation score is smaller than the threshold, fingerprint authentication is not established. For example, the maximum value in the similarity score matrix is set as the matching score. The average value in the similarity score matrix may be set as the matching score. The average value of the maximum value of each column in the similarity score matrix may be set as the matching score.
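- The reduction from similarity score matrix to matching score can be sketched as follows; the vector shapes and strategy names are assumptions for illustration, with rows A_i as registered features and columns B_j as features for matching.

```python
import numpy as np

def matching_score(registered, probe, strategy="max"):
    """registered: (n, d) registered feature vectors A; probe: (m, d)
    feature vectors B for matching. Entry (i, j) of the similarity score
    matrix is the inner product of A_i and B_j."""
    s = registered @ probe.T                 # similarity score matrix, n x m
    if strategy == "max":
        return float(s.max())                # maximum value in the matrix
    if strategy == "mean":
        return float(s.mean())               # average value in the matrix
    if strategy == "mean_of_col_max":
        return float(s.max(axis=0).mean())   # average of each column's maximum
    raise ValueError(strategy)

# Fingerprint authentication is established when the score reaches a threshold:
# authenticated = matching_score(A, B) >= threshold
```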
- As described above, since the feature amount is extracted based on the peripheral image of the feature point, information other than that of the feature point itself can also be used as the feature amount of the feature point.
- Accordingly, matching processing based on a wider variety of information can be performed, so the accuracy of fingerprint authentication can be improved.
- In the embodiment described above, fingerprint authentication is performed using the imaging element 8, that is, an image sensor (more specifically, a CMOS sensor), but another method, for example a capacitance method, may also be applied.
- Although a battery with a capacity corresponding to the required power could simply be used, in the case of a wearable device the size of the battery that can be mounted is limited, and so is its capacity. It is therefore desired to minimize unnecessary power consumption.
- The number and size of input devices such as buttons are also restricted, so it is desirable that the control for minimizing unnecessary power consumption be performed without using an operation on a physical device such as a button as a trigger.
- The second embodiment is described in detail below with this viewpoint in mind.
- FIG. 16 is a diagram illustrating a state transition of the wristband type electronic device 1.
- the wristband type electronic device 1 is capable of transitioning between, for example, three modes as an operation mode related to fingerprint authentication.
- the three modes are mode 0, mode 1 and mode 2. From the viewpoint of power consumption, mode 0 has the lowest power consumption, and mode 2 has the highest power consumption.
- the power consumption in mode 1 is larger than the power consumption in mode 0 and smaller than the power consumption in mode 2.
- mode 0 corresponds to an example of the third mode
- modes 1 and 2 correspond to examples of the first and second modes, respectively.
- Mode 0 is a pause mode, in which the light emitting unit 6 is turned off and the image sensor 8 is not operated, that is, the fingerprint sensing using the image sensor 8 is not performed.
- Mode 1 is a standby state in which the light emitting unit 6 is turned on and fingerprint sensing using the image sensor 8 is performed. Note that the sensing in the mode 1 may be such that it is possible to determine whether or not the object in contact with the display 4 is a fingerprint. More specifically, sensing that acquires an image that can determine whether or not a fingerprint (for example, a characteristic point of the fingerprint) is included may be used.
- the mode 2 is an authentication state, in which the light emitting unit 6 is turned on, a feature amount of the fingerprint is acquired, and a matching process for comparing the acquired feature amount with the registered feature amount is performed.
- In mode 2, an image is acquired via the imaging element 8 based on settings different from those in mode 1.
- In mode 1, for example, when a feature point of a fingerprint is detected in the image and it is determined that the object touching the display 4 is a fingertip, the operation mode transitions to mode 2, which consumes more power. Such a mode transition prevents the matching process and other power-hungry processes from being executed unnecessarily when something other than a fingertip, such as clothes, touches the display 4. Therefore, for example, a decrease in the remaining capacity of the battery can be suppressed.
- As described above, mode 0 is a mode in which no processing related to fingerprint authentication is performed. Accordingly, the following describes specific examples of the operation in mode 1 and mode 2.
- illumination control for controlling the brightness of the light emitting unit 6 by the control unit 11 is performed.
- the operation in each mode is performed according to the illumination control.
- In mode 1, the brightness (luminance) of the light emitting unit 6 is set low.
- In mode 2, the brightness of the light emitting unit 6 is set higher than in mode 1 so that a high-definition image is obtained. Since the amount of reflected light from the fingertip changes depending on the state of the finger and how strongly it is pressed, the light emission intensity of the light emitting unit 6 may be adjusted adaptively based on the luminance of the image.
- resolution control for changing the resolution is performed by the control unit 11 controlling active pixels in the image sensor 8.
- the operation in each mode is performed according to the resolution control.
- In mode 1, a low resolution is set, and sensing at the low resolution is performed.
- the low resolution means, for example, a resolution of about 300 to 500 ppi (pixels per inch) at which a feature point of a fingerprint can be detected.
- In mode 2, a high resolution is set, and sensing at the high resolution is performed.
- the high resolution means, for example, a resolution of about 1000 ppi or more at which a feature finer than a fingerprint line such as a sweat gland can be photographed.
- control unit 11 controls a region of an active pixel in the image sensor 8 to perform a sensing region control for controlling a sensing region which is an imaging range.
- In mode 1, sensing using only a part of the imaging element 8 (for example, only near the center) is performed.
- In mode 2, sensing using the entire imaging element 8 is performed.
- Control combining the control in the above-described example may be performed. For example, in mode 1, sensing at a low resolution is performed by the entire image sensor 8 to detect a feature point of a fingerprint. In mode 2, only the area near the detected feature point may be sensed at high resolution.
- The operation mode transitions between the modes according to a predetermined trigger, the lapse of time, the result of processing, and the like. As shown in FIG. 16, a transition is made from mode 0 to mode 1 based on the trigger P, and from mode 1 to mode 2 based on the trigger Q.
- The operation mode also returns from mode 1 to mode 0.
- For example, when the processing in mode 1 does not finish within a predetermined time, the operation mode transitions from mode 1 to mode 0 (timeout).
- Likewise, when the processing in mode 2 does not finish within a predetermined time, the operation mode transitions from mode 2 to mode 1 (timeout).
- In addition, the operation mode changes from mode 2 to mode 0, for example, when the matching process is completed.
- The operation mode may also be allowed to transition directly from mode 0 to mode 2.
- For example, the operation mode may transition from mode 0 to mode 2 based on a trigger R.
- An example of the trigger R is an operation input instructing that fingerprint authentication be performed. In this case, since it is clear in advance that fingerprint authentication will be performed, the operation mode may transition directly from mode 0 to mode 2.
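- The transition diagram of FIG. 16 can be sketched as a small state machine; the trigger callables, the single shared timeout value, and all names are assumptions for illustration (the text allows separate timeouts and additional transitions).

```python
import time
from enum import Enum

class Mode(Enum):
    MODE0 = 0   # pause: light off, no fingerprint sensing
    MODE1 = 1   # standby: low-power sensing ("is this a fingerprint?")
    MODE2 = 2   # authentication: full sensing and matching process

class ModeController:
    def __init__(self, trigger_p, trigger_q, trigger_r=None, timeout_s=10.0):
        self.mode = Mode.MODE0
        self.trigger_p, self.trigger_q, self.trigger_r = trigger_p, trigger_q, trigger_r
        self.timeout_s = timeout_s
        self.entered_at = time.monotonic()

    def _goto(self, mode):
        self.mode, self.entered_at = mode, time.monotonic()

    def step(self):
        elapsed = time.monotonic() - self.entered_at
        if self.mode is Mode.MODE0:
            if self.trigger_r is not None and self.trigger_r():
                self._goto(Mode.MODE2)          # direct transition on trigger R
            elif self.trigger_p():
                self._goto(Mode.MODE1)          # trigger P
        elif self.mode is Mode.MODE1:
            if self.trigger_q():
                self._goto(Mode.MODE2)          # trigger Q
            elif elapsed > self.timeout_s:
                self._goto(Mode.MODE0)          # mode 1 timed out
        elif self.mode is Mode.MODE2:
            if elapsed > self.timeout_s:
                self._goto(Mode.MODE1)          # mode 2 timed out
        return self.mode
```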
- An example of the trigger P is the timing at which the start of use of the wristband type electronic device 1 is detected. At that timing, there is a high possibility that fingerprint authentication will be performed in order to execute a predetermined application, so the operation mode changes from mode 0 to mode 1.
- As a specific example of the trigger P, as shown in FIGS. 17A and 17B, there is the case where the output of the acceleration sensor, or a change in that output, becomes equal to or more than a threshold (or equal to or less than a threshold).
- In such a case, there is a high possibility that the wristband type electronic device 1 will be used, so the operation mode transitions from mode 0 to mode 1.
- the acceleration sensor can be applied as one of the motion sensors 20.
- As another specific example of the trigger P, as shown in FIG. 18, there is the case where the direction of the composite vector of the three-axis acceleration (the gravity direction) changes by a threshold value or more.
- a sensor output corresponding to each axis is defined. Examples of each axis corresponding to the wristband type electronic device 1 are shown in FIGS. 19A and 19B.
- The three-axis acceleration is represented as a three-dimensional vector, and if its direction changes, it is determined that the orientation of the hand has changed. In such a case as well, some action including fingerprint authentication is highly likely to be performed on the wristband type electronic device 1, so the operation mode transitions from mode 0 to mode 1.
- the predetermined section is set so that the output of the acceleration sensor includes a portion where a change equal to or more than the threshold value occurs.
- the output of the acceleration sensor corresponding to the set predetermined section is input to the recognizer schematically shown in FIG. 20B.
- the recognizer determines whether a predetermined gesture has occurred by applying the function f to the output of the acceleration sensor.
- a determination result of the recognizer is obtained as shown in FIG. 20C.
- The case where the score f(x), which is the determination result and indicates the likelihood of the defined gesture, is equal to or greater than a threshold value can be defined as the trigger P.
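- As a sketch, the recognizer-based trigger P reduces to scoring a window of accelerometer samples with some function f and comparing against a threshold; the linear scorer below merely stands in for the learned recognizer.

```python
import numpy as np

def gesture_trigger(accel_window, weights, threshold):
    """accel_window: (n_samples, 3) section of 3-axis accelerometer output
    containing the above-threshold change; weights: parameters of a stand-in
    linear scorer f (same length as the flattened window)."""
    x = np.asarray(accel_window, dtype=np.float32).ravel()
    score = float(np.dot(weights, x))   # f(x): "defined-gesture likeness"
    return score >= threshold           # trigger P fires when f(x) >= threshold
```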
- The trigger P may also be the case where contact of a fingertip with the display 4, or movement of a fingertip on the display 4, is detected.
- Alternatively, the trigger P may be set when contact or movement of an object other than a fingertip is detected.
- FIGS. 21A and 21B are diagrams schematically showing respective positions of the image sensor 8 and the touch sensor unit 7 with respect to the display 4.
- the touch sensor unit 7 that detects contact or movement of an object is arranged, for example, in the vicinity of the image sensor 8 as shown in FIGS. 21A and 21B.
- the present invention is not limited to this, and various conditions can be set as the trigger P.
- a combination of the above-described examples may be used as the trigger P.
- the trigger Q is, for example, a trigger on the condition that a fingerprint is included in an image acquired via the imaging device 8.
- a cycle that can be considered as a cycle of a fingerprint line (here, a ridge line and a valley line) of a fingerprint is set.
- FIG. 22A shows an example of a 0.6 mm cycle
- FIG. 22B shows an example of a 0.3 mm cycle
- FIG. 22C shows an example of a 0.15 mm cycle
- FIG. 22D shows an example of a 0.075 mm cycle.
- a frequency component corresponding to each cycle is extracted from an image obtained via the image sensor 8. Then, for each frequency component, for example, 32 types of responses (in increments of 11.25 degrees) as shown in FIG. 23 are calculated, and the average value is obtained.
- When the average value corresponding to at least one of the four frequency components described above is equal to or greater than the threshold, the object shown in the image is highly likely to be a fingerprint. Therefore, the condition that the average value corresponding to at least one of the four frequency components is equal to or greater than the threshold is set as the trigger Q.
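- A sketch of this trigger-Q check, assuming a grayscale numpy image and an assumed pixel density: for each candidate fingerprint-line period, the FFT magnitude is sampled at 32 orientations (11.25-degree increments) on the corresponding frequency ring and averaged. The threshold and px_per_mm values are placeholders.

```python
import numpy as np

def looks_like_fingerprint(img, px_per_mm=20.0,
                           periods_mm=(0.6, 0.3, 0.15, 0.075), threshold=1.0):
    """Trigger Q: if the average oriented response for at least one of the
    candidate fingerprint-line periods reaches the threshold, the image is
    likely to contain a fingerprint."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    h, w = img.shape
    cy, cx = h // 2, w // 2
    for period in periods_mm:
        freq_px = 1.0 / (period * px_per_mm)     # cycles per pixel
        r = freq_px * max(h, w)                  # ring radius in FFT bins (approx.)
        responses = []
        for k in range(32):                      # 11.25-degree increments
            theta = 2 * np.pi * k / 32
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < h and 0 <= x < w:        # periods under ~2 px fall off the grid
                responses.append(spec[y, x])
        if responses and float(np.mean(responses)) >= threshold:
            return True
    return False
```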
- the condition that the number of fingerprint feature points equal to or larger than the threshold value is detected may be set as the trigger Q.
- The characteristic points of the fingerprint may include, in addition to the end point of a fingerprint line shown in FIG. 24A, the branch point of a fingerprint line shown in FIG. 24B, the intersection of fingerprint lines shown in FIG. 24C, and the isolated point of a fingerprint line shown in FIG. 24D.
- the present invention is not limited to this, and various conditions can be set as the trigger Q.
- In the flowcharts of FIGS. 25 and 26, circles A, B, and C indicate the continuity of the processing. The following description assumes that the operation mode at the start of the process is mode 0.
- In step ST31 in FIG. 25, for example, acceleration data is obtained based on the output of the motion sensor 20. Then, the process proceeds to step ST32.
- In step ST32, using the acceleration data obtained in step ST31, the control unit 11 performs a process of recognizing whether or not the trigger P is established. As described above, whether or not the trigger P has been established may be determined using data other than the acceleration data. Then, the process proceeds to step ST33.
- In step ST33, it is determined whether or not the trigger P is established based on the result of the process in step ST32. If the trigger P is not established, the process returns to step ST31; if the trigger P is established, the process proceeds to step ST34.
- In step ST34, the operation mode changes from mode 0 to mode 1, and the first elapsed time is set to 0 (initialized).
- The first elapsed time is a time used to determine whether or not the entire processing has been completed within a predetermined time, in other words, whether or not the processing has timed out. Then, the process proceeds to step ST35.
- In step ST35, the second elapsed time is set to 0 (initialized).
- The second elapsed time is a time used to determine whether or not the processing in mode 1 has been completed within a predetermined time, in other words, whether or not it has timed out. Then, the process proceeds to step ST36.
- In step ST36, the light emitting unit 6 is turned on with the brightness corresponding to mode 1. Then, the process proceeds to step ST37.
- In step ST37, sensing according to the settings corresponding to mode 1 is started. Then, the process proceeds to step ST38.
- In step ST38, an image is obtained via the imaging element 8 as a result of the sensing in step ST37. Then, the process proceeds to step ST39.
- In step ST39, a process of recognizing the trigger Q is performed. Then, the process proceeds to step ST40.
- In step ST40 in FIG. 26, the control unit 11 determines whether or not the trigger Q is established as a result of the processing in step ST39. If the trigger Q is not established, the process proceeds to step ST41.
- In step ST41, it is determined whether the second elapsed time is greater than a predetermined threshold th1.
- The threshold th1 is set to, for example, about 10 seconds.
- If the second elapsed time is greater than the threshold th1, the processing in mode 1 times out, and the process returns to step ST31.
- If the second elapsed time is equal to or less than the threshold th1, the processing in mode 1 is repeated; that is, the process returns to step ST38, an image is acquired again, and the processes after step ST38 are performed.
- If the trigger Q is established in step ST40, the operation mode transitions from mode 1 to mode 2, and then the processing proceeds to step ST42.
- In step ST42, the third elapsed time is set to 0 (initialized).
- The third elapsed time is a time used to determine whether or not the processing in mode 2 has been completed within a predetermined time, in other words, whether or not it has timed out. Then, the process proceeds to step ST43.
- In step ST43, settings relating to at least one of the imaging area, the lighting (light emitting unit 6), and the resolution according to mode 2 are made, an image is captured based on those settings, and a fingerprint image is obtained. Further, feature amounts characterizing the feature points of the fingerprint image are extracted. Then, the process proceeds to step ST44.
- In step ST44, a matching process is performed to compare the obtained feature amounts with the registered feature amounts. Then, the process proceeds to step ST45.
- In step ST45, it is determined whether or not the quality is sufficient. For example, if the number of detected feature points is equal to or greater than a threshold, it is determined that the quality is sufficient. Alternatively, as a result of the matching process, if the number of feature points judged similar based on the comparison of the feature amounts lies between a threshold thA and a threshold thB (where thA > thB), it may be determined that the quality is not sufficient.
- That is, if the number of similar feature points is equal to or greater than the threshold thA (in which case fingerprint authentication is established) or equal to or less than the threshold thB (in which case fingerprint authentication is not established), it is determined that the quality is sufficient to decide the result of fingerprint authentication. If it is determined in step ST45 that the quality is not sufficient, the process proceeds to step ST46.
- In step ST46, it is determined whether the third elapsed time is greater than a threshold th2.
- the threshold th2 is set to, for example, about 10 seconds. If the third elapsed time is equal to or less than the threshold th2, the process proceeds to step ST47.
- In step ST47, since the third elapsed time is equal to or less than the threshold th2 and the timeout has not yet been reached, the mode 2 processing is continued; that is, an image is acquired again via the imaging element 8, and the processes after step ST44 are performed.
- In step ST48, it is determined whether the first elapsed time is greater than a threshold th3. If the first elapsed time is equal to or less than the threshold th3, the timeout for the entire process has not been reached, so the process returns to step ST38 and the processing related to mode 1 is performed again. If the first elapsed time is greater than the threshold th3, the timeout for the entire process has been reached, and the process returns to step ST31, the first step.
- As described above, by appropriately setting the operation mode of the wristband type electronic device 1, the power consumed by the control unit 11 and the imaging element 8 can be suppressed. Furthermore, the mode transitions can be performed without any operation on an input device.
- Depending on the application, a matching process using a low-resolution image may be performed.
- For example, when the settlement amount is small, the processing according to mode 1 is performed, and a matching process using a low-resolution image is performed.
- When the settlement amount is large, for example exceeding 1,000 yen, high security is required; therefore, the processing according to mode 2 is performed, and a matching process using a high-resolution image is performed.
- the trigger Q which is a condition for switching from the mode 1 to the mode 2 may be a condition according to the content of the application.
- The content of the trigger Q, which is the condition for switching from mode 1 to mode 2, may also be switched dynamically.
- For example, the control unit 11 acquires the remaining battery capacity (SoC: State of Charge) of the wristband type electronic device 1.
- When the remaining capacity is low, the content of the trigger Q is switched so as to be stricter (that is, the transition of the operation mode from mode 1 to mode 2 is made harder to occur).
- For example, the content of the trigger Q is set to a combination of the individual examples of the trigger Q described above.
- A configuration may also be adopted in which a separate control unit (a second control unit 11A) executes the processes related to mode 0 and mode 1. In that case, when the trigger Q is established, the second control unit 11A performs a notification process to the control unit 11, which is the higher-level host, and the control unit 11 performs the processes related to mode 2, such as the matching process.
- The control unit 11, as the higher-level host, controls the various processes of the wristband type electronic device 1 and therefore consumes a large amount of power. Activating the control unit 11 every time an image is obtained via the imaging element 8 (in other words, every time something touches the display 4) could thus increase the overall power consumption. It is therefore preferable to provide the second control unit 11A as a lower-level control unit that executes mode 0 and mode 1.
- The threshold for establishing fingerprint authentication may be changed according to the content of the application. For example, when fingerprint authentication is performed to enable a high-value payment, the required image quality may be raised, or the threshold for the matching score may be changed to a stricter value.
- The configuration of the wristband type electronic device 1 according to the above-described embodiment can be changed as appropriate. For example, a configuration without the light guide plate 5 and the light emitting unit 6 may be employed. In this case, imaging is performed using light from the display 4, specifically an OLED.
- The biological information is not limited to a fingerprint; it may be, for example, the blood vessels of a palm, the capillary blood vessels of a retina, or a combination thereof.
- The fingerprint need not be the pattern formed by the fingerprint lines of the entire fingertip; it may be a part thereof. The same applies to other biological information.
- The present disclosure can also be realized as an apparatus, a method, a program, a system, and the like. For example, a program that implements the functions described in the above embodiment can be made downloadable, and a device that does not have those functions can download and install the program to perform the control described in the embodiment. The present disclosure can also be realized by a server that distributes such a program.
- The matters described in each of the embodiments and the modified examples can be combined as appropriate.
- The present disclosure can also adopt the following configurations.
- (1) An information processing device including at least a control unit that selectively sets a first mode and a second mode in which processing that consumes more power than the first mode is performed, wherein the control unit: in the first mode, determines whether biological information is included in an image obtained through the sensor unit; with the biological information included in the image as a trigger, changes the operation mode from the first mode to the second mode; and, in the second mode, performs a matching process using at least the biological information.
- (3) The information processing device according to (2), wherein the control unit causes the light emitting unit, which emits light at a timing at which an image is captured by the sensor unit, to emit light at a first luminance in the first mode, and causes the light emitting unit to emit light at a second luminance higher than the first luminance in the second mode.
- (4) The information processing device, wherein, in the first mode, the control unit performs control to acquire the image at a first resolution and, in the second mode, performs control to acquire the image at a second resolution higher than the first resolution.
- (5) The information processing device according to any one of (2) to (4), wherein, in the first mode, the control unit performs control to acquire the image using a part of the sensor unit and, in the second mode, performs control to acquire the image using the entire sensor unit.
- (6) The information processing device, wherein the operation mode can be shifted from a third mode, whose power consumption is lower than that of the first mode, to the first mode, and the control unit changes the operation mode from the third mode to the first mode with at least one of detection of a movement of the information processing device and detection of a predetermined operation as a trigger.
- (7) The information processing device according to (6), wherein, in the third mode, the control unit turns off the light emitting unit and the sensor unit.
- (8) The information processing device according to (6) or (7), wherein a touch sensor unit that detects the predetermined operation is provided near the sensor unit.
- (9) The information processing device according to any one of (3) and (6) to (8), including the light emitting unit.
- (10) The information processing device according to any one of (1) to (9), wherein the content of the trigger is changed so that the transition from the first mode to the second mode becomes more difficult.
- (11) The information processing device according to any one of (1) to (10), wherein the control unit includes a feature point detection unit that detects a feature point from an image including biological information obtained via the sensor unit, and a feature amount extraction unit that extracts a feature amount characterizing the feature point based on a peripheral image including the feature point.
- (12) The information processing device according to any one of (1) to (11), wherein the biological information is at least one of a fingerprint and a blood vessel.
- (13) The information processing device, wherein the processing according to the first mode is performed by another control unit different from the control unit.
- (14) A wearable device including: a control unit that selectively sets at least a first mode and a second mode in which processing that consumes more power than the first mode is performed; and a sensor unit that acquires an image, wherein the control unit: in the first mode, determines whether biological information is included in an image obtained through the sensor unit; with the biological information included in the image as a trigger, changes the operation mode from the first mode to the second mode; and, in the second mode, performs a matching process using at least the biological information.
- (15) An information processing method in which a control unit selectively sets at least a first mode and a second mode in which processing that consumes more power than the first mode is performed, wherein the control unit: in the first mode, determines whether biological information is included in an image obtained through the sensor unit; with the biological information included in the image as a trigger, changes the operation mode from the first mode to the second mode; and, in the second mode, performs a matching process using at least the biological information.
- (16) A program for causing a computer to execute an information processing method in which a control unit selectively sets at least a first mode and a second mode in which processing that consumes more power than the first mode is performed, wherein the control unit: in the first mode, determines whether biological information is included in an image obtained through the sensor unit; with the biological information included in the image as a trigger, changes the operation mode from the first mode to the second mode; and, in the second mode, performs a matching process using at least the biological information.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Veterinary Medicine (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Animal Behavior & Ethology (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Physiology (AREA)
- Dentistry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Collating Specific Patterns (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Image Input (AREA)
Abstract
Provided is an information processing device comprising a control part for selectively setting at least a first mode and a second mode in which a process consuming a greater amount of power than the first mode is carried out. In the first mode, the control part determines whether biological information is included in an image obtained via a sensor part. The inclusion of biological information in the image serves as a trigger for transitioning the operation mode from the first mode to the second mode. In the second mode, the control part carries out a matching process that employs at least the biological information.
Description
The present disclosure relates to an information processing device, a wearable device, an information processing method, and a program.
Conventionally, devices that perform sensing in different modes have been known (for example, see Patent Documents 1 and 2 below).
In such a field, it is desirable to appropriately control the power consumed by sensing by transitioning between modes appropriately, thereby suppressing unnecessary power consumption.
One object of the present disclosure is to provide an information processing device, a wearable device, an information processing method, and a program that can suppress unnecessary power consumption.
The present disclosure is, for example, an information processing device including at least a control unit that selectively sets a first mode and a second mode in which processing that consumes more power than the first mode is performed, wherein the control unit: in the first mode, determines whether biological information is included in an image obtained via a sensor unit; with the biological information included in the image as a trigger, changes the operation mode from the first mode to the second mode; and, in the second mode, performs a matching process using at least the biological information.
The present disclosure is also, for example, a wearable device including: a control unit that selectively sets at least a first mode and a second mode in which processing that consumes more power than the first mode is performed; and a sensor unit that acquires an image, wherein the control unit: in the first mode, determines whether biological information is included in an image obtained via the sensor unit; with the biological information included in the image as a trigger, changes the operation mode from the first mode to the second mode; and, in the second mode, performs a matching process using at least the biological information.
The present disclosure is also, for example, an information processing method in which a control unit selectively sets at least a first mode and a second mode in which processing that consumes more power than the first mode is performed, wherein the control unit: in the first mode, determines whether biological information is included in an image obtained via the sensor unit; with the biological information included in the image as a trigger, changes the operation mode from the first mode to the second mode; and, in the second mode, performs a matching process using at least the biological information.
The present disclosure is also, for example, a program for causing a computer to execute an information processing method in which a control unit selectively sets at least a first mode and a second mode in which processing that consumes more power than the first mode is performed, wherein the control unit: in the first mode, determines whether biological information is included in an image obtained via the sensor unit; with the biological information included in the image as a trigger, changes the operation mode from the first mode to the second mode; and, in the second mode, performs a matching process using at least the biological information.
According to at least one embodiment of the present disclosure, unnecessary power consumption can be suppressed. The effects described here are not necessarily limited and may be any of the effects described in the present disclosure. The contents of the present disclosure are not to be construed as being limited by the illustrated effects.
Hereinafter, embodiments and the like of the present disclosure will be described with reference to the drawings. The description will be given in the following order.
<First embodiment>
<Second embodiment>
<Modification>
The embodiments and the like described below are preferred specific examples of the present disclosure, and the contents of the present disclosure are not limited to these embodiments and the like.
<First embodiment>
[About the wristband type electronic device]
(Example of the external appearance of the wristband type electronic device)
A first embodiment will be described. The first embodiment is an example in which the present disclosure is applied to an information processing device, more specifically to a wristband type electronic device, which is an example of a wearable device. FIG. 1 shows an example of the external appearance of the wristband type electronic device (wristband type electronic device 1) according to the first embodiment.
As shown in FIG. 1, the wristband type electronic device 1 is used, for example, like a wristwatch. More specifically, the wristband type electronic device 1 has a band portion 2 wound around the user's wrist WR and a main body portion 3. The main body portion 3 has a display 4. As will be described in detail later, the wristband type electronic device 1 according to the embodiment can perform biometric authentication using the fingerprint information of a fingertip by bringing the fingertip into contact with the display 4.
(Example of the internal structure of the wristband type electronic device)
FIG. 2 is a partial cross-sectional view illustrating an example of the internal structure of the main body portion 3 of the wristband type electronic device 1. The main body portion 3 of the wristband type electronic device 1 includes, for example, the display 4 described above, a light guide plate 5, a light emitting unit 6, a touch sensor unit 7, an imaging element 8, which is an example of the sensor unit, and a lens unit 9.
In outline, as shown in FIG. 2, a contact operation with the fingertip F is performed on the display 4, and the presence or absence of the contact is detected by the touch sensor unit 7. The main body portion 3 of the wristband type electronic device 1 has a structure in which the light guide plate 5, the display 4, the lens unit 9, and the imaging element 8 are stacked in this order from the near side toward the far side as viewed from the operation direction. Note that contact with the display 4 may include not only direct contact with the display 4 but also indirect contact via another member (for example, the light guide plate 5). Contact with the display 4 may also include not only the fingertip F touching the display 4 but also bringing the fingertip F close enough to the display 4 that a fingerprint image can be obtained.
Each component will be described below. The display 4 is composed of a liquid crystal display (LCD), an OLED (Organic Light Emitting Diode), or the like. The light guide plate 5 is, for example, a light-transmissive member that guides light from the light emitting unit 6 to an area AR where the fingertip F comes into contact. The light guide plate 5 is not limited to a transparent one; it may be any member that transmits light to the extent that the fingerprint of the fingertip F can be photographed by the imaging element 8.
The light emitting unit 6 is composed of LEDs (Light Emitting Diodes) or the like and is provided on at least a part of the periphery of the light guide plate 5. The area AR is an area including a position corresponding to the imaging element 8, specifically, a position corresponding to at least the imaging range of the imaging element 8. The light emitting unit 6 provides the light required for photographing, for example, by being turned on when a fingerprint is photographed.
The touch sensor unit 7 is a sensor that detects contact of the fingertip F with the display 4. As the touch sensor unit 7, for example, a capacitive touch sensor is applied; a touch sensor of another type, such as a resistive film type, may also be applied. In FIG. 2, the touch sensor unit 7 is locally provided near the lower part of the area AR, but the touch sensor unit 7 may be provided over substantially the entire underside of the display 4.
The imaging element 8 is composed of a CCD (Charge Coupled Device), a CMOS (Complementary Metal Oxide Semiconductor) sensor, or the like. The imaging element 8 photoelectrically converts subject light incident through the lens unit 9 (light reflected from whatever touches the display 4) into an amount of electric charge. Various kinds of subsequent processing are performed on the image signal obtained via the imaging element 8. The lens unit 9 is composed of lenses (microlenses) provided at a rate of one per several tens to several hundreds of pixels of the imaging element 8.
FIG. 3 is a diagram illustrating a more specific example of the internal structure of the wristband type electronic device 1. In the example described below, the display 4 is a transparent panel unit having a plurality of transparent light emitting elements such as transparent organic EL elements or quantum dot light emitting elements.
As shown in FIG. 3, the display 4 has an effective area 4A and an outer frame portion 4B. The display 4 functions as a display panel that displays an image in the effective area 4A by the light emission of the plurality of transparent light emitting elements. The transparent light emitting elements are arranged, for example, in a matrix within the effective area 4A. The display 4 also functions as a touch sensor that detects the touch state of an object such as a finger based on, for example, the capacitance values between a plurality of wirings for the light emitting elements. As shown in FIG. 3, a cover glass 50 is provided on the upper surface (operation side) of the display 4, and an imaging unit 60 including the imaging element 8 is arranged below a partial area of the display 4.
The imaging unit 60 is arranged below a partial area of the display 4. The imaging unit 60 has a function of imaging, via the display 4, an object that is in contact with or close to that partial area of the display 4. The object imaged by the imaging unit 60 may be, for example, a part of a living body. The imaging unit 60 may have the function of a biometric authentication device that performs biometric authentication of a part of a living body based on a captured image obtained by imaging that part. The function of the imaging unit 60 as a biometric authentication device can constitute, for example, a fingerprint sensor.
As shown in FIG. 3, the imaging unit 60 includes a microlens array module 61, an imaging unit outer frame 62, the above-described imaging element 8, and a substrate 63. The microlens array module 61 is arranged within the effective area 4A of the display 4 when viewed from above. The imaging element 8 is arranged on the substrate 63.
The microlens array module 61 is disposed between the imaging element 8 and the effective area 4A of the display 4. The microlens array module 61 includes, in order from the top, a cover glass/light guide plate 65, a microlens array 66, and a light guide plate 67.
The microlens array 66 has a plurality of microlenses arranged in a matrix. Each of the microlenses condenses object light from an object such as a finger toward the imaging element 8.
The cover glass/light guide plate 65 serves to protect the surface of the microlens array 66. It also serves to guide the object light transmitted through the effective area 4A of the display 4 to each of the plurality of microlenses. The cover glass/light guide plate 65 has a plurality of light guide paths provided at positions corresponding to the respective microlenses.
As shown in FIG. 3, the light guide plate 67 has a plurality of light guide paths 68. The light guide paths 68 are provided at positions corresponding to the respective microlenses and guide the light condensed by each microlens to the imaging element 8.
(Example of the circuit configuration of the wristband type electronic device)
FIG. 4 is a block diagram illustrating an example of the circuit configuration of the wristband type electronic device 1. In addition to the display 4, the touch sensor unit 7, the imaging element 8, and the like described above, the wristband type electronic device 1 includes, for example, a control unit 11, a wireless communication unit 12, an antenna 13 connected to the wireless communication unit 12, an NFC (Near Field Communication) communication unit 14, an antenna 15 connected to the NFC communication unit 14, a position sensor unit 16, an antenna 17 connected to the position sensor unit 16, a memory unit 18, a vibrator 19, a motion sensor 20, a voice processing unit 21, a microphone 22, and a speaker 23.
The control unit 11 is composed of, for example, a CPU (Central Processing Unit) and controls each unit of the wristband type electronic device 1. For example, the control unit 11 performs various kinds of image processing on the fingerprint image of the fingertip F captured by the imaging element 8 and performs fingerprint authentication based on the fingerprint image, the fingerprint being one kind of biological information.
The wireless communication unit 12 performs short-range wireless communication with other terminals based on, for example, the Bluetooth (registered trademark) standard. The wireless communication unit 12 performs modulation/demodulation processing, error correction processing, and the like in accordance with, for example, the Bluetooth (registered trademark) standard.
The NFC communication unit 14 performs wireless communication with a nearby reader/writer based on the NFC standard. Although not shown, power is supplied to each unit of the wristband type electronic device 1 from a battery such as a lithium-ion secondary battery. The battery may be charged wirelessly based on the NFC standard.
The position sensor unit 16 is a positioning unit that measures the current position using, for example, a system called GNSS (Global Navigation Satellite System). The data obtained by the wireless communication unit 12, the NFC communication unit 14, and the position sensor unit 16 are supplied to the control unit 11, and the control unit 11 executes control based on the supplied data.
The memory unit 18 is a collective term for a ROM (Read Only Memory) in which programs executed by the control unit 11 are stored, a RAM (Random Access Memory) used as a work memory when the control unit 11 executes the programs, a non-volatile memory for data storage, and the like. The memory unit 18 stores the feature amounts of the fingerprint of the authorized user used for fingerprint authentication (hereinafter referred to as registered feature amounts as appropriate). The registered feature amounts are initially registered, for example, when the wristband type electronic device 1 is used for the first time.
The vibrator 19 is, for example, a member that vibrates the main body portion 3 of the wristband type electronic device 1. An incoming call, the reception of an e-mail, and the like are notified by the vibration of the main body portion 3 by the vibrator 19.
The motion sensor 20 detects the movement of the user wearing the wristband type electronic device 1. As the motion sensor 20, an acceleration sensor, a gyro sensor, an electronic compass, a barometric pressure sensor, a biosensor that detects blood pressure, pulse, and the like are used. A pressure sensor or the like for detecting whether the user is wearing the wristband type electronic device 1 may also be provided on the back side (the side facing the wrist) of the band portion 2 or the main body portion 3.
The microphone 22 and the speaker 23 are connected to the voice processing unit 21, and the voice processing unit 21 processes calls with the other party connected by wireless communication via the wireless communication unit 12. The voice processing unit 21 can also perform processing for voice input operations.
Since the display 4, the touch sensor unit 7, and the like have already been described, redundant description is omitted.
The configuration example of the wristband type electronic device 1 has been described above. Of course, the wristband type electronic device 1 is not limited to the above configuration example; it may have a configuration lacking a part of the above-described configuration, or a configuration to which other components are added.
[About the control unit]
FIG. 5 is a functional block diagram for explaining an example of the functions of the control unit 11. The control unit 11 includes a preprocessing unit 11a, a feature point detection unit 11b, a feature amount extraction unit 11c, and a matching processing unit 11d.
The preprocessing unit 11a performs various correction processes on the input fingerprint image. Details of the processing performed by the preprocessing unit 11a will be described later.
The feature point detection unit 11b detects the feature points of a fingerprint from an image including the fingerprint by applying a known method. The feature points of a fingerprint are, for example, the end points and branch points in the pattern drawn by the fingerprint lines, as shown in FIG. 6, as well as the intersections and isolated points of the fingerprint lines described later; they are the characteristic locations needed to collate fingerprints. In the present embodiment, a fingerprint line is described as a ridge of the fingerprint, but it may be at least one of a ridge and a valley of the fingerprint.
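The text only states that a known method is applied; one standard choice is the crossing-number method, sketched below for a thinned binary ridge image. The function name and input format are assumptions chosen for illustration.

```python
import numpy as np

def detect_minutiae(skeleton: np.ndarray):
    """Detect fingerprint feature points (ridge end points and branch points)
    on a thinned binary ridge image using the crossing-number method.
    `skeleton` is a 2-D 0/1 array with one-pixel-wide ridges."""
    endings, bifurcations = [], []
    # the 8 neighbours of a pixel, listed in cyclic order
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    rows, cols = skeleton.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if skeleton[r, c] != 1:
                continue
            nb = [int(skeleton[r + dr, c + dc]) for dr, dc in offsets]
            # crossing number: half the number of 0/1 transitions around the pixel
            cn = sum(abs(nb[i] - nb[(i + 1) % 8]) for i in range(8)) // 2
            if cn == 1:
                endings.append((r, c))        # end point of a fingerprint line
            elif cn == 3:
                bifurcations.append((r, c))   # branch point of a fingerprint line
    return endings, bifurcations
```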
The feature amount extraction unit 11c extracts a feature amount characterizing each feature point detected by the feature point detection unit 11b. Examples of the feature amount include the position of the feature point and the direction of the feature line (for example, the direction (vector) relative to a predetermined direction defined by the ridges). In the present embodiment, the feature amount extraction unit 11c extracts the feature amount of a feature point based on a peripheral image including that feature point. As the peripheral image, for example, an image of 3 mm x 3 mm centered on the feature point, cut out and normalized by angle, is applied, but the peripheral image is not limited to this. One effect of extracting the feature amount after normalizing by angle is that even if the orientation of the photographed finger differs between registration and verification, the extracted feature amount is less likely to change; in other words, robustness against the angle at which the finger is placed is improved. By using such a peripheral image, information around the feature point can be included in the feature amount. For example, when sweat glands are present around a feature point, the positions of the sweat glands relative to that feature point can be used as its feature amount. Thus, in the present embodiment, at least one of the position of the feature point, the direction of the feature point, and the positions of the sweat glands is used as the feature amount. In particular, when a high-resolution image of 1000 ppi or more is used, an individual can be sufficiently identified even with a small number of feature points (for example, one or two), so the embodiment according to the present disclosure can be said to be a method suited to fingerprint matching over a small area, which does not necessarily require photographing the fingerprint over a wide area of the finger.
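As a rough sketch of the peripheral-image idea, the following code cuts a 3 mm x 3 mm patch around a feature point and normalizes it by the local ridge angle. The 1000 ppi resolution matches the figure mentioned in the text; the border handling and interpolation are illustrative choices.

```python
import numpy as np
from scipy import ndimage

PPI = 1000                              # assumed resolution (the text mentions 1000 ppi)
MM_PER_PX = 25.4 / PPI                  # millimetres per pixel
PATCH_PX = int(round(3.0 / MM_PER_PX))  # 3 mm x 3 mm peripheral image, as in the text

def feature_patch(image: np.ndarray, point, ridge_angle_deg: float) -> np.ndarray:
    """Cut out the peripheral image around a feature point and normalise it by
    the local ridge angle, so that the extracted feature amount changes little
    with the angle at which the finger is placed."""
    r, c = point
    big = PATCH_PX                      # crop a larger window so rotation does not clip
    crop = image[max(0, r - big):r + big, max(0, c - big):c + big].astype(float)
    rotated = ndimage.rotate(crop, -ridge_angle_deg, reshape=False, order=1)
    cy, cx = rotated.shape[0] // 2, rotated.shape[1] // 2
    half = PATCH_PX // 2
    return rotated[cy - half:cy + half, cx - half:cx + half]
```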
The matching processing unit 11d performs a matching process that collates the feature amounts extracted by the feature amount extraction unit 11c with registered feature amounts registered in advance, and outputs a matching score as the result. If the matching score is equal to or greater than a threshold, fingerprint authentication is established, that is, the user is determined to be an authorized user. Conversely, if the matching score is smaller than the threshold, fingerprint authentication is not established. The result of the matching process may be notified to the user by display, sound, vibration, or the like. When authentication is established as a result of the matching process, use according to the application becomes possible; for example, use of a predetermined function of the wristband type electronic device 1 is permitted. In the present embodiment, the registered feature amounts are described as being stored in the memory unit 18, but they may instead be stored in an external device such as a server device on a cloud, and the registered feature amounts may be downloaded from the external device when fingerprint authentication is performed. In such a configuration, from the viewpoint of improving security, the registered feature amounts may be automatically deleted from the wristband type electronic device 1 after the matching process is completed.
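A minimal sketch of the matching step follows. The feature representation and the cosine-similarity scoring are assumptions chosen for illustration, and the threshold value is a placeholder, since the patent does not fix the score definition.

```python
import numpy as np

def matching_score(features, registered_features) -> float:
    """Toy matching: each feature is a fixed-length vector (for instance derived
    from the angle-normalised peripheral patch). For every registered feature,
    take the best cosine similarity over the candidate features and average."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    best_per_registered = [max(cos(f, g) for f in features) for g in registered_features]
    return sum(best_per_registered) / len(best_per_registered)

def authenticate(features, registered_features, threshold: float = 0.8) -> bool:
    """Authentication is established when the matching score reaches the
    threshold, as the text describes; 0.8 is an arbitrary placeholder."""
    return matching_score(features, registered_features) >= threshold
```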
[About the preprocessing unit]
Next, the preprocessing unit 11a will be described. FIG. 7 is a functional block diagram illustrating an example of the functions of the preprocessing unit 11a. As components that execute the functions included in the correction processing, the preprocessing unit 11a includes, for example, a noise removal unit 101, a ridge estimation image generation unit 102 as an image generation unit, and a certainty map generation unit 103.
(About the noise removal unit)
The noise removal unit 101 removes noise included in the fingerprint image. FIGS. 8A to 8D are diagrams for explaining the noise removal processing performed by the noise removal unit 101. The left side of FIG. 8A shows a fingerprint image IM1A in which dust NA appears. The noise removal unit 101 determines, for example, that a region in which the change in luminance value between adjacent pixels is equal to or greater than a predetermined value is dust, and removes the dust NA by performing interpolation processing or the like using the pixels surrounding the dust NA. Using the image from which the dust NA has been removed from the fingerprint image IM1A, a ridge estimation image IM2A as shown on the right side of FIG. 8A is generated by the ridge estimation image generation unit 102. Note that other known processes can be applied as the process for removing noise such as dust; the same applies to the processes for removing noise other than dust described below.
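The following sketch illustrates the dust-removal idea just described (sharp luminance change between adjacent pixels marks dust, which is then filled from surrounding pixels). It is deliberately naive: ridge edges also produce sharp changes, so a real implementation would need an additional test, and the threshold and filter sizes here are assumptions.

```python
import numpy as np
from scipy import ndimage

def remove_dust(image: np.ndarray, grad_threshold: float = 60.0) -> np.ndarray:
    """Mark regions whose luminance changes sharply between adjacent pixels as
    dust, then fill the marked pixels from their surroundings (here with a
    median of the neighbourhood)."""
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)                      # luminance change between neighbours
    dust_mask = np.hypot(gx, gy) > grad_threshold  # candidate dust pixels
    dust_mask = ndimage.binary_dilation(dust_mask, iterations=1)
    filled = ndimage.median_filter(img, size=5)    # simple surround-based interpolation
    out = img.copy()
    out[dust_mask] = filled[dust_mask]
    return out
```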
The noise removal unit 101 also removes fixed pattern noise, which is noise other than dust. The left side of FIG. 8B shows a fingerprint image IM1B in which, for example, fixed pattern noise NB in the form of vertical stripes appears. An example of such fixed pattern noise NB is the structure of the display 4, more specifically, a pattern that the display 4 itself has.
In the structure of the wristband type electronic device 1 according to the present embodiment, the imaging element 8 is arranged behind the display 4 with respect to the operation direction. For this reason, the pattern of the display 4 may appear as fixed pattern noise NB in the fingerprint image obtained via the imaging element 8. However, since the noise removal unit 101 removes such fixed pattern noise NB and interpolates the locations of the noise NB, a decrease in the accuracy of fingerprint authentication can be prevented even with the structure of the wristband type electronic device 1 according to the present embodiment. Using the image from which the fixed pattern noise NB has been removed from the fingerprint image IM1B, a ridge estimation image IM2B as shown on the right side of FIG. 8B is generated by the ridge estimation image generation unit 102.
The noise removal unit 101 also removes the boundaries of the imaging elements, which are noise other than dust. For example, assume that the imaging element 8 has four imaging elements as a plurality of sub-sensor units and is configured by combining those four imaging elements. When an imaging element 8 of a certain size is required by the specifications, forming the required size by combining imaging elements of existing sizes is more advantageous in terms of manufacturing cost and the like than separately manufacturing an imaging element of a new size.
However, when the imaging element 8 has a structure in which a plurality of imaging elements are combined, the boundaries between the imaging elements appear as noise NC in the fingerprint image IM1C, as shown on the left side of FIG. 8C. Since the noise removal unit 101 removes such noise NC and interpolates the locations of the noise NC, a decrease in the accuracy of fingerprint authentication can be prevented even with the structure of the wristband type electronic device 1 according to the present embodiment. Using the image from which the noise NC has been removed from the fingerprint image IM1C, a ridge estimation image IM2C as shown on the right side of FIG. 8C is generated by the ridge estimation image generation unit 102.
Furthermore, when something completely different from a fingerprint appears as noise in the image obtained via the imaging element 8, the noise removal unit 101 removes it. For example, as shown on the left side of FIG. 8D, something different from a fingerprint (in the illustrated example, something with a maze-like pattern) appears in the image IM1D as noise ND. The noise removal unit 101 determines, for example, that anything containing no curved pattern corresponding to ridges is something other than a fingerprint, and removes that pattern. The image IM2D after removal is shown on the right side of FIG. 8D. Such processing is useful, for example, when the user's clothes or the like touch the display 4 during fingerprint authentication. Since no fingerprint appears in the image IM2D, the processing related to fingerprint authentication may be skipped in the case of the image IM2D.
As described above, performing the correction processing by the noise removal unit 101 can prevent the accuracy of fingerprint authentication from being degraded by the influence of noise. It can also prevent the user from receiving authentication-failure feedback caused by degraded fingerprint authentication accuracy.
(About the ridge estimation image generation unit)
Next, the ridge estimation image generation unit 102 will be described. The ridge estimation image generation unit 102 generates a ridge estimation image, in which the pattern based on the fingerprint lines is estimated, based on the image processed by the noise removal unit 101. A known method can be applied to generate the ridge estimation image. Examples of the method for generating the ridge estimation image according to the present embodiment will be described.
As example 1, the ridge estimation image generation unit 102 applies an FFT (Fast Fourier Transform) to the image processed by the noise removal unit 101 and generates the ridge estimation image by applying a band-pass filter around the frequency corresponding to the average period of the fingerprint lines (for example, a period of 0.4 mm).
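A minimal sketch of example 1 follows, assuming a 1000 ppi sensor so that the 0.4 mm ridge period can be converted into a spatial frequency; the width of the pass band is an illustrative assumption.

```python
import numpy as np

def ridge_bandpass(image: np.ndarray, mm_per_px: float = 25.4 / 1000,
                   period_mm: float = 0.4, rel_band: float = 0.5) -> np.ndarray:
    """Keep only the spatial frequencies around the average fingerprint-line
    period (about 0.4 mm) in the 2-D FFT domain and transform back."""
    img = image.astype(np.float64) - image.mean()
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0], d=mm_per_px))  # cycles/mm
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1], d=mm_per_px))
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    f0 = 1.0 / period_mm                 # about 2.5 cycles/mm for a 0.4 mm period
    mask = (radius > f0 * (1 - rel_band)) & (radius < f0 * (1 + rel_band))
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
```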
As another example, example 2, the ridge estimation image generation unit 102 uses an FFT for each surrounding 1 mm square region to extract the frequency dominant in that region (hereinafter referred to as the main frequency as appropriate) and the wave angle (the flow direction of the fingerprint), and generates the ridge estimation image by applying a Gabor filter matched to that frequency and angle. According to the two examples described above, the main ridges/valleys are emphasized, and the influence of small noise can be reduced.
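The Gabor-filter step of example 2 might look like the following sketch, where the main frequency and wave angle per block are assumed to be estimated elsewhere (see the passage after this one); the kernel size and sigma are illustrative.

```python
import numpy as np
from scipy import ndimage

def gabor_kernel(freq_cpp: float, theta: float, sigma: float = 4.0, size: int = 21):
    """Real-valued Gabor kernel tuned to a spatial frequency (cycles/pixel) and
    a wave direction theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the wave direction
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * freq_cpp * xr)

def enhance_block(block: np.ndarray, freq_cpp: float, theta: float) -> np.ndarray:
    """Apply the Gabor filter matched to the block's main frequency and wave
    angle. Splitting the image into 1 mm-square blocks and estimating
    freq_cpp / theta per block are handled elsewhere."""
    kernel = gabor_kernel(freq_cpp, theta)
    return ndimage.convolve(block.astype(np.float64), kernel, mode="reflect")
```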
An example of a method for detecting the flow direction and the main frequency of the fingerprint in example 2 will be described with reference to FIG. 9. The image IM8 shown on the left side of FIG. 9A is a certain fingerprint image. The right side of FIG. 9A shows the frequency spectrum obtained by applying the FFT to the image IM8. One of the radial lines superimposed on the frequency spectrum indicates the component with the largest integral value, described later. FIG. 9B shows the frequency profile in the direction of the component with the largest integral value (the main direction).
As the first step, profiles are extracted for 16 directions of the frequency spectrum, and the direction with the largest integral value is determined; this is the main directional component of the wave. As the second step, the peak value is detected from the frequency profile in the main direction, and the frequency corresponding to that peak value is taken as the main frequency. In this way, the flow direction and the main frequency of the fingerprint can be detected.
Note that the ridge estimation image generation unit 102 according to the present embodiment estimates the fingerprint pattern over a predetermined range extended beyond the photographed region. For example, based on the fingerprint image IM9A shown in FIG. 10A, a ridge estimation image IM9B enlarged to a range larger than the fingerprint image IM9A by a predetermined size is generated, as shown in FIG. 10B. For example, the fingerprint lines obtained at the original size (the size of the fingerprint image IM9A) are extended along the flow (direction) of those fingerprint lines. By such processing, it may be possible to obtain positions where a given fingerprint line meets another fingerprint line, in other words, the branch points and intersections of fingerprint lines, which are among the fingerprint feature points described above. With the above processing, more feature points can be obtained even when, for example, the imaging element 8 is small and the image area obtainable via the imaging element 8 is limited, and the accuracy of fingerprint authentication can thus be improved.
(About the certainty map generation unit)
Next, the certainty map generation unit 103 will be described. The certainty map generation unit 103 generates a certainty map indicating, for the regions of the ridge estimation image (the image in which the pattern corresponding to the fingerprint is estimated), how reliable the estimation result is.
FIG. 11 shows a certainty map MA10, which is an example of the certainty map. In the certainty map MA10 shown in FIG. 11, the image area is divided into white and black regions. In this example, the white regions are regions with high certainty, that is, regions where the pattern of the fingerprint lines is considered to be obtained accurately, while the black regions are regions with low certainty. A predetermined threshold is set for the certainty: a region is set as a high-certainty region if its certainty is equal to or greater than the threshold, and as a low-certainty region if its certainty is below the threshold.
Here, an example of how the certainty is computed will be described. For example, an image of a predetermined size (for example, a rectangular image of 1 mm x 1 mm) is cut out of the image, and a luminance distribution indicating the distribution of the luminance values of its pixels is created.
FIG. 12 shows an example of the luminance distribution. Let BV1 be the luminance value at which the luminance frequency distribution Pi (i = 0, ..., 255), integrated from the bottom, reaches 10%, that is,

Σ(i=0 to BV1) Pi = 0.1,

so that BV1 is the 10th-percentile luminance value. Likewise, let BV2 be the luminance value at which the distribution integrated from the top reaches 10%, that is,

Σ(i=BV2 to 255) Pi = 0.1,

so that BV2 is the 90th-percentile luminance value. The difference value D = BV2 - BV1 between the two is set as the certainty: a patch with strong ridge and valley contrast yields a large D, while a flat patch yields a small one. Alternatively, the variance of the luminance values of the cut-out patch may be used as the certainty. A sketch of this computation follows.
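As a concrete illustration, the following minimal Python sketch computes the certainty D of a cut-out patch exactly as defined above; the patch contents and sizes are illustrative assumptions.

import numpy as np

def certainty_from_patch(patch):
    # Luminance frequency distribution Pi over 8-bit values 0..255.
    hist, _ = np.histogram(patch, bins=256, range=(0, 256))
    p = hist / hist.sum()
    cdf = np.cumsum(p)
    bv1 = int(np.searchsorted(cdf, 0.1))   # 10th-percentile luminance BV1
    bv2 = int(np.searchsorted(cdf, 0.9))   # 90th-percentile luminance BV2
    return float(bv2 - bv1)                # difference value D

rng = np.random.default_rng(0)
flat = rng.integers(120, 136, size=(32, 32))    # low-contrast patch: small D
ridged = rng.integers(0, 256, size=(32, 32))    # high-contrast patch: large D
print(certainty_from_patch(flat), certainty_from_patch(ridged))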
Note that the function of the ridge estimation image generation unit 102 and the function of the certainty map generation unit 103 described above may be combined into a single functional block, with that block generating a ridge estimation image with certainty.
An example of a method of generating a ridge estimation image with certainty will be described with reference to FIG. 13. As shown in FIG. 13A, a ridge estimation image f(x) is obtained by applying a function h and then a function f' to an input fingerprint image x (where f(x) = f'(h(x))).
In addition, a Certainty image g(x), which serves as the certainty map, is obtained by applying the function h and a function g' to the input fingerprint image x (where g(x) = g'(h(x))). In the Certainty image g(x), white regions are regions expected to be recognized with an error of α or less, while black regions are regions whose error cannot be kept within α.
Here, the ridge image that is exactly correct for the input fingerprint image x is called the ground-truth ridge image y, and the estimation error between y and the ridge estimation image f(x) is called the estimation error dy.
One purpose of the processing is to estimate from x an image f(x) that is close to y. Another purpose is to recognize which regions are likely to be estimated correctly, in other words, to determine whether a region is one whose estimation error can be kept at or below α.
Here, the control unit 11 simultaneously learns functions f and g that minimize the loss function shown in FIG. 13B (where 0 ≤ g(x) ≤ 1). The bracketed portion of the loss function shown in FIG. 13B is the estimation error dyi.
Because the quantity gi(x)·dyi + (1 - gi(x))·α is minimized, pixels whose valley-line estimation error is expected to satisfy dyi < α are pushed toward a larger Certainty gi(x), while pixels whose error is expected to satisfy dyi > α are pushed toward a smaller Certainty gi(x). As a result, minimizing gi(x)·dyi + (1 - gi(x))·α optimizes Certainty gi(x) so that it indicates how likely the valley-line estimation error is to satisfy dyi < α. A numeric sketch of this loss is given below.
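The following numeric sketch in Python evaluates this loss for a fixed error map, assuming dyi = |fi(x) - yi|; the joint learning of f and g itself would be done with an automatic differentiation framework and is not shown here.

import numpy as np

def certainty_loss(f_x, y, g_x, alpha):
    # Mean over pixels i of gi(x)*dyi + (1 - gi(x))*alpha.
    dy = np.abs(f_x - y)                  # per-pixel estimation error dyi
    return float(np.mean(g_x * dy + (1.0 - g_x) * alpha))

f_x = np.array([0.10, 0.90]); y = np.array([0.15, 0.20])   # dy = 0.05, 0.70
alpha = 0.2
print(certainty_loss(f_x, y, np.array([1.0, 0.0]), alpha))  # 0.125
print(certainty_loss(f_x, y, np.array([0.0, 1.0]), alpha))  # 0.450

As the two evaluations show, the loss is smaller when g is large exactly where dy < α and small where dy > α, which is the optimization behavior described above.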
[Processing flow]
(About the registration process)
Next, the flow of processing performed in the wristband type electronic device 1 will be described. First, the registration process that registers the feature amounts corresponding to the feature points of a fingerprint will be described with reference to FIG. 14. FIG. 14A shows the flow of the registration process, and FIG. 14B shows the images and other data obtained in each step, associated with the corresponding step.
In step ST11, an image input process is performed. For example, a fingertip touches the display 4 and a fingerprint image is acquired via the image sensor 8. The light emitting unit 6 emits light while the fingerprint image is being captured. Then, the process proceeds to step ST12.
In step ST12, preprocessing is performed by the preprocessing unit 11a. Specifically, the noise removal unit 101 removes noise from the fingerprint image, the ridge estimation image generation unit 102 generates a ridge estimation image based on the denoised fingerprint image, and the certainty map generation unit 103 generates a certainty map. Note that the certainty map is omitted from FIG. 14B. Then, the process proceeds to step ST13.
In step ST13, the feature point detection unit 11b detects the feature points of the fingerprint based on the ridge estimation image. In the present embodiment, the feature point detection unit 11b refers to the certainty map and detects feature points only within regions whose certainty is determined to be at or above a certain level. FIG. 14B shows an example in which three feature points (the centers of the circled locations) are detected. Then, the process proceeds to step ST14.
In step ST14, the feature amount extraction unit 11c extracts a feature amount characterizing each feature point. As described above, the feature amount extraction unit 11c cuts out an image of a predetermined size centered on each feature point and extracts the feature amount from the cut-out image. Then, the process proceeds to step ST15.
In step ST15, the control unit 11 performs a template registration process that registers the feature amount of each feature point extracted in step ST14. The feature amounts are stored, for example, in the memory unit 28, and the stored feature amounts are used as registered feature amounts in the matching process described next.
(About the matching process)
Next, the matching process will be described with reference to FIG. 15. FIG. 15A shows the flow of the matching process, and FIG. 15B shows examples of the feature amounts acquired in each step, together with figures referred to when explaining the processing, associated with the corresponding step.
In step ST21, a fingertip is placed on the display 4 and a fingerprint image is acquired. A feature amount extraction process is then performed; this process in step ST21 comprises steps ST11 to ST14 described above. Through step ST21, the feature amounts used for collation in fingerprint authentication are obtained. FIG. 15B shows feature amounts corresponding to five feature points. Then, the process proceeds to step ST22.
In step ST22, the control unit 11 reads the registered feature amounts from the memory unit 28. FIG. 15B shows an example of the registered feature amounts. Then, the process proceeds to step ST23.
In step ST23, the matching processing unit 11d performs a matching process that collates the feature amounts acquired in step ST21 against the registered feature amounts read in step ST22.
An example of the matching process will be described. The matching processing unit 11d computes a similarity score between each collation feature amount and each registered feature amount by an inner product, and from the results generates the similarity score matrix shown in FIG. 15B. In the similarity score matrix, "A" denotes the registered feature points and "B" denotes the feature points for collation; the (i, j) component is the similarity score between Ai and Bj.
The matching processing unit 11d calculates a collation score based on the similarity score matrix; if the collation score is equal to or greater than a threshold, fingerprint authentication succeeds, and if it is below the threshold, authentication fails. For example, the maximum value of the similarity score matrix may be used as the collation score; the mean of the matrix, or the mean of the maximum value of each column, may be used instead (see the sketch below).
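As an illustration of this step, the following Python sketch builds the similarity score matrix by inner products and applies one of the collation scores named above (the mean of each column's maximum); the feature dimensionality, the vectors, and the threshold are all illustrative assumptions, not values from the specification.

import numpy as np

def collation_score(registered, query):
    # registered: (m, d) template vectors A; query: (n, d) collation vectors B.
    # S[i, j] is the inner-product similarity score of Ai and Bj.
    S = registered @ query.T
    return float(S.max(axis=0).mean())    # mean of each column's maximum

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 64)); A /= np.linalg.norm(A, axis=1, keepdims=True)
B = rng.normal(size=(5, 64)); B /= np.linalg.norm(B, axis=1, keepdims=True)
threshold = 0.9                           # illustrative value
print("authentication established:", collation_score(A, B) >= threshold)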
According to the first embodiment described above, feature amounts are extracted from the image surrounding each feature point, so information beyond the feature point itself can serve as the feature amount of that point. Performing the matching process with such feature amounts allows matching based on richer information, which makes it possible to improve the accuracy of fingerprint authentication.
<Second embodiment>
Next, a second embodiment will be described. Note that the items described in the first embodiment (for example, the configuration and functions of the wristband type electronic device 1) also apply to the second embodiment unless otherwise specified.
As described in the first embodiment, the wristband type electronic device 1 performs fingerprint authentication using the image sensor 8. In general, sensing with an image sensor (more specifically, a CMOS sensor) consumes more power than sensing by other methods (for example, the capacitive method). Although a battery with a capacity matched to the required power could be used, a wearable device constrains the size, and hence the capacity, of the battery it can carry, so it is desirable to suppress unnecessary power consumption as much as possible. A wearable device also constrains the number and size of input devices such as buttons, so it is desirable that the control for suppressing unnecessary power consumption be executed without relying on an operation of a physical device such as a button as its trigger. The second embodiment is described in detail with these considerations in mind.
[About state transitions]
FIG. 16 is a diagram illustrating the state transitions of the wristband type electronic device 1. As its operation modes related to fingerprint authentication, the wristband type electronic device 1 can transition among, for example, three modes: mode 0, mode 1, and mode 2. In terms of power consumption, mode 0 consumes the least power and mode 2 the most; the consumption of mode 1 lies between that of mode 0 and that of mode 2. Note that mode 0 corresponds to an example of the third mode, and modes 1 and 2 correspond to examples of the first and second modes, respectively.
[Operation in each mode]
Next, an example of the operation in each mode will be described. In each mode, it is not necessary for all of the operations described below to be performed; it suffices that at least one of them is performed.
(Overview of the operations)
Mode 0 is an idle state: the light emitting unit 6 is turned off and the image sensor 8 is not operated, that is, no fingerprint sensing using the image sensor 8 is performed.
Mode 1 is a standby state: the light emitting unit 6 is turned on and fingerprint sensing using the image sensor 8 is performed. The sensing in mode 1 only needs to be good enough to determine whether what is touching the display 4 is a fingerprint; more specifically, it suffices to acquire an image from which it can be judged whether a fingerprint (for example, fingerprint feature points) is present.
Mode 2 is an authentication state: the light emitting unit 6 is turned on, the feature amounts of the fingerprint are acquired, and a matching process that collates the acquired feature amounts against the registered feature amounts is performed. In mode 2, images are acquired via the image sensor 8 using settings different from those of mode 1.
In mode 1, for example, when fingerprint feature points are detected in the image and the object touching the display 4 is judged to be a fingertip, the operation mode transitions to mode 2, which consumes more power. This mode transition prevents the power-hungry matching process from running unnecessarily even when something other than a fingertip, such as clothing, touches the display 4, and thereby suppresses the drain on the battery. A condensed sketch of this state machine is given below.
(Specific examples of the operation in each mode)
Specific examples of the operation in each mode will now be described. Since mode 0 is a mode in which no processing related to fingerprint authentication is performed, the following description covers specific examples of the operations in mode 1 and mode 2.
As a first example, the control unit 11 performs illumination control that adjusts the brightness of the light emitting unit 6, and each mode operates according to that control. In mode 1 it suffices to obtain the fingerprint feature points, so the brightness (luminance) of the light emitting unit 6 is set relatively low. In mode 2, by contrast, the matching process requires feature amounts such as sweat gland positions to be extracted from the image around each feature point, so the light emitting unit 6 is made brighter than in mode 1 in order to obtain a high-definition image. Since the amount of light reflected from the fingertip varies with the finger's condition and how firmly it is pressed, the emission intensity of the light emitting unit 6 may further be adjusted adaptively based on the luminance of the image, for example as sketched below.
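A minimal sketch of such an adaptive adjustment, assuming the drive level is a value in [0, 1]; the target mean luminance, gain, and limits are illustrative values, not from the specification.

def adjust_led_level(level, mean_luminance, target=128.0, gain=0.1):
    # Nudge the LED drive level toward a target mean image luminance.
    level += gain * (target - mean_luminance) / 255.0
    return min(max(level, 0.0), 1.0)

level = 0.5
level = adjust_led_level(level, mean_luminance=60.0)   # dark image: brighten
print(level)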
As a second example, the control unit 11 performs resolution control that changes the resolution by controlling which pixels in the image sensor 8 are active, and each mode operates according to that control. In mode 1, a low resolution is set and sensing is performed at that low resolution; here, low resolution means a resolution of about 300 to 500 ppi (pixels per inch), at which fingerprint feature points can be detected. In mode 2, a high resolution is set and sensing is performed at that high resolution; high resolution means a resolution of about 1000 ppi or more, at which features finer than the fingerprint lines, such as sweat glands, can be captured.
As a third example, the control unit 11 performs sensing area control that restricts the imaging range by controlling which region of pixels in the image sensor 8 is active. In mode 1, sensing uses only part of the image sensor 8 (for example, only the vicinity of its center), while in mode 2, sensing uses the entire image sensor 8.
Controls combining the above examples may also be performed. For example, in mode 1 the entire image sensor 8 may be used at low resolution to detect the fingerprint feature points, and in mode 2 only the regions near the detected feature points may be sensed at high resolution.
[Transitions between modes]
Next, the transitions between modes will be described. The operation mode transitions according to predetermined triggers, the passage of time, processing results, and so on. As shown in FIG. 16, the device transitions from mode 0 to mode 1 based on trigger P, and from mode 1 to mode 2 based on trigger Q.
In the processing of mode 1, more specifically in the image-based discrimination processing, if what touched the display 4 is not a fingerprint, the operation mode transitions from mode 1 back to mode 0. The same transition occurs when the device has remained in mode 1 for a predetermined time (timeout).
If the device has remained in mode 2 for a predetermined time, the operation mode transitions from mode 2 to mode 1 (timeout). When the matching process in mode 2 finishes and a fingerprint authentication result is obtained, the operation mode transitions from mode 2 to mode 0.
The operation mode may also be allowed to transition directly from mode 0 to mode 2, for example based on a trigger R. An example of trigger R is an operation input that explicitly instructs fingerprint authentication to be performed; in that case it is clear in advance that fingerprint authentication will take place, so the operation mode may transition directly from mode 0 to mode 2.
[Specific examples of the triggers]
Next, specific examples of the triggers will be described. A specific example of trigger R has already been given, so its description is not repeated.
(Specific examples of trigger P)
One example of trigger P is the timing at which the start of use of the wristband type electronic device 1 is detected. At the moment use of the device begins, it is likely that fingerprint authentication will be performed in order to run some application, so the operation mode transitions from mode 0 to mode 1.
As a more specific example of trigger P, as shown in FIGS. 17A and 17B, the trigger may fire when the waveform of an acceleration sensor (the sensor's output) or the change in that output crosses a threshold. In this case, the wristband type electronic device 1 is likely to be used, so the operation mode transitions from mode 0 to mode 1. Note that the acceleration sensor can be provided as one of the motion sensors 20. A sketch of this check is given below.
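A sketch of this check in Python, assuming a window of 3-axis samples; the threshold value is illustrative.

import numpy as np

def trigger_p_acceleration(samples, threshold=0.5):
    # samples: (n, 3) accelerometer readings. Fire when the sample-to-sample
    # change in acceleration magnitude reaches the threshold.
    magnitude = np.linalg.norm(samples, axis=1)
    return bool(np.abs(np.diff(magnitude)).max() >= threshold)

still = np.tile([0.0, 0.0, 9.8], (50, 1))
print(trigger_p_acceleration(still))   # False: no change while at rest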
As another specific example of trigger P, as shown in FIG. 18, the trigger may fire when the direction of the composite vector of the three-axis acceleration (the gravity direction) changes by more than a threshold. In the motion sensor 20 of the wristband type electronic device 1, a sensor output is defined for each axis; examples of the axes for the device are shown in FIGS. 19A and 19B. The three-axis acceleration is represented as a three-dimensional vector, and if its direction changes, it is judged that the orientation of the hand has changed. In this case too, some action involving the device, including fingerprint authentication, is likely to follow, so the operation mode transitions from mode 0 to mode 1. A sketch of this direction test follows.
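A sketch of the direction test, assuming the composite acceleration vectors before and after the movement approximate the gravity direction; the angle threshold is an illustrative value.

import numpy as np

def gravity_direction_changed(v0, v1, angle_threshold_deg=30.0):
    # Fire when the angle between the two composite 3-axis acceleration
    # vectors reaches the threshold.
    cos = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return angle >= angle_threshold_deg

print(gravity_direction_changed(np.array([0.0, 0.0, 9.8]),
                                np.array([0.0, 9.8, 0.0])))   # True: 90 degrees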
For yet another specific example of trigger P, refer to FIG. 20. As shown in FIG. 20A, a window is set so as to contain the portion where the output of the acceleration sensor changed by more than a threshold. The acceleration sensor output within this window is fed to the recognizer shown schematically in FIG. 20B, which determines whether a predetermined gesture occurred by applying a function f to the sensor output. The recognizer's judgment is obtained as shown in FIG. 20C, and trigger P fires when the resulting score f(x), which indicates how closely the input resembles the defined gesture, is equal to or greater than a threshold. When a gesture made while wearing the wristband type electronic device 1 is detected, some action involving the device, including fingerprint authentication, is likely to follow, so the operation mode transitions from mode 0 to mode 1.
Trigger P may also fire when a fingertip touching the display 4, or a touching fingertip moving across it, is detected; the contact or movement of any object, not just a fingertip, may likewise be set as trigger P. FIGS. 21A and 21B schematically show the positions of the image sensor 8 and the touch sensor unit 7 relative to the display 4; the touch sensor unit 7, which detects the contact and movement of objects, is arranged, for example, in the vicinity of the image sensor 8.
Specific examples of trigger P have been described above, but trigger P is not limited to these; various conditions can be set as trigger P, including combinations of the examples above.
(Specific examples of trigger Q)
Next, specific examples of trigger Q will be described. Trigger Q is, for example, a trigger whose condition is that a fingerprint is contained in the image acquired via the image sensor 8.
As shown in FIGS. 22A to 22D, periods that are plausible for fingerprint lines (here, ridges and valleys) are set: FIG. 22A shows an example with a 0.6 mm period, FIG. 22B a 0.3 mm period, FIG. 22C a 0.15 mm period, and FIG. 22D a 0.075 mm period. The frequency components corresponding to these periods are extracted from the image obtained via the image sensor 8. For each frequency component, responses are then computed for, for example, 32 angle patterns (in 11.25 degree steps) as shown in FIG. 23, and their average is taken. If the average for at least one of the four frequency components is equal to or greater than a threshold, what appears in the image is highly likely to be a fingerprint, so that condition is set as trigger Q. A simplified sketch of this test is given below.
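The following simplified Python sketch approximates this test with an isotropic FFT band-energy check in place of the 32 oriented responses of FIG. 23, so it is an approximation of the described method rather than a faithful implementation; the relative threshold is an illustrative assumption.

import numpy as np

def trigger_q_frequency(img, ppi, rel_threshold=3.0):
    # Fire when the spectrum holds strong energy at any plausible ridge
    # period (0.6, 0.3, 0.15, 0.075 mm) relative to the mean spectral energy.
    f = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    h, w = img.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    radius = np.hypot(yy / h, xx / w)        # spatial frequency, cycles/pixel
    px_per_mm = ppi / 25.4
    for period_mm in (0.6, 0.3, 0.15, 0.075):
        target = 1.0 / (period_mm * px_per_mm)
        band = (radius > 0.8 * target) & (radius < 1.2 * target)
        if band.any() and f[band].mean() >= rel_threshold * f.mean():
            return True
    return False

# A synthetic grating with a 0.3 mm period at 500 ppi should fire the trigger.
x = np.arange(128) / (500 / 25.4)            # pixel positions in mm
img = np.sin(2 * np.pi * x / 0.3)[None, :] * np.ones((128, 1))
print(trigger_q_frequency(img, ppi=500))     # True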
Alternatively, the condition that at least a threshold number of fingerprint feature points has been detected may be set as trigger Q. The fingerprint feature points may include, in addition to the end points of fingerprint lines shown in FIG. 24A and the branch points shown in FIG. 24B, the intersections of fingerprint lines shown in FIG. 24C and the isolated points, where a fingerprint line appears as an isolated dot, shown in FIG. 24D.
Specific examples of trigger Q have been described above, but trigger Q is not limited to these; various conditions can be set as trigger Q.
[Processing flow]
Next, the flow of processing according to the second embodiment will be described with reference to the flowcharts of FIGS. 25 and 26. Unless otherwise specified, the processes illustrated in FIGS. 25 and 26 are executed, for example, under the control of the control unit 11.
In the flowcharts of FIGS. 25 and 26, the circled labels A, B, and C indicate the continuity of the processing. The description below assumes that the operation mode at the start of the process is mode 0.
In step ST31 in FIG. 25, acceleration data is acquired, for example based on the output of the motion sensor 20. Then, the process proceeds to step ST32.
In step ST32, using the acceleration data obtained in step ST31, the control unit 11 performs a process of recognizing whether trigger P holds. As described above, whether trigger P holds may also be determined using data other than acceleration data. Then, the process proceeds to step ST33.
In step ST33, it is determined, based on the result of the process in step ST32, whether trigger P holds. If trigger P does not hold, the process returns to step ST31; if it holds, the process proceeds to step ST34.
Since trigger P holds, the operation mode transitions from mode 0 to mode 1. In step ST34, the first elapsed time is set to 0 (initialized). The first elapsed time is used to determine whether the entire processing has finished within a predetermined time, in other words, whether the processing has timed out. Then, the process proceeds to step ST35.
In step ST35, the second elapsed time is set to 0 (initialized). The second elapsed time is used to determine whether the mode 1 processing has finished within a predetermined time, in other words, whether it has timed out. Then, the process proceeds to step ST36.
In step ST36, the light emitting unit 6 is turned on at the brightness corresponding to mode 1. Then, the process proceeds to step ST37.
In step ST37, sensing with the settings corresponding to mode 1 is started. Then, the process proceeds to step ST38.
In step ST38, as a result of the sensing in step ST37, an image is acquired via the image sensor 8, and the process proceeds to step ST39. In step ST39, a process of recognizing trigger Q is performed, and the process proceeds to step ST40.
In step ST40 in FIG. 26, the control unit 11 determines whether trigger Q holds, based on the result of the processing in step ST39. If trigger Q does not hold, the process proceeds to step ST41.
In step ST41, it is determined whether the second elapsed time exceeds a predetermined threshold th1, which is set to, for example, about 10 seconds. If the second elapsed time exceeds th1, the mode 1 processing times out and the process returns to step ST31. If the second elapsed time is at or below th1, the mode 1 processing is repeated: the process returns to step ST38, an image is acquired again, and the processing from step ST38 onward is performed.
If trigger Q holds in the determination in step ST40, the operation mode transitions from mode 1 to mode 2 and the process proceeds to step ST42. In step ST42, the third elapsed time is set to 0 (initialized). The third elapsed time is used to determine whether the mode 2 processing has finished within a predetermined time, in other words, whether it has timed out. Then, the process proceeds to step ST43.
In step ST43, at least one of the imaging area, the illumination (light emitting unit 6), and the resolution is set according to mode 2, an image is captured with those settings, and a fingerprint image is acquired. Furthermore, the feature amounts characterizing the feature points of the fingerprint image are extracted. Then, the process proceeds to step ST44.
In step ST44, a matching process that collates the obtained feature amounts against the registered feature amounts is performed. Then, the process proceeds to step ST45.
In step ST45, it is determined whether the quality is sufficient. For example, if the number of detected feature points is equal to or greater than a threshold, the quality is judged sufficient. Alternatively, if as a result of the matching process the number of feature points judged similar by the feature amount comparison lies strictly between a threshold thA and a threshold thB (where thA > thB), the quality may be judged insufficient; conversely, if that number is at or above thA (in which case fingerprint authentication succeeds) or at or below thB (in which case authentication fails), the quality is judged sufficient for deciding the authentication result. A sketch of this check follows. If the quality is judged insufficient in step ST45, the process proceeds to step ST46.
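A minimal sketch of this decision rule, with illustrative threshold values:

def quality_sufficient(similar_points, thA, thB):
    # thA > thB. Counts at or above thA (authentication succeeds) or at or
    # below thB (authentication fails) are decisive; counts strictly between
    # thB and thA are ambiguous, so the capture is retried.
    return similar_points >= thA or similar_points <= thB

print(quality_sufficient(8, thA=6, thB=2))   # True: clearly authenticated
print(quality_sufficient(4, thA=6, thB=2))   # False: ambiguous, retry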
In step ST46, it is determined whether the third elapsed time exceeds a threshold th2, which is set to, for example, about 10 seconds. If the third elapsed time is at or below th2, the process proceeds to step ST47.
Since the third elapsed time is at or below th2 and the timeout has not yet been reached, the mode 2 processing continues: in step ST47, an image is acquired again via the image sensor 8, and the processing from step ST44 onward is performed.
If the third elapsed time exceeds th2, the process proceeds to step ST48, where it is determined whether the first elapsed time exceeds a threshold th3. If the first elapsed time is at or below th3, the overall timeout has not been reached, so the process returns to step ST38 and the mode 1 processing is performed again. If the first elapsed time exceeds th3, the overall timeout has been reached, so the process returns to step ST31, the first step.
As described above, according to the second embodiment, even when fingerprints are sensed using the image sensor 8, appropriately setting the operation mode of the wristband type electronic device 1 suppresses the power consumed by the control unit 11 and the image sensor 8. Moreover, the mode transitions take place without any operation of an input device.
[Modification of the second embodiment]
In the second embodiment described above, no fingerprint matching process is performed in mode 1, but the embodiment is not limited to this. For example, a matching process using a low-resolution image may be performed in mode 1. Consider, for instance, an application that permits a payment when fingerprint authentication succeeds. When the payment is a small amount, for example 1000 yen or less, especially high security is not required, so the mode 1 processing is performed and matching uses a low-resolution image. When the payment is a large amount exceeding 1000 yen, high security is required, so the mode 2 processing is performed and matching uses a high-resolution image. In this way, trigger Q, the condition for switching from mode 1 to mode 2, may itself depend on the content of the application.
The content of trigger Q, the condition for switching from mode 1 to mode 2, may also be switched dynamically. For example, the control unit 11 acquires the remaining battery capacity of the wristband type electronic device 1, and when the remaining capacity, specifically the SoC (State of Charge), falls to, for example, 30% or less, the content of trigger Q is switched to a stricter one that makes the transition from mode 1 to mode 2 harder to occur, for example by combining the individual examples of trigger Q described above. This prevents, as far as possible, the operation mode from transitioning erroneously from mode 1 to mode 2 when the remaining battery capacity is small, and thus prevents the power-hungry mode 2 processing from draining the battery and stopping the operation of the wristband type electronic device 1. A sketch of such a battery-dependent trigger follows.
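A sketch of such a battery-dependent trigger, assuming two of the trigger Q conditions described earlier as inputs; the 30% figure comes from the text above, while the minimum feature point count and the combination rule are illustrative assumptions.

def trigger_q_holds(has_ridge_frequencies, feature_point_count,
                    soc_percent, min_points=4):
    # Low battery: require both conditions so mode 2 fires less readily.
    if soc_percent <= 30.0:
        return has_ridge_frequencies and feature_point_count >= min_points
    return has_ridge_frequencies or feature_point_count >= min_points

print(trigger_q_holds(True, 2, soc_percent=25.0))   # False under low battery
print(trigger_q_holds(True, 2, soc_percent=80.0))   # True otherwise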
In the second embodiment, as shown in FIG. 27, a configuration may also be adopted in which a separate control unit (second control unit 11A) executes the processing of modes 0 and 1. When trigger Q holds, the second control unit 11A notifies the control unit 11, the upper-level host, and the control unit 11 performs the mode 2 processing such as the matching process. Because the control unit 11 controls the various processes of the wristband type electronic device 1, its power consumption is large, and waking it whenever an image is obtained via the image sensor 8 (in other words, whenever something touches the display 4) could increase the overall power consumption. It is therefore preferable to provide the second control unit 11A as a lower-level control unit that executes modes 0 and 1.
<Modifications>
A plurality of embodiments of the present disclosure have been specifically described above, but the contents of the present disclosure are not limited to those embodiments, and various modifications based on the technical idea of the present disclosure are possible. Modifications are described below.
In the embodiments described above, the threshold for fingerprint authentication to succeed as a result of the matching process may be changed according to the content of the application. For example, when fingerprint authentication is performed to permit a large payment, the criterion for image quality may be raised, or the threshold for the collation score may be raised substantially.
The configuration of the wristband type electronic device 1 according to the embodiments described above can be changed as appropriate. For example, a configuration without the light guide plate 5 and the light emitting unit 6 may be used; in that case, imaging is performed using the light of the display 4 (as a concrete example, an OLED).
In the embodiments described above, the biological information is not limited to a fingerprint; it may be the blood vessels of a palm, the capillaries of a retina, or the like, or a combination of these. Note that the fingerprint need not be the full pattern of fingerprint lines on the fingertip; it suffices that part of the pattern is included. The same applies to the other kinds of biological information.
The present disclosure can also be realized as an apparatus, a method, a program, a system, and so on. For example, by making a program that performs the functions described in the embodiments above downloadable, a device that does not have those functions can download and install the program and thereby perform the control described in the embodiments. The present disclosure can also be realized as a server that distributes such a program. In addition, the items described in the embodiments and modifications can be combined as appropriate.
The present disclosure can also adopt the following configurations.
(1)
An information processing device including
a control unit that selectively sets at least a first mode and a second mode in which processing that consumes more power than the first mode is performed,
wherein the control unit:
in the first mode, determines whether biological information is included in an image obtained via a sensor unit;
triggered by the biological information being included in the image, changes the operation mode from the first mode to the second mode; and
in the second mode, performs at least a matching process using the biological information.
(2)
The information processing device according to (1), wherein, in the second mode, an image is acquired by the sensor unit based on a setting different from the setting in the first mode.
(3)
The information processing device according to (2), wherein the control unit causes a light emitting unit, which emits light at a timing at which an image is captured by the sensor unit, to emit light at a first luminance in the first mode, and causes the light emitting unit to emit light at a second luminance higher than the first luminance in the second mode.
(4)
The information processing device according to (2) or (3), wherein the control unit controls the sensor unit to acquire the image at a first resolution in the first mode, and controls the sensor unit to acquire the image at a second resolution higher than the first resolution in the second mode.
(5)
The information processing device according to any one of (2) to (4), wherein the control unit performs control to acquire the image using a part of the sensor unit in the first mode, and performs control to acquire the image using the entire sensor unit in the second mode.
(6)
The information processing device according to (3), wherein the operation mode can transition from a third mode, whose power consumption is lower than that of the first mode, to the first mode, and
the control unit changes the operation mode from the third mode to the first mode, triggered by at least one of detection of a movement of the information processing device and detection of a predetermined operation.
(7)
The information processing device according to (6), wherein, in the third mode, the control unit turns off the light emitting unit and the sensor unit.
(8)
The information processing device according to (6) or (7), wherein a touch sensor unit that detects the predetermined operation is provided near the sensor unit.
(9)
The information processing device according to any one of (3) and (6) to (8), including the light emitting unit.
(10)
The information processing device according to any one of (1) to (9), wherein the content of the trigger is changed so that the transition from the first mode to the second mode becomes less likely to occur.
(11)
The information processing device according to any one of (1) to (10), wherein the control unit includes a feature point detection unit that detects feature points from an image including biological information obtained via the sensor unit, and a feature amount extraction unit that extracts a feature amount characterizing each feature point based on a peripheral image including the feature point.
(12)
The information processing device according to any one of (1) to (11), wherein the biological information is at least one of a fingerprint and a blood vessel.
(13)
The information processing device according to any one of (1) to (12), wherein the processing related to the first mode is performed by another control unit different from the control unit.
(14)
A wearable device including:
a control unit that selectively sets at least a first mode and a second mode in which processing that consumes more power than the first mode is performed; and
a sensor unit that acquires an image,
wherein the control unit:
in the first mode, determines whether biological information is included in an image obtained via the sensor unit;
triggered by the biological information being included in the image, changes the operation mode from the first mode to the second mode; and
in the second mode, performs at least a matching process using the biological information.
(15)
An information processing method in which a control unit selectively sets at least a first mode and a second mode in which processing that consumes more power than the first mode is performed,
wherein the control unit:
in the first mode, determines whether biological information is included in an image obtained via a sensor unit;
triggered by the biological information being included in the image, changes the operation mode from the first mode to the second mode; and
in the second mode, performs at least a matching process using the biological information.
(16)
A program that causes a computer to execute an information processing method in which a control unit selectively sets at least a first mode and a second mode in which processing that consumes more power than the first mode is performed,
wherein the control unit:
in the first mode, determines whether biological information is included in an image obtained via a sensor unit;
triggered by the biological information being included in the image, changes the operation mode from the first mode to the second mode; and
in the second mode, performs at least a matching process using the biological information.
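To make the mode logic in configurations (1) to (10) concrete, here is a minimal Python sketch of the three-mode state machine, written under stated assumptions: the Mode and CaptureSettings names, the sensor/detector/matcher interfaces, and every numeric value are illustrative inventions for this sketch, not part of the disclosure. What it preserves is the control flow: the third mode keeps the light emitting unit and sensor off until motion or a touch wakes the device; the first mode polls with a dim, low-resolution, partial-sensor capture; and only the presence of biological information triggers the expensive second mode.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    THIRD = auto()   # deepest sleep: light emitting unit and sensor off
    FIRST = auto()   # low-power detection: dim light, low resolution, partial sensor
    SECOND = auto()  # authentication: bright light, high resolution, full sensor


@dataclass
class CaptureSettings:
    luminance: float            # relative drive level of the light emitting unit
    resolution: tuple           # (width, height) of the acquired image
    full_sensor: bool           # False -> read out only a part of the sensor


# Hypothetical per-mode settings; the disclosure only requires that the second
# mode uses a higher luminance, a higher resolution, and the whole sensor
# compared with the first mode.
SETTINGS = {
    Mode.FIRST: CaptureSettings(luminance=0.2, resolution=(40, 40), full_sensor=False),
    Mode.SECOND: CaptureSettings(luminance=1.0, resolution=(160, 160), full_sensor=True),
}


class ModeController:
    def __init__(self, sensor, detector, matcher):
        self.sensor = sensor          # assumed API: capture(settings), power_off()
        self.detector = detector      # detector(image, threshold) -> bool
        self.matcher = matcher        # matcher(image) -> match result
        self.detect_threshold = 0.5   # hypothetical score needed to leave the first mode
        self.mode = Mode.THIRD
        self.sensor.power_off()

    def on_motion_or_touch(self):
        # Configuration (6): device movement or a predetermined touch operation
        # wakes the device from the third mode into the first mode.
        if self.mode is Mode.THIRD:
            self.mode = Mode.FIRST

    def harden_trigger(self):
        # Configuration (10): change the content of the trigger so the
        # first-to-second transition becomes less likely, here by simply
        # demanding a more confident detection.
        self.detect_threshold = min(1.0, self.detect_threshold + 0.1)

    def tick(self):
        """One iteration of the periodic control loop."""
        if self.mode is Mode.THIRD:
            return None  # light emitting unit and sensor stay off
        if self.mode is Mode.FIRST:
            image = self.sensor.capture(SETTINGS[Mode.FIRST])
            if self.detector(image, self.detect_threshold):
                # Biological information in the image is the trigger for the
                # power-hungry second mode.
                self.mode = Mode.SECOND
            return None
        # Mode.SECOND: re-capture under the richer settings and run matching.
        image = self.sensor.capture(SETTINGS[Mode.SECOND])
        result = self.matcher(image)
        self.mode = Mode.FIRST  # assumption: drop back to low power afterwards
        return result
```

Keeping the first-mode loop on a small, dim, partial readout is what bounds the always-on cost; per configuration (13), that loop could even run on a separate low-power controller while the main controller sleeps until the second mode is needed.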
DESCRIPTION OF SYMBOLS: 1: wearable device; 4: display; 6: light emitting unit; 8: image sensor; 11: control unit; 11A: second control unit; 11a: preprocessing unit; 11c: feature amount extraction unit; 11d: matching processing unit; 101: noise removal unit; 102: ridge estimation image generation unit; 103: confidence map generation unit
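The reference numerals above name a processing chain: noise removal (101), ridge estimation image generation (102), confidence map generation (103), feature amount extraction (11c), and matching (11d). The following minimal Python sketch shows one plausible way those named stages could be wired together; every function body is a placeholder assumption for illustration, not the disclosed algorithms.

```python
import numpy as np


def remove_noise(image: np.ndarray) -> np.ndarray:
    """Placeholder for noise removal unit 101: a simple 3x3 mean filter."""
    h, w = image.shape
    padded = np.pad(image.astype(float), 1, mode="edge")
    return sum(padded[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0


def estimate_ridges(image: np.ndarray) -> np.ndarray:
    """Placeholder for ridge estimation image generation unit 102:
    binarize around the global mean to emphasize ridge lines."""
    return (image > image.mean()).astype(float)


def confidence_map(image: np.ndarray) -> np.ndarray:
    """Placeholder for confidence map generation unit 103: use local
    contrast as a proxy for how trustworthy each region is."""
    return np.abs(image - image.mean()) / (image.std() + 1e-9)


def preprocess(raw: np.ndarray):
    """Chain the stages the way the reference numerals suggest: the
    preprocessing unit 11a feeds the feature amount extraction unit 11c."""
    denoised = remove_noise(raw)
    ridges = estimate_ridges(denoised)
    conf = confidence_map(denoised)
    # Downstream, features would only be extracted where confidence is high.
    return ridges, conf
```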
Claims (16)
- An information processing device comprising
a control unit that selectively sets at least a first mode and a second mode in which processing that consumes more power than the first mode is performed,
wherein the control unit:
in the first mode, determines whether biological information is included in an image obtained via a sensor unit;
triggered by the biological information being included in the image, changes the operation mode from the first mode to the second mode; and
in the second mode, performs at least a matching process using the biological information.
- The information processing device according to claim 1, wherein, in the second mode, an image is acquired by the sensor unit based on a setting different from the setting in the first mode.
- The information processing device according to claim 2, wherein the control unit causes a light emitting unit, which emits light at a timing at which an image is captured by the sensor unit, to emit light at a first luminance in the first mode, and causes the light emitting unit to emit light at a second luminance higher than the first luminance in the second mode.
- The information processing device according to claim 2, wherein the control unit controls the sensor unit to acquire the image at a first resolution in the first mode, and controls the sensor unit to acquire the image at a second resolution higher than the first resolution in the second mode.
- The information processing device according to claim 2, wherein the control unit performs control to acquire the image using a part of the sensor unit in the first mode, and performs control to acquire the image using the entire sensor unit in the second mode.
- The information processing device according to claim 3, wherein the operation mode can transition from a third mode, whose power consumption is lower than that of the first mode, to the first mode, and
the control unit changes the operation mode from the third mode to the first mode, triggered by at least one of detection of a movement of the information processing device and detection of a predetermined operation.
- The information processing device according to claim 6, wherein, in the third mode, the control unit turns off the light emitting unit and the sensor unit.
- The information processing device according to claim 6, wherein a touch sensor unit that detects the predetermined operation is provided near the sensor unit.
- The information processing device according to claim 3, further comprising the light emitting unit.
- The information processing device according to claim 1, wherein the content of the trigger is changed so that the transition from the first mode to the second mode becomes less likely to occur.
- The information processing device according to claim 1, wherein the control unit includes a feature point detection unit that detects feature points from an image including biological information obtained via the sensor unit, and a feature amount extraction unit that extracts a feature amount characterizing each feature point based on a peripheral image including the feature point.
- The information processing device according to claim 1, wherein the biological information is at least one of a fingerprint and a blood vessel.
- The information processing device according to claim 1, wherein the processing related to the first mode is performed by another control unit different from the control unit.
- A wearable device comprising:
a control unit that selectively sets at least a first mode and a second mode in which processing that consumes more power than the first mode is performed; and
a sensor unit that acquires an image,
wherein the control unit:
in the first mode, determines whether biological information is included in an image obtained via the sensor unit;
triggered by the biological information being included in the image, changes the operation mode from the first mode to the second mode; and
in the second mode, performs at least a matching process using the biological information.
- An information processing method in which a control unit selectively sets at least a first mode and a second mode in which processing that consumes more power than the first mode is performed,
wherein the control unit:
in the first mode, determines whether biological information is included in an image obtained via a sensor unit;
triggered by the biological information being included in the image, changes the operation mode from the first mode to the second mode; and
in the second mode, performs at least a matching process using the biological information.
- A program that causes a computer to execute an information processing method in which a control unit selectively sets at least a first mode and a second mode in which processing that consumes more power than the first mode is performed,
wherein the control unit:
in the first mode, determines whether biological information is included in an image obtained via a sensor unit;
triggered by the biological information being included in the image, changes the operation mode from the first mode to the second mode; and
in the second mode, performs at least a matching process using the biological information.
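Claim 11 divides recognition into a feature point detection unit and a feature amount extraction unit that characterizes each point from a peripheral image around it. The sketch below illustrates only that division of labor; the corner-response detector, the normalized-patch descriptor, and the distance threshold are stand-in assumptions written in plain NumPy, not the algorithm of the disclosure.

```python
import numpy as np


def detect_feature_points(image: np.ndarray, threshold: float = 0.2, patch: int = 8):
    """Crude feature point detector: per-pixel Harris-like corner response."""
    gy, gx = np.gradient(image.astype(float))
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy
    response = (ixx * iyy - ixy ** 2) - 0.04 * (ixx + iyy) ** 2
    ys, xs = np.where(response > threshold * response.max())
    # Keep only points whose descriptor patch lies fully inside the image.
    h, w = image.shape
    return [(y, x) for y, x in zip(ys, xs)
            if patch <= y < h - patch and patch <= x < w - patch]


def extract_feature_amount(image: np.ndarray, point, patch: int = 8) -> np.ndarray:
    """Feature amount = mean-centered, normalized pixel patch around the point."""
    y, x = point
    region = image[y - patch:y + patch, x - patch:x + patch].astype(float).ravel()
    region -= region.mean()
    norm = np.linalg.norm(region)
    return region / norm if norm > 0 else region


def match(descriptors_a, descriptors_b, max_dist: float = 0.5) -> int:
    """Count descriptors in A whose nearest neighbor in B is closer than max_dist."""
    matches = 0
    for da in descriptors_a:
        dists = [np.linalg.norm(da - db) for db in descriptors_b]
        if dists and min(dists) < max_dist:
            matches += 1
    return matches
```

A production implementation would smooth the gradient products over a window and use minutiae-oriented descriptors, but the structure (detect points first, then describe each one from the pixels around it, then count close descriptor pairs in the matching process) is the part claim 11 pins down.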
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018-115759 | 2018-06-19 | ||
JP2018115759A JP2019219833A (en) | 2018-06-19 | 2018-06-19 | Information processing apparatus, wearable device, information processing method, and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019244496A1 (en) | 2019-12-26 |
Family
ID=68983179
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/018523 WO2019244496A1 (en) | 2018-06-19 | 2019-05-09 | Information processing device, wearable equipment, information processing method, and program |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2019219833A (en) |
WO (1) | WO2019244496A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024203016A1 (en) * | 2023-03-28 | 2024-10-03 | Sony Group Corporation | Information processing apparatus, method, and program |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021152925A1 (en) * | 2020-01-30 | 2021-08-05 | Murata Manufacturing Co., Ltd. | Biometric information measurement apparatus and biometric information measurement system |
WO2022178431A1 (en) * | 2021-02-22 | 2022-08-25 | Hoffmann Christopher J | Motion tracking light controller |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004318892A (en) * | 2003-04-18 | 2004-11-11 | Agilent Technol Inc | System and method for time space multiplexing in finger image inputting application |
JP2012248047A (en) * | 2011-05-30 | 2012-12-13 | Seiko Epson Corp | Biological identification device and biological identification method |
JP2017509062A (en) * | 2014-02-21 | 2017-03-30 | Fingerprint Cards AB | Control method of electronic equipment |
JP2017084045A (en) * | 2015-10-27 | 2017-05-18 | Kyocera Corporation | Electronic apparatus, authentication method of electronic apparatus, and authentication program |
WO2017132258A1 (en) * | 2016-01-29 | 2017-08-03 | Synaptics Incorporated | Initiating fingerprint capture with a touch screen |
2018
- 2018-06-19: JP JP2018115759A, published as JP2019219833A (active, Pending)
2019
- 2019-05-09: WO PCT/JP2019/018523, published as WO2019244496A1 (active, Application Filing)
Also Published As
Publication number | Publication date |
---|---|
JP2019219833A (en) | 2019-12-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3523754B1 (en) | Face liveness detection method and apparatus, and electronic device | |
US10860850B2 (en) | Method of recognition based on iris recognition and electronic device supporting the same | |
KR100947990B1 (en) | Gaze Tracking Apparatus and Method using Difference Image Entropy | |
US9750420B1 (en) | Facial feature selection for heart rate detection | |
US11163995B2 (en) | User recognition and gaze tracking in a video system | |
US10928904B1 (en) | User recognition and gaze tracking in a video system | |
WO2019244496A1 (en) | Information processing device, wearable equipment, information processing method, and program | |
US11275458B2 (en) | Method, electronic device, and storage medium for fingerprint recognition | |
US9785863B2 (en) | Fingerprint authentication | |
US11335090B2 (en) | Electronic device and method for providing function by using corneal image in electronic device | |
KR102544320B1 (en) | Electronic apparatus and controlling method thereof | |
WO2019173011A1 (en) | Electronic device including contactless palm biometric sensor and related methods | |
KR20200004724A (en) | Method for operating authorization related the biometric information, based on an image including the biometric information obtained by using the biometric sensor and the electronic device supporting the same | |
US20190278970A1 (en) | Detection device, information processing device, and information processing method | |
KR20180137830A (en) | Apparatus for recognizing pressure and electronic apparatus including the same | |
WO2019244497A1 (en) | Information processing device, wearable equipment, information processing method, and program | |
KR20190088679A (en) | Electronic device and method for determining fingerprint processing method based on pressure level of fingerprint input | |
EP4398136A1 (en) | Electronic device for controlling biometric signal-based operation, and operating method therefor | |
US20230074386A1 (en) | Method and apparatus for performing identity recognition on to-be-recognized object, device and medium | |
CN115829575A (en) | Payment verification method, device, terminal, server and storage medium | |
JP7228509B2 (en) | Identification device and electronic equipment | |
EP4362481A1 (en) | Method for displaying guide for position of camera, and electronic device | |
US11460928B2 (en) | Electronic device for recognizing gesture of user from sensor signal of user and method for recognizing gesture using the same | |
KR20190027704A (en) | Electronic apparatus and method for recognizing fingerprint in electronic apparatus | |
US11899884B2 (en) | Electronic device and method of recognizing a force touch, by electronic device |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19822335; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19822335; Country of ref document: EP; Kind code of ref document: A1