WO2008099963A1 - Object Shape Generation Method, Object Shape Generation Apparatus and Program - Google Patents
Object shape generation method, object shape generation apparatus and program
- Publication number
- WO2008099963A1 (PCT application PCT/JP2008/052908)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- projection
- projected
- blood vessel
- space
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
- G06T7/596—Depth or shape recovery from multiple images from stereo images from three or more stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/13—Sensors therefor
- G06V40/1312—Sensors therefor direct reading, e.g. contactless acquisition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10101—Optical tomography; Optical coherence tomography [OCT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/14—Vascular patterns
Definitions
- the present invention relates to an object shape generation method, an object shape generation apparatus, and a program, and is suitable for application to, for example, biometric authentication.
- Background art
- Biometric authentication is a technique for verifying whether a user is a registered person, using an identification target of the living body.
- One of the living body identification targets is the finger blood vessel.
- Patent Literature 1: Japanese Patent Laid-Open No. 2002-175529.
- One technique for generating the shape of an object is the visual volume intersection method (Shape From Silhouette method). Based on images of the object captured from multiple viewpoints and on positional information such as that of the camera, this method generates the object's shape by leaving, as the object region, the region of the target space in which the silhouettes of all the images intersect.
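The visual volume intersection method described above can be sketched as follows. This is an illustrative orthographic toy, not the patent's implementation: the grid size, the two views and their projection functions are all assumptions.

```python
import numpy as np

def visual_hull(views, grid_shape):
    """Classic visual volume intersection (Shape From Silhouette): a voxel
    stays in the object region only if its projection lies inside the
    silhouette in every view."""
    occupied = np.ones(grid_shape, dtype=bool)
    pts = np.indices(grid_shape).reshape(3, -1).T       # all voxel centers
    for silhouette, project in views:
        uv = project(pts)                               # Nx2 pixel coordinates
        inside = silhouette[uv[:, 0], uv[:, 1]]         # True where in silhouette
        occupied &= inside.reshape(grid_shape)          # carve away the misses
    return occupied

# Toy setup: a 4x4x4 grid seen orthographically from the top (along z)
# and from the side (along y).
sil_top  = np.zeros((4, 4), bool); sil_top[1:3, 1:3] = True
sil_side = np.zeros((4, 4), bool); sil_side[1:3, :]  = True
hull = visual_hull([(sil_top,  lambda p: p[:, :2]),      # (x, y) image
                    (sil_side, lambda p: p[:, [0, 2]])], # (x, z) image
                   (4, 4, 4))
print(hull.sum())   # 16 voxels survive: x, y in {1, 2}, any z
```

Note how a voxel is discarded as soon as any single view fails to see it; this is exactly the behavior the patent identifies as a problem when the back side cannot be imaged.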
- However, the part of a living body other than the blood vessels is not hollow but is occupied by tissues such as fat.
- Consequently, a blood vessel portion on the back surface that is not projected onto the imaging surface does not remain as an object region, because the silhouettes of all the images do not intersect there in the target space.
- The present invention has been made in consideration of the above points, and proposes an object shape generation method, an object shape generation apparatus and a program that can generate the shape of an object with high accuracy even in situations where the back side of the object cannot be imaged.
- To this end, the present invention provides an object shape generation method comprising: a first step of detecting, for each of a plurality of images captured from around an object, the silhouette region that, when the object shown in the image is projected from the viewpoint position of that image into a projection space, is projected between the projection surface and a projection plane separated from it by a prescribed length in the depth direction; and a second step of extracting the common portion of the detected silhouette regions.
- The present invention is also an object shape generation apparatus having a work memory and an image processing unit that executes image processing in the work memory, the image processing unit detecting, for each of a plurality of images captured from around an object, the silhouette region projected between the projection surface of a projection space and a projection plane separated from it by a prescribed length in the depth direction, and extracting the common portion of the detected silhouette regions.
- The present invention is further a program that causes a control unit controlling a work memory to detect, for each of a plurality of images captured from around an object, the silhouette region that, when the object shown in the image is projected from the viewpoint position of that image into a projection space, is projected between the projection surface and a projection plane separated from it by a prescribed length in the depth direction, and to extract the common portion of the detected silhouette regions.
- According to the present invention, the common portion is not extracted from silhouette regions projected all the way to the deepest part of the projection space as the three-dimensional image of the blood vessels; instead, the visual volume is extracted with attention restricted to the imaged surface portion of the object, within only a prescribed length in the depth direction from the projection surface of the projection space.
- FIG. 1 is a schematic diagram of (A) the imaging direction side (the front side of the object) and (B) the opposite side of the imaging direction (the back side of the object) for explaining the problems in the visual volume intersection method.
- FIG. 2 is a block diagram showing a configuration of the authentication apparatus according to the present embodiment.
- FIG. 3 is a schematic diagram showing the state transition of a rotating finger.
- FIG. 4 is a schematic diagram showing the relationship between the imaging surface and the blood vessels in the image.
- FIG. 5 is a block diagram showing a functional configuration of the control unit.
- FIG. 6 is a schematic diagram for explaining the calculation of the rotation correction amount.
- FIG. 7 is a schematic diagram showing the images before and after embossing.
- FIG. 8 is a schematic diagram for explaining the motion amount calculation processing.
- FIG. 9 is a schematic diagram showing the luminance state of the blood vessels in the image after the embossing processing.
- FIG. 10 is a schematic diagram showing luminance state transition in the blood vessel extraction processing.
- FIG. 11 is a schematic diagram for explaining the uniformization of the luminance state.
- FIG. 12 is a schematic diagram showing the voxel space.
- FIG. 13 is a flowchart showing the object shape generation processing procedure.
- FIG. 14 is a schematic diagram for explaining the detection of the silhouette region (1).
- FIG. 15 is a schematic diagram for explaining the arrangement relationship of the images arranged around the voxel space.
- FIG. 16 is a schematic diagram for explaining the detection of the silhouette region (2).
- FIG. 17 is a schematic diagram showing the extracted state of the silhouette regions.
- FIG. 2 shows the overall configuration of authentication apparatus 1 according to the present embodiment.
- This authentication device 1 is configured by connecting an operation unit 11, an imaging unit 12, a memory 13, an interface 14 and a notification unit 15 to a control unit 10 via a bus 16.
- The control unit 10 is configured as a computer that includes a central processing unit (CPU) that controls the entire authentication device 1, a read only memory (ROM) that stores various programs and setting information, and a random access memory (RAM) serving as the work memory of the CPU.
- An execution command COM1 for a mode in which the blood vessels of a user to be registered (hereinafter referred to as a registrant) are registered (hereinafter referred to as the blood vessel registration mode), or an execution command COM2 for a mode for determining whether the user is the registrant himself or herself (hereinafter referred to as the authentication mode), is given from the operation unit 11 in response to a user operation.
- The control unit 10 determines the mode to be executed based on the execution commands COM1 and COM2 and, based on a program corresponding to the determination result, appropriately controls the imaging unit 12, the memory 13, the interface 14 and the notification unit 15 to execute the blood vessel registration mode or the authentication mode.
- The imaging unit 12 adjusts the lens position in the optical system, the aperture value of the diaphragm and the shutter speed (exposure time) of the imaging element.
- The imaging unit 12 also performs A/D (Analog/Digital) conversion on the image signals sequentially output from the imaging element at a predetermined cycle as its imaging result, and sends the image data obtained as a result of the conversion to the control unit 10.
- Furthermore, the imaging unit 12 drives a near-infrared light source during the period specified by the control unit 10, and irradiates the position specified as the imaging target (hereinafter referred to as the imaging position) with near-infrared light that is specifically absorbed by blood vessels.
- the memory 13 is, for example, a flash memory, and stores or reads data specified by the control unit 10.
- the interface 14 exchanges various data with an external device connected via a predetermined transmission line.
- The notification unit 15 is composed of a display unit 15a and an audio output unit 15b. The display unit 15a displays, on a display screen, contents based on display data given from the control unit 10, in the form of characters and figures. The audio output unit 15b outputs, from a speaker, audio based on audio data given from the control unit 10.
- When the blood vessel registration mode is determined as the mode to be executed, the control unit 10 changes the operation mode to the blood vessel registration mode, notifies the user through the notification unit 15 that a finger must be rotated along the curved surface of the finger pad at the imaging position, and operates the imaging unit 12.
- In this state, the control unit 10 generates a stereoscopic image of the blood vessels from the images sequentially given from the imaging unit 12 as its imaging result, and stores values representing the shape of that stereoscopic image (hereinafter referred to as blood vessel shape values) in the memory 13 as data to be registered (hereinafter referred to as registration data). In this way, the control unit 10 can execute the blood vessel registration mode.
- On the other hand, when the authentication mode is determined as the mode to be executed, the control unit 10 changes the operation mode to the authentication mode, notifies the user through the notification unit 15 that the finger must be rotated along the curved surface of the finger pad at the imaging position, and operates the imaging unit 12.
- In this state, the control unit 10 generates a three-dimensional image of the blood vessels from the images sequentially given from the imaging unit 12 as its imaging result, in the same manner as in the blood vessel registration mode, and extracts the blood vessel shape values of that image. The control unit 10 then collates the extracted blood vessel shape values with the blood vessel shape values stored in the memory 13 as registration data, and determines from the collation result whether or not the user can be approved as the registrant.
- If it is determined that the user cannot be approved as the registrant, the control unit 10 notifies the user of this visually and audibly through the display unit 15a and the audio output unit 15b. If, on the other hand, the user can be approved as the registrant, the control unit 10 sends data indicating the approval to the device connected to the interface 14. Triggered by this data, that device executes a predetermined process to be performed when authentication succeeds, such as locking a door for a certain period or releasing a restricted operation mode.
- In this way, the control unit 10 can execute the authentication mode.
- This process can be functionally divided into an image rotation unit 21, a blood vessel extraction unit 22, a motion amount calculation unit 23, and a 3D image generation unit 24 as shown in FIG.
- The image rotation unit 21, the blood vessel extraction unit 22, the motion amount calculation unit 23 and the 3D image generation unit 24 will be described in detail.
- the image rotation unit 21 corrects the rotation of the multi-viewpoint image so that the direction of the finger displayed in the image becomes the reference direction.
- In this embodiment, an optical filter that transmits only visible light is placed at a predetermined position on the optical axis at intervals different from the imaging cycle, so that images in which the finger itself is the imaging target (hereinafter referred to as finger images) are acquired at predetermined intervals among the images in which the blood vessels are the imaging target (hereinafter referred to as blood vessel images).
- the blood vessel image is an image formed on the image sensor using near infrared light as imaging light
- the finger image is an image formed on the image sensor using visible light as imaging light.
- The image rotation unit 21 acquires a finger image (FIG. 6(A)), extracts the finger region displayed in the finger image (FIG. 6(B)), and extracts the points constituting the finger contour (hereinafter referred to as finger contour points) (FIG. 6(C)).
- The image rotation unit 21 also weights the points corresponding to horizontal contour lines among the finger contour points and extracts, by Hough transformation or the like, the points constituting the finger joint (hereinafter referred to as finger joint points) (FIG. 6(D)), thereby identifying the joint line JNL from the finger joint points (FIG. 6(E)).
- The image rotation unit 21 then obtains the angle θx formed by the joint line JNL with respect to the line LN in the column direction of the image as the rotation correction amount for the blood vessel images (FIG. 6(E)), and rotationally corrects each blood vessel image captured until the next finger image is acquired, in accordance with this rotation correction amount. As a result, the longitudinal direction of the finger displayed in the blood vessel images is aligned with the row direction of the images.
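The rotation correction amount can be sketched as below. The point coordinates and the use of arctan2 are illustrative assumptions; the text only states that the angle θx between the joint line JNL and the column-direction line LN is used.

```python
import numpy as np

def rotation_correction_amount(jnl_p0, jnl_p1):
    """Angle (degrees) between the joint line JNL through two detected
    finger joint points and the column (vertical) direction LN of the
    image.  Points are (row, col); the column direction is (1, 0)."""
    dr = jnl_p1[0] - jnl_p0[0]
    dc = jnl_p1[1] - jnl_p0[1]
    return np.degrees(np.arctan2(dc, dr))

# A joint line lying exactly along a column needs no correction,
# while one tilted by 45 degrees yields a 45-degree correction amount.
print(rotation_correction_amount((0, 5), (10, 5)))   # 0.0
print(rotation_correction_amount((0, 0), (10, 10)))  # 45.0
```

Applying the returned angle as a rotation to each subsequent blood vessel image would align the finger's longitudinal direction with the row direction, as the text describes.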
- the image rotation unit 21 performs image rotation processing on the image data sequentially input from the imaging unit 12 as a multi-viewpoint blood vessel image continuously captured along the circumference of the finger.
- the image data obtained as a result of the processing is sent to the blood vessel extraction unit 22.
- the blood vessel extraction unit 22 extracts a blood vessel portion displayed in the blood vessel image. An example of the extraction method in the blood vessel extraction unit 22 will be described.
- The blood vessel extraction unit 22 uses an embossing unit 22A to emboss the blood vessels by applying embossing processing with a differential filter, such as a Gaussian filter or a LoG (Laplacian of Gaussian) filter, to the images input from the image rotation unit 21.
- FIG. 7 shows the images before and after embossing.
- In the blood vessel image before embossing (FIG. 7(A)), the boundary between the blood vessels and the other parts is indistinct, but in the blood vessel image after embossing (FIG. 7(B)) the boundary is clear. The embossing processing in the embossing unit 22A thus enhances the blood vessels, and as a result the blood vessels can be clearly distinguished from the other parts.
- The blood vessel extraction unit 22 then performs, in a binarization unit 22B, binarization processing on the image data in which the blood vessels are embossed, using a set luminance value as a reference, thereby converting it into an image of binary values (hereinafter referred to as a binary blood vessel image), and sends the image data obtained as a result to the 3D image generation unit 24.
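A minimal sketch of this emboss-then-binarize pipeline follows. A plain 3x3 Laplacian kernel stands in for the Gaussian / LoG filters named in the text, and the threshold is an arbitrary illustrative value, not one from the patent.

```python
import numpy as np

def extract_vessels(img, threshold):
    """Embossing (vessel enhancement with a differential filter) followed
    by binarization against a set luminance value.  A 3x3 Laplacian is
    used here in place of the Gaussian / LoG filters in the text."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
    pad = np.pad(img.astype(float), 1, mode='edge')
    out = np.zeros(img.shape, float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (k * pad[i:i + 3, j:j + 3]).sum()
    return (np.abs(out) >= threshold).astype(np.uint8)  # binary vessel image

# A dark vessel (luminance 20) crossing a bright background (luminance 100):
img = np.full((5, 5), 100.0)
img[2, :] = 20
binary = extract_vessels(img, threshold=100)
print(binary[2])   # the vessel row survives binarization; the rest does not
```

The differential filter responds strongly at the vessel, weakly elsewhere, which is what makes a single luminance threshold sufficient afterwards.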
- the motion amount calculation unit 23 calculates the motion amount from the blood vessels displayed in the multi-viewpoint blood vessel images that are continuously imaged along the circumference of the finger.
- Specifically, the motion amount calculation unit 23 calculates, by optical flow, the amount of movement of corresponding parts of the blood vessels displayed in a first blood vessel image input from the blood vessel extraction unit 22 and a second blood vessel image input from the blood vessel extraction unit 22 before the first blood vessel image. Hereinafter, the first blood vessel image is called the current image and the second blood vessel image is called the previous image.
- That is, the motion amount calculation unit 23 determines a point of interest AP in the current image IM1 and a pixel block (hereinafter referred to as the block of interest) ABL centered on the point of interest AP. It then searches the previous image IM2 for the block RBL that minimizes the difference from the luminance values in the block of interest ABL, takes the center of the searched block RBL as the point XP corresponding to the point of interest AP (hereinafter referred to as the corresponding point), and obtains the position vector V(Vx, Vy) to the corresponding point XP with reference to the position AP′ corresponding to the point of interest AP.
- The motion amount calculation unit 23 searches the previous image IM2 in this way for the blocks corresponding to a plurality of blocks of interest in the current image IM1, calculates as the motion amount the average of the horizontal vector components Vx and the average of the vertical vector components Vy of the position vectors between the centers of those blocks (the corresponding points XP) and the positions AP′ of the corresponding blocks of interest, and sends this to the 3D image generation unit 24 as data (hereinafter referred to as motion amount data).
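The block-matching step can be sketched as follows; the exhaustive search over the whole previous image follows the "general" case mentioned later in the text, while the block size, toy images and single point of interest are illustrative assumptions.

```python
import numpy as np

def motion_amount(prev_img, cur_img, points, block=3):
    """For each point of interest AP in the current image, find the block
    in the previous image whose luminance differs least from the block of
    interest ABL around AP, then average the displacement vectors:
    Vx (horizontal) and Vy (vertical)."""
    r = block // 2
    H, W = prev_img.shape
    vectors = []
    for (y, x) in points:
        tgt = cur_img[y - r:y + r + 1, x - r:x + r + 1].astype(float)
        best_d, best_v = None, (0.0, 0.0)
        for py in range(r, H - r):
            for px in range(r, W - r):
                cand = prev_img[py - r:py + r + 1, px - r:px + r + 1].astype(float)
                d = np.abs(cand - tgt).sum()              # luminance difference
                if best_d is None or d < best_d:
                    best_d, best_v = d, (px - x, py - y)  # vector from AP' to XP
        vectors.append(best_v)
    v = np.array(vectors, float)
    return v[:, 0].mean(), v[:, 1].mean()                 # (Vx, Vy)

# Toy case: one bright feature that sat 2 pixels to the left in the previous frame.
prev = np.zeros((7, 7)); prev[2, 2] = 255
cur  = np.zeros((7, 7)); cur[2, 4]  = 255
vx, vy = motion_amount(prev, cur, [(2, 4)])
print(vx, vy)   # -2.0 0.0
```

The averaging over many points of interest is what makes the result a single per-frame motion amount rather than a dense flow field.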
- This motion amount is a value representing movement not only in the horizontal direction (rotation direction) relative to the surface on which the finger is placed, but also in the vertical direction relative to that surface (the direction orthogonal to the rotation direction), caused by finger pressure and fluctuation of the rotation axis.
- Incidentally, an image obtained in an intermediate stage of the blood vessel extraction processing (the image after the embossing processing and before binarization) is adopted as the image for motion amount calculation.
- This is because, in the image after embossing and before binarization, the blood vessels and the other parts are clearly distinguished, and the luminance of the blood vessels, as shown in FIG. 9, still represents the actual cross-sectional state. This information is discarded in the image after the blood vessel extraction processing (the image after binarization), as shown in FIG. 10: as illustrated in FIGS. 11(A) and 11(B), even blood vessel cross sections that differ from each other before extraction have a higher rate of becoming identical after the extraction processing.
- If such a post-binarization image were used, then when searching the previous image IM2 for the block that minimizes the difference from the luminance values of the block of interest ABL in the current image IM1 (FIG. 8(B)), many blocks with luminance values identical or substantially identical to those of the block of interest ABL would appear. The block RBL that truly corresponds to the block of interest ABL then cannot be found, and the accuracy of calculating the motion amount is consequently reduced.
- For this reason, the motion amount calculation unit 23 uses the image obtained in the intermediate stage of the blood vessel extraction processing (after embossing and before binarization) as the image for motion amount calculation.
- Incidentally, the plurality of blocks of interest in the current image IM1 are generally all the pixels of the current image IM1, but they may instead be limited to parts such as the end points, branch points and inflection points of the blood vessels displayed in the current image IM1, or points on the blood vessels between them.
- Also, the search range for the block that minimizes the difference from the luminance values in the block of interest ABL is generally the entire previous image IM2, but it may instead be a range corresponding to the size of a plurality of blocks of interest, centered on a position shifted by the displacement amount detected in the past, and the shape of that range may be switched in accordance with the temporal change of the displacement amounts detected in the past.
- For each blood vessel image captured from around the finger, the three-dimensional image generation unit 24 detects the silhouette region projected into the projection space when the blood vessels shown in the image are projected from the viewpoint position of that image into the projection space, and extracts the common portion of the detected silhouette regions as a three-dimensional blood vessel image (three-dimensional volume).
- However, the 3D image generation unit 24 in this embodiment does not extract, as the three-dimensional blood vessel image, the common portion of silhouette regions projected all the way to the innermost part of the projection space; it extracts the common portion of the silhouette regions projected only up to a projection plane separated from the projection surface by a prescribed length in the depth direction.
- Specifically, the 3D image generation unit 24 defines, as the projection space, a three-dimensional space of a predetermined shape (hereinafter referred to as the voxel space) whose unit is a cube called a voxel (FIG. 13: step SP1).
- The 3D image generation unit 24 then generates blood vessel shape data from the plurality of image data (binary blood vessel images) input from the blood vessel extraction unit 22, based on values stored in the ROM as camera information such as the focal length and image center, a value stored in the ROM as information on the projection length from the projection surface in the depth direction of the projection space, and the motion amount values input from the motion amount calculation unit 23.
- Specifically, the 3D image generation unit 24 takes the binary blood vessel image first input from the blood vessel extraction unit 22 as a reference image, places it, as shown in FIG. 14, at a position corresponding to a viewpoint with a rotation angle of 0[°] among the viewpoints around the voxel space, and detects the silhouette region AR projected from the projection surface of the projection space up to the projection plane separated by the prescribed length L in the depth direction (the space surrounded by the solid line in FIG. 14) (FIG. 13: step SP2).
- Incidentally, FIG. 14 shows an example in which the image of FIG. 1(A) is used as the reference image among the objects in FIG. 1.
- Specifically, each voxel in the voxel space is back-projected toward the reference image to calculate its projection point, and the voxels whose projection points are within the contour of the blood vessels displayed in the reference image are left as the silhouette region.
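Under an orthographic simplification (camera looking along axis 0 of the voxel grid; both the geometry and the prescribed length are illustrative assumptions), the depth-limited silhouette detection reduces to:

```python
import numpy as np

def silhouette_region(binary_image, depth, L):
    """Back-project every voxel to the image: a voxel (d, r, c) projects
    to pixel (r, c) in this orthographic sketch.  It is kept only if that
    pixel lies inside the vessel silhouette AND its depth d from the
    projection surface is within the prescribed length L."""
    region = np.broadcast_to(binary_image.astype(bool),
                             (depth,) + binary_image.shape).copy()
    region[L:] = False   # discard voxels deeper than the prescribed length
    return region

sil = np.zeros((3, 3), bool); sil[1, 1] = True   # one vessel pixel
region = silhouette_region(sil, depth=5, L=2)
print(region.sum())   # 2: the vessel pixel survives only at depths 0 and 1
```

A real perspective camera would replace the `(d, r, c) -> (r, c)` mapping with a projection through the focal length and image center stored as camera information, but the depth cutoff works the same way.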
- Next, for the second and subsequent binary blood vessel images input from the blood vessel extraction unit 22, the 3D image generation unit 24 recognizes the motion amount in the rotation direction from the reference image to the binary blood vessel image that is the current processing target (hereinafter referred to as the rotational motion amount), based on the motion amounts input from the motion amount calculation unit 23.
- Then, taking this rotational motion amount as Vx and a value set as the distance from the rotation axis of the finger to the blood vessels as r, the 3D image generation unit 24 obtains the rotation angle (hereinafter referred to as the first rotation angle) θro of the binary blood vessel image to be processed with respect to the reference image by

  θro = arctan(Vx / r)  …(1)

and determines whether or not this angle is less than 360[°] (FIG. 13: step SP3).
- If it is, the 3D image generation unit 24 obtains the difference between the first rotation angle θro and the rotation angle, relative to the reference image, of the binary blood vessel image for which a visual volume was detected immediately before the current processing target (hereinafter referred to as the second rotation angle), and determines whether or not this difference is equal to or greater than a predetermined threshold (FIG. 13: step SP4).
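Equation (1) in code form, together with the loop-until-360[°] check of step SP3; the finger radius r and the per-frame motion amount are illustrative numbers, not values from the text.

```python
import math

def first_rotation_angle(vx, r):
    """Equation (1): rotation angle (degrees) of a binary blood vessel
    image relative to the reference image, from the horizontal motion
    amount Vx and the assumed distance r from the finger's rotation
    axis to the blood vessels."""
    return math.degrees(math.atan(vx / r))

print(first_rotation_angle(10.0, 10.0))   # about 45 degrees

# Accumulating per-frame angles until a full turn is reached (step SP3):
angle, frames = 0.0, 0
vx_per_frame, r = 2.0, 10.0               # hypothetical values
while angle < 360.0:
    angle += first_rotation_angle(vx_per_frame, r)
    frames += 1
print(frames)   # 32 frames to cover 360 degrees at ~11.3 degrees each
```

The arctangent is what converts a pixel displacement on the finger surface into an angle about the rotation axis, which is why the assumed radius r matters.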
- If the difference is less than the threshold (step SP4: NO), the 3D image generation unit 24 sets the binary blood vessel image input next to the current binary blood vessel image as the processing target, without obtaining the silhouette region of the current one. In this way, the 3D image generation unit 24 can prevent the calculation of useless silhouette regions.
- If, on the other hand, the difference is equal to or greater than the threshold (step SP4: YES), this means that the finger is in a rotating state.
- In this case, the 3D image generation unit 24 places the binary blood vessel image IMx to be processed at the position corresponding to the viewpoint VPx that forms the first rotation angle θro with respect to the viewpoint VPs of the reference image IMs.
- Then, after detecting, for this binary blood vessel image IMx, the silhouette region projected between the projection surface of the projection space and the projection plane separated from it by the prescribed length in the depth direction (FIG. 13: step SP5), the 3D image generation unit 24 sets the binary blood vessel image input next as the processing target.
- Incidentally, when placing the current binary blood vessel image IMx around the voxel space, the 3D image generation unit 24 recognizes, based on the motion amount data, the vertical vector component Vy of the motion amount between the binary blood vessel image IMx and the binary blood vessel image IM(x−1) for which a visual volume was detected immediately before (the amount of movement in the direction orthogonal to the finger rotation direction), and corrects the position of the viewpoint VPx in the correction direction RD (the direction parallel to the Z-axis of the voxel space) by that motion amount.
- The 3D image generation unit 24 can thereby detect the silhouette region while following fluctuations of the finger pressure amount or the rotation axis during the rotation of the finger, and can therefore detect the silhouette region accurately compared with the case where the amount of movement in the direction orthogonal to the rotation direction of the finger is not taken into account.
- In this way, until a binary blood vessel image whose first rotation angle θro with respect to the reference image is 360[°] or more becomes the current processing target, the 3D image generation unit 24 detects each of the silhouette regions of the blood vessels shown in the respective binary blood vessel images (FIG. 13: step SP3-SP4-SP5 loop).
- Here, the detection target of each silhouette region is the region projected up to the projection plane separated from the projection surface of the voxel space (projection space) by the prescribed length in the depth direction (the solid-line portion). Therefore, comparing the image of the front side of the object in FIG. 1 (FIG. 1(A)) with the image of the back side of the object (FIG. 1(B)), even where there is no volume common to both views, the voxels of the projected portion (silhouette region) of the object in each image remain.
- Consequently, when the first rotation angle θro with respect to the reference image becomes 360[°] or more, the common portion (solid-line portion) of the silhouette regions represents a three-dimensional image (three-dimensional volume) of the blood vessels that is faithful to the actual object.
- Incidentally, the portion of the cylindrical region consists of voxels that remain as the non-projected portion.
- When a binary blood vessel image whose first rotation angle θro with respect to the reference image is 360[°] or more becomes the processing target, the 3D image generation unit 24 recognizes the common portion of the voxels as the three-dimensional image of the blood vessels, and extracts the voxel data of that common portion as the three-dimensional image.
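One consistent reading of this extraction step can be sketched as follows: each view carves only the voxels lying within the prescribed length L of its own projection surface, while deeper voxels are left untested (these survive as the non-projected cylindrical core mentioned above). The grid, the two opposing views and the omission of image mirroring are all illustrative assumptions, not details from the text.

```python
import numpy as np

def carve_depth_limited(volume, silhouette, L, from_back=False):
    """One view's pass: voxels within depth L of this view's projection
    surface are kept only where they project inside the silhouette;
    voxels beyond L are not tested.  `from_back` looks in from the
    opposite face (image mirroring is ignored in this toy)."""
    v = volume[::-1] if from_back else volume   # view-local depth ordering
    v[:L] &= silhouette                         # in-place carve of the near slab
    return volume

vol = np.ones((6, 2, 2), bool)                       # 6-deep voxel space
front = np.zeros((2, 2), bool); front[0, 0] = True   # vessel seen from the front
back  = np.zeros((2, 2), bool); back[1, 1]  = True   # a different vessel from the back
carve_depth_limited(vol, front, L=2)
carve_depth_limited(vol, back,  L=2, from_back=True)
# The front vessel survives even though the back view never sees it, and
# vice versa, which a full-depth intersection would have destroyed.
print(vol.sum())   # 12 = 2 (front vessel) + 2 (back vessel) + 8 (untested core)
```

This illustrates the patent's key point: with the depth-limited slabs, vessel portions near each imaged surface remain even when the opposite side could not be imaged.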
- In the blood vessel registration mode this voxel data is registered in the memory 13 as registration data, and in the authentication mode it is collated with the registration data registered in the memory 13.
- In the above configuration, the 3D image generation unit 24 of the control unit 10 in this authentication device 1 sequentially receives as input a plurality of binary blood vessel images obtained by capturing images from around the finger and extracting the blood vessels in the captured images.
- For each of these images, the 3D image generation unit 24 detects the silhouette region projected, when the object shown in the image is projected from the viewpoint position of that image into the voxel space, between the projection surface of the voxel space and the projection plane separated from it by the prescribed length in the depth direction (see, for example, FIG. 14), and extracts the common portion of the detected silhouette regions (see FIG. 17).
- That is, the 3D image generation unit 24 does not extract, as the three-dimensional blood vessel image, the common portion of silhouette regions projected to the innermost part of the voxel space; it obtains the visual volume with attention restricted to the region projected only within the prescribed length in the depth direction from the projection surface of the voxel space.
- Therefore, as long as the visual volumes near the blood vessel surface are common, the 3D image generation unit 24 can leave the common portion as voxels of the projected portion of the blood vessels (silhouette region). Thus, even when a blood vessel portion existing on the side opposite the imaging surface cannot be projected, a shape that faithfully reflects the actual blood vessels is represented (see, for example, FIG. 15).
- the authentication device 1 that can generate the shape of the blood vessel with high accuracy can be realized.
- a blood vessel in a living body is applied as an imaging target.
- the present invention is not limited to this; for example, a nerve, or a fingerprint or face on the surface of a living body, may be applied, and an object other than a living body can also be applied.
- incidentally, the embossing process can be omitted as appropriate depending on the applied imaging target, for example an imaging target inside the living body such as a nerve or a blood vessel.
- a finger is applied as a living body part
- parts such as the palms, toes, arms, or eyes can also be applied.
- in the embodiment described above, the specified length used when detecting the projection regions projected in the projection space is fixed; however, the present invention is not limited to this, and the length may be variable.
- a value (fixed value) representing the projection length in the depth direction is stored in the ROM; instead, for example, information representing the association between body fat percentage and values of the projection length may be stored.
- in this case, before detecting the projection region of the first input image (reference image) (Fig. 13: step SP2), the control unit 10 prompts the user to be imaged to input his or her body fat percentage. The control unit 10 then detects the body fat percentage input from the operation unit 11, and switches the projection length by setting the value associated with the detected body fat percentage.
- alternatively, before detecting the projection region of the first input image (reference image) (Fig. 13: step SP2), the control unit 10 detects the finger width from the finger contour appearing in the blood vessel image, and switches the projection length by setting a value corresponding to the detected finger width.
- in this way, since the projection length can be set according to the depth from the finger surface to the position where the blood vessels lie on the dorsal and pad sides of the finger, a shape that reflects the actual blood vessels even more faithfully can be generated.
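The modification above amounts to a lookup from a measured quantity (body fat percentage, or finger width) to a projection length. A minimal sketch of such a table lookup follows; the breakpoints and lengths are invented for illustration, whereas the patent stores the actual association in ROM:

```python
# Hypothetical association table (illustrative values only):
# (upper bound of body fat percentage, projection length in voxels).
FAT_TO_LENGTH = [(15.0, 4), (25.0, 6), (35.0, 8), (float("inf"), 10)]

def projection_length(body_fat_pct):
    """Projection length for a measured body fat percentage; vessels lie
    deeper under thicker fat layers, so higher percentages map to longer
    projection lengths."""
    for upper, length in FAT_TO_LENGTH:
        if body_fat_pct < upper:
            return length
```

The same shape of lookup would serve the finger-width variant, with finger-width breakpoints in place of body fat percentages.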
- in other words, a setting step is performed that detects information related to the imaging target and sets, as the specified length, a value associated with the detected information.
- the present invention is not limited to this; the program may be installed from a program storage medium such as a CD (Compact Disc), a DVD (Digital Versatile Disc), or a semiconductor memory, or downloaded from a program providing server on the Internet, and the registration mode and authentication mode may then be executed.
- control unit 10 executes the registration process and the authentication process has been described.
- the present invention is not limited to this, and a part of these processes may be executed by a graphics workstation.
- the authentication device 1 having the imaging function, the verification function, and the registration function has been described.
- the present invention is not limited to this; according to the application, each function, or a part of each function, may be implemented as a single standalone device.
- the present invention can be used in the field of biometric authentication.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Computer Graphics (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Collating Specific Patterns (AREA)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2008800053389A CN101617338B (zh) | 2007-02-16 | 2008-02-14 | Object-shape generation method, object-shape generation apparatus, and program |
US12/527,290 US8780116B2 (en) | 2007-02-16 | 2008-02-14 | Object-shape generation method, object-shape generation apparatus, and program |
EP08711697.6A EP2120206A4 (en) | 2007-02-16 | 2008-02-14 | METHOD FOR PRODUCING OBJECT FORMS, DEVICE FOR GENERATING OBJECT FORMS AND CORRESPONDING PROGRAM |
KR1020097016961A KR20090110348A (ko) | 2007-02-16 | 2008-02-14 | Object-shape generation method, object-shape generation apparatus, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2007-036766 | 2007-02-16 | ||
JP2007036766A JP2008203995A (ja) | 2007-02-16 | Object-shape generation method, object-shape generation apparatus, and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2008099963A1 true WO2008099963A1 (ja) | 2008-08-21 |
Family
ID=39690183
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2008/052908 WO2008099963A1 (ja) | Object-shape generation method, object-shape generation apparatus, and program | 2007-02-16 | 2008-02-14 |
Country Status (6)
Country | Link |
---|---|
US (1) | US8780116B2 (ja) |
EP (1) | EP2120206A4 (ja) |
JP (1) | JP2008203995A (ja) |
KR (1) | KR20090110348A (ja) |
CN (1) | CN101617338B (ja) |
WO (1) | WO2008099963A1 (ja) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2939900A1 (fr) * | 2008-12-17 | 2010-06-18 | Sagem Defense Securite | Closed-loop hybridization device integrated by construction. |
JP5529568B2 (ja) * | 2010-02-05 | 2014-06-25 | キヤノン株式会社 | Image processing apparatus, imaging apparatus, control method, and program |
JP5050094B2 (ja) * | 2010-12-21 | 2012-10-17 | 株式会社東芝 | Video processing apparatus and video processing method |
JP2015049551A (ja) * | 2013-08-30 | 2015-03-16 | 日立オムロンターミナルソリューションズ株式会社 | Biometric authentication device |
JP6838912B2 (ja) * | 2016-09-29 | 2021-03-03 | キヤノン株式会社 | Image processing apparatus, image processing method, and program |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10243941A (ja) * | 1997-03-05 | 1998-09-14 | Toshiba Corp | Image reconstruction processing apparatus |
JP2000152938A (ja) * | 1998-04-23 | 2000-06-06 | General Electric Co <Ge> | System and method for imaging an object volume |
JP2002175529A (ja) | 2000-12-06 | 2002-06-21 | Matsushita Electric Ind Co Ltd | Personal identification device |
JP2003067726A (ja) * | 2001-08-27 | 2003-03-07 | Sanyo Electric Co Ltd | Three-dimensional model generation apparatus and method |
JP2007000219A (ja) * | 2005-06-22 | 2007-01-11 | Hitachi Ltd | Personal authentication device |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4858128A (en) * | 1986-08-11 | 1989-08-15 | General Electric Company | View-to-view image correction for object motion |
JP2001273497A (ja) * | 2000-03-27 | 2001-10-05 | Sanyo Electric Co Ltd | Personal identification device |
US6914601B2 (en) * | 2001-06-12 | 2005-07-05 | Minolta Co., Ltd. | Method, apparatus, and computer program for generating three-dimensional shape data or volume data |
JP2002366935A (ja) * | 2001-06-12 | 2002-12-20 | Minolta Co Ltd | Method and apparatus for generating volume data, and computer program |
JP2004070792A (ja) * | 2002-08-08 | 2004-03-04 | Telecommunication Advancement Organization Of Japan | Voxel data encoding method |
DE102004041115A1 (de) * | 2004-08-24 | 2006-03-09 | Tbs Holding Ag | Method and arrangement for capturing biometric data |
US7756324B2 (en) * | 2004-11-24 | 2010-07-13 | Kabushiki Kaisha Toshiba | 3-dimensional image processing apparatus |
- 2007
- 2007-02-16 JP JP2007036766A patent/JP2008203995A/ja active Pending
- 2008
- 2008-02-14 EP EP08711697.6A patent/EP2120206A4/en not_active Withdrawn
- 2008-02-14 WO PCT/JP2008/052908 patent/WO2008099963A1/ja active Application Filing
- 2008-02-14 KR KR1020097016961A patent/KR20090110348A/ko not_active Application Discontinuation
- 2008-02-14 US US12/527,290 patent/US8780116B2/en not_active Expired - Fee Related
- 2008-02-14 CN CN2008800053389A patent/CN101617338B/zh not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
See also references of EP2120206A4 |
Also Published As
Publication number | Publication date |
---|---|
JP2008203995A (ja) | 2008-09-04 |
CN101617338B (zh) | 2012-09-05 |
EP2120206A4 (en) | 2013-08-07 |
US20100073378A1 (en) | 2010-03-25 |
CN101617338A (zh) | 2009-12-30 |
EP2120206A1 (en) | 2009-11-18 |
KR20090110348A (ko) | 2009-10-21 |
US8780116B2 (en) | 2014-07-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- JP5061645B2 (ja) | Information extraction method, information extraction apparatus, program, registration apparatus, and collation apparatus | |
US9881204B2 (en) | Method for determining authenticity of a three-dimensional object | |
- KR20180072734A (ko) | Eye pose identification using eye features | |
US20110164792A1 (en) | Facial recognition apparatus, method and computer-readable medium | |
EP2339507B1 (en) | Head detection and localisation method | |
- JP2001101429A (ja) | Facial observation method, face observation apparatus, and recording medium for face observation processing | |
- JP2008537190A (ja) | Generation of a three-dimensional image of an object by irradiating an infrared pattern | |
- KR101444538B1 (ko) | 3D face recognition system and face recognition method thereof | |
US10909363B2 (en) | Image acquisition system for off-axis eye images | |
WO2012147027A1 (en) | Face location detection | |
- WO2008099963A1 (ja) | Object-shape generation method, object-shape generation apparatus, and program | |
Benalcazar et al. | A 3D iris scanner from multiple 2D visible light images | |
- WO2008105545A1 (ja) | Information extraction method, registration apparatus, collation apparatus, and program | |
- JP2004126738A (ja) | Personal authentication apparatus and authentication method using three-dimensional measurement | |
- JP4636338B2 (ja) | Surface extraction method, surface extraction apparatus, and program | |
Kayal et al. | Use of Kinect in a Multicamera setup for action recognition applications | |
Man et al. | 3D gaze estimation based on facial feature tracking | |
- JPH04370703A (ja) | Method and apparatus for detecting a three-dimensional object |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200880005338.9 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 08711697 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2008711697 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 4140/CHENP/2009 Country of ref document: IN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 12527290 Country of ref document: US Ref document number: 1020097016961 Country of ref document: KR |
|
NENP | Non-entry into the national phase |
Ref country code: DE |