JP5242492B2 - 3D image processing device - Google Patents

3D image processing device

Info

Publication number
JP5242492B2
Authority
JP
Japan
Prior art keywords
image
dimensional image
scan
processing apparatus
scan image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2009108585A
Other languages
Japanese (ja)
Other versions
JP2010253131A (en)
Inventor
圭一郎 岡本 (Keiichiro Okamoto)
金姫 陳 (Jinji Chen)
Original Assignee
株式会社トーメーコーポレーション (Tomey Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社トーメーコーポレーション (Tomey Corporation)
Priority to JP2009108585A priority Critical patent/JP5242492B2/en
Publication of JP2010253131A publication Critical patent/JP2010253131A/en
Application granted granted Critical
Publication of JP5242492B2 publication Critical patent/JP5242492B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Description

  The present invention relates to a three-dimensional image processing apparatus that processes a three-dimensional image of the anterior segment or fundus of an eye.

  Currently, the most widely used surgical technique for reducing intraocular pressure in glaucoma is trabeculectomy, in which an outlet for the aqueous humor is created in the sclera and a space called a bleb is formed under the conjunctiva to serve as the aqueous humor drainage channel. After surgery, however, the bleb may disappear due to tissue adhesion or the like, so that the pressure-reducing effect is lost and the intraocular pressure rises again. Observation and evaluation of postoperative blebs are therefore indispensable.

  Furthermore, in recent years a surgical method called viscocanalostomy has appeared, in which a space called a lake is formed in the sclera by deep sclerectomy, and research on its contribution to intraocular pressure reduction is being conducted.

  Therefore, blebs and lakes are evaluated by analyzing two-dimensional tomographic images of the interior of the eyeball acquired by an optical coherence tomography apparatus (OCT apparatus) such as that disclosed in Patent Document 1, and in particular quantitative evaluation of their volume is required.

  When the anterior segment is imaged by an optical coherence tomography apparatus, the measurement light is scattered in each tissue inside the eyeball, such as the conjunctiva and the sclera, so the sensitivity is attenuated in the depth direction. In particular, in an optical coherence tomography apparatus using the Fourier domain method, the shape of the eye in the depth direction is imaged by Fourier transforming the spectral signal of the light reflected from each tissue of the eye, so the attenuation of sensitivity in the depth direction becomes very prominent. FIG. 1 shows a two-dimensional tomographic image of a lake photographed with an anterior segment optical coherence tomography apparatus using the Fourier domain method.

  Therefore, when a lake region is automatically analyzed using a two-dimensional tomographic image captured by an optical coherence tomography apparatus, the boundary of the lake region may be judged to be open owing to the sensitivity attenuation in the depth direction, and automatic identification was sometimes difficult.

JP 2007-225349 A

  The present invention has been made against the background described above, and the problem to be solved is to specify the region of the target site and measure its volume using C-scan images. Accordingly, it is an object of the present invention to provide a three-dimensional image processing apparatus that improves the accuracy of automatic recognition of the measurement target region and enables more accurate volume measurement.

  Hereinafter, embodiments of the present invention made to solve the above-described problems will be described. The components employed in each of the aspects described below can be adopted in any possible combination.

That is, a first aspect of the present invention is a three-dimensional image processing apparatus comprising: three-dimensional image acquisition means for acquiring a three-dimensional image by capturing tomographic images in the depth direction of the eye to be examined at a plurality of positions by optical coherence tomography; display means for displaying the three-dimensional image; and image analysis means for analyzing the three-dimensional image, wherein the image analysis means includes C-scan image extraction means for extracting a C-scan image from the three-dimensional image, designation means for designating a measurement site in the extracted C-scan image, measurement site region specifying means for specifying the region of the designated measurement site, and area measurement means for measuring the area of the specified measurement site region, and performs volume measurement of the measurement site.

  In the three-dimensional image processing apparatus according to this aspect, the region of the target site is specified using the C-scan image, which is a slice plane perpendicular to the depth direction, so the contrast of the analyzed image is uniform and the accuracy of automatic recognition of the measurement target region can be improved.

  According to a second aspect of the present invention, in the three-dimensional image processing apparatus according to the first aspect, the C-scan image extraction means includes local image extraction means for extracting a local image surrounding the measurement site from the C-scan image.

  In the three-dimensional image processing apparatus according to this aspect, the target site to be measured is found in the extracted C-scan image and a local region surrounding it is extracted as a local image, so that only an image containing the minimum information necessary for measurement needs to be analyzed and the analysis time can be shortened.

  According to a third aspect of the present invention, in the three-dimensional image processing apparatus according to the first or second aspect, the C-scan image extraction means and the designation means include operation means for manually extracting and designating at least one C-scan image displayed on the display means.

  In the three-dimensional image processing apparatus according to this aspect, if the examiner manually extracts and designates at least one C-scan image, the remaining C-scan images are extracted and designated automatically, so the burden on the examiner is reduced and the analysis time is shortened.

  According to a fourth aspect of the present invention, in the three-dimensional image processing apparatus according to any one of the first to third aspects, the C-scan image extraction means includes contrast adjustment means for adjusting the contrast of the extracted C-scan image.

  In the three-dimensional image processing apparatus according to this aspect, adjusting the contrast of the extracted C-scan image makes the measurement target region easier to identify, so the accuracy of automatic recognition can be improved.

  According to a fifth aspect of the present invention, in the three-dimensional image processing apparatus according to the fourth aspect, the contrast adjustment means performs contrast adjustment based on luminance information of the C-scan image adjacent to the C-scan image whose contrast is to be adjusted.

  In the three-dimensional image processing apparatus according to this aspect, since contrast adjustment is performed based on the luminance information of the adjacent C-scan image, the resolution of the measurement target region can be reliably increased, the measurement target region can be separated from the other regions, and the accuracy of automatic recognition is further improved.

FIG. 1 is an image showing an example of a two-dimensional tomographic image obtained by photographing a lake with an anterior segment optical coherence tomography apparatus.
FIG. 2 is a schematic diagram illustrating an example of the appearance of an embodiment of the anterior segment 3D image capturing apparatus.
FIG. 3 is an explanatory diagram illustrating the definitions of scanning and of the increasing/decreasing directions.
FIG. 4 is a schematic block diagram showing an example of the configuration of the embodiment of the anterior segment 3D image capturing apparatus.
FIG. 5 is a flowchart showing an example of the processing procedure for volume measurement of a target site in the anterior segment 3D image processing apparatus.
FIG. 6 is an image showing an example of a local image before and after contrast adjustment (central luminance TH = 90).

  Hereinafter, in order to clarify the present invention more specifically, embodiments of the present invention will be described in detail with reference to the drawings.

  In this embodiment, an anterior segment three-dimensional image capturing apparatus including an anterior segment three-dimensional image processing apparatus will be described. FIG. 2 shows an example of the appearance of the anterior segment 3D image capturing apparatus according to this embodiment. The anterior segment 3D image capturing apparatus 100 includes a computer 102 and an anterior segment optical coherence tomography apparatus 104, which are connected by a cable 106.

  The anterior segment optical coherence tomography apparatus 104 captures tomographic images of the anterior segment by optical coherence tomography: it acquires a two-dimensional tomographic image by scanning the measurement light one-dimensionally across the eye to be examined (B-scan), and obtains a three-dimensional image by acquiring a plurality of two-dimensional tomographic images while shifting the scan position in the direction perpendicular to the tomographic plane (C-scan). For details, see Japanese Patent Application No. 2007-319563 by the present applicant. In this specification, a B-scan image is a tomographic image of the anterior segment (a two-dimensional tomographic image; XY plane or YZ plane), and a C-scan image is a plane image of the anterior segment (XY plane) (see FIG. 3). The anterior segment optical coherence tomography apparatus 104 transmits digital data of the captured three-dimensional image of the anterior segment to the computer 102 via the cable 106.
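As an illustrative sketch of the data layout described above (not the patent's implementation; the array shape and axis order are assumptions), a 3-D image assembled from B-scans can be held as a volume indexed by depth, row, and column, from which a C-scan is a constant-depth slice and a B-scan is a slice containing the depth axis:

```python
import numpy as np

# Hypothetical OCT volume: (depth, rows, cols), assembled from B-scans.
rng = np.random.default_rng(0)
volume = rng.integers(0, 256, size=(64, 128, 128), dtype=np.uint8)

def extract_c_scan(vol: np.ndarray, depth_index: int) -> np.ndarray:
    """En-face (C-scan) slice: a plane perpendicular to the depth axis."""
    return vol[depth_index, :, :]

def extract_b_scan(vol: np.ndarray, row_index: int) -> np.ndarray:
    """Tomographic (B-scan) slice: a plane containing the depth axis."""
    return vol[:, row_index, :]

c = extract_c_scan(volume, 10)
b = extract_b_scan(volume, 40)
print(c.shape)  # (128, 128)
print(b.shape)  # (64, 128)
```

Here the first axis is assumed to be depth; the patent's own XY/YZ plane labels may map to the array axes differently.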

  The computer 102 receives the digital data transmitted from the anterior ocular optical coherence tomography apparatus 104. By applying the processing described later to the received digital data, the image is corrected to an optimal image for anterior segment analysis, and anterior segment analysis is performed. The computer 102 is an example of the “anterior segment three-dimensional image processing apparatus” of the present invention.

  The computer 102 includes a microprocessor, a RAM, a ROM, a hard disk drive, a display device, an operation device, and the like, like a conventional computer. The hard disk drive stores a computer program in advance. The microprocessor causes the computer to execute processing described later by expanding the computer program in the RAM.

  Next, FIG. 4 shows a schematic block diagram of the configuration of the anterior segment 3D image capturing apparatus 100 according to this embodiment. The anterior segment 3D image capturing apparatus 100 includes a control unit 10, a storage unit 12, a display unit 14, an operation unit 16, an image analysis unit 18, and a 3D image capturing unit 20. The computer 102 provides the control unit 10, the storage unit 12, the display unit 14, the operation unit 16, and the image analysis unit 18, while the anterior segment optical coherence tomography apparatus 104 functions as an example of the three-dimensional image capturing unit 20.

  The control unit 10 controls each unit of the anterior segment 3D image capturing apparatus 100. In particular, when a three-dimensional image of the anterior segment is displayed on the display unit 14, or when an operation is performed using the operation unit 16, the control unit 10 causes the anterior segment 3D image capturing apparatus 100 to execute processing according to the operation content. The control unit 10 also reads information stored in the storage unit 12 and stores information in the storage unit 12. The control unit 10 includes the microprocessor of the computer 102.

  The storage unit 12 stores various information including the above-described computer program and three-dimensional image. The storage unit 12 includes a hard disk drive, and may include a RAM and a ROM.

  The display unit 14 is a display device that displays information controlled by the control unit 10. The display unit 14 is configured by, for example, a liquid crystal display.

  The operation unit 16 is operated by the examiner when operating the anterior segment 3D image capturing apparatus 100 or inputting information to the anterior segment 3D image capturing apparatus 100. The operation unit 16 inputs a signal corresponding to the operation by the examiner to the control unit 10. The control unit 10 operates the anterior segment 3D image capturing apparatus 100 based on this signal. The operation unit 16 includes, for example, a keyboard and a mouse.

  The display unit 14 and the operation unit 16 do not need to be configured from individual devices. For example, it is possible to use a touch panel in which a display device and an operation device are integrated.

  The image analysis unit 18 measures the volume of the target site in the three-dimensional image. The image analysis unit 18 includes a microprocessor.

  In the anterior segment three-dimensional image capturing apparatus 100 having the above-described configuration, an outline of the processing procedure for volume measurement of a target site performed by the image analysis unit 18 is shown in FIG. 5.

  In S1, the examiner selects and extracts one C-scan image with good contrast (the i-th) from the anterior segment three-dimensional image.

  In S2, the control unit 10 causes the display unit 14 to display the extracted C-scan image (the i-th).

  In S3, the examiner operates the operation unit 16 on the C-scan image (the i-th) displayed on the display unit 14 to designate a local image surrounding the measurement site. That is, the examiner observes the displayed C-scan image, finds the site to be measured, for example the site corresponding to a lake, and inputs a local region surrounding that site by dragging the mouse. The control unit 10 then acquires the input region as the i-th local image.

  In S4, the control unit 10 displays the acquired local image (the i-th) on the display unit 14, and the examiner operates the operation unit 16 on it to designate the measurement site. That is, the examiner inputs one point P inside the lake of the displayed local image, for example by clicking the mouse. The input point P is preferably near the center of the lake.

  In S5, the control unit 10 sets an initial value for the number N of clustering classes for the luminance of the local image acquired in S3 (or S15). In this embodiment, the initial value is N = 3. If N is larger than 3, processing such as merging classes becomes necessary to extract the measurement region, which may degrade detection accuracy and also lengthens the image processing time. Conversely, if N is smaller than 3, the measurement region may end up open. N = 3 is therefore preferred as the initial value.

  In S6, the control unit 10 quantizes the local image into N values using a clustering algorithm.
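The N-value conversion in S6 can be sketched as a 1-D k-means over pixel luminance. This is only one plausible clustering algorithm; the patent does not specify which is used, and the image values here are synthetic:

```python
import numpy as np

def cluster_luminance(img: np.ndarray, n: int, iters: int = 50) -> np.ndarray:
    """Quantize a grayscale image into n luminance classes with 1-D k-means.
    Returns an integer label image; class 0 is the darkest."""
    vals = img.astype(float).ravel()
    # Initialize centers evenly across the observed luminance range.
    centers = np.linspace(vals.min(), vals.max(), n)
    for _ in range(iters):
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        new = np.array([vals[labels == k].mean() if np.any(labels == k)
                        else centers[k] for k in range(n)])
        if np.allclose(new, centers):
            break
        centers = new
    # Re-order labels so class 0 is the darkest.
    order = np.argsort(centers)
    remap = np.empty(n, dtype=int)
    remap[order] = np.arange(n)
    return remap[labels].reshape(img.shape)

# Synthetic local image: a dark lake region inside a brighter background.
img = np.full((20, 20), 200.0)
img[5:15, 5:15] = 30.0          # dark "lake"
labels = cluster_luminance(img, n=3)
print(labels[10, 10], labels[0, 0])  # → 0 2 (lake pixels fall in the darkest class)
```

With N = 3 the lake typically occupies the darkest class and the surrounding tissue the brighter classes, which matches the role the clustering plays in the following steps.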

  In S7, the control unit 10 identifies the region containing the point P designated in S4 (or S14) as the lake region in the clustered local image.

  In S8, the control unit 10 determines whether or not the identified lake region is open. If it is judged to be open, the process of S9 is performed. If it is judged not to be open, the image is evaluated as good and processing continues from S10.
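One hedged way to implement the S7/S8 pair is a flood fill from the designated point P within its luminance class, with "open" taken to mean that the region reaches the border of the local image (an assumption; the patent does not define the openness test):

```python
import numpy as np
from collections import deque

def region_from_seed(labels: np.ndarray, seed: tuple) -> np.ndarray:
    """4-connected flood fill of the cluster class containing seed point P."""
    cls = labels[seed]
    mask = np.zeros(labels.shape, dtype=bool)
    q = deque([seed])
    mask[seed] = True
    h, w = labels.shape
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and labels[ny, nx] == cls:
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

def region_is_open(mask: np.ndarray) -> bool:
    """Treat a region that reaches the local-image border as 'open'."""
    return bool(mask[0, :].any() or mask[-1, :].any()
                or mask[:, 0].any() or mask[:, -1].any())

labels = np.ones((10, 10), dtype=int)
labels[3:7, 3:7] = 0                  # closed dark region
mask = region_from_seed(labels, (5, 5))
print(mask.sum(), region_is_open(mask))  # → 16 False
```

If the region is open, S9 increases N and the clustering is repeated, which tends to split the leaking boundary into a separate class.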

  In S9, the control unit 10 adds 1 to the number N of clustering classes, and the processing from S6 onward is performed again.

  In S10, with the lake region identified, the control unit 10 calculates the area S and the centroid G of the lake region.
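The area S and centroid G of S10 follow directly from the binary lake mask; `pixel_area` is a hypothetical calibration factor converting pixel count to physical area:

```python
import numpy as np

def area_and_centroid(mask: np.ndarray, pixel_area: float = 1.0):
    """Area S (pixel count times pixel area) and centroid G of a binary region."""
    ys, xs = np.nonzero(mask)
    s = ys.size * pixel_area
    g = (ys.mean(), xs.mean())
    return s, g

mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 4:8] = True
s, g = area_and_centroid(mask)
print(s, g)  # → 16.0 (3.5, 5.5)
```

The centroid G is reused in S14 to seed the automatic analysis of the adjacent C-scan image.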

  In S11, the control unit 10 determines whether the calculated area S is smaller than a preset area threshold MinS. If area S ≥ area threshold MinS, the lake region is judged to exist and the process of S12 is performed. If area S < area threshold MinS, the lake region is judged absent and processing continues from S16.

  In S12, the control unit 10 accumulates the area S calculated in S10.

  In S13, when the analysis of the i-th C-scan image ends, the control unit 10 analyzes the adjacent C-scan images. Adjacent C-scan images are analyzed in the increasing direction and the decreasing direction with the i-th C-scan image as the reference (see FIG. 3). For increasing-direction analysis the (i+1)-th C-scan image is extracted; for decreasing-direction analysis the (i-1)-th.

  In S14, in the extracted C-scan image (the (i+1)-th or (i-1)-th), the control unit 10 automatically specifies the local region of the lake using the centroid G, shape, and position of the lake region calculated in S10, and acquires the corresponding local image ((i+1)-th or (i-1)-th). Further, the control unit 10 automatically designates the centroid G calculated in S10 as the measurement target point P of the lake in that C-scan image.

  In S15, the control unit 10 performs contrast adjustment of the local image (the (i+1)-th or (i-1)-th). In this embodiment, the luminance is first adjusted using the luminance information of the adjacent, already-analyzed local image (the i-th): the central luminance TH of the non-lake region of the i-th local image, calculated in S6, is taken as the maximum luminance value of the local image to be adjusted. That is, the luminance information of the (i+1)-th or (i-1)-th local image is analyzed, and every luminance value higher than the central luminance TH is converted to TH. The control unit 10 then stretches the contrast of the local image from the range 0 to TH to the range 0 to 255. In this embodiment, the non-lake region is the class whose luminance is two classes higher than that of the lake region. FIG. 6 shows a local image before and after contrast adjustment (central luminance TH = 90).
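The luminance clipping and stretching described in S15 can be sketched as follows, assuming 8-bit output; TH = 90 matches the example of FIG. 6, and the input pixel values are made up for illustration:

```python
import numpy as np

def adjust_contrast(img: np.ndarray, th: float) -> np.ndarray:
    """Clip luminance above TH (taken from the adjacent C-scan's non-lake
    class center) and stretch the remaining 0..TH range to 0..255."""
    clipped = np.minimum(img.astype(float), th)
    return np.round(clipped * 255.0 / th).astype(np.uint8)

img = np.array([[0, 45, 90, 180]], dtype=float)
out = adjust_contrast(img, th=90)   # central luminance TH = 90, as in FIG. 6
print(out)  # → [[  0 128 255 255]]
```

Clipping at TH discards the bright non-lake luminance range where the lake boundary carries no information, so the subsequent clustering sees a higher-contrast lake region.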

  In S16, the control unit 10 calculates the volume of the lake by multiplying the area accumulated in S12 by the spacing between C-scan images.
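S16 amounts to a simple slice-sum approximation, multiplying the accumulated per-slice areas by the C-scan spacing. The areas and spacing below are made-up values for illustration:

```python
def lake_volume(areas_mm2, slice_spacing_mm: float) -> float:
    """Approximate volume: sum of per-slice lake areas times C-scan spacing."""
    return sum(areas_mm2) * slice_spacing_mm

# Hypothetical areas measured on successive C-scans, 0.05 mm apart.
areas = [0.40, 0.52, 0.61, 0.48, 0.33]
print(round(lake_volume(areas, 0.05), 4))  # → 0.117
```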

  As described above, one embodiment of the present invention has been explained in detail, but the present invention is in no way limited by the specific description of the embodiment. It can be implemented with various changes, modifications, and improvements based on the knowledge of those skilled in the art, and all such modes are included in the scope of the present invention as long as they do not depart from its gist.

  For example, in this embodiment the anterior segment optical coherence tomography apparatus 104 is used in the anterior segment 3D image capturing apparatus 100, but the present invention is not limited to this; any apparatus capable of acquiring digital data of a three-dimensional image, for example an ultrasonic anterior segment observation apparatus, may be used. Also, the computer 102 serving as the anterior segment 3D image processing apparatus and the anterior segment optical coherence tomography apparatus 104 are connected by the cable 106, but a configuration may instead be used in which the three-dimensional image of the anterior segment captured by the optical coherence tomography apparatus is stored on a server on a communication line and read out by the anterior segment 3D image processing apparatus. Moreover, although an anterior segment 3D image capturing apparatus provided with an anterior segment 3D image processing apparatus has been described, the system may consist of the capturing apparatus alone; that is, the anterior segment 3D image processing apparatus may be built into the anterior segment 3D image capturing apparatus.

  In this embodiment, lake volume measurement is performed for the anterior segment, but bleb volume measurement may also be performed. In the fundus, it is also possible to measure the volume of cysts, or of drusen, a precursor lesion of age-related macular degeneration. In other words, the invention is applicable to volume measurement of any target site that forms a closed space.

  Further, in S5 of the volume measurement process executed by the image analysis unit 18 of the present embodiment, the initial value is set to N = 3, but the present invention is not limited to this, and N = 2 may also be used. In this case, in S15, the non-lake region is the class whose luminance is one class higher than that of the lake region, and the luminance adjustment is based on the luminance information of that region.

  Further, in S15 of the volume measurement process executed by the image analysis unit 18 of the present embodiment, the non-lake region is taken as the class whose luminance is two classes higher than that of the lake region, but it is not limited to this; it may be the class three classes higher. In that case, the number N of clustering classes must be set to 4.

DESCRIPTION OF SYMBOLS: 10 Control unit, 12 Storage unit, 14 Display unit, 16 Operation unit, 18 Image analysis unit, 20 Three-dimensional image capturing unit

Claims (5)

  1. A three-dimensional image processing apparatus comprising: three-dimensional image acquisition means for acquiring a three-dimensional image by capturing tomographic images in the depth direction of the eye to be examined at a plurality of positions by optical coherence tomography; display means for displaying the acquired three-dimensional image; and image analysis means for analyzing the three-dimensional image,
    wherein the image analysis means comprises: C-scan image extraction means for extracting a C-scan image from the three-dimensional image; designation means for designating a measurement site in the extracted C-scan image; measurement site region specifying means for specifying the region of the designated measurement site; and area measurement means for measuring the area of the specified measurement site region, whereby volume measurement of the measurement site is performed.
  2. The three-dimensional image processing apparatus according to claim 1, wherein the C-scan image extraction means includes local image extraction means for extracting a local image surrounding the measurement site from the C-scan image.
  3. The three-dimensional image processing apparatus according to claim 1 or 2, wherein the C-scan image extraction means and the designation means include operation means for an examiner to manually extract and designate at least one C-scan image displayed on the display means.
  4. The three-dimensional image processing apparatus according to any one of claims 1 to 3, wherein the C-scan image extraction means includes contrast adjustment means for adjusting the contrast of the extracted C-scan image.
  5. The three-dimensional image processing apparatus according to claim 4, wherein the contrast adjustment means performs contrast adjustment based on luminance information of the C-scan image adjacent to the C-scan image on which contrast adjustment is performed.
JP2009108585A 2009-04-28 2009-04-28 3D image processing device Active JP5242492B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2009108585A JP5242492B2 (en) 2009-04-28 2009-04-28 3D image processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2009108585A JP5242492B2 (en) 2009-04-28 2009-04-28 3D image processing device

Publications (2)

Publication Number Publication Date
JP2010253131A JP2010253131A (en) 2010-11-11
JP5242492B2 true JP5242492B2 (en) 2013-07-24

Family

ID=43314746

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2009108585A Active JP5242492B2 (en) 2009-04-28 2009-04-28 3D image processing device

Country Status (1)

Country Link
JP (1) JP5242492B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5932369B2 (en) * 2012-01-27 2016-06-08 キヤノン株式会社 Image processing system, processing method, and program
JP2013153881A (en) * 2012-01-27 2013-08-15 Canon Inc Image processing system, processing method, and program

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07334702A (en) * 1994-06-10 1995-12-22 Toshiba Corp Display device
JP4614548B2 (en) * 2001-01-31 2011-01-19 パナソニック株式会社 Ultrasonic diagnostic equipment
JP4599191B2 (en) * 2005-03-01 2010-12-15 国立大学法人神戸大学 Diagnostic imaging processing apparatus and diagnostic imaging processing program
JP4653542B2 (en) * 2005-04-06 2011-03-16 東芝メディカルシステムズ株式会社 Image processing device
JP4855150B2 (en) * 2006-06-09 2012-01-18 株式会社トプコン Fundus observation apparatus, ophthalmic image processing apparatus, and ophthalmic image processing program
JP4389032B2 (en) * 2007-01-18 2009-12-24 国立大学法人 筑波大学 Optical coherence tomography image processing device
JP2009082463A (en) * 2007-09-28 2009-04-23 Fujifilm Corp Image analysis apparatus, image processor, image analysis program, image processing program, image analysis method and image processing method

Also Published As

Publication number Publication date
JP2010253131A (en) 2010-11-11


Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20120319

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20121004

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20121105

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20121227

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20130318

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20130403

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

Ref document number: 5242492

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20160412

Year of fee payment: 3

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250
