CN1251157C - Object three-dimensional model quick obtaining method based on active vision - Google Patents

Object three-dimensional model quick obtaining method based on active vision

Info

Publication number
CN1251157C
CN1251157C, CN02158343A
Authority
CN
China
Prior art keywords
grating
equation
template
image
dimensional model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 02158343
Other languages
Chinese (zh)
Other versions
CN1512455A (en)
Inventor
吴福朝
胡占义
王光辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN 02158343 priority Critical patent/CN1251157C/en
Publication of CN1512455A publication Critical patent/CN1512455A/en
Application granted granted Critical
Publication of CN1251157C publication Critical patent/CN1251157C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention relates to a fast acquisition method for three-dimensional object models based on active vision. The method comprises the following steps: for each grating plane projected by the projector, calibrate the light-plane equation in a reference coordinate system and the projective transformation matrix from the reference coordinate system to the camera; place an object in front of the system and capture one image of the object with gratings and one image showing only its texture; input the captured images into a computer; extract the grating edges projected onto the object from the input images, automatically or through human-computer interaction, and cluster the edge points; back-project each extracted edge point onto its corresponding light plane in space to obtain the three-dimensional coordinates of all grating edge points in the reference coordinate system, thereby obtaining a three-dimensional model of the visible surface of the object; triangulate the extracted surface points and map the texture information of the texture-only image onto the acquired model; rotate the object by a certain angle and repeat the above steps to obtain models of the different sides of the object; finally, fuse the data of all sides into a complete three-dimensional model of the object.

Description

Fast acquisition method for three-dimensional object models based on active vision
Technical field
The present invention relates to the field of computer-aided automatic modeling and measurement, and in particular to a fast acquisition method for three-dimensional object models based on active vision.
Background art
In many areas of daily life it is often necessary to build three-dimensional models of objects. Three-dimensional modeling technology has wide application in fields such as robot obstacle recognition and navigation, virtual reality, three-dimensional evaluation, medical reconstruction, digital molds, three-dimensional engraving, industrial design, rapid manufacturing, archaeology, cultural relic protection, e-commerce and interactive entertainment.
Traditional modeling methods are generally carried out manually with 3D software such as AutoCAD and 3DS MAX; this is very time-consuming, and the accuracy of the resulting model is limited by human factors. Another class of methods obtains object models with instruments such as contact measuring devices and laser scanners. These methods improve modeling accuracy to a certain extent, but because the equipment may harm people and some objects, their range of application is limited.
In recent years, with the development and application of computer vision, researchers have begun to explore modeling from images with computer vision methods. Vision-based methods are non-contact, low-cost and fast to sample, and have therefore attracted wide attention. They usually rely on the principle of triangulation and, broadly speaking, fall into two categories: passive vision systems and active vision systems. A passive vision system is the familiar stereo vision approach: several cameras, or one camera at different positions, capture multiple images of an object and recover the scene depth from them. Such methods, however, cannot avoid the problem of finding correspondences between different images; since image matching is a classic difficulty in computer vision, their wide application is limited. An active vision system generally adopts structured light. Such a system usually includes a camera and a projector: the projector casts artificially designed, coded patterns, such as points, grids or light stripes, onto the object; the camera photographs the illuminated object and records the deformation of these patterns on the object surface; the coding of the structured light and the principle of triangulation are then used to recover the three-dimensional structure of the object. Because the structured light carries its own information, this approach simplifies the image matching problem to a certain extent.
Summary of the invention
The object of the present invention is to provide a relatively simple, practical and fast method for acquiring three-dimensional object models based on active vision, with high modeling accuracy and robustness.
To achieve the above object, a fast acquisition method for three-dimensional object models based on active vision comprises the steps of:
1) calibrating the active vision system composed of a camera and a projection device: using a calibration template, calibrating, for each grating plane projected by the projection device, the grating-plane equation in a reference coordinate system and the projective transformation matrix from the reference coordinate system to the camera;
2) placing the object in front of the system composed of the camera and the projection device and, with the projection device switched on and then off, taking one object image with gratings and one object image showing only the texture;
3) inputting the captured images into a computer through a scanner or a dedicated interface;
4) using an edge detection algorithm to extract from the input image the grating edge points projected onto the object and clustering them;
5) using the camera projective transformation matrix and the grating-plane equations calibrated in step 1), back-projecting each edge point extracted in step 4) into space and computing the intersection of the back-projection with the corresponding grating plane; this intersection gives the three-dimensional coordinates of the corresponding grating edge point in the reference coordinate system, yielding a three-dimensional model of the visible surface of the object;
6) triangulating the edge points on the object extracted in step 4) and thereby mapping the texture information of the texture image onto the obtained three-dimensional model;
7) rotating the object by a certain angle and repeating steps 2) to 6) to obtain three-dimensional models of the different sides of the object, then fusing the three-dimensional data of the sides to obtain a complete three-dimensional model of the object.
The modeling method provided by the invention calibrates the whole active vision system in a single pass, without calibrating the parameters of the camera and the projection device separately. It is simple and practical, offers high modeling accuracy and good robustness, and brings active-vision-based modeling closer to practical use.
Description of drawings
Fig. 1 is a structural diagram of the active vision system;
Fig. 2 is the flow chart of the method;
Fig. 3 is a schematic diagram of the planar calibration template of the system;
Fig. 4 is a schematic diagram of the stereo calibration template of the system;
Fig. 5 shows the images with gratings and the texture-only object images taken from different sides;
Fig. 6 shows the reconstructed models and texture-mapping results of the different sides of the object;
Fig. 7 shows the recovered complete three-dimensional object model viewed from different sides.
Detailed description of the embodiments
The invention consists of a grating projection device (such as a projector or slide projector), a camera (such as a digital camera or industrial camera), a calibration template, a computer and the associated software. The projection device and the camera can be placed horizontally or vertically, with the grating planes emitted by the projection device forming an angle with the camera, as shown in Fig. 1. The object to be modeled is placed within the effective focal range in front of the projection device and the camera, so that the projected gratings are imaged sharply on the object surface and the camera can capture this image. The method mainly comprises the steps of system calibration, object image acquisition, image input, image feature extraction and clustering, computation of the object's three-dimensional model, object texture mapping, fusion of the three-dimensional data, and model data conversion and processing, as shown in Fig. 2. Each step is described in detail below:
1. Calibration of the active vision system
Calibration means obtaining, for each grating plane emitted by the projection device, its plane equation in the reference coordinate system and the projective transformation matrix from the reference coordinate system to the camera. Calibration relies on a calibration template and only needs to be performed once, when the system is first installed or when the system parameters change. The present invention proposes two calibration methods. The first uses a single-plane calibration template carrying patterns with known geometry, such as points or rectangles, as shown in Fig. 3; the calibration procedure comprises the following steps:
1) Acquire template and grating images. Place the calibration template in front of the system and switch on the projection device so that the gratings are projected onto the planar calibration template; take one template image with gratings. Accurately move the calibration template forward or backward by a certain distance and take another template image with gratings;
2) Input the images. Input the captured images into the computer through a scanner or a dedicated interface;
3) Calibrate the projective transformation matrix. Extract the known template features from the two images, automatically or through human-computer interaction. With a world reference coordinate system chosen (generally on the template), the coordinates of these features in the reference frame are known, so the projective transformation matrix from the reference coordinate system to the image can be obtained;
4) Compute the equations of the lines that the grating edges trace on the template. Extract the edges of each grating stripe, i.e. the corresponding image lines, from the two images, automatically or through human-computer interaction. With the projective transformation matrix obtained in step 3), each stripe edge back-projects into space as a space plane; this plane intersects the corresponding template plane in a line, whose equation in the reference coordinate system is thereby obtained;
5) Calibrate the plane equation of each grating plane in the reference coordinate system. Step 4) gives, for each grating edge, the equations of the two lines it traces on the template at its two positions. These two lines lie on the same light plane, so their equations uniquely determine the plane equation of that light plane in the reference coordinate system; in the same way the plane equations of all grating planes can be calibrated (a code sketch of this plane fit is given below).
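The following minimal sketch, in Python with NumPy, shows how step 5 could recover a light-plane equation ax + by + cz + d = 0 from the two coplanar lines that one grating edge traces on the template at its two positions. The function name and the two-points-per-line representation are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch (not from the patent): fit the plane through two
# coplanar 3D lines, each given by two points in the reference frame.
import numpy as np

def plane_from_two_lines(line1_pts, line2_pts):
    """line1_pts, line2_pts: (2, 3) arrays of points on each line.
    Returns (a, b, c, d) with a unit normal (a, b, c)."""
    pts = np.vstack([line1_pts, line2_pts]).astype(float)   # 4 x 3
    centroid = pts.mean(axis=0)
    # The plane normal is the right singular vector with the smallest
    # singular value of the centred point set.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return np.append(normal, d)

# Example: two parallel lines lying on the plane x - z = 0.
line_a = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
line_b = np.array([[1.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
print(plane_from_two_lines(line_a, line_b))   # ~ [0.707, 0, -0.707, 0] up to sign
```

Fitting by SVD over the four points uses both lines symmetrically and tolerates small noise in the calibrated line equations.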
The second calibration method uses a stereo calibration template with two mutually perpendicular planes, each carrying patterns with known geometry such as points or rectangles, as shown in Fig. 4; the calibration procedure comprises the following steps:
1) Acquire the template and grating image. Place the calibration template in front of the system and switch on the projection device so that each grating is projected simultaneously onto the two perpendicular planes of the template; take one template image with gratings;
2) Input the image. Input the captured image into the computer through a scanner or a dedicated interface;
3) Calibrate the projective transformation matrix. Extract the known features on the two template planes from the input image, automatically or through human-computer interaction. With a world reference coordinate system chosen (generally on the template), the coordinates of these features in the reference frame are known, so the projective transformation matrix from the reference coordinate system to the image can be obtained;
4) Compute the equations of the lines that the grating edges trace on the template. Each grating edge corresponds to two lines, one on each of the two perpendicular planes of the template; extract the two image lines from the image, automatically or through human-computer interaction. With the projective transformation matrix obtained in step 3), each image line back-projects into space as a space plane; this plane intersects the corresponding template plane in a line, whose equation in the reference coordinate system is thereby obtained (a sketch of this back-projection is given below);
5) Calibrate the plane equation of each grating plane in the reference coordinate system. Step 4) gives, for each grating edge, the equations of the two lines it traces on the two planes of the template. These two lines lie on the same light plane, so their equations uniquely determine the plane equation of that light plane in the reference coordinate system; in the same way the plane equations of all grating planes can be calibrated.
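For the back-projection used in step 4 of both calibration methods, the sketch below assumes that the calibrated projective transformation is available as a 3x4 projection matrix P from the reference coordinate system to image pixels, which is one possible reading of the patent's "projective transformation matrix". A homogeneous image line l then back-projects to the space plane pi = P^T l, which is intersected with the known template plane to obtain the grating edge's line in space. Function names are illustrative.

```python
# Illustrative sketch: back-project an image line and intersect planes.
import numpy as np

def backproject_image_line(P, line):
    """Back-project homogeneous image line l (l^T x = 0) to the space
    plane pi = P^T l; returns (a, b, c, d) with a unit normal."""
    pi = P.T @ line
    return pi / np.linalg.norm(pi[:3])

def intersect_planes(pi1, pi2):
    """Intersection line of two planes: one point on the line and a unit direction."""
    n1, n2 = pi1[:3], pi2[:3]
    direction = np.cross(n1, n2)
    # Minimum-norm point satisfying both plane equations n_i . X = -d_i.
    A = np.vstack([n1, n2])
    b = -np.array([pi1[3], pi2[3]])
    point, *_ = np.linalg.lstsq(A, b, rcond=None)
    return point, direction / np.linalg.norm(direction)
```

In calibration step 5, the two lines recovered this way for one grating edge would then be passed to the plane fit sketched earlier.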
2. Object image acquisition
Place the object in front of the system and switch on the projection device so that the gratings are projected onto the object; take one object image with gratings. Then switch off the projection device and take one object image showing only the texture.
3. Image input
Input the captured images into the computer through a scanner or a dedicated interface.
4. Image feature extraction and clustering
When the gratings are projected onto a non-planar object, their edges are no longer straight lines but curves that deform with the surface curvature of the object. An edge detection algorithm is used to extract from the input image the grating edge points projected onto the object, and the points are clustered: according to the grating coding strategy, edge points belonging to the same grating edge are placed in the same class, as sketched below.
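A rough sketch of this step follows, assuming OpenCV is available. The Canny thresholds, the minimum cluster size and the left-to-right ordering used to associate clusters with light planes are illustrative stand-ins for the patent's unspecified grating coding strategy.

```python
# Illustrative sketch: extract stripe edge points and cluster them per stripe.
import cv2
import numpy as np

def extract_and_cluster_edges(grating_image_path, low=50, high=150, min_points=30):
    gray = cv2.imread(grating_image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, low, high)
    # Group edge pixels into connected curves; each curve is one stripe edge
    # deformed by the object surface.
    n_labels, labels = cv2.connectedComponents(edges)
    clusters = []
    for label in range(1, n_labels):
        ys, xs = np.nonzero(labels == label)
        if len(xs) < min_points:          # drop tiny fragments and noise
            continue
        clusters.append(np.column_stack([xs, ys]))
    # Order clusters left to right so cluster i can be matched to light plane i.
    clusters.sort(key=lambda pts: pts[:, 0].mean())
    return clusters
```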
5. Computation of the object's three-dimensional model
Using the camera projective transformation matrix calibrated in step 1, each edge point extracted in step 4 is back-projected into space as a space line; the grating-plane equations calibrated in step 1 identify the light plane corresponding to that edge point. The space line and the light plane intersect in a unique point, whose coordinates are the three-dimensional data of the corresponding grating edge point in the reference coordinate system. In the same way the three-dimensional data of all grating edge points on the object are obtained, yielding the three-dimensional model of the object (see the intersection sketch below).
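The ray-plane intersection can be written as a small linear system, again under the assumption that the calibrated transformation is a 3x4 projection matrix P; the function and variable names below are hypothetical.

```python
# Illustrative sketch: intersect the viewing ray of an edge pixel with its
# calibrated light plane to recover the 3D point in the reference frame.
import numpy as np

def triangulate_edge_point(P, pixel, plane):
    """P: (3, 4) projection matrix, pixel: (u, v), plane: (a, b, c, d).
    Returns the 3D point (X, Y, Z) on the light plane that projects to pixel."""
    u, v = pixel
    A = np.array([
        u * P[2, :3] - P[0, :3],      # u * (p3 . X) = p1 . X
        v * P[2, :3] - P[1, :3],      # v * (p3 . X) = p2 . X
        plane[:3],                    # a X + b Y + c Z = -d
    ])
    b = np.array([
        P[0, 3] - u * P[2, 3],
        P[1, 3] - v * P[2, 3],
        -plane[3],
    ])
    return np.linalg.solve(A, b)

def reconstruct_stripe(P, cluster_pixels, plane):
    """Apply the intersection to every clustered edge pixel of one stripe."""
    return np.array([triangulate_edge_point(P, px, plane) for px in cluster_pixels])
```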
6. Object texture mapping
The object image with gratings obtained in step 2 and the texture-only object image have identical corresponding point coordinates. The edge points extracted in step 4 are therefore triangulated, decomposing the object surface into many small triangular patches, and the texture information of the object is mapped onto the three-dimensional model obtained in step 5, as sketched below.
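A sketch of this step, assuming SciPy is available: the edge points are Delaunay-triangulated in the image plane, the same face connectivity is reused for the reconstructed 3D points, and each vertex receives a texture coordinate from the texture-only photograph, which per the patent is pixel-aligned with the grating photograph. The names and the texture-coordinate convention are assumptions.

```python
# Illustrative sketch: build a textured triangle mesh from reconstructed points.
import numpy as np
from scipy.spatial import Delaunay

def build_textured_mesh(pixels_2d, points_3d, image_width, image_height):
    """pixels_2d: (N, 2) image coords, points_3d: (N, 3) reconstructed points."""
    tri = Delaunay(pixels_2d)          # triangulate in image space
    faces = tri.simplices              # (M, 3) vertex indices, reused for 3D mesh
    # Normalised texture coordinates: u to the right, v upward (flip image rows).
    uv = np.column_stack([
        pixels_2d[:, 0] / image_width,
        1.0 - pixels_2d[:, 1] / image_height,
    ])
    return points_3d, faces, uv
```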
7. Fusion of the three-dimensional data
Rotate the object by a certain angle and repeat steps 2 to 6 to obtain three-dimensional models of the different sides of the object. Fusing the three-dimensional data of the sides yields a complete three-dimensional model of the object; a simple registration sketch follows.
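The patent does not spell out how the side models are registered. One simple reading, sketched below with hypothetical names, assumes the object is rotated by a known angle about a known vertical axis between views, so each side's point cloud can be rotated into the first view's frame and concatenated.

```python
# Illustrative sketch: fuse per-side point clouds taken at known rotation steps.
import numpy as np

def rotation_about_y(angle_deg):
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def fuse_side_models(side_points, angle_step_deg, axis_point=np.zeros(3)):
    """side_points: list of (Ni, 3) clouds, one per side, one every angle_step_deg."""
    fused = []
    for i, pts in enumerate(side_points):
        R = rotation_about_y(i * angle_step_deg)
        # Rotate about the turntable axis passing through axis_point.
        fused.append((pts - axis_point) @ R.T + axis_point)
    return np.vstack(fused)
```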
8. Model data conversion and processing
The three-dimensional model obtained in step 5 is a point cloud. After appropriate processing and conversion it can be stored in formats such as IGES, AutoCAD, 3DS, STL or OBJ, so that the model can be edited with standard 3D processing and industrial modeling software (such as 3DS MAX, AutoCAD, Rhino or Pro/ENGINEER) and converted into data that reverse engineering systems and three-dimensional engraving machines can process. An OBJ export sketch is given below.
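As one example of this conversion, the sketch below (a hypothetical helper, not from the patent) writes the fused mesh to Wavefront OBJ, one of the formats the patent names, so that standard 3D packages can load it.

```python
# Illustrative sketch: export vertices, faces and optional UVs as Wavefront OBJ.
def write_obj(path, vertices, faces, uvs=None):
    """vertices: (N, 3), faces: (M, 3) 0-based indices, uvs: optional (N, 2)."""
    with open(path, "w") as f:
        for v in vertices:
            f.write(f"v {v[0]:.6f} {v[1]:.6f} {v[2]:.6f}\n")
        if uvs is not None:
            for uv in uvs:
                f.write(f"vt {uv[0]:.6f} {uv[1]:.6f}\n")
        for face in faces:
            a, b, c = (int(i) + 1 for i in face)   # OBJ indices are 1-based
            if uvs is not None:
                f.write(f"f {a}/{a} {b}/{b} {c}/{c}\n")
            else:
                f.write(f"f {a} {b} {c}\n")
```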
Embodiment
Figs. 5 to 7 illustrate the results of an embodiment of the method. First, an object image with gratings and a texture-only object image are taken for each of the different sides of the object (Fig. 5). The point models of the individual sides are then reconstructed with the method provided by the invention and texture-mapped (Fig. 6). Finally, the reconstructed side models are fused to recover the complete three-dimensional model of the object (Fig. 7). The reconstruction accuracy of the method is 0.5 mm, which fully satisfies the accuracy requirements for recovering general object models.

Claims (3)

1. A fast acquisition method for three-dimensional object models based on active vision, comprising the steps of:
1) calibrating the active vision system composed of a camera and a projection device: using a calibration template, calibrating, for each grating plane projected by the projection device, the grating-plane equation in a reference coordinate system and the projective transformation matrix from the reference coordinate system to the camera;
2) placing the object in front of the system composed of the camera and the projection device and, with the projection device switched on and then off, taking one object image with gratings and one object image showing only the texture;
3) inputting the captured images into a computer through a scanner or a dedicated interface;
4) using an edge detection algorithm to extract from the input image the grating edge points projected onto the object and clustering them;
5) using the camera projective transformation matrix and the grating-plane equations calibrated in step 1), back-projecting each edge point extracted in step 4) into space and computing the intersection of the back-projection with the corresponding grating plane, the intersection being the three-dimensional coordinates of the corresponding grating edge point in the reference coordinate system, thereby obtaining a three-dimensional model of the visible surface of the object;
6) triangulating the edge points on the object extracted in step 4), thereby mapping the texture information of the texture image onto the obtained three-dimensional model;
7) rotating the object by a certain angle and repeating steps 2) to 6) to obtain three-dimensional models of the different sides of the object, and fusing the three-dimensional data of the sides to obtain a complete three-dimensional model of the object.
2. The fast acquisition method for three-dimensional object models based on active vision according to claim 1, characterized in that the calibration of the active vision system composed of the camera and the projection device comprises the steps of:
1) acquiring template and grating images: placing the calibration template in front of the system, switching on the projection device so that the gratings are projected onto the planar calibration template, taking one template image with gratings, accurately moving the calibration template forward or backward by a certain distance, and taking another template image with gratings;
2) inputting the images: inputting the captured images into the computer through a scanner or a dedicated interface;
3) calibrating the projective transformation matrix: extracting the known template features from the two images automatically or through human-computer interaction; with a world reference coordinate system chosen on the template, the coordinates of these features in the reference frame are known, so the projective transformation matrix from the reference coordinate system to the image is obtained;
4) computing the equations of the lines corresponding to the grating edges on the template: extracting the edges of each grating stripe, i.e. the corresponding image lines, from the two images automatically or through human-computer interaction; using the projective transformation matrix obtained in step 3), back-projecting each stripe edge into space as a space plane, which intersects the corresponding template plane in a line, thereby obtaining the equation of that line in the reference coordinate system;
5) calibrating the plane equation of each grating plane in the reference coordinate system: step 4) gives, for each grating edge, the equations of the two lines it corresponds to at the two positions of the template; these two lines lie on the same grating plane, so their equations uniquely determine the grating-plane equation of that grating plane in the reference coordinate system.
3. The fast acquisition method for three-dimensional object models based on active vision according to claim 1, characterized in that the calibration of the active vision system composed of the camera and the projection device comprises the steps of:
1) acquiring the template and grating image: placing the calibration template in front of the system composed of the camera and the projection device, switching on the projection device so that each grating is projected simultaneously onto the two perpendicular planes of the template, and taking one template image with gratings;
2) inputting the image: inputting the captured image into the computer through a scanner or a dedicated interface;
3) calibrating the projective transformation matrix: extracting the known features on the two template planes from the input image automatically or through human-computer interaction; with a world reference coordinate system chosen, the coordinates of these features in the reference frame are known, so the projective transformation matrix from the reference coordinate system to the image is obtained;
4) computing the equations of the lines corresponding to the grating edges on the template: each grating edge corresponds to two lines on the two perpendicular planes of the template; extracting the two image lines from the image automatically or through human-computer interaction; using the projective transformation matrix obtained in step 3), back-projecting each image line into space as a space plane, which intersects the corresponding template plane in a line, thereby obtaining the equation of that line in the reference coordinate system;
5) calibrating the grating-plane equation of each grating plane in the reference coordinate system: step 4) gives, for each grating edge, the equations of the two lines it corresponds to on the two different planes of the template; these two lines lie on the same grating plane, so their equations uniquely determine the grating-plane equation of that grating plane in the reference coordinate system.
CN 02158343 2002-12-27 2002-12-27 Object three-dimensional model quick obtaining method based on active vision Expired - Fee Related CN1251157C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 02158343 CN1251157C (en) 2002-12-27 2002-12-27 Object three-dimensional model quick obtaining method based on active vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 02158343 CN1251157C (en) 2002-12-27 2002-12-27 Object three-dimensional model quick obtaining method based on active vision

Publications (2)

Publication Number Publication Date
CN1512455A CN1512455A (en) 2004-07-14
CN1251157C true CN1251157C (en) 2006-04-12

Family

ID=34236981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 02158343 Expired - Fee Related CN1251157C (en) 2002-12-27 2002-12-27 Object three-dimensional model quick obtaining method based on active vision

Country Status (1)

Country Link
CN (1) CN1251157C (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101318263B (en) * 2007-06-08 2011-12-07 深圳富泰宏精密工业有限公司 Laser engraving system and laser engraving method employing the same

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100342208C (en) * 2004-12-28 2007-10-10 北京航空航天大学 Modeling method of laser auto collimating measurement for angle in 2D
EP1681533A1 (en) * 2005-01-14 2006-07-19 Leica Geosystems AG Method and geodesic instrument for surveying at least one target
CN100388319C (en) * 2006-07-25 2008-05-14 深圳大学 Multi-viewpoint attitude estimating and self-calibrating method for three-dimensional active vision sensor
CN101226645B (en) * 2008-02-18 2012-09-05 朱东晖 Method for implementing object perspective variation effect in two-dimension image
CN101510310B (en) * 2009-02-19 2010-12-29 上海交通大学 Method for segmentation of high resolution remote sensing image based on veins clustering constrain
TWI493500B (en) * 2009-06-18 2015-07-21 Mstar Semiconductor Inc Image processing method and related apparatus for rendering two-dimensional image to show three-dimensional effect
CN103162643A (en) * 2013-04-02 2013-06-19 北京博维恒信科技发展有限责任公司 Monocular structured light three-dimensional scanner
CN103559735B (en) * 2013-11-05 2017-03-01 重庆安钻理科技股份有限公司 A kind of three-dimensional rebuilding method and system
CN113532326B (en) * 2016-02-29 2023-11-21 派克赛斯有限责任公司 System and method for assisted 3D scanning
CN107346553A (en) * 2016-05-06 2017-11-14 深圳超多维光电子有限公司 The determination method and apparatus of stripe information in a kind of image
FR3069941B1 (en) * 2017-08-03 2020-06-26 Safran METHOD FOR NON-DESTRUCTIVE INSPECTION OF AN AERONAUTICAL PART AND ASSOCIATED SYSTEM
CN109425292B (en) * 2017-08-29 2020-06-16 西安知微传感技术有限公司 Three-dimensional measurement system calibration device and method based on one-dimensional line structured light
CN108536142B (en) * 2018-03-18 2020-06-12 上海交通大学 Industrial robot anti-collision early warning system and method based on digital grating projection
CN109631799B (en) * 2019-01-09 2021-03-26 浙江浙大万维科技有限公司 Intelligent measuring and marking method
DE102019100822A1 (en) * 2019-01-14 2020-07-16 Lufthansa Technik Aktiengesellschaft Boroscopic inspection procedure
CN110472365B (en) * 2019-08-22 2023-04-18 苏州智科源测控科技有限公司 Method for establishing modal test three-dimensional geometric model
CN111179300A (en) * 2019-12-16 2020-05-19 新奇点企业管理集团有限公司 Method, apparatus, system, device and storage medium for obstacle detection
CN112229331B (en) * 2020-09-22 2022-01-07 南京理工大学 Monocular vision-based object rotation angle and translation distance measuring method
CN112665530B (en) * 2021-01-05 2022-09-23 银昌龄 Light plane recognition device corresponding to projection line, three-dimensional measurement system and method
CN113997124B (en) * 2021-12-07 2022-12-02 上海交通大学 System and method for acquiring visual image of wear surface of cutter for piston machining

Also Published As

Publication number Publication date
CN1512455A (en) 2004-07-14

Similar Documents

Publication Publication Date Title
CN1251157C (en) Object three-dimensional model quick obtaining method based on active vision
CN110264567B (en) Real-time three-dimensional modeling method based on mark points
US10789752B2 (en) Three dimensional acquisition and rendering
Li et al. A reverse engineering system for rapid manufacturing of complex objects
US7822267B2 (en) Enhanced object reconstruction
Jiang et al. Symmetric architecture modeling with a single image
CN103971408B (en) Three-dimensional facial model generating system and method
CN101320473B (en) Free multi-vision angle, real-time three-dimensional reconstruction system and method
CN101697233B (en) Structured light-based three-dimensional object surface reconstruction method
CN105547189B (en) High-precision optical method for three-dimensional measurement based on mutative scale
CN102054276B (en) Camera calibration method and system for object three-dimensional geometrical reconstruction
CN102222363A (en) Method for fast constructing high-accuracy personalized face model on basis of facial images
CN111028295A (en) 3D imaging method based on coded structured light and dual purposes
Kersten et al. Potential of automatic 3D object reconstruction from multiple images for applications in architecture, cultural heritage and archaeology
CN113505626A (en) Rapid three-dimensional fingerprint acquisition method and system
Wenzel et al. High-resolution surface reconstruction from imagery for close range cultural Heritage applications
CN101923730B (en) Fisheye camera and multiple plane mirror devices-based three-dimensional reconstruction method
CN113432558B (en) Device and method for measuring irregular object surface area based on laser
CN107103620B (en) Depth extraction method of multi-optical coding camera based on spatial sampling under independent camera view angle
CN113269860A (en) High-precision three-dimensional data real-time progressive rendering method and system
WO2008044096A1 (en) Method for three-dimensionally structured light scanning of shiny or specular objects
Rohith et al. A camera flash based projector system for true scale metric reconstruction
Pages et al. 3D facial merging for virtual human reconstruction
Van Gool et al. 3D modeling for communications
KR20030015625A (en) Calibration-free Approach to 3D Reconstruction Using A Cube Frame

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20060412

Termination date: 20171227