CN111783672A - Image feature identification method for improving bridge dynamic displacement precision - Google Patents
- Publication number
- CN111783672A (application CN202010627632.6A)
- Authority
- CN
- China
- Prior art keywords
- points
- feature
- image
- point
- bridge
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Computational Linguistics (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Mathematical Physics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
An image feature identification method for improving bridge dynamic displacement accuracy relates to bridge displacement measurement. A video of the bridge is acquired with a visual target in the frame. Feature points are detected by calculating the Hessian matrix of each pixel in the image, performing convolution with a Gaussian function, and taking extreme values of the Hessian determinant as feature points. The matches are then purified in three stages: a neighbor-ratio purification method performs a first screening of correct feature matches; a forward-backward bidirectional purification method removes feature points that fail the mutual check; and a main-direction included-angle method compares the difference of the angles of corresponding feature points in two frames with an angle threshold to eliminate remaining erroneous points. Inter-frame matching is performed frame by frame to obtain the feature points of every frame. The method addresses the complex computation, excessive mismatched points, and low displacement measurement accuracy of existing feature-point-based bridge image identification, and achieves accurate and rapid identification and tracking of target feature points in the bridge region.
Description
Technical Field
The invention relates to a bridge displacement measurement method, in particular to an image feature identification method for improving the bridge dynamic displacement precision, and belongs to the field of bridge engineering health monitoring and safety assessment.
Background
In recent years, China's high-speed rail network has expanded rapidly, and high-speed rail has become the preferred mode of travel. Because high-speed rail lines are commonly carried on bridges, bridge structures account for a very large proportion of these lines. Under the repeated impact of passing trains, damage in the bridge structure gradually accumulates and, in severe cases, can cause serious safety accidents. To ensure the safe operation of high-speed rail, the dynamic displacement of high-speed rail bridges must be monitored or periodically inspected.
At present, the displacement measuring tools commonly used on high-speed rail bridges are mainly displacement sensors, acceleration sensors, laser deflectometers, and the like. These methods involve complex installation, long measurement times, and high cost, and are difficult to apply where the spatial structure is complex. With the development of photographic technology, methods such as image processing and digital image correlation, which extract structural vibration displacement information directly from images, are receiving increasing attention.
With the development of computer vision, many researchers have combined bridge vibration displacement measurement with computer vision. However, many feature-point-based computer vision methods currently suffer from large numbers of mismatched points during recognition, and removing them consumes considerable time with unsatisfactory results. Providing an efficient, fast, and accurate computer-vision measurement method that handles excessive mismatched points is therefore an urgent problem to be solved.
Disclosure of Invention
The invention aims to solve the problems of complex calculation, excessive mismatched points during identification, and low displacement measurement accuracy in existing feature-point-based bridge image identification. It provides an image feature identification method for improving bridge dynamic displacement precision that accurately and rapidly identifies and tracks target feature points in a bridge region, offers a feasible route for applying advanced computer vision technology to bridge vibration displacement measurement, and supplies a technical scheme for subsequent intelligent real-time monitoring of bridge structural vibration displacement.
In order to achieve the purpose, the invention adopts the following technical scheme: an image feature identification method for improving bridge dynamic displacement accuracy comprises the following steps:
the method comprises the following steps: acquiring a bridge video image, placing a visual target with characteristics at a position to be detected of a bridge, shooting by using a commercial digital camera, and containing the visual target in the video;
step two: detecting feature points of the visual target image: the Hessian matrix of each pixel point in the image is calculated, convolution is performed with the Gaussian function L(x, t) = G(t) * I(x, t), and feature points are preliminarily judged through the Hessian matrix discriminant, the extreme values of the determinant of the Hessian matrix being the feature points;
step three: feature point matching is carried out on the preliminarily screened feature points and some erroneous feature points are rejected. Purification is first performed with the neighbor-ratio purification method: a feature point is selected, the points in the second frame image with the smallest and second-smallest Euclidean distance to it are found, the ratio Q of the smallest distance to the second-smallest distance is formed, and a threshold Tq is set; a match satisfying Q < Tq is considered a correct feature matching point;
step four: purification continues with the forward-backward bidirectional purification method. A feature point in the first frame finds its corresponding feature point in the second frame image according to the neighbor-ratio method; if several feature points in the second frame image all satisfy Q < Tq, the point is removed. If exactly one feature point corresponds, then, taking that feature point as the basis, the neighbor-ratio method is used in reverse to find the qualifying point in the first frame image; if it coincides with the original point, the match is considered correct; otherwise, if multiple points match, the point is removed;
step five: finally the feature points are purified with the main-direction included-angle method: the difference of the angles corresponding to the feature points in the two frame images is compared with the set angle threshold θλ, and a match is considered correct when |θ1 − θ2| < θλ, where θ1 and θ2 are the main-direction included angles of the matched feature points in the two frame images; erroneous points are thereby further eliminated;
step six: steps two to five are repeated, performing inter-frame matching frame by frame and identifying and tracking the feature points until the video ends, obtaining the feature points of every frame image.
Compared with the prior art, the invention has the following beneficial effects: the method quickly and accurately identifies and tracks feature points in the image and provides a technical means for subsequent real-time monitoring of bridge displacement. A three-step purification is used during feature matching, comprising the neighbor-ratio purification method, the forward-backward bidirectional purification method, and the main-direction included-angle method, which screens the feature points step by step and guarantees the required measurement accuracy: bridge vibration displacements of 0.5 mm can be measured, supporting practical engineering application. Compared with traditional displacement measurement methods, the approach is efficient, intelligent, fast, and low-cost, and offers a solution for the automation of bridge health monitoring.
Drawings
FIG. 1 is a diagram illustrating a video image capture status of a bridge according to an embodiment of the present invention;
FIG. 2 is an image of a frame taken in an embodiment of the present invention;
FIG. 3 is a target region resulting from processing the image of FIG. 2;
FIG. 4 shows the preliminary detected feature points of the target region in the embodiment of the present invention;
FIG. 5 shows feature points after the target region neighbor ratio is refined in an embodiment of the present invention;
FIG. 6 shows feature points of the target region after bidirectional forward and backward purification according to an embodiment of the present invention;
FIG. 7 shows the feature points after the main direction included angle of the target region is refined in the embodiment of the present invention.
Detailed Description
The technical solutions in the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the invention, rather than all embodiments, and all other embodiments obtained by those skilled in the art without any creative work based on the embodiments of the present invention belong to the protection scope of the present invention.
The invention discloses an image feature identification method for improving bridge dynamic displacement precision, which comprises the following steps:
the method comprises the following steps: acquiring a bridge video image, placing a visual target with characteristics at a position to be detected of a bridge, shooting by using a commercial digital camera, and containing the visual target in the video;
step two: detecting feature points of the visual target image: the Hessian matrix of each pixel point in the image is calculated, convolution is performed with the Gaussian function L(x, t) = G(t) * I(x, t), and feature points are preliminarily judged through the Hessian matrix discriminant, the extreme values of the determinant of the Hessian matrix being the feature points;
the concrete judgment is as follows: for any pixel f(x, y) in the image, H(f(x, y)) is the Hessian matrix calculated at the point f(x, y), whose entries are the second derivative of f(x, y) with respect to x, the mixed second derivative of f(x, y) with respect to x and y, and the second derivative of f(x, y) with respect to y. When the sign of its determinant is positive, the two eigenvalues are both positive or both negative, so the point can be classified as an extreme point. To suppress noise, the derivatives are taken on the Gaussian-smoothed image L(x, t) = G(t) * I(x, t), giving the Hessian after Gaussian convolution with entries Lxx(X, σ), Lxy(X, σ), and Lyy(X, σ): the convolutions of the Gaussian second-order differentials ∂²g(σ)/∂x², ∂²g(σ)/∂x∂y, and ∂²g(σ)/∂y² with the image f(x, y) at pixel point (x, y). The discriminant is

Det(H) = Lxx Lyy − Lxy Lxy

where Det(H) is the Hessian matrix discriminant.
Step three: feature point matching is carried out on the preliminarily screened feature points and some erroneous feature points are rejected. Purification is first performed with the neighbor-ratio purification method: a feature point is selected, the points in the second frame image with the smallest and second-smallest Euclidean distance to it are found, the ratio Q of the smallest to the second-smallest distance is formed, and a threshold Tq is set; a match satisfying Q < Tq is considered a correct feature matching point:

d(i, j) = || xi − xj ||,  Q = dmin / dsecond,  Q < Tq

in the formula, d(i, j) represents the Euclidean distance between the i-th and j-th feature points in the two frame images, xi represents the i-th feature point of the first frame image, xj represents the j-th feature point of the adjacent frame image, dmin is the smallest and dsecond the second-smallest distance between two feature points calculated over the two images, Q is the ratio of dmin to dsecond, and Tq is the set threshold, generally taking a value between 0.4 and 0.8. Points satisfying Q < Tq serve as the preliminarily determined feature points.
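The neighbor-ratio screening can be sketched as follows; this is a minimal numpy sketch under our own assumptions (feature points or descriptors given as row vectors, and the name `ratio_test_matches` is ours):

```python
import numpy as np

def ratio_test_matches(desc1, desc2, t_q=0.5):
    """Neighbor-ratio purification: for each point in desc1, find its
    nearest and second-nearest neighbours in desc2 by Euclidean distance
    and keep the pair only when Q = d_min / d_second < t_q."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # Euclidean distances
        order = np.argsort(dists)
        d_min, d_second = dists[order[0]], dists[order[1]]
        q = d_min / d_second if d_second > 0 else 1.0
        if q < t_q:
            matches.append((i, int(order[0])))
        # otherwise the nearest and second-nearest are too similar:
        # the match is ambiguous and is discarded
    return matches
```

A distinctive match has a nearest neighbour far closer than the runner-up, so its ratio Q is small and passes the Q < Tq test.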
Step four: the feature points screened out in the preceding step still contain some erroneous points and need further purification, so the forward-backward bidirectional purification method is applied next. The specific method is as follows: a feature point in the first frame finds its corresponding feature point in the second frame image according to the neighbor-ratio method above; if several feature points in the second frame image all satisfy the Q < Tq requirement, no unique correspondence exists and the point is removed. If exactly one feature point corresponds, then, taking that feature point as the basis, the neighbor-ratio method is used in reverse to find the qualifying point in the first frame image; if it coincides with the original point, the match is considered correct; otherwise, if multiple points match, the point is removed, as this does not indicate a correct match;
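A compact numpy sketch of this bidirectional check; the helper `ratio_matches`, the threshold value, and the toy descriptors are our own illustration, not the patent's code:

```python
import numpy as np

def ratio_matches(a, b, t_q=0.5):
    """One direction of the neighbor-ratio test: each i maps to its
    nearest j in b, kept only when d_min / d_second < t_q."""
    out = {}
    for i, d in enumerate(a):
        dists = np.linalg.norm(b - d, axis=1)
        order = np.argsort(dists)
        if dists[order[1]] > 0 and dists[order[0]] / dists[order[1]] < t_q:
            out[i] = int(order[0])
    return out

def cross_check(desc1, desc2, t_q=0.5):
    """Forward-backward bidirectional purification: keep a pair (i, j)
    only when the ratio test run forward (frame 1 -> frame 2) and
    backward (frame 2 -> frame 1) select each other."""
    fwd = ratio_matches(desc1, desc2, t_q)
    bwd = ratio_matches(desc2, desc1, t_q)
    return [(i, j) for i, j in fwd.items() if bwd.get(j) == i]
```

A point in the first frame that maps to j, while j maps back to a different point, fails the mutual check and is removed.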
step five: finally the feature points are purified with the main-direction included-angle method. The specific method is as follows: the difference of the angles corresponding to the feature points in the two frame images is compared with the set angle threshold θλ, and a match is considered correct when |θ1 − θ2| < θλ, where θ1 and θ2 are the main-direction included angles of the matched feature points in the two frame images. The threshold is selected by taking the average of the rotation angles of all feature points over the two frame images. Erroneous points can thereby be further removed, and the remaining points are feature points with a high degree of match;
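The included-angle purification, with the threshold taken as the mean rotation over all matched pairs as described, might be sketched like this (the function name and the toy angle values are ours):

```python
import numpy as np

def angle_purify(matches, ang1, ang2):
    """Main-direction included-angle purification: theta_lambda is the
    mean absolute angle difference over all matched pairs, and a pair
    survives only when |theta1 - theta2| < theta_lambda."""
    diffs = np.array([abs(ang1[i] - ang2[j]) for i, j in matches])
    theta_lambda = diffs.mean()      # adaptive threshold from the data
    return [m for m, d in zip(matches, diffs) if d < theta_lambda]
```

Pairs whose rotation is far above the average rotation of the frame are treated as erroneous points and dropped.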
step six: steps two to five are repeated, performing inter-frame matching frame by frame and identifying and tracking the feature points until the video ends, obtaining the feature points of every frame image.
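The frame-by-frame loop of step six has roughly the following shape; `detect` and `match_refined` are placeholders of our own for the step-two detector and the three-stage purification, deliberately left abstract:

```python
def track_features(frames, detect, match_refined):
    """Frame-by-frame inter-frame matching: detect feature points in each
    new frame, purify the matches against the previous frame, and carry
    the tracked points forward until the video ends."""
    prev = detect(frames[0])
    tracks = [prev]
    for frame in frames[1:]:
        cur = detect(frame)
        matches = match_refined(prev, cur)   # three-step purification
        tracks.append([cur[j] for _, j in matches])
        prev = cur
    return tracks
```

The per-frame feature locations collected in `tracks` are what a downstream displacement computation would convert into bridge vibration displacement.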
Example (b):
step one: a laboratory bridge structure is tested. The identification target is placed on the bridge deck as shown in FIG. 1, and a vibration video of the deck is shot; the shooting time of the experiment is 10 s. FIG. 2 shows one frame of the video, and the small region where the target is located, obtained by processing that image, is shown in FIG. 3;
step two: hessian algorithm code programming can be realized by calling a function through a program, coordinate transformation of all pixel points can be performed in the algorithm, and the preliminarily detected feature points are shown in FIG. 4;
step three: euclidean distance calculation is carried out on the points identified in the image, and a threshold value T is setq0.5, as shown in fig. 5 by screening;
step four: the Euclidean distances between the feature points of the second frame image and those of the first frame image are calculated with the same threshold Tq = 0.5; points satisfying the requirement in both directions are retained, and the result is shown in FIG. 6;
step five: the direction angles of the feature points are calculated, the angle of a first-frame feature point being denoted θ1 and that of the corresponding second-frame feature point θ2. The average value θλ of the angle variations of all feature points is calculated, points satisfying |θ1 − θ2| < θλ are retained, and the results are shown in FIG. 7.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.
Claims (3)
1. An image feature identification method for improving bridge dynamic displacement accuracy is characterized by comprising the following steps: the method comprises the following steps:
the method comprises the following steps: acquiring a bridge video image, placing a visual target with characteristics at a position to be detected of a bridge, shooting by using a commercial digital camera, and containing the visual target in the video;
step two: detecting feature points of the visual target image: the Hessian matrix of each pixel point in the image is calculated, convolution is performed with the Gaussian function L(x, t) = G(t) * I(x, t), and feature points are preliminarily judged through the Hessian matrix discriminant, the extreme values of the determinant of the Hessian matrix being the feature points;
step three: feature point matching is carried out on the preliminarily screened feature points and some erroneous feature points are rejected. Purification is first performed with the neighbor-ratio purification method: a feature point is selected, the points in the second frame image with the smallest and second-smallest Euclidean distance to it are found, the ratio Q of the smallest to the second-smallest distance is formed, and a threshold Tq is set; a match satisfying Q < Tq is considered a correct feature matching point;
step four: purification continues with the forward-backward bidirectional purification method: a feature point in the first frame finds its corresponding feature point in the second frame image according to the neighbor-ratio method; if several feature points in the second frame image all satisfy Q < Tq, the point is removed; if exactly one feature point corresponds, then, taking that feature point as the basis, the neighbor-ratio method is used in reverse to find the qualifying point in the first frame image; if it coincides with the original point, the match is considered correct, otherwise, if multiple points match, the point is removed;
step five: finally the feature points are purified with the main-direction included-angle method: the difference of the angles corresponding to the feature points in the two frame images is compared with the set angle threshold θλ, and a match is considered correct when |θ1 − θ2| < θλ, where θ1 and θ2 are the main-direction included angles of the matched feature points in the two frame images, erroneous points being thereby further eliminated;
step six: and repeating the second step to the fifth step, performing interframe matching frame by frame, identifying and tracking the characteristic points until the video is finished, and obtaining the characteristic points of each frame of image.
2. The image feature identification method for improving the bridge dynamic displacement accuracy according to claim 1, characterized in that: the threshold Tq set in step three takes a value between 0.4 and 0.8.
3. The image feature identification method for improving the bridge dynamic displacement accuracy according to claim 1, characterized in that: the angle threshold θλ in step five is selected by taking the average of the rotation angles of all feature points in the two frame images.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010627632.6A CN111783672A (en) | 2020-07-01 | 2020-07-01 | Image feature identification method for improving bridge dynamic displacement precision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111783672A true CN111783672A (en) | 2020-10-16 |
Family
ID=72758003
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010627632.6A Pending CN111783672A (en) | 2020-07-01 | 2020-07-01 | Image feature identification method for improving bridge dynamic displacement precision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111783672A (en) |
- 2020-07-01: application CN202010627632.6A filed; publication CN111783672A (en), status Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106534616A (en) * | 2016-10-17 | 2017-03-22 | 北京理工大学珠海学院 | Video image stabilization method and system based on feature matching and motion compensation |
JP6120037B1 (en) * | 2016-11-30 | 2017-04-26 | 国際航業株式会社 | Inspection device and inspection method |
CN108037132A (en) * | 2017-12-25 | 2018-05-15 | 华南理工大学 | A kind of visual sensor system and method for the detection of dry cell pulp layer paper winding defect |
Non-Patent Citations (2)
Title |
---|
SHUAI SHAO et al.: "Experiment of Structural Geometric Morphology Monitoring for Bridges Using Holographic Visual Sensor", Sensors * |
HUANG JIANKUN: "Bridge deformation displacement measurement method based on image sequences", China Master's Theses Full-text Database, Engineering Science & Technology II * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112489043A (en) * | 2020-12-21 | 2021-03-12 | 无锡祥生医疗科技股份有限公司 | Heart disease detection device, model training method, and storage medium |
CN113076883A (en) * | 2021-04-08 | 2021-07-06 | 西南石油大学 | Blowout gas flow velocity measuring method based on image feature recognition |
CN114184127A (en) * | 2021-12-13 | 2022-03-15 | 哈尔滨工业大学 | Single-camera target-free building global displacement monitoring method |
CN114184127B (en) * | 2021-12-13 | 2022-10-25 | 哈尔滨工业大学 | Single-camera target-free building global displacement monitoring method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111783672A (en) | Image feature identification method for improving bridge dynamic displacement precision | |
CN108898085B (en) | Intelligent road disease detection method based on mobile phone video | |
CN114663436A (en) | Cross-scale defect detection method based on deep learning | |
CN111429484A (en) | Multi-target vehicle track real-time construction method based on traffic monitoring video | |
CN105678213B (en) | Dual-mode mask person event automatic detection method based on video feature statistics | |
CN110503638B (en) | Spiral adhesive quality online detection method | |
CN112258446A (en) | Industrial part defect detection method based on improved YOLO algorithm | |
CN113538503A (en) | Solar panel defect detection method based on infrared image | |
CN113240623A (en) | Pavement disease detection method and device | |
CN116862910B (en) | Visual detection method based on automatic cutting production | |
CN111582270A (en) | Identification tracking method based on high-precision bridge region visual target feature points | |
CN115457277A (en) | Intelligent pavement disease identification and detection method and system | |
CN107463939B (en) | Image key straight line detection method | |
CN116958052A (en) | Printed circuit board defect detection method based on YOLO and attention mechanism | |
CN109657682B (en) | Electric energy representation number identification method based on deep neural network and multi-threshold soft segmentation | |
CN114170686A (en) | Elbow bending behavior detection method based on human body key points | |
CN113705564A (en) | Pointer type instrument identification reading method | |
CN117787690A (en) | Hoisting operation safety risk identification method and identification device | |
CN117710843A (en) | Intersection dynamic signal timing scheme detection method based on unmanned aerial vehicle video | |
CN110634154B (en) | Template matching method for target tracking with large-range speed variation | |
CN112614105A (en) | Depth network-based 3D point cloud welding spot defect detection method | |
CN112330675A (en) | AOD-Net based traffic road image atmospheric visibility detection method | |
CN116740036A (en) | Method and system for detecting cutting point position of steel pipe end arc striking and extinguishing plate | |
CN111161264A (en) | Method for segmenting TFT circuit image with defects | |
CN109615603A (en) | A kind of visual attention model of task based access control driving extracts the universal method of laser stripe |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20201016 |