CN103528568A - Wireless channel based target pose image measuring method
- Publication number
- CN103528568A CN103528568A CN201310464818.4A CN201310464818A CN103528568A CN 103528568 A CN103528568 A CN 103528568A CN 201310464818 A CN201310464818 A CN 201310464818A CN 103528568 A CN103528568 A CN 103528568A
- Authority
- CN
- China
- Prior art keywords
- image
- pose
- target
- wireless channel
- information data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a target pose image measurement method based on a wireless channel data transmission system, relating to image processing and pose measurement technology. The method comprises setting up a transmitting end and a receiving end. The transmitting end comprises a camera and an image processing unit; the image processing unit extracts at least three feature points from the captured image, determines their coordinates, establishes a matching relationship between these coordinates and the prior features of the target, and generates image feature information data. The image feature information data are transmitted from the transmitting end to the receiving end over a wireless channel. A pose calculation unit at the receiving end uses the received image feature information data to solve the pose parameters. The invention divides image-based pose measurement into two steps, feature extraction and pose parameter calculation, and transmits image feature information data instead of the whole image, so that the requirements of a low-capacity wireless channel are satisfied. Feature extraction is performed on high-resolution, high-frame-rate images, which guarantees the measurement accuracy of the pose parameters.
Description
Technical Field
The present invention relates to image processing and pose measurement technology, and in particular to a target pose image measurement method based on a wireless channel data transmission system.
Background Art
Pose image measurement technology has the advantage of not touching the measured object and is of great value in scientific research, military applications, space development, and other fields. In some pose measurement scenarios, the observation point must be placed on a flight vehicle (for example an airplane, spacecraft, or rocket) in order to measure the relative pose parameters of a flying target (for example a satellite or rocket) that moves relative to the vehicle. The observation point on the vehicle communicates with a ground receiving station over a wireless channel.
The accuracy of image-based measurement depends strongly on image resolution and frame rate; high measurement accuracy can be obtained only when both are sufficiently high. In aerospace, military, and similar applications where the measurement system transmits data over a wireless channel, the low channel capacity limits the usable image resolution and frame rate, so the individual steps of the pose image measurement technique must be optimized through a suitable scheme design to achieve high pose measurement accuracy.
The target pose remote image observation system developed by the Aerospace Measurement and Control Laboratory of Beijing Institute of Technology uses data compression to transmit images wirelessly over a low-capacity channel, and the images can be used for qualitative observation of the target pose. However, because of the limited wireless channel capacity, the resolution and frame rate of the transmitted images are low, and quantitative measurement of the target pose is not possible.
The National University of Defense Technology has studied vision-based pose measurement of space targets. The paper "基于视觉的空间目标位置姿态测量方法研究" (Research on vision-based position and attitude measurement methods for space targets) proposes a method for accurately measuring the target pose from high-resolution images, but it does not consider that high-resolution images cannot be transmitted over a low-capacity channel.
A typical wireless channel capacity is no higher than 2 Mbps. None of the above methods can achieve high-precision image measurement of the target pose at such a low channel capacity, so a target pose image measurement method must be designed for these low data-rate transmission conditions.
Summary of the Invention
To overcome the above defects of the prior art, the technical problem to be solved by the present invention is to provide a target pose image measurement method suited to the low-capacity transmission conditions of a wireless channel. The method achieves high-precision image measurement of the target pose and is applicable to various types of cooperative targets for which prior information is available.
The technical solution adopted by the present invention to solve this technical problem is as follows.
A wireless-channel-based target pose image measurement method, in which the pose measurement algorithm is decomposed into two stages, feature extraction and pose parameter calculation, and the transmission bit rate of the image feature information data produced by the feature extraction stage satisfies the low capacity limit of the wireless channel.
The method of the present invention comprises setting up a transmitting end and a receiving end.
The transmitting end comprises a camera and an image processing unit. The camera captures high-resolution images of the flying target, with an image resolution higher than 512×512 and a frame rate higher than 50 fps.
The image processing unit extracts at least three feature points from the image captured by the camera, determines the coordinates of the feature points in the image, establishes a matching relationship between these coordinates and the prior features of the target, and generates image feature information data.
In the feature extraction step, the extracted feature points may be image features such as corners or edges of the image.
In this step, generating the image feature information data involves image preprocessing, target detection, feature detection, feature matching, and output of the image feature information data; each of these stages can be implemented with mature, well-known algorithms chosen to suit the measurement scenario.
Because the target to be measured in the method of the present invention is a cooperative target, prior feature information is available. The image processing algorithms may therefore adopt known feature matching algorithms appropriate to the particular target features, and the generated image feature information data are sufficient to compute the target pose parameters.
The generated image feature information data are transmitted from the transmitting end to the receiving end over a wireless channel.
The receiving end comprises a pose calculation unit, which uses the received image feature information data to solve the pose parameters.
The wireless channel of the data transmission system has a low capacity and cannot transmit high-resolution, high-frame-rate images in real time, but the transmission bit rate of the image feature information data can satisfy the channel capacity limit.
The present invention transmits image feature information data instead of the whole image, thereby satisfying the requirements of a low-capacity wireless channel, and divides image-based pose measurement into two steps, feature extraction and pose parameter calculation. Feature extraction is performed on high-resolution, high-frame-rate images, which guarantees the accuracy of the image feature information data and hence the measurement accuracy of the pose parameters.
Other features, characteristics, and advantages of the present invention will become more apparent from the following detailed description of embodiments of the invention, given by way of example with reference to the accompanying drawings.
Brief Description of the Drawings
Fig. 1 is a block diagram of the system used by the target pose image measurement method based on a wireless channel data transmission system according to the present invention.
Fig. 2 is a flowchart of the measurement procedure of the method.
Fig. 3 is an external view of the flying target used in the method.
Fig. 4 is a schematic diagram of the feature points of the flying target.
Fig. 5 is a schematic diagram of the pose measurement arrangement of the method.
Fig. 6 is a diagram of the central projection model used by the method.
Fig. 7 is a flowchart of the extraction of the image feature information data.
Fig. 8 is a schematic diagram of the gradient directions in the image of the flying target.
Fig. 9 is a schematic diagram of the pixel coordinate system and the image coordinate system.
Detailed Description of the Embodiments
The present invention is described in detail below with reference to the accompanying drawings and a typical embodiment.
Referring to the drawings, Fig. 1 is a block diagram of the system used by the target pose image measurement method based on a wireless channel data transmission system according to the present invention. The system consists of two parts, a transmitting end and a receiving end. The transmitting end comprises a camera, an image processing unit, a modulator and power amplifier, and a transmitter; the receiving end comprises a receiver, a detector and demodulator, and a pose calculation unit. The transmitting end performs image acquisition, image feature extraction, signal modulation and amplification, and signal transmission; the receiving end performs signal reception, signal demodulation, pose calculation, and parameter display.
As shown in Fig. 2, the measurement procedure of the present invention is as follows. A high-resolution camera captures images of the target; the image resolution of the camera is higher than 512×512 and the frame rate is higher than 50 fps. The captured images are passed to the image processing unit, which extracts the image features and outputs image feature information data; these data are modulated, amplified, and sent by the transmitter, and can be carried to the receiving end over a low-bandwidth wireless channel whose capacity does not exceed 2 Mbps. After the receiver at the receiving end receives the image feature information data, the data pass through the detector and demodulator to the pose calculation unit, which solves the pose parameters.
The core of the present invention is to split the target pose measurement algorithm into two parts, image feature extraction and pose parameter solution, carried out at the transmitting end and at the receiving end of the system respectively, so that the system as a whole measures the pose parameters of the target.
The target pose measurement method of the present invention is described below in detail, taking as an example a cone-shaped target carrying a marking pattern.
Fig. 3 shows the appearance of the flying target to be measured, and Fig. 4 shows its feature points; the four vertices of the trapezoid-like regions shown in Fig. 4 are used as the feature points of the target.
(1) Pose parameters and coordinate system definitions
Fig. 5 is a schematic diagram of the pose measurement arrangement of the present invention, and Fig. 6 shows the central projection model used by the measurement method. Five coordinate systems are defined for the pose measurement of the flying target:
(1) Target coordinate system O-XYZ
(2) Measurement coordinate system O_w-X_wY_wZ_w
(3) Camera coordinate system O_c-X_cY_cZ_c
(4) Image coordinate system o-xy
(5) Pixel coordinate system o'-uv
Let the coordinates of feature point P_i in these five coordinate systems be, respectively, W_i = (X_i, Y_i, Z_i)^T in the target coordinate system, W_i^w in the measurement coordinate system, W_i^c in the camera coordinate system, w_i = (x_i, y_i)^T in the image coordinate system, and w′_i = (u_i, v_i)^T in the pixel coordinate system (throughout this patent, i = 1, 2, 3, 4 by default).
W_i and W_i^w are related by

W_i^w = R·W_i + T    (1)
T is the translation vector, a three-dimensional vector T = [T_X, T_Y, T_Z]^T. It describes the relative position of the two coordinate systems, i.e. the coordinates of the origin of the target coordinate system in the measurement coordinate system.
R is the rotation matrix, a combination of trigonometric functions of the three angles (α, β, γ): the rotation angle α about the X axis, β about the Y axis, and γ about the Z axis. Rotating the target coordinate system about the three coordinate axes in this order brings its three axes into alignment with the corresponding axes of the measurement coordinate system, so R describes the attitude of the target coordinate system relative to the measurement coordinate system. The relationship between R and (α, β, γ) is given by Eq. (2), the product of the three elemental rotation matrices about the X, Y, and Z axes.
Pose measurement therefore amounts to solving for the translation vector T and the attitude angles (α, β, γ).
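As an illustration of Eqs. (1) and (2), the sketch below builds R from the attitude angles and maps a target-frame point into the measurement frame. The composition order R = Rz(γ)·Ry(β)·Rx(α) is an assumed convention for this sketch; Eq. (2) of the patent fixes the exact form.

```python
# Minimal sketch of Eqs. (1) and (2), assuming the rotation order X, then Y, then Z.
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx          # assumed composition order

def target_to_measurement(W, alpha, beta, gamma, T):
    """Eq. (1): W_w = R @ W + T for a target-frame point W (3-vector)."""
    return rotation_matrix(alpha, beta, gamma) @ np.asarray(W) + np.asarray(T)
```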
(2) Feature point extraction
Feature point extraction determines the coordinates of the target features in the pixel coordinate system of the image captured by the camera. The appearance of the target to be measured is given a marking scheme designed to provide prior image feature information; the marking design differs for different target features. The prior image features may be feature points or feature lines, and their number determines the wireless channel bandwidth occupied by the image feature information data. Solving the target pose from an image requires prior information for at least three feature points. The upper limit on the number of feature points is related to the bandwidth allocated to the measurement system by the channel, but using more feature points increases the computational complexity and overloads the image processing unit; the optimal number is three or four.
The feature point extraction process is illustrated with the target shown in Fig. 3. The target features are the four vertices P1, P2, P3, P4 of each trapezoid-like region. The high-precision extraction method of the present invention for these feature points follows the flow shown in Fig. 7; the specific steps are as follows.
(1) Detect the integer-pixel edges of the trapezoid-like regions. In this embodiment the Canny operator is used to detect the trapezoid-like edges, with the following steps.
Step 1: smooth the image with a Gaussian filter. The Gaussian smoothing function is

G(x, y) = (1 / (2πσ^2)) · exp(-(x^2 + y^2) / (2σ^2))
Step 2: compute the gradient magnitude and direction from finite differences of the first-order partial derivatives. The 2×2 first-order difference convolution templates give the partial derivatives P[i,j] and Q[i,j], from which the gradient magnitude M[i,j] = sqrt(P[i,j]^2 + Q[i,j]^2) and the gradient direction θ[i,j] = arctan(Q[i,j] / P[i,j]) are obtained.
Step 3: apply non-maximum suppression to the gradient magnitude. The global gradient alone is not sufficient to determine edges; to locate edges, points with locally maximal gradient magnitude must be kept and non-maxima suppressed. The present invention uses the gradient direction for this purpose, quantizing the direction angle into sectors:
ξ[i,j]=Sector(θ[i,j]) (6)ξ[i,j]=Sector(θ[i,j]) (6)
Fig. 8 shows the gradient direction sectors for the image of the flying target to be measured; the four sectors are numbered 0 to 3 and correspond to the four possible combinations in a 3×3 neighbourhood. At each point, the centre pixel M of the neighbourhood is compared with the two pixels along the gradient line. If the gradient magnitude of M is not larger than the gradient magnitudes of the two adjacent pixels along the gradient line, M is set to 0, i.e.
N[i,j]=NMS(M[i,j],ξ[i,j]) (7)N[i,j]=NMS(M[i,j],ξ[i,j]) (7)
Step 4: detect and link edges with a double-threshold algorithm. A typical way to reduce the number of false edge segments is to apply a threshold to N[i,j], setting all values below the threshold to zero. The present invention uses a double-threshold algorithm: two thresholds τ1 and τ2 with τ1 ≈ 2τ2 are chosen, giving two thresholded edge images T1[i,j] and T2[i,j]; edges are collected from T1, and T2 is used to bridge all gaps.
The Canny algorithm thus yields the integer-pixel edge points P_i(m, n) of the trapezoid-like regions.
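For illustration only, the integer-pixel edge detection of step (1) can be realized with an off-the-shelf Canny implementation; the OpenCV calls, thresholds, and kernel size below are assumptions of this sketch, not values prescribed by the patent.

```python
# Minimal sketch of step (1): Gaussian smoothing followed by Canny edge detection.
import cv2
import numpy as np

def detect_integer_pixel_edges(gray, low_thresh=40, high_thresh=80, sigma=1.0):
    """Return integer-pixel edge coordinates (row, col) of a grayscale image."""
    # Step 1: Gaussian smoothing.
    smoothed = cv2.GaussianBlur(gray, (5, 5), sigmaX=sigma)
    # Steps 2-4 (gradient, non-maximum suppression, double threshold) are
    # performed internally by cv2.Canny.
    edges = cv2.Canny(smoothed, low_thresh, high_thresh)
    rows, cols = np.nonzero(edges)
    return np.stack([rows, cols], axis=1)
```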
(2) Separate the legs, upper base, and lower base of the trapezoid-like edges. Because each black trapezoid-like region is adjacent to a white one, the two sides of a leg are black and white, whereas for the upper and lower bases one side is grey (background) and the other is white or black (marking). This property is used to separate the leg edge points from the others. The legs of the trapezoid-like regions are straight lines and are fitted by least squares. Removing the leg edge points from the edge point set leaves the edge points of the upper and lower bases. The upper and lower bases are circular arcs that project into the image as elliptical arcs (or circular arcs); these points are used to fit the two curves on which the upper and lower bases lie.
(3) Extract the sub-pixel edges of the trapezoid-like regions. After the integer-pixel edge points P_i(m, n) have been obtained with the Canny operator, the gradient direction of a known integer-pixel edge point P_i(m, n) is used as an approximation of the gradient direction of the unknown sub-pixel edge point, and interpolation is performed along the gradient direction at the integer-pixel edge point to obtain an interpolation function φ(x, y). Since the grey-level derivative is largest at an image edge, the coordinates of the maximum of φ(x, y) are the coordinates of the sub-pixel edge point; the sub-pixel edge coordinates P′_i(m′, n′) are therefore obtained by maximizing φ(x, y).
Let R_0 be the gradient magnitude of the trapezoid-like edge point P_i(m, n), and let R_-1 and R_1 be the gradient magnitudes of the two pixels P_i-1 and P_i+1 adjacent to P_i along the gradient direction; R_0, R_-1, and R_1 are obtained with the eight-template Sobel operator. The sub-pixel coordinates P′_i(m′, n′) of the edge point P_i(m, n) are obtained by fitting a parabola to the three samples (-W, R_-1), (0, R_0), (W, R_1) and displacing P_i(m, n) along the gradient direction by the offset of the parabola's extremum, where W is the distance from the adjacent pixel to the edge point (W = 1 or √2) and θ is the angle between the gradient direction and the vertical axis of the image.
Using this method, the sub-pixel coordinates P′_i(m′, n′) of every edge point on the legs and on the upper and lower bases are obtained.
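The following sketch shows one common way to realize the sub-pixel refinement described above: the gradient magnitudes of an edge pixel and of its two neighbours along the gradient direction are fitted with a parabola, and the edge point is shifted to the parabola's vertex. The Sobel-based gradient and the sign convention for the offset are assumptions, not values fixed by the patent.

```python
# Sketch of parabolic sub-pixel edge refinement along the gradient direction.
import numpy as np
import cv2

def subpixel_edge_point(gray, m, n):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy[m, n], gx[m, n])        # gradient direction at (m, n)
    dm, dn = np.sin(theta), np.cos(theta)          # unit step along the gradient
    # Gradient magnitudes at the edge pixel and its two neighbours along the gradient.
    r0 = mag[m, n]
    r_minus = mag[int(round(m - dm)), int(round(n - dn))]
    r_plus = mag[int(round(m + dm)), int(round(n + dn))]
    w = np.hypot(round(dm), round(dn)) or 1.0      # neighbour distance: 1 or sqrt(2)
    denom = r_minus - 2.0 * r0 + r_plus
    delta = 0.0 if abs(denom) < 1e-12 else 0.5 * w * (r_minus - r_plus) / denom
    return m + delta * dm, n + delta * dn          # sub-pixel coordinates (m', n')
```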
(4) Solving the sub-pixel coordinates of the feature points
Since a leg of a trapezoid-like region is a straight segment along a generatrix of the cone, it can be fitted by least squares with the straight-line model y = kx + b. Let P_L1(m_L1, n_L1) and P_L2(m_L2, n_L2) be integer-pixel edge points on the two legs, and let P′_L1(m′_L1, n′_L1) and P′_L2(m′_L2, n′_L2) be the corresponding sub-pixel edge points; the legs L_1 and L_2 are then

L_1: y = k_1·x + b_1,   L_2: y = k_2·x + b_2
where k_1, b_1, k_2, b_2 are the least-squares estimates obtained from the sub-pixel edge points of each leg,

k = Σ(m′_i - m̄′)(n′_i - n̄′) / Σ(m′_i - m̄′)^2,   b = n̄′ - k·m̄′

and m̄′, n̄′ denote the means of the sub-pixel edge point coordinates of the corresponding leg.
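A minimal sketch of the least-squares leg fit described above, assuming the sub-pixel edge points of one leg are available as coordinate arrays:

```python
# Least-squares fit of y = k*x + b to the sub-pixel edge points of one leg.
import numpy as np

def fit_leg_line(xs, ys):
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    x_mean, y_mean = xs.mean(), ys.mean()
    k = np.sum((xs - x_mean) * (ys - y_mean)) / np.sum((xs - x_mean) ** 2)
    b = y_mean - k * x_mean
    return k, b
```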
Because the upper and lower bases of the trapezoid-like regions are circular arcs that project into the image as elliptical arcs, they are fitted with the conic model ax^2 + bxy + cy^2 + dx + ey + f = 0. Let P_C1(m_C1, n_C1) and P_C2(m_C2, n_C2) be integer-pixel coordinates on the upper and lower bases, and let P′_C1(m′_C1, n′_C1) and P′_C2(m′_C2, n′_C2) be the corresponding sub-pixel coordinates; the upper and lower bases are denoted L_3 and L_4.
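For illustration, the conic fit of the bases can be posed as an algebraic least-squares problem; the SVD-based solver below is one common choice and is an assumption of this sketch rather than the patent's prescribed method.

```python
# Algebraic least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.
import numpy as np

def fit_conic(xs, ys):
    """Return (a, b, c, d, e, f), defined up to scale, minimizing the algebraic error."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    D = np.column_stack([xs**2, xs * ys, ys**2, xs, ys, np.ones_like(xs)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]          # right singular vector of the smallest singular value
```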
When the cone is far from the camera, the upper and lower bases contain few edge points, the fitted conic is poor, and its error with respect to the true curve is large. To address this problem, the present invention uses Lagrange interpolation to fit the curves on which the upper and lower bases lie.
The coordinates P_L1L3(m, n) of the intersection of L_1 and L_3 are found as follows:
(a) Find the two sub-pixel edge points P′_1(m′_1, n′_1) and P′_2(m′_2, n′_2) on L_3 at distances d ≈ 1 and d ≈ 3 from L_1.
(b) On the upper base of the trapezoid-like region on the other side of L_1, extract the sub-pixel edge point P′_3(m′_3, n′_3) at distance d ≈ 1 from L_1.
(c) Construct the Lagrange interpolating curve through P′_1(m′_1, n′_1), P′_2(m′_2, n′_2), and P′_3(m′_3, n′_3).
(d) Intersect the interpolated curve with L_1 to obtain the coordinates P_L1L3(m, n) of the intersection of L_1 and L_3.
Similarly, the coordinates P_L2L3(m, n), P_L1L4(m, n), and P_L2L4(m, n) of the other three intersections of the legs with the upper and lower bases are obtained.
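A sketch of steps (a) to (d) for one corner is given below: a quadratic Lagrange interpolant is passed through the three nearby sub-pixel points and intersected with the fitted leg line. Treating the base locally as a function y(x) and finding the intersection by a dense search are simplifying assumptions of the sketch.

```python
# Quadratic Lagrange interpolation of the base and intersection with a leg line.
import numpy as np

def lagrange_quadratic(pts):
    """Return the function y(x) through three points [(x1,y1), (x2,y2), (x3,y3)]."""
    (x1, y1), (x2, y2), (x3, y3) = pts
    def f(x):
        return (y1 * (x - x2) * (x - x3) / ((x1 - x2) * (x1 - x3))
                + y2 * (x - x1) * (x - x3) / ((x2 - x1) * (x2 - x3))
                + y3 * (x - x1) * (x - x2) / ((x3 - x1) * (x3 - x2)))
    return f

def corner_from_leg_and_base(k, b, base_pts, x_lo, x_hi, steps=2000):
    """Intersect the leg line y = k*x + b with the interpolated base curve."""
    base = lagrange_quadratic(base_pts)
    xs = np.linspace(x_lo, x_hi, steps)
    gap = np.abs(base(xs) - (k * xs + b))
    x_star = xs[np.argmin(gap)]
    return x_star, k * x_star + b
```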
According to this embodiment of the present invention, all of the above steps (feature point extraction, establishing the matching relationship between the image feature points and the prior features of the target, and generating the image feature information data) are carried out by an image processing unit built around a DSP chip.
(3) Transmission bit rate calculation
After the image feature information data have been generated, they are transmitted over the wireless channel to the pose calculation unit, which runs the pose measurement software.
Feature point extraction yields the image feature information data of the flying target in the images captured by the high-resolution camera. In the present invention these data, rather than the image itself, are transmitted over the wireless channel, so the channel capacity limit is satisfied. Taking the target shown in Fig. 3, which has four feature points, as an example, the transmission bit rate is calculated as follows.
The image feature information data of each image frame comprise the coordinates and the index of the four feature points. Each feature point coordinate pair occupies 8 bytes and each feature point index occupies 2 bytes, so the image feature information data of a single frame amount to 40 bytes.
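For illustration, one frame of image feature information data could be packed and the resulting bit rate computed as below. The byte layout (a 16-bit index plus two 32-bit floats per point, 10 bytes per point) is an assumption consistent with the 40-byte frame stated above, not a format defined by the patent.

```python
# Sketch of frame packing and bit-rate calculation for the feature data stream.
import struct

def pack_frame(points):
    """points: list of (index, u, v) tuples, e.g. the 4 feature points of one frame."""
    payload = b""
    for idx, u, v in points:
        payload += struct.pack("<Hff", idx, u, v)   # 2 + 4 + 4 = 10 bytes per point
    return payload

def bit_rate_kbps(points_per_frame, frames_per_second, bytes_per_point=10):
    return points_per_frame * bytes_per_point * frames_per_second * 8 / 1024.0

frame = pack_frame([(1, 310.25, 255.75), (2, 362.50, 248.00),
                    (3, 355.00, 300.50), (4, 305.75, 308.25)])
print(len(frame))                   # 40 bytes per frame
print(bit_rate_kbps(4, 200))        # 62.5 Kbps at 200 fps with 4 feature points
print(bit_rate_kbps(8, 100))        # 62.5 Kbps at 100 fps with 8 feature points
```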
The bit rate for four feature points at different image frame rates is calculated as shown in Table 1.
Table 1. Bit rate calculation (1)
As the calculation in Table 1 shows, at the relatively high frame rate of 200 fps the bandwidth needed on the wireless channel to transmit the image feature information data is 62.5 Kbps, far below the 2 Mbps capacity of a typical wireless channel, which leaves a large amount of channel bandwidth free.
The bit rate for an image frame rate of 100 fps and different numbers of feature points is calculated as shown in Table 2.
Table 2. Bit rate calculation (2)
As the calculation in Table 2 shows, with eight feature points the bandwidth needed on the wireless channel to transmit the image feature information data is 62.5 Kbps, far below the 2 Mbps capacity of a typical wireless channel.
(4) Pose parameter calculation
Pose parameter calculation is performed by the pose calculation unit and can use known pose measurement algorithms; in practice the pose parameters of the target are solved by a computer running pose measurement software. In the method of the present invention the target is a cooperative target, so prior image feature information is available. From the preceding steps, the coordinates W_i = (X_i, Y_i, Z_i) of the feature points in the target coordinate system O-XYZ are prior information and therefore known, and the coordinates w′_i = (u_i, v_i)^T of the feature points in the pixel coordinate system are obtained by the feature point extraction described above. Pose parameter calculation uses these known quantities together with the parameters of the high-resolution camera to solve for the pose parameters of the target.
Fig. 5 is a schematic diagram of the pose measurement arrangement of the present invention. The coordinate system of the high-resolution camera is O_c-X_cY_cZ_c, with the camera principal point as the origin, the horizontal and vertical directions of the camera as the x and y axes, and the optical axis of the camera as the z axis. The measurement coordinate system is O_w-X_wY_wZ_w.
(1) Solving the image coordinates of the feature points
Let dx and dy be the physical size of a pixel, i.e. the spacing between adjacent pixels of the pixel coordinate system in the x and y directions, and let (u_0, v_0) be the pixel coordinates of the image centre, as shown in Fig. 9. From the transformation between the pixel coordinate system and the image coordinate system, the image coordinates w_i = (x_i, y_i)^T of any image point can be computed from its pixel coordinates w′_i = (u_i, v_i)^T, namely

x_i = (u_i - u_0)·dx,   y_i = (v_i - v_0)·dy
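A small sketch of the pixel-to-image conversion above; the pixel pitch and principal point values are illustrative placeholders, not parameters of the patent.

```python
# Convert pixel coordinates (u, v) to metric image-plane coordinates (x, y).
def pixel_to_image(u, v, u0=256.0, v0=256.0, dx=5.5e-6, dy=5.5e-6):
    return (u - u0) * dx, (v - v0) * dy
```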
(2) Solving the camera coordinates of the feature points
Let f denote the focal length of the camera. The image coordinates (x_i, y_i)^T of the four feature points and their camera coordinates W_i^c = (X_i^c, Y_i^c, Z_i^c)^T satisfy the central projection relations

x_i = f·X_i^c / Z_i^c,   y_i = f·Y_i^c / Z_i^c   (i = 1, 2, 3, 4)

At the same time, the three-dimensional distance between any two of the four points is known (prior information), i.e.

||W_i^c - W_j^c|| = ||W_i - W_j||   (i ≠ j)

The projection relations give 8 equations and the distance constraints give 6 equations, which together form an overdetermined system from which the camera coordinates W_i^c of the four feature points are solved by least squares.
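As an illustration, the overdetermined system of 8 projection equations and 6 distance equations can be solved with a nonlinear least-squares routine; SciPy and the depth initialization below are assumptions of this sketch.

```python
# Sketch: solve the camera-frame coordinates of the 4 feature points by least squares.
import numpy as np
from scipy.optimize import least_squares

def solve_camera_coords(img_xy, target_XYZ, f, z0=10.0):
    """img_xy: (4,2) image coords; target_XYZ: (4,3) prior target-frame coords; f: focal length."""
    d = np.linalg.norm(target_XYZ[:, None, :] - target_XYZ[None, :, :], axis=2)
    pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]

    def residuals(p):
        P = p.reshape(4, 3)                       # candidate camera-frame coordinates
        res = []
        for i in range(4):                         # 8 projection equations
            res.append(img_xy[i, 0] - f * P[i, 0] / P[i, 2])
            res.append(img_xy[i, 1] - f * P[i, 1] / P[i, 2])
        for i, j in pairs:                         # 6 distance equations
            res.append(np.linalg.norm(P[i] - P[j]) - d[i, j])
        return res

    # Initial guess: back-project each image point to an assumed depth z0.
    P0 = np.column_stack([img_xy[:, 0] * z0 / f, img_xy[:, 1] * z0 / f, np.full(4, z0)])
    sol = least_squares(residuals, P0.ravel())
    return sol.x.reshape(4, 3)
```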
(3) Solving the world (measurement) coordinates of the feature points
Let the relative transformation between the camera coordinate system and the world (measurement) coordinate system be represented by the rotation matrix Rc and the translation vector Tc, which are known from the camera installation carried out before the measurement (camera calibration). The camera coordinates and the world coordinates of each feature point are then related by

W_i^c = Rc·W_i^w + Tc

from which the world coordinates W_i^w of the feature points are obtained.
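A small sketch of the camera-to-world conversion, under the convention W_c = Rc·W_w + Tc assumed above:

```python
# Map camera-frame points back to the world (measurement) frame using calibrated Rc, Tc.
import numpy as np

def camera_to_world(P_cam, Rc, Tc):
    """P_cam: (N,3) camera-frame points; Rc: 3x3 rotation; Tc: (3,) translation."""
    return (Rc.T @ (P_cam - Tc).T).T
```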
(4) Solving the pose parameters of the target
The pose parameters of the target are represented by the rotation matrix R and the translation vector T; as in Eq. (1), the world coordinates and the target (object) coordinates of the feature points satisfy

W_i^w = R·W_i + T

Let the centroids of the four feature points in the world coordinate system and in the target coordinate system be, respectively,

W̄^w = (1/4)·Σ W_i^w,   W̄ = (1/4)·Σ W_i

The coordinates of the four feature points in the coordinate systems whose origins are these centroids are then

q_i^w = W_i^w - W̄^w,   q_i = W_i - W̄

from which it follows that

q_i^w = R·q_i

A symmetric 4×4 matrix N is constructed from the cross-covariance matrix M = Σ q_i·(q_i^w)^T of the centred point sets, in the quaternion (absolute-orientation) formulation.

The unit quaternion r corresponding to the rotation matrix R is the eigenvector of N associated with its largest eigenvalue; R is recovered from r by the standard quaternion-to-rotation-matrix conversion.

R is thus obtained; (α, β, γ) are then solved from Eq. (2), and T = [T_X, T_Y, T_Z]^T is solved from Eq. (18) using the centroids, T = W̄^w - R·W̄, which gives the pose parameters of the target.
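A sketch of the quaternion-based solution of R and T described above (the absolute-orientation construction commonly attributed to Horn); the explicit N matrix and the quaternion-to-matrix conversion below are standard forms offered as assumptions, not verbatim reproductions of the patent's equations.

```python
# Sketch: recover R and T from 4 corresponding target-frame and world-frame points.
import numpy as np

def solve_pose(target_pts, world_pts):
    """target_pts, world_pts: (4,3) arrays of corresponding coordinates. Returns R, T."""
    tc, wc = target_pts.mean(axis=0), world_pts.mean(axis=0)
    q, qw = target_pts - tc, world_pts - wc        # centred point sets
    M = q.T @ qw                                   # 3x3 cross-covariance matrix
    A = M - M.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
    N = np.zeros((4, 4))
    N[0, 0] = np.trace(M)
    N[0, 1:] = N[1:, 0] = delta
    N[1:, 1:] = M + M.T - np.trace(M) * np.eye(3)
    eigvals, eigvecs = np.linalg.eigh(N)
    w, x, y, z = eigvecs[:, np.argmax(eigvals)]    # unit quaternion of the largest eigenvalue
    R = np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w), 2 * (x * z + y * w)],
        [2 * (x * y + z * w), 1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w), 2 * (y * z + x * w), 1 - 2 * (x * x + y * y)],
    ])
    T = wc - R @ tc
    return R, T
```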
It should be understood that the above description is only one specific embodiment of the present invention; the invention is not limited to the particular structures illustrated or described above, and the claims cover all variations within the spirit and scope of the invention.
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310464818.4A CN103528568B (en) | 2013-10-08 | 2013-10-08 | A kind of object pose image measuring method based on wireless channel |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103528568A true CN103528568A (en) | 2014-01-22 |
CN103528568B CN103528568B (en) | 2016-08-17 |
Family
ID=49930771
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310464818.4A Active CN103528568B (en) | 2013-10-08 | 2013-10-08 | A kind of object pose image measuring method based on wireless channel |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103528568B (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104933432A (en) * | 2014-03-18 | 2015-09-23 | 北京思而得科技有限公司 | Processing method for finger pulp crease and finger vein images |
CN108090931A (en) * | 2017-12-13 | 2018-05-29 | 中国科学院光电技术研究所 | Anti-blocking and anti-interference marker identification and pose measurement method based on combination of circle and cross features |
CN108225327B (en) * | 2017-12-31 | 2021-05-14 | 芜湖哈特机器人产业技术研究院有限公司 | A method of constructing and locating a top-marked map |
CN108180917B (en) * | 2017-12-31 | 2021-05-14 | 芜湖哈特机器人产业技术研究院有限公司 | Top map construction method based on pose graph optimization |
CN108871314B (en) * | 2018-07-18 | 2021-08-17 | 江苏实景信息科技有限公司 | Positioning and attitude determining method and device |
CN108871314A (en) * | 2018-07-18 | 2018-11-23 | 江苏实景信息科技有限公司 | A kind of positioning and orientation method and device |
CN109003305A (en) * | 2018-07-18 | 2018-12-14 | 江苏实景信息科技有限公司 | A kind of positioning and orientation method and device |
CN109003305B (en) * | 2018-07-18 | 2021-07-20 | 江苏实景信息科技有限公司 | Positioning and attitude determining method and device |
CN112789672A (en) * | 2018-09-10 | 2021-05-11 | 感知机器人有限公司 | Control and navigation system, attitude optimization, mapping and positioning technology |
US11827351B2 (en) | 2018-09-10 | 2023-11-28 | Perceptual Robotics Limited | Control and navigation systems |
CN112789672B (en) * | 2018-09-10 | 2023-12-12 | 感知机器人有限公司 | Control and navigation system, gesture optimization, mapping and positioning techniques |
US11886189B2 (en) | 2018-09-10 | 2024-01-30 | Perceptual Robotics Limited | Control and navigation systems, pose optimization, mapping, and localization techniques |
CN114509089A (en) * | 2021-12-31 | 2022-05-17 | 成都弓网科技有限责任公司 | Non-contact rail transit train speed direction mileage detection method and system |
CN114268742A (en) * | 2022-03-01 | 2022-04-01 | 北京瞭望神州科技有限公司 | Sky eye chip processing apparatus |
CN116664414A (en) * | 2023-03-27 | 2023-08-29 | 北京理工大学 | Unified image defogging and denoising method based on unsupervised learning |
CN117710449A (en) * | 2024-02-05 | 2024-03-15 | 中国空气动力研究与发展中心高速空气动力研究所 | NUMA-based real-time pose video measurement assembly line model optimization method |
CN117710449B (en) * | 2024-02-05 | 2024-04-16 | 中国空气动力研究与发展中心高速空气动力研究所 | NUMA-based real-time pose video measurement assembly line model optimization method |
Also Published As
Publication number | Publication date |
---|---|
CN103528568B (en) | 2016-08-17 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |