CN111091577B - Line characteristic description method based on pseudo-twin network - Google Patents

Line characteristic description method based on pseudo-twin network

Info

Publication number: CN111091577B
Application number: CN201911241559.2A
Authority: CN (China)
Other versions: CN111091577A (Chinese)
Prior art keywords: straight line; network; matrix; line; matching
Legal status: Active
Inventors: 付苗苗, 霍占强, 刘红敏, 张一帆
Current assignees: Zhongke Nanjing Artificial Intelligence Innovation Research Institute; Institute of Automation of Chinese Academy of Science; Henan University of Technology
Application filed by the assignees above, with priority to CN201911241559.2A; publication of application CN111091577A; application granted; publication of grant CN111091577B.


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/13: Edge detection
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems


Abstract

The invention relates to a line characteristic description method based on a pseudo-twin network, which comprises the following steps. Step S11: acquire image pairs of the same scene under different transformations. Step S12: detect the straight lines in the images. Step S13: obtain correctly matched straight-line pairs. Step S14: determine the image block corresponding to each straight line. Step S2: construct a fully convolutional pseudo-twin network for line feature description. Step S31: acquire a training subset. Step S32: compute the network output feature vectors. Step S33: adjust the network parameters. Step S4: update the network parameter values. Step S5: iterate the parameter updates for the specified number of times. Step S6: acquire the descriptor of an input straight line. The proposed method yields highly distinctive descriptors.

Description

Line characteristic description method based on pseudo-twin network
Technical Field
The invention relates to the technical field of image feature description, and in particular to a line feature description method based on a pseudo-twin network.
Background
Local image feature description and matching is one of the basic research problems in image processing and computer vision, and is widely applied in scenarios such as three-dimensional reconstruction, wide-baseline matching, panoramic stitching, and image retrieval. In recent years researchers have proposed many image feature description methods, which fall into two main categories: hand-crafted descriptors and learning-based descriptors. Most of these methods describe a local region with a single feature descriptor; the most classical hand-crafted method is the SIFT descriptor. It is generally believed that a well-performing feature descriptor is invariant for matching blocks under variations in illumination, blur, deformation, and so on, while remaining strongly distinguishable from non-matching blocks.
In recent years, driven by the maturity of hand-crafted descriptors, the revolutionary changes that deep learning has brought to many fields, and the large-scale point-matching data sets provided in the literature, a series of learning-based point feature descriptors have appeared. Zagoruyko et al. proposed several block-matching neural network models, including twin (Siamese) networks and dual-channel networks, and compared their matching performance. In 2017, Tian et al. proposed L2-Net, a CNN model with a fully convolutional structure; its training is based on a progressive sampling strategy and a loss function composed of three error terms, the relative distances between descriptors within a batch are optimized, and the descriptors output by the model are matched with the L2 distance. Anastasiya Mishchuk et al., inspired by Lowe's SIFT matching criterion, applied a triplet loss to the L2-Net architecture to obtain a compact descriptor named HardNet. However, the straight-line feature is also one of the most important image features and is indispensable in many application scenarios. For example, in some low-texture scenes, local point features and region features alone are insufficient, whereas line features carry more information. Unfortunately, compared with point feature descriptors, line feature descriptors in the literature have developed slowly and remain at the hand-crafted stage.
The main reasons are as follows: a deep fully convolutional neural network depends on a large number of labeled training samples, constructing such a data set requires considerable manpower and financial resources, and too few labeled samples lead to overfitting; in addition, the end points of a straight line are uncertain and the local neighborhood of a straight line often lacks rich texture.
Disclosure of Invention
The present invention addresses the above problems. In order to make line descriptors more stable and robust under a wide range of image variations, its object is to provide a learning-based line descriptor with stronger stability and distinctiveness. To achieve this object, the line characteristic description method based on a pseudo-twin network comprises the following steps:
step S1: constructing a data set for line characterization network training;
step S11: acquiring image pairs of different transformations of the same scene;
step S12: detecting a straight line in the image;
step S13: obtaining a correct matching straight line pair;
step S14: determining an image block corresponding to the straight line;
step S2: constructing a full convolution pseudo-twin network for line characteristic description;
step S3: training the network by using the line matching data set;
step S31: acquiring a training subset;
step S32: calculating a network output feature vector;
step S33: adjusting network parameters;
step S4: updating network parameter values;
step S5: iteratively updating the parameters for a specified number of times;
step S6: acquiring a descriptor of the input straight line.
To address the above problems, the invention provides a line feature description method based on a pseudo-twin network: a data set for training the line feature description network is first constructed, and then, using a transfer learning strategy, the model parameters of the constructed fully convolutional pseudo-twin network are initialized with L2-Net model parameters pre-trained on the large data set Liberty, so that line feature descriptors with stronger distinguishing capability and robustness are obtained on line matching data. The proposed method overcomes the above problems and achieves stronger stability and better performance.
Drawings
Fig. 1 is a flowchart of a line characteristic description method based on a pseudo-twin network according to an embodiment of the present invention.
Fig. 2 is a network architecture diagram of a pseudo-twin network based line characterization method.
Detailed Description
Fig. 1 is a flow chart of the line characteristic description method based on a pseudo-twin network, which mainly comprises the following steps: obtaining image pairs of the same scene under different transformations, detecting straight lines in the images, obtaining correctly matched straight-line pairs, determining the image blocks corresponding to the straight lines, building a fully convolutional pseudo-twin network for line feature description, obtaining training subsets, computing the network output feature vectors, adjusting the network parameters, updating the network parameter values, iterating the parameter updates for the specified number of times, and obtaining the descriptors of the input straight lines. The implementation details of each step are as follows:
step S1: constructing a data set for line feature description network training; the specific procedure comprises steps S11, S12, S13 and S14;
step S11: shooting images of different scenes from different viewing angles and rotation angles, and applying compression, illumination, noise and other transformations to the images, so as to form image pairs of the same scene under different transformations;
step S12: extracting the straight lines in the images by using an existing line detection method, for example one based on the Canny edge detection operator;
step S13: obtaining correctly matched straight-line pairs. Specifically, for any image pair, straight-line matching is performed with the mean-standard deviation line descriptor described in MSLD: A robust descriptor for line matching (Pattern Recognition, 2009, 42(5)) to obtain matched straight-line pairs in the image pair; incorrect matches are then eliminated manually, yielding the set of correctly matched straight-line pairs {(L_j, L_j'), j = 1, 2, ..., N_L}, where L_j denotes a straight line in the first image of the pair, L_j' denotes the straight line in the second image correctly matched with L_j, and N_L is the number of matched straight-line pairs;
step S14: determining the image block corresponding to each straight line. Specifically, for any straight line L composed of Num(L) points, denote any pixel point on L as P_k, k = 1, 2, ..., Num(L). The square region of side length 64 centered on P_k and aligned with the direction of L and the direction perpendicular to L is defined as the support region of P_k, and the matrix of brightness values of this support region is denoted I(P_k). Compute the mean matrix M(L) = Mean(I(P_1), I(P_2), ..., I(P_Num(L))) and the standard deviation matrix STD(L) = Std(I(P_1), I(P_2), ..., I(P_Num(L))), where Mean computes the element-wise mean of the matrices and Std their element-wise standard deviation. The 64×128 normalization matrix corresponding to the straight line L is written as A_L, where A_L(:, 1:64) = M(L) and A_L(:, 65:128) = STD(L);
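A minimal NumPy sketch of step S14, assuming the support regions have already been cropped as 64×64 brightness arrays; the function name and the stacked input layout are illustrative, not from the patent:

```python
import numpy as np

def line_normalization_matrix(support_regions):
    """Build the 64x128 normalization matrix A_L for a straight line L.

    support_regions: array of shape (Num(L), 64, 64) holding the brightness
    matrices I(P_k) of the 64x64 support region of each pixel P_k on L.
    """
    regions = np.asarray(support_regions, dtype=np.float64)
    M = regions.mean(axis=0)     # mean matrix M(L), 64x64
    STD = regions.std(axis=0)    # standard deviation matrix STD(L), 64x64
    # A_L(:, 1:64) = M(L), A_L(:, 65:128) = STD(L)
    return np.concatenate([M, STD], axis=1)

# toy usage: a line sampled at 5 points
regions = np.random.rand(5, 64, 64)
A_L = line_normalization_matrix(regions)
print(A_L.shape)  # (64, 128)
```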
Step S2: constructing a fully convolutional pseudo-twin network for line feature description. Specifically, a fully convolutional neural network with two branches is built, each branch being an independent L2-Net in which the convolution kernel of the last layer is enlarged from 8×8 to 8×16 and the number of kernels is increased from 128 to 256, with all other settings identical to L2-Net. The features at the ends of the two branch networks are concatenated to obtain the fully convolutional pseudo-twin network for line feature description, denoted CS PSLTL-Net. The first six layers of both branches are initialized with the model parameters of an L2-Net pre-trained on the data set Liberty, and the parameters of the last layer of each branch of CS PSLTL-Net use the default initialization values in PyTorch;
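The modified last layer can be sanity-checked by walking one branch's layer shapes on a 32×64 input. The per-layer configuration below (six 3×3 convolutions, two with stride 2, before the final layer) follows the published L2-Net design and is an assumption insofar as the patent only states the changed final layer:

```python
def conv_out(size, kernel, stride, padding):
    """Spatial output size of a convolution: floor((size + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# (channels, kernel_h, kernel_w, stride, padding) per layer; the first six
# follow L2-Net, the last is the patent's modified 8x16 conv with 256 kernels
layers = [(32, 3, 3, 1, 1), (32, 3, 3, 1, 1), (64, 3, 3, 2, 1),
          (64, 3, 3, 1, 1), (128, 3, 3, 2, 1), (128, 3, 3, 1, 1),
          (256, 8, 16, 1, 0)]

h, w = 32, 64  # each branch input: one 32x64 matrix from step S32
for c, kh, kw, s, p in layers:
    h, w = conv_out(h, kh, s, p), conv_out(w, kw, s, p)
print(h, w, layers[-1][0])  # 1 1 256: each branch emits a 256-d vector
```

Concatenating the two 256-d branch outputs gives one 512-d line descriptor, which is why the 8×8 kernel must grow to 8×16 once the branch input widens from 32×32 to 32×64.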
step S3: training the network CS PSLTL-Net by using the line matching data set specifically comprises the steps of S31, S32 and S33:
step S31: the training subset is obtained by randomly selecting n pairs of matching straight lines from the line matching data set obtained in the step S1, and combining the normalization matrixes corresponding to the straight lines into a matrix
Figure BDA0002306390000000052
Wherein->
Figure BDA0002306390000000053
Is a straight line L j Corresponding normalization matrix, < >>
Figure BDA0002306390000000054
Is a straight line L j ' corresponding normalized matrix, straight line L j And L j ' is a matched straight line pair;
Step S32: computing the network output feature vector. For the normalization matrix A_L of any straight line obtained in step S31: downsample the mean matrix M(L) and the standard deviation matrix STD(L) separately and concatenate them into a 32×64 matrix, which serves as the input of the first branch of the network CS PSLTL-Net; extract the central regions of the mean matrix and the standard deviation matrix, M_c(L) = M(L)(32-15 : 32+16, 32-15 : 32+16) and STD_c(L) = STD(L)(32-15 : 32+16, 32-15 : 32+16), and concatenate them into a second 32×64 matrix, which serves as the input of the second branch of CS PSLTL-Net. The output feature vectors of the two branches are concatenated to obtain the output feature vector corresponding to the input straight line;
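A NumPy sketch of preparing the two branch inputs from A_L. The 2×2 average pooling used for the downsampling is an assumption (the patent only states that M(L) and STD(L) are downsampled to 32×32); the 1-based index range 32-15 to 32+16 becomes 16:48 in 0-based indexing:

```python
import numpy as np

def branch_inputs(A_L):
    """Prepare the two 32x64 branch inputs from the 64x128 matrix A_L."""
    M, STD = A_L[:, :64], A_L[:, 64:]  # mean and std matrices, 64x64 each
    # downsample 64x64 -> 32x32 (2x2 average pooling, an assumed choice)
    pool = lambda X: X.reshape(32, 2, 32, 2).mean(axis=(1, 3))
    ds = np.concatenate([pool(M), pool(STD)], axis=1)  # first branch, 32x64
    # central 32x32 region: rows/cols 32-15 .. 32+16 (1-based) -> [16:48]
    crop = lambda X: X[16:48, 16:48]
    cs = np.concatenate([crop(M), crop(STD)], axis=1)  # second branch, 32x64
    return ds, cs

ds, cs = branch_inputs(np.random.rand(64, 128))
print(ds.shape, cs.shape)  # (32, 64) (32, 64)
```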
Step S33: adjusting the network parameters. Specifically, for the normalization matrices A_{L_i} and A_{L_i'} of any matched straight-line pair in step S31, obtain the output feature vector a_i corresponding to A_{L_i} and the output feature vector b_i corresponding to A_{L_i'} according to step S32. Compute the distance matrix D of size n×n, where D(i, j) = d(a_i, b_j) is the L2 distance between descriptors. Compute the triplet loss function Loss = (1/n) Σ_{i=1..n} max(0, 1 + d(a_i, b_i) − min(d(a_i, b_{j_min}), d(a_{k_min}, b_i))), where b_{j_min} is the non-matching descriptor closest to a_i, j_min = arg min_{j=1...n, j≠i} d(a_i, b_j), and a_{k_min} is the non-matching descriptor closest to b_i, k_min = arg min_{k=1...n, k≠i} d(a_k, b_i). New network model parameters are acquired from this loss function by stochastic gradient descent;
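A NumPy sketch of this hardest-in-batch triplet loss. The margin value 1.0 is an assumption (it is the value used by HardNet, which the background cites as inspiration); the patent text defines the loss structure but the exact constant was lost in an equation image:

```python
import numpy as np

def hardest_in_batch_triplet_loss(a, b, margin=1.0):
    """Triplet loss with hardest in-batch negatives.

    a, b: arrays of shape (n, d); a[i] and b[i] are the descriptors of the
    i-th matched line pair. margin=1.0 is an assumed (HardNet-style) value.
    """
    # n x n distance matrix D(i, j) = ||a_i - b_j||_2
    D = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    n = D.shape[0]
    pos = np.diag(D)                   # matching distances d(a_i, b_i)
    off = D + np.eye(n) * 1e10         # mask out the matching pairs
    neg = np.minimum(off.min(axis=1),  # closest b_{j_min} to each a_i
                     off.min(axis=0))  # closest a_{k_min} to each b_i
    return np.maximum(0.0, margin + pos - neg).mean()

loss = hardest_in_batch_triplet_loss(np.random.rand(8, 256),
                                     np.random.rand(8, 256))
print(loss >= 0.0)  # True
```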
step S4: updating the parameter values of the network CS PSLTL-Net with the network parameters acquired in step S3;
step S5: repeating steps S3 and S4 until the parameters have been updated the specified number of times;
step S6: acquiring the descriptor of an input straight line. Specifically, for any given image pair, the image block corresponding to any straight line in the images, obtained according to steps S12 and S14, is input into the fully convolutional pseudo-twin network obtained in step S5, which outputs the descriptor of the straight line.
To address the above problems, the invention provides a line feature description method based on a pseudo-twin network: a data set for training the line feature description network is first constructed, and then, using deep transfer learning, the model parameters of the two branches of the constructed fully convolutional pseudo-twin network are initialized with L2-Net model parameters pre-trained on the large data set Liberty, so that line feature descriptors with stronger distinguishing capability are obtained on line matching data. The proposed method overcomes the above problems and achieves stronger stability and better performance.

Claims (2)

1. A line characterization method based on a pseudo-twin network, characterized in that the method comprises the following steps:
step S1: constructing a data set for line characterization network training;
the specific mode of the step S1 comprises the steps S11, S12, S13 and S14;
step S11: shooting images of different scenes from different viewing angles and rotation angles, and applying compression, illumination, noise and other transformations to the images, so as to form image pairs of the same scene under different transformations;
step S12: extracting straight lines in the image by using a Canny edge detection operator;
step S13: obtaining correctly matched straight-line pairs: for any image pair, performing straight-line matching with a mean-standard deviation line descriptor to obtain matched straight-line pairs in the image pair, then manually eliminating incorrect matches to obtain the set of correctly matched straight-line pairs {(L_j, L_j'), j = 1, 2, ..., N_L}, wherein L_j denotes a straight line in the first image of the image pair, L_j' denotes the straight line in the second image correctly matched with L_j, and N_L is the number of matched straight-line pairs;
step S14: determining the image block corresponding to each straight line: for any straight line L composed of Num(L) points, denoting any pixel point on L as P_k, k = 1, 2, ..., Num(L), defining the square region of side length 64 centered on P_k and aligned with the direction of the straight line L and the direction perpendicular to L as the support region of P_k, denoting the matrix of brightness values of the support region of P_k as I(P_k), computing the mean matrix M(L) = Mean(I(P_1), I(P_2), ..., I(P_Num(L))) and the standard deviation matrix STD(L) = Std(I(P_1), I(P_2), ..., I(P_Num(L))), wherein Mean computes the element-wise mean of the matrices and Std their element-wise standard deviation, and writing the 64×128 normalization matrix corresponding to the straight line L as A_L, wherein A_L(:, 1:64) = M(L) and A_L(:, 65:128) = STD(L);
step S2: constructing a fully convolutional pseudo-twin network for line feature description: constructing a fully convolutional neural network with two branches, each branch being an independent L2-Net in which the size of the convolution kernel of the last layer is modified from 8×8 to 8×16 and the number of convolution kernels is modified from 128 to 256, with all other settings the same as L2-Net, and concatenating the features at the ends of the two branch networks to obtain the fully convolutional pseudo-twin network for line feature description, denoted CS PSLTL-Net, wherein the first six layers of the two branches are initialized with the model parameters of an L2-Net pre-trained on the data set Liberty, and the parameter values of the last layer of the two branches of CS PSLTL-Net use the default initialization values in PyTorch;
step S3: training the network CS PSLTL-Net by using the line matching data set;
step S4: updating the parameter value of the network CS PSLTL-Net by utilizing the network parameters acquired in the step S3;
step S5: repeating the steps S3 and S4 until the parameter updating reaches the designated times;
step S6: acquiring the descriptor of an input straight line: for any given image pair, inputting the image block corresponding to any straight line in the images, acquired according to steps S12 and S14, into the fully convolutional pseudo-twin network acquired in step S5, which outputs the descriptor of the straight line.
2. The pseudo-twin network-based line characteristic description method according to claim 1, wherein step S3 specifically comprises steps S31, S32 and S33:
step S31: acquiring a training subset: randomly selecting n pairs of matched straight lines from the line matching data set acquired in step S1, and collecting the normalization matrices corresponding to the straight lines as {(A_{L_j}, A_{L_j'}), j = 1, 2, ..., n}, wherein A_{L_j} is the normalization matrix corresponding to the straight line L_j, A_{L_j'} is the normalization matrix corresponding to the straight line L_j', and the straight lines L_j and L_j' are a matched straight-line pair;
step S32: computing the network output feature vector: for the normalization matrix A_L of any straight line acquired in step S31, downsampling the mean matrix M(L) and the standard deviation matrix STD(L) separately and concatenating them into a 32×64 matrix, which is used as the input of the first branch of the network CS PSLTL-Net; extracting the central regions of the mean matrix and the standard deviation matrix, M_c(L) = M(L)(32-15 : 32+16, 32-15 : 32+16) and STD_c(L) = STD(L)(32-15 : 32+16, 32-15 : 32+16), and concatenating them into a second 32×64 matrix, which is used as the input of the second branch of CS PSLTL-Net; and concatenating the output feature vectors of the two branches to obtain the output feature vector corresponding to the input straight line;
step S33: adjusting the network parameters: for the normalization matrices A_{L_i} and A_{L_i'} of any matched straight-line pair in step S31, obtaining the output feature vector a_i corresponding to A_{L_i} and the output feature vector b_i corresponding to A_{L_i'} according to step S32; computing the distance matrix D of size n×n, wherein D(i, j) = d(a_i, b_j) is the L2 distance between descriptors; computing the triplet loss function Loss = (1/n) Σ_{i=1..n} max(0, 1 + d(a_i, b_i) − min(d(a_i, b_{j_min}), d(a_{k_min}, b_i))), wherein b_{j_min} is the non-matching descriptor closest to a_i, j_min = arg min_{j=1...n, j≠i} d(a_i, b_j), a_{k_min} is the non-matching descriptor closest to b_i, and k_min = arg min_{k=1...n, k≠i} d(a_k, b_i); and acquiring new network model parameters from this loss function by the stochastic gradient descent method.
CN201911241559.2A 2019-12-06 2019-12-06 Line characteristic description method based on pseudo-twin network Active CN111091577B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911241559.2A CN111091577B (en) 2019-12-06 2019-12-06 Line characteristic description method based on pseudo-twin network


Publications (2)

Publication Number, Publication Date:
CN111091577A (en): 2020-05-01
CN111091577B (en): 2023-06-23

Family ID: 70396064





Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
CB02: Change of applicant information
    Address after: 211100 floor 3, building 3, Qilin artificial intelligence Industrial Park, 266 Chuangyan Road, Nanjing, Jiangsu
    Applicants after: Zhongke Nanjing artificial intelligence Innovation Research Institute; HENAN POLYTECHNIC University; INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Address before: 211000 3rd floor, building 3, 266 Chuangyan Road, Jiangning District, Nanjing City, Jiangsu Province
    Applicants before: NANJING ARTIFICIAL INTELLIGENCE CHIP INNOVATION INSTITUTE, INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES; HENAN POLYTECHNIC University; INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
GR01: Patent grant