CN109086830A - Typical association analysis based on sample punishment closely repeats video detecting method - Google Patents

Typical association analysis based on sample punishment closely repeats video detecting method Download PDF

Info

Publication number
CN109086830A
CN109086830A CN201810921808.1A
Authority
CN
China
Prior art keywords
video
similarity
sample
matrix
original video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810921808.1A
Other languages
Chinese (zh)
Other versions
CN109086830B (en)
Inventor
沈项军
强娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Tuogu Information Technology Co ltd
Original Assignee
Jiangsu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN201810921808.1A priority Critical patent/CN109086830B/en
Publication of CN109086830A publication Critical patent/CN109086830A/en
Application granted granted Critical
Publication of CN109086830B publication Critical patent/CN109086830B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2136Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a typical association analysis near-duplicate video detection method based on sample punishment. All videos in an original video U and a query video V are read, and the audio and video data features are visualized by a self-similarity method to generate a sample matrix X for the original video U and a sample matrix Y for the query video V. Sample penalty factors αi and βi are added to the sample matrices X and Y, and the improved typical association analysis algorithm then performs the association analysis; Lagrange multipliers λ1 and λ2 and sparse-term parameters λ3 and λ4 are added, converting the problem into an eigenvalue problem whose solutions are v and w. An average frame is defined, and the distance H(fi, fj) between average frames fi and fj is used as a parameter for calculating the similarity between the original video and the query video. The similarities between the original video U and the query video V are calculated to form a similarity vector, from which a similarity matrix is computed. The method effectively solves the duplicate video detection problem in mass storage systems (MSS) and improves the accuracy of near-duplicate video detection.

Description

Typical correlation analysis near-duplicate video detection method based on sample punishment
Technical Field
The invention belongs to the field of video detection, and particularly relates to a typical correlation analysis near-duplicate video detection method based on sample punishment.
Background
With the development of Web 2.0 and community networks, video multimedia has gradually become an application hotspot of the Internet. Users can shoot, edit and upload video clips, so the number of videos has expanded rapidly; the video sharing website YouTube, for example, receives tens of millions of newly uploaded videos per day on average. How to retrieve content of interest to users from these videos is a problem that urgently needs to be solved in video applications. At present, the main video retrieval websites index and retrieve video clips using video tags and related texts, and this method can meet basic retrieval requirements. There are three main levels of near-duplicate video detection methods: video level, frame level and hybrid level.
First, the Multi-Feature Hashing (MFH) method, based on supervised learning, is a typical video-level near-duplicate video detection method. It maps video keyframes to a Hamming space using a plurality of image features and a learned set of hash functions. The method has strong compactness and high storage and retrieval efficiency, but it can easily draw wrong conclusions because of the large information loss.
Second, near-duplicate video at the frame level is determined by comparisons between individual frames or candidate video sequences, using three kinds of features: local, global, and spatio-temporal. With the Scale-Invariant Feature Transform (SIFT) as a local feature, Wu et al. proposed exhaustive matching to measure similarity, but its computational cost is high; later methods group descriptors into bags of words to reduce the matching cost. Among global feature methods, an approach was proposed that encodes a frame into symbols and represents it with a covariance matrix. Although these methods enable efficient online retrieval, the details of the frame are largely lost. Spatio-temporal methods have been developed to improve accuracy and reduce the computational cost of matching by taking the relationships between time frames into account; for example, a video sequence is modeled as a set of w-panning, and an intensity marker (IMark) is introduced to perform sequence matching. These methods have advantages in dealing with temporal transforms, but are weaker against strong spatial transforms.
Third, the hybrid-level near-duplicate video approach applies a hierarchical screening and refinement scheme to cluster and filter out near-duplicate videos; for example, a pattern-based index tree has been proposed to filter non-near-duplicate videos, with m-pattern-based dynamic programming and time-shifted m-pattern similarity used to rank candidate videos. However, this method is only advantageous when a video cannot be clearly classified as anomalous or near-duplicate.
Disclosure of Invention
The invention provides a sample punishment-based typical correlation analysis near-duplicate video detection method according to the defects and shortcomings of the prior art, and aims to solve the problem of duplicate video detection in a mass storage system and improve the accuracy of near-duplicate video detection.
The technical scheme adopted by the invention is as follows: the typical correlation analysis near-duplicate video detection method based on sample punishment comprises the following steps:
step 1, reading all videos of an original video U and a query video V, visualizing the audio and video data features by a self-similarity method, and generating a sample matrix X (D×N) of the original video U and a sample matrix Y (D×M) of the query video V, wherein N and M respectively represent the number of frames in videos U and V, D represents the dimensionality of the videos, and D > N ≥ M;
step 2, adding sample penalty factors αi and βi to the sample matrices X and Y, performing correlation analysis with the improved typical correlation analysis algorithm, adding Lagrange multipliers λ1 and λ2 and sparse-term parameters λ3 and λ4, and converting the problem into an eigenvalue problem to obtain v and w;
step 3, defining the average frame, and taking the distance H(fi, fj) between the average frame fi of the original video and the average frame fj of the query video as a parameter for calculating the similarity between the original video and the query video;
step 4, calculating the similarity between the original video U and the query video V to form a similarity vector S(X, Y): the sample matrices X and Y are each divided evenly into several video segments; video segments Ui and Uj are taken from the segmented videos of matrices X and Y respectively, the average frames fk and fk+n of the video segments are calculated, the similarity Sn is calculated from the average frames fk and fk+n, and the similarity vector S(X, Y) is formed from the similarities calculated between all the video segments;
step 5, calculating the similarity matrix: the minimum value Smin of S(X, Y) is compared with an empirical threshold θ; if Smin is less than the empirical threshold θ, the segments are judged to be similar segments, and dij, recorded in matrix form, records the similarity attribute of the original video U and the query video V: if the original video U and the query video V are similar then dij = 1, otherwise dij = 0, as shown in the following formula:
Further, sample penalty factors αi and βi are added to X and Y, and the improved typical correlation analysis algorithm performs the correlation analysis according to the following formula:
where v and w are respectively the weights of the videos, Xi represents each row of elements of the matrix obtained after processing the original video, and Yj represents each row of elements of the matrix obtained after processing the query video.
DX and DY are discrimination matrices, expressed as:
further, solving the eigenvalue eigenvector problem, the formula is as follows:
wherein λ is the eigenvalue of the characteristic equation to be solved; the Lagrange multipliers λ1 and λ2 are set to 0.01, and the sparse-term parameters λ3 and λ4 are set to 0.1.
Further, the average frame is the average of the corresponding feature values of the K adjacent frames in a unit segment; its calculation formula is as follows:
Further, the distance H(fi, fj) between average frames fi and fj is calculated as follows:
The greater the difference between the average frames fi and fj under the feature values v and w, the greater the calculated distance H between fi and fj.
Further, the similarity between the original video U and the query video V is represented as:
wherein q is the dimension of the vector.
The invention has the following beneficial effects: addressing the defects and shortcomings of the prior art, the invention provides a typical correlation analysis near-duplicate video detection method based on sample punishment, strengthens the original typical correlation analysis algorithm by introducing the sample penalty factor, improves the effect of the typical correlation analysis algorithm, solves the duplicate video detection problem in mass storage systems, and improves the accuracy of near-duplicate video detection.
Drawings
FIG. 1 is a block flow diagram of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, the technical solution adopted by the present invention is as follows:
step 1, reading all videos of an original video U and a query video V, visualizing the audio and video data features by a self-similarity method, and generating sample matrices X (D×N) and Y (D×M) with D > N ≥ M, where N and M respectively represent the number of frames in videos U and V, i.e. each column of the sample matrices X and Y represents one frame as a variable, and D represents the dimensionality of the video, i.e. each row (xi and yi, where i = 1, 2, ..., D) denotes all frame values in the same dimension as a sample.
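The patent's self-similarity feature extraction is given only in its figures; as a sketch of the D×N / D×M data layout alone (the random feature vectors below are stand-ins for real per-frame features), the sample matrices of step 1 can be assembled as:

```python
import numpy as np

def build_sample_matrix(frame_features):
    """Stack per-frame feature vectors column-wise: each column is one frame
    (a variable), each row is one feature dimension across all frames (a
    sample), matching the D x N layout described in step 1."""
    return np.column_stack(frame_features)  # shape (D, number of frames)

rng = np.random.default_rng(0)
D, N, M = 8, 5, 4                                            # toy sizes with D > N >= M
X = build_sample_matrix([rng.random(D) for _ in range(N)])   # original video U
Y = build_sample_matrix([rng.random(D) for _ in range(M)])   # query video V
print(X.shape, Y.shape)                                      # (8, 5) (8, 4)
```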
step 2, adding sample penalty factors αi and βi to the sample matrices X and Y, and performing correlation analysis with the improved typical correlation analysis algorithm, the formula being as follows:
where v and w are the weights of the videos, I is the identity matrix, αi and βi are the penalty factors, and DX and DY are discrimination matrices. The discrimination matrices DX and DY are constructed according to the structure of the Laplacian matrix: they are diagonal matrices whose diagonal elements are the penalty factors αi and βi, expressed as follows:
adding Lagrange multiplier lambda to equation (1)1And λ2And two sparse term parameters λ3And λ4The method can be converted into a problem of solving characteristic value characteristic vectors, the characteristic values of the original video U and the query video V are obtained as V and w, and the formula is as follows:
wherein the Lagrange multipliers λ1 and λ2 are set to 0.01 and the sparse-term parameters λ3 and λ4 are set to 0.1.
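Equation (1) and the eigen-system appear only as figures in the patent, so the exact penalized objective cannot be reproduced here. As a minimal sketch under stated assumptions, a standard ridge-regularized canonical correlation analysis — with the ridge terms standing in for the Lagrange and sparse terms, and without the per-sample penalties αi, βi — reduces to an eigenvalue problem as follows:

```python
import numpy as np

def regularized_cca(X, Y, lam1=0.01, lam2=0.01):
    """Minimal regularized CCA solved as an eigenvalue problem.
    X: (n, p) and Y: (n, q) share the sample count n; lam1/lam2 play the
    role of ridge regularizers.  Returns the top eigenvalue (the squared
    canonical correlation) and the projection weights v and w."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxx = Xc.T @ Xc + lam1 * np.eye(X.shape[1])   # regularized covariances
    Syy = Yc.T @ Yc + lam2 * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc
    # Eigenvalue problem: Sxx^-1 Sxy Syy^-1 Syx v = rho^2 v
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    vals, vecs = np.linalg.eig(M)
    i = int(np.argmax(vals.real))
    v = vecs[:, i].real
    w = np.linalg.solve(Syy, Sxy.T @ v)           # paired weights for Y
    return vals[i].real, v / np.linalg.norm(v), w / np.linalg.norm(w)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))
Y = X @ rng.standard_normal((4, 3)) + 0.01 * rng.standard_normal((200, 3))
rho2, v, w = regularized_cca(X, Y)
print(round(rho2, 3))   # close to 1 for strongly correlated data
```

With small λ values (as in the patent's 0.01 setting) the regularization barely perturbs the solution; larger values trade correlation strength for stability.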
Step 3: in the invention, the average frames are used as the video key-frame features, and the distance between the average frames is calculated.
The specific process is as follows:
① The average frame is the average of the corresponding feature values of the K adjacent frames in a unit segment; its calculation formula is as follows:
② The distance H(fi, fj) between average frames fi and fj is defined as a parameter for calculating the similarity between the original video and the query video; its calculation formula is as follows:
wherein n = 1, 2, 3, ..., N and m = 1, 2, 3, ..., M.
Analysis of the distance between average frames shows that the greater the difference between the average frames fi and fj under the feature values v and w, the greater the calculated distance H between fi and fj.
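The average-frame definition above (item ①) can be sketched directly; the averaging formula itself appears only in the patent figure, so this is the stated definition — the element-wise mean of K adjacent frames' feature values — and nothing more:

```python
import numpy as np

def average_frame(unit_segment):
    """Average frame of a unit segment: the element-wise mean of the feature
    values of its K adjacent frames (rows are frames, columns are feature
    dimensions)."""
    return np.asarray(unit_segment, dtype=float).mean(axis=0)

seg = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # K = 3 frames, 2 feature dims
print(average_frame(seg))                     # [3. 4.]
```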
Step 4: the similarity between the original video U and the query video V is calculated by taking q average frames from each, i.e. S(X, Y) = (S0, S1, ..., Sq-1).
Some notation first: fk and fk+n respectively represent average frames of the original video segments Ui and Uj. Since a video segment Ui is formed by concatenating two consecutive video unit segments Uvi and Uvi+1, the video segment Ui has 2q average frames, expressed as (f1, f2, ..., f2q) and (f'1, f'2, ..., f'2q); the similarity is calculated as follows:
The analysis shows that the greater the distance H between the average frames fi and fj, the greater the calculated Sn.
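The formulas for H and Sn are given only in the patent figures. As a hedged sketch, taking H as the Euclidean distance between the v- and w-weighted average frames preserves the stated monotonicity (larger frame difference → larger H → larger Sn), and the similarity vector S(X, Y) is then one such entry per average-frame pair:

```python
import numpy as np

def similarity_vector(avg_frames_u, avg_frames_v, v, w):
    """Sketch of S(X, Y) = (S_0, ..., S_{q-1}).  Assumption: the distance H
    is the Euclidean distance between the v-weighted and w-weighted average
    frames; the patent's exact formula is in its figures.  Smaller entries
    mean more-similar segments (matching step 5's min-threshold test)."""
    return [float(np.linalg.norm(v * fu - w * fv))
            for fu, fv in zip(avg_frames_u, avg_frames_v)]

v = w = np.ones(2)                                   # toy projection weights
u_frames = [np.array([1.0, 1.0]), np.array([2.0, 2.0])]
v_frames = [np.array([1.0, 1.0]), np.array([5.0, 6.0])]
S = similarity_vector(u_frames, v_frames, v, w)
print(S)   # [0.0, 5.0]  (an identical frame pair scores 0)
```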
Step 5: the similarity matrix is calculated. From the foregoing, the greater the difference between the video frame sequences, the greater the calculated similarity Sn — that is, the greater S(X, Y), the more dissimilar the original video U and the query video V. To allow for error, the minimum value Smin of S(X, Y) is compared with an empirical threshold θ; if Smin is less than the empirical threshold θ, the segments are judged to be similar segments. In the invention, dij, recorded in matrix form, records the similarity attribute of the original video U and the query video V: if the original video U and the query video V are similar then dij = 1, otherwise dij = 0, as shown in the following formula:
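Step 5's threshold test can be sketched as follows; the S vectors and θ below are illustrative values only, not values from the patent:

```python
import numpy as np

def duplicate_flag(S, theta):
    """Step 5: compare the minimum entry S_min of S(X, Y) with the empirical
    threshold theta; d_ij = 1 (similar) if S_min < theta, else 0."""
    return 1 if min(S) < theta else 0

# Similarity matrix over a toy corpus of 2 original x 2 query videos.
S_vectors = {(0, 0): [0.2, 0.9], (0, 1): [1.4, 2.0],
             (1, 0): [0.8, 0.6], (1, 1): [3.0, 2.5]}
theta = 0.7
D = np.array([[duplicate_flag(S_vectors[(i, j)], theta) for j in range(2)]
              for i in range(2)])
print(D)   # [[1 0]
           #  [1 0]]
```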
in order to evaluate the performance of the exemplary correlation analysis near-duplicate video detection method based on sample penalty, the method is compared with the exemplary correlation analysis retrieval method which is not improved.
Since the retrieval results contain both relevant and irrelevant videos, how to evaluate the effectiveness of the method is an important question. To measure detection accuracy, the invention uses a Precision-Recall (PR) curve, where:
in addition, average accuracy (MAP) was used to check the effectiveness of near-duplicate video detection. For comparison of computational efficiency, an average processing Time (MP-Time) is used, which is the average Time (unit: seconds) for processing queries for generating search and location results.
Table 1: Partial intermediate results
Recall and precision are calculated for the data in Table 1 to determine whether there is duplication between the original video U and the query video V and to find the optimal empirical threshold.
The above embodiments are only used for illustrating the design idea and features of the present invention, and the purpose of the present invention is to enable those skilled in the art to understand the content of the present invention and implement the present invention accordingly, and the protection scope of the present invention is not limited to the above embodiments. Therefore, all equivalent changes and modifications made in accordance with the principles and concepts disclosed herein are intended to be included within the scope of the present invention.

Claims (8)

1. A typical correlation analysis near-duplicate video detection method based on sample penalty is characterized by comprising the following steps:
step 1, reading all videos in an original video U and a query video V, visualizing the audio and video data features by a self-similarity method, and generating a sample matrix X (D×N) of the original video U and a sample matrix Y (D×M) of the query video V, wherein N and M respectively represent the number of frames in videos U and V, D represents the dimensionality of the videos, and D > N ≥ M;
step 2, adding sample penalty factors αi and βi to the sample matrices X and Y, performing correlation analysis with the improved typical correlation analysis algorithm, adding Lagrange multipliers λ1 and λ2 and sparse-term parameters λ3 and λ4, and converting the problem into an eigenvalue problem to obtain v and w;
step 3, defining the average frame, and taking the distance H(fi, fj) between the average frame fi of the original video and the average frame fj of the query video as a parameter for calculating the similarity between the original video and the query video;
step 4, calculating the similarity between the original video U and the query video V to form a similarity vector S (X, Y);
and 5, calculating a similarity matrix according to the similarity vector.
2. The method according to claim 1, wherein sample penalty factors αi and βi are added to X and Y, and the improved typical correlation analysis algorithm performs the correlation analysis according to the following formula:
where v and w are respectively the weights of the videos, Xi represents each row of elements of the matrix obtained after processing the original video, Yj represents each row of elements of the matrix obtained after processing the query video, and DX and DY are discrimination matrices, expressed as:
3. the method according to claim 1, wherein the eigenvalue eigenvector problem is solved by the following formula:
wherein λ is the eigenvalue of the characteristic equation to be solved, I is the identity matrix, λ1 and λ2 are Lagrange multipliers, and λ3 and λ4 are sparse-term parameters.
4. The method according to claim 3, wherein the average frame is an average value of corresponding feature values of adjacent K frames in a unit segment, and the average frame calculation formula is as follows:
5. The method according to claim 3, wherein the distance H(fi, fj) between average frames fi and fj is calculated as follows:
wherein n = 1, 2, 3, ..., N and m = 1, 2, 3, ..., M.
6. The method according to claim 1, wherein the similarity between the original video U and the query video V is calculated as follows: the sample matrices X and Y are each divided evenly into several video segments; video segments Ui and Uj are taken from the segmented videos of matrices X and Y respectively; the average frames fk and fk+n of the video segments are calculated; the similarity Sn is calculated from the average frames fk and fk+n; and a similarity vector S(X, Y) is constructed from the similarities calculated between all the video segments.
7. The method according to claim 6, wherein the similarity between the original video U and the query video V is represented as:
wherein q is the dimension of the vector.
8. The method according to claim 1, wherein the minimum value Smin of the similarity vector S(X, Y) is compared with an empirical threshold θ, and dij is recorded in matrix form to record the similarity attribute of the original video U and the query video V: if Smin is less than the empirical threshold θ, the video segments are judged to be similar segments; if the original video U and the query video V are similar then dij = 1, otherwise dij = 0, as shown in the following formula:
CN201810921808.1A 2018-08-14 2018-08-14 Typical correlation analysis near-duplicate video detection method based on sample punishment Active CN109086830B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810921808.1A CN109086830B (en) 2018-08-14 2018-08-14 Typical correlation analysis near-duplicate video detection method based on sample punishment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810921808.1A CN109086830B (en) 2018-08-14 2018-08-14 Typical correlation analysis near-duplicate video detection method based on sample punishment

Publications (2)

Publication Number Publication Date
CN109086830A true CN109086830A (en) 2018-12-25
CN109086830B CN109086830B (en) 2021-09-10

Family

ID=64834656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810921808.1A Active CN109086830B (en) 2018-08-14 2018-08-14 Typical correlation analysis near-duplicate video detection method based on sample punishment

Country Status (1)

Country Link
CN (1) CN109086830B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532405A (en) * 2019-08-15 2019-12-03 咪咕文化科技有限公司 Multimedia file processing method, communication device and computer readable storage medium
CN110826475A (en) * 2019-11-01 2020-02-21 北京齐尔布莱特科技有限公司 Method and device for detecting near-duplicate video and computing equipment
CN112288047A (en) * 2020-12-25 2021-01-29 成都索贝数码科技股份有限公司 Broadcast television news stripping method based on probability distribution transformation clustering
CN113283351A (en) * 2021-05-31 2021-08-20 深圳神目信息技术有限公司 Video plagiarism detection method using CNN to optimize similarity matrix

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216885A (en) * 2008-01-04 2008-07-09 中山大学 Passerby face detection and tracing algorithm based on video
CN102779184A (en) * 2012-06-29 2012-11-14 中国科学院自动化研究所 Automatic positioning method of approximately repeated video clips
CN102945375A (en) * 2012-11-20 2013-02-27 天津理工大学 Multi-view monitoring video behavior detection and recognition method under multiple constraints
CN103617233A (en) * 2013-11-26 2014-03-05 烟台中科网络技术研究所 Method and device for detecting repeated video based on semantic content multilayer expression

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216885A (en) * 2008-01-04 2008-07-09 中山大学 Passerby face detection and tracing algorithm based on video
CN102779184A (en) * 2012-06-29 2012-11-14 中国科学院自动化研究所 Automatic positioning method of approximately repeated video clips
CN102945375A (en) * 2012-11-20 2013-02-27 天津理工大学 Multi-view monitoring video behavior detection and recognition method under multiple constraints
CN103617233A (en) * 2013-11-26 2014-03-05 烟台中科网络技术研究所 Method and device for detecting repeated video based on semantic content multilayer expression

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
欧阳书馨: "Sketch-oriented heterogeneous face recognition" (面向素描的异质人脸识别), China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532405A (en) * 2019-08-15 2019-12-03 咪咕文化科技有限公司 Multimedia file processing method, communication device and computer readable storage medium
CN110532405B (en) * 2019-08-15 2023-01-10 咪咕文化科技有限公司 Multimedia file processing method, communication device and computer readable storage medium
CN110826475A (en) * 2019-11-01 2020-02-21 北京齐尔布莱特科技有限公司 Method and device for detecting near-duplicate video and computing equipment
CN112288047A (en) * 2020-12-25 2021-01-29 成都索贝数码科技股份有限公司 Broadcast television news stripping method based on probability distribution transformation clustering
CN112288047B (en) * 2020-12-25 2021-04-09 成都索贝数码科技股份有限公司 Broadcast television news stripping method based on probability distribution transformation clustering
CN113283351A (en) * 2021-05-31 2021-08-20 深圳神目信息技术有限公司 Video plagiarism detection method using CNN to optimize similarity matrix
CN113283351B (en) * 2021-05-31 2024-02-06 深圳神目信息技术有限公司 Video plagiarism detection method using CNN optimization similarity matrix

Also Published As

Publication number Publication date
CN109086830B (en) 2021-09-10

Similar Documents

Publication Publication Date Title
CN111797321B (en) Personalized knowledge recommendation method and system for different scenes
CN105912611B (en) A kind of fast image retrieval method based on CNN
CN109086830B (en) Typical correlation analysis near-duplicate video detection method based on sample punishment
CN103608826B (en) Annotated using product in the video of Web information mining
CN111506773B (en) Video duplicate removal method based on unsupervised depth twin network
CN111382283B (en) Resource category label labeling method and device, computer equipment and storage medium
US7142602B2 (en) Method for segmenting 3D objects from compressed videos
Ulutas et al. Frame duplication detection based on bow model
Zhou et al. Online video recommendation in sharing community
CN107750015A (en) Detection method, device, storage medium and the equipment of video copy
CN111182364B (en) Short video copyright detection method and system
CN104573130A (en) Entity resolution method based on group calculation and entity resolution device based on group calculation
Varna et al. Modeling and analysis of correlated binary fingerprints for content identification
Wang et al. Aspect-ratio-preserving multi-patch image aesthetics score prediction
CN111639230B (en) Similar video screening method, device, equipment and storage medium
Fei et al. New fusional framework combining sparse selection and clustering for key frame extraction
Wang et al. Attention-based deep metric learning for near-duplicate video retrieval
TWI452477B (en) Multi-label text categorization based on fuzzy similarity and k nearest neighbors
Sun et al. Feature fusion and clustering for key frame extraction
Nie et al. Robust video hashing based on representative-dispersive frames
Yang et al. Keyframe recommendation based on feature intercross and fusion
Mallick et al. Video retrieval using salient foreground region of motion vector based extracted keyframes and spatial pyramid matching
Dhoot et al. Efficient Dimensionality Reduction for Big Data Using Clustering Technique
CN115082854A (en) Pedestrian searching method oriented to security monitoring video
Zhao et al. Star-gnn: spatial-temporal video representation for content-based retrieval

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220902

Address after: Room 201, South 2nd Floor, Administrative Committee of Chaohu Economic Development Zone, Anhui Province, 238014

Patentee after: Hefei Tuogu Information Technology Co.,Ltd.

Address before: Zhenjiang City, Jiangsu Province, 212013 Jingkou District Road No. 301

Patentee before: JIANGSU University

TR01 Transfer of patent right