CN110769259A - Image data compression method for tracking track content of video target - Google Patents
- Publication number
- CN110769259A (application CN201911073045.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- cluster
- content
- frame
- data compression
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Signal Processing (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention discloses an image data compression method for the content of a video target tracking track, and relates to the technical field of image and video processing. The method comprises the following steps: acquiring each frame image of the target track and setting the number of data compression outputs; clustering all the images according to their time information, the number of clusters being the data compression output number; iteratively updating the clusters until the difference between the mean square errors of two successive iterations is less than a given threshold; and taking the image closest to each cluster center to form a new target track. Because the output number of the data compression can be specified in advance, and compression is then performed in the time dimension or the content-feature dimension, the compressed image data effectively retains representative features while the data volume is greatly reduced.
Description
Technical Field
The invention relates to the technical field of image and video processing, in particular to an image data compression method for tracking track content of a video target.
Background
In the field of video tracking analysis, in order to reduce the impact of video storage on hardware storage space and the cost of analysing massive data, targets of interest in the video are first detected, the detected targets are then matched into tracks, the track-matched data are sorted and stored, and the image content information is stored in a database so that it can later be queried by the time at which the target data was detected.
Even after track tracking is completed, a single matched track still contains a large amount of detected target data, which is unfavourable for subsequent data query and statistical analysis. Because the features of an object in the video (a vehicle, a pedestrian, etc.) change only slightly in many scenes, there is great information redundancy. How to compress and store the track-linked image result data scientifically and appropriately is the key to solving these problems.
Analysis shows that a normally moving target usually appears in a relatively simple scene: the image features change only mildly over the whole movement, and the corresponding feature variation amplitude is small. Some targets move through the field of view at a nearly constant speed, so that time and features change continuously throughout; how to effectively extract features for compression is therefore one of the problems to be solved. On the other hand, conventional compression methods cannot specify the number of image frames after compression, so considerable data often remains after compression.
Disclosure of Invention
The invention aims to provide an image data compression method for video target tracking track content, which can pre-specify the output number of data compression and then carry out data compression on a time dimension or a content characteristic dimension, thereby effectively retaining the characteristic representativeness in the compressed image data and greatly reducing the data volume.
In order to achieve the purpose, the invention provides the following technical scheme:
a method for compressing image data of video target tracking track content is characterized by comprising the following steps:
s1, acquiring each frame of image of the target track, and setting the number of data compression outputs;
s2, clustering all the images according to time information, wherein the clustering number is the data compression output number;
s3, iteratively updating the clusters until the difference between the mean square errors of two successive iterations is less than a given threshold;
and S4, taking the image closest to the cluster center in each cluster to form a new target track.
Further, the step S1 includes a pre-determination step, and if the number of frames of the image of the target track is not greater than the number of compressed data outputs, the target track is directly output.
Further, the clustering method in S2 is as follows:
extracting the time information of each frame image and storing the time information into a Timestamps sequence, setting the number N of clustering centers of the clusters to be equal to the number of data compression outputs, and randomly extracting N pieces of time information from the Timestamps sequence to give initial values to the N clustering centers.
Further, the specific content of the iteratively updated cluster in S3 is as follows:
s31, calculating the time distance dist between each piece of time information in the Timestamps sequence and the N cluster centers: dist = |Timestamps[i] - centers[n]|; wherein Timestamps[i] is the time information of the ith frame image in the Timestamps sequence and centers[n] is the value of the nth cluster center;
s32, according to the minimum-distance principle, classifying Timestamps[i] into the cluster whose center gives the minimum distance dist, adding it to that cluster as a member;
s33, carrying out weighted average on the time information of the cluster members, and calculating a new cluster center of the cluster;
s34, solving the mean square error between each cluster member and the new cluster center;
and S35, calculating the difference between the current mean square error and the previous mean square error; if the difference is greater than a given threshold EndThreh, returning to S31 for another iteration, otherwise jumping to S4.
Further, the threshold value EndThreh is 10⁻⁵.
A method for compressing image data of video target tracking track content is characterized by comprising the following steps:
s1, acquiring each frame of image of the target track, and setting the number of data compression outputs;
s2, extracting the content characteristics of each frame image;
s3, iteratively calculating the similarity of the content features of two adjacent frame images; when the similarity is greater than a given threshold, deleting the later frame of the two and then calculating the similarity of the content features of the frame following it and the earlier frame;
s4, recording the similarities of the content features of every two adjacent remaining frame images;
s5, calculating the difference degree of the content characteristics of two adjacent frames of images;
and S6, constructing a new track from the images corresponding to the topN data with the maximum difference degree.
Further, the step S1 includes a pre-determination step, and if the number of frames of the image of the target track is not greater than the number of compressed data outputs, the target track is directly output.
Further, the content features are color histograms or abstract features extracted by adopting a deep convolutional neural network.
Further, the specific content of S3 is as follows: calculating the similarity of the content features FeaBudgets[i] and FeaBudgets[i+1] of two adjacent frame images, wherein FeaBudgets[i] is the content feature of the ith frame image; if the similarity is greater than a given threshold SimiThr, deleting the (i+1)th frame image and then calculating the similarity of FeaBudgets[i] and FeaBudgets[i+2]; otherwise, calculating the similarity of FeaBudgets[i+1] and FeaBudgets[i+2].
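The adjacent-frame filtering described above can be sketched as follows. The patent does not fix a similarity measure, so cosine similarity between content-feature vectors (e.g. colour histograms) is assumed here for illustration; the function names are illustrative, not taken from the patent.

```python
# A sketch of the adjacent-frame filtering in S3. Cosine similarity is an
# assumed measure; the keep-first policy follows the description.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def filter_similar_frames(features, simi_thr=0.85):
    """Delete every frame too similar to the last retained frame;
    return the indices of the retained frames."""
    if not features:
        return []
    kept = [0]  # the first frame is always retained
    for i in range(1, len(features)):
        if cosine_similarity(features[kept[-1]], features[i]) > simi_thr:
            continue  # the later frame changed too little: delete it
        kept.append(i)
    return kept

feats = [[1.0, 0.0], [0.99, 0.01], [0.0, 1.0]]
print(filter_similar_frames(feats))  # → [0, 2]
```

Note that after a deletion the comparison base stays at the last retained frame, so a long run of near-identical frames collapses to its first member.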
Compared with the prior art, the invention has the beneficial effects that: the invention can pre-specify the data compression output number, so that any target track is compressed to the length expected by a user. Then, according to the distribution of the tracking target in the time dimension, referring to the given output number, iteratively updating the clustering center to obtain a data compression result in the time dimension; or, starting from the content characteristic dimension, filtering out images with unobvious change of image content characteristics, and then extracting the first N images with the maximum difference degree to form a new track, thereby achieving the purpose of data compression. The invention not only ensures the final data volume of data compression and greatly reduces the data volume, but also effectively retains the characteristic representativeness in the compressed image data.
Drawings
Fig. 1 is a flowchart of a method according to a first embodiment of the present invention.
FIG. 2 is a flowchart of a method according to a second embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The first embodiment is as follows:
referring to fig. 1, the present invention provides a method for compressing image data of a tracking track content of a video object, comprising the following steps:
s1, acquiring each frame image of the target track to construct an image set, and setting the data compression output number N;
This step further comprises a pre-judgement: if the number of image frames of the target track is not greater than the data compression output number N, the target track is output directly without data compression; otherwise, proceed to S2.
And S2, clustering all the images according to the time information, wherein the clustering number is the data compression output number.
Specifically, the time information of each frame image in the image set is extracted and stored in the Timestamps sequence. The number N of cluster centers is set equal to the data compression output number, and the initial clusters clusters[n] (1 ≤ n ≤ N) have no members. N pieces of time information are randomly extracted from the Timestamps sequence and assigned to the cluster centers as initial values; preferably, the first N time information entries in the Timestamps sequence are used.
And S3, iteratively updating the clusters until the difference between the mean square errors of two successive iterations is less than a given threshold. The specific steps are as follows:
S31, calculating the time distance dist between each piece of time information in the Timestamps sequence and the N cluster centers: dist = |Timestamps[i] - centers[n]|; wherein Timestamps[i] is the time information of the ith frame image in the Timestamps sequence and centers[n] is the value of the nth cluster center;
S32, according to the minimum-distance principle, classifying Timestamps[i] into the cluster whose center gives the minimum distance dist, adding it to that cluster as a member, until all time information in the Timestamps sequence is classified;
S33, for each cluster clusters[n], carrying out a weighted average of the time information of its cluster members and calculating the new cluster center centers'[n]:

centers'[n] = (1/K_n) · Σ_{k=1..K_n} Timestamps[n,k]

wherein Timestamps[n,k] is the time information of the kth cluster member of clusters[n] and K_n is the number of members of that cluster. In this way the cluster centers of all N clusters are updated.
S34, solving the mean square error between each cluster member and the new cluster center;
wherein, MSErr (j) is the mean square error of the current iteration, and j is the iteration frequency; when j is 0, the mean square error obtained from the initial value of the cluster center centers is represented.
S35, calculating the difference between the current MSErr (j) and the last MSErr (j-1), if the difference is larger than a given threshold EndChreh, the threshold is preferably EndChreh 10-5. Returning to S31 to carry out iterative operation, and the iterative times j are progressive, otherwise, the clustering result is stable, and outputting clustersnAnd jumps to S4.
And S4, according to the nearest-time principle, the image whose time information is closest to the cluster center is taken from each cluster; these images are arranged in time order to form a new target track of N frames, thereby realizing the compression of the image data.
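The steps of this embodiment can be sketched as a one-dimensional k-means over frame timestamps. This is a minimal illustration assuming scalar timestamps and the preferred "first N timestamps" initialisation; the function and variable names are illustrative, not from the patent.

```python
# A sketch of embodiment one: cluster timestamps (S2–S3), then keep the
# frame nearest each cluster centre (S4).

def compress_by_time(timestamps, n_out, end_thresh=1e-5):
    """Return the indices of the frames kept after temporal clustering."""
    # Pre-judgement (S1): nothing to compress if the track is already short.
    if len(timestamps) <= n_out:
        return list(range(len(timestamps)))
    # S2: initialise the N cluster centres (preferred: first N timestamps).
    centers = list(timestamps[:n_out])
    prev_mse = None
    while True:
        # S31/S32: assign every timestamp to its nearest cluster centre.
        clusters = [[] for _ in range(n_out)]
        for i, t in enumerate(timestamps):
            n = min(range(n_out), key=lambda c: abs(t - centers[c]))
            clusters[n].append(i)
        # S33: new centre = average of the members' timestamps.
        for n, members in enumerate(clusters):
            if members:
                centers[n] = sum(timestamps[i] for i in members) / len(members)
        # S34: mean square error of members against the new centres.
        mse = sum((timestamps[i] - centers[n]) ** 2
                  for n, ms in enumerate(clusters) for i in ms) / len(timestamps)
        # S35: stop once the MSE change drops to EndThreh or below.
        if prev_mse is not None and abs(prev_mse - mse) <= end_thresh:
            break
        prev_mse = mse
    # S4: per cluster, keep the frame whose timestamp is nearest the centre,
    # then arrange the kept frames in time order.
    return sorted(min(ms, key=lambda i: abs(timestamps[i] - centers[n]))
                  for n, ms in enumerate(clusters) if ms)
```

For example, a nine-frame track whose timestamps group around three moments is compressed to three representative frames, one per temporal cluster.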
Example two:
referring to fig. 2, the present invention provides a method for compressing image data of a tracking track content of a video target, comprising the following steps:
s1, acquiring each frame image of the target track to construct an image set, and setting the data compression output number N;
This step further comprises a pre-judgement: if the number of image frames of the target track is not greater than the data compression output number N, the target track is output directly without data compression; otherwise, proceed to S2.
And S2, extracting the content feature of each frame image in the image set in time order and placing it into the FeaBudgets sequence. Preferably, the content feature is a color histogram or an abstract feature extracted by a deep convolutional neural network.
And S3, iteratively calculating the similarity of the content features FeaBudgets[i] and FeaBudgets[i+1] of two adjacent frame images in the FeaBudgets sequence, wherein FeaBudgets[i] is the content feature of the ith frame image and FeaBudgets[i+1] is that of the (i+1)th frame image. When the similarity is greater than a given threshold SimiThr (preferably SimiThr = 0.85), the (i+1)th frame has changed little compared with the ith frame and is not a frame with the distinctive features we require, so the later frame of the pair, i.e. the (i+1)th frame image, is deleted. The subsequent images then move forward one position, i.e. the similarity of the content features of the next frame image (the (i+2)th frame) and the earlier frame image (the ith frame) is calculated;
And if the similarity is less than or equal to the given threshold SimiThr, the (i+1)th frame image is considered to have changed significantly compared with the ith frame image, so both images are retained; then let i = i+1 and repeat the above steps, calculating the similarity of FeaBudgets[i+1] and FeaBudgets[i+2], until all similarity calculations are completed and the images with small content-feature variation have been deleted from the original track.
At this time, if the number of frames of the remaining images is not greater than the data compression output number N, the data compression does not need to be continued, and the remaining images are directly output to form a new target trajectory, otherwise, S4 is performed.
S4, recording the similarity of the content features of every two adjacent frame images among the remaining images to form a similarity set FeaBudgetsBetter;
S5, calculating the difference degree FeaGrad[i] of the content features of two adjacent frame images; specifically, FeaGrad[i] = 1 - FeaBudgetsBetter[i], wherein FeaBudgetsBetter[i] is the ith similarity in the set FeaBudgetsBetter and FeaGrad[i] is the corresponding difference degree.
And S6, taking topN data with the maximum difference degree, and forming a new track by the corresponding image.
Specifically, the FeaGrad[i] values are sorted by size and the topN largest difference degrees are taken; each difference degree corresponds to a pair of adjacent frame images. For each of the topN largest difference degrees, the earlier of the two adjacent frame images is taken, and the selected images are arranged in time order to form the new target track, thereby achieving the compression of the image data.
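Steps S4 to S6 can be sketched as follows: turn the recorded adjacent-frame similarities into difference degrees FeaGrad[i] = 1 - FeaBudgetsBetter[i], then keep the earlier frame of each of the topN most different pairs, in time order. The function name is illustrative, not from the patent.

```python
# A sketch of steps S4–S6 of embodiment two: select the topN frames that
# precede the largest content changes.

def pick_topn_by_difference(similarities, top_n):
    """similarities[i] compares retained frame i with retained frame i+1;
    the returned indices are the earlier frames of the topN pairs."""
    fea_grad = [1.0 - s for s in similarities]  # S5: difference degrees
    # S6: indices of the topN largest difference degrees.
    ranked = sorted(range(len(fea_grad)),
                    key=lambda i: fea_grad[i], reverse=True)[:top_n]
    return sorted(ranked)  # arrange the kept frames in time order

sims = [0.90, 0.40, 0.70, 0.20, 0.95]  # pairs (0,1), (1,2), ..., (4,5)
print(pick_topn_by_difference(sims, 2))  # → [1, 3]
```

Frames 1 and 3 are kept here because the pairs (1,2) and (3,4) have the lowest similarities, i.e. the largest difference degrees 0.6 and 0.8.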
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Claims (9)
1. A method for compressing image data of video target tracking track content is characterized by comprising the following steps:
s1, acquiring each frame of image of the target track, and setting the number of data compression outputs;
s2, clustering all the images according to time information, wherein the clustering number is the data compression output number;
s3, iteratively updating the clusters until the difference between the mean square errors of two successive iterations is less than a given threshold;
and S4, taking the image closest to the cluster center in each cluster to form a new target track.
2. The method according to claim 1, wherein said S1 further comprises a pre-determination step, and if the number of frames of the target track is not greater than the number of compressed data outputs, the target track is directly output.
3. The method of claim 1, wherein the clustering method in S2 is as follows:
extracting the time information of each frame image and storing the time information into a Timestamps sequence, setting the number N of clustering centers of the clusters to be equal to the number of data compression outputs, and randomly extracting N pieces of time information from the Timestamps sequence to give initial values to the N clustering centers.
4. The method according to claim 3, wherein the specific content of the iteratively updated cluster in S3 is as follows:
s31, calculating the time distance dist between each piece of time information in the Timestamps sequence and the N cluster centers: dist = |Timestamps[i] - centers[n]|; wherein Timestamps[i] is the time information of the ith frame image in the Timestamps sequence and centers[n] is the value of the nth cluster center;
s32, according to the minimum-distance principle, classifying Timestamps[i] into the cluster whose center gives the minimum distance dist, adding it to that cluster as a member;
s33, carrying out weighted average on the time information of the cluster members, and calculating a new cluster center of the cluster;
s34, solving the mean square error between each cluster member and the new cluster center;
and S35, calculating the difference between the current mean square error and the previous mean square error; if the difference is greater than a given threshold EndThreh, returning to S31 for another iteration, otherwise jumping to S4.
5. The method as claimed in claim 4, wherein the threshold EndThreh is set as 10⁻⁵.
6. A method for compressing image data of video target tracking track content is characterized by comprising the following steps:
s1, acquiring each frame of image of the target track, and setting the number of data compression outputs;
s2, extracting the content characteristics of each frame image;
s3, iteratively calculating the similarity of the content features of two adjacent frame images; when the similarity is greater than a given threshold, deleting the later frame of the two and then calculating the similarity of the content features of the frame following it and the earlier frame;
s4, recording the similarities of the content features of every two adjacent remaining frame images;
s5, calculating the difference degree of the content characteristics of two adjacent frames of images;
and S6, constructing a new track from the images corresponding to the topN data with the maximum difference degree.
7. The method as claimed in claim 6, wherein said S1 further comprises a pre-determining step, and if the number of frames of the target track is not greater than the number of compressed data outputs, directly outputting the target track.
8. The method of claim 6, wherein the content features are color histograms or abstract features extracted using a deep convolutional neural network.
9. The method according to claim 6, wherein the details of S3 are as follows: calculating the similarity of the content features FeaBudgets[i] and FeaBudgets[i+1] of two adjacent frame images, wherein FeaBudgets[i] is the content feature of the ith frame image; if the similarity is greater than a given threshold SimiThr, deleting the (i+1)th frame image and then calculating the similarity of FeaBudgets[i] and FeaBudgets[i+2]; otherwise, calculating the similarity of FeaBudgets[i+1] and FeaBudgets[i+2].
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911073045.0A CN110769259A (en) | 2019-11-05 | 2019-11-05 | Image data compression method for tracking track content of video target |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110769259A true CN110769259A (en) | 2020-02-07 |
Family
ID=69336322
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911073045.0A Pending CN110769259A (en) | 2019-11-05 | 2019-11-05 | Image data compression method for tracking track content of video target |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110769259A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111314708A (en) * | 2020-02-25 | 2020-06-19 | 腾讯科技(深圳)有限公司 | Image data compression method and device, storage medium and electronic equipment |
CN113095397A (en) * | 2021-04-03 | 2021-07-09 | 国家计算机网络与信息安全管理中心 | Image data compression method based on hierarchical clustering method |
CN113596401A (en) * | 2021-07-29 | 2021-11-02 | 上海应用技术大学 | Image de-similarity transmission and restoration method based on ORB similarity judgment |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106127807A (en) * | 2016-06-21 | 2016-11-16 | 中国石油大学(华东) | A kind of real-time video multiclass multi-object tracking method |
CN106851437A (en) * | 2017-01-17 | 2017-06-13 | 南通同洲电子有限责任公司 | A kind of method for extracting video frequency abstract |
CN107273510A (en) * | 2017-06-20 | 2017-10-20 | 广东欧珀移动通信有限公司 | Photo recommends method and Related product |
CN107454454A (en) * | 2017-08-30 | 2017-12-08 | 微鲸科技有限公司 | Method for information display and device |
CN107590419A (en) * | 2016-07-07 | 2018-01-16 | 北京新岸线网络技术有限公司 | Camera lens extraction method of key frame and device in video analysis |
KR20180096096A (en) * | 2017-02-20 | 2018-08-29 | 한국해양과학기술원 | Coastline monitoring apparatus and method using ocean color image |
CN109858406A (en) * | 2019-01-17 | 2019-06-07 | 西北大学 | A kind of extraction method of key frame based on artis information |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200207 |