CN102129689A - Method for modeling background based on camera response function in automatic gain scene - Google Patents
- Publication number: CN102129689A (application CN201110044805.2A)
- Authority: CN (China)
- Prior art keywords: gain, frame, background, gray, function
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Studio Devices (AREA)
Abstract
Description
Claims (4)
- 1. A background modeling method based on the camera response function under an automatic-gain scene, characterized in that, in background-subtraction motion detection under a camera automatic-gain scene, the reference background frame tracks the variation of the camera gain coefficient in real time, so that a reference background frame with the same gain coefficient as the current frame is obtained, comprising the following steps:
  1) Through gradual-change analysis of the automatic gain, establish an objective function and set a critical false-detection threshold for the automatic gain. Taking the gray-value variation at the moment the system undergoes a critical automatic-gain false detection as the feature, detect frame by frame whether a critical automatic-gain false detection has occurred; if so, obtain a coarsely segmented background region and obtain training data by the joint-histogram method. Specifically:
  11) Use the mean term with the largest weight in the parametric camera response model EMoR as an approximation of the camera response function (CRF) to obtain the brightness-difference function (BDF), and from it the positive gain ratio k_pp and the negative gain ratio k_nn at critical false detection. When 1 < k_c/k_r < k_pp, positive gain occurs but does not yet cause a motion false detection; when k_nn < k_c/k_r < 1, negative gain occurs but does not yet cause a motion false detection, where k_r and k_c are the gain coefficients of the reference background frame R and the current frame C, respectively.
  12) For gain ratios equal to the critical positive gain k_pp, to 1, and to the critical negative gain k_nn, obtain the corresponding BDF curves. These three curves divide the image region into four parts, from which the objective function is constructed; when the objective function exceeds the set critical false-detection threshold, a critical false detection has occurred and the background region of the current frame is coarsely segmented.
  13) Pass the coarsely segmented background pixels through the joint-histogram-based noise-reduction process and remove the data items containing 0 or 255, yielding low-dimensional training data.
  2) With the training data obtained in step 1) as input, recover the globally optimal camera response function in a single pass by a method based on maximum-likelihood estimation and parameter constraints.
  3) The maximum of the brightness difference is a monotonically increasing function of the gain ratio. From the correlation between the foreground-background difference and the gain ratio, the aforesaid monotonically increasing function, the current and reference background frames, and the camera response function recovered in step 2), solve for the gain ratio frame by frame. Here the foreground-background difference is the difference between the current frame and the reference background frame, and "foreground and background frames" refers jointly to the current frame and the reference background frame.
  4) If the gain ratio determined in step 3) is not 1, obtain, from that ratio and the camera response function recovered in step 2), a reference background frame with the same gain coefficient as the current frame; otherwise leave the reference background frame unchanged. The reference background frame is thereby updated frame by frame, always matching the gain coefficient of the current frame.
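The steps of claim 1 revolve around the brightness-difference function BDF(I, k_c/k_r), which maps a reference-frame gray level and a gain ratio to the gray change the gain alone would produce. The following is a minimal sketch of that idea, not the patent's implementation: the EMoR mean curve is replaced by a hypothetical gamma response f(q) = q^0.45, and `sigma` and the false-detection rate are placeholder values of my choosing.

```python
import numpy as np

# Hypothetical stand-in for the EMoR mean response curve: f(q) = q^GAMMA on [0, 1].
# The patent recovers the real CRF; the gamma curve here is purely illustrative.
GAMMA = 0.45

def crf(q):
    """Camera response: normalized irradiance -> normalized intensity."""
    return np.power(np.clip(q, 0.0, 1.0), GAMMA)

def crf_inv(intensity):
    """Inverse response: normalized intensity -> normalized irradiance."""
    return np.power(np.clip(intensity, 0.0, 1.0), 1.0 / GAMMA)

def bdf(gray, gain_ratio):
    """Brightness-difference function BDF(I, k_c/k_r) for gray levels 0..255:
    the gray change produced by re-exposing the same irradiance at the new gain."""
    g = np.asarray(gray, dtype=float)
    q = crf_inv(g / 255.0)
    return 255.0 * crf(gain_ratio * q) - g

def critical_gain_ratios(sigma=10.0, false_rate=0.01, ks=np.linspace(0.5, 2.0, 301)):
    """Critical positive ratio k_pp (smallest k > 1) and negative ratio k_nn
    (largest k < 1) at which the BDF first exceeds the per-pixel decision
    threshold sigma on at least `false_rate` of the gray levels."""
    grays = np.arange(256, dtype=float)
    k_pp = min(k for k in ks if k > 1 and np.mean(bdf(grays, k) > sigma) >= false_rate)
    k_nn = max(k for k in ks if k < 1 and np.mean(bdf(grays, k) < -sigma) >= false_rate)
    return float(k_pp), float(k_nn)
```

As in step 11), gain ratios strictly between k_nn and k_pp change gray values without pushing enough pixels past the decision threshold to trigger a motion false detection.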
- 2. The background modeling method based on the camera response function under an automatic-gain scene according to claim 1, characterized in that step 1) is specifically:
  11) Let N′ be the proportion of pixels in the whole image at which the camera automatic gain causes an absolute false detection. The positive and negative gain ratios k_pp, k_nn are:
  k_pp = min{ k_c/k_r | num(BDF(B_r(p_i), k_c/k_r) > σ(p_i)) / N < N′; 1 ≤ i ≤ N }   (1)
  k_nn = min{ k_c/k_r | num(BDF(B_r(p_i), k_c/k_r) < −σ(p_i)) / N < N′; 1 ≤ i ≤ N }   (2)
  where the pixel set of the whole image is P = {p_1, p_2, …, p_N} and N is the total number of image pixels; B_r(p_i) and B_c(p_i) are the gray values of pixel p_i in the reference background frame R and in the current frame C, respectively; k_r and k_c are the gain coefficients of R and C; σ(p_i) is the foreground/background decision threshold at p_i; the BDF is obtained from the CRF; and num(·) counts the pixels satisfying the condition.
  12) According to the distribution characteristics of each image region: from the BDF curves obtained at the critical false-detection gain ratios k_pp, 1, and k_nn, construct the objective function T, dividing the image region into four classes; when T exceeds the threshold, a critical false detection has occurred and the background is coarsely segmented.
  Let x = I_i and y = BDF(I_i, k_j/k_i), where k_i and k_j are the gain coefficients of the two frames i and j between which the automatic gain occurs, k_j/k_i is the gain ratio, and I_i is the gray value of frame i. Setting k_j/k_i to k_pp, k_nn, and 1 yields the curves y = Lp(x), y = Ln(x), and y = 0, which divide the image region into four parts, P = PA ∪ PB ∪ PC ∪ PD.
  When automatic positive gain occurs at the critical false-detection point: PA is the current background region, with 0 < B_c(p_i) − B_r(p_i) < Lp(B_r(p_i)); PB is where a moving object of low gray value occludes a formerly bright background region, with B_c(p_i) − B_r(p_i) < Ln(B_r(p_i)); PC is where a moving object of low gray value occludes a formerly darker background region, with Ln(B_r(p_i)) < B_c(p_i) − B_r(p_i) < 0; PD is where a moving object of high gray value occludes a formerly darker background region, with B_c(p_i) − B_r(p_i) > Lp(B_r(p_i)); and num(PA) >> num(PD), num(PB) > num(PC).
  When automatic negative gain occurs at the critical false-detection point: PA is where a moving object of high gray value occludes a formerly high-gray background region, with 0 < B_c(p_i) − B_r(p_i) < Lp(B_r(p_i)); PB is where a moving object of low gray value occludes a formerly darker background region, with B_c(p_i) − B_r(p_i) < Ln(B_r(p_i)); PC is the current background region, with Ln(B_r(p_i)) < B_c(p_i) − B_r(p_i) < 0; PD is where a moving object of high gray value occludes a formerly low-gray background region, causing a strong brightness difference, with B_c(p_i) − B_r(p_i) > Lp(B_r(p_i)); and num(PC) >> num(PB), num(PD) > num(PA).
  The objective function T is constructed from these region populations. The larger |T|, the more likely it is that automatic gain has occurred. The critical false-detection threshold t is set to 0.75. Let PBG be the set of coarsely segmented background pixels: when T > t, automatic positive gain has occurred without causing a motion false detection, and PBG = PA; when T < −t, automatic negative gain has occurred without causing a motion false detection, and PBG = PC.
  13) Joint-histogram-based noise reduction: let H(IX, PX, X) denote the number of pixels in pixel set PX of image X whose gray value lies between 0 and IX, where B(px_i, X) is the gray value of pixel px_i in image X. Define the joint histogram as:
  Q_BTF = {(m, IC(m)) | H(IC(m), PBG, C) = H(m, PBG, R)}   (5)
  where m ∈ {0, 1, 2, …, 255}, 0 ≤ IC(m) ≤ 255, and R and C are the reference background frame and the current frame. By the monotone non-decreasing property of the CRF, Q_BTF has 256 elements. To remove the error caused by saturation and cutoff, the elements of Q_BTF containing 0 or 255 are discarded, yielding the set P_BTF with M < 255 elements; P_BTF is the low-dimensional training data.
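Step 13)'s joint histogram pairs each gray level m of the reference background with the level IC(m) of the current frame at which the cumulative pixel counts over PBG match, then drops saturated pairs. A sketch of that pairing under the definitions above; the function and argument names (`bg_ref`, `bg_cur` holding the gray values of the PBG pixels in R and C) are mine:

```python
import numpy as np

def joint_histogram_btf(bg_ref, bg_cur):
    """Build P_BTF from formula (5): pair gray level m in the reference background R
    with the smallest level IC(m) in the current frame C whose cumulative count
    over PBG reaches that of m, i.e. H(IC(m), PBG, C) = H(m, PBG, R)."""
    cum_r = np.bincount(np.asarray(bg_ref).ravel(), minlength=256).cumsum()
    cum_c = np.bincount(np.asarray(bg_cur).ravel(), minlength=256).cumsum()
    # smallest index where the cumulative count in C reaches the count of m in R
    ic = np.clip(np.searchsorted(cum_c, cum_r, side="left"), 0, 255)
    q_btf = [(m, int(ic[m])) for m in range(256)]          # 256 elements, as claimed
    # drop pairs touching 0 or 255 (sensor saturation / cutoff), giving P_BTF
    return [(m, v) for m, v in q_btf if 0 < m < 255 and 0 < v < 255]
```

When R and C have identical gray statistics over PBG the pairing reduces to the identity on the populated, unsaturated levels, which is a quick sanity check on the cumulative-count matching.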
- 3. The background modeling method based on the camera response function under an automatic-gain scene according to claim 2, characterized in that step 2) is specifically:
  21) Within the EMoR framework, using logarithm and inverse-function operations, separate the gain coefficient and the scene illumination from the CRF, so that the recovery turns into a linear regression problem. The input training set for CRF recovery is V = P_BTF, where IV_i is the image gray value when the gain coefficient is k_i, IV_j is the image gray value when the gain coefficient is k_j, and M is the number of training data; IV_i and IV_j satisfy:
  IV_i + ε = BTF_ij(IV_j)   (8)
  where BTF is the brightness transfer function and ε is Gaussian noise. CRF recovery under the EMoR framework, based on maximum-likelihood estimation and parameter constraints, yields the globally optimal solution. Taking the inverse function and then the logarithm of the general EMoR form gives formula (9), in which l_1(I), …, l_N(I) are the principal components, ordered from most to least significant, obtained by principal component analysis (PCA) of the inverted and log-transformed CRF database DoRF, and g_0(I) is the mean of the inverted and log-transformed DoRF curves. Under ideal noise-free conditions, IV_i and IV_j correspond to the same brightness value q but different gain coefficients k; substituting IV_i and IV_j into formula (9) and subtracting gives formula (10). Under actual conditions, i.e. when IV_i and IV_j satisfy formula (8), formula (10) deforms into formula (11). Because ε is Gaussian and therefore additive, formula (11) yields formula (12), where ε′ is the Gaussian noise obtained from the linear operation on ε in formula (11). Setting d_0 = −ln(k_i/k_j) gives formula (13). Letting
  t(m) = g_0(IV_j(m)) − g_0(IV_i(m)),
  formula (13) becomes formula (14), with d^T = (d_0, d_1, d_2, …, d_N) and Φ = (φ_0, φ_1, φ_2, …, φ_N)^T, so the problem becomes a standard linear regression.
  22) CRF recovery based on maximum-likelihood estimation and parameter constraints. The least-squares error function is E_D(d) of formula (15), where M is the number of elements of the set V. From EMoR it is known that for the different values of n in formula (14) the basis functions l_n(I) carry different weights: the larger n is, the smaller the weight of the corresponding basis function in the expression, and the smaller the corresponding coefficient should be. The error function under parameter constraints is:
  E(d) = E_D(d) + λ E_d(d)   (16)
  where λ is a diagonal matrix of constraint parameters whose diagonal elements satisfy 0 < λ_1 < λ_2 < … < λ_N (formula (17)). Substituting formulas (15) and (17) into formula (16) gives formula (18). Following maximum-likelihood estimation, differentiate formula (18) with respect to d (formula (19)), set formula (19) to 0 and rearrange (formula (20)) to obtain:
  d = (λ + Φ^T Φ)^{−1} Φ^T t   (21)
  Substituting formula (21) into formula (9) and applying the inverse-function and exponential operations yields the CRF.
- 4. The background modeling method based on the camera response function under an automatic-gain scene according to claim 2 or 3, characterized in that step 3) is specifically:
  31) Analysis shows that the BDF maximum is a monotonically increasing function of the gain ratio, i.e. the two are in one-to-one correspondence.
  32) Gain-ratio estimation based on this one-to-one mapping. Let the BDF maximum be ΔMI(k_j/k_i) and let its abscissa be MI(k_j/k_i):
  (MI(k_j/k_i) = I_i, ΔMI(k_j/k_i) = ΔI_ji) | max{ΔI_ji = BDF(I_i, k_j/k_i)}, 0 ≤ I_i ≤ 255   (22)
  If the gray difference between the current frame C and the reference background frame is caused solely by automatic gain, then the distribution DC formed by the coordinates (x(i) = B_r(p_i), y(i) = B_c(p_i) − B_r(p_i)) of all pixels p_i in the image falls on the curve DL: {(x = I, y = ΔI) | ΔI = BDF(I, k_c/k_r)}, with ΔMI(k_c/k_r) = max(y(i)); by the one-to-one property of ΔMI, k_c/k_r can then be obtained. If moving foreground is present, the interval s_k of k_c/k_r is [k_{c−1}/k_{r−1} − k_th, k_{c−1}/k_{r−1} + k_th], where k_{c−1} and k_{r−1} are the gain coefficients of the previous current frame and the previous reference background frame, and k_th is the gradual-change range of the gain ratio, taken as 0.12; the corresponding interval s_m of MI is then obtained, and the peak coordinate (MB, ΔMB) of DC within s_m is sought. Setting MOI(k_j/k_i) = MB, the gain ratio k_j/k_i = k_m is obtained from the one-to-one mapping. In the ideal case, if the peak is caused by automatic gain it simultaneously satisfies k_m ∈ s_k and MI(k_m) = MB, and the new gain ratio is k_c/k_r = k_m. Taking noise into account, when |MI(k_m) − MB| < TM and k_m ∈ s_k, the gain ratio is updated; otherwise the peak is caused by moving foreground and the gain ratio remains unchanged. TM is taken as 5 here.
  33) Solve for the gain ratio frame by frame according to step 32).
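Steps 31)-32) amount to inverting the monotone map from gain ratio to BDF peak inside the gradual-change window. A sketch under the same hypothetical gamma CRF used earlier as a stand-in for the recovered CRF; the grid search replaces the claim's one-to-one lookup, and every value except k_th = 0.12 is illustrative:

```python
import numpy as np

GAMMA = 0.45  # hypothetical CRF f(q) = q^GAMMA, standing in for the recovered CRF

def bdf(gray, k):
    """BDF(I, k): gray change caused by re-exposing the implied irradiance at gain ratio k."""
    g = np.asarray(gray, dtype=float)
    q = np.power(g / 255.0, 1.0 / GAMMA)
    return 255.0 * np.power(np.clip(k * q, 0.0, 1.0), GAMMA) - g

def bdf_peak(k):
    """DeltaMI(k): maximum of BDF(I, k) over all gray levels; per step 31) it is a
    monotonically increasing (one-to-one) function of the gain ratio for k >= 1."""
    grays = np.arange(256, dtype=float)
    return float(bdf(grays, k).max())

def estimate_gain_ratio(observed_peak, k_prev=1.0, k_th=0.12, steps=481):
    """Step 32), simplified: invert the peak map by nearest-peak match over a grid
    restricted to the gradual-change window [k_prev - k_th, k_prev + k_th]."""
    candidates = np.linspace(k_prev - k_th, k_prev + k_th, steps)
    peaks = np.array([bdf_peak(k) for k in candidates])
    return float(candidates[np.argmin(np.abs(peaks - observed_peak))])
```

The foreground test of step 32) would sit on top of this: accept k_m only when the peak's abscissa also agrees with MI(k_m) to within TM, otherwise attribute the peak to moving foreground and keep the old ratio.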
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011100448052A CN102129689B (en) | 2011-02-24 | 2011-02-24 | Method for modeling background based on camera response function in automatic gain scene |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011100448052A CN102129689B (en) | 2011-02-24 | 2011-02-24 | Method for modeling background based on camera response function in automatic gain scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102129689A true CN102129689A (en) | 2011-07-20 |
CN102129689B CN102129689B (en) | 2012-11-14 |
Family
ID=44267764
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011100448052A Expired - Fee Related CN102129689B (en) | 2011-02-24 | 2011-02-24 | Method for modeling background based on camera response function in automatic gain scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102129689B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102509076A (en) * | 2011-10-25 | 2012-06-20 | 重庆大学 | Principal-component-analysis-based video image background detection method |
CN105574896A (en) * | 2016-02-01 | 2016-05-11 | 衢州学院 | High-efficiency background modeling method for high-resolution video |
CN109844825A (en) * | 2016-10-24 | 2019-06-04 | Signify Holding B.V. | Presence detection system and method |
CN110049250A (en) * | 2019-05-15 | 2019-07-23 | 重庆紫光华山智安科技有限公司 | Image state switching method and device |
CN110290318A (en) * | 2018-12-29 | 2019-09-27 | Institute of Software, Chinese Academy of Sciences | Spaceborne image processing and autonomous decision-making method and system |
CN113014827A (en) * | 2021-03-05 | 2021-06-22 | 深圳英美达医疗技术有限公司 | Imaging automatic gain compensation method, system, storage medium and ultrasonic endoscope |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1450444A (en) * | 2002-04-09 | 2003-10-22 | 三星电子株式会社 | Method and circuit for adjusting background contrast in display apparatus |
CN101216888A (en) * | 2008-01-14 | 2008-07-09 | 浙江大学 | A video foreground extracting method under conditions of view angle variety based on fast image registration |
CN101742319A (en) * | 2010-01-15 | 2010-06-16 | 北京大学 | Background modeling-based static camera video compression method and background modeling-based static camera video compression system |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1450444A (en) * | 2002-04-09 | 2003-10-22 | 三星电子株式会社 | Method and circuit for adjusting background contrast in display apparatus |
CN101216888A (en) * | 2008-01-14 | 2008-07-09 | 浙江大学 | A video foreground extracting method under conditions of view angle variety based on fast image registration |
CN101742319A (en) * | 2010-01-15 | 2010-06-16 | 北京大学 | Background modeling-based static camera video compression method and background modeling-based static camera video compression system |
Non-Patent Citations (3)
Title |
---|
Michael D. Grossberg et al., "Determining the Camera Response from Images: What Is Knowable?", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 11, Nov. 2003 (cited for claims 1-4) * |
Rita Cucchiara et al., "Auto-iris Compensation for Traffic Surveillance Systems", Proceedings of the 8th International IEEE Conference on Intelligent Transportation Systems, 16 Sep. 2005 (cited for claims 1-4) * |
Zhang Weixiang et al., "A Robust Camera Response Function Calibration Algorithm for HDR Images", Chinese Journal of Computers, vol. 29, no. 4, Apr. 2006 (cited for claims 1-4) * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102509076A (en) * | 2011-10-25 | 2012-06-20 | 重庆大学 | Principal-component-analysis-based video image background detection method |
CN102509076B (en) * | 2011-10-25 | 2013-01-02 | 重庆大学 | Principal-component-analysis-based video image background detection method |
CN105574896A (en) * | 2016-02-01 | 2016-05-11 | 衢州学院 | High-efficiency background modeling method for high-resolution video |
CN105574896B (en) * | 2016-02-01 | 2018-03-27 | 衢州学院 | A kind of efficient background modeling method towards high-resolution video |
CN109844825A (en) * | 2016-10-24 | 2019-06-04 | Signify Holding B.V. | Presence detection system and method |
CN110290318A (en) * | 2018-12-29 | 2019-09-27 | Institute of Software, Chinese Academy of Sciences | Spaceborne image processing and autonomous decision-making method and system |
CN110049250A (en) * | 2019-05-15 | 2019-07-23 | 重庆紫光华山智安科技有限公司 | Image state switching method and device |
CN110049250B (en) * | 2019-05-15 | 2020-11-27 | 重庆紫光华山智安科技有限公司 | Camera shooting state switching method and device |
CN113014827A (en) * | 2021-03-05 | 2021-06-22 | 深圳英美达医疗技术有限公司 | Imaging automatic gain compensation method, system, storage medium and ultrasonic endoscope |
Also Published As
Publication number | Publication date |
---|---|
CN102129689B (en) | 2012-11-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10810723B2 (en) | System and method for single image object density estimation | |
Jodoin et al. | Extensive benchmark and survey of modeling methods for scene background initialization | |
US8243991B2 (en) | Method and apparatus for detecting targets through temporal scene changes | |
EP2959454B1 (en) | Method, system and software module for foreground extraction | |
CN107123131B (en) | Moving target detection method based on deep learning | |
CN102129689B (en) | Method for modeling background based on camera response function in automatic gain scene | |
CN111797653B (en) | Image labeling method and device based on high-dimensional image | |
CN108197546B (en) | Illumination processing method and device in face recognition, computer equipment and storage medium | |
US9129379B2 (en) | Method and apparatus for bilayer image segmentation | |
CN106886216B (en) | Robot automatic tracking method and system based on RGBD face detection | |
US20140307917A1 (en) | Robust feature fusion for multi-view object tracking | |
US10026004B2 (en) | Shadow detection and removal in license plate images | |
US20070154088A1 (en) | Robust Perceptual Color Identification | |
Stringa | Morphological Change Detection Algorithms for Surveillance Applications. | |
CN113324864B (en) | Pantograph carbon slide plate abrasion detection method based on deep learning target detection | |
CN105044122A (en) | Copper part surface defect visual inspection system and inspection method based on semi-supervised learning model | |
CN103344583B (en) | A kind of praseodymium-neodymium (Pr/Nd) component concentration detection system based on machine vision and method | |
CN112419261B (en) | Visual acquisition method and device with abnormal point removing function | |
Tiwari et al. | A survey on shadow detection and removal in images and video sequences | |
Raut et al. | Detection and identification of plant leaf diseases based on python | |
CN114298948A (en) | Ball machine monitoring abnormity detection method based on PSPNet-RCNN | |
Cao et al. | Learning spatial-temporal representation for smoke vehicle detection | |
KR102171384B1 (en) | Object recognition system and method using image correction filter | |
CN111127355A (en) | Method for finely complementing defective light flow graph and application thereof | |
Cristani et al. | A spatial sampling mechanism for effective background subtraction. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
EE01 | Entry into force of recordation of patent licensing contract |
Application publication date: 20110720 Assignee: Nanjing Mdt InfoTech Ltd Assignor: Nanjing University Contract record no.: 2013320000099 Denomination of invention: Method for modeling background based on camera response function in automatic gain scene Granted publication date: 20121114 License type: Exclusive License Record date: 20130314 |
|
LICC | Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model | ||
ASS | Succession or assignment of patent right |
Owner name: NANJING HUICHUAN INDUSTRIAL VISUAL TECHNOLOGY DEVELOPMENT CO., LTD. Free format text: FORMER OWNER: NANJING UNIVERSITY Effective date: 20140612 |
|
C41 | Transfer of patent application or patent right or utility model | ||
COR | Change of bibliographic data |
Free format text: CORRECT: ADDRESS; FROM: 210093 NANJING, JIANGSU PROVINCE TO: 210042 NANJING, JIANGSU PROVINCE |
|
TR01 | Transfer of patent right |
Effective date of registration: 20140612 Address after: 3rd floor, Building F, Zone B, Xuzhuang Software Park, Xuanwu District, Nanjing 210042, Jiangsu Province Patentee after: NANJING HUICHUAN INDUSTRIAL VISUAL TECHNOLOGY DEVELOPMENT CO., LTD. Address before: No. 22 Hankou Road, Gulou District, Nanjing 210093, Jiangsu Province Patentee before: Nanjing University |
|
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20121114 Termination date: 20200224 |
|
CF01 | Termination of patent right due to non-payment of annual fee |