CN104408747B - Human motion detection method suitable for depth image - Google Patents


Info

Publication number
CN104408747B
CN104408747B
Authority
CN
China
Prior art keywords
pixel
value
image
background
background model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410717382.XA
Other languages
Chinese (zh)
Other versions
CN104408747A (en)
Inventor
孟明
杨方波
鲁少娜
朱俊青
桂奇政
佘青山
罗志增
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Yanzong Industry Investment Development Co ltd
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201410717382.XA priority Critical patent/CN104408747B/en
Publication of CN104408747A publication Critical patent/CN104408747A/en
Application granted
Publication of CN104408747B publication Critical patent/CN104408747B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Abstract

The invention provides a human motion detection method suitable for depth images. The method comprises the following steps: first, the image is divided into an upper layer and a lower layer, a background model is built for each layer from a different neighborhood, and a reference model is created alongside the background model; second, the difference-threshold parameter of the lower-layer algorithm is adjusted, and in subsequent video frames each pixel is compared with the background model to classify it; third, the background model is updated with different update rules depending on the pixel classification; finally, falsely detected points are removed by denoising. The method markedly improves the recognition and detection rates of the human body.

Description

A human motion detection method for depth images
Technical field
The invention belongs to the field of computer vision and relates to a method for detecting human motion in depth images.
Background art
In the visual analysis of human motion, motion detection is a key preprocessing step that directly affects subsequent tracking and recognition, so human motion detection algorithms have long been a research focus in this field.
3D sensors, of which the Microsoft Kinect is representative, can acquire depth images that capture the three-dimensional structure of a scene, offering a new approach to human motion detection and analysis. Compared with ordinary color images, depth images have clear advantages: shadows and lighting changes, which plague color images, have little effect on depth images.
The ViBe (visual background extractor) algorithm is a background-subtraction method. It is a pixel-level video background modeling algorithm whose results are better than those of conventional mean-background and Gaussian-mixture models, and it has low computational cost and high processing efficiency. However, because depth images differ in character from ordinary color images, applying the method directly to depth images raises the following problems: 1) Moving targets near the ground are hard to detect; concretely, the feet, which touch the ground, disappear from the image. 2) If a moving person is present in the background image at modeling time, ViBe initializes the person as background; after the person moves away, the vacated region is judged to be foreground, a phenomenon called a "ghost". Conversely, when the person later moves back over the ghost region, the overlap cannot be detected as foreground, a phenomenon called a "shadow". 3) Inherent shortcomings of the Kinect sensor cause static pixels to be mistaken for foreground. This is mainly due to the sensor's accuracy error: the values of distant pixels vary between neighboring frames and may even be lost, and object edges are unstable.
Summary of the invention
To address the problems above, the present invention proposes a human motion detection method for depth images built on the classical ViBe algorithm.
To achieve this, the method mainly comprises the following steps:
Step (1). Build the background model;
Step (2). Classify pixels;
Step (3). Update the background model;
Step (4). Denoise falsely detected points.
The present invention has the following advantages:
1. In background modeling, an adaptive image-layering process and a modeling scheme using a different neighborhood pattern per layer are proposed, and a reference model MR(x) for removing the "ghost" phenomenon is added to the background model.
2. A foreground-point verification step is added to pixel classification; "ghosts" are eliminated by comparing the current pixel with the reference model.
3. A background-model update policy based on foreground points is added, which solves the "shadow" phenomenon and keeps the background model accurate.
4. Falsely detected points in the classification result are removed by a thresholding method.
5. The moving human body can be extracted completely from the depth image, a prerequisite for follow-up research such as gait analysis, so the method has broad application prospects in the field of human motion analysis.
Brief description of the drawings
Fig. 1: flow chart of the DViBe algorithm;
Fig. 2a: 24-neighborhood sampling pattern;
Fig. 2b: 14-neighborhood sampling pattern;
Fig. 3: effect of the difference threshold Rb on PCC.
Detailed description of the embodiments
The human motion detection method for depth images using the DViBe algorithm of the present invention is described below with reference to the drawings.
Fig. 1 is the flow chart of the DViBe algorithm, which mainly comprises the following steps:
(1) Divide the image into an upper and a lower layer and build a background model for each layer from a different neighborhood. While building the background model, also create a reference model MR(x).
(2) Adjust the difference-threshold parameter Rb of the lower-layer algorithm.
(3) In subsequent video frames, compare each pixel with the background model to classify it.
(4) Update the background model with different update rules depending on the pixel classification.
(5) Denoise falsely detected points.
Each step is described in detail below.
Step 1: background modeling
(1) Adaptive image layering based on the depth image
The essence of the layering technique is to split the image into a ground region and a non-ground region. To adapt to changes in the mounting angle and position of the Kinect sensor, the image is layered using the fact that the depth value of ground pixels increases in the vertically upward direction, together with the maximum effective viewing distance D of the sensor. The detailed procedure is as follows:
1) Randomly select a column of the image and traverse its pixels vertically upward, starting from the bottommost pixel.
2) Record the ordinate y of the first pixel whose value lies in the range D ± 5; if the whole column has been traversed without finding such a pixel, y defaults to the maximum ordinate.
3) Repeat steps 1) and 2) m times to obtain m ordinate values {y1, y2, …, ym−1, ym}.
4) Since occlusion by objects and sensor errors can make some values of y too large or too small and thus distort the split, the m ordinate values are sorted by size, the k middle values are chosen, and their average gives the layer-boundary ordinate of the image:
ȳ = (y1 + y2 + … + yk) / k (1)
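The layering procedure above can be sketched as follows. This is a minimal illustration only: the function name, the parameter defaults, and the coordinate convention (array row 0 at the top, scanning from the bottom row upward, with the default ordinate taken as the topmost row) are assumptions, not taken from the patent.

```python
import random
import numpy as np

def layer_boundary(depth, D, m=20, k=5, tol=5, seed=0):
    """Estimate the ground/non-ground boundary row of a depth image.

    For m randomly chosen columns, scan upward from the bottom row and
    record the first row whose depth lies within D +/- tol (D is the
    sensor's maximum effective viewing distance); if no such pixel is
    found, the ordinate defaults to the end of the scan. The k middle
    values of the sorted records are then averaged.
    """
    rng = random.Random(seed)
    h, w = depth.shape
    ys = []
    for _ in range(m):
        col = rng.randrange(w)
        y = 0                                   # default ordinate if no hit (assumed convention)
        for row in range(h - 1, -1, -1):        # traverse bottom -> top
            if abs(int(depth[row, col]) - D) <= tol:
                y = row
                break
        ys.append(y)
    ys.sort()
    mid = ys[(m - k) // 2:(m - k) // 2 + k]     # the k middle values
    return sum(mid) / k                         # equation (1)
```

A synthetic image whose bottom rows carry the far-ground depth D yields the expected boundary row.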
(2) Building the pixel background model and reference model
The DViBe algorithm models from a single frame, so the initial background model is built from the first frame of the image sequence. Let v(x) denote the value, in the given color space, of the pixel at position x; each pixel x of the background model is modeled as a set M(x). M(x) contains the values vi, i = 1, …, n, of n pixels randomly selected from the neighborhood of pixel x, called background sample values, i.e.
M (x)={v1,v2,v3,…,vn-1,vn} (2)
MR(x)=v (x) (3)
where vi is the background sample value with index i, n is the number of samples, and v(x) is the value of pixel x. For the upper-layer image a 24-neighborhood is chosen, as in Fig. 2a; for the lower-layer image a 14-neighborhood is chosen, as in Fig. 2b.
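Equations (2) and (3) amount to drawing n random samples from each pixel's neighborhood. The sketch below uses a square window as a stand-in for the 24- and 14-neighborhood patterns of Figs. 2a and 2b, whose exact shapes are defined only in the drawings; names and defaults are illustrative.

```python
import random
import numpy as np

def init_models(frame, y, x, n=20, radius=2, seed=0):
    """Initialise the background model M(x) and reference model MR(x)
    for pixel (y, x) of the first frame, per equations (2) and (3)."""
    rng = random.Random(seed)
    h, w = frame.shape
    samples = []
    while len(samples) < n:
        dy = rng.randint(-radius, radius)
        dx = rng.randint(-radius, radius)
        if (dy, dx) == (0, 0):
            continue                        # neighborhood excludes the pixel itself
        yy = min(max(y + dy, 0), h - 1)     # clamp at image borders
        xx = min(max(x + dx, 0), w - 1)
        samples.append(int(frame[yy, xx]))
    M = samples                  # M(x) = {v1, ..., vn}
    MR = int(frame[y, x])        # MR(x) = v(x)
    return M, MR
```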
Step 2: pixel classification
(1) Basic classification principle
Pixel x of the current image is classified by comparing its value v(x) with the corresponding background model M(x). The distance between v(x) and a sample value vi of M(x) in the given color space is
dis[v(x),vi]=||v(x)-vi|| (4)
Given a difference threshold R, count the number of samples satisfying dis[v(x), vi] < R and denote it C. If C < Cmin then x is a foreground point; otherwise it is a background point. Cmin is the pixel-classification match parameter.
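The classification rule around equation (4) can be written directly; the default values R = 20 and Cmin = 2 are illustrative (the patent fixes the upper-layer threshold at 20 but does not state Cmin).

```python
def classify(v, M, R=20, c_min=2):
    """ViBe-style pixel classification: count the samples of M(x) whose
    distance to v(x) is below R; fewer than c_min matches means the
    pixel is foreground, otherwise background."""
    C = sum(1 for vi in M if abs(v - vi) < R)
    return "foreground" if C < c_min else "background"
```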
(2) Parameter adjustment
After layering, the classification parameters also need adjustment: different distance thresholds are used for the upper and lower parts of the image. For upper-layer classification the distance threshold Rt is kept unchanged at 20. Because the depth values of ground pixels in the lower layer are close to one another, effective detection of moving targets near the ground requires, besides the changed neighborhood pattern for sample selection, tuning the size of the threshold Rb. A smaller Rb makes moving targets easier to detect, but too small an Rb causes many background points to be mistaken for foreground. The effect of different Rb values on the percentage of correctly classified pixels (PCC) is shown in Fig. 3: the larger Rb, the higher the recognition rate, but beyond 6 the PCC levels off; weighing the overall recognition rate against the recognition rate of the feet, Rb = 6 is chosen.
(3) Foreground-point verification
If a moving person is present in the background image at modeling time, ViBe wrongly initializes the person as background, which creates the following problem: after the person moves away, the values of the pixels in the vacated region change drastically, characteristically becoming larger. ViBe judges these points to be foreground, and they remain foreground permanently. We call this set of foreground points, which corresponds to no actual moving object, a "ghost". Exploiting the fact that the pixel value becomes larger, ghosts can be removed by comparing the current pixel with the corresponding pixel of the reference model. The verification rule is:
v(x,t) − v(x,t0) > 0 and v(x,t0) > 0 (5)
where v(x, t0) is the value of pixel x in MR(x) and v(x, t) is the value of the corresponding pixel x in the current image. When the distance exceeds the detection range of the Kinect sensor, pixels may be lost, i.e. take the value 0, which would cause some foreground points to be mistaken for background points. Adding the qualifying condition v(x, t0) > 0 prevents normal foreground points from being misclassified as background for this reason.
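A sketch of the verification rule: a foreground point is declared a ghost (and hence reclassified as background) when the current depth exceeds the reference depth and the reference value is not a lost (zero) reading. The function name and argument names are illustrative.

```python
def is_ghost(v_ref, v_cur):
    """Foreground-point verification for ghost removal: the depth of a
    vacated region grows after the person leaves (v_cur > v_ref), and
    the guard v_ref > 0 excludes lost reference pixels, which would
    otherwise make genuine foreground look like a ghost."""
    return v_cur - v_ref > 0 and v_ref > 0
```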
Step 3: background model update
(1) Basic update principle of the original algorithm
The purpose of updating the background model is to ensure that it remains accurate as time passes. When pixel x is classified as a background point, the update of its background model M(x) is triggered. Random subsampling first decides whether to update M(x) at all; if the model is chosen for update, a randomly chosen sample value in M(x) is replaced by the current pixel value v(x), so that the lifetime of the sample values in the background model decays exponentially. To preserve spatial consistency, the update also randomly applies the same procedure to the background model of a neighboring pixel of x.
The shadow phenomenon occurs because, when the person moves back into the ghost region, the pixel values in the overlap are close to the sample values in its background model, so the algorithm detects the overlapping region as background. In the original algorithm, the background-point-based update policy can also resolve the shadow in subsequent frames, but its handling of the ghost phenomenon is relatively slow.
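The conservative random-subsampling update described above can be sketched as follows (neighbor propagation is analogous and omitted; phi = 16 is the subsampling factor commonly used with ViBe, assumed here rather than stated in the patent):

```python
import random

def update_background(M, v, phi=16, rng=None):
    """Conservative ViBe-style update, run only when the pixel was
    classified as background: with probability 1/phi, replace one
    randomly chosen sample of M(x) with the current value v(x), so
    sample lifetimes decay exponentially."""
    rng = rng or random.Random(0)
    if rng.randrange(phi) == 0:          # random subsampling
        M[rng.randrange(len(M))] = v     # in-place replacement
    return M
```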
(2) Foreground-point-based background model update and reference model update
The specific policy of the foreground-point-based background model update is as follows:
1) For every pixel of the image, count the number of times F it has been judged foreground consecutively; the count restarts whenever the pixel is detected as background.
2) Set a frame-count threshold Fmin; when F > Fmin, the algorithm is considered to have misjudged a background point as foreground.
3) Reclassify the point as background, restart the count F, update the reference model MR(x), and build a new background model, choosing the modeling samples with the adaptive layered neighborhood pattern described above.
Step 4: denoising of falsely detected points
The Kinect sensor's error grows with distance, and below a certain minimum distance it cannot measure at all; noise therefore appears mostly at large depth values and at object edges. A large number of false detections can thus be removed with a thresholding method: when the depth value of a pixel detected as foreground falls outside the set range, the point is considered background, i.e.
v(x) = 0 if v(x) > T or v(x) < t, and v(x) = 255 otherwise (6)
where T and t are depth thresholds, 255 denotes foreground and 0 denotes background. T and t can be determined from the depth values of the sensor's effective viewing range; if the region occupied by the human body is known, T and t can be set from that region, which removes even more false detections.
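The thresholding rule can be applied to a whole foreground mask at once. T = 4000 and t = 800 (millimeters) are illustrative values chosen for a Kinect-like working range; the patent leaves T and t to be set from the sensor range or the known human region.

```python
import numpy as np

def denoise_mask(depth, fg_mask, T=4000, t=800):
    """False-detection removal by depth thresholding: a foreground pixel
    (255 in fg_mask) whose depth lies outside [t, T] is reset to
    background (0); valid foreground stays 255."""
    out = np.where((depth > T) | (depth < t), 0, 255)
    return np.where(fg_mask == 255, out, 0).astype(np.uint8)
```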
Experiments show that the improved ViBe algorithm is feasible for human motion detection in depth images, and both the recognition rate and the detection rate of the human body are significantly improved.

Claims (3)

1. A human motion detection method for depth images, characterized in that the method comprises the following steps:
Step 1: background modeling
(1) Adaptive image layering based on the depth image
The image is layered using the fact that the depth value of ground pixels increases in the vertically upward direction, together with the maximum effective viewing distance D of the sensor; the detailed procedure is as follows:
1) Randomly select a column of the image and traverse its pixels vertically upward, starting from the bottommost pixel;
2) Record the ordinate y of the first pixel whose value lies in the range D ± 5; if the whole column has been traversed without finding such a pixel, y defaults to the maximum ordinate;
3) Repeat steps 1) and 2) m times to obtain m ordinate values {y1,y2,…,ym-1,ym};
4) Sort the m ordinate values by size, choose the k middle values, and average them to obtain the layer-boundary ordinate of the image:
ȳ = (y1 + y2 + … + yk) / k;
(2) Building the pixel background model and reference model
Let v(x) denote the depth value of pixel x in the depth image; each pixel x of the background model is modeled as a set M(x); M(x) contains the values vi, i = 1, …, n, of n pixels randomly selected from the neighborhood of pixel x, called background sample values, i.e.
M (x)={v1,v2,v3,…,vn-1,vn};
where vi is the background sample value with index i and n is the number of samples; a 24-neighborhood is chosen for the upper-layer image and a 14-neighborhood for the lower-layer image; in addition, a reference model MR(x) preserves the depth value of pixel x for foreground-point verification, i.e.
MR(x)=v (x);
Step 2: pixel classification
Pixel x of the current image is classified by comparing its value v(x) with the corresponding background model M(x); the distance between v(x) and a sample value vi of M(x) in the given color space is:
dis[v(x),vi]=||v(x)-vi||;
Given a difference threshold R, count the number of samples satisfying dis[v(x), vi] < R and denote it C; if C < Cmin then x is a foreground point, otherwise a background point; Cmin is the pixel-classification match parameter;
Step 3: background model update
The purpose of updating the background model is to ensure that it remains accurate as time passes; when pixel x is classified as a background point, the update of its background model M(x) is triggered; random subsampling first decides whether to update M(x); if the model is chosen for update, a randomly chosen sample value in M(x) is replaced by the current pixel value v(x), so that the lifetime of the sample values in the background model decays exponentially; to preserve spatial consistency, the update also randomly applies the same procedure to the background model of a neighboring pixel of x;
Step 4: denoising of falsely detected points
A large number of falsely detected points are removed with a thresholding method; when the depth value of a pixel detected as foreground falls outside the set range, the point is considered background, i.e.
v(x) = 0 if v(x) > T or v(x) < t, and v(x) = 255 otherwise;
where T and t are depth thresholds, 255 denotes foreground and 0 denotes background.
2. The human motion detection method for depth images according to claim 1, characterized in that: after layering, different distance thresholds are used in pixel classification for the upper and lower parts of the image; for upper-layer classification the distance threshold Rt is kept unchanged at 20; because the depth values of ground pixels in the lower layer are close to one another, effective detection of moving targets near the ground requires, besides the changed neighborhood pattern for sample selection, adjusting the distance threshold Rb to 6.
3. The human motion detection method for depth images according to claim 1, characterized in that: when a moving person is present in the background image at modeling time, the ghost phenomenon is removed by exploiting the fact that the pixel value becomes larger and comparing the current pixel with the corresponding pixel of the background model; the verification rule is:
v(x,t) − v(x,t0) > 0 and v(x,t0) > 0;
where v(x, t0) is the value of pixel x in MR(x) and v(x, t) is the value of the corresponding pixel x in the current image.
CN201410717382.XA 2014-12-01 2014-12-01 Human motion detection method suitable for depth image Active CN104408747B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410717382.XA CN104408747B (en) 2014-12-01 2014-12-01 Human motion detection method suitable for depth image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410717382.XA CN104408747B (en) 2014-12-01 2014-12-01 Human motion detection method suitable for depth image

Publications (2)

Publication Number Publication Date
CN104408747A CN104408747A (en) 2015-03-11
CN104408747B true CN104408747B (en) 2017-02-22

Family

ID=52646375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410717382.XA Active CN104408747B (en) 2014-12-01 2014-12-01 Human motion detection method suitable for depth image

Country Status (1)

Country Link
CN (1) CN104408747B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105631862B (en) * 2015-12-21 2019-05-24 浙江大学 A kind of background modeling method based on neighborhood characteristics and grayscale information
CN106251348B (en) * 2016-07-27 2021-02-02 广东外语外贸大学 Self-adaptive multi-cue fusion background subtraction method for depth camera
CN107067843A (en) * 2017-02-10 2017-08-18 广州动创信息科技有限公司 Body-sensing touch-control electronic blank tutoring system
CN107066950A (en) * 2017-03-14 2017-08-18 北京工业大学 A kind of human testing window rapid extracting method based on depth information
CN107454316B (en) * 2017-07-24 2021-10-15 艾普柯微电子(江苏)有限公司 Motion detection method and device
CN107441691B (en) * 2017-09-12 2019-07-02 上海视智电子科技有限公司 Body building method and body-building equipment based on body-sensing camera
CN109407839B (en) * 2018-10-18 2020-06-30 京东方科技集团股份有限公司 Image adjusting method and device, electronic equipment and computer readable storage medium
CN111915687A (en) * 2020-07-13 2020-11-10 浙江工业大学 Background extraction method with depth information and color information
CN112101090B (en) * 2020-07-28 2023-05-16 四川虹美智能科技有限公司 Human body detection method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102903110A (en) * 2012-09-29 2013-01-30 宁波大学 Segmentation method for image with deep image information
CN104077776A (en) * 2014-06-27 2014-10-01 深圳市赛为智能股份有限公司 Visual background extracting algorithm based on color space self-adapting updating

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2395102B1 (en) * 2010-10-01 2013-10-18 Telefónica, S.A. METHOD AND SYSTEM FOR CLOSE-UP SEGMENTATION OF REAL-TIME IMAGES

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102903110A (en) * 2012-09-29 2013-01-30 宁波大学 Segmentation method for image with deep image information
CN104077776A (en) * 2014-06-27 2014-10-01 深圳市赛为智能股份有限公司 Visual background extracting algorithm based on color space self-adapting updating

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
EVibe: an improved ViBe moving object detection algorithm; 余烨 et al.; 仪器仪表学报 (Chinese Journal of Scientific Instrument); April 2014; vol. 35, no. 4, pp. 924-931 *
A new ViBe-based moving object detection method; 胡小冉 et al.; 计算机科学 (Computer Science); February 2014; vol. 41, no. 2, pp. 149-152 *

Also Published As

Publication number Publication date
CN104408747A (en) 2015-03-11

Similar Documents

Publication Publication Date Title
CN104408747B (en) Human motion detection method suitable for depth image
CN109508710A (en) Based on the unmanned vehicle night-environment cognitive method for improving YOLOv3 network
CN103530893B (en) Based on the foreground detection method of background subtraction and movable information under camera shake scene
CN105046206B (en) Based on the pedestrian detection method and device for moving prior information in video
CN106096561A (en) Infrared pedestrian detection method based on image block degree of depth learning characteristic
CN103198493B (en) A kind ofly to merge and the method for tracking target of on-line study based on multiple features self-adaptation
CN107301378B (en) Pedestrian detection method and system based on multi-classifier integration in image
CN105894701B (en) The identification alarm method of transmission line of electricity external force damage prevention Large Construction vehicle
CN106651872A (en) Prewitt operator-based pavement crack recognition method and system
CN104182985B (en) Remote sensing image change detection method
CN108280450A (en) A kind of express highway pavement detection method based on lane line
CN104376551A (en) Color image segmentation method integrating region growth and edge detection
CN107180228A (en) A kind of grad enhancement conversion method and system for lane detection
CN104915642B (en) Front vehicles distance measuring method and device
CN108537782A (en) A method of building images match based on contours extract with merge
CN106503170B (en) It is a kind of based on the image base construction method for blocking dimension
CN103871062A (en) Lunar surface rock detection method based on super-pixel description
CN107944354A (en) A kind of vehicle checking method based on deep learning
CN107301417A (en) A kind of method and device of the vehicle brand identification of unsupervised multilayer neural network
CN104778696A (en) Image edge grading-detection method based on visual pathway orientation sensitivity
CN108399366A (en) It is a kind of based on the remote sensing images scene classification extracting method classified pixel-by-pixel
CN105678773B (en) A kind of soft image dividing method
CN103077383B (en) Based on the human motion identification method of the Divisional of spatio-temporal gradient feature
CN105740814B (en) A method of determining solid waste dangerous waste storage configuration using video analysis
CN104537637B (en) A kind of single width still image depth estimation method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20150311

Assignee: FUZHOU FUGUANG WATER SCIENCE & TECHNOLOGY CO.,LTD.

Assignor: HANGZHOU DIANZI University

Contract record no.: 2019330000071

Denomination of invention: Human motion detection method suitable for depth image

Granted publication date: 20170222

License type: Common License

Record date: 20190718

TR01 Transfer of patent right

Effective date of registration: 20201211

Address after: Room 1004-5, building 8, 3333 Guangyi Road, Daqiao Town, Nanhu District, Jiaxing City, Zhejiang Province

Patentee after: Jiaxing Xunfu New Material Technology Co.,Ltd.

Address before: Room 3003-1, building 1, Gaode land center, Jianggan District, Hangzhou City, Zhejiang Province

Patentee before: Zhejiang Zhiduo Network Technology Co.,Ltd.

Effective date of registration: 20201211

Address after: Room 3003-1, building 1, Gaode land center, Jianggan District, Hangzhou City, Zhejiang Province

Patentee after: Zhejiang Zhiduo Network Technology Co.,Ltd.

Address before: 310018 No. 2 street, Xiasha Higher Education Zone, Hangzhou, Zhejiang

Patentee before: HANGZHOU DIANZI University

TR01 Transfer of patent right

Effective date of registration: 20201221

Address after: 224002 Building 5, No. 55, Taishan South Road, Yancheng Economic and Technological Development Zone, Jiangsu Province

Patentee after: Jiangsu Suya Heavy Industry Technology Co.,Ltd.

Address before: Room 1004-5, building 8, 3333 Guangyi Road, Daqiao Town, Nanhu District, Jiaxing City, Zhejiang Province

Patentee before: Jiaxing Xunfu New Material Technology Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20210226

Address after: 224002 room 1209, business building, comprehensive bonded zone, No. 18, South hope Avenue, Yancheng Economic and Technological Development Zone, Jiangsu Province

Patentee after: Jiangsu Yanzong Industry Investment Development Co.,Ltd.

Address before: 224002 Building 5, No. 55, Taishan South Road, Yancheng Economic and Technological Development Zone, Jiangsu Province

Patentee before: Jiangsu Suya Heavy Industry Technology Co.,Ltd.

EC01 Cancellation of recordation of patent licensing contract

Assignee: FUZHOU FUGUANG WATER SCIENCE & TECHNOLOGY Co.,Ltd.

Assignor: HANGZHOU DIANZI University

Contract record no.: 2019330000071

Date of cancellation: 20210517