CN112132043B - Fire fighting channel occupation self-adaptive detection method based on monitoring video - Google Patents
- Publication number
- CN112132043B (application number CN202011013470.3A)
- Authority
- CN
- China
- Prior art keywords
- model
- value
- pixel
- background
- fire fighting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The invention discloses an adaptive detection method for fire-fighting-channel occupation based on surveillance video. Unlike other fire-fighting-channel occupation detection methods, it applies target detection technology from the field of computer vision and needs no manually constructed features, effectively reducing the false alarms and missed detections that manual feature extraction methods suffer from; slowly moving objects are readily detected by the mixed-Gaussian background modeling method; persons and fire-fighting vehicles among the suspected occupying objects are removed by a target detection algorithm, effectively avoiding false alarms; and a continuous multi-frame judgment method improves the detection accuracy of the system and further reduces false alarms.
Description
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to an adaptive detection method for fire-fighting-access occupation.
Background
A fire-fighting channel is a lifeline: residents are safe only if this lifeline is kept clear at all times. Fire rescue is a race against time, and a blocked or occupied fire-fighting channel is often the greatest obstruction to rescue; once the channel is blocked or occupied, it creates a serious hidden danger to the lives and property of residents. Efficient detection and real-time early warning of fire-fighting-channel occupation is therefore an urgent problem for society. Manual monitoring of fire-fighting channels consumes a great deal of manpower and cannot meet the requirements of accuracy, robustness, and real-time performance.
Disclosure of Invention
In order to solve the technical problems mentioned in the background art, the invention provides a fire fighting access occupation self-adaptive detection method based on a monitoring video.
In order to achieve the technical purpose, the technical scheme of the invention is as follows:
a fire fighting channel occupation self-adaptive detection method based on a monitoring video comprises the following steps:
(1) acquiring the video stream of a camera in the scene to be detected, capturing a video frame image, and drawing one or more polygons on it as the key detection area; once set, the key detection area remains effective for all video frames subsequently captured from this video stream;
(2) taking the pixel values at a fixed position in all frames of a video segment and averaging them to obtain a reference value; the frame whose pixel value at that position is closest to the reference value is taken as the key frame of the video segment;
(3) modeling a mixed Gaussian background, and judging whether each pixel value of a key frame belongs to the background or the foreground according to the established model;
(4) comparing the video frame image captured in step (1) with the background image obtained in step (3) and calculating the area proportion of the changed region between the two frames; if the proportion exceeds a set threshold, the region is judged to be an abnormal target; the circumscribed rectangular frame of the abnormal target is then compared with the set polygonal key detection area to judge whether they intersect; if they do not intersect, the abnormal target is discarded, otherwise the abnormal target and its circumscribed rectangular frame are kept;
(5) using a target detection algorithm to detect whether either of two classes of objects, persons and fire-fighting vehicles, appears; if so, removing these objects and not treating them as abnormal targets;
(6) comprehensively judging, according to steps (4) and (5), whether any object occupies the fire-fighting channel;
(7) if step (6) judges that an object occupies the fire-fighting channel, storing the position information of the object;
(8) judging whether the occupation occurs at the same position; if consecutive frames all show occupation at the same position, triggering the alarm.
Further, in step (3), for the observation data set {x_1, x_2, ..., x_N} of a random variable X, where x_t is the sample of a pixel at time t, t = 1, 2, ..., N, and N is the number of sampling points, a single sample x_t obeys the mixed-Gaussian probability density function p(x_t):

p(x_t) = Σ_{i=1}^{k} ω_{i,t} · η(x_t, μ_{i,t}, τ_{i,t}),  η(x_t, μ_{i,t}, τ_{i,t}) = (2π)^(-n/2) |τ_{i,t}|^(-1/2) exp(-(1/2)(x_t - μ_{i,t})^T τ_{i,t}^(-1) (x_t - μ_{i,t})),  τ_{i,t} = δ_{i,t}² I

where k is the total number of distribution models, η(x_t, μ_{i,t}, τ_{i,t}) is the i-th Gaussian distribution at time t, μ_{i,t} is its mean, τ_{i,t} is its covariance matrix, δ_{i,t} is its variance, I is the three-dimensional identity matrix, and ω_{i,t} is the weight of the i-th Gaussian distribution at time t.
Further, in step (3), the method for determining whether the pixel value belongs to the background or the foreground is as follows:
(3-1) each new pixel value X_t is compared with the current k distribution models according to the following formula until a matching model is found:

|X_t - μ_{i,t-1}| ≤ 2.5·σ_{i,t-1}

where μ_{i,t-1} is the mean and σ_{i,t-1} the standard deviation of the i-th model at time t-1;
(3-2) if the matched model meets the background requirement, the pixel belongs to the background, otherwise, the pixel belongs to the foreground;
(3-3) the weight of each model is updated according to the following formula and then normalized:

ω_{i,t} = (1 - α)·ω_{i,t-1} + α·M_{i,t}

where α is the learning rate; M_{i,t} = 1 for the matched model and M_{i,t} = 0 otherwise;
(3-4) the mean and standard deviation of the unmatched models remain unchanged, while the parameters of the matched model are updated according to the following formulas:

μ_{i,t} = (1 - ρ)·μ_{i,t-1} + ρ·X_t
σ_{i,t}² = (1 - ρ)·σ_{i,t-1}² + ρ·(X_t - μ_{i,t})^T (X_t - μ_{i,t})
ρ = α·η(X_t | μ_t, σ_t)

where η(X_t | μ_t, σ_t) is the probability that the pixel value X_t satisfies the matched i-th Gaussian model at time t, μ_t and σ_t are the mean and standard deviation of that model, and the superscript T denotes the transpose;
(3-5) if no model is matched in step (3-1), the model with the minimum weight is replaced: its mean is set to the current pixel value, its standard deviation to the maximum standard deviation among the other Gaussian components, and its weight to the minimum weight among the other Gaussian components;

(3-6) the models are sorted in descending order by the ratio of weight to standard deviation, ω_{i,t}/σ_{i,t};

(3-7) the first B models are selected as the background, where B satisfies:

B = argmin_b { Σ_{i=1}^{b} ω_{i,t} > T_0 }

where T_0 is a preset threshold representing the proportion of background components in the whole Gaussian mixture, 0 ≤ T_0 ≤ 1; each pixel X_t is then re-checked against these B models: if it matches one of them, the pixel is background, otherwise it is foreground.
The above technical scheme brings the following beneficial effects:
(1) the invention applies target detection technology from the field of computer vision and, unlike other fire-fighting-channel occupation detection methods, needs no manually constructed features, effectively reducing the false alarms and missed detections of manual feature extraction methods;
(2) the mixed-Gaussian background modeling method is particularly suitable for detecting slowly moving objects: since the background is described by Gaussian distributions, a stopped target object accumulates foreground data that forms a new Gaussian distribution, so a stopped object is eventually absorbed into the background; a slowly moving object, however, can hardly form a new Gaussian distribution in a short time, so the mixed-Gaussian model detects it easily;
(3) the invention uses a target detection algorithm to check whether the suspected occupying objects in the incoming video stream are persons or fire-fighting vehicles, effectively avoiding false alarms;
(4) the invention uses a continuous multi-frame judgment method instead of taking the detection result of a single frame as the final result, which improves the detection accuracy of the system and reduces false alarms.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The technical scheme of the invention is explained in detail in the following with the accompanying drawings.
The invention designs an adaptive detection method for fire-fighting-channel occupation based on surveillance video; as shown in FIG. 1, the method comprises the following steps:
Step 1: acquire the video stream of a camera in the scene to be detected, capture a video frame image, and draw one or more polygons on it as the key detection area; once set, the key detection area remains effective for all video frames subsequently captured from this video stream.
Step 2: take the pixel values at a fixed position in all frames of a video segment and average them to obtain a reference value; the frame whose pixel value at that position is closest to the reference value is taken as the key frame of the video segment.
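A minimal sketch of the key-frame selection in step 2, assuming grayscale frames stored as nested lists and a single sampled position; the function name and data layout are illustrative, not taken from the patent:

```python
def select_key_frame(frames, row, col):
    """frames: list of 2-D grayscale frames; returns the index of the key frame."""
    values = [f[row][col] for f in frames]
    reference = sum(values) / len(values)  # mean pixel value at (row, col)
    # key frame = frame whose sampled pixel lies nearest the reference value
    return min(range(len(frames)), key=lambda i: abs(values[i] - reference))

segment = [[[10]], [[20]], [[33]]]          # three 1x1 "frames" for illustration
print(select_key_frame(segment, 0, 0))      # mean is 21, nearest value is 20 -> index 1
```

In practice one would sample several positions and frames from the decoded stream; this shows only the nearest-to-mean selection rule itself.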
Step 3: build a mixed-Gaussian background model, and judge from the model whether each pixel value of the key frame belongs to the background or the foreground.
Step 4: compare the video frame image captured in step 1 with the background image obtained in step 3 and calculate the area proportion of the changed region between the two frames; if the proportion exceeds a set threshold, the region is judged to be an abnormal target; compare the circumscribed rectangular frame of the abnormal target with the set polygonal key detection area and judge whether they intersect; if they do not intersect, discard the abnormal target, otherwise keep the abnormal target and its circumscribed rectangular frame.
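The intersection test of step 4 can be approximated as below. This sketch assumes a single polygonal key detection area and uses a corner-in-polygon ray-casting test as a simplified stand-in for a full rectangle/polygon intersection; all names and coordinates are illustrative:

```python
def point_in_polygon(x, y, poly):
    """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                # edge crosses the horizontal ray
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def rect_touches_polygon(rect, poly):
    """rect = (x, y, w, h); keep the anomaly only if some corner lies in the ROI."""
    x, y, w, h = rect
    corners = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    return any(point_in_polygon(cx, cy, poly) for cx, cy in corners)

roi = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(rect_touches_polygon((8, 8, 4, 4), roi))    # corner (8, 8) inside -> True
print(rect_touches_polygon((20, 20, 2, 2), roi))  # no overlap -> False
```

A production system would use an exact polygon-clipping routine, since a rectangle can overlap a polygon without any corner lying inside it; the corner test is only the simplest illustration of the keep/discard decision.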
Step 5: use a target detection algorithm to detect whether either of two classes of objects, persons and fire-fighting vehicles, appears; if so, remove these objects and do not treat them as abnormal targets.
Step 6: according to steps 4 and 5, comprehensively judge whether any object occupies the fire-fighting channel.
Step 7: if step 6 judges that an object occupies the fire-fighting channel, store the position information of the object.
Step 8: judge whether the occupation occurs at the same position; if consecutive frames all show occupation at the same position, trigger the alarm.
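The continuous multi-frame judgment of step 8 might look roughly like this; the window length and the position tolerance are assumed values, not specified by the patent:

```python
def should_alarm(history, min_frames=3, tol=5):
    """history: one (x, y) detection per consecutive key frame (None = no detection).
    Fires only when the last min_frames detections sit within tol pixels of each other."""
    if len(history) < min_frames:
        return False
    recent = history[-min_frames:]
    if any(p is None for p in recent):          # a dropped detection resets the run
        return False
    x0, y0 = recent[0]
    # "same position" = every recent detection within tol pixels of the first
    return all(abs(x - x0) <= tol and abs(y - y0) <= tol for x, y in recent)

print(should_alarm([(100, 50), (101, 50), (99, 51)]))  # stable position -> True
print(should_alarm([(100, 50), None, (99, 51)]))       # detection dropped -> False
```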
In this embodiment, the step 3 is implemented by the following preferred scheme:
in the Gaussian mixture background model, the color information among the pixels is considered to be irrelevant, and the processing of each pixel point is independent. For each pixel point in the video image, the change of the value of each pixel point in the sequence image can be regarded as a random process which continuously generates the pixel value, namely, the color presentation rule of each pixel point is described by Gaussian distribution, and the Gaussian distribution model is divided into a monomodal (unimodal) Gaussian distribution model and a multimodal (multimodal) Gaussian distribution model.
For a multi-peak Gaussian distribution model, each pixel point of an image is modeled according to superposition of a plurality of Gaussian distributions with different weights, each Gaussian distribution corresponds to a state which can possibly generate the color presented by the pixel point, and the weight and distribution parameters of each Gaussian distribution are updated along with time. When processing color images, it is assumed that the image pixels R, G, B have three color channels that are independent of each other and have the same variance.
For the observation data set {x_1, x_2, ..., x_N} of a random variable X, where x_t is the sample of a pixel at time t, t = 1, 2, ..., N, and N is the number of sampling points, a single sample x_t obeys the mixed-Gaussian probability density function p(x_t):

p(x_t) = Σ_{i=1}^{k} ω_{i,t} · η(x_t, μ_{i,t}, τ_{i,t}),  η(x_t, μ_{i,t}, τ_{i,t}) = (2π)^(-n/2) |τ_{i,t}|^(-1/2) exp(-(1/2)(x_t - μ_{i,t})^T τ_{i,t}^(-1) (x_t - μ_{i,t})),  τ_{i,t} = δ_{i,t}² I

where k is the total number of distribution models, η(x_t, μ_{i,t}, τ_{i,t}) is the i-th Gaussian distribution at time t, μ_{i,t} is its mean, τ_{i,t} is its covariance matrix, δ_{i,t} is its variance, I is the three-dimensional identity matrix, and ω_{i,t} is the weight of the i-th Gaussian distribution at time t.
The method for judging whether the pixel value belongs to the background or the foreground is as follows:
3-1. Each new pixel value X_t is compared with the current k distribution models according to the following formula until a matching model is found:

|X_t - μ_{i,t-1}| ≤ 2.5·σ_{i,t-1}

where μ_{i,t-1} is the mean and σ_{i,t-1} the standard deviation of the i-th model at time t-1;
3-2, if the matched model meets the background requirement, the pixel belongs to the background, otherwise, the pixel belongs to the foreground;
3-3. The weight of each model is updated according to the following formula and then normalized:

ω_{i,t} = (1 - α)·ω_{i,t-1} + α·M_{i,t}

where α is the learning rate; M_{i,t} = 1 for the matched model and M_{i,t} = 0 otherwise;
3-4. The mean and standard deviation of the unmatched models remain unchanged, while the parameters of the matched model are updated according to the following formulas:

μ_{i,t} = (1 - ρ)·μ_{i,t-1} + ρ·X_t
σ_{i,t}² = (1 - ρ)·σ_{i,t-1}² + ρ·(X_t - μ_{i,t})^T (X_t - μ_{i,t})
ρ = α·η(X_t | μ_t, σ_t)

where η(X_t | μ_t, σ_t) is the probability that the pixel value X_t satisfies the matched i-th Gaussian model at time t, μ_t and σ_t are the mean and standard deviation of that model, and the superscript T denotes the transpose;
3-5. If no model is matched in step 3-1, the model with the minimum weight is replaced: its mean is set to the current pixel value, its standard deviation to the maximum standard deviation among the other Gaussian components, and its weight to the minimum weight among the other Gaussian components.

3-6. The models are sorted in descending order by the ratio of weight to standard deviation, ω_{i,t}/σ_{i,t}.

3-7. The first B models are selected as the background, where B satisfies:

B = argmin_b { Σ_{i=1}^{b} ω_{i,t} > T_0 }

where T_0 is a preset threshold representing the proportion of background components in the whole Gaussian mixture, 0 ≤ T_0 ≤ 1; each pixel X_t is then re-checked against these B models: if it matches one of them, the pixel is background, otherwise it is foreground.
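Steps 3-1 through 3-5 can be sketched for a single grayscale pixel as follows; the initial model parameters, the learning rate, and the scalar computation of ρ are illustrative assumptions, not the patent's exact configuration:

```python
import math

def update_pixel(models, x, alpha=0.05):
    """models: list of dicts with keys 'w', 'mu', 'sigma'.
    Returns True if x matched an existing Gaussian (a background candidate)."""
    matched = None
    for m in models:                               # step 3-1: |x - mu| <= 2.5 sigma
        if abs(x - m['mu']) <= 2.5 * m['sigma']:
            matched = m
            break
    for m in models:                               # step 3-3: weight update
        hit = 1.0 if m is matched else 0.0
        m['w'] = (1 - alpha) * m['w'] + alpha * hit
    total = sum(m['w'] for m in models)
    for m in models:                               # normalize the weights
        m['w'] /= total
    if matched is not None:                        # step 3-4: parameter update
        rho = alpha * math.exp(-0.5 * ((x - matched['mu']) / matched['sigma']) ** 2) / (
            matched['sigma'] * math.sqrt(2 * math.pi))
        matched['mu'] = (1 - rho) * matched['mu'] + rho * x
        matched['sigma'] = math.sqrt((1 - rho) * matched['sigma'] ** 2
                                     + rho * (x - matched['mu']) ** 2)
    else:                                          # step 3-5: replace the weakest model
        weakest = min(models, key=lambda m: m['w'])
        weakest['mu'], weakest['sigma'] = float(x), 30.0
    return matched is not None

models = [{'w': 0.7, 'mu': 100.0, 'sigma': 5.0},
          {'w': 0.3, 'mu': 200.0, 'sigma': 5.0}]
print(update_pixel(models, 102))   # within 2.5 sigma of the first Gaussian -> True
print(update_pixel(models, 150))   # matches neither Gaussian -> False
```

The background/foreground decision of steps 3-6 and 3-7 would then sort these models by ω/σ and test the pixel against the first B of them; per-frame processing applies this update at every pixel position.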
The above embodiment only illustrates the technical idea of the present invention and does not limit it; any modification made on the basis of the technical scheme according to the technical idea of the present invention falls within the protection scope of the present invention.
Claims (1)
1. A fire fighting channel occupation self-adaptive detection method based on a monitoring video is characterized by comprising the following steps:
(1) acquiring the video stream of a camera in the scene to be detected, capturing a video frame image, and drawing one or more polygons on it as the key detection area; once set, the key detection area remains effective for all video frames subsequently captured from this video stream;
(2) taking the pixel values at a fixed position in all frames of a video segment and averaging them to obtain a reference value; the frame whose pixel value at that position is closest to the reference value is taken as the key frame of the video segment;
(3) modeling a mixed Gaussian background, and judging whether each pixel value of a key frame belongs to the background or the foreground according to the established model;
for the observation data set {x_1, x_2, ..., x_N} of a random variable X, where x_t is the sample of a pixel at time t, t = 1, 2, ..., N, and N is the number of sampling points, a single sample x_t obeys the mixed-Gaussian probability density function p(x_t):

p(x_t) = Σ_{i=1}^{k} ω_{i,t} · η(x_t, μ_{i,t}, τ_{i,t}),  η(x_t, μ_{i,t}, τ_{i,t}) = (2π)^(-n/2) |τ_{i,t}|^(-1/2) exp(-(1/2)(x_t - μ_{i,t})^T τ_{i,t}^(-1) (x_t - μ_{i,t})),  τ_{i,t} = δ_{i,t}² I

where k is the total number of distribution models, η(x_t, μ_{i,t}, τ_{i,t}) is the i-th Gaussian distribution at time t, μ_{i,t} is its mean, τ_{i,t} is its covariance matrix, δ_{i,t} is its variance, I is the three-dimensional identity matrix, and ω_{i,t} is the weight of the i-th Gaussian distribution at time t;
the method for judging whether the pixel value belongs to the background or the foreground is as follows:
(3-1) each new pixel value X_t is compared with the current k distribution models according to the following formula until a matching model is found:

|X_t - μ_{i,t-1}| ≤ 2.5·σ_{i,t-1}

where μ_{i,t-1} is the mean and σ_{i,t-1} the standard deviation of the i-th model at time t-1;
(3-2) if the matched model meets the background requirement, the pixel belongs to the background, otherwise, the pixel belongs to the foreground;
(3-3) the weight of each model is updated according to the following formula and then normalized:

ω_{i,t} = (1 - α)·ω_{i,t-1} + α·M_{i,t}

where α is the learning rate; M_{i,t} = 1 for the matched model and M_{i,t} = 0 otherwise;
(3-4) the mean and standard deviation of the unmatched models remain unchanged, while the parameters of the matched model are updated according to the following formulas:

μ_{i,t} = (1 - ρ)·μ_{i,t-1} + ρ·X_t
σ_{i,t}² = (1 - ρ)·σ_{i,t-1}² + ρ·(X_t - μ_{i,t})^T (X_t - μ_{i,t})
ρ = α·η(X_t | μ_t, σ_t)

where η(X_t | μ_t, σ_t) is the probability that the pixel value X_t satisfies the matched i-th Gaussian model at time t, μ_t and σ_t are the mean and standard deviation of that model, and the superscript T denotes the transpose;
(3-5) if no model is matched in step (3-1), the model with the minimum weight is replaced: its mean is set to the current pixel value, its standard deviation to the maximum standard deviation among the other Gaussian components, and its weight to the minimum weight among the other Gaussian components;

(3-6) the models are sorted in descending order by the ratio of weight to standard deviation, ω_{i,t}/σ_{i,t};

(3-7) the first B models are selected as the background, where B satisfies:

B = argmin_b { Σ_{i=1}^{b} ω_{i,t} > T_0 }

where T_0 is a preset threshold representing the proportion of background components in the whole Gaussian mixture, 0 ≤ T_0 ≤ 1; each pixel X_t is then re-checked against these B models: if it matches one of them, the pixel is background, otherwise it is foreground;
(4) comparing the video frame image captured in step (1) with the background image obtained in step (3) and calculating the area proportion of the changed region between the two frames; if the proportion exceeds a set threshold, the region is judged to be an abnormal target; the circumscribed rectangular frame of the abnormal target is then compared with the set polygonal key detection area to judge whether they intersect; if they do not intersect, the abnormal target is discarded, otherwise the abnormal target and its circumscribed rectangular frame are kept;
(5) using a target detection algorithm to detect whether either of two classes of objects, persons and fire-fighting vehicles, appears; if so, removing these objects and not treating them as abnormal targets;
(6) comprehensively judging, according to steps (4) and (5), whether any object occupies the fire-fighting passage;
(7) if step (6) judges that an object occupies the fire-fighting access, storing the position information of the object;
(8) judging whether the occupation occurs at the same position; if consecutive frames all show occupation at the same position, triggering the alarm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011013470.3A CN112132043B (en) | 2020-09-24 | 2020-09-24 | Fire fighting channel occupation self-adaptive detection method based on monitoring video |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011013470.3A CN112132043B (en) | 2020-09-24 | 2020-09-24 | Fire fighting channel occupation self-adaptive detection method based on monitoring video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112132043A CN112132043A (en) | 2020-12-25 |
CN112132043B true CN112132043B (en) | 2021-06-29 |
Family
ID=73841085
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011013470.3A Active CN112132043B (en) | 2020-09-24 | 2020-09-24 | Fire fighting channel occupation self-adaptive detection method based on monitoring video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112132043B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112966545A (en) * | 2020-12-31 | 2021-06-15 | 杭州拓深科技有限公司 | Average hash-based fire fighting channel occupancy monitoring method and device, electronic device and storage medium |
CN113179389A (en) * | 2021-04-15 | 2021-07-27 | 江苏濠汉信息技术有限公司 | System and method for identifying crane jib of power transmission line dangerous vehicle |
CN113421431B (en) * | 2021-06-17 | 2022-12-02 | 京东方科技集团股份有限公司 | Emergency channel monitoring method and device |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9609287B2 (en) * | 2005-03-02 | 2017-03-28 | En-Gauge, Inc. | Remote monitoring |
CN104240222A (en) * | 2013-06-19 | 2014-12-24 | 贺江涛 | Intelligent detecting method and device for firefighting access blockage |
CN103366374B (en) * | 2013-07-12 | 2016-04-20 | 重庆大学 | Based on the passageway for fire apparatus obstacle detection method of images match |
WO2018132461A1 (en) * | 2017-01-10 | 2018-07-19 | Babak Rezvani | Emergency drone guidance device |
CN108376406A (en) * | 2018-01-09 | 2018-08-07 | 公安部上海消防研究所 | A kind of Dynamic Recurrent modeling and fusion tracking method for channel blockage differentiation |
JP7023803B2 (en) * | 2018-06-21 | 2022-02-22 | 関西電力株式会社 | Monitoring system |
CN109409238B (en) * | 2018-09-28 | 2020-05-19 | 深圳市中电数通智慧安全科技股份有限公司 | Obstacle detection method and device and terminal equipment |
CN110189355A (en) * | 2019-05-05 | 2019-08-30 | 暨南大学 | Safe escape channel occupies detection method, device, electronic equipment and storage medium |
CN110232359B (en) * | 2019-06-17 | 2021-10-01 | 中国移动通信集团江苏有限公司 | Retentate detection method, device, equipment and computer storage medium |
CN110766915A (en) * | 2019-09-19 | 2020-02-07 | 重庆特斯联智慧科技股份有限公司 | Alarm method and system for identifying fire fighting access state |
CN111209866A (en) * | 2020-01-08 | 2020-05-29 | 无锡图灵视频科技有限公司 | Intelligent detection algorithm for blockage of fireproof channel |
- 2020-09-24: CN application CN202011013470.3A, patent CN112132043B (Active)
Also Published As
Publication number | Publication date |
---|---|
CN112132043A (en) | 2020-12-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112132043B (en) | Fire fighting channel occupation self-adaptive detection method based on monitoring video | |
CN110135269B (en) | Fire image detection method based on mixed color model and neural network | |
CN110765964B (en) | Method for detecting abnormal behaviors in elevator car based on computer vision | |
CN107085714B (en) | Forest fire detection method based on video | |
CN108647649B (en) | Method for detecting abnormal behaviors in video | |
CN104463253B (en) | Passageway for fire apparatus safety detection method based on adaptive background study | |
CN105787472B (en) | A kind of anomaly detection method based on the study of space-time laplacian eigenmaps | |
CN109919053A (en) | A kind of deep learning vehicle parking detection method based on monitor video | |
CN112069975A (en) | Comprehensive flame detection method based on ultraviolet, infrared and vision | |
CN105678803A (en) | Video monitoring target detection method based on W4 algorithm and frame difference | |
CN103473788A (en) | Indoor fire and flame detection method based on high-definition video images | |
CN111553214B (en) | Method and system for detecting smoking behavior of driver | |
CN113537099A (en) | Dynamic detection method for fire smoke in highway tunnel | |
CN107909044A (en) | A kind of demographic method of combination convolutional neural networks and trajectory predictions | |
CN110969642B (en) | Video filtering method and device, electronic equipment and storage medium | |
Cheng et al. | A multiscale parametric background model for stationary foreground object detection | |
CN110349178B (en) | System and method for detecting and identifying abnormal behaviors of human body | |
CN112464765A (en) | Safety helmet detection algorithm based on single-pixel characteristic amplification and application thereof | |
CN108960181B (en) | Black smoke vehicle detection method based on multi-scale block LBP and hidden Markov model | |
CN115331152A (en) | Fire fighting identification method and system | |
CN108241837B (en) | Method and device for detecting remnants | |
CN111462169B (en) | Mouse trajectory tracking method based on background modeling | |
CN108010063A (en) | A kind of moving target based on video enters or leaves the detection method in region | |
CN107016349B (en) | Crowd flow analysis method based on depth camera | |
Hsieh et al. | Abnormal event detection using trajectory features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder | ||
Address after: 11-14 / F, tower a, Tengfei building, 88 Jiangmiao Road, yanchuangyuan, Jiangbei new district, Nanjing, Jiangsu Province 210000 Patentee after: Anyuan Technology Co.,Ltd. Address before: 11-14 / F, tower a, Tengfei building, 88 Jiangmiao Road, yanchuangyuan, Jiangbei new district, Nanjing, Jiangsu Province 210000 Patentee before: NANJING ANYUAN TECHNOLOGY Co.,Ltd. |