CN111401276B - Safety helmet wearing identification method and system - Google Patents

Safety helmet wearing identification method and system

Info

Publication number
CN111401276B
Authority
CN
China
Prior art keywords
detection frame
safety helmet
wear
confidence
characteristic points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010201329.XA
Other languages
Chinese (zh)
Other versions
CN111401276A (en)
Inventor
安民洙
葛晓东
梁立宏
林玉娟
姜贺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Light Speed Intelligent Equipment Co ltd
Tenghui Technology Building Intelligence Shenzhen Co ltd
Original Assignee
Tenghui Technology Building Intelligence Shenzhen Co ltd
Guangdong Light Speed Intelligent Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tenghui Technology Building Intelligence Shenzhen Co ltd and Guangdong Light Speed Intelligent Equipment Co ltd
Priority to CN202010201329.XA
Publication of CN111401276A
Application granted
Publication of CN111401276B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/08 Construction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/26 Government or public services
    • G06Q50/265 Personal security, identity or safety
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Primary Health Care (AREA)
  • Marketing (AREA)
  • Human Resources & Organizations (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Computer Security & Cryptography (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Helmets And Other Head Coverings (AREA)

Abstract

The invention discloses a safety helmet wearing identification method and system. The method divides the image into scales and detects feature points at each scale to determine a final detection frame, so helmet wearing identification can be performed directly on the existing construction-site video stream, without installing additional monitoring equipment, which saves cost. The algorithm ensures stable, robust and highly accurate results, improving both identification accuracy and efficiency.

Description

Safety helmet wearing identification method and system
Technical Field
The application relates to the technical field of image recognition processing, in particular to a method and a system for recognizing wearing of a safety helmet.
Background
Construction-site safety is a top priority for supervision departments and construction units. A video monitoring system is therefore especially important: it helps ensure the personal safety of construction workers and the property safety of construction materials and equipment on the site, and it allows the monitoring center to follow on-site construction activity in real time.
However, the video transmitted back to the monitoring center is generally reviewed by human observers, which is time-consuming and labor-intensive and, because of fatigue, prone to misjudgment.
Disclosure of Invention
The invention provides a safety helmet wearing identification method and system to solve the prior-art problem that helmet wearing in surveillance video must be identified manually, which is time-consuming, labor-intensive, and prone to judgment errors.
The specific technical scheme is as follows:
A safety helmet wearing identification method, the method comprising:
Performing scale division on the acquired image to obtain sub-images corresponding to N scales, wherein N is a positive integer greater than or equal to 2;
Determining 3 feature points at each pixel point on each scale of the sub-images, wherein each feature point at least comprises a confidence value and is subject to non-maximum suppression;
Filtering all the feature points according to the confidence value and non-maximum suppression, and obtaining a final target detection frame according to the screened feature points;
and determining whether the safety helmet is worn according to the color in the target detection frame.
Optionally, filtering all feature points according to the confidence value and non-maximum suppression includes:
ignoring feature points with confidence values smaller than a threshold value;
Sorting the remaining feature points according to non-maximum suppression, and obtaining the detection frame with the highest score;
Reducing the confidence value of detection frames whose overlapping area is larger than a specified proportion;
And taking the detection frames whose scores after sorting are larger than a threshold value as the final target detection frames.
Optionally, each of the feature points includes a center point coordinate of the detection frame, a width and a height of the detection frame, a confidence of the detection frame, and a probability of each category of the detection frame.
Optionally, after determining whether the safety helmet is worn according to the color in the target detection frame, the method further includes:
when a person wearing a safety helmet is detected, marking the detection frame green and displaying the output;
when a person not wearing a safety helmet is detected, marking the detection frame red and displaying the output.
A safety helmet wearing identification system, the system comprising:
A dividing module, configured to divide the acquired image into scales to obtain sub-images corresponding to N scales, wherein N is a positive integer greater than or equal to 2;
A determining module, configured to determine 3 feature points at each pixel point on each scale of the sub-images, wherein each feature point at least comprises a confidence value and is subject to non-maximum suppression;
A processing module, configured to filter all the feature points according to the confidence value and non-maximum suppression, obtain a final target detection frame according to the screened feature points, and determine whether the safety helmet is worn according to the color in the target detection frame.
Optionally, the processing module is specifically configured to ignore feature points with confidence values smaller than a threshold value; sort the remaining feature points according to non-maximum suppression and obtain the detection frame with the highest score; reduce the confidence value of detection frames whose overlapping area is larger than a specified proportion; and take the detection frames whose scores after sorting are larger than a threshold value as the final target detection frames.
Optionally, the determining module is specifically configured such that each of the feature points includes the center point coordinates of the detection frame, the width and height of the detection frame, the confidence of the detection frame, and the probability of each category of the detection frame.
Optionally, the processing module is further configured to mark the detection frame green and display the output when a person wearing a safety helmet is detected, and to mark the detection frame red and display the output when a person not wearing a safety helmet is detected.
The method provided by the embodiment of the invention divides the image into scales and performs feature point detection at each scale to determine the final detection frame. Helmet wearing identification can therefore be performed directly on the existing construction-site video stream, without installing additional monitoring equipment, which saves cost. The algorithm ensures stable, robust and highly accurate results, improving both identification accuracy and efficiency.
Drawings
FIG. 1 is a flowchart of a method for identifying the wearing of a helmet according to an embodiment of the present invention;
FIG. 2 is a schematic view of a worker wearing a helmet according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a safety helmet wearing identification system according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention are described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the embodiments of the present invention and the specific technical features therein are merely illustrative of the technical solutions and not limiting, and that the embodiments and their specific technical features may be combined with each other where no conflict arises.
Fig. 1 is a flowchart of a method for identifying wearing of a helmet according to an embodiment of the present invention, where the method includes:
S1, carrying out scale division on the acquired image to obtain sub-images corresponding to N scales;
First of all, the method provided by the invention is applied to a system comprising four parts: a camera that collects on-site video, a server that processes and stores the video, a client that displays the identification results, and an alarm device. The video stream acquired by the camera is transmitted to the server.
Firstly, the server obtains sub-images (feature maps) at three scales through a basic convolutional neural network; the specific scales can be, for example, 13 × 13 × 21, 26 × 26 × 21 and 52 × 52 × 21. Of course, only three scales are given as an example in this embodiment of the present invention, and the number may be adjusted according to the actual situation of a specific application scenario.
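For illustration only, the three feature maps can be viewed as tensors of shape 13 × 13 × 21, 26 × 26 × 21 and 52 × 52 × 21, where the 21 channels of each grid cell hold 3 proposals of 7 values each. The following minimal sketch (the variable names and the use of NumPy are assumptions, not part of the patent) shows these shapes and the resulting proposal count:

```python
import numpy as np

# Hypothetical outputs of the basic convolutional neural network for one image.
# Each grid cell carries 3 proposals x 7 values (x, y, w, h, confidence, 2 class probs) = 21 channels.
feature_maps = {
    "coarse": np.zeros((13, 13, 21), dtype=np.float32),
    "medium": np.zeros((26, 26, 21), dtype=np.float32),
    "fine":   np.zeros((52, 52, 21), dtype=np.float32),
}

proposals_per_cell = 3
values_per_proposal = 4 + 1 + 2          # box (x, y, w, h) + confidence + 2 class probabilities
assert proposals_per_cell * values_per_proposal == 21

total_proposals = sum(fm.shape[0] * fm.shape[1] * proposals_per_cell
                      for fm in feature_maps.values())
print(total_proposals)                   # 13*13*3 + 26*26*3 + 52*52*3 = 10647
```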
S2, determining 3 feature points at each pixel point on each scale of the sub-images;
In this embodiment of the present invention, the targets are classified into two categories, namely wearing a helmet and not wearing a helmet, so 3 feature points, i.e., 3 proposals, are determined at each pixel point of each scale; the number of proposals over the three scales is therefore 13×13×3 + 26×26×3 + 52×52×3 = 10647 proposals. Each proposal contains 7 pieces of information: center_x, center_y, w, h, confidence, and the probabilities of the 2 classes, i.e., the center point coordinates of the box, its width and height, the target confidence, and the probability of each category.
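A hedged sketch of how the 7 fields of each proposal could be unpacked from a single feature map; the reshape layout and field order follow the list above, while the anchor decoding and activation details of the actual network are not specified in this description and are therefore omitted:

```python
import numpy as np

def unpack_proposals(feature_map: np.ndarray) -> np.ndarray:
    """Reshape an (H, W, 21) feature map into an (H*W*3, 7) array of proposals.

    Each row holds [center_x, center_y, w, h, confidence, p_helmet, p_no_helmet].
    """
    h, w, _ = feature_map.shape
    return feature_map.reshape(h, w, 3, 7).reshape(-1, 7)

coarse = np.random.rand(13, 13, 21).astype(np.float32)   # stand-in for a real network output
print(unpack_proposals(coarse).shape)                     # (507, 7); all three scales together give 10647 proposals
```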
S3, filtering all the feature points according to the confidence value and non-maximum suppression, and obtaining a final target detection frame according to the screened feature points;
After all proposals are obtained, they are filtered. The filtering has two parts: confidence filtering and Soft-NMS (soft non-maximum suppression) filtering. Confidence filtering means that proposals with confidence below a threshold are ignored. Soft-NMS sorts the detection frames by score, keeps the frame with the highest score, reduces the confidence of the other frames whose overlapping area is larger than a certain proportion, and then applies a threshold so that only the detection frames with scores above it are kept. Compared with standard NMS, Soft-NMS reduces missed detections. The final target detection frame is thus obtained.
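The two-stage filtering can be sketched as follows; this is a generic confidence filter followed by linear Soft-NMS, written only to illustrate the steps described above, and the thresholds, the IoU computation and the linear decay are common choices assumed here rather than values taken from the patent:

```python
import numpy as np

def iou(box: np.ndarray, boxes: np.ndarray) -> np.ndarray:
    """Intersection over union between one box and an array of boxes, all given as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter + 1e-9)

def filter_proposals(boxes: np.ndarray, scores: np.ndarray,
                     conf_thresh: float = 0.3, iou_thresh: float = 0.5,
                     final_thresh: float = 0.3) -> np.ndarray:
    """Confidence filtering followed by linear Soft-NMS; returns indices of the kept boxes."""
    idx = np.where(scores >= conf_thresh)[0]          # step 1: ignore low-confidence proposals
    boxes, scores = boxes[idx].copy(), scores[idx].copy()

    kept = []
    order = list(np.argsort(-scores))                 # step 2: sort remaining boxes by score
    while order:
        best = order.pop(0)                           # keep the highest-scoring box
        kept.append(best)
        if not order:
            break
        rest = np.array(order)
        overlaps = iou(boxes[best], boxes[rest])
        decay = np.where(overlaps > iou_thresh, 1.0 - overlaps, 1.0)
        scores[rest] *= decay                         # step 3: decay heavily overlapping boxes instead of dropping them
        order = [i for i in rest[np.argsort(-scores[rest])]
                 if scores[i] >= final_thresh]        # step 4: only boxes above the final threshold stay in play
    return idx[kept]
```

In a real pipeline, boxes (converted from center/width/height to corner coordinates) and scores would come from the proposals described above; decaying rather than discarding overlapping boxes is what lets workers standing close together keep their own detections, which matches the reduced missed detections mentioned for Soft-NMS.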
S4, determining whether the safety helmet is worn according to the color in the target detection frame.
When a person wearing a safety helmet is detected, the detection frame is marked green and displayed in the output; when a person not wearing a safety helmet is detected, the detection frame is marked red and displayed in the output. As shown in Fig. 2, the detection frame is green for the person wearing a helmet and red for the worker without one.
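For the display step, a short OpenCV sketch can illustrate the colour convention; the detection tuple format and the helper name draw_results are assumptions made here for illustration, not part of the patented system:

```python
import cv2
import numpy as np

GREEN, RED = (0, 255, 0), (0, 0, 255)      # OpenCV uses BGR colour order

def draw_results(frame: np.ndarray, detections) -> np.ndarray:
    """Draw each detection: green box if the helmet is worn, red box if it is not.

    `detections` is assumed to be an iterable of (x1, y1, x2, y2, wearing_helmet) tuples.
    """
    for x1, y1, x2, y2, wearing_helmet in detections:
        color = GREEN if wearing_helmet else RED
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), color, 2)
    return frame
```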
The method provided by the embodiment of the invention divides the image into scales and performs feature point detection at each scale to determine the final detection frame. Helmet wearing identification can therefore be performed directly on the existing construction-site video stream, without installing additional monitoring equipment, which saves cost. The algorithm ensures stable, robust and highly accurate results, improving both identification accuracy and efficiency.
Corresponding to the method provided by the invention, an embodiment of the invention also provides a safety helmet wearing identification system. Fig. 3 is a schematic structural diagram of this system, which comprises:
The dividing module 301 is configured to divide the acquired image into scales to obtain sub-images corresponding to N scales, where N is a positive integer greater than or equal to 2;
the determining module 302 is configured to determine 3 feature points at each pixel point on each scale of the sub-images, where each feature point includes at least a confidence value and is subject to non-maximum suppression;
the processing module 303 is configured to filter all the feature points according to the confidence value and non-maximum suppression, obtain a final target detection frame according to the screened feature points, and determine whether the safety helmet is worn according to the color in the target detection frame.
Further, in the embodiment of the present invention, the processing module 303 is specifically configured to ignore feature points with confidence values smaller than a threshold value; sort the remaining feature points according to non-maximum suppression and obtain the detection frame with the highest score; reduce the confidence value of detection frames whose overlapping area is larger than a specified proportion; and take the detection frames whose scores after sorting are larger than a threshold value as the final target detection frames.
Further, in the embodiment of the present invention, the determining module 302 is specifically configured such that each of the feature points includes the center point coordinates of the detection frame, the width and height of the detection frame, the confidence of the detection frame, and the probability of each category of the detection frame.
Further, in the embodiment of the present invention, the processing module 303 is further configured to mark the detection frame green and display the output when a person wearing a safety helmet is detected, and to mark the detection frame red and display the output when a person not wearing a safety helmet is detected.
While preferred embodiments of the present application have been described, additional variations and modifications to those embodiments may occur to those skilled in the art once the basic inventive concepts are known. It is therefore intended that the appended claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (2)

1. A safety helmet wearing identification method, the method comprising:
performing scale division on the acquired image through a basic convolutional neural network to obtain sub-images corresponding to 3 scales;
determining 3 feature points at each pixel point on each scale of the sub-images, wherein each feature point at least comprises a confidence value and is subject to non-maximum suppression;
filtering all the feature points according to the confidence value and non-maximum suppression, and obtaining a final target detection frame according to the screened feature points;
determining whether the safety helmet is worn according to the color in the target detection frame;
wherein the 3 scales are respectively: 13 × 13 × 21, 26 × 26 × 21, 52 × 52 × 21;
each of the feature points comprises the center point coordinates of the detection frame, the width and height of the detection frame, the confidence of the detection frame, and the probability of each category of the detection frame;
filtering all the feature points according to the confidence value and non-maximum suppression comprises:
ignoring feature points with confidence values smaller than a threshold value;
sorting the remaining feature points according to non-maximum suppression, and obtaining the detection frame with the highest score;
reducing the confidence value of detection frames whose overlapping area is larger than a specified proportion;
taking the detection frames whose scores in the final sorting result are larger than a threshold value as the final target detection frames;
after determining whether the safety helmet is worn according to the color in the target detection frame, the method further comprises:
when a person wearing a safety helmet is detected, marking the detection frame green and displaying the output;
when a person not wearing a safety helmet is detected, marking the detection frame red and displaying the output.
2. A safety helmet wearing identification system, the system comprising:
a dividing module, configured to divide the acquired image into scales through the basic convolutional neural network to obtain sub-images corresponding to 3 scales;
a determining module, configured to determine 3 feature points at each pixel point on each scale of the sub-images, wherein each feature point at least comprises a confidence value and is subject to non-maximum suppression;
a processing module, configured to filter all the feature points according to the confidence value and non-maximum suppression, obtain a final target detection frame according to the screened feature points, and determine whether the safety helmet is worn according to the color in the target detection frame;
wherein the 3 scales are respectively: 13 × 13 × 21, 26 × 26 × 21, 52 × 52 × 21;
the processing module is specifically configured to ignore feature points with confidence values smaller than a threshold value; sort the remaining feature points according to non-maximum suppression and obtain the detection frame with the highest score; reduce the confidence value of detection frames whose overlapping area is larger than a specified proportion; and take the detection frames whose scores in the final sorting result are larger than a threshold value as the final target detection frames;
the determining module is specifically configured such that each of the feature points comprises the center point coordinates of the detection frame, the width and height of the detection frame, the confidence of the detection frame, and the probability of each category of the detection frame;
the processing module is further configured to mark the detection frame green and display the output when a person wearing a safety helmet is detected, and to mark the detection frame red and display the output when a person not wearing a safety helmet is detected.
CN202010201329.XA 2020-03-20 2020-03-20 Safety helmet wearing identification method and system Active CN111401276B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010201329.XA CN111401276B (en) 2020-03-20 2020-03-20 Safety helmet wearing identification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010201329.XA CN111401276B (en) 2020-03-20 2020-03-20 Safety helmet wearing identification method and system

Publications (2)

Publication Number Publication Date
CN111401276A CN111401276A (en) 2020-07-10
CN111401276B true CN111401276B (en) 2024-05-17

Family

ID=71428982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010201329.XA Active CN111401276B (en) 2020-03-20 2020-03-20 Safety helmet wearing identification method and system

Country Status (1)

Country Link
CN (1) CN111401276B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112863187B (en) * 2021-01-18 2022-04-15 阿波罗智联(北京)科技有限公司 Detection method of perception model, electronic equipment, road side equipment and cloud control platform

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875588A (en) * 2018-05-25 2018-11-23 武汉大学 Across camera pedestrian detection tracking based on deep learning
CN108960340A (en) * 2018-07-23 2018-12-07 电子科技大学 Convolutional neural networks compression method and method for detecting human face
CN109145854A (en) * 2018-08-31 2019-01-04 东南大学 A kind of method for detecting human face based on concatenated convolutional neural network structure
CN110188724A (en) * 2019-06-05 2019-08-30 中冶赛迪重庆信息技术有限公司 The method and system of safety cap positioning and color identification based on deep learning
CN110348329A (en) * 2019-06-24 2019-10-18 电子科技大学 Pedestrian detection method based on video sequence interframe information
CN110399905A (en) * 2019-07-03 2019-11-01 常州大学 The detection and description method of safety cap wear condition in scene of constructing
CN110852283A (en) * 2019-11-14 2020-02-28 南京工程学院 Helmet wearing detection and tracking method based on improved YOLOv3

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Zhichao. China Master's Theses Full-text Database (Engineering Science and Technology I). 2020, pp. 15-33. *

Also Published As

Publication number Publication date
CN111401276A (en) 2020-07-10

Similar Documents

Publication Publication Date Title
CN110222672B (en) Method, device and equipment for detecting wearing of safety helmet in construction site and storage medium
CN111523432B (en) Intelligent construction site safety helmet detection system and method thereof
CN103886612B (en) Automatic water level extraction method and system based on reservoir monitoring camera
CN107437318B (en) Visible light intelligent recognition algorithm
CN113793234B (en) Wisdom garden platform based on digit twin technique
CN110781853B (en) Crowd abnormality detection method and related device
US20150139494A1 (en) Slow change detection system
CN108875531B (en) Face detection method, device and system and computer storage medium
CN205068153U (en) Distributing type visual positioning system based on walking robot
CN105139011B (en) A kind of vehicle identification method and device based on mark object image
CN114727063B (en) Path safety monitoring system, method and device for construction site
CN112084963B (en) Monitoring early warning method, system and storage medium
CN104077568A (en) High-accuracy driver behavior recognition and monitoring method and system
CN115171361B (en) Dangerous behavior intelligent detection and early warning method based on computer vision
CN116310943B (en) Method for sensing safety condition of workers
CN111401276B (en) Safety helmet wearing identification method and system
CN114662208B (en) Construction visualization system and method based on Bim technology
CN111259763A (en) Target detection method and device, electronic equipment and readable storage medium
CN112270807A (en) Old man early warning system that tumbles
CN115565214A (en) Regional alarm supervisory systems based on block chain
CN111753587B (en) Ground falling detection method and device
CN111582183A (en) Mask identification method and system in public place
CN110390224B (en) Traffic sign recognition method and device
CN108122415B (en) License plate information determination method and device
CN115641607A (en) Method, device, equipment and storage medium for detecting wearing behavior of power construction site operator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210901

Address after: 701, 7 / F, No. 60, Chuangxin Third Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, 519000

Applicant after: Guangdong Light Speed Intelligent Equipment Co.,Ltd.

Applicant after: Tenghui Technology Building Intelligence (Shenzhen) Co.,Ltd.

Address before: 701, 7 / F, No. 60, Chuangxin Third Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, 519000

Applicant before: Guangdong Light Speed Intelligent Equipment Co.,Ltd.

GR01 Patent grant