CN113283392A - Building scene understanding system and method based on deep neural network - Google Patents

Building scene understanding system and method based on deep neural network

Info

Publication number
CN113283392A
CN113283392A (application CN202110716807.5A)
Authority
CN
China
Prior art keywords
data set
unit
soil
module
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110716807.5A
Other languages
Chinese (zh)
Inventor
罗恒阳
程飞
齐敏
邵闻达
殷黎明
张磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Weishitong Intelligent Technology Co ltd
Original Assignee
Suzhou Weishitong Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Weishitong Intelligent Technology Co ltd filed Critical Suzhou Weishitong Intelligent Technology Co ltd
Priority to CN202110716807.5A
Publication of CN113283392A
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/35: Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/38: Outdoor scenes
    • G06V20/39: Urban scenes
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24: Classification techniques
    • G06F18/243: Classification techniques relating to the number of classes
    • G06F18/2431: Multiple classes
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a building scene understanding system and method based on a deep neural network, belonging to the technical field of environmental management and environmental protection measures. The system comprises an image acquisition module, an edge computing module and a storage module; the edge computing module comprises an AI core operation module and a network transmission module, and the AI core operation module comprises a model application unit and a soil judgment unit. Images shot by the image acquisition module are synchronously transmitted to the model application unit, which infers the positions and occupied sizes of four types of ground surface in the image, namely iron plates, green nets, bare soil and concrete pavement, and transmits the results to the soil judgment unit to judge whether bare soil is present. The method can accurately identify bare soil, hardened ground, greened ground and ground covered by dust screens in a scene and calculate the bare soil coverage rate, thereby achieving long-term, high-quality and objective statistics of bare soil coverage.

Description

Building scene understanding system and method based on deep neural network
Technical Field
The invention belongs to the technical field of environmental management and environmental protection measures, and particularly relates to a building scene understanding system and method based on a deep neural network.
Background
In environmental sanitation management and environmental protection work, dust control is an important protective measure. In traditional dust management, the PM10 reading provided by an environmental quality sensor serves as the key dust index. However, by the time the sensor raises a large-particle concentration alarm, the environmental quality has already deteriorated and inhalable particles are close to or at the specified concentration threshold. Supervision and treatment based on environmental quality sensors therefore lags behind the actual pollution and amounts to a remedial measure taken after the environment has already worsened. To reduce dust emission at the source, covering bare soil with dust screens has become an important dust suppression measure.
Under this protective measure, the bare soil coverage rate becomes an important environmental supervision index. Traditionally, bare soil coverage is supervised by inspectors visiting the site and judging the coverage effect subjectively. Manual patrol is untimely and inconsistent, makes it difficult to reflect the true coverage condition of a supervised site in real time, and cannot sustain long-term, high-quality supervision.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a building scene understanding system and method based on a deep neural network that can accurately identify bare soil, hardened ground, greened ground and dust-screen-covered ground in a scene and calculate the bare soil coverage rate, thereby achieving long-term, high-quality and objective statistics of bare soil coverage.
The building scene understanding system based on a deep neural network comprises an image acquisition module, an edge computing module and a storage module. The edge computing module comprises an AI core operation module and a network transmission module; images collected by the image acquisition module are uploaded to the storage module through the network transmission module, and the AI core operation module comprises a model application unit and a soil judgment unit;
the image acquisition module is installed at a building site and shoots the ground of the site. Images shot by the image acquisition module are synchronously transmitted to the model application unit, which infers the positions and occupied sizes of the four types of ground surface in the image, namely iron plates, green nets, bare soil and concrete pavement, and transmits the results to the soil judgment unit to judge whether bare soil is present.
The invention is further configured to: the edge computing module further comprises a data set making unit, a model training unit and a precision judgment unit. Frames are captured from the image acquisition module, and images containing the four types of ground surface, namely iron plates, green nets, bare soil and concrete pavement, are manually selected and uploaded to the data set making unit;
the data set making unit converts the selected pictures into a data set and uploads it to the model training unit, which trains on the data set and outputs the trained model to the model application unit.
The invention is further configured to: the model training unit adopts a target detection deep neural network, set as one of the Fast R-CNN, YOLOv3 and YOLOv4 networks.
The invention is further configured to: the data set making unit resizes and augments the input pictures through computer algorithms; the augmentation includes geometric transformation, random cropping, standardization, normalization, and brightness and contrast adjustment.
The invention is further configured to: when the soil judgment unit judges that bare soil exists in the target area, it displays the detected effect picture and outputs the proportion of bare soil;
when the soil judgment unit judges that no bare soil exists in the target area, it displays the detected effect picture and outputs a message indicating that the requirements are met.
The invention is further configured to: the image acquisition module is set as any one of a monitoring camera, a wide-angle camera and an infrared camera.
The invention is further configured to: the storage module is set as a cloud server.
A building scene understanding method based on a deep neural network comprises the following steps:
s1: the image acquisition module shoots ground photos and videos of a construction site and uploads the ground photos and videos to the storage module;
s2: manually selecting four types of pavements with iron plates, green nets, bare soil and concrete pavements in the storage module and transmitting the pavements to a data making unit to form a data set;
s3, transmitting the data set to a model training unit, detecting the positions and occupied sizes of the four types of road surfaces in the data set image by the model training unit, and transmitting the data set image to a model application unit when the judgment unit judges that the precision meets the requirement;
s4: the method comprises the following steps that an image acquisition module is called to shoot ground pictures of a construction site and the ground pictures are transmitted to a model application unit, and the model application unit judges the positions and occupied sizes of four types of pavements in a target area and transmits the positions and occupied sizes to a soil judgment unit;
s5: the soil judging unit judges whether the bare soil pavement exists in the target area, displays the detected effect graph and outputs the proportion of the bare soil pavement when the bare soil pavement exists, and displays the detected effect graph and outputs words meeting the requirements when the bare soil pavement does not exist.
The invention further provides that S2 includes the steps of:
a1: performing labeling operation by using an image labeling tool;
a2: utilizing an image processing tool to perform resize operation on the picture to modify the size;
a3: performing enhancement operation on the data set, performing geometric transformation, random pruning, standardization and normalization, brightness and contrast adjustment on the picture, and performing scrambling operation on the data set;
a4: the data set is expressed as m: the proportion of n is divided into a training set and a testing set, and the proportion is 8:2/99:1 according to the data volume;
a5: the data set is converted into a data format required for model training.
In conclusion, the invention has the following beneficial effects:
1. the image acquisition module acquires ground images of a construction site and the edge computing module analyses them to judge bare soil, achieving long-term, high-quality and objective statistics of bare soil coverage;
2. through the data set making unit, model training unit, precision judgment unit and model application unit, images in the storage module are selected and made into a data set, a model is trained until its judgment precision meets the requirement and is uploaded to the model application unit, and the position and size of bare soil in real-time construction-site images are identified;
3. through the soil judgment unit, the identification result of the model application unit is judged and output.
Drawings
FIG. 1 is a schematic diagram of the connection of various modules in the present invention;
FIG. 2 is a flow chart for embodying the overall process of the present invention;
FIG. 3 is a flow chart for embodying a model training module in the present invention.
Detailed Description
The present invention is described in further detail below with reference to the attached drawings; other advantages and effects of the present invention will be readily understood by those skilled in the art from the disclosure of this specification. In this specification, the terms "upper", "lower", "left", "right" and "middle" are used only for clarity of description and are not intended to limit the implementable scope of the invention; changes or adjustments of the relative relationships, without substantial technical changes, shall also be regarded as within the scope of the invention.
Embodiment:
As shown in figs. 1 to 3, the building scene understanding system based on a deep neural network designed in the present invention includes an image acquisition module, an edge computing module and a storage module. The storage module is configured as a cloud server. The edge computing module includes an AI core operation module and a network transmission module; images collected by the image acquisition module are uploaded to the storage module through the network transmission module. The AI core operation module includes a data set making unit, a model training unit, a precision judgment unit, a model application unit and a soil judgment unit. The image acquisition module is installed at a building site, shoots the ground of the site, and is configured as any one of a monitoring camera, a wide-angle camera and an infrared camera.
As shown in figs. 1 to 3, as many pictures and videos as possible are retrieved from the storage module, and the videos are cut into the required ground pictures of the construction site. From the retrieved and captured pictures, those containing the four ground types, iron plates, green nets, bare soil and concrete pavement, are manually selected and uploaded to the data set making unit to be converted into a data set. The data set making unit resizes and augments the input pictures through computer algorithms; the augmentation includes geometric transformation, random cropping, standardization, normalization, and brightness and contrast adjustment. The resulting data set is then uploaded to the model training unit, which trains on it and outputs the trained model to the model application unit.
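The resizing and augmentation operations performed by the data set making unit can be sketched in a few lines. This is a minimal illustration on plain Python lists rather than the actual image library such a unit would use; all function names here belong to this sketch, not to the patent.

```python
import random

def adjust_brightness_contrast(pixels, brightness=0.0, contrast=1.0):
    """Brightness/contrast adjustment: y = contrast * x + brightness, clamped to [0, 255]."""
    return [[min(255.0, max(0.0, contrast * p + brightness)) for p in row]
            for row in pixels]

def normalize(pixels, mean=127.5, std=127.5):
    """Standardization/normalization: map pixel values to roughly [-1, 1]."""
    return [[(p - mean) / std for p in row] for row in pixels]

def random_crop(pixels, size, rng):
    """Random cropping: cut a size x size patch at a random position."""
    h, w = len(pixels), len(pixels[0])
    top, left = rng.randrange(h - size + 1), rng.randrange(w - size + 1)
    return [row[left:left + size] for row in pixels[top:top + size]]

rng = random.Random(0)
# Synthetic 8x8 grayscale "image" standing in for a construction-site frame.
img = [[float((r * 7 + c * 13) % 256) for c in range(8)] for r in range(8)]
aug = adjust_brightness_contrast(img, brightness=10.0, contrast=1.2)
crop = random_crop(normalize(aug), 4, rng)
print(len(crop), len(crop[0]))  # 4 4
```

In practice these operations would be applied per-channel to RGB frames with a library such as OpenCV, but the arithmetic is the same.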
As shown in figs. 1 to 3, the model training unit adopts a target detection deep neural network, set as one of the Fast R-CNN, YOLOv3 and YOLOv4 networks; this embodiment adopts the better-performing YOLOv4 deep neural network. The model training unit is connected to a precision judgment unit. After the data set is input into the model training unit, the unit identifies the positions occupied by the different types of target areas in the data set and passes the results to the precision judgment unit. When the precision judgment unit finds that the precision meets the requirement, the model is transmitted to the model application unit; otherwise, it returns to the model training unit for further training.
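The loop between the model training unit and the precision judgment unit amounts to: train, evaluate, and hand over the model only once a test-set metric crosses a threshold. A sketch with stubbed train/evaluate callables; everything here, including the 0.9 threshold and the stub's improvement rate, is illustrative, since the patent does not specify the metric or its required value.

```python
def train_until_precise(train_epoch, evaluate, threshold=0.9, max_epochs=50):
    """Keep training until the precision judgment passes or the epoch budget runs out.

    train_epoch() performs one pass over the training set;
    evaluate() returns a precision score on the test set.
    """
    for epoch in range(1, max_epochs + 1):
        train_epoch()
        score = evaluate()
        if score >= threshold:      # precision judgment unit: requirement met
            return epoch, score     # hand the model to the model application unit
    raise RuntimeError("precision requirement not met within epoch budget")

# Stub model whose precision improves by 0.07 per epoch from 0.5.
state = {"score": 0.5}
epoch, score = train_until_precise(
    lambda: state.__setitem__("score", state["score"] + 0.07),
    lambda: state["score"])
print(epoch)  # 6
```

With a real detector, train_epoch and evaluate would wrap the framework's training step and a mAP computation on the test set.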
As shown in figs. 2 and 3, images shot by the image acquisition module are synchronously transmitted to the model application unit, which infers the positions and occupied sizes of the four types of ground surface in the image, namely iron plates, green nets, bare soil and concrete pavement, and transmits the results to the soil judgment unit to judge whether bare soil is present;
when the soil judgment unit judges that bare soil exists in the target area, it displays the detected effect picture and outputs the proportion of bare soil;
when the soil judgment unit judges that no bare soil exists in the target area, it displays the detected effect picture and outputs a message indicating that the requirements are met.
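The soil judgment step amounts to summing detected areas per class and checking the bare-soil share. A minimal sketch under the assumption that the model application unit emits axis-aligned boxes with class labels; the names `Detection`, `bare_soil_ratio` and `soil_judgment` are this example's, not the patent's, and real use would compute areas from segmentation masks rather than possibly overlapping boxes.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str  # one of: "iron_plate", "green_net", "bare_soil", "concrete"
    x: int
    y: int
    w: int
    h: int

def bare_soil_ratio(detections):
    """Fraction of the total detected ground area occupied by bare soil."""
    total = sum(d.w * d.h for d in detections)
    if total == 0:
        return 0.0
    bare = sum(d.w * d.h for d in detections if d.label == "bare_soil")
    return bare / total

def soil_judgment(detections):
    """Mimic the soil judgment unit: ratio when bare soil exists, else a compliance message."""
    ratio = bare_soil_ratio(detections)
    if ratio > 0:
        return f"bare soil detected: {ratio:.1%} of detected ground"
    return "requirements met: no bare soil detected"

dets = [Detection("bare_soil", 0, 0, 100, 50),
        Detection("green_net", 100, 0, 100, 150)]
print(soil_judgment(dets))  # bare soil detected: 25.0% of detected ground
```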
The invention also designs a building scene understanding method based on the deep neural network, which comprises the following steps:
s1: the image acquisition module shoots ground photos and videos of a construction site and uploads the ground photos and videos to the storage module;
s2: manually selecting four types of pavements with iron plates, green nets, bare soil and concrete pavements in the storage module and transmitting the pavements to a data making unit to form a data set;
s3, transmitting the data set to a model training unit, detecting the positions and occupied sizes of the four types of road surfaces in the data set image by the model training unit, and transmitting the data set image to a model application unit when the judgment unit judges that the precision meets the requirement;
s4: the method comprises the following steps that an image acquisition module is called to shoot ground pictures of a construction site and the ground pictures are transmitted to a model application unit, and the model application unit judges the positions and occupied sizes of four types of pavements in a target area and transmits the positions and occupied sizes to a soil judgment unit;
s5: the soil judging unit judges whether the bare soil pavement exists in the target area, displays the detected effect graph and outputs the proportion of the bare soil pavement when the bare soil pavement exists, and displays the detected effect graph and outputs words meeting the requirements when the bare soil pavement does not exist.
Wherein, S2 includes the following steps:
a1: performing labeling operation by using an image labeling tool;
a2: utilizing an image processing tool to perform resize operation on the picture to modify the size;
a3: performing enhancement operation on the data set, performing geometric transformation, random pruning, standardization and normalization, brightness and contrast adjustment on the picture, and performing scrambling operation on the data set;
a4: the data set is expressed as m: the proportion of n is divided into a training set and a testing set, and the proportion is 8:2/99:1 according to the data volume;
a5: the data set is converted into a data format required for model training.
In use, the equipment is first set up: the image acquisition module and the AI core operation module are installed at a position from which the overall scene of the construction site can be shot, and the cloud-server storage device is configured to acquire the real-time stream of the image acquisition module over a 4G network. As many of the videos and pictures uploaded by the image acquisition module as possible are downloaded from the cloud server, and pictures containing the four types of ground surface, iron plates, green nets, bare soil and concrete pavement, are manually selected from them and uploaded to the data set making unit.
Then labeling, image resizing, data set augmentation, shuffling and partitioning are performed in sequence. When the data volume is large, the ratio of the training set to the test set is 99:1; otherwise it is 8:2. The data set is then converted into the VOC format as required by the model.
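Converting a labeled picture to the VOC format means writing one XML annotation file per image. A minimal sketch using the standard library; the file name and class names are this example's assumptions, and labeling tools such as labelImg produce the same structure automatically.

```python
import xml.etree.ElementTree as ET

def voc_annotation(filename, width, height, objects):
    """Build a Pascal VOC annotation tree; objects = [(name, xmin, ymin, xmax, ymax)]."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    for tag, val in (("width", width), ("height", height), ("depth", 3)):
        ET.SubElement(size, tag).text = str(val)
    for name, xmin, ymin, xmax, ymax in objects:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = name
        box = ET.SubElement(obj, "bndbox")
        for tag, val in (("xmin", xmin), ("ymin", ymin), ("xmax", xmax), ("ymax", ymax)):
            ET.SubElement(box, tag).text = str(val)
    return root

tree = voc_annotation("site_0001.jpg", 1920, 1080,
                      [("bare_soil", 10, 20, 400, 300), ("green_net", 500, 20, 900, 400)])
print(tree.find("object/name").text)  # bare_soil
```

Serializing with `ET.ElementTree(tree).write(path)` yields the per-image `.xml` file that VOC-style training pipelines expect.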
Next the network is selected. Construction site detection is treated as a detection task and a target detection deep neural network is adopted; this embodiment uses the YOLOv4 deep neural network. The deep learning environment is configured first: the YOLOv4 network under the Darknet framework is adopted, the NVIDIA graphics driver is installed on a computer running Ubuntu, CUDA and cuDNN are configured, the OpenCV image processing library is installed, PyTorch is installed, and the Detectron2 environment is set up.
The data set is converted into the format of a COCO data set and fed into an image segmentation network model; this embodiment adopts a Mask R-CNN network, whose corresponding weights are obtained from the official Detectron2 site. The network model is customized and the relevant configuration parameters, including the categories and the hyper-parameter settings, are defined. A Python script then calls the Mask R-CNN deep learning model configured under the Detectron2 platform to train on the data set, and the hyper-parameters are adjusted repeatedly according to the trained model's performance on the test set until the recognition effect meets expectations.
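Putting the data set "into the format of a COCO data set" means emitting a single JSON file with `images`, `annotations` and `categories` arrays. A reduced sketch using only the standard library; the category ids and file names are illustrative, and a full converter would also carry segmentation polygons for Mask R-CNN.

```python
import json

CATEGORIES = ["iron_plate", "green_net", "bare_soil", "concrete"]

def to_coco(records):
    """records = [(file_name, width, height, [(label, x, y, w, h), ...])]"""
    coco = {"images": [], "annotations": [],
            "categories": [{"id": i, "name": n} for i, n in enumerate(CATEGORIES)]}
    ann_id = 0
    for img_id, (file_name, width, height, boxes) in enumerate(records):
        coco["images"].append({"id": img_id, "file_name": file_name,
                               "width": width, "height": height})
        for label, x, y, w, h in boxes:
            coco["annotations"].append({
                "id": ann_id, "image_id": img_id,
                "category_id": CATEGORIES.index(label),
                "bbox": [x, y, w, h],   # COCO bbox convention: [x, y, width, height]
                "area": w * h, "iscrowd": 0})
            ann_id += 1
    return coco

coco = to_coco([("site_0001.jpg", 1920, 1080, [("bare_soil", 10, 20, 390, 280)])])
print(coco["annotations"][0]["bbox"])  # [10, 20, 390, 280]
```

The resulting dictionary is written out with `json.dump` and registered with the training framework as a COCO-format data set.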
After the model is trained, when a user issues a task to check the bare soil coverage rate or the area ratio of a related category, the server downloads a real-time construction site photo locally and feeds it into the network for inference. The prediction gives the positions and sizes of the different ground types in the target area and is passed to the soil judgment unit: when bare soil exists, the detected effect picture is displayed and the bare soil area ratio is output; when it does not, the detected effect picture is displayed and a message indicating that the requirements are met is output.
The present embodiment is only intended to explain the present invention and does not limit it. After reading this specification, those skilled in the art may modify this embodiment as needed without inventive contribution, and all such modifications are protected by patent law within the scope of the claims of the present invention.

Claims (9)

1. A building scene understanding system based on a deep neural network, characterized in that: the system comprises an image acquisition module, an edge computing module and a storage module; the edge computing module comprises an AI core operation module and a network transmission module, images collected by the image acquisition module are uploaded to the storage module through the network transmission module, and the AI core operation module comprises a model application unit and a soil judgment unit;
the image acquisition module is installed at a building site and shoots the ground of the site; images shot by the image acquisition module are synchronously transmitted to the model application unit, which infers the positions and occupied sizes of the four types of ground surface in the image, namely iron plates, green nets, bare soil and concrete pavement, and transmits the results to the soil judgment unit to judge whether bare soil is present.
2. The deep neural network-based building scene understanding system of claim 1, wherein: the edge computing module further comprises a data set making unit, a model training unit and a precision judgment unit; frames are captured from the image acquisition module, and images containing the four types of ground surface, namely iron plates, green nets, bare soil and concrete pavement, are manually selected and uploaded to the data set making unit;
the data set making unit converts the selected pictures into a data set and uploads it to the model training unit, which trains on the data set and outputs the trained model to the model application unit.
3. The deep neural network-based building scene understanding system of claim 2, wherein: the model training unit adopts a target detection deep neural network, set as one of the Fast R-CNN, YOLOv3 and YOLOv4 networks.
4. The deep neural network-based building scene understanding system of claim 3, wherein: the data set making unit resizes and augments the input pictures through computer algorithms; the augmentation includes geometric transformation, random cropping, standardization, normalization, and brightness and contrast adjustment.
5. The deep neural network-based building scene understanding system of claim 1, wherein: when the soil judgment unit judges that bare soil exists in the target area, it displays the detected effect picture and outputs the proportion of bare soil;
when the soil judgment unit judges that no bare soil exists in the target area, it displays the detected effect picture and outputs a message indicating that the requirements are met.
6. The deep neural network-based building scene understanding system of claim 5, wherein: the image acquisition module is set as any one of a monitoring camera, a wide-angle camera and an infrared camera.
7. The deep neural network-based building scene understanding system of claim 6, wherein: the storage module is set as a cloud server.
8. A deep neural network based building scene understanding method according to any one of claims 5 to 7, characterized by comprising the steps of:
s1: the image acquisition module shoots ground photos and videos of a construction site and uploads the ground photos and videos to the storage module;
s2: manually selecting four types of pavements with iron plates, green nets, bare soil and concrete pavements in the storage module and transmitting the pavements to a data making unit to form a data set;
s3, transmitting the data set to a model training unit, detecting the positions and occupied sizes of the four types of road surfaces in the data set image by the model training unit, and transmitting the data set image to a model application unit when the judgment unit judges that the precision meets the requirement;
s4: the method comprises the following steps that an image acquisition module is called to shoot ground pictures of a construction site and the ground pictures are transmitted to a model application unit, and the model application unit judges the positions and occupied sizes of four types of pavements in a target area and transmits the positions and occupied sizes to a soil judgment unit;
s5: the soil judging unit judges whether the bare soil pavement exists in the target area, displays the detected effect graph and outputs the proportion of the bare soil pavement when the bare soil pavement exists, and displays the detected effect graph and outputs words meeting the requirements when the bare soil pavement does not exist.
9. The building scene understanding method based on the deep neural network as claimed in claim 8, characterized in that S2 comprises the following steps:
a1: performing labeling operation by using an image labeling tool;
a2: utilizing an image processing tool to perform resize operation on the picture to modify the size;
a3: performing enhancement operation on the data set, performing geometric transformation, random pruning, standardization and normalization, brightness and contrast adjustment on the picture, and performing scrambling operation on the data set;
a4: the data set is expressed as m: the proportion of n is divided into a training set and a testing set, and the proportion is 8:2/99:1 according to the data volume;
a5: the data set is converted into a data format required for model training.
CN202110716807.5A, filed 2021-06-28, priority date 2021-06-28: Building scene understanding system and method based on deep neural network, published as CN113283392A (pending).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110716807.5A CN113283392A (en) 2021-06-28 2021-06-28 Building scene understanding system and method based on deep neural network

Publications (1)

Publication Number Publication Date
CN113283392A, published 2021-08-20

Family

ID=77285720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110716807.5A Pending CN113283392A (en) 2021-06-28 2021-06-28 Building scene understanding system and method based on deep neural network

Country Status (1)

Country Link
CN (1) CN113283392A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529600A (en) * 2016-11-16 2017-03-22 桂林理工大学 SVM-based recognition method of building angular points in high-resolution optical image
CN107784283A (en) * 2017-10-24 2018-03-09 防灾科技学院 The unmanned plane high score image coal mine fire area land cover classification method of object-oriented
CN108288059A (en) * 2017-12-29 2018-07-17 中国电子科技集团公司第二十七研究所 A kind of building waste monitoring method based on high-definition remote sensing technology
CN112215815A (en) * 2020-10-12 2021-01-12 杭州视在科技有限公司 Bare soil coverage automatic detection method for construction site


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination