CN110688955A - Building construction target detection method based on YOLO neural network


Info

Publication number
CN110688955A
Authority
CN
China
Prior art keywords
target
darknet
neural network
recognition model
training
Prior art date
Legal status
Pending
Application number
CN201910926462.9A
Other languages
Chinese (zh)
Inventor
张翔 (ZHANG Xiang)
姚江涛 (YAO Jiangtao)
董丽丽 (DONG Lili)
Current Assignee
Xi'an University of Architecture and Technology
Original Assignee
Xi'an University of Architecture and Technology
Priority date
Filing date
Publication date
Application filed by Xi'an University of Architecture and Technology
Priority to CN201910926462.9A
Publication of CN110688955A
Current legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/176 Urban or other man-made structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

The invention discloses a building construction target detection method based on a YOLO neural network, which comprises the following steps: step 1, collecting original image data at a construction site, dividing the collected original image data into a test set and a training set, and preprocessing the training set; step 2, training a Darknet-53-based target recognition model of the YOLO neural network; step 3, testing the Darknet-53-based target recognition model with the test set to obtain a test result; step 4, analyzing the test result of step 3; and step 5, acquiring images at the construction site and detecting the building construction targets in the acquired images with the Darknet-53-based target recognition model. The invention overcomes the shortcoming that the existing YOLO algorithm cannot quickly and accurately recognize targets that lie at large image-layer depth or are small in scale in construction-site images.

Description

Building construction target detection method based on YOLO neural network
Technical Field
The invention relates to the field of deep learning target detection, in particular to a building construction target detection method based on a YOLO neural network.
Background
With the rapid development of the construction industry in China and increasingly strict requirements on construction technology and schedules, prefabricated (assembly-type) buildings are playing an increasingly important role, and target detection for prefabricated building construction is therefore of great research significance.
Target detection, as a branch of computer vision, aims to quickly locate and classify the objects in an image. Target detection for prefabricated building construction locates and classifies the building components to be assembled at a construction site. The existing YOLO algorithm performs well when the image structure is clear, the target size is moderate and the illumination is good, but it cannot quickly and accurately recognize targets that lie at large image-layer depth or are small in scale in construction-site images.
Disclosure of Invention
The invention aims to provide a building construction target detection method based on a YOLO neural network, so as to overcome the shortcoming that the existing YOLO algorithm cannot quickly and accurately recognize targets that lie at large image-layer depth or are small in scale in construction-site images.
The purpose of the invention is realized by the following technical scheme:
a building construction target detection method based on a YOLO neural network comprises the following steps:
step 1, collecting original image data at a construction site, wherein the collected original image data covers a plurality of single-class targets at different image-layer depths and the building construction photographed from a plurality of angles; then dividing the collected original image data into a test set and a training set, and preprocessing the training set;
step 2, extracting features from the training set preprocessed in step 1 through spatial pyramid pooling to obtain feature maps, mapping the candidate boxes of the obtained feature maps into the feature maps of the YOLO neural network, and finally training the Darknet-53-based target recognition model of the YOLO neural network to obtain a trained Darknet-53-based target recognition model;
step 3, testing the target recognition model based on Darknet-53 by using a test set to obtain a test result;
step 4, analyzing the test result of step 3: if the positions of the building components are accurately marked and the building components are recognized in the test result, performing step 5; if the positions of the building components cannot be accurately marked and/or the building components cannot be recognized in the test result, adjusting the parameters of the Darknet-53-based target recognition model and repeating step 3 until the positions of the building components are accurately marked and the building components are recognized in the test result, and then performing step 5;
and 5, acquiring images at a construction site, and detecting the building construction target in the acquired images by using a target identification model based on Darknet-53.
Preferably, in step 1, the process of collecting original image data at the construction site includes the following steps:
step 1.1, shooting the building construction of a construction site from different angles, wherein the shot objects comprise a single target and a plurality of single-class targets;
step 1.2, shooting the targets as in step 1.1 under the different illumination conditions that occur at different times of the day, including direct light and backlight;
step 1.3, while satisfying steps 1.1 and 1.2, capturing as much original image data as possible that also includes building constructions other than the building components of the construction site.
Preferably, the process of preprocessing the training set includes: labeling the image data of the training set with the LabelImg-master labeling tool, and, after labeling the targets, converting the generated XML files into txt files that the Darknet-53-based target recognition model can read.
Preferably, the Darknet-53-based target recognition model comprises a convolutional layer, a feature fusion layer and a prediction layer, wherein:
convolutional layers: used for extracting features from the input image and dividing the obtained feature maps into three different scales, 13 × 13, 26 × 26 and 52 × 52, by convolution together with up-sampling;
feature fusion layer: used for fusing the feature maps of the three different scales obtained by the convolutional layers;
prediction layer: used for performing target prediction on the feature maps fused by the feature fusion layer and correspondingly generating three prediction maps of different scales.
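As an illustration of the three prediction scales described above, the following sketch (not part of the patent; a 416 × 416 input and three anchor boxes per grid cell are assumed, as in the standard YOLOv3 configuration) computes the grid sizes and output-tensor shapes of the three prediction maps.

```python
# Illustrative sketch (assumptions: 416x416 input, 3 anchors per cell, as in
# standard YOLOv3). Each prediction scale downsamples the input by 32, 16 or 8,
# giving the 13x13, 26x26 and 52x52 grids named above; every grid cell predicts
# 3 boxes with (x, y, w, h, objectness) plus the class scores.
def prediction_shapes(input_size=416, num_classes=1, boxes_per_cell=3):
    shapes = []
    for stride in (32, 16, 8):                  # coarse -> fine prediction scales
        grid = input_size // stride             # 13, 26, 52
        channels = boxes_per_cell * (5 + num_classes)
        shapes.append((grid, grid, channels))
    return shapes

print(prediction_shapes(num_classes=1))  # [(13, 13, 18), (26, 26, 18), (52, 52, 18)]
```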
Preferably, in step 2, when the Darknet-53-based target recognition model is trained with the training set preprocessed in step 1, the model is trained from the pre-training weight file darknet53.conv.74, and the training comprises the following steps:
step 2.1, configuring the parameters of the Darknet-53-based target recognition model, with which the YOLO neural network performs target recognition;
step 2.2, setting the storage paths of the training set, the pre-training weight file darknet53.conv.74 and the newly generated weight files, and storing the names of the recognized targets in a names file;
step 2.3, entering the storage path of the Darknet-53-based target recognition model at the command line (DOS), starting training of the Darknet-53-based target recognition model with a training instruction, generating new weight files after preset numbers of iterations during training, and screening the best weight file from the generated new weight files.
Preferably, in step 2.1, the target identification model parameters based on Darknet-53 include Classes, step size and learning rate.
Preferably, the Classes is set to 1, the step size is set to 64, and the learning rate is set to 0.001.
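A hedged sketch of this single-class training setup follows. The file names (obj.data, obj.names, train.txt, test.txt, yolov3.cfg) are illustrative placeholders rather than names taken from the patent; "detector train" is the standard Darknet training instruction, and darknet53.conv.74 is the pre-training weight file named above.

```python
# Sketch of the single-class Darknet training setup (Classes = 1, step size 64,
# learning rate 0.001). File and directory names are illustrative placeholders.
import subprocess
from pathlib import Path

Path("data").mkdir(exist_ok=True)
Path("data/obj.names").write_text("building_component\n")   # name of the recognized target
Path("data/obj.data").write_text(
    "classes = 1\n"
    "train  = data/train.txt\n"   # list of training image paths
    "valid  = data/test.txt\n"    # list of test image paths
    "names  = data/obj.names\n"
    "backup = backup/\n"          # folder where newly generated weight files are stored
)

# Start training from the pre-training weight file darknet53.conv.74 with the
# standard Darknet training instruction (assumes a compiled ./darknet binary and
# a yolov3.cfg configured with classes=1, batch=64, learning_rate=0.001).
subprocess.run(
    ["./darknet", "detector", "train",
     "data/obj.data", "cfg/yolov3.cfg", "darknet53.conv.74"],
    check=True,
)
```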
Preferably, in step 3, the Darknet-53-based target recognition model is tested with the best weight file among the new weight files generated in step 2.3.
Preferably, in step 4, the adjusted parameters of the Darknet-53-based target recognition model include the learning rate and the number of iterations;
when adjusting parameters of the target recognition model based on Darknet-53:
if the positions of the building components cannot be accurately marked in the test result, the learning rate is reduced and the number of iterations is increased;
if the building components cannot be recognized and the loss function is too large, the learning rate is increased; after the loss function has decreased, the learning rate is reduced again, the number of iterations is increased, and step 3 is repeated.
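The adjustment rules above can be summarised in a small sketch. The concrete factors (dividing or multiplying the learning rate by 10, adding 20% more iterations) are illustrative assumptions; the patent only states the direction of each adjustment.

```python
# Minimal sketch of the retraining heuristic; the scaling factors are assumptions.
def adjust_hyperparameters(lr, max_iterations,
                           positions_accurate, components_recognized, loss_too_large):
    if not positions_accurate:
        lr /= 10                                    # positions not marked accurately: lower the learning rate
        max_iterations = int(max_iterations * 1.2)  # and train for more iterations
    if not components_recognized and loss_too_large:
        lr *= 10                                    # raise the learning rate first; once the loss falls it is
                                                    # lowered again and the iteration count increased
        max_iterations = int(max_iterations * 1.2)
    return lr, max_iterations

print(adjust_hyperparameters(0.001, 50000,
                             positions_accurate=False,
                             components_recognized=True,
                             loss_too_large=False))   # (0.0001, 60000)
```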
The invention has the following beneficial effects:
the building construction target detection method based on the YOLO neural network utilizes the idea of space gold tower pooling. Firstly, extracting features of an original image in a YOLO neural network to obtain a feature image, mapping a candidate frame obtained by the original image in a spatial pyramid pooling mode to the feature image of the first YOLO neural network for feature fusion, and finally outputting through a prediction layer; the space pyramid is introduced into the network in a pooling mode, a search area in the feature map is enlarged, a good identification effect is achieved for small target identification, and identification accuracy of the model is improved on the basis of the original model.
Drawings
FIG. 1 is a network architecture diagram of a YOLO neural network employed in the present invention;
FIG. 2 is a network structure diagram of the YOLO neural network after introducing the spatial pyramid pooling on the original image according to the present invention;
FIG. 3 compares, on an image collected at a construction site, the recognition effect of the YOLO neural network obtained after introducing spatial pyramid pooling on the original image with the recognition effect of the prior-art YOLO, where FIG. 3(a) is the recognition result before improvement under weak illumination, FIG. 3(b) the recognition result after improvement under weak illumination, FIG. 3(c) the recognition result before improvement at large image-layer depth, and FIG. 3(d) the recognition result after improvement at large image-layer depth.
FIG. 4(a) is the recognition accuracy before improvement under weak illumination in the embodiment of the present invention, FIG. 4(b) the recognition accuracy after improvement under weak illumination, FIG. 4(c) the recognition accuracy before improvement at large image-layer depth, and FIG. 4(d) the recognition accuracy after improvement at large image-layer depth.
Detailed Description
The invention is further illustrated below with reference to the accompanying figures and examples.
The technical scheme adopted by the invention is as follows: a building construction target detection method based on a YOLO neural network comprises the following steps:
Step one: acquire original image data at a construction site; the original image data must contain the building construction photographed from multiple angles and multiple single-class targets at different image-layer depths. Considering the influence of illumination, the original image data also needs to be acquired under the different illumination conditions of different time periods. Then divide the collected original image data into a test set and a training set, and preprocess the images of the training set.
The specific step one comprises the following steps:
Step 1.1: photograph the building construction of the construction site from different angles with a single-lens reflex camera at the construction site; the photographed objects must include both a single target and a plurality of single-class targets.
Step 1.2: shoot in batches, photographing the targets as required in step 1.1 under the different illumination conditions of different times of the day; fully considering the influence of illumination, acquire images under external conditions such as direct light and backlight.
Step 1.3: while satisfying steps 1.1 and 1.2, include as much other building construction of the construction site as possible.
Step 1.4: label the training-set image data acquired in the above steps with the LabelImg-master labeling tool, and after the targets have been labeled, convert the generated XML files into txt files that the model can read.
Step two: extract features from the training set preprocessed in step one through spatial pyramid pooling to obtain feature maps, map the candidate boxes of the obtained feature maps into the feature maps of the YOLO neural network, and finally train the Darknet-53-based target recognition model of the YOLO neural network to obtain a trained Darknet-53-based target recognition model.
The Darknet-53-based target recognition model comprises a convolutional layer, a feature fusion layer and a prediction layer, wherein:
Convolutional layers: the input image undergoes feature extraction in these layers, and the convolutional layers divide the obtained feature maps into three different scales, 13 × 13, 26 × 26 and 52 × 52, by convolution together with up-sampling;
Feature fusion layer: the feature fusion layer fuses the feature maps of the three different scales obtained by the convolutional layers.
Prediction layer: target prediction is performed on the fused feature maps in the prediction layer, generating three prediction maps of different scales.
The Darknet-53-based target recognition model is trained from the pre-training weight file darknet53.conv.74, as follows:
Step 2.1: configure the parameters of the Darknet-53-based target recognition model, with which the YOLO neural network performs target recognition. The parameters include Classes, the step size and the learning rate; typically, Classes is set to 1, the step size to 64 and the learning rate to 0.001.
Step 2.2: set the storage paths of the training set, the pre-training weight file darknet53.conv.74 and the newly generated weight files, and store the names of the recognized targets in the names file.
Step 2.3: enter the storage path of the Darknet-53-based target recognition model at the command line (DOS), start training the model with a training instruction, generate new weight files after preset numbers of iterations during training, and screen the best weight file from the generated new weight files.
Step three, testing the target recognition model based on Darknet-53 by using a test set to obtain a test result;
Step four: analyze the test result of step three. If the positions of the building components are accurately marked and the building components are recognized in the test result, perform step five; if the positions of the building components cannot be accurately marked and/or the building components cannot be recognized, adjust the parameters of the Darknet-53-based target recognition model and repeat step three until the positions of the building components are accurately marked and the building components are recognized, then perform step five. The adjusted parameters of the Darknet-53-based target recognition model include the learning rate and the number of iterations. When adjusting the parameters: if the positions of the building components cannot be accurately marked in the test result, reduce the learning rate and increase the number of iterations; if the building components cannot be recognized and the loss function is too large, increase the learning rate, reduce it again after the loss function has decreased, increase the number of iterations, and repeat step three.
Step five: collect images at the construction site and detect the building construction targets in the collected images with the Darknet-53-based target recognition model.
Images acquired in practice often contain recognition objects that are small and lie at large image-layer depth, which existing target recognition models find difficult to recognize. The invention therefore introduces spatial pyramid pooling on the original image: features are first extracted from the original image in the convolutional layers of YOLO to obtain feature maps, and the candidate boxes obtained from the original image by spatial pyramid pooling are then mapped into the YOLO feature maps for processing and output. Introducing spatial pyramid pooling into the network enlarges the search area in the feature map, gives a good recognition effect on small targets, removes redundant candidate boxes after clustering so that the search speed is maintained, and improves the recognition accuracy of the model over the original.
Examples
First, the original images acquired from the construction site are preprocessed as follows:
Step 1: the original images acquired from the construction site are divided into two parts: one part is the training-set data used to train the Darknet-53-based target recognition model, and the other part is the test-set data used to test the Darknet-53-based target recognition model.
Step 2: the training-set data are named uniformly, numbered incrementally from 1, so that the XML files generated later can be easily distinguished.
Step 3: the training-set image data are labeled one by one with the LabelImg-master labeling tool, and after labeling the generated XML files are converted into txt files that the model can read.
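A hedged sketch of this XML-to-txt conversion follows: LabelImg writes Pascal-VOC-style XML, while Darknet-style models read one txt line per box in the form "class x_center y_center width height", normalised to [0, 1]. The directory names and the single class name are illustrative assumptions.

```python
# Sketch: convert LabelImg (Pascal VOC) XML annotations to Darknet-readable txt.
# Paths and the class list are illustrative placeholders.
import xml.etree.ElementTree as ET
from pathlib import Path

CLASSES = ["building_component"]          # assumed single recognized class

def voc_xml_to_yolo_txt(xml_path, out_dir="labels"):
    root = ET.parse(xml_path).getroot()
    img_w = float(root.find("size/width").text)
    img_h = float(root.find("size/height").text)
    lines = []
    for obj in root.findall("object"):
        cls_id = CLASSES.index(obj.find("name").text)
        box = obj.find("bndbox")
        xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
        xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
        # Corner coordinates -> normalised centre / width / height.
        lines.append(f"{cls_id} "
                     f"{(xmin + xmax) / 2 / img_w:.6f} {(ymin + ymax) / 2 / img_h:.6f} "
                     f"{(xmax - xmin) / img_w:.6f} {(ymax - ymin) / img_h:.6f}")
    out_file = Path(out_dir) / (Path(xml_path).stem + ".txt")
    out_file.parent.mkdir(parents=True, exist_ok=True)
    out_file.write_text("\n".join(lines) + "\n")

for xml_file in Path("annotations").glob("*.xml"):
    voc_xml_to_yolo_txt(xml_file)
```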
At this point the preprocessing of the original image data is complete and the training set has been obtained. Training of the Darknet-53-based target recognition model then begins, as follows:
step 1: the Darknet-53-based target recognition model comprises a convolutional layer, a feature fusion layer and a prediction layer, and the structure diagram of the model is shown in FIG. 1.
And (3) rolling layers: the method comprises the steps that an input image is subjected to feature extraction through the layer, and a convolution layer divides an acquired feature map into three different scales of 13 x 13, 26 x 26 and 52 x 52 in a mode of convolution while up-sampling;
a characteristic fusion layer: and the characteristic fusion layer performs characteristic fusion on the three characteristic graphs with different scales acquired by the convolutional layer.
Prediction layer: and performing target prediction on the feature map obtained by fusing the feature fusion layers through the prediction layer to generate three prediction maps with different scales.
Step 2: the target recognition model based on Darknet-53 is trained on the basis of the pre-training weight file Darknet53.conv.74 by using the pre-training weight file Darknet53. conv.74. And configuring network parameters based on a Darknet-53 target identification model, and performing target identification by matching YOLO with the Darknet-53 model. For the single-class object recognition of the present embodiment, Classes is set to 1, the step size is set to 64, and the learning rate is 0.001.
And step 3: the saving paths of the training set, the dark net53.conv.74 and the newly generated weight file are set, and the name of the recognition target is saved in the names file.
And 4, step 4: entering a target recognition model storage path based on Darknet-53 under DOS, starting training of the target recognition model based on Darknet-53 by using a training instruction, generating a new weight file after iteration for a certain number of times in the training process, storing the new weight file in the set path, and finishing training of the target recognition model based on Darknet-53 after the iteration reaches the set number of times.
And finishing training based on the Darknet-53 target recognition model. In the actual scene recognition, the recognition result of the target in the image can be obtained only by inputting the path and the name of the image to be recognized under the path of the target recognition model based on Darknet-53.
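As a sketch of this recognition step, the command below assumes a compiled Darknet binary and reuses the illustrative file names from the training sketch above; "detector test" is the standard Darknet single-image inference instruction, and the weight-file and image names are placeholders.

```python
# Sketch: run single-image detection with the trained weights (names are placeholders).
import subprocess

subprocess.run(
    ["./darknet", "detector", "test",
     "data/obj.data",                    # dataset description used during training
     "cfg/yolov3.cfg",                   # network configuration
     "backup/yolov3_best.weights",       # best weight file screened after training
     "data/site_images/test_001.jpg"],   # path and name of the image to be recognized
    check=True,
)
```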
The building construction target detection method based on the YOLO neural network introduces spatial pyramid pooling into the network structure of the first YOLO: features are first extracted from the original image in the convolutional layers of YOLO to obtain feature maps; the candidate boxes obtained from the original image by spatial pyramid pooling are then mapped into the YOLO feature maps, feature fusion is performed after max pooling, and the result is finally output through the prediction layer. Introducing spatial pyramid pooling into the network enlarges the search area in the feature map, gives a good recognition effect on small targets, removes redundant candidate boxes after clustering so that the search speed is maintained, and improves the recognition accuracy of the model over the original.
To compare the building construction target detection method based on the YOLO neural network with the prior-art YOLO neural network model, both models were tested with the same test-set data; the experimental results are shown in Table 1. The results show that the target recognition accuracy is superior to that of the existing YOLO neural network model.
The comparison of target recognition results is shown in Table 1:
TABLE 1
(Table 1 is reproduced as an image in the original publication; it compares the recognition accuracy and recognition speed of the prior-art YOLO neural network and the YOLO neural network of the invention.)
As can be seen from Table 1, compared with the prior-art YOLO neural network, the YOLO neural network of the present invention improves the accuracy of target recognition while the recognition speed differs little.
As can be seen from FIGS. 3(a)-3(d) and FIGS. 4(a)-4(d): FIG. 3(a) shows the recognition result of the prior-art YOLO neural network and FIG. 3(b) that of the YOLO neural network of the present invention; the tests show that the YOLO neural network of the present invention can still recognize small targets under non-ideal illumination conditions. FIG. 3(c) shows the recognition result of the prior-art YOLO neural network on a plurality of single-class targets and FIG. 3(d) that of the YOLO neural network of the present invention; the tests show that the YOLO neural network of the present invention selects more suitable recognition boxes when recognizing a plurality of single-class targets. FIGS. 4(a) and 4(b) show that, under non-ideal illumination, the YOLO neural network of the present invention markedly improves the recognition of small targets compared with the existing YOLO neural network model. FIGS. 4(c) and 4(d) show that the classification accuracy of the YOLO neural network of the present invention on a plurality of single-class targets reaches over 99%, a clear improvement over the prior-art YOLO neural network.
From the above it can be concluded that the building construction target detection method based on the YOLO neural network improves the model's accuracy in recognizing small targets, and the comparison test between the existing YOLO neural network and the YOLO neural network of the invention shows that the latter achieves higher recognition precision without losing recognition speed.

Claims (10)

1. A building construction target detection method based on a YOLO neural network is characterized by comprising the following steps:
step 1, collecting original image data at a construction site, wherein the collected original image data covers a plurality of single-class targets at different image-layer depths and the building construction photographed from a plurality of angles; then dividing the collected original image data into a test set and a training set, and preprocessing the training set;
step 2, extracting features from the training set preprocessed in step 1 through spatial pyramid pooling to obtain feature maps, mapping the candidate boxes of the obtained feature maps into the feature maps of the YOLO neural network, and finally training the Darknet-53-based target recognition model of the YOLO neural network to obtain a trained Darknet-53-based target recognition model;
step 3, testing the target recognition model based on Darknet-53 by using a test set to obtain a test result;
step 4, analyzing the test result of step 3: if the positions of the building components are accurately marked and the building components are recognized in the test result, performing step 5; if the positions of the building components cannot be accurately marked and/or the building components cannot be recognized in the test result, adjusting the parameters of the Darknet-53-based target recognition model and repeating step 3 until the positions of the building components are accurately marked and the building components are recognized in the test result, and then performing step 5;
and 5, acquiring images at a construction site, and detecting the building construction target in the acquired images by using a target identification model based on Darknet-53.
2. The method for detecting the building construction target based on the YOLO neural network as claimed in claim 1, wherein in the step 1, the process of collecting the original image data at the construction site comprises the following steps:
step 1.1, shooting the building construction of a construction site from different angles, wherein the shot objects comprise a single target and a plurality of single-class targets;
step 1.2, shooting the targets as in step 1.1 under the different illumination conditions that occur at different times of the day, including direct light and backlight;
step 1.3, while satisfying steps 1.1 and 1.2, capturing as much original image data as possible that also includes building constructions other than the building components of the construction site.
3. The building construction target detection method based on the YOLO neural network as claimed in claim 1, wherein the process of preprocessing the training set comprises: labeling the image data of the training set with the LabelImg-master labeling tool, and, after labeling the targets, converting the generated XML files into txt files that the Darknet-53-based target recognition model can read.
4. The building construction target detection method based on the YOLO neural network as claimed in claim 1, wherein the Darknet-53-based target recognition model comprises a convolutional layer, a feature fusion layer and a prediction layer, wherein:
convolutional layers: used for extracting features from the input image and dividing the obtained feature maps into three different scales, 13 × 13, 26 × 26 and 52 × 52, by convolution together with up-sampling;
feature fusion layer: used for fusing the feature maps of the three different scales obtained by the convolutional layers;
prediction layer: used for performing target prediction on the feature maps fused by the feature fusion layer and correspondingly generating three prediction maps of different scales.
5. The building construction target detection method based on the YOLO neural network as claimed in claim 1, wherein in the step 2, when the Darknet-53-based target recognition model is trained through the training set obtained through the preprocessing in the step 1, the Darknet-53-based target recognition model is trained by using a pre-training weight file Darknet53.conv.74, and the method comprises the following steps:
step 2.1, configuring target recognition model parameters based on Darknet-53, and performing target recognition by matching a YOLO neural network with a target recognition model based on Darknet-53;
step 2.2, setting the storage paths of the training set, the pre-training weight file darknet53.conv.74 and the newly generated weight files, and storing the names of the recognized targets in a names file;
and step 2.3, entering the storage path of the Darknet-53-based target recognition model at the command line (DOS), starting training of the Darknet-53-based target recognition model with a training instruction, generating new weight files after preset numbers of iterations during training, and screening the best weight file from the generated new weight files.
6. The method as claimed in claim 5, wherein in step 2.1, the target identification model parameters based on Darknet-53 include Classes, step size and learning rate.
7. The building construction target detection method based on the YOLO neural network as claimed in claim 6, wherein Classes is set to 1, the step size is set to 64, and the learning rate is set to 0.001.
8. The building construction target detection method based on the YOLO neural network as claimed in claim 5, wherein in step 3, the Darknet-53-based target recognition model is tested with the best weight file among the new weight files generated in step 2.3.
9. The method as claimed in claim 1, wherein, in step 4, the adjusted parameters of the Darknet-53-based target recognition model include the learning rate and the number of iterations.
10. The building construction target detection method based on the YOLO neural network as claimed in claim 9, wherein when adjusting parameters of the target recognition model based on Darknet-53:
if the positions of the building components cannot be accurately marked in the test result, the learning rate is reduced and the number of iterations is increased;
if the building components cannot be recognized and the loss function is too large, the learning rate is increased; after the loss function has decreased, the learning rate is reduced again, the number of iterations is increased, and step 3 is repeated.
CN201910926462.9A 2019-09-27 2019-09-27 Building construction target detection method based on YOLO neural network Pending CN110688955A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910926462.9A CN110688955A (en) 2019-09-27 2019-09-27 Building construction target detection method based on YOLO neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910926462.9A CN110688955A (en) 2019-09-27 2019-09-27 Building construction target detection method based on YOLO neural network

Publications (1)

Publication Number Publication Date
CN110688955A true CN110688955A (en) 2020-01-14

Family

ID=69110799

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910926462.9A Pending CN110688955A (en) 2019-09-27 2019-09-27 Building construction target detection method based on YOLO neural network

Country Status (1)

Country Link
CN (1) CN110688955A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894045A (en) * 2016-05-06 2016-08-24 电子科技大学 Vehicle type recognition method with deep network model based on spatial pyramid pooling
US10108850B1 (en) * 2017-04-24 2018-10-23 Intel Corporation Recognition, reidentification and security enhancements using autonomous machines
CN107871125A (en) * 2017-11-14 2018-04-03 深圳码隆科技有限公司 Architecture against regulations recognition methods, device and electronic equipment
CN109064461A (en) * 2018-08-06 2018-12-21 长沙理工大学 A kind of detection method of surface flaw of steel rail based on deep learning network
CN109522963A (en) * 2018-11-26 2019-03-26 北京电子工程总体研究所 A kind of the feature building object detection method and system of single-unit operation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHEN, Lansun et al.: "Acquisition and Analysis of Tongue Images in Traditional Chinese Medicine" (《中医舌象的采集与分析》), 30 April 2007 *
XU, Qingyong: "Research on Tattoo Image Recognition and Detection Based on Deep Learning Theory" (《基于深度学习理论的纹身图像识别与检测研究》), 31 December 2018 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633213A (en) * 2020-12-29 2021-04-09 应急管理部国家自然灾害防治研究院 Zhang Heng satellite lightning whistle sound wave detection method and system based on YOLO neural network
CN113052133A (en) * 2021-04-20 2021-06-29 平安普惠企业管理有限公司 Yolov 3-based safety helmet identification method, apparatus, medium and equipment
CN113160209A (en) * 2021-05-10 2021-07-23 上海市建筑科学研究院有限公司 Target marking method and target identification method for building facade damage detection
CN113762229A (en) * 2021-11-10 2021-12-07 山东天亚达新材料科技有限公司 Intelligent identification method and system for building equipment in building site

Similar Documents

Publication Publication Date Title
CN110688955A (en) Building construction target detection method based on YOLO neural network
CN113705478B (en) Mangrove single wood target detection method based on improved YOLOv5
CN103324937B (en) The method and apparatus of label target
CN111179251A (en) Defect detection system and method based on twin neural network and by utilizing template comparison
CN109509187B (en) Efficient inspection algorithm for small defects in large-resolution cloth images
CN104778474B (en) A kind of classifier construction method and object detection method for target detection
CN107229930A (en) A kind of pointer instrument numerical value intelligent identification Method and device
CN109447979B (en) Target detection method based on deep learning and image processing algorithm
CN108830332A (en) A kind of vision vehicle checking method and system
CN112926405A (en) Method, system, equipment and storage medium for detecting wearing of safety helmet
CN105654066A (en) Vehicle identification method and device
CN111932511B (en) Electronic component quality detection method and system based on deep learning
CN108932712A (en) A kind of rotor windings quality detecting system and method
CN111652835A (en) Method for detecting insulator loss of power transmission line based on deep learning and clustering
Mann et al. Automatic flower detection and phenology monitoring using time‐lapse cameras and deep learning
CN108509993A (en) A kind of water bursting in mine laser-induced fluorescence spectroscopy image-recognizing method
CN103344583A (en) Praseodymium-neodymium (Pr/Nd) component content detection system and method based on machine vision
CN111681214A (en) Aviation bearing surface rivet detection method based on U-net network
CN110348494A (en) A kind of human motion recognition method based on binary channels residual error neural network
CN115861170A (en) Surface defect detection method based on improved YOLO V4 algorithm
CN115526852A (en) Molten pool and splash monitoring method in selective laser melting process based on target detection and application
CN115841488A (en) Hole checking method of PCB (printed Circuit Board) based on computer vision
CN116682106A (en) Deep learning-based intelligent detection method and device for diaphorina citri
CN108154199B (en) High-precision rapid single-class target detection method based on deep learning
CN113128335A (en) Method, system and application for detecting, classifying and discovering micro-body paleontological fossil image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200114)