CN113269710B - AAU construction process detecting system based on target detection - Google Patents

AAU construction process detecting system based on target detection

Info

Publication number
CN113269710B
CN113269710B (application CN202110305916.8A)
Authority
CN
China
Prior art keywords
model
detection
training
data set
aau
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110305916.8A
Other languages
Chinese (zh)
Other versions
CN113269710A (en)
Inventor
王晓君 (Wang Xiaojun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Tianyue Electronic Technology Co ltd
Original Assignee
Guangzhou Tianyue Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Tianyue Electronic Technology Co ltd filed Critical Guangzhou Tianyue Electronic Technology Co ltd
Priority to CN202110305916.8A priority Critical patent/CN113269710B/en
Publication of CN113269710A publication Critical patent/CN113269710A/en
Application granted granted Critical
Publication of CN113269710B publication Critical patent/CN113269710B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an AAU construction process detection system based on target detection. The system comprises the following steps: S1: collect an AAU dataset; S2: label manually to obtain annotation files; S3: split the dataset; S4: train the detection model; S5: deploy the model with Flask to provide a micro-service interface; S6: screen prediction results for hard samples; S7: iteratively retrain the target detection model. With the AAU construction process detection system based on target detection provided by the invention, detection is completed simply by photographing with a mobile phone and uploading the image, so operation is simple and convenient; deep learning forms an expert-like system, improving the consistency and accuracy of process detection results; and a closed-loop system of hard-sample collection and iterative model training is formed, effectively improving the detection generalization capability of the deep learning model.

Description

AAU construction process detecting system based on target detection
Technical Field
The invention relates to the field of construction process detection of AAU (Active Antenna Unit) equipment in the communication industry, in particular to an AAU construction process detection system based on target detection.
Background
Communication refers to the exchange and transmission of information between people, or between people and nature, through some action or medium. In a broad sense, it means that two or more parties that need information use any method and any medium, without violating their respective wishes, so that information is transmitted accurately and safely from one party to the other.
At present, detection of the AAU installation process in the communication industry is mainly carried out manually after construction: the judgment standard is subjective, the results are inconsistent, and subsequent review, supervision, and tracing are cumbersome.
Therefore, it is necessary to provide an AAU construction process detection system based on target detection to solve the above technical problems.
Disclosure of Invention
The invention provides an AAU construction process detection system based on target detection, which addresses the problems that detection of the AAU installation process in the communication industry is currently carried out mainly by manual inspection after construction, the judgment standard is subjective, the results are inconsistent, and subsequent review, supervision, and tracing are cumbersome.
In order to solve the above technical problems, the AAU construction process detection system based on target detection provided by the invention comprises the following steps:
S1: collect an AAU dataset;
S2: label manually to obtain annotation files;
S3: split the dataset;
S4: train the detection model;
S5: deploy the model with Flask to provide a micro-service interface;
S6: screen prediction results for hard samples;
S7: iteratively retrain the target detection model;
S8: once the process detection system is stable, hard-sample collection and iterative model training can be concluded; on-line detection of the AAU construction process is then performed by photographing with a mobile phone and uploading the images.
Preferably, collecting the AAU dataset means collecting the dataset before deep-learning training: mainly N process photos of AAU equipment during and after construction, where the photos contain the construction process detection items that require process judgment, each process in the dataset contains both qualified and unqualified items, and the number of photos N for each detection item is no less than 2000.
Preferably, in S2, labeling manually to obtain annotation files means that, after a sufficient dataset F_image has been collected, the second step of manually labeling the construction detection items is carried out; this part uses the existing labeling software LabelImg, and the main operation is to select each item to be detected with a rectangular box and assign it an appropriate label; after manual labeling, an XML file is generated for each labeled picture, containing the position coordinates of each detection item in the picture relative to the picture origin and the assigned ground-truth label; the picture dataset F_image and the corresponding XML annotation files together form the detection dataset F_0.
Preferably, in S3, splitting the dataset means splitting the detection dataset F_0 in the ratio 7:2:1 into a training dataset F_train, a test dataset F_test, and a validation dataset F_val; F_train is used to train the model, F_val is used to evaluate the quality of the model's predictions and adjust the corresponding parameters, and F_test is used to test the generalization ability of the trained model.
Preferably, in S4, training the detection model means building a target detection model, mainly using the YOLOv4 model for detection; the training dataset F_train is processed and fed into the network for training, training is stopped when the model loss becomes stable within the specified number of training rounds, and the model with the highest training precision is saved. YOLOv4 is a one-stage target detection model, so it can predict the position and class of target objects end to end; compared with YOLOv3, YOLOv4 changes the backbone to CSPDarknet53, adopts spatial pyramid pooling to enlarge the receptive field, uses the path aggregation module from PANet as the neck, changes the image augmentation scheme, and so on; this series of tricks improves both the accuracy and the speed of target detection. After initial training the system is deployed on a 2080Ti GPU, with a detection precision of 90% and a detection speed of 10 fps on AAU high-definition pictures taken with a mobile phone.
Preferably, in S5, providing the micro-service interface means that, after the optimal model is obtained, the model is deployed on a server, mainly by building an HTTP micro-service with the Python-based Flask framework to provide an online prediction interface. Once the interface is built, a constructor only needs to take a picture on the front-end Web page or mobile phone APP and upload it to the corresponding detection interface, and the target detection model on the server is called for inference. After receiving a picture, the micro-service interface calls the current optimal detection model for forward inference and obtains a real-time prediction result, namely the label, confidence, and bounding box (x, y, w, h) of each item to be detected; whether each item is qualified is judged from the predicted label, the prediction is drawn on the picture, and an XML file in the same format as the annotation files is generated, in which the position coordinates of the detection items and the predicted labels correspond one-to-one with the picture; items with predicted process errors are compiled into a quality inspection report and returned to the front-end constructor for review.
Preferably, in S6, screening prediction results for hard samples covers four cases. A: actually correct, predicted correct; B: actually correct, predicted wrong; C: actually wrong, predicted wrong; D: actually wrong, predicted correct. After the constructor receives the prediction results, cases A and C are handled by rectifying the construction-site process according to the actual conditions; cases B and D indicate model prediction errors, and the mispredicted pictures together with the corresponding returned prediction-result XML files are collected manually to form an error-prone dataset F_wrong. After a certain amount of F_wrong has been collected, it is loaded into the labeling software LabelImg to correct the wrong labels; this part serves as a hard-sample dataset, is given appropriate data augmentation, and is merged with the previous dataset F_0, which is likewise split into a training dataset F_train, a test dataset F_test, and a validation dataset F_val.
Preferably, in S7, iterative training of the target detection model means that, on the training dataset F_train obtained after merging the hard samples, model training is started again; training is stopped when the model loss becomes stable within the specified number of training rounds and the model with the highest training precision is saved; the model saved from the initial training is validated on the updated dataset, the validation precision of the two models is compared, and the model with the better prediction performance is kept; the model deployed on the server is updated, and the next round of hard-sample collection and iterative model training is carried out, forming a closed-loop system of dataset collection and model training.
Compared with the related art, the AAU construction process detection system based on target detection has the following beneficial effects:
the invention provides an AAU construction process detection system based on target detection, which realizes AAU construction process detection based on target detection, and only needs mobile phone photographing and uploading to complete detection, so that the operation is simple and convenient; forming a class expert system by deep learning, and improving consistency and accuracy of a process detection result; and a closed loop system with difficult sample collection and model iterative training is formed, so that the detection generalization capability of the deep learning model is effectively improved.
Drawings
Fig. 1 is a flowchart of an AAU construction process detection system based on target detection provided by the present invention.
Detailed Description
The invention will be further described with reference to the drawings and embodiments.
Referring to fig. 1, fig. 1 is a flowchart of the AAU construction process detection system based on target detection according to the present invention. The AAU construction process detection system based on target detection includes the following steps:
S1: collect an AAU dataset;
S2: label manually to obtain annotation files;
S3: split the dataset;
S4: train the detection model;
S5: deploy the model with Flask to provide a micro-service interface;
S6: screen prediction results for hard samples;
S7: iteratively retrain the target detection model;
S8: once the process detection system is stable, hard-sample collection and iterative model training can be concluded; on-line detection of the AAU construction process is then performed by photographing with a mobile phone and uploading the images.
In S1, collecting the AAU dataset is the dataset collection before deep-learning training: mainly N process photos of AAU (Active Antenna Unit) equipment during and after construction are collected, where the photos contain the construction process detection items that require process judgment, each process in the dataset contains both qualified and unqualified items, and the number of photos N for each detection item is no less than 2000.
In S2, labeling manually to obtain annotation files means that, after a sufficient dataset F_image has been collected, the second step of manually labeling the construction detection items is carried out; this part uses the existing labeling software LabelImg, and the main operation is to select each item to be detected with a rectangular box and assign it an appropriate label; after manual labeling, an XML file is generated for each labeled picture, containing the position coordinates of each detection item in the picture relative to the picture origin and the assigned ground-truth label; the picture dataset F_image and the corresponding XML annotation files together form the detection dataset F_0.
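As an illustration of the annotation format just described, the sketch below reads one LabelImg-style (Pascal VOC) XML file and extracts, for each detection item, the assigned label and the rectangle coordinates relative to the picture origin. The file path and helper name are assumptions for illustration, not part of the patented system.

```python
import xml.etree.ElementTree as ET

def read_annotation(xml_path):
    """Return (label, (xmin, ymin, xmax, ymax)) for every labeled detection item."""
    root = ET.parse(xml_path).getroot()
    items = []
    for obj in root.findall("object"):
        label = obj.findtext("name")                 # the label assigned in LabelImg
        box = obj.find("bndbox")                     # rectangle relative to the picture origin
        coords = tuple(int(float(box.findtext(tag))) for tag in ("xmin", "ymin", "xmax", "ymax"))
        items.append((label, coords))
    return items
```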
In S3, splitting the dataset means splitting the detection dataset F_0 in the ratio 7:2:1 into a training dataset F_train, a test dataset F_test, and a validation dataset F_val; F_train is used to train the model, F_val is used to evaluate the quality of the model's predictions and adjust the corresponding parameters, and F_test is used to test the generalization ability of the trained model.
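A minimal sketch of the 7:2:1 split of F_0 into F_train, F_test and F_val described above; the function name and the fixed random seed are illustrative assumptions.

```python
import random

def split_dataset(image_files, seed=0):
    """Shuffle the annotated pictures of F_0 and split them 7:2:1 into
    F_train (training), F_test (generalization test) and F_val (validation/tuning)."""
    files = list(image_files)
    random.Random(seed).shuffle(files)
    n = len(files)
    n_train, n_test = int(0.7 * n), int(0.2 * n)
    f_train = files[:n_train]
    f_test = files[n_train:n_train + n_test]
    f_val = files[n_train + n_test:]   # the remaining ~10%
    return f_train, f_test, f_val

# Example: f_train, f_test, f_val = split_dataset(sorted(glob.glob("dataset/*.jpg")))
```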
In S4, training the detection model means building a target detection model, mainly using the YOLOv4 model for detection; the training dataset F_train is processed and fed into the network for training, training is stopped when the model loss becomes stable within the specified number of training rounds, and the model with the highest training precision is saved. YOLOv4 is a one-stage target detection model, so it can predict the position and class of target objects end to end; compared with YOLOv3, YOLOv4 changes the backbone to CSPDarknet53, adopts spatial pyramid pooling to enlarge the receptive field, uses the path aggregation module from PANet as the neck, changes the image augmentation scheme, and so on; this series of tricks improves both the accuracy and the speed of target detection. After initial training the system is deployed on a 2080Ti GPU, with a detection precision of 90% and a detection speed of 10 fps on AAU high-definition pictures taken with a mobile phone.
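The following sketch illustrates the training policy described above: stop once the loss stops improving within the allowed rounds and keep the checkpoint with the highest precision. `train_one_epoch` and `evaluate_precision` are hypothetical stand-ins, not the actual YOLOv4 training code.

```python
import random

def train_one_epoch(epoch):
    """Hypothetical stand-in for one epoch of YOLOv4 training; returns the average loss."""
    return 1.0 / (epoch + 1) + random.uniform(0.0, 0.01)

def evaluate_precision(epoch):
    """Hypothetical stand-in for validation on F_val; returns a precision score such as mAP."""
    return min(0.90, 0.50 + 0.02 * epoch)

def train_until_loss_stable(max_rounds=300, patience=10, min_delta=1e-3):
    best_precision, best_epoch = 0.0, -1
    prev_loss, stale_rounds = float("inf"), 0
    for epoch in range(max_rounds):
        loss = train_one_epoch(epoch)
        # Loss counts as "stable" once it stops improving by more than min_delta.
        stale_rounds = stale_rounds + 1 if prev_loss - loss < min_delta else 0
        prev_loss = loss
        precision = evaluate_precision(epoch)
        if precision > best_precision:
            best_precision, best_epoch = precision, epoch
            # A real run would checkpoint the model weights here.
        if stale_rounds >= patience:
            break
    return best_epoch, best_precision

if __name__ == "__main__":
    print(train_until_loss_stable())
```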
In S5, providing the micro-service interface means that, after the optimal model is obtained, the model is deployed on a server, mainly by building an HTTP micro-service with the Python-based Flask framework to provide an online prediction interface. Once the interface is built, a constructor only needs to take a picture on the front-end Web page or mobile phone APP and upload it to the corresponding detection interface, and the target detection model on the server is called for inference. After receiving a picture, the micro-service interface calls the current optimal detection model for forward inference and obtains a real-time prediction result, namely the label, confidence, and bounding box (x, y, w, h) of each item to be detected; whether each item is qualified is judged from the predicted label, the prediction is drawn on the picture, and an XML file in the same format as the annotation files is generated, in which the position coordinates of the detection items and the predicted labels correspond one-to-one with the picture; items with predicted process errors are compiled into a quality inspection report and returned to the front-end constructor for review.
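A minimal sketch of the kind of Flask micro-service interface described above, assuming a hypothetical `run_detection` helper that wraps forward inference with the deployed detection model; the route name and response fields are illustrative, not the patent's actual implementation.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_detection(image_bytes):
    """Hypothetical stand-in for forward inference: would return the label,
    confidence and (x, y, w, h) box of each item to be detected."""
    return [{"label": "ground_wire_ok", "confidence": 0.97, "bbox": [120, 80, 64, 64]}]

@app.route("/detect", methods=["POST"])
def detect():
    if "image" not in request.files:
        return jsonify({"error": "no image uploaded"}), 400
    predictions = run_detection(request.files["image"].read())
    # Items judged unqualified from the predicted labels would be compiled
    # into a quality inspection report and returned to the constructor here.
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```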
In S6, screening prediction results for hard samples covers four cases. A: actually correct, predicted correct; B: actually correct, predicted wrong; C: actually wrong, predicted wrong; D: actually wrong, predicted correct. After the constructor receives the prediction results, cases A and C are handled by rectifying the construction-site process according to the actual conditions; cases B and D indicate model prediction errors, and the mispredicted pictures together with the corresponding returned prediction-result XML files are collected manually to form an error-prone dataset F_wrong. After a certain amount of F_wrong has been collected, it is loaded into the labeling software LabelImg to correct the wrong labels; this part serves as a hard-sample dataset, is given appropriate data augmentation, and is merged with the previous dataset F_0, which is likewise split into a training dataset F_train, a test dataset F_test, and a validation dataset F_val.
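A minimal sketch of the four-case (A/B/C/D) screening logic described above: predictions that agree with reality (A and C) are left to on-site handling, while disagreements (B and D) are collected into the error-prone set F_wrong. The record structure is an illustrative assumption.

```python
def screen_hard_samples(records):
    """Each record: {"image": path, "actual_ok": bool, "predicted_ok": bool}.
    Cases A and C (prediction matches reality) only trigger on-site handling;
    cases B and D (prediction disagrees with reality) become hard samples."""
    f_wrong = []
    for r in records:
        prediction_correct = r["actual_ok"] == r["predicted_ok"]   # case A or C
        if not prediction_correct:                                  # case B or D
            f_wrong.append(r["image"])
    return f_wrong

# Example:
# f_wrong = screen_hard_samples([
#     {"image": "site_01.jpg", "actual_ok": True,  "predicted_ok": False},  # B -> hard sample
#     {"image": "site_02.jpg", "actual_ok": False, "predicted_ok": False},  # C -> rectify on site
# ])
```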
In S7, iterative training of the target detection model means that, on the training dataset F_train obtained after merging the hard samples, model training is started again; training is stopped when the model loss becomes stable within the specified number of training rounds and the model with the highest training precision is saved; the model saved from the initial training is validated on the updated dataset, the validation precision of the two models is compared, and the model with the better prediction performance is kept; the model deployed on the server is updated, and the next round of hard-sample collection and iterative model training is carried out, forming a closed-loop system of dataset collection and model training. After construction, constructors photograph the site with a mobile phone and submit the pictures to the server for detection with one click; the server performs process detection with the pre-trained model and finally returns a detection report to the user's mobile phone, where it can be viewed and saved, improving the consistency of process detection; a closed loop of hard-sample dataset collection and iterative model training is thus formed, continuously strengthening the generalization ability of the deep learning model.
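A minimal sketch of the model-comparison step described above: both the previously saved model and the newly retrained one are validated on the updated dataset, and the better one stays deployed. `evaluate_on` is a hypothetical stand-in for the real validation routine, and its return values are placeholders for illustration only.

```python
def evaluate_on(model_name, val_set):
    """Hypothetical stand-in: would run the saved model on the updated validation
    set and return its precision (placeholder values here)."""
    return {"initial": 0.88, "retrained": 0.91}.get(model_name, 0.0)

def keep_better_model(old_model, new_model, val_set):
    """Validate both checkpoints on the updated data and keep the higher-precision one;
    the winner replaces the model deployed behind the Flask micro-service."""
    return new_model if evaluate_on(new_model, val_set) >= evaluate_on(old_model, val_set) else old_model

# Example: deployed = keep_better_model("initial", "retrained", val_set=None)
```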
Compared with the related art, the AAU construction process detection system based on target detection has the following beneficial effects:
the AAU construction process detection is realized based on target detection, and detection is completed only by shooting and uploading by a mobile phone, so that the operation is simple and convenient; forming a class expert system by deep learning, and improving consistency and accuracy of a process detection result; and a closed loop system with difficult sample collection and model iterative training is formed, so that the detection generalization capability of the deep learning model is effectively improved.
The foregoing description is only illustrative of the present invention and is not intended to limit its scope; all equivalent structures or equivalent processes, and all direct or indirect applications in other related technical fields, are likewise included in the scope of the present invention.

Claims (2)

1. The AAU construction process detection system based on target detection is characterized by comprising the following steps:
S1: collect an AAU dataset;
S2: label manually to obtain annotation files;
S3: split the dataset;
S4: train the detection model;
S5: deploy the model with Flask to provide a micro-service interface;
S6: screen prediction results for hard samples;
S7: iteratively retrain the target detection model;
S8: once the process detection system is stable, hard-sample collection and iterative model training can be concluded; on-line detection of the AAU construction process is then performed by photographing with a mobile phone and uploading the images;
in S1, collecting the AAU dataset is the collection of the dataset F_image before deep-learning training, namely collecting N process photos of AAU equipment during and after construction, where the photos contain the construction process detection items that require process judgment, each process in the dataset contains both qualified and unqualified items, and the number of photos N for each detection item is no less than 2000;
in S2, labeling manually to obtain annotation files means that, after a sufficient dataset F_image has been collected, the second step of manually labeling the construction detection items is carried out; this part uses the existing labeling software LabelImg, and the main operation is to select each item to be detected with a rectangular box and assign it an appropriate label; after manual labeling, an XML file is generated for each labeled picture, containing the position coordinates of each detection item in the picture relative to the picture origin and the assigned ground-truth label; the picture dataset F_image and the corresponding XML annotation files together form the detection dataset F_0;
in S3, splitting the dataset means splitting the detection dataset F_0 in the ratio 7:2:1 into a training dataset F_train, a test dataset F_test, and a validation dataset F_val; F_train is used to train the model, F_val is used to evaluate the quality of the model's predictions and adjust the corresponding parameters, and F_test is used to test the generalization ability of the trained model;
in S4, training the detection model means building a target detection model, mainly using the YOLOv4 model for detection; the training dataset F_train is processed and fed into the network for training, training is stopped when the model loss becomes stable within the specified number of training rounds, and the model with the highest training precision is saved; YOLOv4 is a one-stage target detection model, so it can predict the position and class of target objects end to end; compared with YOLOv3, YOLOv4 changes the backbone to CSPDarknet53, adopts spatial pyramid pooling to enlarge the receptive field, uses the path aggregation module from PANet as the neck, changes the image augmentation scheme, and so on, and this series of tricks improves both the accuracy and the speed of target detection; after initial training the system is deployed on a 2080Ti GPU, with a detection precision of 90% and a detection speed of 10 fps on AAU high-definition pictures taken with a mobile phone;
in S6, screening prediction results for hard samples covers four cases, A: actually correct, predicted correct; B: actually correct, predicted wrong; C: actually wrong, predicted wrong; D: actually wrong, predicted correct; after the constructor receives the prediction results, cases A and C are handled by rectifying the construction-site process according to the actual conditions; cases B and D indicate model prediction errors, and the mispredicted pictures together with the corresponding returned prediction-result XML files are collected manually to form an error-prone dataset F_wrong; after a certain amount of F_wrong has been collected, it is loaded into the labeling software LabelImg to correct the wrong labels; this part serves as a hard-sample dataset, is given appropriate data augmentation, and is merged with the previous dataset F_0, which is likewise split into a training dataset F_train, a test dataset F_test, and a validation dataset F_val;
in S7, iterative training of the target detection model means that, on the training dataset F_train obtained after merging the hard samples, model training is started again; training is stopped when the model loss becomes stable within the specified number of training rounds and the model with the highest training precision is saved; the model saved from the initial training is validated on the updated dataset, the validation precision of the two models is compared, and the model with the better prediction performance is kept; the model deployed on the server is updated, and the next round of hard-sample collection and iterative model training is carried out, forming a closed-loop system of dataset collection and model training.
2. The AAU construction process detection system based on target detection according to claim 1, wherein in S5, providing the micro-service interface means that, after the optimal model is obtained, the model is deployed on a server, mainly by building an HTTP micro-service with the Python-based Flask framework to provide an online prediction interface; once the interface is built, a constructor only needs to take a picture on the front-end Web page or mobile phone APP and upload it to the corresponding detection interface, and the target detection model on the server is called for inference; after receiving a picture, the micro-service interface calls the current optimal detection model for forward inference and obtains a real-time prediction result, namely the label and confidence of each item to be detected; whether each item is qualified is judged from the predicted label, the prediction is drawn on the picture, and an XML file in the same format as the annotation files is generated, in which the position coordinates of the detection items and the predicted labels correspond one-to-one with the picture; items with predicted process errors are compiled into a quality inspection report and returned to the front-end constructor for review.
CN202110305916.8A 2021-03-19 2021-03-19 AAU construction process detecting system based on target detection Active CN113269710B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110305916.8A CN113269710B (en) 2021-03-19 2021-03-19 AAU construction process detecting system based on target detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110305916.8A CN113269710B (en) 2021-03-19 2021-03-19 AAU construction process detecting system based on target detection

Publications (2)

Publication Number Publication Date
CN113269710A CN113269710A (en) 2021-08-17
CN113269710B true CN113269710B (en) 2024-04-09

Family

ID=77228441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110305916.8A Active CN113269710B (en) 2021-03-19 2021-03-19 AAU construction process detecting system based on target detection

Country Status (1)

Country Link
CN (1) CN113269710B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549473B (en) * 2022-02-23 2024-04-19 中国民用航空总局第二研究所 Road surface detection method and system with autonomous learning rapid adaptation capability


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573275A (en) * 2018-03-07 2018-09-25 浙江大学 A kind of construction method of online classification micro services
CN110826514A (en) * 2019-11-13 2020-02-21 国网青海省电力公司海东供电公司 Construction site violation intelligent identification method based on deep learning
CN112084866A (en) * 2020-08-07 2020-12-15 浙江工业大学 Target detection method based on improved YOLO v4 algorithm
CN112149761A (en) * 2020-11-24 2020-12-29 江苏电力信息技术有限公司 Electric power intelligent construction site violation detection method based on YOLOv4 improved algorithm

Also Published As

Publication number Publication date
CN113269710A (en) 2021-08-17


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant