CN113283452A - Method for detecting installation and disassembly steps of large equipment - Google Patents

Method for detecting installation and disassembly steps of large equipment

Info

Publication number
CN113283452A
CN113283452A (application CN202110646769.0A; granted publication CN113283452B)
Authority
CN
China
Prior art keywords
steps
text
installation
image
mounting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110646769.0A
Other languages
Chinese (zh)
Other versions
CN113283452B (en)
Inventor
简易成
宁德奎
张巨会
姚林
赵世范
奚正茂
杨峰
施昌平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sinohydro Bureau 7 Co Ltd
Original Assignee
Sinohydro Bureau 7 Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sinohydro Bureau 7 Co Ltd filed Critical Sinohydro Bureau 7 Co Ltd
Priority to CN202110646769.0A priority Critical patent/CN113283452B/en
Publication of CN113283452A publication Critical patent/CN113283452A/en
Application granted granted Critical
Publication of CN113283452B publication Critical patent/CN113283452B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/31Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02WCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO WASTEWATER TREATMENT OR WASTE MANAGEMENT
    • Y02W90/00Enabling technologies or technologies with a potential or indirect contribution to greenhouse gas [GHG] emissions mitigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a cross-modal method for detecting the installation and disassembly steps of large-scale equipment. The method comprises the following parts: establishing a cross-modal data set, applying a SENet network and a text module, and performing detection. In the detection, a database containing images of the installation and disassembly steps is first searched to preliminarily determine which step the current work belongs to; an image captioning technique based on deep learning then produces a textual description of the work image, and similarity calculation against a text database determines the step to which the description belongs; finally, the image and text results are combined for judgment. If both modalities agree on the same step, that step of the installation or disassembly work is confirmed; otherwise, detection and judgment are carried out again. The method constructs image and text data sets specific to the installation and disassembly of large special equipment and adopts the SENet network together with image-to-text description technology, so it adapts well to the construction scene of large special equipment, detects the installation and disassembly steps from the recognition results, and judges whether those steps are correct.

Description

Method for detecting installation and disassembly steps of large equipment
Technical Field
The invention relates to a method for installing and disassembling large-scale equipment, and in particular to a cross-modal method for detecting the installation and disassembly steps of large-scale equipment.
Background
At present, with the rapid economic development of China, infrastructure projects are increasingly numerous, and large special equipment is common construction machinery on project sites. Such equipment is structurally complex and places high demands on site safety. However, safety problems during its installation and disassembly are easily overlooked: supervision of these processes is usually carried out manually on the construction site, supervisors need rich installation and disassembly experience, and a moment of negligence can easily cause a major safety accident.
Chinese patent publication No. CN109626224A discloses a construction method for mounting and dismounting a bridge crane in a limited space, designed from the specific parameters of the bridge crane while taking the limited space into account. However, that patent makes no provision for detecting or warning of human error or possible misoperation during equipment installation and disassembly, and once human error occurs, serious consequences can follow.
Disclosure of Invention
Aiming at the defects of the prior art and the need for improvement, the invention provides a cross-modal method for detecting the installation and disassembly steps of large-scale equipment. Its purpose is to detect those steps for large special equipment, replace human supervisors with artificial intelligence, and realize intelligent management and control of the installation and disassembly process.
The invention is realized by the following technical scheme:
a method for detecting the mounting and dismounting steps of large equipment is characterized by comprising the following steps: aiming at the detected mounting and dismounting steps, a working image database of the mounting and dismounting equipment comprising images of the mounting and dismounting steps is used for searching, the mounting and dismounting steps belong to which step of the whole steps are preliminarily determined, then, the image marking technology based on deep learning is used for carrying out the writing description on the mounting and dismounting working images, the writing description is carried out in a text database for similarity calculation, so as to determine which step of the mounting and dismounting steps belongs to which step of the text database, finally, the judgment is carried out by combining the results of the images and the texts, if the images and the texts are determined to belong to one step at the same time, the step of the mounting and dismounting work is confirmed, and if the images and the texts belong to one step, the detection judgment is carried out again.
Further, the method for detecting the installation and disassembly steps of large-scale equipment comprises the following parts: cross-modal data set establishment, SENet network and text module application, and detection.
The cross-modal data set establishment is as follows: collect image data and text data from the installation and disassembly process of the large special equipment, label each image according to its installation or disassembly step, and record the true sequence of steps to which each image corresponds, thereby constructing a data set for subsequently training the deep learning model;
the SENet network is used to identify image information during the installation and disassembly of the large special equipment; the text module uses a deep network to produce a textual description of the installation or disassembly image.
Finally, a decision is made by combining the labels obtained from the image and from the text: if they agree, the step number of the current installation or disassembly work is determined; otherwise the network is further fine-tuned and the detection is repeated.
The cross-modal data set comprises two parts, an image data set and a text data set. The two are correlated: the installation and disassembly steps of the large-scale equipment are described simultaneously by images and texts, the images are numbered according to the order of the steps, and each image is accompanied by its text description.
The SENet network is the convolutional neural network used for feature extraction.
Further, the SENet network and text module application is:
each image in the training set is converted into a feature vector by SENet; an input image from the installation or disassembly process is likewise converted into a feature vector by SENet, compared with the feature vectors of the training set, and the number of the closest match is selected as the number of the current installation or disassembly step;
similarly, SENet extracts the feature vector of the image, which is input into an LSTM to obtain its text description; the texts in the text database are converted into text vectors by Word2Vec, the text description of the installation or disassembly work is converted into a Word2Vec vector, the distance between this vector and the text vector of each step in the database is calculated, and the number of the closest result is selected as the current installation or disassembly step.
Compared with the prior art, the method constructs image and text data sets specific to the installation and disassembly of large special equipment and adopts the SENet network and image-to-text description technology; it adapts well to the construction scene of large special equipment, detects the installation and disassembly steps from the recognition results, and judges whether those steps are correct.
The invention provides a cross-modal method for detecting the installation and disassembly steps of large-scale equipment. The method detects and warns of human error or possible misoperation during equipment installation and disassembly, and can effectively avoid the serious losses and consequences that human error would otherwise cause.
Drawings
Fig. 1 shows the core module of the SENet network used to identify image information during the installation and disassembly of large special equipment according to an embodiment of the present invention;
FIG. 2 shows the core gate mechanism of the LSTM module used to produce the text description of the installation or disassembly image according to an embodiment of the present invention;
Fig. 3 shows the comparison of image information with text information during an installation or disassembly step and the final check of the detected step according to an embodiment of the present invention.
Detailed Description
The present invention is further described below in conjunction with specific embodiments. These are intended to illustrate the principles of the invention, not to limit it in any way; anything equivalent or analogous to the invention that does not depart from its scope remains within its protection. The techniques involved in the embodiments described below may be combined with each other as long as they do not conflict.
With reference to the attached drawings.
The invention provides a cross-modal method for detecting the installation and disassembly steps of large-scale equipment. Installing and disassembling large-scale equipment is a very complicated process: it requires not only accurate fitting of each component but also a definite order of steps. Some of the work builds on work already completed, which means that if the order is wrong, the installation or disassembly of the whole piece of equipment is difficult to complete successfully, and restoring the correct order consumes large amounts of manpower and material resources.
In the detection method provided by the invention, the image of the installation or disassembly work in progress is first retrieved against the images in the step-image database to preliminarily determine which step of the whole procedure the work belongs to; an image captioning technique based on deep learning then produces a textual description of the image, and similarity calculation against a text database determines the step to which the description belongs; finally the image and text results are combined for judgment. If both point to the same step, that step of the installation or disassembly work is confirmed; otherwise the model is adjusted and detection is performed again. The method can effectively monitor the whole installation and disassembly process of large-scale equipment, saving cost while ensuring that the work proceeds normally.
The cross-modal detection method for the installation and disassembly steps of large-scale equipment is specifically divided into three parts:
(1) Training and testing the model requires a labeled data set. The data set is divided into two parts, an image data set and a text data set, and the two are correlated: the installation and disassembly steps of the large-scale equipment are described simultaneously by images and texts, the images are numbered according to the order of the steps, and the text description of each image is placed below it.
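The paired data set described in (1) can be sketched as a simple indexed structure. The step numbers, file names, and captions below are illustrative assumptions, not data from the patent:

```python
# Hypothetical records for the cross-modal data set: each installation or
# disassembly step is numbered in sequence, and every record carries both an
# image reference and its text description. All values here are invented
# purely for illustration.
dataset = [
    {"step": 1, "image": "step_01.jpg", "text": "level and anchor the base frame"},
    {"step": 2, "image": "step_02.jpg", "text": "erect the first mast section"},
    {"step": 3, "image": "step_03.jpg", "text": "mount the slewing unit on the mast"},
]

def captions_by_step(records):
    """Index the text descriptions by step number for later similarity lookup."""
    return {r["step"]: r["text"] for r in records}

index = captions_by_step(dataset)
print(index[2])  # -> erect the first mast section
```

A real data set would hold one such record per photographed moment of the procedure, with the true step number serving as the training label.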
(2) So that the extracted image features better describe the installation and disassembly work, the method uses a SENet network as the convolutional neural network for feature extraction. Each image in the training set is converted into a feature vector by SENet; an input installation or disassembly image is likewise converted into a feature vector, compared with the training-set feature vectors, and the number of the closest match is selected as the number of the current step.
(3) SENet is also used to extract the feature vector of the image, which is then input into an LSTM to obtain a textual description of it. First the texts in the text database are converted into text vectors with Word2Vec; then the text description of the installation or disassembly work is converted into a Word2Vec vector, the distance between this vector and the text vector of each step in the database is calculated, and the number of the closest result is taken as the current step. Finally, a decision is made by combining the labels obtained from the image and from the text: if they agree, the step number is determined; otherwise the network is further fine-tuned and the detection is repeated.
In the detection method for the installation and removal steps of the cross-mode large-scale equipment, for extracting image features, various excellent convolutional neural networks are available at present, in order to balance the complexity of the networks and the identification accuracy of the networks, the invention adopts SENET as the image feature extraction network, and the core of the SENET is shown in figure 1.
First, a representation of each channel is obtained by global average pooling. This descriptor passes through two fully connected layers, and a sigmoid function converts each resulting value into a probability between 0 and 1 that represents the importance of the corresponding channel. Finally, these weights are multiplied with the original feature map and the result is passed to the next layer, as expressed in formula (1):
$$z_c = \frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W} u_c(i,j), \qquad s = \sigma\big(W_2\,\delta(W_1 z)\big), \qquad \tilde{x}_c = s_c \cdot u_c \tag{1}$$
where $u_c$ is the $c$-th $H \times W$ feature map, $\delta$ is the ReLU between the two fully connected layers $W_1$ and $W_2$, and $\sigma$ is the sigmoid function.
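As a rough illustration of formula (1), the channel-gating computation can be traced in plain Python. The tiny feature maps and dense-layer weights below are toy assumptions standing in for the trained fully connected layers:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_gate(channels, w1, w2):
    """Toy squeeze-and-excitation gate over a list of C feature maps (2D lists).

    w1 (C x H') and w2 (H' x C) stand in for the two fully connected layers.
    """
    # Squeeze: global average pooling gives one descriptor per channel.
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in channels]
    # Excitation: FC -> ReLU -> FC -> sigmoid yields a weight in (0, 1) per channel.
    hidden = [max(0.0, sum(z[i] * w1[i][j] for i in range(len(z))))
              for j in range(len(w1[0]))]
    s = [sigmoid(sum(hidden[j] * w2[j][c] for j in range(len(hidden))))
         for c in range(len(channels))]
    # Scale: multiply each original feature map by its channel weight.
    scaled = [[[v * s[c] for v in row] for row in channels[c]]
              for c in range(len(channels))]
    return scaled, s

channels = [[[1.0, 1.0], [1.0, 1.0]],   # channel 0: constant map of 1s
            [[2.0, 2.0], [2.0, 2.0]]]   # channel 1: constant map of 2s
scaled, weights = se_gate(channels, [[1.0], [1.0]], [[1.0, 1.0]])
print(weights)  # two channel weights, each strictly between 0 and 1
```

In the actual network these gates sit inside residual blocks and the weights come from training, not from the hand-picked values used here.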
the whole network is a residual error network using an excitation extrusion module, and 2048-dimensional feature vectors obtained by the network are reduced to 512-dimensional feature vectors by 1 × 1 convolution in order to reduce the computational complexity. Inputting each image corresponding to the installation and disassembly steps of the same device in the image database into the network to obtain a characteristic vector, inputting the tested installation and disassembly images into the network to obtain the characteristic vector of the image, calculating the distance between the vector and each vector in the database, and selecting the number corresponding to the image with the minimum distance as the number of the operation. Here, the distance between vectors is calculated using the euclidean distance as shown in equation (2).
$$d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2} \tag{2}$$
That is, the squared difference is computed for the elements at each position of the two vectors, the results are summed, and the square root of the sum is taken.
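The nearest-step lookup built on equation (2) reduces to a minimum-distance search over the stored step vectors. The 2-dimensional vectors below are toy assumptions (the patent's features are 512-dimensional):

```python
import math

def euclidean(a, b):
    """Equation (2): square root of the summed squared element differences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_step(query, database):
    """Return the step number whose stored feature vector is closest to `query`.

    `database` maps step number -> feature vector; this layout is an assumption.
    """
    return min(database, key=lambda step: euclidean(query, database[step]))

db = {1: [0.0, 0.0], 2: [1.0, 1.0], 3: [4.0, 4.0]}
print(nearest_step([0.9, 1.2], db))  # -> 2
```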
The image feature vector of the test image obtained above is saved and then input into the LSTM. The LSTM (long short-term memory) network is a special kind of recurrent neural network that alleviates the vanishing- and exploding-gradient problems encountered when training on long sequences. Its core is three gate mechanisms: the input gate, the forget gate, and the output gate. For the forget gate, the formula is shown in (3):
$$f_t = \sigma\big(W_f \cdot [h_{t-1}, x_t] + b_f\big) \tag{3}$$
Here $h_{t-1}$ is the output (hidden state) of the previous cell, $x_t$ is the input of the current cell, $W_f$ and $b_f$ are the forget gate's weight matrix and bias, and $\sigma$ is the sigmoid function. Through its gates the LSTM decides which information to discard from and retain in the cell state. After the image feature vector passes through the LSTM, a text description of the image is finally obtained; the overall architecture is shown in Fig. 2.
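Equation (3) can be traced numerically for a single toy gate unit; the dimensions and weight values below are assumptions chosen only for illustration:

```python
import math

def forget_gate(h_prev, x_t, W_f, b_f):
    """f_t = sigmoid(W_f . [h_{t-1}, x_t] + b_f), computed per gate unit.

    h_prev and x_t are concatenated; each unit takes a weighted sum plus its
    bias and squashes it to (0, 1): near 0 means forget, near 1 means keep.
    """
    concat = h_prev + x_t  # [h_{t-1}, x_t]
    return [1.0 / (1.0 + math.exp(-(sum(w * v for w, v in zip(row, concat)) + b)))
            for row, b in zip(W_f, b_f)]

# One gate unit over a 2-dim hidden state and a 1-dim input (toy weights).
f = forget_gate([0.5, -0.5], [1.0], W_f=[[1.0, 1.0, 1.0]], b_f=[0.0])
print(f[0])  # sigmoid(0.5 - 0.5 + 1.0) = sigmoid(1.0), about 0.731
```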
First, the texts in the text database are converted into text vectors with Word2Vec; then the generated description of the installation or disassembly test image is likewise converted into a text vector. As before, the distance between this vector and each text vector in the database is calculated; for an accurate comparison, cosine similarity is used, as shown in formula (4):
$$\cos(\theta) = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2}\;\sqrt{\sum_{i=1}^{n} B_i^2}} \tag{4}$$
where $A$ and $B$ are the two text vectors being compared.
Finally, the number of the database vector most similar to the test vector is taken as the number of the current work step. When the numbers obtained from the image information and from the text information agree, the position of the current work within the whole installation and disassembly procedure is determined; it is compared against the true step to decide whether the work is correct, and the result is fed back to the operator. The decision process is shown in Figure 3.
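The cosine comparison of formula (4) and the final two-modality decision can be sketched as follows; the step numbers and the return labels are illustrative assumptions, not terms from the patent:

```python
import math

def cosine(a, b):
    """Formula (4): dot(a, b) divided by the product of the vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def decide(image_step, text_step, expected_step):
    """Fuse the modalities: confirm only when image and text agree, then check
    the agreed step against the true sequence position."""
    if image_step != text_step:
        return "re-detect"       # modalities disagree: adjust the model, retry
    if image_step == expected_step:
        return "correct"         # agreed step matches the expected one
    return "warn-operator"       # agreed but out of order: feed back to operator

print(cosine([1.0, 0.0], [1.0, 0.0]))  # -> 1.0
print(decide(4, 4, 4))                 # -> correct
print(decide(4, 5, 4))                 # -> re-detect
```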

Claims (5)

1. A method for detecting the installation and disassembly steps of large-scale equipment, characterized in that: for the installation or disassembly work to be detected, a working-image database containing images of the installation and disassembly steps is searched to preliminarily determine which step the work belongs to; a textual description of the working image is then produced with an image captioning technique based on deep learning, and similarity calculation against a text database determines which step in the text database the work belongs to; finally the image and text results are combined for judgment: if both determine the same step, that step of the installation or disassembly work is confirmed; otherwise detection and judgment are performed again.
2. The method for detecting the installation and disassembly steps of large-scale equipment according to claim 1, comprising the following parts: establishing a cross-modal data set, applying a SENet network and a text module, and detecting;
the cross-modal data set establishment is: collecting image data and text data from the installation and disassembly process of the large special equipment, labeling each image according to its installation or disassembly step, and recording the true sequence of steps to which each image corresponds, thereby constructing a data set for subsequently training the deep learning model;
the SENet network is for identifying image information during the installation and disassembly of the large special equipment; the text module application is for producing a textual description of the installation or disassembly image with a deep network;
the detection is: a decision is made by combining the labels obtained from the image and from the text; if they agree, the step number of the installation or disassembly work is determined; otherwise the SENet network and the text module are further adjusted or retrained and the detection step is repeated.
3. The method for detecting the installation and disassembly steps of large-scale equipment according to claim 2, characterized in that: the cross-modal data set comprises two parts, an image data set and a text data set; the two are correlated: the installation and disassembly steps of the large-scale equipment are described simultaneously by images and texts, the images are numbered according to the order of the steps, and each image is provided with its text description.
4. The method for detecting the installation and disassembly steps of large-scale equipment according to claim 3, characterized in that: the SENet network is the convolutional neural network used for feature extraction.
5. The method for detecting the installation and disassembly steps of large-scale equipment according to claim 3, wherein the SENet network and text module application is:
each image in the training set is converted into a feature vector by SENet; an input image from the installation or disassembly process is likewise converted into a feature vector by SENet, compared with the feature vectors of the training set, and the number of the closest match is selected as the number of the current installation or disassembly step;
similarly, SENet extracts the feature vector of the image, which is input into an LSTM to obtain its text description; the texts in the text database are converted into text vectors by Word2Vec, the text description of the installation or disassembly work is converted into a Word2Vec vector, the distance between this vector and the text vector of each step in the database is calculated, and the number of the closest result is selected as the current installation or disassembly step.
CN202110646769.0A 2021-06-10 2021-06-10 Large-scale equipment mounting and dismounting step detection method Active CN113283452B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110646769.0A CN113283452B (en) 2021-06-10 2021-06-10 Large-scale equipment mounting and dismounting step detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110646769.0A CN113283452B (en) 2021-06-10 2021-06-10 Large-scale equipment mounting and dismounting step detection method

Publications (2)

Publication Number Publication Date
CN113283452A true CN113283452A (en) 2021-08-20
CN113283452B CN113283452B (en) 2023-07-25

Family

ID=77284133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110646769.0A Active CN113283452B (en) 2021-06-10 2021-06-10 Large-scale equipment mounting and dismounting step detection method

Country Status (1)

Country Link
CN (1) CN113283452B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170061250A1 (en) * 2015-08-28 2017-03-02 Microsoft Technology Licensing, Llc Discovery of semantic similarities between images and text
CN108595636A (en) * 2018-04-25 2018-09-28 复旦大学 The image search method of cartographical sketching based on depth cross-module state correlation study
CN111738042A (en) * 2019-10-25 2020-10-02 北京沃东天骏信息技术有限公司 Identification method, device and storage medium
CN111782852A (en) * 2020-06-23 2020-10-16 西安电子科技大学 High-level semantic image retrieval method based on deep learning
CN111914589A (en) * 2019-05-07 2020-11-10 大金工业株式会社 Monitoring method, computing equipment, device, monitoring system and computer-readable storage medium for installation process of air conditioning unit
CN112905810A (en) * 2021-02-09 2021-06-04 吴兆江 Cross-modal image-text retrieval method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZIQIANG ZHANG, 《COMPUTERS AND ELECTRONICS IN AGRICULTURE》, vol. 166, pages 1 - 11 *
张超, 《人民黄河》, vol. 41, pages 207 - 208 *

Also Published As

Publication number Publication date
CN113283452B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN111598860B (en) Lithium battery defect detection method based on yolov3 network embedded into self-attention door module
CN111368690B (en) Deep learning-based video image ship detection method and system under influence of sea waves
CN111861978A (en) Bridge crack example segmentation method based on Faster R-CNN
CN110263934B (en) Artificial intelligence data labeling method and device
CN110751076B (en) Vehicle detection method
CN111476307A (en) Lithium battery surface defect detection method based on depth field adaptation
CN114022904A (en) Noise robust pedestrian re-identification method based on two stages
CN113780345A (en) Small sample classification method and system facing small and medium-sized enterprises and based on tensor attention
CN117516937A (en) Rolling bearing unknown fault detection method based on multi-mode feature fusion enhancement
CN115757103A (en) Neural network test case generation method based on tree structure
CN115019294A (en) Pointer instrument reading identification method and system
CN114897085A (en) Clustering method based on closed subgraph link prediction and computer equipment
CN110717602A (en) Machine learning model robustness assessment method based on noise data
CN111290953B (en) Method and device for analyzing test logs
CN113283452B (en) Large-scale equipment mounting and dismounting step detection method
CN108345943B (en) Machine learning identification method based on embedded coding and contrast learning
CN115753102A (en) Bearing fault diagnosis method based on multi-scale residual error sub-domain adaptation
CN114757287A (en) Automatic testing method based on multi-mode fusion of text and image
CN114550197A (en) Terminal strip image detection information matching method
CN113688735A (en) Image classification method and device and electronic equipment
CN117951632B (en) PU contrast learning anomaly detection method and system based on multi-mode prototype network
CN116560894B (en) Unmanned aerial vehicle fault data analysis method, server and medium applying machine learning
CN118132738B (en) Extraction type question-answering method for bridge evaluation text
CN118115591B (en) Power prediction model training method, photovoltaic power station operation and maintenance method and related devices
CN118036555B (en) Low-sample font generation method based on skeleton transfer and structure contrast learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant