CN116168246A - Method, device, equipment and medium for identifying waste slag field for railway engineering - Google Patents


Info

Publication number
CN116168246A
CN116168246A (application number CN202310151598.3A)
Authority
CN
China
Prior art keywords
image information
information
engineering
network model
training
Prior art date
Legal status
Pending
Application number
CN202310151598.3A
Other languages
Chinese (zh)
Inventor
陈泽昊
张洁瑜
汤晓光
魏强
余小周
郝光
李刚
王大鹏
李德良
程驰
李志源
韩美清
吉奕康
戎玉博
方敏哲
王祺
韩丽源
郭锐
王志刚
周杨
刘红良
张天龙
王俊彦
黄录峰
杨皓元
Current Assignee
China Railway Construction Management Co ltd
China Academy of Railway Sciences Corp Ltd CARS
China State Railway Group Co Ltd
Energy Saving and Environmental Protection and Occupational Safety and Health Research of CARS
Original Assignee
China Railway Construction Management Co ltd
China Academy of Railway Sciences Corp Ltd CARS
China State Railway Group Co Ltd
Energy Saving and Environmental Protection and Occupational Safety and Health Research of CARS
Priority date
Filing date
Publication date
Application filed by China Railway Construction Management Co ltd, China Academy of Railway Sciences Corp Ltd CARS, China State Railway Group Co Ltd, Energy Saving and Environmental Protection and Occupational Safety and Health Research of CARS filed Critical China Railway Construction Management Co ltd
Priority to CN202310151598.3A priority Critical patent/CN116168246A/en
Publication of CN116168246A publication Critical patent/CN116168246A/en
Pending legal-status Critical Current

Classifications

    • G06V 10/764 — Image or video recognition or understanding using pattern recognition or machine learning; using classification, e.g. of video objects
    • G06N 3/08 — Computing arrangements based on biological models; neural networks; learning methods
    • G06V 10/806 — Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature-extraction or classification level; fusion of extracted features
    • G06V 10/82 — Image or video recognition or understanding using pattern recognition or machine learning; using neural networks
    • G06V 20/10 — Scenes; scene-specific elements; terrestrial scenes
    • Y02W 90/00 — Enabling technologies or technologies with a potential or indirect contribution to greenhouse gas [GHG] emissions mitigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a method, a device, equipment and a medium for identifying a waste slag field for railway engineering, in the technical field of engineering environmental protection, comprising the following steps: acquiring remote sensing image information and real-time image information; preprocessing the remote sensing image information to obtain spectral image information; inputting the spectral image information into a classification model to obtain engineering image information and annotation vector information; binding the annotation vector information to the corresponding engineering image information to obtain an annotation vector file; performing vector-to-raster conversion on the annotation vector file and taking the resulting image pixel values together with the engineering image information as training samples; performing feature fusion on the training samples to obtain edge fusion features; training a preset network model on the edge fusion features to obtain a trained recognition network model; and inputting the real-time image information into the trained recognition network model to obtain waste slag field identification information. The method and the device have the effect of improving the identification efficiency of the waste slag field.

Description

Method, device, equipment and medium for identifying waste slag field for railway engineering
Technical Field
The application relates to the field of engineering environmental protection, in particular to a method, a device, equipment and a medium for identifying a waste slag field for railway engineering.
Background
Railway engineering refers both to the civil engineering facilities of a railway and to the technologies applied at each stage of building one (survey and design, construction, maintenance and reconstruction). The construction and development of railways provide convenient transportation for people and promote economic and social development.
At present, during the implementation of railway construction projects, large amounts of waste slag accumulate and form waste slag fields, which seriously affect the surrounding ecological environment, so ecological impact monitoring is required during railway construction. This monitoring generally uses satellite remote sensing images to classify and identify surface patch content and to quantify the impact of construction on land-use types. In the traditional image classification and identification method, ground-object content is classified and labeled manually in specialized software such as ArcGIS, and the software then computes statistics from the labeled results, realizing monitoring of the emergence of waste slag fields and of their impact on the ecological environment.
With respect to this related art, the inventors consider that manual classification and labeling is slow, and that subjective human judgment makes labeling accuracy hard to guarantee, so waste slag field identification suffers from low efficiency.
Disclosure of Invention
In order to improve the recognition efficiency of the waste slag field, the application provides a method, a device, equipment and a medium for recognizing the waste slag field for railway engineering.
In a first aspect, the present application provides a method for identifying a slag disposal site for railway engineering, which adopts the following technical scheme:
a method of slag yard identification for railway engineering, comprising:
acquiring remote sensing image information and real-time image information, wherein the remote sensing image information is used for representing satellite remote sensing image information constructed by railway engineering at different area positions, and the real-time image information is used for representing satellite remote sensing image information within a preset range along the current railway;
preprocessing the remote sensing image information to obtain spectrum image information;
inputting the spectral image information into a trained classification model for training to obtain engineering image information and annotation vector information corresponding to the engineering image information, wherein the engineering image information is used for representing image information of different types of scenes in the railway engineering construction process, and the annotation vector information is used for representing three-dimensional geographic coordinate information corresponding to the engineering image information;
correspondingly binding the labeling vector information with the engineering image information to obtain a labeling vector file;
performing vector grid conversion processing on the labeling vector file, and taking an image pixel value obtained after the processing and engineering image information corresponding to the image pixel value as a training sample, wherein the image pixel value is used for representing a pixel value corresponding to each engineering image information in the labeling vector file;
performing feature fusion processing on the training sample to obtain edge fusion features;
training a preset network model based on the edge fusion characteristics to obtain a trained identification network model;
and inputting the real-time image information into a trained identification network model for training to obtain the identification information of the waste slag field.
In another possible implementation manner, the preprocessing the remote sensing image information to obtain spectral image information includes:
performing geometric correction processing on the remote sensing image information to obtain corrected image information;
performing image fusion processing on the corrected image information and the multispectral image to obtain fusion image information;
and performing image mosaic processing on the fused image information to obtain spectrum image information.
In another possible implementation manner, the performing feature fusion processing on the training sample to obtain an edge fusion feature includes:
establishing a first DSM model based on the image pixel values;
extracting feature data information in the first DSM model, and extracting DSN-level edge characteristics of the feature data information and the engineering image information to obtain edge detection results of different scales, wherein the feature data information comprises feature type information and space coordinate data corresponding to the feature type information;
and carrying out edge feature fusion on the engineering image information, the ground feature data information and the edge detection results with different scales to obtain edge fusion features.
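The patent does not spell out its DSN/HED-style multi-scale edge extraction. As a rough illustration of the underlying idea only — computing edge responses at several scales and fusing them into one feature map — the numpy sketch below takes Sobel gradients on progressively smoothed copies of an image and averages them; the box blur, scale set, and averaging fusion are all illustrative assumptions, not the patented network:

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via 3x3 Sobel kernels (interior pixels only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out

def box_blur(img, k):
    """Crude smoothing: mean over a (2k+1)^2 window, clamped at the borders."""
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = img[max(0, i - k):i + k + 1, max(0, j - k):j + k + 1].mean()
    return out

def multiscale_edge_fusion(img, scales=(0, 1, 2)):
    """Average the edge maps computed at several smoothing scales."""
    maps = [sobel_edges(box_blur(img, k) if k else img.astype(float))
            for k in scales]
    return np.mean(maps, axis=0)

# A vertical step edge: the fused map responds along the boundary columns.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
fused = multiscale_edge_fusion(img)
```

In a real pipeline these per-scale responses would come from a network's side outputs rather than Sobel filters, but the fusion step — stacking or averaging multi-scale edge maps into one feature — has the same shape.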
In another possible implementation manner, the training the preset network model based on the edge fusion feature to obtain a trained identification network model includes:
creating a first classification network model and a second classification network model, wherein the first classification network model is used for identifying and training the engineering type of the engineering image information, and the second classification network model is used for identifying and training the feature characteristics in the feature data information;
training the first classification network model based on the engineering image information and the edge fusion characteristics to obtain a trained first classification network model;
training the second classification network model based on the DSM model and the edge fusion feature to obtain a trained second classification network model;
and carrying out feature fusion on the first classification network model and the second classification network model to obtain an identification network model.
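The fusion of the two trained classification branches is described only at a high level. One common realization is decision-level fusion: averaging the class-probability outputs of both branches. The sketch below assumes softmax outputs and equal weighting — both illustrative choices not fixed by the text:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_predictions(logits_a, logits_b, w=0.5):
    """Decision-level fusion: a weighted average of the class probabilities
    produced by the two classification branches."""
    return w * softmax(logits_a) + (1 - w) * softmax(logits_b)

# Branch A (image branch) is nearly indifferent; branch B (terrain/DSM branch)
# strongly favours class 1, so the fused prediction follows branch B.
logits_a = np.array([0.1, 0.0, -0.1])
logits_b = np.array([-2.0, 3.0, -2.0])
probs = fuse_predictions(logits_a, logits_b)
pred = int(np.argmax(probs))
```

Feature-level fusion (concatenating intermediate feature maps before a shared head) is the other standard reading of "feature fusion" here; the decision-level version is shown only because it is self-contained.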
In another possible implementation manner, the inputting the real-time image information into the trained recognition network model for training to obtain the slag disposal site recognition information includes:
performing overlapped slicing processing on the real-time image information to obtain cut image information;
constructing a second DSM model based on the cutting image information, and calling DSM data in the second DSM model;
and inputting the cutting image information and the DSM data into the identification network model for prediction training to obtain the identification information of the waste slag field.
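The overlapped-slicing step can be sketched as a sliding window with a fixed overlap, clamping the last row and column of tiles to the image border so the whole scene is covered. The 256-pixel tile and 32-pixel overlap below are illustrative values; the patent does not specify them:

```python
def overlapped_slices(height, width, tile=256, overlap=32):
    """Return (row0, col0, row1, col1) windows covering the image with the
    given overlap, clamping the last tiles to the image border."""
    step = tile - overlap
    rows = list(range(0, max(height - tile, 0) + 1, step))
    cols = list(range(0, max(width - tile, 0) + 1, step))
    if rows[-1] + tile < height:      # ensure the bottom edge is covered
        rows.append(height - tile)
    if cols[-1] + tile < width:       # ensure the right edge is covered
        cols.append(width - tile)
    return [(r, c, min(r + tile, height), min(c + tile, width))
            for r in rows for c in cols]

# A 600 x 600 scene cut into 256-pixel tiles with 32 pixels of overlap.
windows = overlapped_slices(600, 600)
```

The deliberate overlap is what later makes the per-pixel voting over overlapping slices possible.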
In another possible implementation manner, the inputting the cutting image information and the DSM data into the recognition network model to perform predictive training, and obtaining the slag disposal site recognition information further includes:
judging whether overlapping slice images exist in the cutting image information or not;
if yes, determining a pixel prediction value corresponding to each overlapped slice image based on the spoil field identification information;
and carrying out occurrence rate ranking on the pixel predicted values corresponding to each overlapped slice image to obtain a target predicted result.
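The "occurrence rate ranking" of pixel prediction values can be read as a per-pixel majority vote across the overlapping slices that cover the pixel; a minimal sketch under that reading:

```python
from collections import Counter

def vote_pixel(predictions):
    """Most frequently predicted class for a pixel covered by several
    overlapping slices; ties go to the class encountered first."""
    return Counter(predictions).most_common(1)[0][0]

# Three overlapping slices saw this pixel: slag field (1) twice, background (0) once,
# so the voted target prediction is the slag-field class.
voted = vote_pixel([1, 0, 1])
```

Applied per pixel over the whole mosaic, this turns conflicting slice-level predictions into a single target prediction result.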
In another possible implementation, the method further includes:
performing fine-fragment classification and removal on the waste slag field identification information to obtain optimized identification information;
performing corresponding spatial data import on the optimized identification information based on the real-time image information to obtain coordinate identification information;
performing grid vector conversion processing on the coordinate identification information to obtain vector identification information;
judging whether the vector identification information has preset abnormality or not, if so, generating intervention information to inform staff to perform intervention correction on the vector identification information.
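The text does not define the "preset abnormality". One plausible example, shown purely as an assumption, is flagging vector polygons whose area is implausibly small for a slag field; the sketch computes polygon areas with the shoelace formula and flags those under an assumed threshold so staff can review them:

```python
def polygon_area(coords):
    """Shoelace formula for a simple polygon given as (x, y) vertices."""
    n = len(coords)
    s = sum(coords[i][0] * coords[(i + 1) % n][1]
            - coords[(i + 1) % n][0] * coords[i][1] for i in range(n))
    return abs(s) / 2.0

def flag_abnormal(polygons, min_area=100.0):
    """Indices of polygons whose area falls below an assumed plausibility
    threshold -- one illustrative notion of a 'preset abnormality'."""
    return [i for i, p in enumerate(polygons) if polygon_area(p) < min_area]

polys = [
    [(0, 0), (50, 0), (50, 50), (0, 50)],  # 2500 sq units: plausible
    [(0, 0), (3, 0), (3, 3), (0, 3)],      # 9 sq units: flagged for review
]
abnormal = flag_abnormal(polys)
```

In practice the anomaly test could equally be topological (self-intersections, slivers) or positional (outside the railway buffer); the area check is only one concrete stand-in.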
In a second aspect, the present application provides a slag disposal site identification device for railway engineering, which adopts the following technical scheme:
a slag yard identification device for railway engineering, comprising:
the information acquisition module is used for acquiring remote sensing image information and real-time image information, wherein the remote sensing image information is used for representing satellite remote sensing image information built by railway engineering at different area positions, and the real-time image information is used for representing satellite remote sensing image information within a preset range along the current railway;
the image preprocessing module is used for preprocessing the remote sensing image information to obtain spectrum image information;
the image classification module is used for inputting the spectrum image information into a trained classification model for training to obtain engineering image information and annotation vector information corresponding to the engineering image information, wherein the engineering image information is used for representing image information of different types of scenes in the railway engineering construction process, and the annotation vector information is used for representing three-dimensional geographic coordinate information corresponding to the engineering image information;
the information binding module is used for correspondingly binding the annotation vector information with the engineering image information to obtain an annotation vector file;
the vector conversion module is used for carrying out vector grid conversion processing on the labeling vector file, taking the processed image pixel value and engineering image information corresponding to the image pixel value as training samples, wherein the image pixel value is used for representing the pixel value corresponding to each engineering image information in the labeling vector file;
the feature fusion module is used for carrying out feature fusion processing on the training sample to obtain edge fusion features;
the network training module is used for training a preset network model based on the edge fusion characteristics to obtain a trained identification network model;
and the image recognition module is used for inputting the real-time image information into the trained recognition network model for training to obtain the waste slag field recognition information.
In one possible implementation manner, the image preprocessing module is specifically configured to, when preprocessing the remote sensing image information to obtain spectral image information:
performing geometric correction processing on the remote sensing image information to obtain corrected image information;
performing image fusion processing on the corrected image information and the multispectral image to obtain fusion image information;
and performing image mosaic processing on the fused image information to obtain spectrum image information.
In another possible implementation manner, the feature fusion module is specifically configured to, when performing feature fusion processing on the training sample to obtain an edge fusion feature:
establishing a first DSM model based on the image pixel values;
extracting feature data information in the first DSM model, and extracting DSN-level edge characteristics of the feature data information and the engineering image information to obtain edge detection results of different scales, wherein the feature data information comprises feature type information and space coordinate data corresponding to the feature type information;
and carrying out edge feature fusion on the engineering image information, the ground feature data information and the edge detection results with different scales to obtain edge fusion features.
In another possible implementation manner, the network training module is specifically configured to, when training a preset network model based on the edge fusion feature to obtain a trained identification network model:
creating a first classification network model and a second classification network model, wherein the first classification network model is used for identifying and training the engineering type of the engineering image information, and the second classification network model is used for identifying and training the feature characteristics in the feature data information;
training the first classification network model based on the engineering image information and the edge fusion characteristics to obtain a trained first classification network model;
training the second classification network model based on the DSM model and the edge fusion feature to obtain a trained second classification network model;
and carrying out feature fusion on the first classification network model and the second classification network model to obtain an identification network model.
In another possible implementation manner, the image recognition module is specifically configured to, when inputting the real-time image information into a trained recognition network model to perform training to obtain the slag disposal site recognition information:
performing overlapped slicing processing on the real-time image information to obtain cut image information;
constructing a second DSM model based on the cutting image information, and calling DSM data in the second DSM model;
and inputting the cutting image information and the DSM data into the identification network model for prediction training to obtain the identification information of the waste slag field.
In another possible implementation, the apparatus further includes: an overlap judging module, a pixel determining module and a pixel arrangement module, wherein,
the overlapping judging module is used for judging whether overlapping slice images exist in the cutting image information or not;
the pixel determining module is used for determining a pixel prediction value corresponding to each overlapped slice image based on the waste slag field identification information when the overlapped slice image exists in the cutting image information;
and the pixel ranking module is used for performing occurrence ranking on the pixel predicted values corresponding to each overlapped slice image to obtain a target predicted result.
In another possible implementation, the apparatus further includes: a fine classification module, a data importing module, a vector conversion module and a vector judgment module, wherein,
the fine-fragment classification module is used for classifying and removing fine fragmented patches from the waste slag field identification information to obtain optimized identification information;
the data importing module is used for importing corresponding spatial data to the optimized identification information based on the real-time image information to obtain coordinate identification information;
the vector conversion module is used for carrying out grid-to-vector conversion on the coordinate identification information to obtain vector identification information;
the vector judgment module is used for judging whether the vector identification information has preset abnormality or not, and if so, generating intervention information so as to inform staff to perform intervention correction on the vector identification information.
In a third aspect, the present application provides an electronic device, which adopts the following technical scheme:
an electronic device, the electronic device comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the waste slag field identification method for railway engineering according to any one of the possible implementations of the first aspect.
In a fourth aspect, the present application provides a computer readable storage medium, which adopts the following technical scheme:
a computer-readable storage medium storing a computer program that can be loaded by a processor and executed to implement the waste slag field identification method for railway engineering shown in any one of the possible implementations of the first aspect.
In summary, the present application includes at least one of the following beneficial technical effects:
by adopting the above technical scheme, when a waste slag field is to be identified, remote sensing image information of railway engineering construction at different area positions and real-time satellite image information within a preset range along the current railway are acquired. The remote sensing image information is preprocessed to obtain spectral image information, which is input into a trained classification model to obtain engineering image information and corresponding annotation vector information, where the engineering image information represents images of different scene types during railway engineering construction and the annotation vector information represents the corresponding three-dimensional geographic coordinates. The annotation vector information is bound to the engineering image information to obtain an annotation vector file, on which vector-to-raster conversion is performed; the resulting image pixel values and their corresponding engineering image information serve as training samples. Feature fusion on the training samples yields edge fusion features, a preset network model is trained on these features to obtain a recognition network model, and the real-time image information is input into the trained model to obtain waste slag field identification information. Replacing manual classification and labeling in this way improves both the efficiency and the accuracy of waste slag field identification.
Drawings
FIG. 1 is a flow chart of a method for identifying a waste yard for railroad engineering according to an embodiment of the present application;
FIG. 2 is a schematic structural view of a slag yard identification device for railroad engineering according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The present application is described in further detail below in conjunction with figures 1-3.
After reading the present specification, those skilled in the art may make modifications to the embodiments that involve no creative contribution to the invention, but all such modifications are protected by patent law within the scope of the claims of the present application.
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In addition, the term "and/or" herein is merely an association relationship describing an association object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. In this context, unless otherwise specified, the term "/" generally indicates that the associated object is an "or" relationship.
Embodiments of the present application are described in further detail below with reference to the drawings attached hereto.
The embodiment of the application provides a waste slag field identification method for railway engineering, executed by an electronic device. The electronic device may be a server or a terminal device; the server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing cloud computing services. The terminal device may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like. The terminal device and the server may be connected directly or indirectly through wired or wireless communication, which is not limited herein. As shown in fig. 1, the method includes:
and step S10, acquiring remote sensing image information and real-time image information.
The remote sensing image information is used for representing satellite remote sensing image information built by railway engineering at different area positions, and the real-time image information is used for representing satellite remote sensing image information within a preset range along the current railway.
In this embodiment of the present application, the acquisition areas of the remote sensing image information include: the Menghua railway, the Hangzhou–Changsha passenger dedicated line, the Shiji high-speed railway, the Quzhou–Jiujiang railway and the Sichuan–Tibet railway.
Specifically, the preset range is a buffer zone of 2 km along the current railway.
Specifically, a satellite remote sensing image is also called a satellite image. Remote sensing means sensing from a distance: a satellite in space detects the electromagnetic waves reflected and emitted by surface objects and thereby extracts information about the ground, completing the remote identification of ground objects; the image obtained by converting and processing this electromagnetic-wave information is the satellite remote sensing image.
Specifically, the remote sensing image information and the real-time image information are composed of pixels: the denser the pixels, the smaller the details a photograph can resolve. Pixel density on an image is often expressed in lines per millimeter; more lines mean higher image quality. For example, at 250 lines per millimeter a satellite image contains 62,500 pixels per square millimeter, and the distance between two adjacent pixels is only 4 microns. The ground detail a pixel resolves depends on the camera focal length and the satellite's flying height: with a focal length of 2 meters and a flying height of 150 kilometers, the geometric relation gives a ground distance of 0.3 meters per pixel. This length is the ground resolution of the image.
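The similar-triangles relation in the paragraph above can be written out directly. The function below simply reproduces the quoted figures (4 µm pixel pitch from 250 lines/mm, 2 m focal length, 150 km flying height):

```python
def ground_resolution(pixel_pitch_m, focal_length_m, altitude_m):
    """Ground distance covered by one pixel, from similar triangles:
    ground_resolution / altitude = pixel_pitch / focal_length."""
    return pixel_pitch_m * altitude_m / focal_length_m

# 250 lines/mm -> 1/250 mm = 4 micron pixel pitch; 2 m focal length;
# 150 km flying height, as in the text above.
res = ground_resolution(4e-6, 2.0, 150_000.0)  # ground resolution in meters
```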
And S11, preprocessing the remote sensing image information to obtain spectrum image information.
In an embodiment of the present application, the preprocessing includes geometric correction, image fusion and image mosaicking. Because of various imaging factors, the position, shape, size, azimuth and other features of ground objects in the remote sensing image information deviate from those of the real ground objects, so geometric correction of the image is required. The corrected remote sensing image information is then fused using the panchromatic and multispectral images, so that the fused remote sensing image information has new spatial and spectral resolution.
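The pan/multispectral fusion algorithm is not detailed further. One common simple method is the Brovey transform, sketched below with numpy under the assumption that the multispectral cube has already been upsampled and co-registered to the panchromatic band; the patent may well use a different fusion scheme:

```python
import numpy as np

def brovey_pansharpen(pan, ms):
    """Brovey-transform fusion: rescale each multispectral band so the band
    sum at every pixel matches the high-resolution panchromatic intensity.

    pan: (H, W) panchromatic band; ms: (bands, H, W) multispectral cube."""
    total = ms.sum(axis=0)
    total = np.where(total == 0, 1e-6, total)   # guard against empty pixels
    return ms * (pan / total)

# Toy 2x2 scene: three bands summing to 300 fused with a pan value of 600,
# so every band is scaled by 2 while the band ratios are preserved.
pan = np.full((2, 2), 600.0)
ms = np.stack([np.full((2, 2), 50.0),
               np.full((2, 2), 100.0),
               np.full((2, 2), 150.0)])
fused = brovey_pansharpen(pan, ms)
```

The result carries the pan band's spatial detail while keeping the spectral ratios of the multispectral bands, which is the "new spatial and spectral resolution" the text refers to.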
And step S12, inputting the spectral image information into the trained classification model for training to obtain engineering image information and annotation vector information corresponding to the engineering image information.
The engineering image information is used for representing image information of different types of scenes in the railway engineering construction process, and the labeling vector information is used for representing three-dimensional geographic coordinate information corresponding to the engineering image information.
And step S13, correspondingly binding the labeling vector information with the engineering image information to obtain a labeling vector file.
Step S14, vector grid conversion processing is carried out on the labeling vector file, the processed image pixel values and engineering image information corresponding to the image pixel values are used as training samples, and the image pixel values are used for representing the pixel values corresponding to each engineering image information in the labeling vector file.
Specifically, since the training samples are subsequently input into the ASPP-Aug-HED-DSM convolutional classification network (whose input samples are images) for processing, vector-to-raster conversion is performed on the labeling vector file. Vector data and raster data are the two data types most commonly used by ArcGIS software. Vector data is data stored internally in the computer in a vector structure and is the direct product of tracking-type digitization; in ArcGIS, vector data generally refers to shapefiles, i.e., layer data whose data format suffix in ArcCatalog is .shp. Raster data is array data arranged by the rows and columns of grid cells with different gray scales or colors; the position of each cell is defined by its row and column number, and the entity position it expresses is implicit in that row-column position. In ArcGIS, raster data formats are diverse, and common data format suffixes include .tif, .gif, .img, .jpg and the like.
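The vector-to-raster step can be sketched with a toy even-odd (ray-casting) polygon rasterizer; a real workflow would use ArcGIS's Polygon to Raster tool or GDAL, so every name and value below is illustrative only:

```python
import numpy as np

def rasterize_polygon(polygon, class_value, shape):
    """Burn a labelled polygon (list of (x, y) vertices) into a class-value
    grid using the even-odd ray-casting rule — a toy stand-in for the
    vector-to-raster conversion of the labeling vector file."""
    h, w = shape
    grid = np.zeros((h, w), dtype=np.uint8)
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    n = len(polygon)
    for row in range(h):
        for col in range(w):
            px, py = col + 0.5, row + 0.5   # test against the pixel centre
            inside = False
            for i in range(n):
                j = (i - 1) % n
                if (ys[i] > py) != (ys[j] > py):
                    x_cross = xs[i] + (py - ys[i]) * (xs[j] - xs[i]) / (ys[j] - ys[i])
                    if px < x_cross:
                        inside = not inside
            if inside:
                grid[row, col] = class_value
    return grid

# a square covering a 3x3 block of pixel centres in a 5x5 grid, class id 7
square = [(1.0, 1.0), (4.0, 1.0), (4.0, 4.0), (1.0, 4.0)]
grid = rasterize_polygon(square, 7, (5, 5))
print(int(grid.sum()))  # 9 interior pixels x class value 7 = 63
```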
And S15, performing feature fusion processing on the training sample to obtain edge fusion features.
And S16, training the preset network model based on the edge fusion characteristics to obtain a trained identification network model.
And S17, inputting the real-time image information into a trained recognition network model for training to obtain the waste slag field recognition information.
In the embodiment of the application, when a waste slag field is identified, remote sensing image information of railway engineering construction satellites at different area positions and real-time satellite image information within a preset range along the current railway are obtained. The remote sensing image information is preprocessed to obtain spectrum image information, and the spectrum image information is input into the trained classification model for training to obtain engineering image information and labeling vector information corresponding to the engineering image information, wherein the engineering image information is used for representing image information of different types of scenes in the railway engineering construction process, and the labeling vector information is used for representing three-dimensional geographic coordinate information corresponding to the engineering image information. The labeling vector information is then correspondingly bound with the engineering image information to obtain a labeling vector file, the labeling vector file is subjected to vector-grid conversion processing, and the image pixel values obtained after the processing and the engineering image information corresponding to the image pixel values are used as training samples. The training samples are subjected to feature fusion processing to obtain edge fusion features, the preset network model is trained based on the edge fusion features to obtain a trained identification network model, and finally the real-time image information is input into the trained identification network model to obtain the waste slag field identification information, thereby improving the accuracy of waste slag field identification.
In one possible implementation manner of the embodiment of the present application, step S11 specifically includes step S111 (not shown in the figure), step S112 (not shown in the figure), and step S113 (not shown in the figure), where,
step S111, performing geometric correction processing on the remote sensing image information to obtain corrected image information.
Specifically, geometric correction is performed on the geometric distortion of the remote sensing image. Geometric distortion is of two kinds: (1) distortion caused by the performance of the remote sensing instrument itself, including scale distortion, skew distortion, center-shift distortion, scanning nonlinearity distortion, radial distortion, orthogonal distortion and the like; and (2) distortion caused by the flight attitude of the platform (airplane or satellite), including projection distortion caused by tilt of the platform and scale error caused by change in flight altitude, as well as distortion caused by terrain relief and the curvature of the earth. Geometric correction is typically carried out using electronic computers and optical instruments. The principle is that each element of a distorted image is transformed, through a certain coordinate transformation, from its original position to the correct position in another image. Geometric correction of the image also includes drawing a coordinate grid, registering the multispectral images, and transforming a remote sensing image obtained in a certain projection into a map projection.
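The coordinate transformation described above can be sketched under the assumption of a simple affine correction model fitted to ground control points by least squares; the control points and function names below are made up for illustration:

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine correction x' = a0 + a1*x + a2*y (and likewise y'),
    estimated from ground control points (GCPs) — the simplest instance of the
    coordinate transformation used in geometric correction."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
    coef_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return coef_x, coef_y

def apply_affine(coef_x, coef_y, pt):
    x, y = pt
    return (coef_x[0] + coef_x[1] * x + coef_x[2] * y,
            coef_y[0] + coef_y[1] * x + coef_y[2] * y)

# synthetic GCPs: a shift of (10, -5) plus a 2x scale along x
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(10, -5), (12, -5), (10, -4), (12, -4)]
cx, cy = fit_affine(src, dst)
gx, gy = apply_affine(cx, cy, (0.5, 0.5))
print(round(float(gx), 6), round(float(gy), 6))  # 11.0 -4.5
```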
And step S112, performing image fusion processing on the corrected image information and the multispectral image to obtain fused image information.
Specifically, a multispectral image is an image containing many bands, sometimes only 3 (a color image is one example) but sometimes far more, even hundreds. Each band is a grayscale image representing the scene brightness as recorded by the sensor used to generate that band. In such an image, each pixel is associated with a string of values across the different bands, i.e., a vector. This series of values is called the spectral signature of the pixel.
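A minimal sketch of the image fusion step, using the Brovey transform as a stand-in (the patent does not name a specific pan-sharpening algorithm, so this choice and all values are illustrative):

```python
import numpy as np

def brovey_fusion(pan: np.ndarray, ms: np.ndarray) -> np.ndarray:
    """Brovey pan-sharpening: scale each multispectral band by the ratio of the
    panchromatic band to the multispectral intensity.
    pan: (H, W); ms: (bands, H, W), already resampled to the pan grid."""
    intensity = ms.mean(axis=0)
    ratio = pan / np.maximum(intensity, 1e-6)  # avoid division by zero
    return ms * ratio  # broadcasts over the band axis

# toy 2x2 scene with three flat bands and a brighter pan image
pan = np.full((2, 2), 120.0)
ms = np.stack([np.full((2, 2), 40.0), np.full((2, 2), 80.0), np.full((2, 2), 120.0)])
fused = brovey_fusion(pan, ms)
print(fused[:, 0, 0])  # [ 60. 120. 180.]
```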
Step S113, performing image mosaic processing on the fused image information to obtain spectrum image information.
In the embodiment of the present application, the manner of performing the image mosaic processing on the fused image information includes: and selecting one image with uniform brightness and color from the image information to be fused as a reference image of mosaic, and performing mosaic on other images from near to far according to the reference image.
In one possible implementation manner of the embodiment of the present application, step S15 specifically includes step S51 (not shown in the figure), step S52 (not shown in the figure), and step S53 (not shown in the figure), where,
Step S51, a first DSM model is built based on the image pixel values.
Step S52, feature data information in the first DSM model is called, DSN-level edge feature extraction is conducted on the feature data information and engineering image information, and edge combination results of different scales are obtained, wherein the feature data information comprises feature type information and space coordinate data corresponding to the feature type information.
In the embodiment of the application, the DSN-level edge feature extraction comprises five stage-level extractions, DSN1-DSN5. Each DSN (Deeply-Supervised Net) stage outputs an edge detection result at a different scale, and the edge detection results of the image and of the DSM at each scale are combined to generate DSN-fuse1, DSN-fuse2, DSN-fuse3, DSN-fuse4 and DSN-fuse5 respectively. Finally, the 5 generated DSN-fuse edge detection combination results are combined with the original image and the original DSM respectively to obtain the edge combination result.
And step S53, carrying out edge feature fusion on the engineering image information, the ground feature data information and the edge detection results with different scales to obtain edge fusion features.
In the embodiment of the application, the first DSM model is an ASPP-Aug-HED-DSM model, which is obtained by introducing a holistically-nested edge detection network (Holistically-Nested Edge Detection, HED) into the ASPP-Aug multi-scale dilated convolution classification network as a ground-object boundary feature detection sub-network for classifying images, and by introducing DSM (Digital Surface Model) elevation data as network training auxiliary data, while fully exploiting the high-accuracy advantage of the HED holistic edge feature detection sub-network in ground-object boundary detection.
Specifically, because high-resolution remote sensing data contains abundant ground object information and has a large image size, even if overlapping slicing processing is performed on the images, the same classified target ground object may be distributed across different slices, which is not beneficial to the convolutional network's learning of the target object's overall characteristics. Furthermore, since a CNN requires a large amount of training data to obtain a high-precision classification result, an insufficient amount of training data will cause the network parameters to overfit the training data. Images can typically be subjected to enhancement processing, including random cropping, flipping, and random perturbation of brightness, saturation, hue, and contrast. However, such enhancement cannot target specific ground features. Object Proposal methods, such as Selective Search and EdgeBoxes, can instead find areas in the image that contain potential features.
In the embodiment of the application, a graph theory segmentation method is adopted to segment the high-resolution remote sensing image into a plurality of small areas. Based on the segmentation results described above, the Selective Search method is then used to generate the bounding box of the potential target as an enhancement of the sample data, so that more valuable training data can be obtained using the method of unsupervised image segmentation than using simple image enhancement. According to the method, potential ground objects and labels thereof are extracted from image data and used as supplement of training data, so that classification accuracy and model generalization capability are improved, and an ASPP-Aug multi-scale expansion convolution classification network is formed.
Specifically, the HED network utilizes a multi-output network architecture for edge detection. The HED network is based on the VGG-16 network structure: the convolutional layer before each pooling layer of VGG-16 outputs a side-output (Side-output) feature map, and the receptive fields of the convolution operations corresponding to the 5 side-output feature maps are 5, 14, 40, 92 and 196 respectively. In the training stage, each of the 5 Side-output feature maps calculates a loss against the edge images generated from the classification samples as label data, and each loss is back-propagated separately. Unlike a traditional CNN, which contains only one forward-backward propagation stream, the HED network has multiple back-propagation streams, and when propagating backward, the gradient of a layer equals the weighted fusion of the gradients returned by subsequent layers. Because of the difference in receptive fields, the Side-output feature maps close to the input image have small receptive fields and extract more local image features, while the later Side-output feature maps have large receptive fields and can extract high-level semantic features. Finally, the 5 Side-output feature maps are weighted and fused into an output layer, and the output layer calculates a loss against the label data and is back-propagated.
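The weighted fusion of the 5 Side-output maps into the final output layer can be sketched as follows; the uniform default weights are an assumption (in the real HED network the fusion weights are learned during training):

```python
import numpy as np

def fuse_side_outputs(side_maps, weights=None):
    """Weighted fusion of HED-style side-output edge maps into one output map.
    side_maps: (5, H, W) sigmoid activations; weights default to uniform."""
    side_maps = np.asarray(side_maps, dtype=float)
    if weights is None:
        weights = np.full(len(side_maps), 1.0 / len(side_maps))
    fused = np.tensordot(weights, side_maps, axes=1)  # weighted sum -> (H, W)
    return np.clip(fused, 0.0, 1.0)

# five toy side outputs with constant edge strengths 0.1 .. 0.9
sides = np.stack([np.full((4, 4), v) for v in (0.1, 0.3, 0.5, 0.7, 0.9)])
print(fuse_side_outputs(sides)[0, 0])  # 0.5, the uniform-weight average
```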
Specifically, the HED global edge detection network has the following feature points relative to the conventional edge detection method:
1. Integral image training and prediction: image-to-image edge detection can be realized based on an FCN (Fully Convolutional Network). The input of the algorithm is a multichannel high-resolution remote sensing image, and the output is 5 edge detection intensity images. Multi-level feature learning is nested in the network on the basis of the FCN multi-level structure; all 5 scale feature layers are taken as internal edge layers to generate edge detection results of different scales, and deconvolution layers are respectively connected to the 5 edge detection feature maps to restore them to the original size.
2. In high-resolution remote sensing images, the presence of shadow sharply weakens the features of targets in shadowed areas, so those features are lost during feature extraction, directly reducing classification accuracy. The elevation information of ground objects in the image is not influenced by image shadows, so data representing the height characteristics of ground objects is added as classification auxiliary information of the original image during feature extraction, reducing the adverse influence of factors such as shadows on the classification result.
Specifically, the digital elevation model (Digital Elevation Model, DEM) refers to a data set representing the planar coordinates (X, Y) and elevations (Z) of regular lattice points within a certain range; it is mainly formed by describing the spatial distribution of the morphology of the target research area, acquiring elevation data through contour lines or similar three-dimensional models, and then interpolating the data. The DEM is a branch of the digital terrain model DTM (Digital Terrain Model). The DTM represents the spatial distribution of linear or nonlinear combinations of various topographical factors including elevation, such as aspect and slope. The digital surface model (Digital Surface Model, DSM) is a ground elevation model that includes the heights of surface features such as trees and buildings; that is, the DSM adds, on the basis of the DEM, the elevation information of surface features other than the bare ground. For example, in forest areas the DSM can be used to detect forest growth, and in urban areas it can be used to check urban building construction.
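A toy numerical illustration of the DEM/DSM relationship described above: subtracting the bare-earth DEM from the DSM leaves only the above-ground object heights (the shadow-robust auxiliary signal). All elevation values are invented:

```python
import numpy as np

dem = np.array([[100.0, 101.0],
                [102.0, 103.0]])   # bare-ground elevation (DEM)
dsm = np.array([[100.0, 109.0],
                [102.0, 115.0]])   # ground plus trees/buildings (DSM)

# normalised DSM: per-pixel object height above the terrain
ndsm = dsm - dem
print(ndsm)  # object heights 0, 8, 0, 12
```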
In one possible implementation manner of the embodiment of the present application, step S16 specifically includes step S61 (not shown in the figure), step S62 (not shown in the figure), step S63 (not shown in the figure), and step S64 (not shown in the figure), where,
step S61, creating a first classification network model and a second classification network model.
The first classification network model is used for identifying and training the engineering type of the engineering image information, and the second classification network model is used for identifying and training the ground feature characteristics in the ground feature data information.
Specifically, the first classification network model is an ASPP-Aug-Image model, and the second classification network model is an ASPP-Aug-DSM model.
And step S62, training the first classification network model based on the engineering image information and the edge fusion characteristics to obtain a trained first classification network model.
And step S63, training the second classification network model based on the DSM model and the edge fusion characteristics to obtain a trained second classification network model.
And S64, performing feature fusion on the first classification network model and the second classification network model to obtain an identification network model.
In one possible implementation manner of the embodiment of the present application, step S17 specifically includes step S71 (not shown in the figure), step S72 (not shown in the figure), and step S73 (not shown in the figure), where,
Step S71, performing overlapped slicing processing on the real-time image information to obtain cut image information.
For the embodiment of the application, a plurality of different intervals are adopted for overlapped slice segmentation in the slicing process to increase the number of samples and improve the generalization capability of the model.
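The overlapped slicing with several different intervals can be sketched as follows; the tile size and stride values are illustrative assumptions, not figures from the patent:

```python
import numpy as np

def overlapped_slices(image, tile=4, strides=(2, 3)):
    """Cut an (H, W, C) image into tile x tile patches using several different
    strides, so that patches overlap and the number of samples grows."""
    h, w = image.shape[:2]
    patches = []
    for stride in strides:
        for top in range(0, h - tile + 1, stride):
            for left in range(0, w - tile + 1, stride):
                patches.append(image[top:top + tile, left:left + tile])
    return patches

img = np.arange(8 * 8 * 3).reshape(8, 8, 3)
patches = overlapped_slices(img)
print(len(patches))  # 13 patches: 9 at stride 2 plus 4 at stride 3
```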
Step S72, a second DSM model is constructed based on the cut image information, and DSM data in the second DSM model is retrieved.
In an embodiment of the present application, the second DSM model is an ASPP-Aug-HED-DSM model.
And step S73, inputting the cutting image information and DSM data into an identification network model for prediction training to obtain the waste slag field identification information.
In one possible implementation manner of the embodiment of the present application, step S73 further includes step S731 (not shown in the figure), step S732 (not shown in the figure), and step S733 (not shown in the figure), where,
step S731, it is determined whether or not there is an overlapping slice image in the cut image information.
Step S732, if present, determines a pixel prediction value corresponding to each overlapping slice image based on the spoil field identification information.
And step S733, ranking the pixel prediction values corresponding to each overlapped slice image by occurrence rate to obtain a target prediction result.
In the embodiment of the present application, the manner of comparing the pixel prediction values includes: selecting the pixel prediction value with the highest occurrence rate as the target prediction result.
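One reading of this occurrence-rate ranking is a per-pixel majority vote over the predictions the overlapping slices produced for the same pixel, sketched below; the tie-breaking rule (smaller class id wins) is an assumption:

```python
import numpy as np

def vote_pixel(predictions):
    """Pick the class that occurs most often among the predictions made for one
    pixel by the overlapping slices; ties fall to the smaller class id."""
    counts = np.bincount(np.asarray(predictions))
    return int(np.argmax(counts))

# three overlapping slices predicted classes 2, 1, 2 for the same pixel
print(vote_pixel([2, 1, 2]))  # 2
```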
In one possible implementation manner of the embodiment of the present application, step S17 further includes step S18 (not shown in the figure), step S19 (not shown in the figure), step S20 (not shown in the figure), and step S21a (not shown in the figure), where,
and S18, carrying out fine classification and extraction on the waste slag field identification information to obtain optimized identification information.
And step S19, carrying out corresponding spatial data import on the optimized identification information based on the real-time image information to obtain the coordinate identification information.
And step S20, carrying out grid-vector conversion processing on the coordinate identification information to obtain vector identification information.
Step S21a, judging whether the vector identification information has preset abnormality, if so, generating intervention information to inform staff to perform intervention correction on the vector identification information.
Specifically, the preset anomalies include: vector recognition classification errors due to algorithm inaccuracy.
The above embodiments describe the method for identifying a waste slag field for railway engineering from the viewpoint of the method flow. The following embodiments describe a device for identifying a waste slag field for railway engineering from the viewpoint of virtual modules or virtual units, as described in detail below.
The embodiment of the application provides a waste residue field identification device for railway engineering, as shown in the figure, the waste residue field identification device 20 for railway engineering specifically may include: an information acquisition module 21, an image preprocessing module 22, an image classification module 23, an information binding module 24, a vector conversion module 25, a feature fusion module 26, a network training module 27 and an image recognition module 28, wherein,
the information acquisition module 21 is used for acquiring remote sensing image information and real-time image information, wherein the remote sensing image information is used for representing satellite remote sensing image information constructed by railway engineering at different area positions, and the real-time image information is used for representing satellite remote sensing image information within a preset range along the current railway;
an image preprocessing module 22, configured to preprocess remote sensing image information to obtain spectral image information;
the image classification module 23 is configured to input spectral image information into the trained classification model for training, to obtain engineering image information and labeling vector information corresponding to the engineering image information, where the engineering image information is used to represent image information of different types of scenes in the railway engineering construction process, and the labeling vector information is used to represent three-dimensional geographic coordinate information corresponding to the engineering image information;
An information binding module 24, configured to bind the labeling vector information with the engineering image information correspondingly, so as to obtain a labeling vector file;
the vector conversion module 25 is configured to perform vector grid conversion on the labeling vector file, and use the image pixel value obtained after the processing and the engineering image information corresponding to the image pixel value as a training sample, where the image pixel value is used to represent a pixel value corresponding to each piece of engineering image information in the labeling vector file;
the feature fusion module 26 is configured to perform feature fusion processing on the training sample to obtain an edge fusion feature;
the network training module 27 is configured to train the preset network model based on the edge fusion feature, so as to obtain a trained identification network model;
the image recognition module 28 is configured to input the real-time image information into the trained recognition network model for training, so as to obtain the waste slag field recognition information.
In one possible implementation manner of the embodiment of the present application, when the image preprocessing module 22 performs preprocessing on remote sensing image information to obtain spectral image information, the image preprocessing module is specifically configured to:
performing geometric correction processing on the remote sensing image information to obtain corrected image information;
performing image fusion processing on the corrected image information and the multispectral image to obtain fused image information;
And performing image mosaic processing on the fused image information to obtain spectrum image information.
In another possible implementation manner of this embodiment of the present application, when performing feature fusion processing on a training sample, the feature fusion module 26 is specifically configured to:
establishing a first DSM model based on image pixel values;
the method comprises the steps of calling ground feature data information in a first DSM model, and extracting DSN level edge characteristics of the ground feature data information and engineering image information to obtain edge combination results of different scales, wherein the ground feature data information comprises ground feature type information and space coordinate data corresponding to the ground feature type information;
and carrying out edge feature fusion on the engineering image information, the ground object data information and the edge detection results with different scales to obtain edge fusion features.
In another possible implementation manner of this embodiment of the present application, the network training module 27 is configured to, when training the preset network model based on the edge fusion feature to obtain a trained identification network model, specifically:
creating a first classification network model and a second classification network model, wherein the first classification network model is used for identifying and training the engineering type of the engineering image information, and the second classification network model is used for identifying and training the feature characteristics in the feature data information;
Training the first classification network model based on engineering image information and edge fusion characteristics to obtain a trained first classification network model;
training the second classification network model based on the DSM model and the edge fusion characteristics to obtain a trained second classification network model;
and carrying out feature fusion on the first classification network model and the second classification network model to obtain an identification network model.
In another possible implementation manner of the embodiment of the present application, when the image recognition module 28 inputs the real-time image information into the trained recognition network model to perform training, the image recognition module is specifically configured to:
performing overlapped slicing processing on the real-time image information to obtain cut image information;
constructing a second DSM model based on the cut image information, and retrieving DSM data in the second DSM model;
and inputting the cutting image information and DSM data into an identification network model for prediction training to obtain the waste slag field identification information.
In another possible implementation, the apparatus 20 further includes: an overlap judging module, a pixel determining module and a pixel arrangement module, wherein,
the overlapping judging module is used for judging whether overlapping slice images exist in the cutting image information;
A pixel determination module for determining a pixel prediction value corresponding to each of the overlapped slice images based on the spoil field identification information when the overlapped slice images exist in the cut image information;
and the pixel ranking module is used for ranking the pixel prediction values corresponding to each overlapped slice image by occurrence rate to obtain a target prediction result.
In another possible implementation, the apparatus 20 further includes: a fine classification module, a data importing module, a vector conversion module and a vector judgment module, wherein,
the fine classification module is used for carrying out fine classification and extraction on the waste slag field identification information to obtain optimized identification information;
the data importing module is used for importing corresponding spatial data of the optimized identification information based on the real-time image information to obtain coordinate identification information;
the vector conversion module is used for carrying out grid vector conversion processing on the coordinate identification information to obtain vector identification information;
the vector judgment module is used for judging whether the vector identification information has preset abnormality or not, and if so, generating intervention information so as to inform staff of carrying out intervention correction on the vector identification information.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In an embodiment of the present application, as shown in fig. 3, an electronic device 300 shown in fig. 3 includes: a processor 301 and a memory 303. Wherein the processor 301 is coupled to the memory 303, such as via a bus 302. Optionally, the electronic device 300 may also include a transceiver 304. It should be noted that, in practical applications, the transceiver 304 is not limited to one, and the structure of the electronic device 300 is not limited to the embodiment of the present application.
The processor 301 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. Processor 301 may also be a combination that implements computing functionality, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor, etc.
Bus 302 may include a path to transfer information between the components. Bus 302 may be a PCI (Peripheral Component Interconnect, peripheral component interconnect Standard) bus or an EISA (Extended Industry Standard Architecture ) bus, or the like. Bus 302 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 3, but not only one bus or one type of bus.
The Memory 303 may be, but is not limited to, a ROM (Read Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory ) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory ), a CD-ROM (Compact Disc Read Only Memory, compact disc Read Only Memory) or other optical disk storage, optical disk storage (including compact discs, laser discs, optical discs, digital versatile discs, blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 303 is used for storing application program codes for executing the present application and is controlled to be executed by the processor 301. The processor 301 is configured to execute the application code stored in the memory 303 to implement what is shown in the foregoing method embodiments.
Among them, electronic devices include, but are not limited to: mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. But may also be a server or the like. The electronic device shown in fig. 3 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
The present application provides a computer readable storage medium having a computer program stored thereon, which, when run on a computer, causes the computer to perform the corresponding method embodiments described above. Compared with the related technology, in the embodiment of the application, remote sensing image information and real-time image information are acquired, the remote sensing image information is preprocessed and input into the trained classification model to obtain engineering image information and corresponding labeling vector information, the labeling vector information is bound with the engineering image information to obtain a labeling vector file, the labeling vector file is subjected to vector-grid conversion processing to obtain training samples, feature fusion processing is performed on the training samples to obtain edge fusion features, the preset network model is trained based on the edge fusion features to obtain a trained identification network model, and the real-time image information is then input into the trained identification network model to obtain the waste slag field identification information, thereby improving the accuracy of waste slag field identification.
It should be understood that, although the steps in the flowcharts of the figures are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages that are not necessarily performed at the same moment but may be executed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that a person skilled in the art can make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also fall within the protection scope of the present application.

Claims (10)

1. A method for identifying a waste slag field for railway engineering, comprising the steps of:
acquiring remote sensing image information and real-time image information, wherein the remote sensing image information is used for representing satellite remote sensing images of railway engineering construction at different regional locations, and the real-time image information is used for representing satellite remote sensing images within a preset range along the current railway;
preprocessing the remote sensing image information to obtain spectral image information;
inputting the spectral image information into a trained classification model to obtain engineering image information and annotation vector information corresponding to the engineering image information, wherein the engineering image information is used for representing image information of different types of scenes in the railway engineering construction process, and the annotation vector information is used for representing three-dimensional geographic coordinate information corresponding to the engineering image information;
correspondingly binding the annotation vector information with the engineering image information to obtain an annotation vector file;
performing vector-to-raster conversion processing on the annotation vector file, and taking the image pixel values obtained after the processing and the engineering image information corresponding to the image pixel values as training samples, wherein the image pixel values are used for representing the pixel value corresponding to each piece of engineering image information in the annotation vector file;
performing feature fusion processing on the training samples to obtain edge fusion features;
training a preset network model based on the edge fusion features to obtain a trained identification network model;
and inputting the real-time image information into the trained identification network model to obtain the waste slag field identification information.
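As a rough illustration only (not the patented models themselves), the step sequence of claim 1 can be mocked up in a few lines of Python; the brightness-threshold "classifier", the random input tile, and all function names are invented stand-ins:

```python
import numpy as np

def preprocess(remote_img):
    # Stand-in for geometric correction, fusion, and mosaicking:
    # normalise band values to [0, 1].
    img = remote_img.astype(float)
    return (img - img.min()) / (img.max() - img.min() + 1e-9)

def classify_scenes(spectral_img, threshold=0.5):
    # Stand-in for the trained classification model: label each pixel
    # as engineering scene (1) or background (0) by mean brightness.
    return (spectral_img.mean(axis=-1) > threshold).astype(np.uint8)

def mask_to_samples(label_mask):
    # Stand-in for vector-to-raster conversion: pair each labelled
    # pixel with its raster coordinates as a training sample.
    ys, xs = np.nonzero(label_mask)
    return list(zip(ys.tolist(), xs.tolist()))

rng = np.random.default_rng(0)
remote = rng.random((8, 8, 3))       # fake remote sensing tile
spectral = preprocess(remote)
mask = classify_scenes(spectral)
samples = mask_to_samples(mask)
```

In the claimed method the thresholding step would be replaced by the trained classification and identification networks; the sketch only shows how the data flows between the steps.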
2. The method for identifying a waste slag field for railway engineering according to claim 1, wherein the preprocessing the remote sensing image information to obtain spectral image information comprises the following steps:
performing geometric correction processing on the remote sensing image information to obtain corrected image information;
performing image fusion processing on the corrected image information and a multispectral image to obtain fused image information;
and performing image mosaic processing on the fused image information to obtain the spectral image information.
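A minimal numeric sketch of the three preprocessing steps of claim 2, under the assumption that the fusion is Brovey-style pan-sharpening (the claim does not name a fusion method); `geometric_correct`, `brovey_fuse`, and `mosaic` are hypothetical helpers:

```python
import numpy as np

def geometric_correct(img, affine):
    # Nearest-neighbour resampling under a 2x2 affine map, a stand-in
    # for full sensor-model geometric correction.
    h, w = img.shape
    inv = np.linalg.inv(affine)
    ys, xs = np.mgrid[0:h, 0:w]
    src = inv @ np.stack([ys.ravel(), xs.ravel()]).astype(float)
    sy = np.clip(np.round(src[0]).astype(int), 0, h - 1)
    sx = np.clip(np.round(src[1]).astype(int), 0, w - 1)
    return img[sy, sx].reshape(h, w)

def brovey_fuse(pan, ms):
    # Brovey-style fusion: scale each multispectral band by the ratio
    # of the panchromatic band to the per-pixel band mean.
    ratio = pan / (ms.mean(axis=-1) + 1e-9)
    return ms * ratio[..., None]

def mosaic(left, right):
    # Mosaic two corrected tiles side by side along the x axis.
    return np.concatenate([left, right], axis=1)
```

Production pipelines would of course use a geodetic library (e.g. GDAL-class tooling) rather than these toy routines; the sketch only fixes the order of operations the claim describes.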
3. The method for identifying a waste slag field for railway engineering according to claim 1, wherein the performing feature fusion processing on the training samples to obtain edge fusion features comprises:
establishing a first DSM model based on the image pixel values;
extracting ground feature data information in the first DSM model, and performing DSN-level edge feature extraction on the ground feature data information and the engineering image information to obtain edge detection results at different scales, wherein the ground feature data information comprises ground feature type information and spatial coordinate data corresponding to the ground feature type information;
and carrying out edge feature fusion on the engineering image information, the ground feature data information and the edge detection results at different scales to obtain the edge fusion features.
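The multi-scale edge extraction and fusion of claim 3 can be approximated with plain gradient-magnitude edge maps computed at several smoothing scales (a crude stand-in for true DSN side outputs from a deeply supervised network); all names here are illustrative:

```python
import numpy as np

def smooth(img):
    # One pass of 3x3 mean filtering (edge-padded).
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

def edge_map(img):
    # Gradient-magnitude edges via central differences.
    gy, gx = np.gradient(img)
    return np.hypot(gy, gx)

def multiscale_edge_fusion(image, dsm, scales=(0, 1, 2)):
    # DSN-style side outputs approximated by edge maps of the image
    # and of the DSM at several smoothing scales, fused by stacking
    # them as channels of one feature tensor.
    maps = []
    for src in (image, dsm):
        cur = src.astype(float)
        for s in range(max(scales) + 1):
            if s in scales:
                maps.append(edge_map(cur))
            cur = smooth(cur)
    return np.stack(maps, axis=-1)   # (H, W, n_scales * 2)

rng = np.random.default_rng(1)
fused = multiscale_edge_fusion(rng.random((16, 16)), rng.random((16, 16)))
```

Stacking per-scale edge responses of both the imagery and the DSM gives downstream classifiers both spectral-boundary and height-boundary cues, which is the intuition behind the claimed fusion.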
4. The method for identifying a waste slag field for railway engineering according to claim 3, wherein the training a preset network model based on the edge fusion features to obtain a trained identification network model comprises the following steps:
creating a first classification network model and a second classification network model, wherein the first classification network model is used for recognition training on the engineering types in the engineering image information, and the second classification network model is used for recognition training on the ground feature characteristics in the ground feature data information;
training the first classification network model based on the engineering image information and the edge fusion features to obtain a trained first classification network model;
training the second classification network model based on the first DSM model and the edge fusion features to obtain a trained second classification network model;
and carrying out feature fusion on the trained first classification network model and the trained second classification network model to obtain the identification network model.
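One common way to realise the fusion of two trained classifiers (claim 4 does not specify the mechanism) is score averaging; the linear scorer below is a toy stand-in for the two branch networks, and every name in it is hypothetical:

```python
import numpy as np

class TinyLinearClassifier:
    # Stand-in for one branch network: a linear scorer over features.
    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=float)  # (n_feat, n_classes)

    def scores(self, features):
        return features @ self.weights  # (n_samples, n_classes)

def fuse_models(model_a, model_b, feats_a, feats_b):
    # The "feature fusion" of the two trained branches is approximated
    # here by averaging their class scores before taking the argmax.
    avg = (model_a.scores(feats_a) + model_b.scores(feats_b)) / 2.0
    return avg.argmax(axis=1)
```

Averaging scores lets a confident height-based (DSM) branch override an uncertain imagery branch and vice versa; a learned fusion layer would serve the same purpose in a real network.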
5. The method for identifying a waste slag field for railway engineering according to claim 1, wherein the inputting the real-time image information into the trained identification network model to obtain the waste slag field identification information comprises the following steps:
performing overlapped slicing processing on the real-time image information to obtain cut image information;
constructing a second DSM model based on the cut image information, and calling DSM data in the second DSM model;
and inputting the cut image information and the DSM data into the identification network model for prediction to obtain the waste slag field identification information.
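The overlapped slicing step of claim 5 is a standard sliding-window tiling; a minimal sketch (tile size, overlap, and the border-handling policy are all illustrative choices, not taken from the patent):

```python
import numpy as np

def overlapped_slices(img, tile=4, overlap=2):
    # Cut an image into square tiles of side `tile` with `overlap`
    # pixels shared between neighbours (stride = tile - overlap); the
    # top-left corner of each tile is kept so predictions can later
    # be stitched back. For simplicity, tiles that would run past the
    # border are omitted; a full implementation would pad or shift them.
    stride = tile - overlap
    h, w = img.shape[:2]
    out = []
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            out.append(((y, x), img[y:y + tile, x:x + tile]))
    return out

tiles = overlapped_slices(np.arange(64).reshape(8, 8))
```

Keeping the corner coordinates alongside each tile is what makes the overlap-voting step of claim 6 possible after per-tile prediction.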
6. The method for identifying a waste slag field for railway engineering according to claim 5, wherein the inputting the cut image information and the DSM data into the identification network model for prediction to obtain the waste slag field identification information further comprises:
judging whether overlapping slice images exist in the cut image information;
if so, determining a pixel prediction value corresponding to each overlapping slice image based on the waste slag field identification information;
and ranking the pixel prediction values corresponding to each overlapping slice image by occurrence rate to obtain a target prediction result.
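The "occurrence-rate ranking" of claim 6 reads as a per-pixel majority vote over the predictions of all slices that cover that pixel; a brute-force sketch, with all names hypothetical:

```python
import numpy as np
from collections import Counter

def vote_overlaps(shape, tile_preds):
    # tile_preds: list of ((y, x), pred_tile) pairs from overlapping
    # slices; each pixel's final label is its most frequent predicted
    # value, i.e. the top entry of the occurrence-rate ranking.
    votes = [[[] for _ in range(shape[1])] for _ in range(shape[0])]
    for (y, x), pred in tile_preds:
        th, tw = pred.shape
        for i in range(th):
            for j in range(tw):
                votes[y + i][x + j].append(int(pred[i, j]))
    out = np.zeros(shape, dtype=int)
    for i in range(shape[0]):
        for j in range(shape[1]):
            if votes[i][j]:
                out[i, j] = Counter(votes[i][j]).most_common(1)[0][0]
    return out
```

Voting across overlaps suppresses border artefacts of individual tiles, since a pixel misclassified near one tile's edge is usually well inside a neighbouring tile.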
7. The method for identifying a waste slag field for railway engineering according to claim 1, further comprising:
performing fine-fragment classification and removal on the waste slag field identification information to obtain optimized identification information;
importing corresponding spatial data for the optimized identification information based on the real-time image information to obtain coordinate identification information;
performing raster-to-vector conversion processing on the coordinate identification information to obtain vector identification information;
and judging whether a preset abnormality exists in the vector identification information, and if so, generating intervention information to notify staff to perform intervention correction on the vector identification information.
8. A waste slag field identification device for railway engineering, comprising:
the information acquisition module is used for acquiring remote sensing image information and real-time image information, wherein the remote sensing image information is used for representing satellite remote sensing images of railway engineering construction at different regional locations, and the real-time image information is used for representing satellite remote sensing images within a preset range along the current railway;
the image preprocessing module is used for preprocessing the remote sensing image information to obtain spectral image information;
the image classification module is used for inputting the spectral image information into a trained classification model to obtain engineering image information and annotation vector information corresponding to the engineering image information, wherein the engineering image information is used for representing image information of different types of scenes in the railway engineering construction process, and the annotation vector information is used for representing three-dimensional geographic coordinate information corresponding to the engineering image information;
the information binding module is used for correspondingly binding the annotation vector information with the engineering image information to obtain an annotation vector file;
the vector conversion module is used for performing vector-to-raster conversion processing on the annotation vector file, and taking the processed image pixel values and the engineering image information corresponding to the image pixel values as training samples, wherein the image pixel values are used for representing the pixel value corresponding to each piece of engineering image information in the annotation vector file;
the feature fusion module is used for carrying out feature fusion processing on the training sample to obtain edge fusion features;
the network training module is used for training a preset network model based on the edge fusion features to obtain a trained identification network model;
and the image recognition module is used for inputting the real-time image information into the trained identification network model to obtain the waste slag field identification information.
9. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the method for identifying a waste slag field for railway engineering according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method for identifying a waste slag field for railway engineering according to any one of claims 1 to 7.
CN202310151598.3A 2023-02-13 2023-02-13 Method, device, equipment and medium for identifying waste slag field for railway engineering Pending CN116168246A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310151598.3A CN116168246A (en) 2023-02-13 2023-02-13 Method, device, equipment and medium for identifying waste slag field for railway engineering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310151598.3A CN116168246A (en) 2023-02-13 2023-02-13 Method, device, equipment and medium for identifying waste slag field for railway engineering

Publications (1)

Publication Number Publication Date
CN116168246A true CN116168246A (en) 2023-05-26

Family

ID=86417929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310151598.3A Pending CN116168246A (en) 2023-02-13 2023-02-13 Method, device, equipment and medium for identifying waste slag field for railway engineering

Country Status (1)

Country Link
CN (1) CN116168246A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116526682A (en) * 2023-07-05 2023-08-01 河北万博电器有限公司 Control method, device, equipment and medium of high-low voltage complete switch equipment
CN116664553A (en) * 2023-07-26 2023-08-29 天津矿山工程有限公司 Explosion drilling method, device, equipment and medium based on artificial intelligence
CN116664553B (en) * 2023-07-26 2023-10-20 天津矿山工程有限公司 Explosion drilling method, device, equipment and medium based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN109934163B (en) Aerial image vehicle detection method based on scene prior and feature re-fusion
Zai et al. 3-D road boundary extraction from mobile laser scanning data via supervoxels and graph cuts
CN111179152B (en) Road identification recognition method and device, medium and terminal
CN110598784B (en) Machine learning-based construction waste classification method and device
CN113378686B (en) Two-stage remote sensing target detection method based on target center point estimation
CN116168246A (en) Method, device, equipment and medium for identifying waste slag field for railway engineering
CN113628291B (en) Multi-shape target grid data vectorization method based on boundary extraction and combination
EP4174792A1 (en) Method for scene understanding and semantic analysis of objects
CN113610070A (en) Landslide disaster identification method based on multi-source data fusion
CN113239736A (en) Land cover classification annotation graph obtaining method, storage medium and system based on multi-source remote sensing data
CN114943902A (en) Urban vegetation unmanned aerial vehicle remote sensing classification method based on multi-scale feature perception network
CN114519819B (en) Remote sensing image target detection method based on global context awareness
Prochazka et al. Automatic lane marking extraction from point cloud into polygon map layer
CN115497002A (en) Multi-scale feature fusion laser radar remote sensing classification method
KC Enhanced pothole detection system using YOLOX algorithm
CN114332473A (en) Object detection method, object detection device, computer equipment, storage medium and program product
CN114218999A (en) Millimeter wave radar target detection method and system based on fusion image characteristics
Kada 3D reconstruction of simple buildings from point clouds using neural networks with continuous convolutions (convpoint)
Pahlavani et al. 3D reconstruction of buildings from LiDAR data considering various types of roof structures
Mahphood et al. Virtual first and last pulse method for building detection from dense LiDAR point clouds
CN114387293A (en) Road edge detection method and device, electronic equipment and vehicle
Zhang Photogrammetric point clouds: quality assessment, filtering, and change detection
Widyaningrum et al. Tailored features for semantic segmentation with a DGCNN using free training samples of a colored airborne point cloud
Höhle et al. Automated Extraction of Topographic Map Data from Remotely Sensed Imagery by Classification and Cartographic Enhancement: An Introduction to New Mapping Tools
Yu et al. A cue line based method for building modeling from LiDAR and satellite imagery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination