CN116630909B - Unmanned intelligent monitoring system and method based on unmanned aerial vehicle - Google Patents

Unmanned intelligent monitoring system and method based on unmanned aerial vehicle

Info

Publication number
CN116630909B (application CN202310717185.7A)
Authority
CN
China
Prior art keywords
construction
road
feature
classification
convolution
Prior art date
Legal status
Active
Application number
CN202310717185.7A
Other languages
Chinese (zh)
Other versions
CN116630909A (en)
Inventor
左欢金
Current Assignee
Guangdong Teshineng Intelligent Technology Co ltd
Original Assignee
Guangdong Teshineng Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Teshineng Intelligent Technology Co ltd
Priority to CN202310717185.7A
Publication of CN116630909A
Application granted
Publication of CN116630909B


Classifications

    • G06V 20/54 Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/08 Learning methods
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/40 Extraction of image or video features
    • G06V 10/764 Image or video recognition using classification, e.g. of video objects
    • G06V 10/82 Image or video recognition using neural networks
    • G06V 20/17 Terrestrial scenes taken from planes or by drones
    • H04N 7/18 Closed-circuit television [CCTV] systems
    • Y02T 10/40 Engine management systems

Abstract

An unmanned intelligent monitoring system and method based on an unmanned aerial vehicle. The system acquires construction monitoring images captured by the unmanned aerial vehicle and, using deep learning and artificial intelligence techniques, automatically identifies whether road-occupation construction is taking place in a road target area. This improves the efficiency and coverage of road supervision and provides reliable data support for urban management departments.

Description

Unmanned intelligent monitoring system and method based on unmanned aerial vehicle
Technical Field
The application relates to the technical field of intelligent monitoring, and in particular relates to an unmanned intelligent monitoring system based on an unmanned aerial vehicle and a method thereof.
Background
Road-occupation construction reduces road traffic capacity, causes traffic congestion and safety hazards, and poses challenges for urban management.
At present, supervision of road-occupation construction relies mainly on manual inspection and complaint reports, which is inefficient and cannot cover all hours and all areas. A better solution is therefore desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. Embodiments of the application provide an unmanned intelligent monitoring system based on an unmanned aerial vehicle and a method thereof. The system acquires construction monitoring images captured by the unmanned aerial vehicle and, using deep learning and artificial intelligence techniques, automatically identifies whether road-occupation construction is taking place in a road target area, thereby improving the efficiency and coverage of road supervision and providing reliable data support for urban management departments.
In a first aspect, an unmanned intelligent monitoring system based on an unmanned aerial vehicle is provided, comprising: a monitoring image acquisition module for acquiring a construction monitoring image captured by the unmanned aerial vehicle; a road area identification module for passing the construction monitoring image through a road target detection network to obtain a road region-of-interest image; a road spatial feature extraction module for passing the road region-of-interest image through a first convolutional neural network model using a spatial attention mechanism to obtain a road spatial feature matrix; a construction area extraction module for applying a mask to the construction monitoring image, based on the position of the road region-of-interest image within it, to obtain a masked construction monitoring image; a construction spatial feature extraction module for passing the masked construction monitoring image through a second convolutional neural network model using a spatial attention mechanism to obtain a construction work object feature matrix; a spatial correlation feature extraction module for aggregating the road spatial feature matrix and the construction work object feature matrix into an input tensor and passing it through a spatial correlation feature extractor based on a third convolutional neural network model to obtain a classification feature map; a consistency optimization module for performing manifold geometric consistency optimization on the classification feature map to obtain an optimized classification feature map; and a monitoring result generation module for passing the optimized classification feature map through a classifier to obtain a classification result indicating whether road-occupation construction is present.
In a second aspect, an unmanned intelligent monitoring method based on an unmanned aerial vehicle is provided, comprising the following steps: acquiring a construction monitoring image captured by an unmanned aerial vehicle; passing the construction monitoring image through a road target detection network to obtain a road region-of-interest image; passing the road region-of-interest image through a first convolutional neural network model using a spatial attention mechanism to obtain a road spatial feature matrix; applying a mask to the construction monitoring image, based on the position of the road region-of-interest image within it, to obtain a masked construction monitoring image; passing the masked construction monitoring image through a second convolutional neural network model using a spatial attention mechanism to obtain a construction work object feature matrix; aggregating the road spatial feature matrix and the construction work object feature matrix into an input tensor and passing it through a spatial correlation feature extractor based on a third convolutional neural network model to obtain a classification feature map; performing manifold geometric consistency optimization on the classification feature map to obtain an optimized classification feature map; and passing the optimized classification feature map through a classifier to obtain a classification result indicating whether road-occupation construction is present.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a block diagram of an unmanned aerial vehicle-based unmanned intelligent monitoring system according to an embodiment of the present application.
Fig. 2 is a block diagram of the road spatial feature extraction module in the unmanned aerial vehicle-based unmanned intelligent monitoring system according to an embodiment of the application.
Fig. 3 is a block diagram of the construction spatial feature extraction module in the unmanned aerial vehicle-based unmanned intelligent monitoring system according to an embodiment of the application.
Fig. 4 is a block diagram of the consistency optimization module in the unmanned aerial vehicle-based unmanned intelligent monitoring system according to an embodiment of the present application.
Fig. 5 is a block diagram of the monitoring result generation module in the unmanned aerial vehicle-based unmanned intelligent monitoring system according to an embodiment of the present application.
Fig. 6 is a flowchart of an unmanned aerial vehicle-based unmanned intelligent monitoring method according to an embodiment of the present application.
Fig. 7 is a schematic diagram of the system architecture of the unmanned aerial vehicle-based unmanned intelligent monitoring method according to an embodiment of the application.
Fig. 8 is an application scenario diagram of the unmanned aerial vehicle-based unmanned intelligent monitoring system according to an embodiment of the application.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Unless defined otherwise, all technical and scientific terms used in the examples of this application have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application.
In the description of the embodiments of the present application, unless otherwise indicated and defined, the term "connected" should be construed broadly: for example, it may denote an electrical connection, communication between two elements, a direct connection, or an indirect connection via an intermediary. Those skilled in the art will understand the specific meaning of the term according to the circumstances.
It should be noted that the terms "first", "second", and "third" in the embodiments of the present application merely distinguish similar objects and do not imply a specific order. Where permitted, "first", "second", and "third" may be interchanged, so that the embodiments described herein can be implemented in sequences other than those illustrated or described.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
In one embodiment of the present application, fig. 1 is a block diagram of an unmanned aerial vehicle-based unmanned intelligent monitoring system according to an embodiment of the present application. As shown in fig. 1, the unmanned aerial vehicle-based unmanned intelligent monitoring system 100 according to an embodiment of the present application includes: a monitoring image acquisition module 110 for acquiring a construction monitoring image captured by the unmanned aerial vehicle; a road area identification module 120 for passing the construction monitoring image through a road target detection network to obtain a road region-of-interest image; a road spatial feature extraction module 130 for passing the road region-of-interest image through a first convolutional neural network model using a spatial attention mechanism to obtain a road spatial feature matrix; a construction area extraction module 140 for applying a mask to the construction monitoring image, based on the position of the road region-of-interest image within it, to obtain a masked construction monitoring image; a construction spatial feature extraction module 150 for passing the masked construction monitoring image through a second convolutional neural network model using a spatial attention mechanism to obtain a construction work object feature matrix; a spatial correlation feature extraction module 160 for aggregating the road spatial feature matrix and the construction work object feature matrix into an input tensor and passing it through a spatial correlation feature extractor based on a third convolutional neural network model to obtain a classification feature map; a consistency optimization module 170 for performing manifold geometric consistency optimization on the classification feature map to obtain an optimized classification feature map; and a monitoring result generation module 180 for passing the optimized classification feature map through a classifier to obtain a classification result indicating whether road-occupation construction is present.
The unmanned intelligent monitoring system based on the unmanned aerial vehicle can monitor and inspect a construction site in real time, with the following benefits. 1. It improves the safety and efficiency of the construction site: real-time monitoring allows potential safety hazards to be discovered in time, reducing accidents, while construction progress can also be tracked, improving construction quality and efficiency. 2. It enables automated monitoring: the unmanned aerial vehicle can cruise autonomously and collect images, and the system automatically identifies the road area and the construction area, reducing the labor burden. 3. It improves monitoring precision and accuracy: techniques such as convolutional neural networks and spatial attention mechanisms allow images to be identified and analyzed with high precision and accuracy. 4. It enables intelligent analysis and optimization: manifold geometric consistency optimization of the classification feature map further improves the accuracy and precision of monitoring.
Specifically, in the embodiment of the present application, the monitoring image acquisition module 110 is configured to acquire a construction monitoring image captured by the unmanned aerial vehicle. In view of the above technical problems, the technical concept of the application is to use construction monitoring video captured by an unmanned aerial vehicle and, through deep learning and artificial intelligence techniques, automatically identify whether road-occupation construction is taking place in a road target area, so as to improve the efficiency and coverage of road supervision and provide reliable data support for urban management departments.
Specifically, in the technical solution of the present application, a construction monitoring image captured by an unmanned aerial vehicle is first acquired. A construction monitoring image captured by an unmanned aerial vehicle offers a wider viewing angle and more comprehensive, more accurate monitoring data: the wider viewing angle covers the road conditions of a larger area and better reflects the state of the whole construction area. In addition, because the unmanned aerial vehicle can fly and shoot quickly, more data can be obtained in a short time, improving data acquisition efficiency.
Collecting construction monitoring images with an unmanned aerial vehicle improves monitoring efficiency and coverage: aerial overhead and panoramic shooting allow the construction site to be monitored more comprehensively and efficiently, with a wider coverage area and a better monitoring effect. Automated monitoring is also possible: the unmanned aerial vehicle can cruise autonomously and monitor the construction site without manual intervention, greatly improving monitoring efficiency and accuracy. The unmanned aerial vehicle can capture high-resolution images that record details of the construction site more clearly, improving monitoring precision and allowing violations such as road-occupation construction to be discovered more quickly. Finally, by processing the collected images with deep learning and artificial intelligence techniques, intelligent analysis and optimization of the construction site can be achieved, providing more valuable data support for urban management departments and enabling better planning and management of urban construction.
Specifically, in the embodiment of the present application, the road area identifying module 120 is configured to pass the construction monitoring image through a road target detection network to obtain a road region of interest image. And then, the construction monitoring image passes through a road target detection network to obtain a road region-of-interest image. Here, the road area can be automatically identified by the road object detection network. That is, in this way, attention can be focused on the road area while the background area is ignored.
An object detection network is a deep learning algorithm used to automatically detect and locate specific objects in an image or video. It can classify each pixel in the image as belonging to a target object or not, and can determine the location and size of each target object. An object detection network is typically composed of two parts: a feature extraction network and a target detection network. The feature extraction network extracts features from the input image and is usually implemented with a convolutional neural network (CNN). The target detection network uses these features to predict the location and class of each target object. Common target detection networks include YOLO (You Only Look Once), Faster R-CNN (Faster Region-based Convolutional Neural Network), and SSD (Single Shot MultiBox Detector).
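As an illustration of one post-processing step these detection networks share, below is a minimal NumPy sketch of greedy non-maximum suppression, the box-filtering step used by YOLO, SSD, and Faster R-CNN heads to keep one box per detected object; the boxes and scores are made up for the example:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop every remaining box that overlaps it by more than `thresh`."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = int(order[0])
        keep.append(i)
        rest = order[1:]
        order = rest[[iou(boxes[i], boxes[j]) < thresh for j in rest]]
    return keep

# two heavily overlapping candidate boxes on the road plus one distant box
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
result = nms(boxes, scores)  # the lower-scoring overlapping box is suppressed
```

In this toy case the first and second boxes overlap with IoU of about 0.68, so only the first (higher-scoring) box and the distant third box survive.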
Further, the road region of interest refers to a region related to a road in a construction monitoring image, and generally refers to the road itself and a region around the road. When construction monitoring is carried out, attention can be focused on the road area by extracting the road area of interest, so that the monitoring efficiency and accuracy are improved.
In other embodiments of the present application, the extraction of the road region of interest may be implemented using image processing techniques and object detection algorithms. On the image processing side, algorithms such as Canny edge detection and the Hough transform can be used to extract road edges. On the object detection side, the road area may be automatically identified using an object detection network from a deep learning algorithm, such as YOLO, SSD, or Faster R-CNN.
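To make the Hough transform mentioned above concrete, here is a minimal NumPy sketch of its voting step (a toy stand-in for a library routine such as OpenCV's `HoughLines`); the edge image is synthetic:

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180):
    """Classic Hough transform: every edge pixel votes for all (rho, theta)
    lines passing through it; peaks in the accumulator are dominant lines."""
    h, w = edge_mask.shape
    diag = int(np.ceil(np.hypot(h, w)))            # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    for y, x in zip(*np.nonzero(edge_mask)):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1  # shift rho to a non-negative index
    return acc, diag

# toy edge image: a single horizontal "road edge" along row 5
edges = np.zeros((10, 10), dtype=bool)
edges[5, :] = True
acc, diag = hough_lines(edges)
votes = acc[diag + 5, 90]  # all 10 edge pixels agree on theta = 90 deg, rho = 5
```

The accumulator peak at (rho = 5, theta = 90 degrees) collects one vote per edge pixel, which is how a straight road edge would stand out against scattered noise pixels.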
After the road region of interest is extracted, the road region of interest can be further analyzed and processed, and the road region of interest can be fused with other monitoring data, so that more intelligent monitoring and analysis are realized.
Specifically, in the embodiment of the present application, the road spatial feature extraction module 130 is configured to pass the road region-of-interest image through a first convolutional neural network model using a spatial attention mechanism to obtain the road spatial feature matrix. Convolutional neural networks have strong feature extraction capability and can extract high-dimensional road spatial features from the road region-of-interest image, improving the effect of road-occupation construction detection. In particular, the spatial attention mechanism focuses feature extraction on the spatial location information of the road.
Fig. 2 is a block diagram of the road spatial feature extraction module in the unmanned aerial vehicle-based unmanned intelligent monitoring system according to an embodiment of the present application. As shown in fig. 2, the road spatial feature extraction module 130 includes: a first convolutional encoding unit 131 for performing convolutional encoding on the road region-of-interest image using the convolutional encoding part of the first convolutional neural network model to obtain a road convolution feature map; a first spatial attention unit 132 for inputting the road convolution feature map into the spatial attention part of the first convolutional neural network model to obtain a road spatial attention map; a first activation unit 133 for activating the road spatial attention map with a Softmax activation function to obtain a road spatial attention feature map; a first spatial enhancement unit 134 for computing the position-wise multiplication of the road spatial attention feature map and the road convolution feature map to obtain a road spatial enhancement feature map; and a first pooling unit 135 for performing global average pooling on the road spatial enhancement feature map along the channel dimension to obtain the road spatial feature matrix.
The attention mechanism is a data processing method in machine learning, widely applied in tasks such as natural language processing, image recognition, and speech recognition. On one hand, the attention mechanism lets the network automatically learn which places in a picture or text sequence need attention; on the other hand, it generates a mask through the operations of the neural network, whose values weight the input. In general, a spatial attention mechanism averages the different channels of each pixel and then obtains spatial features through convolution and up-sampling operations, giving each pixel of the spatial features a different weight.
Spatial attention mechanisms are a technique that enables neural network models to focus more on important areas in an image. In a first convolutional neural network model using a spatial attention mechanism, the model selects important regions in the image through the attention mechanism, thereby improving the classification accuracy of the model and the understanding ability of the image.
Specifically, this model includes two key modules: a convolution layer and an attention layer. The convolution layer is used to extract features in the image, while the attention layer determines which regions are most critical to the classification task by calculating the importance of each pixel. The first convolutional neural network model using the spatial attention mechanism has good effects in tasks such as image classification, target detection and the like. By using the attention mechanism, the model can more accurately identify the target in the image, thereby improving classification accuracy. In addition, the model can automatically pay attention to important areas in the image, so that dependence on artificial feature engineering is reduced, and the robustness and generalization capability of the model are improved.
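Following the unit-by-unit pipeline described above for module 130 (convolutional encoding, spatial attention, Softmax activation, position-wise re-weighting, channel-wise pooling), here is a minimal NumPy sketch; the mean-over-channels attention logits are an assumption, since the text does not specify the learned attention sub-network:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def spatial_attention_features(feat):
    """feat: (C, H, W) convolutional feature map.
    Pipeline: attention logits -> Softmax over spatial positions ->
    position-wise re-weighting of the feature map -> global average
    pooling along the channel dimension -> (H, W) feature matrix."""
    c, h, w = feat.shape
    # attention logits: per-position mean over channels (an assumed stand-in
    # for the spatial-attention part of the first CNN model)
    logits = feat.mean(axis=0)
    attn = softmax(logits.ravel()).reshape(h, w)   # Softmax activation
    enhanced = feat * attn[None, :, :]             # position-wise multiplication
    return enhanced.mean(axis=0)                   # pool along channels

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))     # mock road convolution feature map
matrix = spatial_attention_features(feat) # road spatial feature matrix, shape (4, 4)
```

Note that pooling along the channel dimension, as the text specifies, yields a spatial matrix of shape (H, W), so each entry summarizes one image position across all channels.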
Specifically, in the embodiment of the present application, the construction area extraction module 140 is configured to apply a mask to the construction monitoring image, based on the position of the road region-of-interest image within it, to obtain a masked construction monitoring image. To improve detection efficiency, in the technical solution of the present application a mask is applied to the construction monitoring image according to the position of the road region-of-interest image on the construction monitoring image, yielding a masked construction monitoring image.
By this means, only information in the target area can be retained and information irrelevant to the target can be eliminated, namely, the construction work object is retained, so that the follow-up computer model can be focused on the construction work object better. Thus, noise interference can be effectively reduced, and the accuracy of data is improved.
In one embodiment of the application, the road region-of-interest image can be used as a template and matched against the construction monitoring image to obtain its position within the construction monitoring image. A mask can then be applied to the construction monitoring image according to the position of the template to obtain a masked construction monitoring image. Masking the construction monitoring image means covering the non-road areas of the image and preserving only the road region of interest.
The mask can be implemented with a masking operation from image processing: the road region-of-interest image is used as a mask and a bitwise AND is performed with the construction monitoring image to obtain the masked construction monitoring image. The masked construction monitoring image reduces background noise and the interference of irrelevant information, and improves the precision of recognition and analysis of the road area. In this way, the road area can be accurately identified and monitored, so that road-occupation construction problems can be discovered and handled in time, improving the safety and efficiency of the construction site.
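The bitwise-AND masking described above can be sketched in a few lines of NumPy; the image and ROI coordinates are toy values, and the operation is equivalent to OpenCV's `cv2.bitwise_and` with a mask argument:

```python
import numpy as np

# toy 6x6 grayscale construction monitoring image (values are arbitrary)
image = np.arange(36, dtype=np.uint8).reshape(6, 6)

# binary road region-of-interest mask: 255 inside the ROI, 0 elsewhere
mask = np.zeros((6, 6), dtype=np.uint8)
mask[2:5, 1:5] = 255

# bitwise AND keeps only the pixels inside the road region of interest;
# everything outside the ROI is zeroed out
masked = image & mask
```

Pixels inside the ROI are preserved exactly (`x & 255 == x` for uint8), while pixels outside it become zero, which is what lets downstream modules focus on the construction work objects on the road.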
Specifically, in the embodiment of the present application, the construction spatial feature extraction module 150 is configured to pass the masked construction monitoring image through a second convolutional neural network model using a spatial attention mechanism to obtain the construction work object feature matrix. Further, in order to extract the features of the construction work object, the masked construction monitoring image is passed through a second convolutional neural network model using a spatial attention mechanism to obtain a construction work object feature matrix. Likewise, the spatial attention mechanism helps the model focus on the construction work object and the feature information around it, so that the model attends to the changes and features of the construction work object, improving its accuracy.
Fig. 3 is a block diagram of the construction spatial feature extraction module in the unmanned aerial vehicle-based unmanned intelligent monitoring system according to an embodiment of the present application. As shown in fig. 3, the construction spatial feature extraction module 150 includes: a second convolutional encoding unit 151 for performing convolutional encoding on the masked construction monitoring image using the convolutional encoding part of the second convolutional neural network model to obtain a construction convolution feature map; a second spatial attention unit 152 for inputting the construction convolution feature map into the spatial attention part of the second convolutional neural network model to obtain a construction spatial attention map; a second activation unit 153 for activating the construction spatial attention map with a Softmax activation function to obtain a construction spatial attention feature map; a second spatial enhancement unit 154 for computing the position-wise multiplication of the construction spatial attention feature map and the construction convolution feature map to obtain a construction spatial enhancement feature map; and a second pooling unit 155 for performing global average pooling on the construction spatial enhancement feature map along the channel dimension to obtain the construction work object feature matrix.
In another embodiment of the present application, after the road region of interest image is acquired, it is matched against the construction monitoring image using a deep-learning method, such as a convolutional-neural-network-based target detection algorithm. After matching, the position of the road region of interest in the construction monitoring image is obtained, and the construction monitoring image is then masked according to this position information to obtain a mask construction monitoring image. Since the mask construction monitoring image retains only the information within the road region of interest, the computation required for subsequent image processing and analysis is reduced, and the efficiency of the monitoring system is improved.
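In the simplest case, masking by position reduces to zeroing out every pixel outside the matched bounding box; a minimal sketch, where the box coordinates stand in for a hypothetical detector output:

```python
import numpy as np

def mask_outside_roi(image, box):
    """image: (H, W, 3) construction monitoring image;
    box: (x1, y1, x2, y2) position of the road region of interest.
    Returns the mask construction monitoring image: pixels outside the box are zeroed."""
    x1, y1, x2, y2 = box
    masked = np.zeros_like(image)
    masked[y1:y2, x1:x2] = image[y1:y2, x1:x2]
    return masked

img = np.full((120, 160, 3), 200, dtype=np.uint8)
out = mask_outside_roi(img, (40, 30, 100, 90))
```

A polygonal or per-pixel mask from a segmentation model would follow the same pattern, with a boolean mask array in place of the box slice.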
The mask construction monitoring image can then be used by subsequent processing modules such as road area identification, spatial feature extraction, and classification feature map generation. Meanwhile, because the mask construction monitoring image retains only the information within the road region of interest, misjudgment and missed detection can be reduced, improving the accuracy and reliability of the monitoring system.
Specifically, in the embodiment of the present application, the spatial correlation feature extraction module 160 is configured to aggregate the road spatial feature matrix and the construction work object feature matrix into an input tensor and then obtain a classification feature map through a spatial correlation feature extractor based on a third convolutional neural network model. As described above, after the spatial feature matrix of the road and the feature matrix of the construction work object in the target area are extracted, the two are aggregated into an input tensor and passed through the spatial correlation feature extractor based on the third convolutional neural network model to obtain the classification feature map. In this way, the two sources of information can be integrated and their association in the spatial dimension can be mined.
Wherein, the spatial correlation feature extraction module 160 is configured to: in each layer of the spatial correlation feature extractor based on the third convolutional neural network model, perform the following operations on the input data in the forward pass of that layer: performing convolution processing on the input data to obtain a convolution feature map; performing feature-matrix-based mean pooling on the convolution feature map to obtain a pooled feature map; and performing nonlinear activation on the pooled feature map to obtain an activation feature map; wherein the output of the last layer of the spatial correlation feature extractor based on the third convolutional neural network model is the classification feature map, and the input of the first layer of the spatial correlation feature extractor based on the third convolutional neural network model is the input tensor.
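One forward layer of such an extractor — convolution, then mean pooling, then nonlinear activation — can be sketched as follows; the single-channel valid convolution, 2×2 pooling window, and ReLU activation are simplifying assumptions for illustration:

```python
import numpy as np

def conv2d_valid(x, kernel):
    """Valid 2-D convolution (cross-correlation, as in CNN practice) of one map."""
    kh, kw = kernel.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def layer_forward(x, kernel):
    c = conv2d_valid(x, kernel)               # convolution processing
    H2, W2 = c.shape[0] // 2, c.shape[1] // 2
    p = c[:H2 * 2, :W2 * 2].reshape(H2, 2, W2, 2).mean(axis=(1, 3))  # 2x2 mean pooling
    return np.maximum(p, 0.0)                 # nonlinear activation (ReLU)

rng = np.random.default_rng(1)
out = layer_forward(rng.standard_normal((11, 11)), rng.standard_normal((3, 3)))
```

Stacking several such layers, with the aggregated input tensor feeding the first layer, yields the classification feature map at the last layer.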
A convolutional neural network (Convolutional Neural Network, CNN) is an artificial neural network with wide application in fields such as image recognition. A convolutional neural network may include an input layer, hidden layers, and an output layer, where the hidden layers may include convolution layers, pooling layers, activation layers, fully connected layers, and the like; each layer performs its operation on the data it receives and passes the result to the next layer, so that the initial input yields a final result after multi-layer computation.
The convolutional neural network model, which uses convolution kernels as feature filtering factors, performs excellently at extracting local image features, and has stronger feature extraction generalization capability and fitting capability than traditional image feature extraction algorithms based on statistics or feature engineering.
Specifically, in the embodiment of the present application, the consistency optimization module 170 is configured to perform manifold geometric consistency optimization on the classification feature map to obtain an optimized classification feature map. Fig. 4 is a block diagram of the consistency optimization module in the unmanned aerial vehicle-based unattended intelligent monitoring system according to an embodiment of the present application. As shown in fig. 4, the consistency optimization module 170 includes: a piece-wise approximation factor calculation unit 171, configured to calculate piece-wise approximation factors of the convex-decomposition-based feature geometry metric for each feature matrix of the classification feature map to obtain a plurality of piece-wise approximation factors; and a weighting optimization unit 172, configured to weight each feature matrix by the plurality of piece-wise approximation factors to obtain the optimized classification feature map.
In the technical scheme of the present application, considering the difference between the source images of the road region of interest image and the mask construction monitoring image, and considering that the convolutional neural network models using a spatial attention mechanism strengthen the spatial distribution of image feature semantics while extracting the spatial semantic features of the images, there is a large difference between the overall feature distributions of the road spatial feature matrix and the construction work object feature matrix. Therefore, when the road spatial feature matrix and the construction work object feature matrix are aggregated into the input tensor and passed through the spatial correlation feature extractor based on the third convolutional neural network model to obtain the classification feature map, each feature matrix of the classification feature map expresses the association features between the overall feature distributions of the road spatial feature matrix and the construction work object feature matrix, and therefore also exhibits high inconsistency of overall distribution. That is, the feature matrices in the channel dimension of the classification feature map have high manifold geometric inconsistency of their high-dimensional feature manifolds caused by this inconsistency of overall distribution, which increases the convergence difficulty of the classification feature map when it undergoes classification regression through the classifier, that is, reduces the training speed and the accuracy of the converged classification result.
Thus, the applicant of the present application calculates, for each feature matrix of the classification feature map, a piece-wise approximation factor of the convex-decomposition-based feature geometry metric, expressed as: calculating the piece-wise approximation factor of the convex-decomposition-based feature geometry metric of each feature matrix of the classification feature map in an optimization formula to obtain a plurality of piece-wise approximation factors; wherein the optimization formula is:
[optimization formula rendered as an image in the original publication]
Its inputs are the individual row vectors (or column vectors) of each feature matrix of the classification feature map; it involves two activation functions, a concatenation of the vectors, and the square of the two-norm of a vector; and its outputs are the plurality of piece-wise approximation factors.
In particular, the piece-wise approximation factor of the convex-decomposition-based feature geometry metric first defines a symbolized distance measure between the local geometries of the high-dimensional feature manifold of each feature matrix, thereby obtaining a differentiable convex indicator of each convex polyhedron object under a convex polytope decomposition of the high-dimensional feature manifold, and further determines a hyperplane distance parameter for a learnable piece-wise convex decomposition of the high-dimensional feature manifold, so as to approximately measure the feature geometry. In this way, by weighting each feature matrix with the piece-wise approximation factors of the convex-decomposition-based feature geometry metric, the manifold geometric consistency of the high-dimensional feature manifolds of the different feature matrices of the classification feature map under different channels can be improved, thereby reducing the convergence difficulty of the classification feature map during classification regression through the classifier, that is, improving the training speed and the accuracy of the converged classification result.
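The weighting optimization step itself (unit 172) reduces to scaling each channel's feature matrix by its scalar factor. A minimal sketch, with the factors taken as given values since the patent's factor formula is published only as an image:

```python
import numpy as np

def weight_classification_map(class_fm, factors):
    """class_fm: (C, H, W) classification feature map;
    factors: length-C sequence of piece-wise approximation factors, one per feature matrix.
    Returns the optimized classification feature map."""
    return class_fm * np.asarray(factors)[:, None, None]

fm = np.ones((4, 3, 3))
opt = weight_classification_map(fm, [0.5, 1.0, 2.0, 0.0])
```

Each channel is rescaled as a whole, which is how a per-matrix factor can pull the channel-wise feature distributions toward mutual consistency without altering the spatial structure inside any single matrix.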
Specifically, in the embodiment of the present application, the monitoring result generation module 180 is configured to pass the optimized classification feature map through a classifier to obtain a classification result, where the classification result is used to indicate whether the phenomenon of road-occupying construction exists. The classifier divides the classification feature map into the two classes "the phenomenon of road-occupying construction exists" and "the phenomenon of road-occupying construction does not exist", thereby realizing automatic detection and identification of road-occupying construction. In practical application, the classification result has high interpretability and can serve as a basis for decision making. Specifically, in response to the classification result being "the phenomenon of road-occupying construction exists", corresponding measures should be taken in time to ensure the smoothness of urban traffic and the safety of construction.
Fig. 5 is a block diagram of the monitoring result generation module in the unmanned aerial vehicle-based unattended intelligent monitoring system according to an embodiment of the present application. As shown in fig. 5, the monitoring result generation module 180 includes: an unfolding unit 181, configured to unfold the optimized classification feature map into a classification feature vector according to row vectors or column vectors; a fully-connected encoding unit 182, configured to perform fully-connected encoding on the classification feature vector by using multiple fully-connected layers of the classifier to obtain an encoded classification feature vector; and a classification unit 183, configured to pass the encoded classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
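These three units form a standard flatten → fully-connected → Softmax head. A sketch under hypothetical layer sizes, with random weight matrices standing in for trained parameters:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(optimized_fm, W1, b1, W2, b2):
    """optimized_fm: (C, H, W) optimized classification feature map.
    Returns (label, probabilities); label 1 = road-occupying construction exists."""
    v = optimized_fm.reshape(-1)         # unit 181: unfold into a classification feature vector
    h = np.maximum(W1 @ v + b1, 0.0)     # unit 182: fully-connected encoding (one layer shown)
    p = softmax(W2 @ h + b2)             # unit 183: Softmax classification function
    return int(p.argmax()), p

rng = np.random.default_rng(2)
C, H, W = 4, 3, 3
W1, b1 = rng.standard_normal((8, C * H * W)), np.zeros(8)
W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)
label, probs = classify(rng.standard_normal((C, H, W)), W1, b1, W2, b2)
```

The two Softmax outputs are the probabilities of the two classes named above, so the argmax directly yields the binary monitoring decision.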
The classifier may employ classical machine learning algorithms, such as the Support Vector Machine (SVM), Decision Tree, or Random Forest. During training of the classifier, a number of labeled road-occupying construction image samples need to be prepared, so that the classifier can learn the characteristics of road-occupying construction and then identify new road-occupying construction images.
In this way, automatic identification of the road-occupying construction phenomenon can be realized, so that road-occupying construction problems are found and handled in time, improving the safety and efficiency of the construction site. In addition, the system can perform intelligent analysis and optimization; for example, the road-occupying construction conditions of a construction site can be counted and analyzed, so that the construction process is optimized and improved, raising construction efficiency and quality.
In summary, the unmanned aerial vehicle-based unattended intelligent monitoring system 100 according to the embodiment of the present application is illustrated: it acquires a construction monitoring image collected by the unmanned aerial vehicle and, using deep learning and artificial intelligence technology, automatically identifies whether the phenomenon of road-occupying construction exists in the road target area, thereby improving the efficiency and coverage of road supervision and providing reliable data support for urban management departments.
As described above, the unmanned aerial vehicle-based unattended intelligent monitoring system 100 according to the embodiment of the present application may be implemented in various terminal devices, for example, a server for unmanned aerial vehicle-based unattended intelligent monitoring, or the like. In one example, the system 100 may be integrated into a terminal device as a software module and/or a hardware module. For example, the system 100 may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the system 100 can also be one of the many hardware modules of the terminal device.
Alternatively, in another example, the unmanned aerial vehicle-based unattended intelligent monitoring system 100 and the terminal device may be separate devices, and the system 100 may be connected to the terminal device through a wired and/or wireless network and transmit interaction information in an agreed data format.
In one embodiment of the present application, fig. 6 is a flowchart of an unmanned aerial vehicle-based unattended intelligent monitoring method according to an embodiment of the present application. Fig. 7 is a schematic diagram of a system architecture of the unmanned aerial vehicle-based unattended intelligent monitoring method according to an embodiment of the application. As shown in fig. 6 and fig. 7, the unmanned aerial vehicle-based unattended intelligent monitoring method according to an embodiment of the present application includes: 210, acquiring a construction monitoring image acquired by an unmanned aerial vehicle; 220, passing the construction monitoring image through a road target detection network to obtain a road region of interest image; 230, passing the road region of interest image through a first convolutional neural network model using a spatial attention mechanism to obtain a road spatial feature matrix; 240, masking the construction monitoring image based on the position of the road region of interest image in the construction monitoring image to obtain a mask construction monitoring image; 250, passing the mask construction monitoring image through a second convolutional neural network model using a spatial attention mechanism to obtain a construction work object feature matrix; 260, aggregating the road spatial feature matrix and the construction work object feature matrix into an input tensor and then obtaining a classification feature map through a spatial correlation feature extractor based on a third convolutional neural network model; 270, performing manifold geometric consistency optimization on the classification feature map to obtain an optimized classification feature map; and 280, passing the optimized classification feature map through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the phenomenon of road-occupying construction exists or not.
In a specific example, in the unmanned aerial vehicle-based unattended intelligent monitoring method, passing the road region of interest image through the first convolutional neural network model using a spatial attention mechanism to obtain the road spatial feature matrix includes: performing convolutional encoding on the road region of interest image by using the convolutional encoding portion of the first convolutional neural network model using a spatial attention mechanism to obtain a road convolution feature map; inputting the road convolution feature map into the spatial attention portion of the first convolutional neural network model using a spatial attention mechanism to obtain a road space attention map; activating the road space attention map by a Softmax activation function to obtain a road space attention feature map; calculating the position-wise point multiplication of the road space attention feature map and the road convolution feature map to obtain a road space enhancement feature map; and performing global mean pooling on the road space enhancement feature map along the channel dimension to obtain the road spatial feature matrix.
In a specific example, in the unmanned aerial vehicle-based unattended intelligent monitoring method, passing the mask construction monitoring image through the second convolutional neural network model using a spatial attention mechanism to obtain the construction work object feature matrix includes: performing convolutional encoding on the mask construction monitoring image by using the convolutional encoding portion of the second convolutional neural network model using a spatial attention mechanism to obtain a construction convolution feature map; inputting the construction convolution feature map into the spatial attention portion of the second convolutional neural network model using a spatial attention mechanism to obtain a construction space attention map; activating the construction space attention map by a Softmax activation function to obtain a construction space attention feature map; calculating the position-wise point multiplication of the construction space attention feature map and the construction convolution feature map to obtain a construction space enhancement feature map; and performing global mean pooling on the construction space enhancement feature map along the channel dimension to obtain the construction work object feature matrix.
It will be appreciated by those skilled in the art that the specific operations of the steps in the unmanned aerial vehicle-based unattended intelligent monitoring method described above have been described in detail in the description of the unmanned aerial vehicle-based unattended intelligent monitoring system with reference to fig. 1 to 5, and repetitive descriptions thereof are therefore omitted.
Fig. 8 is an application scenario diagram of the unmanned aerial vehicle-based unattended intelligent monitoring system according to an embodiment of the application. As shown in fig. 8, in this application scenario, first, a construction monitoring image (e.g., C as illustrated in fig. 8) acquired by an unmanned aerial vehicle (e.g., M as illustrated in fig. 8) is acquired; then, the acquired construction monitoring image is input to a server (e.g., S as illustrated in fig. 8) in which an unmanned aerial vehicle-based unattended intelligent monitoring algorithm is deployed, and the server processes the construction monitoring image with this algorithm to generate a classification result indicating whether the phenomenon of road-occupying construction exists.
The block diagrams of the devices, apparatuses, and systems referred to in this application are only illustrative examples and are not intended to require or imply that the connections, arrangements, or configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including but not limited to" and may be used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or" unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
It is also noted that in the apparatuses, devices, and methods of the present application, the components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be considered equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (8)

1. An unmanned aerial vehicle-based unattended intelligent monitoring system, characterized by comprising:
the monitoring image acquisition module is used for acquiring a construction monitoring image acquired by the unmanned aerial vehicle;
the road area identification module is used for passing the construction monitoring image through a road target detection network to obtain a road region-of-interest image;
the road space feature extraction module is used for obtaining a road space feature matrix through a first convolution neural network model using a space attention mechanism through the road region-of-interest image;
a construction area extraction module for applying a mask to the construction monitoring image based on the position of the road region of interest image in the construction monitoring image to obtain a mask construction monitoring image;
the construction space feature extraction module is used for obtaining a construction operation object feature matrix through a second convolution neural network model using a space attention mechanism by using the mask construction monitoring image;
The spatial correlation feature extraction module is used for acquiring a classification feature map through a spatial correlation feature extractor based on a third convolutional neural network model after aggregating the road spatial feature matrix and the construction work object feature matrix into an input tensor;
the consistency optimization module is used for carrying out manifold geometric consistency optimization on the classification characteristic map so as to obtain an optimized classification characteristic map; and
the monitoring result generation module is used for enabling the optimized classification characteristic diagram to pass through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the phenomenon of road occupation construction exists or not;
wherein, the uniformity optimization module includes:
a piece-wise approximation factor calculation unit for calculating piece-wise approximation factors of the convex decomposition-based feature geometry metrics for each feature matrix of the classification feature map to obtain a plurality of piece-wise approximation factors; and
the weighting optimization unit is used for weighting each feature matrix by the plurality of piece-wise approximation factors to obtain the optimized classification feature map;
wherein the piece-wise approximation factor calculation unit is used for: calculating a piece-wise approximation factor of the convex-decomposition-based feature geometry metric of each feature matrix of the classification feature map in an optimization formula to obtain a plurality of piece-wise approximation factors;
wherein, the optimization formula is:
[optimization formula rendered as an image in the original publication]
Its inputs are the individual row vectors (or column vectors) of each feature matrix of the classification feature map; it involves two activation functions, a concatenation of the vectors, and the square of the two-norm of a vector; and its outputs are the plurality of piece-wise approximation factors.
2. The unmanned aerial vehicle-based unattended intelligent monitoring system according to claim 1, wherein the road space feature extraction module comprises:
the first convolution coding unit is used for carrying out convolution coding on the image of the region of interest of the road by using a convolution coding part of a first convolution neural network model of the spatial attention mechanism so as to obtain a road convolution characteristic diagram;
a first spatial attention unit for inputting the road convolution feature map into a spatial attention portion of a first convolution neural network model of the spatial attention mechanism to obtain a road spatial attention map;
the first activation unit is used for activating the road space attention map by a Softmax activation function to obtain a road space attention feature map;
the first space enhancement unit is used for calculating the position-based point multiplication of the road space attention feature map and the road convolution feature map to obtain a road space enhancement feature map; and
And the first pooling unit is used for carrying out global mean pooling on the road space enhancement feature map along the channel dimension so as to obtain the road space feature matrix.
3. The unmanned aerial vehicle-based unattended intelligent monitoring system according to claim 2, wherein the construction space feature extraction module comprises:
a second convolution encoding unit, configured to perform convolution encoding on the mask construction monitoring image by using a convolution encoding part of the second convolution neural network model using a spatial attention mechanism to obtain a construction convolution feature map;
a second spatial attention unit for inputting the construction convolution feature map into a spatial attention portion of the second convolution neural network model using a spatial attention mechanism to obtain a construction spatial attention map;
the second activation unit is used for activating the construction space attention map by a Softmax activation function to obtain a construction space attention feature map;
the second space enhancement unit is used for calculating the position-wise point multiplication of the construction space attention feature map and the construction convolution feature map to obtain a construction space enhancement feature map; and
and the second pooling unit is used for carrying out global mean pooling on the construction space enhancement feature map along the channel dimension so as to obtain the construction operation object feature matrix.
4. The unmanned aerial vehicle-based unattended intelligent monitoring system according to claim 3, wherein the spatial correlation feature extraction module is configured to: in each layer of the spatial correlation feature extractor based on the third convolutional neural network model, perform the following operations on the input data in the forward pass of that layer:
carrying out convolution processing on the input data to obtain a convolution characteristic diagram;
carrying out mean pooling treatment based on a feature matrix on the convolution feature map to obtain a pooled feature map; and
non-linear activation is carried out on the pooled feature map so as to obtain an activated feature map;
the output of the last layer of the spatial correlation feature extractor based on the third convolutional neural network model is the classification feature map, and the input of the first layer of the spatial correlation feature extractor based on the third convolutional neural network model is the input tensor.
5. The unmanned aerial vehicle-based unattended intelligent monitoring system according to claim 4, wherein the monitoring result generation module comprises:
the unfolding unit is used for unfolding the optimized classification characteristic graph into classification characteristic vectors according to row vectors or column vectors;
The full-connection coding unit is used for carrying out full-connection coding on the classification characteristic vectors by using a plurality of full-connection layers of the classifier so as to obtain coded classification characteristic vectors; and
and the classification unit is used for passing the coding classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
6. An unmanned aerial vehicle-based unattended intelligent monitoring method, characterized by comprising the following steps:
acquiring a construction monitoring image acquired by an unmanned aerial vehicle;
the construction monitoring image passes through a road target detection network to obtain a road region-of-interest image;
the image of the region of interest of the road is processed through a first convolution neural network model using a spatial attention mechanism to obtain a road spatial feature matrix;
applying a mask to the construction monitoring image based on the position of the road region of interest image in the construction monitoring image to obtain a mask construction monitoring image;
the mask construction monitoring image is subjected to a second convolution neural network model by using a spatial attention mechanism to obtain a construction operation object feature matrix;
the road space feature matrix and the construction operation object feature matrix are aggregated into an input tensor, and then a space association feature extractor based on a third convolutional neural network model is used for obtaining a classification feature map;
Performing manifold geometric consistency optimization on the classification feature map to obtain an optimized classification feature map; and
the optimized classification characteristic diagram is passed through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the phenomenon of road occupation construction exists or not;
wherein performing manifold geometric consistency optimization on the classification feature map to obtain the optimized classification feature map comprises:
calculating a piece-wise approximation factor of each feature matrix of the classification feature map based on a convex decomposition feature geometric metric to obtain a plurality of piece-wise approximation factors; and
weighting each feature matrix by the plurality of piece-wise approximation factors to obtain the optimized classification feature map;
wherein calculating the piece-wise approximation factor of each feature matrix of the classification feature map based on the convex decomposition feature geometric metric comprises: calculating the piece-wise approximation factor of each feature matrix of the classification feature map with an optimization formula based on the convex decomposition feature geometric metric to obtain the plurality of piece-wise approximation factors;
wherein the optimization formula [rendered as images in the original] computes each piece-wise approximation factor from the i-th row vector or column vector of the corresponding feature matrix of the classification feature map, using two activation functions applied thereto, the concatenation of the vectors, and the square of the two-norm of the vector, so as to obtain the plurality of piece-wise approximation factors.
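The weighting step of the optimization can be sketched directly: each feature matrix (channel slice) of the classification feature map is scaled by its scalar piece-wise approximation factor. Because the factor formula itself appears only as images in the source, the factors below are stand-in values, and the feature-map sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Classification feature map: C feature matrices of size H x W
# (sizes are assumptions for illustration).
C, H, W = 4, 6, 6
feature_map = rng.standard_normal((C, H, W))

# Stand-in piece-wise approximation factors, one scalar per feature
# matrix; in the patent these come from the convex-decomposition
# feature geometric metric formula.
factors = np.array([0.9, 1.1, 0.7, 1.3])

# Weight each feature matrix by its factor (broadcast over H x W)
# to obtain the optimized classification feature map.
optimized = feature_map * factors[:, None, None]
```

The weighting is a per-channel scalar rescaling, so the optimized map keeps the same shape as the input classification feature map.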
7. The unmanned aerial vehicle-based unmanned intelligent monitoring method of claim 6, wherein passing the road region-of-interest image through the first convolutional neural network model using a spatial attention mechanism to obtain the road spatial feature matrix comprises:
performing convolutional encoding on the road region-of-interest image using a convolutional encoding part of the first convolutional neural network model to obtain a road convolution feature map;
inputting the road convolution feature map into a spatial attention part of the first convolutional neural network model to obtain a road spatial attention map;
passing the road spatial attention map through a Softmax activation function to obtain a road spatial attention feature map;
calculating a position-wise point multiplication of the road spatial attention feature map and the road convolution feature map to obtain a road spatial enhancement feature map; and
performing global mean pooling on the road spatial enhancement feature map along the channel dimension to obtain the road spatial feature matrix.
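The spatial-attention branch in this claim can be sketched in numpy: a convolution-encoded feature map yields a spatial attention map, the map is normalized with Softmax over the spatial positions, multiplied point-wise into the convolution features, and the result is mean-pooled along the channel dimension. The 1x1 attention projection and all tensor sizes are illustrative assumptions; the claim does not fix how the attention part is realized.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in road convolution feature map with C channels over an
# H x W spatial grid (sizes are assumptions).
C, H, W = 8, 5, 5
conv_feat = rng.standard_normal((C, H, W))

# Spatial attention part, assumed here as a 1x1 projection that
# collapses channels into one attention score per spatial position.
w_attn = rng.standard_normal(C)
attn_map = np.tensordot(w_attn, conv_feat, axes=([0], [0]))  # (H, W)

# Softmax activation over all H*W spatial positions.
flat = attn_map.ravel()
e = np.exp(flat - flat.max())
attn = (e / e.sum()).reshape(H, W)

# Position-wise point multiplication, broadcast over channels,
# gives the spatial enhancement feature map.
enhanced = conv_feat * attn[None, :, :]

# Global mean pooling along the channel dimension yields the
# H x W road spatial feature matrix.
road_spatial_feature = enhanced.mean(axis=0)
```

Note that pooling along the channel dimension (not the spatial dimensions) is what leaves an H x W matrix, matching the claim's "road spatial feature matrix".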
8. The unmanned aerial vehicle-based unmanned intelligent monitoring method of claim 7, wherein passing the mask construction monitoring image through the second convolutional neural network model using a spatial attention mechanism to obtain the construction operation object feature matrix comprises:
performing convolutional encoding on the mask construction monitoring image using a convolutional encoding part of the second convolutional neural network model to obtain a construction convolution feature map;
inputting the construction convolution feature map into a spatial attention part of the second convolutional neural network model to obtain a construction spatial attention map;
passing the construction spatial attention map through a Softmax activation function to obtain a construction spatial attention feature map;
calculating a position-wise point multiplication of the construction spatial attention feature map and the construction convolution feature map to obtain a construction spatial enhancement feature map; and
performing global mean pooling on the construction spatial enhancement feature map along the channel dimension to obtain the construction operation object feature matrix.
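The mask construction monitoring image that this claim processes is formed in claim 6 by masking the construction monitoring image at the position of the road region of interest. A minimal sketch of that masking step, assuming a rectangular ROI box from the detection network and an illustrative image size:

```python
import numpy as np

# Stand-in construction monitoring image (single channel, sizes assumed).
H, W = 8, 10
image = np.ones((H, W))

# Assumed position of the road region-of-interest inside the image
# (top, bottom, left, right), as given by the road target detection network.
t, b, l, r = 2, 5, 3, 7

# Zero out the road region so the second branch attends only to the
# non-road content (construction operation objects).
mask = np.ones((H, W))
mask[t:b, l:r] = 0.0
masked_image = image * mask  # mask construction monitoring image
```

Whether the mask zeroes the road region or its complement is a design choice the claim leaves open; the sketch assumes the road region is suppressed so the construction-object branch sees the remainder of the scene.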
CN202310717185.7A 2023-06-16 2023-06-16 Unmanned intelligent monitoring system and method based on unmanned aerial vehicle Active CN116630909B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310717185.7A CN116630909B (en) 2023-06-16 2023-06-16 Unmanned intelligent monitoring system and method based on unmanned aerial vehicle


Publications (2)

Publication Number Publication Date
CN116630909A CN116630909A (en) 2023-08-22
CN116630909B true CN116630909B (en) 2024-02-02

Family

ID=87641877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310717185.7A Active CN116630909B (en) 2023-06-16 2023-06-16 Unmanned intelligent monitoring system and method based on unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN116630909B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291812A (en) * 2020-02-11 2020-06-16 浙江大华技术股份有限公司 Attribute class acquisition method and device, storage medium and electronic device
CN112686207A (en) * 2021-01-22 2021-04-20 北京同方软件有限公司 Urban street scene target detection method based on regional information enhancement
CN114170569A (en) * 2021-12-10 2022-03-11 山东大学 Method, system, storage medium and equipment for monitoring road surface abnormal condition
CN115257784A (en) * 2022-06-15 2022-11-01 西安电子科技大学 Vehicle-road cooperative system based on 4D millimeter wave radar
CN115392320A (en) * 2022-09-08 2022-11-25 江苏鑫鸿电气设备有限公司 Transformer with anti-theft function and method thereof
CN115744084A (en) * 2022-11-21 2023-03-07 华能伊敏煤电有限责任公司 Belt tensioning control system and method based on multi-sensor data fusion
CN115770374A (en) * 2022-12-28 2023-03-10 湖北博利特种汽车装备股份有限公司 Electric cruise fire engine based on CAFS system
CN116001716A (en) * 2022-12-29 2023-04-25 陕西省君凯电子科技有限公司 Intelligent remote management system for mechanical equipment
CN116152768A (en) * 2023-03-01 2023-05-23 重庆赛力斯新能源汽车设计院有限公司 Intelligent driving early warning system and method based on road condition identification
CN116167989A (en) * 2023-02-10 2023-05-26 昇兴博德新材料温州有限公司 Intelligent production method and system for aluminum cup

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11586865B2 (en) * 2021-02-18 2023-02-21 Volkswagen Aktiengesellschaft Apparatus, system and method for fusing sensor data to do sensor translation


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Universal Image Embedding: Retaining and Expanding Knowledge With Multi-Domain Fine-Tuning; SOCRATIS GKELIOS et al.; IEEE Access; Vol. 11; pp. 38208-38217 *
Neighborhood embedding face super-resolution reconstruction with joint local constraints; Huang Fuzhen; Zhou Chenxu; He Linwei; Journal of Image and Graphics (Issue 06); pp. 18-27 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant