CN117522987A - Method for detecting illegal building, device for detecting illegal building and storage medium - Google Patents

Method for detecting illegal building, device for detecting illegal building and storage medium

Info

Publication number
CN117522987A
Authority
CN
China
Prior art keywords
building
coordinates
virtual
unmanned aerial
aerial vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311674103.1A
Other languages
Chinese (zh)
Inventor
金楠
伍永靖邦
施钟淇
岳清瑞
凡红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Urban Safety Development Science And Technology Research Institute Shenzhen
Shenzhen Technology Institute of Urban Public Safety Co Ltd
Original Assignee
Urban Safety Development Science And Technology Research Institute Shenzhen
Shenzhen Technology Institute of Urban Public Safety Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Urban Safety Development Science And Technology Research Institute Shenzhen, Shenzhen Technology Institute of Urban Public Safety Co Ltd filed Critical Urban Safety Development Science And Technology Research Institute Shenzhen
Priority to CN202311674103.1A priority Critical patent/CN117522987A/en
Publication of CN117522987A publication Critical patent/CN117522987A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30181 Earth observation
    • G06T 2207/30184 Infrastructure
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30244 Camera pose

Abstract

The invention discloses a method for detecting an illegal building, a device for detecting an illegal building, and a storage medium, wherein the method comprises the following steps: generating an illegal-building detection route in a virtual scene according to the three-dimensional city model corresponding to the region to be detected; transmitting the detection route to a virtual unmanned aerial vehicle in the virtual scene, and acquiring a virtual image fed back by the virtual unmanned aerial vehicle; determining the corresponding illegal building in the virtual image based on a preset detection model, and determining both the pixel coordinates of the illegal building and the three-dimensional space coordinates of the virtual unmanned aerial vehicle when it captured the virtual image; and generating coordinate information of the illegal building in the real scene from the pixel coordinates and the three-dimensional space coordinates. By constructing a virtual detection scene and detecting illegal buildings in it with a virtual unmanned aerial vehicle, the method keeps the unmanned aerial vehicle's data acquisition free from the influence of dense buildings and communication towers, improving detection accuracy.

Description

Method for detecting illegal building, device for detecting illegal building and storage medium
Technical Field
The present invention relates to the field of detection of illegal buildings, and in particular, to a method for detecting illegal buildings, a device for detecting illegal buildings, and a storage medium.
Background
To improve the efficiency of detecting urban illegal buildings, unmanned aerial vehicles are commonly used to inspect for them.
In existing unmanned-aerial-vehicle-based inspection of illegal buildings, the complex urban environment interferes with the acquisition of the vehicle's raw parameters such as longitude, latitude, and Euler angles: dense buildings, communication towers, and similar structures disturb the reception and processing of the vehicle's GPS (Global Positioning System) or RTK (Real-Time Kinematic) signals, and also corrupt gyroscope and accelerometer measurements, resulting in low accuracy in detecting illegal buildings.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The invention mainly aims to provide a method for detecting an illegal building, a device for detecting an illegal building, and a storage medium, solving the problem of low detection accuracy of illegal buildings in the prior art.
To achieve the above object, the present invention provides a method for detecting an illegal building, the method comprising the steps of:
generating an illegal-building detection route in a virtual scene according to the three-dimensional city model corresponding to the region to be detected;
transmitting the detection route to a virtual unmanned aerial vehicle in the virtual scene, and acquiring a virtual image fed back by the virtual unmanned aerial vehicle;
determining the corresponding illegal building in the virtual image based on a preset detection model, and determining the pixel coordinates of the illegal building together with the three-dimensional space coordinates of the virtual unmanned aerial vehicle when it captured the virtual image;
and generating coordinate information of the illegal building in the real scene according to the pixel coordinates and the three-dimensional space coordinates.
Optionally, the step of generating coordinate information of the illegal building in the real scene according to the pixel coordinates and the three-dimensional space coordinates includes:
determining the three-dimensional space coordinate system corresponding to the three-dimensional space coordinates, and converting it into the corresponding unmanned aerial vehicle coordinate system in the real scene;
acquiring the internal parameters and external parameters of the virtual unmanned aerial vehicle;
converting the pixel coordinates into camera coordinates based on the internal parameters, and converting the camera coordinates into world coordinates based on the external parameters;
and converting the world coordinates into unmanned aerial vehicle coordinates in the unmanned aerial vehicle coordinate system, the unmanned aerial vehicle coordinates being the coordinate information.
Optionally, the step of converting the pixel coordinates into camera coordinates based on the internal parameters and converting the camera coordinates into world coordinates based on the external parameters includes:
determining the principal point coordinates of the pixel coordinate system corresponding to the pixel coordinates, and the focal lengths of the internal parameters in the horizontal and vertical directions;
computing the difference between the pixel coordinates and the principal point coordinates, and dividing the horizontal and vertical components of that difference by the corresponding focal lengths to obtain the camera coordinates;
and determining the external parameter matrix, rotation matrix, and translation vector corresponding to the external parameters, and converting the camera coordinates into world coordinates based on the external parameter matrix, rotation matrix, and translation vector.
Optionally, before the step of generating the illegal-building detection route in the virtual scene according to the three-dimensional city model corresponding to the region to be detected, the method further includes:
acquiring a pre-training data set, and processing it based on a detection algorithm to obtain a validation set and a test set;
inputting the test set into a pre-training model, and controlling the pre-training model to perform data-enhancement processing on the corresponding input images;
initializing the weight information of the pre-training model, performing training iterations on a pre-annotated data set according to the weight information, and updating the weight information by backpropagation based on the results of those iterations;
and evaluating the training results on the validation set to obtain an evaluation performance, and performing hyperparameter tuning on the pre-training model according to that performance to obtain the preset detection model.
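The training pipeline claimed above (split, augment, initialize weights, iterate with backpropagated updates, evaluate on a validation set, tune hyperparameters) can be illustrated on a toy model. The patent does not name its detection algorithm, so the logistic-regression stand-in, the augmentation noise, and the learning-rate grid below are all illustrative assumptions, not the patent's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: two features, label is whether their sum is positive.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

X_train, y_train = X[:120], y[:120]    # training split
X_val, y_val = X[120:160], y[120:160]  # validation set for evaluation
X_test, y_test = X[160:], y[160:]      # held-out test set

def augment(X, noise=0.01):
    """Stand-in for the data-enhancement step: jitter the inputs slightly."""
    return X + rng.normal(scale=noise, size=X.shape)

def train(lr, epochs=200):
    w, b = np.zeros(2), 0.0            # initialized weight information
    Xa = augment(X_train)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(Xa @ w + b)))       # forward pass
        grad_w = Xa.T @ (p - y_train) / len(y_train)  # backpropagated gradient
        grad_b = (p - y_train).mean()
        w -= lr * grad_w                              # reverse weight update
        b -= lr * grad_b
    return w, b

def accuracy(w, b, X, y):
    return float((((X @ w + b) > 0).astype(float) == y).mean())

# Hyperparameter tuning: keep the learning rate scoring best on the
# validation set, then train the final ("preset") model with it.
best_lr = max([0.01, 0.1, 1.0], key=lambda lr: accuracy(*train(lr), X_val, y_val))
model_w, model_b = train(best_lr)
```

The same split/augment/iterate/evaluate/tune structure applies unchanged when the toy classifier is replaced by a real object-detection network.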
Optionally, before the step of generating the illegal-building detection route in the virtual scene according to the three-dimensional city model corresponding to the region to be detected, the method includes:
acquiring a plurality of images collected by a real unmanned aerial vehicle in the region to be detected, generating the three-dimensional city model based on those images, and importing the three-dimensional city model into the virtual scene;
responding to a rendering instruction received by the virtual scene by adding the simulated rendering effect corresponding to the rendering instruction to the three-dimensional city model; and/or
responding to the rendering instruction received by the virtual scene by determining the rendering materials and texture information of the three-dimensional city model corresponding to the rendering instruction, and adding the rendering materials and texture information to the three-dimensional city model.
Optionally, before the step of generating the illegal-building detection route in the virtual scene according to the three-dimensional city model corresponding to the region to be detected, the method further includes:
generating a patrol task corresponding to the region to be detected, the patrol task including a takeoff point, a landing point, shooting points, and route information for the patrol region;
and setting aerial-photography parameters based on the patrol task, selecting an unmanned aerial vehicle from an unmanned aerial vehicle platform according to those parameters, and controlling the unmanned aerial vehicle to execute the patrol task.
Optionally, the step of determining the corresponding illegal building in the virtual image based on the preset detection model and determining the pixel coordinates of the illegal building includes:
determining a bounding box of the illegal building in the virtual image based on the preset detection model;
taking the coordinates of the center point of the bounding box in the virtual image as the pixel coordinates; or
determining a preset calibration point of the bounding box, and taking the coordinates of that calibration point in the virtual image as the pixel coordinates.
Optionally, after the step of generating the coordinate information of the illegal building in the real scene according to the pixel coordinates and the three-dimensional space coordinates, the method further includes:
outputting the coordinate information to an image detection interface and, in response to an operation instruction fed back by the interface, performing at least one of the following actions:
generating alarm prompt coordinates for the illegal buildings of the region to be detected according to the coordinate information; or
regenerating a patrol task, controlling the real unmanned aerial vehicle to re-photograph the region to be detected based on the newly generated patrol task, and generating a new three-dimensional city model from the re-photographed images; or
returning to the steps of determining the corresponding illegal building in the virtual image based on the preset detection model, determining the pixel coordinates of the illegal building, and acquiring the three-dimensional space coordinates of the virtual unmanned aerial vehicle when it captured the virtual image.
In addition, to achieve the above object, the present invention also provides a device for detecting an illegal building, the device comprising a memory, a processor, and an illegal-building detection program stored in the memory and runnable on the processor, the detection program implementing the steps of the above method for detecting an illegal building when executed by the processor.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon an illegal-building detection program which, when executed by a processor, implements the steps of the method for detecting an illegal building described above.
The embodiments of the invention provide a method for detecting an illegal building, a device for detecting an illegal building, and a storage medium. An illegal-building detection route is generated in a virtual scene according to the three-dimensional city model corresponding to the region to be detected; the detection route is sent to a virtual unmanned aerial vehicle in the virtual scene, and the virtual image fed back by the virtual unmanned aerial vehicle is acquired; the corresponding illegal building in the virtual image is determined based on a preset detection model, together with the pixel coordinates of the illegal building and the three-dimensional space coordinates of the virtual unmanned aerial vehicle when it captured the virtual image; and finally, coordinate information of the illegal building in the real scene is generated from the pixel coordinates and the three-dimensional space coordinates. By constructing a virtual scene corresponding to the real scene and controlling a virtual unmanned aerial vehicle within it to detect illegal buildings, the measurement process keeps the vehicle's longitude and latitude, Euler angles, and other parameters free from the influence of dense buildings and communication towers in the real scene, improving the detection accuracy of illegal buildings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flow chart of a first embodiment of the method for detecting an illegal building of the present invention;
FIG. 2 is a flow chart of a second embodiment of the method for detecting an illegal building of the present invention;
FIG. 3 is a flow chart of a third embodiment of the method for detecting an illegal building of the present invention;
fig. 4 is a schematic diagram of the terminal hardware structure used by the various embodiments of the method for detecting an illegal building of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In existing unmanned-aerial-vehicle-based inspection of illegal buildings, the complex urban environment interferes with the acquisition of the vehicle's raw parameters such as longitude, latitude, and Euler angles: dense buildings, communication towers, and similar structures disturb the reception and processing of the vehicle's GPS (Global Positioning System) or RTK (Real-Time Kinematic) signals, and also corrupt gyroscope and accelerometer measurements, resulting in low accuracy in detecting illegal buildings.
To remedy the above drawbacks, an embodiment of the present invention provides a method for detecting an illegal building, mainly comprising the following steps:
generating an illegal-building detection route in a virtual scene according to the three-dimensional city model corresponding to the region to be detected;
transmitting the detection route to a virtual unmanned aerial vehicle in the virtual scene, and acquiring a virtual image fed back by the virtual unmanned aerial vehicle;
determining the corresponding illegal building in the virtual image based on a preset detection model, and determining the pixel coordinates of the illegal building together with the three-dimensional space coordinates of the virtual unmanned aerial vehicle when it captured the virtual image;
and generating coordinate information of the illegal building in the real scene according to the pixel coordinates and the three-dimensional space coordinates.
By constructing a virtual scene corresponding to the real scene and then controlling a virtual unmanned aerial vehicle within it to detect illegal buildings, the method keeps the vehicle's longitude and latitude, Euler angles, and other parameters free from the influence of dense buildings and communication towers in the real scene during measurement, improving the detection accuracy of illegal buildings.
In order to better understand the above technical solution, exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Referring to fig. 1, in a first embodiment, the method for detecting an illegal building according to the present invention includes the steps of:
Step S10, generating an illegal-building detection route in a virtual scene according to the three-dimensional city model corresponding to the region to be detected;
In this embodiment, because dense buildings, communication towers, and the like in the real scene degrade the unmanned aerial vehicle's positioning data, and with it the accuracy with which illegal buildings can be located, a three-dimensional virtual scene corresponding to the region to be detected is constructed so that detection can be performed in the virtual scene, improving detection accuracy. Within the virtual scene, to make the virtual unmanned aerial vehicle's photographic coverage of the virtual region efficient, a patrol route for illegal-building detection is generated based on the three-dimensional city model. The region to be detected is any area where illegal buildings may exist; its size is set according to the actual application scene, and it may include urban areas as well as villages and towns.
In an alternative embodiment of obtaining the three-dimensional city model of the region to be detected, a plurality of images collected by a real unmanned aerial vehicle in the region may be obtained, the three-dimensional city model generated from those images, and the model imported into the virtual scene. The three-dimensional reconstruction of the urban region from multiple images can be performed with geometric methods such as SfM (Structure from Motion) and MVS (Multi-View Stereo), or with learning-based methods such as a NeRF (Neural Radiance Fields) network, producing a three-dimensional model of the region to be detected. Alternatively, the three-dimensional scene can be constructed from locally pre-stored image information of the region. The generated three-dimensional city model can then be imported into three-dimensional graphics software.
After the three-dimensional city model is imported into the virtual scene and before the detection route is generated, the model is rendered in order to improve its realism and, in turn, the detection accuracy. That is, after a worker inputs a rendering instruction in the virtual scene, the rendering effect corresponding to that instruction is added to the three-dimensional city model in response. For example, a rendering instruction may specify the position, intensity, and color of a light source for the three-dimensional city model; the corresponding lighting effects are then added so that natural illumination is simulated in the virtual scene, improving the realism of the model. The worker can also manually model incomplete parts of the three-dimensional city model and add the corresponding materials and textures: when the detection system responds to such a rendering instruction, it determines the rendering materials and texture information corresponding to the instruction and adds them to the three-dimensional city model. Simulating natural lighting and manual modeling can be performed independently or together.
Step S20, sending the illegal-building detection route to the virtual unmanned aerial vehicle in the virtual scene, and acquiring a virtual image fed back by the virtual unmanned aerial vehicle;
In this embodiment, after the detection route in the virtual scene is sent to the virtual unmanned aerial vehicle, the vehicle is controlled to carry out detection along that route, and the virtual images it captures during detection are acquired.
In an alternative embodiment, the detection route may be delivered to the virtual unmanned aerial vehicle as an offline package that the vehicle downloads and executes. Alternatively, the detection route can be transmitted to the virtual unmanned aerial vehicle in real time, so that the vehicle can plan its path in the virtual scene from the real-time route and its current position, improving detection efficiency.
In an optional implementation of controlling the virtual unmanned aerial vehicle to capture images in the virtual scene, a flight task is set in the virtual environment based on the detection route (its definition can mirror how flight tasks are defined in the real environment); the images captured virtually during the flight task are then acquired, and the three-dimensional space coordinates of the virtual unmanned aerial vehicle in the virtual environment are recorded at the same time.
Step S30, determining the corresponding illegal building in the virtual image based on a preset detection model, and determining the pixel coordinates of the illegal building together with the three-dimensional space coordinates of the virtual unmanned aerial vehicle when it captured the virtual image;
In this embodiment, after the virtual images captured by the virtual unmanned aerial vehicle are obtained, the illegal buildings in each virtual image are detected with the preset detection model: each illegal building is framed by a bounding box, and the center point of the bounding box serves as the pixel coordinates of the building's center in the image. Thus, as an alternative embodiment, the bounding box of the illegal building may be determined in the virtual image based on the preset detection model, and the coordinates of its center point in the virtual image taken as the pixel coordinates. Calibrating the bounding box on the current virtual image improves the efficiency of obtaining the pixel coordinates.
In addition, a preset calibration point can be defined within the bounding box: after the bounding box for an illegal building is generated by the preset detection model, the preset calibration point of that box is determined, and its coordinates in the virtual image are taken as the pixel coordinates.
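Extracting the pixel coordinate from a detection is a small computation; a minimal sketch of both options, where the "bottom_center" anchor stands in for a preset calibration point (the patent leaves the choice of point open, so that particular anchor is an assumption for illustration):

```python
def bbox_pixel_coordinate(bbox, anchor="center"):
    """Return the pixel coordinate used to localize a detected building.

    `bbox` is (x_min, y_min, x_max, y_max) in image pixels. "center" is the
    bounding-box center point; "bottom_center" is an example of a preset
    calibration point within the box.
    """
    x_min, y_min, x_max, y_max = bbox
    if anchor == "center":
        return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
    if anchor == "bottom_center":
        return ((x_min + x_max) / 2.0, float(y_max))
    raise ValueError(f"unknown anchor: {anchor}")
```

For a detection covering pixels (10, 20) to (30, 60), the center anchor gives (20.0, 40.0).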
When the virtual unmanned aerial vehicle captures virtual images, its position at the moment each image is taken is recorded synchronously, so that when a virtual image is determined to contain an illegal building, the virtual three-dimensional space coordinates stored in association with that image can be retrieved directly.
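The per-image pose record can be as simple as a lookup table keyed by image identifier; the field names below are illustrative assumptions, not part of the patent:

```python
capture_log = {}  # image_id -> virtual drone pose recorded at capture time

def record_capture(image_id, position, euler_angles):
    """Store the virtual UAV's 3-D position and attitude alongside each
    captured image, so the pose can be looked up directly once a detection
    fires in that image."""
    capture_log[image_id] = {"position": tuple(position),
                             "euler_angles": tuple(euler_angles)}

def pose_for(image_id):
    """Return the pose stored in association with the given image."""
    return capture_log[image_id]
```

A detection in image "img_001" then retrieves the associated three-dimensional space coordinates with `pose_for("img_001")["position"]`.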
Step S40, generating coordinate information of the illegal building in the real scene according to the pixel coordinates and the three-dimensional space coordinates.
In this embodiment, after the pixel coordinates are obtained, they are converted, together with the three-dimensional space coordinates, into the coordinates of the corresponding real illegal building in the real scene.
As an alternative embodiment, step S40 includes:
step S41, determining a three-dimensional space coordinate system corresponding to the three-dimensional space coordinate, and converting the three-dimensional space coordinate system into an unmanned aerial vehicle coordinate system corresponding to the real scene;
in this embodiment, a certain point in the three-dimensional model is selected as the origin of the world coordinate system, and the three-dimensional space coordinates and the real world coordinates of the virtual shooting of the unmanned aerial vehicle are compared and registered, so that the coordinate system corresponding to the virtual three-dimensional space coordinates is converted into the unmanned aerial vehicle coordinate system in the real scene, so that the world coordinates of the virtual scene obtained by calculating according to the pixel coordinates are converted into the real world unmanned aerial vehicle coordinate system. The unmanned aerial vehicle coordinate system may be a longitude and latitude coordinate system.
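The patent does not spell out the registration algorithm. A minimal sketch under a strong simplifying assumption, namely that the virtual model is already metrically scaled and axis-aligned with the real frame, so only a translation offset must be estimated from corresponding point pairs; a full registration would also solve for rotation and scale (e.g. with a similarity-transform / Umeyama fit):

```python
import numpy as np

def register_translation(virtual_pts, real_pts):
    """Estimate the offset aligning virtual-scene coordinates with the
    real-world drone frame from corresponding point pairs, assuming the
    two frames share scale and orientation."""
    v = np.asarray(virtual_pts, dtype=float)
    r = np.asarray(real_pts, dtype=float)
    return (r - v).mean(axis=0)  # per-axis mean offset

def virtual_to_real(p_virtual, offset):
    """Map a virtual-scene point into the real-world frame."""
    return np.asarray(p_virtual, dtype=float) + offset
```

Given two shots whose positions are known in both frames, any further virtual coordinate (such as a detected building's world coordinate) can be mapped into the real frame with the estimated offset.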
Step S42, acquiring the internal parameters and external parameters of the virtual unmanned aerial vehicle;
As an optional implementation, converting the pixel coordinates requires the internal and external parameters of the virtual unmanned aerial vehicle. The internal parameters are the calibrated camera parameters used when the virtual unmanned aerial vehicle photographs the virtual scene; the external parameters generally comprise the camera's position (X, Y, Z) and orientation (Euler angles or a rotation matrix). Because the photography takes place in the virtual environment, the entries of the camera's external matrix are not disturbed by the complex environment of a real city, which can significantly improve the accuracy of detection and positioning.
Step S43, converting the pixel coordinates into camera coordinates based on the internal parameters, and converting the camera coordinates into world coordinates based on the external parameters;
In an alternative embodiment of converting the pixel coordinates into camera coordinates based on the internal parameters, the principal point coordinates of the pixel coordinate system and the focal lengths in the horizontal and vertical directions are determined from the internal parameters; the difference between the pixel coordinates and the principal point coordinates is then computed, and its horizontal and vertical components are divided by the corresponding focal lengths to obtain the camera coordinates.
Illustratively, with the pixel coordinates (u, v) being the center point of the bounding box (i.e., the center of the illegal building), the internal parameters obtained by camera calibration convert them into coordinates (xc, yc, zc) in the camera coordinate system. The conversion formulas are as follows:
xc=(u-principal_point_x)/focal_length_x
yc=(v-principal_point_y)/focal_length_y
zc=1
Where (principal_point_x, principal_point_y) are the principal point coordinates of the pixel coordinate system, and focal_length_x and focal_length_y are the values of the camera focal length in the x and y directions, respectively.
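The conversion formulas above can be sketched directly in code. The intrinsic values in the example call are illustrative placeholders, not calibration results from the patent's virtual unmanned aerial vehicle:

```python
# A minimal sketch of the pixel-to-camera conversion described above.
def pixel_to_camera(u, v, principal_point_x, principal_point_y,
                    focal_length_x, focal_length_y):
    """Back-project a pixel onto the normalized image plane (zc = 1)."""
    xc = (u - principal_point_x) / focal_length_x
    yc = (v - principal_point_y) / focal_length_y
    zc = 1.0
    return xc, yc, zc

# Example: a 1920x1080 image with the principal point at its center and a
# hypothetical focal length of 1000 pixels in both directions.
xc, yc, zc = pixel_to_camera(1100, 640, 960, 540, 1000.0, 1000.0)
```

Note that zc = 1 fixes only the ray direction; recovering the true depth requires the registered three-dimensional scene, as the following step describes.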
After converting the pixel coordinates into camera coordinates, in an alternative implementation manner of converting the camera coordinates into world coordinates based on the external parameters, it is necessary to determine an external parameter matrix, a rotation matrix and a translation vector corresponding to the external parameters, and calculate based on matrix transformation among the external parameter matrix, the rotation matrix and the translation vector, so as to convert the camera coordinates into world coordinates. The conversion process is as follows:
[Xw,Yw,Zw,1]=[R|T]*[xc,yc,zc,1]
wherein [R|T] is the extrinsic matrix of the camera, R is a rotation matrix, and T is a translation vector. [xc, yc, zc, 1] are homogeneous coordinates in the camera coordinate system, and [Xw, Yw, Zw, 1] are homogeneous coordinates in the world coordinate system. The camera coordinates and world coordinates here are virtual coordinates in the virtual scene, not coordinates of the real scene.
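The matrix transformation above can be sketched as follows. The rotation and translation values are illustrative placeholders; in the patent's pipeline they would come from the virtual unmanned aerial vehicle's pose:

```python
import numpy as np

# A minimal sketch of [Xw, Yw, Zw] = [R|T] * [xc, yc, zc, 1].
def camera_to_world(cam_xyz, R, T):
    """Apply the 3x4 extrinsic matrix [R|T] to homogeneous camera coordinates."""
    extrinsic = np.hstack([R, T.reshape(3, 1)])          # 3x4 matrix [R|T]
    cam_h = np.append(np.asarray(cam_xyz, float), 1.0)   # [xc, yc, zc, 1]
    return extrinsic @ cam_h                             # [Xw, Yw, Zw]

R = np.eye(3)                  # identity rotation: camera axes aligned with world
T = np.array([10.0, 20.0, 5.0])  # hypothetical camera position in world units
world = camera_to_world((0.14, 0.1, 1.0), R, T)
```

With the identity rotation the transform reduces to a pure translation, which makes the example easy to verify by hand.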
And S44, converting the world coordinates into unmanned aerial vehicle coordinates in the unmanned aerial vehicle coordinate system, wherein the unmanned aerial vehicle coordinates are the coordinate information.
After the corresponding world coordinates in the virtual scene are obtained, the virtual world coordinates are registered against real-world longitude and latitude coordinates, so that the real coordinates of the illegal building in the real world are obtained. On this basis, the illegal building can be detected and positioned without interference from the complex urban environment, and the detection accuracy of the illegal building is improved.
In the technical scheme disclosed in this embodiment, a three-dimensional city model of the region to be detected is constructed and imported into a virtual scene, a violation building detection route corresponding to the three-dimensional city model is generated in the virtual scene, a virtual unmanned aerial vehicle in the virtual scene is controlled to perform shooting processing along the route, coordinate conversion processing is performed on the virtual image containing the violation building and the corresponding coordinates shot by the virtual unmanned aerial vehicle, and the real coordinates of the violation building in the real scene are then calculated from the virtual world coordinates of the violation building in the virtual scene. The detection and positioning of the violation building are thus completed without interference from the complex environment of the real scene, and the accuracy of violation building detection is improved.
Referring to fig. 2, in the second embodiment, based on the first embodiment, before step S10, the method further includes:
step S50, generating a patrol task corresponding to the region to be detected, wherein the patrol task comprises a flying spot, a landing spot, a shooting spot and route information of the patrol region;
step S60, setting aerial photographing parameters based on the inspection task, selecting the unmanned aerial vehicle in the unmanned aerial vehicle platform according to the aerial photographing parameters, and controlling the unmanned aerial vehicle to execute the inspection task.
In this embodiment, before images for modeling the three-dimensional city model are acquired, a specific unmanned aerial vehicle needs to be selected to ensure that images with high coverage and good image quality of the region to be detected can be acquired. That is, the city inspection flight task of the unmanned aerial vehicle needs to be set.
When the patrol task corresponding to the region to be detected is generated, the city region and the targets to be patrolled need to be clearly defined, and a specific patrol plan and target list are formulated. The heights of the city buildings in the patrol region are a main consideration when determining the departure point, the landing point and the intermediate route, together with factors such as path planning, obstacle avoidance and avoiding restricted flight zones for safe flight. Therefore, the generated patrol task comprises the flying spot, landing spot, shooting spots and route information of the patrol region.
After the inspection task is generated, aerial photographing parameters of the unmanned aerial vehicle need to be set according to the requirements of the inspection task. The aerial parameters include flight height, speed, line spacing, image acquisition interval, etc., to ensure adequate data coverage and image quality. A suitable unmanned aerial vehicle platform is then selected according to the aerial photographing parameters; factors such as the size, flight time, load capacity and flight stability of the unmanned aerial vehicle need to be considered in the selection process, so that the unmanned aerial vehicle can meet the requirements of the inspection task. Finally, after the unmanned aerial vehicle is selected, it is controlled to execute the patrol task so as to shoot the real-world region to be detected. It can be appreciated that, to guarantee the fidelity of the three-dimensional city modeling, the unmanned aerial vehicle is typically equipped with a high-definition camera.
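A patrol task holding the parameters this embodiment enumerates might be organized as a simple record. The field names and default values below are hypothetical; the patent does not define a concrete data structure:

```python
from dataclasses import dataclass, field

# A minimal sketch of a patrol-task record: take-off/landing points, shooting
# points, route-related aerial photographing parameters. All names and
# defaults are illustrative assumptions.
@dataclass
class PatrolTask:
    takeoff_point: tuple                      # (lat, lon)
    landing_point: tuple                      # (lat, lon)
    shooting_points: list = field(default_factory=list)
    flight_height_m: float = 120.0            # flight height
    speed_m_s: float = 8.0                    # cruise speed
    line_spacing_m: float = 40.0              # spacing between survey lines
    capture_interval_s: float = 2.0           # image acquisition interval

task = PatrolTask(
    takeoff_point=(22.54, 114.06),
    landing_point=(22.54, 114.06),
    shooting_points=[(22.541, 114.061), (22.542, 114.062)],
)
```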
As an optional implementation manner, among the equipment carried by the unmanned aerial vehicle, a five-lens camera can be carried in addition to a single high-definition camera, so as to improve the inspection efficiency.
In the technical scheme disclosed in this embodiment, suitable tasks are formulated according to real-world factors such as the heights of city buildings, obstacle avoidance areas and restricted flight zones, and an unmanned aerial vehicle meeting the requirements of the shooting task is selected on that basis. The images collected by the unmanned aerial vehicle can thus cover the buildings in the region to be detected while the clarity of the shot images is guaranteed, which improves the quality of the three-dimensional city modeling and the accuracy of detecting and positioning illegal buildings in the real world.
Referring to fig. 3, in the third embodiment, based on the first embodiment, before step S10, the method further includes:
step S70, a pre-training data set is obtained, and the pre-training data set is processed based on a detection algorithm to obtain a verification set and a test set;
as an alternative implementation, offending buildings in the pre-training data set may be detected by a YOLO detection algorithm, and the data set may be divided into subsets such as a training set, a verification set and a test set.
Step S80, inputting the test set into a pre-training model, and controlling the pre-training model to perform data enhancement processing on an input image corresponding to the test set;
as an optional implementation manner, data enhancement operations such as random cropping, rotation, scaling and translation are performed on the input images of the test set, which increases the diversity of the data and the robustness of the model, further improving the detection accuracy of the pre-training model.
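Two of the augmentations named above can be sketched on a plain image array. This is an illustrative sketch only: the parameter choices are arbitrary, and a real detection pipeline would also transform the bounding-box annotations together with the image:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop(img, crop_h, crop_w):
    """Crop a random crop_h x crop_w window from an HxWxC image."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - crop_h + 1)
    left = rng.integers(0, w - crop_w + 1)
    return img[top:top + crop_h, left:left + crop_w]

def translate_x(img, shift):
    """Simple wrap-around horizontal translation."""
    return np.roll(img, shift, axis=1)

img = np.zeros((100, 100, 3), dtype=np.uint8)
cropped = random_crop(img, 64, 64)
shifted = translate_x(img, 10)
```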
Step S90, initializing weight information of the pre-training model, carrying out training iteration processing on a pre-marked data set according to the weight information, and updating the weight information in reverse based on the result of the training iteration processing;
as an alternative implementation, the model weights may be initialized either with pre-trained weights or randomly. Training iterations are then performed on the labeled dataset based on the weight information. In each iteration, the images are input into the network, predictions are computed by forward propagation, the loss is calculated against the annotation data, and the weights of the network are updated by backpropagation.
And step S100, carrying out evaluation processing on training results according to the verification set to obtain evaluation performance, and carrying out super-parameter tuning processing on the pre-training model according to the evaluation performance to obtain the preset detection model.
As an alternative, the learning rate may be gradually reduced during training as needed to help the model converge better, and the verification set or test set is periodically used to evaluate the performance of the model. The evaluation indexes may include accuracy, recall, and mean average precision (mAP). Hyper-parameters such as batch size, learning rate and regularization parameters are then tuned based on the model performance, and model training is finally completed.
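A stepped learning-rate decay and the precision/recall evaluation indexes mentioned above can be sketched as follows. The decay factor and step size are illustrative hyper-parameters, not values from the patent:

```python
# A minimal sketch of stepped learning-rate decay and precision/recall
# computation. Hyper-parameter values are illustrative assumptions.
def stepped_lr(base_lr, epoch, step_size=10, gamma=0.5):
    """Multiply the learning rate by gamma every step_size epochs."""
    return base_lr * (gamma ** (epoch // step_size))

def precision_recall(tp, fp, fn):
    """Precision and recall from true/false positive and false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

lr = stepped_lr(0.01, epoch=25)            # decayed twice: 0.01 * 0.5**2
p, r = precision_recall(tp=8, fp=2, fn=4)  # 8 correct detections, 2 spurious, 4 missed
```

mAP extends this by averaging precision over recall thresholds and over classes, which is why it is the usual summary index for detection models.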
In the technical scheme disclosed in this embodiment, the test set is enhanced, improving data diversity and model robustness; the data set undergoes training iterations based on the weight information, the loss is calculated against the marked data, and the weights of the network model are updated by backpropagation so that the model is gradually optimized. Finally, the learning rate is gradually reduced during training to help the model converge better, and evaluation and hyper-parameter tuning are performed on the model, yielding a preset detection model that can be applied in the virtual scene to detect violation buildings in virtual images, which improves the detection efficiency of violation buildings in the virtual scene.
In the fourth embodiment, based on the first embodiment, after the real-world coordinate information of the offending building is generated, the coordinate information may be output to an image detection interface, where a worker analyzes and processes it, and the system responds to the operation instruction fed back through the interface.
If, in the real-world scene, there is no building at the position corresponding to the coordinate information, the operator may consider the current detection to be wrong and issue, through the image detection interface, an operation instruction to regenerate the three-dimensional city model and perform detection and recognition again. Based on this instruction, the detection system regenerates the patrol task, controls the real unmanned aerial vehicle to re-shoot the image of the region to be detected according to the newly generated patrol task, generates a new three-dimensional city model from the newly shot images, and finally jumps back to the step of generating the illegal building detection route in the virtual scene according to the newly generated three-dimensional city model.
Alternatively, the detection model may be considered to have erred during detection, and the system can jump directly back to the step of determining the corresponding violation building in the virtual image based on the preset detection model, and determining the pixel coordinates of the violation building and the corresponding three-dimensional space coordinates when the virtual unmanned aerial vehicle collected the virtual image. Through these two different backtracking paths, the operator can conveniently find the cause of the erroneous detection, improving maintenance efficiency when errors occur.
When the coordinate information is confirmed to be accurate, alarm prompt coordinates for the illegal buildings in the region to be detected can be generated according to the coordinate information, so as to prompt the corresponding law enforcement personnel to handle them.
In the technical scheme disclosed in this embodiment, the coordinate information is output to the image detection interface and the operation instructions fed back through the interface are responded to, so that the operator can verify and manage the coordinate information.
Referring to fig. 4, fig. 4 is a schematic diagram of a terminal structure of a hardware running environment according to an embodiment of the present invention.
As shown in fig. 4, the terminal may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a network interface 1003, and a memory 1004. The communication bus 1002 is used to enable connected communication between these components. The network interface 1003 may optionally include a standard wired interface or a wireless interface (e.g., a Wireless-Fidelity (WI-FI) interface). The memory 1004 may be a high-speed Random Access Memory (RAM) or a stable Non-Volatile Memory (NVM), such as a disk memory. The memory 1004 may also optionally be a storage device separate from the processor 1001 described above.
It will be appreciated by those skilled in the art that the terminal structure shown in fig. 4 is not limiting of the terminal and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 4, an operating system, a data storage module, a network communication module, and a violation building detection program may be included in the memory 1004 as one type of computer storage medium.
In the terminal shown in fig. 4, the network interface 1003 is mainly used for connecting to a background server, and performing data communication with the background server; the processor 1001 may call the offending building detection program stored in the memory 1004 and perform the following operations:
generating a violation building detection route in the virtual scene according to the three-dimensional city model corresponding to the region to be detected;
transmitting the illegal building detection route to a virtual unmanned aerial vehicle of a virtual scene, and acquiring a virtual image fed back by the virtual unmanned aerial vehicle;
determining a corresponding violation building in the virtual image based on a preset detection model, and determining pixel coordinates of the violation building and corresponding three-dimensional space coordinates when the virtual unmanned aerial vehicle collects the virtual image;
And generating coordinate information of the violation building in a real scene according to the pixel coordinates and the three-dimensional space coordinates.
Further, the processor 1001 may call the offending building detection program stored in the memory 1004, and further perform the following operations:
determining a three-dimensional space coordinate system corresponding to the three-dimensional space coordinate, and converting the three-dimensional space coordinate system into a corresponding unmanned aerial vehicle coordinate system in the real scene;
acquiring internal parameters and external parameters of the virtual unmanned aerial vehicle;
converting the pixel coordinates into camera coordinates based on the internal parameters, and converting the camera coordinates into world coordinates based on the external parameters;
and converting the world coordinates into unmanned aerial vehicle coordinates in the unmanned aerial vehicle coordinate system, wherein the unmanned aerial vehicle coordinates are the coordinate information.
Further, the processor 1001 may call the offending building detection program stored in the memory 1004, and further perform the following operations:
determining principal point coordinates of a pixel coordinate system corresponding to the pixel coordinates and focal lengths of the internal parameters in the transverse and longitudinal directions;
determining a difference coordinate between the pixel coordinates and the principal point coordinates, and dividing the transverse and longitudinal values of the difference coordinate by the corresponding focal lengths to obtain the camera coordinates;
And determining an external parameter matrix, a rotation matrix and a translation vector corresponding to the external parameter, and converting the camera coordinates into world coordinates based on the external parameter matrix, the rotation matrix and the translation vector.
Further, the processor 1001 may call the offending building detection program stored in the memory 1004, and further perform the following operations:
acquiring a pre-training data set, and processing the pre-training data set based on a detection algorithm to obtain a verification set and a test set;
inputting the test set into a pre-training model, and controlling the pre-training model to perform data enhancement processing on an input image corresponding to the test set;
initializing weight information of the pre-training model, carrying out training iteration processing on a pre-marked data set according to the weight information, and updating the weight information in reverse based on the result of the training iteration processing;
and carrying out evaluation processing on the training result according to the verification set to obtain evaluation performance, and carrying out super-parameter tuning processing on the pre-training model according to the evaluation performance to obtain the preset detection model.
Further, the processor 1001 may call the offending building detection program stored in the memory 1004, and further perform the following operations:
Acquiring a plurality of pieces of image information acquired by a real unmanned aerial vehicle in the region to be detected, generating the three-dimensional city model based on the image information, and importing the three-dimensional city model into the virtual scene;
responding to a rendering instruction received by the virtual scene, and adding a simulated rendering effect corresponding to the rendering instruction in the three-dimensional city model; and/or
And responding to the rendering instruction received by the virtual scene, determining rendering materials and texture information of the three-dimensional city model corresponding to the rendering instruction, and adding the rendering materials and the texture information into the three-dimensional city model.
Further, the processor 1001 may call the offending building detection program stored in the memory 1004, and further perform the following operations:
generating a patrol task corresponding to the region to be detected, wherein the patrol task comprises a flying spot, a landing spot, a shooting spot and route information of the patrol region;
and setting aerial photographing parameters based on the inspection task, selecting the unmanned aerial vehicle in an unmanned aerial vehicle platform according to the aerial photographing parameters, and controlling the unmanned aerial vehicle to execute the inspection task.
Further, the processor 1001 may call the offending building detection program stored in the memory 1004, and further perform the following operations:
Determining a boundary box of the violation building in the virtual image based on the preset detection model;
taking the coordinates of the center point of the boundary box in the virtual image as the pixel coordinates; or alternatively
And determining a preset calibration point of the boundary box, and taking the coordinates of the preset calibration point in the virtual image as the pixel coordinates.
Further, the processor 1001 may call the offending building detection program stored in the memory 1004, and further perform the following operations:
outputting the coordinate information to an image detection interface, responding to an operation instruction fed back by the image detection interface, and at least executing the following actions:
generating alarm prompt coordinates of the illegal buildings of the area to be detected according to the coordinate information; or alternatively
Regenerating a patrol task, controlling the real unmanned aerial vehicle to shoot the image of the region to be detected again based on the newly generated patrol task, and generating a new three-dimensional city model according to the re-shot image; or alternatively
And skipping to execute the step of determining the corresponding violation building in the virtual image based on the preset detection model, and determining the pixel coordinates of the violation building and the corresponding three-dimensional space coordinates when the virtual unmanned aerial vehicle collects the virtual image.
Furthermore, it will be appreciated by those of ordinary skill in the art that implementing all or part of the processes in the methods of the above embodiments may be accomplished by computer programs to instruct related hardware. The computer program comprises program instructions, and the computer program may be stored in a storage medium, which is a computer readable storage medium. The program instructions are executed by at least one processor in the control terminal to carry out the flow steps of the embodiments of the method described above.
The present invention thus also provides a computer-readable storage medium storing a detection program for an offending building which, when executed by a processor, implements the steps of the method for detecting an offending building described in the above embodiments.
It should be noted that, because the storage medium provided in the embodiments of the present application is a storage medium used to implement the method in the embodiments of the present application, based on the method described in the embodiments of the present application, a person skilled in the art can understand the specific structure and the modification of the storage medium, and therefore, the description thereof is omitted herein. All storage media used in the methods of the embodiments of the present application are within the scope of protection intended in the present application.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flowchart and/or block of the flowchart illustrations and/or block diagrams, and combinations of flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (10)

1. A method for detecting an illegal building, characterized by comprising the following steps:
generating a violation building detection route in the virtual scene according to the three-dimensional city model corresponding to the region to be detected;
Transmitting the illegal building detection route to a virtual unmanned aerial vehicle of a virtual scene, and acquiring a virtual image fed back by the virtual unmanned aerial vehicle;
determining a corresponding violation building in the virtual image based on a preset detection model, and determining pixel coordinates of the violation building and corresponding three-dimensional space coordinates when the virtual unmanned aerial vehicle collects the virtual image;
and generating coordinate information of the violation building in a real scene according to the pixel coordinates and the three-dimensional space coordinates.
2. The method of claim 1, wherein the step of generating coordinate information of the offending building in a real scene based on the pixel coordinates and the three-dimensional space coordinates comprises:
determining a three-dimensional space coordinate system corresponding to the three-dimensional space coordinate, and converting the three-dimensional space coordinate system into a corresponding unmanned aerial vehicle coordinate system in the real scene;
acquiring internal parameters and external parameters of the virtual unmanned aerial vehicle;
converting the pixel coordinates into camera coordinates based on the internal parameters, and converting the camera coordinates into world coordinates based on the external parameters;
and converting the world coordinates into unmanned aerial vehicle coordinates in the unmanned aerial vehicle coordinate system, wherein the unmanned aerial vehicle coordinates are the coordinate information.
3. The method of detecting an offending building of claim 2, wherein the steps of converting the pixel coordinates into camera coordinates based on the internal parameters and converting the camera coordinates into world coordinates based on the external parameters include:
determining principal point coordinates of a pixel coordinate system corresponding to the pixel coordinates and focal lengths of the internal parameters in the transverse and longitudinal directions;
determining a difference coordinate between the pixel coordinates and the principal point coordinates, and dividing the transverse and longitudinal values of the difference coordinate by the corresponding focal lengths to obtain the camera coordinates;
and determining an external parameter matrix, a rotation matrix and a translation vector corresponding to the external parameter, and converting the camera coordinates into world coordinates based on the external parameter matrix, the rotation matrix and the translation vector.
4. The method for detecting an offending building according to claim 1, wherein before the step of generating an offending building detection route in a virtual scene according to a three-dimensional city model corresponding to the area to be detected, the method further comprises:
acquiring a pre-training data set, and processing the pre-training data set based on a detection algorithm to obtain a verification set and a test set;
inputting the test set into a pre-training model, and controlling the pre-training model to perform data enhancement processing on an input image corresponding to the test set;
Initializing weight information of the pre-training model, carrying out training iteration processing on a pre-marked data set according to the weight information, and updating the weight information in reverse based on the result of the training iteration processing;
and carrying out evaluation processing on the training result according to the verification set to obtain evaluation performance, and carrying out super-parameter tuning processing on the pre-training model according to the evaluation performance to obtain the preset detection model.
5. The method for detecting an offending building according to claim 1, wherein before the step of generating an offending building detection route in a virtual scene according to a three-dimensional city model corresponding to the area to be detected, the method comprises:
acquiring a plurality of pieces of image information acquired by a real unmanned aerial vehicle in the region to be detected, generating the three-dimensional city model based on the image information, and importing the three-dimensional city model into the virtual scene;
responding to a rendering instruction received by the virtual scene, and adding a simulated rendering effect corresponding to the rendering instruction in the three-dimensional city model; and/or
And responding to the rendering instruction received by the virtual scene, determining rendering materials and texture information of the three-dimensional city model corresponding to the rendering instruction, and adding the rendering materials and the texture information into the three-dimensional city model.
6. The method for detecting an offending building according to claim 1, wherein before the step of generating an offending building detection route in a virtual scene according to a three-dimensional city model corresponding to the area to be detected, the method further comprises:
generating a patrol task corresponding to the region to be detected, wherein the patrol task comprises a flying spot, a landing spot, a shooting spot and route information of the patrol region;
and setting aerial photographing parameters based on the inspection task, selecting the unmanned aerial vehicle in an unmanned aerial vehicle platform according to the aerial photographing parameters, and controlling the unmanned aerial vehicle to execute the inspection task.
7. The method according to claim 1, wherein the step of determining the corresponding illegal building in the virtual image based on a preset detection model and determining the pixel coordinates of the illegal building comprises:
determining a bounding box of the illegal building in the virtual image based on the preset detection model; and
taking the coordinates of the center point of the bounding box in the virtual image as the pixel coordinates; or
determining a preset calibration point of the bounding box, and taking the coordinates of the preset calibration point in the virtual image as the pixel coordinates.
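Both branches of claim 7 reduce to simple arithmetic on the detector's bounding box. The sketch below assumes the common `(x_min, y_min, x_max, y_max)` pixel convention with the image y-axis growing downward; the choice of corner names for the "preset calibration point" is illustrative.

```python
def bbox_center(x_min, y_min, x_max, y_max):
    """First branch: the center point of the detection bounding
    box, used as the pixel coordinates of the building."""
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

def bbox_calibration_point(x_min, y_min, x_max, y_max, corner="bottom_left"):
    """Second branch: a preset calibration point of the box,
    e.g. a bottom corner where the building meets the ground."""
    points = {
        "bottom_left": (x_min, y_max),   # image y grows downward
        "bottom_right": (x_max, y_max),
        "top_left": (x_min, y_min),
    }
    return points[corner]

center = bbox_center(100, 40, 180, 120)   # -> (140.0, 80.0)
```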
8. The method according to claim 1, wherein after the step of generating coordinate information of the illegal building in the real scene based on the pixel coordinates and the three-dimensional space coordinates, the method further comprises:
outputting the coordinate information to an image detection interface, and, in response to an operation instruction fed back by the image detection interface, performing at least one of the following actions:
generating alarm prompt coordinates for the illegal buildings in the area to be detected according to the coordinate information; or
regenerating a patrol task, controlling the real unmanned aerial vehicle to re-photograph images of the area to be detected based on the newly generated patrol task, and generating a new three-dimensional city model from the re-photographed images; or
returning to the steps of determining the corresponding illegal building in the virtual image based on the preset detection model, determining the pixel coordinates of the illegal building, and acquiring the three-dimensional space coordinates of the virtual unmanned aerial vehicle at the moment the virtual image was captured.
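The core geometric step the claims rely on, turning pixel coordinates plus the virtual drone's three-dimensional position into a real-scene coordinate, can be sketched for the simplest case: a nadir-looking (straight-down) pinhole camera and a flat ground plane at z = 0. The focal lengths and principal point below are assumed example intrinsics; a real flight would also need the camera's orientation and the terrain model.

```python
def pixel_to_ground(u, v, cam_xyz, fx, fy, cx, cy):
    """Back-project pixel (u, v) onto the ground plane z = 0 for a
    nadir-looking camera at cam_xyz = (X, Y, Z). A simplified
    stand-in for combining the detection's pixel coordinates with
    the drone's 3D pose to get real-scene coordinates."""
    X, Y, Z = cam_xyz
    # Similar triangles: a pixel offset (u - cx) at focal length fx
    # maps to a ground offset of Z * (u - cx) / fx at height Z.
    gx = X + Z * (u - cx) / fx
    gy = Y + Z * (v - cy) / fy
    return (gx, gy, 0.0)

# Drone hovering at 100 m above (10, 20), assumed 1920x1080 intrinsics.
coord = pixel_to_ground(640, 360, cam_xyz=(10.0, 20.0, 100.0),
                        fx=1000.0, fy=1000.0, cx=960.0, cy=540.0)
```

A pixel left of and above the principal point lands at a ground position offset in the negative x and y image directions, scaled by altitude over focal length.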
9. A device for detecting an illegal building, characterized in that the device comprises: a memory, a processor, and a detection program for an illegal building that is stored in the memory and executable on the processor, wherein the detection program, when executed by the processor, implements the steps of the method for detecting an illegal building according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that a detection program for an illegal building is stored on the computer-readable storage medium, and the detection program, when executed by a processor, implements the steps of the method for detecting an illegal building according to any one of claims 1 to 8.
CN202311674103.1A 2023-12-06 2023-12-06 Method for detecting illegal building, device for detecting illegal building and storage medium Pending CN117522987A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311674103.1A CN117522987A (en) 2023-12-06 2023-12-06 Method for detecting illegal building, device for detecting illegal building and storage medium

Publications (1)

Publication Number Publication Date
CN117522987A true CN117522987A (en) 2024-02-06

Family

ID=89759193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311674103.1A Pending CN117522987A (en) 2023-12-06 2023-12-06 Method for detecting illegal building, device for detecting illegal building and storage medium

Country Status (1)

Country Link
CN (1) CN117522987A (en)

Similar Documents

Publication Publication Date Title
CN109002055B (en) High-precision automatic inspection method and system based on unmanned aerial vehicle
CN108205797A (en) A kind of panoramic video fusion method and device
CN112912920A (en) Point cloud data conversion method and system for 2D convolutional neural network
CN110799989A (en) Obstacle detection method, equipment, movable platform and storage medium
CN111091739B (en) Automatic driving scene generation method and device and storage medium
CN110136058B (en) Drawing construction method based on overlook spliced drawing and vehicle-mounted terminal
Ruf et al. Real-time on-board obstacle avoidance for UAVs based on embedded stereo vision
CN113899360B (en) Generation and precision evaluation method and device for port automatic driving high-precision map
CN110793548A (en) Navigation simulation test system based on virtual-real combination of GNSS receiver hardware in loop
CN114556445A (en) Object recognition method, device, movable platform and storage medium
CN111488783B (en) Method and device for detecting pseudo 3D boundary box based on CNN
CN108803659A (en) The heuristic three-dimensional path planing method of multiwindow based on magic square model
CN111460866B (en) Lane line detection and driving control method and device and electronic equipment
CN114937177A (en) Automatic marking and detection model training and target recognition method and electronic equipment
CN109635639B (en) Method, device, equipment and storage medium for detecting position of traffic sign
CN111401190A (en) Vehicle detection method, device, computer equipment and storage medium
CN113223064A (en) Method and device for estimating scale of visual inertial odometer
CN111476062A (en) Lane line detection method and device, electronic equipment and driving system
CN113378605B (en) Multi-source information fusion method and device, electronic equipment and storage medium
CN117522987A (en) Method for detecting illegal building, device for detecting illegal building and storage medium
CN115077563A (en) Vehicle positioning accuracy evaluation method and device and electronic equipment
Bai et al. Cyber mobility mirror for enabling cooperative driving automation: A co-simulation platform
CN117274526A (en) Neural network model training method and image generating method
CN114529585A (en) Mobile equipment autonomous positioning method based on depth vision and inertial measurement
CN115249407A (en) Indicating lamp state identification method and device, electronic equipment, storage medium and product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination