CN115311589B - Hidden danger processing method and equipment for lighting building - Google Patents

Hidden danger processing method and equipment for lighting building

Info

Publication number
CN115311589B
Authority
CN
China
Prior art keywords
image
shelter
lighting
position information
lighting building
Prior art date
Legal status
Active
Application number
CN202211243569.1A
Other languages
Chinese (zh)
Other versions
CN115311589A (en)
Inventor
彭泓越
Current Assignee
Shandong Qianyuan Zefu Technology Co ltd
Original Assignee
Shandong Qianyuan Zefu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shandong Qianyuan Zefu Technology Co ltd filed Critical Shandong Qianyuan Zefu Technology Co ltd
Priority to CN202211243569.1A priority Critical patent/CN115311589B/en
Publication of CN115311589A publication Critical patent/CN115311589A/en
Application granted granted Critical
Publication of CN115311589B publication Critical patent/CN115311589B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/06Systems determining position data of a target
    • G01S13/08Systems for measuring distance only
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/176Urban or other man-made structures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the application disclose a hidden danger processing method and device for a lighting building, belonging to the technical field of distance and direction measurement. The lighting building is monitored by a first image monitoring device arranged on it to obtain a lighting building image; the image is detected by a preset obstruction detection model to determine the actual position information of any obstruction on the lighting building; when a shadow occlusion image exists in the lighting building image, a first ranging instruction and a second ranging instruction are sent to a radar ranging sensor arranged on the lighting building to obtain the distance and bearing of the obstruction; based on that distance and bearing, the shooting angles of a plurality of second image monitoring devices are adjusted to obtain a plurality of obstruction images, from which the category information of the obstruction is determined; and a cleaning instruction is sent to the corresponding obstruction processing device so that the obstruction is handled accordingly.

Description

Hidden danger processing method and equipment for lighting building
Technical Field
The application relates to the technical field of distance and azimuth measurement, in particular to a hidden danger processing method and equipment for a lighting building.
Background
Lighting structures such as daylighting glass and solar panels have specific requirements on sunlight; only under sufficient illumination can they meet their normal operating requirements.
During use, however, the lighting process is easily blocked. Objects such as paper and dust falling onto the lighting building block it directly, while objects outside the building, for example a large garbage bag caught on overhead wires or birds perching for a long time, do not rest on the lighting building itself but may still cast shade on it and reduce its lighting efficiency. In either case the lighting process is affected and a hidden danger to the lighting building arises.
In the prior art, hidden dangers of lighting buildings are usually monitored and handled manually, for example by cleaning the lighting building at regular intervals, which requires considerable labor cost. For obstructions staying outside the lighting building, an unmanned aerial vehicle is generally used to take images, and the corresponding obstructions are identified from those images and then cleared. However, the endurance of an unmanned aerial vehicle is limited: after hovering and flying for a period of time it must be recharged, so obstructions cannot be monitored in real time; when the lighting building is seriously shaded it is difficult to clear the obstructions promptly, and frequent charging also wastes resources. Moreover, the shooting angle of the unmanned aerial vehicle is limited, and since the exact positions of the obstructions are unknown, some of several obstructions may be missed during shooting, making it difficult to clear them thoroughly.
Disclosure of Invention
The embodiments of the application provide a hidden danger processing method and device for a lighting building, which are used to solve the following technical problems: manually cleaning the lighting building at regular intervals not only incurs a high labor cost, but also makes it difficult to clear obstructions in time when the lighting building is seriously shaded.
The embodiment of the application adopts the following technical scheme:
The embodiment of the application provides a hidden danger processing method for a lighting building. The method comprises: monitoring the lighting building in real time through a first image monitoring device arranged on the lighting building to obtain a lighting building image; analyzing the lighting building image through a preset obstruction detection model and, when it is determined that an obstruction exists on the lighting building, determining the actual position information of the obstruction based on a preset coordinate system corresponding to the lighting building; when a shadow occlusion image exists in the lighting building image, sending a first ranging instruction and a second ranging instruction to a radar ranging sensor arranged on the lighting building so as to obtain the distance and bearing of the obstruction through the radar ranging sensor, the measuring distance corresponding to the first ranging instruction being smaller than the measuring distance corresponding to the second ranging instruction; determining, based on the distance and bearing of the obstruction, a plurality of second image monitoring devices whose distance to the obstruction is smaller than a preset distance, adjusting the shooting angles of the plurality of second image monitoring devices to obtain a plurality of items of obstruction image information, and determining the category information of the obstruction based on that image information; and sending a cleaning instruction to the corresponding obstruction processing device based on the actual position information of the obstruction, or on the distance, bearing and category information of the obstruction, so that the obstruction is handled accordingly.
By monitoring the lighting building in real time, the embodiment of the application can obtain the occlusion state of the lighting building promptly, so that obstructions can be cleared in time and their influence on the lighting of the building is reduced. Secondly, by sending a first ranging instruction and a second ranging instruction to the radar ranging sensor arranged on the lighting building, the obstruction can be measured several times over different ranging distances, so that its exact position can still be determined even if the obstruction moves. The radar ranging sensor can also monitor obstructions outside the lighting building in real time; compared with an unmanned aerial vehicle circling above the building, it covers a wider range and requires no frequent charging, which saves resources. In addition, by photographing the obstruction from different angles with a plurality of second image monitoring devices, the accuracy of obstruction category recognition is improved, and obstructions of different categories can be cleared in different ways. The cleaning device is activated only after the specific category and position of the obstruction have been determined, so it does not need to run continuously and its operating cost is reduced.
In an implementation manner of the application, sending the first ranging instruction and the second ranging instruction to the radar ranging sensor arranged on the lighting building so as to obtain the distance and bearing of the obstruction through the radar ranging sensor specifically comprises: sending the first ranging instruction to the radar ranging sensor so that it transmits a first ranging signal; receiving the reflected signal corresponding to the first ranging signal and determining first reference obstruction position information from it; comparing the first reference obstruction position information with first preset obstruction position information to determine first actual obstruction position information from the positions that differ, the first preset obstruction position information relating to the buildings within a first preset distance around the lighting building; sending the second ranging instruction to the radar ranging sensor so that it transmits a second ranging signal; receiving the reflected signal corresponding to the second ranging signal and determining second reference obstruction position information from it; comparing the second reference obstruction position information with second preset obstruction position information to determine second actual obstruction position information from the positions that differ, the second preset obstruction position information relating to the buildings within a second preset distance around the lighting building; and determining the actual position information of the current obstruction based on the first actual obstruction position information and the second actual obstruction position information.
In an implementation manner of the application, determining the actual position information of the current obstruction based on the first actual obstruction position information and the second actual obstruction position information specifically comprises: comparing the first actual obstruction position information with the second actual obstruction position information, and taking the positions common to both as first reference actual position information; taking the remaining positions in the second actual obstruction position information as second reference actual position information; and obtaining the distance and bearing of the obstruction based on the first reference actual position information and the second reference actual position information.
In an implementation manner of the application, detecting the lighting building image through the preset obstruction detection model specifically comprises: inputting the lighting building image into the preset obstruction detection model to obtain the obstruction image information on the lighting building, the obstruction image information comprising at least one of obstruction coordinate information and obstruction category information; and determining the actual position information of the obstruction based on the preset coordinate system corresponding to the lighting building and the obstruction coordinate information in the lighting building image.
In an implementation manner of the application, determining the actual position information of the obstruction based on the preset coordinate system corresponding to the lighting building and the obstruction coordinate information in the lighting building image specifically comprises: performing key-point matching between the lighting building image and a preset lighting building template image to determine the proportional relationship between them, the preset lighting building template image being an image taken when no obstruction exists on the lighting building and the key points being the center point and several vertices corresponding to the lighting building image; determining the set of coordinate points corresponding to the obstruction in the lighting building image, and determining the relative positional relationship between each coordinate point in the set and the key points; and determining the actual position information of the obstruction on the lighting building based on the relative positional relationship and the proportional relationship.
In an implementation manner of the application, after detecting the lighting building image through the preset obstruction detection model, the method further comprises: when no obstruction exists on the lighting building, inputting the lighting building image into a preset shadow detection model to obtain the shadow occlusion image corresponding to the lighting building, the shadow occlusion image being either a full occlusion image or a partial occlusion image of the lighting building; when the shadow occlusion image is a full occlusion image of the lighting building, acquiring current weather information and comparing it with a historical weather record table to determine the influence of the current weather on the lighting building, the historical weather record table containing historical weather information and the corresponding influence on the lighting building; when the current weather has no influence on the lighting building, or the shadow occlusion image shows only partial occlusion, timing the occlusion time of the lighting building; and when the timed duration reaches a preset duration and the lighting building is still shaded, sending a measuring instruction to the radar ranging sensor to measure the distance and bearing of the obstruction.
In an implementation manner of the application, determining, based on the distance and bearing of the obstruction, a plurality of second image monitoring devices whose distance to the obstruction is smaller than a preset distance, adjusting the shooting angles of the plurality of second image monitoring devices to obtain a plurality of items of obstruction image information, and determining the category information of the obstruction based on the plurality of items of obstruction image information specifically comprises: determining a first coordinate of the obstruction in a three-dimensional map based on the distance and bearing; determining, based on the first coordinate, the plurality of second image monitoring devices whose distance to the obstruction is smaller than the preset distance; adjusting the shooting angles of the plurality of second image monitoring devices according to the first coordinate and the second coordinates corresponding to the second image monitoring devices, so that each of them photographs the obstruction from its adjusted angle; obtaining the obstruction images uploaded by the plurality of second image monitoring devices; and performing three-dimensional reconstruction based on the plurality of obstruction images to obtain a three-dimensional reconstruction image of the obstruction, so as to obtain the category information of the obstruction based on the three-dimensional reconstruction image.
In an implementation manner of the application, performing three-dimensional reconstruction based on the plurality of obstruction images to obtain the three-dimensional reconstruction image of the obstruction, and obtaining the category information of the obstruction based on the three-dimensional reconstruction image, specifically comprises: performing two-dimensional contour extraction on each obstruction image with an edge detection algorithm; generating the three-dimensional reconstruction image of the obstruction from the extracted two-dimensional contour images with a multi-view stereo vision algorithm; inputting the plurality of obstruction images, the two-dimensional contour images and the three-dimensional reconstruction image into the obstruction detection model, which outputs the reference category information corresponding to each image and the similarity between the image and its reference category; grouping the input images by reference category and determining the number of images in each group; determining the average similarity of each group based on the number of images in the group and the similarity of each image to the reference category; and taking the reference category with the highest average similarity as the category information of the obstruction.
In an implementation manner of the application, sending a cleaning instruction to the corresponding obstruction processing device based on the actual position information of the obstruction, or on the distance, bearing and category information of the obstruction, so that the obstruction is handled accordingly, specifically comprises: determining the corresponding obstruction processing device based on the lighting building image or on the category information of the obstruction, the obstruction processing device comprising at least an unmanned aerial vehicle and a laser emitting device; when the obstruction processing device is the unmanned aerial vehicle, handling the obstruction on the lighting building through the unmanned aerial vehicle based on the actual position information of the obstruction; and when the obstruction processing device is the laser emitting device, positioning the laser emitting device based on the distance and bearing of the obstruction so that it emits laser light to handle the obstruction.
The embodiment of the application provides a hidden danger processing device for a lighting building, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to: monitor the lighting building in real time through a first image monitoring device arranged on the lighting building to obtain a lighting building image; analyze the lighting building image through a preset obstruction detection model and, when it is determined that an obstruction exists on the lighting building, determine the actual position information of the obstruction based on a preset coordinate system corresponding to the lighting building; when a shadow occlusion image exists in the lighting building image, send a first ranging instruction and a second ranging instruction to a radar ranging sensor arranged on the lighting building so as to obtain the distance and bearing of the obstruction through the radar ranging sensor, the measuring distance corresponding to the first ranging instruction being smaller than the measuring distance corresponding to the second ranging instruction; determine, based on the distance and bearing of the obstruction, a plurality of second image monitoring devices whose distance to the obstruction is smaller than a preset distance, adjust the shooting angles of the plurality of second image monitoring devices to obtain a plurality of items of obstruction image information, and determine the category information of the obstruction based on that image information; and send a cleaning instruction to the corresponding obstruction processing device based on the actual position information of the obstruction, or on the distance, bearing and category information of the obstruction, so that the obstruction is handled accordingly.
The technical solutions adopted by the embodiments of the application can achieve at least the following beneficial effects: by monitoring the lighting building in real time, the occlusion state of the lighting building can be obtained promptly, so that obstructions can be cleared in time and their influence on the lighting of the building is reduced. Secondly, by sending a first ranging instruction and a second ranging instruction to the radar ranging sensor arranged on the lighting building, the obstruction can be measured several times over different ranging distances, so that its exact position can still be determined even if the obstruction moves. In addition, a plurality of second image monitoring devices photograph the obstruction from different angles, which improves the accuracy of obstruction category recognition and allows obstructions of different categories to be cleared in different ways.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments described in the present application, and that those skilled in the art can obtain other drawings from them without any creative effort. In the drawings:
fig. 1 is a flowchart of a hidden danger processing method for a lighting building according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a hidden danger processing apparatus for a lighting building according to an embodiment of the present application;
Description of reference numerals:
200: hidden danger processing device for a lighting building; 201: processor; 202: memory.
Detailed Description
The embodiment of the application provides a hidden danger processing method and equipment for a lighting building.
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any inventive step based on the embodiments of the present disclosure, shall fall within the scope of protection of the present application.
During use of the lighting building, its lighting process is easily blocked, for example by perching birds or by items such as paper and dust covering the building. The lighting process is then affected and a hidden danger to the lighting building arises.
In the prior art, hidden dangers of a lighting building are usually monitored and handled manually, for example by cleaning the lighting building at regular intervals; this approach has a high labor cost, and when the lighting building is seriously shaded it is difficult to clear the obstruction in time.
In order to solve the above problems, embodiments of the present application provide a hidden danger processing method and device for a lighting building. By monitoring and photographing the lighting building in real time, the occlusion state of the lighting building can be obtained promptly, so that obstructions can be cleared in time and their influence on lighting is reduced. Secondly, by sending a first ranging instruction and a second ranging instruction to the radar ranging sensor arranged on the lighting building, the obstruction can be measured several times over different ranging distances, so that its exact position can still be determined even if the obstruction moves. In addition, a plurality of second image monitoring devices photograph the obstruction from different angles, which improves the accuracy of obstruction category recognition and allows obstructions of different categories to be cleared in different ways.
The technical solutions proposed in the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a hidden danger processing method for a lighting building according to an embodiment of the present disclosure. As shown in fig. 1, the hidden danger processing method for lighting buildings comprises the following steps:
Step 101, monitoring the lighting building in real time through a first image monitoring device arranged on the lighting building to obtain a lighting building image.
In one embodiment of the application, the lighting building has certain illumination requirements; if it is shaded, its operation is affected to some extent and a hidden danger arises. The lighting building therefore needs to be monitored in real time, and when it is found to be shaded the obstruction should be cleared promptly so that insufficient illumination does not affect the building.
Specifically, in the embodiment of the application a first image monitoring device is arranged on the lighting building with its camera facing the building. It monitors and photographs the lighting building in real time and uploads the captured lighting building images to a server, where the uploaded images are analyzed.
Step 102, analyzing the lighting building image through the preset obstruction detection model and, when it is determined that an obstruction exists on the lighting building, determining the actual position information of the obstruction based on the preset coordinate system corresponding to the lighting building.
In one embodiment of the application, the lighting building image is input into the preset obstruction detection model to obtain the obstruction image information on the lighting building, where the obstruction image information includes the category information of the obstruction. The coordinate information of the obstruction in the lighting building image is determined based on the coordinate system corresponding to the image, and the actual position information of the obstruction is then determined based on the preset coordinate system corresponding to the lighting building and the obstruction coordinate information in the image.
Specifically, a preset obstruction detection model is provided in the embodiment of the application and is trained as follows: lighting building image samples are taken as input, image samples with the obstructions on the lighting building annotated are taken as output, and a neural network model is trained to obtain the preset obstruction detection model. The lighting building image samples include images of the lighting building with and without obstructions, and the annotated samples carry the category information corresponding to the different obstructions. The category information may include garbage obstructions such as paper sheets and plastic bags, bird and droppings obstructions, dust obstructions, and other categories.
Furthermore, the lighting building picture currently captured by the first image monitoring device is input into the preset obstruction detection model, which labels the obstructions in the picture together with their category information. After the obstructions in the image are labelled, a coordinate system is established with the straight line of the left edge of the lighting building image as the y axis, the straight line of the bottom edge as the x axis, and the intersection of the left and bottom edges as the origin, and the coordinate information of the labelled obstructions is determined in this coordinate system.
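As a minimal illustration of this step, the sketch below runs a trained detection model on the captured image and converts its pixel boxes into the coordinate system described above (origin at the bottom-left corner). The model interface (`model.predict`), the field names and the box format are assumptions for illustration, not the concrete implementation of the application.

```python
# Illustrative sketch only: model interface and field names are assumed.
import numpy as np

def detect_obstructions(model, building_image: np.ndarray):
    """Run the pre-trained obstruction detection model on a lighting-building
    image and return labelled detections in the image coordinate system whose
    x axis lies along the bottom edge and y axis along the left edge."""
    height = building_image.shape[0]
    detections = model.predict(building_image)   # assumed API of the trained model
    results = []
    for det in detections:                       # each det: pixel box + category
        x0, y0, x1, y1 = det["box"]              # pixel coords, origin at top-left
        # Convert to the coordinate system with origin at the bottom-left corner.
        results.append({
            "category": det["category"],         # e.g. "paper", "plastic bag"
            "corners": [(x0, height - y1), (x1, height - y0)],
        })
    return results
```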
In an embodiment of the application, key-point matching is performed between the lighting building image and a preset lighting building template image to determine the proportional relationship between them, where the preset lighting building template image is an image taken when no obstruction exists on the lighting building and the key points are the center point and several vertices corresponding to the lighting building image. The set of coordinate points corresponding to the obstruction in the lighting building image is determined, the relative positional relationship between each coordinate point in the set and the key points is determined, and the actual position information of the obstruction on the lighting building is determined based on the relative positional relationship and the proportional relationship.
Specifically, an image taken when no obstruction exists on the lighting building, i.e. with no dust, paper or other obstruction on its surface, is used as the preset lighting building template image. The center point and each vertex of the preset lighting building template are taken as first key points, and the center point and each vertex of the lighting building image currently to be detected are taken as second key points. The first key points of the template are mapped into the coordinate system of the lighting building image and their coordinates in that system are determined, and the proportional relationship between the lighting building image and the template image is determined from the coordinates of the first and second key points. For example, if the second key points of the lighting building image are (0, 0), (1, 0), (0, 1), (1, 1) and (0.5, 0.5), and the first key points of the preset lighting building template are (0, 0), (2, 0), (0, 2), (2, 2) and (1, 1), then the proportional relationship between the lighting building image and the template image is 1:2.
Furthermore, edge detection is performed on the lighting building image to obtain the obstruction image within it. The set of coordinate points corresponding to the obstruction image is determined based on the coordinate system of the lighting building image, and the relative positional relationship between each coordinate point in the set and the second key points is determined from their coordinates. A sketch of the resulting coordinate mapping is given after the next paragraph.
Further, based on the relative positional relationship between each coordinate point in the set and the second key points, and the proportional relationship between the lighting building image and the template image, the position of the obstruction image after being scaled by the corresponding proportion within the template is determined. If the template image has the same actual size as the lighting building, the actual position information of the obstruction on the lighting building can be determined directly; otherwise it is determined based on the proportional relationship between the template size and the actual size of the lighting building.
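The following sketch illustrates the key-point based mapping under stated assumptions: the image-to-template ratio of 1:2 comes from the example above, while the helper name and the template-to-actual ratio are hypothetical.

```python
# Minimal sketch of the proportional mapping described above; helper names
# and the template_to_actual_ratio value are assumptions.

def image_to_building_position(point_xy, image_center, template_center,
                               image_to_template_ratio, template_to_actual_ratio):
    """Map an obstruction coordinate from the building image onto the actual
    lighting building, via the preset template image."""
    # Relative position of the point with respect to the image key point (centre).
    rel_x = point_xy[0] - image_center[0]
    rel_y = point_xy[1] - image_center[1]
    # Scale into the template image, then into real-world building coordinates.
    tpl_x = template_center[0] + rel_x * image_to_template_ratio
    tpl_y = template_center[1] + rel_y * image_to_template_ratio
    return (tpl_x * template_to_actual_ratio, tpl_y * template_to_actual_ratio)

# Example from the description: image centre (0.5, 0.5), template centre (1, 1),
# ratio 1:2, template assumed to match the building's actual size.
actual = image_to_building_position((0.75, 0.25), (0.5, 0.5), (1.0, 1.0),
                                    image_to_template_ratio=2.0,
                                    template_to_actual_ratio=1.0)
```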
Further, the actual position information of the obstruction on the lighting building can be sent to a preset unmanned aerial vehicle, so that the unmanned aerial vehicle flies to the obstruction position and clears it.
Step 103, when a shadow occlusion image exists in the lighting building image, sending a first ranging instruction and a second ranging instruction to the radar ranging sensor arranged on the lighting building so as to obtain the distance and bearing of the obstruction through the radar ranging sensor, where the measuring distance corresponding to the first ranging instruction is smaller than the measuring distance corresponding to the second ranging instruction.
In one embodiment of the application, when no obstruction exists on the lighting building itself, the lighting building image is input into a preset shadow detection model to obtain the shadow occlusion image corresponding to the lighting building, where the shadow occlusion image is either a full occlusion image or a partial occlusion image of the lighting building. When the shadow occlusion image is a full occlusion image, the current weather information is acquired and compared with a historical weather record table to determine the influence of the current weather on the lighting building; the historical weather record table contains historical weather information and the corresponding influence on the lighting building. When the current weather has no influence on the lighting building, or the shadow occlusion image shows only partial occlusion, the occlusion time of the lighting building is timed. When the timed duration reaches a preset duration and the lighting building is still shaded, a measuring instruction is sent to the radar ranging sensor to measure the distance and bearing of the obstruction.
Specifically, if the preset obstruction detection model finds no obstruction on the lighting building itself, it is necessary to check whether an obstruction elsewhere, outside the lighting building, is affecting it. If an obstruction at another location shades the lighting building, an occlusion shadow appears on the building. The acquired lighting building image is therefore input into the preset shadow detection model, which outputs the corresponding shadow-labelled image. The preset shadow detection model is trained in advance as follows: shadowed lighting building images are taken as input samples, lighting building images with the shadowed parts labelled are taken as output samples, and a neural network model is trained to obtain the preset shadow detection model. The shaded cases include partial shadow, i.e. only part of the lighting building is occluded, and complete occlusion of the lighting building.
Further, if the labelled shadow occupies the entire lighting building image, the weather may be the cause. The current weather condition is therefore acquired and compared with the historical weather record to determine whether it affects the lighting building. For example, if it is currently raining, the historical record shows that rainy weather affects the lighting building, so a fully shaded lighting building image is a normal situation. If the weather is sunny but the collected images are still fully shaded, an obstruction somewhere outside the lighting building is affecting it, and to keep the lighting building working normally the obstruction at that other position must be cleared.
Further, when the weather has no influence on the lighting building, or the shadow occlusion image shows only partial occlusion, the occlusion time of the lighting building is timed. For example, after a timed period of 30 s the lighting building is photographed again; if it is still shaded, the obstruction needs to be removed, and the distance and bearing of the obstruction can then be measured by the radar ranging sensor.
By timing the occlusion of the lighting building, the embodiment of the application can determine whether the current obstruction really affects the building. For example, some birds only perch briefly and affect the lighting building only temporarily, so there is no need to drive them away. If the obstruction still exists when the timed period ends, it is cleared.
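A hedged sketch of this decision flow is given below. The weather lookup table, the 30 s timer taken from the example, and the sensor and camera call names are all assumptions for illustration.

```python
# Illustrative sketch of the shadow-handling flow in step 103; names and the
# timer value are assumptions.
import time

def handle_shadow(shadow_type, current_weather, weather_record,
                  capture_image, is_still_shaded, radar_sensor,
                  wait_seconds=30):
    if shadow_type == "full":
        # Full occlusion may simply be bad weather; consult the historical record.
        if weather_record.get(current_weather) == "affects lighting":
            return None  # normal situation, e.g. rain; nothing to clear
    # Partial occlusion, or full occlusion not explained by the weather:
    # time the occlusion before reacting, so briefly perching birds are ignored.
    time.sleep(wait_seconds)
    if is_still_shaded(capture_image()):
        return radar_sensor.measure()  # distance and bearing of the obstruction
    return None
```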
In one embodiment of the application, a first ranging instruction is sent to the radar ranging sensor so that it transmits a first ranging signal. The reflected signal corresponding to the first ranging signal is received, and the first reference obstruction position information is determined from it. The first reference obstruction position information is compared with first preset obstruction position information, and the first actual obstruction position information is determined from the positions that differ, where the first preset obstruction position information relates to the buildings within a first preset distance around the lighting building. A second ranging instruction is then sent to the radar ranging sensor so that it transmits a second ranging signal; the corresponding reflected signal is received and the second reference obstruction position information is determined from it. The second reference obstruction position information is compared with second preset obstruction position information, and the second actual obstruction position information is determined from the positions that differ, where the second preset obstruction position information relates to the buildings within a second preset distance around the lighting building. The actual position information of the current obstruction is then determined based on the first actual obstruction position information and the second actual obstruction position information.
Specifically, if the lighting building is still shaded when the timed period ends, a first ranging instruction is sent to the radar ranging sensor arranged on the lighting building, which then transmits a first ranging signal to its surroundings to measure obstruction information within the first range. The first reference obstruction position information is obtained from the reflected signal corresponding to the first ranging signal and compared with the first preset obstruction position information, and the first actual obstruction position information is determined from the positions that differ. For example, if the first preset obstruction position information contains two positions, A and B, which are other buildings within the first preset distance of the lighting building, and the received reflected signal additionally contains a position C, then C is a newly appeared object that may affect the lighting building, and C is taken as the first actual obstruction position information.
Further, a second ranging instruction is sent to the radar ranging sensor arranged on the lighting building, which then transmits a second ranging signal to its surroundings to measure obstruction information within the second range, and the second reference obstruction position information is obtained from the corresponding reflected signal. The measuring distance of the second ranging signal is greater than that of the first ranging signal, so when the buildings within the second preset range are measured, the information within the first range is measured again; the obstruction information measured by the first ranging signal is thus checked against the second ranging signal, which ensures the accuracy of the obstruction position.
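The sketch below illustrates the comparison step used in both ranging passes: any detected position that does not match a known surrounding building within the corresponding preset distance is kept as an actual obstruction. The position format (x, y tuples) and the matching tolerance are assumptions.

```python
# Sketch of comparing reference detections with the preset obstruction
# (surrounding-building) position information; tuple format and tolerance
# are assumptions.

def actual_obstructions(reference_positions, preset_building_positions, tol=0.5):
    """Return detected positions that do not match any known surrounding building."""
    def matches_known(pos):
        return any(abs(pos[0] - b[0]) <= tol and abs(pos[1] - b[1]) <= tol
                   for b in preset_building_positions)
    return [pos for pos in reference_positions if not matches_known(pos)]

# e.g. preset positions {A, B}, detections {A, B, C}  ->  actual obstruction {C}
```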
In one embodiment of the application, the first actual obstruction position information is compared with the second actual obstruction position information, and the positions common to both are taken as first reference actual position information. The remaining positions in the second actual obstruction position information are taken as second reference actual position information. The distance and bearing of the obstruction are then obtained based on the first reference actual position information and the second reference actual position information.
Specifically, the second actual obstruction position information includes the first actual obstruction position information; if both sets contain the position of the same point, the obstruction at that point has existed for a longer time, and the position is taken as first reference actual position information. The remaining positions in the second actual obstruction position information are taken as second reference actual position information, and the distance and bearing of the obstruction are obtained based on the first and second reference actual position information.
It should be noted that the first reference obstruction position information obtained from the first ranging signal may contain the position of a bird that perches only briefly; if that position is absent from the second reference obstruction position information obtained from the second ranging signal, the bird has left, and the position does not need to be treated as a reference obstruction position. This reduces the number of obstruction positions to be cleared and improves cleaning efficiency.
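A minimal sketch of merging the two ranging passes follows. It assumes each position is an (x, y) offset in a sensor-centred Cartesian frame, which is an illustrative assumption; the bearing convention is likewise hypothetical.

```python
# Sketch of combining the two passes: positions seen in both are long-lived
# obstructions; the rest of the second pass are newly measured ones. Positions
# present only in the first pass (e.g. a bird that has left) are dropped.
import math

def merge_ranging_passes(first_actual, second_actual):
    first_ref = [p for p in second_actual if p in first_actual]      # in both passes
    second_ref = [p for p in second_actual if p not in first_actual]
    merged = first_ref + second_ref
    # Convert each (x, y) offset from the radar sensor into distance and bearing.
    return [(math.hypot(x, y), math.degrees(math.atan2(y, x))) for x, y in merged]
```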
Step 104, determining, based on the distance and bearing of the obstruction, a plurality of second image monitoring devices whose distance to the obstruction is smaller than a preset distance, adjusting the shooting angles of the plurality of second image monitoring devices to obtain a plurality of items of obstruction image information, and determining the category information of the obstruction based on the plurality of items of obstruction image information.
In one embodiment of the application, a first coordinate of the obstruction in a three-dimensional map is determined based on the distance and bearing. The plurality of second image monitoring devices whose distance to the obstruction is smaller than the preset distance are determined based on the first coordinate. The shooting angles of the plurality of second image monitoring devices are adjusted according to the first coordinate and the second coordinates corresponding to the second image monitoring devices, so that each of them photographs the obstruction from its adjusted angle. The obstruction images uploaded by the second image monitoring devices are obtained, and three-dimensional reconstruction is performed on them to obtain a three-dimensional reconstruction image of the obstruction, from which the category information of the obstruction is obtained.
Specifically, the first coordinate of the obstruction in the three-dimensional map is determined from its bearing and distance. A number of monitoring devices are arranged around the lighting building, and the coordinates of the second monitoring devices are marked in the three-dimensional map in advance. Taking the first coordinate of the obstruction as the center and the preset distance as the radius, the plurality of second image monitoring devices within that radius are determined.
Further, the angle of each second image monitoring device relative to the obstruction is determined from its coordinate position and the first coordinate of the obstruction, and the shooting angles of the second image monitoring devices are adjusted so that each of them can photograph the obstruction.
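The sketch below illustrates this selection and aiming step. The camera record format, the preset radius value and the pan/tilt control call are assumptions for illustration only.

```python
# Sketch of selecting nearby second image monitoring devices and pointing them
# at the obstruction; camera records and the control API are assumptions.
import math

def aim_nearby_cameras(obstruction_xyz, cameras, preset_distance):
    """cameras: list of dicts with a 3-D 'position' and a 'set_angles' control."""
    ox, oy, oz = obstruction_xyz
    for cam in cameras:
        cx, cy, cz = cam["position"]
        dx, dy, dz = ox - cx, oy - cy, oz - cz
        if math.sqrt(dx * dx + dy * dy + dz * dz) >= preset_distance:
            continue                                   # too far from the obstruction
        pan = math.degrees(math.atan2(dy, dx))         # horizontal angle
        tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # vertical angle
        cam["set_angles"](pan, tilt)                   # assumed camera control call
```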
Furthermore, the obstruction images captured by each second monitoring device are obtained. Because each second monitoring device views the obstruction from a different angle, each image corresponds to a certain face of the obstruction, and the category of the obstruction is sometimes hard to determine from a single face. The embodiment of the application therefore performs three-dimensional reconstruction of the obstruction from the plurality of obstruction images and determines its category information from the reconstructed three-dimensional image.
In one embodiment of the application, two-dimensional contour extraction is performed on each obstruction image with an edge detection algorithm. Based on the extracted two-dimensional contour images, a three-dimensional reconstruction image of the obstruction is generated with a multi-view stereo vision algorithm. The plurality of obstruction images, the two-dimensional contour images and the three-dimensional reconstruction image are input into the obstruction detection model, which outputs the reference category information corresponding to each image and the similarity between the image and its reference category. The input images are grouped by reference category and the number of images in each group is determined. The average similarity of each group is determined from the number of images in the group and the similarity of each image to the reference category, and the reference category with the highest average similarity is taken as the category information of the obstruction.
Specifically, after the obstruction images uploaded by each second monitoring device are obtained, two-dimensional contour extraction is performed on each of them with an edge detection algorithm to obtain the two-dimensional contour images of the obstruction, and a three-dimensional reconstruction image is generated from the obtained two-dimensional contour images with a multi-view stereo algorithm. In addition, the embodiment of the application takes pre-acquired two-dimensional contour sample images, three-dimensional reconstruction image samples, obstruction sample images and the image category information corresponding to each sample as training samples and builds a classification model, i.e. the obstruction detection model, which outputs the reference category information corresponding to each image and the similarity between the image and its reference category.
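As a minimal sketch of the edge-detection step, the snippet below uses OpenCV as one possible implementation; the description does not name a specific library, and the Canny thresholds are assumptions.

```python
# Contour-extraction sketch for one obstruction image; OpenCV and the
# thresholds are illustrative choices, not the patent's stated implementation.
import cv2

def extract_two_dimensional_contour(obstruction_image_bgr):
    gray = cv2.cvtColor(obstruction_image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                 # assumed thresholds
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep the largest contour as the obstruction's two-dimensional outline.
    return max(contours, key=cv2.contourArea) if contours else None
```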
Further, the acquired images that share the same reference category are placed in the same group, and the number of images in each group and the similarity corresponding to each image in the group are determined. The similarities of the images in a group are summed, and the average similarity of the group is obtained from this sum and the number of images in the group. The reference category of the group with the highest average similarity is taken as the category information of the obstruction.
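The voting rule described above (group by predicted reference category, average the similarities within each group, pick the group with the highest average) can be written in a few lines. The sketch below assumes the obstruction detection model has already produced one (reference category, similarity) pair per input image; the sample values are invented for illustration.

```python
from collections import defaultdict

# Assumed model outputs: (reference category, similarity) for each input image.
predictions = [
    ("bird", 0.91), ("bird", 0.88), ("plastic_bag", 0.95),
    ("bird", 0.90), ("plastic_bag", 0.64),
]

# Group similarities by reference category.
groups = defaultdict(list)
for category, similarity in predictions:
    groups[category].append(similarity)

# Average similarity per group = sum of similarities / number of images in the group.
average = {category: sum(sims) / len(sims) for category, sims in groups.items()}

# The reference category with the highest average similarity is the obstruction's category.
obstruction_category = max(average, key=average.get)
print(average, obstruction_category)   # "bird" wins: about 0.897 vs 0.795
```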
Step 105: a cleaning instruction is sent to the corresponding obstruction processing device based on the actual position information of the obstruction, or based on the distance, the orientation and the category information corresponding to the obstruction, so that the obstruction is processed accordingly.
In one embodiment of the application, the corresponding obstruction processing device is determined based on the lighting building image or the category information of the obstruction, wherein the obstruction processing device at least comprises an unmanned aerial vehicle and a laser emitting device. In the case that the obstruction processing device is the unmanned aerial vehicle, the obstruction on the lighting building is processed by the unmanned aerial vehicle based on the actual position information of the obstruction. In the case that the obstruction processing device is the laser emitting device, the laser emitting device is positioned based on the distance and the orientation corresponding to the obstruction, so that it emits laser to process the obstruction.
For example, if an obstruction lying on the surface of the lighting building is identified from the lighting building image, its specific position coordinates can be sent to the unmanned aerial vehicle so that the unmanned aerial vehicle removes the obstruction. If the category information indicates that the obstruction is a bird, the actual position information of the bird can be sent to the laser emitting device to drive the bird away.
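A simple dispatcher for this step might look like the sketch below. It only illustrates the branching described in the example; the message format, device names and the set of categories are assumptions, not part of the application.

```python
def dispatch_cleaning_instruction(category, actual_position=None, distance=None, orientation=None):
    """Route the cleaning instruction to the drone or the laser emitting device.

    Returns a dictionary standing in for the instruction that would be sent.
    """
    if category == "bird":
        # Birds are driven away by the laser emitting device, aimed by distance and orientation.
        return {"device": "laser_emitter", "distance": distance, "orientation": orientation}
    # Obstructions resting on the building surface are removed by the unmanned aerial vehicle.
    return {"device": "uav", "target_position": actual_position}

# Example calls with assumed values.
print(dispatch_cleaning_instruction("plastic_bag", actual_position=(12.5, 3.2, 18.0)))
print(dispatch_cleaning_instruction("bird", distance=42.0, orientation=135.0))
```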
Fig. 2 is a schematic structural diagram of a hidden danger processing device for a lighting building according to an embodiment of the present application. As shown in Fig. 2, the hidden danger processing device 200 for a lighting building includes: at least one processor 201; and a memory 202 communicatively coupled to the at least one processor 201; wherein the memory 202 stores instructions executable by the at least one processor 201 to cause the at least one processor 201 to: monitor the lighting building in real time through a first image monitoring device arranged on the lighting building to obtain a lighting building image; detect the lighting building image through a preset obstruction detection model, and, in the case that it is determined that an obstruction exists on the lighting building, determine actual position information of the obstruction based on a preset coordinate system corresponding to the lighting building; in the case that a shadow occlusion image exists in the lighting building image, send a first ranging instruction and a second ranging instruction to a radar ranging sensor arranged on the lighting building, so as to obtain, through the radar ranging sensor, the distance and the orientation corresponding to the obstruction, wherein the measurement range corresponding to the first ranging instruction is smaller than the measurement range corresponding to the second ranging instruction; determine, based on the distance and the orientation corresponding to the obstruction, a plurality of second image monitoring devices whose distance to the obstruction is smaller than a preset distance, adjust the shooting angles of the plurality of second image monitoring devices to obtain a plurality of pieces of obstruction image information, and determine category information of the obstruction based on the plurality of pieces of obstruction image information; and send a cleaning instruction to a corresponding obstruction processing device based on the actual position information of the obstruction, or based on the distance, the orientation and the category information corresponding to the obstruction, so as to process the obstruction accordingly.
The embodiments in the present application are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the apparatus, the device, and the nonvolatile computer storage medium, since they are substantially similar to the embodiments of the method, the description is simple, and for the relevant points, reference may be made to the partial description of the embodiments of the method.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art to which the embodiments of the present application pertain. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the embodiments of the present application shall be included in the scope of the claims of the present application.

Claims (7)

1. A hidden danger processing method for a lighting building, characterized by comprising the following steps:
monitoring the lighting building in real time through a first image monitoring device arranged on the lighting building, so as to obtain a lighting building image;
detecting the lighting building image through a preset obstruction detection model, and, in the case that it is determined that an obstruction exists on the lighting building, determining actual position information of the obstruction based on a preset coordinate system corresponding to the lighting building;
in the case that a shadow occlusion image exists in the lighting building image, sending a first ranging instruction and a second ranging instruction to a radar ranging sensor arranged on the lighting building, so as to obtain, through the radar ranging sensor, the distance and the orientation corresponding to the obstruction; wherein the measurement range corresponding to the first ranging instruction is smaller than the measurement range corresponding to the second ranging instruction;
determining, based on the distance and the orientation corresponding to the obstruction, a plurality of second image monitoring devices whose distance to the obstruction is smaller than a preset distance, adjusting the shooting angles of the plurality of second image monitoring devices to obtain a plurality of pieces of obstruction image information, and determining category information of the obstruction based on the plurality of pieces of obstruction image information;
sending a cleaning instruction to a corresponding obstruction processing device based on the actual position information and the category information of the obstruction, or based on the distance, the orientation and the category information corresponding to the obstruction, so as to process the obstruction accordingly;
wherein sending the first ranging instruction and the second ranging instruction to the radar ranging sensor arranged on the lighting building, so as to obtain, through the radar ranging sensor, the distance and the orientation corresponding to the obstruction, specifically comprises:
sending the first ranging instruction to the radar ranging sensor so that the radar ranging sensor transmits a first ranging signal;
receiving a reflected signal corresponding to the first ranging signal, and determining first reference obstruction position information based on the reflected signal;
comparing the first reference obstruction position information with first preset obstruction position information, so as to determine first actual obstruction position information based on the position information that differs between them; wherein the first preset obstruction position information relates to building information within a first preset distance around the lighting building;
sending the second ranging instruction to the radar ranging sensor so that the radar ranging sensor transmits a second ranging signal;
receiving a reflected signal corresponding to the second ranging signal, and determining second reference obstruction position information based on the reflected signal;
comparing the second reference obstruction position information with second preset obstruction position information, so as to determine second actual obstruction position information based on the position information that differs between them; wherein the second preset obstruction position information relates to building information within a second preset distance around the lighting building;
determining actual position information of the current obstruction based on the first actual obstruction position information and the second actual obstruction position information;
wherein determining the actual position information of the current obstruction based on the first actual obstruction position information and the second actual obstruction position information specifically comprises:
comparing the first actual obstruction position information with the second actual obstruction position information, and taking the position information that is the same in both as first reference actual position information;
taking the remaining position information in the second actual obstruction position information as second reference actual position information;
obtaining the distance and the orientation corresponding to the obstruction based on the first reference actual position information and the second reference actual position information;
wherein sending the cleaning instruction to the corresponding obstruction processing device based on the actual position information and the category information of the obstruction, or based on the distance, the orientation and the category information corresponding to the obstruction, so as to process the obstruction accordingly, specifically comprises:
in the case that it is determined that an obstruction exists on the lighting building, determining the corresponding obstruction processing device based on the actual position information and the category information of the obstruction; in the case that a shadow occlusion image exists in the lighting building image, determining the corresponding obstruction processing device based on the distance, the orientation and the category information corresponding to the obstruction; wherein the obstruction processing device at least comprises an unmanned aerial vehicle and a laser emitting device;
in the case that the obstruction processing device is the unmanned aerial vehicle, processing the obstruction through the unmanned aerial vehicle based on the actual position information of the obstruction;
and in the case that the obstruction processing device is the laser emitting device, positioning the laser emitting device based on the distance and the orientation corresponding to the obstruction, so that the laser emitting device emits laser to process the obstruction.
2. The hidden danger processing method for a lighting building according to claim 1, wherein detecting the lighting building image through the preset obstruction detection model specifically comprises:
inputting the lighting building image into the preset obstruction detection model to obtain obstruction image information on the lighting building; wherein the obstruction image information is the category information of the obstruction;
determining coordinate information of the obstruction in the lighting building image based on a preset coordinate system corresponding to the lighting building image;
and determining the actual position information of the obstruction based on the preset coordinate system corresponding to the lighting building and the coordinate information of the obstruction in the lighting building image.
3. The hidden danger processing method for a lighting building according to claim 2, wherein determining the actual position information of the obstruction based on the preset coordinate system corresponding to the lighting building and the coordinate information of the obstruction in the lighting building image specifically comprises:
matching key points between the lighting building image and a preset lighting building template image to determine a proportional relation between the lighting building image and the preset lighting building template image; wherein the preset lighting building template image is an image shot when no obstruction exists on the lighting building, and the key points are a central point and a plurality of vertexes corresponding to the lighting building image;
determining a coordinate point set corresponding to the obstruction in the lighting building image, and determining a relative position relation between each coordinate point in the coordinate point set and the key points;
and determining the actual position information of the obstruction on the lighting building based on the relative position relation and the proportional relation.
4. The hidden danger processing method for a lighting building according to claim 1, wherein, after the lighting building image is detected through the preset obstruction detection model, the method further comprises:
in the case that no obstruction exists on the lighting building, inputting the lighting building image into a preset shadow detection model to obtain a shadow occlusion image corresponding to the lighting building; wherein the shadow occlusion image comprises one of a total occlusion image of the lighting building and a partial occlusion image of the lighting building;
in the case that the shadow occlusion image is a total occlusion image of the lighting building, acquiring current weather information, comparing the current weather information with a historical weather record table, and determining the influence of the current weather on the lighting building; wherein the historical weather record table comprises historical weather information and the influence on the lighting building corresponding to the historical weather information;
in the case that the current weather has no influence on the lighting building, or the shadow occlusion image is a partial occlusion image, starting to time the occlusion duration of the lighting building;
and in the case that the timed duration reaches a preset duration and the lighting building is still occluded, sending a measurement instruction to the radar ranging sensor to measure the distance and the orientation of the obstruction.
5. The hidden danger processing method for a lighting building according to claim 1, wherein determining, based on the distance and the orientation corresponding to the obstruction, the plurality of second image monitoring devices whose distance to the obstruction is smaller than the preset distance, adjusting the shooting angles of the plurality of second image monitoring devices to obtain the plurality of pieces of obstruction image information, and determining the category information of the obstruction based on the plurality of pieces of obstruction image information, specifically comprises:
determining a first coordinate corresponding to the obstruction in a three-dimensional map based on the distance and the orientation;
determining, based on the first coordinate, the plurality of second image monitoring devices whose distance to the obstruction is smaller than the preset distance;
adjusting the shooting angles of the plurality of second image monitoring devices according to the first coordinate and second coordinates respectively corresponding to the plurality of second image monitoring devices, so as to shoot the obstruction at the respective adjusted angles;
obtaining the obstruction images respectively uploaded by the plurality of second image monitoring devices;
and performing three-dimensional reconstruction based on the plurality of obstruction images to obtain a three-dimensional reconstruction image corresponding to the obstruction, and obtaining the category information of the obstruction based on the three-dimensional reconstruction image.
6. The hidden danger processing method for a lighting building according to claim 1, wherein performing the three-dimensional reconstruction based on the plurality of obstruction images to obtain the three-dimensional reconstruction image corresponding to the obstruction, and obtaining the category information of the obstruction based on the three-dimensional reconstruction image, specifically comprises:
respectively performing two-dimensional contour extraction on the obstruction images based on an edge detection algorithm;
generating, based on the extracted two-dimensional contour images and by using a multi-view stereo vision algorithm, the three-dimensional reconstruction image corresponding to the obstruction;
inputting the plurality of obstruction images, the two-dimensional contour images and the three-dimensional reconstruction image into the obstruction detection model, so as to output, through the obstruction detection model, reference category information corresponding to each image and the similarity between each image and its reference category;
grouping the input images based on the reference category, and determining the number of images in each group;
determining the average similarity corresponding to each group based on the number of images in each group and the similarity between each image and the reference category;
and taking the reference category with the highest average similarity as the category information corresponding to the obstruction.
7. A hidden danger processing apparatus for a lighting building, the apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform the method according to any one of claims 1-6.
CN202211243569.1A 2022-10-12 2022-10-12 Hidden danger processing method and equipment for lighting building Active CN115311589B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211243569.1A CN115311589B (en) 2022-10-12 2022-10-12 Hidden danger processing method and equipment for lighting building

Publications (2)

Publication Number Publication Date
CN115311589A CN115311589A (en) 2022-11-08
CN115311589B (en) 2023-03-31

Family

ID=83868071

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211243569.1A Active CN115311589B (en) 2022-10-12 2022-10-12 Hidden danger processing method and equipment for lighting building

Country Status (1)

Country Link
CN (1) CN115311589B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117496445B (en) * 2023-12-12 2024-04-09 山东德才建设有限公司 Building construction equipment fault prediction method, equipment and medium

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107341296B (en) * 2017-06-19 2021-02-26 中国建筑第八工程局有限公司 BIM technology-based lighting optimization processing method for high-rise group building
CN108712606B (en) * 2018-05-14 2019-10-29 Oppo广东移动通信有限公司 Reminding method, device, storage medium and mobile terminal
CN110765815A (en) * 2018-07-26 2020-02-07 北京京东尚科信息技术有限公司 Method and device for detecting shielding of display rack
US10829091B2 (en) * 2018-09-20 2020-11-10 Ford Global Technologies, Llc Vehicle sensor cleaning
FR3087037B1 (en) * 2018-10-03 2021-06-04 Soletanche Freyssinet IMAGE ACQUISITION PROCESS
CN109951636A (en) * 2019-03-18 2019-06-28 Oppo广东移动通信有限公司 It takes pictures processing method, device, mobile terminal and storage medium
CN109948525A (en) * 2019-03-18 2019-06-28 Oppo广东移动通信有限公司 It takes pictures processing method, device, mobile terminal and storage medium
CN109951635B (en) * 2019-03-18 2021-01-12 Oppo广东移动通信有限公司 Photographing processing method and device, mobile terminal and storage medium
CN109978805A (en) * 2019-03-18 2019-07-05 Oppo广东移动通信有限公司 It takes pictures processing method, device, mobile terminal and storage medium
CN111158404A (en) * 2020-01-03 2020-05-15 重庆特斯联智慧科技股份有限公司 Building facade lighting intelligent shielding system and method based on Internet of things
CN113409441A (en) * 2021-05-07 2021-09-17 中建科技集团有限公司 Building information display method, device, equipment and computer readable storage medium
CN114648510A (en) * 2022-03-28 2022-06-21 国网新能源云技术有限公司 State detection method and device of photovoltaic module, storage medium and electronic equipment
CN114648708A (en) * 2022-03-28 2022-06-21 国网电子商务有限公司 State detection method and device of photovoltaic module, storage medium and electronic equipment
CN114444194B (en) * 2022-04-11 2022-07-01 深圳小库科技有限公司 Automatic detection method and device for building specification, computer equipment and storage medium
CN114764521A (en) * 2022-05-10 2022-07-19 浙江大学 Garden building shape optimization method and system based on genetic algorithm

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110775028A (en) * 2019-10-29 2020-02-11 长安大学 System and method for detecting automobile windshield shelters and assisting in driving
CN113376581A (en) * 2020-02-25 2021-09-10 华为技术有限公司 Radar blocking object detection method and device
CN113112539A (en) * 2021-04-13 2021-07-13 大庆安瑞达科技开发有限公司 Oil and gas field video monitoring communication and regional communication network analysis system, method, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant