WO2023060405A1 - UAV monitoring method and apparatus, UAV, and monitoring device - Google Patents

UAV monitoring method and apparatus, UAV, and monitoring device

Info

Publication number
WO2023060405A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
warning
monitoring target
area
monitoring
Application number
PCT/CN2021/123137
Other languages
English (en)
French (fr)
Other versions
WO2023060405A9 (zh)
Inventor
黄振昊
方朝晖
马跃涛
Original Assignee
深圳市大疆创新科技有限公司
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to CN202180102022.7A (published as CN117897737A)
Priority to PCT/CN2021/123137 (published as WO2023060405A1)
Publication of WO2023060405A1
Publication of WO2023060405A9

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00: Burglar, theft or intruder alarms
    • G08B13/18: Actuation by interference with heat, light, or radiation of shorter wavelength; actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189: Actuation using passive radiation detection systems
    • G08B13/194: Actuation using passive radiation detection systems, using image scanning and comparing systems
    • G08B13/196: Actuation using passive radiation detection systems, using image scanning and comparing systems, using television cameras

Definitions

  • The present application relates to the technical field of unmanned aerial vehicles (UAVs), and in particular to a UAV monitoring method and apparatus, a UAV, and a monitoring device.
  • In related security-inspection techniques, a fortified area can be monitored by installing cameras in it.
  • However, because cameras have blind spots, additional inspection personnel often need to be dispatched to patrol the fortified area to guard against dangerous accidents that may occur there.
  • Such security inspection relies mainly on human patrols and lacks flexibility and intelligence; in some accident scenarios it is even harder to analyze and make decisions about the accident quickly and accurately.
  • In view of this, one purpose of this application is to provide a UAV monitoring method and apparatus, a UAV, and a monitoring device, so as to increase the flexibility and intelligence of security inspections.
  • In a first aspect, a UAV monitoring method is provided, the method comprising: identifying a monitoring target and a warning object in space according to images collected by a camera carried by the UAV; obtaining position information of the monitoring target and the warning object, the position information being determined based on the pose of the camera when it captured the image; determining a warning area based on the position information of the monitoring target; and generating warning information based on the positional relationship between the warning object's position and the warning area.
  • In a second aspect, a UAV monitoring apparatus is provided, including: a processor; and a memory for storing processor-executable instructions; wherein the processor, when invoking the executable instructions, implements the operations of the method of the first aspect.
  • In a third aspect, a UAV is provided, including: a fuselage; a power assembly for driving the UAV to move in space; a camera; a processor; and a memory for storing processor-executable instructions; wherein the processor, when invoking the executable instructions, implements the operations of the method of the first aspect.
  • In a fourth aspect, a monitoring device is provided, the monitoring device communicating with the UAV and including: a processor; and a memory for storing processor-executable instructions; wherein the processor, when invoking the executable instructions, implements the operations of the method of the first aspect.
  • In a fifth aspect, a computer program product is provided, including a computer program which, when executed by a processor, implements the operations of the method of the first aspect.
  • In a sixth aspect, a machine-readable storage medium is provided, on which several computer instructions are stored; when the computer instructions are executed, the operations of the method of the first aspect are performed.
  • This application provides a UAV monitoring method and apparatus, a UAV, and a monitoring device. According to images collected by a camera carried by the UAV, a monitoring target and a warning object are identified and their position information is obtained; a warning area is then determined based on the position information of the monitoring target, and warning information is generated based on the positional relationship between the warning object's position and the warning area.
  • On the one hand, benefiting from the UAV's mobility, this scheme greatly increases the flexibility and intelligence of security inspection of the monitoring target; on the other hand, in some sudden-accident scenarios it enables rapid analysis of and decisions about the accident.
  • Fig. 1 is a flow chart of a drone monitoring method according to an embodiment of the present application.
  • Fig. 2 is a flow chart of a method for monitoring a drone according to another embodiment of the present application.
  • Fig. 3 is a flow chart of a method for monitoring a drone according to another embodiment of the present application.
  • Fig. 4 is a flow chart of a method for monitoring a drone according to another embodiment of the present application.
  • Fig. 5(a)-(b) is a schematic diagram of a method for acquiring position information of a monitoring target according to an embodiment of the present application.
  • Fig. 6 is a schematic diagram of a drone monitoring method according to another embodiment of the present application.
  • Fig. 7 is a schematic diagram of a drone monitoring method according to another embodiment of the present application.
  • Fig. 8 is a schematic diagram of landmarks according to an embodiment of the present application.
  • Fig. 9(a)-(b) is a schematic diagram of a warning area according to an embodiment of the present application.
  • Fig. 10(a)-(b) is a schematic diagram of a warning area according to another embodiment of the present application.
  • Fig. 11 is a schematic diagram of a warning area according to another embodiment of the present application.
  • Fig. 12 is a schematic diagram of a drone monitoring method according to another embodiment of the present application.
  • Fig. 13(a)-(c) is a schematic diagram showing the positional relationship between the warning object and the warning area according to an embodiment of the present application.
  • Fig. 14 is a schematic structural diagram of a drone monitoring device according to an embodiment of the present application.
  • Fig. 15 is a schematic structural diagram of an unmanned aerial vehicle according to an embodiment of the present application.
  • Fig. 16 is a schematic structural diagram of a monitoring device according to an embodiment of the present application.
  • In related security-inspection techniques, a fortified area can be monitored by installing cameras in it.
  • However, because cameras have blind spots, additional inspection personnel often need to be dispatched to patrol the fortified area to guard against dangerous accidents that may occur there.
  • Such security inspection relies mainly on human patrols and lacks flexibility and intelligence.
  • In emergencies such as fires, natural disasters, and traffic accidents, it is even harder to analyze and make decisions quickly and accurately.
  • Unmanned vehicles, such as unmanned aircraft, unmanned boats, and unmanned ground vehicles, have great mobility and are not restricted by terrain. Benefiting from this mobility, applying them to security inspection would greatly increase its flexibility and intelligence.
  • In some prior schemes, after a UAV collects several images of an area, pose information is written into the images. After the UAV returns, processing software on a ground terminal (such as a personal computer) projects the area covered by the images onto the data-acquisition plane, and other information, such as the position of a monitored object, is then obtained from the projected images.
  • Such methods require post-processing by ground-side software after the UAV returns, so their timeliness is poor and they cannot quickly analyze and decide on sudden accidents.
  • Moreover, after the software processing, the monitored object must be identified manually and its position measured manually, so automatic recognition, machine learning, and deeper analysis are impossible.
  • To this end, this application proposes a UAV monitoring method comprising the steps shown in Fig. 1:
  • Step 110: Identify a monitoring target and a warning object in space according to images collected by the camera carried by the UAV;
  • Step 120: Obtain position information of the monitoring target and the warning object.
  • For example, the position information is determined based on the pose of the camera when it captured the image.
  • In an optional implementation, the image collected by the camera carried by the UAV is obtained, image regions of the monitoring target and the warning object are identified in the image, and the position information of the monitoring target and the warning object is obtained based on the image regions and the camera's pose when the image was captured.
  • Alternatively, the position information may be determined based on other distance sensors on the UAV, for example a binocular camera, lidar, or millimeter-wave radar.
  • Step 130: Determine a warning area based on the position information of the monitoring target;
  • Step 140: Generate warning information based on the positional relationship between the warning object's position and the warning area.
  • In some schemes, the image captured by the camera mounted on the UAV is obtained, image regions of the monitoring target and the warning object are identified in the image, the position information of the monitoring target and the warning object is obtained based on the image regions and the camera's pose when the image was captured, the warning area is determined based on the monitoring target's position information, and the warning information is generated based on the positional relationship between the warning object's position and the warning area.
  • In other schemes, the position information of the monitoring target and the warning object is determined from the images collected by the camera and the camera's pose at capture time; the warning area is determined based on the monitoring target's position information; and the warning information is generated based on the positional relationship between the warning object's position and the warning area.
  • The UAV monitoring method provided in this application can be applied to unmanned vehicles, which may include unmanned aircraft, unmanned boats, unmanned ground vehicles, and other unmanned equipment.
  • The following description takes an unmanned aircraft as an example.
  • The UAV can identify the monitoring target and the warning object from images collected by its own camera, determine their position information based on the camera's pose when each image was collected, generate a warning area based on the monitoring target's position information, and generate warning information based on the positional relationship between the warning object and the warning area.
  • The method described above can also be applied to a monitoring device that communicates with the UAV.
  • The monitoring device may include a remote controller or a terminal device with a video display function, such as a mobile phone, a tablet computer, a PC (personal computer), or a wearable device.
  • Through a communication link established with the UAV, the monitoring device can obtain the images collected by the UAV's camera, identify the monitoring target and the warning object, and then obtain their position information.
  • The position information may be determined by the UAV based on the camera's pose at capture time and then sent to the monitoring device; alternatively, the UAV may send the camera's pose information to the monitoring device,
  • which then determines the positions of the monitoring target and the warning object from it. The monitoring device can then generate a warning area based on the monitoring target's position information and generate warning information based on the positional relationship between the warning object's position and the warning area.
  • Optionally, some steps may be performed on the UAV and others on the monitoring device.
  • The camera carried by the UAV may be an ordinary or professional camera, or an imaging device such as an infrared camera or a multispectral camera, which this application does not limit.
  • The UAV monitoring method provided in this application is based on images collected by the UAV and builds its solution on the position information of the monitoring target and the warning object.
  • The monitoring target can include objects that represent hazard sources and need to be monitored, such as oil tanks, gas stations, and burning areas; the warning object can be an object that should stay away from the hazard source, such as a pedestrian, vehicle, animal, or movable object carrying a fire source (such as a smoking pedestrian).
  • The position information of the monitoring target and the warning object may include real geographic positions.
  • The monitoring target and the warning object are identified in each frame collected by the UAV, the warning area is delimited according to the monitoring target's position information, and warning information is then generated according to the positional relationship between the warning object and the warning area; for example, warning information is generated when the warning object approaches or enters the warning area.
  • The warning information may include text, speech, or video. It can be presented in various ways: output through the monitoring device's user interface, played through the monitoring device's playback module, or output by other devices, for example broadcast through an external speaker or by controlling a warning light to flash.
  • The above scheme takes advantage of the UAV's mobility, greatly increasing the flexibility and intelligence of security inspection of the monitoring target. At the same time, it can be executed while the UAV is operating, with no need to wait for the UAV to return before processing the images with ground-side software, so in sudden-accident scenarios it can analyze and decide on the accident quickly.
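  • As a structural sketch only, the per-frame flow described above might be organized as follows; every helper here (detect, locate, make_area, violates, emit_warning) is an assumed, injected component, not an API defined by this application:

```python
from typing import Callable, Iterable, Tuple

Position = Tuple[float, float]

def process_frame(
    image,
    cam_pose,
    detect: Callable[[object], Tuple[Iterable, Iterable]],
    locate: Callable[[object, object], Position],
    make_area: Callable[[Position], object],
    violates: Callable[[Position, object], bool],
    emit_warning: Callable[[Position, object], None],
) -> None:
    """Per-frame monitoring loop sketched from the scheme above.

    detect       -> (monitoring targets, warning objects) in the image
    locate       -> ground position from an image region and camera pose
    make_area    -> warning area from a monitoring target's position
    violates     -> positional-relationship check (approach/entry)
    emit_warning -> output warning info (text, speech, video, ...)
    """
    targets, objects = detect(image)
    areas = [make_area(locate(t, cam_pose)) for t in targets]
    for o in objects:
        pos = locate(o, cam_pose)
        for area in areas:
            if violates(pos, area):
                emit_warning(pos, area)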
  • In some embodiments, to better present the monitoring target and the warning area to monitoring personnel, the UAV monitoring method provided by this application further includes the steps shown in Fig. 2:
  • Step 210: Obtain an orthophoto or a stereoscopic view of the area where the monitoring target is located;
  • Step 220: Display the warning area in the orthophoto or the stereoscopic view.
  • There are two ways to display the monitoring target and the warning area: in an orthophoto of the area where the monitoring target is located, or in a stereoscopic view of that area; the two can also be combined.
  • For the first display method, an orthophoto is an image under orthographic projection, which carries a large amount of information and is easy to interpret.
  • Presenting the monitoring target and the warning area through an orthophoto lets monitoring personnel grasp both comprehensively.
  • There are two ways to obtain an orthophoto: by image synthesis, or from a 3D model.
  • For the first acquisition method, since the images actually captured by the camera are central projections, the orthophoto can be a composite of the captured images.
  • Specifically, the captured images can be processed based on the camera's pose to synthesize the orthophoto.
  • For the second acquisition method, the orthophoto is obtained through the steps shown in Fig. 3:
  • Step 310: Obtain a 3D model of the area where the monitoring target is located, the 3D model being built from images collected by a camera;
  • Step 320: Obtain the orthophoto from the 3D model.
  • In some embodiments, the images collected by the camera can be used for synthesis or to build the 3D model.
  • The UAV that collects the images and the UAV that executes the monitoring method may be the same UAV or different UAVs. For example, one or more UAVs may first be assigned to fly to the area where the monitoring target is located and collect several images; the ground side can synthesize these images or build a 3D model, and other UAVs can be assigned to execute the monitoring method.
  • For the second display method, the warning area is shown on a stereoscopic view, which presents the warning area and its surroundings to monitoring personnel more intuitively and three-dimensionally.
  • The stereoscopic view can be obtained from a 3D model; the model used for the stereoscopic view and the one used for the orthophoto can be the same model or different models.
  • For example, the model used for the stereoscopic view may be finer than the one used for the orthophoto.
  • By displaying the warning area in the orthophoto or the stereoscopic view, monitoring personnel can better grasp the situation near the warning area.
  • In images captured by the camera, the edge regions often exhibit relatively large distortion, while the central region can be considered distortion-free. If the monitoring target sits at the edge of the image, it will be deformed and the derived position information will be inaccurate. Therefore, to ensure the accuracy of the acquired position information, the positions of the monitoring target and the warning object can be obtained while the monitoring target is in the central region of the image.
  • In addition, in some embodiments the image may first undergo distortion correction before the position information is computed.
  • The positions of the monitoring target and the warning object can be determined based on the camera's pose when the image was collected.
  • In some embodiments, the monitoring target's position information is obtained through the steps shown in Fig. 4:
  • Step 410: Obtain the pixel position of the monitoring target in the image;
  • Step 420: Obtain the pose information of the camera;
  • Step 430: Compute the monitoring target's position from the pixel position and the pose information.
  • The camera includes a lens, a sensor (i.e., the photosensitive device), and other necessary components; the distance from the lens to the sensor is the focal length f.
  • The camera's pose information may be that of the lens or of the lens's optical center.
  • Pose information includes position information and/or attitude information: the position information may include the camera's world coordinates, and the attitude information may include the camera's pitch, roll, and yaw angles.
  • As shown in Fig. 5(a)-(b), when the camera carried by the UAV is operating, the projection range of the sensor 510 on the ground is AB.
  • By obtaining the camera's pose information at capture time, the ground-projection position of any pixel on the sensor can be derived from the geometric projection relationship.
  • As shown in Fig. 5(a), when the camera shoots at nadir, i.e. the sensor 510 is parallel to the ground, the position (X, Y) of the ground projection point A of any pixel (u, v), taking the center of the sensor 510 as the origin, follows from the pose information (x, y, z) of the lens 520 and the geometric projection relationship:
  • X = x + z * u * pixelsize / f
  • Y = y + z * v * pixelsize / f
  • where pixelsize is the size of a single pixel.
  • As shown in Fig. 5(b), when the camera shoots obliquely, i.e. the sensor 510 is not parallel to the ground, the position (X, Y) of the ground projection point A of any pixel (u, v), taking the center of the sensor 510 as the origin, likewise follows from the pose information (x, y, z) of the lens 520 and the geometric projection relationship:
  • X = x + z * tan(α)
  • Y = y + z * v * pixelsize / f
  • where α = β + γ, β can be obtained from the camera's attitude information, γ = arctan(u * pixelsize / f), and pixelsize is the size of a single pixel.
  • Through the above method, the ground-projection position of any pixel on the sensor can be obtained whether the camera shoots at nadir or obliquely.
  • The above takes the monitoring target as an example of how to acquire position information; the warning object's position can be obtained by the same method.
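  • A minimal sketch of the two projection formulas above, assuming a flat ground plane, metric pose coordinates, and the simplified oblique model given in the text (function and parameter names are illustrative):

```python
import math

def pixel_to_ground(u, v, cam_pos, f, pixelsize, beta=0.0):
    """Project sensor pixel (u, v) (origin at the sensor center, in pixels)
    to ground coordinates (X, Y), per the formulas above.

    cam_pos:   (x, y, z) of the lens, z = height above the ground plane
    f:         focal length (same units as pixelsize, e.g. mm)
    pixelsize: size of a single pixel (e.g. mm)
    beta:      camera tilt from nadir (rad); 0 means nadir shooting
    """
    x, y, z = cam_pos
    if beta == 0.0:  # nadir: sensor parallel to the ground
        X = x + z * u * pixelsize / f
        Y = y + z * v * pixelsize / f
    else:            # oblique: alpha = beta + gamma
        gamma = math.atan(u * pixelsize / f)
        X = x + z * math.tan(beta + gamma)
        Y = y + z * v * pixelsize / f
    return X, Y

# example: 35 mm lens, 4.4 um pixels, UAV 100 m above ground
print(pixel_to_ground(1200, -800, (0.0, 0.0, 100.0), f=35.0, pixelsize=0.0044))
```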
  • In some embodiments, the monitoring target's position information includes horizontal position information and height position information,
  • and obtaining it further includes the steps shown in Fig. 6:
  • Step 610: According to the horizontal position information, look up a height correction value in a preset terrain model;
  • Step 620: Use the correction value to update the horizontal position information.
  • The horizontal position (X, Y) of the monitoring target can be obtained through the steps shown in Fig. 4.
  • A preset terrain model, which may be a Digital Elevation Model (DEM) or a Digital Surface Model (DSM), is used to obtain the monitoring target's height: given (X, Y), the corresponding height correction value H is looked up in the DEM or DSM, yielding the position (X, Y, H).
  • As described above, the horizontal position (X, Y) is computed from the camera's pose information,
  • in which z may represent the camera's height relative to the take-off point (home point).
  • In some embodiments, if the home point and the monitoring target's ground projection are not at the same level, computing (X, Y) from the relative height z introduces an error. To eliminate this error, the horizontal position (X, Y) can be updated using the height correction value H.
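  • One possible sketch of this terrain-based refinement, reusing pixel_to_ground from the sketch above and assuming the DEM/DSM is exposed as a simple lookup; the exact update rule is not fixed by the text, so the iteration below is an assumption:

```python
def refine_with_dem(u, v, cam_pos, f, pixelsize, dem_lookup, iters=3):
    """Iteratively correct (X, Y) using terrain height from a DEM/DSM.

    dem_lookup(X, Y) -> ground elevation H at (X, Y), in the same
    vertical datum as cam_pos[2] (here: height relative to the home point).
    """
    x, y, z = cam_pos
    X, Y = pixel_to_ground(u, v, (x, y, z), f, pixelsize)  # first estimate
    H = 0.0
    for _ in range(iters):
        H = dem_lookup(X, Y)                   # correction value H
        z_eff = z - H                          # height above local terrain
        X, Y = pixel_to_ground(u, v, (x, y, z_eff), f, pixelsize)
    return X, Y, H
```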
  • Besides the height correction value H, the monitoring target's position can also be corrected through the steps shown in Fig. 7:
  • Step 710: Identify a measurement point in the image and obtain its pixel position;
  • Step 720: Obtain the pose information of the camera;
  • Step 730: Compute the measurement point's position from the pixel position and the pose information;
  • Step 740: Determine error information from the computed position of the measurement point and its true position;
  • Step 750: Use the error information to correct the monitoring target's position.
  • Measurement points whose true positions are known can be used to correct the monitoring target's position: once the error between a measurement point's true position and the position of its ground projection is determined, that error can be used to correct the monitoring target's position.
  • The ground-projection position of a measurement point is computed from its pixel position in the image and the camera's pose, using the projection relationship.
  • In some embodiments, the measurement points may be preset landmarks with known true positions.
  • These landmarks may be displayed in the images shown to monitoring personnel, whether orthophotos or stereoscopic views.
  • As shown in Fig. 8, four landmarks with known true positions, namely Mark 1, Mark 2, Mark 3, and Mark 4, are distributed on the ground and displayed in the orthophoto.
  • Taking Mark 1 as an example, its true position is (X1, Y1, H1).
  • The position (X1proj, Y1proj, H1proj) of Mark 1's ground projection can be computed from the camera's pose and Mark 1's pixel position in the image.
  • The error V1 between Mark 1's true position (X1, Y1, H1) and its projected position (X1proj, Y1proj, H1proj) can thus be obtained, where V1 is a vector.
  • For the other landmarks, errors V2, V3, and V4 are obtained the same way.
  • For pixels near each landmark, e.g. within a preset pixel distance of it, the ground-projection positions are corrected by that landmark's error.
  • For pixels between two landmarks, the error information can be interpolated and the interpolated value used to correct their ground-projection positions. For the specific interpolation method, reference may be made to related techniques, which this application does not limit.
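  • The interpolation method is left open above; as one hedged example, inverse-distance weighting of the landmark error vectors V1..V4 could be implemented like this (names are illustrative):

```python
import numpy as np

def correct_with_landmarks(points, marks_true, marks_proj, power=2.0):
    """Correct projected ground positions using known landmarks.

    points:     (N, 3) projected positions to correct
    marks_true: (M, 3) true landmark positions, e.g. Mark 1..4
    marks_proj: (M, 3) the same landmarks' projected positions
    Returns corrected (N, 3) positions. Inverse-distance weighting is one
    possible interpolation of the error vectors V_i; the application does
    not prescribe a specific method.
    """
    points = np.asarray(points, dtype=float)
    proj = np.asarray(marks_proj, dtype=float)
    errors = np.asarray(marks_true, dtype=float) - proj          # V_i
    # horizontal distance from every point to every landmark
    d = np.linalg.norm(points[:, None, :2] - proj[None, :, :2], axis=2)
    d = np.maximum(d, 1e-6)                   # avoid division by zero
    w = 1.0 / d**power
    w /= w.sum(axis=1, keepdims=True)         # normalize weights per point
    return points + w @ errors                # add interpolated error vector
```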
  • In some embodiments, the true positions of measurement points can also be determined using a lidar carried by the UAV.
  • The lidar obtains point-cloud information at the measurement points, from which their true positions can be determined.
  • The lidar on board may be a low-cost unit that outputs a sparse point cloud.
  • In some embodiments, the lidar and the camera's sensor have been accurately calibrated, so an extrinsic matrix describing the pose relationship between the two can be determined; the sensor's intrinsic matrix can also be calibrated in advance.
  • In this way, a conversion can be established between a measurement point's point-cloud position (X, Y, Z)_pointcloud and its corresponding pixel (u, v) on the sensor, while the ground-projection position (X1proj, Y1proj, H1proj) of pixel (u, v) is obtained at the same time.
  • Comparing (X, Y, Z)_pointcloud with the projected position (X1proj, Y1proj, H1proj) yields the error information, which is then used to correct the projected positions.
  • For pixels between measurement points, the errors of the two measurement points can be interpolated and the interpolated value used to correct their ground-projection positions.
  • For the specific interpolation method, reference may be made to related techniques, which this application does not limit.
  • In some embodiments, the true positions of the measurement points can also be computed with a vision algorithm:
  • features of the measurement points are extracted from images taken at different times, collinearity equations are constructed, and the true positions are computed accordingly.
  • The error information is determined from each measurement point's true position and its ground-projection position, and is used to correct the projected positions.
  • For pixels between measurement points, the errors of the two measurement points can be interpolated and the interpolated value used for correction.
  • Through the above embodiments, the monitoring target's position information can be obtained and corrected.
  • In some embodiments, the warning object's position information can be acquired by the method of any of the above embodiments.
  • After the monitoring target's position information is obtained, the warning area can be determined based on it.
  • The way the warning area is determined can be configured as needed.
  • For example, in some cases the warning area can be formed by expanding outward from the monitoring target's position by a preset distance,
  • which can be set flexibly.
  • The warning area can also be determined in combination with the monitoring target's surroundings or other objects. In other embodiments, the monitoring target may have a certain size and occupy a certain ground area; its position information may then include a designated position within the target, and the warning area can be determined from that designated position and a preset area model.
  • The designated position may be the monitoring target's center or another, non-central position within it.
  • The preset area model may include size and shape information for the warning area.
  • The shape may be a circle, with the size being the area radius; a rectangle, with the size being the area's length and width; or a sector, with the size being the area's arc and radius.
  • In practice the shape may be arbitrary, which this application does not limit.
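  • A minimal sketch of the designated-position-plus-area-model idea, using the circular shape as an example (class and function names are illustrative assumptions):

```python
import math
from dataclasses import dataclass

@dataclass
class CircleModel:
    radius: float                      # size information: area radius R

def warning_area_contains(designated_pos, model, point):
    """True if `point` falls inside the circular warning area defined by
    the designated position (e.g. the target's center) and the model."""
    dx = point[0] - designated_pos[0]
    dy = point[1] - designated_pos[1]
    return math.hypot(dx, dy) <= model.radius

# example: circular warning area of radius R = 50 m around center (100, 200)
print(warning_area_contains((100.0, 200.0), CircleModel(radius=50.0), (120.0, 230.0)))
```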
  • As in the warning area shown in Fig. 9(a), if the monitoring target is a plant area 910, then according to the center position 920 of the plant area 910 and the preset area model,
  • the circular area centered at 920 with radius R can be determined as the warning area 930.
  • As another example, in the warning area shown in Fig. 9(b), in a scenario where part of a sports field has caught fire, the monitoring target is the sports field 940.
  • An infrared detector carried by the UAV can detect an obvious high-temperature anomaly in the left area 950 of the sports field 940,
  • so the center of the left area 950 can be used as the designated position,
  • and the circular area centered there with radius R is determined as the warning area 960.
  • In some embodiments, the monitoring target's position information may include its boundary position, and the warning area may be determined from the boundary position and a preset buffer distance.
  • In some embodiments, feature extraction and machine learning may be applied to the images collected by the camera to identify the monitoring target's boundary.
  • The boundary position can be determined from feature points on the target's outer surface.
  • The boundary may be the monitoring target's outline or a circumscribed polygon.
  • For example, in the warning area shown in Fig. 10(a), a plant area is the monitoring target 1010,
  • and its boundary is the outline 1020 of the monitoring target 1010;
  • from the boundary position and the preset buffer distance, the warning area can be determined.
  • As another example, in Fig. 10(b), the boundary of the monitoring target 1010 is a circumscribed rectangle 1040, so the warning area 1050 can be determined from the boundary position and the preset buffer distance.
  • Fig. 11 shows another schematic of the warning area, where the monitoring target is a tank.
  • By feature extraction and machine learning on the images collected by the camera, the tank's top and side boundary ranges can be identified, and thus the tank's boundary determined.
  • The ground-projection positions of the boundary pixels can then be obtained one by one to get the tank's boundary position set {POS}i.
  • For an object whose top and side boundaries cannot be identified, its minimum circumscribed rectangle can be drawn directly, and the ground-projection positions of that rectangle's boundary pixels and center pixel obtained one by one to get the object's boundary position set {POS}i.
  • After the monitoring target's boundary position is determined, it can be expanded outward by the preset buffer distance L_buff to obtain the warning area (shown as the buffer boundary in Fig. 11).
  • The position set of the expanded warning area is {POS}i_buff.
  • In some embodiments, a warning area may further contain sub-areas of multiple warning levels, each corresponding to a different buffer distance. For example, if a warning area contains two sub-areas of different warning levels, the first corresponding to buffer distance L_buff_1 and the second to L_buff_2, with L_buff_1 greater than L_buff_2, then the first sub-area's position set is {POS}i_buff_1 and the second's is {POS}i_buff_2.
  • In some embodiments, the warning object's boundary position may also be determined by the methods above. For example, if the warning objects include pedestrians, bicycles, and similar objects whose size in the image is smaller than 5*5 pixels, the warning object's minimum circumscribed rectangle can be drawn directly, as shown in Fig. 11, and the ground-projection positions of its boundary pixels and center pixel obtained one by one to get the warning object's boundary position set {pos}i.
  • A warning area can likewise be set for the warning object, using the method of any of the embodiments above, which is not repeated here. If the warning object's buffer distance is l_buff, the position set of its warning area is {pos}i_buff.
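  • One way to realize the boundary-plus-buffer expansion, sketched with the shapely library and assuming boundary positions are already ground coordinates in meters; the application does not prescribe any particular geometry library:

```python
from shapely.geometry import Polygon

def expand_boundary(boundary_positions, l_buff):
    """Expand a target's ground boundary {POS}i outward by buffer
    distance L_buff, returning the warning-area boundary {POS}i_buff."""
    poly = Polygon(boundary_positions)        # {POS}i as (X, Y) pairs
    buffered = poly.buffer(l_buff)            # outward expansion by L_buff
    return list(buffered.exterior.coords)

# example: 20 m x 10 m tank footprint, buffer distance L_buff = 15 m
tank = [(0, 0), (20, 0), (20, 10), (0, 10)]
warning_boundary = expand_boundary(tank, 15.0)
```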
  • In some embodiments, the UAV monitoring method provided by this application further includes the steps shown in Fig. 12:
  • Step 1210: Obtain type information of the monitoring target;
  • Step 1220: Determine the warning area from the monitoring target's position information and type information.
  • Besides being delimited by the monitoring target's position information, the warning area can also be determined by its type information.
  • The type information may include low-risk, medium-risk, and high-risk classes. For example, in a sudden-accident scene, the area of a traffic accident may be classed as low-risk, while a burning area may be classed as high-risk.
  • Warning areas of different sizes can be set for different classes: for example, the largest buffer distance for high-risk monitoring targets, a smaller one for medium-risk targets, and the smallest for low-risk targets. (A tiny configuration sketch follows.)
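  • Purely as an illustration of type-dependent sizing (the concrete distances are invented for the example):

```python
# illustrative mapping from monitoring-target risk class to buffer distance;
# the numeric values are assumptions, not specified by this application
BUFFER_BY_RISK = {"low": 20.0, "medium": 50.0, "high": 100.0}

def buffer_for(target_type: str) -> float:
    return BUFFER_BY_RISK[target_type]  # largest for high-risk targets
```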
  • In addition, in some embodiments a warning area may contain sub-areas of multiple warning levels, with sub-areas of different levels corresponding to warning information of different levels.
  • For example, the warning area can be divided into a first and a second sub-area with increasing warning levels.
  • For the first sub-area, the warning message may be "You have entered the warning area, please leave as soon as possible".
  • For the second sub-area, the warning message may be "Please stop approaching and leave the warning area quickly".
  • Different warning measures can also be taken for sub-areas of different levels.
  • For the first sub-area in the example above, a voice broadcast of the warning information may be used.
  • For the second sub-area, the warning object may be notified via an app, SMS, or telephone.
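  • A small sketch of level-dependent warning handling; the message texts come from the examples above, while the dispatch mechanism and measure names are assumptions:

```python
from typing import Callable, Dict

WARNING_POLICY = {
    1: {"message": "You have entered the warning area, please leave as soon as possible",
        "measures": ["voice_broadcast"]},
    2: {"message": "Please stop approaching and leave the warning area quickly",
        "measures": ["app_push", "sms", "phone_call"]},
}

def warn(level: int, dispatch: Dict[str, Callable[[str], None]]) -> None:
    """dispatch maps a measure name to a callable that delivers a message."""
    policy = WARNING_POLICY[level]
    for measure in policy["measures"]:
        dispatch[measure](policy["message"])
```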
  • Fig. 13(a)-(c) shows schematic positional relationships between the warning object and the warning area; when the relationship satisfies any of them, warning information is generated.
  • As shown in Fig. 13(a), the monitoring target is a plant area 1310,
  • and the warning area 1320 is a circular area.
  • The figure shows the circumscribed rectangle of the warning object 1330.
  • When the warning object 1330 is inside the warning area 1320, a warning message is generated to remind the warning object 1330 to leave the warning area 1320.
  • In some embodiments, whether the warning object has entered the monitoring target's warning area can be determined by checking whether the warning object's boundary position set {pos}i, or the position set {pos}i_buff of the warning object's own warning area, enters the position set {POS}i_buff of the monitoring target's warning area.
  • In addition, as shown in Fig. 13(b), when the distance between the warning object 1330 and the boundary of the warning area 1320 is smaller than a preset distance threshold, a warning message is generated to remind the warning object 1330 to stop approaching the warning area 1320.
  • In some embodiments, this can be determined by analyzing the distance between the warning object's boundary position set {pos}i (or the position set {pos}i_buff of its warning area) and the position set {POS}i_buff of the monitoring target's warning area.
  • Furthermore, motion information of the warning object can be extracted from its position information, and a predicted position generated from the motion information; if the predicted position and the warning area satisfy a preset condition, warning information is generated. For example, as shown in Fig. 13(c), the motion information extracted from the position of the warning object 1330 shows that it is moving toward the warning area 1320; a predicted position of the warning object 1330 is generated accordingly, and if the predicted position and the warning area 1320 satisfy the preset condition, the warning information is generated.
  • For example, the preset condition may be that the predicted position lies within the warning area.
  • Since the warning object 1330 is moving toward the warning area 1320, its predicted position may enter the warning area 1320, so a warning message can be generated to remind the warning object 1330 to change course.
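  • The three positional checks of Fig. 13(a)-(c) could be combined as below, again sketched with shapely; the velocity estimate and prediction horizon are assumptions not fixed by the text:

```python
from shapely.geometry import Point, Polygon

def check_warning(target_area: Polygon, obj_pos, obj_velocity,
                  d_thresh: float, horizon: float = 3.0):
    """Evaluate the three relationships of Fig. 13 for one warning object.

    target_area:  Polygon of the monitoring target's warning area {POS}i_buff
    obj_pos:      (X, Y) of the warning object
    obj_velocity: (vx, vy) estimated from successive positions
    """
    p = Point(obj_pos)
    if target_area.contains(p):                       # case (a): inside
        return "leave the warning area"
    if target_area.exterior.distance(p) < d_thresh:   # case (b): too close
        return "stop approaching"
    # case (c): predicted position after `horizon` seconds enters the area
    pred = Point(obj_pos[0] + obj_velocity[0] * horizon,
                 obj_pos[1] + obj_velocity[1] * horizon)
    if target_area.contains(pred):
        return "change course"
    return None
```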
  • After the warning information is generated, the warning object can be warned or reminded.
  • In some embodiments, the method may further include: sending the monitoring target's position information to another movable device so that it performs a target task according to that position,
  • where the target task may include capturing images of the monitoring target and/or sending voice messages to the warning object.
  • For example, an on-duty aircraft can be dispatched to fly automatically to the monitoring target's location for reconnaissance or loudspeaker announcements.
  • In some embodiments, the warning object includes a movable object,
  • such as a person or a vehicle;
  • the method may then further include: controlling the UAV to track the warning object. For example, while the UAV hovers at a position in the air to monitor the monitoring target, if a movable warning object appears in the images collected by the camera, the UAV tracks its whereabouts.
  • Warning information is generated from the positions of the warning object and the warning area. When the warning object leaves the camera's shooting range, the UAV can return to its hovering position and continue monitoring the monitoring target.
  • In some embodiments, the monitoring target includes a movable object.
  • When the infrared detector carried by the UAV, or some other means, detects that the movable monitoring target has a high-temperature anomaly (such as a vehicle on fire or at risk of fire),
  • or detects that the monitoring target is a dangerous mobile source (such as one carrying dangerous goods),
  • the method may further include: controlling the UAV to track the monitoring target.
  • When the movable monitoring target is on fire or carries dangerous goods, the UAV can be controlled to track it continuously, so as to warn people around the monitoring target to stay away from it.
  • In addition, this application provides another embodiment of the UAV monitoring method. After the images collected by the UAV's camera are acquired, the monitoring target and the warning object in the images are recognized in real time by machine learning, their positions are determined from the camera's pose at capture time, and the positions are corrected. Then, through feature extraction and machine learning on the images, the top and side boundary ranges of the monitoring target are identified, along with other vehicles, people, and so on in the images.
  • For a monitoring target whose top range can be identified, the ground-projection positions of the boundary pixels can be obtained one by one to get the monitoring target's boundary position set {POS}i.
  • For a monitoring target whose top and side boundary ranges cannot be identified, its minimum circumscribed rectangle can be drawn directly, and the ground-projection positions of the rectangle's boundary pixels and center pixel obtained one by one to get the boundary position set {POS}i.
  • After the monitoring target's boundary position is determined, the warning area is obtained by expanding outward by the preset buffer distance L_buff; the position set of the expanded warning area is {POS}i_buff.
  • In some embodiments, the warning area includes at least two warning-level sub-areas: the first corresponds to buffer distance L_buff_1 and the second to L_buff_2, with L_buff_1 greater than L_buff_2.
  • The first sub-area's position set is {POS}i_buff_1,
  • and the second's is {POS}i_buff_2.
  • A warning area can also be set for the warning object by the method above; if the warning object's buffer distance is l_buff, the position set of its warning area is {pos}i_buff.
  • When the positional relationship between the warning object and the warning area satisfies any of the above conditions, the UAV can report to the monitoring device in real time, and the monitoring device issues the next task scheduling, such as broadcasting through a loudspeaker so that the warning object leaves the warning area, or putting firefighters/security personnel on standby as a control measure.
  • The monitoring device can also send the monitoring target's geographic coordinates to an on-duty aircraft and dispatch it to fly automatically to the vicinity of the monitoring target for investigation or loudspeaker announcements.
  • Through the above scheme, the positions of the monitoring target and the warning object can be corrected, and higher-precision ground-object information and geographic positions obtained.
  • Moreover, it can provide quick guidance for on-site operations in real time, respond effectively to sudden accidents, automatically execute the next step based on the analysis results, or link other equipment for joint operations,
  • which greatly improves the flexibility and intelligence of security inspection.
  • This application also provides a UAV monitoring apparatus, whose schematic structure is shown in Fig. 14.
  • The UAV monitoring apparatus includes a processor, an internal bus, a network interface, memory, and non-volatile storage, and may of course also include hardware required by other services.
  • The processor reads the corresponding computer program from the non-volatile storage into memory and runs it, implementing the UAV monitoring method of any of the embodiments above.
  • This application also provides a UAV, whose schematic structure is shown in Fig. 15.
  • The UAV includes a fuselage, a power assembly for driving the UAV to move in the air, a camera, and a UAV monitoring apparatus as shown in Fig. 14.
  • The UAV monitoring apparatus includes a processor, an internal bus, a network interface, memory, and non-volatile storage, and may of course also include hardware required by other services.
  • The processor reads the corresponding computer program from the non-volatile storage into memory and runs it, implementing the UAV monitoring method of any of the embodiments above.
  • This application also provides a monitoring device, shown in Fig. 16, that communicates with the UAV.
  • The monitoring device includes a processor, an internal bus, a network interface, memory, and non-volatile storage, and may of course also include hardware required by other services.
  • The processor reads the corresponding computer program from the non-volatile storage into memory and runs it, implementing the UAV monitoring method of any of the embodiments above.
  • This application also provides a computer program product, including a computer program which, when executed by a processor, can be used to perform the UAV monitoring method of any of the embodiments above.
  • This application also provides a computer storage medium storing a computer program which, when executed by a processor, can be used to perform the UAV monitoring method of any of the embodiments above.
  • Since the apparatus embodiments basically correspond to the method embodiments, for related details refer to the description of the method embodiments.
  • The apparatus embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e. they may be located in one place or distributed across multiple network elements. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the embodiments, which those skilled in the art can understand and implement without creative effort.

Abstract

This application provides a UAV monitoring method and apparatus, a UAV, and a monitoring device. The method includes: identifying a monitoring target and a warning object in space according to images collected by a camera carried by the UAV; obtaining position information of the monitoring target and the warning object, the position information being determined based on the pose of the camera when it captured the image; determining a warning area based on the position information of the monitoring target; and generating warning information based on the positional relationship between the warning object's position and the warning area. In this way, on the one hand, benefiting from the UAV's mobility, this scheme greatly increases the flexibility and intelligence of security inspection of the monitoring target; on the other hand, in some sudden-accident scenarios it enables rapid analysis of and decisions about the accident.

Description

UAV monitoring method and apparatus, UAV, and monitoring device

Technical Field

This application relates to the technical field of unmanned aerial vehicles, and in particular to a UAV monitoring method and apparatus, a UAV, and a monitoring device.

Background

In related security-inspection techniques, a fortified area can be monitored by installing cameras in it. However, because cameras have blind spots, additional inspection personnel often need to be dispatched to patrol the fortified area to guard against dangerous accidents that may occur there. Such security inspection relies mainly on human patrols and lacks flexibility and intelligence. In some sudden-accident scenarios, it is even harder to analyze and make decisions about the accident quickly and accurately.

Summary of the Invention

In view of this, one purpose of this application is to provide a UAV monitoring method and apparatus, a UAV, and a monitoring device, so as to increase the flexibility and intelligence of security inspections.

To achieve the above technical effect, the embodiments of the present invention disclose the following technical solutions:

In a first aspect, a UAV monitoring method is provided, the method including:

identifying a monitoring target and a warning object in space according to images collected by a camera carried by the UAV;

obtaining position information of the monitoring target and the warning object, the position information being determined based on the pose of the camera when it captured the image;

determining a warning area based on the position information of the monitoring target;

generating warning information based on the positional relationship between the warning object's position and the warning area.
In a second aspect, a UAV monitoring apparatus is provided, including:

a processor;

a memory for storing processor-executable instructions;

wherein the processor, when invoking the executable instructions, implements the operations of the method of the first aspect.

In a third aspect, a UAV is provided, including:

a fuselage;

a power assembly for driving the UAV to move in space;

a camera;

a processor;

a memory for storing processor-executable instructions;

wherein the processor, when invoking the executable instructions, implements the operations of the method of the first aspect.

In a fourth aspect, a monitoring device is provided, the monitoring device communicating with a UAV and including:

a processor;

a memory for storing processor-executable instructions;

wherein the processor, when invoking the executable instructions, implements the operations of the method of the first aspect.

In a fifth aspect, a computer program product is provided, including a computer program which, when executed by a processor, implements the operations of the method of the first aspect.

In a sixth aspect, a machine-readable storage medium is provided, on which several computer instructions are stored; when the computer instructions are executed, the operations of the method of the first aspect are performed. This application provides a UAV monitoring method and apparatus, a UAV, and a monitoring device: a monitoring target and a warning object are identified from images collected by a camera carried by the UAV, their position information is obtained, a warning area is then determined based on the monitoring target's position information, and warning information is generated based on the positional relationship between the warning object's position and the warning area. In this way, on the one hand, benefiting from the UAV's mobility, this scheme greatly increases the flexibility and intelligence of security inspection of the monitoring target; on the other hand, in some sudden-accident scenarios it enables rapid analysis of and decisions about the accident.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a flowchart of a UAV monitoring method according to an embodiment of this application.

Fig. 2 is a flowchart of a UAV monitoring method according to another embodiment of this application.

Fig. 3 is a flowchart of a UAV monitoring method according to another embodiment of this application.

Fig. 4 is a flowchart of a UAV monitoring method according to another embodiment of this application.

Fig. 5(a)-(b) is a schematic diagram of a method for acquiring position information of a monitoring target according to an embodiment of this application.

Fig. 6 is a schematic diagram of a UAV monitoring method according to another embodiment of this application.

Fig. 7 is a schematic diagram of a UAV monitoring method according to another embodiment of this application.

Fig. 8 is a schematic diagram of landmarks according to an embodiment of this application.

Fig. 9(a)-(b) is a schematic diagram of a warning area according to an embodiment of this application.

Fig. 10(a)-(b) is a schematic diagram of a warning area according to another embodiment of this application.

Fig. 11 is a schematic diagram of a warning area according to another embodiment of this application.

Fig. 12 is a schematic diagram of a UAV monitoring method according to another embodiment of this application.

Fig. 13(a)-(c) is a schematic diagram of positional relationships between a warning object and a warning area according to an embodiment of this application.

Fig. 14 is a schematic structural diagram of a UAV monitoring apparatus according to an embodiment of this application.

Fig. 15 is a schematic structural diagram of a UAV according to an embodiment of this application.

Fig. 16 is a schematic structural diagram of a monitoring device according to an embodiment of this application.
Detailed Description

The technical solutions in the embodiments of this application will now be described clearly and completely with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. Based on the embodiments of this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application.

In related security-inspection techniques, a fortified area can be monitored by installing cameras in it. However, because cameras have blind spots, additional inspection personnel often need to be dispatched to patrol the fortified area to guard against dangerous accidents that may occur there. Such security inspection relies mainly on human patrols and lacks flexibility and intelligence. Moreover, in emergencies such as fires, natural disasters, and traffic accidents, it is even harder to analyze and make decisions quickly and accurately.

Unmanned vehicles, such as unmanned aircraft, unmanned boats, and unmanned ground vehicles, have great mobility and are not restricted by terrain. Benefiting from this mobility, applying them to security inspection would greatly increase its flexibility and intelligence.

However, applying UAVs to security-inspection scenarios faces many technical difficulties. In the camera-based scheme above, whether the fortified area has been intruded is determined by comparing the camera's current image with the previous one for pixel changes. Because of the UAV's mobility, the images it collects change relative to the previous image whenever its position changes, so intrusion into the fortified area cannot be monitored by comparing image pixels for changes.
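The fixed-camera baseline described above is essentially frame differencing. As an illustration of that prior technique only (not of this application's method), a minimal OpenCV sketch:

```python
import cv2

def intrusion_by_pixel_change(prev_gray, curr_gray, thresh=30, min_area=500):
    """Fixed-camera baseline: flag intrusion when enough pixels change
    between consecutive grayscale frames. This fails on a moving UAV,
    because the whole scene shifts between frames."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    changed = cv2.countNonZero(mask)
    return changed > min_area
```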
In addition, in some schemes, after a UAV collects several images of an area, pose information is written into the images. After the UAV returns, processing software on a ground terminal (such as a personal computer) projects the area covered by the images onto the data-acquisition plane, and other information, such as the position of a monitored object, is then obtained from the projected images. Such methods require post-processing by ground-side software after the UAV returns, so their timeliness is poor and they cannot quickly analyze and decide on sudden accidents. Moreover, after the software processing, the monitored object must be identified manually and its position measured manually, so automatic recognition, machine learning, and deeper analysis are impossible.
To this end, this application proposes a UAV monitoring method, including the steps shown in Fig. 1:

Step 110: Identify a monitoring target and a warning object in space according to images collected by a camera carried by the UAV;

Step 120: Obtain position information of the monitoring target and the warning object.

For example, the position information is determined based on the pose of the camera when it captured the image. In an optional implementation, the image collected by the camera carried by the UAV is obtained, image regions of the monitoring target and the warning object are identified in the image, and the position information of the monitoring target and the warning object is obtained based on the image regions and the camera's pose when the image was captured.

As another example, the position information may be determined based on other distance sensors on the UAV, for example a binocular camera, lidar, or millimeter-wave radar.

Step 130: Determine a warning area based on the position information of the monitoring target;

Step 140: Generate warning information based on the positional relationship between the warning object's position and the warning area.

In addition, in some schemes, the image collected by the camera carried by the UAV is obtained, image regions of the monitoring target and the warning object are identified in the image, the position information of the monitoring target and the warning object is obtained based on the image regions and the camera's pose when the image was captured, the warning area is determined based on the monitoring target's position information, and the warning information is generated based on the positional relationship between the warning object's position and the warning area.

In addition, in some schemes, the position information of the monitoring target and the warning object is determined from the images collected by the camera carried by the UAV and the camera's pose when the images were captured; the warning area is determined based on the monitoring target's position information; and the warning information is generated based on the positional relationship between the warning object's position and the warning area.

The UAV monitoring method provided by this application can be applied to unmanned vehicles, which may include unmanned aircraft, unmanned boats, unmanned ground vehicles, and other unmanned equipment. The following description takes an unmanned aircraft as an example. The UAV can identify the monitoring target and the warning object from images collected by its own camera, determine their position information based on the camera's pose when each image was collected, generate a warning area based on the monitoring target's position information, and generate warning information based on the positional relationship between the warning object and the warning area.

In addition, the method can also be applied to a monitoring device that communicates with the UAV. The monitoring device may include a remote controller or a terminal device with a video display function, such as a mobile phone, a tablet computer, a PC (personal computer), or a wearable device. Through a communication link established with the UAV, the monitoring device can obtain the images collected by the UAV's camera, identify the monitoring target and the warning object, and then obtain their position information. The position information may be determined by the UAV based on the camera's pose at capture time and then sent to the monitoring device; alternatively, the UAV may send the camera's pose information to the monitoring device, which then determines the positions of the monitoring target and the warning object from it. The monitoring device can then generate a warning area based on the monitoring target's position information and generate warning information based on the positional relationship between the warning object's position and the warning area. Optionally, some steps may be performed on the UAV and others on the monitoring device.

In addition, the camera carried by the UAV may be an ordinary or professional camera, or an imaging device such as an infrared camera or a multispectral camera, which this application does not limit.

The UAV monitoring method provided by this application is based on images collected by the UAV and builds its solution on the position information of the monitoring target and the warning object. The monitoring target can include objects that represent hazard sources and need to be monitored, such as oil tanks, gas stations, and burning areas; the warning object can be an object that should stay away from the hazard source, such as a pedestrian, vehicle, animal, or movable object carrying a fire source (such as a smoking pedestrian). The position information of the monitoring target and the warning object may include real geographic positions. The monitoring target and the warning object are identified in each frame collected by the UAV, the warning area is delimited according to the monitoring target's position information, and warning information is then generated according to the positional relationship between the warning object and the warning area; for example, warning information is generated when the warning object approaches or enters the warning area. The warning information may include text, speech, or video. It can be presented in various ways: output through the monitoring device's user interface, played through the monitoring device's playback module, or output by other devices, for example broadcast through an external speaker or by controlling a warning light to flash. This scheme takes advantage of the UAV's mobility, greatly increasing the flexibility and intelligence of security inspection of the monitoring target. At the same time, it can be executed while the UAV is operating, with no need to wait for the UAV to return before processing the images with ground-side software, so in sudden-accident scenarios it can analyze and decide on the accident quickly.
In some embodiments, to better present the monitoring target and the warning area to monitoring personnel, the UAV monitoring method provided by this application further includes the steps shown in Fig. 2:

Step 210: Obtain an orthophoto or a stereoscopic view of the area where the monitoring target is located;

Step 220: Display the warning area in the orthophoto or the stereoscopic view.

There are two ways to display the monitoring target and the warning area: in an orthophoto of the area where the monitoring target is located, or in a stereoscopic view of that area; the two can also be combined.

For the first display method, an orthophoto is an image under orthographic projection, which carries a large amount of information and is easy to interpret. Presenting the monitoring target and the warning area through an orthophoto lets monitoring personnel grasp both comprehensively.

There are two ways to obtain an orthophoto: by image synthesis, or from a 3D model.

For the first acquisition method, since the images actually captured by the camera are central projections, the orthophoto can be a composite of the captured images. Specifically, the captured images can be processed based on the camera's pose to synthesize the orthophoto.

For the second acquisition method, the orthophoto is obtained through the steps shown in Fig. 3:

Step 310: Obtain a 3D model of the area where the monitoring target is located, the 3D model being built from images collected by a camera;

Step 320: Obtain the orthophoto from the 3D model.

In some embodiments, the images collected by the camera can be used for synthesis or to build the 3D model. In some embodiments, the UAV that collects the images and the UAV that executes the monitoring method may be the same UAV or different UAVs. For example, one or more UAVs may first be assigned to fly to the area where the monitoring target is located and collect several images; the ground side can synthesize these images or build a 3D model, and other UAVs can be assigned to execute the monitoring method.

For the second display method, the warning area is shown on a stereoscopic view, which presents the warning area and its surroundings to monitoring personnel more intuitively and three-dimensionally. In some embodiments, the stereoscopic view can be obtained from a 3D model; the model used for the stereoscopic view and the one used for the orthophoto can be the same model or different models. For example, the model used for the stereoscopic view may be finer than the one used for the orthophoto.

By displaying the warning area in the orthophoto or the stereoscopic view, monitoring personnel can better grasp the situation near the warning area.
In images captured by the camera, the edge regions often exhibit relatively large distortion, while the central region can be considered distortion-free. If the monitoring target sits at the edge of the image, it will be deformed and the determined position will be inaccurate. Therefore, to ensure the accuracy of the acquired position information, the positions of the monitoring target and the warning object can be obtained while the monitoring target is in the central region of the image. In addition to keeping the monitoring target in the central region, in some embodiments the image may first undergo distortion correction before the positions of the monitoring target and the warning object are computed.

The positions of the monitoring target and the warning object can be determined based on the camera's pose when the image was collected. In some embodiments, the monitoring target's position is obtained through the steps shown in Fig. 4:

Step 410: Obtain the pixel position of the monitoring target in the image;

Step 420: Obtain the pose information of the camera;

Step 430: Compute the monitoring target's position from the pixel position and the pose information.

The camera includes a lens, a sensor (i.e., the photosensitive device), and other necessary components; the distance from the lens to the sensor is the focal length f. The camera's pose information may be that of the lens or of the lens's optical center. Pose information includes position information and/or attitude information: the position information may include the camera's world coordinates, and the attitude information may include the camera's pitch, roll, and yaw angles.

As shown in Fig. 5(a)-(b), when the camera carried by the UAV is operating, the projection range of the sensor 510 on the ground is AB. By obtaining the camera's pose information at capture time, the ground-projection position of any pixel on the sensor can be derived from the geometric projection relationship.

As shown in Fig. 5(a), when the camera shoots at nadir, i.e. the sensor 510 is parallel to the ground, the position (X, Y) of the ground projection point A of any pixel (u, v), taking the center of the sensor 510 as the origin, follows from the pose information (x, y, z) of the lens 520 and the geometric projection relationship:

X = x + z * u * pixelsize / f

Y = y + z * v * pixelsize / f

where pixelsize is the size of a single pixel.

As shown in Fig. 5(b), when the camera shoots obliquely, i.e. the sensor 510 is not parallel to the ground, the position (X, Y) of the ground projection point A of any pixel (u, v), taking the center of the sensor 510 as the origin, likewise follows from the pose information (x, y, z) of the lens 520 and the geometric projection relationship:

X = x + z * tan(α)

Y = y + z * v * pixelsize / f

where α = β + γ, β can be obtained from the camera's attitude information, γ = arctan(u * pixelsize / f), and pixelsize is the size of a single pixel.

Thus, through the above method, the ground-projection position of any pixel on the sensor can be obtained whether the camera shoots at nadir or obliquely. The above takes the monitoring target as an example of how to acquire position information; the warning object's position can be obtained by the same method.
In some embodiments, the monitoring target's position information includes horizontal position information and height position information, and obtaining it further includes the steps shown in Fig. 6:

Step 610: According to the horizontal position information, look up a height correction value in a preset terrain model;

Step 620: Use the correction value to update the horizontal position information.

The horizontal position (X, Y) of the monitoring target can be obtained through the steps shown in Fig. 4. A preset terrain model, which may be a Digital Elevation Model (DEM) or a Digital Surface Model (DSM), is used to obtain the monitoring target's height: given (X, Y), the corresponding height correction value H is looked up in the DEM or DSM, yielding the position (X, Y, H).

As described above, the horizontal position (X, Y) is computed from the camera's pose information, in which z may represent the camera's height relative to the take-off point (home point). In some embodiments, if the home point and the monitoring target's ground projection are not at the same level, computing (X, Y) from the relative height z introduces an error. To eliminate this error, the horizontal position (X, Y) can be updated using the height correction value H.
Besides the height correction value H, the monitoring target's position can also be corrected through the steps shown in Fig. 7:

Step 710: Identify a measurement point in the image and obtain its pixel position;

Step 720: Obtain the pose information of the camera;

Step 730: Compute the measurement point's position from the pixel position and the pose information;

Step 740: Determine error information from the computed position of the measurement point and its true position;

Step 750: Use the error information to correct the monitoring target's position.

Measurement points whose true positions are known can be used to correct the monitoring target's position: once the error between a measurement point's true position and the position of its ground projection is determined, that error can be used to correct the monitoring target's position. The ground-projection position of a measurement point is computed from its pixel position in the image and the camera's pose, using the projection relationship.

In some embodiments, the measurement points may be preset landmarks with known true positions. In some embodiments, these landmarks may be displayed in the images (orthophotos or stereoscopic views) shown to monitoring personnel. As one embodiment, as shown in Fig. 8, four landmarks with known true positions, namely Mark 1, Mark 2, Mark 3, and Mark 4, are distributed on the ground and displayed in the orthophoto. Taking Mark 1 as an example, its true position is (X1, Y1, H1). The position (X1proj, Y1proj, H1proj) of Mark 1's ground projection can be computed from the camera's pose and Mark 1's pixel position in the image. The error V1 between Mark 1's true position (X1, Y1, H1) and its projected position (X1proj, Y1proj, H1proj) can thus be obtained, where V1 is a vector. For the other landmarks, errors V2, V3, and V4 are obtained the same way. For pixels near each landmark, e.g. within a preset pixel distance of it, the ground-projection positions are corrected by that landmark's error. For pixels between two landmarks, the error information can be interpolated and the interpolated value used to correct their ground-projection positions. For the specific interpolation method, reference may be made to related techniques, which this application does not limit.

In some embodiments, the true positions of measurement points can also be determined using a lidar carried by the UAV. The lidar obtains point-cloud information at the measurement points, from which their true positions can be determined. In some embodiments, the lidar may be a low-cost unit that outputs a sparse point cloud. In some embodiments, the lidar and the camera's sensor have been accurately calibrated, so an extrinsic matrix describing the pose relationship between the two can be determined; the sensor's intrinsic matrix can also be calibrated in advance. In this way, a conversion can be established between a measurement point's point-cloud position (X, Y, Z)_pointcloud and its corresponding pixel (u, v) on the sensor, while the ground-projection position (X1proj, Y1proj, H1proj) of pixel (u, v) is obtained at the same time. Comparing (X, Y, Z)_pointcloud with the projected position (X1proj, Y1proj, H1proj) yields the error information, which is used to correct the projected positions. A multi-beam lidar can emit several beams, yielding true positions for several measurement points. For pixels between measurement points, the errors of the two measurement points can be interpolated and the interpolated value used to correct their ground-projection positions. For the specific interpolation method, reference may be made to related techniques, which this application does not limit.

In some embodiments, the true positions of the measurement points can also be computed with a vision algorithm: features of the measurement points are extracted from images taken at different times, collinearity equations are constructed, and the true positions are computed accordingly. The error information is determined from each measurement point's true position and its ground-projection position and is used to correct the projected positions. For pixels between measurement points, the errors of the two measurement points can be interpolated and the interpolated value used for correction. For the specific interpolation method, reference may be made to related techniques, which this application does not limit.

Through the above embodiments, the monitoring target's position information can be obtained and corrected. In some embodiments, the warning object's position information can be acquired by the method of any of the above embodiments.
在获取监控目标的位置信息后,可以基于该位置信息确定警戒区域。其中,确定警戒区域的方式可以根据需要设置,例如,在一些例子中,可以根据预设距离,在监控目标的所处位置往外扩充该预设距离作为警戒区域,该预设距离可以需要灵活设定;或者,还可以结合监控目标周边的环境或其他物体等来确定警戒区域;在另一些实施例中,监控目标可能具有一定的大小,占据地面一定的面积,监控目标的位置信息可以包括监控目标中的指定位置,警戒区域可以根据该指定位置与预设的区域模型确定。其中,监控目标中的指定位置可以是监控目标的中心位置,也可以是监控目标中其他非中心位置。预设的区域模型可以包括警戒区域的尺寸信息与形状信息。形状信息可以包括圆形区域,则尺寸信息可以包括区域半径;形状信息可以包括矩形区域,则尺寸信息可以包括区域长宽尺寸;形状信息还可以包括扇形区域,则尺寸信息可以包括区域弧度以及区域半径。在实际应用中,形状信息还可以包括其他任意形状,本申请在此不做限定。
如图9(a)所示的警戒区域,若监控目标为三个矩形构成的一厂房区域910,那么可以根据该厂房区域910的中心位置920以及预设的区域模型,确定以该厂房区域910的中心位置920为圆心,半径为R的圆形区域为警戒区域930。
As another example, in the warning area shown in Fig. 9(b), in a scenario where part of a sports field is on fire, the monitoring target is the sports field 940. An infrared detector carried by the UAV can identify a clear high-temperature anomaly in the left area 950 of the sports field 940, so the center of the left area 950 can be taken as the specified position, and a circular area of radius R centered on that specified position can be determined as the warning area 960.
In some embodiments, the position information of the monitoring target may include the boundary position of the monitoring target, and the warning area may be determined from the boundary position and a preset buffer distance. In some embodiments, feature extraction and machine learning can be applied to the images captured by the camera to identify the boundary of the monitoring target. The boundary position can be determined from feature points on the outer surface of the monitoring target. The boundary of the monitoring target may include the outline of the monitoring target or a circumscribing polygon. For example, in the warning area shown in Fig. 10(a), a factory area is the monitoring target 1010 and its boundary is the outline 1020 of the monitoring target 1010; the warning area 1030 can then be determined from the boundary position and the preset buffer distance. As another example, in the warning area shown in Fig. 10(b), the boundary of the monitoring target 1010 is the circumscribing rectangle 1040, and the warning area 1050 can then be determined from the boundary position and the preset buffer distance.
In the schematic diagram of the warning area shown in Fig. 11, the monitoring target is a tank. By applying feature extraction and machine learning to the images captured by the camera, the top and side boundary extents of the tank are identified, so the boundary of the tank can be determined. The position information of the ground projection point of each boundary pixel can then be obtained one by one, giving the boundary position set {POS}i of the tank. For an object whose top and side boundary extents cannot be identified, the minimum circumscribing rectangle of the object can be drawn directly, and the position information of the ground projection points of the boundary pixels and the center pixel of that rectangle is obtained one by one, giving the boundary position set {POS}i of the object. After the boundary position of the monitoring target is determined, it can be expanded outward by a preset buffer distance L_buff to obtain the warning area (shown as the buffer boundary in Fig. 11). The position set of the expanded warning area is {POS}i_buff.
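The outward expansion of the boundary position set by L_buff can be realized, for example, with a polygon buffer operation. Using the shapely library here is an implementation choice of this sketch, not prescribed by the disclosure.

```python
from shapely.geometry import Polygon

def warning_area_from_boundary(boundary_pts, L_buff):
    """Expand a boundary position set {POS}i outward by L_buff.

    boundary_pts: list of (X, Y) ground projection points of the
    boundary pixels of the monitoring target.
    """
    footprint = Polygon(boundary_pts)
    buffered = footprint.buffer(L_buff)       # outward expansion by L_buff
    return list(buffered.exterior.coords)     # {POS}i_buff
```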
In some embodiments, a warning area may further include sub-areas of multiple warning levels, each warning level corresponding to a different buffer distance. For example, if a warning area includes sub-areas of two different warning levels, the first sub-area corresponds to buffer distance L_buff_1 and the second sub-area to buffer distance L_buff_2, where L_buff_1 is greater than L_buff_2. The position set of the first sub-area is then {POS}i_buff_1 and that of the second sub-area is {POS}i_buff_2.
In some embodiments, the boundary position of the warning object can also be determined by the method provided in the above embodiments. For example, if the warning object includes a pedestrian, a bicycle, or another object whose size in the image is smaller than 5*5 pixels, the minimum circumscribing rectangle of the warning object can be drawn directly, as shown in Fig. 11, and the position information of the ground projection points of the boundary pixels and the center pixel of that rectangle is obtained one by one, giving the boundary position set {pos}i of the warning object. In some embodiments, a warning area can likewise be set for the warning object; it can be set by the method provided in any of the above embodiments, which is not repeated here. If the buffer distance of the warning object is l_buff, the position set of the warning area of the warning object is {pos}i_buff.
In some embodiments, the UAV monitoring method provided in this application further includes the steps shown in Fig. 12:
Step 1210: obtain type information of the monitoring target;
Step 1220: the warning area is determined from the position information and the type information of the monitoring target.
Besides being delimited from the position information of the monitoring target, the warning area can also be determined from the type information of the monitoring target. The type information of the monitoring target may include a low-risk class, a medium-risk class, and a high-risk class. For example, in a sudden-accident scenario, the area of a traffic accident site may be classified as low-risk, while a fire area may be classified as high-risk. Warning areas of different sizes can be set for different classes. For example, the largest buffer distance is set for monitoring targets of the high-risk class, a smaller one for the medium-risk class, and the smallest for the low-risk class.
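A minimal sketch of mapping type information to buffer distances; the class labels and the numeric distances are illustrative values only, not taken from the disclosure.

```python
# Illustrative mapping from risk class to buffer distance (values assumed).
BUFFER_BY_TYPE = {"high": 100.0, "medium": 50.0, "low": 20.0}  # meters

def buffer_for(target_type):
    """Pick the buffer distance L_buff from the target's type information."""
    return BUFFER_BY_TYPE[target_type]
```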
In addition, in some embodiments, a warning area may include sub-areas of multiple warning levels, with sub-areas of different warning levels corresponding to warning information of different levels. For example, the warning area may be divided into a first and a second sub-area with increasing warning levels. For the first sub-area, the warning information may be "You have entered a warning area, please leave as soon as possible." For the second sub-area, it may be "Stop approaching, and leave the warning area immediately." Furthermore, different warning measures may be taken for sub-areas of different levels. In the above example, a voice broadcast of the warning information may be used for the first sub-area, while notifying the warning object via an app, SMS, or phone call may be used for the second sub-area.
Figs. 13(a)-(c) are schematic diagrams of the positional relationship between the warning object and the warning area; warning information is generated when the positional relationship between the warning object and the warning area satisfies any one of them.
As shown in Fig. 13(a), the monitoring target is a factory area 1310, the warning area 1320 is a circular area, and the figure shows the circumscribing rectangle of the warning object 1330. When the warning object 1330 is inside the warning area 1320, warning information is generated to remind the warning object 1330 to leave the warning area 1320. In some embodiments, whether the warning object has entered the warning area of the monitoring target can be determined by analyzing whether the boundary position set {pos}i of the warning object, or the position set {pos}i_buff of the warning area of the warning object, enters the position set {POS}i_buff of the warning area of the monitoring target.
In addition, as shown in Fig. 13(b), when the distance d between the position of the warning object 1330 and the boundary of the warning area 1320 is smaller than a preset distance threshold, warning information is generated to remind the warning object 1330 to stop approaching the warning area 1320. In some embodiments, whether the distance between the position of the warning object and the boundary of the warning area of the monitoring target is smaller than the preset distance threshold can be determined by analyzing the distance between the boundary position set {pos}i of the warning object, or the position set {pos}i_buff of the warning area of the warning object, and the position set {POS}i_buff of the warning area of the monitoring target.
In addition, motion information of the warning object can be extracted based on the position information of the warning object, a predicted position of the warning object can be generated from the motion information, and warning information is generated if the predicted position of the warning object and the warning area satisfy a preset condition. For example, as shown in Fig. 13(c), the motion information of the warning object 1330 can be extracted from its position information, showing that the warning object 1330 is moving toward the warning area 1320. A predicted position of the warning object 1330 is generated from the motion information, and warning information is generated if the predicted position and the warning area 1320 satisfy the preset condition. For example, the preset condition may be that the predicted position is inside the warning area. Since the warning object 1330 is moving toward the warning area 1320, its predicted position may enter the warning area 1320, so warning information can be generated to remind the warning object 1330 to change its trajectory.
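The three positional relationships of Figs. 13(a)-(c) could be checked as sketched below. The constant-velocity prediction over one time step, the shapely-based geometry, and the message strings are assumptions of this example, not requirements of the disclosure.

```python
from shapely.geometry import Point, Polygon

def check_and_warn(track, warning_area_pts, d_thresh, dt=1.0):
    """Evaluate the three positional relationships of Figs. 13(a)-(c).

    track: recent (X, Y) positions of the warning object, newest last;
    warning_area_pts: the position set {POS}i_buff as (X, Y) tuples.
    Returns a warning message, or None if no warning is needed.
    """
    area = Polygon(warning_area_pts)
    p = Point(track[-1])
    if area.contains(p):                          # Fig. 13(a): already inside
        return "You have entered a warning area, please leave as soon as possible."
    if p.distance(area.exterior) < d_thresh:      # Fig. 13(b): too close
        return "Stop approaching the warning area."
    if len(track) >= 2:                           # Fig. 13(c): predicted entry
        (x0, y0), (x1, y1) = track[-2], track[-1]
        pred = Point(x1 + (x1 - x0) * dt, y1 + (y1 - y0) * dt)
        if area.contains(pred):
            return "Your current trajectory leads into a warning area."
    return None
```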
After the warning information is generated, the warning object can be warned or reminded. In some embodiments, the above method may further include the step of sending the position information of the monitoring target to another movable device, so that the movable device performs a target task according to the position information, where the target task may include capturing images of the monitoring target and/or sending voice messages to the warning object. For example, a standby aircraft may be dispatched to fly automatically to the position of the monitoring target for reconnaissance or loudspeaker announcements.
In some embodiments, the warning object includes a movable object, such as a person or a vehicle; the above method may then further include the step of controlling the UAV to track the warning object. For example, when the UAV hovers at a position in the air to monitor the monitoring target and a movable warning object appears in the images captured by the camera, the UAV tracks the movement of that warning object. Warning information is generated from the position information of the warning object and the warning area. After the warning object leaves the shooting range of the camera, the UAV can return to the hovering position and continue monitoring the monitoring target.
In some embodiments, the monitoring target includes a movable object. When the infrared detector carried by the UAV, or another means, identifies that the movable monitoring target has a high-temperature anomaly (e.g., a vehicle on fire or at risk of catching fire), or identifies that the monitoring target includes a dangerous mobile source (e.g., carrying dangerous goods), the above method may further include the step of controlling the UAV to track the monitoring target. When a movable monitoring target is on fire or carries dangerous goods, the UAV can be controlled to keep tracking it, so as to warn people around the monitoring target to stay away from it.
In addition, this application provides another embodiment of a UAV monitoring method. After the images captured by the camera carried by the UAV are obtained, the monitoring target and the warning object in the images can be identified in real time through machine learning, their position information is determined based on the pose of the camera when capturing the images, and the position information is corrected. Then, through feature extraction and machine learning on the images, the top and side boundary extents of the monitoring target are identified, as well as other vehicles, people, and the like in the images.
For a monitoring target whose top extent can be identified, the position information of the ground projection point of each boundary pixel can be obtained one by one, giving the boundary position set {POS}i of the monitoring target. For a monitoring target whose top and side boundary extents cannot be identified, its minimum circumscribing rectangle can be drawn directly, and the position information of the ground projection points of the boundary pixels and the center pixel of that rectangle is obtained one by one, giving the boundary position set {POS}i of the monitoring target.
For a warning object whose size in the image is smaller than 5*5 pixels, its minimum circumscribing rectangle can be drawn directly, and the position information of the ground projection points of the boundary pixels and the center pixel of that rectangle is obtained one by one, giving the boundary position set {pos}i of the warning object.
After the boundary position of the monitoring target is determined, it can be expanded outward by the preset buffer distance L_buff to obtain the warning area; the position set of the expanded warning area is {POS}i_buff.
The warning area includes sub-areas of at least two warning levels: the first sub-area corresponds to buffer distance L_buff_1 and the second to buffer distance L_buff_2, where L_buff_1 is greater than L_buff_2. The position set of the first sub-area is thus {POS}i_buff_1 and that of the second sub-area is {POS}i_buff_2.
A warning area can likewise be set for the warning object by the above method; if the buffer distance of the warning object is l_buff, the position set of the warning area of the warning object is {pos}i_buff.
Then, whether the boundary position set {pos}i of the warning object, or the position set {pos}i_buff of its warning area, has entered the warning area {POS}i_buff of the monitoring target is analyzed in real time. If not, monitoring continues; if so, it is determined which sub-area the warning object has entered, and different levels of warning information and different warning measures are applied for the different sub-areas.
After the warning object enters the warning area, the UAV can report to the monitoring device in real time, and the monitoring device issues the next task dispatch, for example warning countermeasures such as loudspeaker broadcasts asking the warning object to leave the warning area and putting firefighters/security personnel on standby. Besides such warning countermeasures, the monitoring device can also send the geographic coordinates of the monitoring target to a standby aircraft and dispatch it to fly automatically to the vicinity of the monitoring target, according to the geographic coordinates, for reconnaissance or loudspeaker announcements.
In this way, the above scheme corrects the position information of the monitoring target and the warning object, yielding higher-precision ground-object information and geographic positions. Through real-time machine learning and warning-area delimitation, it provides fast real-time guidance for on-site operations, responds effectively to sudden accidents, and automatically performs the next operation based on the analysis results, or works jointly with other devices, greatly improving the flexibility and intelligence of security inspections.
Based on the UAV monitoring method described in any of the above embodiments, this application further provides a schematic structural diagram of a UAV monitoring apparatus as shown in Fig. 14. As shown in Fig. 14, at the hardware level the UAV monitoring apparatus includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, and may of course also include the hardware required by other services. The processor reads the corresponding computer program from the non-volatile memory into the memory and runs it, so as to implement the UAV monitoring method described in any of the above embodiments.
Based on the UAV monitoring method described in any of the above embodiments, this application further provides a schematic structural diagram of a UAV as shown in Fig. 15. As shown in Fig. 15, at the hardware level the UAV includes a fuselage, a power assembly for driving the UAV to move in the air, a camera, and the UAV monitoring apparatus shown in Fig. 14. The UAV monitoring apparatus includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, and may of course also include the hardware required by other services. The processor reads the corresponding computer program from the non-volatile memory into the memory and runs it, so as to implement the UAV monitoring method described in any of the above embodiments.
Based on the UAV monitoring method described in any of the above embodiments, this application further provides a schematic structural diagram of a monitoring device as shown in Fig. 16; the monitoring device communicates with the UAV. As shown in Fig. 16, at the hardware level the monitoring device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, and may of course also include the hardware required by other services. The processor reads the corresponding computer program from the non-volatile memory into the memory and runs it, so as to implement the UAV monitoring method described in any of the above embodiments.
Based on the UAV monitoring method described in any of the above embodiments, this application further provides a computer program product, including a computer program which, when executed by a processor, can be used to perform the UAV monitoring method described in any of the above embodiments.
Based on the UAV monitoring method described in any of the above embodiments, this application further provides a computer storage medium storing a computer program which, when executed by a processor, can be used to perform the UAV monitoring method described in any of the above embodiments.
As for the apparatus embodiments, since they basically correspond to the method embodiments, reference may be made to the relevant descriptions of the method embodiments. The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiments. Those of ordinary skill in the art can understand and implement them without creative effort.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. The terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The method and apparatus provided in the embodiments of this application have been described in detail above. Specific examples are used herein to explain the principles and implementations of this application, and the description of the above embodiments is only intended to help understand the method of this application and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and application scope according to the idea of this application. In summary, the content of this specification should not be construed as limiting this application.

Claims (22)

  1. A method for monitoring an unmanned aerial vehicle, characterized in that the method comprises:
    identifying a monitoring target and a warning object in space according to an image captured by a camera carried by the unmanned aerial vehicle;
    obtaining position information of the monitoring target and the warning object, the position information being determined based on the pose of the camera when capturing the image;
    determining a warning area based on the position information of the monitoring target;
    generating warning information based on the positional relationship between the position of the warning object and the warning area.
  2. The method according to claim 1, characterized in that the method further comprises:
    obtaining an orthographic image or a stereogram of the area where the monitoring target is located;
    displaying the warning area in the orthographic image or the stereogram.
  3. The method according to claim 2, characterized in that the orthographic image is an image synthesized from images captured by the camera.
  4. The method according to claim 2, characterized in that the method further comprises:
    obtaining a three-dimensional model of the area where the monitoring target is located, the three-dimensional model being built from images captured by the camera;
    obtaining the orthographic image through the three-dimensional model.
  5. The method according to claim 1, characterized in that obtaining the position information of the monitoring target and the warning object comprises:
    obtaining the position information of the monitoring target and the warning object when the monitoring target is in the central area of the image.
  6. The method according to claim 1, characterized in that:
    the position information of the monitoring target includes a specified position in the monitoring target, and the warning area is determined according to the specified position and a preset area model; and/or,
    the position information of the monitoring target includes a boundary position of the monitoring target, and the warning area is determined according to the boundary position and a preset buffer distance.
  7. The method according to claim 6, characterized in that the boundary position is determined from feature points on the outer surface of the monitoring target.
  8. The method according to claim 1, characterized in that the method further comprises:
    obtaining type information of the monitoring target;
    the warning area being determined according to the position information and the type information of the monitoring target.
  9. The method according to claim 1, characterized in that the warning area includes sub-areas of multiple warning levels, and sub-areas of different warning levels correspond to warning information of different levels.
  10. The method according to claim 1, characterized in that generating warning information based on the positional relationship between the warning object and the warning area comprises any one of the following:
    generating warning information if the warning object is within the warning area; or
    generating warning information if the distance between the position of the warning object and the boundary of the warning area is smaller than a preset distance threshold; or
    extracting motion information of the warning object based on the position information of the warning object, generating a predicted position of the warning object according to the motion information, and generating warning information if the predicted position of the warning object and the warning area satisfy a preset condition.
  11. The method according to claim 1, characterized in that the method further comprises:
    sending the position information of the monitoring target to another movable device, so that the movable device performs a target task according to the position information; the target task including capturing an image of the monitoring target and/or sending a voice message to the warning object.
  12. The method according to claim 1, characterized in that the warning object includes a movable object, and the method further comprises:
    controlling the unmanned aerial vehicle to track the warning object.
  13. The method according to claim 1, characterized in that the monitoring target includes a movable object, and the method further comprises:
    controlling the unmanned aerial vehicle to track the monitoring target.
  14. The method according to claim 1, characterized in that the position information of the monitoring target is obtained through the following steps:
    obtaining pixel position information of the monitoring target in the image;
    obtaining pose information of the camera;
    calculating the position information of the monitoring target according to the pixel position information and the pose information.
  15. The method according to claim 14, characterized in that the position information of the monitoring target includes horizontal position information and height information, and the step of obtaining the position information further comprises:
    looking up a correction value of the height information using a preset terrain model according to the horizontal position information;
    updating the horizontal position information using the correction value.
  16. The method according to claim 14, characterized in that the step of correcting the position information of the monitoring target comprises:
    identifying a measurement point in the image and obtaining pixel position information of the measurement point;
    obtaining pose information of the camera;
    calculating position information of the measurement point according to the pixel position information and the pose information;
    determining error information based on the position information of the measurement point and real position information of the measurement point;
    correcting the position information of the monitoring target using the error information.
  17. The method according to claim 16, characterized in that the measurement point is a preset landmark whose real position information is known; or the real position information of the measurement point is determined in one or more of the following ways:
    determining the real position information of the measurement point based on point cloud information of the measurement point acquired by a lidar device carried by the unmanned aerial vehicle; or
    calculating the real position information of the measurement point based on a vision algorithm.
  18. An unmanned aerial vehicle monitoring apparatus, characterized in that it comprises:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein, when invoking the executable instructions, the processor implements the operations of the method according to any one of claims 1-17.
  19. An unmanned aerial vehicle, characterized in that it comprises:
    a fuselage;
    a power assembly for driving the unmanned aerial vehicle to move in space;
    a camera;
    a processor;
    a memory for storing instructions executable by the processor;
    wherein, when invoking the executable instructions, the processor implements the operations of the method according to any one of claims 1-17.
  20. A monitoring device, characterized in that the monitoring device communicates with an unmanned aerial vehicle, and the monitoring device comprises:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein, when invoking the executable instructions, the processor implements the operations of the method according to any one of claims 1-17.
  21. A computer program product, comprising a computer program, characterized in that, when the computer program is executed by a processor, the steps of the method according to any one of claims 1-17 are implemented.
  22. A machine-readable storage medium, characterized in that several computer instructions are stored on the machine-readable storage medium, and the computer instructions, when executed, perform the method according to any one of claims 1-17.
PCT/CN2021/123137 2021-10-11 2021-10-11 Unmanned aerial vehicle monitoring method and apparatus, unmanned aerial vehicle, and monitoring device WO2023060405A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180102022.7A 2021-10-11 2021-10-11 Unmanned aerial vehicle monitoring method and apparatus, unmanned aerial vehicle, and monitoring device
PCT/CN2021/123137 WO2023060405A1 (zh) 2021-10-11 2021-10-11 Unmanned aerial vehicle monitoring method and apparatus, unmanned aerial vehicle, and monitoring device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/123137 WO2023060405A1 (zh) 2021-10-11 2021-10-11 Unmanned aerial vehicle monitoring method and apparatus, unmanned aerial vehicle, and monitoring device

Publications (2)

Publication Number Publication Date
WO2023060405A1 true WO2023060405A1 (zh) 2023-04-20
WO2023060405A9 WO2023060405A9 (zh) 2024-04-18

Family

ID=85987137

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/123137 WO2023060405A1 (zh) Unmanned aerial vehicle monitoring method and apparatus, unmanned aerial vehicle, and monitoring device

Country Status (2)

Country Link
CN (1) CN117897737A (zh)
WO (1) WO2023060405A1 (zh)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008181347A (ja) * 2007-01-25 2008-08-07 Meidensha Corp Intrusion monitoring system
US20110050878A1 (en) * 2009-08-28 2011-03-03 Gm Global Technology Operations, Inc. Vision System for Monitoring Humans in Dynamic Environments
US20140333771A1 (en) * 2013-05-08 2014-11-13 International Electronic Machines Corporation Operations Monitoring in an Area
CN106375712A * 2015-07-13 2017-02-01 霍尼韦尔国际公司 Home and office security surveillance system using micro mobile unmanned aerial vehicles and IP cameras
CN108628343A * 2018-05-02 2018-10-09 广东容祺智能科技有限公司 Accident scene blockade apparatus and accident scene blockade method based on unmanned aerial vehicle
CN109117749A * 2018-07-23 2019-01-01 福建中海油应急抢维修有限责任公司 Abnormal target supervision method and system based on unmanned aerial vehicle inspection images
CN112969977A * 2020-05-28 2021-06-15 深圳市大疆创新科技有限公司 Capture assistance method, ground command platform, unmanned aerial vehicle, system, and storage medium
CN112216049A * 2020-09-25 2021-01-12 交通运输部公路科学研究所 Construction warning area monitoring and early-warning system and method based on image recognition
CN112464755A * 2020-11-13 2021-03-09 珠海大横琴科技发展有限公司 Monitoring method and apparatus, electronic device, and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116449875A * 2023-06-16 2023-07-18 拓恒技术有限公司 Unmanned aerial vehicle inspection method and system
CN116449875B (zh) * 2023-06-16 2023-09-05 拓恒技术有限公司 Unmanned aerial vehicle inspection method and system

Also Published As

Publication number Publication date
WO2023060405A9 (zh) 2024-04-18
CN117897737A (zh) 2024-04-16

Similar Documents

Publication Publication Date Title
US11365014B2 (en) System and method for automated tracking and navigation
CN110163904B (zh) Object labeling method, movement control method, apparatus, device, and storage medium
JP6833630B2 (ja) Object detection device, object detection method, and program
CN107274695B (zh) Intelligent lighting system, intelligent vehicle, and vehicle assisted-driving system and method thereof
KR101534056B1 (ko) Traffic signal mapping and detection
US10303943B2 (en) Cloud feature detection
CN111307291B (zh) Surface temperature anomaly detection and positioning method, apparatus, and system based on an unmanned aerial vehicle
CN115597659B (zh) Intelligent safety management and control method for a substation
JPWO2019003953A1 (ja) Image processing device and image processing method
CN110796104A (zh) Target detection method and apparatus, storage medium, and unmanned aerial vehicle
WO2023060405A1 (zh) Unmanned aerial vehicle monitoring method and apparatus, unmanned aerial vehicle, and monitoring device
US10210389B2 (en) Detecting and ranging cloud features
CN114967731A (zh) Automatic field personnel search method based on an unmanned aerial vehicle
WO2023150888A1 (en) System and method for firefighting and locating hotspots of a wildfire
CN111461013A (zh) Real-time fire scene situation awareness method based on an unmanned aerial vehicle
CN112001266B (zh) Monitoring method and system for a large unmanned transport vehicle
CN112802100A (zh) Intrusion detection method, apparatus, and device, and computer-readable storage medium
WO2020211593A1 (zh) Digital reconstruction method, apparatus, and system for a traffic road
EP4296973A1 (en) System and method for localization of anomalous phenomena in assets
JP7143103B2 (ja) Route display device
CN111491154A (zh) 基于一个或多个单视场帧的检测和测距
Carrio et al. A ground-truth video dataset for the development and evaluation of vision-based Sense-and-Avoid systems
JP7130409B2 (ja) Control device
Kim et al. Detecting and localizing objects on an unmanned aerial system (uas) integrated with a mobile device
Amanatiadis et al. The HCUAV project: Electronics and software development for medium altitude remote sensing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21960156

Country of ref document: EP

Kind code of ref document: A1