CN115066663A - Movable platform, method and device for processing data of movable platform, and terminal equipment - Google Patents

Movable platform, method and device for processing data of movable platform, and terminal equipment

Info

Publication number
CN115066663A
CN115066663A (application CN202180013840.XA)
Authority
CN
China
Prior art keywords
sensor
movable platform
determining
sensing range
region
Prior art date
Legal status
Pending
Application number
CN202180013840.XA
Other languages
Chinese (zh)
Inventor
黄振昊
陈建林
何纲
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN115066663A

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01D: MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D 21/00: Measuring or testing not otherwise provided for
    • G01D 21/02: Measuring two or more variables by means not covered by a single other subclass
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00: Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system, the system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/40: Correcting position, velocity or attitude
    • G01S 19/41: Differential correction, e.g. DGPS [differential GPS]
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00: Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system, the system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42: Determining position
    • G01S 19/43: Determining position using carrier phase measurements, e.g. kinematic positioning; using long or short baseline interferometry
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/10: Simultaneous control of position or course in three dimensions

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A movable platform, a method and an apparatus for processing movable platform data, and a terminal device. The movable platform carries a first sensor for sensing first environment information of the environment around the movable platform. The method includes the following steps: acquiring a position passed by the movable platform in a work area, and determining, from the passing position, a sensing range of the first sensor when the movable platform is at the passing position (201); determining a first sensed area in the work area according to the sensing range of the first sensor (202); and controlling a display device to display first identification information for identifying the first sensed area in the work area (203).

Description

Movable platform, method and device for processing data of movable platform, and terminal equipment
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a movable platform, a method and an apparatus for processing data thereof, and a terminal device.
Background
When performing work using a movable platform, it is often necessary to manually control the movement of the movable platform, and the operator sometimes cannot confirm which areas have already been worked and which have not. As a result, work is often repeated in the same area, or parts of the work area are missed.
Disclosure of Invention
In a first aspect, an embodiment of the present disclosure provides a method for processing movable platform data, where a movable platform is equipped with a first sensor and configured to sense first environment information of an environment around the movable platform, and the method includes: acquiring the position of the movable platform passing through the operation area; and determining a sensing range of the first sensor when the movable platform is at the passing position from the passing position; determining a first sensed area in the working area according to the sensing range of the first sensor; controlling a display device to display first identification information for identifying the first sensed region in the work region.
In a second aspect, an embodiment of the present disclosure provides an apparatus for processing data of a movable platform, where the movable platform is equipped with a first sensor for sensing first environment information of an environment around the movable platform, and the apparatus includes a processor configured to perform the following steps: acquiring the position of the movable platform passing through the operation area; and determining a sensing range of the first sensor when the movable platform is at the passing position from the passing position; determining a first sensed area in the working area according to the sensing range of the first sensor; controlling a display device to display first identification information for identifying the first sensed region in the work region.
In a third aspect, embodiments of the present disclosure provide a movable platform, including a first sensor for sensing first environmental information of an environment surrounding the movable platform; a positioning device for positioning the movable platform; and one or more processors configured to obtain a position passed by the movable platform in a work area, determine, based on the passing position, a sensing range of the first sensor when the movable platform is at the passing position, and determine a first sensed area in the work area according to the sensing range of the first sensor; where information output by the processors is sent to a display device to control the display device to display first identification information, the first identification information being used to identify the first sensed area in the work area.
In a fourth aspect, an embodiment of the present disclosure provides a terminal device, communicatively connected to a movable platform, where the movable platform is equipped with a first sensor, and is configured to sense first environment information of an environment around the movable platform, where the terminal device includes: a communication unit; one or more processors for
controlling the communication unit to acquire, from the movable platform, a position passed by the movable platform in a work area;
determining a sensing range of the first sensor when the movable platform is at the passing position from the passing position;
determining a first sensed area in the working area according to the sensing range of the first sensor; and
and a display configured to receive a control instruction from the processor and display first identification information according to the control instruction, where the first identification information is used to identify the first sensed area in the work area.
In a fifth aspect, the present disclosure provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method of the first aspect.
In the embodiment of the disclosure, a first sensed region is determined according to a passing position of a movable platform and a sensing range of a first sensor carried on the movable platform at the passing position, and the first sensed region is marked on an interface of a display device, so that a user can determine the region which is sensed by the first sensor on the display interface of the display device according to marked information, and repeated sensing in the same region or occurrence of an unsensed region is reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
Fig. 1A is a schematic diagram of a case where there is a duplicate work area in some embodiments.
FIG. 1B is a schematic illustration of a case where there are missing work areas in some embodiments.
FIG. 2 is a flow chart of a method for processing movable platform data of an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of a sensing range of a first sensor of an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of a manner of marking a first sensed region according to an embodiment of the present disclosure.
Fig. 5 is a schematic illustration of image distortion according to an embodiment of the present disclosure.
Fig. 6 is a schematic diagram of a core sensing region according to an embodiment of the present disclosure.
Fig. 7 is a schematic diagram of a manner of determining a core sensing region based on a point cloud according to an embodiment of the present disclosure.
Fig. 8 is a schematic diagram of zoom modes according to an embodiment of the present disclosure.
Fig. 9 is a schematic diagram comparing a first sensed region and a second sensed region according to an embodiment of the present disclosure.
Fig. 10 is a schematic diagram of the second sensed region before and after a zoom magnification change according to an embodiment of the present disclosure.
Fig. 11A and 11B are schematic diagrams of relative positional relationships between a first sensed region and a second sensed region according to embodiments of the present disclosure.
FIG. 12 is a flow chart of a method for processing movable platform data according to another embodiment of the present disclosure.
FIG. 13 is a schematic diagram of an apparatus for processing movable platform data embodying the present disclosure.
Fig. 14 is a schematic view of a movable platform of an embodiment of the disclosure.
Fig. 15 is a schematic diagram of a terminal device of an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
When a movable platform is used for search, patrol, and similar operations, its movement often needs to be controlled manually, and the operator sometimes cannot confirm which areas have already been worked and which have not. As a result, work is often repeated in the same area, or parts of the work area are missed. In the following, taking an application scenario in which a movable platform is used for searching as an example, the cases where there are overlapping work areas and missed work areas are considered.
As shown in FIG. 1A, the movable platform moves from position P1 to position P2 along the movement trajectory 102, and each time the movable platform passes a position, a search can be made for an area near the position. Assuming that the search areas for the movable platform at position P1 and position P2 are shown as area 101 and area 103 in the figure, respectively, it can be seen that there is an overlap between area 101 and area 103 and the movable platform has made at least two searches for the overlapping area S1.
As shown in fig. 1B, each gray rectangular area (e.g., the area 104) represents a search area of the movable platform at different positions, and the dashed lines represent the moving track of the movable platform, and it can be seen that there is an area 105 between each search area, and when the movable platform searches at any position, the area 105 is not searched, that is, the area 105 is a missing search area.
Based on this, the disclosed embodiments provide a method for processing movable platform data, where the movable platform is equipped with a first sensor for sensing first environment information of an environment around the movable platform, and with reference to fig. 2, the method includes:
step 201: acquiring the position of the movable platform passing through the operation area; and determining a sensing range of the first sensor when the movable platform is at the passing position from the passing position;
step 202: determining a first sensed area in the working area according to the sensing range of the first sensor;
step 203: controlling a display device to display first identification information for identifying the first sensed region in the work region.
The method of the embodiment of the present disclosure may be executed by a movable platform, or may be executed by a terminal device that communicates with the movable platform, or a part of steps in the method is executed by the movable platform, and another part of steps is executed by the terminal device, which is not limited in this disclosure. Wherein, the movable platform can include, but is not limited to, an unmanned aerial vehicle, an unmanned ship, a movable robot, and the like. The terminal equipment can be a remote controller matched with the movable platform or an intelligent terminal such as a mobile phone, a computer and the like.
The movable platform may carry a first sensor, which may include, but is not limited to, a visual sensor, an infrared sensor, a radar sensor, a sound sensor, or a combination of at least two types of sensors. The first sensor has a certain sensing range; by controlling the movable platform to move, the first sensor mounted on it can continuously sense the environment within a certain range around the movable platform, so that various operations can be performed. Different types of sensors have different sensing ranges and therefore perform different tasks. For example, in an application scenario where a drone carries a visual sensor, the sensing range of the first sensor is the image acquisition range of the visual sensor. Images of the environment surrounding the drone may be captured by the visual sensor to determine whether there are target objects in the environment that need to be searched for, e.g., trapped people or animals. As another example, when the drone carries an infrared sensor, the sensing range of the first sensor is the range in which the infrared sensor can sense temperature. A thermal image of the environment around the drone can be collected by the infrared sensor to determine whether a heat source exists in the environment, thereby realizing fire detection and early warning. Of course, the application scenarios of the embodiments of the present disclosure are not limited to the above; in different scenarios, different types and numbers of first sensors may be carried on different movable platforms to perform different operations.
In step 201, the sensing range of the first sensor may be determined based on both the position of the movable platform and the sensing range of the first sensor when the movable platform is at that position. Fig. 3 shows a schematic view of the sensing range of a vision sensor (Sensor) mounted on the drone. In the case where the vision sensor directly faces the shooting area (e.g., the ground), the ground coverage D of the vision sensor (i.e., the sensing range) may be determined according to the geometric relationship between the side length d of the vision sensor, the focal length f, the field angle FOV, and the flight altitude H of the drone. Similarly, in the case where the vision sensor does not directly face the shooting area, the ground coverage D' (i.e., the sensing range) of the vision sensor may be determined according to the geometric relationship between the side length d of the vision sensor, the focal length f, the field angle FOV, and the measured distance L from the vision sensor to the center of the shooting area.
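As a minimal illustration of the geometry in Fig. 3, the sketch below computes the ground coverage of a vision sensor from its field of view and the flight altitude (or, when the sensor is tilted, from a measured distance to the center of the shooting area). The function name, the example numbers, and the use of the FOV form of the relation are illustrative assumptions rather than part of the disclosure.

```python
import math

def ground_coverage(fov_deg: float, distance_m: float) -> float:
    """Ground coverage D spanned by one field-of-view axis.

    For a sensor looking straight at the target plane, the covered
    length is D = 2 * distance * tan(FOV / 2); `distance_m` is the
    flight altitude H for a nadir-pointing sensor, or a measured
    range L to the center of the shooting area for a tilted one.
    """
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

# Example: an 84 x 63 degree FOV at 100 m altitude (illustrative numbers).
course_cov = ground_coverage(84.0, 100.0)   # coverage along the course axis
lateral_cov = ground_coverage(63.0, 100.0)  # coverage along the lateral axis
print(f"course ~{course_cov:.1f} m, lateral ~{lateral_cov:.1f} m")
```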
The passing position of the movable platform can be obtained through a positioning device mounted on the movable platform. For example, centimeter-level high-precision position information (Lon_UAV, Lat_UAV, H_UAV) of the drone can be obtained through Real-Time Kinematic (RTK) positioning, where Lon_UAV, Lat_UAV, and H_UAV represent the longitude, latitude, and altitude of the drone's position, respectively. Further, each position acquired by the positioning device may also be associated with the time at which it was acquired, so as to determine when that position was acquired.
The attitude of the first sensor may also be acquired by an attitude sensor. For example, the attitude of the first sensor when the movable platform is in the passing position may be acquired by an attitude sensor mounted on the first sensor. Or acquiring the attitude of the movable platform at the passing position through an attitude sensor installed on the body of the movable platform, and determining the attitude of the first sensor at the passing position based on the attitude conversion relation between the body of the movable platform and the first sensor. The attitude of the first sensor may also be determined jointly with the information collected by the two attitude sensors, for example, an average value of the information collected by the two attitude sensors may be determined as the attitude of the first sensor.
For the second case above (using the attitude sensor mounted on the body of the movable platform), in a scenario where the drone carries a vision sensor, IMU attitude data of the drone may be acquired and fused with the position data (Lon_UAV, Lat_UAV, H_UAV), and the position (Lon_PL, Lat_PL, H_PL) of the origin of the vision sensor data is then determined by applying the compensation value from the RTK antenna phase center to the vision sensor. Here Lon_PL, Lat_PL, and H_PL are the longitude, latitude, and height of the location of the origin of the vision sensor data. Since the scale of the background map is large and the position difference between the RTK antenna phase center of the drone and the origin of the vision sensor is negligible relative to that scale, (Lon_UAV, Lat_UAV, H_UAV) and (Lon_PL, Lat_PL, H_PL) are generally denoted collectively as (Lon, Lat, H). Meanwhile, the attitude of the drone (Pitch_UAV, Roll_UAV, Yaw_UAV) is acquired, and the attitude of the vision sensor (Pitch_PL, Roll_PL, Yaw_PL) is determined based on the attitude of the drone and the pose conversion relationship between the drone and the vision sensor, where Pitch, Roll, and Yaw denote the pitch angle, roll angle, and yaw angle, respectively.
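The sketch below illustrates one way such a lever-arm compensation could be applied: the body-frame offset from the RTK antenna phase center to the sensor is rotated into a local ENU frame using the fused attitude and added to the antenna position. The rotation convention, the approximate geodetic conversion, and all names are assumptions for illustration only, not the method prescribed by the disclosure.

```python
import numpy as np

def body_to_enu(pitch_deg, roll_deg, yaw_deg):
    """Rotation matrix from the body frame to a local ENU frame.

    Convention assumed here: yaw about Z, pitch about Y, roll about X,
    composed as R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
    """
    p, r, y = np.radians([pitch_deg, roll_deg, yaw_deg])
    rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    return rz @ ry @ rx

def sensor_origin(lon, lat, h, attitude_deg, antenna_to_sensor_m):
    """Shift the RTK antenna phase-center position to the sensor data origin."""
    offset_enu = body_to_enu(*attitude_deg) @ np.asarray(antenna_to_sensor_m)
    # Approximate conversion of the metric ENU offset back to geodetic deltas.
    dlat = offset_enu[1] / 111_320.0
    dlon = offset_enu[0] / (111_320.0 * np.cos(np.radians(lat)))
    return lon + dlon, lat + dlat, h + offset_enu[2]
```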
In some embodiments, the location traversed by the movable platform may include a current location of the movable platform. The current position refers to a real-time position of the movable platform, that is, a position of the movable platform at the current moment. Accordingly, the sensing range of the first sensor when the movable platform is in the passing position includes the first sensing range of the first sensor when the movable platform is in the current position. By acquiring the real-time sensing range of the first sensor when the movable platform is at the current position, the sensed area of the first sensor at the current moment can be marked on the interface of the display device in real time.
A first field angle of view of the first sensor may be acquired when the movable platform is at a current position, a first pose of the first sensor may be acquired when the movable platform is at the current position, and the first sensing range may then be determined based on the first field angle and the first pose. In a case where the first attitude sensor is mounted on the first sensor, an attitude acquired by the first attitude sensor when the movable platform is at the current position may be taken as the first attitude of the first sensor. In the case where the movable platform includes the second attitude sensor mounted to the body of the movable platform, the first attitude may also be determined based on attitude information of the body of the movable platform collected by the second attitude sensor when the movable platform is at the current position, and an attitude transition relationship between the body of the movable platform and the visual sensor.
In some embodiments, the location traversed by the movable platform may include a historical location of the movable platform. The historical position refers to a position of the movable platform at a certain historical time before the current time, that is, a position that the movable platform has arrived at once. Accordingly, the sensing range of the first sensor when the movable platform is in the passing position includes a second sensing range of the first sensor when the movable platform is in the historical position. By acquiring the real-time sensing range of the first sensor when the movable platform is at the historical position, the user can conveniently review the covered working area in the historical time.
A second field of view of the first sensor may be acquired while the movable platform is in a historical position; and obtaining a second pose of the vision sensor while the movable platform is in the historical position; determining the second sensing range based on the second field of view and the second pose. Wherein, in a case where the second attitude sensor is mounted on the first sensor, an attitude acquired by the first attitude sensor when the movable platform is in the historical position may be taken as the second attitude of the first sensor. In a case where the movable platform includes a second attitude sensor mounted to a body of the movable platform, the second attitude may be determined jointly by an attitude of the body acquired by the second attitude sensor when the movable platform is in a historical position and an attitude conversion relationship between the movable platform and the first sensor.
In some embodiments, the first and second poses may be respectively associated with a position of the movable platform. Further, a position acquired by a positioning device on the movable platform, a time at which the positioning device acquired the position, and a first posture acquired by the first posture sensor may all be associated.
In step 202, a first sensed area may be determined based on the passing position and the sensing range of the first sensor. In a scenario where the drone carries a vision sensor, when the drone is at a position P, the position of the sensing edge of the vision sensor can be calculated from the FOV of the vision sensor (in both the course and the lateral directions) and the flying height of the drone. In fact, according to the geometric relationship shown in Fig. 3, the ground point corresponding to any field angle can be calculated, giving the course projection distance L_course and the lateral projection distance L_lateral from the ground point under the FOV to the projection point of the drone on the ground. The first sensed area when the drone is at the position P can then be determined from the course distance L_course and the lateral distance L_lateral. Specifically, the region centered on the position P and extending outward to the left and right by L_lateral along the lateral projection direction is the first sensed area when the drone is at the position P.
The first sensed region may comprise the first sensed region when the movable platform is at the current position (referred to as the current sensed region) and/or the first sensed region when the movable platform is at a historical position (referred to as the historical sensed region). Referring to Fig. 4, when the drone is at a historical position, the region centered on the drone's track and extending outward to the left and right by L_lateral is the historical sensed region. At the current position of the drone, the region centered on the current position, extending forward and backward by L_course along the course projection direction and left and right by L_lateral along the lateral projection direction, is the current sensed region.
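A sketch of how the current sensed rectangle could be derived from the course and lateral half-extents, assuming a locally flat ground plane, a local metric frame, and a rectangle aligned with the drone's heading; the corner ordering is an illustrative choice.

```python
import math

def current_sensed_corners(x, y, heading_deg, l_course, l_lateral):
    """Corners of the current sensed area centered on the passing position.

    (x, y) is the ground projection of the drone in a local metric frame;
    `l_course` / `l_lateral` are the half-extents along the course and
    lateral projection directions.
    """
    h = math.radians(heading_deg)
    fwd = (math.cos(h), math.sin(h))      # unit vector along the course direction
    side = (-math.sin(h), math.cos(h))    # unit vector along the lateral direction
    corners = []
    for sc, sl in [(1, 1), (1, -1), (-1, -1), (-1, 1)]:
        corners.append((x + sc * l_course * fwd[0] + sl * l_lateral * side[0],
                        y + sc * l_course * fwd[1] + sl * l_lateral * side[1]))
    return corners
```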
In some embodiments, the movable platform and the first sensor may both maintain a fixed, constant attitude over a period of time. During this period, the sensing range of the first sensor acquired at the i-th time may be taken as the sensing range of the first sensor at the (i+1)-th time, and an offset may be superimposed on the first sensed region acquired at the i-th time to obtain the first sensed region at the (i+1)-th time, where i is a positive integer. The offset may be determined based on the speed of the movable platform and the time interval between two adjacent updates of the first sensed region.
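A minimal sketch of this offset-superposition update: when the attitude is unchanged between updates, the sensed region at update i+1 is obtained by translating the region at update i by speed multiplied by the update interval. The data layout (a list of corner tuples) is an assumption.

```python
def shift_sensed_region(corners_i, velocity_xy, dt_s):
    """Translate the sensed region from update i to update i+1.

    `corners_i` are the region corners at update i, `velocity_xy` the
    platform's ground velocity in m/s, `dt_s` the interval between updates.
    Valid only while the platform and sensor attitudes stay constant.
    """
    dx, dy = velocity_xy[0] * dt_s, velocity_xy[1] * dt_s
    return [(cx + dx, cy + dy) for cx, cy in corners_i]
```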
In step 203, the currently sensed region may be marked on the interface of the display device simultaneously with the historically sensed region. Alternatively, one of the current sensed region and the history sensed region may be marked on the interface of the display device based on an instruction of the user. For example, when the user inputs an instruction to mark the currently sensed region, the currently sensed region may be marked on the interface of the display device. When the user inputs an instruction to switch to the history sensed region, the history sensed region may be marked on the interface of the display device.
The step can be continuously executed in the moving process of the movable platform, can be executed under the trigger of the specified trigger operation, and can be executed periodically at intervals. The triggering operation can be that a user inputs a triggering instruction through an interactive component on the display device, or that the existence of repeated sensing areas or the existence of missing sensing areas is detected.
In some embodiments, the display device may display a map on the display interface, and the first sensed area may be marked on the map. For example, the boundaries of the first sensed area may be marked on the map, or the entire first sensed area may be marked on the map, or the corner points of the first sensed area may be marked on the map. Wherein the marked content may be switched to a boundary of the first sensed region or the entire first sensed region in response to receiving a switching instruction of a user. The boundary of the marked first sensed region and the whole marked first sensed region may be continuously displayed on the display interface, or may be displayed on the display interface in a blinking manner.
Referring to Fig. 4, in addition to the first sensed region, other information may be marked, which may include the moving track, the moving direction, and the like of the movable platform. Assuming that the position information of the movable platform is refreshed at a certain frequency (e.g., 5 Hz), the position information POS_i (Lon_i, Lat_i, H_i) refreshed each time is synchronized to the back end, or synchronized to the back end after being down-sampled to a certain frequency (e.g., 1 Hz), and the moving track of the movable platform can then be displayed on the back end / ground station / display screen in real time. Further, the moving direction of the movable platform can also be indicated by an arrow. In the case where the movable platform is a drone, the arrow may indicate the heading of the drone. The area that the movable platform will be able to sense after a period of time can be predicted from its moving direction, and whether that area can be seamlessly joined to the historical sensed area can be judged. In Fig. 4, the solid line with an arrow indicates the movement track of the movable platform, the broken line in (4-a) of Fig. 4 indicates the boundary of the first sensed region, the gray region in (4-b) of Fig. 4 indicates the entire first sensed region, and the rectangular frame indicates the current sensed region. It can be seen that a missed region exists in (4-a) of Fig. 4, while no missed region exists in (4-b) of Fig. 4.
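A minimal sketch of the down-sampling step mentioned above: positions refreshed at 5 Hz are thinned to roughly 1 Hz before being appended to the displayed track. The tuple layout and field order are assumptions.

```python
def downsample_track(positions, min_interval_s=1.0):
    """Keep one position per `min_interval_s` from a high-rate stream.

    `positions` is a list of (timestamp_s, lon, lat, h) tuples, e.g. the
    5 Hz POS_i samples; the result approximates a 1 Hz track for display.
    """
    kept, last_t = [], None
    for t, lon, lat, h in positions:
        if last_t is None or t - last_t >= min_interval_s:
            kept.append((t, lon, lat, h))
            last_t = t
    return kept
```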
To make it easier for the user to view the marked information, the first sensed area may be marked on the map using preset visual features. The visual features include, but are not limited to, at least any one of: color, brightness, transparency, and fill pattern. The visual features used for the current sensed region and the historical sensed region may be different. The boundary of the first sensed region and the region within the boundary may be marked with different visual features, the historical sensed region and the current sensed region may be marked with different visual features, and the movement track of the movable platform and the first sensed region may also be marked with different visual features. Of course, some of the information may be marked with the same visual features in order to avoid a cluttered visual effect caused by too many kinds of visual features; for example, the boundary of the first sensed region may be marked with the same visual features as the region within the boundary. In addition, one or more pieces of the information to be marked may be highlighted based on the user's selection, for example, by marking the information selected by the user with a color that contrasts more strongly with the other marked information, or with thicker lines.
In some embodiments, a core sensing region may also be determined within the first sensed region and marked on an interface of a display device. The core sensing area may be an area that is more easily noticed by a user, an area of a designated location, an area including a designated object, an area having a designated feature, or the like. For example, when the unmanned aerial vehicle carries an infrared sensor to perform fire early warning, the core sensing area may be an area where the temperature in a thermal radiation diagram collected by the infrared sensor is greater than a preset temperature threshold. For another example, when a building group is photographed by a vision sensor mounted on an unmanned aerial vehicle, the core sensing area may be an area including a building in an image captured by the vision sensor.
During a search operation performed by a drone carrying a vision sensor, the operator is likely to examine certain areas of the image captured by the vision sensor more carefully, and can easily overlook operational details in other areas. This overlooking of parts of the picture may result from several aspects:
1) Image distortion. Image distortion can be divided into radial distortion and tangential distortion; see Fig. 5. At the center of the image, the radial distortion can be considered approximately zero, and the geometric characteristics of the photographed object are restored relatively truthfully. At the edge of the image, the distortion is largest, and the geometric characteristics of the photographed object may change. The operator may overlook a feature because the geometric attributes of the photographed object have changed on the screen;
2) The resolving power of the lens decreases with distance from the center: it is strongest at the center of the lens and gradually weakens toward the edge. The direct manifestation is that the contrast and the sharpness at the edges of the image are reduced.
3) The flatness of the lens mounting can also affect the sharpness of the edge area of the vision sensor. If the vision sensor is surface-mounted such that the lens is not flat but tilted at a certain angle, the region near the center lies within the focal plane and its image is clear, while the edge of the measured area easily falls outside the focal plane and the edge image is blurred.
4) The operator inadvertently focuses more on the center of the image and may overlook the edges of the image.
Thus, in some embodiments, a central region in an image acquired by a vision sensor may be determined as a core sensing region, and regions in the image other than the core sensing region are referred to as edge regions. Based on the above-mentioned characteristics of the core sensing region, the core sensing region may be determined in any one of the following ways:
the first method is as follows: obtaining a core sensing range of the first sensor when the movable platform is in the passing position; the core sensing range is within a sensing range of the first sensor; and determining the core sensing area according to the passing position and the core sensing range.
Specifically, a preset second field angle may be acquired, the second field angle being within the range of the first field angle of the vision sensor; the pose of the vision sensor when the movable platform is at the passing position is acquired; and the core sensing range is determined based on the second field angle and the pose of the vision sensor when the movable platform is at the passing position. The second field angle may be a default field angle; for example, assuming that the first field angle is A1 degrees by A2 degrees, the second field angle defaults to (A1/k1) degrees by (A2/k2) degrees, where k1 and k2 are both constants greater than 1. Alternatively, referring to Fig. 6, the second field angle may also be specified manually by the user, who can directly set the included angles of the core sensing area in the course and lateral directions. For example, for a vision sensor with an FOV of 84 degrees by 63 degrees, an FOV of 60 degrees by 45 degrees may be manually set as the second field angle. After the second field angle is determined, the core sensing range is determined in a manner similar to that of determining the sensing range of the first sensor, which is not repeated here.
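The sketch below illustrates the default choice of the second field angle (the first field angle divided by constants k1 and k2) and the resulting core sensing extent on the ground, reusing the coverage relation from the earlier sketch for a nadir-pointing sensor; the default values of k1 and k2 shown are illustrative assumptions.

```python
import math

def core_sensing_extent(fov1_course_deg, fov1_lateral_deg, altitude_m,
                        k1=1.4, k2=1.4):
    """Ground extent of the core sensing area for a nadir-pointing sensor.

    The second field angle defaults to the first divided by k1 / k2
    (both > 1); it may instead be set manually, e.g. 60 x 45 degrees
    for an 84 x 63 degree sensor.
    """
    fov2_course = fov1_course_deg / k1
    fov2_lateral = fov1_lateral_deg / k2
    course = 2.0 * altitude_m * math.tan(math.radians(fov2_course) / 2.0)
    lateral = 2.0 * altitude_m * math.tan(math.radians(fov2_lateral) / 2.0)
    return course, lateral
```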
The second method comprises the following steps: acquiring an image acquired when the first sensor is at the passing position; determining an image quality of the image; determining the core sensing region according to the image quality. In some embodiments, the image quality comprises a resolution, the resolution in the image corresponding to the core sensing region being higher than the resolution in the image corresponding to other regions than the core sensing region. A resolution threshold may be set and regions of the image having a resolution greater than or equal to the resolution threshold may be determined as core sensing regions and regions of the image having a resolution less than the resolution threshold may be determined as edge regions.
In some embodiments, the image quality includes a degree of distortion, the degree of distortion in the part of the image corresponding to the core sensing region being lower than that in the parts corresponding to the other regions. A threshold for the distortion level (mainly radial distortion) can be set, and the radial / total distortion value of the device (vision sensor plus lens) is calculated from the imaging center toward the imaging edge. When the radial / total distortion value calculated at a certain point along a diagonal is larger than the set threshold, that point is taken as a demarcation point. All obtained demarcation points are connected, the inner area enclosed by the connecting lines is determined as the core sensing area, and the area whose distortion is larger than the threshold is taken as the edge area.
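One way this distortion-threshold demarcation could be realized is sketched below: walk outward along each image half-diagonal, evaluate a radial distortion model, and record the first point whose distortion exceeds the threshold. The polynomial model and the coefficient names k1, k2 are common assumptions, not values taken from the disclosure.

```python
def radial_distortion(r_norm, k1, k2):
    """Relative radial displacement for a simple polynomial distortion model."""
    return abs(k1 * r_norm**2 + k2 * r_norm**4)

def diagonal_demarcation(width, height, k1, k2, threshold, steps=200):
    """First point along one half-diagonal where distortion exceeds the threshold.

    Returns pixel coordinates relative to the image center, or None if the
    threshold is never exceeded; repeating this for all four half-diagonals
    and connecting the points outlines the core sensing area.
    """
    cx, cy = width / 2.0, height / 2.0
    for i in range(1, steps + 1):
        frac = i / steps  # normalized distance from the center along the diagonal
        if radial_distortion(frac, k1, k2) > threshold:
            return (frac * cx, frac * cy)
    return None
```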
In some embodiments, the image quality includes contrast, the contrast in the part of the image corresponding to the core sensing region being higher than that in the parts corresponding to the other regions. Screening can be performed according to the relationship between the modulation transfer function (MTF) curve of the lens and the pixel size of the vision sensor. Assuming the sensor pixel size is A microns, then based on the Nyquist sampling criterion and practical operating experience, taking 1/2 of the Nyquist frequency as a reference, the corresponding size of 4 pixels is 4A microns, which corresponds to a spatial frequency of B line pairs. Since the lens resolution must match the sensor pixel size, and the drop in lens resolution at the edge reduces the contrast there, a contrast threshold C at the spatial frequency of B line pairs is set. According to the MTF curve of the lens used in the device, the area whose contrast falls below the threshold is classified as the edge area, and the area whose contrast is not below the threshold is classified as the core sensing area.
The third method comprises the following steps: determining the core sensing area by point cloud density. Referring to fig. 7, when the first sensor is a radar sensor, particularly, when the laser radar simulates the vision of human eyes through rotation scanning, the determination can be made by setting the density of the point cloud. Assuming that the point cloud density P is a threshold limit, the portion with the point cloud density higher than P is set as a core sensing region, and the portion with the external point cloud density lower than P is set as an edge region.
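A sketch of the point-cloud-density criterion: grid the scanned ground plane, count returns per cell, and label cells whose density is at least the threshold P as core sensing cells and the rest as edge cells. The grid size and data layout are assumptions for illustration.

```python
from collections import defaultdict

def core_cells_by_density(points_xy, cell_size_m, density_threshold):
    """Label grid cells as core (True) or edge (False) by point-cloud density.

    `points_xy` is an iterable of (x, y) laser returns projected onto the
    ground; density is measured in points per square metre.
    """
    counts = defaultdict(int)
    for x, y in points_xy:
        counts[(int(x // cell_size_m), int(y // cell_size_m))] += 1
    cell_area = cell_size_m * cell_size_m
    return {cell: (n / cell_area) >= density_threshold for cell, n in counts.items()}
```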
In some embodiments, the display device is controlled to display second identification information identifying the core sensing region within the first sensed region, where the first identification information is different from the second identification information. It can be understood that the core sensing region and the other regions outside it (i.e., the edge regions) may be marked with different visual features on the display interface of the display device. For example, the core sensing area may be marked with a darker color and the edge area with a lighter color. The core sensing area and the edge area may be marked on the display interface at the same time. Alternatively, only the core sensing area may be marked by default, and the edge area may additionally be marked when a display instruction from the user is received.
Based on any of the above manners, the course extension range L_course_core and the lateral extension range L_lateral_core of the core sensing area within the data acquisition range of the first sensor can be determined, so that it can be determined which regions are core sensing regions and which are edge regions. The two kinds of regions may be marked with different visual features.
After the operation is finished, machine learning may be used to determine which areas were not sensed at all (missed areas) and which areas were covered only by edge regions and may therefore have been overlooked, and prompt information is output to the user.
In some embodiments, the movable platform further comprises a second sensor for collecting second environmental information, wherein a sensing range of the second sensor is smaller than a sensing range of the first sensor. When the movable platform is at the passing position and the second sensor is working, the sensing range of the second sensor when the movable platform is at the passing position can be obtained; determining a second sensed area according to the passing position and the sensing range of the second sensor; controlling the display device to display third identification information representing the second sensed region in the work region.
In some embodiments, the first sensor and the second sensor are both vision sensors, and the first sensor has a larger field of view than the second sensor. In a search operation, it is often necessary to first confirm, through the large FOV, the approximate area where the object to be searched is located, and then switch to zoom to perform a close search with a smaller FOV and more detail. That is, in this embodiment, the first sensor may first be operated to locate the approximate area where the target object is located, and the second sensor may then be operated to perform a fine search for the target object within that approximate area.
The implementation logic of optical zoom is shown in (8-a) of Fig. 8: by changing the focal length of the camera from f1 to f2, the FOV is reduced from the larger FOV1 to the smaller FOV2, so that the camera's vision sensor covers a smaller area and captures more detail. The logic of digital zoom is shown in (8-b) of Fig. 8: a portion of the sensor image is cropped and magnified, without changing the detail of the image. In terms of the field angle, however, both optical zoom and digital zoom are equivalent to a reduction of the FOV over the observed object.
When switching from wide angle to zoom, the sensing range in the course direction, the sensing range in the lateral direction, and the coordinate positions of the 4 corner points are first confirmed according to the field angle of the wide-angle camera, FOV_wide_course × FOV_wide_lateral, and the flight height / laser ranging; the calculation is shown in Fig. 9. The zoom lens and the wide-angle lens may be calibrated and compensated in advance so that, at the original magnification, the centers of their images lie at the same position. The figure shows the case where the first sensor directly faces the ground before switching and the initial sensing range of the second sensor directly faces the ground after switching; those skilled in the art will understand that the actual situation is not limited to this.
Fig. 9 shows the case where the first sensed region and the second sensed region are displayed simultaneously. That is, after switching to zoom, the coverage area of the wide-angle lens before switching is retained. Then, according to the photographing center, the zoomed field angle FOV_tele_course1 × FOV_tele_lateral1, and the focal length f_tele1, the coordinates of the four corner points of the second sensed area at the current height / distance are calculated and displayed within the first sensed area. If the focal length magnification is adjusted, e.g. the focal length is adjusted from f_tele1 to f_tele2, the field angle and the zoom mark range change accordingly and the marked result is refreshed. When the angle of the second sensor changes, the zoom sensing range is recalculated according to the angle and the new distance value obtained by laser ranging. Likewise, when the drone or the second sensor moves, the range sensed by the second sensor is also recalculated. As shown in Fig. 10, if the zoom magnification is changed from small to large, the second sensed region before the change is shown by the gray rectangular frame in the figure, and the second sensed region after the change is shown by the region enclosed by the dashed frame. In addition to the above marking manner, only the second sensed region may be marked without marking the first sensed region. Alternatively, both the first sensed region and the second sensed region are marked only for a period of time after the switching, and only the second sensed region is marked after that period. In some embodiments, the first sensed region and the second sensed region may be marked with different visual features.
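The sketch below shows one way the zoomed (second) sensed range could be recomputed when the focal length or magnification changes, assuming a pinhole model in which the field angle follows from the sensor side length and the focal length; symbols such as f_tele and the sensor dimensions are illustrative, and the result is recalculated whenever the focal length, gimbal angle, or laser-ranged distance changes.

```python
import math

def zoom_fov_deg(sensor_side_mm: float, focal_mm: float) -> float:
    """Field angle along one axis of an assumed pinhole camera."""
    return 2.0 * math.degrees(math.atan(sensor_side_mm / (2.0 * focal_mm)))

def zoom_ground_extent(sensor_w_mm, sensor_h_mm, focal_mm, range_m):
    """Ground extent covered by the zoom sensor at the measured range.

    Call again whenever the focal length (e.g. f_tele1 -> f_tele2), the
    sensor angle, or the laser-ranged distance changes.
    """
    fov_course = zoom_fov_deg(sensor_w_mm, focal_mm)
    fov_lateral = zoom_fov_deg(sensor_h_mm, focal_mm)
    course = 2.0 * range_m * math.tan(math.radians(fov_course) / 2.0)
    lateral = 2.0 * range_m * math.tan(math.radians(fov_lateral) / 2.0)
    return course, lateral
```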
The sensed region of the second sensor may also include a historical sensed region and a current sensed region after the second sensor is operational. In the process of continuously adjusting the angle, the position and the zoom magnification, the current sensed area and the historical sensed area of the second sensor can be marked by different visual features. Referring to fig. 11A, the historically sensed area of the second sensor may be marked with a lighter color and the currently sensed area of the second sensor may be marked with a darker color. Additionally, the sensed region of the first sensor may also be marked. It is thus possible to determine which parts have been scrutinized by the second sensor within the sensed area of the first sensor (i.e. the historical sensed area of the second sensor), and to determine which areas the second sensor sensed (i.e. the current sensed area of the second sensor) at the current position, angle and adjustment to the current zoom magnification. Referring to fig. 11B, in the case where the currently sensed area of the second sensor exceeds the sensed area of the first sensor, an error notice message may also be output.
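A sketch of the containment check implied above: if the second sensor's current sensed rectangle is not fully inside the first sensor's sensed rectangle, a warning is emitted. Axis-aligned rectangles in a common ground frame are assumed for simplicity; the actual regions need not be axis-aligned.

```python
def rect_contains(outer, inner):
    """True if axis-aligned rectangle `inner` lies entirely within `outer`.

    Rectangles are (x_min, y_min, x_max, y_max) in the same ground frame.
    """
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def check_zoom_within_wide(wide_rect, zoom_rect):
    """Warn when the zoom sensed area exceeds the wide-angle sensed area."""
    if not rect_contains(wide_rect, zoom_rect):
        print("Warning: zoom sensed area exceeds the wide-angle sensed area")
```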
It should be noted that, for various boundaries or regions, such as the boundary of the first sensing region, the boundary inside the first sensing region, the core sensing region, the edge region, the history sensed region of the first sensor, the current sensed region of the first sensor, the history sensed region of the second sensor, and the current sensed region of the second sensor, some or all of the information may be marked with different visual features, and the visual features used for marking the various information may be set according to actual needs, which is not limited by the present disclosure. In addition, the various information described above may also be marked with the same visual characteristics. To avoid confusing various types of information, different tagged information may be displayed at different times.
The present disclosure has the following advantages:
(1) by displaying the sensed areas of the sensor, the operator can be assisted in determining which areas have been sensed and which areas have not been sensed, and overlapping areas and missing areas are not likely to occur.
(2) The method can assist operators to know which areas can be sensed by the current sensor when the movable platform is at the current position, and therefore the method is helpful for effectively planning the next moving path of the movable platform.
(3) By judging the core sensing area and the edge area sensed by the sensor, the operator can be prompted which areas are easily ignored in the searching process, and the areas can be missed to be checked and need to be checked again.
(4) Under the scene of multi-sensor cooperative operation, the current sensed area and the historical sensed area of the sensor with the smaller sensing range can be displayed in the sensing area of the sensor with the larger sensing range in real time, so that workers can trace the current sensed area and the historical sensed area when performing fine search.
Referring to fig. 12, another method for processing data of a movable platform is further provided in the embodiments of the present disclosure, where the movable platform is equipped with a first sensor and a second sensor, and the first sensor and the second sensor are respectively used for sensing a first environment and a second environment around the movable platform, and a sensing range of the second sensor is smaller than a sensing range of the first sensor, and the method includes:
step 1201: when a currently working sensor is switched from the first sensor to the second sensor, acquiring a sensed area of the first sensor and a sensed area of the second sensor at the switching moment;
step 1202: marking a sensed area of the first sensor and a sensed area of the second sensor in a display interface of a display device controls the display device to display first identification information and third identification information, the first identification information being used for marking the sensed area of the first sensor, the third identification information being used for the sensed area of the second sensor.
Wherein, the first sensor and the second sensor can be both vision sensors. In some application scenarios, the scheme of the embodiment of the present disclosure can be used for searching the target area. Because the sensing range of the first sensor is large, the first sensor can be used for roughly searching the target area, so that the approximate area where the target object is located can be located. And then, performing fine search on the rough area through a second sensor so as to accurately locate the target object. In the related art, before and after switching the currently operating sensor, due to the large difference between the FOVs of the two, problems often occur that it is impossible to determine which areas have been sensed and which areas have not been sensed after switching, and it is impossible to determine the relative positional relationship between the sensed area of the second sensor and the sensed area of the first sensor, positioning needs to be performed by the experience of a worker after each switching, and therefore, the operation is complicated, and the practicability is poor. In the embodiment, the sensed areas of the two sensors are respectively marked in the display interface, so that a user can observe the sensed area of the first sensor and the sensed area of the second sensor through the display interface after switching the sensors, and can determine which part of the sensed area of the second sensor in the sensed area of the first sensor, manual positioning is not needed, the operation complexity is reduced, and the practicability is improved.
The determination of the sensed region of the first sensor and the sensed region of the second sensor can be found in the embodiments of the method described above, and is not repeated herein.
The embodiment of the present disclosure further provides an apparatus for processing data of a movable platform, where the movable platform is equipped with a first sensor, and is configured to sense first environment information of an environment around the movable platform, and the apparatus includes one or more processors, and the processors are configured to perform the following steps:
acquiring the position of the movable platform passing through the operation area; and
determining a sensing range of the first sensor when the movable platform is at the passing position from the passing position;
determining a first sensed area in the working area according to the sensing range of the first sensor;
controlling a display device to display first identification information for identifying the first sensed region in the work region.
Optionally, the passing location comprises a current location of the movable platform; the sensing range of the first sensor when the movable platform is in the passing position includes the first sensing range of the first sensor when the movable platform is in the current position.
Optionally, the first sensor is a vision sensor; in determining from the passed position a sensing range of the first sensor when the movable platform is at the passed position, the processor is specifically configured to: acquiring a first field angle of the first sensor when the movable platform is at a current position; and obtaining a first attitude of the first sensor when the movable platform is at the current position; determining the first sensing range based on the current position, the first field of view, and the first pose.
Optionally, the movable platform comprises a first attitude sensor mounted on the first sensor, for acquiring an attitude of the first sensor; the first attitude is acquired by the first attitude sensor when the movable platform is at the current position.
Optionally, the movable platform comprises a second attitude sensor mounted on the body of the movable platform, and configured to acquire an attitude of the movable platform; the first posture is determined based on posture information of the body of the movable platform acquired by the second posture sensor when the movable platform is at the current position and a posture conversion relation between the body of the movable platform and the vision sensor.
Optionally, the passed position comprises a historical position of the movable platform; the sensing range of the first sensor when the movable platform is in the passing position includes a second sensing range of the first sensor when the movable platform is in the historical position.
Optionally, the first sensor is a visual sensor; in determining from the passed position a sensing range of the first sensor when the movable platform is in the passed position, the processor is specifically configured to: acquiring a second field angle of the first sensor when the movable platform is at the historical position; and obtaining a second pose of the vision sensor while the movable platform is in the historical position; determining the second sensing range based on the historical position, the second field of view, and the second pose.
Optionally, the movable platform comprises a first attitude sensor mounted on the first sensor, for acquiring an attitude of the first sensor; the second attitude is acquired by the first attitude sensor while the movable platform is in a historical position.
Optionally, the movable platform comprises a second attitude sensor mounted on the body of the movable platform and configured to acquire an attitude of the body of the movable platform; the second attitude is determined from the attitude of the body acquired by the second attitude sensor when the movable platform is at the historical position and the attitude transformation relationship between the body of the movable platform and the first sensor.
Optionally, the display device is configured to display a map on a display interface, and when the display device displays the first identification information, the processor is specifically configured to:
controlling the display device to mark a boundary of the first sensed area on the map using the first identification information; or controlling the display device to fill the entire first sensed area on the map with the first identification information.
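Both display options, boundary marking and whole-area filling, can be mocked up with an ordinary plotting library; the disclosure does not tie the display device to any particular rendering stack. A sketch using matplotlib, with the colour, transparency, and coordinates chosen purely for illustration:

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon as MapPolygon

def mark_first_sensed_area(ax, footprint_xy, fill=True):
    """Mark the first sensed area on a map axes: either outline its boundary
    or fill the entire region with the identification colour."""
    patch = MapPolygon(footprint_xy, closed=True,
                       facecolor=(0.0, 0.6, 1.0, 0.3) if fill else "none",
                       edgecolor=(0.0, 0.6, 1.0, 1.0), linewidth=2.0)
    ax.add_patch(patch)
    ax.autoscale_view()
    return patch

fig, ax = plt.subplots()
mark_first_sensed_area(ax, [(-90, -60), (90, -60), (90, 60), (-90, 60)], fill=False)
```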
Optionally, the first identification information comprises a visual feature.
Optionally, the visual features include at least any one of: color, brightness, transparency, fill pattern.
Optionally, the processor is further configured to:
determining a core sensing region; the core sensing region is within the first sensed region;
controlling the display device to display second identification information, the second identification information being used to identify the core sensing area.
Optionally, when determining the core sensing region, the processor is specifically configured to:
determining a core sensing range of the first sensor when the movable platform is at the passing position from the passing position; the core sensing range is within a sensing range of the first sensor;
determining the core sensing region according to the core sensing range.
Optionally, the first sensor is a vision sensor; in determining the core sensing range of the first sensor when the movable platform is at the passing position, the processor is specifically configured to: acquiring a preset second field of view, wherein the second field of view is within the first field of view of the vision sensor; acquiring the attitude of the vision sensor when the movable platform is at the passing position; and determining the core sensing range based on the second field of view and the attitude of the vision sensor when the movable platform is at the passing position.
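Because the preset second field of view is a subset of the first, the core sensing range computed with it is, for the same pose, contained in the full sensing range. The short check below reuses the fov_footprint sketch from above; the altitude and the two field-of-view values are arbitrary assumed numbers.

```python
import numpy as np
from shapely.geometry import Polygon

pos = (0.0, 0.0, 100.0)      # assumed: sensor 100 m above flat ground
att = (0.0, 0.0, 0.0)        # assumed: nadir-pointing vision sensor
full_range = Polygon(fov_footprint(pos, att, np.radians(84), np.radians(62)))
core_range = Polygon(fov_footprint(pos, att, np.radians(40), np.radians(30)))
print(full_range.contains(core_range))   # True: the core range lies inside the sensing range
```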
Optionally, when determining the core sensing region, the processor is specifically configured to: acquiring an image acquired when the first sensor is at the passing position; determining an image quality of the image; determining the core sensing region according to the image quality.
Optionally, the image quality comprises a resolution, the resolution in the image corresponding to the core sensing region being higher than the resolution in the image corresponding to other regions than the core sensing region.
Optionally, the image quality comprises a degree of distortion, the degree of distortion in the image corresponding to the core sensing region being lower than the degree of distortion in the image corresponding to other regions than the core sensing region.
Optionally, the image quality comprises a contrast, the contrast in the image corresponding to the core sensing region being less than the contrast in the image corresponding to other regions than the core sensing region.
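Of the three image-quality criteria listed above, the distortion criterion lends itself to a compact sketch. The code below is an assumption-heavy illustration only: it presumes a Brown-Conrady radial-distortion model with known coefficients (the disclosure names no specific model), and it takes the pixels whose distortion displacement stays below a threshold as the core sensing region in image coordinates; resolution or contrast could be handled analogously.

```python
import numpy as np

def core_region_mask(width, height, fx, fy, cx, cy, k1, k2, max_shift_px=1.0):
    """Per-pixel radial distortion displacement under a Brown-Conrady model
    with coefficients k1, k2; pixels displaced by less than max_shift_px
    form the core sensing region in image coordinates."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    x = (u - cx) / fx                       # normalised image coordinates
    y = (v - cy) / fy
    r2 = x * x + y * y
    scale = k1 * r2 + k2 * r2 * r2          # fractional radial displacement
    shift_px = np.abs(scale) * np.sqrt((x * fx) ** 2 + (y * fy) ** 2)
    return shift_px < max_shift_px
```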
Optionally, the processor is further configured to: controlling the display device to display second identification information representing the core sensing area in the first sensing area; wherein the first identification information is different from the second identification information.
Optionally, the movable platform further includes a second sensor for acquiring second environmental information, and the sensing range of the second sensor is smaller than the sensing range of the first sensor; when the movable platform is at the passing position and the second sensor is operating, the processor is further configured to: determining, based on the passing position, a sensing range of the second sensor when the movable platform is at the passing position; determining a second sensed area in the work area according to the passing position and the sensing range of the second sensor; and controlling the display device to display third identification information identifying the second sensed area in the work area.
Optionally, the second sensed region is within the range of the first sensed region.
Optionally, the visual features of the third identification information are different from the visual features of the first identification information.
Optionally, the second sensor is a vision sensor; in determining the sensing range of the second sensor when the movable platform is at the passing position, the processor is specifically configured to: acquiring a third field of view of the second sensor; acquiring the attitude of the second sensor when the movable platform is at the passing position; and determining the sensing range of the second sensor based on the passing position, the third field of view, and the attitude of the second sensor when the movable platform is at the passing position.
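Once the second sensor's sensing range has been projected with the same field-of-view-and-attitude procedure, relating it to the first sensed area is plain polygon geometry. A sketch under the same shapely assumption as above; the function name and dictionary keys are illustrative only:

```python
from shapely.geometry import Polygon

def locate_second_sensed_area(first_sensed_area, second_footprint_xy):
    """Relate the (smaller) sensed area of the second sensor to the sensed
    area of the first sensor, so the display can show which part of the
    first area the second sensor has covered."""
    second_area = Polygon(second_footprint_xy)
    return {
        "second_area": second_area,
        "inside_first": first_sensed_area.contains(second_area),
        "covered_fraction": second_area.intersection(first_sensed_area).area / second_area.area,
    }
```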
Embodiments of the present disclosure also provide an apparatus for processing movable platform data, where the movable platform is provided with a first sensor for sensing first environmental information of the environment surrounding the movable platform, and the apparatus comprises one or more processors configured to perform the following steps:
acquiring the position passed by the movable platform in the work area; and
determining, based on the passing position, a sensing range of the first sensor when the movable platform is at the passing position;
determining a first sensed area in the work area according to the sensing range of the first sensor;
controlling a display device to display first identification information for identifying the first sensed area in the work area.
Fig. 13 is a schematic diagram of the hardware structure of a more specific apparatus for processing movable platform data according to an embodiment of the present disclosure. The apparatus may include: a processor 1301, a memory 1302, an input/output interface 1303, a communication interface 1304, and a bus 1305, where the processor 1301, the memory 1302, the input/output interface 1303, and the communication interface 1304 are communicatively connected to one another within the device through the bus 1305.
The processor 1301 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present disclosure.
The Memory 1302 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1302 may store an operating system and other application programs, and when the technical solutions provided in the embodiments of the present specification are implemented by software or firmware, the relevant program codes are stored in the memory 1302 and called by the processor 1301 for execution.
The input/output interface 1303 is used for connecting an input/output module to realize information input and output. The input/output module may be configured as a component within the device (not shown in the figure) or may be external to the device to provide the corresponding functions. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, and the like, and the output devices may include a display, a speaker, a vibrator, an indicator light, and the like.
The communication interface 1304 is used for connecting a communication module (not shown in the figure) to implement communication interaction between the present device and other devices. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, Bluetooth and the like).
Bus 1305 includes a path that transfers information between the various components of the device, such as processor 1301, memory 1302, input/output interface 1303, and communication interface 1304.
It should be noted that although the above-mentioned device only shows the processor 1301, the memory 1302, the input/output interface 1303, the communication interface 1304 and the bus 1305, in a specific implementation process, the device may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above-described apparatus may also include only the components necessary to implement the embodiments of the present disclosure, and need not include all of the components shown in the figures.
Referring to fig. 14, embodiments of the present disclosure also provide a movable platform comprising:
a first sensor 1402 for sensing first environmental information of an environment surrounding the movable platform;
a positioning device 1403 for positioning the movable platform;
one or more processors 1404 for obtaining a position traversed by the movable platform in the work area; and
determining a sensing range of the first sensor when the movable platform is at the passing position from the passing position; determining a first sensed area in the working area according to the sensing range of the first sensor; and
an antenna 1405 for transmitting the control instruction received from the processor to a display device to control the display device to display first identification information for identifying the first sensed region in the work area.
It can be understood that the movable platform of the embodiments of the present disclosure may be a handheld device such as a mobile phone, a PDA, a camera, or a pan-tilt head (gimbal), or may be an unmanned aerial vehicle, an unmanned ship, a mobile robot, or similar equipment. The movable platform may also include a power system 1401 for supplying power for the movement of the movable platform. The first sensor 1402 may include, but is not limited to, various types of sensors such as a vision sensor, an infrared sensor, a radar sensor, a sound sensor, and the like. The positioning device 1403 may position the movable platform by using a Global Positioning System (GPS), a BeiDou Navigation Satellite System (BDS), a Real-Time Kinematic (RTK) positioning system, and the like. The steps performed by the processor 1404 can be found in any of the above method embodiments and are not described here again. The antenna 1405 may be used to enable communication between the movable platform (for example, an unmanned aerial vehicle) and a ground station or terminal device. The display device may be arranged on the ground station or the terminal device, and the terminal device may be a remote controller matched with the movable platform, or an intelligent terminal such as a mobile phone or a computer.
In some embodiments, the movable platform carries a first sensor and a second sensor, and the sensing range of the second sensor is smaller than the sensing range of the first sensor. When the currently operating sensor is switched from the first sensor to the second sensor, the processor 1404 may acquire the sensed area of the first sensor and the sensed area of the second sensor at the switching moment and send both to the display device through the antenna 1405, so that the display device marks the sensed area of the first sensor and the sensed area of the second sensor on the display interface.
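As a rough illustration of this processor-side behaviour (not the actual firmware or transmission interface of the disclosure), the sketch below snapshots both sensed areas at the switching moment and hands them to a transmission callback; the payload layout and the send() callback are assumptions, with send() standing in for whatever link (e.g. the antenna 1405) carries the data.

```python
from shapely.geometry import Polygon, mapping

def on_sensor_switch(first_sensed_area, second_sensing_range_xy, send):
    """At the switching moment, freeze the sensed area of the first sensor and
    the sensed area of the second sensor and push both to the display device."""
    payload = {
        "first_sensed_area": mapping(first_sensed_area),
        "second_sensed_area": mapping(Polygon(second_sensing_range_xy)),
    }
    send(payload)
    return payload
```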
Referring to fig. 15, an embodiment of the present disclosure further provides a terminal device, where the terminal device is communicatively connected to a movable platform, and the movable platform is equipped with a first sensor for sensing first environment information of an environment around the movable platform, where the terminal device includes:
a communication unit 1501 for acquiring a position where the movable platform passes in a working area, and acquiring a sensing range of the first sensor when the movable platform is at the passing position;
one or more processors 1502 for controlling the communication unit to acquire from the movable platform a position traversed by the movable platform in a work area;
determining a sensing range of the first sensor when the movable platform is at the passing position from the passing position;
determining a first sensed area in the working area according to the sensing range of the first sensor; and
a display 1503, configured to receive a control instruction of the processor, and display first identification information according to the control instruction, where the first identification information is used to identify the first sensed region in the work region.
In some embodiments, the movable platform carries a first sensor and a second sensor, and the sensing range of the second sensor is smaller than the sensing range of the first sensor. When the sensor currently operating on the movable platform is switched from the first sensor to the second sensor, the communication unit 1501 is configured to acquire the sensing range of the first sensor, the sensing range of the second sensor, and the position passed by the movable platform at the switching moment. The processor 1502 is configured to determine the sensed area of the first sensor according to the sensing range of the first sensor at the position passed by the movable platform at the switching moment, and to determine the sensed area of the second sensor according to the sensing range of the second sensor at that position. The display 1503 is used to mark the sensed area of the first sensor and the sensed area of the second sensor on the display interface.
In the embodiment of the present disclosure, reference may be made to the foregoing method embodiments for determining the sensing range of the first sensor and the sensing range of the second sensor, and the specific processing manner of the processor 1502, which are not described herein again.
The embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method according to any one of the foregoing embodiments.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
From the above description of the embodiments, it is clear to those skilled in the art that the embodiments of the present disclosure can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the embodiments of the present specification or portions thereof contributing to the prior art may be embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, or the like, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments or some portions of the embodiments of the present specification.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may be in the form of a personal computer, laptop, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
Various technical features in the above embodiments may be arbitrarily combined as long as there is no conflict or contradiction in the combination between the features, but the combination is limited by the space and is not described one by one, and therefore, any combination of various technical features in the above embodiments also belongs to the scope of the present disclosure.
Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (51)

1. A method for processing movable platform data, the movable platform carrying a first sensor for sensing first environmental information of an environment surrounding the movable platform, the method comprising:
acquiring the position of the movable platform passing through the operation area; and
determining a sensing range of the first sensor when the movable platform is at the passing position from the passing position;
determining a first sensed area in the working area according to the sensing range of the first sensor;
controlling a display device to display first identification information for identifying the first sensed region in the work region.
2. The method of claim 1, wherein the passed position comprises a current position of the movable platform; the sensing range of the first sensor when the movable platform is in the passing position includes the first sensing range of the first sensor when the movable platform is in the current position.
3. The method of claim 2, wherein the first sensor is a vision sensor; the determining a sensing range of the first sensor when the movable platform is at the passing position from the passing position comprises:
acquiring a first field angle of the first sensor when the movable platform is at the current position; and
obtaining a first attitude of the first sensor when the movable platform is at the current position;
determining the first sensing range based on the current position, the first field of view, and the first pose.
4. The method of claim 3, wherein the movable platform comprises a first attitude sensor mounted on the first sensor for acquiring an attitude of the first sensor; the first attitude is acquired by the first attitude sensor when the movable platform is at the current position.
5. The method of claim 3, wherein the movable platform comprises a second attitude sensor mounted to a body of the movable platform for acquiring an attitude of the movable platform; the first posture is determined based on posture information of the body of the movable platform acquired by the second posture sensor when the movable platform is at the current position and a posture conversion relation between the body of the movable platform and the vision sensor.
6. The method of claim 1, wherein the passed location comprises a historical location of the movable platform; the sensing range of the first sensor when the movable platform is in the passing position includes a second sensing range of the first sensor when the movable platform is in the historical position.
7. The method of claim 6, wherein the first sensor is a vision sensor; the determining a sensing range of the first sensor when the movable platform is in the passing position from the passing position comprises:
acquiring a second field angle of the first sensor when the movable platform is at the historical position; and
obtaining a second pose of the vision sensor while the movable platform is in the historical position;
determining the second sensing range based on the historical position, the second field of view, and the second pose.
8. The method of claim 7, wherein the movable platform comprises a first attitude sensor mounted on the first sensor for acquiring an attitude of the first sensor; the second attitude is acquired by the first attitude sensor while the movable platform is in a historical position.
9. The method of claim 7, wherein the movable platform comprises a second attitude sensor mounted to a body of the movable platform for acquiring an attitude of the body of the movable platform; the second posture is determined by the posture of the body collected by the second posture sensor when the movable platform is in the historical position and the posture conversion relation between the movable platform and the first sensor.
10. The method according to claim 1, wherein the display device is used for displaying a map of the work area, and the controlling the display device to display first identification information includes:
controlling the display device to mark a boundary of the first sensed area on the map using first identification information; or
controlling the display device to fill the entire first sensed area with first identification information on the map.
11. The method of claim 1, wherein the first identification information comprises a visual characteristic.
12. The method of claim 11, wherein the visual characteristics comprise at least any one of: color, brightness, transparency, fill pattern.
13. The method of claim 1, further comprising:
determining a core sensing region; the core sensing region is within the first sensed region;
controlling the display device to display second identification information, the second identification information being used to identify the core sensing area.
14. The method of claim 13, wherein determining the core sensing region comprises:
determining a core sensing range of the first sensor when the movable platform is at the passing position from the passing position; the core sensing range is within a sensing range of the first sensor;
determining the core sensing region according to the core sensing range.
15. The method of claim 14, wherein the first sensor is a vision sensor; said obtaining a core sensing range of said first sensor when said movable platform is in said passing position, comprising:
acquiring a preset second field angle, wherein the second field angle is within the range of the first field angle of the first sensor; and
acquiring the attitude of the vision sensor when the movable platform is at the passing position;
determining the core sensing range based on the second field of view and a pose of the vision sensor while the movable platform is in the passing position.
16. The method of claim 13, wherein the determining a core sensing region comprises:
acquiring an image acquired when the first sensor is at the passing position;
determining an image quality of the image;
determining the core sensing region according to the image quality.
17. The method of claim 16, wherein the image quality comprises a resolution, and wherein the resolution in the image corresponding to the core sensing region is higher than the resolution in the image corresponding to regions other than the core sensing region.
18. The method of claim 16, wherein the image quality comprises a degree of distortion, the degree of distortion in the image corresponding to the core sensing region being lower than the degree of distortion in the image corresponding to regions other than the core sensing region.
19. The method of claim 16, wherein the image quality comprises a contrast, and wherein the contrast in the image corresponding to the core sensing region is less than the contrast in the image corresponding to regions other than the core sensing region.
20. The method of claim 13, further comprising:
controlling the display device to display second identification information representing the core sensing area in the first sensing area;
wherein the first identification information is different from the second identification information.
21. The method of claim 1, further comprising a second sensor on the movable platform for acquiring second environmental information, the second sensor having a sensing range less than the sensing range of the first sensor; when the movable platform is in the passing position and the second sensor is operational, the method further comprises:
determining a sensing range of the second sensor when the movable platform is at the passing position from the passing position;
determining a second sensed area in the working area according to the sensing range of the second sensor;
controlling the display device to display third identification information representing the second sensed region in the work region.
22. The method of claim 21, wherein the second sensed region is within a range of the first sensed region.
23. The method of claim 21, wherein the visual characteristics of the third identification information are different from the visual characteristics of the first identification information.
24. The method of claim 21, wherein the second sensor is a vision sensor; the determining a sensing range of the second sensor when the movable platform is at the passing position from the passing position comprises:
acquiring a third field angle of the second sensor; and
acquiring the attitude of the second sensor when the movable platform is in the passing position;
determining a sensing range of the second sensor based on the passed position, the third field of view, and a pose of the second sensor when the movable platform is at the passed position.
25. An apparatus for processing movable platform data, the movable platform carrying a first sensor for sensing first environmental information of an environment surrounding the movable platform, the apparatus comprising one or more processors configured to perform the steps of:
acquiring the position of the movable platform passing through the operation area; and
determining a sensing range of the first sensor when the movable platform is at the passing position from the passing position;
determining a first sensed area in the working area according to the sensing range of the first sensor;
controlling a display device to display first identification information for identifying the first sensed region in the work region.
26. The apparatus of claim 25, wherein the passed location comprises a current location of the movable platform; the sensing range of the first sensor when the movable platform is in the passing position includes the first sensing range of the first sensor when the movable platform is in the current position.
27. The apparatus of claim 26, wherein the first sensor is a vision sensor; in determining from the passed position a sensing range of the first sensor when the movable platform is in the passed position, the processor is specifically configured to:
acquiring a first field angle of the first sensor when the movable platform is at the current position; and
obtaining a first attitude of the first sensor when the movable platform is at the current position;
determining the first sensing range based on the current position, the first field of view, and the first pose.
28. The apparatus of claim 27, wherein the movable platform comprises a first attitude sensor mounted on the first sensor for acquiring an attitude of the first sensor; the first attitude is acquired by the first attitude sensor when the movable platform is at the current position.
29. The apparatus of claim 27, wherein the movable platform comprises a second attitude sensor mounted to a body of the movable platform for acquiring an attitude of the movable platform; the first posture is determined based on posture information of the body of the movable platform acquired by the second posture sensor when the movable platform is at the current position and a posture conversion relation between the body of the movable platform and the vision sensor.
30. The apparatus of claim 25, wherein the passed position comprises a historical position of the movable platform; the sensing range of the first sensor when the movable platform is in the passing position includes a second sensing range of the first sensor when the movable platform is in the historical position.
31. The apparatus of claim 30, wherein the first sensor is a vision sensor; in determining from the passed position a sensing range of the first sensor when the movable platform is in the passed position, the processor is specifically configured to:
acquiring a second field angle of the first sensor when the movable platform is at the historical position; and
obtaining a second pose of the vision sensor while the movable platform is in the historical position;
determining the second sensing range based on the historical position, the second field of view, and the second pose.
32. The apparatus of claim 31, wherein the movable platform comprises a first attitude sensor mounted on the first sensor for acquiring an attitude of the first sensor; the second attitude is acquired by the first attitude sensor while the movable platform is in a historical position.
33. The apparatus of claim 31, wherein the movable platform comprises a second attitude sensor mounted to a body of the movable platform for acquiring an attitude of the body of the movable platform; the second posture is determined by the posture of the body collected by the second posture sensor when the movable platform is in the historical position and the posture conversion relation between the movable platform and the first sensor.
34. The apparatus of claim 25, wherein the display device is configured to display a map on the display interface, and when the display device displays the first identification information, the processor is specifically configured to:
controlling the display device to mark a boundary of the first sensed area on the map using first identification information; or
controlling the display device to fill the entire first sensed area with first identification information on the map.
35. The apparatus of claim 25, wherein the first identification information comprises a visual characteristic.
36. The apparatus of claim 35, wherein the visual features comprise at least any one of: color, brightness, transparency, fill pattern.
37. The apparatus of claim 25, wherein the processor is further configured to:
determining a core sensing region; the core sensing region is within the first sensed region;
controlling the display device to display second identification information, the second identification information being used to identify the core sensing area.
38. The apparatus of claim 37, wherein in determining a core sensing region, the processor is specifically configured to:
determining a core sensing range of the first sensor when the movable platform is at the passing position from the passing position; the core sensing range is within a sensing range of the first sensor;
determining the core sensing region according to the core sensing range.
39. The apparatus of claim 38, wherein the first sensor is a vision sensor; in obtaining a core sensing range of the first sensor when the movable platform is in the passing position, the processor is specifically configured to:
acquiring a preset second field angle, wherein the second field angle is within the range of the first field angle of the first sensor; and
acquiring the attitude of the vision sensor when the movable platform is at the passing position;
determining the core sensing range based on the second field of view and a pose of the vision sensor while the movable platform is in the passing position.
40. The apparatus of claim 37, wherein in determining a core sensing region, the processor is specifically configured to:
acquiring an image acquired when the first sensor is at the passing position;
determining an image quality of the image;
determining the core sensing region according to the image quality.
41. The apparatus of claim 40, wherein the image quality comprises a resolution, and wherein a resolution in the image corresponding to the core sensing region is higher than a resolution in the image corresponding to a region other than the core sensing region.
42. The apparatus of claim 40, wherein the image quality comprises a degree of distortion, and wherein the degree of distortion in the image corresponding to the core sensing region is lower than the degree of distortion in the image corresponding to regions other than the core sensing region.
43. The apparatus of claim 40, wherein the image quality comprises a contrast, and wherein a contrast in the image corresponding to the core sensing region is less than a contrast in the image corresponding to a region other than the core sensing region.
44. The apparatus of claim 37, wherein the processor is further configured to:
controlling the display device to display second identification information representing the core sensing region in the first sensing region;
wherein the first identification information is different from the second identification information.
45. The apparatus of claim 25, further comprising a second sensor on the movable platform for acquiring second environmental information, the second sensor having a sensing range less than the sensing range of the first sensor; when the movable platform is in the passing position and the second sensor is operational, the processor is further configured to:
determining a sensing range of the second sensor when the movable platform is at the passing position from the passing position;
determining a second sensed area in the working area according to the passing position and the sensing range of the second sensor;
controlling the display device to display third identification information representing the second sensed area in the work area.
46. The apparatus of claim 45, wherein the second sensed region is within a range of the first sensed region.
47. The apparatus of claim 46, wherein the visual characteristics of the third identification information are different from the visual characteristics of the first identification information.
48. The apparatus of claim 45, wherein the second sensor is a vision sensor; in determining from the passed position a sensing range of the second sensor when the movable platform is in the passed position, the processor is specifically configured to:
acquiring a third field angle of the second sensor; and
acquiring the attitude of the second sensor when the movable platform is in the passing position;
determining a sensing range of the second sensor based on the passed position, the third field of view, and a pose of the second sensor when the movable platform is at the passed position.
49. A movable platform, comprising:
a first sensor for sensing first environmental information of an environment surrounding the movable platform; a positioning device for positioning the movable platform;
one or more processors configured to obtain a passing position of the movable platform in a work area and determine a sensing range of the first sensor when the movable platform is at the passing position based on the passing position; determining a first sensed area in the working area according to the sensing range of the first sensor; and
an antenna for transmitting a control instruction received from the processor to a display device to control the display device to display first identification information for identifying the first sensed area in the working area.
50. A terminal device, wherein the terminal device is communicatively connected to a movable platform, and the movable platform is equipped with a first sensor for sensing first environment information of an environment around the movable platform, the terminal device comprising:
a communication unit for acquiring a position where the movable platform passes in a working area; and acquiring a sensing range of the first sensor when the movable platform is in the passing position;
one or more processors for controlling the communication unit to acquire, from the movable platform, a position where the movable platform passes in a work area;
determining a sensing range of the first sensor when the movable platform is at the passing position from the passing position;
determining a first sensed area in the working area according to the sensing range of the first sensor; and
and the display is used for receiving a control instruction of the processor and displaying first identification information according to the control instruction, wherein the first identification information is used for identifying the first sensed area in the working area.
51. A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 24.
CN202180013840.XA 2021-11-01 2021-11-01 Movable platform, method and device for processing data of movable platform, and terminal equipment Pending CN115066663A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/127965 WO2023070667A1 (en) 2021-11-01 2021-11-01 Movable platform, method and apparatus for processing data of movable platform, and terminal device

Publications (1)

Publication Number Publication Date
CN115066663A true CN115066663A (en) 2022-09-16

Family

ID=83196394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180013840.XA Pending CN115066663A (en) 2021-11-01 2021-11-01 Movable platform, method and device for processing data of movable platform, and terminal equipment

Country Status (2)

Country Link
CN (1) CN115066663A (en)
WO (1) WO2023070667A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106887028A (en) * 2017-01-19 2017-06-23 西安忠林世纪电子科技有限公司 The method and system of aerial photograph overlay area are shown in real time
CN107662707A (en) * 2016-07-28 2018-02-06 深圳航天旭飞科技有限公司 Save medicine unmanned plane
WO2019237413A1 (en) * 2018-06-13 2019-12-19 仲恺农业工程学院 Gis-based unmanned aerial vehicle plant protection system and method
KR102252060B1 (en) * 2019-12-24 2021-05-14 한국항공우주연구원 Method for displaying spatiotemporal information of image taken by drone device based on gps metadata and device thereof
US20210325182A1 (en) * 2019-03-27 2021-10-21 Chengdu Rainpoo Technology Co., Ltd. Aerial survey method and apparatus capable of eliminating redundant aerial photos

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4470926B2 (en) * 2006-08-08 2010-06-02 国際航業株式会社 Aerial photo image data set and its creation and display methods
CN106197377A (en) * 2016-06-30 2016-12-07 西安电子科技大学 A kind of unmanned plane targeted surveillance over the ground and the display system of two dimension three-dimensional linkage
CN109708636B (en) * 2017-10-26 2021-05-14 广州极飞科技股份有限公司 Navigation chart configuration method, obstacle avoidance method and device, terminal and unmanned aerial vehicle
CN110069073A (en) * 2018-11-30 2019-07-30 广州极飞科技有限公司 Job control method, device and plant protection system


Also Published As

Publication number Publication date
WO2023070667A1 (en) 2023-05-04

Similar Documents

Publication Publication Date Title
CN112567201B (en) Distance measuring method and device
US10366511B2 (en) Method and system for image georegistration
US10337865B2 (en) Geodetic surveying system
US20170337743A1 (en) System and method for referencing a displaying device relative to a surveying instrument
US10659753B2 (en) Photogrammetry system and method of operation
US11668577B1 (en) Methods and systems for response vehicle deployment
JP4969053B2 (en) Portable terminal device and display method
US10337863B2 (en) Survey system
JP5086824B2 (en) TRACKING DEVICE AND TRACKING METHOD
US20210156710A1 (en) Map processing method, device, and computer-readable storage medium
CN116817929B (en) Method and system for simultaneously positioning multiple targets on ground plane by unmanned aerial vehicle
CN113906358B (en) Control method, device and system for movable platform
EP3903285B1 (en) Methods and systems for camera 3d pose determination
JP5514062B2 (en) Electronic device, imaging screen display method with information, and program
WO2023070667A1 (en) Movable platform, method and apparatus for processing data of movable platform, and terminal device
Hao et al. Assessment of an effective range of detecting intruder aerial drone using onboard EO-sensor
US20230177781A1 (en) Information processing apparatus, information processing method, and information processing program
WO2021212499A1 (en) Target calibration method, apparatus, and system, and remote control terminal of movable platform
CN109309709B (en) Control method capable of remotely controlling unmanned device
WO2021134715A1 (en) Control method and device, unmanned aerial vehicle and storage medium
JP6880797B2 (en) Position coordinate conversion system, marker creation device, roadside imaging device and position coordinate conversion method
JP2021103410A (en) Mobile body and imaging system
CN109343561B (en) Operation method for displaying and operating unmanned device on electronic device
CN114726996A (en) Method and system for establishing a mapping between a spatial position and an imaging position
CN117968656A (en) Multi-sensor fusion positioning method, self-mobile device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination