WO2023173409A1 - Information Display Method, Model Comparison Method, Device, and Unmanned Aerial Vehicle System - Google Patents
- Publication number
- WO2023173409A1 (PCT/CN2022/081705)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- dimensional
- model
- display
- information
- map
- Prior art date
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/10—Simultaneous control of position or course in three dimensions
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/278—Subtitling
Definitions
- Embodiments of the present invention relate to the technical field of unmanned aerial vehicles, and in particular, to an information display method, a model comparison method, a device and an unmanned aerial vehicle system.
- the shooting results of the drone can be obtained and transmitted to the ground end.
- the user can only view the shooting results through playback or related list modules, which makes the display of the shooting results relatively monotonous.
- Embodiments of the present invention provide an information display method, a model comparison method, a device and a drone system, which can mark and display the drone shooting position and the shooting object position corresponding to the shooting information on a map, thereby improving the quality and effect of displaying drone shooting results.
- the first aspect of the present invention is to provide a method for displaying information collected by a drone, including:
- the drone shooting position and the shooting object position are marked and displayed on the map corresponding to the shooting information.
- the second aspect of the present invention is to provide a method for comparing models obtained using drones, including:
- the at least two three-dimensional models are overlapped and displayed to obtain an overlay display area, and the overlay display area is used to display at least one three-dimensional model;
- the display data in the overlay display area is adjusted to determine a model comparison result between at least two three-dimensional models.
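The overlay comparison in this aspect can be sketched as follows. This is an illustrative sketch only; the height-grid representation, the split-based adjustment, and all names are assumptions, not part of the disclosure. Two three-dimensional models are treated as same-sized height grids, the overlay display area shows one model on each side of an adjustable split, and the per-cell difference serves as a simple model comparison result:

```python
def compare_in_overlay(model_a, model_b, split=0.5):
    """Overlay two same-sized height grids. Columns left of the split show
    model A, columns at or right of it show model B (the adjustable display
    data of the overlay area), and the per-cell height difference over the
    full overlap is returned as a simple comparison result."""
    rows, cols = len(model_a), len(model_a[0])
    boundary = int(cols * split)  # user-adjustable split of the overlay area
    displayed = [[model_a[r][c] if c < boundary else model_b[r][c]
                  for c in range(cols)] for r in range(rows)]
    diff = [[model_b[r][c] - model_a[r][c] for c in range(cols)]
            for r in range(rows)]
    return displayed, diff
```

Moving the split lets the user sweep one model across the other, which is one simple way to realize the "adjust the display data in the overlay display area" step.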
- the third aspect of the present invention is to provide a method for generating a route for controlling a drone, including:
- the waypoint editing information determine at least two spatial waypoints located in the three-dimensional map, where the spatial waypoints include altitude information used to control the drone;
- three-dimensional route information corresponding to the UAV is generated.
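A minimal sketch of generating three-dimensional route information from spatial waypoints (the coordinate convention and function names are hypothetical, not from the disclosure): each waypoint carries altitude, and the route is the ordered waypoint sequence together with its three-dimensional length:

```python
import math

def build_route(waypoints):
    """Each spatial waypoint is an (x, y, altitude) tuple in metres; as in
    the third aspect, at least two waypoints are required. Returns the
    ordered 3-D route and its total length, including altitude changes."""
    if len(waypoints) < 2:
        raise ValueError("at least two spatial waypoints are required")
    length = 0.0
    for p, q in zip(waypoints, waypoints[1:]):
        length += math.dist(p, q)  # Euclidean distance in 3-D
    return list(waypoints), length
```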
- the fourth aspect of the present invention is to provide a method for displaying a model obtained by a drone, including:
- the three-dimensional model and the three-dimensional map are combined and displayed.
- the fifth aspect of the present invention is to provide a display device for information collected by a drone, including:
- a memory, configured to store a computer program;
- a processor configured to run a computer program stored in the memory to:
- the drone shooting position and the shooting object position are marked and displayed on the map corresponding to the shooting information.
- the sixth aspect of the present invention is to provide a comparison device for models obtained using drones, including:
- a memory, configured to store a computer program;
- a processor configured to run a computer program stored in the memory to:
- the at least two three-dimensional models are overlapped and displayed to obtain an overlay display area, and the overlay display area is used to display at least one three-dimensional model;
- the display data in the overlay display area is adjusted to determine a model comparison result between at least two three-dimensional models.
- the seventh aspect of the present invention is to provide a route generation device for controlling a drone, including:
- a memory, configured to store a computer program;
- a processor configured to run a computer program stored in the memory to:
- the waypoint editing information determine at least two spatial waypoints located in the three-dimensional map, where the spatial waypoints include altitude information used to control the drone;
- three-dimensional route information corresponding to the UAV is generated.
- An eighth aspect of the present invention is to provide a display device for a model obtained using a drone, including:
- a memory, configured to store a computer program;
- a processor configured to run a computer program stored in the memory to:
- the three-dimensional model and the three-dimensional map are combined and displayed.
- a ninth aspect of the present invention is to provide a computer-readable storage medium.
- Program instructions are stored in the computer-readable storage medium, and the program instructions are used to implement the method for displaying information collected by a drone described in the first aspect.
- a tenth aspect of the present invention is to provide a computer-readable storage medium.
- Program instructions are stored in the computer-readable storage medium, and the program instructions are used to implement the comparison method for models obtained using UAVs described in the second aspect.
- An eleventh aspect of the present invention is to provide a computer-readable storage medium.
- Program instructions are stored in the computer-readable storage medium, and the program instructions are used to implement the method for generating routes for controlling drones described in the third aspect.
- a twelfth aspect of the present invention is to provide a computer-readable storage medium.
- Program instructions are stored in the computer-readable storage medium, and the program instructions are used to implement the method for displaying a model obtained by a drone described in the fourth aspect.
- a thirteenth aspect of the present invention is to provide an unmanned aerial vehicle system, including:
- the display device for information collected by a drone described in the fifth aspect is used to control the drone through a cloud platform.
- a fourteenth aspect of the present invention is to provide an unmanned aerial vehicle system, including:
- the device for comparing models obtained using drones described in the sixth aspect is used to control the drones through a cloud platform.
- a fifteenth aspect of the present invention is to provide an unmanned aerial vehicle system, including:
- the device for generating a route for controlling a drone described in the seventh aspect is used to control the drone through a cloud platform.
- a sixteenth aspect of the present invention is to provide an unmanned aerial vehicle system, including:
- the display device for a model obtained by using an unmanned aerial vehicle as described in the eighth aspect is used to control the unmanned aerial vehicle through a cloud platform.
- the technical solution provided by the embodiments of the present invention obtains the shooting information of the drone and determines the drone shooting position corresponding to the shooting information.
- the position of the shooting object corresponding to the shooting information is determined.
- on the map corresponding to the shooting information, the drone shooting position and the shooting object position are marked and displayed, which effectively realizes that both positions corresponding to the shooting information can be displayed on the map. This allows users to quickly and intuitively obtain relevant information about the shooting information through the map, which effectively improves the quality and effect of displaying drone shooting results, further improves the practicability of this method, and is conducive to market promotion and application.
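The marking logic above can be sketched as follows. This is an illustrative Python sketch; the `ShotRecord` structure, marker dictionaries, and icon names are hypothetical, not from the disclosure. Each piece of shooting information carries a drone shooting position and, when a sensing device supplied one, a shooting object position, and the two are given distinct marker styles:

```python
from dataclasses import dataclass
from typing import Optional, Tuple, List

@dataclass
class ShotRecord:
    """One piece of shooting information and its two associated positions."""
    name: str
    drone_pos: Tuple[float, float]                    # (lat, lon) of the drone when shooting
    object_pos: Optional[Tuple[float, float]] = None  # (lat, lon) of the photographed object, if sensed

def build_markers(record: ShotRecord) -> List[dict]:
    """Produce map markers with distinct styles so the drone shooting
    position and the shooting object position remain distinguishable."""
    markers = [{"kind": "drone", "pos": record.drone_pos, "icon": "triangle"}]
    # The object position exists only when a sensing device supplied it.
    if record.object_pos is not None:
        markers.append({"kind": "object", "pos": record.object_pos, "icon": "bubble"})
    return markers
```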
- Figure 1 is a schematic scene diagram of a method for displaying information collected by a drone provided by an embodiment of the present invention
- Figure 2 is a schematic flow chart of a method for displaying information collected by a drone provided by an embodiment of the present invention
- Figure 3 is a schematic diagram of the drone shooting position and the shooting object position provided by the embodiment of the present invention.
- Figure 4 is a schematic diagram 1 of marking and displaying the drone shooting position and the shooting object position provided by an embodiment of the present invention
- Figure 5 is a schematic diagram 2 of marking and displaying the drone shooting position and the shooting object position provided by an embodiment of the present invention
- Figure 6 is a schematic flow chart of another method for displaying information collected by a drone provided by an embodiment of the present invention.
- Figure 7 is a schematic diagram of automatically loading the panorama into the three-dimensional map for mark display according to an embodiment of the present invention.
- Figure 8 is a schematic diagram 1 of displaying the panorama based on the display perspective provided by an embodiment of the present invention.
- Figure 9 is a schematic diagram 2 of displaying the panorama based on the display perspective provided by an embodiment of the present invention.
- Figure 10 is a schematic flow chart of another method for displaying information collected by a drone provided by an embodiment of the present invention.
- Figure 11 is a schematic diagram of playing the video information provided by an embodiment of the present invention.
- Figure 12 is a schematic diagram of displaying the current shooting position corresponding to the video frame being played on the map provided by an embodiment of the present invention
- Figure 13 is a schematic flow chart of another method for displaying information collected by a drone provided by an embodiment of the present invention.
- Figure 13a is a schematic diagram 1 of displaying a three-dimensional model provided by an embodiment of the present invention.
- Figure 13b is a schematic diagram 2 of displaying a three-dimensional model provided by an embodiment of the present invention.
- Figure 13c is a schematic diagram 3 of displaying a three-dimensional model provided by an embodiment of the present invention.
- Figure 14a is a schematic diagram of an overlay display area provided by an embodiment of the present invention.
- Figure 14b is a schematic diagram of displaying at least two three-dimensional models that need to be compared according to an embodiment of the present invention
- Figure 15 is a schematic flow chart of adjusting the display data in the overlay display area in response to a display adjustment operation input by the user for the overlay display area provided by an embodiment of the present invention
- Figure 16 is a schematic diagram of adjusting display data in the overlay display area provided by an embodiment of the present invention.
- Figure 17 is a schematic flow chart of another method for displaying information collected by a drone provided by an embodiment of the present invention.
- Figure 18 is a schematic diagram of displaying three-dimensional route information provided by an embodiment of the present invention.
- Figure 19 is a schematic diagram illustrating differentiated display of the actual flight route and the three-dimensional route information provided by an embodiment of the present invention.
- Figure 20 is a schematic flow chart of another method for displaying information collected by a drone provided by an embodiment of the present invention.
- Figure 21 is a schematic diagram of the combined display of the three-dimensional model and the three-dimensional map provided by an embodiment of the present invention.
- Figure 22 is a schematic flow chart of a method for comparing models obtained by using drones according to an embodiment of the present invention.
- Figure 23 is a schematic flowchart of a method for generating a route for controlling a drone provided by an embodiment of the present invention.
- Figure 24 is a schematic flowchart of a method for displaying a model obtained using a drone provided by an embodiment of the present invention.
- Figure 25 is a schematic structural diagram of a display device for information collected by a drone provided by an embodiment of the present invention.
- Figure 26 is a schematic structural diagram of a comparison device for models obtained using drones provided by an embodiment of the present invention.
- Figure 27 is a schematic structural diagram of a route generation device for controlling a drone provided by an embodiment of the present invention.
- Figure 28 is a schematic structural diagram of a display device for a model obtained by a drone provided by an embodiment of the present invention.
- Figure 29 is a schematic structural diagram 1 of an unmanned aerial vehicle system provided by an embodiment of the present invention.
- Figure 30 is a schematic structural diagram 2 of an unmanned aerial vehicle system provided by an embodiment of the present invention.
- Figure 31 is a schematic structural diagram 3 of an unmanned aerial vehicle system provided by an embodiment of the present invention.
- Figure 32 is a schematic structural diagram 4 of an unmanned aerial vehicle system provided by an embodiment of the present invention.
- the shooting results of the drone can be obtained, and the shooting results can be transmitted to the ground.
- users can directly view the shooting results through playback or related list modules.
- special shooting results such as panoramas and point clouds are rarely displayed on the map, which makes the display of the shooting results relatively monotonous.
- the route drawing operation is generally performed on a two-dimensional map, and a route drawn on a two-dimensional map can only represent the planar information of the drone's flight; it cannot represent spatial (altitude) information.
- the three-dimensional model corresponding to the shooting results has the following shortcomings: (1) there are few application scenarios that combine the display of the three-dimensional model with terrain; (2) when calling multiple 3D models, there is currently no mature and efficient interaction solution; (3) when viewing multiple 3D models, there are currently no efficient multi-dimensional viewing operations; (4) there is no ideal interaction solution when comparing multiple 3D models.
- this embodiment provides an information display method, a model comparison method, a device and an unmanned aerial vehicle system.
- the execution subject of the method for displaying information collected by an unmanned aerial vehicle is the display device for the information collected by the drone.
- the display device can communicate with the drone through the cloud platform (cloud network, cloud server, etc.), specifically:
- the UAVs can carry out flight operations based on preset routes and perform corresponding task operations.
- the image acquisition device installed on the UAV can be used to collect shooting information, so as to obtain the shooting information.
- the image collection device can be a camera, a video camera, other equipment with image shooting functions, etc.
- the obtained shooting information can include at least one of the following: image information, panorama, video information, point cloud information.
- the shooting information can be sent to the information display device through the cloud platform, so that the information display device can display the shooting information.
- the display device of the information collected by the drone is connected to the drone through the cloud platform and is used to obtain the drone's shooting information through the cloud platform. After the shooting information is obtained, it can be analyzed and processed to determine the drone shooting position corresponding to the shooting information.
- the UAV may optionally be equipped with a sensing device (for example, lidar) for determining the position of the photographed object corresponding to the photographed information.
- the UAV is equipped with a sensing device
- the position of the shooting object corresponding to the shooting information can be obtained through the sensing device; when the sensing device is not configured on the UAV, the position of the shooting object corresponding to the shooting information cannot be obtained.
- the map corresponding to the shooting information can be obtained.
- the map can be a two-dimensional map, a three-dimensional map, etc. The drone shooting position and the shooting object position are then marked and displayed on the map corresponding to the shooting information, effectively realizing the flexible display of the relevant information of the shooting information on the map, so that the user can intuitively and quickly understand the shooting information through the map, further improving the quality and effect of displaying drone shooting results.
- Figure 2 is a schematic flow chart of a method for displaying information collected by a drone provided by an embodiment of the present invention; with reference to Figure 2, this embodiment provides a method for displaying information collected by a drone.
- the execution subject of the information display method can be an information display device. The information display device can be implemented as software, or as a combination of software and hardware. When the information display device is implemented as hardware, it can specifically be a display device of the cloud platform, or an electronic device that communicates with the drone through the cloud platform, cloud network, or cloud server.
- the electronic device can be implemented as a handheld terminal, a personal computer (PC), a tablet computer, or a web platform.
- the display device of this information can also be a terminal device directly connected to the drone.
- the information display device is implemented as software, it can be installed in the electronic device exemplified above.
- the method for displaying information collected by drones in this embodiment may include:
- Step S201 Obtain the shooting information of the drone.
- Step S202 Determine the drone shooting position corresponding to the shooting information.
- Step S203 When there is a shooting object position corresponding to the shooting information, the drone shooting position and the shooting object position are marked and displayed on the map corresponding to the shooting information.
- Step S201 Obtain the shooting information of the drone.
- the UAV can be equipped with an image acquisition device for obtaining shooting information.
- the image acquisition device can be a camera, a video camera, other equipment with image shooting functions, etc.
- the image acquisition device can obtain the shooting information of the drone, and the shooting information may include at least one of the following: image information, panorama, video information, and point cloud information.
- the shooting information can be actively or passively sent to the information display device, so that the information display device can stably obtain the shooting information of the drone. At this time, the shooting information of the drone is obtained through the cloud platform and the drone.
- this embodiment also provides another way to obtain the shooting information of the drone.
- the shooting information of the drone can also be historical information collected in advance and stored in a preset area.
- the information display device can obtain the drone shooting information by accessing the preset area.
- Step S202 Determine the drone shooting position corresponding to the shooting information.
- the shooting information can be analyzed and processed to determine the drone shooting position corresponding to the shooting information.
- a positioning device for positioning the UAV can be configured on the drone.
- the UAV shooting position corresponding to the shooting information can be obtained through the positioning device.
- the information display device can obtain the positioning data of the positioning device on the drone through the cloud platform. By analyzing and processing the positioning data, the drone shooting position corresponding to the shooting information can be determined. In this case, the execution subject that determines the drone shooting position is the information display device.
- alternatively, the positioning data can be obtained through the positioning device on the drone, and the drone itself can analyze and process the positioning data to determine the drone shooting position corresponding to the shooting information. The drone can then send the obtained drone shooting position to the information display device through the cloud platform, so that the information display device can determine the drone shooting position corresponding to the shooting information. In this case, the execution subject that determines the drone shooting position is the drone.
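Whichever side performs it, the analysis of positioning data reduces to the same computation: matching the moment of shooting against the drone's positioning log. A sketch under the assumption (not stated in the disclosure) that positioning samples are timestamped `(timestamp, lat, lon, alt)` tuples:

```python
def shooting_position(shot_time, positioning_log):
    """positioning_log: list of (timestamp, lat, lon, alt) samples from the
    drone's positioning device. The drone shooting position is taken as the
    sample closest in time to the moment of shooting; the drone and the
    information display device would compute the same result from the
    same log."""
    ts, lat, lon, alt = min(positioning_log,
                            key=lambda sample: abs(sample[0] - shot_time))
    return (lat, lon, alt)
```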
- Step S203 When there is a shooting object position corresponding to the shooting information, the drone shooting position and the shooting object position are marked and displayed on the map corresponding to the shooting information.
- the shooting information can correspond to the drone shooting position and the shooting object position, where the drone shooting position can refer to the position of the drone when the shooting information is obtained, and the shooting object position refers to The location of the subject included in the shooting information.
- for example, the orchard at position B can be photographed through the image acquisition device on a drone located at position A.
- the image acquisition device can obtain the shooting information.
- the obtained shooting information corresponds to the drone shooting position (i.e., position A) and the shooting object position (i.e., position B).
- the drone shooting position and the shooting object position are two completely different positions.
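To illustrate why positions A and B differ, a rough flat-ground estimate of the object position from the drone's state can be sketched. This geometric approximation is purely illustrative and is an assumption of this sketch; the disclosure determines the object position with a sensing device such as lidar, not with this formula:

```python
import math

def estimate_object_position(drone_xy, altitude_m, heading_deg, pitch_down_deg):
    """Flat-ground estimate: the camera looks pitch_down_deg below the
    horizon, so its line of sight reaches the ground at a horizontal range
    of altitude / tan(pitch). Offsetting the drone's local (x, y) along its
    heading gives an approximate object position (B), distinct from the
    drone shooting position (A)."""
    ground_range = altitude_m / math.tan(math.radians(pitch_down_deg))
    dx = ground_range * math.sin(math.radians(heading_deg))  # east offset
    dy = ground_range * math.cos(math.radians(heading_deg))  # north offset
    return (drone_xy[0] + dx, drone_xy[1] + dy)
```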
- the drone can optionally be equipped with a sensing device (such as lidar) for determining the position of the photographed object corresponding to the shooting information.
- when a sensing device is installed on the drone, the location of the photographed object corresponding to the shooting information can be obtained through the sensing device; when no sensing device is configured on the drone, the position of the photographed object corresponding to the shooting information cannot be obtained.
- when the photographic object position corresponding to the photographing device can be obtained, the obtained position can be stored in a preset area, or sent to the information display device.
- the information display device can store the obtained subject position, after which the stored subject position can be viewed.
- when the position of the shooting object corresponding to the shooting information can be viewed, it means that a shooting object position corresponding to the shooting information exists; when it cannot be viewed, it means that no shooting object position corresponding to the shooting information exists.
- a map corresponding to the shooting information can be obtained.
- the map can be a two-dimensional map, a three-dimensional map, etc. The drone shooting position and the shooting object position are then marked and displayed on the map corresponding to the shooting information.
- it should be noted that, since the drone shooting position and the shooting object position are two different locations, in order to further highlight the difference between them, different display styles can be used to display the drone shooting position and the shooting object position.
- the method of this embodiment can display the drone shooting results in a spatial map.
- the images captured by the drone can be stored in the media library.
- when the user clicks on an image, the detailed information of the image can be viewed.
- the detailed information of the image can be displayed in the middle of the display interface, the location of the photographed object can be displayed on a map on one side of the display interface, and the drone shooting location will also be displayed on the map.
- the relevant information of the image information can be displayed in the map; after entering the map page, the image information loaded onto the map can be viewed.
- the image information on the map can be displayed in the map in the form of bubble thumbnails, and the image name is displayed. After clicking on the image, the image will appear in the selected state (the bubble will become larger and a blue stroke will be added), and an enlarged thumbnail of the image will appear.
- the selected image corresponds to a drone shooting location
- the drone shooting location can be displayed on the three-dimensional map (a blue triangle icon connected to the photo location). When the image corresponds to a drone shooting location but no shooting object location, the photo bubble and the aircraft corner marker can be displayed at the same location on the map.
- the information display method provided by this embodiment obtains the shooting information of the drone and determines the drone shooting position corresponding to the shooting information.
- on the map, the drone shooting position and the shooting object position are marked and displayed, effectively realizing that both positions corresponding to the shooting information can be displayed on the map. The user can quickly and intuitively learn the relevant information of the shooting information through the map, which effectively improves the quality and effect of displaying the drone shooting results, further improves the practicability of the method, and is conducive to market promotion and application.
- Figure 6 is a schematic flow chart of another method for displaying information collected by a drone provided by an embodiment of the present invention. Based on the above embodiment, with reference to Figure 6, when the captured information includes a panorama, in order to display the panorama stably, the method in this embodiment may include:
- Step S301 Obtain the shooting position of the panorama.
- Step S302 Based on the shooting location, determine a three-dimensional map corresponding to the panorama.
- Step S303 Automatically load the panorama into the three-dimensional map for mark display.
- since the panorama is image information obtained through a wide-angle expression method, it can express as much of the surrounding environment as possible. Therefore, when the shooting information includes a panorama, in order to ensure the quality and effect of displaying the panorama, the shooting position of the panorama can be obtained.
- the shooting position of the panorama refers to the position of the drone when the panorama is shot.
- the shooting position of the panorama can be obtained through a positioning operation of the drone's positioning device.
- after obtaining the shooting position (for example, coordinate information) of the panorama, the shooting position can be analyzed and processed to determine a three-dimensional map corresponding to the panorama.
- the determined three-dimensional map can include a map area corresponding to the shooting position.
- the panorama after determining the three-dimensional map corresponding to the panorama, in order to accurately display the panorama in the three-dimensional map, the panorama can be automatically loaded into the three-dimensional map for mark display.
- a thumbnail identification corresponding to the panorama can be added to the map.
- the panorama can also be directly loaded into the 3D map. Similar to how image information is displayed, the panorama can be displayed on the map as a bubble thumbnail, with the panorama name shown. After the panorama is clicked, it can appear selected (the bubble becomes larger and a blue stroke is added), and an enlarged thumbnail of the panorama appears.
- the method in this embodiment may further include:
- Step S401 In the three-dimensional map, obtain the angle adjustment operation input by the user on the panorama.
- Step S402 Determine the display angle of the panorama based on the angle adjustment operation.
- Step S403 Display the panorama based on the display angle.
- the user can adjust the display viewing angle of the panorama according to the application scenario or application requirements.
- the angle adjustment operation input by the user on the panorama can be obtained in the three-dimensional map.
- the angle adjustment operation can be an operation input by the user using the keyboard or mouse, for example: the user holds down the left mouse button (or the right or middle mouse button) and performs a movement operation, or the user adjusts parameters through angles input from the keyboard.
- the display angle of the panorama can be determined based on the angle adjustment operation, and the panorama can then be displayed from that angle, so that the user can adjust the display angle of the panorama arbitrarily according to display requirements, further improving the flexibility and reliability of displaying panoramas.
- the panorama can be displayed on the map in the form of a bubble thumbnail, and the name of the panorama can be displayed.
- the panorama can be displayed in a selected state (the bubble changes to larger and adds a blue stroke), and an enlarged thumbnail of the panorama appears.
- double-clicking the panorama bubble opens the panorama and displays it from a first display perspective.
- clicking Full Screen views the panorama from the first display perspective in full screen.
- the user can click or drag the main panorama screen to view the panorama from different angles.
- displaying the panorama in the three-dimensional map from a second display perspective thereby effectively enables the user to adjust the display angle arbitrarily according to viewing needs, further improving the flexibility and reliability of this method.
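The mouse-drag angle adjustment above can be sketched as a yaw/pitch update. This is an illustration only; the patent gives no formula, so `update_view_angle` and its `sensitivity` parameter are assumptions. Yaw wraps around 360 degrees, and pitch is clamped so the panorama view cannot flip past the poles.

```python
def update_view_angle(yaw, pitch, dx, dy, sensitivity=0.2):
    """Update the panorama display angle from a mouse-drag delta.

    dx/dy are pixel offsets accumulated while the mouse button is
    held down; sensitivity converts pixels to degrees.
    """
    yaw = (yaw + dx * sensitivity) % 360.0            # wrap horizontally
    pitch = max(-90.0, min(90.0, pitch + dy * sensitivity))  # clamp vertically
    return yaw, pitch
```

For example, dragging 100 px right from a yaw of 350 degrees wraps the view around to 10 degrees rather than exceeding 360.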
- Figure 10 is a schematic flow chart of another method for displaying information collected by a drone provided by an embodiment of the present invention. based on the above embodiment, with reference to Figure 10, when the shooting information includes video information , in order to accurately display the video information on the map, the method in this embodiment may also include:
- Step S1001 Obtain the shooting position corresponding to each video frame in the video information.
- Step S1002 When playing video information, display the current shooting location corresponding to the video frame being played on the map.
- when the shooting information includes video information, the video information includes multiple video frames, and the drone shooting positions corresponding to the video frames may differ.
- in order to display the detailed information corresponding to the video information on the map, the shooting position corresponding to each video frame in the video information can be obtained.
- the specific acquisition method and effect of the shooting position corresponding to each video frame in this embodiment are similar to those of step S202 above; please refer to the statements above, which are not repeated here.
- when the video information is played, the current shooting position corresponding to the video frame being played can be displayed on the map.
- it should be noted that the video frame being played changes over the playback period, and the displayed current shooting position changes accordingly with the video frame being played.
- whereas the shooting location corresponding to image information is a single location, the shooting locations corresponding to video information are multiple locations, and the video information can identify and display the shooting process over a period of time along the shooting trajectory; after the video information is loaded onto the map, it can be viewed in conjunction with the three-dimensional map.
- the video information captured by the drone can be stored in the media library and obtained from it; after the video information is clicked, the detailed information included in the video information can be viewed.
- the detailed information can include the position of the video information shown in a two-dimensional map display; when the video information is played, the bubble in the map window moves correspondingly along the flight trajectory as time passes.
- a gray flight trajectory is displayed on the map, with the static video content shown as a bubble thumbnail at its top.
- a white endpoint can be dragged along the gray flight trajectory, and the picture in the bubble changes with the position to reflect the video content shot at the corresponding location.
- an enlarged thumbnail can also be displayed, and the video content is played in the enlarged thumbnail area.
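Mapping the playback time to a position on the flight trajectory can be done by interpolating between logged per-frame positions. The patent does not prescribe a method; the sketch below assumes timestamped (lat, lon) samples and uses simple linear interpolation, with `position_at` a hypothetical name.

```python
from bisect import bisect_right

def position_at(playback_t, timestamps, positions):
    """Interpolate the drone position for the video frame being played.

    timestamps: sorted capture times (seconds) of logged video frames
    positions:  matching (lat, lon) tuples
    Times outside the logged range clamp to the first or last sample.
    """
    if playback_t <= timestamps[0]:
        return positions[0]
    if playback_t >= timestamps[-1]:
        return positions[-1]
    i = bisect_right(timestamps, playback_t)          # first sample after t
    t0, t1 = timestamps[i - 1], timestamps[i]
    f = (playback_t - t0) / (t1 - t0)                 # fraction between samples
    (la0, lo0), (la1, lo1) = positions[i - 1], positions[i]
    return (la0 + f * (la1 - la0), lo0 + f * (lo1 - lo0))
```

As playback advances, repeatedly calling this with the current time yields the moving bubble position described above.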
- the shooting information includes point cloud information.
- the method in this embodiment may also include:
- Step S1101 Obtain a point cloud model corresponding to the point cloud information.
- Step S1102 Determine the model origin corresponding to the point cloud model and the position information corresponding to the model origin.
- Step S1103 Based on the location information, display the point cloud model on the map.
- the point cloud information can be obtained through a point cloud camera. Due to the particularity of point cloud imaging, the point cloud information can be presented in the form of a model; in order to display the point cloud information on the map, a point cloud model corresponding to the point cloud information can be obtained, that is, the point cloud shooting results are presented as a model.
- a modeling algorithm can be used to analyze and process point cloud information, so as to obtain a point cloud model corresponding to the point cloud information; or, a machine learning model for establishing a point cloud model is pre-configured, and after obtaining the point cloud After obtaining the information, the point cloud information can be input into the machine learning model, so that the point cloud model output by the machine learning model can be obtained.
- the obtained point cloud model is the result obtained by scanning with the point cloud camera.
- after obtaining the point cloud model, in order to display it accurately on the map, the point cloud model can be analyzed and processed to determine the model origin corresponding to the point cloud model and the position information corresponding to that origin.
- the model origin corresponding to the point cloud model may be the geometric center or center of gravity of the point cloud model, etc.
- after determining the model origin corresponding to the point cloud model, the model origin can be analyzed and processed to determine its position information, effectively ensuring the accuracy and reliability of determining the model origin and position information.
- after obtaining the position information corresponding to the model origin, the point cloud model can be displayed on the map based on that position information, so that the user can view the point cloud model obtained by the drone through the map.
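The text names the geometric center as one possible model origin. A minimal sketch of that choice follows; `model_origin` is a hypothetical name, and the centroid is only one of the options the patent mentions (center of gravity being another).

```python
def model_origin(points):
    """Geometric center of a point cloud, used as the model origin.

    points: iterable of (x, y, z) tuples in the model's local frame.
    The returned origin can then be georeferenced to place the model
    on the map.
    """
    pts = list(points)
    n = len(pts)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    cz = sum(p[2] for p in pts) / n
    return (cx, cy, cz)
```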
- the number of point cloud models displayed on the map may be one or more, and each point cloud model can be presented on the map in the form of a bubble whose content is the point cloud model.
- clicking the bubble of a point cloud model shows a thumbnail of the point cloud model.
- the preview interface of the point cloud model is then opened, effectively enabling the details of the point cloud model marked on the map to be viewed.
- by determining the model origin corresponding to the point cloud model and the position information corresponding to that origin, and then displaying the point cloud model on the map based on the position information, users can view the detailed information of the point cloud information more intuitively and clearly, further improving the quality and effect of this method.
- Figure 13 is a schematic flowchart of another method for displaying information collected by a drone provided by an embodiment of the present invention; based on any of the above embodiments, with reference to Figure 13, in addition to displaying the shooting results collected by the drone on the map, this embodiment can also perform comparison operations on the models generated from the information collected by the drone. Specifically, the method in this embodiment can also include:
- Step S1301 Obtain a model comparison request corresponding to at least two three-dimensional models.
- the at least two three-dimensional models are generated based on the information collected by the drone.
- Step S1302 Overlay and display at least two three-dimensional models based on the model comparison request to obtain an overlay display area.
- the overlay display area is used to display at least one three-dimensional model.
- Step S1303 In response to a display adjustment operation input by the user for the overlay display area, adjust the display data in the overlay display area to determine a model comparison result between at least two three-dimensional models.
- Step S1301 Obtain a model comparison request corresponding to at least two three-dimensional models.
- the at least two three-dimensional models are generated based on the information collected by the drone.
- the UAV can be equipped with a data collection device, and the types of data collection devices can differ; for example, the data collection device can be an image collection device, a positioning device, etc.
- the collection information corresponding to a collection object can be obtained through the data collection device.
- a three-dimensional model corresponding to the collection object can be established based on the collection information.
- the execution subject that establishes the three-dimensional model can be the drone, a cloud platform communicatively connected to the drone, or an information display device communicatively connected to the cloud platform.
- the method in this embodiment may also include: receiving at least two three-dimensional models sent by the cloud server, where the at least two three-dimensional models are all generated by the cloud server based on the information collected by the drone.
- the collection information corresponding to a collection object can be obtained through the data collection device.
- the collection information can be sent to the cloud server, and the collection information can be obtained in the cloud server.
- the collected information can be analyzed and processed to generate a three-dimensional model corresponding to the collected information, and then the cloud server can store at least two three-dimensional models.
- after the cloud server generates or stores at least two three-dimensional models, it can send them to the information display device, so that the information display device receives the at least two three-dimensional models sent by the cloud server, effectively ensuring the accuracy and reliability of acquiring them.
- the at least two three-dimensional models can be displayed.
- the at least two three-dimensional models obtained can be stored in a preset model library, which is pre-configured to display the at least two three-dimensional models.
- the model list page is displayed, and the obtained at least two three-dimensional models can be displayed in thumbnail form on the model list page.
- the 3D model can be displayed in the upper part of the model preview page, and the thumbnails of different models that can be switched and viewed can be displayed in the lower part.
- the three-dimensional map background can include any pre-configured, supported map background, specifically: a preset background image, a satellite map background, a standard map background, etc. As shown in Figure 13a, the 3D map background of the 3D model is a preset background image, that is, a black striped background; as shown in Figure 13b, it is a satellite map background; as shown in Figure 13c, it is the standard map background.
- the 3D map background of the 3D model can default to a black background image.
- the 3D model can be displayed separately through the black background image.
- the user can switch the background image used to display the 3D model as needed; for example, the three-dimensional map background corresponding to the three-dimensional map can be selected and determined, and the three-dimensional model can be attached directly to the three-dimensional map with the selected background for display.
- the upper part of the model list page is the model display area.
- this model display area can support operations such as model movement, model rotation, and model zoom viewing. Specifically, clicking and dragging with the left mouse button moves the displayed 3D model, Ctrl plus the left mouse button rotates the 3D model, and scrolling the middle mouse wheel enlarges or reduces the 3D model.
- the lower part of the model list page can display thumbnails of multiple other 3D models to be displayed; clicking the thumbnail of a different 3D model in the arrangement switches the model being viewed. At the same time, based on the user's preset application requirements and design requirements, operations such as distributing 3D models, displaying them on the map, downloading them, and deleting them can also be performed.
- one or more three-dimensional models corresponding to a collection object can be generated through the collection information of the drone.
- the time information corresponding to each of the multiple three-dimensional models can be different.
- the user can perform a model comparison operation on the at least two three-dimensional models according to the design requirements.
- a model comparison request corresponding to the at least two three-dimensional models can be obtained.
- the model comparison request may include the identification of the three-dimensional model that needs to be compared.
- the number of three-dimensional models corresponding to one model comparison request may be two or more.
- obtaining a model comparison request corresponding to at least two three-dimensional models may include: obtaining a model comparison operation input by the user for at least two three-dimensional models; in the model comparison interface, the model selection operation input by the user for the three-dimensional models displayed in the interface is determined as the model comparison operation, and a model comparison request corresponding to the at least two three-dimensional models can then be generated based on it.
- alternatively, obtaining a model comparison request corresponding to at least two three-dimensional models may include: a third device communicatively connected to the information display device generates the model comparison request and then actively or passively sends it to the information display device, so that the information display device can stably obtain the model comparison request corresponding to the at least two three-dimensional models.
- Step S1302 Overlay and display at least two three-dimensional models based on the model comparison request to obtain an overlay display area.
- the overlay display area is used to display at least one three-dimensional model.
- the at least two three-dimensional models can be displayed overlapped based on the model comparison request, so that an overlay display area is obtained, where the overlay display area is used to display at least one three-dimensional model.
- the overlay display area is obtained by superimposing the two three-dimensional models.
- the overlay display area can display at least one 3D model.
- in some examples, the overlay display area can display 3D model A or 3D model B; in other examples, the overlay display area can display at least part of 3D model A and at least part of 3D model B.
- Step S1303 In response to a display adjustment operation input by the user for the overlay display area, adjust the display data in the overlay display area to determine a model comparison result between at least two three-dimensional models.
- the user can input a display adjustment operation for the overlay display area, and the display adjustment operation is used to adjust the display data at the top level in the overlay display area.
- the display data in the overlay display area can be adjusted based on the display adjustment operation, which facilitates the user to determine a model comparison result between at least two three-dimensional models.
- the display interface can also be configured with controls for displaying the multiple 3D models in the overlay display area separately.
- multiple 3D models that need to be compared can be displayed simultaneously in a tiled manner, as shown in Figure 14b.
- when the user rotates, enlarges, or reduces any one 3D model, the other 3D models being compared are rotated, enlarged, or reduced simultaneously, which makes it easier for users to observe the model comparison results between the three-dimensional models and further improves the quality and effect of obtaining them.
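The synchronized adjustment of compared models can be sketched by broadcasting one view's camera change to all views. This is an illustration under assumed names (`SyncedViews`, the camera dictionary fields); the patent does not describe an implementation.

```python
class SyncedViews:
    """Keep the cameras of tiled comparison views in lockstep.

    An interaction on any one view is applied to every camera, so
    rotating or zooming one model rotates or zooms all compared
    models identically.
    """
    def __init__(self, n_views):
        self.cameras = [{"yaw": 0.0, "pitch": 0.0, "zoom": 1.0}
                        for _ in range(n_views)]

    def rotate(self, d_yaw, d_pitch):
        # Broadcast the rotation delta to every view's camera.
        for cam in self.cameras:
            cam["yaw"] += d_yaw
            cam["pitch"] += d_pitch

    def zoom(self, factor):
        # Broadcast the zoom factor to every view's camera.
        for cam in self.cameras:
            cam["zoom"] *= factor
```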
- the method in this embodiment may also include: obtaining a three-dimensional map corresponding to any three-dimensional model; combining the overlay display area and the three-dimensional map show.
- At least two three-dimensional models included in the overlay display area can be determined.
- any one of the at least two three-dimensional models can be analyzed and processed to obtain its corresponding three-dimensional map; specifically, the position information corresponding to that three-dimensional model can be obtained, and based on the position information, the corresponding three-dimensional map can be obtained.
- the overlay display area and the three-dimensional map can then be displayed in combination, which not only improves the quality and effect of displaying at least two three-dimensional models in the overlay display area, but also improves the realism and reliability of the three-dimensional model display.
- by obtaining a model comparison request corresponding to at least two three-dimensional models, overlaying and displaying them based on the request to obtain an overlay display area, and adjusting the display data in that area in response to the user's display adjustment operation, users can quickly and intuitively learn project progress information, task execution progress, and so on, further improving the practicality of this method.
- Figure 15 is a schematic flowchart of adjusting the display data in the overlay display area in response to a display adjustment operation input by the user for the overlay display area provided by an embodiment of the present invention; on the basis of the above embodiment, refer to Figure 15 , this embodiment provides an implementation method for adjusting the display data in the overlay display area. Specifically, in this embodiment, in response to the display adjustment operation input by the user for the overlay display area, the display data in the overlay display area is adjusted. Data adjustments can include:
- Step S1501 Obtain the display adjustment operation input by the user for the area adjustment control corresponding to the overlay display area.
- the overlay display area can be configured with a corresponding area adjustment control.
- the area adjustment control can be at least one of the following: an area dividing line, a control used to adjust the display area of each layer in the overlay display area, etc.
- the area adjustment control corresponding to the overlay display area can be displayed at the same time, and then the user inputs a display adjustment operation for the displayed area adjustment control, so that the display input by the user for the area adjustment control can be obtained Adjust operations.
- for different types of area adjustment control, the display adjustment operation obtained may differ.
- when the area adjustment control is a dividing line, the obtained display adjustment operation can be a dragging or moving operation input by the user for the dividing line;
- when the area adjustment control is a control used to adjust the display area of each layer in the overlay display area, the obtained display adjustment operation can be a data input operation, data selection operation, click operation, or configuration operation input by the user for the control.
- the method may also include: obtaining the number of three-dimensional models located in the overlay display area; based on the number of three-dimensional models, determining the area adjustment control corresponding to the overlay display area, the number of area adjustment controls is less than or equal to the number of three-dimensional models, and the area adjustment control is Used to adjust the display area of 3D models in different stacks.
- the number of three-dimensional models located in the overlay display area can be determined based on the model comparison request.
- the number of three-dimensional models can be two or more. Since the overlay display area is used to display at least one 3D model, and the area adjustment control is used to adjust the display areas of the 3D models in different overlays, the area adjustment controls corresponding to the overlay display area are closely related to the number of obtained 3D models. Therefore, after obtaining the number of three-dimensional models, that number can be analyzed and processed to determine the area adjustment controls corresponding to the overlay display area, the number of which is less than or equal to the number of three-dimensional models.
- in one case, the number of area adjustment controls corresponding to the overlay display area is one, and that single area adjustment control is used to adjust the displayed data layer of the three-dimensional models; in this case the number of area adjustment controls is less than the number of 3D models.
- the number of area adjustment controls corresponding to the overlay display area is three. At this time, the number of area adjustment controls is equal to the number of three-dimensional models.
- Step S1502 In response to a display adjustment operation input by the user for the area adjustment control, adjust the display data in the overlay display area.
- adjusting the display data in the overlay display area may include: determining an adjustable area corresponding to the area adjustment control; and, in response to an adjustment operation input by the user for the area adjustment control, adjusting the display data in the overlay display area within the adjustable area.
- an adjustable area corresponds to the area adjustment control; it should be noted that different area adjustment controls can correspond to different adjustable areas. After determining the adjustable area corresponding to the area adjustment control, in response to the user's adjustment operation for the control (a moving operation, dragging operation, etc.), the display data in the overlay display area within the adjustable area can be adjusted based on the adjustment operation.
- for example, the overlay display area is used to display three-dimensional model A and three-dimensional model B.
- when the area adjustment control is a dividing line located at position a, the smaller portion of the overlay display area can display part of the data of model A, and the larger portion can display part of the data of model B.
- the user can drag the area adjustment control from position a to position b as needed.
- when the area adjustment control is at position b, the larger portion of the overlay display area can display part of the data of model A, and the smaller portion can display part of the data of model B.
- in this way, the data displayed in the overlay display area has been adjusted.
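The dividing-line behavior above amounts to splitting the area's width between the two overlaid models. A minimal sketch, with `split_overlay` a hypothetical name and the divider clamped to the area bounds:

```python
def split_overlay(divider_x, area_width):
    """Widths of the model-A and model-B portions of the overlay area.

    Dragging the dividing line left or right changes how much of each
    overlaid model is revealed; the divider is clamped so it cannot
    leave the display area.
    """
    divider_x = max(0.0, min(area_width, divider_x))
    return {"model_a_width": divider_x,
            "model_b_width": area_width - divider_x}
```

Moving the divider from 300 px to 700 px in a 1000 px area, for instance, swaps which model occupies the larger portion.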
- the multiple three-dimensional models can be stored based on the order corresponding to the timeline.
- the timeline corresponding to the multiple three-dimensional models can be displayed.
- the user can switch the three-dimensional model displayed in the display interface through the timeline, and the display interface also shows a "Compare" button for model comparison operations.
- clicking it opens the model comparison page, where the selected 3D model is compared with the most recently generated 3D model.
- the entire display interface can have a left-right structure.
- the selected 3D model is displayed on the left, and the most recently generated 3D model is displayed on the right by default.
- the two 3D models are displayed overlapped, with a dividing line in the middle used to distinguish the two models.
- the user can click and drag the dividing line in the middle to move it left and right, thereby adjusting the displayed size of the left and right models to facilitate comparison of model changes; at the same time, the area where each 3D model is located has a date display, and clicking the date allows 3D models on different dates to be selected, enabling flexible replacement of the 3D models that need to be compared.
- the display data in the overlay display area can be adjusted based on the display adjustment operation, so that users can adjust it at any time according to design and usage needs. This not only helps improve the accuracy and reliability of obtaining model comparison results, but also satisfies different users' flexible requirements for viewing the 3D models undergoing model comparison operations, further improving the practicality of this method.
- Figure 17 is a schematic flowchart of another method for displaying information collected by a drone provided by an embodiment of the present invention; based on any of the above embodiments, as shown in Figure 17, the method in this embodiment can also implement route planning operations in a three-dimensional map. Specifically, the method in this embodiment can also include:
- Step S1701 Obtain the waypoint editing information input by the user in the three-dimensional map.
- Step S1702 Based on the waypoint editing information, determine at least two spatial waypoints located in the three-dimensional map.
- the spatial waypoints include altitude information used to control the drone.
- Step S1703 Generate three-dimensional route information corresponding to the UAV based on at least two spatial waypoints.
- Step S1701 Obtain the waypoint editing information input by the user in the three-dimensional map.
- before the drone operates, in order to accurately control it to complete the corresponding operation, the user needs to perform a route drawing operation first. In order to give the mapped route information more spatial information, especially altitude information, a 3D map can be obtained and displayed.
- the user can use the route operation control to input route editing operations in the three-dimensional map, thereby obtaining the waypoint editing information input by the user in the three-dimensional map.
- the waypoint editing information is used to identify the points that can constitute the three-dimensional route information.
- the waypoint editing information may include horizontal coordinate information, vertical coordinate information, and altitude information in the space where the three-dimensional map is located.
- the waypoint editing information is not only generated by the route editing operation the user inputs through the route operation control; it can also be obtained by analyzing a route file composed of preset editing instructions. That is, the route file can be generated by instruction editing operations based on user needs and design needs, and different needs can generate different route files. After the route file is obtained, a command recognition operation can be performed on it, so that the waypoint editing information the user needs to input in the three-dimensional map can be obtained.
- Step S1702 Based on the waypoint editing information, determine at least two spatial waypoints located in the three-dimensional map.
- the spatial waypoints include altitude information used to control the drone.
- the waypoint editing information can identify the waypoint information that constitutes the three-dimensional route information, that is, different waypoint editing information can identify different waypoint information; therefore, after the waypoint editing information is obtained, it can be analyzed and processed to determine at least two spatial waypoints located in the three-dimensional map, where the spatial waypoints may include altitude information used to control the drone.
- the method in this embodiment may also include: obtaining a waypoint adjustment operation input by the user by clicking any spatial waypoint in the three-dimensional map; and adjusting the spatial waypoint based on the waypoint adjustment operation.
- the user can intuitively view the specific information of the spatial waypoints in the three-dimensional map, and then the user can identify whether the set spatial waypoints meet the preset requirements.
- When the spatial waypoint meets the preset requirements, no adjustment is needed; when the spatial waypoint does not meet the preset requirements, the spatial waypoint needs to be adjusted.
- the user can input a waypoint adjustment operation in the three-dimensional map.
- the waypoint adjustment operation can include the user's horizontal adjustment operation for the horizontal coordinate information of the spatial waypoint, and the user's vertical adjustment operation for the vertical coordinate information of the spatial waypoint.
- After the waypoint adjustment operation is obtained, the spatial waypoints can be adjusted based on it, effectively enabling the user to flexibly adjust the spatial waypoints according to design needs or application needs, and further improving the stability and reliability of determining the spatial waypoints.
- Step S1703 Generate three-dimensional route information corresponding to the UAV based on at least two spatial waypoints.
- Three-dimensional route information corresponding to the UAV can be generated based on the at least two spatial waypoints. Specifically, adjacent spatial waypoints among the at least two spatial waypoints are connected by dashed or solid lines, so that the three-dimensional route information corresponding to the UAV can be generated.
- After the three-dimensional route information corresponding to the UAV is generated, in order to enable the user to intuitively obtain the drawn or configured three-dimensional route information, the three-dimensional route information can be displayed in the three-dimensional map.
- the drawn route can be previewed for the user to view.
- The user can click in the three-dimensional map to add spatial waypoints as needed.
- When the user clicks on one or more locations in the three-dimensional map, spatial waypoints (with altitude information) can be formed, and each set spatial waypoint is then connected to the ground with a dotted line.
- the length of the dotted line can reflect the altitude information corresponding to the space waypoint.
- When the altitude information corresponding to a spatial waypoint does not meet the user's design needs, the altitude information of the spatial waypoint can be changed.
- Specifically, the altitude information of a spatial waypoint can be changed by holding down the ALT key on the keyboard and dragging the spatial waypoint up and down. After the spatial waypoints are obtained, adjacent waypoints can be connected; the resulting connection is the route, and an arrow on the route points in the direction of the drone's flight.
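The waypoint editing flow described above (clicked points with altitude, ALT-drag altitude adjustment, adjacent waypoints joined into a route) could be sketched roughly as follows. This is an illustrative assumption, not the disclosed implementation, and all names are invented:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SpaceWaypoint:
    # Hypothetical structure: horizontal/vertical coordinates in the
    # three-dimensional map plus altitude information for the drone.
    x: float
    y: float
    altitude: float

def build_route(waypoints):
    # Connect adjacent spatial waypoints; each pair is one route segment.
    if len(waypoints) < 2:
        raise ValueError("at least two spatial waypoints are required")
    return [(waypoints[i], waypoints[i + 1]) for i in range(len(waypoints) - 1)]

def adjust_altitude(wp, delta):
    # E.g. in response to an ALT + vertical drag operation.
    return replace(wp, altitude=wp.altitude + delta)
```

A renderer could then draw each segment with an arrow in the flight direction, plus a dotted drop line from each waypoint to the ground whose length reflects the altitude.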
- In this embodiment, the waypoint editing information input by the user in the three-dimensional map is obtained, at least two spatial waypoints located in the three-dimensional map are determined based on the waypoint editing information, and the three-dimensional route information corresponding to the UAV is generated based on the at least two spatial waypoints. This effectively realizes the drawing of three-dimensional route information in the three-dimensional map based on the user's design needs and usage needs. Since the three-dimensional route information contains spatial information, when the drone is controlled based on the three-dimensional route information, the safety and reliability of controlling the drone are effectively improved.
- the method in this embodiment may also include:
- Step S1801 Obtain the actual flight route of the drone.
- Step S1802 In the three-dimensional map, the actual flight route and the three-dimensional route information are displayed separately.
- the UAV can be controlled to fly based on the three-dimensional route information.
- the UAV can fly according to the pre-drawn three-dimensional route information.
- the actual flight trajectory corresponding to the UAV can be obtained through the detection device and/or positioning device provided on the UAV.
- The actual flight route can be determined based on the actual flight waypoints of the UAV obtained through the detection device and/or positioning device.
- the actual flight route and the three-dimensional route information can be displayed separately on the three-dimensional map.
- different colors can be used to differentiate between the actual flight route and the three-dimensional route information.
- For example, blue thin lines can be used to display the actual flight route, and gray thin lines can be used to display the three-dimensional route information.
- Alternatively, different route display methods can be used to differentiate between the actual flight route and the three-dimensional route information. For example, as shown in Figure 19, the actual flight route can be displayed with solid lines and the three-dimensional route information with dotted lines, so that users can more intuitively understand the difference between the actual flight route and the three-dimensional route information.
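The differentiated display described above can be read as a small style table mapping each route kind to a color and line style. The blue/solid actual route follows the text; gray/dotted for the planned route is one reading, not a confirmed detail:

```python
# Hypothetical style table for differentiated route display.
ROUTE_STYLES = {
    "actual_flight_route": {"color": "blue", "line_style": "solid"},
    "planned_3d_route":    {"color": "gray", "line_style": "dotted"},
}

def style_for(route_kind):
    # Look up how a given route kind should be drawn on the 3D map.
    return ROUTE_STYLES[route_kind]
```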
- the method in this embodiment may also include:
- Step S1901 Obtain the execution status corresponding to the three-dimensional route information.
- Step S1902 In the three-dimensional map, differentiate and display the three-dimensional route information in different execution states.
- After the three-dimensional route information corresponding to the UAV is generated, the UAV can be controlled to perform flight operations based on it. It should be noted that during the UAV's flight, the three-dimensional route information can have different execution states. The execution status can include any of the following: completed state, unfinished state. The three-dimensional route information can include route segments that the UAV has completed and/or route segments that the UAV has not yet completed.
- the execution status corresponding to the three-dimensional route information can be obtained.
- Specifically, obtaining the execution status corresponding to the three-dimensional route information can include: obtaining the actual position information corresponding to the UAV, and determining, based on the actual position information, the completed route and the uncompleted route included in the three-dimensional route information. The completed route can be at least part of the three-dimensional route information; when the completed route is the complete three-dimensional route information, the uncompleted route is empty, and when the uncompleted route is the complete three-dimensional route information, the completed route is empty.
- The execution status corresponding to the completed routes is the completed state, and the execution status corresponding to the uncompleted routes is the unfinished state.
- the three-dimensional route information in different execution states can be displayed differently on the three-dimensional map.
- For example, different colors can be used to differentiate the display of the three-dimensional route information in the completed state and the three-dimensional route information in the unfinished state.
- the three-dimensional route information of the completed status can be displayed with a gray thin line
- the three-dimensional route information in the unfinished state can be displayed with a green thin line, so that the user can more intuitively understand the three-dimensional route information in different execution states.
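The split of the route into completed and uncompleted parts based on the UAV's actual position could be sketched as follows. This is a simplification for illustration (boundary at the nearest waypoint); a real implementation would project the position onto the route segments:

```python
import math

def split_route_by_position(waypoints, actual_pos):
    # waypoints: list of (x, y, altitude) tuples along the planned route.
    # actual_pos: the drone's actual (x, y, altitude) position.
    nearest = min(range(len(waypoints)),
                  key=lambda i: math.dist(waypoints[i], actual_pos))
    completed = waypoints[:nearest + 1]    # e.g. rendered as a gray thin line
    uncompleted = waypoints[nearest:]      # e.g. rendered as a green thin line
    return completed, uncompleted
```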
- the actual flight trajectory corresponding to the drone can be obtained.
- During the flight, the planned three-dimensional route information and the actual flight trajectory can be viewed at the same time; after the route is completed, the actual flight trajectory can be displayed as a gray line in the three-dimensional space map.
- Figure 20 is a schematic flow chart of another method for displaying information collected by a drone provided by an embodiment of the present invention; based on any of the above embodiments, as shown in Figure 20, in this embodiment the method can not only display the information collected by the drone, but also display, in combination with the three-dimensional map, the three-dimensional model generated from the information collected by the drone.
- the method in this embodiment can also include:
- Step S2001 Obtain the three-dimensional model to be displayed.
- the three-dimensional model is generated based on the information collected by the drone.
- Step S2002 Based on the collected information, determine a three-dimensional map corresponding to the three-dimensional model.
- Step S2003 Combine and display the three-dimensional model and the three-dimensional map.
- Step S2001 Obtain the three-dimensional model to be displayed.
- the three-dimensional model is generated based on the information collected by the drone.
- the UAV can be equipped with a data collection device.
- the types of data collection devices can be different.
- the data collection device can be an image collection device, a positioning device, etc.
- the collection information corresponding to a collection object can be obtained through the data collection device.
- a three-dimensional model corresponding to the collection information can be established based on the collection information.
- The execution subject of establishing the three-dimensional model can be the drone, a cloud platform communicatively connected with the drone, or an information display device communicatively connected with the cloud platform.
- obtaining the three-dimensional model to be displayed may include: receiving the three-dimensional model sent by the cloud server.
- the three-dimensional model is generated by the cloud server based on the information collected by the drone.
- During specific implementation, the data collection device can obtain the collection information corresponding to a collection object. After the collected information is obtained, it can be sent to the cloud server; the cloud server can then analyze and process the collected information to generate the three-dimensional model to be displayed, store it, and send it to the information display device, so that the information display device can receive the three-dimensional model to be displayed sent by the cloud server. This effectively ensures the accuracy and reliability of acquiring the three-dimensional model to be displayed.
- Step S2002 Based on the collected information, determine a three-dimensional map corresponding to the three-dimensional model.
- Since the three-dimensional model to be displayed is generated based on the information collected by the drone, and different collection information can correspond to different location information, in order to combine and display the three-dimensional map with the three-dimensional model generated from the information collected by the drone,
- the collected information corresponding to the three-dimensional model can be analyzed and processed to determine a three-dimensional map corresponding to the three-dimensional model.
- determining the three-dimensional map corresponding to the three-dimensional model based on the collected information may include: determining the position information corresponding to the three-dimensional model based on the collected information, and then determining the three-dimensional map corresponding to the three-dimensional model based on the position information.
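Under the assumption that each collected item carries position information from the drone's positioning device, the step of determining a matching 3D map could reduce to computing the bounding region to request from the map service; a rough sketch with invented field names:

```python
def map_region_for_model(collected_info):
    # collected_info: hypothetical list of items, each carrying "lat"/"lon"
    # position information attached during collection.
    lats = [item["lat"] for item in collected_info]
    lons = [item["lon"] for item in collected_info]
    return {"min_lat": min(lats), "max_lat": max(lats),
            "min_lon": min(lons), "max_lon": max(lons)}
```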
- Step S2003 Combine and display the three-dimensional model and the three-dimensional map.
- the three-dimensional model and the three-dimensional map can be combined and displayed.
- The combined display of the three-dimensional model and the three-dimensional map may include: among multiple three-dimensional models, determining the target three-dimensional model that needs to be displayed in detail; using the first preset area of the display interface to display the target three-dimensional model and the corresponding three-dimensional map in combination; and using the second preset area of the display interface to display thumbnails of the other three-dimensional models except the target three-dimensional model, wherein the second preset area is smaller than the first preset area.
- the display interface may include a first preset area and a second preset area.
- the display area corresponding to the first preset area is larger than the display area corresponding to the second preset area.
- For example, the first preset area can be the upper half of the display interface, and the second preset area can be the lower half of the display interface.
- The first preset area of the display interface can be used to display the target three-dimensional model and the corresponding three-dimensional map in combination, and at the same time, the second preset area of the display interface can be used to display thumbnails of the other three-dimensional models except the target three-dimensional model, thereby effectively achieving a high-quality combined display of the three-dimensional models and the three-dimensional map.
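The first/second preset areas could be sketched as a simple split of the display interface. The 3:1 ratio is an assumption; the only stated constraint is that the second area is smaller than the first:

```python
def split_display(width, height, first_ratio=0.75):
    # Upper (first) preset area for the target model + 3D map;
    # lower (second, smaller) preset area for thumbnails of other models.
    first = {"x": 0, "y": 0, "w": width, "h": int(height * first_ratio)}
    second = {"x": 0, "y": first["h"], "w": width, "h": height - first["h"]}
    return first, second
```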
- the method in this embodiment can also implement switching operations on the displayed three-dimensional models.
- the method in this embodiment may also include:
- Step S2101 Obtain the model selection operation input by the user for any other three-dimensional model.
- Step S2102 Switch the target three-dimensional model displayed in the first preset area to a three-dimensional model corresponding to the model selection operation.
- The user can input a model selection operation on any of the other three-dimensional models; for example, the user can click on any of the other three-dimensional models with the mouse, so that the model selection operation input by the user can be obtained.
- After the model selection operation is obtained, the target three-dimensional model displayed in the first preset area can be switched to the three-dimensional model corresponding to the model selection operation, thereby effectively realizing the switching of the displayed three-dimensional model and improving the flexibility and reliability of this method.
- the method in this embodiment can not only display the three-dimensional model through different areas of the display interface, but also adjust the display type of the three-dimensional map corresponding to the three-dimensional model.
- The combined display of the three-dimensional model and the three-dimensional map may include: determining the display type of the three-dimensional map, where the display type includes any of the following: a preset background map, a satellite map, a standard map; and, based on the display type of the three-dimensional map, displaying the three-dimensional model and the three-dimensional map in combination.
- Specifically, the 3D map background can include a pre-configured map background, which can specifically include: a preset background image, a satellite map background, a standard map background, etc. As shown in Figure 13a, the 3D map background of the 3D model is a preset background image, that is, a black striped background; as shown in Figure 13b, the 3D map background of the 3D model is a satellite map background; as shown in Figure 13c, the 3D map background of the 3D model is the standard map background. During specific implementation, the 3D map background of the 3D model can default to a black background map, with the model displayed alone.
- the user can switch the background base map used to display the 3D model according to needs.
- The 3D map background corresponding to the 3D model can be called up, and the 3D model can be directly attached to the 3D map for display.
- this embodiment is also able to display multiple three-dimensional models that need to be displayed in a preset order.
- The combined display of the three-dimensional model and the three-dimensional map in this embodiment may include: obtaining reference information for sorting multiple three-dimensional models; determining the display sequence of the multiple three-dimensional models based on the reference information; and, based on the display sequence, sequentially displaying the multiple three-dimensional models and the corresponding three-dimensional maps in combination.
- reference information for sorting the multiple three-dimensional models can be obtained.
- the reference information can include any one of the following: selection order information, time information; the reference information used to sort the multiple three-dimensional models can be obtained based on the user's configuration operation or selection operation.
- After the reference information is obtained, the display sequence of the multiple three-dimensional models can be determined based on it. It should be noted that when the reference information is selection order information, the multiple three-dimensional models are sorted based on the selection order information corresponding to the multiple three-dimensional models that need to be displayed, so that the display sequence of the multiple three-dimensional models can be obtained.
- When the reference information is time information, the multiple three-dimensional models can be sorted based on the time information corresponding to the multiple three-dimensional models that need to be displayed, so that their display sequence can be obtained; after the display sequence is obtained, the multiple three-dimensional models and the corresponding three-dimensional maps can be sequentially displayed in combination based on the display sequence.
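The two sorting modes above could be sketched as a single key-based sort; the field names (`selection_rank`, `created`) are assumptions for illustration:

```python
def display_sequence(models, reference):
    # reference is the reference information: "selection" orders by the
    # user's selection order, "time" orders by the model's creation time.
    if reference == "selection":
        return sorted(models, key=lambda m: m["selection_rank"])
    if reference == "time":
        return sorted(models, key=lambda m: m["created"])  # ISO date strings
    raise ValueError(f"unknown reference information: {reference}")
```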
- This embodiment provides an implementation of determining the display sequence of multiple three-dimensional models based on reference information, which specifically includes: determining the initial sequence of the multiple three-dimensional models based on the reference information; and obtaining the user's adjustment operation on the initial sequence to obtain the display sequence of the multiple three-dimensional models.
- the initial sequences of multiple three-dimensional models can be determined based on the reference information.
- the initial sequence of the multiple three-dimensional models is determined as the display sequence of the multiple three-dimensional models.
- the user can flexibly adjust the initial sequence.
- Specifically, the user can input an adjustment operation on the initial sequence through the display interface, so that the user's adjustment operation on the initial sequence can be obtained; the initial sequence of the multiple three-dimensional models can then be adjusted based on the adjustment operation to obtain the display sequence of the multiple three-dimensional models. This effectively ensures the accuracy and reliability of acquiring the display sequence of multiple three-dimensional models.
- multiple 3D models can be presented in the form of a list or grid view.
- a "Multiple Model Preview" button can be displayed.
- a global pop-up window can appear.
- the global pop-up window is divided into upper and lower content areas: the upper part is a preview of the models arranged in the time dimension, and the lower part displays the selected content;
- The upper part supports multi-selection of models; when a model is selected, it will be displayed in the lower part.
- the display is divided into two methods: “Select Order Sort” and “Time Sort”.
- When “Select Order Sort” is clicked, the multiple 3D models presented below will be sorted by the order in which the models in the upper part were selected.
- When “Time Sort” is clicked, the selected models will be sorted in chronological order;
- The 3D models also support drag-and-drop operations, so that the display order of the 3D models can be adjusted.
- the multiple 3D models will be previewed in a timeline.
- users can enter the model preview page, which is divided into a model preview area at the top and a timeline selection area at the bottom.
- the model display area displays the latest 3D model, and the timeline below selects the model with the latest date.
- When a 3D model in the timeline is clicked, the 3D model in the model preview area will change accordingly, and the timeline entries are presented in the style of model thumbnails.
- multiple 3D models can also be automatically played according to needs, which is similar to the effect of automatic slide play.
- When automatic play is clicked, the model previews will automatically switch in the time dimension, thereby eliminating the need for users to manually switch the display of multiple 3D models, allowing users to clearly view the model information of multiple 3D models and further improving the flexibility and reliability of this method.
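The slide-show style automatic play could be sketched as a generator that yields models in time order with a fixed dwell time; the interval and field name are assumptions:

```python
def autoplay_order(models, interval_s=3.0):
    # Yield (delay_seconds, model) pairs for an auto-play loop that
    # switches the model preview in the time dimension.
    for model in sorted(models, key=lambda m: m["created"]):
        yield interval_s, model
```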
- In this embodiment, the model selection operation input by the user on any other three-dimensional model is obtained, and then the target three-dimensional model displayed in the first preset area is switched to the three-dimensional model corresponding to the model selection operation, thereby effectively realizing the switching display of the displayed target three-dimensional model and further improving the flexibility and reliability of this method.
- the method in this embodiment may also include: obtaining the execution operation input by the user for the three-dimensional model; and moving, rotating, or scaling the three-dimensional model based on the execution operation.
- the user can adjust the display angle of the 3D model to be displayed or the 3D model that has been displayed according to the processing needs.
- The upper part of the model list page is the model display area, which can support operations such as model movement, model rotation, and model zoom viewing. Specifically, clicking and dragging with the left mouse button moves the displayed 3D model, Ctrl + left mouse button rotates the 3D model, and rolling the middle mouse wheel enlarges or reduces the 3D model.
- the lower part of the model list page can display thumbnails of multiple other 3D models to be displayed.
- In this embodiment, the execution operation input by the user on the three-dimensional model is obtained, and the three-dimensional model is then moved, rotated or zoomed based on the execution operation, so that the angle of the displayed three-dimensional model can meet the user's viewing needs and the user can view the three-dimensional model from each display perspective, further improving the practicability of this method.
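The move/rotate/zoom operations could be tracked as a small pose state that the mouse handlers update (left-drag moves, Ctrl + left-drag rotates, wheel zooms); a sketch under those assumptions:

```python
class ModelView:
    # Pose of the displayed 3D model; mouse events map onto these methods.
    def __init__(self):
        self.offset = [0.0, 0.0]   # updated by left-button drag
        self.rotation_deg = 0.0    # updated by Ctrl + left-button drag
        self.scale = 1.0           # updated by the middle mouse wheel

    def move(self, dx, dy):
        self.offset[0] += dx
        self.offset[1] += dy

    def rotate(self, degrees):
        self.rotation_deg = (self.rotation_deg + degrees) % 360

    def zoom(self, wheel_steps, factor=1.1):
        # Positive steps enlarge the model, negative steps shrink it.
        self.scale *= factor ** wheel_steps
```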
- the method in this embodiment may also include: in response to a model processing request for the three-dimensional model, performing processing operations on the three-dimensional model and the three-dimensional map.
- the processing operations include at least one of the following: a distribution operation, a download operation, and a deletion operation.
- the user can perform corresponding processing operations on the 3D model to be displayed or the 3D model that has been displayed according to the processing requirements.
- When the model processing request is a model distribution request, the user can input a model distribution request for the 3D model; after the model distribution request is obtained, model distribution operations can be performed on the 3D model and the 3D map based on it.
- When the model processing request is a model download request, the user can input a model download request for the 3D model; after the model download request is obtained, model download operations can be performed on the 3D model and the 3D map based on it.
- When the model processing request is a model deletion request, the user can input a model deletion request for the 3D model; after the model deletion request is obtained, model deletion operations can be performed on the 3D model and the 3D map based on it.
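The three processing operations could be dispatched as follows. `ModelStore` is a minimal in-memory stand-in for the cloud server's storage, purely for illustration; none of these names come from the disclosure:

```python
class ModelStore:
    # Minimal stand-in for the storage holding 3D models (and their maps).
    def __init__(self, models):
        self.models = dict(models)

    def distribute(self, model_id):
        return ("distributed", model_id)

    def download(self, model_id):
        return ("downloaded", self.models[model_id])

    def delete(self, model_id):
        return ("deleted", self.models.pop(model_id))

def handle_model_request(request, store, model_id):
    # Route a model processing request to the matching operation.
    handlers = {"distribute": store.distribute,
                "download": store.download,
                "delete": store.delete}
    if request not in handlers:
        raise ValueError(f"unknown model processing request: {request}")
    return handlers[request](model_id)
```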
- The methods provided by the above embodiments achieve the following functions: (1) Displaying the three-dimensional model generated from the information collected by the drone. Specifically, the three-dimensional model has elevation information and can be displayed on a map with 3D display capability, thus realizing the combined display of the 3D model and the 3D map; the result can then be displayed on a web page or a ground station, and operations such as moving, rotating, and scaling of the displayed 3D model are supported. (2) Helping to understand the degree of change of the actual object or the actual environment through the three-dimensional model. Specifically, since the use of a three-dimensional model is often not isolated, the changes in the three-dimensional model can be viewed in the time dimension, so that the user can gain a clearer, more intuitive understanding of how physical objects in the real world change.
- In addition, the drone's shooting results can be displayed in a three-dimensional space map, so that users can view the shooting results directly through the three-dimensional space map, which improves the authenticity of displaying the shooting results. Before, during and after the UAV flies the route, the process trajectory can be displayed on the three-dimensional map, making it easier for users to understand the flight status of the UAV in a timely manner and further improving the practicality of this method.
- Figure 22 is a schematic flow chart of a method for comparing models obtained by using drones according to an embodiment of the present invention; with reference to Figure 22, this embodiment provides a method for comparing models obtained by using drones.
- the comparison method of the model can be executed by a comparison device of the model.
- The comparison device of the model can be implemented as software or a combination of software and hardware; when the comparison device of the model is implemented as hardware, it can specifically be an electronic device that is communicatively connected with the drone through a cloud platform, cloud network, or cloud server.
- the electronic device can be implemented as a handheld terminal, a personal computer (PC), etc.
- the model comparison device is implemented as software, it can be installed in the electronic device exemplified above.
- the comparison method of models obtained using drones in this embodiment may include:
- Step S2201 Obtain a model comparison request corresponding to at least two three-dimensional models.
- the at least two three-dimensional models are generated based on the information collected by the drone.
- Step S2202 Overlay and display at least two three-dimensional models based on the model comparison request to obtain an overlay display area.
- the overlay display area is used to display at least one three-dimensional model.
- Step S2203 In response to a display adjustment operation input by the user for the overlay display area, adjust the display data in the overlay display area to determine a model comparison result between at least two three-dimensional models.
- adjusting the display data in the overlay display area may include: obtaining a display adjustment operation input by the user for an area adjustment control corresponding to the overlay display area; Display data in the overlay display area is adjusted in response to a display adjustment operation input by the user for the area adjustment control.
- the method in this embodiment may further include: obtaining the number of three-dimensional models located in the overlay display area; based on the three-dimensional model The quantity determines the area adjustment controls corresponding to the overlay display area.
- the number of area adjustment controls is less than or equal to the number of three-dimensional models, and the area adjustment controls are used to adjust the display areas of three-dimensional models in different overlays.
- adjusting the display data in the overlay display area may include: determining an adjustable area corresponding to the area adjustment control; in response to the user input for the area adjustment control The adjustment operation is to adjust the display data in the overlay display area within the adjustable area.
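The area adjustment controls could be modelled as slider positions that partition the overlay display area, each clamped to its adjustable range. A sketch, assuming a one-dimensional horizontal split (the disclosure does not specify the geometry):

```python
def clamp_to_adjustable(value, lo, hi):
    # Keep an area-adjustment control inside its adjustable area, so one
    # control's boundary cannot cross a neighbouring control.
    return max(lo, min(value, hi))

def overlay_regions(width, slider_positions):
    # n-1 sliders over n overlaid models -> each model's visible span.
    bounds = [0] + sorted(slider_positions) + [width]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]
```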
- the method in this embodiment may further include: obtaining a three-dimensional map corresponding to any three-dimensional model; and displaying the overlay display area and the three-dimensional map in combination.
- the method in this embodiment may further include: receiving at least two three-dimensional models sent by the cloud server, and the at least two three-dimensional models are both cloud The server generates it based on the information collected by the drone.
- Figure 23 is a schematic flow chart of a method for generating a route for controlling a drone provided by an embodiment of the present invention; with reference to Figure 23, this embodiment provides a method for generating a route for controlling a drone. The execution subject of the route generation method can be a route generation device, which can be implemented as software or a combination of software and hardware; when the route generation device is implemented as hardware, it can specifically be an electronic device that communicates with the drone through a cloud platform, cloud network, or cloud server.
- the electronic device can be implemented as a handheld terminal, a personal computer (PC), etc.
- the route generation device is implemented as software, it can be installed in the electronic device exemplified above.
- the method for generating a route for controlling a drone in this embodiment may include:
- Step S2301 Obtain the waypoint editing information input by the user in the three-dimensional map.
- Step S2302 Based on the waypoint editing information, determine at least two spatial waypoints located in the three-dimensional map.
- the spatial waypoints include altitude information used to control the drone.
- Step S2303 Generate three-dimensional route information corresponding to the UAV based on at least two spatial waypoints.
- the method in this embodiment may further include: obtaining the waypoint adjustment operation input by the user for any spatial waypoint in the three-dimensional map; Adjust the space waypoint based on the waypoint adjustment operation.
- the method in this embodiment may also include: obtaining the actual flight route of the drone; and displaying the actual flight route and the three-dimensional route information differently on the three-dimensional map.
- the method in this embodiment may further include: obtaining the execution status corresponding to the three-dimensional route information; and displaying the three-dimensional route information in different execution states in a differentiated manner on the three-dimensional map.
- Figure 24 is a schematic flow chart of a method for displaying a model obtained by using a drone provided by an embodiment of the present invention; with reference to Figure 24, this embodiment provides a method for displaying a model obtained by using a drone.
- the display method of the model; the execution subject of the display method of the model can be a display device of the model, and the display device of the model can be implemented as software, or a combination of software and hardware, wherein, when the display device of the model is implemented as hardware, it can specifically be an electronic device that communicates with the drone through a cloud platform, cloud network, or cloud server.
- the electronic device can be implemented as a handheld terminal, a personal terminal PC, etc.
- the display method of the model obtained by using the drone in this embodiment may include:
- Step S2401: Obtain the three-dimensional model to be displayed.
- the three-dimensional model is generated based on the information collected by the drone.
- Step S2402: Based on the collected information, determine a three-dimensional map corresponding to the three-dimensional model.
- Step S2403: Combine and display the three-dimensional model and the three-dimensional map.
- the combined display of the three-dimensional model and the three-dimensional map may include: determining, among multiple three-dimensional models, the target three-dimensional model that needs to be displayed in detail; using a first preset area of the display interface to display the target three-dimensional model in combination with the corresponding three-dimensional map; and using a second preset area of the display interface to display thumbnails of the three-dimensional models other than the target three-dimensional model, wherein the second preset area is smaller than the first preset area.
- the method in this embodiment may also include: obtaining a model selection operation input by the user on any other three-dimensional model, and switching the target three-dimensional model displayed in the first preset area to the three-dimensional model corresponding to the model selection operation.
- the combined display of the three-dimensional model and the three-dimensional map may include: determining the display type of the three-dimensional map, where the display type includes any of the following: a preset background map, a satellite map, a standard map; and displaying the three-dimensional model in combination with the three-dimensional map based on the display type of the three-dimensional map.
- the method in this embodiment may also include: obtaining an execution operation input by the user on the three-dimensional model; and moving, rotating, or scaling the three-dimensional model based on the execution operation.
- the method in this embodiment may further include: in response to a model processing request for the three-dimensional model, performing a processing operation on the three-dimensional model and the three-dimensional map.
- the processing operation includes at least one of the following: a distribution operation, a download operation, a deletion operation.
- the combined display of the three-dimensional models and the three-dimensional map may include: obtaining reference information for sorting the multiple three-dimensional models; determining the display sequence of the multiple three-dimensional models based on the reference information; and, based on the display sequence, displaying the multiple three-dimensional models in combination with their corresponding three-dimensional maps in order.
- the reference information includes any one of the following: selection order information, time information.
- determining the display sequence of multiple three-dimensional models based on the reference information may include: determining the initial sequence of the multiple three-dimensional models based on the reference information; and obtaining an adjustment operation input by the user on the initial sequence, to obtain the display sequence of the multiple three-dimensional models.
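A minimal sketch of the sequencing logic just described: order the models by the reference information (selection order or time), then apply the user's adjustment operations. The `created` / `selected_at` field names and the move-operation form of the adjustment are hypothetical, chosen only for illustration.

```python
def display_sequence(models, reference, user_moves=()):
    """Determine the display sequence of models from reference info
    ('time' or 'selection'), then apply user adjustments, each given
    as (from_index, to_index) on the current order."""
    key = {"time": lambda m: m["created"],
           "selection": lambda m: m["selected_at"]}[reference]
    order = sorted(models, key=key)          # initial sequence
    for i, j in user_moves:                  # user adjustment of the order
        order.insert(j, order.pop(i))
    return [m["name"] for m in order]
```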
- obtaining the three-dimensional model to be displayed includes: receiving the three-dimensional model sent by the cloud server.
- the three-dimensional model is generated by the cloud server based on the information collected by the drone.
- Figure 25 is a schematic structural diagram of a display device for information collected by a drone provided by an embodiment of the present invention; with reference to Figure 25, this embodiment provides a display device for information collected by a drone
- the information display device is used to perform the information display method shown in Figure 2.
- the information display device may include:
- Memory 2501, used to store a computer program;
- Processor 2502, used to run the computer program stored in the memory 2501 to implement:
- obtaining the shooting information of the drone; determining the drone shooting position corresponding to the shooting information; and, when there is a shooting object position corresponding to the shooting information, marking and displaying the drone shooting position and the shooting object position on the map corresponding to the shooting information.
- the structure of the information display device may also include a communication interface 2503, which is used to implement communication between the information display device and other devices or communication networks.
- Figure 26 is a schematic structural diagram of a device for comparing models obtained by using drones according to an embodiment of the present invention; with reference to Figure 26, this embodiment provides a device for comparing models obtained by using drones.
- the comparison device of the model is used to perform the comparison method of the model obtained by using the drone as shown in Figure 13.
- the comparison device of the model may include:
- Memory 2601, used to store a computer program;
- Processor 2602, used to run the computer program stored in the memory 2601 to implement:
- obtaining a model comparison request corresponding to at least two three-dimensional models, each of which is generated based on information collected by the drone;
- based on the model comparison request, displaying the at least two three-dimensional models in an overlapping manner to obtain an overlay display area, the overlay display area being used to display at least one three-dimensional model;
- in response to a display adjustment operation input by the user for the overlay display area, adjusting the display data in the overlay display area, so as to determine a model comparison result between the at least two three-dimensional models.
- the structure of the model comparison device may also include a communication interface 2603, which is used to implement communication between the model comparison device and other devices or communication networks.
- Figure 27 is a schematic structural diagram of a route generation device for controlling a drone provided by an embodiment of the present invention; with reference to Figure 27, this embodiment provides a route generation device for controlling a drone.
- the route generating device is used to execute the route generating method for controlling the UAV shown in Figure 17.
- the route generating device may include:
- Memory 2701, used to store a computer program;
- Processor 2702, used to run the computer program stored in the memory 2701 to implement:
- obtaining the waypoint editing information input by the user in the three-dimensional map;
- based on the waypoint editing information, determining at least two spatial waypoints located in the three-dimensional map, where the spatial waypoints include altitude information used to control the drone;
- based on the at least two spatial waypoints, generating three-dimensional route information corresponding to the UAV.
- the structure of the route generation device may also include a communication interface 2703, which is used to implement communication between the route generation device and other devices or communication networks.
- the implementation and effects of the route generation device for controlling the UAV shown in Figure 27 are similar to those of the method in the embodiments shown in Figures 17 to 19; they are not described in detail in this embodiment. For the implementation process and technical effects of this technical solution, please refer to the relevant descriptions of the embodiments shown in Figures 17 to 19, which are not repeated here.
- Figure 28 is a schematic structural diagram of a display device for a model obtained using a drone provided by an embodiment of the present invention; with reference to Figure 28, this embodiment provides a display device for a model obtained using a drone
- the display device of the model is used to perform the display method of the model obtained by using the drone as shown in Figure 20.
- the display device of the model may include:
- Memory 2801, used to store a computer program;
- Processor 2802, used to run the computer program stored in the memory 2801 to implement:
- obtaining the three-dimensional model to be displayed, where the three-dimensional model is generated based on the information collected by the drone; based on the collected information, determining a three-dimensional map corresponding to the three-dimensional model; and displaying the three-dimensional model in combination with the three-dimensional map.
- the structure of the display device of the model may also include a communication interface 2803, which is used to implement communication between the display device of the model and other devices or communication networks.
- embodiments of the present invention provide a computer storage medium for storing computer software instructions used by an electronic device, which include the programs involved in executing the method for displaying information collected by a drone in the method embodiments shown in Figures 1 to 12.
- Embodiments of the present invention provide a computer storage medium for storing computer software instructions used by an electronic device, which include the programs involved in executing the method for comparing models obtained by using a drone in the method embodiments shown in Figures 13 to 16.
- Embodiments of the present invention provide a computer storage medium for storing computer software instructions used by an electronic device, which include the programs involved in executing the method for generating a route for controlling a UAV in the method embodiments shown in Figures 17 to 19.
- Embodiments of the present invention provide a computer storage medium for storing computer software instructions used by an electronic device, which include the programs involved in executing the method for displaying a model obtained by using a drone in the method embodiments shown in Figures 20 and 21.
- embodiments of the present invention provide a computer program product, including a computer program; when the computer program is executed by a processor of an electronic device, the processor is caused to perform the method for displaying information collected by a drone in the method embodiments shown in Figures 1 to 12.
- Embodiments of the present invention provide a computer program product, including a computer program; when the computer program is executed by a processor of an electronic device, the processor is caused to perform the method for comparing models obtained by using a drone in the method embodiments shown in Figures 13 to 16.
- Embodiments of the present invention provide a computer program product, including a computer program; when the computer program is executed by a processor of an electronic device, the processor is caused to perform the method for generating a route for controlling a drone in the method embodiments shown in Figures 17 to 19.
- Embodiments of the present invention provide a computer program product, including a computer program; when the computer program is executed by a processor of an electronic device, the processor is caused to perform the method for displaying a model obtained by using a drone in the method embodiments shown in Figures 20 and 21.
- FIG 29 is a schematic structural diagram of an unmanned aerial vehicle system provided by an embodiment of the present invention. Referring to Figure 29, this embodiment provides an unmanned aerial vehicle system.
- the unmanned aerial vehicle system may include:
- the display device 2902 for information collected by the drone in the embodiment of FIG. 25 is used to control the drone 2901 through the cloud platform 2903.
- the cloud platform 2903 is used to set the flight operation tasks of the drone, the user's planned route and other operations.
- the drone 2901 can perform the flight operation tasks set through the cloud platform 2903, or can operate according to the user's planned route.
- the drone 2901 can be equipped with an image acquisition device, through which the drone's shooting results (image information, video information, point cloud information, etc.) can be obtained; the shooting results can be transmitted directly to the cloud platform 2903 or uploaded to the cloud platform 2903 through the remote controller.
- the photographing results can be displayed through the information display device 2902 .
- other terminal devices can also download and display the shooting results from the cloud platform 2903 according to design requirements or application requirements.
- the implementation method and effect of the UAV system in this embodiment are similar to the implementation method and effect of the display device using the information collected by the UAV in the embodiment shown in FIG. 25, and are not detailed in this embodiment.
- Figure 30 is a second schematic structural diagram of an unmanned aerial vehicle system provided by an embodiment of the present invention. With reference to Figure 30, this embodiment provides another unmanned aerial vehicle system, which may include:
- the comparison device 3002 for models obtained by using a drone in the above-mentioned embodiment of FIG. 26 is used to control the drone 3001 through the cloud platform 3003.
- the implementation method and effect of the UAV system in this embodiment are similar to the implementation method and effect of the comparison device for the model obtained by using the UAV in the embodiment shown in FIG. 26, and are not detailed in this embodiment.
- for the implementation process and technical effects of this technical solution, please refer to the description in the embodiment shown in Figure 26, which is not repeated here.
- Figure 31 is a third schematic structural diagram of an unmanned aerial vehicle system provided by an embodiment of the present invention. With reference to Figure 31, this embodiment provides yet another unmanned aerial vehicle system, which may include:
- the route generation device 3102 for controlling the UAV in the above-mentioned embodiment of FIG. 27 is used to control the UAV 3101 through the cloud platform 3103.
- the implementation and effects of the UAV system in this embodiment are similar to those of the route generation device for controlling the UAV in the embodiment shown in Figure 27.
- This embodiment does not describe them in detail; please refer to the relevant description of the embodiment shown in Figure 27.
- Figure 32 is a fourth schematic structural diagram of an unmanned aerial vehicle system provided by an embodiment of the present invention. With reference to Figure 32, this embodiment provides yet another unmanned aerial vehicle system, which may include:
- the display device 3202 for the model obtained by the drone in the above-mentioned embodiment of Figure 28 is used to control the drone 3201 through the cloud platform 3203.
- the implementation method and effect of the UAV system in this embodiment are similar to the implementation method and effect of the display device for the model obtained by using the UAV in the embodiment shown in FIG. 28, and are not detailed in this embodiment.
- the disclosed related detection devices and methods can be implemented in other ways.
- the detection device embodiments described above are only illustrative.
- the division of modules or units is only a logical function division; in actual implementation there may be other division methods, for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
- the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the detection device or unit may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- each functional unit in various embodiments of the present invention can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
- the above integrated units can be implemented in the form of hardware or software functional units.
- the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
- the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer processor to execute all or part of the steps of the methods described in the various embodiments of the present invention.
- the aforementioned storage media include: USB flash drives, mobile hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.
Abstract
A method for displaying information, a method for comparing models, devices, and an unmanned aerial vehicle system. The information display method includes: obtaining the shooting information of a drone; determining the drone shooting position corresponding to the shooting information; and, when there is a shooting object position corresponding to the shooting information, marking and displaying the drone shooting position and the shooting object position on the map corresponding to the shooting information. The technical solution provided by this embodiment effectively enables information related to the shooting information to be displayed flexibly on a map, so that the user can quickly and intuitively learn the information related to the shooting information from the map, which improves the quality and effect of displaying the shooting information.
Description
Embodiments of the present invention relate to the technical field of unmanned aerial vehicles, and in particular to a method for displaying information, a method for comparing models, devices, and an unmanned aerial vehicle system.
With the rapid development of science and technology, drones are being applied in more and more fields, for example: agricultural scenarios, aerial photography scenarios, surveying and mapping scenarios, search and rescue scenarios, and so on. During drone operation, the shooting results of the drone can be obtained and transmitted to the ground side; after the ground side receives the shooting results, the user can view them directly through playback or a related list module, which makes the display of the shooting results rather monotonous.
Summary of the Invention
Embodiments of the present invention provide a method for displaying information, a method for comparing models, devices, and an unmanned aerial vehicle system, which can display the drone shooting position and the shooting object position corresponding to the shooting information on a map, thereby improving the quality and effect of displaying the shooting results of the drone.
A first aspect of the present invention provides a method for displaying information collected by a drone, including:
obtaining the shooting information of the drone;
determining the drone shooting position corresponding to the shooting information;
when there is a shooting object position corresponding to the shooting information, marking and displaying the drone shooting position and the shooting object position on the map corresponding to the shooting information.
A second aspect of the present invention provides a method for comparing models obtained by using a drone, including:
obtaining a model comparison request corresponding to at least two three-dimensional models, each of which is generated based on information collected by a drone;
based on the model comparison request, displaying the at least two three-dimensional models in an overlapping manner to obtain an overlay display area, the overlay display area being used to display at least one three-dimensional model;
in response to a display adjustment operation input by the user for the overlay display area, adjusting the display data in the overlay display area, so as to determine a model comparison result between the at least two three-dimensional models.
A third aspect of the present invention provides a method for generating a route for controlling a drone, including:
obtaining the waypoint editing information input by the user in a three-dimensional map;
based on the waypoint editing information, determining at least two spatial waypoints located in the three-dimensional map, the spatial waypoints including altitude information used to control the drone;
based on the at least two spatial waypoints, generating three-dimensional route information corresponding to the drone.
A fourth aspect of the present invention provides a method for displaying a model obtained by using a drone, including:
obtaining the three-dimensional model to be displayed, the three-dimensional model being generated based on information collected by a drone;
based on the collected information, determining a three-dimensional map corresponding to the three-dimensional model;
displaying the three-dimensional model and the three-dimensional map in combination.
A fifth aspect of the present invention provides a device for displaying information collected by a drone, including:
a memory, used to store a computer program;
a processor, used to run the computer program stored in the memory to implement:
obtaining the shooting information of the drone;
determining the drone shooting position corresponding to the shooting information;
when there is a shooting object position corresponding to the shooting information, marking and displaying the drone shooting position and the shooting object position on the map corresponding to the shooting information.
A sixth aspect of the present invention provides a device for comparing models obtained by using a drone, including:
a memory, used to store a computer program;
a processor, used to run the computer program stored in the memory to implement:
obtaining a model comparison request corresponding to at least two three-dimensional models, each of which is generated based on information collected by a drone;
based on the model comparison request, displaying the at least two three-dimensional models in an overlapping manner to obtain an overlay display area, the overlay display area being used to display at least one three-dimensional model;
in response to a display adjustment operation input by the user for the overlay display area, adjusting the display data in the overlay display area, so as to determine a model comparison result between the at least two three-dimensional models.
A seventh aspect of the present invention provides a device for generating a route for controlling a drone, including:
a memory, used to store a computer program;
a processor, used to run the computer program stored in the memory to implement:
obtaining the waypoint editing information input by the user in a three-dimensional map;
based on the waypoint editing information, determining at least two spatial waypoints located in the three-dimensional map, the spatial waypoints including altitude information used to control the drone;
based on the at least two spatial waypoints, generating three-dimensional route information corresponding to the drone.
An eighth aspect of the present invention provides a device for displaying a model obtained by using a drone, including:
a memory, used to store a computer program;
a processor, used to run the computer program stored in the memory to implement:
obtaining the three-dimensional model to be displayed, the three-dimensional model being generated based on information collected by a drone;
based on the collected information, determining a three-dimensional map corresponding to the three-dimensional model;
displaying the three-dimensional model and the three-dimensional map in combination.
A ninth aspect of the present invention provides a computer-readable storage medium storing program instructions, the program instructions being used for the method for displaying information collected by a drone described in the first aspect.
A tenth aspect of the present invention provides a computer-readable storage medium storing program instructions, the program instructions being used for the method for comparing models obtained by using a drone described in the second aspect.
An eleventh aspect of the present invention provides a computer-readable storage medium storing program instructions, the program instructions being used for the method for generating a route for controlling a drone described in the third aspect.
A twelfth aspect of the present invention provides a computer-readable storage medium storing program instructions, the program instructions being used for the method for displaying a model obtained by using a drone described in the fourth aspect.
A thirteenth aspect of the present invention provides an unmanned aerial vehicle system, including:
a drone;
the device for displaying information collected by a drone described in the fifth aspect, used to control the drone through a cloud platform.
A fourteenth aspect of the present invention provides an unmanned aerial vehicle system, including:
a drone;
the device for comparing models obtained by using a drone described in the sixth aspect, used to control the drone through a cloud platform.
A fifteenth aspect of the present invention provides an unmanned aerial vehicle system, including:
a drone;
the device for generating a route for controlling a drone described in the seventh aspect, used to control the drone through a cloud platform.
A sixteenth aspect of the present invention provides an unmanned aerial vehicle system, including:
a drone;
the device for displaying a model obtained by using a drone described in the eighth aspect, used to control the drone through a cloud platform.
In the technical solution provided by the embodiments of the present invention, the shooting information of the drone is obtained, the drone shooting position corresponding to the shooting information is determined, and, when there is a shooting object position corresponding to the shooting information, the drone shooting position and the shooting object position are marked and displayed on the map corresponding to the shooting information. This effectively enables the drone shooting position and the shooting object position corresponding to the shooting information to be displayed on a map, so that the user can quickly and intuitively learn the information related to the shooting information from the map, which effectively improves the quality and effect of displaying the shooting results of the drone, further improves the practicality of the method, and is conducive to market promotion and application.
The drawings described here are provided for a further understanding of the present application and constitute a part of the present application; the exemplary embodiments of the present application and their descriptions are used to explain the present application and do not constitute an improper limitation of it. In the drawings:
Figure 1 is a schematic scene diagram of a method for displaying information collected by a drone provided by an embodiment of the present invention;
Figure 2 is a schematic flow chart of a method for displaying information collected by a drone provided by an embodiment of the present invention;
Figure 3 is a schematic diagram of the drone shooting position and the shooting object position provided by an embodiment of the present invention;
Figure 4 is a first schematic diagram of marking and displaying the drone shooting position and the shooting object position provided by an embodiment of the present invention;
Figure 5 is a second schematic diagram of marking and displaying the drone shooting position and the shooting object position provided by an embodiment of the present invention;
Figure 6 is a schematic flow chart of another method for displaying information collected by a drone provided by an embodiment of the present invention;
Figure 7 is a schematic diagram of automatically loading the panorama into the three-dimensional map for marked display provided by an embodiment of the present invention;
Figure 8 is a first schematic diagram of displaying the panorama based on the display viewing angle provided by an embodiment of the present invention;
Figure 9 is a second schematic diagram of displaying the panorama based on the display viewing angle provided by an embodiment of the present invention;
Figure 10 is a schematic flow chart of yet another method for displaying information collected by a drone provided by an embodiment of the present invention;
Figure 11 is a schematic diagram of playing the video information provided by an embodiment of the present invention;
Figure 12 is a schematic diagram of displaying, on the map, the current shooting position corresponding to the video frame being played, provided by an embodiment of the present invention;
Figure 13 is a schematic flow chart of another method for displaying information collected by a drone provided by an embodiment of the present invention;
Figure 13a is a first schematic diagram of displaying a three-dimensional model provided by an embodiment of the present invention;
Figure 13b is a second schematic diagram of displaying a three-dimensional model provided by an embodiment of the present invention;
Figure 13c is a third schematic diagram of displaying a three-dimensional model provided by an embodiment of the present invention;
Figure 14a is a schematic diagram of the overlay display area provided by an embodiment of the present invention;
Figure 14b is a schematic diagram of displaying at least two three-dimensional models on which a comparison operation needs to be performed, provided by an embodiment of the present invention;
Figure 15 is a schematic flow chart of adjusting the display data in the overlay display area in response to a display adjustment operation input by the user for the overlay display area, provided by an embodiment of the present invention;
Figure 16 is a schematic diagram of adjusting the display data in the overlay display area provided by an embodiment of the present invention;
Figure 17 is a schematic flow chart of yet another method for displaying information collected by a drone provided by an embodiment of the present invention;
Figure 18 is a schematic diagram of displaying three-dimensional route information provided by an embodiment of the present invention;
Figure 19 is a schematic diagram of displaying the actual flight route and the three-dimensional route information differently, provided by an embodiment of the present invention;
Figure 20 is a schematic flow chart of yet another method for displaying information collected by a drone provided by an embodiment of the present invention;
Figure 21 is a schematic diagram of displaying the three-dimensional model and the three-dimensional map in combination, provided by an embodiment of the present invention;
Figure 22 is a schematic flow chart of a method for comparing models obtained by using a drone provided by an embodiment of the present invention;
Figure 23 is a schematic flow chart of a method for generating a route for controlling a drone provided by an embodiment of the present invention;
Figure 24 is a schematic flow chart of a method for displaying a model obtained by using a drone provided by an embodiment of the present invention;
Figure 25 is a schematic structural diagram of a device for displaying information collected by a drone provided by an embodiment of the present invention;
Figure 26 is a schematic structural diagram of a device for comparing models obtained by using a drone provided by an embodiment of the present invention;
Figure 27 is a schematic structural diagram of a device for generating a route for controlling a drone provided by an embodiment of the present invention;
Figure 28 is a schematic structural diagram of a device for displaying a model obtained by using a drone provided by an embodiment of the present invention;
Figure 29 is a first schematic structural diagram of an unmanned aerial vehicle system provided by an embodiment of the present invention;
Figure 30 is a second schematic structural diagram of an unmanned aerial vehicle system provided by an embodiment of the present invention;
Figure 31 is a third schematic structural diagram of an unmanned aerial vehicle system provided by an embodiment of the present invention;
Figure 32 is a fourth schematic structural diagram of an unmanned aerial vehicle system provided by an embodiment of the present invention.
In order to make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are part of the embodiments of the present invention, rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field to which the present invention belongs. The terms used herein in the specification of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention.
In order to understand the specific implementation process and principles of the technical solution in this embodiment, the related art is first described below:
With the rapid development of science and technology, drones are being applied in more and more fields, for example: agricultural scenarios, aerial photography scenarios, surveying and mapping scenarios, search and rescue scenarios, and so on. At present, during drone operation, the shooting results of the drone (photos, panoramas, videos, point clouds, etc.) can be obtained and transmitted to the ground side; after the ground side receives the shooting results, the user can view them directly through playback or a related list module. In particular, special shooting results such as panoramas and point clouds are rarely presented on a map, which makes the display of the shooting results rather monotonous.
In addition, before drone operation, the route used to control the drone's flight is generally drawn on a two-dimensional map, and a route drawn on a two-dimensional map can only identify the planar information of the drone's operation, not its spatial information. Similarly, during or after the drone's flight, the trajectory picture of the drone's real-time flight cannot be viewed, especially in a three-dimensional map. Therefore, when a two-dimensional map is currently used to display the route, the user cannot perceive the spatial operating state of the drone clearly and in time, which easily leads to problems such as crossing and collision of drones, thereby increasing the risk of drone operation.
Furthermore, after the shooting results of the drone are obtained, a three-dimensional model corresponding to the shooting results can be built. At present, three-dimensional models built from shooting results have the following defects: (1) there are few application scenarios in which the display of the three-dimensional model is combined with the terrain; (2) there is no mature and efficient interaction scheme for invoking multiple three-dimensional models; (3) there is no efficient, multi-dimensional viewing operation for viewing multiple three-dimensional models; (4) there is no ideal interaction scheme for comparing multiple three-dimensional models.
In order to solve the above technical problems, this embodiment provides a method for displaying information, a method for comparing models, devices, and an unmanned aerial vehicle system, wherein the execution subject of the method for displaying information collected by a drone in this embodiment is a device for displaying information collected by a drone. As shown in Figure 1, the display device can be communicatively connected to the drone through a cloud platform (cloud network, cloud server, etc.). Specifically:
The drone can fly along a preset route to perform the corresponding task operations. During the drone's flight, shooting information can be collected through an image acquisition device provided on the drone, so that shooting information can be obtained. Specifically, the image acquisition device can be a camera, a video camera, another device with an image shooting function, and so on, and the obtained shooting information can include at least one of the following: image information, a panorama, video information, point cloud information. After the shooting information is obtained, it can be sent to the information display device through the cloud platform, so that the information display device can display the shooting information.
The device for displaying information collected by the drone is communicatively connected to the drone through the cloud platform and is used to obtain the shooting information of the drone through the cloud platform. After the shooting information is obtained, it can be analyzed and processed to determine the drone shooting position corresponding to the shooting information. During the drone's flight, the drone can optionally be equipped with a sensing device (for example, a lidar) for determining the shooting object position corresponding to the shooting information. Specifically, when the drone is equipped with the sensing device, the shooting object position corresponding to the shooting information can be obtained through the sensing device; when the drone is not equipped with the sensing device, the shooting object position corresponding to the shooting information cannot be obtained.
When the shooting object position corresponding to the shooting information exists or can be obtained, the map corresponding to the shooting information can be obtained; the map can be a two-dimensional map, a three-dimensional map, and so on. Then, the drone shooting position and the shooting object position can be marked and displayed on the map corresponding to the shooting information, which effectively enables the information related to the shooting information to be displayed flexibly on the map, so that the user can intuitively and quickly learn the information related to the shooting information from the map, further improving the quality and effect of displaying the shooting results of the drone.
Some implementations of a method for displaying information, a method for comparing models, devices, and an unmanned aerial vehicle system of the present invention are described in detail below with reference to the drawings. The following embodiments and the features in the embodiments can be combined with each other as long as there is no conflict between them.
Figure 2 is a schematic flow chart of a method for displaying information collected by a drone provided by an embodiment of the present invention. With reference to Figure 2, this embodiment provides a method for displaying information collected by a drone. The execution subject of the method can be an information display device, which can be implemented as software, or a combination of software and hardware. When the information display device is implemented as hardware, it can specifically be a display device of a cloud platform, or an electronic device communicatively connected to the drone through a cloud platform, cloud network, or cloud server; the electronic device can be implemented as a handheld terminal, a personal computer (PC), a tablet, a web platform, and so on. Of course, the information display device can also be a terminal device directly communicatively connected to the drone. When the information display device is implemented as software, it can be installed in the electronic devices exemplified above. Specifically, the method for displaying information collected by a drone in this embodiment can include:
Step S201: Obtain the shooting information of the drone.
Step S202: Determine the drone shooting position corresponding to the shooting information.
Step S203: When there is a shooting object position corresponding to the shooting information, mark and display the drone shooting position and the shooting object position on the map corresponding to the shooting information.
The specific implementation process and effects of each of the above steps are described in detail below:
Step S201: Obtain the shooting information of the drone.
The drone can be provided with an image acquisition device for obtaining shooting information; the image acquisition device can be a camera, a video camera, another device with an image shooting function, and so on. During the drone's flight, the shooting information of the drone can be obtained through the image acquisition device on the drone, and the shooting information can include at least one of the following: image information, a panorama, video information, point cloud information.
So that the user can view the shooting information of the drone more intuitively and clearly, after the drone obtains the shooting information, it can actively or passively send the shooting information to the information display device, so that the information display device can stably obtain the shooting information of the drone; in this case, the shooting information of the drone can be obtained via the cloud platform and the drone.
In addition to obtaining the shooting information of the drone via the cloud platform and the drone, this embodiment also provides another implementation for obtaining the shooting information of the drone. Specifically, the shooting information of the drone can be previously collected historical information stored in a preset area; in this case, the information display device can obtain the shooting information of the drone by accessing the preset area.
Step S202: Determine the drone shooting position corresponding to the shooting information.
After the shooting information is obtained, it can be analyzed and processed to determine the drone shooting position corresponding to the shooting information. Specifically, in order to obtain the drone shooting position corresponding to the shooting information, the drone can be equipped with a positioning device for positioning the drone; during the drone's flight shooting, the drone shooting position corresponding to the shooting information can be obtained through the positioning device.
In some examples, the information display device can obtain the positioning data of the positioning device on the drone through the cloud platform, and the drone shooting position corresponding to the shooting information can be determined by analyzing and processing the positioning data; in this case, the execution subject that determines the drone shooting position is the information display device.
In other examples, the positioning data can be obtained through the positioning device on the drone; after the drone obtains the positioning data, it can analyze and process the positioning data to determine the drone shooting position corresponding to the shooting information. After the drone obtains the drone shooting position, it can send the obtained drone shooting position to the information display device through the cloud platform, so that the information display device can determine the drone shooting position corresponding to the shooting information; in this case, the execution subject that determines the drone shooting position is the drone.
Step S203: When there is a shooting object position corresponding to the shooting information, mark and display the drone shooting position and the shooting object position on the map corresponding to the shooting information.
The shooting information can correspond to a drone shooting position and a shooting object position, where the drone shooting position can refer to the position of the drone when the shooting information was obtained, and the shooting object position refers to the position of the shooting object included in the shooting information. For example, as shown in Figure 3, when the drone is at position A, an orchard at position B can be shot through the image acquisition device on the drone; in this case, shooting information can be obtained through the image acquisition device, and the obtained shooting information corresponds to a drone shooting position (position A) and a shooting object position (position B). Obviously, the drone shooting position and the shooting object position are two completely different positions.
In addition, before the drone's flight operation, the drone can optionally be equipped with a sensing device (for example, a lidar) for determining the shooting object position corresponding to the shooting information. Specifically, when the drone is equipped with the sensing device, the shooting object position corresponding to the shooting information can be obtained through the sensing device; when the drone is not equipped with the sensing device, the shooting object position corresponding to the shooting information cannot be obtained.
When the shooting object position corresponding to the shooting information can be obtained, the obtained shooting object position can be stored in a preset area, or sent to the information display device; in this case, the information display device can store the obtained shooting object position, and the stored shooting object position can then be viewed. When the shooting object position corresponding to the shooting information can be viewed, there is a shooting object position corresponding to the shooting information; when it cannot be viewed, there is no shooting object position corresponding to the shooting information.
When there is a shooting object position corresponding to the shooting information, the map corresponding to the shooting information can be obtained; the map can be a two-dimensional map, a three-dimensional map, and so on. Then, the drone shooting position and the shooting object position can be marked and displayed on the map corresponding to the shooting information. Note that, since the drone shooting position and the shooting object position are two different positions, in order to further distinguish the display of the two, they can be displayed in different display styles, for example: an arrow or aircraft icon can be added at the drone shooting position, and a thumbnail of the shooting information can be displayed at the shooting object position, and so on. This effectively enables the information related to the shooting information to be displayed flexibly on the map, thereby improving the quality and effect of displaying the drone's shooting results.
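One way the marking behavior just described might be sketched: an aircraft icon always goes at the drone shooting position, and the thumbnail bubble falls back to that same point when no shooting object position is available (mirroring the case where the photo bubble and the aircraft icon share one position). The field names `drone_pos` and `subject_pos` are illustrative assumptions, not part of the embodiment.

```python
def build_markers(shots):
    """Build map markers for each shot: an aircraft icon at the drone
    shooting position, plus a thumbnail bubble at the shooting object
    position when one exists, otherwise at the drone position."""
    markers = []
    for shot in shots:
        drone_pos = shot["drone_pos"]
        subject_pos = shot.get("subject_pos")  # None when no sensing device
        markers.append({"kind": "aircraft", "pos": drone_pos})
        markers.append({"kind": "thumbnail",
                        "pos": subject_pos if subject_pos else drone_pos,
                        "name": shot["name"]})
    return markers
```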
Taking an image as the shooting information as an example, the method of this embodiment can display the drone's shooting results in a spatial map. Specifically, as shown in Figure 4, images shot by the drone can be stored in a media library; when the user clicks an image, the detailed information of the image can be viewed. In addition to displaying the detailed information of the image in the middle of the display interface, one side of the display interface can also show the shooting object position of the image information on the map, and the drone shooting position will also be shown on the map.
Specifically, as shown in Figure 5, for images stored in the media library, through the operation of loading media onto the map, the information related to the image information can be displayed on the map; after entering the map page, the image information loaded onto the map can be viewed. The image information on the map can be shown in the form of a bubble thumbnail, with the image name displayed. After an image is clicked, the image is in a selected state (the bubble becomes larger and gains a blue outline), and an enlarged thumbnail of the image appears. If the selected image corresponds to a drone shooting position, the drone shooting position at the time of shooting can be presented on the three-dimensional map (a blue triangle icon connected to the photo position). If the image only corresponds to a drone shooting position and has no shooting object position, the photo bubble and the aircraft icon can be displayed at the same position on the map.
When several images on the map are very close to each other, they can be presented in an aggregated manner: a number is added at the upper right corner of the first image loaded onto the map, and the number shown equals the number of aggregated images.
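The aggregation behavior above might be approximated with a greedy pass like the following; anchoring each cluster on the first image loaded onto the map follows the description, while the distance threshold and planar distance metric are assumptions introduced for illustration.

```python
def cluster_badges(positions, radius):
    """Greedy aggregation: an image closer than `radius` to the anchor
    (first-loaded image) of an existing cluster is folded into it, and
    the cluster's count badge is incremented; otherwise it starts a
    new cluster of its own."""
    clusters = []
    for pos in positions:
        for c in clusters:
            dx, dy = pos[0] - c["anchor"][0], pos[1] - c["anchor"][1]
            if (dx * dx + dy * dy) ** 0.5 < radius:
                c["count"] += 1
                break
        else:
            clusters.append({"anchor": pos, "count": 1})
    return clusters
```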
In the information display method provided by this embodiment, the shooting information of the drone is obtained, the drone shooting position corresponding to the shooting information is determined, and, when there is a shooting object position corresponding to the shooting information, the drone shooting position and the shooting object position are marked and displayed on the map corresponding to the shooting information. This effectively enables the drone shooting position and the shooting object position corresponding to the shooting information to be displayed on a map, so that the user can quickly and intuitively learn the information related to the shooting information from the map, which effectively improves the quality and effect of displaying the shooting results of the drone, further improves the practicality of the method, and is conducive to market promotion and application.
Figure 6 is a schematic flow chart of another method for displaying information collected by a drone provided by an embodiment of the present invention. On the basis of the above embodiment, with reference to Figure 6, when the shooting information includes a panorama, in order to display the panorama stably, the method in this embodiment can include:
Step S301: Obtain the shooting position of the panorama.
Step S302: Based on the shooting position, determine the three-dimensional map corresponding to the panorama.
Step S303: Automatically load the panorama into the three-dimensional map for marked display.
Since a panorama is image information obtained through wide-angle means of expression and can show as much of the surrounding environment as possible, when the shooting information includes a panorama, in order to guarantee the quality and effect of displaying the panorama, the shooting position of the panorama can be obtained. Specifically, the shooting position of the panorama refers to the position of the drone when the panorama was shot, and it can be obtained by positioning the drone with a positioning device.
After the shooting position (for example, coordinate information) of the panorama is obtained, it can be analyzed and processed to determine the three-dimensional map corresponding to the panorama; the determined three-dimensional map can include the map area corresponding to the shooting position. After the three-dimensional map corresponding to the panorama is determined, in order to display the panorama accurately in the three-dimensional map, the panorama can be automatically loaded into the three-dimensional map for marked display.
For example, as shown in Figure 7, when the panorama obtained by the drone is displayed on the map, a thumbnail mark corresponding to the panorama can be added to the map. Specifically, in addition to being stored in the media library, the panorama obtained by the drone can also be loaded directly into the three-dimensional map. Similarly to the display of image information, the panorama can be shown on the map in the form of a bubble thumbnail, with the panorama name displayed. After the panorama is clicked, it is in a selected state (the bubble becomes larger and gains a blue outline), and an enlarged thumbnail of the panorama appears.
In addition, when several panoramas on the map are very close to each other, they can be presented in an aggregated manner: a number is added at the upper right corner of the first panorama loaded onto the map, and the number shown equals the number of aggregated panoramas.
In some examples, after the panorama is automatically loaded into the three-dimensional map for marked display, the method in this embodiment can also include:
Step S401: In the three-dimensional map, obtain an angle adjustment operation input by the user on the panorama.
Step S402: Based on the angle adjustment operation, determine the display viewing angle of the panorama.
Step S403: Display the panorama based on the display viewing angle.
Since a panorama can show as much of the surrounding environment as possible, it can correspond to multiple display viewing angles, and panoramas at different display viewing angles can show different image areas; the user can adjust the display viewing angle of the panorama according to the application scenario or application requirements. To allow the user to adjust the display viewing angle of the panorama as needed, an angle adjustment operation input by the user on the panorama can be obtained in the three-dimensional map; the angle adjustment operation can be an operation input by the user with the keyboard or mouse, for example: the user holding down the left (right, or middle) mouse button and moving, or angle adjustment parameters input by the user through the keyboard, and so on. After the angle adjustment operation is obtained, the display viewing angle of the panorama can be determined based on the angle adjustment operation, and the panorama can then be displayed based on the display viewing angle, so that the user can adjust the display viewing angle of the panorama arbitrarily according to display requirements, further improving the flexibility and reliability of displaying the panorama.
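A plausible sketch of mapping a drag input onto the panorama's display viewing angle: the sensitivity value, the yaw wrap-around at 360°, and the pitch clamp at ±90° are all assumptions, since the embodiment does not specify the mapping.

```python
def adjust_view(yaw, pitch, drag_dx, drag_dy, sensitivity=0.1):
    """Map a mouse-drag delta (in pixels) onto a panorama viewing angle:
    yaw wraps around 360 degrees, pitch is clamped to +/-90 degrees."""
    yaw = (yaw + drag_dx * sensitivity) % 360.0
    pitch = max(-90.0, min(90.0, pitch + drag_dy * sensitivity))
    return yaw, pitch
```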
For example, as shown in Figure 8, after the panorama is obtained, it can be shown on the map in the form of a bubble thumbnail, with the panorama name displayed; after the panorama is clicked, it can be in a selected state (the bubble becomes larger and gains a blue outline), and an enlarged thumbnail of the panorama appears. When the panorama bubble is double-clicked, the panorama can be opened and displayed at a first display viewing angle; in addition, clicking full screen allows viewing the panorama at the first display viewing angle in full screen.
When the panorama at the first display viewing angle cannot meet the user's viewing needs, the user can click or drag the main panorama picture to view the panorama picture at different angles. Specifically, as shown in Figure 9, the panorama in the three-dimensional map can be displayed at a second display viewing angle, which effectively enables the user to adjust the display angle of the panorama arbitrarily according to viewing needs, further improving the flexibility and reliability of the method.
In this embodiment, by obtaining the shooting position of the panorama, determining the three-dimensional map corresponding to the panorama based on the shooting position, and automatically loading the panorama into the three-dimensional map for marked display, it is effectively achieved that, after the panorama is obtained by the drone, it can be automatically added to the corresponding three-dimensional map for display, so that the user can learn the detailed information of the panorama more intuitively, improving the practicality of the method.
Figure 10 is a schematic flow chart of yet another method for displaying information collected by a drone provided by an embodiment of the present invention. On the basis of the above embodiments, with reference to Figure 10, when the shooting information includes video information, in order to display the video information accurately on the map, the method in this embodiment can also include:
Step S1001: Obtain the shooting position corresponding to each video frame in the video information.
Step S1002: When the video information is played, display on the map the current shooting position corresponding to the video frame being played.
When the shooting information includes video information, since the video information includes multiple video frames and the drone shooting positions corresponding to the video frames differ, in order to display the detailed information corresponding to the video information on the map, the shooting position corresponding to each video frame in the video information can be obtained. The specific way of obtaining the shooting position corresponding to each video frame in this embodiment is similar to the specific implementation and effects of step S202 above; for details, refer to the above statements, which are not repeated here.
After the drone shooting position corresponding to each video frame is obtained, in order for the user to learn the drone shooting position corresponding to a video frame, the current shooting position corresponding to the video frame being played can be displayed on the map while the video information is being played. Note that the video frame being played changes as playback progresses, and the displayed current shooting position corresponding to the video frame also changes with the video frame being played.
For example, as shown in Figure 11, when the video information obtained by the drone is displayed on the map: unlike image information, whose shooting position is a single location, the shooting position corresponding to video information comprises multiple locations, and the video information identifies the shooting process over a period of time along the shooting trajectory. After the video information is loaded onto the map, it can be viewed in combination with the three-dimensional map. Specifically, the video information shot by the drone can be stored in the media library, from which it can be obtained; after the video information is clicked, the detailed information of the images included in the video information can be viewed, and the detailed information can include showing the position of the video information on the map in a two-dimensional map view. When the video information is played, the bubble in the small map window also moves correspondingly along the flight trajectory over time.
In addition, as shown in Figure 12, a gray flight trajectory is displayed on the map with a white endpoint on the trajectory, and the static content of the video is shown above it in the form of a bubble thumbnail. The white endpoint can be dragged along the gray flight trajectory; when the white endpoint is dragged, the picture in the bubble shows the video content shot at the corresponding position as the position changes. When the video bubble is clicked with the mouse, an enlarged thumbnail can be shown, and the enlarged thumbnail area will play the video content.
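The moving bubble described above amounts to looking up the drone's position along the timestamped flight track for the frame currently being played. A rough sketch follows; linear interpolation between track samples is an assumption, and positions outside the track's time span are clamped to its endpoints.

```python
def position_at(track, t):
    """Interpolate the drone's position along a flight track, given as a
    sorted list of (timestamp, (x, y)) samples, at playback time t."""
    (t0, p0), (t1, p1) = track[0], track[-1]
    if t <= t0:
        return p0
    if t >= t1:
        return p1
    for (ta, pa), (tb, pb) in zip(track, track[1:]):
        if ta <= t <= tb:
            f = (t - ta) / (tb - ta)  # fraction of the way along this leg
            return (pa[0] + f * (pb[0] - pa[0]),
                    pa[1] + f * (pb[1] - pa[1]))
```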
In this embodiment, by obtaining the shooting position corresponding to each video frame in the video information and displaying on the map, during playback, the current shooting position corresponding to the video frame being played, it is effectively achieved that the user can view the detailed information of the video information more intuitively and clearly, further improving the quality and effect of the method.
In still other examples, the shooting information includes point cloud information; in order to display the point cloud information accurately on the map, the method in this embodiment can also include:
Step S1101: Obtain the point cloud model corresponding to the point cloud information.
Step S1102: Determine the model origin corresponding to the point cloud model and the position information corresponding to the model origin.
Step S1103: Based on the position information, display the point cloud model on the map.
When the drone is provided with a point cloud camera, point cloud information can be obtained through the point cloud camera. Constrained by the particularity of point cloud imaging, point cloud information can be presented as a model; in order to display the point cloud information on the map, the point cloud model corresponding to the point cloud information can be obtained, that is, the point cloud shooting result is presented as a model. Specifically, the point cloud information can be analyzed and processed with a modeling algorithm to obtain the corresponding point cloud model; alternatively, a machine learning model for building point cloud models can be configured in advance, and after the point cloud information is obtained, it can be input into the machine learning model to obtain the point cloud model it outputs; the obtained point cloud model is the result of scanning with the point cloud camera.
After the point cloud model is obtained, in order to display it accurately on the map, the point cloud model can be analyzed and processed to determine the model origin corresponding to the point cloud model and the position information corresponding to the model origin. In some examples, the model origin corresponding to the point cloud model can be the geometric center or the center of gravity of the point cloud model, and so on. After the model origin corresponding to the point cloud model is determined, it can be analyzed and processed to determine the position information corresponding to the model origin, which effectively guarantees the accuracy and reliability of determining the model origin and the position information.
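Taking the centroid of the points as the model origin, one of the options just mentioned (geometric center / center of gravity), could look like this minimal sketch; the point representation as `(x, y, z)` tuples is an assumption.

```python
def model_origin(points):
    """Use the centroid of a point cloud, given as (x, y, z) tuples,
    as the model origin to be placed on the map."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))
```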
After the position information corresponding to the model origin is obtained, the point cloud model can be displayed on the map based on the position information, so that the user can view the point cloud model obtained by the drone on the map. Specifically, when point cloud models are displayed on the map, the number of point cloud models displayed can be one or more, and a point cloud model can be presented on the map as a bubble whose content is the point cloud model; when the point cloud model in the bubble is clicked, a thumbnail of the point cloud model can be seen, and double-clicking the point cloud model opens its preview interface, which effectively enables viewing the detailed information of the point cloud models marked on the map.
In this embodiment, by obtaining the point cloud model corresponding to the point cloud information, determining the model origin corresponding to the point cloud model and the position information corresponding to the model origin, and then displaying the point cloud model on the map based on the position information, it is effectively achieved that the user can view the detailed information of the point cloud information more intuitively and clearly, further improving the quality and effect of the method.
图13为本发明实施例提供的另一种对利用无人机所采集的信息的显示方法的流程示意图;在上述任意一个实施例的基础上,参考附图13所示,本实施例除了能够对无人机所采集的拍摄成果在地图中进行显示之外,本实施例还能够实现对通过无人机的采集信息所生成的模型进行对比操作,具体的,本实施例中的方法还可以包括:
步骤S1301:获取与至少两个三维模型相对应的模型对比请求,至少两个三维模型均是基于无人机的采集信息所生成的。
步骤S1302:基于模型对比请求将至少两个三维模型进行重合叠加显示,获得叠加显示区域,叠加显示区域用于对至少一个三维模型进行显示。
步骤S1303:响应于用户针对叠加显示区域输入的显示调整操作,对叠加显示区域中的显示数据进行调整,以便于确定至少两个三维模型之间的模型对比结果。
下面对上述各个步骤的具体实现过程和实现效果进行详细说明:
步骤S1301:获取与至少两个三维模型相对应的模型对比请求,至少两个三维模型均是基于无人机的采集信息所生成的。
其中,无人机上可以设置有数据采集装置,在不同的应用场景中,数据采集装置的类型可以不同,例如:在测绘领域或者工程监测的应用场景中,数据采集装置可以为图像采集装置、定位装置等等,在无人机运行的过程中,通过数据采集装置可以获得针对一采集对象相对应的采集信息,在获得采集信息之后,可以基于采集信息建立与采集对象相对应的三维模型,需要注意的是,建立三维模型的执行主体可以为无人机、与无人机通信连接的云平台或者与云平台通信连接的信息的显示装置。
为了能够准确地实现模型对比操作,在获取与至少两个三维模型相对应的模型对比请求之前,本实施例中的方法还可以包括:接收云端服务器发送的至少两个三维模型,至少两个三维模型均是云端服务器基于无人机的采集信息所生成的。
具体的,在无人机运行的过程中,通过数据采集装置可以获得针对一采集对象相对应的采集信息,在获得采集信息之后,可以将采集信息发送至云端服务器,在云端服务器获取到采集信息之后,可以对采集信息进行分析处理,以生成与采集信息相对应的三维模型,而后云端服务器可以存储至少两个三维模型,在云端服务器生成或者存储至少两个三维模型之后,可以将至少两个三维模型发送至信息的显示装置,使得信息的显示装置能够接收到云端服务器所发送的至少两个三维模型,从而有效地保证了对至少两个三维模型进行获取的准确可靠性。
在获得至少两个三维模型之后,可以对至少两个三维模型进行显示,具体实现时,所获得的至少两个三维模型可以存储在预设模型库中,预先配置有用于对至少两个三维模型进行显示的模型列表页面,通过模型列表页面可以以缩略方式显示所获得的至少两个三维模型。并且,打开模型预览页面可以对三维模型的详细信息进行查看,具体的,在模型预览页面的上半部分可以显示三维模型,下半部分可以显示能够进行切换查看的不同模型的缩略图。
此外,在对三维模型进行显示时,能够对用于显示三维模型的三维地图背景进行切换,三维地图背景可以包括预先配置的能够支持的地图背景,其具体可以包括:预设背景图、卫星地图背景、标准地图背景等等,如图13a所示,三维模型的三维地图背景为预设背景图,即黑色条纹背景;如图13b所示,三维模型的三维地图背景为卫星地图背景;如图13c所示,三维模型的三维地图背景为标准地图背景。具体实现时,三维模型的三维地图背景可以默认为黑色背景图,通过黑色背景图可以对三维模型进行单独展示,用户可以根据需求切换用于对三维模型进行显示的背景图,例如:可以选择并确定与三维地图相对应的三维地图背景,并可以将三维模型直接依附于采用所选三维地图背景的三维地图之上进行展示。
另外,对于模型列表页面而言,模型列表页面的上半部分为模型展示区域,该模型展示区域可以支持模型的移动、模型的旋转、模型的放缩查看等操作,具体的,通过鼠标左键点击拖拽可以拖拽移动所显示的三维模型,Ctrl+鼠标左键可以旋转三维模型的方向,滚动鼠标中键滚轮可以放大或缩小三维模型的大小。模型列表页面的下半部分可以显示多个待显示的其他三维模型的缩略图,点击模型排列中不同三维模型的缩略图可以切换模型查看,同时基于用户的预设应用需求和设计需求,还可以进行三维模型的分发操作、在地图上显示操作、下载操作以及删除等操作。
需要注意的是,通过无人机的采集信息可以生成针对一采集对象相对应的一个或多个三维模型,当获取到与采集对象相对应的多个三维模型时,多个三维模型各自对应的时间信息可以不同。在获取到至少两个三维模型之后,用户可以根据设计需求针对至少两个三维模型进行模型对比操作,此时,则可以获取到与至少两个三维模型相对应的模型对比请求,该模型对比请求中可以包括需要进行模型对比操作的三维模型标识,一个模型对比请求所对应的三维模型的数量可以为两个或者两个以上。
另外,本实施例对于获取与至少两个三维模型相对应的模型对比请求的具体实现方式不做限定,在一些实例中,获取与至少两个三维模型相对应的模型对比请求可以包括:获取用户针对至少两个三维模型所输入的模型对比操作,在模型对比界面中,将用户针对界面中所显示的三维模型所输入的模型选择操作确定为模型对比操作,而后可以基于模型对比操作生成并获得与至少两个三维模型相对应的模型对比请求。
在另一些实例中,获取与至少两个三维模型相对应的模型对比请求可以包括:获取与信息的显示装置进行通信连接的第三设备,通过第三设备生成模型对比请求,而后第三设备可以主动或者被动地将模型对比请求发送至信息的显示装置,从而使得信息的显示装置可以稳定地获取到与至少两个三维模型相对应的模型对比请求。
步骤S1302:基于模型对比请求将至少两个三维模型进行重合叠加显示,获得叠加显示区域,叠加显示区域用于对至少一个三维模型进行显示。
在获取到与至少两个三维模型相对应的模型对比请求之后,可以基于模型对比请求将至少两个三维模型进行重合叠加显示,从而可以获得叠加显示区域,其中,叠加显示区域用于对至少一个三维模型进行显示。举例来说,参考附图14a所示,在与模型对比请求相对应的三维模型包括三维模型A和三维模型B时,则叠加显示区域中是由两个三维模型进行重合叠加所获得的,此时的叠加显示区域能够对至少一个三维模型进行显示,在一些实例中,叠加显示区域能够对三维模型A或者三维模型B进行显示;在另一些实例中,叠加显示区域能够对三维模型A的至少部分和三维模型B的至少部分进行显示。
步骤S1303:响应于用户针对叠加显示区域输入的显示调整操作,对叠加显示区域中的显示数据进行调整,以便于确定至少两个三维模型之间的模型对比结果。
在获得叠加显示区域之后,用户可以针对叠加显示区域输入显示调整操作,该显示调整操作用于对叠加显示区域中位于顶层的显示数据进行调整。响应于用户针对叠加显示区域输入的显示调整操作,能够基于显示调整操作对叠加显示区域中的显示数据进行调整,这样便于用户确定至少两个三维模型之间的模型对比结果。
需要注意的是,在与模型对比请求相对应的三维模型的数量为两个以上时,例如:4个、5个或者6个时,除了基于多个三维模型确定一个叠加显示区域之外,显示界面中还可以配置有用于对叠加显示区域中的多个三维模型分别进行显示的控件,在用户点击控件时,即可以以平铺的方式对需要进行对比的多个三维模型进行同步显示,如图14b所示。当用户对任一的三维模型进行调整时,进行对比的其他三维模型也会进行同步调整,例如:在用户对任一的三维模型进行旋转、放大、缩小等操作时,进行对比的其他三维模型也会同步进行旋转、放大、缩小等操作,这样可以方便用户观察三维模型之间的模型对比结果,进一步提高了对模型对比结果进行获取的质量和效果。
在获得叠加显示区域之后,为了提高对叠加显示区域进行显示的真实可靠性,本实施例中的方法还可以包括:获取任意一个三维模型所对应的三维地图;对叠加显示区域和三维地图进行结合显示。
在获取到叠加显示区域之后,可以确定叠加显示区域中所包括的至少两个三维模型,为了能够提高对三维模型进行显示的真实可靠性,可以对至少两个三维模型中的任意一个三维模型进行分析处理,以获取任意一个三维模型所对应的三维地图,具体的,可以获取任意一个三维模型所对应的位置信息,基于位置信息即可获取与任意一个三维模型相对应的三维地图。
在获取到三维地图之后,可以对叠加显示区域和三维地图进行结合显示,从而不仅扩展了对叠加显示区域中至少两个三维模型进行显示的质量和效果,还能够提高三维模型显示的真实可靠性。
本实施例中,通过获取与至少两个三维模型相对应的模型对比请求,而后基于模型对比请求将至少两个三维模型进行重合叠加显示,获得叠加显示区域,响应于用户针对叠加显示区域输入的显示调整操作,并对叠加显示区域中的显示数据进行调整,以确定至少两个三维模型之间的模型对比结果,从而使得用户了解到至少两个三维模型之间的差异,当应用于工程监测的应用场景或者任务执行场景中时,基于模型对比结果可以使得用户快速、直观地获知到工程进度信息、任务执行进度等等,进一步提高了该方法的实用性。
图15为本发明实施例提供的响应于用户针对叠加显示区域输入的显示调整操作,对叠加显示区域中的显示数据进行调整的流程示意图;在上述实施例的基础上,参考附图15所示,本实施例提供了一种对叠加显示区域中的显示数据进行调整的实现方式,具体的,本实施例中的响应于用户针对叠加显示区域输入的显示调整操作,对叠加显示区域中的显示数据进行调整可以包括:
步骤S1501:获取用户针对与叠加显示区域相对应的区域调整控件输入的显示调整操作。
其中,为了能够实现对叠加显示区域中的显示数据进行调整,叠加显示区域可以配置有相对应的区域调整控件,该区域调整控件可以为以下至少之一:区域分割线、用于对叠加显示区域中各个层的显示区域进行调整的控件等等。在对叠加显示区域进行显示时,可以同时对于叠加显示区域相对应的区域调整控件进行显示,而后用户针对所显示的区域调整控件输入显示调整操作,从而可以获得用户针对区域调整控件所输入的显示调整操作。
可以理解的是,在区域调整控件的类型不同时,所获得的显示调整操作可以不同,例如:在区域调整控件为分割线时,所获得的显示调整操作可以为用户针对分割线所输入的拖动或者移动操作;在区域调整控件为用于对叠加显示区域中各个层的显示区域进行调整的控件时,所获得的显示调整操作可以为用户针对控件所输入的数据输入操作、数据选择操作或者点选或者配置操作等等。
在获取用户针对与叠加显示区域相对应的区域调整控件输入的显示调整操作之前,为了能够准确地获取到用户针对与叠加显示区域相对应的区域调整控件输入的显示调整操作,本实施例中的方法还可以包括:获取位于叠加显示区域内的三维模型数量;基于三维模型数量,确定与叠加显示区域相对应的区域调整控件,区域调整控件的数量小于或等于三维模型数量,且区域调整控件用于对处于不同叠层的三维模型的显示区域进行调整。
具体的,在获取到模型对比请求之后,可以基于模型对比请求确定位于叠加显示区域内的三维模型数量,三维模型数量可以为两个或两个以上。由于叠加显示区域用于对至少一个三维模型进行显示,而区域调整控件用于对处于不同叠层的三维模型的显示区域进行调整,即所获得的三维模型数量与叠加显示区域所对应的区域调整控件息息相关,因此,在获取到三维模型数量之后,可以对三维模型数量进行分析处理,以确定与叠加显示区域相对应的区域调整控件,该区域调整控件的数量小于或等于三维模型数量。
举例来说,在叠加显示区域中所显示的三维模型数量为两个时,与叠加显示区域所对应的区域调整控件的数量为1个,一个区域调整控件用于调整用于对三维模型进行显示的数据层,此时,区域调整控件的数量小于三维模型数量。在叠加显示区域中所显示的三维模型数量为三个时,与叠加显示区域所对应的区域调整控件的数量为3个,此时,区域调整控件的数量等于三维模型数量。
步骤S1502:响应于用户针对区域调整控件输入的显示调整操作,对叠加显示区域中的显示数据进行调整。
在获取到用户针对区域调整控件输入的显示调整操作之后,可以基于显示调整操作对叠加显示区域中的显示数据进行调整。在一些实例中,响应于用户针对区域调整控件输入的显示调整操作,对叠加显示区域中的显示数据进行调整可以包括:确定与区域调整控件相对应的可调区域;响应于用户针对区域调整控件的调整操作,在可调区域内对叠加显示区域中的显示数据进行调整。
具体的,为了能够准确地基于区域调整控件对叠加显示区域中的显示数据进行调整,在获取到用于对叠加显示区域中的显示数据进行调整的区域调整控件之后,可以确定与区域调整控件相对应的可调区域。需要注意的是,不同的区域调整控件可以对应有不同的可调区域,在确定与区域调整控件相对应的可调区域之后,响应于用户针对区域调整控件的调整操作(移动操作、拖动操作等),可以基于调整操作在可调区域内对叠加显示区域中的显示数据进行调整。
举例来说,参考附图16所示,叠加显示区域用于对三维模型A和三维模型B进行显示,当区域调整控件为分割线时,在区域调整控件处于位置a时,此时,叠加显示区域中的小半部分能够显示三维模型A的部分数据,叠加显示区域中的大半部分能够显示三维模型B的部分数据。用户可以根据需求将位于位置a的区域调整控件调整到位置b,在区域调整控件处于位置b时,叠加显示区域的大半部分能够显示三维模型A的部分数据,叠加显示区域的小半部分能够显示三维模型B的部分数据,此时,叠加显示区域中所能够显示的数据已经发生调整。
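以分割线作为区域调整控件时,「根据分割线位置划分三维模型A与三维模型B的可见区域宽度」可以示意如下(total_width、divider_x 等名称均为示意性假设):

```python
def split_regions(total_width, divider_x):
    """根据分割线位置 divider_x 计算左右两个模型图层的可见宽度。

    返回 (三维模型A可见宽度, 三维模型B可见宽度);
    分割线位置被限制在 [0, total_width] 范围内。
    """
    x = max(0, min(divider_x, total_width))
    return x, total_width - x
```

用户拖动分割线时,以新的分割线横坐标重新计算两层的裁剪宽度,即可实现两个三维模型显示数据的联动调整。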
具体实现时,对于需要进行模型对比操作的多个三维模型而言,多个三维模型可以基于时间轴所对应的顺序进行存储,在显示界面中,可以显示与多个三维模型相对应的时间轴,用户可以通过时间轴对显示界面中所显示的三维模型进行切换,并且,显示界面中还显示有用于实现模型对比操作的「对比」按钮,在用户时间轴查看的基础上点击「对比」按钮时,则可以打开模型对比的页面,选中的三维模型与最近生成的三维模型即可进行对比查看。
在对需要进行对比操作的显示界面中,显示界面的整体可以呈左右结构,左侧展示所选的三维模型,右侧为默认最近的三维模型,而后对两个三维模型重合叠加展示,中间有一条分割线用于区分两个模型。用户可以通过鼠标点击拉动中间的分割线进行左右移动,从而可以调整左右模型的显示大小,从而方便对比出模型的变化;同时,每个三维模型所在的区域都有日期展示,点击日期,可以选择不同日期中的三维模型,从而实现对需要进行对比操作的三维模型进行灵活更换操作。
本实施例中,通过获取用户针对与叠加显示区域相对应的区域调整控件输入的显示调整操作,在获取到用户针对区域调整控件输入的显示调整操作之后,可以基于显示调整操作对叠加显示区域中的显示数据进行调整,从而有效地实现了用户可以根据设计需求和使用需求随时对叠加显示区域中的显示数据进行调整,这样不仅有利于提高对模型对比结果进行获取的准确可靠性,并且能够满足不同用户对需要进行模型对比操作的三维模型进行查看的灵活需求,进一步提高了该方法的实用性。
图17为本发明实施例提供的又一种对利用无人机所采集的信息的显示方法的流程示意图;在上述任意一个实施例的基础上,参考附图17所示,本实施例中的方法除了能够实现模型对比操作、对无人机所采集的信息进行显示操作之外,还能够实现在三维地图中进行航线规划操作,具体的,本实施例中的方法还可以包括:
步骤S1701:获取用户在三维地图中所输入的航点编辑信息。
步骤S1702:基于航点编辑信息,确定位于三维地图中的至少两个空间航点,空间航点包括用于对无人机进行控制的高度信息。
步骤S1703:基于至少两个空间航点,生成与无人机相对应的三维航线信息。
下面对上述各个步骤的具体实现过程和实现效果进行详细说明:
步骤S1701:获取用户在三维地图中所输入的航点编辑信息。
在无人机进行作业之前,为了能够准确地控制无人机完成相对应的作业操作,用户需要先进行航线绘制操作,为了能够使得所绘制出来的航线信息具有更多的空间信息,尤其是高度信息,则可以获取并显示三维地图。
在显示三维地图之后,用户可以利用航线操作控件在三维地图中输入航线编辑操作,从而可以获取用户在三维地图中所输入的航点编辑信息,航点编辑信息用于标识能够构成三维航线信息的航点设置信息,航点设置信息可以包括在三维地图所在空间中的水平坐标信息、纵向坐标信息和高度信息。
需要注意的是,航点编辑信息不仅可以通过用户针对航线操作控件输入的航线编辑操作所生成,航线编辑信息也可以通过预设编辑指令所构成的航线文件进行分析获得,即航线文件可以是根据用户需求和设计需求进行指令编辑操作所生成的,不同的用户需求和设计需求可以生成不同的航线文件。在获取到航线文件之后,可以对航线文件进行指令识别操作,从而可以获得用户需要在三维地图中所输入的航点编辑信息。
步骤S1702:基于航点编辑信息,确定位于三维地图中的至少两个空间航点,空间航点包括用于对无人机进行控制的高度信息。
由于航点编辑信息能够标识构成三维航线信息的航点信息,即不同的航点编辑信息能够标识构成三维航线信息的不同航点信息,因此,在获得航点编辑信息之后,可以对航点编辑信息进行分析处理,以确定位于三维地图中的至少两个空间航点,该空间航点可以包括用于对无人机进行控制的高度信息。
为了能够提高对空间航点进行编辑的灵活可靠性,在确定位于三维地图中的至少两个空间航点之后,本实施例中的方法还可以包括:获取用户在三维地图中对任一空间航点所输入的航点调整操作;基于航点调整操作对空间航点进行调整。
在确定位于三维地图中的至少两个空间航点之后,用户可以直观地在三维地图中查看到空间航点的具体信息,而后用户可以识别所设置的空间航点是否满足预设需求,在空间航点满足预设需求时,则无需对空间航点进行任何的调整操作;在空间航点不满足预设需求时,则说明此时的空间航点并不满足预设需求。为了获得满足预设需求的空间航点,用户可以在三维地图中输入航点调整操作,航点调整操作可以包括用户针对空间航点的水平坐标信息的水平调整操作、针对空间航点的纵向坐标信息的纵向调整操作、针对空间航点的高度信息的高度调整操作。在获得航点调整操作之后,可以基于航点调整操作对空间航点进行调整,从而有效地实现了用户可以根据设计需求或者应用需求灵活地对空间航点进行灵活调整操作,进一步提高了对空间航点进行确定的稳定可靠性。
步骤S1703:基于至少两个空间航点,生成与无人机相对应的三维航线信息。
在获取到至少两个空间航点之后,可以基于至少两个空间航点生成与无人机相对应的三维航线信息,具体的,将至少两个空间航点中的相邻空间航点进行虚线或者实线连接,从而可以生成与无人机相对应的三维航线信息。
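上述「将相邻空间航点依次连线生成三维航线」的步骤可以示意如下(航点以 (x, y, 高度) 元组表示,仅为说明性草图):

```python
def build_route(waypoints):
    """将相邻空间航点两两连线,生成三维航线段列表。

    每段以 (起点, 终点) 表示,终点方向即为无人机的飞行方向;
    在界面上可将各段绘制为带箭头的实线或虚线。
    """
    return [(waypoints[i], waypoints[i + 1]) for i in range(len(waypoints) - 1)]
```

N 个空间航点将得到 N-1 条航线段,依次渲染即为完整的三维航线信息。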
在一些实例中,在生成与无人机相对应的三维航线信息之后,为了能够使得用户直观地获得所绘制 或者配置的三维航线信息,可以在三维地图中对三维航线信息进行显示。
具体实现时,在进行航线绘制操作的过程中,可以对所绘制的航线进行预览以供用户进行查看操作,例如,参考附图18所示,在对三维地图进行显示的界面中,用户可以根据需求在三维地图中点击添加空间航点,具体的,当在三维地图中的一个或多个位置进行点击操作时,则可以在上述一个或多个位置所对应的空中形成一个空间航点(具有高度信息),而后将所设置的空间航点与地面之间以虚线连接,该虚线的长短能够体现空间航点所对应的高度信息。当空间航点所对应的高度信息不满足用户的设计需求时,可以改变空间航点的高度信息,具体的,可以通过按住键盘ALT键上下拖动空间航点来改变空间航点的高度信息。在获取到空间航点之后,可以将相邻航点之间进行连线操作,所形成的连线即为航线,航线中有箭头指向无人机飞行的方向。
本实施例中,通过获取用户在三维地图中所输入的航点编辑信息,而后基于航点编辑信息,确定位于三维地图中的至少两个空间航点,并基于至少两个空间航点,生成与无人机相对应的三维航线信息,有效地实现了能够基于用户的设计需求和使用需求在三维地图中进行三维航线信息的绘制操作,由于三维航线信息中具有空间信息,这样在基于三维航线信息对无人机进行控制时,有效地提高了对无人机进行控制的安全可靠性。
在上述任意一个实施例的基础上,为了进一步提高该方法的实用性,本实施例中的方法还可以包括:
步骤S1801:获取无人机的实际飞行航线。
步骤S1802:在三维地图中,对实际飞行航线和三维航线信息进行区分显示。
在生成与无人机相对应的三维航线信息之后,可以基于三维航线信息控制无人机进行飞行。在基于三维航线信息控制无人机进行飞行的过程中,原则上无人机可以按照预先绘制的三维航线信息进行飞行,但是由于航线角度以及用于对无人机进行控制的飞控系统的复杂性与多样性,无人机所对应的实际飞行轨迹与所绘制的三维航线信息之间可能会存在差异,为了能够使得用户准确地了解到无人机在飞行过程中实际飞行轨迹与三维航线信息之间的差异,可以通过设置于无人机上的检测装置和/或定位装置获得无人机的实际飞行航线,实际飞行航线可以基于通过检测装置和/或定位装置所获得的无人机的实际飞行航点进行确定。在获取无人机的实际飞行航线之后,可以在三维地图中对实际飞行航线和三维航线信息进行区分显示。
在一些实例中,可以利用不同的颜色对实际飞行航线和三维航线信息进行区分显示,例如:可以利用蓝色的细线对实际飞行航线进行显示,利用灰色的细线对三维航线信息进行显示。或者,可以利用不同的航线显示方式对实际飞行航线和三维航线信息进行区分显示,例如,参考附图19所示,可以利用实线对实际飞行航线进行显示,利用虚线对三维航线信息进行显示等等,以使得用户可以更加直观地了解到实际飞行航线与三维航线信息之间的差异。
在上述任意一个实施例的基础上,为了进一步提高该方法的实用性,本实施例中的方法还可以包括:
步骤S1901:获取三维航线信息相对应的执行状态。
步骤S1902:在三维地图中,对处于不同执行状态的三维航线信息进行区分显示。
在生成与无人机相对应的三维航线信息之后,可以基于三维航线信息控制无人机进行飞行作业,需要注意的是,在基于三维航线信息控制无人机进行飞行作业时,三维航线信息可以基于无人机的飞行过程具有不同的执行状态,该执行状态可以包括以下任意之一:完成状态、未完成状态,三维航线信息中可以包括用于标识无人机已经完成的航线段和/或用于标识无人机未完成的航线段。
为了能够使得用户及时了解三维航线信息的执行状态,可以获取三维航线信息相对应的执行状态,在一些实例中,获取三维航线信息相对应的执行状态可以包括:获取与无人机相对应的实际位置信息,基于实际位置信息可以确定三维航线信息中所包括的已完成航线和未完成航线,已完成航线可以为三维航线信息中的至少一部分,在已完成航线为完整的三维航线信息时,则未完成航线为0;在未完成航线为完整的三维航线信息时,则已完成航线为0。对于三维航线信息中所包括的已完成航线而言,可以确定已完成航线所对应的执行状态为完成状态,对于三维航线信息中所包括的未完成航线而言,可以确定未完成航线所对应的执行状态为未完成状态。
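基于无人机实际位置将三维航线信息划分为已完成航线与未完成航线的一种简化示意如下(以距离实际位置最近的航点作为分界,分界航点同时保留在两段中以保证航线连续;仅为说明性假设,未考虑更精确的航段投影分界):

```python
def split_route_by_position(route, pos):
    """基于无人机实际位置 pos,将航线 route 划分为已完成与未完成两段。

    route 为 (x, y, z) 航点列表,pos 为当前实际位置;
    返回 (已完成航线航点, 未完成航线航点)。
    """
    # 平方距离即可用于比较,无需开方
    d2 = lambda a, b: sum((a[k] - b[k]) ** 2 for k in range(3))
    i = min(range(len(route)), key=lambda k: d2(route[k], pos))
    return route[:i + 1], route[i:]
```

两段航点分别以不同颜色(例如灰色与绿色)渲染,即可在三维地图中区分显示不同执行状态的航线。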
在获取三维航线信息相对应的执行状态之后,可以在三维地图中对处于不同执行状态的三维航线信息进行区分显示,在一些实例中,可以利用不同的颜色对完成状态的三维航线信息和未完成状态的三维航线信息进行区分显示,例如:可以利用灰色的细线对完成状态的三维航线信息进行显示,利用绿色的细线对未完成状态的三维航线信息进行显示,从而使得用户可以更加直观地了解到处于不同执行状态的三维航线信息。
在另一些实例中,在无人机执行航线飞行任务完毕后,可以获取到与无人机相对应的实际飞行轨迹,在航线查看与拍摄成果查看时,可以同时查看到所规划的三维航线信息和实际飞行轨迹,航线完成后的实际飞行轨迹在三维空间地图中可以用灰色线进行显示。
本实施例中,通过获取三维航线信息相对应的执行状态,而后在三维地图中对处于不同执行状态的三维航线信息进行区分显示,从而使得用户可以更加直观地了解到处于不同执行状态的三维航线信息,进一步提高了该方法使用的稳定可靠性。
图20为本发明实施例提供的又一种对利用无人机所采集的信息的显示方法的流程示意图;在上述任意一个实施例的基础上,参考附图20所示,本实施例中的方法不仅能够对利用无人机所采集的信息进行显示操作,还能够结合三维地图对利用无人机所采集的信息所生成的三维模型进行显示,具体的,本实施例中的方法还可以包括:
步骤S2001:获取待显示的三维模型,三维模型是基于无人机的采集信息所生成的。
步骤S2002:基于采集信息,确定与三维模型相对应的三维地图。
步骤S2003:对三维模型和三维地图进行结合显示。
下面对上述各个步骤的具体实现过程和实现效果进行详细说明:
步骤S2001:获取待显示的三维模型,三维模型是基于无人机的采集信息所生成的。
其中,无人机上可以设置有数据采集装置,在不同的应用场景中,数据采集装置的类型可以不同,例如:在测绘领域或者工程监测的应用场景中,数据采集装置可以为图像采集装置、定位装置等等,在无人机运行的过程中,通过数据采集装置可以获得针对一采集对象相对应的采集信息,在获得采集信息之后,可以基于采集信息建立与采集信息相对应的三维模型,需要注意的是,建立三维模型的执行主体可以为无人机、与无人机通信连接的云平台或者与云平台通信连接的信息的显示装置。
在一些实例中,获取待显示的三维模型可以包括:接收云端服务器发送的三维模型,三维模型是云端服务器基于无人机的采集信息所生成的。
为了能够结合三维地图对利用无人机所采集的信息所生成的三维模型进行显示,在无人机运行的过程中,通过数据采集装置可以获得针对一采集对象相对应的采集信息,在获得采集信息之后,可以将采集信息发送至云端服务器,在云端服务器获取到采集信息之后,可以对采集信息进行分析处理,以生成与采集信息相对应的待显示的三维模型,而后云端服务器可以存储待显示的三维模型,在云端服务器生成或者存储待显示的三维模型之后,可以将待显示的三维模型发送至信息的显示装置,使得信息的显示装置能够接收到云端服务器所发送的待显示的三维模型,从而有效地保证了对待显示的三维模型进行获取的准确可靠性。
步骤S2002:基于采集信息,确定与三维模型相对应的三维地图。
由于待显示的三维模型是基于无人机的采集信息所生成的,而不同的采集信息可以对应有不同的位置信息,因此,为了能够结合三维地图对利用无人机所采集的信息所生成的三维模型进行显示,在获取待显示的三维模型之后,可以对与三维模型相对应的采集信息进行分析处理,以确定与三维模型相对应的三维地图。具体的,基于采集信息,确定与三维模型相对应的三维地图可以包括:基于采集信息,确定与三维模型相对应的位置信息,而后基于位置信息即可确定与三维模型相对应的三维地图。
步骤S2003:对三维模型和三维地图进行结合显示。
在获得三维模型和三维地图之后,可以对三维模型和三维地图进行结合显示。为了能够提高对三维模型和三维地图进行结合显示的质量和效果,在一些实例中,在三维模型的数量为多个时,还能够实现对多个三维模型进行切换显示操作,具体的,本实施例中的对三维模型和三维地图进行结合显示可以包括:在多个三维模型中,确定需要进行详细显示的目标三维模型;利用显示界面的第一预设区域对目标三维模型和相对应的三维地图进行结合显示;利用显示界面的第二预设区域对除了目标三维模型外的其他三维模型进行缩略显示,其中,第二预设区域小于第一预设区域。
具体的,参考附图21所示,在待显示的三维模型的数量为多个时,为了能够通过显示界面对多个三维模型进行切换显示,显示界面可以包括第一预设区域和第二预设区域,第一预设区域所对应的显示区域大于第二预设区域所对应的显示区域,例如:第一预设区域可以为显示界面的上半部分,第二预设区域可以为显示界面的下半部分。当存在多个三维模型时,可以在多个三维模型中确定需要进行详细显示的目标三维模型,目标三维模型可以通过用户对任一三维模型所输入的模型选择操作进行确定。
在获取到目标三维模型之后,可以利用显示界面的第一预设区域对目标三维模型和相对应的三维地图进行结合显示,同时可以利用显示界面的第二预设区域对除了目标三维模型外的其他三维模型进行缩略显示,从而有效地实现了对三维模型和三维地图进行结合显示的质量和效果。
进一步的,在利用显示界面的第二预设区域对除了目标三维模型外的其他三维模型进行缩略显示之后,本实施例中的方法还能够实现对所显示的三维模型进行切换操作,具体的,本实施例中的方法还可以包括:
步骤S2101:获取用户对任一其他三维模型所输入的模型选择操作。
步骤S2102:将在第一预设区域中显示的目标三维模型切换为与模型选择操作相对应的三维模型。
具体的,在利用显示界面的第二预设区域对除了目标三维模型外的其他三维模型进行缩略显示之后,用户可以对任一的其他三维模型所输入的模型选择操作,例如:用户可以通过鼠标对任一的其他三维模型进行点选操作,从而可以获得用户对任一其他三维模型所输入的模型选择操作。在获得用户对任一其他三维模型所输入的模型选择操作之后,可以将在第一预设区域中显示的目标三维模型切换为与模型选择操作相对应的三维模型,从而有效地实现了对所显示的三维模型进行切换操作,进而提高了该方法使用的灵活可靠性。
本实施例中的方法不仅能够通过显示界面的不同区域对三维模型进行显示,还能够对与三维模型相对应的三维地图的显示类型进行调整操作,具体的,本实施例中的对三维模型和三维地图进行结合显示可以包括:确定三维地图的显示类型,显示类型包括以下任意之一:预设背景图、卫星地图、标准地图;基于三维地图的显示类型,对三维模型和三维地图进行结合显示。
在对三维模型进行显示时,能够对用于显示三维模型的三维地图背景进行切换,三维地图背景可以包括预先配置的能够支持的地图背景,其具体可以包括:预设背景图、卫星地图背景、标准地图背景等等,如图13a所示,三维模型的三维地图背景为预设背景图,即黑色条纹背景;如图13b所示,三维模型的三维地图背景为卫星地图背景;如图13c所示,三维模型的三维地图背景为标准地图背景。具体实现时,三维模型的三维地图背景可以默认为黑色背景图,以对三维模型进行单独展示,用户可以根据需求切换用于对三维模型进行显示的背景底图,例如:可以调出与三维地图相对应的三维地图背景,并可以将三维模型直接依附于三维地图之上进行展示,通过对三维模型的背景地图进行切换显示,有效地提高了对三维模型进行显示的灵活可靠性。
除了能够实现对与三维模型相对应的三维地图的显示类型进行调整,本实施例还能够实现对需要进行显示的多个三维模型按照预设顺序进行显示,具体的,在三维模型的数量为多个时,本实施例中的对三维模型和三维地图进行结合显示可以包括:获取用于对多个三维模型进行排序的参考信息;基于参考信息确定多个三维模型的显示序列;基于显示序列,对多个三维模型和所对应的三维地图依次进行结合显示。
在三维模型的数量为多个时,为了能够保证对多个三维模型进行显示的质量和效果,可以获取用于对多个三维模型进行排序的参考信息,该参考信息可以包括以下任意之一:选择顺序信息、时间信息;用于对多个三维模型进行排序的参考信息可以基于用户的配置操作或者选择操作获得。在获取用于对多个三维模型进行排序的参考信息之后,可以基于参考信息确定多个三维模型的显示序列,需要注意的是,在参考信息为选择顺序信息时,则可以基于用户对需要进行显示的多个三维模型所对应的选择顺序信息对多个三维模型进行排序操作,从而可以获得多个三维模型的显示序列。在参考信息为时间信息时,则可以基于用户对需要进行显示的多个三维模型所对应的时间信息对多个三维模型进行排序操作,从而可以获得多个三维模型的显示序列;在获得显示序列之后,可以基于显示序列对多个三维模型和所对应的三维地图依次进行结合显示。
为了能够保证所确定的多个三维模型的显示序列满足用户的设计需求或者应用需求,本实施例提供了一种基于参考信息确定多个三维模型的显示序列的实现方式,具体包括:基于参考信息,确定多个三维模型的初始序列;获取用户对初始序列输入的调整操作,获得多个三维模型的显示序列。
具体的,在获得参考信息之后,可以基于参考信息确定多个三维模型的初始序列,在多个三维模型的初始序列能够满足用户需求时,则无需对初始序列进行调整,而后可以将所获得的多个三维模型的初始序列确定为多个三维模型的显示序列。在多个三维模型的初始序列不满足用户需求时,用户可以对初始序列进行灵活调整操作,此时,用户可以通过显示界面输入对初始序列的调整操作,从而可以获得用户对初始序列输入的调整操作,而后可以基于调整操作对多个三维模型的初始序列进行调整操作,进而可以获得多个三维模型的显示序列,这样有效地保证对多个三维模型的显示序列进行获取的准确可靠性。
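基于参考信息(「选择顺序排序」或「时间排序」)确定多个三维模型显示序列的逻辑可以示意如下(models 中的字段名为示意性假设):

```python
def display_sequence(models, reference):
    """根据参考信息确定多个三维模型的显示序列。

    reference 为 "time"(按生成时间正序)或 "selection"(按用户选择顺序);
    models 为包含 "name"、"created"、"selected_at" 字段的字典列表。
    """
    if reference == "time":
        return sorted(models, key=lambda m: m["created"])
    return sorted(models, key=lambda m: m["selected_at"])
```

排序得到初始序列后,用户对序列的拖拽调整只需在该列表上交换元素位置即可,无需重新排序。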
具体实现时,在待显示的三维模型的数量为多个时,为了能够方便对多个三维模型进行显示操作,多个三维模型可以以列表或宫格视图的形式呈现,在用于对多个三维模型进行显示的界面中,可以显示有「多模型预览」按钮,当用户点击「多模型预览」按钮后,可以出现全局弹窗,全局弹窗分为上下两个内容区域,上半部分区域为以时间维度排列的模型预览图。下半部分为所选择内容的展示;
其中,上半部分模型支持多选,当选中模型后,所选的模型会在下半部分做展示,展示分为「选择顺序排序」和「时间排序」两种方式,当点选「选择顺序排序」时,下方呈现的多个三维模型以点击上半部分所选模型的顺序为维度排序,当点选「时间排序」时,所选模型将以时间的正序维度排序;下半部分展示的三维模型也支持模型的拖拽操作,以调整三维模型的显示顺序,并且,也可以点击模型右上角的「取消」控件,以移除所选择的三维模型。
在多模型预览功能下,选择多个三维模型、且按照时间排序的方式对多个三维模型进行排序后,多个三维模型将以时间轴的方式预览查看。具体的,用户可以进入模型预览页,分为上方的模型预览区域与下方的时间轴选择区域。模型展示区域显示最近的三维模型,同时下方的时间轴选中最近日期的模型,当点击时间轴中的三维模型后,模型预览区域的三维模型会发生对应的变化,时间轴以模型缩略图的样式存在。
进一步的,在需要对多个三维模型进行显示的时候,还可以根据需求对多个三维模型进行自动播放操作,即类似幻灯片自动播放的效果,当点击自动播放后,三维模型将根据时间轴顺序维度进行自动切换模型预览操作,从而实现了无需用户对多个三维模型进行切换显示操作,即可使得用户清楚地查看到多个三维模型各自的模型信息,进一步提高了该方法使用的灵活可靠性。
本实施例中,通过获取用户对任一其他三维模型所输入的模型选择操作,而后将在第一预设区域中显示的目标三维模型切换为与模型选择操作相对应的三维模型,从而有效地实现了对所显示的目标三维模型进行切换显示操作,进一步提高了该方法使用的灵活可靠性。
在一些实例中,为了进一步提高该方法使用的灵活可靠性,本实施例中的方法还可以包括:获取用户针对三维模型输入的执行操作;基于执行操作对三维模型进行移动、旋转或者缩放操作。
在获得待显示的三维模型之后,或者在对三维模型和三维地图进行结合显示之后,用户可以根据处理需求对待显示的三维模型或者已经显示的三维模型的显示视角进行调整操作,具体的,对于模型列表页面而言,模型列表页面的上半部分为模型展示区域,该模型展示区域可以支持模型的移动、模型的旋转、模型的放缩查看等操作,具体的,通过鼠标左键点击拖拽可以拖拽移动所显示的三维模型,Ctrl+鼠标左键可以旋转三维模型的方向,滚动鼠标中键滚轮可以放大或缩小三维模型的大小。模型列表页面的下半部分可以显示多个待显示的其他三维模型的缩略图,点击模型排列中不同三维模型的缩略图可以切换模型查看,同时基于用户的预设应用需求和设计需求,还可以进行三维模型的分发操作、在地图上显示操作、下载操作以及删除等操作。
本实施例中,通过获取用户针对三维模型输入的执行操作;而后基于执行操作对三维模型进行移动、旋转或者缩放操作,从而使得所显示的三维模型的角度能够满足用户的查看需求,方便用户对各个显示视角的三维模型进行查看,进一步提高了该方法的实用性。
在又一些实例中,为了提高该方法的实用性,本实施例中的方法还可以包括:响应于对三维模型的模型处理请求,对三维模型和三维地图进行处理操作,处理操作包括以下至少之一:分发操作、下载操作、删除操作。
在获得待显示的三维模型之后,或者在对三维模型和三维地图进行结合显示之后,用户可以根据处理需求对待显示的三维模型或者已经显示的三维模型进行相对应的处理操作,具体的,在模型处理请求为模型分发请求时,在用户针对三维模型存在模型分发需求时,用户可以针对三维模型输入模型分发需求,在获得对三维模型的模型处理请求之后,可以基于模型处理请求对三维模型和三维地图进行模型分发操作。
相类似的,在模型处理请求为模型下载请求时,在用户针对三维模型存在模型下载需求时,用户可以针对三维模型输入模型下载需求,在获得对三维模型的模型下载请求之后,可以基于模型下载请求对三维模型和三维地图进行模型下载操作。在模型处理请求为模型删除请求时,在用户针对三维模型存在模型删除需求时,用户可以针对三维模型输入模型删除需求,在获得对三维模型的模型删除请求之后,可以基于模型删除请求对三维模型和三维地图进行模型删除操作。
总的来说,上述实施例所提供的方法,实现了如下功能:(1)对通过无人机的采集信息所生成的三维模型进行展示操作,具体的,三维模型具有高程信息,且能够在具有三维显示能力的地图中进行显示,从而实现了将三维模型与三维地图进行结合显示,之后可以在网页端或者地面端进行展示,并且支持对所显示的三维地图进行移动、旋转、放缩等操作。(2)有助于通过三维模型了解实际对象或者实际环境的变化程度,具体的,由于三维模型的使用往往不是孤立的,因此,以时间的维度对比查看三维模型的变化,从而使得用户可以更加清楚、直观地了解现实世界中物体的物理变化程度。(3)实现三维模型的对比操作,通过更加精细化的对比模型,从而可以找到差异点。(4)能够将无人机的拍摄成果(照片/全景图/视频/点云)在三维空间地图中进行显示,使得用户可以直接通过三维空间地图即可查看到拍摄成果,提高了对拍摄成果进行显示的真实性;(5)在无人机航线飞行前、中、后,过程轨迹能够在三维地图中进行展示,从而便于用户及时了解无人机的飞行状态,进一步提高了该方法的实用性。
图22为本发明实施例提供的一种对利用无人机所获得的模型的对比方法的流程示意图;参考附图22所示,本实施例提供了一种对利用无人机所获得的模型的对比方法,该模型的对比方法的执行主体可以为模型的对比装置,该模型的对比装置可以实现为软件、或者软件和硬件的组合,其中,在模型的对比装置实现为硬件时,其具体可以为通过云平台、云网络、云端服务器与无人机进行通信连接的电子设备,该电子设备可以实现为手持终端、个人终端PC等等。当模型的对比装置实现为软件时,其可以安装在上述所例举的电子设备中。具体的,本实施例中的对利用无人机所获得的模型的对比方法可以包括:
步骤S2201:获取与至少两个三维模型相对应的模型对比请求,至少两个三维模型均是基于无人机的采集信息所生成的。
步骤S2202:基于模型对比请求将至少两个三维模型进行重合叠加显示,获得叠加显示区域,叠加显示区域用于对至少一个三维模型进行显示。
步骤S2203:响应于用户针对叠加显示区域输入的显示调整操作,对叠加显示区域中的显示数据进行调整,以便于确定至少两个三维模型之间的模型对比结果。
在一些实例中,响应于用户针对叠加显示区域输入的显示调整操作,对叠加显示区域中的显示数据进行调整可以包括:获取用户针对与叠加显示区域相对应的区域调整控件输入的显示调整操作;响应于用户针对区域调整控件输入的显示调整操作,对叠加显示区域中的显示数据进行调整。
在一些实例中,在获取用户针对与叠加显示区域相对应的区域调整控件输入的显示调整操作之前,本实施例中的方法还可以包括:获取位于叠加显示区域内的三维模型数量;基于三维模型数量,确定与叠加显示区域相对应的区域调整控件,区域调整控件的数量小于或等于三维模型数量,且区域调整控件用于对处于不同叠层的三维模型的显示区域进行调整。
在一些实例中,响应于用户针对区域调整控件输入的显示调整操作,对叠加显示区域中的显示数据进行调整可以包括:确定与区域调整控件相对应的可调区域;响应于用户针对区域调整控件的调整操作,在可调区域内对叠加显示区域中的显示数据进行调整。
在一些实例中,在获得叠加显示区域之后,本实施例中的方法还可以包括:获取任意一个三维模型所对应的三维地图;对叠加显示区域和三维地图进行结合显示。
在一些实例中,在获取与至少两个三维模型相对应的模型对比请求之前,本实施例中的方法还可以包括:接收云端服务器发送的至少两个三维模型,至少两个三维模型均是云端服务器基于无人机的采集信息所生成的。
图22所示的对利用无人机所获得的模型的对比方法的实现方式和实现效果与上述图13-图16所示实施例的方法的实现方式和实现效果相类似,本实施例未详细描述的部分,可参考对图13-图16所示实施例的相关说明。该技术方案的执行过程和技术效果参见图13-图16所示实施例中的描述,在此不再赘述。
图23为本发明实施例提供的一种用于对无人机进行控制的航线的生成方法的流程示意图;参考附图23所示,本实施例提供了一种用于对无人机进行控制的航线的生成方法,该航线的生成方法的执行主体可以为航线的生成装置,该航线的生成装置可以实现为软件、或者软件和硬件的组合,其中,在航线的生成装置实现为硬件时,其具体可以为通过云平台、云网络、云端服务器与无人机进行通信连接的电子设备,该电子设备可以实现为手持终端、个人终端PC等等。当航线的生成装置实现为软件时,其可以安装在上述所例举的电子设备中。具体的,本实施例中的用于对无人机进行控制的航线的生成方法可以包括:
步骤S2301:获取用户在三维地图中所输入的航点编辑信息。
步骤S2302:基于航点编辑信息,确定位于三维地图中的至少两个空间航点,空间航点包括用于对无人机进行控制的高度信息。
步骤S2303:基于至少两个空间航点,生成与无人机相对应的三维航线信息。
在一些实例中,在确定位于三维地图中的至少两个空间航点之后,本实施例中的方法还可以包括:获取用户在三维地图中对任一空间航点所输入的航点调整操作;基于航点调整操作对空间航点进行调整。
在一些实例中,本实施例中的方法还可以包括:获取无人机的实际飞行航线;在三维地图中,对实际飞行航线和三维航线信息进行区分显示。
在一些实例中,本实施例中的方法还可以包括:获取三维航线信息相对应的执行状态;在三维地图中,对处于不同执行状态的三维航线信息进行区分显示。
图23所示的用于对无人机进行控制的航线的生成方法的实现方式和实现效果与上述图17-图19所示实施例的方法的实现方式和实现效果相类似,本实施例未详细描述的部分,可参考对图17-图19所示实施例的相关说明。该技术方案的执行过程和技术效果参见图17-图19所示实施例中的描述,在此不再赘述。
图24为本发明实施例提供的一种对利用无人机所获得的模型的显示方法的流程示意图;参考附图24所示,本实施例提供了一种对利用无人机所获得的模型的显示方法,该模型的显示方法的执行主体可以为模型的显示装置,该模型的显示装置可以实现为软件、或者软件和硬件的组合,其中,在模型的显示装置实现为硬件时,其具体可以为通过云平台、云网络、云端服务器与无人机进行通信连接的电子设备,该电子设备可以实现为手持终端、个人终端PC等等。当模型的显示装置实现为软件时,其可以安装在上述所例举的电子设备中。具体的,本实施例中的对利用无人机所获得的模型的显示方法可以包括:
步骤S2401:获取待显示的三维模型,三维模型是基于无人机的采集信息所生成的。
步骤S2402:基于采集信息,确定与三维模型相对应的三维地图。
步骤S2403:对三维模型和三维地图进行结合显示。
在一些实例中,在三维模型的数量为多个时,对三维模型和三维地图进行结合显示可以包括:在多个三维模型中,确定需要进行详细显示的目标三维模型;利用显示界面的第一预设区域对目标三维模型和相对应的三维地图进行结合显示;利用显示界面的第二预设区域对除了目标三维模型外的其他三维模型进行缩略显示,其中,第二预设区域小于第一预设区域。
在一些实例中,在利用显示界面的第二预设区域对除了目标三维模型外的其他三维模型进行缩略显示之后,本实施例中的方法还可以包括:获取用户对任一其他三维模型所输入的模型选择操作;将在第一预设区域中显示的目标三维模型切换为与模型选择操作相对应的三维模型。
在一些实例中,对三维模型和三维地图进行结合显示可以包括:确定三维地图的显示类型,显示类型包括以下任意之一:预设背景图、卫星地图、标准地图;基于三维地图的显示类型,对三维模型和三维地图进行结合显示。
在一些实例中,本实施例中的方法还可以包括:获取用户针对三维模型输入的执行操作;基于执行操作对三维模型进行移动、旋转或者缩放操作。
在一些实例中,本实施例中的方法还可以包括:响应于对三维模型的模型处理请求,对三维模型和三维地图进行处理操作,处理操作包括以下至少之一:分发操作、下载操作、删除操作。
在一些实例中,在三维模型的数量为多个时,对三维模型和三维地图进行结合显示可以包括:获取用于对多个三维模型进行排序的参考信息;基于参考信息确定多个三维模型的显示序列;基于显示序列,对多个三维模型和所对应的三维地图依次进行结合显示。
在一些实例中,参考信息包括以下任意之一:选择顺序信息、时间信息。
在一些实例中,基于参考信息确定多个三维模型的显示序列可以包括:基于参考信息,确定多个三维模型的初始序列;获取用户对初始序列输入的调整操作,获得多个三维模型的显示序列。
在一些实例中,获取待显示的三维模型包括:接收云端服务器发送的三维模型,三维模型是云端服务器基于无人机的采集信息所生成的。
图24所示的对利用无人机所获得的模型的显示方法的实现方式和实现效果与上述图20-图21所示实施例的方法的实现方式和实现效果相类似,本实施例未详细描述的部分,可参考对图20-图21所示实施例的相关说明。该技术方案的执行过程和技术效果参见图20-图21所示实施例中的描述,在此不再赘述。
图25为本发明实施例提供的一种对利用无人机所采集的信息的显示装置的结构示意图;参考附图25所示,本实施例提供了一种对利用无人机所采集的信息的显示装置,该信息的显示装置用于执行上述图2所示的信息的显示方法,具体的,信息的显示装置可以包括:
存储器2501,用于存储计算机程序;
处理器2502,用于运行存储器2501中存储的计算机程序以实现:
获取无人机的拍摄信息;
确定与拍摄信息相对应的无人机拍摄位置;
当存在与拍摄信息相对应的拍摄对象位置,则在拍摄信息所对应的地图中,对无人机拍摄位置和拍摄对象位置进行标记显示。
其中,信息的显示装置的结构中还可以包括通信接口2503,用于实现信息的显示装置与其他设备或通信网络通信。
图25所示的对利用无人机所采集的信息的显示装置的实现方式和实现效果与上述图1-图12所示实施例的方法的实现方式和实现效果相类似,本实施例未详细描述的部分,可参考对图1-图12所示实施例的相关说明。该技术方案的执行过程和技术效果参见图1-图12所示实施例中的描述,在此不再赘述。
图26为本发明实施例提供的一种对利用无人机所获得的模型的对比装置的结构示意图;参考附图26所示,本实施例提供了一种对利用无人机所获得的模型的对比装置,该模型的对比装置用于执行上述图13所示的对利用无人机所获得的模型的对比方法,具体的,模型的对比装置可以包括:
存储器2601,用于存储计算机程序;
处理器2602,用于运行存储器2601中存储的计算机程序以实现:
获取与至少两个三维模型相对应的模型对比请求,至少两个三维模型均是基于无人机的采集信息所生成的;
基于模型对比请求将至少两个三维模型进行重合叠加显示,获得叠加显示区域,叠加显示区域用于对至少一个三维模型进行显示;
响应于用户针对叠加显示区域输入的显示调整操作,对叠加显示区域中的显示数据进行调整,以便于确定至少两个三维模型之间的模型对比结果。
其中,模型的对比装置的结构中还可以包括通信接口2603,用于实现模型的对比装置与其他设备或通信网络通信。
图26所示的对利用无人机所获得的模型的对比装置的实现方式和实现效果与上述图13-图16所示实施例的方法的实现方式和实现效果相类似,本实施例未详细描述的部分,可参考对图13-图16所示实施例的相关说明。该技术方案的执行过程和技术效果参见图13-图16所示实施例中的描述,在此不再赘述。
图27为本发明实施例提供的一种用于对无人机进行控制的航线的生成装置的结构示意图;参考附图27所示,本实施例提供了一种用于对无人机进行控制的航线的生成装置,该航线的生成装置用于执行上述图17所示的用于对无人机进行控制的航线的生成方法,具体的,航线的生成装置可以包括:
存储器2701,用于存储计算机程序;
处理器2702,用于运行存储器2701中存储的计算机程序以实现:
获取用户在三维地图中所输入的航点编辑信息;
基于航点编辑信息,确定位于三维地图中的至少两个空间航点,空间航点包括用于对无人机进行控制的高度信息;
基于至少两个空间航点,生成与无人机相对应的三维航线信息。
其中,航线的生成装置的结构中还可以包括通信接口2703,用于实现航线的生成装置与其他设备或通信网络通信。
图27所示的用于对无人机进行控制的航线的生成装置的实现方式和实现效果与上述图17-图19所示实施例的方法的实现方式和实现效果相类似,本实施例未详细描述的部分,可参考对图17-图19所示实施例的相关说明。该技术方案的执行过程和技术效果参见图17-图19所示实施例中的描述,在此不再赘述。
图28为本发明实施例提供的一种对利用无人机所获得的模型的显示装置的结构示意图;参考附图28所示,本实施例提供了一种对利用无人机所获得的模型的显示装置,该模型的显示装置用于执行上述图20所示的对利用无人机所获得的模型的显示方法,具体的,模型的显示装置可以包括:
存储器2801,用于存储计算机程序;
处理器2802,用于运行存储器2801中存储的计算机程序以实现:
获取待显示的三维模型,三维模型是基于无人机的采集信息所生成的;
基于采集信息,确定与三维模型相对应的三维地图;
对三维模型和三维地图进行结合显示。
其中,模型的显示装置的结构中还可以包括通信接口2803,用于实现模型的显示装置与其他设备或通信网络通信。
图28所示的对利用无人机所获得的模型的显示装置的实现方式和实现效果与上述图20-图21所示实施例的方法的实现方式和实现效果相类似,本实施例未详细描述的部分,可参考对图20-图21所示实施例的相关说明。该技术方案的执行过程和技术效果参见图20-图21所示实施例中的描述,在此不再赘述。
另外,本发明实施例提供了一种计算机存储介质,用于储存电子设备所用的计算机软件指令,其包含用于执行上述图1-图12所示方法实施例中对利用无人机所采集的信息的显示方法所涉及的程序。
本发明实施例提供了一种计算机存储介质,用于储存电子设备所用的计算机软件指令,其包含用于执行上述图13-图16所示方法实施例中对利用无人机所获得的模型的对比方法所涉及的程序。
本发明实施例提供了一种计算机存储介质,用于储存电子设备所用的计算机软件指令,其包含用于执行上述图17-图19所示方法实施例中用于对无人机进行控制的航线的生成方法所涉及的程序。
本发明实施例提供了一种计算机存储介质,用于储存电子设备所用的计算机软件指令,其包含用于执行上述图20-图21所示方法实施例中对利用无人机所获得的模型的显示方法所涉及的程序。
此外,本发明实施例提供了一种计算机程序产品,包括:计算机程序,当计算机程序被电子设备的处理器执行时,使处理器执行图1-图12所示方法实施例中对利用无人机所采集的信息的显示方法。
本发明实施例提供了一种计算机程序产品,包括:计算机程序,当计算机程序被电子设备的处理器执行时,使处理器执行图13-图16所示方法实施例中对利用无人机所获得的模型的对比方法。
本发明实施例提供了一种计算机程序产品,包括:计算机程序,当计算机程序被电子设备的处理器执行时,使处理器执行图17-图19所示方法实施例中用于对无人机进行控制的航线的生成方法。
本发明实施例提供了一种计算机程序产品,包括:计算机程序,当计算机程序被电子设备的处理器执行时,使处理器执行图20-图21所示方法实施例中对利用无人机所获得的模型的显示方法。
图29为本发明实施例提供的一种无人机系统的结构示意图一;参考附图29所示,本实施例提供了一种无人机系统,该无人机系统可以包括:
无人机2901;
上述图25实施例中的对利用无人机所采集的信息的显示装置2902,用于通过云平台2903对无人机2901进行控制。
其中,云平台2903用于设置无人机的飞行作业任务、用户的规划航线等操作,无人机2901能够执行通过云平台2903所设置的飞行作业任务,或者能够按照用户的规划航线进行作业等等,无人机2901上可以设置有图像采集装置,通过图像采集装置可以获得无人机的拍摄成果(图像信息、视频信息、点云信息等),并可以将拍摄成果直接传输至云平台2903或者通过遥控器上传至云平台2903。在云平台2903获得拍摄成果之后,可以通过信息的显示装置2902对拍摄成果进行显示。在一些实例中,其他终端设备也可以根据设计需求或者应用需求从云平台2903处下载并展示拍摄成果。
本实施例中的无人机系统的实现方式和实现效果与上述图25所示实施例的对利用无人机所采集的信息的显示装置的实现方式和实现效果相类似,本实施例未详细描述的部分,可参考对图25所示实施例的相关说明。该技术方案的执行过程和技术效果参见图25所示实施例中的描述,在此不再赘述。
图30为本发明实施例提供的一种无人机系统的结构示意图二;参考附图30所示,本实施例提供了另一种无人机系统,该无人机系统可以包括:
无人机3001;
上述图26实施例中的对利用无人机所获得的模型的对比装置3002,用于通过云平台3003对无人机3001进行控制。
本实施例中的无人机系统的实现方式和实现效果与上述图26所示实施例的对利用无人机所获得的模型的对比装置的实现方式和实现效果相类似,本实施例未详细描述的部分,可参考对图26所示实施例的相关说明。该技术方案的执行过程和技术效果参见图26所示实施例中的描述,在此不再赘述。
图31为本发明实施例提供的一种无人机系统的结构示意图三;参考附图31所示,本实施例提供了又一种无人机系统,该无人机系统可以包括:
无人机3101;
上述图27实施例中的用于对无人机进行控制的航线的生成装置3102,用于通过云平台3103对无人机3101进行控制。
本实施例中的无人机系统的实现方式和实现效果与上述图27所示实施例的用于对无人机进行控制的航线的生成装置的实现方式和实现效果相类似,本实施例未详细描述的部分,可参考对图27所示实施例的相关说明。该技术方案的执行过程和技术效果参见图27所示实施例中的描述,在此不再赘述。
图32为本发明实施例提供的一种无人机系统的结构示意图四;参考附图32所示,本实施例提供了又一种无人机系统,该无人机系统可以包括:
无人机3201;
上述图28实施例中的对利用无人机所获得的模型的显示装置3202,用于通过云平台3203对无人机3201进行控制。
本实施例中的无人机系统的实现方式和实现效果与上述图28所示实施例的对利用无人机所获得的模型的显示装置的实现方式和实现效果相类似,本实施例未详细描述的部分,可参考对图28所示实施例的相关说明。该技术方案的执行过程和技术效果参见图28所示实施例中的描述,在此不再赘述。
以上各个实施例中的技术方案、技术特征在不相冲突的情况下均可以单独使用,或者进行组合,只要未超出本领域技术人员的认知范围,均属于本申请保护范围内的等同实施例。
在本发明所提供的几个实施例中,应该理解到,所揭露的相关检测装置和方法,可以通过其它的方式实现。例如,以上所描述的检测装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,检测装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得计算机处理器(processor)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁盘或者光盘等各种可以存储程序代码的介质。
以上所述仅为本发明的实施例,并非因此限制本发明的专利范围,凡是利用本发明说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本发明的专利保护范围内。
最后应说明的是:以上各实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述各实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的范围。
Claims (92)
- 一种对利用无人机所采集的信息的显示方法,其特征在于,包括:获取无人机的拍摄信息;确定与所述拍摄信息相对应的无人机拍摄位置;当存在与所述拍摄信息相对应的拍摄对象位置,则在所述拍摄信息所对应的地图中,对所述无人机拍摄位置和所述拍摄对象位置进行标记显示。
- 根据权利要求1所述的方法,其特征在于,所述拍摄信息包括全景图,所述方法还包括:获取所述全景图的拍摄位置;基于所述拍摄位置,确定与所述全景图相对应的三维地图;将所述全景图自动加载至所述三维地图中进行标记显示。
- 根据权利要求2所述的方法,其特征在于,在将所述全景图自动加载至所述三维地图中进行标记显示之后,所述方法还包括:在所述三维地图中,获取用户对所述全景图输入的角度调整操作;基于所述角度调整操作,确定所述全景图的显示视角;基于所述显示视角对所述全景图进行显示。
- 根据权利要求1所述的方法,其特征在于,所述拍摄信息包括视频信息,所述方法还包括:获取与所述视频信息中各个视频帧相对应的拍摄位置;在对所述视频信息进行播放时,在所述地图中对正在进行播放的视频帧所对应的当前拍摄位置进行显示。
- 根据权利要求1所述的方法,其特征在于,所述拍摄信息包括点云信息,所述方法还包括:获取与所述点云信息相对应的点云模型;确定与所述点云模型相对应的模型原点以及与所述模型原点相对应的位置信息;基于所述位置信息,在所述地图中对所述点云模型进行显示。
- 根据权利要求1所述的方法,其特征在于,所述方法还包括:获取与至少两个三维模型相对应的模型对比请求,所述至少两个三维模型均是基于无人机的采集信息所生成的;基于所述模型对比请求将所述至少两个三维模型进行重合叠加显示,获得叠加显示区域,所述叠加显示区域用于对至少一个三维模型进行显示;响应于用户针对所述叠加显示区域输入的显示调整操作,对所述叠加显示区域中的显示数据进行调整,以便于确定至少两个三维模型之间的模型对比结果。
- 根据权利要求6所述的方法,其特征在于,响应于用户针对所述叠加显示区域输入的显示调整操作,对所述叠加显示区域中的显示数据进行调整,包括:获取用户针对与所述叠加显示区域相对应的区域调整控件输入的显示调整操作;响应于用户针对所述区域调整控件输入的显示调整操作,对所述叠加显示区域中的显示数据进行调整。
- 根据权利要求7所述的方法,其特征在于,在获取用户针对与所述叠加显示区域相对应的区域调整控件输入的显示调整操作之前,所述方法还包括:获取位于所述叠加显示区域内的三维模型数量;基于所述三维模型数量,确定与所述叠加显示区域相对应的区域调整控件,所述区域调整控件的数量小于或等于所述三维模型数量,且所述区域调整控件用于对处于不同叠层的三维模型的显示区域进行调整。
- 根据权利要求7所述的方法,其特征在于,响应于用户针对所述区域调整控件输入的显示调整操作,对所述叠加显示区域中的显示数据进行调整,包括:确定与所述区域调整控件相对应的可调区域;响应于用户针对所述区域调整控件的调整操作,在所述可调区域内对所述叠加显示区域中的显示数据进行调整。
- 根据权利要求6所述的方法,其特征在于,在获得叠加显示区域之后,所述方法还包括:获取任意一个三维模型所对应的三维地图;对所述叠加显示区域和所述三维地图进行结合显示。
- 根据权利要求6所述的方法,其特征在于,所述获取与至少两个三维模型相对应的模型对比请求之前,还包括:接收云端服务器发送的所述至少两个三维模型,所述至少两个三维模型均是所述云端服务器基于所述无人机的采集信息所生成的。
- 根据权利要求1所述的方法,其特征在于,所述方法还包括:获取用户在三维地图中所输入的航点编辑信息;基于所述航点编辑信息,确定位于所述三维地图中的至少两个空间航点,所述空间航点包括用于对无人机进行控制的高度信息;基于所述至少两个空间航点,生成与所述无人机相对应的三维航线信息。
- 根据权利要求12所述的方法,其特征在于,在确定位于所述三维地图中的至少两个空间航点之后,所述方法还包括:获取用户在所述三维地图中对任一空间航点所输入的航点调整操作;基于所述航点调整操作对所述空间航点进行调整。
- 根据权利要求12所述的方法,其特征在于,所述方法还包括:获取所述无人机的实际飞行航线;在所述三维地图中,对所述实际飞行航线和所述三维航线信息进行区分显示。
- 根据权利要求12所述的方法,其特征在于,所述方法还包括:获取所述三维航线信息相对应的执行状态;在所述三维地图中,对处于不同执行状态的三维航线信息进行区分显示。
- 根据权利要求1所述的方法,其特征在于,所述方法还包括:获取待显示的三维模型,所述三维模型是基于无人机的采集信息所生成的;基于所述采集信息,确定与所述三维模型相对应的三维地图;对所述三维模型和所述三维地图进行结合显示。
- 根据权利要求16所述的方法,其特征在于,在所述三维模型的数量为多个时,对所述三维模型和所述三维地图进行结合显示,包括:在多个三维模型中,确定需要进行详细显示的目标三维模型;利用显示界面的第一预设区域对所述目标三维模型和相对应的三维地图进行结合显示;利用所述显示界面的第二预设区域对除了所述目标三维模型外的其他三维模型进行缩略显示,其中,所述第二预设区域小于所述第一预设区域。
- 根据权利要求17所述的方法,其特征在于,在利用所述显示界面的第二预设区域对除了所述目标三维模型外的其他三维模型进行缩略显示之后,所述方法还包括:获取用户对任一其他三维模型所输入的模型选择操作;将在所述第一预设区域中显示的目标三维模型切换为与所述模型选择操作相对应的三维模型。
- 根据权利要求16所述的方法,其特征在于,对所述三维模型和所述三维地图进行结合显示,包括:确定所述三维地图的显示类型,所述显示类型包括以下任意之一:预设背景图、卫星地图、标准地图;基于所述三维地图的显示类型,对所述三维模型和所述三维地图进行结合显示。
- 根据权利要求16所述的方法,其特征在于,所述方法还包括:获取用户针对所述三维模型输入的执行操作;基于所述执行操作对所述三维模型进行移动、旋转或者缩放操作。
- 根据权利要求16所述的方法,其特征在于,所述方法还包括:响应于对所述三维模型的模型处理请求,对所述三维模型和所述三维地图进行处理操作,所述处理操作包括以下至少之一:分发操作、下载操作、删除操作。
- 根据权利要求16所述的方法,其特征在于,在所述三维模型的数量为多个时,对所述三维模型和所述三维地图进行结合显示,包括:获取用于对多个三维模型进行排序的参考信息;基于所述参考信息确定所述多个三维模型的显示序列;基于所述显示序列,对多个三维模型和所对应的三维地图依次进行结合显示。
- 根据权利要求22所述的方法,其特征在于,所述参考信息包括以下任意之一:选择顺序信息、时间信息。
- 根据权利要求22所述的方法,其特征在于,基于所述参考信息确定所述多个三维模型的显示序列,包括:基于所述参考信息,确定所述多个三维模型的初始序列;获取用户对所述初始序列输入的调整操作,获得所述多个三维模型的显示序列。
- 根据权利要求16所述的方法,其特征在于,所述获取待显示的三维模型包括:接收云端服务器发送的所述三维模型,所述三维模型是所述云端服务器基于所述无人机的采集信息所生成的。
- 一种对利用无人机所获得的模型的对比方法,其特征在于,包括:获取与至少两个三维模型相对应的模型对比请求,所述至少两个三维模型均是基于无人机的采集信息所生成的;基于所述模型对比请求将所述至少两个三维模型进行重合叠加显示,获得叠加显示区域,所述叠加显示区域用于对至少一个三维模型进行显示;响应于用户针对所述叠加显示区域输入的显示调整操作,对所述叠加显示区域中的显示数据进行调整,以便于确定至少两个三维模型之间的模型对比结果。
- 根据权利要求26所述的方法,其特征在于,响应于用户针对所述叠加显示区域输入的显示调整操作,对所述叠加显示区域中的显示数据进行调整,包括:获取用户针对与所述叠加显示区域相对应的区域调整控件输入的显示调整操作;响应于用户针对所述区域调整控件输入的显示调整操作,对所述叠加显示区域中的显示数据进行调整。
- 根据权利要求27所述的方法,其特征在于,在获取用户针对与所述叠加显示区域相对应的区域调整控件输入的显示调整操作之前,所述方法还包括:获取位于所述叠加显示区域内的三维模型数量;基于所述三维模型数量,确定与所述叠加显示区域相对应的区域调整控件,所述区域调整控件的数量小于或等于所述三维模型数量,且所述区域调整控件用于对处于不同叠层的三维模型的显示区域进行调整。
- 根据权利要求27所述的方法,其特征在于,响应于用户针对所述区域调整控件输入的显示调整操作,对所述叠加显示区域中的显示数据进行调整,包括:确定与所述区域调整控件相对应的可调区域;响应于用户针对所述区域调整控件的调整操作,在所述可调区域内对所述叠加显示区域中的显示数据进行调整。
- 根据权利要求26所述的方法,其特征在于,在获得叠加显示区域之后,所述方法还包括:获取任意一个三维模型所对应的三维地图;对所述叠加显示区域和所述三维地图进行结合显示。
- 根据权利要求26所述的方法,其特征在于,所述获取与至少两个三维模型相对应的模型对比请求之前,还包括:接收云端服务器发送的所述至少两个三维模型,所述至少两个三维模型均是所述云端服务器基于所述无人机的采集信息所生成的。
- 一种用于对无人机进行控制的航线的生成方法,其特征在于,包括:获取用户在三维地图中所输入的航点编辑信息;基于所述航点编辑信息,确定位于所述三维地图中的至少两个空间航点,所述空间航点包括用于对无人机进行控制的高度信息;基于所述至少两个空间航点,生成与所述无人机相对应的三维航线信息。
- 根据权利要求32所述的方法,其特征在于,在确定位于所述三维地图中的至少两个空间航点之后,所述方法还包括:获取用户在所述三维地图中对任一空间航点所输入的航点调整操作;基于所述航点调整操作对所述空间航点进行调整。
- 根据权利要求32所述的方法,其特征在于,所述方法还包括:获取所述无人机的实际飞行航线;在所述三维地图中,对所述实际飞行航线和所述三维航线信息进行区分显示。
- 根据权利要求32所述的方法,其特征在于,所述方法还包括:获取所述三维航线信息相对应的执行状态;在所述三维地图中,对处于不同执行状态的三维航线信息进行区分显示。
- 一种对利用无人机所获得的模型的显示方法,其特征在于,包括:获取待显示的三维模型,所述三维模型是基于无人机的采集信息所生成的;基于所述采集信息,确定与所述三维模型相对应的三维地图;对所述三维模型和所述三维地图进行结合显示。
- 根据权利要求36所述的方法,其特征在于,在所述三维模型的数量为多个时,对所述三维模型和所述三维地图进行结合显示,包括:在多个三维模型中,确定需要进行详细显示的目标三维模型;利用显示界面的第一预设区域对所述目标三维模型和相对应的三维地图进行结合显示;利用所述显示界面的第二预设区域对除了所述目标三维模型外的其他三维模型进行缩略显示,其中,所述第二预设区域小于所述第一预设区域。
- 根据权利要求37所述的方法,其特征在于,在利用所述显示界面的第二预设区域对除了所述目标三维模型外的其他三维模型进行缩略显示之后,所述方法还包括:获取用户对任一其他三维模型所输入的模型选择操作;将在所述第一预设区域中显示的目标三维模型切换为与所述模型选择操作相对应的三维模型。
- 根据权利要求36所述的方法,其特征在于,对所述三维模型和所述三维地图进行结合显示,包括:确定所述三维地图的显示类型,所述显示类型包括以下任意之一:预设背景图、卫星地图、标准地图;基于所述三维地图的显示类型,对所述三维模型和所述三维地图进行结合显示。
- 根据权利要求36所述的方法,其特征在于,所述方法还包括:获取用户针对所述三维模型输入的执行操作;基于所述执行操作对所述三维模型进行移动、旋转或者缩放操作。
- 根据权利要求36所述的方法,其特征在于,所述方法还包括:响应于对所述三维模型的模型处理请求,对所述三维模型和所述三维地图进行处理操作,所述处理操作包括以下至少之一:分发操作、下载操作、删除操作。
- 根据权利要求36所述的方法,其特征在于,在所述三维模型的数量为多个时,对所述三维模型和所述三维地图进行结合显示,包括:获取用于对多个三维模型进行排序的参考信息;基于所述参考信息确定所述多个三维模型的显示序列;基于所述显示序列,对多个三维模型和所对应的三维地图依次进行结合显示。
- 根据权利要求42所述的方法,其特征在于,所述参考信息包括以下任意之一:选择顺序信息、时间信息。
- 根据权利要求42所述的方法,其特征在于,基于所述参考信息确定所述多个三维模型的显示序列,包括:基于所述参考信息,确定所述多个三维模型的初始序列;获取用户对所述初始序列输入的调整操作,获得所述多个三维模型的显示序列。
- 根据权利要求36所述的方法,其特征在于,所述获取待显示的三维模型包括:接收云端服务器发送的所述三维模型,所述三维模型是所述云端服务器基于所述无人机的采集信息所生成的。
- 一种对利用无人机所采集的信息的显示装置,其特征在于,包括:存储器,用于存储计算机程序;处理器,用于运行所述存储器中存储的计算机程序以实现:获取无人机的拍摄信息;确定与所述拍摄信息相对应的无人机拍摄位置;当存在与所述拍摄信息相对应的拍摄对象位置,则在所述拍摄信息所对应的地图中,对所述无人机拍摄位置和所述拍摄对象位置进行标记显示。
- 根据权利要求46所述的装置,其特征在于,所述拍摄信息包括全景图,所述处理器还用于:获取所述全景图的拍摄位置;基于所述拍摄位置,确定与所述全景图相对应的三维地图;将所述全景图自动加载至所述三维地图中进行标记显示。
- 根据权利要求47所述的装置,其特征在于,在将所述全景图自动加载至所述三维地图中进行标记显示之后,所述处理器还用于:在所述三维地图中,获取用户对所述全景图输入的角度调整操作;基于所述角度调整操作,确定所述全景图的显示视角;基于所述显示视角对所述全景图进行显示。
- 根据权利要求46所述的装置,其特征在于,所述拍摄信息包括视频信息,所述处理器还用于:获取与所述视频信息中各个视频帧相对应的拍摄位置;在对所述视频信息进行播放时,在所述地图中对正在进行播放的视频帧所对应的当前拍摄位置进行显示。
- 根据权利要求46所述的装置,其特征在于,所述拍摄信息包括点云信息,所述处理器还用于:获取与所述点云信息相对应的点云模型;确定与所述点云模型相对应的模型原点以及与所述模型原点相对应的位置信息;基于所述位置信息,在所述地图中对所述点云模型进行显示。
- 根据权利要求46所述的装置,其特征在于,所述处理器还用于:获取与至少两个三维模型相对应的模型对比请求,所述至少两个三维模型均是基于无人机的采集信息所生成的;基于所述模型对比请求将所述至少两个三维模型进行重合叠加显示,获得叠加显示区域,所述叠加显示区域用于对至少一个三维模型进行显示;响应于用户针对所述叠加显示区域输入的显示调整操作,对所述叠加显示区域中的显示数据进行调整,以便于确定至少两个三维模型之间的模型对比结果。
- 根据权利要求51所述的装置,其特征在于,在所述处理器响应于用户针对所述叠加显示区域输入的显示调整操作,对所述叠加显示区域中的显示数据进行调整时,所述处理器用于:获取用户针对与所述叠加显示区域相对应的区域调整控件输入的显示调整操作;响应于用户针对所述区域调整控件输入的显示调整操作,对所述叠加显示区域中的显示数据进行调整。
- The apparatus according to claim 52, wherein before acquiring a display adjustment operation input by a user on the region adjustment control corresponding to the overlay display region, the processor is configured to: acquire the number of three-dimensional models located within the overlay display region; and determine, based on the number of three-dimensional models, the region adjustment controls corresponding to the overlay display region, wherein the number of region adjustment controls is less than or equal to the number of three-dimensional models, and the region adjustment controls are used to adjust the display regions of three-dimensional models located in different layers.
- The apparatus according to claim 52, wherein when the processor adjusts the display data in the overlay display region in response to the display adjustment operation input by the user on the region adjustment control, the processor is configured to: determine an adjustable region corresponding to the region adjustment control; and, in response to the user's adjustment operation on the region adjustment control, adjust the display data in the overlay display region within the adjustable region.
- The apparatus according to claim 51, wherein after obtaining the overlay display region, the processor is further configured to: acquire the three-dimensional map corresponding to any one of the three-dimensional models; and display the overlay display region in combination with the three-dimensional map.
- The apparatus according to claim 51, wherein before acquiring the model comparison request corresponding to the at least two three-dimensional models, the processor is further configured to: receive the at least two three-dimensional models sent by a cloud server, the at least two three-dimensional models each being generated by the cloud server based on information collected by the unmanned aerial vehicle.
- The apparatus according to claim 46, wherein the processor is further configured to: acquire waypoint editing information input by a user in a three-dimensional map; determine, based on the waypoint editing information, at least two spatial waypoints located in the three-dimensional map, the spatial waypoints including altitude information used to control the unmanned aerial vehicle; and generate, based on the at least two spatial waypoints, three-dimensional route information corresponding to the unmanned aerial vehicle.
- The apparatus according to claim 57, wherein after determining the at least two spatial waypoints located in the three-dimensional map, the processor is further configured to: acquire a waypoint adjustment operation input by the user on any spatial waypoint in the three-dimensional map; and adjust the spatial waypoint based on the waypoint adjustment operation.
- The apparatus according to claim 57, wherein the processor is further configured to: acquire the actual flight route of the unmanned aerial vehicle; and display the actual flight route and the three-dimensional route information distinguishably in the three-dimensional map.
- The apparatus according to claim 57, wherein the processor is further configured to: acquire the execution status corresponding to the three-dimensional route information; and display three-dimensional route information in different execution statuses distinguishably in the three-dimensional map.
- The apparatus according to claim 46, wherein the processor is further configured to: acquire a three-dimensional model to be displayed, the three-dimensional model being generated based on information collected by an unmanned aerial vehicle; determine, based on the collected information, a three-dimensional map corresponding to the three-dimensional model; and display the three-dimensional model in combination with the three-dimensional map.
- The apparatus according to claim 61, wherein when there are multiple three-dimensional models and the processor displays the three-dimensional models in combination with the three-dimensional map, the processor is further configured to: determine, among the multiple three-dimensional models, a target three-dimensional model to be displayed in detail; display the target three-dimensional model and the corresponding three-dimensional map in combination in a first preset region of a display interface; and display thumbnails of the three-dimensional models other than the target three-dimensional model in a second preset region of the display interface, wherein the second preset region is smaller than the first preset region.
- The apparatus according to claim 62, wherein after displaying thumbnails of the three-dimensional models other than the target three-dimensional model in the second preset region of the display interface, the processor is further configured to: acquire a model selection operation input by the user on any of the other three-dimensional models; and switch the target three-dimensional model displayed in the first preset region to the three-dimensional model corresponding to the model selection operation.
- The apparatus according to claim 61, wherein when the processor displays the three-dimensional model in combination with the three-dimensional map, the processor is further configured to: determine the display type of the three-dimensional map, the display type including any one of the following: a preset background image, a satellite map, or a standard map; and display the three-dimensional model in combination with the three-dimensional map based on the display type of the three-dimensional map.
- The apparatus according to claim 61, wherein the processor is further configured to: acquire an execution operation input by the user on the three-dimensional model; and move, rotate, or scale the three-dimensional model based on the execution operation.
- The apparatus according to claim 61, wherein the processor is further configured to: in response to a model processing request for the three-dimensional model, perform a processing operation on the three-dimensional model and the three-dimensional map, the processing operation including at least one of the following: a distribution operation, a download operation, or a deletion operation.
- The apparatus according to claim 61, wherein when there are multiple three-dimensional models and the processor displays the three-dimensional models in combination with the three-dimensional map, the processor is further configured to: acquire reference information used to sort the multiple three-dimensional models; determine the display sequence of the multiple three-dimensional models based on the reference information; and, based on the display sequence, display the multiple three-dimensional models in combination with their corresponding three-dimensional maps in turn.
- The apparatus according to claim 67, wherein the reference information includes any one of the following: selection order information or time information.
- The apparatus according to claim 67, wherein when the processor determines the display sequence of the multiple three-dimensional models based on the reference information, the processor is further configured to: determine an initial sequence of the multiple three-dimensional models based on the reference information; and acquire an adjustment operation input by the user on the initial sequence to obtain the display sequence of the multiple three-dimensional models.
- The apparatus according to claim 69, wherein when the processor acquires the three-dimensional model to be displayed, the processor is further configured to: receive the three-dimensional model sent by a cloud server, the three-dimensional model being generated by the cloud server based on information collected by the unmanned aerial vehicle.
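The sorting claims above (claims 67 to 69) can be sketched in code. The following is a minimal, non-authoritative Python illustration under stated assumptions: the `Model` structure and its `captured_at`/`selected_at` fields are hypothetical names introduced for the example, not taken from the patent.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Model:
    """Hypothetical stand-in for a 3D model's sortable metadata."""
    name: str
    captured_at: datetime  # time information (one reference type in claim 68)
    selected_at: int       # selection order information (the other type)

def display_sequence(models, reference="time", user_adjustment=None):
    """Determine the display sequence of multiple models (claims 67-69).

    reference: "time" sorts by capture time, "selection" by selection order.
    user_adjustment: optional list of indices reordering the initial
    sequence, modeling the user's adjustment operation in claim 69.
    """
    key = (lambda m: m.captured_at) if reference == "time" else (lambda m: m.selected_at)
    initial = sorted(models, key=key)          # initial sequence
    if user_adjustment is not None:            # user reorders the initial sequence
        return [initial[i] for i in user_adjustment]
    return initial
```

The models are then displayed in combination with their maps in this order; the sketch only covers ordering, not rendering.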
- An apparatus for comparing models obtained using an unmanned aerial vehicle, comprising: a memory configured to store a computer program; and a processor configured to run the computer program stored in the memory to: acquire a model comparison request corresponding to at least two three-dimensional models, the at least two three-dimensional models each being generated based on information collected by an unmanned aerial vehicle; display the at least two three-dimensional models in coincident overlay based on the model comparison request to obtain an overlay display region, the overlay display region being used to display at least one three-dimensional model; and, in response to a display adjustment operation input by a user on the overlay display region, adjust the display data in the overlay display region so as to determine a model comparison result between the at least two three-dimensional models.
- The apparatus according to claim 71, wherein when the processor adjusts the display data in the overlay display region in response to the display adjustment operation input by the user on the overlay display region, the processor is configured to: acquire a display adjustment operation input by the user on a region adjustment control corresponding to the overlay display region; and, in response to the display adjustment operation input by the user on the region adjustment control, adjust the display data in the overlay display region.
- The apparatus according to claim 72, wherein before acquiring the display adjustment operation input by the user on the region adjustment control corresponding to the overlay display region, the processor is configured to: acquire the number of three-dimensional models located within the overlay display region; and determine, based on the number of three-dimensional models, the region adjustment controls corresponding to the overlay display region, wherein the number of region adjustment controls is less than or equal to the number of three-dimensional models, and the region adjustment controls are used to adjust the display regions of three-dimensional models located in different layers.
- The apparatus according to claim 72, wherein when the processor adjusts the display data in the overlay display region in response to the display adjustment operation input by the user on the region adjustment control, the processor is configured to: determine an adjustable region corresponding to the region adjustment control; and, in response to the user's adjustment operation on the region adjustment control, adjust the display data in the overlay display region within the adjustable region.
- The apparatus according to claim 71, wherein after obtaining the overlay display region, the processor is further configured to: acquire the three-dimensional map corresponding to any one of the three-dimensional models; and display the overlay display region in combination with the three-dimensional map.
- The apparatus according to claim 71, wherein before acquiring the model comparison request corresponding to the at least two three-dimensional models, the processor is further configured to: receive the at least two three-dimensional models sent by a cloud server, the at least two three-dimensional models each being generated by the cloud server based on information collected by the unmanned aerial vehicle.
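The comparison claims above (claims 71 to 76) describe stacking at least two coincident models and adjusting, via a region adjustment control, how much of each layer is shown. One way such a control can be realized is a clamped split slider; the sketch below is an illustrative assumption, not the patent's implementation, and the class and method names are invented for the example.

```python
class OverlayRegion:
    """Overlay display region holding two stacked 3D models (claim 71).

    A single region adjustment control (a horizontal split ratio) decides
    how much of the top layer is revealed over the bottom layer, one
    plausible reading of the adjustment in claims 72-74.
    """
    def __init__(self, bottom_model, top_model, width=1000):
        self.layers = [bottom_model, top_model]  # bottom first, top second
        self.width = width                       # region width in pixels
        self.split = 0.5                         # control position, 0..1

    def adjust(self, ratio):
        # Clamp to the adjustable region (claim 74): the control cannot
        # be dragged outside the overlay display region itself.
        self.split = min(1.0, max(0.0, ratio))

    def visible_columns(self):
        # Columns [0, cut) show the top model, [cut, width) the bottom,
        # so the user can compare the two models side by side at the seam.
        cut = int(self.split * self.width)
        return {self.layers[1]: (0, cut), self.layers[0]: (cut, self.width)}
```

Dragging the control then exposes different strips of each model, which is what lets the user read off a comparison result between the two scans.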
- An apparatus for generating a route for controlling an unmanned aerial vehicle, comprising: a memory configured to store a computer program; and a processor configured to run the computer program stored in the memory to: acquire waypoint editing information input by a user in a three-dimensional map; determine, based on the waypoint editing information, at least two spatial waypoints located in the three-dimensional map, the spatial waypoints including altitude information used to control the unmanned aerial vehicle; and generate, based on the at least two spatial waypoints, three-dimensional route information corresponding to the unmanned aerial vehicle.
- The apparatus according to claim 77, wherein after determining the at least two spatial waypoints located in the three-dimensional map, the processor is further configured to: acquire a waypoint adjustment operation input by the user on any spatial waypoint in the three-dimensional map; and adjust the spatial waypoint based on the waypoint adjustment operation.
- The apparatus according to claim 77, wherein the processor is further configured to: acquire the actual flight route of the unmanned aerial vehicle; and display the actual flight route and the three-dimensional route information distinguishably in the three-dimensional map.
- The apparatus according to claim 77, wherein the processor is further configured to: acquire the execution status corresponding to the three-dimensional route information; and display three-dimensional route information in different execution statuses distinguishably in the three-dimensional map.
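The route-generation claims above (claims 77 to 80) turn at least two user-edited spatial waypoints, each carrying altitude information, into three-dimensional route information. A minimal sketch follows, under the assumption that a waypoint is a (latitude, longitude, altitude) triple and that the route is the ordered waypoint list with per-leg climb derived from the altitude data; this data layout is illustrative, not specified by the patent.

```python
def generate_route(waypoints):
    """Build 3D route information from spatial waypoints (claim 77).

    Each waypoint is a (lat, lon, alt) triple; alt is the altitude
    information used to control the UAV. At least two waypoints are
    required, matching the claim's "at least two spatial waypoints".
    """
    if len(waypoints) < 2:
        raise ValueError("a route needs at least two spatial waypoints")
    legs = []
    # Pair each waypoint with its successor to form the route legs.
    for (lat0, lon0, alt0), (lat1, lon1, alt1) in zip(waypoints, waypoints[1:]):
        legs.append({"from": (lat0, lon0, alt0),
                     "to": (lat1, lon1, alt1),
                     "climb": alt1 - alt0})  # altitude change on this leg
    return {"waypoints": list(waypoints), "legs": legs}
```

Adjusting a waypoint (claim 78) would simply replace one triple and regenerate; displaying planned versus actual routes (claim 79) is a rendering concern outside this sketch.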
- An apparatus for displaying a model obtained using an unmanned aerial vehicle, comprising: a memory configured to store a computer program; and a processor configured to run the computer program stored in the memory to: acquire a three-dimensional model to be displayed, the three-dimensional model being generated based on information collected by an unmanned aerial vehicle; determine, based on the collected information, a three-dimensional map corresponding to the three-dimensional model; and display the three-dimensional model in combination with the three-dimensional map.
- The apparatus according to claim 81, wherein when there are multiple three-dimensional models and the processor displays the three-dimensional models in combination with the three-dimensional map, the processor is further configured to: determine, among the multiple three-dimensional models, a target three-dimensional model to be displayed in detail; display the target three-dimensional model and the corresponding three-dimensional map in combination in a first preset region of a display interface; and display thumbnails of the three-dimensional models other than the target three-dimensional model in a second preset region of the display interface, wherein the second preset region is smaller than the first preset region.
- The apparatus according to claim 82, wherein after displaying thumbnails of the three-dimensional models other than the target three-dimensional model in the second preset region of the display interface, the processor is further configured to: acquire a model selection operation input by the user on any of the other three-dimensional models; and switch the target three-dimensional model displayed in the first preset region to the three-dimensional model corresponding to the model selection operation.
- The apparatus according to claim 81, wherein when the processor displays the three-dimensional model in combination with the three-dimensional map, the processor is configured to: determine the display type of the three-dimensional map, the display type including any one of the following: a preset background image, a satellite map, or a standard map; and display the three-dimensional model in combination with the three-dimensional map based on the display type of the three-dimensional map.
- The apparatus according to claim 81, wherein the processor is further configured to: acquire an execution operation input by the user on the three-dimensional model; and move, rotate, or scale the three-dimensional model based on the execution operation.
- The apparatus according to claim 81, wherein the processor is further configured to: in response to a model processing request for the three-dimensional model, perform a processing operation on the three-dimensional model and the three-dimensional map, the processing operation including at least one of the following: a distribution operation, a download operation, or a deletion operation.
- The apparatus according to claim 81, wherein when there are multiple three-dimensional models and the processor displays the three-dimensional models in combination with the three-dimensional map, the processor is configured to: acquire reference information used to sort the multiple three-dimensional models; determine the display sequence of the multiple three-dimensional models based on the reference information; and, based on the display sequence, display the multiple three-dimensional models in combination with their corresponding three-dimensional maps in turn.
- The apparatus according to claim 87, wherein the reference information includes any one of the following: selection order information or time information.
- The apparatus according to claim 87, wherein when the processor determines the display sequence of the multiple three-dimensional models based on the reference information, the processor is configured to: determine an initial sequence of the multiple three-dimensional models based on the reference information; and acquire an adjustment operation input by the user on the initial sequence to obtain the display sequence of the multiple three-dimensional models.
- The apparatus according to claim 81, wherein when the processor acquires the three-dimensional model to be displayed, the processor is configured to: receive the three-dimensional model sent by a cloud server, the three-dimensional model being generated by the cloud server based on information collected by the unmanned aerial vehicle.
- A computer-readable storage medium, wherein the storage medium is a computer-readable storage medium storing program instructions, the program instructions being used to implement the method for displaying information collected using an unmanned aerial vehicle according to any one of claims 1 to 45.
- An unmanned aerial vehicle system, comprising: an unmanned aerial vehicle; and the apparatus for displaying information collected using an unmanned aerial vehicle according to any one of claims 46 to 70, configured to control the unmanned aerial vehicle via a cloud platform.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2022/081705 WO2023173409A1 (zh) | 2022-03-18 | 2022-03-18 | 信息的显示方法、模型的对比方法、装置及无人机系统 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023173409A1 true WO2023173409A1 (zh) | 2023-09-21 |
Family
ID=88021991
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/081705 WO2023173409A1 (zh) | 2022-03-18 | 2022-03-18 | 信息的显示方法、模型的对比方法、装置及无人机系统 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023173409A1 (zh) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103499346A (zh) * | 2013-09-29 | 2014-01-08 | 大连理工大学 | 一种小型无人机地面站三维导航地图实现方法 |
CN105518415A (zh) * | 2014-10-22 | 2016-04-20 | 深圳市大疆创新科技有限公司 | 一种飞行航线设置方法及装置 |
CN107749957A (zh) * | 2017-11-07 | 2018-03-02 | 高域(北京)智能科技研究院有限公司 | 无人机航拍画面显示系统和方法 |
CN110673650A (zh) * | 2019-11-21 | 2020-01-10 | 梅州市晟邦科技有限公司 | 无人机控制方法 |
CN111272172A (zh) * | 2020-02-12 | 2020-06-12 | 深圳壹账通智能科技有限公司 | 无人机室内导航方法、装置、设备和存储介质 |
WO2020179869A1 (ja) * | 2019-03-06 | 2020-09-10 | 株式会社moegi | 情報処理装置、及び情報処理プログラム |
CN112200910A (zh) * | 2020-10-10 | 2021-01-08 | 国网江苏省电力有限公司经济技术研究院 | 一种利用无人机快速建立三维地形的方法 |
CN112270755A (zh) * | 2020-11-16 | 2021-01-26 | Oppo广东移动通信有限公司 | 三维场景构建方法、装置、存储介质与电子设备 |
- 2022-03-18: WO PCT/CN2022/081705 patent/WO2023173409A1/zh (status: unknown)
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22931441 Country of ref document: EP Kind code of ref document: A1 |