WO2022041112A1 - Control method and apparatus for movable platform, and control system - Google Patents

Control method and apparatus for movable platform, and control system

Info

Publication number
WO2022041112A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
target object
orientation
movable platform
interactive interface
Prior art date
Application number
PCT/CN2020/112085
Other languages
English (en)
French (fr)
Inventor
梁家斌
田艺
黎莹莹
李亚梅
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/112085 priority Critical patent/WO2022041112A1/zh
Priority to CN202310252255.6A priority patent/CN116360406A/zh
Priority to CN202080039572.4A priority patent/CN113906358B/zh
Publication of WO2022041112A1 publication Critical patent/WO2022041112A1/zh
Priority to US17/700,553 priority patent/US11983821B2/en

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/0011Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot associated with a remote control arrangement
    • G05D1/0044Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot associated with a remote control arrangement by providing the operator with a computer generated representation of the environment of the vehicle, e.g. virtual reality, maps
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/10Simultaneous control of position or course in three dimensions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/469Contour-based spatial representations, e.g. vector-coding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2016Rotation, translation, scaling
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • the present application relates to the technical field of human-computer interaction, and in particular, to a control method, device and control system for a movable platform.
  • the present application provides a control method, device, movable platform and control system for a movable platform.
  • a method for controlling a movable platform comprising:
  • acquiring a target object selection operation input by the user on an interactive interface, where the interactive interface displays a three-dimensional model of the work area, and the target object selection operation is used to determine the position of the target object in the work area;
  • determining, according to the orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is acquired, a target orientation for performing the operation on the target object;
  • a control device for a movable platform includes a processor, a memory, and a computer program stored in the memory and executable by the processor; when executing the computer program, the processor implements the following steps:
  • acquiring a target object selection operation input by the user on an interactive interface, where the interactive interface displays a three-dimensional model of the work area, and the target object selection operation is used to determine the position of the target object in the work area;
  • determining, according to the orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is acquired, a target orientation for performing the operation on the target object;
  • a control system includes a movable platform and a control terminal,
  • the control terminal is configured to acquire a target object selection operation input by the user on an interactive interface, where the interactive interface displays a three-dimensional model of the work area, and the target object selection operation is used to determine the position of the target object in the work area;
  • determine, according to the orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is acquired, a target orientation for performing the operation on the target object;
  • the movable platform is used for moving to a target position based on the control instruction and performing work on the target object according to the target orientation.
  • the user can adjust the three-dimensional model to an orientation suitable for observing the target object in the work area according to the visual effect of the three-dimensional model of the work area displayed on the interactive interface, so as to determine the target orientation for working on the target object.
  • in this way, the user can intuitively determine an appropriate orientation for working on the target object without repeatedly entering angles to adjust the orientation during the operation, which is convenient and fast.
  • at the same time, the target position for working on the target object can be determined from the determined position of the target object, the target orientation and the working distance, so the working distance can be precisely controlled according to the user's needs, which greatly facilitates operation.
  • FIG. 1 is a flowchart of a control method for a movable platform according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of images obtained by photographing a three-dimensional object from different orientations by a virtual camera according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of determining a target orientation according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of determining a target position according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of determining a position of a target object and a target area corresponding to the target object according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an adjustment operation according to an embodiment of the present application.
  • FIG. 7 is a diagram illustrating an example of an adjustment path of a first adjustment operation according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an adjustment path of a second adjustment operation according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an interactive interface according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of an application scenario of an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a control apparatus for a movable platform according to an embodiment of the present application.
  • Movable platforms such as drones, unmanned vehicles, and intelligent robots are widely used in many fields, for example, drones for inspection, robotic arms for fruit picking, and drones or unmanned vehicles for pesticide spraying and irrigation.
  • when performing such tasks, it is usually necessary to finely control the movable platform, for example, to move it to a specific position and angle before operating on the target object, in order to achieve a good working effect.
  • in the related art, one approach is to manually control the movable platform, for example, manually adjusting its position and orientation according to the user's observation or the photos returned by the movable platform; this requires manual control for each operation and is labor-intensive.
  • the present application provides a control method for a movable platform.
  • with this method, the user can move and rotate the three-dimensional model on the interactive interface displaying the three-dimensional model of the work area, adjusting the orientation of the model to a viewing angle suitable for observing the target object in the work area.
  • the position of the target object is determined according to the target object selection operation input by the user on the interactive interface; the target orientation for working on the target object is determined according to the orientation of the 3D model when the user inputs the target object selection operation; and the target position at which the movable platform works on the target object is determined according to the position of the target object, the target orientation and the working distance.
  • in this way, the user can intuitively determine an orientation suitable for the operation with the help of the three-dimensional model on the interactive interface, without having to adjust the orientation through repeated angle inputs, which is convenient and fast; and because the position is adjusted with the target object as the center, the working distance of the movable platform can be controlled accurately.
  • the method includes the following steps:
  • S102, acquiring a target object selection operation input by a user on an interactive interface, where the interactive interface displays a three-dimensional model of the work area, and the target object selection operation is used to determine the position of the target object in the work area;
  • S104, determining, according to the orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is acquired, a target orientation for performing the operation on the target object;
  • S106, determining, according to the position of the target object, the target orientation and the working distance of the movable platform, a target position at which the movable platform operates on the target object, so that the movable platform can move to the target position and work on the target object according to the target orientation.
  • the control method of the present application can be executed by the movable platform itself.
  • for example, the movable platform can provide a human-computer interaction interface on which the user can operate to determine the target position and target orientation for working on the target object, and the movable platform is then controlled to work according to the determined target position and target orientation.
  • the control method of the present application can also be executed by a control terminal; the control terminal can be a terminal device such as a laptop computer, a remote controller, a mobile phone or a tablet, and can provide a human-computer interaction interface for acquiring the user's interactive operations and controlling the movable platform accordingly.
  • the movable platform of the present application is the platform currently used for working on the target object.
  • the movable platform includes a power component for driving the movable platform to move.
  • the movable platform can be a drone, an unmanned vehicle, an intelligent robot or similar equipment.
  • the target object of this application refers to the object on which the movable platform is to operate.
  • the target object can be power equipment, a dam or a bridge that needs to be inspected, a crop that needs to be sprayed or irrigated, fruit to be picked, and so on; there can be one or more target objects, and the work area is the area that includes them.
  • the 3D model of the work area can be a model obtained by photogrammetric 3D reconstruction, a model obtained by lidar scanning, or a CAD model from the design process.
  • the device executing the control method can load the three-dimensional model of the work area in the three-dimensional engine, and display the three-dimensional model to the user through the human-computer interaction interface.
  • before determining the target orientation and target position for working on the target object, the user can move and rotate the 3D model in the interactive interface and adjust its orientation to one that is more suitable for observing the target object, for example, an orientation in which the target object is not occluded by other objects and can be seen clearly.
  • after the user has adjusted the orientation of the 3D model, the device executing the control method can acquire the target object selection operation input by the user, determine the position of the target object according to that operation, and then determine the target orientation for working on the target object according to the orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation was acquired.
  • the target orientation is consistent with the orientation of the 3D model as observed by the user on the interactive interface: the viewing angle from which the user observes the 3D model is the viewing angle from which the movable platform observes the target object during operation.
  • once the target orientation is determined, the target position at which the movable platform works on the target object can be determined from the position of the target object, the target orientation and the working distance of the movable platform, so as to control the movable platform to move to the target position and work on the target object according to the target orientation.
  • the target position and target orientation in this application can be either the position and orientation of the movable platform itself when operating on the target object, or the position and orientation of the component on the movable platform that performs the operation.
  • for example, when a drone is used for inspection, since the offset between the drone's center and the camera's center is small, the drone's center can be moved directly to the target position; for more precise control, the target position can also be compensated according to the positional relationship between the center of the camera and the center of the drone, and the drone is moved to the compensated position so that the drone's camera is located at the target position during operation.
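  • a minimal sketch of this compensation step, assuming a known camera offset in the body frame and a body-to-world rotation at the work pose (the function name and frame conventions are illustrative, not specified by the patent):

```python
import numpy as np

def compensate_target_position(target_pos, camera_offset_body, attitude_R):
    """Shift the commanded position so that the camera, rather than the
    vehicle center, ends up at the target position.

    target_pos         -- desired camera position in the world frame (3,)
    camera_offset_body -- camera center minus vehicle center, body frame (3,)
    attitude_R         -- 3x3 body-to-world rotation matrix at the work pose
    """
    # Vehicle center = desired camera position minus the offset in world frame.
    return np.asarray(target_pos, float) - attitude_R @ np.asarray(camera_offset_body, float)

# Example: camera mounted 0.1 m ahead of the vehicle center, level attitude.
pos = compensate_target_position([10.0, 5.0, 3.0], [0.1, 0.0, 0.0], np.eye(3))
```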
  • the target orientation can be the orientation of the movable platform, or the orientation of the working component on the movable platform, for example, the orientation of the camera on a drone or of the robotic arm on an unmanned vehicle.
  • with the method provided by the present application, the user can adjust the orientation of the three-dimensional model according to the visual effect of the target object displayed on the interactive interface, thereby finding an orientation better suited to observing the target object and determining the target orientation for working on it.
  • in this way, the user can intuitively determine an appropriate working orientation without adjusting it through repeated angle inputs, which is convenient and fast; and the target position for the operation is determined from the position of the target object, the target orientation and the working distance, so the working distance can be precisely controlled according to the user's needs.
  • the target object selection operation input by the user can be any operation used to determine the position of the target object from the 3D model, for example, clicking the target object in the model shown on the interactive interface, or box-selecting it.
  • for example, the user can click the central area of the target object on the interactive interface, and the clicked point is then taken as the position of the target object; if the user box-selects the target object, the center of the box can be taken as the position of the target object; or the user can click multiple points on the target object to determine it.
  • for example, if the target object is a person, the user can click a point on the head, one on the body and one on the feet; the three clicked points are then connected into a line, and the center of the line is taken as the position of the target object.
  • since the 3D model carries geographic location information, the 3D coordinates corresponding to the position of the target object can be determined from the user's interactive operations on the model.
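  • a minimal sketch of reducing the clicked points to a target position, assuming the 3D engine has already back-projected each click onto the georeferenced model (the reduction to a centroid is an illustrative choice):

```python
import numpy as np

def target_position_from_clicks(clicked_points):
    """Reduce one or more clicked 3D points to a single target position.
    One click: the point itself. Several clicks: the center of the points,
    approximating 'the center of the line through them' with the centroid."""
    pts = np.asarray(clicked_points, dtype=float)
    return pts.mean(axis=0)

# Example: head / body / feet clicks on a person in the model.
center = target_position_from_clicks([[0.0, 0.0, 1.7],
                                      [0.0, 0.0, 1.0],
                                      [0.0, 0.0, 0.0]])
```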
  • the 3D engine usually corresponds to a virtual camera: the picture of the work area presented on the interactive interface is equivalent to images of the work area captured by the virtual camera from different perspectives.
  • as the user moves and rotates the 3D model on the interactive interface, the position and orientation of the virtual camera change accordingly.
  • as shown in FIG. 2, assume the 3D model is a model of a car: when the user adjusts the orientation of the model, i.e., when the picture presents the car from different angles (the car images in the rectangular frames in the figure), the picture can be regarded as an image acquired by the virtual camera (the black dots in the figure) from the corresponding orientation.
  • when the user operates on the 3D model in the interactive interface and adjusts its displayed pose, the 3D engine can determine the corresponding position and orientation of the virtual camera.
  • the orientation of the three-dimensional model displayed on the interactive interface corresponds one-to-one to the orientation of the virtual camera; therefore, the orientation of the virtual camera corresponding to the currently displayed orientation of the three-dimensional model can be taken as the target orientation for working on the target object.
  • the target orientation may lie along the line connecting the position of the target object and the position of the virtual camera and point toward the target object, so it can be determined from these two positions.
  • for example, when the user stops moving or rotating the 3D model in the interactive interface, the orientation of the 3D model displayed at that moment is fixed; the 3D engine can automatically determine the current position of the virtual camera (e.g., the 3D coordinates of the virtual camera's center), the position of the target object (e.g., the 3D coordinates of the target object's center) is determined from the target object selection operation input by the user, and connecting the two centers yields a line; the target orientation of the movable platform during operation points toward the target object along this line.
  • the target position at which the movable platform operates on the target object may be located on the line connecting the position of the target object and the position of the virtual camera, at a distance from the target object equal to the working distance.
  • for example, a line is obtained by connecting the center of the virtual camera and the center of the target object; starting from the target object, moving the working distance along this line reaches the target position, so the distance at which the movable platform operates on the target object is exactly the working distance expected by the user.
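  • the geometry just described can be sketched as follows (the names are illustrative; the patent does not prescribe an API):

```python
import numpy as np

def target_pose(target_center, camera_center, working_distance):
    """Derive the work pose from the virtual-camera geometry: the target
    orientation points from the virtual camera toward the target along
    their connecting line, and the target position lies on that line at
    the working distance from the target."""
    target_center = np.asarray(target_center, dtype=float)
    camera_center = np.asarray(camera_center, dtype=float)
    view_dir = target_center - camera_center
    view_dir /= np.linalg.norm(view_dir)            # unit vector toward target
    position = target_center - working_distance * view_dir
    return position, view_dir

# Example: virtual camera 50 m from the target; the platform is commanded
# to hold a 10 m working distance on the same line.
pos, orient = target_pose([100.0, 20.0, 5.0], [70.0, -20.0, 5.0], 10.0)
```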
  • after the device executing the control method determines the target position and target orientation according to the target object selection operation and the orientation of the three-dimensional model displayed on the interactive interface, the user can further adjust the target position and target orientation for the operation on the target object.
  • for example, the device executing the control method may acquire an adjustment operation input through the interactive interface, determine the adjusted target position according to the position of the target object and the adjustment operation input by the user, and adjust the target orientation according to the adjusted target position, where the adjusted target orientation points from the adjusted target position to the position of the target object.
  • the working distance of the movable platform can be set according to the actual needs of the user: it can be input by the user, or determined automatically by the device executing the control method. For example, in a crop-spraying scenario there is a requirement on the distance between the nozzle and the crops to ensure the spraying effect, so the user can input the working distance through the interactive interface.
  • in some scenarios, the working distance may be determined according to the user's requirement on the spatial resolution of the captured image, where the spatial resolution refers to the physical size corresponding to one pixel in the image.
  • the working distance of the movable platform may be determined according to the size of the work object, according to the spatial resolution, or by combining the two; the spatial resolution may be input by the user through the interactive interface or preset.
  • for example, the user can input the spatial resolution according to the requirement on the captured image of the target object, and the device executing the control method can automatically calculate the working distance according to formula (1), i.e., by the pinhole relation d = f · gsd / w, where:
  • d is the working distance;
  • w is the pixel width of the image sensor;
  • f is the 35mm equivalent focal length of the lens;
  • gsd is the spatial resolution, i.e., the actual size in three-dimensional space corresponding to one pixel of the image.
  • the working distance can also be determined according to the size of the target object: for example, in some scenarios a complete target object needs to be captured in one image, so the working distance can be determined according to formula (2), where:
  • L is the size of the target object;
  • f is the 35mm equivalent focal length of the lens.
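  • a short sketch of both distance rules, assuming the pinhole relation for formula (1) and, for formula (2), fitting the object across a full-frame-equivalent sensor width of 36 mm (an assumption, since the page does not reproduce the formulas themselves):

```python
def working_distance_from_gsd(gsd_m_per_px, focal_length_m, pixel_width_m):
    """Formula (1): pinhole similar triangles give gsd / d = w / f,
    hence d = f * gsd / w."""
    return focal_length_m * gsd_m_per_px / pixel_width_m

def working_distance_from_object_size(object_size_m, focal_length_m,
                                      frame_width_m=36e-3):
    """In the spirit of formula (2): to fit an object of size L across a
    35mm-equivalent frame (36 mm wide -- an assumption), d = f * L / s."""
    return focal_length_m * object_size_m / frame_width_m

# Example: 2 cm/px resolution, 35 mm lens, 4 um pixels -> d = 175 m.
d1 = working_distance_from_gsd(0.02, 35e-3, 4e-6)
# Example: fit a 12 m object in frame with the same lens -> d ~ 11.7 m.
d2 = working_distance_from_object_size(12.0, 35e-3)
```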
  • the working distance can also be determined by combining the spatial resolution and the size of the target object: for example, the user can input an acceptable range of spatial resolutions, and the working distance is then determined jointly from this range and the size of the target object, so that shooting at that distance both meets the user's spatial-resolution requirement and captures the complete target object.
  • the orientation of the camera on the movable platform can also be adjusted to obtain multiple images, which are then combined into one image that includes the complete target object.
  • the focal length can also be adjusted according to the desired working distance to meet the user's requirement on the working distance.
  • a target area corresponding to the target object may be determined according to the target object selection operation input by the user, and the size of the target area is then taken as the size of the target object, where the target area is usually a three-dimensional region that encloses the target object.
  • for example, as shown in FIG. 5, when the target object selection operation is the user clicking a point at the start end and a point at the finish end of the target object 51 in the interactive interface, the line connecting the two points can be determined from the three-dimensional model; the center of this line is taken as the center of the target object 51, a spherical area 52 is determined with the line as its diameter, and the spherical area 52 is taken as the target area corresponding to the target object.
  • the size of the spherical area is taken as the size of the target object.
  • the shape of the target area is not limited to a sphere; it may be a three-dimensional region of various shapes, such as a cuboid or a region of another shape.
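  • a minimal sketch of the FIG. 5 construction, assuming the two clicks have been back-projected to 3D points on the model:

```python
import numpy as np

def target_area_from_endpoints(start_point, end_point):
    """The clicked start and end points define a diameter: the sphere's
    center is taken as the target object's position, and the sphere's
    diameter as the target object's size."""
    p0 = np.asarray(start_point, dtype=float)
    p1 = np.asarray(end_point, dtype=float)
    center = (p0 + p1) / 2.0                  # position of the target object
    size = float(np.linalg.norm(p1 - p0))     # sphere diameter = object size
    return center, size

# Example: a 12 m tall object clicked at its base and its top.
center, size = target_area_from_endpoints([0.0, 0.0, 0.0], [0.0, 0.0, 12.0])
```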
  • the target position can be adjusted with the target object as the center while keeping the working distance of the movable platform unchanged.
  • the angle between the line connecting the target object's position and the target position and the horizontal plane is the pitch angle
  • the angle between the projection of the line connecting the target object's position and the target position on the horizontal plane and the north direction is the yaw angle.
  • the adjustment operation on the target position includes a first adjustment operation, which may be an operation for adjusting the pitch angle.
  • as shown in FIG. 7, the first adjustment operation moves the target position along a first target circle, where the center of the first target circle is at the position of the target object, the plane of the first target circle is perpendicular to the horizontal plane, and the radius of the first target circle is the working distance of the movable platform.
  • the adjustment operation on the target position includes a second adjustment operation, which may be an operation for adjusting the above-mentioned yaw angle.
  • as shown in FIG. 8, the second adjustment operation moves the target position along a second target circle, where the center of the second target circle is at the projection position of the target object, the projection position is obtained by projecting the position of the target object onto the horizontal plane passing through the target position, and the radius of the second target circle is the distance from the projection position to the target position.
  • the first target circle and/or the second target circle may also be displayed on the interactive interface, so that the user can determine the adjustment path of the target position from the displayed circles.
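  • the two adjustment circles can be expressed with the pitch/yaw parameterization defined above; a sketch, with the yaw measured from the +x axis as "north" (an assumed convention):

```python
import numpy as np

def adjusted_target_position(target_center, working_distance, pitch_rad, yaw_rad):
    """Place the target position on the sphere of radius `working_distance`
    around the target object. Varying pitch alone moves the position along
    the first target circle, varying yaw alone along the second target
    circle; the working distance never changes."""
    c = np.asarray(target_center, dtype=float)
    horizontal = working_distance * np.cos(pitch_rad)
    offset = np.array([horizontal * np.cos(yaw_rad),           # 'north' component
                       horizontal * np.sin(yaw_rad),           # 'east' component
                       working_distance * np.sin(pitch_rad)])  # height
    return c + offset

# Example: hold a 10 m distance, 30 degrees above the horizontal, due north.
pos = adjusted_target_position([0.0, 0.0, 0.0], 10.0, np.radians(30.0), 0.0)
```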
  • the interactive interface may further include a preview window.
  • the preview window can display a preview of the effect of the movable platform performing the operation at the adjusted target position according to the adjusted target orientation.
  • for example, in a drone inspection scenario, the preview window can display a preview of the image that would be acquired when the drone is located at the adjusted target position and photographs the target object according to the adjusted target orientation, so that the user can decide from the preview whether to continue adjusting the drone's position and orientation, and determine the adjustment strategy.
  • for example, in a fruit-picking scenario, the preview window can display a dynamic schematic of the movable platform located at the adjusted target position with its robotic arm picking the fruit according to the adjusted target orientation.
  • from this the user can clearly see whether picking from the current target position and orientation would succeed, and then decide how to adjust the target position and target orientation.
  • the position of the target object in the preview window can be fixed; for example, the target object can be kept at the center of the screen so that its position remains unchanged during adjustment, allowing the user to better observe the adjustment effect.
  • an adjustment control may be set in the interactive interface, and the adjustment operation of adjusting the target position may be triggered by the adjustment control.
  • an adjustment button 91 can be displayed on the interactive interface, and the user can click the adjustment button 91 to adjust the target position and the target orientation.
  • the distance between the target position and the obstacle closest to it can be determined; if this distance is less than a preset safety distance, the target position and/or the obstacle are marked, so that the user can adjust the target position according to the mark.
  • the target position and the obstacle can be marked in red in the interactive interface, or the target position and the obstacle can be box-selected, so that the user can adjust the target position according to the mark.
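  • a minimal sketch of this safety check, with obstacles simplified to points (a real implementation would query the model's geometry):

```python
import numpy as np

def flag_unsafe_target(target_pos, obstacle_positions, safety_distance):
    """Find the obstacle nearest the target position and report whether it
    violates the preset safety distance, so the UI can mark the target
    position and/or the obstacle (e.g., in red)."""
    target = np.asarray(target_pos, dtype=float)
    obstacles = np.asarray(obstacle_positions, dtype=float)
    dists = np.linalg.norm(obstacles - target, axis=1)
    i = int(np.argmin(dists))
    return bool(dists[i] < safety_distance), obstacles[i], float(dists[i])

# Example: an obstacle 1 m from the target position, 3 m safety distance.
unsafe, nearest, d = flag_unsafe_target([0.0, 0.0, 10.0],
                                        [[1.0, 0.0, 10.0], [50.0, 0.0, 0.0]],
                                        safety_distance=3.0)
```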
  • there may be multiple target objects. Assume there are a first target object and a second target object: after the target position for operating on the first target object (hereinafter, the target position corresponding to the first target object selection operation) has been determined from the first target object selection operation input by the user on the interactive interface, a second target object selection operation input by the user can be acquired to determine the target position for operating on the second target object (hereinafter, the target position corresponding to the second target object selection operation).
  • the device executing the control method can automatically determine the target position corresponding to each target object selection operation and connect it with the previously determined target position to form the working path of the movable platform, which is displayed on the interactive interface.
  • the first target object selection operation and the second target object selection operation may also correspond to the same target object, which is not limited in this embodiment of the present application.
  • the distance between the connecting line and the obstacle closest to it can be determined; if the distance is less than the preset safety distance, the connecting line and/or the obstacle are marked, so that the user can adjust the target position corresponding to the first target object selection operation and/or the target position corresponding to the second target object selection operation.
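  • the check against the connecting line is the standard point-to-segment distance; a sketch under the same point-obstacle simplification:

```python
import numpy as np

def segment_obstacle_distance(p0, p1, obstacle):
    """Distance from an obstacle (point) to the connecting line segment
    between two target positions, using projection clamped to the ends."""
    p0, p1, q = (np.asarray(v, dtype=float) for v in (p0, p1, obstacle))
    seg = p1 - p0
    t = np.clip(np.dot(q - p0, seg) / np.dot(seg, seg), 0.0, 1.0)
    return float(np.linalg.norm(q - (p0 + t * seg)))

# Example: obstacle 2 m beside the midpoint of a 20 m path segment.
d = segment_obstacle_distance([0.0, 0.0, 10.0], [20.0, 0.0, 10.0],
                              [10.0, 2.0, 10.0])
needs_mark = d < 3.0   # preset safe working distance
```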
  • after the user has determined the target position and target orientation for each target object through the interactive interface displaying the 3D model and the working path of the movable platform has been generated, the working path can be stored; for example, the target parameters used to determine the target position and target orientation can be stored, so that the next time the target objects in the work area are operated on, these target parameters can be called directly to determine the target position and target orientation.
  • the target parameters may include one or more of the position of the target object, the target orientation, the target position, the working distance, the size of the target object, and the position of the virtual camera.
  • for example, the target position and target orientation can be stored, and during operation the movable platform is directly controlled to move to the target position and operate on the target object according to the target orientation.
  • alternatively, the target object's position, the target orientation, the working distance, etc. can be stored.
  • in this case, the target position can be determined directly from the target object's position, the target orientation and the working distance.
  • since the target position and target orientation are determined with the target object as the center, once the target position and target orientation for working on the target objects in a work area have been determined according to the hardware parameters of the movable platform, other movable platforms can operate on the same target objects without the user re-setting the working path through the interactive interface.
  • for example, if the above-mentioned movable platform is a first movable platform and a second movable platform is to be used to operate on the target object, the stored target parameters can be acquired, and the position and orientation of the second movable platform when working on the target object are then determined according to the stored target parameters and the hardware parameters of the second movable platform.
  • the hardware parameters of the second movable platform may be the same as or different from those of the first movable platform.
  • for example, the focal length and the sensor pixel width of the second movable platform may differ from those of the first movable platform, in which case the working distance is re-determined from the second platform's focal length and sensor pixel width.
  • since the position of the target object is known and the target position and target orientation are determined with the target object as the center, the position and orientation of the second movable platform during operation can be re-determined from the re-determined working distance and the position of the target object, so as to update the working path.
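  • a sketch of this re-planning step, assuming the stored parameters include the target object's position, the target orientation as a unit vector toward the target, and the required spatial resolution (the dictionary layout is illustrative):

```python
import numpy as np

def replan_for_platform(stored, focal_length_m, pixel_width_m):
    """Rebuild the work pose for a different movable platform: the working
    distance is re-derived from the new hardware and the stored spatial
    resolution (formula (1)), while the target-centered direction is
    reused unchanged."""
    direction = np.asarray(stored["target_orientation"], dtype=float)
    d = focal_length_m * stored["gsd"] / pixel_width_m
    position = np.asarray(stored["target_object_position"], dtype=float) - d * direction
    return position, direction

stored = {"target_object_position": [100.0, 20.0, 5.0],
          "target_orientation": [1.0, 0.0, 0.0],   # unit vector toward target
          "gsd": 0.02}                             # metres per pixel
# Second drone: 24 mm lens, 2.4 um pixels -> working distance 200 m.
pos, orient = replan_for_platform(stored, 24e-3, 2.4e-6)
```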
  • FIG. 10 is a schematic diagram of an application scenario of an embodiment of the present application
  • through the interactive interface provided by the control terminal 102, the user can set the target orientation and target position; a control instruction is generated based on the target orientation and target position and sent to the UAV 101.
  • the control instruction may be sent to the drone 101 through a communication link between the control terminal 102 and the drone 101.
  • alternatively, the control instruction can be sent to the drone 101 through the communication link between the control terminal 102 and the remote controller 103 and the link between the remote controller 103 and the drone 101.
  • the control terminal 102 may be a notebook computer installed with three-dimensional reconstruction software.
  • the user can open the software application on the control terminal 102, and the 3D engine in the application loads the 3D model of the work area; the 3D model can be obtained by photogrammetric 3D reconstruction or by lidar scanning of the work area.
  • the user can input the spatial resolution through the interactive interface, so as to determine the working distance of the UAV 101 according to the spatial resolution.
  • the 3D model of the working area can be displayed to the user through the interactive interface.
  • the user can move and rotate the 3D model to a more suitable orientation, so as to find a viewing angle at which the target object is not occluded and can be conveniently observed.
  • the screen displayed on the interactive interface can be regarded as the image of the work area captured by the virtual camera.
  • the orientation of the 3D model corresponds to the orientation of the virtual camera.
  • the user can click the starting position and the ending position of the target object in the 3D model; a connecting line is generated from the two clicked positions, the position of its center is determined, and that center is taken as the position of the target object.
  • after the position of the target object is determined, a line can be generated connecting the position of the target object and the position of the virtual camera.
  • the orientation of the drone's camera during operation points toward the target object along this line.
  • the position of the drone during operation is located on this line at a distance from the target object equal to the working distance.
  • the angle between the line connecting the position of the target object and the position of the UAV and the horizontal plane can be defined as the pitch angle
  • the angle between the projection of the line on the horizontal plane and the north direction is the yaw angle.
  • when the drone needs to operate on multiple target objects, the position of one target object is determined according to the user's click operation on the interactive interface, then the position of the next target object is determined, the two positions are joined by a connecting line, and the connecting line is displayed in the interactive interface.
  • if the distance between the drone's operating position and the obstacle closest to it is less than the preset safety distance, the obstacle or the position is marked on the interactive interface so that the user can adjust the position.
  • likewise, the obstacle closest to the connecting line between the positions where the drone operates on different target objects can be determined; if the distance between the obstacle and the connecting line is less than the preset safety distance, the obstacle and the connecting line are marked for the user to adjust.
  • after the working path is determined, the position of the target object, the pitch angle, the yaw angle and the working distance can be stored, and the drone's operation can then be controlled according to the stored parameters.
  • when the drone used for the operation changes, the working distance can be re-determined according to the hardware parameters of the new drone and the expected resolution, and the position and orientation of the drone during operation are then re-determined from the position of the target object, the pitch angle, the yaw angle and the working distance.
  • since the position and orientation for the operation are determined with the target object as the center, the user only needs to set the working path once in the interactive interface; even if the hardware parameters of the operating drone change, the position during operation can be re-determined without the user resetting it.
  • through the above method, the user can easily determine the position and orientation of the drone during operation. Fine-tuning of the position is centered on the target object, ensuring that the target object always stays at the center of the captured picture; only the direction changes during fine-tuning while the working distance stays fixed, which guarantees the working effect. Since the position of the target object is determined, the operating position can conveniently be recomputed for the optimal working distance of different drones, adapting to different aircraft and achieving the same shooting effect without the user re-determining the working path for each drone.
  • correspondingly, the present application also provides a control device for a movable platform.
  • as shown in FIG. 11, the device includes a processor 111, a memory 112, and a computer program stored in the memory 112 and executable by the processor 111.
  • the processor 111 implements the following steps when executing the computer program:
  • acquiring a target object selection operation input by the user on an interactive interface, where the interactive interface displays a three-dimensional model of the work area, and the target object selection operation is used to determine the position of the target object in the work area;
  • determining, according to the orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is acquired, a target orientation for performing the operation on the target object;
  • the orientation of the three-dimensional model displayed by the interactive interface corresponds to the orientation of the virtual camera.
  • the target orientation is along a line connecting the position of the target object and the position of the virtual camera and points to the position of the target object.
  • the target position is located on a line connecting the position of the target object and the position of the virtual camera, and the distance from the target object is the working distance.
  • the processor is also used to:
  • the target orientation is adjusted according to the adjusted target position, so that the target orientation is directed from the adjusted target position to the position of the target object.
  • the adjustment operation includes a first adjustment operation capable of causing the target position to be adjusted on a first target circle, wherein the center of the first target circle is located on the The position of the target object, the plane on which the first target circle is located is perpendicular to the horizontal plane, and the radius of the first target circle is the working distance of the movable platform.
  • the adjustment operation includes a second adjustment operation capable of causing the target position to be adjusted on a second target circle, wherein the center of the second target circle is located on the The projection position of the target object, the projection position is obtained by projecting the position of the target object to a horizontal plane passing the target position, and the radius of the second target circle is the projection position to the target position the distance.
  • the processor is also used to:
  • the first target circle and/or the second target circle are displayed in the interactive interface.
  • the interactive interface includes a preview window, and after the target position and the target orientation are adjusted based on the position of the target object and the adjustment operation, the method further includes:
  • a preview effect of the movable platform being located at the adjusted target position and performing operations according to the adjusted target orientation is displayed.
  • the position of the target object in the preview window is fixed.
  • the interactive interface includes an adjustment control, and the adjustment operation is triggered based on the adjustment control.
  • the processor is further configured to: when it is determined that the distance between the target position and the obstacle closest to the target position is less than a preset safe working distance, mark the target position and/or the obstacle.
  • the target object selection operation is a first target object selection operation, and the processor is further configured to: acquire a second target object selection operation input by the user, and
  • display in the interactive interface a connecting line between the target position corresponding to the first target object selection operation and the target position corresponding to the second target object selection operation.
  • the processor is further configured to: when the distance between the connecting line (from the target position corresponding to the first target object selection operation to the target position corresponding to the second target object selection operation) and the obstacle closest to the connecting line is less than the preset safe working distance, mark the connecting line and/or the obstacle.
  • the processor is also used to:
  • determine the working distance according to a spatial resolution, where the spatial resolution refers to the physical size corresponding to one pixel, and is either input by the user on the interactive interface or preset.
  • the processor is also used to:
  • determine, according to the target object selection operation, the target area corresponding to the target object;
  • take the size of the target area as the size of the target object.
  • the processor is further configured to: store target parameters for use in determining the target location and the target orientation.
  • the target parameters include one or more of the following: the position of the target object, the target orientation, the working distance, the size of the target object, and the position of the virtual camera.
  • the movable platform is a first movable platform
  • the processor is further configured to:
  • the stored target parameters are acquired, and the position and orientation of the second movable platform during operation are determined based on the hardware parameters of the second movable platform and the target parameters.
  • the control device of the movable platform may be the aforementioned control terminal itself, or a part of the control terminal.
  • for the specific implementation details of determining the target position and target orientation when the movable platform operates on the target object, reference may be made to the descriptions in the above method embodiments, which are not repeated here.
  • the present application also provides a movable platform, which is configured to receive a control instruction, move to a target position based on the control instruction, and perform operations on a target object in an operation area according to the target orientation;
  • the control instruction is determined in the following manner: acquiring a target object selection operation input by the user on an interactive interface, where the interactive interface displays a three-dimensional model of the work area, and the target object selection operation is used to determine the position of the target object in the work area;
  • determining, according to the orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is acquired, a target orientation for performing the operation on the target object;
  • the control system includes a movable platform and a control terminal; the control terminal is used to acquire a target object selection operation input by a user on an interactive interface, where the interactive interface displays a three-dimensional model of the work area, and the target object selection operation is used to determine the position of the target object in the work area;
  • determine, according to the orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is acquired, a target orientation for performing the operation on the target object;
  • the movable platform is used for moving to a target position based on the control instruction and performing work on the target object according to the target orientation.
  • for the specific manner in which the control terminal controls the movable platform to perform operations, reference may be made to the descriptions in the foregoing method embodiments, which are not repeated here.
  • an embodiment of the present specification further provides a computer storage medium, where a program is stored in the storage medium, and when the program is executed by a processor, the control method of the movable platform in any of the foregoing embodiments is implemented.
  • Embodiments of the present specification may take the form of a computer program product embodied on one or more storage media having program code embodied therein, including but not limited to disk storage, CD-ROM, optical storage, and the like.
  • Computer-usable storage media includes permanent and non-permanent, removable and non-removable media, and storage of information can be accomplished by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash Memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission medium that can be used to store information that can be accessed by a computing device.

Abstract

A control method, apparatus, movable platform and control system for a movable platform. The method includes: acquiring a target object selection operation input by a user on an interactive interface, where the interactive interface displays a three-dimensional model of a work area, and the target object selection operation is used to determine the position of a target object in the work area; determining, according to the orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is acquired, a target orientation for performing an operation on the target object; and determining, according to the position of the target object, the target orientation and the working distance of the movable platform, a target position at which the movable platform operates on the target object, so that the movable platform moves to the target position and operates on the target object according to the target orientation. The user can intuitively determine the orientation of the movable platform during operation from the three-dimensional model displayed on the interactive interface, and the position during operation is determined from the working distance and the working orientation, so the working distance can be controlled accurately, conveniently and quickly.

Description

Control method and apparatus for movable platform, and control system

Technical field

The present application relates to the technical field of human-computer interaction, and in particular to a control method and apparatus for a movable platform, and a control system.

Background

In many fields, movable platforms such as drones, unmanned vehicles and intelligent robots need to be finely controlled to perform specific tasks. For example, in drone inspection scenarios it is usually necessary to control the drone to photograph work objects such as power insulators and dams from a specific position at a specific angle; in scenarios such as fruit picking with a robotic arm, the arm likewise has to be moved to a specific position and angle before the fruit can be picked. It is therefore necessary to provide a solution that makes fine control of a movable platform convenient for the user.

Summary

In view of this, the present application provides a control method and apparatus for a movable platform, a movable platform, and a control system.
According to a first aspect of the present application, a control method for a movable platform is provided, the method including:

acquiring a target object selection operation input by a user on an interactive interface, where the interactive interface displays a three-dimensional model of a work area, and the target object selection operation is used to determine the position of a target object in the work area;

determining, according to the orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is acquired, a target orientation for performing an operation on the target object;

determining, according to the position of the target object, the target orientation and the working distance of the movable platform, a target position at which the movable platform operates on the target object, so that the movable platform moves to the target position and operates on the target object according to the target orientation.
According to a second aspect of the present application, a control device for a movable platform is provided, the device including a processor, a memory, and a computer program stored in the memory and executable by the processor, where the processor implements the following steps when executing the computer program:

acquiring a target object selection operation input by a user on an interactive interface, where the interactive interface displays a three-dimensional model of a work area, and the target object selection operation is used to determine the position of a target object in the work area;

determining, according to the orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is acquired, a target orientation for performing an operation on the target object;

determining, according to the position of the target object, the target orientation and the working distance of the movable platform, a target position at which the movable platform operates on the target object, so that the movable platform moves to the target position and operates on the target object according to the target orientation.
According to a third aspect of the present application, a control system is provided, the control system including a movable platform and a control terminal,

where the control terminal is used to: acquire a target object selection operation input by a user on an interactive interface, where the interactive interface displays a three-dimensional model of a work area, and the target object selection operation is used to determine the position of a target object in the work area;

determine, according to the orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is acquired, a target orientation for performing an operation on the target object;

determine, according to the position of the target object, the target orientation and the working distance of the movable platform, a target position at which the movable platform operates on the target object;

generate a control instruction based on the target orientation and the target position, and send the control instruction to the movable platform;

and the movable platform is used to move to the target position based on the control instruction and operate on the target object according to the target orientation.
With the solution provided by the present application, the user can adjust the three-dimensional model to an orientation suitable for observing the target object in the work area according to the visual effect of the three-dimensional model of the work area displayed on the interactive interface, so as to determine the target orientation for working on the target object. By determining the target orientation through browsing and adjusting the three-dimensional model on the interactive interface, the user can intuitively determine an appropriate orientation for working on the target object without repeatedly entering angles to adjust the orientation, which is convenient and fast. At the same time, the target position for working on the target object can be determined from the determined position of the target object, the target orientation and the working distance, so the working distance can be precisely controlled according to the user's needs, which greatly facilitates operation.
Brief description of the drawings

In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.

FIG. 1 is a flowchart of a control method for a movable platform according to an embodiment of the present application.

FIG. 2 is a schematic diagram of images obtained by a virtual camera photographing a three-dimensional object from different orientations according to an embodiment of the present application.

FIG. 3 is a schematic diagram of determining a target orientation according to an embodiment of the present application.

FIG. 4 is a schematic diagram of determining a target position according to an embodiment of the present application.

FIG. 5 is a schematic diagram of determining the position of a target object and the target area corresponding to the target object according to an embodiment of the present application.

FIG. 6 is a schematic diagram of an adjustment operation according to an embodiment of the present application.

FIG. 7 is an example diagram of the adjustment path of a first adjustment operation according to an embodiment of the present application.

FIG. 8 is a schematic diagram of the adjustment path of a second adjustment operation according to an embodiment of the present application.

FIG. 9 is a schematic diagram of an interactive interface according to an embodiment of the present application.

FIG. 10 is a schematic diagram of an application scenario according to an embodiment of the present application.

FIG. 11 is a schematic diagram of a control apparatus for a movable platform according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of this application. Obviously, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.
Movable platforms such as UAVs, unmanned vehicles, and intelligent robots are widely used in many fields, for example UAV inspection, fruit picking with a robotic arm, or pesticide spraying and irrigation with UAVs or unmanned vehicles. When carrying out such tasks, the movable platform usually needs fine-grained control, for example being moved to a specific position and angle before operating on the target object, in order to achieve a good working result. In the related art, one approach is to operate the movable platform manually, for example adjusting its position and orientation according to the user's observation or the photos returned by the platform, but this requires manual control for every task and is labor-intensive. In another technique, the user operates on an interactive interface that displays a three-dimensional model of the work area to determine and store the working position and orientation of the movable platform, and the platform is then controlled to work according to the stored position and orientation. In this technique the user determines the working position by clicking near the work object in the three-dimensional model, for example first clicking a ground point and then dragging a certain distance in the height direction, after which the user has to enter the working attitude angles of the movable platform manually and adjust the entered angles repeatedly according to the preview shown in a preview window of the interactive interface, so as to determine the final working orientation. This approach cannot control the working distance of the movable platform accurately and requires the user to adjust the entered angles again and again, which is particularly tedious.
Based on this, this application provides a control method for a movable platform. The user can move and rotate the three-dimensional model of the work area on the interactive interface and adjust its orientation to a viewing angle suitable for observing the target object in the work area. The position of the target object is determined from the target object selection operation the user inputs on the interactive interface; the target orientation for performing a task on the target object is determined from the orientation of the three-dimensional model at the moment the user inputs the selection operation; and the target position at which the movable platform works on the target object is determined from the position of the target object, the target orientation, and the working distance. With the help of the three-dimensional model on the interactive interface, the user can determine a suitable working orientation intuitively, without entering angles repeatedly, which is convenient and fast; and because the working position is adjusted with the target object as the center, the working distance of the movable platform can be controlled accurately.
Specifically, as shown in Fig. 1, the method includes the following steps:
S102: obtaining a target object selection operation input by a user on an interactive interface, where the interactive interface displays a three-dimensional model of a work area, and the target object selection operation is used to determine a position of a target object in the work area;
S104: determining, according to an orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is obtained, a target orientation for performing a task on the target object; and
S106: determining, according to the position of the target object, the target orientation, and a working distance of the movable platform, a target position at which the movable platform performs the task on the target object, so that the movable platform moves to the target position and performs the task on the target object in the target orientation.
The movable platform control method of this application may be executed by the movable platform itself. For example, in some embodiments the movable platform may provide a human-computer interaction interface on which the user operates to determine the target position and target orientation for performing a task on the target object, and the platform is then controlled to work according to the determined target position and target orientation. In some embodiments, the method may also be executed by a control terminal, which may be a terminal device such as a laptop computer, a remote controller, a mobile phone, or a tablet; the control terminal may provide the human-computer interaction interface for obtaining the user's interactive operations and control the movable platform according to those operations.
The movable platform in this application is the platform currently used to perform a task on the target object. The movable platform includes a power component for driving the platform to move, and may be a device such as a UAV, an unmanned vehicle, or an intelligent robot.
The target object in this application is the object on which the movable platform is to perform a task. It may be power equipment, a dam, or a bridge to be inspected, a crop to be sprayed or irrigated, or a fruit to be picked, among others. There may be one or more target objects, and the work area is the area containing these target objects. The three-dimensional model of the work area may be a model reconstructed by photogrammetry, a model obtained by lidar scanning, or a CAD model from the design process.
The device executing the control method can load the three-dimensional model of the work area into a 3D engine and present the model to the user through the human-computer interaction interface. When determining the target orientation and target position for performing the task on the target object, the user can move and rotate the model on the interactive interface and adjust its orientation to one suitable for observing the target object, for example an orientation in which the target object is not occluded by other objects and can be seen clearly. After adjusting the orientation of the model on the interface, the user can input a target object selection operation through the interface; the device executing the control method obtains this operation, determines the position of the target object from it, and then determines the target orientation for performing the task from the orientation of the model displayed on the interface when the operation was obtained. The target orientation coincides with the orientation of the model the user observes on the interface: the viewing angle from which the user observes the model on the interface is the viewing angle from which the movable platform will observe the target object during the task. Once the target orientation is determined, the target position at which the movable platform works on the target object can be determined from the determined position of the target object, the target orientation, and the working distance of the movable platform, so as to control the platform to move to that position and work on the target object in that orientation.
It should be noted that the target position and target orientation in this application may be the position and orientation either of the movable platform itself or of the component on the platform that performs the task. For example, in a scenario where a UAV inspects electronic equipment, since the positional difference between the center of the UAV and the center of its camera is small, the UAV center may be moved directly to the target position; of course, for more precise control of the UAV, the target position may also be compensated according to the positional relationship between the camera center and the UAV center, and the UAV moved to the compensated position so that the camera is located at the target position during the task. The target orientation may be the orientation of the movable platform or of the working component on it, for example the orientation of the camera on a UAV or of the robotic arm on an unmanned vehicle.
With the method provided by this application, the user can adjust the orientation of the three-dimensional model according to the visual presentation of the target object on the interactive interface, find an orientation suitable for observing the target object, and thereby determine the target orientation for performing the task. In this way the user determines a suitable working orientation intuitively, without repeatedly entering angles to adjust the orientation, which is convenient and fast; meanwhile, the target position is determined from the determined position of the target object, the target orientation, and the working distance, so the working distance can be controlled precisely according to the user's needs.
The target object selection operation input by the user may be any operation that determines the position of the target object from the three-dimensional model, for example clicking the target object in the model displayed on the interface or drawing a selection box around it; the position of the target object can then be determined from this operation. For example, the user may click the central region of the target object on the interface, and the clicked point is taken as the position of the target object; if the user box-selects the target object on the interface, the center of the selection box may be taken as its position; or the user may click several points on the target object to determine it. For instance, if the target object is a person, the user may click one point on the head, one on the body, and one on the feet; the three clicked points are joined into a line, and the center of that line is taken as the position of the target object. Since the three-dimensional model carries geographic position information, the three-dimensional coordinates corresponding to the position of the target object can be determined from the user's interactive operation on the model.
A 3D engine usually has an associated virtual camera, and the view of the work area rendered on the interactive interface can be regarded as captured by this virtual camera from different viewpoints. For example, as the user moves and rotates the three-dimensional model on the interface, the position and orientation of the virtual camera change accordingly. As shown in Fig. 2, suppose the three-dimensional model is a model of a car: when the user adjusts the orientation of the model so that the image shows the car from different angles (the car images inside the rectangular boxes in the figure), the images can be regarded as captured by the virtual camera (the black dots in the figure) from the corresponding orientations. When the user manipulates the model on the interface and changes its displayed attitude, the 3D engine can determine the corresponding position and orientation of the virtual camera. In some embodiments, the orientation of the model displayed on the interface corresponds one-to-one to the orientation of the virtual camera, so the virtual camera orientation corresponding to the currently displayed model orientation can be taken as the target orientation for performing the task on the target object.
In some embodiments, as shown in Fig. 3, the target orientation may lie along the line connecting the position of the target object and the position of the virtual camera and point toward the position of the target object, so it can be determined from these two positions. For example, when the user stops moving or rotating the model on the interface, the orientation of the displayed model is fixed at that moment, and the 3D engine can automatically determine the current position of the virtual camera (which may be the three-dimensional coordinates of the camera center); the position of the target object (which may be the three-dimensional coordinates of its center) is determined from the target object selection operation the user inputs on the interface; connecting the camera center and the target object center then yields a line, and the target orientation of the movable platform during the task points toward the target object along this line.
In some embodiments, as shown in Fig. 4, the target position at which the movable platform works on the target object may lie on the line connecting the position of the target object and the position of the virtual camera, at a distance from the target object equal to the working distance. After the position of the virtual camera (which may be the three-dimensional coordinates of the camera center) and the position of the target object (which may be the three-dimensional coordinates of its center) are determined, the line connecting the two centers is obtained, and the target position is reached by moving the working distance along this line starting from the target object; this ensures that the distance at which the movable platform works on the target object equals the working distance expected by the user.
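Purely as an illustrative sketch of the geometry just described (the patent discloses no code; the function name and the use of a shared Cartesian frame are our assumptions), the target orientation and target position of Figs. 3 and 4 could be derived in Python as follows:

```python
import numpy as np

def target_pose(camera_pos, object_pos, working_distance):
    """Sketch of Figs. 3 and 4: the orientation points at the object along
    the virtual-camera/object line, and the target position sits on that
    line at the working distance from the object."""
    camera_pos = np.asarray(camera_pos, dtype=float)
    object_pos = np.asarray(object_pos, dtype=float)
    direction = object_pos - camera_pos
    direction /= np.linalg.norm(direction)  # unit vector toward the object
    target_position = object_pos - direction * working_distance
    return direction, target_position
```

For example, with the virtual camera at (0, 0, 10), the target object at (0, 50, 0), and a working distance of 5, the sketch places the platform 5 units from the object on the camera side of the line, looking at the object.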
After the user pans and rotates the model on the interactive interface to fix the displayed orientation of the three-dimensional model and inputs the target object selection operation, the device executing the control method can determine the target position and target orientation at which the movable platform works on the target object from the selection operation and the orientation of the displayed model. Of course, the user may further fine-tune the target position and target orientation to reach a better position and orientation. In some embodiments, the device executing the control method may obtain an adjustment operation input through the interactive interface, determine the adjusted target position from the position of the target object and the adjustment operation, and adjust the target orientation according to the adjusted target position, where the adjusted target orientation points from the adjusted target position toward the position of the target object.
The working distance of the movable platform can be adjusted according to the user's actual needs; it may be input by the user or determined automatically by the device executing the control method. For example, in a crop-spraying scenario, the distance between the nozzle and the crop is constrained in order to guarantee the spraying effect, and in such a scenario the user can input the working distance through the interactive interface. In a scenario where images of the target object are captured, the working distance may be determined from the spatial resolution the user requires of the captured images, where the spatial resolution refers to the physical size corresponding to a pixel in the image. For example, if the user needs to observe the details of a local region of the target object, the movable platform should be as close to the target object as possible and the working distance can be set small; if the user needs to observe the whole target object, the working distance should ensure that the movable platform can photograph the entire target object. In some embodiments, the working distance of the movable platform may be determined from the size of the work object, from the spatial resolution, or from both, where the spatial resolution may be input by the user through the interactive interface or preset. For example, the user may input a spatial resolution according to their requirement on the captured images of the target object, and the device executing the control method can then compute the working distance automatically according to the following formula (1):
d = gsd * w / 35 * f    Formula (1)
where d is the working distance, w is the pixel width of the image sensor, f is the 35 mm equivalent focal length of the lens, and gsd is the spatial resolution, representing the actual distance in three-dimensional space corresponding to the size of a pixel in the image.
Of course, the working distance may also be determined from the size of the target object. For example, in some scenarios the complete target object needs to be captured in a single image, so the working distance can be determined from the size of the target object according to formula (2):
d = L / 35 * f    Formula (2)
where L is the size of the target object and f is the 35 mm equivalent focal length of the lens.
Of course, in some embodiments the working distance may also be determined by combining the spatial resolution and the size of the target object. For example, the user may input a range of acceptable spatial resolutions, and the working distance is then determined jointly from this range and the size of the target object, so that shooting at that distance both meets the spatial-resolution requirement input by the user and captures the complete target object.
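Formulas (1) and (2) and their combination could be sketched as follows; the text does not spell out how a spatial-resolution range is reconciled with the object size, so the clamping step below is only one plausible reading, and the function names are ours:

```python
def distance_from_gsd(gsd, w, f35):
    """Formula (1): d = gsd * w / 35 * f, with gsd the spatial resolution,
    w the sensor pixel width and f35 the 35 mm equivalent focal length."""
    return gsd * w / 35.0 * f35

def distance_from_object_size(length, f35):
    """Formula (2): d = L / 35 * f, with L the size of the target object."""
    return length / 35.0 * f35

def working_distance(gsd_range, length, w, f35):
    """Combine both: cover the whole object while keeping the resulting
    GSD inside the user-supplied [finest, coarsest] range if possible."""
    d_finest = distance_from_gsd(gsd_range[0], w, f35)
    d_coarsest = distance_from_gsd(gsd_range[1], w, f35)
    d_object = distance_from_object_size(length, f35)
    # Clamp the object-covering distance into the admissible band.
    return min(max(d_object, d_finest), d_coarsest)
```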
In some embodiments, after the working distance is determined from the spatial resolution input by the user, if the complete target object cannot be captured at that distance, the orientation of the camera on the movable platform may be adjusted to capture multiple images, which are then combined into an image containing the complete target object.
In some embodiments, if the camera on the movable platform has a zoom lens, the focal length may also be adjusted according to the desired working distance to meet the user's requirement on the working distance.
In some embodiments, when determining the size of the target object, a target region corresponding to the target object may be determined from the target object selection operation input by the user, and the size of that region taken as the size of the target object, where the target region is usually a three-dimensional region containing the target object. For example, as shown in Fig. 5, if the target object selection operation consists of clicking one point at the starting end of target object 51 and one point at its finishing end on the interactive interface, the three-dimensional coordinates of the two points can be determined from the three-dimensional model, the line connecting them determined, and the center of the line taken as the position of the center of target object 51; a spherical region 52 is then constructed with this line as its diameter and taken as the region corresponding to the target object, and the size of the spherical region is the size of the target object. Of course, the shape of the target region is not limited to a sphere; it may be a three-dimensional region of various shapes, for example a cuboid region or a region of another shape. When determining the target position, a position from which the entire spherical region can be photographed may be selected as the target position.
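As a hedged sketch of the Fig. 5 construction (the function name is ours, not the patent's), the spherical target region could be built from the two clicked endpoints like this:

```python
import numpy as np

def sphere_region_from_clicks(p_start, p_end):
    """The segment between the two clicked points is the sphere's diameter:
    its midpoint is taken as the target object's position and its length
    as the target object's size."""
    p_start = np.asarray(p_start, dtype=float)
    p_end = np.asarray(p_end, dtype=float)
    center = (p_start + p_end) / 2.0
    size = float(np.linalg.norm(p_end - p_start))
    return center, size
```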
As shown in Fig. 6, since the position of the target object has already been determined, in order to guarantee that the movable platform works on the target object at the expected working distance and thus guarantee the working effect, the target position can be adjusted with the target object as the center while keeping the working distance of the movable platform unchanged. The angle between the horizontal plane and the line connecting the position of the target object and the target position may be defined as the pitch angle, and the angle between north and the projection of that line onto the horizontal plane as the yaw angle; when adjusting the target position, the distance between the position of the target object and the target position is kept constant while the pitch angle or the yaw angle is changed, so as to fine-tune the target position. Accordingly, in some embodiments, the adjustment operation on the target position includes a first adjustment operation, which may be an operation that adjusts the pitch angle. As shown in Fig. 7, the first adjustment operation allows the target position to be adjusted along a first target circle, where the center of the first target circle is located at the position of the target object, the plane in which the first target circle lies is perpendicular to the horizontal plane, and the radius of the first target circle is the working distance of the movable platform.
In some embodiments, the adjustment operation on the target position includes a second adjustment operation, which may be an operation that adjusts the yaw angle. As shown in Fig. 8, the second adjustment operation allows the target position to be adjusted along a second target circle, where the center of the second target circle is located at the projected position of the target object, the projected position is obtained by projecting the position of the target object onto the horizontal plane passing through the target position, and the radius of the second target circle is the distance from the projected position to the target position.
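The pitch/yaw fine-tuning on the two target circles could be sketched as below; the axis convention (Z up, yaw measured from north = +Y toward east = +X) is our assumption and is not stated in the text:

```python
import numpy as np

def position_from_angles(object_pos, working_distance, pitch, yaw):
    """Keep the working distance fixed and place the target position on the
    sphere around the object; varying pitch moves it along the first target
    circle (vertical plane), varying yaw along the second (horizontal)."""
    horizontal = working_distance * np.cos(pitch)
    offset = np.array([
        horizontal * np.sin(yaw),            # east component
        horizontal * np.cos(yaw),            # north component
        working_distance * np.sin(pitch),    # vertical component
    ])
    return np.asarray(object_pos, dtype=float) + offset
```

Because the offset has norm equal to the working distance for any pitch and yaw, fine-tuning with this parameterization changes only the viewing direction, never the working distance, which is exactly the constraint the adjustment operations impose.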
In some embodiments, so that the user can see the adjustment path of the target position intuitively, the first target circle and/or the second target circle may also be displayed on the interactive interface, allowing the user to determine the adjustment path of the target position from the displayed first and second target circles.
In some embodiments, so that the user can tell more intuitively whether the adjusted target position and target orientation are appropriate and optimal, the interactive interface may further include a preview window; while the target position and target orientation are being adjusted, the preview window can display a preview of the movable platform working at the adjusted target position in the adjusted target orientation. Taking image capture of the target object by a UAV as an example, the preview window can display a preview of the image the UAV would capture with the UAV located at the adjusted target position and oriented in the adjusted target orientation, so that the user can decide from the preview whether the UAV's position and orientation need further adjustment and determine the adjustment strategy. Likewise, in a scenario where fruit is picked with a robotic arm on the movable platform, the preview window can display a dynamic illustration of the movable platform located at the adjusted target position with the arm turned to the target orientation picking the fruit; the user can then see clearly whether picking the fruit from the current target position in the target orientation would succeed, and decide how to adjust the target position and target orientation.
Since the target position and target orientation are adjusted with the target object as the center, in some embodiments the position of the target object in the preview window may be fixed while the target position and target orientation are adjusted, for example kept at the center of the view throughout the adjustment, so that the user can observe the effect of the adjustment better.
In some embodiments, an adjustment control may be provided on the interactive interface, and the adjustment operation on the target position may be triggered through the adjustment control. As shown in Fig. 9, the interactive interface may display an adjustment button 91, and the user can click the adjustment button 91 to adjust the target position and target orientation.
In some embodiments, to ensure that the movable platform works safely, after the target position at which the movable platform works on the target object is determined, the distance between the target position and the obstacle nearest to it may be determined; if this distance is smaller than a preset safety distance, the target position and/or the obstacle is marked so that the user can adjust the target position according to the mark. For example, the target position and the obstacle may be highlighted in red on the interactive interface, or enclosed in a selection box, so that the user adjusts the target position according to the mark.
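A minimal sketch of this safety check, assuming obstacles are represented as points (a real system would more likely use meshes or bounding volumes, which the text leaves open):

```python
import numpy as np

def nearest_obstacle_violation(target_pos, obstacles, safety_distance):
    """Return (index, distance) of the nearest obstacle if it is closer to
    the target position than the safety distance, otherwise None."""
    dists = np.linalg.norm(np.asarray(obstacles, dtype=float)
                           - np.asarray(target_pos, dtype=float), axis=1)
    i = int(np.argmin(dists))
    return (i, float(dists[i])) if dists[i] < safety_distance else None
```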
In some embodiments there may be multiple target objects, say a first target object and a second target object. After the target position for working on the first target object is determined from the first target object selection operation input by the user on the interactive interface (hereinafter the target position corresponding to the first target object selection operation), a second target object selection operation input by the user may be obtained, the target position for working on the second target object determined from it (hereinafter the target position corresponding to the second target object selection operation), and the line connecting the target position corresponding to the first target object selection operation and the target position corresponding to the second target object selection operation displayed on the interactive interface. In other words, each time the user inputs a target object selection operation on the interface, the device executing the control method can automatically determine the corresponding target position and connect it to the previously determined target position, forming the working path of the movable platform, which is displayed on the interface. Optionally, the first and second target object selection operations may also correspond to the same target object; this is not limited in the embodiments of this application.
Since obstacles may also exist while the movable platform moves from one target position to another, to ensure safety during the task, in some embodiments, after the line connecting the target positions corresponding to the first and second target object selection operations is determined, the distance between this line and the obstacle nearest to it may be determined; when this distance is smaller than the preset safety distance, the line and/or the obstacle may be marked so that the user can adjust the target position corresponding to the first target object selection operation and/or the target position corresponding to the second target object selection operation.
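For the path segment between two target positions, the same point-obstacle assumption gives the following distance test (a sketch only):

```python
import numpy as np

def segment_to_point_distance(a, b, p):
    """Shortest distance from point p to the segment from a to b."""
    a, b, p = (np.asarray(v, dtype=float) for v in (a, b, p))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))
```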
There may be a need to work on the target objects in the work area repeatedly; for example, in a crop-spraying scenario the crops may need to be sprayed multiple times. Therefore, after the user has determined the target positions and target orientations for the target objects through the interactive interface displaying the three-dimensional model and the working path of the movable platform has been generated, the working path may be stored. For example, the target parameters used to determine the target positions and target orientations may be stored, so that the next time the target objects in the work area are to be worked on, these target parameters can be retrieved directly to determine the target positions and target orientations for the task.
In some embodiments, the target parameters may include one or more of the position of the target object, the target orientation, the target position, the working distance, the size of the target object, and the position of the virtual camera. For example, in some embodiments the target position and target orientation may be stored, and during the task the movable platform is directly controlled to move to the target position and work on the target object in the target orientation. Alternatively, the position of the target object, the target orientation, and the working distance may be stored, and the target position determined directly from them when the movable platform works on the target object.
Since the position of the target object can be stored and the target position and target orientation determined from it, after the target position and target orientation for working on the target objects in the work area have been determined according to the hardware parameters of one movable platform, if a different movable platform is used to work on the target objects in the same work area, the user need not set the working path through the interactive interface again. In some embodiments, the above movable platform is a first movable platform; when a second movable platform is used to work on the target objects, the stored target parameters may be obtained, and the working position and orientation of the second movable platform determined from the stored target parameters and the hardware parameters of the second movable platform. The hardware parameters of the second movable platform may be the same as or different from those of the first. For example, when the user switches to the second movable platform, its focal length and sensor pixel width may differ from the first's, so the working distance can be re-determined from the second platform's focal length and sensor pixel width. Since the position of the target object is known and the target position and target orientation are determined with the target object as the center, the working position and orientation of the second movable platform can be re-determined from the new working distance and the position of the target object, thereby updating the working path.
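Putting the stored target parameters together, re-planning for a second platform with different optics could look like this sketch, reusing the helper functions from the sketches above:

```python
def replan_for_platform(object_pos, pitch, yaw, gsd, w, f35):
    """Re-derive the working pose from the stored object position and
    pitch/yaw angles, with the working distance recomputed for the new
    platform's sensor pixel width w and 35 mm equivalent focal length f35."""
    d = distance_from_gsd(gsd, w, f35)
    return position_from_angles(object_pos, d, pitch, yaw)
```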
To further explain the movable platform control method of this application, it is described below with reference to a specific embodiment.
UAVs are commonly used to inspect power equipment, bridges, dams, and the like: the UAV captures images of the target objects to be inspected, and the images are then analyzed to determine whether a fault has occurred. Fig. 10 is a schematic diagram of an application scenario of an embodiment of this application. Through a software application installed on the control terminal 102, the user can set the target orientation and target position at which the UAV 101 captures images of the target objects in the work area; a control instruction is generated based on the target orientation and target position and sent to the UAV 101. Optionally, the control instruction may be sent to the UAV 101 through the communication link between the control terminal 102 and the UAV 101, or through the communication links between the control terminal 102 and the remote controller 103 and between the remote controller 103 and the UAV 101. Optionally, the control terminal 102 may be a laptop computer with three-dimensional reconstruction software installed.
The user can open the software application on the control terminal 102, and the 3D engine in the application can load the three-dimensional model of the work area, which may be obtained by photogrammetric three-dimensional reconstruction or by scanning the work area with a lidar. The user can input a spatial resolution through the interactive interface so that the working distance of the UAV 101 is determined from the spatial resolution.
The three-dimensional model of the work area can be presented to the user through the interactive interface. The user can move and rotate the model into a suitable orientation so as to find a viewing angle at which the target object is not occluded and is easy to observe. There is a virtual camera in the 3D engine, and the view rendered on the interface can be regarded as a view of the work area captured by this virtual camera; the orientation of the model corresponds to the orientation of the virtual camera, and once the user stops moving and rotating the model, the 3D engine can determine the model's current orientation and position. The user can click the starting and ending positions of the target object in the model; a line is generated from the two clicked positions, the position of the center of this line is determined, and that center is taken as the position of the target object. After the position of the target object is determined, a line is generated from the position of the target object and the position of the virtual camera. When the UAV works on the target object, the camera's orientation points toward the target object along this line, and the UAV's position lies on this line at a distance from the target object equal to the working distance.
In addition, the angle between the horizontal plane and the line connecting the position of the target object and the UAV's working position may be defined as the pitch angle, and the angle between north and the projection of this line onto the horizontal plane as the yaw angle. After the UAV's working orientation and position are determined, the pitch and yaw angles can be fine-tuned to find a position and orientation with a better shooting effect. The interactive interface may include a preview window: when the user adjusts the pitch and yaw angles, the window can display a preview of the image that would be captured with the UAV at the current position and orientation, so that the user can determine the adjustment strategy for the pitch and yaw angles from the preview. After the working position for one target object is determined from the user's click operations on the interface, if the user clicks another target object, the working position for that other target object is determined, the two positions are connected by a line, and the line is displayed on the interface. To keep the UAV safe during the task, the obstacle nearest to the UAV's working position can be determined, and if the distance between the obstacle and the working UAV is smaller than the preset safety distance, the obstacle or the position is marked on the interface so that the user can adjust the position. Similarly, the obstacle nearest to the line connecting the UAV's working positions for different target objects can be determined, and if the distance between the obstacle and the line is smaller than the preset safety distance, the obstacle and the line are marked so that the user can adjust them. After the UAV's position and orientation for each target object in the work area are determined, the position of the target object, the pitch angle, the yaw angle, and the working distance can be stored, and the UAV controlled to work according to these stored parameters. Of course, if the UAV working on the target objects in this area is replaced, for example its focal length, sensor pixel width, or other hardware parameters change, the working distance can be re-determined from the UAV's hardware parameters and the expected resolution, and the UAV's working position and orientation re-determined from the position of the target object, the pitch angle, the yaw angle, and the new working distance. Because the working position and orientation are determined with the target object as the center, the user only needs to set the working path once on the interface; even if the hardware parameters of the working UAV change, the working position can be re-determined from the stored position of the target object and the UAV's orientation, without the user setting it up again.
With the above method, the user can determine the UAV's working position and orientation very conveniently. Fine-tuning of the working position is always centered on the target object, ensuring that the target object stays at the center of the captured view, and only the orientation changes during fine-tuning while the working distance is preserved, which guarantees the working effect. Since the position of the target object is determined, the working position can easily be changed according to the optimal working distance of different UAVs, adapting to different aircraft and achieving the same shooting effect without the user having to re-plan the working path for every working UAV.
In addition, this application further provides a control apparatus for a movable platform. As shown in Fig. 11, the apparatus includes a processor 111, a memory 112, and a computer program stored in the memory 112 and executable by the processor 111, and the processor 111 implements the following steps when executing the computer program:
obtaining a target object selection operation input by a user on an interactive interface, where the interactive interface displays a three-dimensional model of a work area, and the target object selection operation is used to determine a position of a target object in the work area;
determining, according to an orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is obtained, a target orientation for performing a task on the target object; and
determining, according to the position of the target object, the target orientation, and a working distance of the movable platform, a target position at which the movable platform performs the task on the target object, so that the movable platform moves to the target position and performs the task on the target object in the target orientation.
In some embodiments, the orientation of the three-dimensional model displayed on the interactive interface corresponds to an orientation of a virtual camera.
In some embodiments, the target orientation lies along the line connecting the position of the target object and the position of the virtual camera and points toward the position of the target object.
In some embodiments, the target position lies on the line connecting the position of the target object and the position of the virtual camera, and its distance from the target object is the working distance.
In some embodiments, the processor is further configured to:
obtain an adjustment operation input by the user through the interactive interface;
determine an adjusted target position according to the position of the target object and the adjustment operation; and
adjust the target orientation according to the adjusted target position, so that the target orientation points from the adjusted target position toward the position of the target object.
In some embodiments, the adjustment operation includes a first adjustment operation capable of adjusting the target position along a first target circle, where the center of the first target circle is located at the position of the target object, the plane in which the first target circle lies is perpendicular to the horizontal plane, and the radius of the first target circle is the working distance of the movable platform.
In some embodiments, the adjustment operation includes a second adjustment operation capable of adjusting the target position along a second target circle, where the center of the second target circle is located at a projected position of the target object, the projected position is obtained by projecting the position of the target object onto the horizontal plane passing through the target position, and the radius of the second target circle is the distance from the projected position to the target position.
In some embodiments, the processor is further configured to:
display the first target circle and/or the second target circle on the interactive interface.
In some embodiments, the interactive interface includes a preview window, and after the target position and the target orientation are adjusted based on the position of the target object and the adjustment operation, the processor is further configured to:
display, in the preview window, a preview of the movable platform working at the adjusted target position in the adjusted target orientation.
In some embodiments, the position of the target object in the preview window is fixed while the target position and the target orientation are adjusted.
In some embodiments, the interactive interface includes an adjustment control, and the adjustment operation is triggered based on the adjustment control.
In some embodiments, the processor is further configured to: when it is determined that the distance between the target position and the obstacle nearest to the target position is smaller than a preset safe working distance, mark the target position and/or the obstacle.
In some embodiments, the target object selection operation is a first target object selection operation, and the processor is further configured to:
obtain a second target object selection operation input by the user on the interactive interface; and
display, on the interactive interface, the line connecting the target position corresponding to the first target object selection operation and the target position corresponding to the second target object selection operation.
In some embodiments, the processor is further configured to: when it is determined that the distance between the line connecting the target positions corresponding to the first and second target object selection operations and the obstacle nearest to that line is smaller than a preset safe working distance, mark the line and/or the obstacle.
In some embodiments, the processor is further configured to:
determine the working distance according to a spatial resolution and/or the size of the target object,
where the spatial resolution refers to the physical size corresponding to a pixel, and is input by the user on the interactive interface or preset.
In some embodiments, the processor is further configured to:
determine, according to the target object selection operation, a target region corresponding to the target object; and
take the size of the target region as the size of the target object.
In some embodiments, the processor is further configured to: store target parameters, the target parameters being used to determine the target position and the target orientation.
In some embodiments, the target parameters include one or more of: the position of the target object, the target orientation, the working distance, the size of the target object, and the position of the virtual camera.
In some embodiments, the movable platform is a first movable platform, and the processor is further configured to:
obtain the stored target parameters, and determine the working position and orientation of a second movable platform based on hardware parameters of the second movable platform and the target parameters.
The control apparatus of the movable platform may be equivalent to the aforementioned control terminal, or be a part of the control terminal. For the specific implementation details of determining the target position and target orientation at which the movable platform performs the task on the target object, reference may be made to the descriptions of the embodiments of the above method, which are not repeated here.
In addition, this application further provides a movable platform. The movable platform is configured to receive a control instruction and, based on the control instruction, move to a target position and perform a task in a target orientation on a target object in a work area;
where the control instruction is determined as follows: obtaining a target object selection operation input by a user on an interactive interface, where the interactive interface displays a three-dimensional model of the work area, and the target object selection operation is used to determine the position of the target object in the work area;
determining, according to the orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is obtained, the target orientation for performing the task on the target object; and
determining, according to the position of the target object, the target orientation, and a working distance of the movable platform, the target position at which the movable platform performs the task on the target object; and generating the control instruction based on the target orientation and the target position.
For the specific implementation details of generating the control instruction, reference may be made to the descriptions of the embodiments of the above method, which are not repeated here.
Further, this application provides a control system. The control system includes a movable platform and a control terminal. The control terminal is configured to obtain a target object selection operation input by a user on an interactive interface, where the interactive interface displays a three-dimensional model of a work area, and the target object selection operation is used to determine the position of a target object in the work area;
determine, according to the orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is obtained, a target orientation for performing a task on the target object;
determine, according to the position of the target object, the target orientation, and a working distance of the movable platform, a target position at which the movable platform performs the task on the target object; and
generate a control instruction based on the target orientation and the target position, and send the control instruction to the movable platform; and
the movable platform is configured to move to the target position based on the control instruction and perform the task on the target object in the target orientation.
For the specific implementation details of the control terminal controlling the movable platform to perform the task, reference may be made to the descriptions of the embodiments of the above method, which are not repeated here.
Correspondingly, the embodiments of this specification further provide a computer storage medium storing a program which, when executed by a processor, implements the control method for a movable platform of any of the above embodiments.
The embodiments of this specification may take the form of a computer program product implemented on one or more storage media containing program code, including but not limited to disk storage, CD-ROM, and optical storage. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be accomplished by any method or technology. Information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
As for the apparatus embodiments, since they basically correspond to the method embodiments, reference may be made to the relevant descriptions of the method embodiments. The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiments, which a person of ordinary skill in the art can understand and implement without creative effort.
It should be noted that relational terms such as first and second herein are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. The terms "comprise" and "include", or any other variant thereof, are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The method and apparatus provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, a person of ordinary skill in the art may make changes to the specific implementation and application scope according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (39)

  1. A control method for a movable platform, wherein the method comprises:
    obtaining a target object selection operation input by a user on an interactive interface, wherein the interactive interface displays a three-dimensional model of a work area, and the target object selection operation is used to determine a position of a target object in the work area;
    determining, according to an orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is obtained, a target orientation for performing a task on the target object; and
    determining, according to the position of the target object, the target orientation, and a working distance of the movable platform, a target position at which the movable platform performs the task on the target object, so that the movable platform moves to the target position and performs the task on the target object in the target orientation.
  2. The method according to claim 1, wherein the orientation of the three-dimensional model displayed on the interactive interface corresponds to an orientation of a virtual camera.
  3. The method according to claim 2, wherein the target orientation lies along a line connecting the position of the target object and a position of the virtual camera and points toward the position of the target object.
  4. The method according to claim 2 or 3, wherein the target position lies on the line connecting the position of the target object and the position of the virtual camera, and a distance between the target position and the target object is the working distance.
  5. The method according to any one of claims 1 to 4, wherein the method further comprises:
    obtaining an adjustment operation input by the user through the interactive interface;
    determining an adjusted target position according to the position of the target object and the adjustment operation; and
    adjusting the target orientation according to the adjusted target position, so that the target orientation points from the adjusted target position toward the position of the target object.
  6. The method according to claim 5, wherein the adjustment operation comprises a first adjustment operation capable of adjusting the target position along a first target circle, wherein a center of the first target circle is located at the position of the target object, a plane in which the first target circle lies is perpendicular to a horizontal plane, and a radius of the first target circle is the working distance of the movable platform.
  7. The method according to claim 5 or 6, wherein the adjustment operation comprises a second adjustment operation capable of adjusting the target position along a second target circle, wherein a center of the second target circle is located at a projected position of the target object, the projected position is obtained by projecting the position of the target object onto a horizontal plane passing through the target position, and a radius of the second target circle is a distance from the projected position to the target position.
  8. The method according to claim 7, wherein the method further comprises:
    displaying the first target circle and/or the second target circle on the interactive interface.
  9. The method according to any one of claims 5 to 8, wherein the interactive interface comprises a preview window, and after the target position and the target orientation are adjusted based on the position of the target object and the adjustment operation, the method further comprises:
    displaying, in the preview window, a preview of the movable platform working at the adjusted target position in the adjusted target orientation.
  10. The method according to claim 9, wherein a position of the target object in the preview window is fixed while the target position and the target orientation are adjusted.
  11. The method according to any one of claims 5 to 10, wherein the interactive interface comprises an adjustment control, and the adjustment operation is triggered based on the adjustment control.
  12. The method according to any one of claims 1 to 11, further comprising: when it is determined that a distance between the target position and an obstacle nearest to the target position is smaller than a preset safe working distance, marking the target position and/or the obstacle.
  13. The method according to any one of claims 1 to 12, wherein the target object selection operation is a first target object selection operation, and the method further comprises:
    obtaining a second target object selection operation input by the user on the interactive interface; and
    displaying, on the interactive interface, a line connecting a target position corresponding to the first target object selection operation and a target position corresponding to the second target object selection operation.
  14. The method according to claim 13, further comprising: when it is determined that a distance between the line connecting the target position corresponding to the first target object selection operation and the target position corresponding to the second target object selection operation and an obstacle nearest to the line is smaller than a preset safe working distance, marking the line and/or the obstacle.
  15. The method according to any one of claims 1 to 14, wherein the method further comprises:
    determining the working distance according to a spatial resolution and/or a size of the target object,
    wherein the spatial resolution refers to a physical size corresponding to a pixel, and the spatial resolution is input by the user on the interactive interface or preset.
  16. The method according to claim 15, wherein the method further comprises:
    determining, according to the target object selection operation, a target region corresponding to the target object; and
    taking a size of the target region as the size of the target object.
  17. The method according to any one of claims 1 to 16, wherein the method further comprises: storing target parameters, the target parameters being used to determine the target position and the target orientation.
  18. The method according to claim 17, wherein the target parameters comprise one or more of: the position of the target object, the target orientation, the working distance, the size of the target object, and the position of the virtual camera.
  19. The method according to claim 17 or 18, wherein the movable platform is a first movable platform, and the method further comprises:
    obtaining the stored target parameters, and determining a working position and orientation of a second movable platform based on hardware parameters of the second movable platform and the target parameters.
  20. A control apparatus for a movable platform, wherein the apparatus comprises a processor, a memory, and a computer program stored in the memory and executable by the processor, and the processor implements the following steps when executing the computer program:
    obtaining a target object selection operation input by a user on an interactive interface, wherein the interactive interface displays a three-dimensional model of a work area, and the target object selection operation is used to determine a position of a target object in the work area;
    determining, according to an orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is obtained, a target orientation for performing a task on the target object; and
    determining, according to the position of the target object, the target orientation, and a working distance of the movable platform, a target position at which the movable platform performs the task on the target object, so that the movable platform moves to the target position and performs the task on the target object in the target orientation.
  21. The apparatus according to claim 20, wherein the orientation of the three-dimensional model displayed on the interactive interface corresponds to an orientation of a virtual camera.
  22. The apparatus according to claim 21, wherein the target orientation lies along a line connecting the position of the target object and a position of the virtual camera and points toward the position of the target object.
  23. The apparatus according to claim 21 or 22, wherein the target position lies on the line connecting the position of the target object and the position of the virtual camera, and a distance between the target position and the target object is the working distance.
  24. The apparatus according to any one of claims 20 to 23, wherein the processor is further configured to:
    obtain an adjustment operation input by the user through the interactive interface;
    determine an adjusted target position according to the position of the target object and the adjustment operation; and
    adjust the target orientation according to the adjusted target position, so that the target orientation points from the adjusted target position toward the position of the target object.
  25. The apparatus according to claim 24, wherein the adjustment operation comprises a first adjustment operation capable of adjusting the target position along a first target circle, wherein a center of the first target circle is located at the position of the target object, a plane in which the first target circle lies is perpendicular to a horizontal plane, and a radius of the first target circle is the working distance of the movable platform.
  26. The apparatus according to claim 24 or 25, wherein the adjustment operation comprises a second adjustment operation capable of adjusting the target position along a second target circle, wherein a center of the second target circle is located at a projected position of the target object, the projected position is obtained by projecting the position of the target object onto a horizontal plane passing through the target position, and a radius of the second target circle is a distance from the projected position to the target position.
  27. The apparatus according to claim 26, wherein the processor is further configured to:
    display the first target circle and/or the second target circle on the interactive interface.
  28. The apparatus according to any one of claims 24 to 27, wherein the interactive interface comprises a preview window, and after the target position and the target orientation are adjusted based on the position of the target object and the adjustment operation, the processor is further configured to:
    display, in the preview window, a preview of the movable platform working at the adjusted target position in the adjusted target orientation.
  29. The apparatus according to claim 28, wherein a position of the target object in the preview window is fixed while the target position and the target orientation are adjusted.
  30. The apparatus according to any one of claims 24 to 29, wherein the interactive interface comprises an adjustment control, and the adjustment operation is triggered based on the adjustment control.
  31. The apparatus according to any one of claims 20 to 30, wherein the processor is further configured to: when it is determined that a distance between the target position and an obstacle nearest to the target position is smaller than a preset safe working distance, mark the target position and/or the obstacle.
  32. The apparatus according to any one of claims 20 to 31, wherein the target object selection operation is a first target object selection operation, and the processor is further configured to:
    obtain a second target object selection operation input by the user on the interactive interface; and
    display, on the interactive interface, a line connecting a target position corresponding to the first target object selection operation and a target position corresponding to the second target object selection operation.
  33. The apparatus according to claim 32, wherein the processor is further configured to: when it is determined that a distance between the line connecting the target position corresponding to the first target object selection operation and the target position corresponding to the second target object selection operation and an obstacle nearest to the line is smaller than a preset safe working distance, mark the line and/or the obstacle.
  34. The apparatus according to any one of claims 20 to 33, wherein the processor is further configured to:
    determine the working distance according to a spatial resolution and/or a size of the target object,
    wherein the spatial resolution refers to a physical size corresponding to a pixel, and the spatial resolution is input by the user on the interactive interface or preset.
  35. The apparatus according to claim 34, wherein the processor is further configured to:
    determine, according to the target object selection operation, a target region corresponding to the target object; and
    take a size of the target region as the size of the target object.
  36. The apparatus according to any one of claims 20 to 35, wherein the processor is further configured to: store target parameters, the target parameters being used to determine the target position and the target orientation.
  37. The apparatus according to claim 36, wherein the target parameters comprise one or more of: the position of the target object, the target orientation, the working distance, the size of the target object, and the position of the virtual camera.
  38. The apparatus according to claim 36 or 37, wherein the movable platform is a first movable platform, and the processor is further configured to:
    obtain the stored target parameters, and determine a working position and orientation of a second movable platform based on hardware parameters of the second movable platform and the target parameters.
  39. A control system, wherein the control system comprises a movable platform and a control terminal,
    the control terminal is configured to: obtain a target object selection operation input by a user on an interactive interface, wherein the interactive interface displays a three-dimensional model of a work area, and the target object selection operation is used to determine a position of a target object in the work area;
    determine, according to an orientation of the three-dimensional model displayed on the interactive interface when the target object selection operation is obtained, a target orientation for performing a task on the target object;
    determine, according to the position of the target object, the target orientation, and a working distance of the movable platform, a target position at which the movable platform performs the task on the target object; and
    generate a control instruction based on the target orientation and the target position, and send the control instruction to the movable platform; and
    the movable platform is configured to move to the target position based on the control instruction and perform the task on the target object in the target orientation.
PCT/CN2020/112085 2020-08-28 2020-08-28 可移动平台的控制方法、装置及控制系统 WO2022041112A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2020/112085 WO2022041112A1 (zh) 2020-08-28 2020-08-28 Control method and apparatus for movable platform, and control system thereof
CN202310252255.6A CN116360406A (zh) 2020-08-28 2020-08-28 Control method and apparatus for movable platform, and control system thereof
CN202080039572.4A CN113906358B (zh) 2020-08-28 2020-08-28 Control method and apparatus for movable platform, and control system thereof
US17/700,553 US11983821B2 (en) 2022-03-22 Control method and apparatus for movable platform, and control system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/112085 WO2022041112A1 (zh) 2020-08-28 2020-08-28 Control method and apparatus for movable platform, and control system thereof

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/700,553 Continuation US11983821B2 (en) 2022-03-22 Control method and apparatus for movable platform, and control system thereof

Publications (1)

Publication Number Publication Date
WO2022041112A1 true WO2022041112A1 (zh) 2022-03-03

Family

ID=79186966

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/112085 WO2022041112A1 (zh) 2020-08-28 2020-08-28 Control method and apparatus for movable platform, and control system thereof

Country Status (2)

Country Link
CN (2) CN113906358B (zh)
WO (1) WO2022041112A1 (zh)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100484726C (zh) * 2006-05-12 2009-05-06 上海大学 Teleoperation platform for a dexterous robot hand based on virtual reality
CN107583271B (zh) * 2017-08-22 2020-05-22 网易(杭州)网络有限公司 Interaction method and apparatus for selecting a target in a game

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010094777A (ja) * 2008-10-16 2010-04-30 Fuji Electric Systems Co Ltd Remote operation support device
CN105867433A (zh) * 2016-03-31 2016-08-17 纳恩博(北京)科技有限公司 Movement control method, mobile electronic device and movement control system
CN107179768A (zh) * 2017-05-15 2017-09-19 上海木爷机器人技术有限公司 Obstacle recognition method and device
CN107220099A (zh) * 2017-06-20 2017-09-29 华中科技大学 Robot visual virtual teaching system and method based on a three-dimensional model
CN111508066A (zh) * 2020-04-16 2020-08-07 北京迁移科技有限公司 3D-vision-based grasping system and interaction method for randomly stacked workpieces
US20200257307A1 (en) * 2018-10-10 2020-08-13 Midea Group Co., Ltd. Method and system for providing remote robotic control

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114895819A (zh) * 2022-07-13 2022-08-12 麒砺创新技术(深圳)有限公司 Intelligent display optimization method and system for a three-dimensional model
CN114895819B (zh) * 2022-07-13 2022-09-13 麒砺创新技术(深圳)有限公司 Intelligent display optimization method and system for a three-dimensional model

Also Published As

Publication number Publication date
US20220215632A1 (en) 2022-07-07
CN113906358A (zh) 2022-01-07
CN113906358B (zh) 2023-03-28
CN116360406A (zh) 2023-06-30

Similar Documents

Publication Publication Date Title
US11644839B2 (en) Systems and methods for generating a real-time map using a movable object
CN103873758B (zh) Method, apparatus and device for real-time panorama generation
CN111199560B (zh) Positioning method for video surveillance and video surveillance system
KR20210104684A (ko) Surveying and mapping system, surveying and mapping method, apparatus and device
JP6765512B2 (ja) Flight path generation method, information processing apparatus, flight path generation system, program, and recording medium
CN108436909A (zh) ROS-based hand-eye calibration method for a camera and a robot
WO2018098704A1 (zh) Control method, device and system, unmanned aerial vehicle, and movable platform
WO2019227441A1 (zh) Shooting control method and device for a movable platform
US11295621B2 (en) Methods and associated systems for managing 3D flight paths
JP2014006148A (ja) Aerial photographing method and aerial photographing system
EP3758351B1 (en) A system and method of scanning an environment using multiple scanners concurrently
Lan et al. XPose: Reinventing User Interaction with Flying Cameras.
WO2021016907A1 (zh) Method for determining an orbiting route, aerial photography method, terminal, unmanned aerial vehicle, and system
KR20210105345A (ko) Surveying and mapping method, apparatus and device
CN110275179A (zh) Map construction method based on fusion of lidar and vision
WO2022041112A1 (zh) Control method and apparatus for a movable platform, and control system
JP7435599B2 (ja) Information processing apparatus, information processing method, and program
WO2021000225A1 (zh) Control method, apparatus and device for a movable platform, and storage medium
US11983821B2 (en) Control method and apparatus for movable platform, and control system thereof
KR20210106422A (ko) Work control system, work control method, apparatus and device
WO2022056683A1 (zh) Field-of-view determination method, apparatus and system, and medium
WO2021212499A1 (zh) Target calibration method, apparatus and system, and remote control terminal of a movable platform
WO2021134715A1 (zh) Control method, device, unmanned aerial vehicle, and storage medium
Choi et al. An Implementation of Drone-Projector: Stabilization of Projected Image
WO2019062173A1 (zh) Video processing method, device, unmanned aerial vehicle, and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20950782

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20950782

Country of ref document: EP

Kind code of ref document: A1