WO2020014987A1 - Control method, apparatus, device and storage medium for a mobile robot

Control method, apparatus, device and storage medium for a mobile robot

Info

Publication number
WO2020014987A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
target object
position information
target image
mobile robot
Prior art date
Application number
PCT/CN2018/096534
Other languages
English (en)
French (fr)
Inventor
李劲松
周游
刘洁
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2018/096534 (WO2020014987A1)
Priority to CN201880042301.7A (CN110892714A)
Priority to CN202310158597.1A (CN116126024A)
Publication of WO2020014987A1
Priority to US17/123,125 (US11789464B2)

Classifications

    • G05D 1/0808: Control of attitude, i.e. control of roll, pitch, or yaw, specially adapted for aircraft
    • G05D 1/0094: Control of position, course or altitude of land, water, air, or space vehicles involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • G05D 1/101: Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D 1/0202: Control of position or course in two dimensions specially adapted to aircraft
    • B64C 39/024: Aircraft not otherwise provided for, characterised by special use, of the remote controlled vehicle type, i.e. RPV
    • B64D 47/08: Arrangements of cameras
    • B64U 10/13: Flying platforms (rotorcraft UAVs)
    • B64U 2101/30: UAVs specially adapted for imaging, photography or videography
    • B64U 2201/10: UAVs with autonomous flight controls, i.e. navigating independently from ground or air stations, e.g. by using inertial navigation systems [INS]

Description

  • Embodiments of the present invention relate to the field of control, and in particular, to a control method, apparatus, device, and storage medium for a mobile robot.
  • A mobile robot, such as an unmanned aerial vehicle or an unmanned ground robot, may carry a photographing device, and the photographing device can photograph a target object.
  • Surround shooting around a target object is a shooting method in the prior art.
  • In the prior art, the drone needs to fly directly above the target object, and the surround center is set directly above the target object.
  • The user then needs to instruct the drone, through the control terminal, to record the position of the surround center (for example, to record the GPS position of the orbiting center); the drone then flies away from the orbiting center to a preset position and performs an orbiting flight with the orbiting center as the circle center and the distance of the drone from the orbiting center as the radius.
  • As a result, the operation process of moving around the target object is cumbersome.
  • Embodiments of the present invention provide a control method, apparatus, device, and storage medium for a mobile robot, to simplify the mobile robot's process of moving around a target object and, at the same time, improve the operational safety of the mobile robot.
  • a first aspect of an embodiment of the present invention is to provide a control method of a mobile robot, which is applied to a mobile robot, wherein the mobile robot includes a photographing device, and the method includes:
  • a second aspect of the embodiments of the present invention is to provide a control device for a mobile robot, including: a memory and a processor;
  • the memory is used to store program code
  • the processor calls the program code, and when the program code is executed, is used to perform the following operations:
  • a third aspect of the embodiments of the present invention is to provide a mobile robot, including:
  • a fourth aspect of the embodiments of the present invention is to provide a computer-readable storage medium having a computer program stored thereon, the computer program being executed by a processor to implement the method according to the first aspect.
  • The control method, apparatus, device, and storage medium for a mobile robot provided by the embodiments of the present invention determine the position information of a target object by acquiring the position information, in a reference image output by the shooting device, of the target object captured by the shooting device, and control the mobile robot to move around the target object according to the position information of the target object. In this way, the mobile robot can move around the target object without first moving to the surround center to record the position of the surround center, which simplifies the process of making the mobile robot move around the target object and improves the operational safety of the mobile robot.
  • FIG. 1 is a flowchart of a method for controlling a mobile robot according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a reference image according to an embodiment of the present invention.
  • FIG. 4 is a flowchart of a method for controlling a mobile robot according to another embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a UAV flying around a reference object according to an embodiment of the present invention.
  • FIG. 6 is a flowchart of a method for controlling a mobile robot according to another embodiment of the present invention.
  • FIG. 7 is a schematic diagram of feature point tracking provided by an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a correspondence relationship between a three-dimensional coordinate and a pixel coordinate according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of orbiting a reference object by a drone according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of orbiting a target object by a drone according to an embodiment of the present invention.
  • FIG. 11 is a flowchart of a method for controlling a mobile robot according to another embodiment of the present invention.
  • FIG. 12 is a schematic diagram of feature point tracking provided by an embodiment of the present invention.
  • FIG. 13 is a structural diagram of a control device for a mobile robot according to an embodiment of the present invention.
  • FIG. 14 is a structural diagram of an unmanned aerial vehicle provided by an embodiment of the present invention.
  • 35: start control button; 50: reference object; 51: nose of the drone;
  • 130: control device; 131: memory; 132: processor; 133: communication interface;
  • 140: unmanned aerial vehicle; 141: motor
  • When a component is said to be "fixed to" another component, it may be directly on the other component or an intervening component may be present. When a component is said to be "connected" to another component, it may be directly connected to the other component or an intervening component may be present.
  • FIG. 1 is a flowchart of a method for controlling a mobile robot according to an embodiment of the present invention.
  • The control method of the mobile robot according to this embodiment can be applied to a mobile robot, where the mobile robot includes a photographing device.
  • the method in this embodiment may include:
  • Step S101 Acquire instruction information of a target object, where the instruction information includes position information of the target object in a reference image output by a photographing device.
  • the mobile robot described in this embodiment may be an unmanned aerial vehicle, an unmanned ground robot, or an unmanned ship.
  • In the following, a drone is taken as an example of the mobile robot for schematic description; understandably, the drone in this description can be equivalently replaced by another type of mobile robot.
  • the drone 20 is equipped with a photographing device 21, and the photographing device 21 may specifically be a camera, a video camera, or the like.
  • the photographing device 21 may be mounted on the drone 20 through the gimbal 22, or the photographing device 21 may be fixed on the drone 20 through other fixing devices.
  • the shooting device 21 can capture video data or image data in real time, and send the video data or image data to the control terminal 24 through the wireless communication interface 23 of the drone 20.
  • The control terminal 24 may specifically be a remote controller corresponding to the drone 20, or a user terminal such as a smartphone or a tablet computer.
  • the drone 20 may further include a control device, and the control device may include a general-purpose or special-purpose processor, which is only schematically illustrated here and does not limit the specific structure of the drone.
  • the image captured by the shooting device 21 includes a target object 31 as shown in FIG. 2.
  • In this embodiment, a certain frame of image output by the shooting device 21 is recorded as a reference image, and the processor of the drone 20 may obtain instruction information of the target object, where the instruction information includes the position information of the target object in the reference image.
  • Optionally, obtaining the indication information of the target object includes: receiving the indication information sent by the control terminal, wherein the indication information is determined by the control terminal according to a target object selection operation detected on the interactive interface displaying the reference image.
  • Specifically, the reference image is sent to the control terminal 24 through the wireless communication interface 23, and the control terminal 24 displays the reference image in the interactive interface, so that the user can select the target object in the reference image in the interactive interface. As shown in FIG. 3, 30 represents the reference image displayed in the interactive interface, and the reference image 30 includes the target object 31. One possible way for the user to select the target object 31 in the interactive interface is: the user selects the point 32 and slides from the point 32 to the point 33. This is only a schematic description, and the specific selection operation is not limited in this embodiment.
  • The control terminal 24 may determine the region 34 selected by the user in the interactive interface according to the user's selection operation, and determine the position information of the region 34 in the reference image 30. For example, the control terminal 24 may determine the position information of the point 32 at the upper left corner of the region 34 in the reference image 30 together with the size of the region 34, such as its length and width; or the control terminal 24 may determine the position information of the point 32 at the upper left corner of the region 34 in the reference image 30 together with the position information of the point 33 at the lower right corner of the region 34 in the reference image 30. Further, the control terminal 24 may send the position information of the region 34 in the reference image 30 to the drone 20 as the position information of the target object 31 in the reference image 30, that is, as the indication information of the target object 31.
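  • The following is a minimal sketch, not the patent's actual message format, of how a control terminal could package such a box selection as indication information; the class and field names are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class IndicationInfo:
    # top-left corner of the selected region (e.g. region 34) in the reference image, in pixels
    x: float
    y: float
    # size of the selected region, in pixels
    width: float
    height: float

def selection_to_indication(start_point, end_point):
    """Convert a drag from a start point (e.g. point 32) to an end point (e.g. point 33) into region info."""
    (x1, y1), (x2, y2) = start_point, end_point
    return IndicationInfo(x=min(x1, x2), y=min(y1, y2),
                          width=abs(x2 - x1), height=abs(y2 - y1))

# e.g. a drag like the one shown in FIG. 3
info = selection_to_indication((120, 80), (260, 210))
```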
  • the acquiring the indication information of the target object includes: identifying the target object in the reference image to acquire the indication information of the target object.
  • the processor of the drone may identify the target object in the reference image output by the photographing device 21, and obtain the indication information of the target object through recognition. Further, the processor of the drone may input the reference image into a trained neural network model, and obtain indication information of the target object output by the neural network model.
  • Step S102 Determine position information of the target object according to the instruction information.
  • the drone may determine the position information of the target object according to the indication information, where the position information of the target object may be three-dimensional position information or two-dimensional position information;
  • the position information of the target object may be position information based on the world coordinate system; in addition, the position information of the target object may be position information based on the global coordinate system, and the position information may include at least longitude and latitude;
  • the position information of the target object may also be position information based on a body coordinate system of the drone.
  • Optionally, determining the position information of the target object according to the instruction information includes: determining the orientation of the target object with respect to the mobile robot according to the instruction information, and determining the position information of the target object according to that orientation together with the horizontal distance between the target object and the mobile robot, or the ground-height value of the mobile robot.
  • Specifically, the orientation of the target object 31 with respect to the drone 20 may be determined according to the position information of the target object 31 in the reference image 30 and the attitude of the gimbal that carries the photographing device 21; the position information of the target object 31 is then determined according to this orientation and the horizontal distance between the target object 31 and the drone 20.
  • the FOV of the shooting device 21 is known, and the angle of the target object 31 relative to the optical axis of the shooting device 21 can be determined according to the position information of the target object 31 in the reference image. For example, if the target object 31 is at the center of the reference image, the angle of the target object 31 with respect to the optical axis of the shooting device is 0.
  • For example, if the FOV of the shooting device 21 is 20 degrees in the horizontal direction and the target object 31 is at the far left of the reference image, then the horizontal angle of the target object 31 with respect to the optical axis of the photographing device 21 is 10 degrees; the vertical direction is similar and is not repeated here.
  • The attitude of the gimbal 22 carrying the photographing device 21 determines the orientation of the optical axis of the photographing device 21; combining the angle of the target object 31 relative to the optical axis of the photographing device 21 with the orientation of the optical axis yields the orientation of the target object 31 with respect to the drone 20.
  • the position information of the target object 31 is determined according to the orientation of the target object 31 with respect to the drone 20 and the horizontal distance between the target object 31 and the drone 20. In some embodiments, the position information of the target object 31 is determined according to the orientation of the target object 31 with respect to the drone 20, the horizontal distance between the target object 31 and the drone 20, or the ground-height value of the drone 20 .
  • Specifically, the angle of the target object 31 relative to the drone 20 in the pitch direction, such as the angle shown in FIG. 2, can be determined according to the orientation of the target object 31 with respect to the drone 20. The ground-height value of the drone, such as h shown in FIG. 2, can be obtained from a distance sensor configured on the drone 20; according to this pitch angle and the ground-height value, the position information of the target object relative to the drone in the vertical direction can be determined. Likewise, according to the angle of the target object 31 with respect to the drone 20 in the yaw direction and the horizontal distance L between the target object 31 and the drone 20, the position information of the target object relative to the drone in the horizontal direction can be determined. From the position information of the target object relative to the drone in the vertical direction and in the horizontal direction, the position information of the target object relative to the drone can be determined; further, the position information of the target object can be determined from the position information of the target object relative to the drone and the position information of the drone. The position information of the target object may be the position of the target object in the world coordinate system, or the position of the target object in the global coordinate system.
  • In other embodiments, the position information of the target object relative to the drone in the vertical direction may also be determined according to the horizontal distance L between the target object 31 and the drone 20 and the pitch angle.
  • the indication information of the target object may indicate the size of the image area corresponding to the target object in the reference image, and the horizontal distance between the target object 31 and the drone 20 may be determined according to the size of the image area.
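  • The geometry described above can be sketched as follows; the function names, the small-angle planar model, and the example FOV, angle, and height values are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def angle_from_optical_axis(pixel, image_size, fov_deg):
    """Angle (deg) of the target from the optical axis along one image axis."""
    # e.g. with a 20-degree horizontal FOV, a target at the far left is offset ~10 deg
    offset = (pixel - image_size / 2.0) / (image_size / 2.0)   # normalised to -1 .. 1
    return offset * fov_deg / 2.0

def target_position_from_height(gimbal_pitch_deg, angle_v_deg, angle_h_deg,
                                gimbal_yaw_deg, height_h):
    """Target position in a local horizontal frame, using the ground-height value h."""
    pitch = np.radians(gimbal_pitch_deg + angle_v_deg)   # total depression angle
    yaw = np.radians(gimbal_yaw_deg + angle_h_deg)       # total bearing of the target
    horizontal_range = height_h / np.tan(pitch)          # distance along the ground
    # x: forward, y: right, z: down, relative to the drone
    return np.array([horizontal_range * np.cos(yaw),
                     horizontal_range * np.sin(yaw),
                     height_h])

# e.g. target 5 deg right of and 2 deg below the optical axis, gimbal pitched 30 deg
# down, drone 40 m above ground:
p = target_position_from_height(30.0, 2.0, 5.0, 0.0, 40.0)
```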
  • Step S103 Control the mobile robot to move around the target object according to the position information of the target object.
  • Specifically, with the target object 31 as the center, a surrounding trajectory is generated according to the positional relationship between the drone 20 and the target object 31, and the drone 20 is controlled to move on this trajectory, that is, the drone 20 is controlled to fly on the surrounding trajectory, thereby realizing the orbiting of the target object 31. While the drone 20 is flying around the target object 31, the shooting device 21 can shoot the target object 31 in real time and send the image data or video data obtained through shooting to the control terminal 24 through the wireless communication interface 23, so that the user can browse and watch it.
  • This embodiment determines the position information of a target object by acquiring the position information, in a reference image output by the shooting device, of the target object captured by the shooting device, and controls the mobile robot to move around the target object according to the position information of the target object. In this way, the mobile robot can orbit the target object without first moving to the orbiting center to record the position of the orbiting center, which simplifies the process of making the mobile robot move around the target object and improves the operational safety of the mobile robot.
  • FIG. 4 is a flowchart of a method for controlling a mobile robot according to another embodiment of the present invention. As shown in FIG. 4, on the basis of the embodiment shown in FIG. 1, this embodiment provides another implementable manner of determining the position information of the target object according to the instruction information. Specifically, determining the position information of the target object according to the instruction information may include the following steps:
  • Step S401 Control the mobile robot to orbit the reference object.
  • Specifically, a point at a preset distance directly in front of the drone may be taken as a reference object; the reference object is specifically a virtual target point, and the drone is controlled to orbit the reference object.
  • 50 indicates a reference object at a preset distance in front of the drone
  • 51 indicates the nose of the drone.
  • The processor in the drone can specifically control the drone to fly around the reference object 50.
  • the controlling the mobile robot to orbit the reference object includes: determining the reference object according to a preset orbit radius, and controlling the mobile robot to orbit the reference object.
  • Specifically, a circular trajectory, such as the circular trajectory 53 shown in FIG. 5, is generated with the reference object 50 as the orbiting center and a preset orbiting radius, such as 500 meters, as the radius, and the drone is controlled to orbit the reference object 50 on the circular trajectory 53.
  • the drone may fly on the circular trajectory 53 in a counterclockwise direction, and may also fly on the circular trajectory 53 in a clockwise direction.
  • After the processor in the drone receives the instruction information of the target object sent by the control terminal, it can determine the reference object according to the preset orbit radius and control the drone to orbit the reference object. In other words, once the user has framed and selected the target object in the reference image, the drone can begin to orbit the reference object.
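  • As a rough sketch of this step, under assumed parameter values (a 500 m preset distance and orbit radius) and a flat 2D model, the reference object and a circular orbit around it could be generated as follows.

```python
import numpy as np

def make_orbit_waypoints(drone_pos, heading_rad, preset_distance=500.0,
                         orbit_radius=500.0, n_points=36):
    """Return waypoints of a circle centred on the virtual reference object."""
    # reference object (e.g. reference object 50): preset_distance metres straight ahead of the nose
    center = drone_pos + preset_distance * np.array([np.cos(heading_rad),
                                                     np.sin(heading_rad)])
    angles = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    return [center + orbit_radius * np.array([np.cos(a), np.sin(a)])
            for a in angles]

# e.g. drone at the origin, nose pointing along +x:
waypoints = make_orbit_waypoints(np.array([0.0, 0.0]), heading_rad=0.0)
```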
  • controlling the mobile robot to orbit the reference object includes: after receiving the start control instruction sent by the control terminal, controlling the mobile robot to orbit the reference object.
  • the activation control button 35 may be displayed in the interaction interface.
  • Specifically, the start control button 35 may be an icon in the interactive interface; that is, after the user selects the target object 31, the drone does not immediately orbit the reference object, but waits for the user to click the start control button 35 in the interactive interface before it begins to orbit the reference object.
  • Specifically, when the user clicks the start control button 35 in the interactive interface, the control terminal generates a start control instruction according to the user's click operation and sends the start control instruction to the drone; after the processor in the drone receives the start control instruction, the drone is controlled to orbit the reference object.
  • the specific orbit control method may be the method shown in FIG. 5, which is not repeated here.
  • Step S402 In the process of moving the mobile robot around a reference object, obtain a plurality of frames of a first target image output by the photographing device, where the first target image includes the target object.
  • During the drone's orbital flight around the reference object 50, the shooting device of the drone may also capture images and output a target image that includes the target object 31; the target image captured during the orbital flight around the reference object 50 is recorded as the first target image.
  • the first target image output by the shooting device of the drone may be multiple frames.
  • the processor of the drone may obtain a plurality of frames of the first target image output by the shooting device, and the first target image includes the target object 31 .
  • the angle at which the target object 31 is offset from the optical axis of the shooting device is not limited here, as long as the target object 31 is ensured in the shooting screen of the shooting device.
  • Step S403 Determine the position information of the target object according to the indication information of the target object and the multi-frame first target image.
  • the processor of the drone may determine the position information of the target object 31 according to the indication information of the target object 31 obtained in the foregoing embodiment and the first target image of the multiple frames obtained in the foregoing steps.
  • determining the position information of the target object according to the indication information of the target object and the multi-frame first target image may include the following steps as shown in FIG. 6:
  • Step S601 Acquire a feature point in a target area of the reference image, where the target area is an image area indicated by the instruction information in the reference image.
  • After the drone receives the instruction information of the target object sent by the control terminal, it can determine the target area of the reference image according to the instruction information of the target object, where the target area is specifically the image area indicated by the instruction information. For example, as shown in FIG. 3, after the drone receives the position information of the area 34 in the reference image 30 sent by the control terminal, or obtains the position information of the area 34 in the reference image 30 through recognition, the processor of the drone may determine a target area in the reference image 30; the target area may specifically be the area 34, that is, the drone may use the area framed by the user in the interactive interface as the target area. Further, the processor of the drone may obtain the feature points in the target area.
  • the processor may determine the feature points in the target area according to a preset feature point extraction algorithm.
  • Optionally, the feature point extraction algorithm includes at least one of the following: the Harris corner detection algorithm, Scale-Invariant Feature Transform (SIFT), the Speeded Up Robust Features (SURF) algorithm, the Oriented FAST and Rotated BRIEF (ORB) feature point extraction and description algorithm, etc.
  • Harris corner detection algorithm is used to extract feature points in the target area.
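  • As an illustrative sketch only, feature points restricted to the target area could be extracted with OpenCV's Harris-based detector as below; the detector parameters are assumed example values, and any of the algorithms listed above could be substituted.

```python
import cv2
import numpy as np

def extract_target_features(reference_image, region):
    """region: (x, y, w, h) of the target area (e.g. region 34) in pixels."""
    x, y, w, h = region
    gray = cv2.cvtColor(reference_image, cv2.COLOR_BGR2GRAY)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255             # restrict detection to the target area
    # Harris response is used when useHarrisDetector=True
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.01,
                                      minDistance=7, mask=mask,
                                      useHarrisDetector=True, k=0.04)
    return corners                            # float32 pixel coordinates, shape (N, 1, 2)
```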
  • Step S602 Use a tracking algorithm to obtain the feature points of the first target image of each frame based on the feature points in the target area of the reference image.
  • the tracking algorithm is used to track the feature points in the target area, that is, the position of the feature points in the target area in the first target image of each frame is determined using the tracking algorithm.
  • the tracking algorithm may specifically be a (Kanade-Lucas-Tomasi Feature Tracker, KLT) feature tracking algorithm.
  • As shown in FIG. 7, A, B, C, D, E, F, and G respectively represent the feature points in the target region of the reference image 30, that is, the feature points in the region 34; the feature points A, B, C, D, E, F, and G are also feature points of the target object 31.
  • 71, 72, and 73 respectively represent the first target images sequentially output by the shooting device during the process of the drone flying around the reference object.
  • Using the KLT feature tracking algorithm, the positions of the feature points of the target object 31 in the reference image 30, such as A, B, C, D, E, F, and G, can be determined in the first target image 71, the first target image 72, and the first target image 73, respectively.
  • The photographing device first outputs the reference image 30 and then sequentially outputs the first target image 71, the first target image 72, and the first target image 73; the reference image 30, the first target image 71, the first target image 72, and the first target image 73 may be adjacent images or non-adjacent images.
  • During the drone's flight around the reference object, the position of the target object 31 relative to the drone constantly changes, so that the position of the target object 31 in the first target images sequentially output by the photographing device constantly changes, and thus the positions of the feature points corresponding to the target object 31 in the first target image 71, the first target image 72, and the first target image 73 change from one first target image to the next.
  • This is only a schematic description, and does not limit the number of feature points in the area 34, the number of first target images, and the position of the feature points in the area 34 in each frame of the first target image.
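  • A minimal sketch of this tracking step, using the pyramidal Lucas-Kanade (KLT) tracker in OpenCV, is shown below; the window size and pyramid depth are assumed example parameters.

```python
import cv2

def track_features(prev_gray, next_gray, prev_points):
    """prev_points: float32 array of shape (N, 1, 2) from the previous (e.g. reference) frame."""
    next_points, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_points, None,
        winSize=(21, 21), maxLevel=3)
    good = status.reshape(-1) == 1            # keep only successfully tracked points
    return prev_points[good], next_points[good]
```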
  • Step S603 Determine the position information of the target object according to the position information of the feature points of the first target image in each frame in the corresponding first target image.
  • Specifically, the position information of the target object 31 is determined according to the position information, in the corresponding first target image, of the feature points corresponding to the target object 31 in the first target image 71, the first target image 72, and the first target image 73.
  • the position information of the target object 31 is specifically three-dimensional coordinates of the target object 31 in a three-dimensional space.
  • The position information of the target object 31 determined according to the position information of the feature points corresponding to the target object 31 in the first target image 71, the first target image 72, and the first target image 73 in the corresponding first target images is recorded as the first position information.
  • As the drone continues to fly around the reference object, the photographing device outputs a new first target image, and the positions of the feature points of the target object 31 in the new first target image can be determined according to the KLT feature tracking algorithm; further, according to the position information of the feature points corresponding to the target object 31 in the first target image 71, the first target image 72, the first target image 73, and the new first target image in the corresponding first target images, another position information of the target object 31 can be determined, which is recorded as the second position information.
  • The above-mentioned first position information and second position information may be the same or different, but it can be understood that, as the shooting device continuously outputs new first target images, the accuracy of the position information of the target object 31, determined from the position information of the feature points corresponding to the target object 31 in the first target image 71, the first target image 72, the first target image 73, and the subsequently output first target images, continuously increases.
  • In other words, each time a new first target image is output, the processor of the drone can determine new position information of the target object 31.
  • Optionally, determining the position information of the target object based on the position information of the feature points of each frame of first target image in the corresponding first target image includes: determining the position information of the target object by applying a fitting algorithm to the position information of the feature points of each frame of first target image in the corresponding first target image.
  • 80 represents a target object
  • 81, 82, and 83 represent the first target images successively output by the shooting device during the movement of the shooting device around the target object 80 in the direction shown by the arrow.
  • The three-dimensional points on the target object 80 may be mapped onto the first target images 81, 82, and 83, and the mapping points of the three-dimensional points in the first target images 81, 82, and 83 may specifically be the feature points in the first target images 81, 82, and 83. As the shooting device moves around the target object 80, the number of feature points that can be tracked may decrease.
  • point A, point B, and point C are three-dimensional points on the target object 80 respectively.
  • Point a1, point b1, and point c1 represent feature points in the first target image 81; point a1 corresponds to point A, point b1 corresponds to point B, and point c1 corresponds to point C.
  • points a2, b2, and c2 represent feature points in the first target image 82, point a2 corresponds to point A, point b2 corresponds to point B, and point c2 corresponds to point C;
  • Points a3 and b3 represent feature points in the first target image 83, point a3 corresponds to point A, and point b3 corresponds to point B.
  • According to the camera projection model, the relationship between the three-dimensional coordinates (x_w, y_w, z_w) of a three-dimensional point on the target object 80 in the world coordinate system and the pixel coordinates (u, v) of that point's mapping point in the first target image can be obtained, as shown in formula (1):

    z_c · [u, v, 1]^T = K · (R · [x_w, y_w, z_w]^T + T)    (1)
  • z_c represents the coordinate of the three-dimensional point on the Z axis of the camera coordinate system;
  • K represents the internal parameter of the camera;
  • R represents the rotation matrix of the camera;
  • T represents the translation matrix of the camera;
  • (u, v), K, R, and T are known quantities;
  • z_c and (x_w, y_w, z_w) are unknown quantities.
  • For example, according to the pixel coordinates of the point a1 in the first target image 81 and the R and T corresponding to the first target image 81, an equation of the form of formula (1) can be established; according to the pixel coordinates of the point a2 in the first target image 82 and the R and T corresponding to the first target image 82, another equation of the form of formula (1) can be established; according to the pixel coordinates of the point a3 in the first target image 83 and the R and T corresponding to the first target image 83, yet another equation of the form of formula (1) can be established. As the shooting device continuously outputs new first target images, the number of equations of the form of formula (1) gradually increases.
  • When the number of equations is sufficient, the corresponding unknowns can be solved; that is, by solving these equations using a fitting algorithm, the three-dimensional coordinates of the three-dimensional point A in the world coordinate system can be calculated. Similarly, the three-dimensional coordinates of the three-dimensional point B and the three-dimensional point C in the world coordinate system can be calculated, and details are not described herein again. It can be understood that the more first target images the photographing device outputs, the more accurate the three-dimensional coordinates, in the world coordinate system, of the three-dimensional points obtained with the fitting algorithm from the pixel coordinates of the feature points in the multiple frames of first target image.
  • After the three-dimensional coordinates in the world coordinate system of the three-dimensional points on the target object 80, such as the three-dimensional points A, B, and C, are determined, the three-dimensional coordinates of the target object 80 in the world coordinate system can be determined from them, and the drone may obtain the position information of the target object according to the three-dimensional coordinates of the target object 80 in the world coordinate system.
  • When the position information of the target object 31 is a position in the global coordinate system, the position information of the target object 31 may be determined according to the position information of the drone and the three-dimensional coordinates of the target object 80 in the world coordinate system; when the position information of the target object 31 is a position in the body coordinate system of the drone, the three-dimensional coordinates of the target object 80 in the world coordinate system can be converted into the body coordinate system to obtain the position information of the target object 31 based on the body coordinate system.
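  • One possible realization of such a fitting step, sketched below under the assumption that K and the per-frame R and T are known, stacks two linear equations derived from formula (1) for each observation and solves the system in a least-squares sense (linear triangulation); this is an illustrative formulation, not necessarily the exact method of the patent.

```python
import numpy as np

def triangulate_point(pixel_coords, rotations, translations, K):
    """pixel_coords: list of (u, v); rotations/translations: per-frame R (3x3) and T (3,)."""
    rows = []
    for (u, v), R, T in zip(pixel_coords, rotations, translations):
        P = K @ np.hstack([R, T.reshape(3, 1)])      # 3x4 projection matrix for this frame
        rows.append(u * P[2] - P[0])                 # from z_c*u = P0.X and z_c = P2.X
        rows.append(v * P[2] - P[1])
    A = np.array(rows)                               # shape (2 * n_frames, 4)
    # Solve A X = 0 with ||X|| = 1 via SVD, then de-homogenise
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                              # (x_w, y_w, z_w)
```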
  • Optionally, the method further includes: after acquiring the feature points of each frame of first target image, determining, from the feature points of each frame of first target image, target feature points that meet a preset requirement. Correspondingly, determining the position information of the target object according to the position information of the feature points of each frame of first target image in the corresponding first target image includes: determining the position information of the target object according to the position information of the target feature points of each frame of first target image in the corresponding first target image.
  • When determining the target feature points that meet the preset requirement, for example, the offset of each feature point between the first target image 71 and the reference image 30 may differ: the offset of the feature point A between the first target image 71 and the reference image 30 is recorded as h1, the offset of the feature point B between the first target image 71 and the reference image 30 is recorded as h2, and so on, with the offset of the feature point G between the first target image 71 and the reference image 30 recorded as h7. The average and variance of h1, h2, ..., h7 are calculated; the average is denoted u and the variance is denoted σ², and, according to the Gaussian distribution, the feature points whose offsets lie within [u - 3σ, u + 3σ] are selected as the target feature points.
  • For example, assuming that h1 lies outside [u - 3σ, u + 3σ], the feature point A in the first target image 71 is deleted, and the feature points B, C, D, E, F, and G in the first target image 71 are retained and serve as the target feature points of the first target image 71.
  • target feature points in the first target image 72 and the first target image 73 can be calculated, and details are not described herein again.
  • In other embodiments, the average and variance of the offsets h1, h2, ..., h7 of the feature points between the first target image 71 and the reference image 30 are calculated, and the feature points whose offsets lie within [u - 3σ, u + 3σ] are selected as valid points according to the Gaussian distribution. For example, if h1 is outside [u - 3σ, u + 3σ], the feature point A in the first target image 71 is deleted, and the feature points B, C, D, E, F, and G in the first target image 71 are used as valid points; the target feature points of the first target image 71 are then further determined from these valid points. Similarly, the target feature points in the first target image 72 and the first target image 73 can be determined.
  • The three-dimensional coordinates of the target object 31 in the world coordinate system are then determined according to the position information of the target feature points in the corresponding first target images; the specific principle is consistent with the principle shown in FIG. 8 and is not repeated here.
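  • A small sketch of this preset requirement, assuming the offset is measured as the Euclidean pixel displacement of each tracked point, is given below.

```python
import numpy as np

def select_target_feature_points(ref_points, tracked_points):
    """ref_points / tracked_points: float arrays of shape (N, 2), matched by index."""
    offsets = np.linalg.norm(tracked_points - ref_points, axis=1)   # h1, h2, ..., h7
    u = offsets.mean()
    sigma = offsets.std()
    keep = (offsets >= u - 3 * sigma) & (offsets <= u + 3 * sigma)
    return tracked_points[keep], keep        # target feature points and their selection mask
```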
  • In this embodiment, the drone is controlled to orbit the reference object, and, in the process of orbiting the reference object, multiple frames of the first target image output by the photographing device are acquired; the position information of the target object is determined according to the instruction information of the target object and the multiple frames of the first target image. Since the shooting device continuously outputs first target images, the position information of the target object can be continuously determined according to the instruction information of the target object and the continuously output first target images, and the accuracy of the position information of the target object is continuously improved. In addition, after the feature points of each frame of first target image output by the shooting device are obtained, target feature points that meet the preset requirement are determined from the feature points of each frame of first target image; determining the position information of the target object according to the position information of these target feature points in the corresponding first target images can improve the accuracy of the position information of the target object, and removing the feature points that do not meet the preset requirement can also reduce the corresponding amount of calculation.
  • An embodiment of the present invention provides a control method for a mobile robot. Based on the above embodiments, the method further includes: determining, according to the position information of the feature points of each frame of first target image in the corresponding first target image, the parallax of the shooting device relative to the target object while the mobile robot moves around the reference object. Correspondingly, controlling the mobile robot to move around the target object according to the position information of the target object includes: when the parallax is greater than a first preset parallax threshold, determining, according to the determined position information of the target object, an orbital trajectory in which the mobile robot orbits the target object, and controlling the mobile robot to move on the orbital trajectory.
  • According to the position information of the feature points A, B, C, D, E, F, and G in the first target image 71, the first target image 72, and the first target image 73, respectively, the parallax of the shooting device of the drone relative to the target object 31 during the flight around the reference object 50 shown in FIG. 5 can be determined. For example, the first target image 71 is an image captured by the shooting device when the drone is at the m1 position, the first target image 72 is an image captured when the drone is at the m2 position, and the first target image 73 is an image captured when the drone is at the m3 position. According to the position information of the feature points A, B, C, D, E, F, and G in the first target image 71 and in the first target image 72, the parallax of the shooting device relative to the target object 31 during the drone's movement from the m1 position to the m2 position can be determined. Specifically, the pixel coordinates of the feature point A in the first target image 71 are denoted (u1, v1), and the pixel coordinates of the feature point A in the first target image 72 are denoted (u2, v2); the parallax of the feature point A, recorded as parallaxA, can be calculated according to formula (2), where:
  • R21 represents the change in rotation of the attitude of the camera when shooting the first target image 72 relative to the attitude of the camera when shooting the first target image 71;
  • c_x and c_y represent the position of the camera optical center; it can be understood that the positions of the camera optical center in the first target image 71 and the first target image 72 are the same;
  • f represents the focal length of the camera.
  • Similarly, the parallax of the feature points B, C, D, E, F, and G can be calculated. The parallax values of the feature points A, B, C, D, E, F, and G are averaged, and the average is taken as the parallax of the first target image 72, that is, the parallax of the shooting device relative to the target object 31 during the drone's movement from the m1 position to the m2 position. Similarly, the parallax of the first target image 73 can be determined; the parallax of the first target image 73 is the parallax of the shooting device relative to the target object 31 during the drone's movement from the m1 position to the m3 position. It can be understood that, as the drone flies along the circular trajectory 53, the parallax of the shooting device relative to the target object 31 keeps increasing, while the three-dimensional coordinates of the target object 31 are continuously refined by the fitting algorithm. When the parallax of the shooting device relative to the target object 31 is greater than the first preset parallax threshold, the fitting algorithm is stopped to obtain the most recently determined, that is, the accurate, three-dimensional coordinates of the target object 31, and a target trajectory 91 for the drone to orbit the target object 31 is determined according to these three-dimensional coordinates and a preset surround parameter, such as a surround radius; the drone is then controlled to fly along the target trajectory 91. This target trajectory differs from the trajectory 53 along which the drone orbits the reference object 50.
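  • Since formula (2) itself is not reproduced in this text, the following is only a plausible sketch of a rotation-compensated parallax consistent with the symbols defined above (R21, c_x, c_y, f); the exact formula used by the patent may differ.

```python
import numpy as np

def feature_parallax(p1, p2, R21, f, cx, cy):
    """Parallax of one feature point between two first target images, in pixels."""
    u1, v1 = p1
    ray1 = np.array([(u1 - cx) / f, (v1 - cy) / f, 1.0])   # normalised camera ray in image 71
    ray1_in_2 = R21 @ ray1                                  # remove the pure-rotation effect
    u1_pred = f * ray1_in_2[0] / ray1_in_2[2] + cx
    v1_pred = f * ray1_in_2[1] / ray1_in_2[2] + cy
    return np.hypot(p2[0] - u1_pred, p2[1] - v1_pred)       # parallaxA

def frame_parallax(points1, points2, R21, f, cx, cy):
    """Average parallax of the feature points A..G between two first target images."""
    return float(np.mean([feature_parallax(p1, p2, R21, f, cx, cy)
                          for p1, p2 in zip(points1, points2)]))
```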
  • the method further includes: determining a change speed of the parallax; and adjusting a speed at which the mobile robot moves around the reference object according to the change speed of the parallax.
  • Optionally, determining the change speed of the parallax includes: determining the change speed of the parallax according to the position information, in the corresponding first target images, of the feature points of two adjacent frames of first target image among the multiple frames of first target image.
  • For example, the change speed of the parallax may be measured as (PA_i - PA_(i-1)) / t; when the image frequency is fixed, measuring the magnitude of (PA_i - PA_(i-1)) / t is equivalent to measuring the magnitude of PA_i - PA_(i-1).
  • The flying speed that the drone needs to reach may then be determined as the current flying speed of the drone multiplied by (the expected change speed of the parallax / the current change speed of the parallax).
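  • A one-line sketch of this speed adjustment, with assumed variable names, is:

```python
def adjust_orbit_speed(current_speed, current_parallax_speed, expected_parallax_speed):
    """Scale the orbiting speed so the parallax changes at the expected rate."""
    if current_parallax_speed <= 0:
        return current_speed                      # nothing to scale against yet
    return current_speed * (expected_parallax_speed / current_parallax_speed)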
  • In some embodiments, the method further includes: when the parallax is greater than a second preset parallax threshold, adjusting the radius at which the mobile robot orbits the reference object according to the determined position information of the target object, wherein the first preset parallax threshold is greater than the second preset parallax threshold.
  • As described above, when the parallax of the shooting device relative to the target object 31 is greater than the first preset parallax threshold, the target trajectory 91 for orbiting the target object 31 is determined according to the most recently determined three-dimensional coordinates of the target object 31; however, at that moment the drone may be far away from the target trajectory 91 and would need to fly from its current position, such as the m3 position, to a point on the target trajectory 91 before it can start to fly along the target trajectory 91.
  • To avoid this, when the parallax of the shooting device relative to the target object 31 is greater than the second preset parallax threshold, which is smaller than the first preset parallax threshold, for example starting from the m2 position, the radius at which the drone orbits the reference object 50 can be continuously adjusted, for example continuously reduced, according to the position information of the target object 31 determined so far. Because the parallax of the shooting device relative to the target object 31 keeps changing during this adjustment, by the time the parallax exceeds the first preset parallax threshold the drone may already have reached a point on the target trajectory 91 (the accurate target trajectory), such as m4, or a point close to the target trajectory 91, so that it can transition smoothly onto the target trajectory 91.
  • This embodiment determines the parallax of the shooting device relative to the target object during the flight around the reference object according to the position information of the feature points of each frame of first target image in the corresponding first target image, and adjusts the speed at which the drone flies around the reference object according to the change speed of the parallax, so that the drone can determine the three-dimensional coordinates of the target object in a short time; in particular, when the target object is far away from the drone, increasing the flying speed of the drone around the reference object according to the change speed of the parallax improves the efficiency of calculating the three-dimensional coordinates of the target object.
  • In addition, the first preset parallax threshold is greater than the second preset parallax threshold, and when the parallax is greater than the second preset parallax threshold, the radius at which the drone orbits the reference object is adjusted, so that by the time the parallax is greater than the first preset parallax threshold, the drone has arrived at, or close to, the orbital trajectory around the target object, allowing a smooth transition from the trajectory around the reference object to the trajectory around the target object.
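  • The two-threshold behaviour described above can be summarised by the following control-loop sketch; the threshold values and the controller helper methods are assumed placeholders, not part of the patent.

```python
FIRST_PARALLAX_THRESHOLD = 40.0    # pixels, example value only
SECOND_PARALLAX_THRESHOLD = 15.0   # pixels, example value only

def orbit_step(parallax, target_position_estimate, controller):
    if parallax > FIRST_PARALLAX_THRESHOLD:
        # accurate coordinates available: switch to the target trajectory 91
        controller.follow_target_trajectory(target_position_estimate)
    elif parallax > SECOND_PARALLAX_THRESHOLD:
        # shrink / adjust the radius of the orbit around the reference object 50
        controller.adjust_reference_orbit_radius(target_position_estimate)
    else:
        controller.keep_reference_orbit()      # keep orbiting the reference object 50
```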
  • FIG. 11 is a flowchart of a method for controlling a mobile robot according to another embodiment of the present invention. As shown in FIG. 11, on the basis of the above embodiments, the method further includes: after the instruction information is acquired, controlling the shooting attitude of the photographing device according to the instruction information so that the target object is at the center of the shooting frame of the photographing device.
  • the target object 31 may not be at the center of the shooting screen of the shooting device.
  • After the drone obtains the instruction information of the target object 31, for example, after receiving the position information of the area 34 in the reference image 30 sent by the control terminal 24, it can determine, based on the position information of the area 34 in the reference image 30, the angle of the target object 31 relative to the optical axis of the photographing device 21; according to this angle, the attitude of the drone and/or the attitude of the gimbal can be adjusted to control the shooting attitude of the shooting device so that the angle of the target object 31 relative to the optical axis of the shooting device is 0, that is, so that the target object 31 is at the center of the shooting frame of the shooting device.
  • In some embodiments, when the user selects the target object 31, the drone can begin to orbit the reference object; therefore, once the drone obtains the instruction information of the target object 31, it can adjust the attitude of the drone and/or the attitude of the gimbal so that the target object 31 is at the center of the shooting frame of the shooting device. That is, during the drone's orbiting of the reference object, the attitude of the drone and/or the attitude of the gimbal is adjusted so that the target object 31 remains at the center of the shooting frame of the shooting device until the drone determines the three-dimensional coordinates of the target object 31.
  • In other embodiments, when the user selects the target object 31, the drone does not immediately orbit the reference object, but waits until the user clicks the start control button 35 in the interactive interface before it begins to orbit the reference object. For example, the drone obtains the instruction information of the target object at time t1, the user clicks the start control button 35 at time t2 after time t1, that is, the drone orbits the reference object from time t2, and at time t3 after time t2 the three-dimensional coordinates of the target object 31 are determined.
  • The drone may adjust its attitude and/or the attitude of the gimbal between time t1 and time t2 so that the target object 31 is at the center of the shooting frame of the shooting device, because between time t1 and time t2 the drone may not move while the target object 31 moves, causing the position of the target object 31 in the shooting frame of the shooting device to change.
  • the drone may also adjust the attitude of the drone and / or the attitude of the gimbal between time t2 and time t3, so that the target object 31 is at the center of the shooting frame of the shooting device.
  • the drone may also adjust the attitude of the drone and / or the attitude of the gimbal between time t1 and time t3, so that the target object 31 is at the center of the shooting frame of the shooting device.
  • the method further includes: after acquiring the instruction information, acquiring a plurality of frames of the second target image output by the photographing device, wherein the second target image includes a target object.
  • In some embodiments, when the user selects the target object 31, that is, after the drone obtains the instruction information of the target object 31, the drone can orbit the reference object, and the multiple frames of second target image output by the shooting device while the drone orbits the reference object are acquired; in this case the multiple frames of second target image include the multiple frames of first target image.
  • In other embodiments, when the user selects the target object 31, the drone does not immediately orbit the reference object, but waits for the user to click the start control button 35 in the interactive interface before it starts to orbit the reference object. In this case, the multiple frames of second target image output by the shooting device may be captured between time t1 and time t2, between time t2 and time t3, or between time t1 and time t3; that is, the multiple frames of second target image include at least the multiple frames of first target image.
  • controlling the shooting posture of the shooting device according to the instruction information includes the following steps:
  • Step S1101 Use a tracking algorithm to obtain the feature points of the second target image of each frame based on the feature points in the target area of the reference image.
  • Specifically, a tracking algorithm is used to calculate the offset of each feature point in the target area between adjacent images, such as adjacent second target images. If the offset of a feature point from the previous frame to the next frame and its offset from the next frame back to the previous frame are equal in magnitude and opposite in direction, it can be determined that the feature point is a correctly tracked point.
  • As shown in FIG. 12, A, B, C, D, E, F, and G respectively indicate the feature points in the target region of the reference image 30, that is, the feature points in the region 34; the feature points A, B, C, D, E, F, and G are also feature points of the target object 31.
  • 121 indicates one of the second target images among the multiple frames of second target image output by the shooting device after the drone acquires the instruction information; this is only a schematic illustration.
  • According to the KLT feature tracking algorithm, the positions in the second target image 121 of the feature points of the target object 31 in the reference image 30, such as A, B, C, D, E, F, and G, can be determined.
  • Step S1102: Determine the position information of the target object in the corresponding second target image according to the feature points of each frame of the second target image.
  • According to the positions of the feature points such as A, B, C, D, E, F, and G in the second target image 121, the position information of the target object 31 in the second target image 121 can be determined, for example the position information of the center point N1 of the target object 31 in the second target image 121.
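One simple realization of this step, sketched below under the assumption that the target position is represented by the centroid of its tracked feature points, determines the center point N1 as the mean of the tracked positions:

```python
import numpy as np

def target_center_in_image(tracked_pts):
    """Position of the target in the current second target image, taken here as
    the centroid N1 of its correctly tracked feature points (one simple choice)."""
    pts = np.asarray(tracked_pts, dtype=float).reshape(-1, 2)
    return pts.mean(axis=0)  # (u, v) pixel coordinates of center point N1
```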
  • Step S1103: Control the shooting attitude of the shooting device according to the position information of the target object in the corresponding second target image.
  • According to the position information of the center point N1 of the target object 31 in the second target image 121 and the position information of the center point N of the second target image 121, the horizontal distance Δμ and the vertical distance Δυ of the center point N1 relative to the center point N can be determined.
  • Further, from Δμ and the FOV of the shooting device in the horizontal direction, the angle by which the target object 31 is offset from the optical axis of the shooting device in the horizontal direction can be determined; from Δυ and the FOV of the shooting device in the vertical direction, the angle by which the target object 31 is offset from the optical axis in the vertical direction can be determined.
  • According to these horizontal and vertical offset angles, the shooting attitude of the shooting device is adjusted by adjusting the attitude of the drone and/or the gimbal, so that the optical axis of the shooting device is aimed at the target object 31 and the target object 31 is located at the center of the picture of the second target image 121.
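A minimal sketch of the offset-angle computation above, using the proportional pixel-to-angle mapping implied by the FOV description (a tangent model based on the camera intrinsics could be substituted); the FOV values and pixel coordinates in the example are assumptions:

```python
def offset_angles(target_px, image_size, fov_h_deg, fov_v_deg):
    """Angles of the target center N1 relative to the camera optical axis.

    target_px : (u, v) pixel position of the target center N1
    image_size: (width, height) of the second target image
    fov_h_deg, fov_v_deg: horizontal / vertical field of view of the camera

    A point at the image edge is taken to be offset by half the FOV.
    """
    w, h = image_size
    du = target_px[0] - w / 2.0   # Δμ: horizontal distance from image center N
    dv = target_px[1] - h / 2.0   # Δυ: vertical distance from image center N
    yaw_offset = du / w * fov_h_deg     # horizontal offset angle
    pitch_offset = dv / h * fov_v_deg   # vertical offset angle
    return yaw_offset, pitch_offset

# Commanding the gimbal/drone to rotate by the negatives of these angles drives
# the target back toward the picture center (illustrative values).
yaw_err, pitch_err = offset_angles((720, 300), (1280, 720), 82.0, 52.0)
```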
  • In other embodiments, the target object 31 need not be adjusted to the center of the picture of the first target image or the second target image; it may instead be adjusted to a preset area in the first target image or the second target image. That is, by adjusting the attitudes of the drone and/or the gimbal, the angles by which the target object 31 is offset from the optical axis of the shooting device in the horizontal and vertical directions are both non-zero preset angles.
  • In this embodiment, by controlling the shooting attitude of the shooting device so that the target object is at the center of the shooting picture of the shooting device, the target object can be prevented from moving outside the shooting picture while the drone orbits the reference object, which would otherwise make it impossible to determine the three-dimensional coordinates of the target object; in addition, the target object can be prevented from disappearing from the shooting picture of the shooting device while it moves.
  • An embodiment of the present invention provides a control device for a mobile robot.
  • FIG. 13 is a structural diagram of a control device for a mobile robot according to an embodiment of the present invention, where the mobile robot includes a photographing device.
  • As shown in FIG. 13, the control device 130 of the mobile robot includes a memory 131 and a processor 132. The memory 131 is used to store program code; the processor 132 calls the program code and, when the program code is executed, is used to perform the following operations: acquiring instruction information of a target object, where the instruction information includes position information of the target object in a reference image output by the photographing device; determining position information of the target object according to the instruction information; and controlling the mobile robot to move around the target object according to the position information of the target object.
  • Optionally, the control device 130 further includes a communication interface 133, and the communication interface 133 is connected to the processor 132.
  • When the processor 132 obtains the instruction information of the target object, it is specifically configured to receive, through the communication interface, the instruction information sent by the control terminal, where the instruction information is determined by the control terminal detecting a target object selection operation of a user on an interactive interface displaying the reference image.
  • Optionally, when the processor 132 determines the position information of the target object according to the instruction information, the processor 132 is specifically configured to: control the mobile robot to orbit a reference object; while the mobile robot moves around the reference object, acquire multiple frames of the first target image output by the photographing device, where the first target image includes the target object; and determine the position information of the target object according to the indication information of the target object and the multiple frames of the first target image.
  • Optionally, when the processor 132 determines the position information of the target object according to the indication information of the target object and the multiple frames of the first target image, the processor 132 is specifically configured to: obtain feature points in a target area of the reference image, where the target area is the image area indicated by the instruction information in the reference image; based on the feature points in the target area of the reference image, use a tracking algorithm to obtain the feature points of each frame of the first target image; and determine the position information of the target object according to the position information of the feature points of each frame of the first target image in the corresponding first target image.
  • Optionally, when the processor 132 determines the position information of the target object according to the position information of the feature points of each frame of the first target image in the corresponding first target image, the processor 132 is specifically configured to: determine the position information of the target object using a fitting algorithm, based on the position information of the feature points of each frame of the first target image in the corresponding first target image.
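The fitting can be sketched as a linear least-squares triangulation: each first target image in which a feature point is tracked contributes two equations relating its pixel coordinates to the unknown world point through that frame's camera pose. The intrinsic matrix K and the per-frame rotation R and translation T are assumed to be available from the robot's state estimation; this is an illustrative sketch, not necessarily the exact fitting algorithm of this disclosure.

```python
import numpy as np

def triangulate_point(observations, K):
    """Least-squares 3D position of one feature point.

    observations: list of ((u, v), R, T) over the first target images in which
                  the point was tracked, where R (3x3) and T (3,) map world
                  coordinates to that frame's camera coordinates.
    K           : 3x3 camera intrinsic matrix (constant across frames).
    At least two observations are needed for a well-posed solution; more frames
    sharpen the estimate.
    """
    rows = []
    for (u, v), R, T in observations:
        P = K @ np.hstack([R, np.reshape(T, (3, 1))])  # 3x4 projection matrix
        rows.append(u * P[2] - P[0])                    # u * (p3 . X) = p1 . X
        rows.append(v * P[2] - P[1])                    # v * (p3 . X) = p2 . X
    A = np.vstack(rows)                                 # (2N) x 4
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                                 # homogeneous -> (xw, yw, zw)

# The target object's position can then be taken from its triangulated feature
# points, for example as their centroid.
```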
  • Optionally, the processor 132 is further configured to: after acquiring the feature points of each frame of the first target image, determine, from the feature points of each frame of the first target image, target feature points that meet a preset requirement. When determining the position information of the target object according to the position information of the feature points of each frame of the first target image in the corresponding first target image, the processor 132 is specifically configured to: determine the position information of the target object according to the position information of the target feature points of each frame of the first target image in the corresponding first target image.
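One concrete form of the preset requirement, consistent with the offset-statistics filtering described in the method embodiments, is to keep only feature points whose reference-to-current offset lies within three standard deviations of the mean offset. The sketch below treats each offset as a scalar displacement magnitude, which is a simplifying assumption:

```python
import numpy as np

def select_target_feature_points(ref_pts, cur_pts, k=3.0):
    """Keep feature points whose offset between the reference image and the
    current first target image lies within k standard deviations of the mean
    offset (the preset requirement)."""
    ref = np.asarray(ref_pts, dtype=float).reshape(-1, 2)
    cur = np.asarray(cur_pts, dtype=float).reshape(-1, 2)
    offsets = np.linalg.norm(cur - ref, axis=1)      # h1, h2, ... per feature point
    mu, sigma = offsets.mean(), offsets.std()
    keep = np.abs(offsets - mu) <= k * sigma         # the [mu-3σ, mu+3σ] band
    return cur[keep], keep
```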
  • Optionally, the processor 132 is further configured to determine, according to the position information of the feature points of each frame of the first target image in the corresponding first target image, the parallax of the photographing device relative to the target object while the mobile robot moves around the reference object. When controlling the mobile robot to move around the target object according to the position information of the target object, the processor 132 is specifically configured to: when the parallax is greater than a first preset parallax threshold, determine, according to the determined position information of the target object, an orbit trajectory along which the mobile robot moves around the target object, and control the mobile robot to move on the orbit trajectory.
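A rotation-compensated parallax measure along these lines can be sketched as follows. Since formula (2) of the description is not reproduced here, the de-rotation step and the direction convention assumed for R21 (the rotation of the camera between the two frames) are illustrative assumptions rather than the disclosure's exact expression.

```python
import numpy as np

def mean_parallax(pts1, pts2, R21, K):
    """Average parallax (in pixels) of feature points tracked between two first
    target images, with the camera rotation between the frames compensated so
    that mainly translation-induced image motion remains."""
    pts1 = np.asarray(pts1, dtype=float).reshape(-1, 2)
    pts2 = np.asarray(pts2, dtype=float).reshape(-1, 2)
    ones = np.ones((len(pts2), 1))
    rays2 = (np.linalg.inv(K) @ np.hstack([pts2, ones]).T).T   # bearing rays in frame 2
    rays2_in_1 = (R21.T @ rays2.T).T                           # undo the frame-1 -> frame-2 rotation (convention assumed)
    proj = (K @ rays2_in_1.T).T
    pts2_derot = proj[:, :2] / proj[:, 2:3]                    # re-projected pixel coordinates
    return float(np.linalg.norm(pts2_derot - pts1, axis=1).mean())

# When this mean parallax exceeds the first preset parallax threshold, the
# triangulated target position is trusted and the orbit trajectory around the
# target object is generated.
```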
  • Optionally, the processor 132 is further configured to: determine a change speed of the parallax; and adjust the speed at which the mobile robot moves around the reference object according to the change speed of the parallax.
  • Optionally, when the processor 132 determines the change speed of the parallax, it is specifically configured to: determine the change speed of the parallax according to the position information, in the corresponding first target images, of the feature points of two adjacent frames of the first target image among the multiple frames of the first target image.
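Following the speed-adjustment rule described in the method embodiments (the desired change speed is the first parallax threshold divided by the time allowed to reach it, and the orbit speed is scaled by the ratio of desired to current change speed), a sketch using the description's own example numbers:

```python
def parallax_change_speed(pa_prev, pa_curr, dt):
    """Change speed of the parallax between two adjacent first target images."""
    return (pa_curr - pa_prev) / dt

def adjust_orbit_speed(current_speed, current_change_speed, parallax_threshold, time_budget):
    """Scale the speed at which the robot orbits the reference object so that
    the parallax is expected to reach the first threshold within time_budget."""
    desired_change_speed = parallax_threshold / time_budget
    if current_change_speed <= 0:
        return current_speed          # no usable estimate yet; keep the current speed
    return current_speed * (desired_change_speed / current_change_speed)

# Example from the description: flying at 2 m/s the parallax grows by 2.5 per
# interval while 10 is needed, so the orbit speed is raised to 8 m/s.
assert adjust_orbit_speed(2.0, 2.5, parallax_threshold=20.0, time_budget=2.0) == 8.0
```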
  • Optionally, the processor 132 is further configured to: when the parallax is greater than a second preset parallax threshold, adjust the radius at which the mobile robot orbits the reference object according to the determined position information of the target object, where the first preset parallax threshold is greater than the second preset parallax threshold.
  • Optionally, when the processor 132 controls the mobile robot to orbit the reference object, it is specifically configured to: determine the reference object according to a preset orbit radius, and control the mobile robot to orbit the reference object.
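A sketch of how the reference object and its circular orbit trajectory might be generated: a virtual point is placed at the preset orbit radius directly ahead of the robot, and waypoints are laid out on a circle of that radius around it. The coordinate frame (x east, y north) and the heading convention are assumptions for illustration.

```python
import numpy as np

def reference_object(robot_pos, heading_rad, orbit_radius):
    """Virtual reference point at the preset orbit radius directly ahead of the
    robot (heading measured clockwise from north - an assumed convention)."""
    direction = np.array([np.sin(heading_rad), np.cos(heading_rad), 0.0])
    return np.asarray(robot_pos, dtype=float) + orbit_radius * direction

def orbit_waypoints(center, radius, altitude, n=36, clockwise=True):
    """Waypoints of a circular orbit trajectory of the given radius around center."""
    sign = -1.0 if clockwise else 1.0
    angles = sign * np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return [np.array([center[0] + radius * np.cos(a),
                      center[1] + radius * np.sin(a),
                      altitude]) for a in angles]
```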
  • Optionally, when the processor 132 controls the mobile robot to orbit the reference object, the processor 132 is specifically configured to: after receiving a start control instruction sent by the control terminal, control the mobile robot to orbit the reference object.
  • Optionally, the processor 132 is further configured to: after acquiring the instruction information, control the shooting attitude of the shooting device according to the instruction information so that the target object is located at the center of the shooting picture of the shooting device.
  • Optionally, the processor 132 is further configured to: after acquiring the instruction information, acquire multiple frames of the second target image output by the photographing device, where the second target image includes the target object. When controlling the shooting attitude of the photographing device according to the instruction information, the processor 132 is specifically configured to: obtain the feature points of each frame of the second target image based on the feature points in the target area of the reference image; determine the position information of the target object in the corresponding second target image according to the feature points of each frame of the second target image; and control the shooting attitude of the photographing device according to the position information of the target object in the corresponding second target image.
  • Optionally, the multiple frames of the second target image include the multiple frames of the first target image.
  • The specific principles and implementations of the control device for a mobile robot according to the embodiments of the present invention are similar to those of the above embodiments and are not described herein again.
  • In this embodiment, the position information of a target object is determined by acquiring the position information, in a reference image output by the shooting device, of the target object captured by the shooting device, and the mobile robot is controlled to move around the target object according to the position information of the target object. As a result, the mobile robot can orbit the target object without first moving to the orbit center to record its position, which simplifies the process by which the mobile robot orbits the target object and improves the operation safety of the mobile robot.
  • FIG. 14 is a structural diagram of a drone according to an embodiment of the present invention.
  • As shown in FIG. 14, the drone 140 includes a fuselage, a power system, a photographing device 144, and a control device 148.
  • The power system includes at least one of the following: a motor 141, a propeller 142, and an electronic governor 143.
  • The power system is installed in the fuselage to provide power.
  • The specific principles and implementations of the control device 148 are similar to those of the above embodiments and are not repeated here.
  • In addition, the drone 140 further includes a sensing system 145, a communication system 146, and a supporting device 147.
  • The supporting device 147 may be a gimbal, and the photographing device 144 is mounted on the drone 140 through the supporting device 147.
  • In some embodiments, the control device 148 may specifically be a flight controller of the drone 140.
  • In this embodiment, the position information of a target object is determined by acquiring the position information, in a reference image output by the shooting device, of the target object captured by the shooting device, and the mobile robot is controlled to move around the target object according to the position information of the target object. As a result, the mobile robot can orbit the target object without first moving to the orbit center to record its position, which simplifies the process by which the mobile robot orbits the target object and improves the operation safety of the mobile robot.
  • An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored, and the computer program is executed by a processor to implement the method for controlling a mobile robot as described above.
  • In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners.
  • For example, the device embodiments described above are only illustrative.
  • For example, the division of units is only a division by logical function; in actual implementation there may be other ways of division.
  • For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • In addition, each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium.
  • The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods described in the embodiments of the present invention.
  • The aforementioned storage media include various media that can store program code, such as USB flash drives, removable hard disks, read-only memories (ROM), random access memories (RAM), magnetic disks, or optical discs.

Abstract

A control method, apparatus, and device for a mobile robot, and a storage medium. The method includes: acquiring indication information of a target object (S101); determining position information of the target object according to the indication information (S102); and controlling the mobile robot to move around the target object according to the position information of the target object (S103). By acquiring the position information, in a reference image output by a photographing device, of the target object captured by the photographing device, the method determines the position information of the target object and controls the mobile robot to move around the target object according to that position information, so that the mobile robot can orbit the target object without moving to the orbit center to record the position of the orbit center, which simplifies the process by which the mobile robot orbits the target object and improves the operation safety of the mobile robot.

Description

移动机器人的控制方法、装置、设备及存储介质 技术领域
本发明实施例涉及控制领域,尤其涉及一种移动机器人的控制方法、装置、设备及存储介质。
背景技术
现有技术中,移动机器人(例如无人机或无人地面机器人)搭载有拍摄装置,在移动机器人移动的过程中,该拍摄装置可对目标对象进行拍摄。
围绕目标对象进行的环绕拍摄是现有技术中的一种拍摄方式。具体地,以无人机为例,无人机需要飞行到该目标对象的正上方,在该目标对象的正上方设置环绕中心,用户需要通过控制终端指示无人机记录环绕中心的位置(例如记录环绕中心的GPS位置),进一步该无人机飞离该环绕中心到达预设位置,以该环绕中心为圆心、以该无人机相对于该环绕中心的距离为半径进行环绕飞行。现有技术中实现环绕目标对象移动的操作过程较为繁琐,另外,移动机器人移动至环绕中心的过程中可能存在一定的危险或干扰,导致该移动机器人容易出现安全事故。
发明内容
本发明实施例提供一种移动机器人的控制方法、装置、设备及存储介质,以简化移动机器人实现环绕目标对象移动的操作过程,同时提高移动机器人的操作安全性。
本发明实施例的第一方面是提供一种移动机器人的控制方法,应用于移动机器人,其中,所述移动机器人包括拍摄装置,所述方法包括:
获取目标对象的指示信息,其中,所述指示信息包括目标对象在拍摄装置输出的参考图像中的位置信息;
根据所述指示信息确定所述目标对象的位置信息;
根据所述目标对象的位置信息控制所述移动机器人环绕所述目标对象移动。
本发明实施例的第二方面是提供一种移动机器人的控制装置,包括:存储器和处理器;
所述存储器用于存储程序代码;
所述处理器,调用所述程序代码,当程序代码被执行时,用于执行以下操作:
获取目标对象的指示信息,其中,所述指示信息包括目标对象在拍摄装置输出的参考图像中的位置信息;
根据所述指示信息确定所述目标对象的位置信息;
根据所述目标对象的位置信息控制所述移动机器人环绕所述目标对象移动。
本发明实施例的第三方面是提供一种移动机器人,包括:
机身;
动力系统,安装在所述机身,用于提供动力;
拍摄装置;
以及如第二方面所述的控制装置。
本发明实施例的第四方面是提供一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行以实现如第一方面所述的方法。
本实施例提供的移动机器人的控制方法、装置、设备及存储介质,通过获取拍摄装置拍摄的目标对象在该拍摄装置输出的参考图像中的位置信息,确定该目标对象的位置信息,并根据该目标对象的位置信息控制移动机器人环绕该目标对象移动,使得移动机器人不需要移动到环绕中心去记录环绕中心的位置,即可实现该移动机器人环绕该目标对象移动,简化了移动机器人实现对目标对象环绕移动的过程,同时提高了移动机器人的操作安全性。
附图说明
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳 动性的前提下,还可以根据这些附图获得其他的附图。
图1为本发明实施例提供的移动机器人的控制方法的流程图;
图2为本发明实施例提供的一种应用场景的示意图;
图3为本发明实施例提供的参考图像的示意图;
图4为本发明另一实施例提供的移动机器人的控制方法的流程图;
图5为本发明实施例提供的无人机对参考对象进行环绕飞行的示意图;
图6为本发明另一实施例提供的移动机器人的控制方法的流程图;
图7为本发明实施例提供的特征点跟踪的示意图;
图8为本发明实施例提供的三维坐标和像素坐标对应关系的示意图;
图9为本发明实施例提供的无人机对参考对象进行环绕飞行的示意图;
图10为本发明实施例提供的无人机对目标对象进行环绕飞行的示意图;
图11为本发明另一实施例提供的移动机器人的控制方法的流程图;
图12为本发明实施例提供的特征点跟踪的示意图;
图13为本发明实施例提供的移动机器人的控制装置的结构图;
图14为本发明实施例提供的无人机的结构图。
附图标记:
20:无人机;          21:拍摄装置;      22:云台;
23:无线通讯接口;    24:控制终端;      31:目标对象;
32:点;              33:点;            34:区域;
35:启动控制按键;    50:参考对象;      51:机头;
53:环形轨迹;        30:参考图像;      71:第一目标图像;
72:第一目标图像;    73:第一目标图像;
80:目标对象;        81:第一目标图像;  82:第一目标图像;
83:第一目标图像;    91:目标轨迹;      121:第二目标图像;
130:控制装置;       131:存储器;       132:处理器;
133:通讯接口;       140:无人机;       141:电机;
142:螺旋桨;         143:电子调速器;   144:拍摄装置;
145:传感系统;       146:通信系统;     147:支撑设备;
148:控制装置。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
需要说明的是,当组件被称为“固定于”另一个组件,它可以直接在另一个组件上或者也可以存在居中的组件。当一个组件被认为是“连接”另一个组件,它可以是直接连接到另一个组件或者可能同时存在居中组件。
除非另有定义,本文所使用的所有的技术和科学术语与属于本发明的技术领域的技术人员通常理解的含义相同。本文中在本发明的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本发明。本文所使用的术语“及/或”包括一个或多个相关的所列项目的任意的和所有的组合。
下面结合附图,对本发明的一些实施方式作详细说明。在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。
本发明实施例提供一种移动机器人的控制方法。图1为本发明实施例提供的移动机器人的控制方法的流程图。本实施例所述的移动机器人的控制方法可应用于移动机器人,该应用于移动机器人包括拍摄装置。如图1所示,本实施例中的方法,可以包括:
步骤S101、获取目标对象的指示信息,其中,所述指示信息包括目标对象在拍摄装置输出的参考图像中的位置信息。
可选的,本实施例所述的移动机器人具体可以是无人机、无人地面机器人、无人船等。这里为了方便解释,以移动机器人为无人机来进行示意性说明。可以理解的是,本文中的无人机可以被同等地替代成移动机器人。如图2所示,无人机20搭载有拍摄装置21,拍摄装置21具体可以是相机、摄像机等。具体的,拍摄装置21可通过云台22搭载在无人机20上,或者,拍摄装置21通过其他固定装置固定在无人机20上。该拍摄装置21可实时拍摄获得视频数据或图像数据,并将该视频数据或图像数据通过无人机20的无线通讯接口23发送给控制终端24,该控制终端24具体可以 是无人机20对应的遥控器,也可以是用户终端例如智能手机、平板电脑等。另外,该无人机20还可以包括控制装置,该控制装置可以包括通用或者专用的处理器,此处只是示意性说明,并不限定该无人机的具体结构。
可选的,拍摄装置21拍摄的图像中包括如图2所示的目标对象31,此处,将拍摄装置21输出的某一帧图像记为参考图像,该无人机20的处理器可获取该目标对象的指示信息,该指示信息包括该目标对象在该参考图像中的位置信息。
作为一种可能的方式,所述获取目标对象的指示信息,包括:接收控制终端发送的指示信息,其中,所述指示信息是所述控制终端检测用户在显示所述参考图像的交互界面上的目标对象选择操作确定的。
具体的,拍摄装置21输出该参考图像后,通过无线通讯接口23将该参考图像发送给控制终端24,控制终端24将该参考图像显示在交互界面中,使得用户可以在该交互界面中对该参考图像中的目标对象进行选择。如图3所示,30表示该交互界面中显示的参考图像,参考图像30中包括目标对象31,用户在该交互界面中对目标对象31进行选择的一种可能方式是:用户选择点32,并从点32开始滑动到点33,此处只是示意性说明,本实施例不限定具体的选择操作。控制终端24可根据用户在该交互界面中的选择操作,确定出用户在该交互界面中框选的区域34,并确定出区域34在该参考图像30中的位置信息,例如,控制终端24可确定出该区域34的左上角即点32在该参考图像30中的位置信息和该区域34的大小例如长、宽,或者,控制终端24可确定该区域34的左上角即点32在该参考图像30中的位置信息和该区域34的右下角即点33在该参考图像30中的位置信息。进一步,控制终端24可将该区域34在该参考图像30中的位置信息作为该目标对象31在该参考图像30中的位置信息即该目标对象31的指示信息发送给无人机20。
作为另一种可能的方式,所述获取目标对象的指示信息,包括:对参考图像中的目标对象进行识别以获取目标对象的指示信息。
具体地,无人机的处理器可以对拍摄装置21输出的参考图像中的目标对象进行识别,通过识别获取目标对象的指示信息。进一步地,无人机的处理器可以将参考图像输入到已经训练好的神经网络模型,并获取所述 神经网络模型输出的目标对象的指示信息。
步骤S102、根据所述指示信息确定所述目标对象的位置信息。
具体地,在获取到目标对象的指示信息之后,无人机可以根据所述指示信息确定目标对象的位置信息,其中,所述目标对象的位置信息可以为三维位置信息或者二维位置信息;所述目标对象的位置信息可以为基于世界坐标系中的位置信息;另外,所述目标对象的位置信息也可以为基于全局坐标系中的位置信息,所述位置信息可以至少包括经度和纬度;此外,所述目标对象的位置信息还可以为基于无人机的机体坐标系中的位置信息。
作为一种可实现方式,所述根据所述指示信息确定所述目标对象的位置信息包括:根据所述指示信息确定目标对象相对于移动机器人的朝向,根据所述朝向和移动机器人与目标对象之间的水平距离或者移动机器人的对地高度值确定目标对象的位置信息。
具体地,本实施例可根据目标对象31在该参考图像30中的位置信息、承载拍摄装置21的云台的姿态确定目标对象31相对于无人机20的朝向;再根据所述朝向、目标对象31与无人机20之间的水平距离确定目标对象31的位置信息。其中,拍摄装置21具有的视场角(FOV)是已知的,根据目标对象31在参考图像中的位置信息可以确定目标对象31相对于拍摄装置21的光轴的角度,例如:若目标对象31在参考图像的正中心,则说明目标对象31相对于拍摄装置的光轴的角度为0,若拍摄装置21的FOV在水平方向为20度,若目标对象31在参考图像的最左边,则说明目标对象31相对于拍摄装置21的光轴的水平角度为10度,垂直方向上也类似,此处不再赘述;另外,拍摄装置21的云台22的姿态也决定了拍摄装置21的光轴的朝向,结合目标对象31相对于拍摄装置21的光轴的角度以及光轴的朝向,可以获得目标对象31相对于无人机20的朝向。进一步,根据目标对象31相对于无人机20的朝向、目标对象31与无人机20之间的水平距离确定目标对象31的位置信息。在某些实施例中,根据目标对象31相对于无人机20的朝向、目标对象31与无人机20之间的水平距离或者无人机20的对地高度值确定目标对象31的位置信息。
继续参考图2,根据目标对象31相对于无人机20的朝向可确定目标 对象31相对于无人机20在俯仰方向上的角度,例如图2所示的α角,然后,可以获取无人机20上配置的距离传感器测量的无人机的对地高度值,例如图2所示的h,根据所述α角和对地高度值即可以确定目标对象相对于无人机在垂直方向上的位置信息,另外,根据目标对象31相对于无人机20的朝向还可确定目标对象31相对于无人机20在偏航方向上的角度例如β角,根据β角和目标对象31与无人机20之间的水平距离L即可以确定目标对象相对于无人机在水平方向上的位置信息;根据目标对象相对于无人机在垂直方向上的位置信息和目标对象相对于无人机在水平方向上的位置信息即可确定出目标对象相对于无人机的位置信息,进一步,根据目标对象相对于无人机的位置信息和无人机的位置信息可确定出目标对象的位置信息,该目标对象的位置信息可以是该目标对象在世界坐标系中的位置,也可以是该目标对象在全局坐标系中的位置。
另外,在某些实施例,也可以根据目标对象31与无人机20之间的水平距离L和所述α角确定目标对象相对于无人机在垂直方向上的位置信息。其中,目标对象的指示信息可以指示目标对象在参考图像中对应的图像区域的大小,所述目标对象31与无人机20之间的水平距离可以根据所述图像区域的大小确定。
步骤S103、根据所述目标对象的位置信息控制所述移动机器人环绕所述目标对象移动。
具体的,当无人机20的处理器确定出目标对象31的位置信息后,以该目标对象31为中心,根据无人机20与目标对象31之间的位置关系生成环绕轨迹,并控制无人机20在该环绕轨迹上移动,即控制无人机20在该环绕轨迹上飞行,实现环绕目标对象31飞行。在无人机20环绕该目标对象31飞行的过程中,拍摄装置21可实时拍摄目标对象31,并将拍摄获得的图像数据或视频数据通过无线通讯接口23发送给控制终端24,以便用户浏览观看。
本实施例通过获取拍摄装置拍摄的目标对象在该拍摄装置输出的参考图像中的位置信息,确定该目标对象的位置信息,并根据该目标对象的位置信息控制移动机器人环绕该目标对象移动,使得移动机器人不需要移动到环绕中心去记录环绕中心的位置,即可实现该移动机器人环绕该目标 对象移动,简化了移动机器人实现对目标对象环绕移动的过程,同时提高了移动机器人的操作安全性。
本发明实施例提供一种移动机器人的控制方法。图4为本发明另一实施例提供的移动机器人的控制方法的流程图。如图4所示,在图1所示实施例的基础上,本实施例提供了根据所述指示信息确定所述目标对象的位置信息的另一种可实现方式,具体的,根据所述指示信息确定所述目标对象的位置信息,可以包括如下步骤:
步骤S401、控制所述移动机器人对参考对象进行环绕移动。
在本实施例中,可在无人机的正前方预设距离处取一个点为参考对象,该参考对象具体为虚拟目标点,控制无人机对该参考对象进行环绕飞行。如图5所示,50表示无人机正前方预设距离处的参考对象,51表示无人机的机头,无人机内的处理器具体可控制该无人机对参考对象50进行环绕飞行。
作为一种可能的方式,所述控制所述移动机器人对参考对象进行环绕移动,包括:根据预设的环绕半径确定参考对象,控制所述移动机器人对参考对象进行环绕移动。
具体的,控制该无人机以参考对象50为环绕中心,以预设的环绕半径例如500米为半径,生成一条环形轨迹,如图5所示的环形轨迹53,并控制该无人机在该环形轨迹53上对参考对象50进行环绕飞行。可选的,该无人机可沿着逆时针方向在该环形轨迹53上飞行,也可以沿着顺时针方向在该环形轨迹53上飞行。可选的,当无人机内的处理器接收到控制终端发送的目标对象的指示信息后,即可根据该预设的环绕半径确定参考对象,并控制该无人机对参考对象进行环绕飞行。也就是说,当用户在参考图像中框选出目标对象后,无人机即可对参考对象进行环绕飞行。
作为另一种可能的方式,所述控制所述移动机器人对参考对象进行环绕移动,包括:在接收到所述控制终端发送的启动控制指令后,控制所述移动机器人对参考对象进行环绕移动。
具体的,如图3所示,当用户框选目标对象31后,该交互界面中可显示启动控制按键35,该启动控制按键35具体可以是该交互界面中的一 个图标,也就是说,当用户框选目标对象31后,无人机并不立即对参考对象进行环绕飞行,而是等到用户在该交互界面中点击该启动控制按键35后,该无人机才开始对参考对象进行环绕飞行。具体的,当用户在该交互界面中点击该启动控制按键35时,控制终端根据用户的点击操作生成启动控制指令,并将该启动控制指令发送给无人机,当无人机内的处理器接收到该启动控制指令后,控制无人机对参考对象进行环绕飞行,具体的环绕控制方式可以是如图5所示的方式,此处不再赘述。
步骤S402、在所述移动机器人环绕参考对象移动的过程中,获取所述拍摄装置输出的多帧第一目标图像,其中,所述第一目标图像中包括所述目标对象。
如图5所示,以该无人机沿着顺时针方向在该环形轨迹53上飞行为例,当该无人机在对参考对象50进行环绕飞行的过程中,该无人机的拍摄装置还可以对目标对象31进行拍摄,并输出包括该目标对象31的目标图像,本实施例将该无人机在对参考对象50进行环绕飞行的过程中拍摄的目标图像记为第一目标图像,且该无人机的拍摄装置输出的第一目标图像可以是多帧。具体的,该无人机在对参考对象50进行环绕飞行的过程中,该无人机的处理器可获取到拍摄装置输出的多帧第一目标图像,该第一目标图像中包括目标对象31。此处不限定目标对象31相对于拍摄装置的光轴偏移的角度,只要保证目标对象31在拍摄装置的拍摄画面中即可。
步骤S403、根据所述目标对象的指示信息和所述多帧第一目标图像确定所述目标对象的位置信息。
该无人机的处理器可根据上述实施例中获取到的目标对象31的指示信息,以及上述步骤中获取到的多帧第一目标图像确定目标对象31的位置信息。
可选的,所述根据所述目标对象的指示信息和所述多帧第一目标图像确定所述目标对象的位置信息,可以包括如图6所示的如下步骤:
步骤S601、获取所述参考图像的目标区域中的特征点,其中,所述目标区域为所述参考图像中所述指示信息指示的图像区域。
当无人机接收到控制终端发送的目标对象的指示信息后,可根据该目标对象的指示信息确定出该参考图像的目标区域,该目标区域具体为该指 示信息指示的图像区域。例如图3所示,无人机接收到控制终端发送的区域34在该参考图像30中的位置信息或者通过识别获取区域34在该参考图像30中的位置信息后,该无人机的处理器可在该参考图像30中确定出目标区域,该目标区域具体可以是区域34,也就是说,无人机可以将用户在交互界面中框选的区域作为该目标区域。进一步,该无人机的处理器可获取该目标区域中的特征点,可选的,该处理器可根据预设的特征点提取算法确定出该目标区域中的特征点,该特征点提取算法包括如下至少一种:Harris角点检测算法、尺度不变特征变换(Scale-invariant feature transform,SIFT)、加速稳健特征(Speeded Up Robust Features,SURT)算法、快速特征点提取和描述的算法(Oriented FAST and Rotated BRIEF,ORB)等。可选的,本实施例采用Harris角点检测算法提取该目标区域中的特征点。
步骤S602、基于所述参考图像的目标区域中的特征点利用跟踪算法获取每一帧第一目标图像的特征点。
在获取到该目标区域中的特征点后,利用跟踪算法对该目标区域中的特征点进行跟踪,即利用跟踪算法确定该目标区域中的特征点在每一帧第一目标图像中的位置。该跟踪算法具体可以是(Kanade-Lucas-Tomasi Feature Tracker,KLT)特征跟踪算法。
如图7所示,A、B、C、D、E、F、G分别表示参考图像30的目标区域即区域34中的特征点,特征点A、B、C、D、E、F、G也是目标对象31的特征点。71、72、73分别表示在该无人机环绕参考对象飞行的过程中,拍摄装置依次输出的第一目标图像。根据KLT特征跟踪算法可确定出参考图像30中目标对象31的特征点例如A、B、C、D、E、F、G分别在第一目标图像71、第一目标图像72、第一目标图像73中的位置。例如,拍照装置先输出参考图像30,后续依次输出第一目标图像71、第一目标图像72、第一目标图像73;参考图像30、第一目标图像71、第一目标图像72、第一目标图像73可以是相邻图像,也可以是非相邻图像。
如图5所示,在该无人机环绕参考对象50飞行的过程中,目标对象31相对于该无人机的位置不断变化,导致目标对象31在拍摄装置依次输出的第一目标图像中的位置不断变化,从而使得第一目标图像71、第一目 标图像72、第一目标图像73中目标对象31对应的特征点在对应的第一目标图像中的位置不断变化。此处只是示意性说明,并不限定区域34中特征点的个数,也不限定第一目标图像的个数,以及区域34中的特征点在每一帧第一目标图像中的位置。
步骤S603、根据每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定目标对象的位置信息。
例如,根据第一目标图像71、第一目标图像72、第一目标图像73中目标对象31对应的特征点在对应的第一目标图像中的位置信息确定目标对象31的位置信息,确定出的该目标对象31的位置信息具体为目标对象31在三维空间中的三维坐标。此处,将根据第一目标图像71、第一目标图像72、第一目标图像73中目标对象31对应的特征点在对应的第一目标图像中的位置信息确定出的目标对象31的位置信息记为第一位置信息。
可以理解,在拍摄装置输出第一目标图像73之后还会输出新的第一目标图像,根据KLT特征跟踪算法可确定出目标对象31的特征点在该新的第一目标图像中的位置;进一步根据第一目标图像71、第一目标图像72、第一目标图像73、新的第一目标图像中目标对象31对应的特征点在对应的第一目标图像中的位置信息可确定出又一个目标对象31的位置信息,此处,将该目标对象31的位置信息记为第二位置信息。上述的第一位置信息和此处的第二位置信息可能相同,也可能不同,但可以理解的是,随着拍摄装置不断输出新的第一目标图像,根据第一目标图像71、第一目标图像72、第一目标图像73、以及拍摄装置后续不断输出的第一目标图像中目标对象31对应的特征点在对应的第一目标图像中的位置信息确定出的目标对象31的位置信息的精准度不断提高。一种可能的方式是,拍摄装置每输出一帧新的第一目标图像,无人机的处理器即可确定出该目标对象31的一个新的位置信息。
可选的,所述根据每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定目标对象的位置信息,包括:基于每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息利用拟合算法确定所述目标对象的位置信息。
如图8所示,80表示目标对象,81、82、83表示拍摄装置环绕目标 对象80按照箭头所示的方向移动的过程中,拍摄装置先后输出的第一目标图像,可以理解,目标对象80上的三维点可映射到第一目标图像81、82、83中,该三维点在第一目标图像81、82、83中的映射点具体可以是第一目标图像81、82、83中的特征点,从第一目标图像81到第一目标图像83的过程中,可跟踪到的特征点的个数在减少。
例如,点A、点B和点C分别为目标对象80上的三维点,点a1、点b1和点c1表示第一目标图像81中的特征点,点a1与点A对应,点b1与点B对应,点c1和点C对应;点a2、点b2和点c2表示第一目标图像82中的特征点,点a2与点A对应,点b2与点B对应,点c2和点C对应;点a3和点b3表示第一目标图像83中的特征点,点a3与点A对应,点b3与点B对应。此处只是示意性说明,并不限定目标对象80、目标对象80上的三维点、以及目标对象80上的三维点在第一目标图像中的映射点。可以理解的是,目标对象80在不同第一目标图像中的位置不同,目标对象80上的同一个三维点在不同第一目标图像中的映射点在对应的第一目标图像中的位置也不同。
根据世界坐标系和像素平面坐标系的转换关系,可得到目标对象80上的三维点在世界坐标系中的三维坐标(x w,y w,z w)与该三维点在第一目标图像中的映射点在该第一目标图像中的位置信息例如像素坐标(μ,υ)的关系,该关系具体如下公式(1)所示:
Figure PCTCN2018096534-appb-000001
其中,z c表示该三维点在相机坐标系Z轴上的坐标,K表示相机的内参,R表示相机的旋转矩阵,T表示相机的平移矩阵。在本实施例中,(μ,υ)、K、R、T为已知量,z c和(x w,y w,z w)为未知量。在拍摄装置拍摄不同的第一目标图像时,K是不变的,R、T可以是变化的。
具体的,根据点a1在第一目标图像81中的像素坐标、以及该拍摄装置拍摄第一目标图像81时对应的R、T,可建立一个如公式(1)所示的方程,根据点a2在第一目标图像82中的像素坐标、以及该拍摄装置拍摄第一目标图像82时对应的R、T,可建立另一个如公式(1)所示的方程,根据点 a3在第一目标图像83中的像素坐标、以及该拍摄装置拍摄第一目标图像83时对应的R、T,可建立再一个如公式(1)所示的方程,随着该拍摄装置不断输出新的第一目标图像,建立的如公式(1)所示的方程逐渐增加,可以理解,当方程组中方程的个数大于未知量的个数时,可求解出相应的未知量。也就是说,利用拟合算法对这些方程进行求解即可计算出三维点A在世界坐标系中的三维坐标。同理,可计算出三维点B和三维点C在世界坐标系中的三维坐标,此处不再赘述。可以理解,该拍摄装置输出的第一目标图像越多,基于该多帧第一目标图像中的特征点的像素坐标利用拟合算法得到的三维点在世界坐标系中的三维坐标越精确。当确定出目标对象80上的多个三维点例如三维点A、B、C在世界坐标系中的三维坐标后,根据三维点A、B、C在世界坐标系中的三维坐标可确定出目标对象80在世界坐标系中的三维坐标。无人机可以根据目标对象80在世界坐标系中的三维坐标获取目标对象的位置信息。例如,当目标对象31的位置信息为基于全局坐标系中的位置时,可以根据无人机的位置信息和目标对象80在世界坐标系中的三维坐标确定目标对象31的位置信息。当目标对象31的位置信息为基于无人机的机体坐标系中的位置时,可以将目标对象80在世界坐标系中的三维坐标转换到机体坐标系以获取基于所述机体坐标系的位置信息。
另外,所述方法还包括:在获取每一帧第一目标图像的特征点之后,从每一帧第一目标图像的特征点中确定满足预设要求的目标特征点;相应的,所述根据每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定目标对象的位置信息,包括:根据每一帧第一目标图像的目标特征点在对应的第一目标图像中的位置信息确定所述目标对象的位置信息。
如图7所示,在获取到第一目标图像71、第一目标图像72、第一目标图像73中的特征点例如A、B、C、D、E、F、G之后,从第一目标图像71、第一目标图像72、第一目标图像73的特征点中确定满足预设要求的目标特征点,例如,每个特征点在第一目标图像71和参考图像30之间的偏移量可能是不同的,假设特征点A在第一目标图像71和参考图像30之间的偏移量记为h1、特征点B在第一目标图像71和参考图像30之间的偏 移量记为h2、依次类推,特征点G在第一目标图像71和参考图像30之间的偏移量记为h7,计算h1、h2、…h7的平均值和方差,平均值记为u,方差记为δ 2,根据高斯分布选取偏移量在[u-3δ,u+3δ]内的特征点为目标特征点,假设h1在[u-3δ,u+3δ]外,则将第一目标图像71中的特征点A删除,保留第一目标图像71中的特征点B、C、D、E、F、G,将第一目标图像71中的特征点B、C、D、E、F、G作为第一目标图像71的目标特征点。同理,可计算出第一目标图像72和第一目标图像73中的目标特征点,此处不再赘述。
在其他实施例中,根据每个特征点在第一目标图像71和参考图像30之间的偏移量例如h1、h2、…h7,计算出h1、h2、…h7的平均值和方差后,根据高斯分布选取偏移量在[u-3δ,u+3δ]内的特征点为有效点,例如,h1在[u-3δ,u+3δ]外,则将第一目标图像71中的特征点A删除,将第一目标图像71中的特征点B、C、D、E、F、G作为有效点,进一步从该有效点中确定出目标特征点,从该有效点中确定出目标特征点的一种可能的方式是,计算有效点对应的偏移量的平均值,即计算h2、…h7的平均值记为u1。此处将区域34在参考图像30中的位置信息记为ROI 0,根据ROI 0和u1可确定出区域34在第一目标图像71中的位置信息记为ROI 1,具体的,ROI 1=ROI 0+u1,进一步根据区域34在第一目标图像71中的位置信息ROI 1,以及有效点B、C、D、E、F、G在第一目标图像71中的位置信息,确定出有效点B、C、D、E、F、G中哪些点在该区域34内,哪些点不在该区域34内,将有效点B、C、D、E、F、G中不在该区域34内的点进一步剔除掉,剩余的有效点作为该第一目标图像71的目标特征点,同理,可计算出第一目标图像72和第一目标图像73中的目标特征点,此处不再赘述。
通过上述方法确定出第一目标图像71、第一目标图像72和第一目标图像73中的目标特征点后,根据目标特征点在对应的第一目标图像中的位置信息确定目标对象31在世界坐标系中的三维坐标,具体原理与图8所示的原理一致,此处不再赘述。
本实施例通过控制无人机对参考对象进行环绕飞行,在无人机环绕参考对象飞行的过程中获取拍摄装置输出的多帧第一目标图像,根据目标对象的指示信息和多帧第一目标图像确定该目标对象的位置信息,在拍摄装 置不断输出第一目标图像时,根据目标对象的指示信息和拍摄装置不断输出的第一目标图像可不断的确定出该目标对象的位置信息,且该目标对象的位置信息的准确度不断提高;另外,在获取到拍摄装置输出的每一帧第一目标图像的特征点之后,从每一帧第一目标图像的特征点中确定满足预设要求的目标特征点,在根据每一帧第一目标图像的目标特征点在对应的第一目标图像中的位置信息确定该目标对象的位置信息时,可提高该目标对象的位置信息的精确度,同时去除不满足该预设要求的特征点,还可降低相应的计算量。
本发明实施例提供一种移动机器人的控制方法。在上述实施例的基础上,所述方法还包括:根据每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定所述移动机器人在环绕参考对象移动过程中所述拍摄装置相对于目标对象的视差;相应的,所述根据所述目标对象的位置信息控制所述移动机器人环绕所述目标对象移动,包括:当所述视差大于第一预设视差阈值时,根据所述确定出的所述目标对象的位置信息确定所述移动机器人对所述目标对象进行环绕移动的环绕轨迹,并控制所述移动机器人在所述环绕轨迹上移动。
如图7所示,在获取到第一目标图像71、第一目标图像72、第一目标图像73中的特征点例如A、B、C、D、E、F、G之后,根据特征点A、B、C、D、E、F、G分别在第一目标图像71、第一目标图像72、第一目标图像73中的位置信息,可确定出如图5所示的无人机在环绕参考对象50飞行过程中,该无人机的拍摄装置相对于目标对象31的视差,例如,第一目标图像71是无人机在m1位置时拍摄装置拍摄的图像,第一目标图像72是无人机在m2位置时拍摄装置拍摄的图像,第一目标图像73是无人机在m3位置时拍摄装置拍摄的图像。根据特征点A、B、C、D、E、F、G分别在第一目标图像71、第一目标图像72中的位置信息,可确定出无人机从m1位置到m2位置的过程中,该无人机的拍摄装置相对于目标对象31的视差;具体的,将特征点A在第一目标图像71中的像素坐标记为(μ 11),将特征点A在第一目标图像72中的像素坐标记为(μ 22),根据如下公式(2)可计算出特征点A的视差,特征点A的视差记为parallaxA:
Figure PCTCN2018096534-appb-000002
其中,R 21表示相机在拍摄第一目标图像72时的姿态相对于相机在拍摄第一目标图像71时的姿态在旋转方向上的变化。c x和c y表示相机光心位置,可以理解,该相机光心在第一目标图像71和第一目标图像72中的位置相同。f表示该相机的焦距。同理,可计算出特征点B、C、D、E、F、G的视差,对特征点A、B、C、D、E、F、G的视差取平均值,该平均值为第一目标图像72的视差,第一目标图像72的视差为无人机从m1位置到m2位置的过程中,该无人机的拍摄装置相对于目标对象31的视差。
同理,根据特征点A、B、C、D、E、F、G分别在第一目标图像71、第一目标图像73中的位置信息,可确定出第一目标图像73的视差,该第一目标图像73的视差为无人机从m1位置到m3位置的过程中,该无人机的拍摄装置相对于目标对象31的视差。可以理解,随着无人机沿着环形轨迹53飞行的过程中,该无人机的拍摄装置相对于目标对象31的视差不断变大,利用拟合算法不断的确定目标对象31的三维坐标,所述视差越大,确定出的目标对象的三维坐标的准确度越高,当该无人机的拍摄装置相对于目标对象31的视差大于第一预设视差阈值时,停止拟合算法,获取最新确定出的目标对象31的三维坐标,即目标对象31的精准三维坐标,并根据最新确定出的目标对象31的三维坐标确定无人机对该目标对象31进行环绕飞行的环绕轨迹,该环绕轨迹不同于无人机对参考对象50的环绕轨迹53。
如图9所示,假设无人机沿环绕轨迹53飞行到m3位置时,该无人机的拍摄装置相对于目标对象31的视差大于第一预设视差阈值,则根据最新确定出的目标对象31的三维坐标和预设环绕参数例如环绕半径确定无人机对该目标对象31进行环绕飞行的目标轨迹91,并控制无人机沿着目标轨迹91飞行。
另外,所述方法还包括:确定所述视差的变化速度;根据所述视差的变化速度调节所述移动机器人环绕参考对象移动的速度。
可选的,所述确定所述视差的变化速度,包括:根据多帧第一目标图像中相邻的两帧第一目标图像的特征点在对应的第一目标图像中的位置 信息确定所述视差的变化速度。
例如,第一目标图像71和第一目标图像72是拍摄装置拍摄的多帧第一目标图像中相邻的两帧第一目标图像,将第一目标图像71的视差记为PA i-1,第一目标图像72的视差记为PA i,视差的变化速度记为parallax_speed,parallax_speed=(PA i-PA i-1)/t,t表示第一目标图像71和第一目标图像72之间的时间间隔,如果拍摄装置拍摄第一目标图像的频率是固定的例如30HZ,则parallax_speed还可以表示为parallax_speed=(PA i-PA i-1),即在图像频率固定的情况下,衡量(PA i-PA i-1)/t的大小与衡量PA i-PA i-1的大小其意义是一致的。
具体的,当无人机沿着环形轨迹53开始飞行时,该无人机可按照预设的较小的速度例如2m/s飞行,但是,如果目标对象31距离无人机较远,当无人机沿着环形轨迹53飞了较长时间后,目标对象31在拍摄装置拍摄的第一目标图像中的位置可能变化较小,或者几乎没变化,在这种情况下,可以根据视差的变化速度,调整无人机沿着环形轨迹53飞行的飞行速度。例如,第一预设视差阈值记为T1,假设T1=20,无人机从开始沿着环形轨迹53飞行需要在例如t=2秒内确定出目标对象31的三维坐标,也就是说,需要该无人机的拍摄装置相对于目标对象31的视差在t=2秒内达到第一预设视差阈值T1,则期望的视差的变化速度为T1/t=10,假设根据parallax_speed=(PA i-PA i-1)计算出当前的parallax_speed为2.5,则需要提高无人机的飞行速度,无人机需要达到的飞行速度=无人机当前的飞行速度*(期望的视差的变化速度/当前的parallax_speed即2m/s*(10/2.5)=8m/s,也就是说,需要将无人机的飞行速度提高到8m/s。
此外,所述方法还包括:当所述视差大于第二预设视差阈值时,根据所述确定出的所述目标对象的位置信息调整所述移动机器人对参考对象进行环绕移动的半径,其中,所述第一预设视差阈值大于所述第二预设视差阈值。
如图9所示,如果无人机沿环绕轨迹53飞行到m3位置时,该无人机的拍摄装置相对于目标对象31的视差大于第一预设视差阈值,则可根据最新确定出的目标对象31的三维坐标确定无人机对该目标对象31进行环绕飞行的目标轨迹91,但是此时无人机距离该目标轨迹91可能较远,无 人机需要从当前位置例如m3位置飞行到目标轨迹91上的一点再开始沿着目标轨迹91飞行。
作为一种可替换方式,如图10所示,假设无人机沿环绕轨迹53飞行到m2位置时,该无人机的拍摄装置相对于目标对象31的视差大于第二预设视差阈值,该第二预设视差阈值小于第一预设视差阈值。此时,利用拟合算法已经可以确定出目标对象31的三维坐标,即目标对象31的粗略三维坐标,并根据该目标对象31的三维坐标和预设环绕参数例如环绕半径确定出无人机对该目标对象31进行环绕飞行的目标轨迹91,即粗略的目标轨迹91,则从m2位置开始,可以不断调整无人机对参考对象50进行环绕飞行的半径,例如不断减小无人机对参考对象50进行环绕飞行的半径,在该无人机以不断减小的环绕半径对参考对象50进行环绕飞行的过程中,该无人机的拍摄装置相对于目标对象31的视差还在不断变化。当该无人机的拍摄装置相对于目标对象31的视差大于第一预设视差阈值时,无人机可能会到达目标轨迹91(准确的目标轨迹)上一点例如m4,或者无人机可能会到达距离目标轨迹91较近的一点,使得无人机可以从该点平滑过渡到目标轨迹91上。
本实施例通过每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定无人机在环绕参考对象飞行过程中该拍摄装置相对于目标对象的视差,根据该视差的变化速度调节无人机环绕参考对象飞行的飞行速度,使得无人机可以在较短的时间内确定出目标对象的三维坐标,尤其是当目标对象距离无人机较远、无人机环绕参考对象飞行的飞行速度较小时,通过该视差的变化速度可提高该无人机的飞行速度,提高计算目标对象的三维坐标的效率;另外,通过设置至少两个视差阈值例如第一预设视差阈值和第二预设视差阈值,第一预设视差阈值大于第二预设视差阈值,当该视差大于第二预设视差阈值时,通过调整该无人机对参考对象进行环绕飞行的半径,可使得当该视差大于第一预设视差阈值时,无人机到达对目标对象进行环绕飞行的环绕轨迹上,或者到达距离该环绕轨迹较近的位置,从而使得无人机可以从对参考对象进行环绕飞行的环绕轨迹上平滑过渡到对目标对象进行环绕飞行的环绕轨迹上。
本发明实施例提供一种移动机器人的控制方法。图11为本发明另一实施例提供的移动机器人的控制方法的流程图。如图11所示,在上述实施例的基础上,所述方法还包括:在获取到所述指示信息之后,根据所述指示信息控制拍摄装置的拍摄姿态以使所述目标对象处于所述拍摄装置的拍摄画面中心。
如图3所示,当用户在参考图像30中框选出目标对象31时,目标对象31可能不在拍摄装置的拍摄画面中心,在本实施例中,当无人机获取到目标对象31的指示信息,例如接收到控制终端24发送的区域34在该参考图像30中的位置信息后,根据区域34在该参考图像30中的位置信息可确定出目标对象31相对于拍摄装置21的光轴的角度,根据该角度可调整无人机的姿态和/或云台的姿态来控制拍摄装置的拍摄姿态,以使目标对象31相对于拍摄装置的光轴的角度为0,即目标对象31处于所述拍摄装置的拍摄画面中心。
在一些实施例中,当用户框选目标对象31后,无人机即可对参考对象进行环绕飞行;因此,当无人机获取到目标对象31的指示信息后即可调整无人机的姿态和/或云台的姿态,以使目标对象31处于所述拍摄装置的拍摄画面中心,也就是说,在无人机对参考对象进行环绕飞行的过程中调整无人机的姿态和/或云台的姿态,以使目标对象31处于所述拍摄装置的拍摄画面中心,直到该无人机确定出目标对象31的三维坐标。
在另外一些实施例中,当用户框选目标对象31后,无人机并不立即对参考对象进行环绕飞行,而是等到用户在该交互界面中点击该启动控制按键35后,该无人机才开始对参考对象进行环绕飞行。例如,无人机在t1时刻获取到该目标对象的指示信息,用户在t1时刻之后的t2时刻点击了启动控制按键35即无人机从t2时刻开始对参考对象进行环绕飞行,无人机在t2时刻之后的t3时刻确定出了目标对象31的三维坐标。
具体的,无人机可以在t1时刻到t2时刻之间调整无人机的姿态和/或云台的姿态,以使目标对象31处于所述拍摄装置的拍摄画面中心,因为在t1时刻到t2时刻之间无人机可能并没有动,但是目标对象31发生了移动,导致目标对象31在所述拍摄装置的拍摄画面中的位置发生了变化。或者,无人机还可以在t2时刻到t3时刻之间调整无人机的姿态和/ 或云台的姿态,以使目标对象31处于所述拍摄装置的拍摄画面中心。再或者,无人机还可以在t1时刻到t3时刻之间调整无人机的姿态和/或云台的姿态,以使目标对象31处于所述拍摄装置的拍摄画面中心。
另外,所述方法还包括:在获取到所述指示信息之后,获取所述拍摄装置输出的多帧第二目标图像,其中,所述第二目标图像中包括目标对象。
例如,当用户框选目标对象31后即无人机获取到目标对象31的指示信息后,无人机即可对参考对象进行环绕飞行,在无人机对象进行环绕飞行时获取拍摄装置输出的多帧第二目标图像,则此时的所述多帧第二目标图像包括所述多帧第一目标图像。
再例如,当用户框选目标对象31后,无人机并不立即对参考对象进行环绕飞行,而是等到用户在该交互界面中点击该启动控制按键35后,该无人机才开始对参考对象进行环绕飞行,则无人机获取到目标对象31的指示信息后,拍摄装置输出的多帧第二目标图像可能是拍摄装置在t1时刻到t2时刻之间拍摄的,也有可能是在t2时刻到t3时刻之间拍摄的,还有可能是在t1时刻到t3时刻之间拍摄的。也就是说,多帧第二目标图像至少包括多帧第一目标图像。
相应的,所述根据所述指示信息控制拍摄装置的拍摄姿态,包括如下步骤:
步骤S1101、基于所述参考图像的目标区域中的特征点利用跟踪算法获取每一帧第二目标图像的特征点。
具体的,利用跟踪算法计算该目标区域中的每个特征点在相邻目标图像例如第二目标图像之间的偏移量,如果该特征点在前一帧目标图像相对于后一帧目标图像的偏移量和该特征点在后一帧目标图像相对于前一帧目标图像的偏移量大小相等、方向相反,即可确定该特征点是跟踪正确的特征点。
如图12所示,A、B、C、D、E、F、G分别表示参考图像30的目标区域即区域34中的特征点,特征点A、B、C、D、E、F、G也是目标对象31的特征点。121表示无人机在获取到所述指示信息之后,拍摄装置输出的多帧第二目标图像中的一个第二目标图像,此处只是示意性说明。根据KLT特征跟踪算法可确定出参考图像30中目标对象31的特征点例如A、B、C、 D、E、F、G分别在第二目标图像121中的位置。
步骤S1102、根据每一帧第二目标图像的特征点确定目标对象在对应第二目标图像的位置信息。
根据特征点例如A、B、C、D、E、F、G分别在第二目标图像121中的位置,可确定出目标对象31在第二目标图像121中的位置信息,例如目标对象31的中心点N1在第二目标图像121中的位置信息。
步骤S1103、根据所述目标对象在对应第二目标图像的位置信息控制拍摄装置的拍摄姿态。
根据目标对象31的中心点N1在第二目标图像121中的位置信息以及第二目标图像121的中心点N的位置信息,可确定出目标对象31的中心点N1相对于第二目标图像121的中心点N在水平方向上的距离Δμ,以及目标对象31的中心点N1相对于第二目标图像121的中心点N在垂直方向上的距离Δυ,进一步,根据Δμ和拍摄装置在水平方向上的FOV,可确定出目标对象31相对于拍摄装置的光轴在水平方向上偏移的角度,根据Δυ和拍摄装置在垂直方向上的FOV,可确定出目标对象31相对于拍摄装置的光轴在垂直方向上偏移的角度。根据目标对象31相对于拍摄装置的光轴在水平方向和垂直方向上偏移的角度,通过调整无人机和/云台的姿态来调整该拍摄装置的拍摄姿态,以使拍摄装置的光轴对准目标对象31,目标对象31位于第二目标图像121的画面中心。
在其他实施例中,可以不限于将目标对象31调整在第一目标图像或第二目标图像的画面中心,还可以将目标对象31调整在第一目标图像或第二目标图像中的预设区域,也就是说,通过调整无人机和/云台的姿态,使得目标对象31相对于拍摄装置的光轴在水平方向和垂直方向上偏移的角度均为非0的预设角度。
本实施例通过控制拍摄装置的拍摄姿态以使目标对象处于所述拍摄装置的拍摄画面中心,可避免无人机在对参考对象进行环绕飞行时该目标对象移动到该拍摄装置的拍摄画面外而导致无法正常确定目标对象的三维坐标;另外,也可以防止目标对象在移动过程中从拍摄装置的拍摄画面中消失。
本发明实施例提供一种移动机器人的控制装置。图13为本发明实施例提供的移动机器人的控制装置的结构图,所述移动机器人包括拍摄装置。如图13所示,移动机器人的控制装置130包括:存储器131和处理器132;其中,存储器131用于存储程序代码;处理器132调用所述程序代码,当程序代码被执行时,用于执行以下操作:获取目标对象的指示信息,其中,所述指示信息包括目标对象在拍摄装置输出的参考图像中的位置信息;根据所述指示信息确定所述目标对象的位置信息;根据所述目标对象的位置信息控制所述移动机器人环绕所述目标对象移动。
可选的,控制装置130还包括:通讯接口133,通讯接口133与处理器132连接;处理器132获取目标对象的指示信息时,具体用于:通过所述通讯接口接收控制终端发送的指示信息,其中,所述指示信息是所述控制终端检测用户在显示所述参考图像的交互界面上的目标对象选择操作确定的。
可选的,处理器132根据所述指示信息确定所述目标对象的位置信息时,具体用于:控制所述移动机器人对参考对象进行环绕移动;在所述移动机器人环绕参考对象移动的过程中,获取所述拍摄装置输出的多帧第一目标图像,其中,所述第一目标图像中包括所述目标对象;根据所述目标对象的指示信息和所述多帧第一目标图像确定所述目标对象的位置信息。
可选的,处理器132根据所述目标对象的指示信息和所述多帧第一目标图像确定所述目标对象的位置信息时,具体用于:获取所述参考图像的目标区域中的特征点,其中,所述目标区域为所述参考图像中所述指示信息指示的图像区域;基于所述参考图像的目标区域中的特征点利用跟踪算法获取每一帧第一目标图像的特征点;根据每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定目标对象的位置信息。
可选的,处理器132根据每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定目标对象的位置信息时,具体用于:基于每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息利用拟合算法确定所述目标对象的位置信息。
可选的,处理器132还用于:在获取每一帧第一目标图像的特征点之后,从每一帧第一目标图像的特征点中确定满足预设要求的目标特征点; 处理器132根据每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定目标对象的位置信息时,具体用于:根据每一帧第一目标图像的目标特征点在对应的第一目标图像中的位置信息确定所述目标对象的位置信息。
可选的,处理器132还用于:根据每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定所述移动机器人在环绕参考对象移动过程中所述拍摄装置相对于目标对象的视差;处理器132根据所述目标对象的位置信息控制所述移动机器人环绕所述目标对象移动时,具体用于:当所述视差大于第一预设视差阈值时,根据所述确定出的所述目标对象的位置信息确定所述移动机器人对所述目标对象进行环绕移动的环绕轨迹,并控制所述移动机器人在所述环绕轨迹上移动。
可选的,处理器132还用于:确定所述视差的变化速度;根据所述视差的变化速度调节所述移动机器人环绕参考对象移动的速度。
可选的,处理器132确定所述视差的变化速度时,具体用于:根据多帧第一目标图像中相邻的两帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定所述视差的变化速度。
可选的,处理器132还用于:当所述视差大于第二预设视差阈值时,根据所述确定出的所述目标对象的位置信息调整所述移动机器人对参考对象进行环绕移动的半径,其中,所述第一预设视差阈值大于所述第二预设视差阈值。
可选的,处理器132控制所述移动机器人对参考对象进行环绕移动时,具体用于:根据预设的环绕半径确定参考对象,控制所述移动机器人对参考对象进行环绕移动。
可选的,处理器132控制所述移动机器人对参考对象进行环绕移动时,具体用于:在接收到所述控制终端发送的启动控制指令后,控制所述移动机器人对参考对象进行环绕移动。
可选的,处理器132还用于:在获取到所述指示信息之后,根据所述指示信息控制拍摄装置的拍摄姿态以使所述目标对象处于所述拍摄装置的拍摄画面中心。
可选的,处理器132还用于:在获取到所述指示信息之后,获取所述 拍摄装置输出的多帧第二目标图像,其中,所述第二目标图像中包括目标对象;处理器132根据所述指示信息控制拍摄装置的拍摄姿态时,具体用于:基于所述参考图像的目标区域中的特征点利用跟踪算法获取每一帧第二目标图像的特征点;根据每一帧第二目标图像的特征点确定目标对象在对应第二目标图像的位置信息;根据所述目标对象在对应第二目标图像的位置信息控制拍摄装置的拍摄姿态。
可选的,所述多帧第二目标图像包括所述多帧第一目标图像。
本发明实施例提供的移动机器人的控制装置的具体原理和实现方式均与上述实施例类似,此处不再赘述。
本实施例通过获取拍摄装置拍摄的目标对象在该拍摄装置输出的参考图像中的位置信息,确定该目标对象的位置信息,并根据该目标对象的位置信息控制移动机器人环绕该目标对象移动,使得移动机器人不需要移动到环绕中心去记录环绕中心的位置,即可实现该移动机器人环绕该目标对象移动,简化了移动机器人实现对目标对象环绕移动的过程,同时提高了移动机器人的操作安全性。
本发明实施例提供一种移动机器人,该移动机器人具体可以是无人机。图14为本发明实施例提供的无人机的结构图,如图14所示,无人机140包括:机身、动力系统、拍摄装置144和控制装置148,所述动力系统包括如下至少一种:电机141、螺旋桨142和电子调速器143,动力系统安装在所述机身,用于提供动力;控制装置148的具体原理和实现方式均与上述实施例类似,此处不再赘述。
另外,如图14所示,无人机140还包括:传感系统145、通信系统146、支撑设备147,其中,支撑设备147具体可以是云台,拍摄装置144通过支撑设备147搭载在无人机140上。
在一些实施例中,控制装置148具体可以是无人机140的飞行控制器。
本实施例通过获取拍摄装置拍摄的目标对象在该拍摄装置输出的参考图像中的位置信息,确定该目标对象的位置信息,并根据该目标对象的位置信息控制移动机器人环绕该目标对象移动,使得移动机器人不需要移动到环绕中心去记录环绕中心的位置,即可实现该移动机器人环绕该目标 对象移动,简化了移动机器人实现对目标对象环绕移动的过程,同时提高了移动机器人的操作安全性。
本发明实施例还提供一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行以实现如上所述的移动机器人的控制方法。
在本发明所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
上述以软件功能单元的形式实现的集成的单元,可以存储在一个计算机可读取存储介质中。上述软件功能单元存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本发明各个实施例所述方法的部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
本领域技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功 能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。上述描述的装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
最后应说明的是:以上各实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述各实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的范围。

Claims (33)

  1. 一种移动机器人的控制方法,应用于移动机器人,其中,所述移动机器人包括拍摄装置,其特征在于,包括:
    获取目标对象的指示信息,其中,所述指示信息包括目标对象在拍摄装置输出的参考图像中的位置信息;
    根据所述指示信息确定所述目标对象的位置信息;
    根据所述目标对象的位置信息控制所述移动机器人环绕所述目标对象移动。
  2. 根据权利要求1所述的方法,其特征在于,所述获取目标对象的指示信息,包括:
    接收控制终端发送的指示信息,其中,所述指示信息是所述控制终端检测用户在显示所述参考图像的交互界面上的目标对象选择操作确定的。
  3. 根据权利要求1或2所述的方法,其特征在于,所述根据所述指示信息确定所述目标对象的位置信息,包括:
    控制所述移动机器人对参考对象进行环绕移动;
    在所述移动机器人环绕参考对象移动的过程中,获取所述拍摄装置输出的多帧第一目标图像,其中,所述第一目标图像中包括所述目标对象;
    根据所述目标对象的指示信息和所述多帧第一目标图像确定所述目标对象的位置信息。
  4. 根据权利要求3所述的方法,其特征在于,所述根据所述目标对象的指示信息和所述多帧第一目标图像确定所述目标对象的位置信息,包括:
    获取所述参考图像的目标区域中的特征点,其中,所述目标区域为所述参考图像中所述指示信息指示的图像区域;
    基于所述参考图像的目标区域中的特征点利用跟踪算法获取每一帧第一目标图像的特征点;
    根据每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定目标对象的位置信息。
  5. 根据权利要求4所述的方法,其特征在于,所述根据每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定目标对象的 位置信息,包括:
    基于每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息利用拟合算法确定所述目标对象的位置信息。
  6. 根据权利要求4或5所述的方法,其特征在于,所述方法还包括:在获取每一帧第一目标图像的特征点之后,从每一帧第一目标图像的特征点中确定满足预设要求的目标特征点;
    所述根据每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定目标对象的位置信息,包括:
    根据每一帧第一目标图像的目标特征点在对应的第一目标图像中的位置信息确定所述目标对象的位置信息。
  7. 根据权利要求4-6任一项所述的方法,其特征在于,所述方法还包括:
    根据每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定所述移动机器人在环绕参考对象移动过程中所述拍摄装置相对于目标对象的视差;
    所述根据所述目标对象的位置信息控制所述移动机器人环绕所述目标对象移动,包括:
    当所述视差大于第一预设视差阈值时,根据所述确定出的所述目标对象的位置信息确定所述移动机器人对所述目标对象进行环绕移动的环绕轨迹,并控制所述移动机器人在所述环绕轨迹上移动。
  8. 根据权利要求7所述的方法,其特征在于,所述方法还包括:
    确定所述视差的变化速度;
    根据所述视差的变化速度调节所述移动机器人环绕参考对象移动的速度。
  9. 根据权利要求8所述的方法,其特征在于,所述确定所述视差的变化速度,包括:
    根据多帧第一目标图像中相邻的两帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定所述视差的变化速度。
  10. 根据权利要求7-9任一项所述的方法,其特征在于,所述方法还包括:
    当所述视差大于第二预设视差阈值时,根据所述确定出的所述目标对象的位置信息调整所述移动机器人对参考对象进行环绕移动的半径,其中,所述第一预设视差阈值大于所述第二预设视差阈值。
  11. 根据权利要求3所述的方法,其特征在于,所述控制所述移动机器人对参考对象进行环绕移动,包括:
    根据预设的环绕半径确定参考对象,控制所述移动机器人对参考对象进行环绕移动。
  12. 根据权利要求3或11所述的方法,其特征在于,所述控制所述移动机器人对参考对象进行环绕移动,包括:
    在接收到所述控制终端发送的启动控制指令后,控制所述移动机器人对参考对象进行环绕移动。
  13. 根据权利要求4所述的方法,其特征在于,所述方法还包括:
    在获取到所述指示信息之后,根据所述指示信息控制拍摄装置的拍摄姿态以使所述目标对象处于所述拍摄装置的拍摄画面中心。
  14. 根据权利要求13所述的方法,其特征在于,所述方法还包括:
    在获取到所述指示信息之后,获取所述拍摄装置输出的多帧第二目标图像,其中,所述第二目标图像中包括目标对象;
    所述根据所述指示信息控制拍摄装置的拍摄姿态,包括:
    基于所述参考图像的目标区域中的特征点利用跟踪算法获取每一帧第二目标图像的特征点;
    根据每一帧第二目标图像的特征点确定目标对象在对应第二目标图像的位置信息;
    根据所述目标对象在对应第二目标图像的位置信息控制拍摄装置的拍摄姿态。
  15. 根据权利要求14所述的方法,其特征在于,所述多帧第二目标图像包括所述多帧第一目标图像。
  16. 一种移动机器人的控制装置,所述移动机器人包括拍摄装置,其特征在于,所述控制装置包括:存储器、处理器;
    所述存储器用于存储程序代码;
    所述处理器,调用所述程序代码,当程序代码被执行时,用于执行以 下操作:
    获取目标对象的指示信息,其中,所述指示信息包括目标对象在拍摄装置输出的参考图像中的位置信息;
    根据所述指示信息确定所述目标对象的位置信息;
    根据所述目标对象的位置信息控制所述移动机器人环绕所述目标对象移动。
  17. 根据权利要求16所述的控制装置,其特征在于,所述控制装置还包括:通讯接口,所述通讯接口与所述处理器连接;
    所述处理器获取目标对象的指示信息时,具体用于:
    通过所述通讯接口接收控制终端发送的指示信息,其中,所述指示信息是所述控制终端检测用户在显示所述参考图像的交互界面上的目标对象选择操作确定的。
  18. 根据权利要求16或17所述的控制装置,其特征在于,所述处理器根据所述指示信息确定所述目标对象的位置信息时,具体用于:
    控制所述移动机器人对参考对象进行环绕移动;
    在所述移动机器人环绕参考对象移动的过程中,获取所述拍摄装置输出的多帧第一目标图像,其中,所述第一目标图像中包括所述目标对象;
    根据所述目标对象的指示信息和所述多帧第一目标图像确定所述目标对象的位置信息。
  19. 根据权利要求18所述的控制装置,其特征在于,所述处理器根据所述目标对象的指示信息和所述多帧第一目标图像确定所述目标对象的位置信息时,具体用于:
    获取所述参考图像的目标区域中的特征点,其中,所述目标区域为所述参考图像中所述指示信息指示的图像区域;
    基于所述参考图像的目标区域中的特征点利用跟踪算法获取每一帧第一目标图像的特征点;
    根据每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定目标对象的位置信息。
  20. 根据权利要求19所述的控制装置,其特征在于,所述处理器根据每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息确 定目标对象的位置信息时,具体用于:
    基于每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息利用拟合算法确定所述目标对象的位置信息。
  21. 根据权利要求19或20所述的控制装置,其特征在于,所述处理器还用于:在获取每一帧第一目标图像的特征点之后,从每一帧第一目标图像的特征点中确定满足预设要求的目标特征点;
    所述处理器根据每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定目标对象的位置信息时,具体用于:根据每一帧第一目标图像的目标特征点在对应的第一目标图像中的位置信息确定所述目标对象的位置信息。
  22. 根据权利要求19-21任一项所述的控制装置,其特征在于,所述处理器还用于:根据每一帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定所述移动机器人在环绕参考对象移动过程中所述拍摄装置相对于目标对象的视差;
    所述处理器根据所述目标对象的位置信息控制所述移动机器人环绕所述目标对象移动时,具体用于:
    当所述视差大于第一预设视差阈值时,根据所述确定出的所述目标对象的位置信息确定所述移动机器人对所述目标对象进行环绕移动的环绕轨迹,并控制所述移动机器人在所述环绕轨迹上移动。
  23. 根据权利要求22所述的控制装置,其特征在于,所述处理器还用于:
    确定所述视差的变化速度;
    根据所述视差的变化速度调节所述移动机器人环绕参考对象移动的速度。
  24. 根据权利要求23所述的控制装置,其特征在于,所述处理器确定所述视差的变化速度时,具体用于:
    根据多帧第一目标图像中相邻的两帧第一目标图像的特征点在对应的第一目标图像中的位置信息确定所述视差的变化速度。
  25. 根据权利要求22-24任一项所述的控制装置,其特征在于,所述处理器还用于:当所述视差大于第二预设视差阈值时,根据所述确定出的 所述目标对象的位置信息调整所述移动机器人对参考对象进行环绕移动的半径,其中,所述第一预设视差阈值大于所述第二预设视差阈值。
  26. 根据权利要求18所述的控制装置,其特征在于,所述处理器控制所述移动机器人对参考对象进行环绕移动时,具体用于:
    根据预设的环绕半径确定参考对象,控制所述移动机器人对参考对象进行环绕移动。
  27. 根据权利要求18或26所述的控制装置,其特征在于,所述处理器控制所述移动机器人对参考对象进行环绕移动时,具体用于:
    在接收到所述控制终端发送的启动控制指令后,控制所述移动机器人对参考对象进行环绕移动。
  28. 根据权利要求19所述的控制装置,其特征在于,所述处理器还用于:
    在获取到所述指示信息之后,根据所述指示信息控制拍摄装置的拍摄姿态以使所述目标对象处于所述拍摄装置的拍摄画面中心。
  29. 根据权利要求28所述的控制装置,其特征在于,所述处理器还用于:在获取到所述指示信息之后,获取所述拍摄装置输出的多帧第二目标图像,其中,所述第二目标图像中包括目标对象;
    所述处理器根据所述指示信息控制拍摄装置的拍摄姿态时,具体用于:
    基于所述参考图像的目标区域中的特征点利用跟踪算法获取每一帧第二目标图像的特征点;
    根据每一帧第二目标图像的特征点确定目标对象在对应第二目标图像的位置信息;
    根据所述目标对象在对应第二目标图像的位置信息控制拍摄装置的拍摄姿态。
  30. 根据权利要求29所述的控制装置,其特征在于,所述多帧第二目标图像包括所述多帧第一目标图像。
  31. 一种移动机器人,其特征在于,包括:
    机身;
    动力系统,安装在所述机身,用于提供动力;
    拍摄装置;
    以及如权利要求16-30任一项所述的控制装置。
  32. 根据权利要求31所述的移动机器人,其特征在于,所述移动机器人包括无人机。
  33. 一种计算机可读存储介质,其特征在于,其上存储有计算机程序,所述计算机程序被处理器执行以实现如权利要求1-15任一项所述的方法。
PCT/CN2018/096534 2018-07-20 2018-07-20 移动机器人的控制方法、装置、设备及存储介质 WO2020014987A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2018/096534 WO2020014987A1 (zh) 2018-07-20 2018-07-20 移动机器人的控制方法、装置、设备及存储介质
CN201880042301.7A CN110892714A (zh) 2018-07-20 2018-07-20 移动机器人的控制方法、装置、设备及存储介质
CN202310158597.1A CN116126024A (zh) 2018-07-20 2018-07-20 移动机器人的控制方法、装置、设备及存储介质
US17/123,125 US11789464B2 (en) 2018-07-20 2020-12-16 Mobile robot orbiting photography path control methods and apparatuses

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/096534 WO2020014987A1 (zh) 2018-07-20 2018-07-20 移动机器人的控制方法、装置、设备及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/123,125 Continuation US11789464B2 (en) 2018-07-20 2020-12-16 Mobile robot orbiting photography path control methods and apparatuses

Publications (1)

Publication Number Publication Date
WO2020014987A1 true WO2020014987A1 (zh) 2020-01-23

Family

ID=69164932

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/096534 WO2020014987A1 (zh) 2018-07-20 2018-07-20 移动机器人的控制方法、装置、设备及存储介质

Country Status (3)

Country Link
US (1) US11789464B2 (zh)
CN (2) CN116126024A (zh)
WO (1) WO2020014987A1 (zh)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018086133A1 (en) * 2016-11-14 2018-05-17 SZ DJI Technology Co., Ltd. Methods and systems for selective sensor fusion
WO2021195944A1 (zh) * 2020-03-31 2021-10-07 深圳市大疆创新科技有限公司 可移动平台的控制方法、装置、可移动平台及存储介质
US11808578B2 (en) * 2020-05-29 2023-11-07 Aurora Flight Sciences Corporation Global positioning denied navigation
CN114029952A (zh) * 2021-11-12 2022-02-11 珠海格力电器股份有限公司 机器人操作控制方法、装置和系统
WO2023201574A1 (zh) * 2022-04-20 2023-10-26 深圳市大疆创新科技有限公司 无人机的控制方法、图像显示方法、无人机及控制终端


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2460187C2 (ru) * 2008-02-01 2012-08-27 Рокстек Аб Переходная рама с встроенным прижимным устройством
US20170244937A1 (en) * 2014-06-03 2017-08-24 Gopro, Inc. Apparatus and methods for aerial video acquisition
CN105578034A (zh) * 2015-12-10 2016-05-11 深圳市道通智能航空技术有限公司 一种对目标进行跟踪拍摄的控制方法、控制装置及系统
KR20180066647A (ko) * 2016-12-09 2018-06-19 삼성전자주식회사 무인 비행 장치 및 전자 장치를 이용한 무인 비행 장치의 지오 펜스 영역의 재설정 방법
CN107885096B (zh) * 2017-10-16 2021-07-27 中国电力科学研究院 一种无人机巡检航迹三维仿真监控系统
CN107703970B (zh) * 2017-11-03 2018-08-21 中国人民解放军陆军工程大学 无人机集群环绕追踪方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006082774A (ja) * 2004-09-17 2006-03-30 Hiroboo Kk 無人飛行体及び無人飛行体制御方法
CN102937443A (zh) * 2012-01-13 2013-02-20 唐粮 一种基于无人机的目标快速定位系统及方法
CN106909172A (zh) * 2017-03-06 2017-06-30 重庆零度智控智能科技有限公司 环绕跟踪方法、装置和无人机
CN107168362A (zh) * 2017-05-26 2017-09-15 昊翔电能运动科技(昆山)有限公司 一种监控方法、系统及飞行机器人

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021197121A1 (zh) * 2020-03-30 2021-10-07 维沃移动通信有限公司 图像拍摄方法及电子设备
WO2021217403A1 (zh) * 2020-04-28 2021-11-04 深圳市大疆创新科技有限公司 可移动平台的控制方法、装置、设备及存储介质
CN113853559A (zh) * 2020-04-28 2021-12-28 深圳市大疆创新科技有限公司 可移动平台的控制方法、装置、设备及存储介质
CN112415990A (zh) * 2020-11-20 2021-02-26 广州极飞科技有限公司 作业设备的控制方法及装置、作业设备

Also Published As

Publication number Publication date
CN116126024A (zh) 2023-05-16
US20210103293A1 (en) 2021-04-08
US11789464B2 (en) 2023-10-17
CN110892714A (zh) 2020-03-17

Similar Documents

Publication Publication Date Title
WO2020014987A1 (zh) 移动机器人的控制方法、装置、设备及存储介质
CN112567201B (zh) 距离测量方法以及设备
US11644832B2 (en) User interaction paradigms for a flying digital assistant
CN108476288B (zh) 拍摄控制方法及装置
CN105959625B (zh) 控制无人机追踪拍摄的方法及装置
CN111344644B (zh) 用于基于运动的自动图像捕获的技术
CN110799921A (zh) 拍摄方法、装置和无人机
JP6943988B2 (ja) 移動可能物体の制御方法、機器およびシステム
CN106973221B (zh) 基于美学评价的无人机摄像方法和系统
WO2020233682A1 (zh) 一种自主环绕拍摄方法、装置以及无人机
CN106289180A (zh) 运动轨迹的计算方法及装置、终端
WO2020038720A1 (en) Apparatus, method and computer program for detecting the form of a deformable object
WO2020024134A1 (zh) 轨迹切换的方法和装置
WO2019183789A1 (zh) 无人机的控制方法、装置和无人机
WO2020019130A1 (zh) 运动估计方法及可移动设备
WO2021217403A1 (zh) 可移动平台的控制方法、装置、设备及存储介质
WO2021000225A1 (zh) 可移动平台的控制方法、装置、设备及存储介质
CN111581322B (zh) 视频中兴趣区域在地图窗口内显示的方法和装置及设备
KR20180106178A (ko) 무인 비행체, 전자 장치 및 그에 대한 제어 방법
WO2020172878A1 (zh) 可移动平台的射击瞄准控制方法、设备及可读存储介质
CN109309709A (zh) 一种可远端控制无人装置的控制方法
WO2022213385A1 (zh) 目标跟踪方法、装置、可移动平台及计算机可读存储介质
WO2022246608A1 (zh) 生成全景视频的方法、装置和可移动平台

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18926526

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18926526

Country of ref document: EP

Kind code of ref document: A1