WO2019051832A1 - Movable object control method, device and system

Movable object control method, device and system

Info

Publication number
WO2019051832A1
WO2019051832A1 (PCT application PCT/CN2017/102081, CN2017102081W)
Authority
WO
WIPO (PCT)
Prior art keywords
marker
photographing device
information
pose
movable object
Prior art date
Application number
PCT/CN2017/102081
Other languages
English (en)
French (fr)
Inventor
吴迪
唐克坦
Original Assignee
深圳市大疆创新科技有限公司
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2017/102081 (WO2019051832A1)
Priority to CN201780014737.0A (CN108713179A)
Priority to JP2019571295A (JP6943988B2)
Priority to EP17925187.1A (EP3674210A4)
Publication of WO2019051832A1
Priority to US16/719,207 (US20200125100A1)


Classifications

    • G06T7/70 — Image analysis: determining position or orientation of objects or cameras
    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G05D1/0016 — Control of position, course, altitude or attitude of land, water, air or space vehicles associated with a remote control arrangement, characterised by the operator's input device
    • G05D1/0094 — Control of position, course, altitude or attitude of vehicles involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • G05D1/0808 — Control of attitude (roll, pitch or yaw) specially adapted for aircraft
    • G05D1/101 — Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G06F3/0346 — Pointing devices with detection of the device orientation or free movement in 3D space, e.g. 3D mice, 6-DOF pointers using gyroscopes, accelerometers or tilt sensors
    • G06V20/10 — Scene recognition: terrestrial scenes
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • B64U2101/30 — UAVs specially adapted for imaging, photography or videography
    • B64U2201/10 — UAVs with autonomous flight controls, e.g. navigating independently using inertial navigation systems (INS)
    • G06T2207/30201 — Subject of image: face
    • G06T2207/30204 — Subject of image: marker
    • G06T2207/30244 — Subject of image: camera pose

Description

  • Embodiments of the present invention relate to the field of drones, and in particular, to a movable object control method, device, and system.
  • In the prior art, control methods for an unmanned aerial vehicle include remote-controller control, mobile app control, and somatosensory control.
  • Somatosensory control means that the user holds a handheld device provided with an attitude sensor (an inertial measurement unit, IMU).
  • The attitude sensor senses the motion of the user's hand, the motion information of the hand is converted into a control command for controlling the unmanned aerial vehicle, and the control command is transmitted to the unmanned aerial vehicle, thereby controlling it.
  • However, the somatosensory control method can only make the unmanned aerial vehicle perform coarse movements over a large range and cannot control it accurately.
  • Embodiments of the present invention provide a movable object control method, device, and system for accurately controlling an unmanned aerial vehicle.
  • A first aspect of the embodiments of the present invention provides a method for controlling a movable object, including: acquiring a marker in an image captured by a photographing device; determining pose information of the photographing device relative to the marker; and controlling the movable object according to the pose information of the photographing device relative to the marker.
  • A second aspect of the embodiments of the present invention provides a terminal device, including one or more processors configured to: acquire a marker in an image captured by a photographing device; determine pose information of the photographing device relative to the marker; and control the movable object according to the pose information of the photographing device relative to the marker.
  • A third aspect of the embodiments of the present invention provides an unmanned aerial vehicle, including: a fuselage; a power system mounted on the fuselage for providing flight power; and a flight controller in communication with the power system for controlling the flight of the unmanned aerial vehicle.
  • A fourth aspect of the embodiments of the present invention provides a movable object control system, comprising the terminal device of the second aspect and the unmanned aerial vehicle of the third aspect.
  • The movable object control method, device and system provided by the embodiments of the present invention acquire a marker in an image captured by a photographing device, determine the pose information of the photographing device relative to the marker, and control the movable object according to that pose information. Because the pose information of the photographing device relative to the marker can be determined accurately, the movable object can be controlled accurately when it is controlled according to this pose information.
  • FIG. 1 is a flowchart of a method for controlling a movable object according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a movable object control system according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a user interface of a terminal device according to an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of a user interface of another terminal device according to an embodiment of the present disclosure.
  • FIG. 5 is a flowchart of a method for controlling a movable object according to another embodiment of the present invention.
  • FIG. 6 is a flowchart of a method for controlling a movable object according to another embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a terminal device moving relative to a user's face according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of another terminal device moving relative to a user's face according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of still another terminal device moving relative to a user's face according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of a user interface of still another terminal device according to an embodiment of the present disclosure.
  • FIG. 11 is a structural diagram of a terminal device according to an embodiment of the present invention.
  • FIG. 12 is a structural diagram of an unmanned aerial vehicle according to an embodiment of the present invention.
  • When a component is referred to as being "fixed to" another component, it can be directly on the other component or an intervening component may be present. When a component is referred to as being "connected to" another component, it can be directly connected to the other component or an intervening component may be present.
  • FIG. 1 is a flowchart of a method for controlling a movable object according to an embodiment of the present invention. As shown in FIG. 1, the method in this embodiment may include:
  • Step S101 Acquire a marker in an image collected by the photographing device.
  • The movable object control method provided in this embodiment can be applied to the movable object control system shown in FIG. 2, where the movable object is specifically an unmanned aerial vehicle.
  • The movable object control system includes a terminal device 20 and a movable object. The movable object is specifically an unmanned aerial vehicle 21, which carries a photographing device 22; specifically, the unmanned aerial vehicle 21 is provided with a gimbal (pan/tilt head) 23, and the photographing device 22 is mounted on the gimbal 23.
  • the terminal device 20 may specifically be a handheld device such as a mobile terminal or a tablet computer, and the terminal device 20 has a photographing function.
  • the terminal device 20 is provided with a front camera and/or a rear camera, and the front camera and/or the rear camera can be used for capturing images.
  • the user holds the terminal device 20, performs self-photographing through the front camera of the terminal device 20, or captures other pictures through the rear camera of the terminal device 20.
  • the user can preview the image acquired by the terminal device 20 through the screen of the terminal device 20.
  • The front camera or the rear camera of the terminal device 20 can capture images of the current scene in real time. The terminal device 20 then detects a marker in the image captured by the front camera or the rear camera. Markers can be divided into preset markers and general markers.
  • A preset marker may include at least one of the following: a face, a two-dimensional code (QR code), an AprilTag, or a similar object having a specific model, among which the face is the most common. A general marker may be a stationary object in the image, such as a tree, a car, or a building.
  • The difference between a preset marker and a general marker is that a preset marker has a specific model, whereas a general marker has no specific model.
  • For example, the front camera of the terminal device 20 captures images of the user's face in real time, and the terminal device 20 recognizes the face in the image through face recognition technology.
  • Image information or video data captured by the photographing device 22 mounted on the UAV 21 can be wirelessly transmitted to the terminal device 20 through the communication system of the UAV 21, and the user can view, through the terminal device 20, the image information or video data captured by the photographing device 22. While the user views this image information or video data, the front camera of the terminal device 20 may face the user's face, so the front camera can capture images of the user's face in real time.
  • The terminal device 20 recognizes the face in the image through face recognition technology. Specifically, because a face has strong, distinctive characteristics, the terminal device 20 can detect fixed key points of the face in the image, such as the eyes, nose, eyebrows, and mouth, and thereby recognize the face in the image from these fixed key points.
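  • By way of illustration only, the sketch below shows one way such fixed face key points could be obtained from a camera frame. The patent does not name a particular detector; this sketch assumes Python with OpenCV, uses the bundled Haar cascade for the face region, and lets strong corners inside that region stand in for the eye/nose/mouth key points a dedicated landmark model would return.

      # Hypothetical sketch: locate a face and fixed key points in one camera frame.
      import cv2

      def detect_face_keypoints(frame_bgr):
          gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
          cascade = cv2.CascadeClassifier(
              cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
          faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
          if len(faces) == 0:
              return None, None                       # no face in this frame
          x, y, w, h = faces[0]                       # take the first detected face
          roi = gray[y:y + h, x:x + w]
          # Strong corners inside the face region stand in for fixed key points.
          corners = cv2.goodFeaturesToTrack(roi, maxCorners=30,
                                            qualityLevel=0.01, minDistance=5)
          if corners is not None:
              corners = corners.reshape(-1, 2) + (x, y)   # back to full-image coords
          return (x, y, w, h), corners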
  • Acquiring the marker in the image captured by the photographing device may include the following ways:
  • One way is to acquire a marker selected by the user in the image captured by the photographing device.
  • Acquiring the marker selected by the user in the image captured by the photographing device includes at least one of the following: acquiring a marker box-selected (framed) by the user in the image captured by the photographing device; acquiring a marker click-selected by the user in the image captured by the photographing device.
  • For example, the terminal device 20 can acquire the marker selected by the user in the image captured by the terminal device 20. As shown in FIG. 3, 30 denotes an image captured by the terminal device 20, and a general marker 31 is included in the image.
  • When the terminal device 20 displays the image 30, the user can frame the general marker 31 in the image 30, as shown by the dashed box in FIG. 3. Alternatively, the user can click on the general marker 31 in the image 30, as shown in FIG. 4.
  • In another way, the terminal device 20 pre-stores a reference image, such as a two-dimensional code (QR code) reference image or an AprilTag reference image. After the camera of the terminal device 20 captures an image of the current scene, the terminal device 20 detects whether the image contains a QR code that matches the pre-stored QR code reference image, or an AprilTag that matches the pre-stored AprilTag reference image, and the successfully matched object is used as the marker.
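  • As a small illustration of this matching step for a two-dimensional code (the patent does not prescribe a specific detector), the sketch below assumes OpenCV's QRCodeDetector and compares the decoded payload against a stored expected payload that stands in for the pre-stored reference.

      # Hypothetical sketch: detect a QR code in the frame and treat it as the marker
      # only if it matches the stored reference payload.
      import cv2

      def find_qr_marker(frame_bgr, expected_payload=None):
          detector = cv2.QRCodeDetector()
          payload, points, _ = detector.detectAndDecode(frame_bgr)
          if points is None:
              return None                      # no QR code in this frame
          if expected_payload is not None and payload != expected_payload:
              return None                      # a code was found, but not the stored one
          return points.reshape(-1, 2)         # 4 corner points of the matched code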
  • Alternatively, the terminal device 20 pre-stores reference images of general markers such as trees, cars, and buildings. After the camera of the terminal device 20 captures an image of the current scene, the terminal device 20 detects whether the image contains a marker that matches one of the pre-stored reference images of trees, cars, buildings, and the like.
  • In yet another way, a marker composed of a preset number of feature points in the image captured by the photographing device is acquired.
  • For example, the terminal device 20 can also detect feature points in the image. If the number of feature points reaches a preset number and the positional relationship between the feature points satisfies a preset positional relationship, the terminal device 20 detects a general marker composed of the preset number of feature points.
  • Step S102 Determine pose information of the photographing device relative to the marker.
  • After the terminal device 20 detects the marker through the above steps, the terminal device 20 further determines the pose information of the terminal device 20 relative to the marker.
  • the pose information includes at least one of the following: location information, posture information.
  • the attitude information includes at least one of the following: a pitch angle, a roll angle, and a heading angle.
  • Determining the pose information of the photographing device relative to the marker includes: determining the pose information of the marker in the image captured by the photographing device; and determining the pose information of the photographing device relative to the marker according to the pose information of the marker in the image captured by the photographing device.
  • Determining the pose information of the marker in the image captured by the photographing device includes: determining the pose information of the marker in the image captured by the photographing device according to the coordinates of one or more key points of the marker in that image.
  • For example, when the marker is the user's face, the terminal device 20 detects the fixed key points of the face in the image captured in real time by the front camera, and the pose information of the face in the image can be determined from the coordinates of these fixed key points; this pose information specifically includes the position information and the posture information of the face in the image.
  • The terminal device 20 further determines the pose information of the terminal device 20 relative to the face according to the pose information of the face in the image.
  • For example, if the face is on the right side of the image, it can be determined that the terminal device 20 is on the left side of the face.
  • When the marker is a general marker, the terminal device 20 detects the feature points of the general marker in the image captured in real time by the rear camera, estimates the pose information of those feature points using a SLAM method, and uses the pose information of the feature points as the pose information of the general marker in the image.
  • The terminal device 20 further determines the pose information of the terminal device 20 relative to the general marker according to the pose information of the general marker in the image.
  • For example, if the general marker is on the left side of the image, it can be determined that the terminal device 20 is on the right side of the general marker; if the general marker is on the right side of the image, it can be determined that the terminal device 20 is on the left side of the general marker.
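  • As a non-authoritative illustration of how such a relative pose could be computed for a preset marker that has a known model, the sketch below assumes the marker's key points are known in the marker's own 3D frame, the camera intrinsic matrix K is available, and OpenCV's PnP solver is used; the patent itself does not prescribe this particular algorithm.

      import cv2
      import numpy as np

      def camera_pose_relative_to_marker(model_points_3d, image_points_2d, K):
          """Estimate the camera pose relative to a marker with a known 3D model.

          model_points_3d: (N, 3) marker key points in the marker's own frame.
          image_points_2d: (N, 2) the same key points detected in the image.
          K: 3x3 camera intrinsic matrix.
          """
          ok, rvec, tvec = cv2.solvePnP(
              np.asarray(model_points_3d, dtype=np.float64),
              np.asarray(image_points_2d, dtype=np.float64),
              K, distCoeffs=None)
          if not ok:
              return None
          R, _ = cv2.Rodrigues(rvec)             # rotation: marker frame -> camera frame
          # Invert to obtain the camera's position and orientation in the marker frame.
          R_cam_in_marker = R.T
          t_cam_in_marker = -R.T @ tvec
          return R_cam_in_marker, t_cam_in_marker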
  • Step S103 Control the movable object according to the pose information of the photographing device with respect to the marker.
  • the terminal device 20 controls the unmanned aerial vehicle 21 based on the pose information of the terminal device 20 with respect to the marker. For example, the terminal device 20 controls the position of the unmanned aerial vehicle 21 based on the position information of the terminal device 20 with respect to the marker; or controls the posture of the unmanned aerial vehicle 21 based on the posture information of the terminal device 20 with respect to the marker.
  • controlling the movable object according to the pose information of the photographing device relative to the marker includes the following feasible implementation manners:
  • a feasible implementation manner is: controlling position information of the movable object relative to a preset origin according to position information of the photographing device relative to the marker.
  • Controlling the position information of the movable object relative to the preset origin according to the position information of the photographing device relative to the marker includes: controlling the distance of the movable object relative to the preset origin according to the distance of the photographing device relative to the marker.
  • the terminal device 20 controls the position information of the UAV 21 relative to the preset origin according to the position information of the terminal device 20 relative to the marker.
  • The preset origin may be the current return (home) point of the UAV 21, the initial return point of the UAV 21, or a point in a preset geographic coordinate system; this embodiment does not limit the specific preset origin.
  • the terminal device 20 generates a control command for controlling the unmanned aerial vehicle 21 according to the distance of the terminal device 20 from the marker, and the control command is used to control the distance of the unmanned aerial vehicle 21 with respect to the preset origin.
  • For example, if the distance between the terminal device 20 and the marker is L1, the terminal device 20 controls the distance of the UAV 21 relative to the preset origin to be L2.
  • This embodiment does not limit the relationship between L1 and L2.
  • For example, when the terminal device 20 is 1 meter away from the marker, the terminal device 20 controls the distance of the unmanned aerial vehicle 21 relative to the preset origin to be 100 meters; the terminal device 20 sends a control command containing the distance information of 100 meters to the unmanned aerial vehicle 21.
  • the unmanned aerial vehicle 21 adjusts the distance between it and the preset origin according to the control command.
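  • A minimal sketch of such a distance mapping follows; since the patent leaves the relationship between L1 and L2 open, a simple proportional scale factor is assumed here purely for illustration.

      # Hypothetical sketch: map the terminal-to-marker distance L1 to a commanded
      # UAV-to-origin distance L2 with an assumed linear scale factor.
      def target_distance_from_origin(device_to_marker_m, scale=100.0):
          return scale * device_to_marker_m

      # Example: target_distance_from_origin(1.0) -> 100.0, matching the 1 m -> 100 m case.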
  • Another possible implementation is controlling the posture information of the movable object according to the posture information of the photographing device relative to the marker. This includes: controlling the pitch angle of the movable object according to the pitch angle of the photographing device relative to the marker; controlling the roll angle of the movable object according to the roll angle of the photographing device relative to the marker; and controlling the heading angle of the movable object according to the heading angle of the photographing device relative to the marker.
  • the terminal device 20 controls the attitude information of the unmanned aerial vehicle 21 according to the attitude information of the terminal device 20 with respect to the marker.
  • For example, the terminal device 20 controls the pitch angle of the unmanned aerial vehicle 21 according to the pitch angle of the terminal device 20 relative to the marker; controls the roll angle of the unmanned aerial vehicle 21 according to the roll angle of the terminal device 20 relative to the marker; and controls the heading angle of the unmanned aerial vehicle 21 according to the heading angle of the terminal device 20 relative to the marker.
  • For example, if the pitch angle of the terminal device 20 relative to the marker is θ1, the terminal device 20 controls the pitch angle of the unmanned aerial vehicle 21 to be θ2.
  • This embodiment does not limit the relationship between ⁇ 1 and ⁇ 2.
  • ⁇ 1 and ⁇ 2 are in a predetermined proportional relationship.
  • Yet another possible implementation is controlling the movement speed of the movable object according to the posture information of the photographing device relative to the marker. This includes: controlling the speed at which the movable object moves along the Y axis of the ground coordinate system according to the pitch angle of the photographing device relative to the marker; controlling the speed at which the movable object moves along the X axis of the ground coordinate system according to the roll angle of the photographing device relative to the marker; and controlling the speed at which the movable object moves along the Z axis of the ground coordinate system according to the heading angle of the photographing device relative to the marker.
  • the terminal device 20 controls the moving speed of the unmanned aerial vehicle 21 according to the attitude information of the terminal device 20 with respect to the marker.
  • For example, the terminal device 20 controls the speed at which the unmanned aerial vehicle 21 moves along the Y axis of the ground coordinate system according to the pitch angle of the terminal device 20 relative to the marker, controls the speed along the X axis according to the roll angle, and controls the speed along the Z axis according to the heading angle. This is only illustrative and does not limit the correspondence between attitude angles and movement directions.
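  • A minimal sketch of such an attitude-to-velocity mapping follows; the gains, clamping and exact angle-to-axis assignment are assumptions made for illustration and are not fixed by the patent.

      # Hypothetical sketch: pitch, roll and heading angles of the terminal relative to
      # the marker drive the Y, X and Z velocities of the UAV in the ground frame.
      def velocity_command(pitch_rad, roll_rad, heading_rad, gain=2.0, v_max=5.0):
          clamp = lambda v: max(-v_max, min(v_max, v))
          vy = clamp(gain * pitch_rad)     # pitch angle   -> speed along ground-frame Y
          vx = clamp(gain * roll_rad)      # roll angle    -> speed along ground-frame X
          vz = clamp(gain * heading_rad)   # heading angle -> speed along ground-frame Z
          return vx, vy, vz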
  • In this embodiment, the marker in the image captured by the photographing device is acquired, the pose information of the photographing device relative to the marker is determined, and the movable object is controlled according to that pose information. Because the pose information of the photographing device relative to the marker can be determined accurately, precise control of the movable object can be achieved when the movable object is controlled according to this pose information.
  • FIG. 5 is a flowchart of a method for controlling a movable object according to another embodiment of the present invention. As shown in FIG. 5, on the basis of the embodiment shown in FIG. 1, the method in this embodiment may include:
  • Step S501 Acquire a marker in an image collected by the photographing device.
  • Step S501 is consistent with the principle and implementation manner of step S101, and details are not described herein again.
  • Step S502 Determine pose motion information of the photographing device relative to the marker.
  • Specifically, the terminal device 20 may further determine the pose motion information of the terminal device 20 relative to the marker.
  • the pose motion information includes at least one of the following: position change information and posture change information.
  • For example, the marker is the user's face, and the front camera of the terminal device 20 faces the user's face.
  • The front camera can capture images including the user's face in real time. Assuming the user's face does not move and the user moves the terminal device 20, then when the user's face moves to the right within the image, it indicates that the terminal device 20 is moving to the left relative to the user's face.
  • As another example, the marker is a general marker, and the rear camera of the terminal device 20 faces the general marker.
  • The rear camera can capture images including the general marker in real time. Assuming the general marker does not move and the user moves the terminal device 20, then when the general marker moves to the right within the image, it indicates that the terminal device 20 is moving to the left relative to the general marker. Similarly, the change in attitude of the terminal device 20 relative to the marker can also be determined. It can be understood that the terminal device 20 detects the marker in the image captured by the front camera or the rear camera in real time and determines the change in position or attitude of the marker within the image.
  • The terminal device 20 can infer the change in position of the terminal device 20 relative to the marker from the change in position of the marker within the image, or infer the change in attitude of the terminal device 20 relative to the marker from the change in attitude of the marker within the image.
  • Step S503 Control the movable object according to the pose motion information of the photographing device with respect to the marker.
  • Specifically, the terminal device 20 may map the change in position of the terminal device 20 relative to the marker to a control command for controlling the unmanned aerial vehicle 21, or map the change in attitude of the terminal device 20 relative to the marker to a control command for controlling the unmanned aerial vehicle 21, and send the control command to the unmanned aerial vehicle 21.
  • controlling the movable object according to the pose motion information of the photographing device relative to the marker includes the following possible situations:
  • a possible case is: controlling position change information of the movable object with respect to a preset origin according to position change information of the photographing device with respect to the marker.
  • For example, when the terminal device 20 moves to the left relative to the marker, the terminal device 20 can control the UAV 21 to move to the left relative to the preset origin, or control the UAV 21 to move to the right relative to the preset origin.
  • Similarly, when the terminal device 20 moves to the right relative to the marker, the terminal device 20 can control the UAV 21 to move to the right relative to the preset origin, or control the UAV 21 to move to the left relative to the preset origin.
  • The preset origin may be the current return point of the UAV 21, the initial return point of the UAV 21, or a point in a preset geographic coordinate system; this embodiment does not limit the specific preset origin.
  • Another possible case is to control the posture change information of the movable object with respect to the preset origin according to the posture change information of the photographing device with respect to the marker.
  • The change in attitude of the terminal device 20 relative to the marker may be a change in the pitch angle of the terminal device 20 relative to the marker, such as a pitch angular velocity; a change in the roll angle of the terminal device 20 relative to the marker, such as a roll angular velocity; or a change in the heading angle of the terminal device 20 relative to the marker, such as a heading angular velocity.
  • the terminal device 20 controls the pitch angular velocity of the UAV 21 relative to the preset origin according to the pitch angular velocity of the terminal device 20 with respect to the marker; and controls the UAV according to the roll angular velocity of the terminal device 20 with respect to the marker. 21 Rolling angular velocity with respect to a preset origin; controlling the heading angular velocity of the UAV 21 with respect to the preset origin according to the heading angular velocity of the terminal device 20 with respect to the marker.
  • In this embodiment, the marker in the image captured by the photographing device is acquired, the position change information or posture change information of the photographing device relative to the marker is determined, and the position change of the movable object is controlled according to the position change information of the photographing device relative to the marker, or the posture change of the movable object is controlled according to the posture change information of the photographing device relative to the marker. Because the position change information or posture change information of the photographing device relative to the marker can be determined accurately, the movable object can be precisely controlled according to this position change information or posture change information.
  • FIG. 6 is a flowchart of a method for controlling a movable object according to another embodiment of the present invention.
  • the method for determining the pose motion information of the photographing device relative to the marker in step S502 may specifically include the following feasible implementation manners:
  • the first feasible implementation is the following steps as shown in Figure 6:
  • Step S601 Determine pose motion information of the marker in at least two frames of images collected by the photographing device.
  • Determining the pose motion information of the marker in the at least two frames of images captured by the photographing device includes: determining an association matrix between a first image and a second image captured by the photographing device according to first coordinates of one or more key points of the marker in the first image and second coordinates of the one or more key points of the marker in the second image; and determining the pose motion information of the marker in the first image and the second image according to the association matrix between the first image and the second image.
  • When the marker is the user's face, the front camera of the terminal device 20 captures images including the face in real time; when the marker is a general marker, the rear camera of the terminal device 20 captures images including the general marker in real time.
  • The terminal device 20 uses a preset initialization method to obtain a series of two-dimensional key points of the general marker in the image.
  • The two-dimensional key points may be highly distinctive feature points such as Harris or FAST corner points.
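  • By way of illustration only (the patent does not fix a particular detector or tracker), the sketch below extracts FAST corners in one frame and tracks them into the next frame with pyramidal Lucas-Kanade optical flow, giving the (u_i, v_i) -> (u'_i, v'_i) correspondences used in the triangulation described next; it assumes Python with OpenCV and NumPy.

      import cv2
      import numpy as np

      def detect_and_track(prev_gray, next_gray):
          """Return matched 2D key points (previous frame, next frame)."""
          fast = cv2.FastFeatureDetector_create(threshold=20)
          kps = fast.detect(prev_gray, None)
          if not kps:
              return np.empty((0, 2)), np.empty((0, 2))
          pts_prev = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
          pts_next, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                         pts_prev, None)
          good = status.reshape(-1) == 1          # keep only successfully tracked points
          return pts_prev.reshape(-1, 2)[good], pts_next.reshape(-1, 2)[good]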
  • The terminal device 20 further tracks the two-dimensional key points between two frames of images captured by the rear camera, for example between two adjacent frames. Suppose a two-dimensional key point (u_i, v_i) in the previous frame corresponds to the two-dimensional key point (u'_i, v'_i) in the next frame.
  • The intrinsic (internal reference) matrix of the rear camera of the terminal device 20 is K.
  • The terminal device 20 may use a triangulation method to convert the series of two-dimensional key points of the general marker in the image into three-dimensional points X_i; the triangulation method may specifically be the direct linear transformation (DLT).
  • Suppose the projection matrix of the previous frame is P = [p^1T; p^2T; p^3T] and the projection matrix of the next frame is P' = [p'^1T; p'^2T; p'^3T], where p^1T denotes the first row of P, p^2T the second row of P, p^3T the third row of P, and p'^1T, p'^2T, p'^3T denote the first, second and third rows of P'.
  • The relationship between the projection matrix P of the previous frame, the three-dimensional point X_i, and the two-dimensional key point (u_i, v_i) of the previous frame can be expressed by formula (1):
        u_i (p^3T X_i) - p^1T X_i = 0,  v_i (p^3T X_i) - p^2T X_i = 0    (1)
  • The relationship between the projection matrix P' of the next frame, the three-dimensional point X_i, and the two-dimensional key point (u'_i, v'_i) of the next frame can be expressed by formula (2):
        u'_i (p'^3T X_i) - p'^1T X_i = 0,  v'_i (p'^3T X_i) - p'^2T X_i = 0    (2)
  • Stacking formulas (1) and (2) gives the relationship between P, P', X_i, (u_i, v_i) and (u'_i, v'_i) in formula (3):
        A X_i = 0, where A = [u_i p^3T - p^1T; v_i p^3T - p^2T; u'_i p'^3T - p'^1T; v'_i p'^3T - p'^2T]    (3)
  • Here A denotes the stacked coefficient matrix; the right singular vector of A corresponding to its smallest singular value (equivalently, the eigenvector of AᵀA with the smallest eigenvalue) is the solution for the three-dimensional point X_i.
  • The projection matrix P of the previous frame and the projection matrix P' of the next frame can be obtained from the fundamental matrix F.
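  • A small numerical sketch of the DLT step in formulas (1)-(3) follows, assuming NumPy and that the two projection matrices and a matched key-point pair are already available; the homogeneous solution is taken as the right singular vector for the smallest singular value of A.

      import numpy as np

      def triangulate_dlt(P, P_next, uv, uv_next):
          """Triangulate one 3D point X_i from a matched pair of 2D key points."""
          u, v = uv
          u2, v2 = uv_next
          A = np.vstack([
              u  * P[2]      - P[0],         # u  * p^3T - p^1T
              v  * P[2]      - P[1],         # v  * p^3T - p^2T
              u2 * P_next[2] - P_next[0],    # u' * p'^3T - p'^1T
              v2 * P_next[2] - P_next[1],    # v' * p'^3T - p'^2T
          ])
          _, _, Vt = np.linalg.svd(A)
          X_h = Vt[-1]                       # homogeneous solution of A X = 0
          return X_h[:3] / X_h[3]            # Euclidean 3D point X_i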
  • After the three-dimensional points are obtained, the association (correlation) matrix between the two adjacent frames can be determined.
  • The specific process is as follows:
  • The three-dimensional points corresponding to the two adjacent frames are represented in homogeneous coordinate form: X_i = (x_i, y_i, z_i) denotes a three-dimensional point of the previous frame, with homogeneous form (x_i, y_i, z_i, 1)ᵀ; X'_i = (x'_i, y'_i, z'_i) denotes the corresponding three-dimensional point of the next frame, with homogeneous form (x'_i, y'_i, z'_i, 1)ᵀ.
  • The association matrix includes a rotation matrix and a translation vector; the rotation matrix represents the posture change information of the one or more key points in the first image and the second image, and the translation vector represents the position change information of the one or more key points in the first image and the second image.
  • The association matrix M relates the homogeneous points of the two frames, as in formula (5):
        (x'_i, y'_i, z'_i, 1)ᵀ = M (x_i, y_i, z_i, 1)ᵀ, with M = [ R3×3  T3×1 ; 0  1 ]    (5)
    where R3×3 is the rotation matrix, representing the posture change information of the key points of the marker between the previous frame and the next frame, and T3×1 is the translation vector, representing the position change information of the key points of the marker between the previous frame and the next frame.
  • M can be calculated by optimizing the cost function shown in formula (6), in which V denotes a visibility ("visual") matrix over the tracked key points.
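  • Formula (6) itself is not reproduced in this text; as one common way to obtain R3×3 and T3×1 from matched 3D points in a least-squares sense, the sketch below uses the closed-form Kabsch/Umeyama solution (ignoring the visibility weighting V), which is an assumption made purely for illustration.

      import numpy as np

      def estimate_rigid_transform(X_prev, X_next):
          """Least-squares R, T with X_next ~= R @ X_prev + T (Kabsch/Umeyama).

          X_prev, X_next: (N, 3) matched 3D points from the previous / next frame.
          """
          c_prev = X_prev.mean(axis=0)
          c_next = X_next.mean(axis=0)
          H = (X_prev - c_prev).T @ (X_next - c_next)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
          R = Vt.T @ D @ U.T
          T = c_next - R @ c_prev
          return R, T            # the rotation and translation parts of M in formula (5)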
  • The optimization of formula (6) can be further improved in the following ways: the RANSAC method is used to select a subset of feature points to reduce the influence of outliers, and further refinement is carried out with a nonlinear optimization method such as Levenberg-Marquardt (LM).
  • When the marker is a general marker, the perspective-n-point (PnP) method is used to calculate R and T, where R is specifically R3×3 in formula (5) and T is specifically T3×1 in formula (5); a nonlinear optimization method such as LM is then used to further minimize the corresponding cost, and the RANSAC method is used to select a subset of feature points to reduce the influence of outliers.
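  • As a non-authoritative sketch of this PnP + RANSAC + LM pipeline, assuming Python with a recent OpenCV (the LM refinement call appeared around OpenCV 4.1) and that the marker's 3D points and the camera intrinsics K are known:

      import cv2
      import numpy as np

      def pose_pnp_ransac_lm(model_points_3d, image_points_2d, K):
          """R, T of the camera w.r.t. the marker via RANSAC-PnP plus LM refinement."""
          obj = np.asarray(model_points_3d, dtype=np.float64)
          img = np.asarray(image_points_2d, dtype=np.float64)
          ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)
          if not ok or inliers is None:
              return None
          idx = inliers.ravel()
          # Nonlinear (Levenberg-Marquardt) refinement on the RANSAC inlier set only.
          rvec, tvec = cv2.solvePnPRefineLM(obj[idx], img[idx], K, None, rvec, tvec)
          R, _ = cv2.Rodrigues(rvec)
          return R, tvec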
  • The pose motion information of the marker between the previous frame and the next frame is then determined according to the association matrix M.
  • The pose motion information includes position change information and posture change information.
  • As described above, R3×3 is a rotation matrix representing the posture change information of the key points of the marker between the previous frame and the next frame; therefore, the terminal device 20 can determine the posture change information of the marker between the previous frame and the next frame from the posture change information of its key points.
  • Likewise, T3×1 is a translation vector representing the position change information of the key points of the marker between the previous frame and the next frame; therefore, the terminal device 20 can determine the position change information of the marker between the previous frame and the next frame from the position change information of its key points.
  • Step S602 Determine pose motion information of the photographing device relative to the marker according to the pose motion information in the at least two frames of images collected by the photographing device.
  • Specifically, the terminal device 20 determines the posture change information of the terminal device 20 relative to the marker according to the posture change information of the marker in the previous and subsequent frame images captured by the terminal device 20, or determines the position change information of the terminal device 20 relative to the marker according to the position change information of the marker in the previous and subsequent frame images captured by the terminal device 20.
  • In addition, the terminal device 20 may use R3×3 and T3×1 as input signals of a proportional-integral-derivative (PID) controller, so that the controller outputs a control command for controlling the unmanned aerial vehicle 21.
  • R3×3 can be used to control the attitude of the unmanned aerial vehicle 21; for example, the terminal device 20 converts R3×3 into Euler angles and generates a control command for controlling the rotation of the unmanned aerial vehicle 21 according to the Euler angles. T3×1 can be used to control the translation of the unmanned aerial vehicle 21.
  • R 3 ⁇ 3 and T 3 ⁇ 1 share one controller, or R 3 ⁇ 3 and T 3 ⁇ 1 use two different controllers.
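  • A minimal sketch of such a controller follows; the PID gains and the use of one independent controller per axis are assumptions made for illustration, since the patent only states that R3×3 and T3×1 serve as the controller's input signals.

      # Hypothetical sketch: feed a pose error (e.g. one Euler angle derived from R3x3,
      # or one translation component of T3x1) into a PID controller whose output becomes
      # the control command sent to the aircraft.
      class PID:
          def __init__(self, kp, ki, kd):
              self.kp, self.ki, self.kd = kp, ki, kd
              self.integral = 0.0
              self.prev_error = None

          def update(self, error, dt):
              self.integral += error * dt
              derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
              self.prev_error = error
              return self.kp * error + self.ki * self.integral + self.kd * derivative

      yaw_pid = PID(kp=0.8, ki=0.05, kd=0.1)            # illustrative gains
      command = yaw_pid.update(error=0.2, dt=0.02)      # yaw error [rad] -> rate command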
  • A second feasible manner of determining the pose motion information of the photographing device relative to the marker is: determining the pose motion information of the photographing device relative to the marker according to the pose motion information of the photographing device detected by an inertial measurement unit (IMU).
  • the terminal device 20 is provided with an inertial measurement unit (IMU), and the inertial measurement unit generally includes a gyroscope and an accelerometer.
  • The inertial measurement unit is configured to detect the pitch angle, roll angle, yaw angle, acceleration, and the like of the terminal device 20. Assuming the marker does not move, the terminal device 20 can determine the posture change information of the terminal device 20 relative to the marker according to the posture change information of the terminal device 20 detected by the IMU, or calculate the position change information of the terminal device 20 from the acceleration detected by the IMU and further determine the position change information of the terminal device 20 relative to the marker.
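  • As a rough illustration of using IMU measurements for the attitude part of this estimate, the sketch below integrates gyroscope rates over one time step with simple Euler integration; a real implementation would typically use quaternions and also fuse the accelerometer, which is outside the scope of this sketch.

      def integrate_gyro(angles_rad, rates_rad_s, dt):
          """angles_rad, rates_rad_s: (roll, pitch, yaw) and their measured angular rates."""
          return tuple(a + w * dt for a, w in zip(angles_rad, rates_rad_s))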
  • A third feasible manner of determining the pose motion information of the photographing device relative to the marker is: determining the pose motion information of the photographing device relative to the marker according to both the pose motion information of the marker in the at least two frames of images captured by the photographing device and the pose motion information of the photographing device detected by the inertial measurement unit (IMU).
  • That is, when the terminal device 20 determines the pose motion information of the terminal device 20 relative to the marker according to the pose motion information of the marker in the at least two frames of images captured by the terminal device 20, it can additionally refer to the pose motion information of the terminal device 20 detected by the inertial measurement unit (IMU) to assist in determining the posture change information or position change information of the terminal device 20 relative to the marker.
  • Optionally, if the pose motion information determined from the images and the pose motion information detected by the IMU are inconsistent, the determined pose motion information of the photographing device relative to the marker is deleted.
  • Specifically, if the pose motion information of the terminal device 20 relative to the marker, determined by the terminal device 20 according to the pose motion information of the marker in the at least two frames of images captured by the terminal device 20, is inconsistent with the pose motion information of the terminal device 20 detected by the IMU, and the difference is large, the pose motion information of the terminal device 20 relative to the marker determined by the terminal device 20 is inaccurate; the pose motion information of the terminal device 20 relative to the marker that has been determined before the current time may then be initialized, for example deleted.
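  • A minimal sketch of this consistency check follows; the threshold value and the representation of the pose change as a simple vector are assumptions made for illustration.

      import numpy as np

      def check_and_reset(vision_delta, imu_delta, history, threshold=0.5):
          """Discard stored vision-based pose motion estimates if they disagree with the IMU."""
          if np.linalg.norm(np.asarray(vision_delta) - np.asarray(imu_delta)) > threshold:
              history.clear()            # initialize (delete) previously determined estimates
              return False               # current vision estimate deemed unreliable
          history.append(vision_delta)
          return True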
  • In this embodiment, the terminal device determines the association matrix between two frames of images according to the coordinates of one or more key points of the marker in at least two frames of images captured by the terminal device, determines the pose motion information of the marker in the two frames of images according to the association matrix, and further determines the pose motion information of the terminal device relative to the marker according to the pose motion information of the marker in the two frames of images, which improves the accuracy of the calculation of the pose motion information of the terminal device relative to the marker.
  • Embodiments of the present invention provide a movable object control method.
  • On the basis of the foregoing embodiments, before controlling the movable object, the method further includes: acquiring a trigger instruction for triggering the movement of the movable object.
  • the triggering instruction is generated by operating the first activation button.
  • the user can control the unmanned aerial vehicle 21 through the terminal device 20.
  • For example, when the terminal device 20 moves to the left relative to the user's face, the terminal device 20 maps the direction of movement of the terminal device 20 relative to the user's face to a control command for controlling the unmanned aerial vehicle 21.
  • the case where the terminal device 20 moves to the left with respect to the user's face includes the following possible situations:
  • One possible case is that, as shown in FIG. 7, the user's face is not moving, the user moves the terminal device 20 to the left, and the terminal device 20 is moved in the direction indicated by the arrow 70.
  • Another possible case is that, as shown in FIG. 8, the terminal device 20 does not move, and the user's face moves to the right of the terminal device 20 in the direction indicated by the arrow 80.
  • A further possibility is that, as shown in FIG. 9, the user's face and the terminal device 20 move at the same time: the user moves the terminal device 20 to the left in the direction indicated by the arrow 70, while the user's face moves to the right of the terminal device 20 in the direction indicated by the arrow 80.
  • In any of these cases, the pose change of the user's face or the pose change of the terminal device 20 causes a pose change of the terminal device 20 relative to the user's face, and the terminal device 20 can control the unmanned aerial vehicle 21 according to the pose change of the terminal device 20 relative to the user's face.
  • an activation button may be disposed on the terminal device 20, such as the activation button A shown in FIG. 10.
  • When the user clicks the activation button A, the terminal device 20 generates a trigger command according to the user's click operation on the activation button A. The trigger command may trigger the terminal device 20 to send a control command to the unmanned aerial vehicle 21; if the user does not click the activation button A, then even if the terminal device 20 generates a control command, it cannot be transmitted to the unmanned aerial vehicle 21, which ensures that the unmanned aerial vehicle 21 does not move.
  • Optionally, the trigger command may also trigger the UAV 21 to move. For example, when the UAV 21 receives both the control command and the trigger command sent by the terminal device 20, it executes the control command; if the UAV 21 only receives the control command sent by the terminal device 20 but does not receive the trigger command, it does not execute the control command.
  • On the basis of the foregoing embodiments, before determining the pose information of the photographing device relative to the marker, the method further includes: acquiring an initialization instruction, where the initialization instruction is used to initialize the determined pose information of the photographing device relative to the marker.
  • Optionally, the initialization instruction is generated by operating a second activation button.
  • the terminal device 20 can also be provided with an activation button B.
  • When the user clicks the activation button B, the terminal device 20 generates an initialization command according to the user's click operation on the activation button B. The initialization command is used to initialize, for example delete, the pose motion information of the terminal device 20 that has been determined before the current time. That is, before the user controls the unmanned aerial vehicle 21 through the terminal device 20, the terminal device 20 may still store pose motion information of the terminal device 20 relative to the marker, such as the user's face, determined at historical moments. To prevent the pose motion information determined at historical moments from affecting the pose motion information determined at the current time, the user can click the activation button B on the terminal device 20 so that the pose motion information of the terminal device 20 relative to the marker, such as the user's face, determined at historical moments is initialized.
  • In this embodiment, the user generates the trigger instruction for triggering the movement of the unmanned aerial vehicle by operating the first activation button on the terminal device, which avoids position changes of the unmanned aerial vehicle caused by the user's misoperation and thereby enables precise control of the unmanned aerial vehicle. The user's operation of the second activation button on the terminal device causes the terminal device to generate an initialization command to initialize the determined pose information of the terminal device relative to the marker, which prevents the pose motion information determined at historical moments from affecting the pose motion information determined at the current moment and further enables precise control of the unmanned aerial vehicle.
  • FIG. 11 is a structural diagram of a terminal device according to an embodiment of the present invention.
  • As shown in FIG. 11, the terminal device 110 includes one or more processors 111, and the processor 111 is configured to: acquire a marker in an image captured by a photographing device; determine the pose information of the photographing device relative to the marker; and control the movable object according to the pose information of the photographing device relative to the marker.
  • the pose information includes at least one of the following: location information and posture information.
  • the attitude information includes at least one of the following: a pitch angle, a roll angle, and a heading angle.
  • When acquiring the marker in the image captured by the photographing device, the processor 111 is specifically configured to perform at least one of the following: acquiring a marker selected by the user in the image captured by the photographing device; acquiring a marker in the image captured by the photographing device that matches a preset reference image; and acquiring a marker composed of a preset number of feature points in the image captured by the photographing device.
  • When acquiring the marker selected by the user in the image captured by the photographing device, the processor 111 is specifically configured to perform at least one of the following: acquiring a marker box-selected by the user in the image captured by the photographing device; acquiring a marker click-selected by the user in the image captured by the photographing device.
  • When determining the pose information of the photographing device relative to the marker, the processor 111 is specifically configured to: determine the pose information of the marker in the image captured by the photographing device; and determine the pose information of the photographing device relative to the marker according to the pose information of the marker in the image captured by the photographing device.
  • When determining the pose information of the marker in the image captured by the photographing device, the processor 111 is specifically configured to: determine the pose information of the marker in the image captured by the photographing device according to the coordinates of one or more key points of the marker in that image.
  • When controlling the movable object according to the pose information of the photographing device relative to the marker, the processor 111 is specifically configured to perform at least one of the following: controlling the position information of the movable object relative to a preset origin according to the position information of the photographing device relative to the marker; controlling the posture information of the movable object according to the posture information of the photographing device relative to the marker; and controlling the movement speed of the movable object according to the posture information of the photographing device relative to the marker.
  • When controlling the movement speed of the movable object according to the posture information of the photographing device relative to the marker, the processor 111 is specifically configured to: control the speed at which the movable object moves along the Y axis of the ground coordinate system according to the pitch angle of the photographing device relative to the marker; control the speed at which the movable object moves along the X axis of the ground coordinate system according to the roll angle of the photographing device relative to the marker; and control the speed at which the movable object moves along the Z axis of the ground coordinate system according to the heading angle of the photographing device relative to the marker.
  • When controlling the posture information of the movable object according to the posture information of the photographing device relative to the marker, the processor 111 is specifically configured to: control the pitch angle of the movable object according to the pitch angle of the photographing device relative to the marker; control the roll angle of the movable object according to the roll angle of the photographing device relative to the marker; and control the heading angle of the movable object according to the heading angle of the photographing device relative to the marker.
  • When controlling the position information of the movable object relative to the preset origin according to the position information of the photographing device relative to the marker, the processor 111 is specifically configured to: control the distance of the movable object relative to the preset origin according to the distance of the photographing device relative to the marker.
  • In this embodiment, the marker in the image captured by the photographing device is acquired, the pose information of the photographing device relative to the marker is determined, and the movable object is controlled according to that pose information. Because the pose information of the photographing device relative to the marker can be determined accurately, precise control of the movable object can be achieved when the movable object is controlled according to this pose information.
  • the embodiment of the invention provides a terminal device.
  • On the basis of the foregoing embodiment, the processor 111 is further configured to: determine the pose motion information of the photographing device relative to the marker; and control the movable object according to the pose motion information of the photographing device relative to the marker.
  • the pose motion information includes at least one of the following: position change information and posture change information.
  • When controlling the movable object according to the pose motion information of the photographing device relative to the marker, the processor 111 is specifically configured to perform at least one of the following: controlling the position change information of the movable object relative to a preset origin according to the position change information of the photographing device relative to the marker; controlling the posture change information of the movable object relative to the preset origin according to the posture change information of the photographing device relative to the marker.
  • In this embodiment, the marker in the image captured by the photographing device is acquired, the position change information or posture change information of the photographing device relative to the marker is determined, and the position change of the movable object is controlled according to the position change information of the photographing device relative to the marker, or the posture change of the movable object is controlled according to the posture change information of the photographing device relative to the marker. Because the position change information or posture change information of the photographing device relative to the marker can be determined accurately, the movable object can be precisely controlled according to this position change information or posture change information.
  • An embodiment of the present invention provides a terminal device. Based on the technical solution provided by the foregoing embodiment, the manner in which the processor 111 determines the pose motion information of the photographing device relative to the marker includes the following possible implementations:
  • One possible implementation is: when determining the pose motion information of the photographing device relative to the marker, the processor 111 is specifically configured to: determine the pose motion information of the marker in at least two frames of images collected by the photographing device; and determine the pose motion information of the photographing device relative to the marker according to the pose motion information of the marker in the at least two frames of images collected by the photographing device.
  • When determining the pose motion information of the marker in the at least two frames of images collected by the photographing device, the processor 111 is specifically configured to: determine an association matrix between a first image and a second image collected by the photographing device according to first coordinates of one or more key points of the marker in the first image and second coordinates of the one or more key points of the marker in the second image; and determine the pose motion information of the marker in the first image and the second image according to the association matrix between the first image and the second image.
  • The association matrix includes a rotation matrix and a translation vector; the rotation matrix represents posture change information of the one or more key points between the first image and the second image, and the translation vector represents position change information of the one or more key points between the first image and the second image. A sketch of estimating such a matrix is given below.
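As a hedged illustration of what estimating such an association matrix between two frames might involve, the sketch below aligns corresponding 3D key points of the marker with a standard SVD-based rigid fit and packs the resulting rotation and translation into a 4x4 homogeneous matrix. The SVD-based solver, the synthetic test data and the function name are assumptions; the patent only states that the association matrix is derived from the key-point coordinates in the two images.

```python
# Minimal sketch: estimate a rotation matrix R and translation vector T that
# relate the marker's key points in a first frame to the same key points in a
# second frame (the "association matrix" [R | T] described above). Uses the
# standard SVD-based rigid alignment; the specific solver is an assumption.
import numpy as np

def rigid_association(pts_first: np.ndarray, pts_second: np.ndarray):
    """pts_first, pts_second: (n, 3) arrays of corresponding 3D key points."""
    c1, c2 = pts_first.mean(axis=0), pts_second.mean(axis=0)
    H = (pts_first - c1).T @ (pts_second - c2)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                # posture change (rotation)
    T = c2 - R @ c1                                   # position change (translation)
    M = np.eye(4)                                     # 4x4 homogeneous association matrix
    M[:3, :3], M[:3, 3] = R, T
    return M

pts1 = np.random.rand(6, 3)
true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
pts2 = pts1 @ true_R.T + np.array([0.1, 0.0, 0.05])
print(rigid_association(pts1, pts2).round(3))
```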
  • Another possible implementation is: when determining the pose motion information of the photographing device relative to the marker, the processor 111 is specifically configured to: determine the pose motion information of the photographing device relative to the marker according to the pose motion information of the photographing device detected by the inertial measurement unit (IMU).
  • Yet another possible implementation is: when determining the pose motion information of the photographing device relative to the marker, the processor 111 is specifically configured to: determine the pose motion information of the photographing device relative to the marker according to the pose motion information of the marker in at least two frames of images collected by the photographing device and the pose motion information of the photographing device detected by the inertial measurement unit (IMU). If the absolute value of the difference between the pose motion information of the marker in the at least two frames of images collected by the photographing device and the pose motion information of the photographing device detected by the IMU is greater than a threshold, the processor is further configured to delete the determined pose motion information of the photographing device relative to the marker, as sketched below.
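A minimal sketch of that consistency check, assuming the pose motion is packed into a small vector and that a simple average is used when the two sources agree; both the fusion rule and the threshold value are assumptions, since the patent specifies neither.

```python
# Sketch of the consistency check described above: compare the pose change
# estimated from the images with the pose change reported by the IMU and
# discard the vision estimate when they disagree by more than a threshold.
# Threshold value and the simple averaging fusion are assumptions.
import numpy as np

def fuse_pose_motion(vision_delta: np.ndarray,
                     imu_delta: np.ndarray,
                     threshold: float = 0.5):
    """vision_delta / imu_delta: e.g. [droll, dpitch, dyaw, dx, dy, dz]."""
    if np.max(np.abs(vision_delta - imu_delta)) > threshold:
        return None                           # delete (discard) the determined pose motion
    return 0.5 * (vision_delta + imu_delta)   # otherwise use both sources

print(fuse_pose_motion(np.array([0.1, 0.0, 0.0, 0.02, 0.0, 0.0]),
                       np.array([0.12, 0.01, 0.0, 0.02, 0.0, 0.0])))
```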
  • In this embodiment, the terminal device determines an association matrix between two frames of images according to the coordinates of one or more key points of the marker in at least two frames of images collected by the terminal device, determines the pose motion information of the marker in the two frames of images according to the association matrix, and further determines the pose motion information of the terminal device relative to the marker according to the pose motion information of the marker in the two frames of images, which improves the calculation accuracy of the pose motion information of the terminal device relative to the marker.
  • An embodiment of the present invention provides a terminal device. Based on the technical solution provided by the foregoing embodiment, before controlling the movable object, the processor 111 is further configured to: acquire a trigger instruction for triggering movement of the movable object. The trigger instruction is generated by operating a first activation button.
  • Before determining the pose information of the photographing device relative to the marker, the processor 111 is further configured to: acquire an initialization instruction, where the initialization instruction is used to initialize the already-determined pose information of the photographing device relative to the marker. The initialization instruction is generated by operating a second activation button.
  • In this embodiment, the user operates the first activation button on the terminal device to generate the trigger instruction for triggering movement of the unmanned aerial vehicle, which avoids pose changes of the unmanned aerial vehicle caused by the user's misoperation and thus enables precise control of the unmanned aerial vehicle. In addition, the user operates the second activation button on the terminal device so that the terminal device generates an initialization instruction to initialize the already-determined pose information of the terminal device relative to the marker, which prevents pose motion information determined at historical moments from affecting the pose motion information determined at the current moment, further enabling precise control of the unmanned aerial vehicle. A sketch of this gating-and-reset behavior follows.
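The sketch below shows one possible shape of this gating-and-reset logic. The class, the method names and the decision to buffer commands locally are illustrative assumptions; the patent only requires that the first button gate the triggering of movement and that the second button initialize (for example, delete) previously determined pose information.

```python
# Sketch of the two activation buttons described above: button A gates whether
# control commands are actually sent to the aircraft, and button B clears the
# pose information accumulated at earlier moments. Names are illustrative only.
class PoseControlSession:
    def __init__(self):
        self.armed = False          # set by the first activation button
        self.pose_history = []      # pose (motion) info determined so far

    def press_button_a(self, pressed: bool):
        self.armed = pressed        # only while pressed may commands be sent

    def press_button_b(self):
        self.pose_history.clear()   # initialization instruction: drop old estimates

    def maybe_send(self, command, send_fn):
        self.pose_history.append(command)
        if self.armed:              # misoperation while not armed moves nothing
            send_fn(command)

session = PoseControlSession()
session.maybe_send({"vx": 0.2}, print)   # ignored: not armed
session.press_button_a(True)
session.maybe_send({"vx": 0.2}, print)   # now forwarded to the aircraft
```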
  • Embodiments of the present invention provide an unmanned aerial vehicle.
  • FIG. 12 is a structural diagram of an unmanned aerial vehicle according to an embodiment of the present invention.
  • As shown in FIG. 12, the unmanned aerial vehicle 100 includes: a fuselage, a power system, and a flight controller 118. The power system includes at least one of the following: a motor 107, a propeller 106, and an electronic speed controller 117. The power system is mounted on the fuselage to provide flight power, and the flight controller 118 is communicatively connected to the power system for controlling the flight of the unmanned aerial vehicle.
  • the unmanned aerial vehicle 100 further includes: a sensing system 108, a communication system 110, a supporting device 102, and a photographing device 104.
  • The supporting device 102 may specifically be a gimbal, and the communication system 110 may specifically include a receiver. The receiver is configured to receive wireless signals transmitted by an antenna 114 of a ground station 112, and 116 represents electromagnetic waves generated during communication between the receiver and the antenna 114.
  • The ground station 112 may specifically be the terminal device in the foregoing embodiments. The terminal device generates a control instruction and sends it to the flight controller 118 through the communication system 110 of the UAV 100, and the flight controller 118 then controls the unmanned aerial vehicle 100 according to the control instruction sent by the terminal device.
  • the specific principles and implementations of the unmanned aerial vehicle 100 are similar to the above embodiments, and are not described herein again.
  • Embodiments of the present invention provide a movable object control system.
  • the movable object is specifically an unmanned aerial vehicle.
  • the movable object control system comprises: a terminal device 20 and an unmanned aerial vehicle 21.
  • the specific principles and implementation manners of the terminal device 20 for controlling the unmanned aerial vehicle 21 are similar to those of the foregoing embodiment, and are not described herein again.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • The division of units is only a logical functional division; in actual implementation there may be other ways of dividing them. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the above-described integrated unit implemented in the form of a software functional unit can be stored in a computer readable storage medium.
  • The above software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform part of the steps of the methods described in the various embodiments of the present invention.
  • The foregoing storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

一种可移动物体控制方法,包括:获取由拍摄设备(22)采集到的图像中的标志物(31),确定拍摄设备(22)相对标志物(31)的位姿信息,并根据拍摄设备(22)相对标志物(31)的位姿信息控制可移动物体。由于拍摄设备(22)相对标志物(31)的位姿信息是可以被精确确定的,从而根据拍摄设备(22)相对标志物(31)的位姿信息控制可移动物体时能够实现对可移动物体的精确控制。另外,还涉及实施上述方法的设备和系统。

Description

可移动物体控制方法、设备及系统 技术领域
本发明实施例涉及无人机领域,尤其涉及一种可移动物体控制方法、设备及系统。
背景技术
现有技术中无人飞行器的控制方法包括:遥控器控制、手机APP控制、体感控制,其中,体感控制是指用户持有手持设备,该手持设备内设置有姿态传感器(Inertial Measurement Unit,简称IMU),该姿态传感器可感测出用户手的动作,进而将用户手的动作信息转换为控制无人飞行器的控制指令,并将控制指令发送给无人飞行器,实现了对无人飞行器的控制。
但是,体感控制方式只能控制无人飞行器在一个大的范围内做某些模糊的动作,无法精确的控制无人飞行器。
发明内容
本发明实施例提供一种可移动物体控制方法、设备及系统,以精确控制无人飞行器。
本发明实施例的第一方面是提供一种可移动物体控制方法,包括:
获取由拍摄设备采集到的图像中的标志物;
确定所述拍摄设备相对所述标志物的位姿信息;
根据所述拍摄设备相对所述标志物的位姿信息,控制所述可移动物体。
本发明实施例的第二方面是提供一种终端设备,包括:一个或多个处理器,所述处理器用于:
获取由拍摄设备采集到的图像中的标志物;
确定所述拍摄设备相对所述标志物的位姿信息;
根据所述拍摄设备相对所述标志物的位姿信息,控制所述可移动物 体。
本发明实施例的第三方面是提供一种无人飞行器,包括:
机身;
动力系统,安装在所述机身,用于提供飞行动力;
飞行控制器,与所述动力系统通讯连接,用于控制所述无人飞行器飞行。
本发明实施例的第四方面是提供一种可移动物体控制系统,包括:第二方面所述的终端设备,以及第三方面所述的无人飞行器。
本发明实施例提供的可移动物体控制方法、设备及系统,通过获取由拍摄设备采集到的图像中的标志物,确定拍摄设备相对标志物的位姿信息,并根据拍摄设备相对标志物的位姿信息控制可移动物体,由于拍摄设备相对标志物的位姿信息是可以被精确确定的,从而根据拍摄设备相对标志物的位姿信息控制可移动物体时能够实现对可移动物体的精确控制。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1为本发明实施例提供的可移动物体控制方法的流程图;
图2为本发明实施例提供的可移动物体控制系统的示意图;
图3为本发明实施例提供的一种终端设备的用户界面示意图;
图4为本发明实施例提供的另一种终端设备的用户界面示意图;
图5为本发明另一实施例提供的可移动物体控制方法的流程图;
图6为本发明另一实施例提供的可移动物体控制方法的流程图;
图7为本发明实施例提供的一种终端设备相对于用户人脸运动的示意图;
图8为本发明实施例提供的另一种终端设备相对于用户人脸运动的示意图;
图9为本发明实施例提供的再一种终端设备相对于用户人脸运动的示意图;
图10为本发明实施例提供的再一种终端设备的用户界面示意图;
图11为本发明实施例提供的终端设备的结构图;
图12为本发明实施例提供的无人飞行器的结构图。
附图标记:
20-终端设备  21-无人飞行器  22-拍摄设备  23-云台
30-图像  31-一般标志物  70-方向  80-方向
110-终端设备  111-处理器  100-无人飞行器
107-电机  106-螺旋桨  117-电子调速器
118-飞行控制器  108-传感系统  110-通信系统
102-支撑设备  104-拍摄设备  112-地面站
114-天线  116-电磁波
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
需要说明的是,当组件被称为“固定于”另一个组件,它可以直接在另一个组件上或者也可以存在居中的组件。当一个组件被认为是“连接”另一个组件,它可以是直接连接到另一个组件或者可能同时存在居中组件。
除非另有定义,本文所使用的所有的技术和科学术语与属于本发明的技术领域的技术人员通常理解的含义相同。本文中在本发明的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本发明。本文所使用的术语“及/或”包括一个或多个相关的所列项目的任意的和所有的组合。
下面结合附图,对本发明的一些实施方式作详细说明。在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。
本发明实施例提供一种可移动物体控制方法。图1为本发明实施例提供的可移动物体控制方法的流程图。如图1所示,本实施例中的方法,可以包括:
步骤S101、获取由拍摄设备采集到的图像中的标志物。
本实施例提供的可移动物体控制方法,可以适用于图2所示的可移动物体控制系统,其中,可移动物体具体为无人飞行器。如图2所示,该可移动物体控制系统包括:终端设备20和可移动物体,其中,可移动物体具体为无人飞行器21,无人飞行器21搭载有拍摄设备22,具体的,无人飞行器21通过云台23搭载拍摄设备22。终端设备20具体可以是移动终端、平板电脑等手持设备,终端设备20具有拍摄功能。具体的,终端设备20设置有前置摄像头和/或后置摄像头,前置摄像头和/或后置摄像头可用于采集图像。用户手持终端设备20,通过终端设备20的前置摄像头进行自拍,或者通过终端设备20的后置摄像头拍摄其他画面。用户可通过终端设备20的屏幕预览终端设备20采集到的图像。
终端设备20的前置摄像头或后置摄像头可实时采集当前场景的图像,进一步的,终端设备20检测前置摄像头或后置摄像头采集到的图像中的标志物,该标志物可分为预设标志物和一般标志物,预设标志物可包括如下至少一种:人脸、二维码、AprilTag等具有特定模型的标志物,其中以人脸最为普遍;一般标志物可以是图像中的静止物体,比如树木、汽车、建筑物等。预设标志物和一般标志物的区别在于:预设标志物具体有特定的模型,一般标志物没有特定的模型。
例如,用户手持终端设备20,通过终端设备20的前置摄像头自拍时,终端设备20的前置摄像头实时采集用户人脸图像,终端设备20通过人脸识别技术识别出图像中的人脸。再例如,无人飞行器21搭载的拍摄设备22拍摄到的图像信息或视频数据可通过无人飞行器21的通信系统无线传输到终端设备20,用户可以通过终端设备20观看拍摄设备22拍摄到的图像信息或视频数据,在用户观看拍摄设备22拍摄到的图像信息或视频数据时,终端设备20的前置摄像头可能与用户人脸相对,此时,终端设备20的前置摄像头可实时采集用户人脸图像,进一步的,终端设备20通过人脸识别技术识别出图像中的人脸。具体的,由于人脸具有很强的特征, 因此,终端设备20可检测出图像中人脸固定的关键点,例如眼睛、鼻子、眉毛、嘴巴等,从而通过图像中固定的关键点识别出图像中的人脸。
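A rough Python sketch of this face-as-marker detection step is given below. The OpenCV Haar-cascade detector is used as a stand-in for the unspecified face detection method, and the landmark stage (locating the fixed key points such as eyes, nose and mouth) is left as a commented placeholder, since the patent does not name a particular detector.

```python
# Rough sketch of the face-as-marker idea above: detect the face in each frame
# from the front camera, then locate its fixed key points (eyes, nose, mouth).
# The Haar-cascade face detector is a stand-in; the landmark step is a placeholder.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_marker(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]            # use the first detected face as the marker
    # A real implementation would run a landmark model here to get the fixed
    # key points inside (x, y, w, h); that step is omitted in this sketch.
    return (x, y, w, h)
```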
具体的,所述获取由拍摄设备采集到的图像中的标志物,包括如下几种方式:
一种方式是:获取由用户在所述拍摄设备采集到的图像中选定的标志物。所述获取由用户在所述拍摄设备采集到的图像中选定的标志物,包括如下至少一种:获取由用户在所述拍摄设备采集到的图像中框选的标志物;获取由用户在所述拍摄设备采集到的图像中点选的标志物。
由于一般标志物没有特定的模型,因此,终端设备20可获取由用户在终端设备20采集到的图像中选定的标志物,如图3所示,30是终端设备20采集到的图像,该图像中包括一般标志物31,当终端设备20显示图像30时,用户可在图像30中框选一般标志物31,如图3虚线所示。或者,用户还可以在图像30中点选一般标志物31,如图4所示。
另一种方式是:获取所述拍摄设备采集到的图像中与预设参考图像匹配的标志物。例如,终端设备20预先存储有参考图像,如二维码参考图像、AprilTag参考图像等,当终端设备20的摄像头采集到当前场景的图像后,终端设备20检测该图像中是否存在与预存的二维码参考图像匹配的二维码,或者与预存的AprilTag参考图像匹配的AprilTag,并将匹配成功的图标作为标志物。
或者,终端设备20预先存储有一般标志物例如树木、汽车、建筑物等的参考图像,当终端设备20的摄像头采集到当前场景的图像后,终端设备20检测该图像中是否存在与预存的树木、汽车、建筑物等的参考图像匹配的标志物。
再一种方式是:获取所述拍摄设备采集到的图像中由预设数量的特征点构成的标志物。对于一般标志物,终端设备20还可以检测出图像中的特征点,如果特征点的数量达到预设数量,且特征点之间的位置关系满足预设的位置关系,则终端设备20可检测出由预设数量的特征点构成的一般标志物。
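The sketch below illustrates, under assumed parameter values, how a general marker could be accepted only when a preset number of feature points (for example Shi-Tomasi/Harris-style corners) is found in the image; the detector choice and the preset count of 20 are assumptions made only for the example.

```python
# Sketch of treating "enough feature points" as a general marker, as described
# above: detect corners in the image and accept a marker only when at least a
# preset number of feature points is found.
import cv2
import numpy as np

def find_general_marker(gray: np.ndarray, preset_count: int = 20):
    """gray: single-channel uint8 image."""
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)
    if corners is None or len(corners) < preset_count:
        return None                       # not enough feature points for a marker
    return corners.reshape(-1, 2)         # (n, 2) pixel coordinates of key points
```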
步骤S102、确定所述拍摄设备相对所述标志物的位姿信息。
终端设备20通过上述步骤检测出标志物之后,进一步确定终端设备 20相对该标志物的位姿信息。所述位姿信息包括如下至少一种:位置信息、姿态信息。所述姿态信息包括如下至少一种:俯仰角、横滚角、航向角。
具体的,所述确定所述拍摄设备相对所述标志物的位姿信息,包括:确定所述标志物在所述拍摄设备采集到的图像中的位姿信息;根据所述标志物在所述拍摄设备采集到的图像中的位姿信息,确定所述拍摄设备相对所述标志物的位姿信息。
所述确定所述标志物在所述拍摄设备采集到的图像中的位姿信息,包括:根据所述标志物的一个或多个关键点在所述拍摄设备采集到的图像中的坐标,确定所述标志物在所述拍摄设备采集到的图像中的位姿信息。
例如,标志物是用户人脸,终端设备20检测出前置摄像头实时采集到的图像中人脸固定的关键点后,根据人脸固定的关键点在图像中的坐标,可以确定出人脸在图像中的位姿信息,具体包括人脸在图像中的位置信息、姿态信息。终端设备20进一步根据人脸在图像中的位姿信息,确定终端设备20相对于人脸的位姿信息,例如,人脸在图像的右边,可确定出终端设备20在人脸的左边。
再例如,标志物是一般标志物,当终端设备20检测出后置摄像头实时采集到的图像中一般标志物的特征点后,采用SLAM方法估计出特征点的位姿信息,并将特征点的位姿信息作为一般标志物在图像中的位姿信息。终端设备20进一步根据一般标志物在图像中的位姿信息,确定终端设备20相对于一般标志物的位姿信息,例如,一般标志物在图像的左侧,可确定出终端设备20在一般标志物的右侧;一般标志物在图像的右侧,可确定出终端设备20在一般标志物的左侧。
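As a hedged illustration of this step, the sketch below recovers the pose of the photographing device relative to the marker from the marker's key points in a single image with a PnP solve and then inverts the result into the marker frame. The use of `cv2.solvePnP`, the availability of a 3D prior model of the key points, and the camera intrinsic matrix K are assumptions; the patent only requires that the relative pose be determined from the key-point coordinates in the image.

```python
# Minimal sketch: device pose relative to the marker from one image, via PnP.
import cv2
import numpy as np

def device_pose_rel_marker(kpts_2d: np.ndarray,   # (n, 2) key points in the image
                           kpts_3d: np.ndarray,   # (n, 3) prior model of the marker
                           K: np.ndarray):        # 3x3 camera intrinsic matrix
    ok, rvec, tvec = cv2.solvePnP(kpts_3d.astype(np.float64),
                                  kpts_2d.astype(np.float64),
                                  K.astype(np.float64), distCoeffs=None)
    if not ok:
        return None
    R_marker_to_cam, _ = cv2.Rodrigues(rvec)
    # Invert to express the camera (photographing device) in the marker frame.
    R_cam_in_marker = R_marker_to_cam.T
    t_cam_in_marker = -R_marker_to_cam.T @ tvec
    return R_cam_in_marker, t_cam_in_marker
```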
步骤S103、根据所述拍摄设备相对所述标志物的位姿信息,控制所述可移动物体。
具体的,终端设备20根据终端设备20相对标志物的位姿信息,控制无人飞行器21。例如,终端设备20根据终端设备20相对标志物的位置信息,控制无人飞行器21的位置;或者根据终端设备20相对标志物的姿态信息,控制无人飞行器21的姿态。
不失一般性,所述根据所述拍摄设备相对所述标志物的位姿信息,控制所述可移动物体,包括如下几种可行的实现方式:
一种可行的实现方式是:根据所述拍摄设备相对所述标志物的位置信息,控制所述可移动物体相对于预设原点的位置信息。所述根据所述拍摄设备相对所述标志物的位置信息,控制所述可移动物体相对于预设原点的位置信息,包括:根据所述拍摄设备相对所述标志物的距离,控制所述可移动物体相对于所述预设原点的距离。
例如,终端设备20根据终端设备20相对标志物的位置信息,控制无人飞行器21相对于预设原点的位置信息,预设原点可以是无人飞行器21的当前返航点,也可以是无人飞行器21的初始返航点,还可以是预先设定的地理坐标系中的一点,本实施例并不限定具体的预设原点。
例如,终端设备20根据终端设备20相对标志物的距离生成对无人飞行器21进行控制的控制指令,该控制指令用于控制无人飞行器21相对于所述预设原点的距离,可选的,终端设备20相对标志物的距离为L1,终端设备20控制无人飞行器21相对于所述预设原点的距离为L2,本实施例不限定L1与L2之间的关系。例如,终端设备20相对标志物1米,终端设备20控制无人飞行器21相对于预设原点的距离为100米,终端设备20向无人飞行器21发送控制指令,该控制指令包括距离信息100米,无人飞行器21接收到该控制指令后,根据该控制指令调整其与预设原点之间的距离。
另一种可行的实现方式是:根据所述拍摄设备相对所述标志物的姿态信息,控制所述可移动物体的姿态信息。所述根据所述拍摄设备相对所述标志物的姿态信息,控制所述可移动物体的姿态信息,包括:根据所述拍摄设备相对所述标志物的俯仰角,控制所述可移动物体的俯仰角;根据所述拍摄设备相对所述标志物的横滚角,控制所述可移动物体的横滚角;根据所述拍摄设备相对所述标志物的航向角,控制所述可移动物体的航向角。
例如,终端设备20根据终端设备20相对标志物的姿态信息,控制无人飞行器21的姿态信息,可选的,终端设备20根据终端设备20相对标志物的俯仰角,控制无人飞行器21的俯仰角;根据终端设备20相对标志物的横滚角,控制无人飞行器21的横滚角;根据终端设备20相对标志物的航向角,控制无人飞行器21的航向角。例如,终端设备20相对标志物 的俯仰角为α1,终端设备20控制无人飞行器21的俯仰角为α2,本实施例不限定α1和α2之间的关系,可选的,α1和α2成预设的比例关系。
再一种可行的实现方式是:根据所述拍摄设备相对所述标志物的姿态信息,控制所述可移动物体的运动速度。所述根据所述拍摄设备相对所述标志物的姿态信息,控制所述可移动物体的运动速度,包括:根据所述拍摄设备相对所述标志物的俯仰角,控制所述可移动物体在地面坐标系中沿Y轴移动的速度;根据所述拍摄设备相对所述标志物的横滚角,控制所述可移动物体在地面坐标系中沿X轴移动的速度;根据所述拍摄设备相对所述标志物的航向角,控制所述可移动物体在地面坐标系中沿Z轴移动的速度。
例如,终端设备20根据终端设备20相对标志物的姿态信息,控制无人飞行器21的运动速度,可选的,终端设备20根据终端设备20相对标志物的俯仰角,控制无人飞行器21在地面坐标系中沿Y轴移动的速度;根据终端设备20相对标志物的横滚角,控制无人飞行器21在地面坐标系中沿X轴移动的速度;根据终端设备20相对标志物的航向角,控制无人飞行器21在地面坐标系中沿Z轴移动的速度。此处只是示意性说明,并不限定姿态角和移动方向之间的对应关系。
本实施例通过获取由拍摄设备采集到的图像中的标志物,确定拍摄设备相对标志物的位姿信息,并根据拍摄设备相对标志物的位姿信息控制可移动物体,由于拍摄设备相对标志物的位姿信息是可以被精确确定的,从而根据拍摄设备相对标志物的位姿信息控制可移动物体时能够实现对可移动物体的精确控制。
本发明实施例提供一种可移动物体控制方法。图5为本发明另一实施例提供的可移动物体控制方法的流程图。如图5所示,在图1所示实施例的基础上,本实施例中的方法,可以包括:
步骤S501、获取由拍摄设备采集到的图像中的标志物。
步骤S501与步骤S101的原理和实现方式一致,此处不再赘述。
步骤S502、确定所述拍摄设备相对所述标志物的位姿运动信息。
在本实施例中,终端设备20检测出标志物之后,进一步还可以确定 终端设备20相对该标志物的位姿运动信息。所述位姿运动信息包括如下至少一种:位置变化信息、姿态变化信息。例如,标志物是用户人脸,终端设备20的前置摄像头与用户人脸相对,前置摄像头可实时采集到包括用户人脸的图像,假设用户人脸不动,用户移动终端设备20,当用户人脸在图像中向右移动时,表示终端设备20相对于用户人脸在向左移动。再例如,标志物是一般标志物,终端设备20的后置摄像头与一般标志物相对,后置摄像头可实时采集到包括一般标志物的图像,假设一般标志物不动,用户移动终端设备20,当一般标志物在图像中向右移动时,表示终端设备20相对于一般标志物在向左移动。同理,还可以确定出终端设备20相对于标志物的姿态变化。可以理解的是:终端设备20通过检测前置摄像头或后置摄像头实时采集到的图像中的标志物,并判断出标志物在图像中的位置变化或姿态变化,进一步的,终端设备20可根据标志物在图像中的位置变化反推出终端设备20相对于标志物的位置变化,或者根据标志物在图像中的姿态变化反推出终端设备20相对于标志物的姿态变化。
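The following sketch illustrates this inference in its simplest form: compare the marker's image position in two consecutive frames and report the opposite direction as the device's motion. The pixel threshold and the coarse four-direction output are assumptions made to keep the example small.

```python
# Sketch of the inference described above: if the (stationary) marker shifts in
# the image between two frames, the device moved in the opposite direction.
def device_motion_from_marker_shift(center_prev, center_curr, min_shift_px=5.0):
    dx = center_curr[0] - center_prev[0]   # +x: marker moved right in the image
    dy = center_curr[1] - center_prev[1]   # +y: marker moved down in the image
    motion = []
    if dx > min_shift_px:
        motion.append("device moved left")
    elif dx < -min_shift_px:
        motion.append("device moved right")
    if dy > min_shift_px:
        motion.append("device moved up")
    elif dy < -min_shift_px:
        motion.append("device moved down")
    return motion or ["device roughly static relative to marker"]

print(device_motion_from_marker_shift((320, 240), (360, 238)))
```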
步骤S503、根据所述拍摄设备相对所述标志物的位姿运动信息,控制所述可移动物体。
进一步的,终端设备20还可以将终端设备20相对于标志物的位置变化映射成用于控制无人飞行器21的控制指令,或者,将终端设备20相对于标志物的姿态变化映射成用于控制无人飞行器21的控制指令,并将控制指令发送给无人飞行器21。
具体的,所述根据所述拍摄设备相对所述标志物的位姿运动信息,控制所述可移动物体,包括如下几种可能的情况:
一种可能的情况是:根据所述拍摄设备相对所述标志物的位置变化信息,控制所述可移动物体相对于预设原点的位置变化信息。
例如,终端设备20相对于用户人脸在向左移动,则终端设备20可以控制无人飞行器21相对于预设原点向左移动,也可以控制无人飞行器21相对于预设原点向右移动。再例如,终端设备20相对于一般标志物在向右移动,则终端设备20可以控制无人飞行器21相对于预设原点向右移动,也可以控制无人飞行器21相对于预设原点向左移动。预设原点可以是无人飞行器21的当前返航点,也可以是无人飞行器21的初始返航点,还可 以是预先设定的地理坐标系中的一点,本实施例并不限定具体的预设原点。
另一种可能的情况是:根据所述拍摄设备相对所述标志物的姿态变化信息,控制所述可移动物体相对于预设原点的姿态变化信息。
假设标志物不动,终端设备20相对于标志物的姿态变化可以是终端设备20相对于标志物的俯仰角的变化例如俯仰角速度,也可以是终端设备20相对于标志物的横滚角的变化例如横滚角速度,还可以是终端设备20相对于标志物的航向角的变化例如航向角速度。可选的,终端设备20根据终端设备20相对于标志物的俯仰角速度,控制无人飞行器21相对于预设原点的俯仰角速度;根据终端设备20相对于标志物的横滚角速度,控制无人飞行器21相对于预设原点的横滚角速度;根据终端设备20相对于标志物的航向角速度,控制无人飞行器21相对于预设原点的航向角速度。
本实施例通过获取由拍摄设备采集到的图像中的标志物,确定拍摄设备相对标志物的位置变化信息或姿态变化信息,并根据拍摄设备相对标志物的位置变化信息控制可移动物体的位置变化,或者根据拍摄设备相对标志物的姿态变化信息控制可移动物体的姿态变化,由于拍摄设备相对标志物的位置变化信息或姿态变化信息是可以被精确确定的,从而根据拍摄设备相对标志物的位置变化信息或姿态变化信息可精确的控制可移动物体。
本发明实施例提供一种可移动物体控制方法。图6为本发明另一实施例提供的可移动物体控制方法的流程图。如图6所示,在图5所示实施例的基础上,步骤S502确定所述拍摄设备相对所述标志物的位姿运动信息的方法具体可包括如下几种可行的实现方式:
第一种可行的实现方式是如图6所示的如下步骤:
步骤S601、确定所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息。
具体的,所述确定所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息,包括:根据所述标志物的一个或多个关键点在所述拍摄设备采集到的第一图像中的第一坐标,以及所述标志物的一个或多个关 键点在所述拍摄设备采集到的第二图像中的第二坐标,确定所述第一图像和所述第二图像之间的关联矩阵;根据所述第一图像和所述第二图像之间的关联矩阵,确定所述标志物在所述第一图像和所述第二图像中的位姿运动信息。
例如，标志物是用户人脸，终端设备20的前置摄像头实时采集包括人脸的图像，终端设备20通过人脸检测方法得到人脸在图像中的若干个二维关键点，记为 $(u_i, v_i),\ i=1,2,3\ldots n$，再根据人脸的先验知识将二维关键点转化为三维点，记为 $X_i=(x_i, y_i, z_i),\ i=1,2,3\ldots n$。
再例如，标志物是一般标志物，终端设备20的后置摄像头实时采集包括一般标志物的图像，终端设备20采用预设的初始化方法得到一般标志物在图像中的一系列二维关键点，具体的，该二维关键点可以是表现力较强的特征点例如Harris或FAST角点。终端设备20进一步在后置摄像头采集到的两帧图像之间跟踪二维关键点，例如在相邻两帧图像之间跟踪二维关键点，假设前一帧的二维关键点 $(u_i, v_i)$ 对应后一帧的二维关键点 $(u'_i, v'_i)$。
终端设备20的后置摄像头的内参矩阵为K。终端设备20可采用三角化方法将一般标志物在图像中的一系列二维关键点转化为三维点Xi，此处三角化方法具体可以是直接线性变换(Direct Linear Transform，简称DLT)，假设前一帧的投影矩阵为

$$P=\begin{bmatrix} p^{1T} \\ p^{2T} \\ p^{3T} \end{bmatrix}$$

后一帧的投影矩阵为

$$P'=\begin{bmatrix} p'^{1T} \\ p'^{2T} \\ p'^{3T} \end{bmatrix}$$

其中，p1T表示P的第1行，p2T表示P的第2行，p3T表示P的第3行，p'1T表示P'的第1行，p'2T表示P'的第2行，p'3T表示P'的第3行。前一帧的投影矩阵P、三维点Xi、前一帧的二维关键点(ui,vi)之间的关系可通过如下公式(1)确定：

$$u_i=\frac{p^{1T}X_i}{p^{3T}X_i},\qquad v_i=\frac{p^{2T}X_i}{p^{3T}X_i} \qquad (1)$$

后一帧的投影矩阵P'、三维点Xi、后一帧的二维关键点 $(u'_i, v'_i)$ 之间的关系可通过如下公式(2)确定：

$$u'_i=\frac{p'^{1T}X_i}{p'^{3T}X_i},\qquad v'_i=\frac{p'^{2T}X_i}{p'^{3T}X_i} \qquad (2)$$

前一帧的投影矩阵P、后一帧的投影矩阵P'、三维点Xi、前一帧的二维关键点(ui,vi)、后一帧的二维关键点 $(u'_i, v'_i)$ 之间的关系可通过如下公式(3)确定：

$$A\,X_i = 0,\qquad A=\begin{bmatrix} u_i\,p^{3T}-p^{1T} \\ v_i\,p^{3T}-p^{2T} \\ u'_i\,p'^{3T}-p'^{1T} \\ v'_i\,p'^{3T}-p'^{2T} \end{bmatrix} \qquad (3)$$
其中,A表示矩阵,且矩阵A的最小特征值对应的右特征向量即为三维点Xi的解。前一帧的投影矩阵P、后一帧的投影矩阵P’可根据基本矩阵(fundamental matrix)F求得。
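A compact sketch of this DLT triangulation, assuming the two projection matrices and the matched key point are already available; the stacking of four constraints into A and the smallest-singular-value solution follow the description above.

```python
# Sketch of the DLT triangulation step above: stack the four linear constraints
# from the two views into a matrix A and take the right singular vector for the
# smallest singular value as the (homogeneous) 3D point Xi.
import numpy as np

def triangulate_dlt(P, P_next, uv, uv_next):
    """P, P_next: 3x4 projection matrices; uv, uv_next: (u, v) in each frame."""
    u, v = uv
    u2, v2 = uv_next
    A = np.vstack([
        u  * P[2]      - P[0],
        v  * P[2]      - P[1],
        u2 * P_next[2] - P_next[0],
        v2 * P_next[2] - P_next[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X_h = Vt[-1]                 # homogeneous solution of A * Xi = 0
    return X_h[:3] / X_h[3]      # Euclidean 3D point Xi
```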
当标志物的三维点可以在每一帧图像中被检测出时,可确定出相邻两帧之间的之间的关联矩阵,具体过程如下:
将相邻两帧对应的三维点表示成齐次坐标形式，例如，Xi=(xi,yi,zi)表示前一帧的三维点，Xi=(xi,yi,zi)的齐次坐标形式为 $\bar{X}_i=(x_i,y_i,z_i,1)^T$；$X'_i=(x'_i,y'_i,z'_i)$ 表示后一帧的三维点，$X'_i$ 的齐次坐标形式为 $\bar{X}'_i=(x'_i,y'_i,z'_i,1)^T$。Xi=(xi,yi,zi)的齐次坐标形式 $\bar{X}_i$、$X'_i$ 的齐次坐标形式 $\bar{X}'_i$、前一帧和后一帧之间的关联矩阵M之间的关系可通过如下公式(4)确定：

$$\bar{X}'_i = M\,\bar{X}_i \qquad (4)$$

其中，M可表示为公式(5)的形式：

$$M=\begin{bmatrix} R_{3\times 3} & T_{3\times 1} \\ 0_{1\times 3} & 1 \end{bmatrix} \qquad (5)$$
其中,所述关联矩阵包括旋转矩阵和平移向量,所述旋转矩阵表示所述一个或多个关键点在所述第一图像和所述第二图像中的姿态变化信息,所述平移向量表示所述一个或多个关键点在所述第一图像和所述第二图像中的位置变化信息。具体的,R3×3表示旋转矩阵,表示标志物的关键点在前一帧和后一帧中的姿态变化信息,T3×1表示平移向量,表示标志物的关键点在前一帧和后一帧中的位置变化信息。
可选的,通过优化下面的公式(6)所示的代价函数可计算出M:
$$M^{*}=\arg\min_{M}\left|\,(M P - P')\,V\,\right|^{2} \qquad (6)$$
其中,V表示可视矩阵,当特征点i可以同时在两帧例如相邻两帧中观测到时V(i,:)=1,否则,V(i,:)=0。
另外,为了提高计算M的精确度,还可以对上述公式(6)进行优化,优化的方法可以包括如下几种:
采用RANSAC方法选取部分特征点,减小离群点(outlier)的影响,进一步采用Levenberg-Marquardt(LM)等非线性优化方法进行优化。
或者,当标志物的当前帧只有二维点时,例如该标志物为一般标志物,采用perspective-n-point(PnP)方法计算R、T,进一步采用LM等非线性优化方法最小化如下公式(7)所示的目标函数:
$$\min_{R,\,T}\ \sum_{i}\left\|\begin{pmatrix}u_i\\ v_i\end{pmatrix}-\begin{pmatrix}x^{c}_i/z^{c}_i\\ y^{c}_i/z^{c}_i\end{pmatrix}\right\|^{2},\qquad \begin{pmatrix}x^{c}_i\\ y^{c}_i\\ z^{c}_i\end{pmatrix}=K\,(R\,X_i+T) \qquad (7)$$
其中,R具体为公式(5)中的R3×3,T具体为公式(5)中的T3×1。可选的,采用RANSAC方法选取部分特征点,减小离群点(outlier)的影响。
终端设备20计算出前一帧和后一帧之间的关联矩阵M之后,根据关联矩阵M确定标志物在前一帧和后一帧中的位姿运动信息,具体的,位姿运动信息包括位置变化信息、姿态变化信息。根据公式(5)可知,R3×3表 示旋转矩阵,表示标志物的关键点在前一帧和后一帧中的姿态变化信息,因此,终端设备20可根据标志物的关键点在前一帧和后一帧中的姿态变化信息,确定标志物在前一帧和后一帧中的姿态变化信息。另外,根据公式(5)可知,T3×1表示平移向量,表示标志物的关键点在前一帧和后一帧中的位置变化信息,因此,终端设备20可根据标志物的关键点在前一帧和后一帧中的位置变化信息,确定标志物在前一帧和后一帧中的位置变化信息。
步骤S602、根据所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息,确定所述拍摄设备相对所述标志物的位姿运动信息。
具体的,终端设备20根据标志物在终端设备20采集到的前一帧图像和后一帧图像中的姿态变化信息,确定出终端设备20相对标志物的姿态变化信息,或者,终端设备20根据标志物在终端设备20采集到的前一帧图像和后一帧图像中的位置变化信息,确定出终端设备20相对标志物的位置变化信息。
在其他本实施例中,终端设备20还可以将R3×3和T3×1作为控制器(proportional integral derivative,简称PID)的输入信号,以使控制器输出控制无人飞行器21的控制指令,其中,R3×3可用于控制无人飞行器21的姿态,例如,终端设备20将R3×3转换为欧拉角,并根据该欧拉角生成控制无人飞行器21旋转的控制指令,T3×1可用于控制无人飞行器21平移。可选的,R3×3和T3×1共用一个控制器,或者R3×3和T3×1使用两个不同的控制器。
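The sketch below shows one way R3×3 and T3×1 could be fed to PID controllers as described, using separate channels for attitude (after converting R to Euler angles) and for translation. The gains, the ZYX Euler convention, and the per-axis controller layout are assumptions for this example.

```python
# Sketch of the PID step above: the rotation part R (converted to Euler angles)
# and the translation part T of the association matrix are fed to PID
# controllers whose outputs become UAV commands.
import numpy as np

class PID:
    def __init__(self, kp, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, error, dt=0.05):
        self.integral += error * dt
        deriv = (error - self.prev_err) / dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def euler_zyx_from_R(R):
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return roll, pitch, yaw

att_pid = [PID(kp=1.2) for _ in range(3)]   # roll, pitch, yaw channels
pos_pid = [PID(kp=0.8) for _ in range(3)]   # x, y, z channels

def association_to_command(R, T):
    angles = euler_zyx_from_R(R)
    rot_cmd = [pid.step(a) for pid, a in zip(att_pid, angles)]       # rotate the UAV
    trans_cmd = [pid.step(t) for pid, t in zip(pos_pid, np.ravel(T))]  # translate the UAV
    return rot_cmd, trans_cmd
```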
第二种可行的确定所述拍摄设备相对所述标志物的位姿运动信息的实现方式是:根据惯性测量单元IMU检测到的所述拍摄设备的位姿运动信息,确定所述拍摄设备相对所述标志物的位姿运动信息。
具体的,终端设备20设置有惯性测量单元(inertial measurement unit,简称IMU),惯性测量单元一般包括陀螺仪和加速度计。所述惯性测量单元用于检测终端设备20的俯仰角、横滚角、偏航角和加速度等。假设标志物不同,终端设备20可根据惯性测量单元IMU检测到的终端设备20的姿态变化信息,确定终端设备20相对标志物的姿态变化信息,或者根据惯性测量单元IMU检测到的终端设备20的加速度计算出终端设备20的位置变化信息,进一步确定终端设备20相对标志物的位置变化信息。
第三种可行的确定所述拍摄设备相对所述标志物的位姿运动信息的实现方式是:根据所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息,以及惯性测量单元IMU检测到的所述拍摄设备的位姿运动信息,确定所述拍摄设备相对所述标志物的位姿运动信息。
具体的,结合上述两种确定所述拍摄设备相对所述标志物的位姿运动信息的实现方式,确定终端设备20相对标志物的姿态变化信息或位置变化信息。也就是说,终端设备20在根据标志物在终端设备20采集到的至少两帧图像中的位姿运动信息确定终端设备20相对标志物的位姿运动信息时,可以辅助参考终端设备20内惯性测量单元IMU检测到的终端设备20的位姿运动信息。
可选的,若所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息与IMU检测到的所述拍摄设备的位姿运动信息的差值的绝对值大于阈值,则删除已确定的所述拍摄设备相对所述标志物的位姿运动信息。
例如,终端设备20根据标志物在终端设备20采集到的至少两帧图像中的位姿运动信息,确定出的终端设备20相对标志物的位姿运动信息与IMU检测到的终端设备20的位姿运动信息不一致,且差别较大,则说明终端设备20确定出的终端设备20相对标志物的位姿运动信息不准确,进一步可将当前时刻之前已经确定出的终端设备20相对标志物的位姿运动信息进行初始化例如删除。
本实施例通过终端设备根据标志物的一个或多个关键点在终端设备采集到的至少两帧图像中的坐标,确定出两帧图像之间的关联矩阵,并根据该关联矩阵,确定标志物在该两帧图像中的位姿运动信息,进一步根据标志物在该两帧图像中的位姿运动信息,确定终端设备相对该标志物的位姿运动信息,提高了对终端设备相对该标志物的位姿运动信息的计算精度。
本发明实施例提供一种可移动物体控制方法。在上述实施例的基础上,所述控制所述可移动物体之前,还包括:获取用于触发所述可移动物体移动的触发指令。所述触发指令是对第一激活按键进行操作生成的。
根据上述实施例可知,用户可通过终端设备20控制无人飞行器21,例如,终端设备20的前置摄像头采集到用户人脸时,终端设备20相对于用户人脸在向左移动,终端设备20进一步将终端设备20相对于用户人脸的运动方向映射成控制无人飞行器21的控制指令。其中,终端设备20相对于用户人脸向左移动的情况包括如下几种可能的情况:
一种可能的情况是:如图7所示,用户人脸不动、用户向左移动终端设备20,如箭头70所示的方向移动终端设备20。
另一种可能的情况是:如图8所示,终端设备20不动,用户人脸向终端设备20的右边如箭头80所示的方向移动。
再一种可能的情况是:如图9所示,用户人脸和终端设备20同时移动,用户向左移动终端设备20,如箭头70所示的方向移动终端设备20,同时用户人脸向终端设备20的右边如箭头80所示的方向移动。
可见用户人脸的位姿变化或终端设备20自身的位姿变化均可导致终端设备20相对于用户人脸发生位姿变化,终端设备20可根据终端设备20相对于用户人脸的位姿变化来控制无人飞行器21。
有时候用户无意之间转动头部或用户无意之间移动了终端设备20,也可能造成无人飞行器21的位姿变化,而此时用户可能并不希望无人飞行器21产生位姿变化,为了避免由于用户的误操作而导致无人飞行器21产生位姿变化,在终端设备20上可设置有一个激活按键,如图10所示的激活按键A,当用户点击激活按键A时,终端设备20根据用户对激活按键A的点击操作生成触发指令,该触发指令可以触发终端设备20向无人飞行器21发送控制指令,如果用户不点击激活按键A,即使终端设备20生成控制指令,也无法向无人飞行器21发送,从而保证无人飞行器21不发生移动。或者,该触发指令还可以触发无人飞行器21移动,例如当无人飞行器21同时接收到终端设备20发送的控制指令和触发指令时,则执行该控制指令,如果无人飞行器21只接收到终端设备20发送的控制指令,而没有接收到终端设备20发送的触发指令,则不执行该控制指令。
在一些实施例中,所述确定所述拍摄设备相对所述标志物的位姿信息之前,还包括:获取初始化指令,所述初始化指令用于对已确定的所述拍摄设备相对所述标志物的位姿信息进行初始化处理。所述初始化指令是对 第二激活按键进行操作生成的。
如图10所示,终端设备20上还可设置有一个激活按键B,当用户点击激活按键B时,终端设备20根据用户对激活按键B的点击操作生成初始化指令,该初始化指令用于对当前时刻之前已经确定出的终端设备20相对标志物的位姿运动信息进行初始化例如删除,也就是说,用户通过终端设备20控制无人飞行器21之前,终端设备20可能存储有在历史时刻确定出的终端设备20相对标志物例如用户人脸的位姿运动信息,为了避免历史时刻确定的位姿运动信息对当前时刻确定的位姿运动信息造成影响,用户可通过点击激活按键B对终端设备20在历史时刻确定出的终端设备20相对标志物例如用户人脸的位姿运动信息进行初始化。
本实施例通过用户对终端设备上第一激活按键的操作使得终端设备产生触发无人飞行器移动的触发指令,避免由于用户的误操作而导致无人飞行器产生位姿变化,实现了对无人飞行器的精确控制。另外,通过用户对终端设备上第二激活按键的操作使得终端设备产生初始化指令,以对已确定的终端设备相对标志物的位姿信息进行初始化处理,避免历史时刻确定的位姿运动信息对当前时刻确定的位姿运动信息造成影响,进一步实现了对无人飞行器的精确控制。
本发明实施例提供一种终端设备。图11为本发明实施例提供的终端设备的结构图,如图11所示,终端设备110包括一个或多个处理器111,处理器111用于:获取由拍摄设备采集到的图像中的标志物;确定所述拍摄设备相对所述标志物的位姿信息;根据所述拍摄设备相对所述标志物的位姿信息,控制所述可移动物体。
具体的,所述位姿信息包括如下至少一种:位置信息、姿态信息。所述姿态信息包括如下至少一种:俯仰角、横滚角、航向角。
可选的,处理器111获取由拍摄设备采集到的图像中的标志物时,具体用于如下至少一种:获取由用户在所述拍摄设备采集到的图像中选定的标志物;获取所述拍摄设备采集到的图像中与预设参考图像匹配的标志物;获取所述拍摄设备采集到的图像中由预设数量的特征点构成的标志物。
进一步的,处理器111获取由用户在所述拍摄设备采集到的图像中选定的标志物时,具体用于如下至少一种:获取由用户在所述拍摄设备采集到的图像中框选的标志物;获取由用户在所述拍摄设备采集到的图像中点选的标志物。
具体的,处理器111确定所述拍摄设备相对所述标志物的位姿信息时,具体用于:确定所述标志物在所述拍摄设备采集到的图像中的位姿信息;根据所述标志物在所述拍摄设备采集到的图像中的位姿信息,确定所述拍摄设备相对所述标志物的位姿信息。处理器111确定所述标志物在所述拍摄设备采集到的图像中的位姿信息时,具体用于:根据所述标志物的一个或多个关键点在所述拍摄设备采集到的图像中的坐标,确定所述标志物在所述拍摄设备采集到的图像中的位姿信息。
可选的,处理器111根据所述拍摄设备相对所述标志物的位姿信息,控制所述可移动物体时,具体用于如下至少一种:根据所述拍摄设备相对所述标志物的位置信息,控制所述可移动物体相对于预设原点的位置信息;根据所述拍摄设备相对所述标志物的姿态信息,控制所述可移动物体的姿态信息;根据所述拍摄设备相对所述标志物的姿态信息,控制所述可移动物体的运动速度。其中,处理器111根据所述拍摄设备相对所述标志物的姿态信息,控制所述可移动物体的运动速度时,具体用于:根据所述拍摄设备相对所述标志物的俯仰角,控制所述可移动物体在地面坐标系中沿Y轴移动的速度;根据所述拍摄设备相对所述标志物的横滚角,控制所述可移动物体在地面坐标系中沿X轴移动的速度;根据所述拍摄设备相对所述标志物的航向角,控制所述可移动物体在地面坐标系中沿Z轴移动的速度。处理器111根据所述拍摄设备相对所述标志物的姿态信息,控制所述可移动物体的姿态信息时,具体用于:根据所述拍摄设备相对所述标志物的俯仰角,控制所述可移动物体的俯仰角;根据所述拍摄设备相对所述标志物的横滚角,控制所述可移动物体的横滚角;根据所述拍摄设备相对所述标志物的航向角,控制所述可移动物体的航向角。处理器111根据所述拍摄设备相对所述标志物的位置信息,控制所述可移动物体相对于预设原点的位置信息时,具体用于:根据所述拍摄设备相对所述标志物的距离,控制所述可移动物体相对于所述预设原点的距离。
本发明实施例提供的终端设备的具体原理和实现方式均与图1所示实施例类似,此处不再赘述。
本实施例通过获取由拍摄设备采集到的图像中的标志物,确定拍摄设备相对标志物的位姿信息,并根据拍摄设备相对标志物的位姿信息控制可移动物体,由于拍摄设备相对标志物的位姿信息是可以被精确确定的,从而根据拍摄设备相对标志物的位姿信息控制可移动物体时能够实现对可移动物体的精确控制。
本发明实施例提供一种终端设备。在图11所示实施例提供的技术方案的基础上,进一步的,处理器111还用于:确定所述拍摄设备相对所述标志物的位姿运动信息;根据所述拍摄设备相对所述标志物的位姿运动信息,控制所述可移动物体。其中,所述位姿运动信息包括如下至少一种:位置变化信息、姿态变化信息。
可选的,处理器111根据所述拍摄设备相对所述标志物的位姿运动信息,控制所述可移动物体时,具体用于如下至少一种:根据所述拍摄设备相对所述标志物的位置变化信息,控制所述可移动物体相对于预设原点的位置变化信息;根据所述拍摄设备相对所述标志物的姿态变化信息,控制所述可移动物体相对于预设原点的姿态变化信息。
本发明实施例提供的终端设备的具体原理和实现方式均与图5所示实施例类似,此处不再赘述。
本实施例通过获取由拍摄设备采集到的图像中的标志物,确定拍摄设备相对标志物的位置变化信息或姿态变化信息,并根据拍摄设备相对标志物的位置变化信息控制可移动物体的位置变化,或者根据拍摄设备相对标志物的姿态变化信息控制可移动物体的姿态变化,由于拍摄设备相对标志物的位置变化信息或姿态变化信息是可以被精确确定的,从而根据拍摄设备相对标志物的位置变化信息或姿态变化信息可精确的控制可移动物体。
本发明实施例提供一种终端设备。在上述实施例提供的技术方案的基础上,处理器111确定所述拍摄设备相对所述标志物的位姿运动信息的 方式包括如下几种可行的实现方式:
一种可行的实现方式是:处理器111确定所述拍摄设备相对所述标志物的位姿运动信息时,具体用于:确定所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息;根据所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息,确定所述拍摄设备相对所述标志物的位姿运动信息。处理器111确定所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息时,具体用于:根据所述标志物的一个或多个关键点在所述拍摄设备采集到的第一图像中的第一坐标,以及所述标志物的一个或多个关键点在所述拍摄设备采集到的第二图像中的第二坐标,确定所述第一图像和所述第二图像之间的关联矩阵;根据所述第一图像和所述第二图像之间的关联矩阵,确定所述标志物在所述第一图像和所述第二图像中的位姿运动信息。其中,所述关联矩阵包括旋转矩阵和平移向量,所述旋转矩阵表示所述一个或多个关键点在所述第一图像和所述第二图像中的姿态变化信息,所述平移向量表示所述一个或多个关键点在所述第一图像和所述第二图像中的位置变化信息。
另一种可行的实现方式是:处理器111确定所述拍摄设备相对所述标志物的位姿运动信息时,具体用于:根据惯性测量单元IMU检测到的所述拍摄设备的位姿运动信息,确定所述拍摄设备相对所述标志物的位姿运动信息。
再一种可行的实现方式是:处理器111确定所述拍摄设备相对所述标志物的位姿运动信息时,具体用于:根据所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息,以及惯性测量单元IMU检测到的所述拍摄设备的位姿运动信息,确定所述拍摄设备相对所述标志物的位姿运动信息。若所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息与IMU检测到的所述拍摄设备的位姿运动信息的差值的绝对值大于阈值,则所述处理器还用于删除已确定的所述拍摄设备相对所述标志物的位姿运动信息。
本发明实施例提供的终端设备的具体原理和实现方式均与图6所示实施例类似,此处不再赘述。
本实施例通过终端设备根据标志物的一个或多个关键点在终端设备 采集到的至少两帧图像中的坐标,确定出两帧图像之间的关联矩阵,并根据该关联矩阵,确定标志物在该两帧图像中的位姿运动信息,进一步根据标志物在该两帧图像中的位姿运动信息,确定终端设备相对该标志物的位姿运动信息,提高了对终端设备相对该标志物的位姿运动信息的计算精度。
本发明实施例提供一种终端设备。在上述实施例提供的技术方案的基础上,处理器111控制所述可移动物体之前,还用于:获取用于触发所述可移动物体移动的触发指令。所述触发指令是对第一激活按键进行操作生成的。
进一步地,处理器111确定所述拍摄设备相对所述标志物的位姿信息之前,还用于:获取初始化指令,所述初始化指令用于对已确定的所述拍摄设备相对所述标志物的位姿信息进行初始化处理。所述初始化指令是对第二激活按键进行操作生成的。
本发明实施例提供的终端设备的具体原理和实现方式均与图10所示实施例类似,此处不再赘述。
本实施例通过用户对终端设备上第一激活按键的操作使得终端设备产生触发无人飞行器移动的触发指令,避免由于用户的误操作而导致无人飞行器产生位姿变化,实现了对无人飞行器的精确控制。另外,通过用户对终端设备上第二激活按键的操作使得终端设备产生初始化指令,以对已确定的终端设备相对标志物的位姿信息进行初始化处理,避免历史时刻确定的位姿运动信息对当前时刻确定的位姿运动信息造成影响,进一步实现了对无人飞行器的精确控制。
本发明实施例提供一种无人飞行器。图12为本发明实施例提供的无人飞行器的结构图,如图12所示,无人飞行器100包括:机身、动力系统和飞行控制器118,所述动力系统包括如下至少一种:电机107、螺旋桨106和电子调速器117,动力系统安装在所述机身,用于提供飞行动力;飞行控制器118与所述动力系统通讯连接,用于控制所述无人飞行器飞行。
另外,如图12所示,无人飞行器100还包括:传感系统108、通信系统110、支撑设备102、拍摄设备104,其中,支撑设备102具体可以是云台,通信系统110具体可以包括接收机,接收机用于接收地面站112的天线114发送的无线信号,116表示接收机和天线114通信过程中产生的电磁波。
地面站112具体可以是上述实施例中的终端设备,终端设备产生控制指令并将控制指令通过无人飞行器100的通信系统110发送给飞行控制器118,飞行控制器118进一步根据终端设备发送的控制指令控制无人飞行器100,其中,通过终端设备控制无人飞行器100的具体原理和实现方式均与上述实施例类似,此处不再赘述。
本发明实施例提供一种可移动物体控制系统。其中,该可移动物体具体为无人飞行器,如图2所示,该可移动物体控制系统包括:终端设备20和无人飞行器21。终端设备20控制无人飞行器21的具体原理和实现方式均与上述实施例类似,此处不再赘述。
在本发明所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
上述以软件功能单元的形式实现的集成的单元,可以存储在一个计算机可读取存储介质中。上述软件功能单元存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本发明各个实施例所述方法的部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
本领域技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。上述描述的装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
最后应说明的是:以上各实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述各实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的范围。

Claims (50)

  1. 一种可移动物体控制方法,其特征在于,包括:
    获取由拍摄设备采集到的图像中的标志物;
    确定所述拍摄设备相对所述标志物的位姿信息;
    根据所述拍摄设备相对所述标志物的位姿信息,控制所述可移动物体。
  2. 根据权利要求1所述的方法,其特征在于,所述位姿信息包括如下至少一种:
    位置信息、姿态信息。
  3. 根据权利要求1或2所述的方法,其特征在于,所述确定所述拍摄设备相对所述标志物的位姿信息,包括:
    确定所述标志物在所述拍摄设备采集到的图像中的位姿信息;
    根据所述标志物在所述拍摄设备采集到的图像中的位姿信息,确定所述拍摄设备相对所述标志物的位姿信息。
  4. 根据权利要求3所述的方法,其特征在于,所述确定所述标志物在所述拍摄设备采集到的图像中的位姿信息,包括:
    根据所述标志物的一个或多个关键点在所述拍摄设备采集到的图像中的坐标,确定所述标志物在所述拍摄设备采集到的图像中的位姿信息。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述根据所述拍摄设备相对所述标志物的位姿信息,控制所述可移动物体,包括如下至少一种:
    根据所述拍摄设备相对所述标志物的位置信息,控制所述可移动物体相对于预设原点的位置信息;
    根据所述拍摄设备相对所述标志物的姿态信息,控制所述可移动物体的姿态信息;
    根据所述拍摄设备相对所述标志物的姿态信息,控制所述可移动物体的运动速度。
  6. 根据权利要求5所述的方法,其特征在于,所述姿态信息包括如下至少一种:
    俯仰角、横滚角、航向角。
  7. 根据权利要求6所述的方法,其特征在于,所述根据所述拍摄设备相对所述标志物的姿态信息,控制所述可移动物体的运动速度,包括:
    根据所述拍摄设备相对所述标志物的俯仰角,控制所述可移动物体在地面坐标系中沿Y轴移动的速度;
    根据所述拍摄设备相对所述标志物的横滚角,控制所述可移动物体在地面坐标系中沿X轴移动的速度;
    根据所述拍摄设备相对所述标志物的航向角,控制所述可移动物体在地面坐标系中沿Z轴移动的速度。
  8. 根据权利要求6所述的方法,其特征在于,所述根据所述拍摄设备相对所述标志物的姿态信息,控制所述可移动物体的姿态信息,包括:
    根据所述拍摄设备相对所述标志物的俯仰角,控制所述可移动物体的俯仰角;
    根据所述拍摄设备相对所述标志物的横滚角,控制所述可移动物体的横滚角;
    根据所述拍摄设备相对所述标志物的航向角,控制所述可移动物体的航向角。
  9. 根据权利要求5或6所述的方法,其特征在于,所述根据所述拍摄设备相对所述标志物的位置信息,控制所述可移动物体相对于预设原点的位置信息,包括:
    根据所述拍摄设备相对所述标志物的距离,控制所述可移动物体相对于所述预设原点的距离。
  10. 根据权利要求1所述的方法,其特征在于,还包括:
    确定所述拍摄设备相对所述标志物的位姿运动信息;
    根据所述拍摄设备相对所述标志物的位姿运动信息,控制所述可移动物体。
  11. 根据权利要求10所述的方法,其特征在于,所述位姿运动信息包括如下至少一种:
    位置变化信息、姿态变化信息。
  12. 根据权利要求10或11所述的方法,其特征在于,所述确定所述拍摄设备相对所述标志物的位姿运动信息,包括:
    确定所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息;
    根据所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息,确定所述拍摄设备相对所述标志物的位姿运动信息。
  13. 根据权利要求12所述的方法,其特征在于,所述确定所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息,包括:
    根据所述标志物的一个或多个关键点在所述拍摄设备采集到的第一图像中的第一坐标,以及所述标志物的一个或多个关键点在所述拍摄设备采集到的第二图像中的第二坐标,确定所述第一图像和所述第二图像之间的关联矩阵;
    根据所述第一图像和所述第二图像之间的关联矩阵,确定所述标志物在所述第一图像和所述第二图像中的位姿运动信息。
  14. 根据权利要求13所述的方法,其特征在于,所述关联矩阵包括旋转矩阵和平移向量,所述旋转矩阵表示所述一个或多个关键点在所述第一图像和所述第二图像中的姿态变化信息,所述平移向量表示所述一个或多个关键点在所述第一图像和所述第二图像中的位置变化信息。
  15. 根据权利要求10或11所述的方法,其特征在于,所述确定所述拍摄设备相对所述标志物的位姿运动信息,包括:
    根据惯性测量单元IMU检测到的所述拍摄设备的位姿运动信息,确定所述拍摄设备相对所述标志物的位姿运动信息。
  16. 根据权利要求10或11所述的方法,其特征在于,所述确定所述拍摄设备相对所述标志物的位姿运动信息,包括:
    根据所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息,以及惯性测量单元IMU检测到的所述拍摄设备的位姿运动信息,确定所述拍摄设备相对所述标志物的位姿运动信息。
  17. 根据权利要求16所述的方法,其特征在于,还包括:
    若所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息与IMU检测到的所述拍摄设备的位姿运动信息的差值的绝对值大于阈值,则删除已确定的所述拍摄设备相对所述标志物的位姿运动信息。
  18. 根据权利要求10-17任一项所述的方法,其特征在于,所述根据 所述拍摄设备相对所述标志物的位姿运动信息,控制所述可移动物体,包括如下至少一种:
    根据所述拍摄设备相对所述标志物的位置变化信息,控制所述可移动物体相对于预设原点的位置变化信息;
    根据所述拍摄设备相对所述标志物的姿态变化信息,控制所述可移动物体相对于预设原点的姿态变化信息。
  19. 根据权利要求1-18任一项所述的方法,其特征在于,所述获取由拍摄设备采集到的图像中的标志物,包括如下至少一种:
    获取由用户在所述拍摄设备采集到的图像中选定的标志物;
    获取所述拍摄设备采集到的图像中与预设参考图像匹配的标志物;
    获取所述拍摄设备采集到的图像中由预设数量的特征点构成的标志物。
  20. 根据权利要求19所述的方法,其特征在于,所述获取由用户在所述拍摄设备采集到的图像中选定的标志物,包括如下至少一种:
    获取由用户在所述拍摄设备采集到的图像中框选的标志物;
    获取由用户在所述拍摄设备采集到的图像中点选的标志物。
  21. 根据权利要求1-20任一项所述的方法,其特征在于,所述控制所述可移动物体之前,还包括:获取用于触发所述可移动物体移动的触发指令。
  22. 根据权利要求21所述的方法,其特征在于,所述触发指令是对第一激活按键进行操作生成的。
  23. 根据权利要求1-20任一项所述的方法,其特征在于,所述确定所述拍摄设备相对所述标志物的位姿信息之前,还包括:
    获取初始化指令,所述初始化指令用于对已确定的所述拍摄设备相对所述标志物的位姿信息进行初始化处理。
  24. 根据权利要求23所述的方法,其特征在于,所述初始化指令是对第二激活按键进行操作生成的。
  25. 一种终端设备,其特征在于,包括:一个或多个处理器,所述处理器用于:
    获取由拍摄设备采集到的图像中的标志物;
    确定所述拍摄设备相对所述标志物的位姿信息;
    根据所述拍摄设备相对所述标志物的位姿信息,控制可移动物体。
  26. 根据权利要求25所述的终端设备,其特征在于,所述位姿信息包括如下至少一种:
    位置信息、姿态信息。
  27. 根据权利要求25或26所述的终端设备,其特征在于,所述处理器确定所述拍摄设备相对所述标志物的位姿信息时,具体用于:
    确定所述标志物在所述拍摄设备采集到的图像中的位姿信息;
    根据所述标志物在所述拍摄设备采集到的图像中的位姿信息,确定所述拍摄设备相对所述标志物的位姿信息。
  28. 根据权利要求27所述的终端设备,其特征在于,所述处理器确定所述标志物在所述拍摄设备采集到的图像中的位姿信息时,具体用于:
    根据所述标志物的一个或多个关键点在所述拍摄设备采集到的图像中的坐标,确定所述标志物在所述拍摄设备采集到的图像中的位姿信息。
  29. 根据权利要求25-28任一项所述的终端设备,其特征在于,所述处理器根据所述拍摄设备相对所述标志物的位姿信息,控制所述可移动物体时,具体用于如下至少一种:
    根据所述拍摄设备相对所述标志物的位置信息,控制所述可移动物体相对于预设原点的位置信息;
    根据所述拍摄设备相对所述标志物的姿态信息,控制所述可移动物体的姿态信息;
    根据所述拍摄设备相对所述标志物的姿态信息,控制所述可移动物体的运动速度。
  30. 根据权利要求29所述的终端设备,其特征在于,所述姿态信息包括如下至少一种:
    俯仰角、横滚角、航向角。
  31. 根据权利要求30所述的终端设备,其特征在于,所述处理器根据所述拍摄设备相对所述标志物的姿态信息,控制所述可移动物体的运动速度时,具体用于:
    根据所述拍摄设备相对所述标志物的俯仰角,控制所述可移动物体在 地面坐标系中沿Y轴移动的速度;
    根据所述拍摄设备相对所述标志物的横滚角,控制所述可移动物体在地面坐标系中沿X轴移动的速度;
    根据所述拍摄设备相对所述标志物的航向角,控制所述可移动物体在地面坐标系中沿Z轴移动的速度。
  32. 根据权利要求30所述的终端设备,其特征在于,所述处理器根据所述拍摄设备相对所述标志物的姿态信息,控制所述可移动物体的姿态信息时,具体用于:
    根据所述拍摄设备相对所述标志物的俯仰角,控制所述可移动物体的俯仰角;
    根据所述拍摄设备相对所述标志物的横滚角,控制所述可移动物体的横滚角;
    根据所述拍摄设备相对所述标志物的航向角,控制所述可移动物体的航向角。
  33. 根据权利要求29或30所述的终端设备,其特征在于,所述处理器根据所述拍摄设备相对所述标志物的位置信息,控制所述可移动物体相对于预设原点的位置信息时,具体用于:
    根据所述拍摄设备相对所述标志物的距离,控制所述可移动物体相对于所述预设原点的距离。
  34. 根据权利要求25所述的终端设备,其特征在于,所述处理器还用于:
    确定所述拍摄设备相对所述标志物的位姿运动信息;
    根据所述拍摄设备相对所述标志物的位姿运动信息,控制所述可移动物体。
  35. 根据权利要求34所述的终端设备,其特征在于,所述位姿运动信息包括如下至少一种:
    位置变化信息、姿态变化信息。
  36. 根据权利要求34或35所述的终端设备,其特征在于,所述处理器确定所述拍摄设备相对所述标志物的位姿运动信息时,具体用于:
    确定所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运 动信息;
    根据所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息,确定所述拍摄设备相对所述标志物的位姿运动信息。
  37. 根据权利要求36所述的终端设备,其特征在于,所述处理器确定所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息时,具体用于:
    根据所述标志物的一个或多个关键点在所述拍摄设备采集到的第一图像中的第一坐标,以及所述标志物的一个或多个关键点在所述拍摄设备采集到的第二图像中的第二坐标,确定所述第一图像和所述第二图像之间的关联矩阵;
    根据所述第一图像和所述第二图像之间的关联矩阵,确定所述标志物在所述第一图像和所述第二图像中的位姿运动信息。
  38. 根据权利要求37所述的终端设备,其特征在于,所述关联矩阵包括旋转矩阵和平移向量,所述旋转矩阵表示所述一个或多个关键点在所述第一图像和所述第二图像中的姿态变化信息,所述平移向量表示所述一个或多个关键点在所述第一图像和所述第二图像中的位置变化信息。
  39. 根据权利要求34或35所述的终端设备,其特征在于,所述处理器确定所述拍摄设备相对所述标志物的位姿运动信息时,具体用于:
    根据惯性测量单元IMU检测到的所述拍摄设备的位姿运动信息,确定所述拍摄设备相对所述标志物的位姿运动信息。
  40. 根据权利要求34或35所述的终端设备,其特征在于,所述处理器确定所述拍摄设备相对所述标志物的位姿运动信息时,具体用于:
    根据所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息,以及惯性测量单元IMU检测到的所述拍摄设备的位姿运动信息,确定所述拍摄设备相对所述标志物的位姿运动信息。
  41. 根据权利要求40所述的终端设备,其特征在于,若所述标志物在所述拍摄设备采集到的至少两帧图像中的位姿运动信息与IMU检测到的所述拍摄设备的位姿运动信息的差值的绝对值大于阈值,则所述处理器还用于删除已确定的所述拍摄设备相对所述标志物的位姿运动信息。
  42. 根据权利要求34-41任一项所述的终端设备,其特征在于,所述 处理器根据所述拍摄设备相对所述标志物的位姿运动信息,控制所述可移动物体时,具体用于如下至少一种:
    根据所述拍摄设备相对所述标志物的位置变化信息,控制所述可移动物体相对于预设原点的位置变化信息;
    根据所述拍摄设备相对所述标志物的姿态变化信息,控制所述可移动物体相对于预设原点的姿态变化信息。
  43. 根据权利要求25-42任一项所述的终端设备,其特征在于,所述处理器获取由拍摄设备采集到的图像中的标志物时,具体用于如下至少一种:
    获取由用户在所述拍摄设备采集到的图像中选定的标志物;
    获取所述拍摄设备采集到的图像中与预设参考图像匹配的标志物;
    获取所述拍摄设备采集到的图像中由预设数量的特征点构成的标志物。
  44. 根据权利要求43所述的终端设备,其特征在于,所述处理器获取由用户在所述拍摄设备采集到的图像中选定的标志物时,具体用于如下至少一种:
    获取由用户在所述拍摄设备采集到的图像中框选的标志物;
    获取由用户在所述拍摄设备采集到的图像中点选的标志物。
  45. 根据权利要求25-44任一项所述的终端设备,其特征在于,所述处理器控制所述可移动物体之前,还用于:
    获取用于触发所述可移动物体移动的触发指令。
  46. 根据权利要求45所述的终端设备,其特征在于,所述触发指令是对第一激活按键进行操作生成的。
  47. 根据权利要求25-44任一项所述的终端设备,其特征在于,所述处理器确定所述拍摄设备相对所述标志物的位姿信息之前,还用于:
    获取初始化指令,所述初始化指令用于对已确定的所述拍摄设备相对所述标志物的位姿信息进行初始化处理。
  48. 根据权利要求47所述的终端设备,其特征在于,所述初始化指令是对第二激活按键进行操作生成的。
  49. 一种无人飞行器,其特征在于,包括:
    机身;
    动力系统,安装在所述机身,用于提供飞行动力;
    飞行控制器,与所述动力系统通讯连接,用于控制所述无人飞行器飞行。
  50. 一种可移动物体控制系统,其特征在于,包括:
    如权利要求25-48任一项所述的终端设备;以及
    无人飞行器。
PCT/CN2017/102081 2017-09-18 2017-09-18 可移动物体控制方法、设备及系统 WO2019051832A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
PCT/CN2017/102081 WO2019051832A1 (zh) 2017-09-18 2017-09-18 可移动物体控制方法、设备及系统
CN201780014737.0A CN108713179A (zh) 2017-09-18 2017-09-18 可移动物体控制方法、设备及系统
JP2019571295A JP6943988B2 (ja) 2017-09-18 2017-09-18 移動可能物体の制御方法、機器およびシステム
EP17925187.1A EP3674210A4 (en) 2017-09-18 2017-09-18 METHOD, DEVICE AND SYSTEM FOR CONTROLLING A MOVABLE OBJECT
US16/719,207 US20200125100A1 (en) 2017-09-18 2019-12-18 Movable object control method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/102081 WO2019051832A1 (zh) 2017-09-18 2017-09-18 可移动物体控制方法、设备及系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/719,207 Continuation US20200125100A1 (en) 2017-09-18 2019-12-18 Movable object control method, device and system

Publications (1)

Publication Number Publication Date
WO2019051832A1 true WO2019051832A1 (zh) 2019-03-21

Family

ID=63866980

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/102081 WO2019051832A1 (zh) 2017-09-18 2017-09-18 可移动物体控制方法、设备及系统

Country Status (5)

Country Link
US (1) US20200125100A1 (zh)
EP (1) EP3674210A4 (zh)
JP (1) JP6943988B2 (zh)
CN (1) CN108713179A (zh)
WO (1) WO2019051832A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3627447B1 (en) * 2018-09-24 2022-06-29 Tata Consultancy Services Limited System and method of multirotor dynamics based online scale estimation for monocular vision
JP6927943B2 (ja) * 2018-10-30 2021-09-01 エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd 情報処理装置、飛行制御方法及び飛行制御システム
CN112051546B (zh) * 2019-06-05 2024-03-08 北京外号信息技术有限公司 一种用于实现相对定位的装置以及相应的相对定位方法
CN110570463B (zh) * 2019-09-11 2023-04-07 深圳市道通智能航空技术股份有限公司 一种目标状态估计方法、装置和无人机
CN112529985A (zh) * 2019-09-17 2021-03-19 北京字节跳动网络技术有限公司 图像处理方法及装置
CN112742038B (zh) * 2019-10-29 2023-05-05 珠海一微半导体股份有限公司 玩具机器人及其移动方法和芯片
WO2021157136A1 (ja) * 2020-02-07 2021-08-12 パナソニックIpマネジメント株式会社 測位システム
CN114071003B (zh) * 2020-08-06 2024-03-12 北京外号信息技术有限公司 一种基于光通信装置的拍摄方法和系统
CN111917989B (zh) * 2020-09-15 2022-01-21 苏州臻迪智能科技有限公司 一种视频拍摄方法及装置
CN112419403B (zh) * 2020-11-30 2024-10-11 海南大学 一种基于二维码阵列的室内无人机定位方法
CN114726996B (zh) * 2021-01-04 2024-03-15 北京外号信息技术有限公司 用于建立空间位置与成像位置之间的映射的方法和系统
CN117437288B (zh) * 2023-12-19 2024-05-03 先临三维科技股份有限公司 摄影测量方法、装置、设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2202671A2 (en) * 2008-12-26 2010-06-30 Canon Kabushiki Kaisha Subject tracking apparatus and control method therefor, image capturing apparatus, and display apparatus
US20100253798A1 (en) * 2009-04-02 2010-10-07 Nikon Corporation Image processing apparatus, digital camera, and recording medium
US20130107066A1 (en) * 2011-10-27 2013-05-02 Qualcomm Incorporated Sensor aided video stabilization
CN105120146A (zh) * 2015-08-05 2015-12-02 普宙飞行器科技(深圳)有限公司 一种利用无人机进行运动物体自动锁定拍摄装置及拍摄方法
CN105487552A (zh) * 2016-01-07 2016-04-13 深圳一电航空技术有限公司 无人机跟踪拍摄的方法及装置
CN205453893U (zh) * 2016-03-31 2016-08-10 深圳奥比中光科技有限公司 无人机

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007047953A2 (en) * 2005-10-20 2007-04-26 Prioria, Inc. System and method for onboard vision processing
US7643893B2 (en) * 2006-07-24 2010-01-05 The Boeing Company Closed-loop feedback control using motion capture systems
CN102967305B (zh) * 2012-10-26 2015-07-01 南京信息工程大学 基于大小回字标志物的多旋翼无人机位姿获取方法
TWI499223B (zh) * 2013-06-07 2015-09-01 Pixart Imaging Inc 指向式機器人之遙控系統
US8903568B1 (en) * 2013-07-31 2014-12-02 SZ DJI Technology Co., Ltd Remote control method and terminal
CN103426282A (zh) * 2013-07-31 2013-12-04 深圳市大疆创新科技有限公司 遥控方法及终端
DE102014110265A1 (de) * 2014-07-22 2016-01-28 Vorwerk & Co. Interholding Gmbh Verfahren zur Reinigung oder Bearbeitung eines Raumes mittels eines selbsttätig verfahrbaren Gerätes
JP6466669B2 (ja) * 2014-08-29 2019-02-06 三菱重工業株式会社 作業ロボットシステム及び作業ロボットシステムの制御方法
KR102243659B1 (ko) * 2014-12-29 2021-04-23 엘지전자 주식회사 이동 단말기 및 그 제어 방법
WO2016168722A1 (en) * 2015-04-16 2016-10-20 Skydio, Inc. Magic wand interface and other user interaction paradigms for a flying digital assistant
JP6558169B2 (ja) * 2015-09-16 2019-08-14 株式会社デンソーウェーブ 飛行配送システム
JP2017096789A (ja) * 2015-11-25 2017-06-01 中国電力株式会社 貯炭温度測定装置及び貯炭温度測定方法
EP3424814A4 (en) * 2016-03-02 2019-11-13 Nec Corporation PILOT-FREE AIR VEHICLE, UNMANNED AERIAL VEHICLE CONTROL SYSTEM, FLIGHT CONTROL METHOD, AND PROGRAM STORAGE MEDIUM
WO2018018378A1 (zh) * 2016-07-25 2018-02-01 深圳市大疆创新科技有限公司 控制移动物体移动的方法、装置和系统
CN106231142A (zh) * 2016-10-21 2016-12-14 广东容祺智能科技有限公司 一种无人机手机遥控器
CN106503671B (zh) * 2016-11-03 2019-07-12 厦门中控智慧信息技术有限公司 确定人脸姿态的方法和装置
CN106529538A (zh) * 2016-11-24 2017-03-22 腾讯科技(深圳)有限公司 一种飞行器的定位方法和装置
CN106679648B (zh) * 2016-12-08 2019-12-10 东南大学 一种基于遗传算法的视觉惯性组合的slam方法
CN106683137B (zh) * 2017-01-11 2019-12-31 中国矿业大学 基于人工标志的单目多目标识别与定位方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2202671A2 (en) * 2008-12-26 2010-06-30 Canon Kabushiki Kaisha Subject tracking apparatus and control method therefor, image capturing apparatus, and display apparatus
US20100253798A1 (en) * 2009-04-02 2010-10-07 Nikon Corporation Image processing apparatus, digital camera, and recording medium
US20130107066A1 (en) * 2011-10-27 2013-05-02 Qualcomm Incorporated Sensor aided video stabilization
CN105120146A (zh) * 2015-08-05 2015-12-02 普宙飞行器科技(深圳)有限公司 一种利用无人机进行运动物体自动锁定拍摄装置及拍摄方法
CN105487552A (zh) * 2016-01-07 2016-04-13 深圳一电航空技术有限公司 无人机跟踪拍摄的方法及装置
CN205453893U (zh) * 2016-03-31 2016-08-10 深圳奥比中光科技有限公司 无人机

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3674210A4 *

Also Published As

Publication number Publication date
EP3674210A1 (en) 2020-07-01
JP6943988B2 (ja) 2021-10-06
US20200125100A1 (en) 2020-04-23
EP3674210A4 (en) 2020-09-23
JP2020534198A (ja) 2020-11-26
CN108713179A (zh) 2018-10-26

Similar Documents

Publication Publication Date Title
WO2019051832A1 (zh) 可移动物体控制方法、设备及系统
US11797009B2 (en) Unmanned aerial image capture platform
CN112567201B (zh) 距离测量方法以及设备
US11644832B2 (en) User interaction paradigms for a flying digital assistant
WO2018227350A1 (zh) 无人机返航控制方法、无人机和机器可读存储介质
CN111344644B (zh) 用于基于运动的自动图像捕获的技术
CN108351654B (zh) 用于视觉目标跟踪的系统和方法
WO2018086130A1 (zh) 飞行轨迹的生成方法、控制装置及无人飞行器
WO2020107372A1 (zh) 拍摄设备的控制方法、装置、设备及存储介质
JP5775632B2 (ja) 飛行体の飛行制御システム
WO2020014987A1 (zh) 移动机器人的控制方法、装置、设备及存储介质
KR101959366B1 (ko) 무인기와 무선단말기 간의 상호 인식 방법
WO2021250914A1 (ja) 情報処理装置、移動装置、情報処理システム、および方法、並びにプログラム
WO2020038720A1 (en) Apparatus, method and computer program for detecting the form of a deformable object
WO2020019175A1 (zh) 图像处理方法和设备、摄像装置以及无人机
KR102181809B1 (ko) 시설물 점검 장치 및 방법
TWI738315B (zh) 基於光標籤的自動追蹤拍攝系統
KR20190053018A (ko) 카메라를 포함하는 무인 비행 장치를 조종하는 방법 및 전자장치
KR20230016390A (ko) 광시야각의 스테레오 카메라 장치 및 이를 이용한 깊이 영상 처리 방법
TWM601357U (zh) 基於光標籤的自動追蹤拍攝系統
JP2024501368A (ja) 3次元位置を決定するための方法およびシステム
KR20190118486A (ko) 무인기와 무선단말기 간의 상호 인식 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17925187

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019571295

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017925187

Country of ref document: EP

Effective date: 20200325