WO2019095210A1 - Smart glasses and method for controlling a gimbal thereof, gimbal, control method, and drone - Google Patents

Smart glasses and method for controlling a gimbal thereof, gimbal, control method, and drone

Info

Publication number
WO2019095210A1
WO2019095210A1 · PCT/CN2017/111367 · CN2017111367W
Authority
WO
WIPO (PCT)
Prior art keywords
smart glasses
time
data
posture data
pan
Application number
PCT/CN2017/111367
Other languages
English (en)
French (fr)
Inventor
魏亮辉
Original Assignee
深圳市大疆创新科技有限公司
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN201780035851.1A priority Critical patent/CN109313455B/zh
Priority to CN202111061018.9A priority patent/CN113759948A/zh
Priority to PCT/CN2017/111367 priority patent/WO2019095210A1/zh
Publication of WO2019095210A1 publication Critical patent/WO2019095210A1/zh

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/08Control of attitude, i.e. control of roll, pitch, or yaw
    • G05D1/0808Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D3/00Control of position or direction
    • G05D3/12Control of position or direction using feedback
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems

Definitions

  • the invention relates to the field of drones, and in particular to a method by which smart glasses control a gimbal, a control method for a gimbal, smart glasses, a gimbal, and a drone.
  • the smart glasses move along with the user's head.
  • when the user's head shakes slightly, the pan/tilt is shaken correspondingly under the remote control driven by the inertial measurement unit, causing the video captured by the camera to jitter as well;
  • when the head stops turning, it actually rolls back slightly, which causes the pan/tilt to roll back under the same remote control, so that the captured video also exhibits a slight rollback.
  • Embodiments of the present invention provide a method by which smart glasses control a gimbal, a control method for a gimbal, smart glasses, a pan/tilt head, and a drone.
  • a method for controlling a pan/tilt according to an embodiment of the present invention includes:
  • the smart glasses of the embodiments of the present invention are used to control a gimbal, and the smart glasses include a processor, the processor being configured to:
  • the pan/tilt of the embodiment of the present invention includes a processor, and the processor is configured to:
  • the pan/tilt head is disposed on the body.
  • the method for controlling the gimbal of the smart glasses according to the embodiment of the present invention, the control method of the gimbal, the smart glasses, the pan/tilt, and the drone determine whether the smart glasses have jitter or rollback according to the posture data of the smart glasses. When jitter or rollback is present, the posture data of the smart glasses are processed to remove the jitter and rollback, thereby determining the target posture data used to control the pan/tilt, so that the pan/tilt remotely controlled by the smart glasses neither jitters nor rolls back.
  • FIG. 1 is a schematic flow chart of a method for controlling a gimbal of smart glasses according to an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of an application scenario of smart glasses according to an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of a scenario in which smart glasses control a gimbal according to an embodiment of the present invention;
  • FIG. 4 is a schematic flow chart of a method for controlling a gimbal of smart glasses according to an embodiment of the present invention;
  • FIG. 5 is a schematic flow chart of a method for controlling a gimbal of smart glasses according to an embodiment of the present invention;
  • FIG. 6 is a schematic flow chart of a method for controlling a gimbal of smart glasses according to an embodiment of the present invention;
  • FIG. 7 is a schematic flow chart of a method for controlling a gimbal according to an embodiment of the present invention;
  • FIG. 8 is a schematic flow chart of a method for controlling a gimbal according to an embodiment of the present invention;
  • FIG. 9 is a schematic flow chart of a method for controlling a gimbal according to an embodiment of the present invention.
  • the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • features defined by "first" or "second" may explicitly or implicitly include one or more of the described features.
  • the meaning of "a plurality" is two or more unless specifically defined otherwise.
  • unless otherwise expressly specified, the terms "mounted", "connected", and "coupled" are to be understood broadly: the connection may be fixed, detachable, or integral; it may be mechanical, electrical, or communicative; it may be direct, or indirect through an intermediate medium; and it may denote internal communication between two elements or an interaction between two elements.
  • unless otherwise expressly specified, a first feature being "on" or "below" a second feature includes the case where the first and second features are in direct contact, as well as the case where they are not in direct contact but contact through an additional feature between them.
  • moreover, a first feature being "above", "over", or "on" a second feature includes the first feature being directly above or obliquely above the second feature, or merely indicates that the first feature is at a level higher than the second feature.
  • a first feature being "below", "under", or "beneath" a second feature includes the first feature being directly below or obliquely below the second feature, or merely indicates that the first feature is at a level lower than the second feature.
  • a method for controlling a gimbal of smart glasses includes:
  • S10: determining, according to the posture data of the smart glasses, whether the smart glasses have jitter or rollback;
  • S20: processing the posture data of the smart glasses to determine target posture data when the smart glasses have jitter or rollback; and
  • S30: sending the target posture data to the pan/tilt to control the pan/tilt.
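As an illustration only, the three steps above might be sketched in Python as follows. The helper name `next_target`, the 0.5-degree threshold, and the 0.5 smoothing factor are all assumptions for the sketch; the patent does not specify code or numeric values.

```python
# Minimal sketch of the S10-S30 flow; threshold and smoothing factor are
# illustrative assumptions, not values given by the patent.
PREDETERMINED_THRESHOLD = 0.5  # degrees (assumed)
SMOOTHING_ALPHA = 0.5          # smoothing factor (assumed)

def next_target(history, last_sent, y_new,
                threshold=PREDETERMINED_THRESHOLD, alpha=SMOOTHING_ALPHA):
    """Compute the target posture to send for the newest sample y_new.

    history   -- raw samples Y(1)..Y(n) acquired at the first N times
    last_sent -- posture S(n) sent to the pan/tilt at the Nth time
    """
    if abs(y_new - last_sent) > threshold:
        # Large change: treated as genuine motion, no jitter/rollback.
        return last_sent + alpha * (y_new - last_sent)
    # Small change: check whether the motion direction stays consistent.
    diffs = [b - a for a, b in zip(history, history[1:] + [y_new])]
    if all(d >= 0 for d in diffs) or all(d <= 0 for d in diffs):
        # Same direction throughout: no jitter/rollback, smooth toward y_new.
        return last_sent + alpha * (y_new - last_sent)
    # Direction reversed: jitter/rollback, so hold the previous command.
    return last_sent
```

The returned value plays the role of the target posture data of S30; holding `last_sent` on a direction reversal corresponds to resending S(n) as described later in the text.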
  • the smart glasses 10 of the embodiment of the present invention are used to control the pan/tilt head 20.
  • the smart glasses 10 include a processor 12.
  • the method of controlling the pan/tilt of the smart glasses according to the embodiment of the present invention can be applied to the smart glasses 10 of the embodiment of the present invention.
  • processor 12 can be used to perform the methods in S10, S20, and S30.
  • the processor 12 can be configured to: determine whether the smart glasses 10 have jitter or rollback according to the posture data of the smart glasses 10; process the posture data of the smart glasses 10 to determine target posture data when the smart glasses 10 have jitter or rollback; and transmit the target posture data to the pan/tilt head 20 to control the pan/tilt head 20.
  • after collecting the original posture data of the smart glasses 10, the method for controlling the pan/tilt of the smart glasses according to the embodiment of the present invention and the smart glasses 10 do not immediately transmit those raw data to the pan/tilt head 20. Instead, they determine from the posture data of the smart glasses 10 (the posture data at the current time together with the posture data before the current time) whether the smart glasses 10 have jitter or rollback. When jitter or rollback is present, the raw posture data are processed to remove the jitter and rollback, the resulting new posture data are taken as the target posture data, and the target posture data are then transmitted to the pan/tilt head 20 to control its motion. In this way, the pan/tilt head 20 remotely controlled by the smart glasses 10 is prevented from jittering or rolling back.
  • the step of determining whether the smart glasses have jitter or rollback according to the posture data of the smart glasses includes:
  • processor 12 can be used to perform the methods in S11 and S12.
  • the processor 12 is further configured to: acquire the current posture data Y(n+1) of the smart glasses 10 at the (N+1)th time; and determine, according to the current posture data Y(n+1), the acquired posture data Y(1), Y(2), ..., Y(n) of the smart glasses 10 at the first N times, and the posture data S(n) of the smart glasses 10 transmitted to the pan/tilt head 20 at the Nth time, whether the smart glasses 10 have jitter or rollback.
  • the processor 12 may acquire the current posture data Y(n+1) of the smart glasses 10 at a predetermined frequency, for example, the predetermined frequency may be 50 Hz.
  • N ⁇ 1 and N is an integer.
  • the smart glasses 10 acquire their own current posture data Y(n+1) at the predetermined frequency and determine, according to the current posture data Y(n+1), the previously acquired posture data Y(1), Y(2), ..., Y(n) at the first N times, and the posture data S(n) transmitted to the pan/tilt head 20 at the Nth time, whether the smart glasses 10 have jitter or rollback.
  • for example, with N = 3, the smart glasses 10 acquire the current posture data Y(4) at the fourth time and then determine, according to Y(4), the acquired posture data Y(1) of the first time, Y(2) of the second time, Y(3) of the third time, and the posture data S(3) transmitted to the pan/tilt head 20 at the third time, whether the smart glasses 10 have jitter or rollback.
  • note that the acquired posture data of the smart glasses 10 referred to here are the original, unprocessed posture data, whereas the posture data transmitted to the pan/tilt head 20 may be processed posture data (for example, data after jitter and rollback removal, or smoothed data); that is, S(n) and Y(n) may be the same or different.
  • the smart glasses include an inertial measurement unit.
  • the step of acquiring the current posture data Y(n+1) of the smart glasses at the (N+1)th time (i.e., S11) includes:
  • S111: acquiring the attitude data of the inertial measurement unit at a predetermined frequency; and
  • S112: converting the attitude data of the inertial measurement unit into the posture data of the smart glasses.
  • the smart glasses 10 include an inertial measurement unit 14 .
  • Processor 12 can be used to perform the methods in S111 and S112.
  • the processor 12 is further operable to: acquire the attitude data of the inertial measurement unit 14 at a predetermined frequency; and convert the attitude data of the inertial measurement unit 14 into the attitude data of the smart glasses 10.
  • a 6-axis or 9-axis inertial measurement unit 14 may be disposed on the smart glasses 10. It can be understood that there is a predetermined correspondence relationship between the posture data of the inertial measurement unit 14 and the posture data of the smart glasses 10.
  • the smart glasses 10 acquire the attitude data of the inertial measurement unit 14 at a predetermined frequency (for example, 50 Hz) and then convert the attitude data of the inertial measurement unit 14 into their own posture data according to the correspondence relationship.
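The patent states only that a predetermined correspondence relates IMU attitude to glasses attitude. As one hypothetical form such a correspondence could take, a fixed per-axis mounting offset can be applied; the offset values below are purely illustrative assumptions.

```python
# Hypothetical correspondence: the IMU is assumed rigidly mounted with a
# fixed angular offset per axis. These offsets are not from the patent.
MOUNT_OFFSET = {"yaw": 0.0, "pitch": -5.0, "roll": 0.0}  # degrees (assumed)

def imu_to_glasses(imu_attitude):
    """Convert raw IMU angles (dict of degrees) to glasses posture data
    by applying the assumed fixed mounting offset on each axis."""
    return {axis: value + MOUNT_OFFSET[axis]
            for axis, value in imu_attitude.items()}
```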
  • the attitude data includes at least one of a yaw angle, a roll angle, a pitch angle, a yaw rate, a roll angular velocity, and a pitch angular velocity.
  • the attitude data may include only a yaw angle; or a roll angle; or a yaw angle and a pitch angular velocity; or a yaw angle, a pitch angle, and a roll angular velocity; or a yaw angle, a roll angle, a pitch angle, a yaw rate, a roll angular velocity, and a pitch angular velocity.
  • the pitch angle, the yaw angle, and the roll angle respectively correspond to angles of rotation around the X-axis, the Y-axis, and the Z-axis in the three-dimensional space rectangular coordinate system.
  • the yaw angle, the roll angle, and the pitch angle all take values in the interval (-180°, 180°).
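Since all three angles take values in (-180°, 180°), an implementation would typically wrap computed angles back into that range. A small helper of our own (not the patent's) might look like this; the exact boundary handling is a convention we chose, since a wrap of exactly 180° cannot land inside the open interval.

```python
def wrap_angle(deg):
    """Wrap an angle in degrees into [-180, 180).

    The boundary convention (180 maps to -180) is ours; the text only
    states the open interval (-180, 180)."""
    return (deg + 180.0) % 360.0 - 180.0
```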
  • the smart glasses 10 control the motion of the pan/tilt head 20 through the target posture data transmitted to the pan/tilt head 20, so that the pan/tilt head 20 follows the posture of the smart glasses 10.
  • the pan/tilt head 20 includes a drive motor; the motion of the drive motor is controlled based on the target attitude data transmitted by the smart glasses 10 so that the attitude of the pan/tilt head 20 follows the posture of the smart glasses 10.
  • the pan/tilt head 20 may be a two-axis pan/tilt head 20 or a three-axis pan/tilt head 20, etc., and is not limited herein.
  • the gimbal 20 is schematically illustrated as a three-axis pan/tilt.
  • the drive motor includes a first motor, a second motor, and a third motor.
  • the first motor is used to drive the pitch-axis bracket or the photographing device 24 to rotate about the pitch axis;
  • the second motor is used to drive the roll-axis bracket or the photographing device 24 to rotate about the roll axis; and
  • the third motor is used to drive the yaw-axis bracket or the photographing device 24 to rotate about the yaw axis.
  • the step of determining whether the smart glasses have jitter or rollback according to the current posture data Y(n+1), the acquired posture data Y(1), Y(2), ..., Y(n) of the smart glasses at the first N times, and the posture data S(n) transmitted to the pan/tilt at the Nth time includes:
  • S121: calculating whether the difference between the current posture data Y(n+1) and the posture data S(n) transmitted to the pan/tilt at the Nth time is greater than a predetermined threshold; and
  • S122: determining that the smart glasses have no jitter or rollback when the difference is greater than the predetermined threshold.
  • the method for controlling the gimbal of the smart glasses further includes:
  • S40: when the smart glasses have no jitter or rollback, determining the target posture data according to the current posture data Y(n+1) and the posture data S(n) of the smart glasses transmitted to the pan/tilt at the Nth time.
  • processor 12 can be used to perform the methods in S121, S122, and S40.
  • the processor 12 is further configured to: calculate whether the difference between the current posture data Y(n+1) and the posture data S(n) of the smart glasses 10 transmitted to the pan/tilt head 20 at the Nth time is greater than a predetermined threshold; and determine, when the difference is greater than the predetermined threshold, that the smart glasses 10 have no jitter or rollback.
  • the processor 12 is further configured to: when the smart glasses 10 have no jitter or rollback, determine the target posture data according to the current posture data Y(n+1) and the posture data S(n) transmitted to the pan/tilt head 20 at the Nth time.
  • taking N = 3 as an example, the smart glasses 10 calculate whether the difference between the current posture data Y(4) and the posture data S(3) sent to the pan/tilt head 20 at the third time is greater than the predetermined threshold. When the difference is greater than the predetermined threshold, indicating that the smart glasses 10 have no jitter or rollback, the smart glasses 10 determine the target posture data according to the current posture data Y(4) and the posture data S(3) sent to the pan/tilt head 20 at the third time. In some embodiments, the smart glasses 10 smooth the current posture data Y(4) according to the posture data S(3) sent to the pan/tilt head 20 at the third time to determine the target posture data. In this way, the motion of the pan/tilt head 20 is smoother, and when the photographing device 24 is mounted on the pan/tilt head 20, the video it captures is correspondingly smoother and clearer.
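The patent does not fix a smoothing formula. One simple way to realize the smoothing described here, sketched under our own assumptions, is exponential blending of the new sample toward the previously sent command; the factor 0.5 is illustrative only.

```python
SMOOTHING_ALPHA = 0.5  # assumed smoothing factor, not specified by the patent

def smooth_target(y_current, s_previous, alpha=SMOOTHING_ALPHA):
    """Blend the new raw sample Y(n+1) toward the previously sent
    command S(n); alpha=1 passes Y through, alpha=0 holds S."""
    return s_previous + alpha * (y_current - s_previous)
```

With alpha between 0 and 1, successive commands change gradually, which is what makes the gimbal motion (and hence the captured video) smoother.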
  • in some embodiments, the step of determining whether the smart glasses have jitter or rollback further includes:
  • S123: when the difference is less than or equal to the predetermined threshold, determining, according to the acquired posture data Y(1), Y(2), ..., Y(n) of the smart glasses at the first N times and the current posture data Y(n+1), whether the moving direction of the smart glasses at the (N+1)th time is the same as the moving direction at the first N times; and
  • S124: determining that the smart glasses have no jitter or rollback when the moving directions are the same.
  • the step of processing the posture data of the smart glasses to determine the target posture data when the smart glasses have jitter or rollback (i.e., S20) includes:
  • S21: taking the posture data S(n) transmitted to the pan/tilt at the Nth time as the target posture data.
  • the method for controlling the gimbal of the smart glasses further includes:
  • S40: when the smart glasses have no jitter or rollback, determining the target posture data according to the current posture data Y(n+1) and the posture data S(n) of the smart glasses transmitted to the pan/tilt at the Nth time.
  • processor 12 can be used to perform the methods in S121, S122, S123, S124, S21, and S40.
  • the processor 12 is further configured to: calculate whether the difference between the current posture data Y(n+1) and the posture data S(n) of the smart glasses 10 transmitted to the pan/tilt head 20 at the Nth time is greater than a predetermined threshold; when the difference is less than or equal to the predetermined threshold, determine, according to the acquired posture data Y(1), Y(2), ..., Y(n) of the first N times and the current posture data Y(n+1), whether the moving direction of the smart glasses 10 at the (N+1)th time is the same as the moving direction at the first N times; and, when the moving directions are the same, determine that the smart glasses 10 have no jitter or rollback.
  • the processor 12 is further configured to: when the smart glasses 10 have no jitter or rollback, determine the target posture data according to the current posture data Y(n+1) and the posture data S(n) transmitted to the pan/tilt head 20 at the Nth time.
  • taking N = 3 as an example, the smart glasses 10 calculate whether the difference between the current posture data Y(4) and the posture data S(3) sent to the pan/tilt head 20 at the third time is greater than the predetermined threshold; when the difference is less than or equal to the predetermined threshold, they determine, according to the acquired posture data Y(1) of the first time, Y(2) of the second time, Y(3) of the third time, and the current posture data Y(4), whether the moving direction of the smart glasses 10 at the fourth time is the same as the moving direction at the first three times.
  • here, the moving direction of the smart glasses 10 at the fourth time being the same as the moving direction at the first three times means that the moving direction of the smart glasses 10 at the fourth time relative to the third time, the moving direction at the third time relative to the second time, and the moving direction at the second time relative to the first time are all the same.
  • correspondingly, the moving direction of the smart glasses 10 at the fourth time differing from the moving direction at the first three times means that at least one of the moving direction of the smart glasses 10 at the fourth time relative to the third time, the moving direction at the third time relative to the second time, and the moving direction at the second time relative to the first time differs from the others.
  • for example, when the moving direction of the smart glasses 10 at the fourth time relative to the third time differs from the moving direction at the third time relative to the second time, while the moving direction at the third time relative to the second time is the same as the moving direction at the second time relative to the first time, it is determined that the moving direction of the smart glasses 10 at the fourth time differs from the moving direction at the first three times.
  • when the moving direction of the smart glasses 10 at the fourth time is the same as the moving direction at the first three times, the smart glasses 10 have no jitter or rollback.
  • when the moving direction of the smart glasses 10 at the fourth time differs from the moving direction at the first three times, the smart glasses 10 have at least one of jitter and rollback.
  • in that case, the smart glasses 10 take the posture data S(3) transmitted to the pan/tilt head 20 at the third time as the target posture data; that is, at the fourth time, the smart glasses 10 transmit the posture data S(3) of the third time to the pan/tilt head 20 again.
  • otherwise, the smart glasses 10 determine the target posture data based on the current posture data Y(4) and the posture data S(3) transmitted to the pan/tilt head 20 at the third time.
  • in some embodiments, whether the moving direction of the smart glasses at the (N+1)th time is the same as the moving direction at the first N times is judged from the acquired posture data Y(1), Y(2), ..., Y(n) and the current posture data Y(n+1): the moving direction of the smart glasses at the (N+1)th time is determined to be the same as the moving direction at the first N times when the differences between the posture data of the smart glasses at every pair of adjacent times among the first (N+1) times are all greater than or equal to zero, or are all less than or equal to zero.
  • specifically, the processor 12 is further configured to determine that the moving direction of the smart glasses 10 at the (N+1)th time is the same as the moving direction at the first N times by judging that the differences between the posture data of the smart glasses 10 at every pair of adjacent times among the first (N+1) times are all greater than or equal to zero, or are all less than or equal to zero.
  • taking N = 3 as an example, when the values of Y(4)-Y(3), Y(3)-Y(2), and Y(2)-Y(1) are all greater than or equal to zero, the moving direction of the smart glasses 10 at the fourth time relative to the third time, at the third time relative to the second time, and at the second time relative to the first time are all the same; likewise, when the values of Y(4)-Y(3), Y(3)-Y(2), and Y(2)-Y(1) are all less than or equal to zero, those three relative moving directions are also all the same.
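The all-nonnegative-or-all-nonpositive difference test described above translates directly to code. The sketch below uses names of our choosing; the patent itself prescribes only the sign condition, not an implementation.

```python
def same_direction(samples):
    """Return True when consecutive differences Y(k+1)-Y(k) over the
    window are all >= 0 or all <= 0, i.e. the motion never reverses."""
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    return all(d >= 0 for d in diffs) or all(d <= 0 for d in diffs)
```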
  • in the following examples, the yaw angle is used as the attitude data.
  • the posture data of the smart glasses acquired at consecutive times are, for example: 0.1, 0.2, 0.1, 0.2, 0.3, 0.4, 0.2, in degrees. It can be seen that the head has a slight jitter. If these posture data were sent directly to the pan/tilt, the pan/tilt would jitter correspondingly, and the video captured by the camera on the pan/tilt would also jitter.
  • when the user's head, wearing the smart glasses, turns from one orientation to another and stops suddenly, for example from a position of 10 degrees to a position of 20 degrees, the posture data of the smart glasses acquired at successive times are: 10.0, 10.5, 10.8, 11.2, ..., 19.8, 20.2, 20.0, 19.6, 19.5, 19.6, in degrees. It can be seen that the head has a slight rollback. If these posture data were sent directly to the pan/tilt, the pan/tilt would roll back accordingly, and the video captured by the camera on the pan/tilt would also roll back.
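Using the two sample traces above, a direction-reversal test of the kind described (our sketch; the function name is an assumption) flags both the jitter and the rollback sequences while passing a steadily turning head:

```python
def has_jitter_or_rollback(window):
    """Flag a window whose consecutive differences change sign, i.e. the
    motion direction reverses somewhere inside the window."""
    diffs = [b - a for a, b in zip(window, window[1:])]
    return not (all(d >= 0 for d in diffs) or all(d <= 0 for d in diffs))

# Sample traces from the text (degrees of yaw); the elided middle
# samples of the rollback trace ("...") are simply omitted here.
jitter_sample = [0.1, 0.2, 0.1, 0.2, 0.3, 0.4, 0.2]
rollback_sample = [10.0, 10.5, 10.8, 11.2, 19.8, 20.2,
                   20.0, 19.6, 19.5, 19.6]
```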
  • the effectiveness of the method for controlling the gimbal of the smart glasses according to the embodiment of the present invention will be described below from two aspects of jitter and rollback, respectively.
  • for the jitter case, the target posture data transmitted to the pan/tilt head 20 before the method for controlling the gimbal of the smart glasses according to the embodiment of the present invention is used are shown in Table 1, and the target posture data transmitted to the pan/tilt head 20 after the method is used are shown in Table 2.
  • for the rollback case, the target posture data transmitted to the pan/tilt head 20 before the method for controlling the gimbal of the smart glasses according to the embodiment of the present invention is used are shown in Table 3, and the target posture data transmitted to the pan/tilt head 20 after the method is used are shown in Table 4.
  • by performing jitter-removal and rollback-removal processing on the acquired posture data of the smart glasses 10, the method for controlling the pan/tilt of the smart glasses according to the embodiment of the present invention keeps the pan/tilt head 20 from jittering or rolling back, thereby improving the quality of the video captured by the photographing device 24.
  • a method for controlling a pan/tilt includes:
  • S50: determining, according to the received attitude data sent by the smart glasses and the attitude data of the pan/tilt, whether the smart glasses have jitter or rollback;
  • S60: processing the posture data of the smart glasses to determine target posture data when the smart glasses have jitter or rollback; and
  • S70: controlling the motion of the pan/tilt according to the target posture data.
  • the pan/tilt head 20 of the embodiment of the present invention includes a processor 22.
  • the control method of the pan/tilt according to the embodiment of the present invention can be applied to the pan/tilt head 20 of the embodiment of the present invention.
  • processor 22 can be used to perform the methods in S50, S60, and S70.
  • the processor 22 can be configured to: determine whether the smart glasses 10 have jitter or rollback according to the received posture data transmitted by the smart glasses 10 and the attitude data of the pan/tilt head 20; process the posture data of the smart glasses 10 to determine target posture data when the smart glasses 10 have jitter or rollback; and control the motion of the pan/tilt head 20 according to the target posture data.
  • after receiving the original posture data of the smart glasses 10, the control method of the pan/tilt head according to the embodiment of the present invention does not immediately control the motion of the pan/tilt head 20 according to those raw data. Instead, it determines from the posture data transmitted by the smart glasses 10 and the attitude data of the pan/tilt head 20 whether the smart glasses 10 have jitter or rollback; when jitter or rollback is present, the raw posture data are processed to remove it, the resulting new posture data are taken as the target posture data, and the motion of the pan/tilt head 20 is then controlled according to the target posture data. In this way, the pan/tilt head 20 remotely controlled by the smart glasses 10 is prevented from jittering or rolling back when the smart glasses 10 jitter or roll back.
  • specifically, the difference between the target attitude data and the current attitude data of the pan/tilt may be calculated, and the amount of motion required for the pan/tilt to move from the current attitude to the target attitude is then determined from the difference, so as to control the pan/tilt head 20 to perform the corresponding motion.
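The difference-to-motion computation just described might look like the following sketch. The shortest-path wrap is our assumption, chosen to be consistent with the (-180°, 180°) angle range used throughout the document; the patent does not specify the formula.

```python
def motion_amount(target, current):
    """Signed angular error (degrees) the pan/tilt must travel to go
    from its current attitude to the target attitude, taking the
    shortest path around the circle (our assumption)."""
    return (target - current + 180.0) % 360.0 - 180.0
```

For example, moving from -170 degrees to +170 degrees yields -20 degrees (through the 180-degree boundary) rather than +340.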
  • the step of determining whether the smart glasses have jitter or rollback according to the attitude data sent by the received smart glasses and the attitude data of the pan/tilt includes:
  • processor 22 can be used to perform the methods in S51 and S52.
  • the processor 22 is further configured to: receive the current attitude data X(n+1) of the smart glasses 10 at the (N+1)th time; and determine, according to the current attitude data X(n+1), the received posture data X(1), X(2), ..., X(n) of the smart glasses 10 at the first N times, and the attitude data P(n) of the pan/tilt head 20 at the Nth time, whether the smart glasses 10 have jitter or rollback.
  • the processor 22 may receive the current pose data X(n+1) of the smart glasses 10 at a predetermined frequency, for example, the predetermined frequency may be 50 Hz.
  • N ⁇ 1 and N is an integer.
  • the pan/tilt head 20 receives the current attitude data X(n+1) of the smart glasses 10 at the predetermined frequency and determines, according to X(n+1), the received posture data X(1), X(2), ..., X(n) of the smart glasses 10 at the first N times, and its own attitude data P(n) at the Nth time, whether the smart glasses 10 have jitter or rollback.
  • taking N = 3 as an example, the pan/tilt head 20 receives the current posture data X(4) of the smart glasses 10 at the fourth time and then determines, according to X(4), the received posture data X(1) of the first time, X(2) of the second time, X(3) of the third time, and its own posture data P(3) at the third time, whether the smart glasses 10 have jitter or rollback.
  • note that the received attitude data of the smart glasses 10 referred to here are the original, unprocessed posture data, whereas the attitude data of the pan/tilt head 20 may be processed posture data (for example, data after jitter and rollback removal, or smoothed data); that is, P(n) and X(n) may be the same or different.
  • the attitude data includes at least one of a yaw angle, a roll angle, a pitch angle, a yaw rate, a roll angular velocity, and a pitch angular velocity.
  • the attitude data may include only a yaw angle; or a roll angle; or a yaw angle and a pitch angular velocity; or a yaw angle, a pitch angle, and a roll angular velocity; or a yaw angle, a roll angle, a pitch angle, a yaw rate, a roll angular velocity, and a pitch angular velocity.
  • the pitch angle, the yaw angle, and the roll angle respectively correspond to angles of rotation around the X-axis, the Y-axis, and the Z-axis in the three-dimensional space rectangular coordinate system.
  • the ranges of the yaw angle, the roll angle, and the pitch angle are all (-180°, 180°].
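Because yaw, roll, and pitch all live in the wrapped interval (-180°, 180°], a plain subtraction of two angles near the ±180° seam can overstate the motion. The patent does not say how the difference is computed, so the following wrapping helper is an illustrative assumption, not the claimed method:

```python
def angular_diff(a, b):
    """Signed shortest-path difference a - b in degrees, wrapped into (-180, 180]."""
    d = (a - b + 180.0) % 360.0 - 180.0   # maps into [-180, 180)
    return d + 360.0 if d <= -180.0 else d  # fold -180 onto +180 to match (-180, 180]
```

With such a helper, a threshold comparison like |X(n+1) - P(n)| > threshold can use abs(angular_diff(...)) so that a head turn crossing the seam (e.g., 170° to -170°) is read as a 20° step rather than a 340° jump.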
  • the control method drives the pan/tilt head 20 according to the target attitude data so that it follows the motion of the smart glasses 10.
  • the pan/tilt head 20 may be a two-axis pan/tilt head, a three-axis pan/tilt head, or the like, which is not described in detail here.
  • the pan/tilt head 20 is provided with a photographing device 24.
  • the photographing device 24 is used to record a video or take an image or the like.
  • the step of judging whether the smart glasses exhibit jitter or rollback according to the current attitude data X(n+1), the received attitude data of the smart glasses at the first N moments, and the attitude data P(n) of the pan/tilt at the Nth moment includes:
  • S521: calculating whether the difference between the current attitude data X(n+1) and the attitude data P(n) of the pan/tilt at the Nth moment is greater than a predetermined threshold; and
  • S522: determining that the smart glasses exhibit no jitter or rollback when the difference is greater than the predetermined threshold.
  • the control method also includes:
  • S80: determining, when the smart glasses exhibit no jitter or rollback, the target attitude data according to the current attitude data X(n+1) and the attitude data P(n) of the pan/tilt at the Nth moment.
  • processor 22 can be used to perform the methods in S521, S522, and S80.
  • the processor 22 is further configured to: calculate whether the difference between the current attitude data X(n+1) and the attitude data P(n) of the pan/tilt head 20 at the Nth moment is greater than a predetermined threshold; and determine that the smart glasses 10 exhibit no jitter or rollback when the difference is greater than the predetermined threshold.
  • the processor 22 is further configured to: when the smart glasses 10 have no jitter or rollback, determine the target posture data according to the current posture data X(n+1) and the posture data P(n) of the PTZ 20 at the Nth time.
  • the pan/tilt head 20 calculates whether the difference between the current attitude data X(4) and the attitude data P(3) of the pan/tilt head 20 at the third moment is greater than the set threshold; when the difference is greater than the predetermined threshold, the smart glasses 10 exhibit no jitter or rollback, and the pan/tilt head 20 determines the target attitude data according to the current attitude data X(4) and the attitude data P(3) of the pan/tilt head 20 at the third moment.
  • the pan/tilt head 20 is configured to smooth the current pose data X(4) according to the pose data P(3) of the pan/tilt head 20 at the third moment to determine the target pose data. In this way, the movement process of the gimbal 20 can be made smoother.
  • when the photographing device 24 is mounted on the pan/tilt head 20, the video captured by the photographing device 24 is also smoother and clearer.
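The large-difference branch above, together with the smoothing it mentions, can be sketched for one axis as follows. The patent does not give a smoothing formula, so the exponential blend and the `alpha` and `threshold` values are assumptions for illustration only:

```python
def next_target(x_new, p_prev, threshold=1.0, alpha=0.5):
    """Return the pan/tilt head's next target pose for one axis (degrees).

    If the new glasses pose differs from the last head pose by more than
    `threshold`, the move is treated as deliberate (no jitter/rollback) and
    the new pose is blended toward the previous one so the motion stays
    smooth. Otherwise the decision falls through to the direction check
    (S523), signalled here by returning None.
    """
    if abs(x_new - p_prev) > threshold:
        # deliberate move: smooth toward the new pose (blend is an assumption)
        return p_prev + alpha * (x_new - p_prev)
    return None  # inconclusive here; decided by the direction check
```

A step from 10° to 20° thus produces an intermediate target of 15°, while a 0.2° wiggle is left to the direction test instead of being forwarded directly.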
  • the step of judging whether the smart glasses exhibit jitter or rollback according to the current attitude data X(n+1), the received attitude data X(1), X(2), ..., X(n) of the smart glasses at the first N moments, and the attitude data P(n) of the pan/tilt at the Nth moment includes:
  • S523: judging, when the difference is less than or equal to the predetermined threshold, whether the moving direction of the smart glasses at the (N+1)th moment is the same as that at the first N moments; and S524: determining that the smart glasses exhibit jitter or rollback when the directions differ.
  • the step of processing the attitude data of the smart glasses to determine the target attitude data when the smart glasses exhibit jitter or rollback includes: S61: determining the attitude data P(n) of the pan/tilt at the Nth moment as the target attitude data.
  • the control method also includes:
  • S80: determining, when the smart glasses exhibit no jitter or rollback, the target attitude data according to the current attitude data X(n+1) and the attitude data P(n) of the pan/tilt at the Nth moment.
  • processor 22 can be used to perform the methods in S521, S522, S523, S524, S61, and S80.
  • the processor 22 is further configured to: calculate whether the difference between the current attitude data X(n+1) and the attitude data P(n) of the pan/tilt head 20 at the Nth moment is greater than a predetermined threshold; judge, when the difference is less than or equal to the predetermined threshold, according to the received attitude data X(1), X(2), ..., X(n) of the smart glasses 10 at the first N moments and the current attitude data X(n+1), whether the moving direction of the smart glasses 10 at the (N+1)th moment is the same as that at the first N moments; determine that the smart glasses 10 exhibit no jitter or rollback when the directions are the same; determine that the smart glasses 10 exhibit jitter or rollback when the directions differ; and, when jitter or rollback is present, determine the attitude data P(n) of the pan/tilt head 20 at the Nth moment as the target attitude data.
  • the processor 22 is further configured to: when the smart glasses 10 have no jitter or rollback, determine the target posture data according to the current posture data X(n+1) and the posture data P(n) of the PTZ 20 at the Nth time.
  • the pan/tilt head 20 calculates whether the difference between the current attitude data X(4) and the attitude data P(3) of the pan/tilt head 20 at the third moment is greater than the set threshold; when the difference is less than or equal to the predetermined threshold, it judges, according to the received attitude data X(1) of the first moment, X(2) of the second moment, X(3) of the third moment, and the current attitude data X(4), whether the moving direction of the smart glasses 10 at the fourth moment is the same as that at the first three moments.
  • the moving direction of the smart glasses 10 at the fourth moment being the same as that at the first three moments means: the moving direction of the smart glasses 10 at the fourth moment relative to the third moment, at the third moment relative to the second moment, and at the second moment relative to the first moment are all the same.
  • the moving direction of the smart glasses 10 at the fourth moment differing from that at the first three moments means: the moving direction of the smart glasses 10 at the fourth moment relative to the third moment differs from at least one of the moving direction at the third moment relative to the second moment and the moving direction at the second moment relative to the first moment.
  • for example, if the moving direction of the smart glasses 10 at the fourth moment relative to the third moment differs from that at the third moment relative to the second moment, while the direction at the third moment relative to the second moment is the same as that at the second moment relative to the first moment, it is still determined that the moving direction of the smart glasses 10 at the fourth moment differs from that at the first three moments.
  • if the moving direction of the smart glasses 10 at the fourth moment is the same as that at the first three moments, the smart glasses 10 exhibit no jitter or rollback; otherwise, the smart glasses 10 exhibit at least one of jitter and rollback.
  • when jitter or rollback is present, the pan/tilt head 20 uses the attitude data P(3) of the third moment as the target attitude data at the fourth moment; when no jitter or rollback is present, the pan/tilt head 20 determines the target attitude data according to the current attitude data X(4) and the attitude data P(3) of the third moment.
  • the step of judging, according to the received attitude data X(1), X(2), ..., X(n) of the smart glasses at the first N moments and the current attitude data X(n+1), whether the moving direction of the smart glasses at the (N+1)th moment is the same as that at the first N moments (i.e., S523) determines that the directions are the same either by judging that the differences between the attitude data of the smart glasses at every two adjacent moments among the first (N+1) moments are all greater than or equal to zero, or by judging that those differences are all less than or equal to zero.
  • the processor 22 is further configured to determine that the moving direction of the smart glasses 10 at the (N+1)th moment is the same as that at the first N moments either by judging that the differences between the attitude data of the smart glasses 10 at every two adjacent moments among the first (N+1) moments are all greater than or equal to zero, or by judging that those differences are all less than or equal to zero.
  • take the current moment being the fourth moment as an example.
  • when the values of X(4)-X(3), X(3)-X(2), and X(2)-X(1) are all greater than or equal to zero, the moving direction of the smart glasses 10 at the fourth moment relative to the third moment, at the third moment relative to the second moment, and at the second moment relative to the first moment are all the same.
  • likewise, when the values of X(4)-X(3), X(3)-X(2), and X(2)-X(1) are all less than or equal to zero, these three moving directions are also all the same.
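The same-direction test described above reduces to checking that all consecutive differences share a sign. A minimal sketch for an arbitrary window length (the function and variable names are ours, not the patent's):

```python
def same_direction(samples):
    """True if the pose sequence is monotonically non-decreasing or
    non-increasing, i.e. every adjacent difference has the same sign.
    Zero differences are compatible with either direction."""
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    return all(d >= 0 for d in diffs) or all(d <= 0 for d in diffs)
```

For the fourth-moment example, `same_direction([X1, X2, X3, X4])` is exactly the condition that X(4)-X(3), X(3)-X(2), and X(2)-X(1) are all ≥ 0 or all ≤ 0.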
  • take the yaw angle as the attitude data for illustration.
  • with the user's head "still" (in practice it can hardly be perfectly still), the attitude data of the smart glasses received at consecutive moments are 0.1, 0.2, 0.1, 0.2, 0.3, 0.4, and 0.2 degrees. Clearly the head jitters slightly. If the attitude of the pan/tilt head were controlled directly from these data, the pan/tilt head would jitter correspondingly, and so would the video captured by the shooting device on it.
  • when the user's head, wearing the smart glasses, moves from one orientation to another and stops abruptly, for example from 10 degrees to 20 degrees, the attitude data received at consecutive moments are 10.0, 10.5, 10.8, 11.2, ..., 19.8, 20.2, 20.0, 19.6, 19.5, and 19.6 degrees. Clearly the head rolls back slightly. If the attitude of the pan/tilt head were controlled directly from these data, the pan/tilt head would roll back correspondingly, and so would the video captured by the shooting device on it.
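Putting the threshold test and the direction test together, a toy per-axis filter over yaw streams like the ones above might look as follows. The window length N = 3 and the 0.5° threshold are illustrative assumptions, and a held sample simply repeats the previous target rather than applying the smoothing step:

```python
def filter_stream(samples, threshold=0.5, n=3):
    """Suppress jitter/rollback in a stream of raw yaw angles (degrees)."""
    raw, sent = [], []
    for x in samples:
        raw.append(x)
        if len(raw) < n + 1:              # not enough history yet: pass through
            sent.append(x)
            continue
        if abs(x - sent[-1]) > threshold:
            sent.append(x)                # large change: deliberate move
            continue
        window = raw[-(n + 1):]           # last N raw samples plus the current one
        diffs = [b - a for a, b in zip(window, window[1:])]
        if all(d >= 0 for d in diffs) or all(d <= 0 for d in diffs):
            sent.append(x)                # consistent direction: accept
        else:
            sent.append(sent[-1])         # jitter/rollback: hold previous target
    return sent
```

Run on the "still head" sequence 0.1, 0.2, 0.1, 0.2, 0.3, 0.4, 0.2, the direction-inconsistent wiggles are held at the previous target instead of being forwarded.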
  • the effectiveness of the control method of the pan/tilt according to the embodiment of the present invention will be described below from two aspects of jitter and rollback, respectively.
  • with the user's head "still", the target attitude data of the pan/tilt head 20 determined before using the control method of the embodiment of the present invention is shown in Table 5, and that determined after using it is shown in Table 6.
  • when the user's head, wearing the smart glasses 10, moves from one orientation to another and stops abruptly, the target attitude data of the pan/tilt head 20 determined before using the control method of the embodiment of the present invention is shown in Table 7, and that determined after using it is shown in Table 8.
  • the control method of the pan/tilt in the embodiment of the present invention removes jitter and rollback from the received attitude data of the smart glasses 10 to determine the target attitude data for controlling the attitude of the pan/tilt head 20, so that the pan/tilt head 20 exhibits neither jitter nor rollback, thereby improving the video captured by the photographing device 24.
  • the drone 30 of the embodiment of the present invention includes a body 32 and a pan/tilt head 20 of any of the above embodiments.
  • the pan/tilt head 20 is disposed on the body 32.
  • after receiving the raw attitude data of the smart glasses 10, the pan/tilt head 20 of the drone 30 of the embodiment of the present invention does not immediately control the motion of the pan/tilt head 20 according to that raw data. Instead, it judges, from the attitude data sent by the smart glasses 10 and the attitude data of the pan/tilt head 20, whether the smart glasses 10 exhibit jitter or rollback; when they do, it removes the jitter and rollback from the raw attitude data, uses the resulting new attitude data as the target attitude data, and then controls the motion of the pan/tilt head 20 according to the target attitude data. In this way, the pan/tilt head 20 remotely controlled by the smart glasses 10 is prevented from jittering or rolling back when the smart glasses 10 do.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • more specific examples (a non-exhaustive list) of computer readable media include: an electrical connection having one or more wires, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM).
  • the computer readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • portions of the embodiments of the invention may be implemented in hardware, software, firmware or a combination thereof.
  • multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • for example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art can be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • the above mentioned storage medium may be a read only memory, a magnetic disk or an optical disk or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Eyeglasses (AREA)

Abstract

A method for smart glasses (10) to control a pan/tilt head (20), comprising: judging, according to attitude data of the smart glasses (10), whether the smart glasses (10) exhibit jitter or rollback (S10); processing the attitude data of the smart glasses (10) to determine target attitude data when the smart glasses (10) exhibit jitter or rollback (S20); and sending the target attitude data to the pan/tilt head (20) to control the pan/tilt head (20) (S30). Also provided are a control method for a pan/tilt head (20), smart glasses (10), a pan/tilt head (20), and a drone (30).

Description

Smart glasses and method thereof for controlling a pan/tilt head, pan/tilt head, control method, and drone
TECHNICAL FIELD
The present invention relates to the field of drones, and in particular to a method for smart glasses to control a pan/tilt head, a control method for a pan/tilt head, smart glasses, a pan/tilt head, and a drone.
BACKGROUND
When a user wears smart glasses and remotely controls a camera-carrying pan/tilt head through the inertial measurement unit of the smart glasses, the smart glasses move together with the user's head. On the one hand, since the user's head can hardly stay perfectly still, the pan/tilt head jitters correspondingly under the remote control of the inertial measurement unit, so the video captured by the camera also jitters. On the other hand, when the user's head moves from one orientation to another and stops abruptly, the head actually rolls back slightly, causing the pan/tilt head to roll back correspondingly under the remote control of the inertial measurement unit, so the video captured by the camera also rolls back slightly.
SUMMARY
Embodiments of the present invention provide a method for smart glasses to control a pan/tilt head, a control method for a pan/tilt head, smart glasses, a pan/tilt head, and a drone.
The method of an embodiment of the present invention for smart glasses to control a pan/tilt head includes:
judging, according to the attitude data of the smart glasses, whether the smart glasses exhibit jitter or rollback;
processing the attitude data of the smart glasses to determine target attitude data when the smart glasses exhibit jitter or rollback; and
sending the target attitude data to the pan/tilt head to control the pan/tilt head.
The control method for a pan/tilt head of an embodiment of the present invention includes:
judging, according to received attitude data sent by smart glasses and the attitude data of the pan/tilt head, whether the smart glasses exhibit jitter or rollback;
processing the attitude data of the smart glasses to determine target attitude data when the smart glasses exhibit jitter or rollback; and
controlling the motion of the pan/tilt head according to the target attitude data.
The smart glasses of an embodiment of the present invention are used to control a pan/tilt head and include a processor configured to:
judge, according to the attitude data of the smart glasses, whether the smart glasses exhibit jitter or rollback;
process the attitude data of the smart glasses to determine target attitude data when the smart glasses exhibit jitter or rollback; and
send the target attitude data to the pan/tilt head to control the pan/tilt head.
The pan/tilt head of an embodiment of the present invention includes a processor configured to:
judge, according to received attitude data sent by smart glasses and the attitude data of the pan/tilt head, whether the smart glasses exhibit jitter or rollback;
process the attitude data of the smart glasses to determine target attitude data when the smart glasses exhibit jitter or rollback; and
control the motion of the pan/tilt head according to the target attitude data.
The drone of an embodiment of the present invention includes:
a body; and
the pan/tilt head of an embodiment of the present invention, the pan/tilt head being disposed on the body.
In the method for smart glasses to control a pan/tilt head, the control method for a pan/tilt head, the smart glasses, the pan/tilt head, and the drone of the embodiments of the present invention, whether the smart glasses exhibit jitter or rollback is judged according to the attitude data of the smart glasses, and when they do, the attitude data of the smart glasses is processed to remove the jitter and rollback, thereby determining the target attitude data for controlling the pan/tilt head and preventing the pan/tilt head remotely controlled by the smart glasses from jittering or rolling back.
Additional aspects and advantages of embodiments of the present invention will be given in part in the following description, and in part will become apparent from the following description or be learned through practice of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of a method for smart glasses to control a pan/tilt head according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an application scenario of smart glasses according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a scenario in which smart glasses control a pan/tilt head according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of a method for smart glasses to control a pan/tilt head according to an embodiment of the present invention;
FIG. 5 is a schematic flowchart of a method for smart glasses to control a pan/tilt head according to an embodiment of the present invention;
FIG. 6 is a schematic flowchart of a method for smart glasses to control a pan/tilt head according to an embodiment of the present invention;
FIG. 7 is a schematic flowchart of a control method for a pan/tilt head according to an embodiment of the present invention;
FIG. 8 is a schematic flowchart of a control method for a pan/tilt head according to an embodiment of the present invention;
FIG. 9 is a schematic flowchart of a control method for a pan/tilt head according to an embodiment of the present invention.
DETAILED DESCRIPTION
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, where identical or similar reference numerals denote identical or similar elements or elements having identical or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and are not to be construed as limiting it.
In the description of the embodiments of the present invention, it should be understood that orientation or positional relationships indicated by terms such as "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", and "counterclockwise" are based on the orientations or positional relationships shown in the drawings, are intended only to facilitate and simplify the description of the embodiments, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they therefore cannot be construed as limiting the embodiments of the present invention. In addition, the terms "first" and "second" are used only for descriptive purposes and cannot be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present invention, "a plurality of" means two or more, unless otherwise specifically defined.
In the description of the embodiments of the present invention, it should be noted that, unless otherwise explicitly specified and defined, the terms "mounted", "connected", and "coupled" are to be understood broadly; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection, an electrical connection, or a communicative connection; it may be a direct connection or an indirect connection through an intermediate medium, and it may be an internal communication between two elements or an interaction between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the embodiments of the present invention can be understood according to the specific circumstances.
In the embodiments of the present invention, unless otherwise explicitly specified and defined, a first feature being "on" or "under" a second feature may include the first and second features being in direct contact, or being in contact through a further feature between them rather than in direct contact. Moreover, the first feature being "on", "above", or "over" the second feature includes the first feature being directly above or obliquely above the second feature, or merely indicates that the first feature is at a higher level than the second feature. The first feature being "under", "below", or "beneath" the second feature includes the first feature being directly below or obliquely below the second feature, or merely indicates that the first feature is at a lower level than the second feature.
The following disclosure provides many different embodiments or examples for implementing different structures of the embodiments of the present invention. To simplify the disclosure, components and arrangements of specific examples are described below. They are, of course, merely examples and are not intended to limit the present invention. Furthermore, the embodiments of the present invention may repeat reference numerals and/or reference letters in different examples; such repetition is for simplicity and clarity and does not in itself indicate a relationship between the various embodiments and/or arrangements discussed. In addition, the embodiments of the present invention provide examples of various specific processes and materials, but those of ordinary skill in the art will appreciate the applicability of other processes and/or the use of other materials.
Referring to FIG. 1, the method for smart glasses to control a pan/tilt head according to an embodiment of the present invention includes:
S10: judging, according to the attitude data of the smart glasses, whether the smart glasses exhibit jitter or rollback;
S20: processing the attitude data of the smart glasses to determine target attitude data when the smart glasses exhibit jitter or rollback; and
S30: sending the target attitude data to the pan/tilt head to control the pan/tilt head.
Referring also to FIG. 2 and FIG. 3, the smart glasses 10 of an embodiment of the present invention are used to control the pan/tilt head 20. The smart glasses 10 include a processor 12. The method of the embodiment can be applied to the smart glasses 10; for example, the processor 12 can be configured to perform the methods in S10, S20, and S30.
That is, the processor 12 can be configured to: judge, according to the attitude data of the smart glasses 10, whether the smart glasses 10 exhibit jitter or rollback; process the attitude data of the smart glasses 10 to determine target attitude data when jitter or rollback is present; and send the target attitude data to the pan/tilt head 20 to control the pan/tilt head 20.
It can be understood that when a user wears smart glasses and remotely controls a camera-carrying pan/tilt head through them, the smart glasses move together with the user's head. On the one hand, since the head can hardly stay perfectly still, the pan/tilt head jitters correspondingly under the remote control of the smart glasses, so the captured video also jitters; on the other hand, when the head moves from one orientation to another and stops abruptly, it actually rolls back slightly, so the pan/tilt head and the captured video also roll back slightly. After collecting the raw attitude data of the smart glasses 10, the method and the smart glasses 10 of the embodiments of the present invention do not immediately send that raw data to the pan/tilt head 20. Instead, they judge, according to the attitude data of the smart glasses 10 (including the attitude data at the current moment and before it), whether the smart glasses 10 exhibit jitter or rollback; when they do, the raw attitude data is processed to remove the jitter and rollback, the resulting new attitude data is used as the target attitude data, and the target attitude data is then sent to the pan/tilt head 20 to control its motion. In this way, the pan/tilt head 20 remotely controlled by the smart glasses 10 is prevented from jittering or rolling back.
Referring to FIG. 4, in some embodiments, the step of judging, according to the attitude data of the smart glasses, whether the smart glasses exhibit jitter or rollback (i.e., S10) includes:
S11: acquiring the current attitude data Y(n+1) of the smart glasses at the (N+1)th moment; and
S12: judging, according to the current attitude data Y(n+1), the acquired attitude data Y(1), Y(2), ..., Y(n) of the smart glasses at the first N moments, and the attitude data S(n) of the smart glasses sent to the pan/tilt head at the Nth moment, whether the smart glasses exhibit jitter or rollback.
In some embodiments, the processor 12 can be configured to perform the methods in S11 and S12.
That is, the processor 12 can further be configured to: acquire the current attitude data Y(n+1) of the smart glasses 10 at the (N+1)th moment; and judge, according to the current attitude data Y(n+1), the acquired attitude data Y(1), Y(2), ..., Y(n) of the smart glasses 10 at the first N moments, and the attitude data S(n) of the smart glasses 10 sent to the pan/tilt head 20 at the Nth moment, whether the smart glasses 10 exhibit jitter or rollback.
Specifically, the processor 12 may acquire the current attitude data Y(n+1) of the smart glasses 10 at a predetermined frequency, for example 50 Hz. In the embodiments of the present invention, N ≥ 1 and N is an integer. When the smart glasses 10 control the pan/tilt head 20 through target attitude data, the smart glasses 10 may first use an initial attitude data Y(1) as the reference attitude and send it to the pan/tilt head 20 to control the initial attitude of the pan/tilt head 20. Subsequently, the smart glasses 10 acquire their own current attitude data Y(n+1) at the predetermined frequency and judge, according to the current attitude data Y(n+1), the acquired attitude data Y(1), Y(2), ..., Y(n) of the first N moments, and the attitude data S(n) sent to the pan/tilt head 20 at the Nth moment, whether the smart glasses 10 exhibit jitter or rollback.
The following takes the current moment being the fourth moment as an example, that is, N = 3. The smart glasses 10 acquire the current attitude data Y(4) at the fourth moment, and then judge, according to the current attitude data Y(4), the acquired attitude data Y(1) of the first moment, Y(2) of the second moment, Y(3) of the third moment, and the attitude data S(3) sent to the pan/tilt head 20 at the third moment, whether the smart glasses 10 exhibit jitter or rollback. It should be noted that the acquired attitude data of the smart glasses 10 referred to here is raw, i.e., unprocessed, attitude data, whereas the attitude data sent to the pan/tilt head 20 may be processed attitude data (for example, data from which jitter and rollback have been removed, or smoothed data); that is, S(n) and Y(n) may be the same or different.
Referring to FIG. 5, in some embodiments, the smart glasses include an inertial measurement unit, and the step of acquiring the current attitude data Y(n+1) of the smart glasses at the (N+1)th moment (i.e., S11) includes:
S111: acquiring the attitude data of the inertial measurement unit at a predetermined frequency; and
S112: converting the attitude data of the inertial measurement unit into the attitude data of the smart glasses.
Referring to FIG. 2 and FIG. 3, in some embodiments the smart glasses 10 include an inertial measurement unit 14, and the processor 12 can be configured to perform the methods in S111 and S112.
That is, the processor 12 can further be configured to: acquire the attitude data of the inertial measurement unit 14 at a predetermined frequency; and convert the attitude data of the inertial measurement unit 14 into the attitude data of the smart glasses 10.
Specifically, a 6-axis or 9-axis inertial measurement unit 14 may be provided on the smart glasses 10. It can be understood that there is a predetermined correspondence between the attitude data of the inertial measurement unit 14 and that of the smart glasses 10. The smart glasses 10 acquire the attitude data of the inertial measurement unit 14 at a predetermined frequency (for example 50 Hz) and then convert it into their own attitude data according to that correspondence.
In some embodiments, the attitude data includes at least one of a yaw angle, a roll angle, a pitch angle, a yaw angular velocity, a roll angular velocity, and a pitch angular velocity.
For example, the attitude data may include a yaw angle; or a roll angle; or a yaw angle and a pitch angular velocity; or a yaw angle, a pitch angle, and a roll angular velocity; or a yaw angle, a roll angle, a pitch angle, a yaw angular velocity, a roll angular velocity, and a pitch angular velocity. The pitch, yaw, and roll angles respectively correspond to the angles of rotation about the X-axis, Y-axis, and Z-axis of a three-dimensional rectangular coordinate system. The ranges of the yaw, roll, and pitch angles are all (-180°, 180°].
Specifically, the smart glasses 10 control the motion of the pan/tilt head 20 through the target attitude data sent to it, so that the pan/tilt head 20 follows their attitude. The pan/tilt head 20 includes drive motors and is configured to control the drive motors, according to the target attitude data sent by the smart glasses 10, to drive the pan/tilt head 20 so that its attitude follows that of the smart glasses 10. The pan/tilt head 20 may be a two-axis pan/tilt head, a three-axis pan/tilt head, or the like, without limitation here. Taking a three-axis pan/tilt head as an illustrative example, the drive motors include a first motor, a second motor, and a third motor: the first motor drives the pitch-axis bracket or the photographing device 24 to rotate about the pitch axis, the second motor drives the roll-axis bracket or the photographing device 24 to rotate about the roll axis, and the third motor drives the yaw-axis bracket or the photographing device 24 to rotate about the yaw axis.
Referring to FIG. 6, in some embodiments, the step of judging, according to the current attitude data Y(n+1), the acquired attitude data Y(1), Y(2), ..., Y(n) of the smart glasses at the first N moments, and the attitude data S(n) sent to the pan/tilt head at the Nth moment, whether the smart glasses exhibit jitter or rollback (i.e., S12) includes:
S121: calculating whether the difference between the current attitude data Y(n+1) and the attitude data S(n) sent to the pan/tilt head at the Nth moment is greater than a predetermined threshold; and
S122: determining that the smart glasses exhibit no jitter or rollback when the difference is greater than the predetermined threshold.
The method for smart glasses to control a pan/tilt head also includes:
S40: determining, when the smart glasses exhibit no jitter or rollback, the target attitude data according to the current attitude data Y(n+1) and the attitude data S(n) sent to the pan/tilt head at the Nth moment.
In some embodiments, the processor 12 can be configured to perform the methods in S121, S122, and S40.
That is, the processor 12 can further be configured to: calculate whether the difference between the current attitude data Y(n+1) and the attitude data S(n) sent to the pan/tilt head 20 at the Nth moment is greater than a predetermined threshold; and determine that the smart glasses 10 exhibit no jitter or rollback when the difference is greater than the predetermined threshold. The processor 12 can also be configured to determine, when the smart glasses 10 exhibit no jitter or rollback, the target attitude data according to the current attitude data Y(n+1) and the attitude data S(n) sent to the pan/tilt head 20 at the Nth moment.
Specifically, again taking the current moment as the fourth moment: the smart glasses 10 calculate whether the difference between the current attitude data Y(4) and the attitude data S(3) sent to the pan/tilt head 20 at the third moment is greater than the set threshold. When the difference is greater than the predetermined threshold, the smart glasses 10 exhibit no jitter or rollback, and the smart glasses 10 determine the target attitude data according to the current attitude data Y(4) and the attitude data S(3) sent at the third moment. In some embodiments, the smart glasses 10 smooth the current attitude data Y(4) according to the attitude data S(3) sent at the third moment to determine the target attitude data. In this way, the motion of the pan/tilt head 20 is made smoother, and when a photographing device 24 is mounted on the pan/tilt head 20, the video it captures is also smoother and clearer.
Referring to FIG. 6, in some embodiments, the step of judging, according to the current attitude data Y(n+1), the acquired attitude data Y(1), Y(2), ..., Y(n) of the smart glasses at the first N moments, and the attitude data S(n) sent to the pan/tilt head at the Nth moment, whether the smart glasses exhibit jitter or rollback (i.e., S12) includes:
S121: calculating whether the difference between the current attitude data Y(n+1) and the attitude data S(n) sent to the pan/tilt head at the Nth moment is greater than a predetermined threshold;
S123: judging, when the difference is less than or equal to the predetermined threshold, according to the acquired attitude data Y(1), Y(2), ..., Y(n) of the smart glasses at the first N moments and the current attitude data Y(n+1), whether the moving direction of the smart glasses at the (N+1)th moment is the same as that at the first N moments;
S122: determining that the smart glasses exhibit no jitter or rollback when the moving direction at the (N+1)th moment is the same as that at the first N moments; and
S124: determining that the smart glasses exhibit jitter or rollback when the moving direction at the (N+1)th moment differs from that at the first N moments.
The step of processing the attitude data of the smart glasses to determine the target attitude data when the smart glasses exhibit jitter or rollback (i.e., S20) includes:
S21: determining the attitude data S(n) sent to the pan/tilt head at the Nth moment as the target attitude data.
The method for smart glasses to control a pan/tilt head also includes:
S40: determining, when the smart glasses exhibit no jitter or rollback, the target attitude data according to the current attitude data Y(n+1) and the attitude data S(n) sent to the pan/tilt head at the Nth moment.
In some embodiments, the processor 12 can be configured to perform the methods in S121, S122, S123, S124, S21, and S40.
That is, the processor 12 can further be configured to: calculate whether the difference between the current attitude data Y(n+1) and the attitude data S(n) sent to the pan/tilt head 20 at the Nth moment is greater than a predetermined threshold; judge, when the difference is less than or equal to the predetermined threshold, according to the acquired attitude data Y(1), Y(2), ..., Y(n) of the smart glasses 10 at the first N moments and the current attitude data Y(n+1), whether the moving direction of the smart glasses 10 at the (N+1)th moment is the same as that at the first N moments; determine that the smart glasses 10 exhibit no jitter or rollback when the directions are the same; determine that the smart glasses 10 exhibit jitter or rollback when the directions differ; and, when jitter or rollback is present, determine the attitude data S(n) sent to the pan/tilt head 20 at the Nth moment as the target attitude data. The processor 12 can also be configured to determine, when the smart glasses 10 exhibit no jitter or rollback, the target attitude data according to the current attitude data Y(n+1) and the attitude data S(n) sent to the pan/tilt head 20 at the Nth moment.
Specifically, again taking the current moment as the fourth moment: the smart glasses 10 calculate whether the difference between the current attitude data Y(4) and the attitude data S(3) sent to the pan/tilt head 20 at the third moment is greater than the set threshold. When the difference is less than or equal to the predetermined threshold, they judge, according to the acquired attitude data Y(1) of the first moment, Y(2) of the second moment, Y(3) of the third moment, and the current attitude data Y(4), whether the moving direction of the smart glasses 10 at the fourth moment is the same as that at the first three moments. The moving direction at the fourth moment being the same as that at the first three moments means: the moving direction of the smart glasses 10 at the fourth moment relative to the third moment, at the third moment relative to the second moment, and at the second moment relative to the first moment are all the same. The moving direction at the fourth moment differing from that at the first three moments means: the moving direction at the fourth moment relative to the third moment differs from at least one of the moving direction at the third moment relative to the second moment and the moving direction at the second moment relative to the first moment. For example, if the direction at the fourth moment relative to the third differs from that at the third relative to the second, while the direction at the third relative to the second is the same as that at the second relative to the first, it is still determined that the moving direction at the fourth moment differs from that at the first three moments. The same direction at the fourth moment as at the first three moments indicates that the smart glasses 10 exhibit no jitter or rollback; a different direction indicates that the smart glasses 10 exhibit at least one of jitter and rollback. When jitter or rollback is present, the smart glasses 10 use the attitude data S(3) sent to the pan/tilt head 20 at the third moment as the target attitude data and send S(3) to the pan/tilt head 20 again at the fourth moment. When no jitter or rollback is present, the smart glasses 10 determine the target attitude data according to the current attitude data Y(4) and the attitude data S(3) sent at the third moment.
In some embodiments, the step of judging, according to the acquired attitude data Y(1), Y(2), ..., Y(n) of the smart glasses at the first N moments and the current attitude data Y(n+1), whether the moving direction of the smart glasses at the (N+1)th moment is the same as that at the first N moments (i.e., S123) determines that the directions are the same either by judging that the differences between the attitude data of the smart glasses at every two adjacent moments among the first (N+1) moments are all greater than or equal to zero, or by judging that those differences are all less than or equal to zero.
In some embodiments, the processor 12 is further configured to determine that the moving direction of the smart glasses 10 at the (N+1)th moment is the same as that at the first N moments either by judging that the differences between the attitude data of the smart glasses 10 at every two adjacent moments among the first (N+1) moments are all greater than or equal to zero, or by judging that those differences are all less than or equal to zero.
Specifically, again taking the current moment as the fourth moment: when the values of Y(4)-Y(3), Y(3)-Y(2), and Y(2)-Y(1) are all greater than or equal to zero, the moving direction of the smart glasses 10 at the fourth moment relative to the third moment, at the third moment relative to the second moment, and at the second moment relative to the first moment are all the same. Likewise, when these values are all less than or equal to zero, the three moving directions are also all the same.
The following takes the yaw angle as the attitude data. With the user's head "still" (in practice it can hardly be perfectly still), the attitude data of the smart glasses acquired at consecutive moments are 0.1, 0.2, 0.1, 0.2, 0.3, 0.4, and 0.2 degrees. Clearly the head jitters slightly. If these attitude data were sent directly to the pan/tilt head, the pan/tilt head would jitter correspondingly, and so would the video captured by the shooting device on it. When the user's head, wearing the smart glasses, moves from one orientation to another and stops abruptly, for example from 10 degrees to 20 degrees, the attitude data acquired at consecutive moments are 10.0, 10.5, 10.8, 11.2, ..., 19.8, 20.2, 20.0, 19.6, 19.5, and 19.6 degrees. Clearly the head rolls back slightly. If these data were sent directly to the pan/tilt head, the pan/tilt head and the captured video would roll back correspondingly.
The effectiveness of the method of the embodiment of the present invention is illustrated below for jitter and rollback separately. With the user's head "still", the target attitude data sent to the pan/tilt head 20 before using the method of the embodiment of the present invention is shown in Table 1, and that sent after using it is shown in Table 2. When the user's head, wearing the smart glasses 10, moves from one orientation to another and stops abruptly, the target attitude data sent to the pan/tilt head 20 before using the method is shown in Table 3, and that sent after using it is shown in Table 4. By removing jitter and rollback from the acquired attitude data of the smart glasses 10, the method of the embodiment of the present invention ensures that the pan/tilt head 20 exhibits neither, thereby improving the video captured by the photographing device 24.
Table 1
Time (hours:minutes:seconds.milliseconds) Transmitted yaw angle (degrees)
19:11:20.348 15.3
19:11:20.368 15.5
19:11:20.387 15.6
19:11:20.408 15.4
19:11:20.428 15.2
19:11:20.448 15.3
Table 2
Time (hours:minutes:seconds.milliseconds) Transmitted yaw angle (degrees)
19:35:11.004 15.5
19:35:11.024 15.5
19:35:11.044 15.5
19:35:11.064 15.5
19:35:11.084 15.5
19:35:11.104 15.5
Table 3
Time (hours:minutes:seconds.milliseconds) Transmitted yaw angle (degrees)
20:05:38.475 10.0
20:05:38.495 10.5
20:05:38.515 10.8
20:05:38.535 11.2
... ...
20:05:38.955 19.8
20:05:38.975 20.2
20:05:38.995 20.0
20:05:39.015 19.6
20:05:39.035 19.5
Table 4
Time (hours:minutes:seconds.milliseconds) Transmitted yaw angle (degrees)
20:18:58.117 12.3
20:18:58.137 12.6
20:18:58.157 13.0
20:18:58.177 13.5
... ...
20:18:58.617 21.8
20:18:58.637 22.0
20:18:58.657 22.0
20:18:58.677 22.0
20:18:58.697 22.0
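The rollback in Table 3 and its removal in Table 4 can be checked mechanically: the "after" yaw sequence never decreases, while the "before" sequence does. A sketch over the published values (the elided middle rows are omitted, so the lists below are partial by design):

```python
def has_rollback(yaws):
    """True if any adjacent pair of yaw samples moves backward,
    i.e. the yaw decreases while the overall motion is an increase."""
    return any(b < a for a, b in zip(yaws, yaws[1:]))

table3 = [10.0, 10.5, 10.8, 11.2, 19.8, 20.2, 20.0, 19.6, 19.5]  # before processing
table4 = [12.3, 12.6, 13.0, 13.5, 21.8, 22.0, 22.0, 22.0, 22.0]  # after processing
```

Here `has_rollback(table3)` is true (20.2 falls back through 20.0, 19.6, 19.5) while `has_rollback(table4)` is false, matching the claim that the processed stream settles without overshoot-and-return.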
Referring to FIG. 7, the control method for a pan/tilt head according to an embodiment of the present invention includes:
S50: judging, according to received attitude data sent by smart glasses and the attitude data of the pan/tilt head, whether the smart glasses exhibit jitter or rollback;
S60: processing the attitude data of the smart glasses to determine target attitude data when the smart glasses exhibit jitter or rollback; and
S70: controlling the motion of the pan/tilt head according to the target attitude data.
Referring also to FIG. 2 and FIG. 3, the pan/tilt head 20 of an embodiment of the present invention includes a processor 22. The control method of the embodiment can be applied to the pan/tilt head 20; for example, the processor 22 can be configured to perform the methods in S50, S60, and S70.
That is, the processor 22 can be configured to: judge, according to the received attitude data sent by the smart glasses 10 and the attitude data of the pan/tilt head 20, whether the smart glasses 10 exhibit jitter or rollback; process the attitude data of the smart glasses 10 to determine target attitude data when jitter or rollback is present; and control the motion of the pan/tilt head 20 according to the target attitude data.
After receiving the raw attitude data of the smart glasses 10, the control method and the pan/tilt head 20 of the embodiments of the present invention do not immediately control the motion of the pan/tilt head 20 according to that raw data. Instead, they judge, from the attitude data sent by the smart glasses 10 and the attitude data of the pan/tilt head 20, whether the smart glasses 10 exhibit jitter or rollback; when they do, the raw attitude data is processed to remove the jitter and rollback, and the resulting new attitude data is used as the target attitude data, according to which the motion of the pan/tilt head 20 is then controlled. In this way, the pan/tilt head 20 remotely controlled by the smart glasses 10 is prevented from jittering or rolling back when the smart glasses 10 do.
In some embodiments, when the control method controls the motion of the pan/tilt head 20 according to the target attitude data, the difference between the target attitude data and the current attitude data of the pan/tilt head can be calculated, and the amount of motion required for the pan/tilt head to move from its current attitude to the target attitude is then determined from that difference to control the pan/tilt head 20 accordingly.
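The last step, turning a target attitude into a motion command, can be sketched per axis as the wrapped difference between the target and the current pose. The wrapping and the proportional command are our assumptions for illustration; real gimbal firmware closes this loop with its own motor controller:

```python
def motion_command(target, current, gain=1.0):
    """Angular command (degrees) driving the head from `current` toward
    `target` along the shortest wrapped path, angles taken in (-180, 180]."""
    error = (target - current + 180.0) % 360.0 - 180.0
    if error <= -180.0:          # fold -180 onto +180 to match (-180, 180]
        error += 360.0
    return gain * error
```

For example, a target of -170° with a current yaw of 170° yields a +20° command across the seam rather than a 340° sweep the long way around.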
Referring to FIG. 8, in some embodiments, the step of judging, according to the received attitude data sent by the smart glasses and the attitude data of the pan/tilt head, whether the smart glasses exhibit jitter or rollback (i.e., S50) includes:
S51: receiving the current attitude data X(n+1) of the smart glasses at the (N+1)th moment; and
S52: judging, according to the current attitude data X(n+1), the received attitude data X(1), X(2), ..., X(n) of the smart glasses at the first N moments, and the attitude data P(n) of the pan/tilt head at the Nth moment, whether the smart glasses exhibit jitter or rollback.
In some embodiments, the processor 22 can be configured to perform the methods in S51 and S52.
That is, the processor 22 can further be configured to: receive the current attitude data X(n+1) of the smart glasses 10 at the (N+1)th moment; and judge, according to the current attitude data X(n+1), the received attitude data X(1), X(2), ..., X(n) of the smart glasses 10 at the first N moments, and the attitude data P(n) of the pan/tilt head 20 at the Nth moment, whether the smart glasses 10 exhibit jitter or rollback.
Specifically, the processor 22 may receive the current attitude data X(n+1) of the smart glasses 10 at a predetermined frequency, for example 50 Hz. In the embodiments of the present invention, N ≥ 1 and N is an integer. When the smart glasses 10 control the pan/tilt head 20 through target attitude data, the smart glasses 10 may first use an initial attitude data X(1) as the reference attitude and send it to the pan/tilt head 20 to control the initial attitude of the pan/tilt head 20. Subsequently, the pan/tilt head 20 receives the current attitude data X(n+1) of the smart glasses 10 at the predetermined frequency and judges, according to the current attitude data X(n+1), the received attitude data X(1), X(2), ..., X(n) of the first N moments, and the attitude data P(n) of the pan/tilt head 20 at the Nth moment, whether the smart glasses 10 exhibit jitter or rollback.
The following takes the current moment being the fourth moment as an example, that is, N = 3. The pan/tilt head 20 receives the current attitude data X(4) of the smart glasses 10 at the fourth moment, and then judges, according to the current attitude data X(4), the received attitude data X(1) of the first moment, X(2) of the second moment, X(3) of the third moment, and the attitude data P(3) of the pan/tilt head 20 at the third moment, whether the smart glasses 10 exhibit jitter or rollback. It should be noted that the received attitude data of the smart glasses 10 referred to here is raw, i.e., unprocessed, attitude data, whereas the attitude data of the pan/tilt head 20 may be processed attitude data (for example, data from which jitter and rollback have been removed, or smoothed data); that is, P(n) and X(n) may be the same or different.
In some embodiments, the attitude data includes at least one of a yaw angle, a roll angle, a pitch angle, a yaw angular velocity, a roll angular velocity, and a pitch angular velocity.
For example, the attitude data may include a yaw angle; or a roll angle; or a yaw angle and a pitch angular velocity; or a yaw angle, a pitch angle, and a roll angular velocity; or a yaw angle, a roll angle, a pitch angle, a yaw angular velocity, a roll angular velocity, and a pitch angular velocity. The pitch, yaw, and roll angles respectively correspond to the angles of rotation about the X-axis, Y-axis, and Z-axis of a three-dimensional rectangular coordinate system. The ranges of the yaw, roll, and pitch angles are all (-180°, 180°].
Specifically, the control method drives the pan/tilt head 20 according to the target attitude data so that it follows the motion of the smart glasses 10. The pan/tilt head 20 may be a two-axis pan/tilt head, a three-axis pan/tilt head, or the like, and will not be described in detail here.
Referring to FIG. 3, in some embodiments, a photographing device 24 is provided on the pan/tilt head 20 and is used to record video, capture images, and the like.
Referring to FIG. 9, in some embodiments, the step of judging, according to the current attitude data X(n+1), the received attitude data X(1), X(2), ..., X(n) of the smart glasses at the first N moments, and the attitude data P(n) of the pan/tilt head at the Nth moment, whether the smart glasses exhibit jitter or rollback (i.e., S52) includes:
S521: calculating whether the difference between the current attitude data X(n+1) and the attitude data P(n) of the pan/tilt head at the Nth moment is greater than a predetermined threshold; and
S522: determining that the smart glasses exhibit no jitter or rollback when the difference is greater than the predetermined threshold.
The control method also includes:
S80: determining, when the smart glasses exhibit no jitter or rollback, the target attitude data according to the current attitude data X(n+1) and the attitude data P(n) of the pan/tilt head at the Nth moment.
In some embodiments, the processor 22 can be configured to perform the methods in S521, S522, and S80.
That is, the processor 22 can further be configured to: calculate whether the difference between the current attitude data X(n+1) and the attitude data P(n) of the pan/tilt head 20 at the Nth moment is greater than a predetermined threshold; and determine that the smart glasses 10 exhibit no jitter or rollback when the difference is greater than the predetermined threshold. The processor 22 can also be configured to determine, when the smart glasses 10 exhibit no jitter or rollback, the target attitude data according to the current attitude data X(n+1) and the attitude data P(n) of the pan/tilt head 20 at the Nth moment.
Specifically, again taking the current moment as the fourth moment: the pan/tilt head 20 calculates whether the difference between the current attitude data X(4) and the attitude data P(3) of the pan/tilt head 20 at the third moment is greater than the set threshold. When the difference is greater than the predetermined threshold, the smart glasses 10 exhibit no jitter or rollback, and the pan/tilt head 20 determines the target attitude data according to the current attitude data X(4) and the attitude data P(3) of the third moment. In some embodiments, the pan/tilt head 20 smooths the current attitude data X(4) according to the attitude data P(3) of the third moment to determine the target attitude data. In this way, the motion of the pan/tilt head 20 is made smoother, and when a photographing device 24 is mounted on the pan/tilt head 20, the video it captures is also smoother and clearer.
Referring to FIG. 9, in some embodiments, the step of judging, according to the current attitude data X(n+1), the received attitude data X(1), X(2), ..., X(n) of the smart glasses at the first N moments, and the attitude data P(n) of the pan/tilt head at the Nth moment, whether the smart glasses exhibit jitter or rollback (i.e., S52) includes:
S521: calculating whether the difference between the current attitude data X(n+1) and the attitude data P(n) of the pan/tilt head at the Nth moment is greater than a predetermined threshold;
S523: judging, when the difference is less than or equal to the predetermined threshold, according to the received attitude data X(1), X(2), ..., X(n) of the smart glasses at the first N moments and the current attitude data X(n+1), whether the moving direction of the smart glasses at the (N+1)th moment is the same as that at the first N moments;
S522: determining that the smart glasses exhibit no jitter or rollback when the moving direction at the (N+1)th moment is the same as that at the first N moments; and
S524: determining that the smart glasses exhibit jitter or rollback when the moving direction at the (N+1)th moment differs from that at the first N moments.
The step of processing the attitude data of the smart glasses to determine the target attitude data when the smart glasses exhibit jitter or rollback (i.e., S60) includes:
S61: determining the attitude data P(n) of the pan/tilt head at the Nth moment as the target attitude data.
The control method also includes:
S80: determining, when the smart glasses exhibit no jitter or rollback, the target attitude data according to the current attitude data X(n+1) and the attitude data P(n) of the pan/tilt head at the Nth moment.
In some embodiments, the processor 22 can be configured to perform the methods in S521, S522, S523, S524, S61, and S80.
That is, the processor 22 can further be configured to: calculate whether the difference between the current attitude data X(n+1) and the attitude data P(n) of the pan/tilt head 20 at the Nth moment is greater than a predetermined threshold; judge, when the difference is less than or equal to the predetermined threshold, according to the received attitude data X(1), X(2), ..., X(n) of the smart glasses 10 at the first N moments and the current attitude data X(n+1), whether the moving direction of the smart glasses 10 at the (N+1)th moment is the same as that at the first N moments; determine that the smart glasses 10 exhibit no jitter or rollback when the directions are the same; determine that the smart glasses 10 exhibit jitter or rollback when the directions differ; and, when jitter or rollback is present, determine the attitude data P(n) of the pan/tilt head 20 at the Nth moment as the target attitude data. The processor 22 can also be configured to determine, when the smart glasses 10 exhibit no jitter or rollback, the target attitude data according to the current attitude data X(n+1) and the attitude data P(n) of the pan/tilt head 20 at the Nth moment.
Specifically, again taking the current moment as the fourth moment: the pan/tilt head 20 calculates whether the difference between the current attitude data X(4) and the attitude data P(3) of the pan/tilt head 20 at the third moment is greater than the set threshold. When the difference is less than or equal to the predetermined threshold, it judges, according to the received attitude data X(1) of the first moment, X(2) of the second moment, X(3) of the third moment, and the current attitude data X(4), whether the moving direction of the smart glasses 10 at the fourth moment is the same as that at the first three moments. The moving direction at the fourth moment being the same as that at the first three moments means: the moving direction of the smart glasses 10 at the fourth moment relative to the third moment, at the third moment relative to the second moment, and at the second moment relative to the first moment are all the same. The moving direction at the fourth moment differing from that at the first three moments means: the moving direction at the fourth moment relative to the third moment differs from at least one of the moving direction at the third moment relative to the second moment and the moving direction at the second moment relative to the first moment. For example, if the direction at the fourth moment relative to the third differs from that at the third relative to the second, while the direction at the third relative to the second is the same as that at the second relative to the first, it is still determined that the moving direction at the fourth moment differs from that at the first three moments. The same direction at the fourth moment as at the first three moments indicates that the smart glasses 10 exhibit no jitter or rollback; a different direction indicates at least one of jitter and rollback. When jitter or rollback is present, the pan/tilt head 20 uses the attitude data P(3) of the third moment as the target attitude data at the fourth moment. When no jitter or rollback is present, the pan/tilt head 20 determines the target attitude data according to the current attitude data X(4) and the attitude data P(3) of the third moment.
In some embodiments, the step of judging, when the difference is less than or equal to the predetermined threshold, according to the received attitude data X(1), X(2), ..., X(n) of the smart glasses at the first N moments and the current attitude data X(n+1), whether the moving direction of the smart glasses at the (N+1)th moment is the same as that at the first N moments (i.e., S523) determines that the directions are the same either by judging that the differences between the attitude data of the smart glasses at every two adjacent moments among the first (N+1) moments are all greater than or equal to zero, or by judging that those differences are all less than or equal to zero.
In some embodiments, the processor 22 is further configured to determine that the moving direction of the smart glasses 10 at the (N+1)th moment is the same as that at the first N moments either by judging that the differences between the attitude data of the smart glasses 10 at every two adjacent moments among the first (N+1) moments are all greater than or equal to zero, or by judging that those differences are all less than or equal to zero.
Specifically, again taking the current moment as the fourth moment: when the values of X(4)-X(3), X(3)-X(2), and X(2)-X(1) are all greater than or equal to zero, the moving direction of the smart glasses 10 at the fourth moment relative to the third moment, at the third moment relative to the second moment, and at the second moment relative to the first moment are all the same. Likewise, when these values are all less than or equal to zero, the three moving directions are also all the same.
The following takes the yaw angle as the attitude data. With the user's head "still" (in practice it can hardly be perfectly still), the attitude data of the smart glasses received at consecutive moments are 0.1, 0.2, 0.1, 0.2, 0.3, 0.4, and 0.2 degrees. Clearly the head jitters slightly. If the attitude of the pan/tilt head were controlled directly from these data, the pan/tilt head would jitter correspondingly, and so would the video captured by the shooting device on it. When the user's head, wearing the smart glasses, moves from one orientation to another and stops abruptly, for example from 10 degrees to 20 degrees, the attitude data received at consecutive moments are 10.0, 10.5, 10.8, 11.2, ..., 19.8, 20.2, 20.0, 19.6, 19.5, and 19.6 degrees. Clearly the head rolls back slightly. If the attitude of the pan/tilt head were controlled directly from these data, the pan/tilt head and the captured video would roll back correspondingly.
The effectiveness of the control method of the embodiment of the present invention is illustrated below for jitter and rollback separately. With the user's head "still", the target attitude data of the pan/tilt head 20 determined before using the control method of the embodiment of the present invention is shown in Table 5, and that determined after using it is shown in Table 6. When the user's head, wearing the smart glasses 10, moves from one orientation to another and stops abruptly, the target attitude data determined before using the control method is shown in Table 7, and that determined after using it is shown in Table 8. By removing jitter and rollback from the received attitude data of the smart glasses 10 to determine the target attitude data for controlling the attitude of the pan/tilt head 20, the control method of the embodiment of the present invention ensures that the pan/tilt head 20 exhibits neither, thereby improving the video captured by the photographing device 24.
Table 5
Time (hours:minutes:seconds.milliseconds) Yaw angle (degrees)
19:11:20.348 15.3
19:11:20.368 15.5
19:11:20.387 15.6
19:11:20.408 15.4
19:11:20.428 15.2
19:11:20.448 15.3
表6
时间(小时:分钟:秒.毫秒) 偏航角(单位:度)
19:35:11.004 15.5
19:35:11.024 15.5
19:35:11.044 15.5
19:35:11.064 15.5
19:35:11.084 15.5
19:35:11.104 15.5
表7
时间(小时:分钟:秒.毫秒) 偏航角(单位:度)
20:05:38.475 10.0
20:05:38.495 10.5
20:05:38.515 10.8
20:05:38.535 11.2
... ...
20:05:38.955 19.8
20:05:38.975 20.2
20:05:38.995 20.0
20:05:39.015 19.6
20:05:39.035 19.5
表8
时间(小时:分钟:秒.毫秒) 偏航角(单位:度)
20:18:58.117 12.3
20:18:58.137 12.6
20:18:58.157 13.0
20:18:58.177 13.5
... ...
20:18:58.617 21.8
20:18:58.637 22.0
20:18:58.657 22.0
20:18:58.677 22.0
20:18:58.697 22.0
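The flattening from Table 5 to Table 6 can be reproduced in miniature by running the hold-on-shake rule over the Table 5 yaw samples. The name `filter_stream`, the window length, and the threshold are illustrative assumptions, and the no-shake branch here simply passes the new sample through instead of blending it with the previous gimbal attitude, so the output settles on a slightly different constant than Table 6 while showing the same effect:

```python
def filter_stream(raw, window=3, threshold=5.0):
    """Run the hold-on-shake rule over a stream of yaw samples (degrees).

    window and threshold are illustrative values, not taken from the text.
    """
    out = []
    target = raw[0]                      # plays the role of P(N)
    for i in range(1, len(raw)):
        if abs(raw[i] - target) > threshold:
            target = raw[i]              # large deviation: genuine motion
        else:
            hist = raw[max(0, i - window):i + 1]
            diffs = [b - a for a, b in zip(hist, hist[1:])]
            if all(d >= 0 for d in diffs) or all(d <= 0 for d in diffs):
                target = raw[i]          # single direction: no shake
            # else: shake/rollback detected, keep the previous target
        out.append(target)
    return out

table5 = [15.3, 15.5, 15.6, 15.4, 15.2, 15.3]
print(filter_stream(table5))  # [15.5, 15.6, 15.6, 15.6, 15.6]
```

Once the jitter starts reversing direction, the filtered target stops following the raw samples and holds steady, which is the behavior Table 6 illustrates.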
Referring to FIG. 3, the unmanned aerial vehicle 30 of embodiments of the present invention includes a fuselage 32 and the gimbal 20 of any of the embodiments above. The gimbal 20 is mounted on the fuselage 32.
After receiving the raw attitude data of the smart glasses 10, the gimbal 20 of the unmanned aerial vehicle 30 of embodiments of the present invention does not immediately drive its motion from those raw data. Instead, it judges from the attitude data sent by the smart glasses 10 and the gimbal 20's own attitude data whether the smart glasses 10 are shaking or rolling back; if so, it removes the shake and rollback from the raw attitude data, takes the resulting data as the target attitude data, and then controls the motion of the gimbal 20 according to the target attitude data. This prevents the gimbal 20, remotely controlled by the smart glasses 10, from shaking or rolling back whenever the smart glasses 10 do.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "illustrative embodiment", "example", "specific example", "some examples", and the like means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, illustrative uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that comprises one or more executable instructions for implementing specific logical functions or steps of the process; and the scope of preferred embodiments of the present invention includes additional implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functionality involved, as should be understood by those skilled in the art to which the embodiments of the present invention pertain.
The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction-execution system, apparatus, or device (such as a computer-based system, a system including a processing module, or another system that can fetch and execute instructions from an instruction-execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, such an instruction-execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of embodiments of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the embodiments above, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction-execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those of ordinary skill in the art will understand that all or part of the steps carried by the methods of the embodiments above may be completed by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, may each exist separately and physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented as a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
Although embodiments of the present invention have been shown and described above, it is to be understood that they are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the embodiments above within the scope of the present invention.

Claims (34)

  1. A method for smart glasses to control a gimbal, wherein the method comprises:
    determining, from attitude data of the smart glasses, whether the smart glasses are shaking or rolling back;
    processing the attitude data of the smart glasses to determine target attitude data when the smart glasses are shaking or rolling back; and
    sending the target attitude data to the gimbal to control the gimbal.
  2. The method of claim 1, wherein the step of determining, from the attitude data of the smart glasses, whether the smart glasses are shaking or rolling back comprises:
    acquiring current attitude data Y(n+1) of the smart glasses at an (N+1)-th time instant; and
    determining whether the smart glasses are shaking or rolling back from the current attitude data Y(n+1), acquired attitude data Y(1), Y(2), ..., Y(n) of the smart glasses at the first N time instants, and attitude data S(n) of the smart glasses sent to the gimbal at the N-th time instant.
  3. The method of claim 2, wherein the step of determining whether the smart glasses are shaking or rolling back from the current attitude data Y(n+1), the acquired attitude data Y(1), Y(2), ..., Y(n) of the smart glasses at the first N time instants, and the attitude data S(n) of the smart glasses sent to the gimbal at the N-th time instant comprises:
    determining whether a difference between the current attitude data Y(n+1) and the attitude data S(n) of the smart glasses sent to the gimbal at the N-th time instant is greater than a predetermined threshold; and
    determining, when the difference is greater than the predetermined threshold, that the smart glasses are neither shaking nor rolling back;
    the method further comprising:
    determining, when the smart glasses are neither shaking nor rolling back, the target attitude data from the current attitude data Y(n+1) and the attitude data S(n) of the smart glasses sent to the gimbal at the N-th time instant.
  4. The method of claim 2, wherein the step of determining whether the smart glasses are shaking or rolling back from the current attitude data Y(n+1), the acquired attitude data Y(1), Y(2), ..., Y(n) of the smart glasses at the first N time instants, and the attitude data S(n) of the smart glasses sent to the gimbal at the N-th time instant comprises:
    determining whether a difference between the current attitude data Y(n+1) and the attitude data S(n) of the smart glasses sent to the gimbal at the N-th time instant is greater than a predetermined threshold;
    determining, when the difference is less than or equal to the predetermined threshold, from the acquired attitude data Y(1), Y(2), ..., Y(n) of the smart glasses at the first N time instants and the current attitude data Y(n+1), whether a motion direction of the smart glasses at the (N+1)-th time instant is the same as a motion direction of the smart glasses over the first N time instants;
    determining, when the motion direction of the smart glasses at the (N+1)-th time instant is the same as the motion direction of the smart glasses over the first N time instants, that the smart glasses are neither shaking nor rolling back; and
    determining, when the motion direction of the smart glasses at the (N+1)-th time instant differs from the motion direction of the smart glasses over the first N time instants, that the smart glasses are shaking or rolling back;
    the step of processing the attitude data of the smart glasses to determine the target attitude data when the smart glasses are shaking or rolling back comprising:
    determining the attitude data S(n) of the smart glasses sent to the gimbal at the N-th time instant as the target attitude data;
    the method further comprising:
    determining, when the smart glasses are neither shaking nor rolling back, the target attitude data from the current attitude data Y(n+1) and the attitude data S(n) of the smart glasses sent to the gimbal at the N-th time instant.
  5. The method of claim 4, wherein the step of determining, from the acquired attitude data Y(1), Y(2), ..., Y(n) of the smart glasses at the first N time instants and the current attitude data Y(n+1), whether the motion direction of the smart glasses at the (N+1)-th time instant is the same as the motion direction of the smart glasses over the first N time instants comprises:
    determining that the motion direction of the smart glasses at the (N+1)-th time instant is the same as the motion direction of the smart glasses over the first N time instants when the differences between the attitude data of the smart glasses at every pair of adjacent time instants among the first (N+1) time instants are all greater than or equal to zero; or
    determining that the motion direction of the smart glasses at the (N+1)-th time instant is the same as the motion direction of the smart glasses over the first N time instants when the differences between the attitude data of the smart glasses at every pair of adjacent time instants among the first (N+1) time instants are all less than or equal to zero.
  6. The method of claim 2, wherein the smart glasses comprise an inertial measurement unit, and the step of acquiring the current attitude data Y(n+1) of the smart glasses at the (N+1)-th time instant comprises:
    acquiring attitude data of the inertial measurement unit at a predetermined frequency; and
    converting the attitude data of the inertial measurement unit into the attitude data of the smart glasses.
  7. The method of any one of claims 1-6, wherein the attitude data comprise at least one of a yaw angle, a roll angle, a pitch angle, a yaw angular velocity, a roll angular velocity, and a pitch angular velocity.
  8. A control method for a gimbal, wherein the control method comprises:
    determining, from received attitude data sent by smart glasses and attitude data of the gimbal, whether the smart glasses are shaking or rolling back;
    processing the attitude data of the smart glasses to determine target attitude data when the smart glasses are shaking or rolling back; and
    controlling motion of the gimbal according to the target attitude data.
  9. The control method of claim 8, wherein the step of determining, from the received attitude data sent by the smart glasses and the attitude data of the gimbal, whether the smart glasses are shaking or rolling back comprises:
    receiving current attitude data X(n+1) of the smart glasses at an (N+1)-th time instant; and
    determining whether the smart glasses are shaking or rolling back from the current attitude data X(n+1), received attitude data X(1), X(2), ..., X(n) of the smart glasses at the first N time instants, and attitude data P(n) of the gimbal at the N-th time instant.
  10. The control method of claim 9, wherein the step of determining whether the smart glasses are shaking or rolling back from the current attitude data X(n+1), the received attitude data X(1), X(2), ..., X(n) of the smart glasses at the first N time instants, and the attitude data P(n) of the gimbal at the N-th time instant comprises:
    determining whether a difference between the current attitude data X(n+1) and the attitude data P(n) of the gimbal at the N-th time instant is greater than a predetermined threshold; and
    determining, when the difference is greater than the predetermined threshold, that the smart glasses are neither shaking nor rolling back;
    the control method further comprising:
    determining, when the smart glasses are neither shaking nor rolling back, the target attitude data from the current attitude data X(n+1) and the attitude data P(n) of the gimbal at the N-th time instant.
  11. The control method of claim 9, wherein the step of determining whether the smart glasses are shaking or rolling back from the current attitude data X(n+1), the received attitude data X(1), X(2), ..., X(n) of the smart glasses at the first N time instants, and the attitude data P(n) of the gimbal at the N-th time instant comprises:
    determining whether a difference between the current attitude data X(n+1) and the attitude data P(n) of the gimbal at the N-th time instant is greater than a predetermined threshold;
    determining, when the difference is less than or equal to the predetermined threshold, from the received attitude data X(1), X(2), ..., X(n) of the smart glasses at the first N time instants and the current attitude data X(n+1), whether a motion direction of the smart glasses at the (N+1)-th time instant is the same as a motion direction of the smart glasses over the first N time instants;
    determining, when the motion direction of the smart glasses at the (N+1)-th time instant is the same as the motion direction of the smart glasses over the first N time instants, that the smart glasses are neither shaking nor rolling back; and
    determining, when the motion direction of the smart glasses at the (N+1)-th time instant differs from the motion direction of the smart glasses over the first N time instants, that the smart glasses are shaking or rolling back;
    the step of processing the attitude data of the smart glasses to determine the target attitude data when the smart glasses are shaking or rolling back comprising:
    determining the attitude data P(n) of the gimbal at the N-th time instant as the target attitude data;
    the control method further comprising:
    determining, when the smart glasses are neither shaking nor rolling back, the target attitude data from the current attitude data X(n+1) and the attitude data P(n) of the gimbal at the N-th time instant.
  12. The control method of claim 11, wherein the step of determining, when the difference is less than or equal to the predetermined threshold, from the received attitude data X(1), X(2), ..., X(n) of the smart glasses at the first N time instants and the current attitude data X(n+1), whether the motion direction of the smart glasses at the (N+1)-th time instant is the same as the motion direction of the smart glasses over the first N time instants comprises:
    determining that the motion direction of the smart glasses at the (N+1)-th time instant is the same as the motion direction of the smart glasses over the first N time instants when the differences between the attitude data of the smart glasses at every pair of adjacent time instants among the first (N+1) time instants are all greater than or equal to zero; or
    determining that the motion direction of the smart glasses at the (N+1)-th time instant is the same as the motion direction of the smart glasses over the first N time instants when the differences between the attitude data of the smart glasses at every pair of adjacent time instants among the first (N+1) time instants are all less than or equal to zero.
  13. The control method of any one of claims 8-12, wherein the attitude data comprise at least one of a yaw angle, a roll angle, a pitch angle, a yaw angular velocity, a roll angular velocity, and a pitch angular velocity.
  14. Smart glasses configured to control a gimbal, wherein the smart glasses comprise a processor configured to:
    determine, from attitude data of the smart glasses, whether the smart glasses are shaking or rolling back;
    process the attitude data of the smart glasses to determine target attitude data when the smart glasses are shaking or rolling back; and
    send the target attitude data to the gimbal to control the gimbal.
  15. The smart glasses of claim 14, wherein the processor is further configured to:
    acquire current attitude data Y(n+1) of the smart glasses at an (N+1)-th time instant; and
    determine whether the smart glasses are shaking or rolling back from the current attitude data Y(n+1), acquired attitude data Y(1), Y(2), ..., Y(n) of the smart glasses at the first N time instants, and attitude data S(n) of the smart glasses sent to the gimbal at the N-th time instant.
  16. The smart glasses of claim 15, wherein the processor is further configured to:
    determine whether a difference between the current attitude data Y(n+1) and the attitude data S(n) of the smart glasses sent to the gimbal at the N-th time instant is greater than a predetermined threshold; and
    determine, when the difference is greater than the predetermined threshold, that the smart glasses are neither shaking nor rolling back;
    the processor being further configured to:
    determine, when the smart glasses are neither shaking nor rolling back, the target attitude data from the current attitude data Y(n+1) and the attitude data S(n) of the smart glasses sent to the gimbal at the N-th time instant.
  17. The smart glasses of claim 15, wherein the processor is further configured to:
    determine whether a difference between the current attitude data Y(n+1) and the attitude data S(n) of the smart glasses sent to the gimbal at the N-th time instant is greater than a predetermined threshold;
    determine, when the difference is less than or equal to the predetermined threshold, from the acquired attitude data Y(1), Y(2), ..., Y(n) of the smart glasses at the first N time instants and the current attitude data Y(n+1), whether a motion direction of the smart glasses at the (N+1)-th time instant is the same as a motion direction of the smart glasses over the first N time instants;
    determine, when the motion direction of the smart glasses at the (N+1)-th time instant is the same as the motion direction of the smart glasses over the first N time instants, that the smart glasses are neither shaking nor rolling back;
    determine, when the motion direction of the smart glasses at the (N+1)-th time instant differs from the motion direction of the smart glasses over the first N time instants, that the smart glasses are shaking or rolling back; and
    determine, when the smart glasses are shaking or rolling back, the attitude data S(n) of the smart glasses sent to the gimbal at the N-th time instant as the target attitude data;
    the processor being further configured to:
    determine, when the smart glasses are neither shaking nor rolling back, the target attitude data from the current attitude data Y(n+1) and the attitude data S(n) of the smart glasses sent to the gimbal at the N-th time instant.
  18. The smart glasses of claim 17, wherein the processor is further configured to:
    determine that the motion direction of the smart glasses at the (N+1)-th time instant is the same as the motion direction of the smart glasses over the first N time instants when the differences between the attitude data of the smart glasses at every pair of adjacent time instants among the first (N+1) time instants are all greater than or equal to zero; or
    determine that the motion direction of the smart glasses at the (N+1)-th time instant is the same as the motion direction of the smart glasses over the first N time instants when the differences between the attitude data of the smart glasses at every pair of adjacent time instants among the first (N+1) time instants are all less than or equal to zero.
  19. The smart glasses of claim 15, wherein the smart glasses comprise an inertial measurement unit, and the processor is further configured to:
    acquire attitude data of the inertial measurement unit at a predetermined frequency; and
    convert the attitude data of the inertial measurement unit into the attitude data of the smart glasses.
  20. The smart glasses of any one of claims 14-19, wherein the attitude data comprise at least one of a yaw angle, a roll angle, a pitch angle, a yaw angular velocity, a roll angular velocity, and a pitch angular velocity.
  21. A gimbal, wherein the gimbal comprises a processor configured to:
    determine, from received attitude data sent by smart glasses and attitude data of the gimbal, whether the smart glasses are shaking or rolling back;
    process the attitude data of the smart glasses to determine target attitude data when the smart glasses are shaking or rolling back; and
    control motion of the gimbal according to the target attitude data.
  22. The gimbal of claim 21, wherein the processor is further configured to:
    receive current attitude data X(n+1) of the smart glasses at an (N+1)-th time instant; and
    determine whether the smart glasses are shaking or rolling back from the current attitude data X(n+1), received attitude data X(1), X(2), ..., X(n) of the smart glasses at the first N time instants, and attitude data P(n) of the gimbal at the N-th time instant.
  23. The gimbal of claim 22, wherein the processor is further configured to:
    determine whether a difference between the current attitude data X(n+1) and the attitude data P(n) of the gimbal at the N-th time instant is greater than a predetermined threshold; and
    determine, when the difference is greater than the predetermined threshold, that the smart glasses are neither shaking nor rolling back;
    the processor being further configured to:
    determine, when the smart glasses are neither shaking nor rolling back, the target attitude data from the current attitude data X(n+1) and the attitude data P(n) of the gimbal at the N-th time instant.
  24. The gimbal of claim 22, wherein the processor is further configured to:
    determine whether a difference between the current attitude data X(n+1) and the attitude data P(n) of the gimbal at the N-th time instant is greater than a predetermined threshold;
    determine, when the difference is less than or equal to the predetermined threshold, from the received attitude data X(1), X(2), ..., X(n) of the smart glasses at the first N time instants and the current attitude data X(n+1), whether a motion direction of the smart glasses at the (N+1)-th time instant is the same as a motion direction of the smart glasses over the first N time instants;
    determine, when the motion direction of the smart glasses at the (N+1)-th time instant is the same as the motion direction of the smart glasses over the first N time instants, that the smart glasses are neither shaking nor rolling back;
    determine, when the motion direction of the smart glasses at the (N+1)-th time instant differs from the motion direction of the smart glasses over the first N time instants, that the smart glasses are shaking or rolling back; and
    determine, when the smart glasses are shaking or rolling back, the attitude data P(n) of the gimbal at the N-th time instant as the target attitude data;
    the processor being further configured to:
    determine, when the smart glasses are neither shaking nor rolling back, the target attitude data from the current attitude data X(n+1) and the attitude data P(n) of the gimbal at the N-th time instant.
  25. The gimbal of claim 24, wherein the processor is further configured to:
    determine that the motion direction of the smart glasses at the (N+1)-th time instant is the same as the motion direction of the smart glasses over the first N time instants when the differences between the attitude data of the smart glasses at every pair of adjacent time instants among the first (N+1) time instants are all greater than or equal to zero; or
    determine that the motion direction of the smart glasses at the (N+1)-th time instant is the same as the motion direction of the smart glasses over the first N time instants when the differences between the attitude data of the smart glasses at every pair of adjacent time instants among the first (N+1) time instants are all less than or equal to zero.
  26. The gimbal of any one of claims 21-25, wherein the attitude data comprise at least one of a yaw angle, a roll angle, a pitch angle, a yaw angular velocity, a roll angular velocity, and a pitch angular velocity.
  27. The gimbal of claim 21, wherein a shooting device is provided on the gimbal.
  28. An unmanned aerial vehicle, wherein the unmanned aerial vehicle comprises:
    a fuselage; and
    the gimbal of claim 21, the gimbal being mounted on the fuselage.
  29. The unmanned aerial vehicle of claim 28, wherein the processor is further configured to:
    receive current attitude data X(n+1) of the smart glasses at an (N+1)-th time instant; and
    determine whether the smart glasses are shaking or rolling back from the current attitude data X(n+1), received attitude data X(1), X(2), ..., X(n) of the smart glasses at the first N time instants, and attitude data P(n) of the gimbal at the N-th time instant.
  30. The unmanned aerial vehicle of claim 29, wherein the processor is further configured to:
    determine whether a difference between the current attitude data X(n+1) and the attitude data P(n) of the gimbal at the N-th time instant is greater than a predetermined threshold; and
    determine, when the difference is greater than the predetermined threshold, that the smart glasses are neither shaking nor rolling back;
    the processor being further configured to:
    determine, when the smart glasses are neither shaking nor rolling back, the target attitude data from the current attitude data X(n+1) and the attitude data P(n) of the gimbal at the N-th time instant.
  31. The unmanned aerial vehicle of claim 29, wherein the processor is further configured to:
    determine whether a difference between the current attitude data X(n+1) and the attitude data P(n) of the gimbal at the N-th time instant is greater than a predetermined threshold;
    determine, when the difference is less than or equal to the predetermined threshold, from the received attitude data X(1), X(2), ..., X(n) of the smart glasses at the first N time instants and the current attitude data X(n+1), whether a motion direction of the smart glasses at the (N+1)-th time instant is the same as a motion direction of the smart glasses over the first N time instants;
    determine, when the motion direction of the smart glasses at the (N+1)-th time instant is the same as the motion direction of the smart glasses over the first N time instants, that the smart glasses are neither shaking nor rolling back;
    determine, when the motion direction of the smart glasses at the (N+1)-th time instant differs from the motion direction of the smart glasses over the first N time instants, that the smart glasses are shaking or rolling back; and
    determine, when the smart glasses are shaking or rolling back, the attitude data P(n) of the gimbal at the N-th time instant as the target attitude data;
    the processor being further configured to:
    determine, when the smart glasses are neither shaking nor rolling back, the target attitude data from the current attitude data X(n+1) and the attitude data P(n) of the gimbal at the N-th time instant.
  32. The unmanned aerial vehicle of claim 31, wherein the processor is further configured to:
    determine that the motion direction of the smart glasses at the (N+1)-th time instant is the same as the motion direction of the smart glasses over the first N time instants when the differences between the attitude data of the smart glasses at every pair of adjacent time instants among the first (N+1) time instants are all greater than or equal to zero; or
    determine that the motion direction of the smart glasses at the (N+1)-th time instant is the same as the motion direction of the smart glasses over the first N time instants when the differences between the attitude data of the smart glasses at every pair of adjacent time instants among the first (N+1) time instants are all less than or equal to zero.
  33. The unmanned aerial vehicle of any one of claims 28-32, wherein the attitude data comprise at least one of a yaw angle, a roll angle, a pitch angle, a yaw angular velocity, a roll angular velocity, and a pitch angular velocity.
  34. The unmanned aerial vehicle of claim 28, wherein a shooting device is provided on the gimbal.
PCT/CN2017/111367 2017-11-16 2017-11-16 Smart glasses and method thereof for controlling a gimbal, gimbal, control method, and unmanned aerial vehicle WO2019095210A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201780035851.1A CN109313455B (zh) 2017-11-16 2017-11-16 Smart glasses and method thereof for controlling a gimbal, gimbal, control method, and unmanned aerial vehicle
CN202111061018.9A CN113759948A (zh) 2017-11-16 2017-11-16 Smart glasses and method thereof for controlling a gimbal, gimbal, control method, and unmanned aerial vehicle
PCT/CN2017/111367 WO2019095210A1 (zh) 2017-11-16 2017-11-16 Smart glasses and method thereof for controlling a gimbal, gimbal, control method, and unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/111367 WO2019095210A1 (zh) 2017-11-16 2017-11-16 Smart glasses and method thereof for controlling a gimbal, gimbal, control method, and unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
WO2019095210A1 (zh)



Also Published As

Publication number Publication date
CN109313455A (zh) 2019-02-05
CN113759948A (zh) 2021-12-07
CN109313455B (zh) 2021-09-28


Legal Events

121  EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 17932544; country of ref document: EP; kind code of ref document: A1)
NENP  Non-entry into the national phase (ref country code: DE)
122  EP: PCT application non-entry in European phase (ref document number: 17932544; country of ref document: EP; kind code of ref document: A1)