WO2018019290A1 - Pan/tilt head control method and apparatus, and storage medium - Google Patents

Pan/tilt head control method and apparatus, and storage medium

Info

Publication number
WO2018019290A1
Authority
WO
WIPO (PCT)
Prior art keywords
pan
tilt
predetermined gesture
deviation
control information
Prior art date
Application number
PCT/CN2017/094882
Other languages
English (en)
French (fr)
Inventor
孙晓路
张悦
卿明
Original Assignee
纳恩博(北京)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 纳恩博(北京)科技有限公司
Publication of WO2018019290A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures

Definitions

  • The present invention relates to electronic technologies, and in particular to a pan/tilt head (gimbal) control method and apparatus, and a storage medium.
  • In the related art, the direction and speed of gimbal movement are controlled by pressing a button, tapping a virtual button, or manipulating a joystick on a remote controller.
  • Taking a virtual button in a graphical interactive interface as an example: the user taps the virtual button to move the gimbal and then observes whether the resulting attitude has reached the target attitude; if not, the user taps the virtual button again, based on the difference between the current attitude and the target attitude, to adjust the gimbal.
  • It can be seen that the related-art method of controlling the gimbal may require repeated cycles of adjustment, observation, and re-adjustment before the gimbal reaches the target attitude. The related-art method therefore suffers from long lag time, poor interactive experience, and cumbersome operation.
  • The embodiments of the present invention provide a gimbal control method and apparatus, and a storage medium, which are used to solve the technical problems of long lag time and poor interactive experience in related-art gimbal control.
  • In a first aspect, the present invention provides a pan/tilt control method, including:
  • collecting an image through an image acquisition unit mounted on the gimbal, and recognizing the collected image to obtain a recognition result, the recognition result indicating that a predetermined gesture exists in the collected image and that the position of the predetermined gesture is at a first position;
  • obtaining a deviation between the first position and a stored initial position, and controlling a rotation angle of the gimbal according to the deviation, so that the first position coincides with the initial position.
  • Optionally, controlling the rotation angle of the gimbal according to the deviation includes: determining, according to the deviation, a target yaw angle and a target pitch angle by which the gimbal in a tracking state needs to rotate, and generating first control information based on the target yaw angle and the target pitch angle; and controlling, based on the first control information, the gimbal to rotate by the target yaw angle and the target pitch angle.
  • Optionally, before the deviation between the first position and the stored initial position is obtained, the method further includes: determining whether the gimbal is in the tracking state; and, when the gimbal is not in the tracking state, generating second control information for controlling the gimbal to enter the tracking state, storing the first position, and taking the first position as the initial position.
  • Optionally, recognizing the collected image to obtain the recognition result includes: performing matching in the collected image according to a pre-stored predetermined gesture feature, to obtain a matching result; when the matching result indicates that the predetermined gesture exists in the collected image, obtaining, based on the predetermined gesture, a circumscribed geometric figure of the predetermined gesture, and acquiring a position of a preset reference point of the circumscribed geometric figure; and determining the position of the preset reference point as the first position, to obtain the recognition result.
  • Optionally, after the circumscribed geometric figure of the predetermined gesture is obtained, the method further includes: acquiring a current direction of the circumscribed geometric figure, and determining the current direction as the first position, to obtain the recognition result.
  • Optionally, the initial position includes a reference direction of the circumscribed geometric figure, and controlling the rotation angle of the gimbal according to the deviation includes: determining, according to an angle difference between the current direction and the reference direction, a target roll angle by which the gimbal in the tracking state needs to rotate, and generating the first control information based on the target roll angle.
  • Optionally, after the deviation between the first position and the stored initial position is obtained and the rotation angle of the gimbal is controlled according to the deviation, the method further includes: determining whether the predetermined gesture exists in the collected image; and, when the predetermined gesture does not exist in the collected image, generating third control information, and controlling, based on the third control information, the gimbal to exit the tracking state.
  • Optionally, before the second control information for controlling the gimbal to enter the tracking state is generated, the method further includes: determining whether a hold time of the predetermined gesture reaches a preset time; and, when the hold time reaches the preset time, determining that the second control information needs to be generated.
  • In a second aspect, the present invention provides a pan/tilt control device, including:
  • an identification module, configured to collect an image through an image acquisition unit mounted on the pan/tilt, and recognize the collected image to obtain a recognition result, the recognition result indicating that a predetermined gesture exists in the collected image and that the position of the predetermined gesture is at a first position;
  • a first control module, configured to obtain a deviation between the first position and a stored initial position, and control a rotation angle of the pan/tilt according to the deviation, so that the first position coincides with the initial position.
  • The first control module is specifically configured to determine, according to the deviation, a target yaw angle and a target pitch angle by which the pan/tilt in the tracking state needs to rotate, generate the first control information based on the target yaw angle and the target pitch angle, and control, based on the first control information, the pan/tilt to rotate by the target yaw angle and the target pitch angle.
  • the device further includes: a first determining module, configured to determine whether the pan/tilt is in the tracking state before obtaining a deviation between the first location and the stored initial location;
  • a second control module configured to, when the pan/tilt is not in the tracking state, generate second control information for controlling the pan/tilt to enter the tracking state, store the first position, and then take the first position as the initial position.
  • The identifying module is configured to perform matching in the collected image according to a pre-stored predetermined gesture feature, to obtain a matching result; when the matching result indicates that the predetermined gesture exists in the collected image, obtain, based on the predetermined gesture, a circumscribed geometric figure of the predetermined gesture, and acquire a position of a preset reference point of the circumscribed geometric figure; and determine the position of the preset reference point as the first position, to obtain the recognition result.
  • Optionally, the initial position includes a reference direction of the circumscribed geometric figure;
  • the identification module is further configured to, after obtaining the circumscribed geometry of the predetermined gesture based on the predetermined gesture, acquire a current direction of the circumscribed geometry and determine the current direction as the first position, to obtain the recognition result;
  • the first control module is configured to determine, according to an angle difference between the current direction and the reference direction, a target roll angle by which the pan/tilt in the tracking state needs to rotate, and to generate the first control information based on the target roll angle.
  • The device further includes a second determining module configured to determine, after the deviation between the first position and the stored initial position is obtained and the rotation angle of the pan/tilt is controlled according to the deviation, whether the predetermined gesture exists in the collected image;
  • a third control module configured to, when the predetermined gesture does not exist in the collected image, generate third control information, and control the gimbal to exit the tracking state based on the third control information.
  • The device further includes a third determining module, configured to determine, before generating the second control information for controlling the pan/tilt to enter the tracking state, whether the hold time of the predetermined gesture reaches a preset time;
  • a determining module configured to determine that the second control information needs to be generated when a hold time of the predetermined gesture reaches the preset time.
  • the present invention provides a pan/tilt control device, including:
  • a memory for storing an executable program
  • a processor, configured to run the executable program stored in the memory, to implement the pan/tilt control method provided by the embodiments of the present invention.
  • the present invention provides a readable storage medium on which an executable program is stored, and when the executable program is executed by a processor, the pan/tilt control method provided by the embodiment of the present invention is implemented.
  • An image is collected through the image acquisition unit mounted on the gimbal, and the collected image is recognized to obtain a recognition result; the recognition result indicates that a predetermined gesture exists in the collected image and that the position of the predetermined gesture is at a first position. Then, a deviation between the first position and the stored initial position is obtained, and the rotation angle of the gimbal is controlled according to the deviation so that the first position coincides with the initial position.
  • It can be seen that, since the gimbal controls its rotation angle based on the deviation between the first position and the initial position, the gimbal can track the motion of the predetermined gesture in real time.
  • Thus, while adjusting the attitude of the gimbal, the user can control the gimbal movement simply by gestures and can observe the real-time attitude of the gimbal at the same time, thereby adjusting the gesture position in time so that the gimbal adjusts accordingly. The technical problem of long lag time in gimbal control is thereby solved, and the technical effect of controlling the gimbal in real time is achieved.
  • FIG. 1 is a flowchart of a PTZ control method according to an embodiment of the present invention.
  • FIGS. 2a-2c are schematic diagrams of predetermined gestures according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of an external geometric figure and a preset reference position according to an embodiment of the present invention
  • FIG. 4 is a schematic diagram of an external geometric image and a reference direction according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a PTZ control device according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a PTZ control device according to an embodiment of the present invention.
  • The embodiments of the present invention provide a gimbal control method and apparatus, and a storage medium, which are used to solve the technical problems of long lag time and poor interactive experience in related-art gimbal control.
  • To solve the above technical problems, in the embodiments of the present invention, an image is collected through an image acquisition unit mounted on the gimbal, and the collected image is recognized to obtain a recognition result; the recognition result indicates that a predetermined gesture exists in the collected image and that the position of the predetermined gesture is at a first position. Then, a deviation between the first position and the stored initial position is obtained, and the rotation angle of the gimbal is controlled according to the deviation so that the first position coincides with the initial position. It can be seen that, since the gimbal controls its rotation angle based on the deviation between the first position and the initial position, the gimbal can track the motion of the predetermined gesture in real time.
  • Thus, while adjusting the attitude of the gimbal, the user can control the gimbal movement simply by gestures and can observe the real-time attitude of the gimbal at the same time, thereby adjusting the gesture position in time so that the gimbal adjusts accordingly. The present invention therefore solves the technical problem of long lag time in gimbal control and achieves the technical effect of controlling the gimbal in real time.
  • FIG. 1 is a flowchart of a PTZ control method according to an embodiment of the present invention.
  • the pan/tilt control method in the embodiment of the present invention can be applied to an electronic device such as a balance car, a drone or a mobile robot.
  • The electronic devices to which the present invention can be applied each include a gimbal, and an image acquisition unit is mounted on the gimbal; when the gimbal rotates, the image acquisition unit rotates with it.
  • Those skilled in the art should understand that the gimbal described in the embodiments of the present invention does not refer only to a gimbal in the narrow sense; in a specific implementation, a driving device, a rotatable head of a robot, a rotating platform, and the like also fall within the scope of the devices or apparatuses referred to as the gimbal.
  • the method of pan/tilt control includes:
  • S101 Collecting an image through an image capturing unit mounted on the gimbal, and identifying the collected image to obtain a recognition result.
  • S102 Obtain a deviation between the first position and the stored initial position, and control a rotation angle of the pan/tilt according to the deviation.
  • the image acquisition unit mounted on the pan/tilt is activated, so that the image acquisition unit is in an image acquisition state.
  • image acquisition is performed by the image acquisition unit, and the acquired image is recognized while being acquired, and the recognition result is obtained.
  • In a specific implementation, a predetermined gesture may or may not be recognized in the acquired image.
  • For convenience of describing the method of the embodiment of the present invention, it is assumed that the recognition result in S101 specifically indicates that a predetermined gesture exists in the collected image and that the position of the predetermined gesture is at a first position.
  • The first position in the embodiment of the present invention may specifically be the position of any point of the predetermined gesture, and/or may specifically be the direction of the predetermined gesture; the present invention does not specifically limit this.
  • an initial position is stored in advance.
  • the initial position includes a reference position of the pan/tilt when the gesture is tracked, and a reference direction when the pan/tilt tracks the gesture.
  • the initial position can be the default setting, and can also be set by the user, or the first position recognized for the first time. In the specific implementation process, the initial position can be set according to actual needs.
  • the deviation between the first position and the initial position is calculated, and according to the deviation, how the pan/tilt needs to be rotated is calculated to make the first position coincide with the initial position, that is, how to rotate can eliminate the deviation. Finally, according to the calculation result, the rotation angle of the gimbal is controlled.
  • It can be seen that, since the pan/tilt controls its rotation angle based on the deviation between the first position and the initial position, the pan/tilt can track the motion of the predetermined gesture in real time.
  • Thus, while adjusting the attitude of the gimbal, the user can control the gimbal movement simply by gestures and can observe the real-time attitude of the gimbal at the same time, thereby adjusting the gesture in time so that the gimbal adjusts accordingly. The embodiment of the present invention therefore solves the technical problem of long lag time in gimbal control and achieves the technical effect of controlling the pan/tilt in real time.
  • S101 can be implemented by the following process:
  • Matching is performed in the collected image according to a predetermined gesture feature stored in advance, to obtain a matching result
  • Specifically, to determine whether a predetermined gesture exists in the collected image, hand detection is performed first: feature matching is performed in the acquired image according to general hand features. If an element matching the general hand features is found in the acquired image, it is determined that the captured image contains a hand and that this element is the hand. If a hand is recognized, it is further recognized whether the gesture of the hand is the predetermined gesture; if no hand is recognized in the captured image, there is no need to further confirm whether the predetermined gesture exists in the captured image.
  • In the embodiment of the present invention, a predetermined gesture feature is stored in advance. The predetermined gesture may be any of various gestures, such as the three shown in FIG. 2a to FIG. 2c; this is not specifically limited in the embodiment of the present invention.
  • The predetermined gesture feature may be a default setting; the user may also set a predetermined gesture according to needs and preferences and control the image acquisition unit to collect the self-set predetermined gesture, so that the pan/tilt control device obtains and stores the feature of that gesture.
  • In a specific implementation, those skilled in the art may set it according to actual needs, without specific restriction.
  • When a hand exists in the collected image, matching is performed in the collected image according to the predetermined gesture feature, specifically by matching against the hand element to obtain a matching degree. If the matching degree reaches a threshold, indicating that the gesture of the hand is the predetermined gesture, a matching result indicating that the predetermined gesture exists in the collected image is obtained; conversely, if the matching degree is below the threshold, indicating that the gesture of the hand is not the predetermined gesture, a matching result indicating that no predetermined gesture exists in the collected image is obtained.
  • The threshold is, for example, 80%, 90%, or 95%.
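  • As an illustration only (not part of the original disclosure), the following Python sketch shows one way such a matching degree could be computed and compared against the threshold; it assumes the stored predetermined gesture feature is a grayscale template of the same size as the detected hand region, and the 0.9 threshold is an assumed value.

```python
import numpy as np

MATCH_THRESHOLD = 0.9  # assumed value; the text mentions e.g. 80%, 90%, 95%

def matching_degree(hand_patch: np.ndarray, gesture_template: np.ndarray) -> float:
    """Normalized cross-correlation between a hand region and the stored
    predetermined-gesture feature, both given as equal-sized grayscale arrays."""
    a = hand_patch.astype(np.float64).ravel()
    b = gesture_template.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0
    return float(np.dot(a, b) / denom)

def predetermined_gesture_present(hand_patch, gesture_template) -> bool:
    # Matching result: True when the matching degree reaches the threshold.
    return matching_degree(hand_patch, gesture_template) >= MATCH_THRESHOLD
```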
  • the circumscribing geometry of the predetermined gesture is obtained according to the edge feature of the predetermined gesture.
  • the circumscribed geometry of the predetermined gesture is a minimum geometry capable of enclosing a predetermined gesture, such as a rectangle, a circle, an ellipse or other irregular geometric figure.
  • In another optional embodiment, the position of a preset reference point of the circumscribed geometric figure can also be obtained.
  • the preset reference point may be any point on the edge of the circumscribed geometry, such as a vertex, a lowest point, etc., or may be any point inside the circumscribed geometry, such as a midpoint, a focus, etc., and the invention is not specifically limited.
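  • Purely as a hedged illustration of this step, the sketch below uses OpenCV to obtain an axis-aligned circumscribed rectangle of the gesture and its center as the preset reference point; the assumption that an 8-bit binary mask of the gesture region is already available is mine, not the patent's.

```python
import cv2
import numpy as np

def gesture_reference_point(gesture_mask: np.ndarray):
    """Given an 8-bit binary mask of the predetermined gesture, return the
    axis-aligned circumscribed rectangle (x, y, w, h) and the position of the
    preset reference point, taken here to be the rectangle center."""
    # OpenCV 4.x signature: (contours, hierarchy)
    contours, _ = cv2.findContours(gesture_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    gesture_contour = max(contours, key=cv2.contourArea)  # largest blob = the hand
    x, y, w, h = cv2.boundingRect(gesture_contour)        # minimal enclosing rectangle
    center = (x + w / 2.0, y + h / 2.0)                   # preset reference point
    return (x, y, w, h), center
```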
  • the captured image and the external geometry may also be displayed in the display unit.
  • To clearly illustrate the above recognition process, the following takes as an example the case where the predetermined gesture is the gesture shown in FIG. 2a, the circumscribed geometric figure is a rectangle, and the preset reference position is the center of the rectangle.
  • the captured image as shown in FIG. 3 is displayed on the display unit.
  • First, the hand in FIG. 3 is recognized based on the general hand features. Further, matching is performed according to the predetermined gesture feature of the predetermined gesture shown in FIG. 2a, and a matching result indicating that the predetermined gesture exists is obtained. The circumscribed geometric figure of the predetermined gesture is then further obtained from the edge features of the predetermined gesture, as shown by the dashed rectangle in FIG. 3. In addition, the position of the center of the rectangle is obtained, as shown by the white circle in FIG. 3, and the position of the center is included in the recognition result.
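  • The following sketch, reusing the rectangle and center computed above, shows how such an overlay could be drawn on the displayed image (cf. FIG. 3); the colors and line widths are arbitrary choices of mine.

```python
import cv2

def draw_gesture_overlay(frame, rect, center):
    """Draw the circumscribed rectangle and its center point on the captured
    frame so the user can see whether the predetermined gesture was recognized."""
    x, y, w, h = rect
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 255), 2)              # rectangle
    cv2.circle(frame, (int(center[0]), int(center[1])), 5, (255, 255, 255), -1)   # center point
    return frame
```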
  • the initial position is the position of a certain point. Both the first position and the initial position are represented by coordinates in the same coordinate system.
  • The present invention refers to this coordinate system as the standard coordinate system, which may be the acquisition-plane coordinate system of the image acquisition unit or the display coordinate system of the display unit, etc.; the present invention does not specifically limit this. Therefore, when the deviation is calculated, it can be obtained by subtracting the coordinates of the first position from the coordinates of the initial position. For example, if the initial position is (0, 0) and the first position is (4, 3), the deviation is (0-4, 0-3), that is, (-4, -3).
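  • A minimal sketch of this deviation calculation, using the document's own numeric example and the coordinate convention described above, might look as follows.

```python
def position_deviation(initial_position, first_position):
    """Deviation between the stored initial position and the first position,
    both expressed in the same standard coordinate system."""
    dx = initial_position[0] - first_position[0]
    dy = initial_position[1] - first_position[1]
    return dx, dy

# Example from the text: initial (0, 0), first position (4, 3) -> deviation (-4, -3)
assert position_deviation((0, 0), (4, 3)) == (-4, -3)
```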
  • the rotation angle of the gimbal is controlled based on the deviation.
  • the rotation angle of the pan/tilt can be controlled by:
  • Specifically, according to the correspondence between the standard coordinate system and the yaw and pitch angles, the yaw angle and/or pitch angle by which the gimbal needs to rotate to eliminate the deviation is calculated.
  • The correspondence between the standard coordinate system and the yaw and pitch angles indicates how many degrees of yaw and/or pitch rotation of the gimbal move a point in the standard coordinate system by one basic unit.
  • The calculated yaw angle is then taken as the target yaw angle, and the calculated pitch angle as the target pitch angle.
  • For example, the target yaw angle by which the gimbal needs to rotate may be determined according to the deviation between the first position and the initial position in the direction of the roll axis, and the target pitch angle by which the gimbal needs to rotate may be determined according to the deviation between the first position and the initial position in the direction of the pitch axis.
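  • The correspondence described above can be illustrated by the following sketch; the degrees-per-unit constants are assumed calibration values, not figures from the patent.

```python
# Assumed calibration: how many degrees of yaw/pitch rotation move a point in the
# standard coordinate system by one basic unit (one pixel here). Illustrative values.
YAW_DEG_PER_UNIT = 0.05
PITCH_DEG_PER_UNIT = 0.05

def deviation_to_target_angles(deviation):
    """Map the (horizontal, vertical) deviation to the target yaw and pitch angles
    by which the gimbal in the tracking state needs to rotate to eliminate it."""
    dx, dy = deviation
    target_yaw = dx * YAW_DEG_PER_UNIT      # horizontal deviation -> yaw
    target_pitch = dy * PITCH_DEG_PER_UNIT  # vertical deviation -> pitch
    return target_yaw, target_pitch
```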
  • the target yaw angle and the target pitch angle are input into the pan/tilt control model, and corresponding control parameters are calculated, thereby generating first control information.
  • In a specific implementation, a person of ordinary skill in the art to which the present invention belongs can obtain the pan/tilt control model by detecting, modeling, and debugging the pan/tilt motion; this is not described in detail here.
  • the first control information is input to the first motor that drives the pan/tilt to rotate around the yaw axis, so that the first motor drives the pan/tilt to rotate the target yaw angle based on the first control information.
  • the first control information is input to the second motor that drives the pan/tilt to rotate around the pitch axis, so that the second motor drives the pan/tilt to rotate the target pitch angle based on the first control information.
  • the third motor that drives the pan/tilt to rotate around the rolling axis needs to be compensated accordingly to ensure the stability of the gimbal.
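  • As a hedged sketch only, the following shows how the first control information might be dispatched to the three motors described above; the GimbalMotors interface is hypothetical and stands in for the pan/tilt control model and the actual motor drivers.

```python
class GimbalMotors:
    """Hypothetical motor interface; real first control information would come
    from the pan/tilt control model mentioned in the text."""
    def rotate_yaw(self, degrees):   print(f"yaw motor: rotate {degrees:+.2f} deg")
    def rotate_pitch(self, degrees): print(f"pitch motor: rotate {degrees:+.2f} deg")
    def compensate_roll(self):       print("roll motor: stabilizing compensation")

def apply_first_control_information(motors: GimbalMotors, target_yaw, target_pitch):
    motors.rotate_yaw(target_yaw)      # first motor, about the yaw axis
    motors.rotate_pitch(target_pitch)  # second motor, about the pitch axis
    motors.compensate_roll()           # third motor compensates to avoid shaking
```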
  • the pan-tilt adjusts the yaw angle and the pitch angle according to the first position of the predetermined gesture, so the predetermined gesture can control the rotation angle of the gimbal.
  • the speed of the predetermined gesture motion can determine the magnitude of the deviation, and the magnitude of the deviation determines the magnitude of the target yaw angle and the target pitch angle.
  • Since the time slot in which the gimbal rotates by the target yaw angle and the target pitch angle is the same, the speed of the predetermined gesture motion also controls the rotation speed of the pan/tilt.
  • the pan/tilt can also be controlled to rotate a certain roll angle around the roll axis.
  • Specifically, a reference direction can be set for the circumscribed geometric figure, and the rotation of the pan/tilt about the roll axis is then controlled according to the angle difference between the current direction of the circumscribed geometric figure and the reference direction.
  • In this case, the first position further includes the current direction of the circumscribed geometry, and the initial position further includes the reference direction.
  • The initial position thus includes a reference direction of the circumscribed geometric figure, and controlling the rotation angle of the pan/tilt according to the deviation includes: determining, according to the angle difference between the current direction and the reference direction, a target roll angle by which the gimbal in the tracking state needs to rotate, and generating the first control information based on the target roll angle.
  • Continuing the earlier example in which the circumscribed geometric figure is a rectangle: as shown in FIG. 4, the reference direction may be set so that the long edge of the rectangle is parallel to the vertical axis of the standard coordinate system; the angle difference θ between the current direction of the long edge and the reference direction is then calculated, and the gimbal is controlled to rotate by the corresponding roll angle.
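  • A minimal sketch of the roll-angle computation, assuming the current direction of the rectangle's long edge is already available as an angle in degrees and the reference direction is vertical (90°), could be:

```python
def target_roll_angle(current_direction_deg: float, reference_direction_deg: float = 90.0) -> float:
    """Angle difference θ between the current direction of the rectangle's long edge
    and the reference direction; the gimbal in the tracking state is rotated about
    the roll axis by this target roll angle."""
    diff = current_direction_deg - reference_direction_deg
    # Wrap into (-90, 90]: a rectangle edge direction is only defined modulo 180°.
    while diff <= -90.0:
        diff += 180.0
    while diff > 90.0:
        diff -= 180.0
    return diff
```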
  • In another embodiment of the present invention, before the deviation between the first position and the initial position is obtained in S102, the method may further include: determining whether the pan/tilt is in the tracking state; and, when the pan/tilt is not in the tracking state, generating second control information for controlling the pan/tilt to enter the tracking state, storing the first position, and taking the first position as the initial position.
  • the tracking state is that the pan/tilt tracks the state of the predetermined gesture motion according to the manner in the above embodiment. If the pan/tilt is not in the tracking state, even if the predetermined gesture is recognized in the captured image, it does not follow the movement of the predetermined gesture. Therefore, before S102, it is necessary to determine whether the gimbal is in the tracking state. To determine whether the gimbal is in the tracking state, it can be determined by querying the state of the gimbal. If the queried state is the tracking state, it is determined that the gimbal is in the tracking state; otherwise, if the queried state is not the tracking state, it is determined that the gimbal is not in the tracking state.
  • the recognition result indicates that the predetermined gesture and the position of the predetermined gesture are in the first position, and the pan/tilt is not in the tracking state, it indicates that the user wishes to control the pan/tilt through the predetermined gesture at this time. Therefore, second control information for controlling the pan/tilt to enter the tracking state is generated, and the first location is stored as the initial location of the subsequent tracking.
  • In combination with the above embodiment, in another embodiment, before the second control information is generated, the method further includes: determining whether the hold time of the predetermined gesture reaches a preset time; and, when the hold time reaches the preset time, determining that the second control information needs to be generated.
  • In a specific implementation, the user may make the predetermined gesture by misoperation without needing the pan/tilt to enter the tracking state. Therefore, to improve the accuracy of the control, it is necessary to determine, before generating the second control information, whether the hold time of the predetermined gesture reaches the preset time.
  • the preset time is, for example, 2 seconds (S), 3S, etc., and those skilled in the art to which the present invention pertains may be set according to actual conditions, and the present invention is not specifically limited.
  • As for the hold time, since the image acquisition unit acquires images at equal time intervals, the hold time can be calculated from the number of consecutive frames in which the predetermined gesture is recognized. For example, if the image acquisition unit acquires one frame every 1 millisecond (ms) and the predetermined gesture is recognized in 1200 consecutive frames, the hold time of the predetermined gesture is calculated to be 1.2 s.
  • When the hold time of the predetermined gesture reaches the preset time, the user is unlikely to have made the predetermined gesture by misoperation, so it is determined that the user needs to control the pan/tilt, and it is therefore determined that the second control information needs to be generated. Generating the second control information only after the hold time reaches the preset time thus prevents the gimbal from entering the tracking state due to misoperation, improving the accuracy of the control.
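  • The hold-time check can be illustrated as follows; the frame interval and the 2 s preset time are the example values mentioned in the text, and the function names are assumptions of this sketch.

```python
FRAME_INTERVAL_MS = 1        # example from the text: one frame every 1 ms
PRESET_HOLD_TIME_S = 2.0     # assumed preset time (the text mentions e.g. 2 s or 3 s)

def hold_time_seconds(consecutive_frames_with_gesture: int) -> float:
    """Hold time derived from the number of consecutive frames in which the
    predetermined gesture was recognized (1200 frames at 1 ms -> 1.2 s)."""
    return consecutive_frames_with_gesture * FRAME_INTERVAL_MS / 1000.0

def should_enter_tracking(consecutive_frames_with_gesture: int) -> bool:
    # Second control information is generated only once the hold time reaches the preset time.
    return hold_time_seconds(consecutive_frames_with_gesture) >= PRESET_HOLD_TIME_S
```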
  • An example of how the pan/tilt is controlled by the predetermined gesture is given below. FIG. 5 is a schematic diagram of one implementation.
  • the user stands in front of the balance car and raises his left hand to keep the left hand in the acquisition range of the image acquisition unit.
  • the pan/tilt control device recognizes the predetermined gesture from the acquired image, and determines that the pan/tilt is not in the tracking state at this time. Further, the hold time of the predetermined gesture is calculated to reach the preset time 2S, so the second control information is generated, and the pan/tilt is controlled to enter the tracking state. And, obtaining a center position of the predetermined gesture external geometric image, and storing the position at this time as the initial position.
  • After the display unit displays information indicating that the gimbal has entered the tracking state, the user starts moving the left hand as needed.
  • During the movement, the image acquisition unit continues to acquire images, the predetermined gesture is continuously recognized from the acquired images, and the first position of the predetermined gesture is obtained. Then, the deviation between the first position and the initial position is calculated in real time, and the gimbal is controlled in real time, according to the deviation, to rotate by the corresponding target yaw angle and target pitch angle.
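  • Tying the previous sketches together, a simplified and purely illustrative real-time tracking loop might look like this; the camera interface is hypothetical, and the helpers are the sketches given earlier in this document.

```python
def tracking_loop(camera, motors, initial_position):
    """Illustrative loop: acquire a frame, locate the predetermined gesture, compute
    the deviation of its first position from the stored initial position, and command
    the gimbal accordingly. `camera.read()` is assumed to return (ok, frame, gesture_mask)."""
    while True:
        ok, frame, gesture_mask = camera.read()
        if not ok:
            break
        rect, first_position = gesture_reference_point(gesture_mask)
        if first_position is None:
            break  # predetermined gesture lost: one of the exit conditions described below
        deviation = position_deviation(initial_position, first_position)
        target_yaw, target_pitch = deviation_to_target_angles(deviation)
        apply_first_control_information(motors, target_yaw, target_pitch)
```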
  • In an optional embodiment of the present invention, if the user no longer needs to control the gimbal movement, the pan/tilt may exit the tracking state.
  • There are multiple ways to control the gimbal to exit the tracking state; several of them are listed below, and in a specific implementation they include but are not limited to the following.
  • The first way: when the pan/tilt is in the tracking state, it is determined whether the stationary time of the predetermined gesture reaches an exit time; when the stationary time reaches the exit time, third control information is generated, and the pan/tilt exits the tracking state based on the third control information.
  • Specifically, when the user needs the gimbal to enter the tracking state, the predetermined gesture can be held until the preset time is reached; similarly, when the user needs the gimbal to exit the tracking state, this can be achieved by keeping the predetermined gesture stationary until the exit time is reached.
  • the way to obtain the rest time is similar to the way in which the hold time is obtained, so the description is not repeated.
  • the exit time is, for example, 2S, 3S, etc., and the present invention is not specifically limited.
  • For example, assume that both the exit time and the preset time are 1 s.
  • At time T1, the user remotely controls the balance vehicle to move in front of him and wishes to control the rotation of the balance vehicle's pan/tilt. The user therefore raises his right hand, makes the predetermined gesture, and holds it.
  • the pan/tilt control device recognizes the predetermined gesture and calculates that the hold time of the predetermined gesture reaches 1S, thus controlling the pan/tilt to enter the tracking state. After confirming that the PTZ enters the tracking state, the user starts moving the right hand and controls the pan/tilt rotation.
  • At time T2 after T1, once the gimbal has rotated to the attitude the user desires, the user again keeps the right hand stationary. The pan/tilt control device determines that the predetermined gesture has been stationary for 1 s, and controls the gimbal to exit the tracking state.
  • The second way: when the pan/tilt is in the tracking state, it is determined whether the predetermined gesture is converted into an exit gesture; when the predetermined gesture is converted into the exit gesture, third control information is generated, and the pan/tilt exits the tracking state based on the third control information.
  • Specifically, in the embodiment of the present invention, the exit gesture is a gesture different from the predetermined gesture; for example, the predetermined gesture is as shown in FIG. 2a and the exit gesture is as shown in FIG. 2b.
  • When the pan/tilt is in the tracking state, the collected image is recognized; if the exit gesture is recognized, it is determined that the predetermined gesture has been converted into the exit gesture.
  • the recognition of the exit gesture is similar to the recognition of the predetermined gesture.
  • the predetermined gesture is converted into the exit gesture, it is determined that the user needs the PTZ to exit the tracking state, and then the third control information is generated to control the PTZ exiting the tracking state, and the PTZ maintains the posture when exiting the tracking state.
  • For example, assume the preset time is 1 s.
  • At time T3, the user remotely controls the balance vehicle to move in front of himself, hoping to control the rotation of the balance vehicle's pan/tilt. The user therefore raises his right hand, makes the predetermined gesture shown in FIG. 2a, and holds it.
  • the pan/tilt control device recognizes the predetermined gesture and calculates that the hold time of the predetermined gesture reaches 1S, thus controlling the pan/tilt to enter the tracking state. After confirming that the PTZ enters the tracking state, the user starts moving the right hand and controls the pan/tilt rotation.
  • At time T4 after T3, once the gimbal has rotated to the attitude the user desires, the user makes the gesture shown in FIG. 2b with the right hand.
  • the pan/tilt control device recognizes the exit gesture and controls the gimbal to exit the tracking state.
  • The third way: determining whether the predetermined gesture exists in the collected image, and, when it does not, generating third control information and controlling the gimbal to exit the tracking state based on the third control information.
  • Specifically, when the pan/tilt is in the tracking state, the pan/tilt control device continues to recognize the predetermined gesture and obtain the first position, so as to continuously control the rotation of the pan/tilt.
  • When the user's hand moves out of the acquisition range of the image acquisition unit, the predetermined gesture will no longer be recognized in the acquired image.
  • the predetermined gesture is not recognized from the collected image, it is determined that there is no predetermined gesture in the collected image, and it is determined that the user needs the PTZ to exit the tracking state, and then the third control information is generated to control the PTZ to exit the tracking state, and the PTZ Keep the posture when exiting the tracking state.
  • For example, assume the preset time is 1 s.
  • At time T5, the user remotely controls the balance vehicle to move in front of himself, hoping to control the rotation of the balance vehicle's pan/tilt. The user therefore raises his right hand and holds the predetermined gesture shown in FIG. 2a.
  • the device controlling the pan/tilt recognizes the predetermined gesture, and calculates that the holding time of the predetermined gesture reaches 1S, thus controlling the pan/tilt to enter the tracking state. After confirming that the PTZ enters the tracking state, the user starts moving the right hand and controls the pan/tilt rotation.
  • At time T6 after T5, once the gimbal has rotated to the attitude the user desires, the user lowers the right hand and moves it out of the acquisition range.
  • the device controlling the gimbal does not recognize the predetermined gesture at time T6, and further determines that there is no predetermined gesture, and controls the gimbal to exit the tracking state.
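  • The three exit conditions described above can be summarized in a toy state machine such as the following sketch; the 1 s exit time mirrors the text's examples, and the update interface is an assumption of this sketch.

```python
EXIT_TIME_S = 1.0  # assumed exit time (the text uses 1 s in its examples)

class TrackingStateMachine:
    """Toy illustration of the three ways of leaving the tracking state."""
    def __init__(self):
        self.tracking = False
        self.stationary_time = 0.0

    def update(self, gesture_present, exit_gesture_present, gesture_moved, dt):
        if not self.tracking:
            return
        if not gesture_present:            # third way: gesture left the acquisition range
            self.tracking = False
        elif exit_gesture_present:         # second way: gesture changed to the exit gesture
            self.tracking = False
        else:                              # first way: gesture held stationary long enough
            self.stationary_time = 0.0 if gesture_moved else self.stationary_time + dt
            if self.stationary_time >= EXIT_TIME_S:
                self.tracking = False
```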
  • the second aspect of the present invention further provides a pan/tilt control device, as shown in FIG. 6, comprising:
  • the identification module 101 is configured to collect an image by using an image capturing unit mounted on the pan/tilt, and identify the collected image to obtain a recognition result; the recognition result indicates that a predetermined gesture exists in the collected image, and the predetermined The position of the gesture is in the first position;
  • the first control module 102 is configured to obtain a deviation between the first position and the stored initial position, and control a rotation angle of the pan/tilt according to the deviation, so that the first position and the initial position The starting position coincides.
  • the first control module 102 is specifically configured to determine, according to the deviation, a target yaw angle and a target pitch angle that the pan/tilt in the tracking state needs to rotate, and based on the target yaw angle and the target a pitch angle, generating first control information; controlling the pan/tilt to rotate the target yaw angle and the target pitch angle based on the first control information.
  • In an optional embodiment of the present invention, the apparatus further includes a first determination module (connected to the identification module 101, not shown in FIG. 6), configured to determine, before the deviation between the first position and the stored initial position is obtained, whether the gimbal is in the tracking state;
  • a second control module configured to, when the pan/tilt is not in the tracking state, generate second control information for controlling the gimbal to enter the tracking state, store the first position, and then take the first position as the initial position.
  • Specifically, the identification module 101 is configured to perform matching in the collected image according to a pre-stored predetermined gesture feature, to obtain a matching result; when the matching result indicates that the predetermined gesture exists in the collected image, obtain, based on the predetermined gesture, a circumscribed geometric figure of the predetermined gesture, and acquire a position of a preset reference point of the circumscribed geometric figure; and determine the position of the preset reference point as the first position, to obtain the recognition result.
  • the identification module 101 is further configured to: after obtaining the circumscribing geometry of the predetermined gesture based on the predetermined gesture, acquiring a current direction of the circumscribed geometry, and determining that the current direction is The first location, obtaining the recognition result;
  • the initial position includes a reference direction of the circumscribing geometric image
  • the first control module 102 is configured to determine, according to an angular difference between the current direction and the reference direction, the target that the pan/tilt in the tracking state needs to rotate Rolling the angle and generating the first control information based on the target roll angle.
  • In an optional embodiment of the present invention, the apparatus further includes a second judging module (connected to the identification module 101, not shown in FIG. 6), configured to determine, after the deviation between the first position and the stored initial position is obtained and the rotation angle of the pan/tilt is controlled according to the deviation, whether the predetermined gesture exists in the collected image;
  • a third control module configured to generate third control information when the predetermined gesture does not exist in the collected image, and control the pan-tilt to exit the tracking state based on the third control information
  • In an optional embodiment of the present invention, the apparatus further includes a third determining module (connected to the identifying module 101, not shown in FIG. 6), configured to determine, before the second control information for controlling the pan/tilt to enter the tracking state is generated, whether the hold time of the predetermined gesture reaches a preset time;
  • a determining module configured to determine that the second control information needs to be generated when a hold time of the predetermined gesture reaches the preset time.
  • The various variations and specific examples of the pan/tilt control method described above with reference to FIG. 1 to FIG. 4 are also applicable to the pan/tilt control device of this embodiment.
  • From the foregoing detailed description of the pan/tilt control method, those skilled in the art can clearly understand how the pan/tilt control device of this embodiment is implemented, so, for brevity of the description, it is not described in detail here.
  • FIG. 7 is a schematic structural diagram of a PTZ control apparatus 200 according to another embodiment of the present invention.
  • the PTZ control apparatus 200 includes at least one processor 201, a memory 202, and at least one communication interface 203.
  • the various components in the pan/tilt control device 200 are coupled together by a bus system 204.
  • bus system 204 is used to implement connection communication between these components.
  • the bus system 204 includes a power bus, a control bus, and a status signal bus in addition to the data bus.
  • However, for clarity of illustration, the various buses are all labeled as the bus system 204 in FIG. 7.
  • memory 202 can be either volatile memory or non-volatile memory, and can include both volatile and nonvolatile memory.
  • The non-volatile memory may be a read-only memory (ROM, Read-Only Memory), a programmable read-only memory (PROM, Programmable Read-Only Memory), or an erasable programmable read-only memory (EPROM, Erasable Programmable Read-Only Memory). The memory 202 described in the embodiment of the present invention is intended to include, but is not limited to, these and any other suitable types of memory.
  • the memory 202 in the embodiment of the present invention is used to store various types of data to support the operation of the pan-tilt control device 200.
  • Examples of such data include any executable programs for operating on the pan-tilt control device 200, such as the operating system 2021 and applications 2022; images captured by the pan-tilt, and the like.
  • the operating system 2021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks.
  • the application 2022 may include various applications, and a program for implementing the pan/tilt control method of the embodiment of the present invention may be included in the application 2022.
  • Processor 201 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 201 or an instruction in a form of software.
  • the processor 201 described above may be a general purpose processor, a digital signal processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware component, or the like.
  • the processor 201 can implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present invention.
  • a general purpose processor can be a microprocessor or any conventional processor or the like.
  • the steps of the method disclosed in the embodiment of the present invention may be directly implemented as a hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software module can reside in a storage medium located in memory 202, and processor 201 reads the information in memory 202 and, in conjunction with its hardware, performs the steps of the foregoing method.
  • In an exemplary embodiment, the PTZ control device 200 may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), or other electronic components, to perform the foregoing method.
  • the embodiment of the present invention further provides a readable storage medium, for example, a memory 202 including an executable program executable by the processor 201 of the PTZ control device 200.
  • the computer readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories, such as Mobile phones, computers, tablet devices, personal digital assistants, etc.
  • In an exemplary embodiment, the executable program stored in the readable storage medium is used to cause a processor to perform the following operations: collecting an image through an image acquisition unit mounted on the gimbal, and recognizing the collected image to obtain a recognition result, the recognition result indicating that a predetermined gesture exists in the collected image and that the position of the predetermined gesture is at a first position; and obtaining a deviation between the first position and a stored initial position, and controlling the rotation angle of the gimbal according to the deviation, so that the first position coincides with the initial position.
  • When the executable program stored in the readable storage medium controls the rotation angle of the pan/tilt according to the deviation, the operations include: determining, according to the deviation, a target yaw angle and a target pitch angle by which the gimbal in the tracking state needs to rotate, generating first control information based on the target yaw angle and the target pitch angle, and controlling, based on the first control information, the gimbal to rotate by the target yaw angle and the target pitch angle.
  • Before the deviation between the first position and the stored initial position is obtained, the executable program stored in the readable storage medium further performs: determining whether the gimbal is in the tracking state; and, when the gimbal is not in the tracking state, generating second control information for controlling the gimbal to enter the tracking state, storing the first position, and taking the first position as the initial position.
  • When the executable program stored in the readable storage medium recognizes the collected image to obtain the recognition result, the operations include: performing matching in the collected image according to a pre-stored predetermined gesture feature, to obtain a matching result; when the matching result indicates that the predetermined gesture exists in the collected image, obtaining, based on the predetermined gesture, a circumscribed geometric figure of the predetermined gesture, and acquiring a position of a preset reference point of the circumscribed geometric figure; and determining the position of the preset reference point as the first position, to obtain the recognition result.
  • After the circumscribed geometric figure is obtained, the executable program further performs: acquiring a current direction of the circumscribed geometric figure and determining the current direction as the first position, to obtain the recognition result; the initial position includes a reference direction of the circumscribed geometric figure, and controlling the rotation angle of the pan/tilt according to the deviation includes determining, according to the angle difference between the current direction and the reference direction, a target roll angle by which the gimbal in the tracking state needs to rotate, and generating the first control information based on the target roll angle.
  • After the deviation between the first position and the stored initial position is obtained and the rotation angle of the pan/tilt is controlled according to the deviation, the executable program further performs: determining whether the predetermined gesture exists in the collected image; and, when the predetermined gesture does not exist in the collected image, generating third control information and controlling the gimbal to exit the tracking state based on the third control information.
  • Before the second control information for controlling the gimbal to enter the tracking state is generated, the executable program further performs: determining whether the hold time of the predetermined gesture reaches a preset time; and, when the hold time reaches the preset time, determining that the second control information needs to be generated.
  • An image is collected through the image acquisition unit mounted on the pan/tilt, and the collected image is recognized to obtain a recognition result; the recognition result indicates that a predetermined gesture exists in the collected image and that the position of the predetermined gesture is at a first position. Then, a deviation between the first position and the stored initial position is obtained, and the rotation angle of the gimbal is controlled according to the deviation so that the first position coincides with the initial position. It can be seen that, since the pan/tilt controls its rotation angle based on the deviation between the first position and the initial position, the pan/tilt can track the motion of the predetermined gesture in real time.
  • Thus, while adjusting the attitude of the gimbal, the user can control the gimbal movement simply by gestures and can observe the real-time attitude of the gimbal at the same time, thereby adjusting the gesture position in time so that the gimbal adjusts accordingly. The present invention therefore solves the technical problems of long lag time and poor interactive experience in gimbal control, and achieves the technical effect of controlling the gimbal in real time.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a pan/tilt head control method and apparatus, and a storage medium. The method includes: collecting an image through an image acquisition unit mounted on the pan/tilt head, and recognizing the collected image to obtain a recognition result, the recognition result indicating that a predetermined gesture exists in the collected image and that the position of the predetermined gesture is at a first position; obtaining a deviation between the first position and a stored initial position, and controlling a rotation angle of the pan/tilt head according to the deviation, so that the first position coincides with the initial position.

Description

Pan/Tilt Head Control Method and Apparatus, and Storage Medium
Cross-Reference to Related Applications
This application is based on and claims priority to Chinese patent application No. 201610608793.4, filed on July 28, 2016, the contents of which are incorporated herein by reference.
Technical Field
The present invention relates to electronic technologies, and in particular to a pan/tilt head (gimbal) control method and apparatus, and a storage medium.
Background
In the existing way of controlling gimbal movement, the direction and speed of the movement are controlled by pressing a button, tapping a virtual button, or manipulating a joystick on a remote controller. Taking a virtual button in a graphical interactive interface as an example: the user taps the virtual button to move the gimbal and then observes whether the resulting attitude has reached the target attitude; if not, the user taps the virtual button again, based on the difference between the current attitude and the target attitude, to adjust the attitude of the gimbal.
It can be seen that the related-art method of controlling the gimbal may require repeated cycles of adjustment, observation, and re-adjustment before the gimbal reaches the target attitude. The related-art method therefore suffers from long lag time, poor interactive experience, and cumbersome operation.
Summary of the Invention
The embodiments of the present invention provide a gimbal control method and apparatus, and a storage medium, which are used to solve the technical problems of long lag time and poor interactive experience in related-art gimbal control.
In a first aspect, the present invention provides a gimbal control method, including:
collecting an image through an image acquisition unit mounted on the gimbal, and recognizing the collected image to obtain a recognition result, the recognition result indicating that a predetermined gesture exists in the collected image and that the position of the predetermined gesture is at a first position;
obtaining a deviation between the first position and a stored initial position, and controlling a rotation angle of the gimbal according to the deviation, so that the first position coincides with the initial position.
Optionally, controlling the rotation angle of the gimbal according to the deviation includes:
determining, according to the deviation, a target yaw angle and a target pitch angle by which the gimbal in a tracking state needs to rotate, and generating first control information based on the target yaw angle and the target pitch angle;
controlling, based on the first control information, the gimbal to rotate by the target yaw angle and the target pitch angle.
Optionally, before the deviation between the first position and the stored initial position is obtained, the method further includes:
determining whether the gimbal is in the tracking state;
when the gimbal is not in the tracking state, generating second control information for controlling the gimbal to enter the tracking state, storing the first position, and taking the first position as the initial position.
Optionally, recognizing the collected image to obtain the recognition result includes:
performing matching in the collected image according to a pre-stored predetermined gesture feature, to obtain a matching result;
when the matching result indicates that the predetermined gesture exists in the collected image, obtaining, based on the predetermined gesture, a circumscribed geometric figure of the predetermined gesture, and acquiring a position of a preset reference point of the circumscribed geometric figure;
determining the position of the preset reference point as the first position, to obtain the recognition result.
Optionally, after the circumscribed geometric figure of the predetermined gesture is obtained based on the predetermined gesture, the method further includes:
acquiring a current direction of the circumscribed geometric figure, and determining the current direction as the first position, to obtain the recognition result;
the initial position includes a reference direction of the circumscribed geometric figure, and controlling the rotation angle of the gimbal according to the deviation includes:
determining, according to an angle difference between the current direction and the reference direction, a target roll angle by which the gimbal in the tracking state needs to rotate, and generating the first control information based on the target roll angle.
Optionally, after the deviation between the first position and the stored initial position is obtained and the rotation angle of the gimbal is controlled according to the deviation, the method further includes:
determining whether the predetermined gesture exists in the collected image;
when the predetermined gesture does not exist in the collected image, generating third control information, and controlling, based on the third control information, the gimbal to exit the tracking state.
Optionally, before the second control information for controlling the gimbal to enter the tracking state is generated, the method further includes:
determining whether a hold time of the predetermined gesture reaches a preset time;
when the hold time of the predetermined gesture reaches the preset time, determining that the second control information needs to be generated.
In a second aspect, the present invention provides a gimbal control apparatus, including:
an identification module, configured to collect an image through an image acquisition unit mounted on the gimbal, and recognize the collected image to obtain a recognition result, the recognition result indicating that a predetermined gesture exists in the collected image and that the position of the predetermined gesture is at a first position;
a first control module, configured to obtain a deviation between the first position and a stored initial position, and control a rotation angle of the gimbal according to the deviation, so that the first position coincides with the initial position.
Optionally, the first control module is specifically configured to determine, according to the deviation, a target yaw angle and a target pitch angle by which the gimbal in a tracking state needs to rotate, generate first control information based on the target yaw angle and the target pitch angle, and control, based on the first control information, the gimbal to rotate by the target yaw angle and the target pitch angle.
Optionally, the apparatus further includes a first determining module, configured to determine, before the deviation between the first position and the stored initial position is obtained, whether the gimbal is in the tracking state;
and a second control module, configured to, when the gimbal is not in the tracking state, generate second control information for controlling the gimbal to enter the tracking state, store the first position, and take the first position as the initial position.
Optionally, the identification module is configured to perform matching in the collected image according to a pre-stored predetermined gesture feature, to obtain a matching result; when the matching result indicates that the predetermined gesture exists in the collected image, obtain, based on the predetermined gesture, a circumscribed geometric figure of the predetermined gesture, and acquire a position of a preset reference point of the circumscribed geometric figure; and determine the position of the preset reference point as the first position, to obtain the recognition result.
Optionally, the initial position includes a reference direction of the circumscribed geometric figure;
the identification module is further configured to, after obtaining the circumscribed geometric figure of the predetermined gesture based on the predetermined gesture, acquire a current direction of the circumscribed geometric figure, and determine the current direction as the first position, to obtain the recognition result;
the initial position includes the reference direction of the circumscribed geometric figure, and the first control module is configured to determine, according to an angle difference between the current direction and the reference direction, a target roll angle by which the gimbal in the tracking state needs to rotate, and generate the first control information based on the target roll angle.
Optionally, the apparatus further includes a second determining module, configured to determine, after the deviation between the first position and the stored initial position is obtained and the rotation angle of the gimbal is controlled according to the deviation, whether the predetermined gesture exists in the collected image;
and a third control module, configured to, when the predetermined gesture does not exist in the collected image, generate third control information, and control, based on the third control information, the gimbal to exit the tracking state.
Optionally, the apparatus further includes a third determining module, configured to determine, before the second control information for controlling the gimbal to enter the tracking state is generated, whether a hold time of the predetermined gesture reaches a preset time;
and a determining module, configured to determine, when the hold time of the predetermined gesture reaches the preset time, that the second control information needs to be generated.
In a third aspect, the present invention provides a gimbal control apparatus, including:
a memory, configured to store an executable program;
a processor, configured to run the executable program stored in the memory, to implement the gimbal control method provided by the embodiments of the present invention.
In a fourth aspect, the present invention provides a readable storage medium on which an executable program is stored; when the executable program is executed by a processor, the gimbal control method provided by the embodiments of the present invention is implemented.
The beneficial effects of the embodiments of the present invention are as follows:
An image is collected through the image acquisition unit mounted on the gimbal, and the collected image is recognized to obtain a recognition result; the recognition result indicates that a predetermined gesture exists in the collected image and that the position of the predetermined gesture is at a first position. A deviation between the first position and the stored initial position is then obtained, and the rotation angle of the gimbal is controlled according to the deviation, so that the first position coincides with the initial position.
It can be seen that, since the gimbal controls its rotation angle based on the deviation between the first position and the initial position, the gimbal can track the motion of the predetermined gesture in real time. Furthermore, while adjusting the attitude of the gimbal, the user can control the gimbal movement simply by gestures and can observe the real-time attitude of the gimbal at the same time, thereby adjusting the gesture position in time so that the gimbal adjusts accordingly. The technical problem of long lag time in gimbal control is thereby solved, and the technical effect of controlling the gimbal in real time is achieved.
Brief Description of the Drawings
FIG. 1 is a flowchart of a gimbal control method according to an embodiment of the present invention;
FIGS. 2a-2c are schematic diagrams of predetermined gestures according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a circumscribed geometric figure and a preset reference position according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a circumscribed geometric figure and a reference direction according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of one implementation according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a gimbal control apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a gimbal control apparatus according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention provide a gimbal control method and apparatus, and a storage medium, which are used to solve the technical problems of long lag time and poor interactive experience in related-art gimbal control.
To solve the above technical problems, in the embodiments of the present invention, an image is collected through an image acquisition unit mounted on the gimbal, and the collected image is recognized to obtain a recognition result; the recognition result indicates that a predetermined gesture exists in the collected image and that the position of the predetermined gesture is at a first position. A deviation between the first position and the stored initial position is then obtained, and the rotation angle of the gimbal is controlled according to the deviation, so that the first position coincides with the initial position. It can be seen that, since the gimbal controls its rotation angle based on the deviation between the first position and the initial position, the gimbal can track the motion of the predetermined gesture in real time. Furthermore, while adjusting the attitude of the gimbal, the user can control the gimbal movement simply by gestures and can observe the real-time attitude of the gimbal at the same time, thereby adjusting the gesture position in time so that the gimbal adjusts accordingly. The present invention therefore solves the technical problem of long lag time in gimbal control and achieves the technical effect of controlling the gimbal in real time.
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings of the embodiments of the present invention. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following associated objects.
Please refer to FIG. 1, which is a flowchart of a gimbal control method according to an embodiment of the present invention. The gimbal control method in the embodiment of the present invention may be applied to electronic devices such as a self-balancing vehicle, an unmanned aerial vehicle, or a mobile robot. These electronic devices to which the present invention can be applied each include a gimbal, and an image acquisition unit is mounted on the gimbal; when the gimbal rotates, the image acquisition unit rotates with it. In addition, those skilled in the art should understand that the gimbal described in the embodiments of the present invention does not refer only to a gimbal in the narrow sense of the field; in specific implementations, a driving device, a rotatable head of a robot, a rotating platform, and the like also fall within the scope of the devices or apparatuses referred to as the gimbal. As shown in FIG. 1, the gimbal control method includes:
S101: Collect an image through an image acquisition unit mounted on the gimbal, and recognize the collected image to obtain a recognition result.
S102: Obtain a deviation between the first position and a stored initial position, and control a rotation angle of the gimbal according to the deviation.
Specifically, before S101, the image acquisition unit mounted on the gimbal is activated so that it is in an image acquisition state. Then, in S101, images are collected by the image acquisition unit, and while being collected, the collected images are recognized to obtain a recognition result. In a specific implementation, a predetermined gesture may or may not be recognized in the collected image. For convenience of describing the method of the embodiment of the present invention, it is assumed that the recognition result in S101 specifically indicates that a predetermined gesture exists in the collected image and that the position of the predetermined gesture is at a first position. The first position in the embodiment of the present invention may specifically be the position of any point of the predetermined gesture, and/or may specifically be the direction of the predetermined gesture; the present invention does not specifically limit this.
In the embodiment of the present invention, an initial position is stored in advance. The initial position includes a reference position of the gimbal when tracking a gesture, and also includes a reference direction when the gimbal tracks the gesture. The initial position may be a default setting, may be set by the user, or may be the first position recognized for the first time; in a specific implementation, the initial position may be set according to actual needs. In S102, the deviation between the first position and the initial position is calculated, and, according to the deviation, it is further calculated how the gimbal needs to rotate to make the first position coincide with the initial position, that is, how to rotate so as to eliminate the deviation. Finally, the rotation angle of the gimbal is controlled according to the calculation result.
It can be seen that, since the gimbal controls its rotation angle based on the deviation between the first position and the initial position, the gimbal can track the motion of the predetermined gesture in real time. Furthermore, while adjusting the attitude of the gimbal, the user can control the gimbal movement simply by gestures and can observe the real-time attitude of the gimbal at the same time, thereby adjusting the gesture in time so that the gimbal adjusts accordingly. The embodiment of the present invention therefore solves the technical problem of long lag time in gimbal control and achieves the technical effect of controlling the gimbal in real time.
How the collected image is recognized to obtain the recognition result is described in detail below.
In the embodiment of the present invention, S101 may be implemented by the following process:
performing matching in the collected image according to a pre-stored predetermined gesture feature, to obtain a matching result;
when the matching result indicates that the predetermined gesture exists in the collected image, obtaining, based on the predetermined gesture, a circumscribed geometric figure of the predetermined gesture, and acquiring a position of a preset reference point of the circumscribed geometric figure;
determining the position of the preset reference point as the first position, to obtain the recognition result.
Specifically, to determine whether a predetermined gesture exists in the collected image, hand detection needs to be performed in the collected image first; in other words, it is first necessary to recognize whether there is a hand in the collected image. Feature matching is performed in the collected image according to general hand features. If an element matching the general hand features is found in the collected image, it is determined that the collected image contains a hand and that this element is the hand. Of course, if a hand is recognized, it is further recognized whether the gesture of the hand is the predetermined gesture; if no hand is recognized in the collected image, there is no need to further confirm whether the predetermined gesture exists in the collected image.
In the embodiment of the present invention, a predetermined gesture feature is stored in advance. The predetermined gesture may be any of various gestures, for example the three shown in FIGS. 2a-2c; the embodiment of the present invention does not specifically limit this. The predetermined gesture feature may be a default setting; the user may also set a predetermined gesture according to needs and preferences and control the image acquisition unit to collect the self-set predetermined gesture, so that the gimbal control apparatus obtains and stores the feature of that gesture. In a specific implementation, those skilled in the art may set it according to actual needs, without specific restriction.
When a hand exists in the collected image, matching is performed in the collected image according to the predetermined gesture feature, specifically by matching against the hand element to obtain a matching degree. If the matching degree reaches a threshold, indicating that the gesture of the hand is the predetermined gesture, a matching result indicating that the predetermined gesture exists in the collected image is obtained; conversely, if the matching degree is below the threshold, indicating that the gesture of the hand is not the predetermined gesture, a matching result indicating that no predetermined gesture exists in the collected image is obtained. The threshold is, for example, 80%, 90%, or 95%.
In an optional embodiment of the present invention, when the matching result indicates that the predetermined gesture exists in the collected image, the circumscribed geometric figure of the predetermined gesture is obtained according to the edge features of the predetermined gesture. In the implementation of the present invention, the circumscribed geometric figure of the predetermined gesture is the smallest geometric figure capable of enclosing the predetermined gesture, for example a rectangle, a circle, an ellipse, or another irregular geometric figure. Furthermore, in another optional embodiment, the position of a preset reference point of the circumscribed geometric figure may also be obtained. The preset reference point may be any point on the edge of the circumscribed geometric figure, such as a vertex or the lowest point, or any point inside the circumscribed geometric figure, such as the center or a focal point; the present invention does not specifically limit this.
Then, the position of the preset reference point is read, the position of the preset reference point is determined as the first position, and the position of the preset reference point is included in the recognition result, thereby obtaining a recognition result indicating that the predetermined gesture exists in the collected image and that the position of the predetermined gesture is at the first position.
In an optional embodiment of the present invention, to make it easy for the user to confirm whether the predetermined gesture has been successfully recognized, the collected image and the circumscribed geometric figure may also be displayed on a display unit.
To clearly illustrate the above recognition process, the following takes as an example the case where the predetermined gesture is the gesture shown in FIG. 2a, the circumscribed geometric figure is a rectangle, and the preset reference position is the center of the rectangle. The collected image shown in FIG. 3 is displayed on the display unit. First, in FIG. 3, recognition is performed according to the general hand features, and the hand in FIG. 3 is recognized. Further, matching is performed according to the stored predetermined gesture feature of the predetermined gesture shown in FIG. 2a, and a matching result indicating that the predetermined gesture exists is obtained. Then the circumscribed geometric figure of the predetermined gesture is further obtained from the edge features of the predetermined gesture, as shown by the dashed rectangle in FIG. 3. In addition, the position of the center of the rectangle is obtained, as shown by the white circle in FIG. 3, and the position of the center is included in the recognition result and output.
Next, S102 is performed to calculate the deviation between the first position and the initial position. In the embodiment of the present invention, the initial position is the position of a certain point. Both the first position and the initial position are expressed as coordinates in the same coordinate system. The present invention refers to this coordinate system as the standard coordinate system, which may be the acquisition-plane coordinate system of the image acquisition unit or the display coordinate system of the display unit, etc.; the present invention does not specifically limit this. Therefore, when calculating the deviation, the deviation can be obtained by subtracting the coordinates of the first position from the coordinates of the initial position. For example, if the initial position is (0, 0) and the first position is (4, 3), the deviation is (0-4, 0-3), that is, (-4, -3).
Next, the rotation angle of the gimbal is controlled based on the deviation. In the embodiment of the present invention, the rotation angle of the gimbal may specifically be controlled as follows:
determining, according to the deviation, a target yaw angle and a target pitch angle by which the gimbal in the tracking state needs to rotate, and generating the first control information based on the target yaw angle and the target pitch angle;
controlling, based on the first control information, the gimbal to rotate by the target yaw angle and the target pitch angle.
Specifically, according to the correspondence between the standard coordinate system and the yaw and pitch angles, the yaw angle and/or pitch angle by which the gimbal needs to rotate to eliminate the deviation is calculated. The correspondence between the standard coordinate system and the yaw and pitch angles indicates how many degrees of yaw and/or pitch rotation of the gimbal move a point in the standard coordinate system by one basic unit. The calculated yaw angle is then taken as the target yaw angle, and the calculated pitch angle as the target pitch angle. For example, the target yaw angle by which the gimbal needs to rotate may be determined according to the deviation between the first position and the initial position in the direction of the roll axis, and the target pitch angle by which the gimbal needs to rotate may be determined according to the deviation between the first position and the initial position in the direction of the pitch axis.
Next, the target yaw angle and the target pitch angle are input into a gimbal control model, the corresponding control parameters are calculated, and the first control information is thereby generated. In a specific implementation, a person of ordinary skill in the art to which the present invention belongs can obtain the gimbal control model by detecting, modeling, and debugging the gimbal motion, which is not described in detail here.
In an optional embodiment of the present invention, the first control information is input to a first motor that drives the gimbal to rotate about the yaw axis, so that the first motor drives the gimbal to rotate by the target yaw angle based on the first control information; and the first control information is input to a second motor that drives the gimbal to rotate about the pitch axis, so that the second motor drives the gimbal to rotate by the target pitch angle based on the first control information. In addition, to prevent shaking, a third motor that drives the gimbal to rotate about the roll axis also needs to perform corresponding compensation to ensure the stability of the gimbal.
It can be seen from the above description that controlling the gimbal to rotate by the target yaw angle and the target pitch angle eliminates the deviation, so that the first position coincides with the initial position and the gimbal tracks the motion of the predetermined gesture. Moreover, on the one hand, the gimbal adjusts its yaw angle and pitch angle according to the first position of the predetermined gesture, so the predetermined gesture can control the rotation angle of the gimbal. On the other hand, the speed of the predetermined gesture's motion determines the magnitude of the deviation, and the magnitude of the deviation in turn determines the magnitudes of the target yaw angle and the target pitch angle. Since the time slot in which the gimbal rotates by the target yaw angle and the target pitch angle is the same, the speed of the predetermined gesture's motion also controls the rotation speed of the gimbal.
In combination with the above embodiment, in another embodiment, the gimbal may also be controlled to rotate by a certain roll angle about the roll axis. Specifically, a reference direction may be set for the circumscribed geometric figure, and the rotation of the gimbal about the roll axis is then controlled according to the angle difference between the current direction of the circumscribed geometric figure and the reference direction. In this case, the first position further includes the current direction of the circumscribed geometric figure, and the initial position further includes the reference direction. Specifically, when the gimbal is controlled to rotate about the roll axis, after the circumscribed geometric figure of the predetermined gesture is obtained based on the predetermined gesture, the method further includes:
acquiring the current direction of the circumscribed geometric figure, and determining the current direction as the first position, to obtain the recognition result;
in this case, the initial position further includes the reference direction of the circumscribed geometric figure, and controlling the rotation angle of the gimbal according to the deviation includes:
determining, according to the angle difference between the current direction and the reference direction, a target roll angle by which the gimbal in the tracking state needs to rotate, and generating the first control information based on the target roll angle.
Continuing the earlier example in which the circumscribed geometric figure is a rectangle: as shown in FIG. 4, assume the reference direction is set so that the long edge of the rectangle is parallel to the vertical axis of the standard coordinate system. The angle difference θ between the current direction of the rectangle's long edge and the reference direction is then calculated, and the gimbal is controlled to rotate by the corresponding roll angle.
In another embodiment of the present invention, before the deviation between the first position and the initial position is obtained in S102, the method may further include:
determining whether the gimbal is in the tracking state;
when the gimbal is not in the tracking state, generating second control information for controlling the gimbal to enter the tracking state, storing the first position, and taking the first position as the initial position.
Specifically, in the embodiment of the present invention, the tracking state is the state in which the gimbal tracks the motion of the predetermined gesture in the manner of the above embodiments. If the gimbal is not in the tracking state, it will not rotate to follow the movement of the predetermined gesture even if the predetermined gesture is recognized in the collected image. Therefore, before S102, it is necessary to determine whether the gimbal is in the tracking state. Whether the gimbal is in the tracking state can be determined by querying the state of the gimbal: if the queried state is the tracking state, it is determined that the gimbal is in the tracking state; conversely, if the queried state is not the tracking state, it is determined that the gimbal is not in the tracking state.
In the embodiment of the present invention, if the recognition result indicates that the predetermined gesture has been recognized and the position of the predetermined gesture is at the first position, and the gimbal is not in the tracking state, this indicates that the user wishes to control the gimbal through the predetermined gesture at this time. Therefore, the second control information for controlling the gimbal to enter the tracking state is generated, and the first position is stored as the initial position for subsequent tracking.
In combination with the above embodiment, in another embodiment, before the second control information is generated, the method further includes:
determining whether a hold time of the predetermined gesture reaches a preset time;
when the hold time of the predetermined gesture reaches the preset time, determining that the second control information needs to be generated.
In a specific implementation, the user may make the predetermined gesture by misoperation without needing the gimbal to enter the tracking state. Therefore, to improve the accuracy of the control, before the second control information is generated, it is necessary to determine whether the hold time of the predetermined gesture reaches a preset time. The preset time is, for example, 2 seconds (s) or 3 s, and a person of ordinary skill in the art to which the present invention belongs may set it according to actual conditions; the present invention does not specifically limit this.
As for the hold time, since the image acquisition unit collects images at equal time intervals, the hold time can be calculated from the number of consecutive frames in which the predetermined gesture is recognized. For example, if the image acquisition unit collects one frame every 1 millisecond (ms) and the predetermined gesture is recognized in 1200 consecutive frames, the hold time of the predetermined gesture is calculated to be 1.2 s.
Since the user is unlikely to have made the predetermined gesture by misoperation when its hold time reaches the preset time, it is determined that the user needs to control the gimbal, and it is therefore determined that the second control information needs to be generated.
It can be seen from the above description that generating the second control information only after it is determined that the hold time of the predetermined gesture reaches the preset time prevents the gimbal from entering the tracking state due to misoperation, improving the accuracy of the control.
An example of how the gimbal is controlled by the predetermined gesture is given below. Please refer to FIG. 5, which is a schematic diagram of one implementation. The user stands in front of the self-balancing vehicle and raises the left hand, keeping the left hand within the acquisition range of the image acquisition unit. The gimbal control apparatus recognizes the predetermined gesture from the collected image and determines that the gimbal is not in the tracking state at this time. Further, it calculates that the hold time of the predetermined gesture reaches the preset time of 2 s, so it generates the second control information and controls the gimbal to enter the tracking state. In addition, it obtains the center position of the circumscribed geometric figure of the predetermined gesture and stores this position as the initial position. After the display unit displays information indicating that the gimbal has entered the tracking state, the user starts moving the left hand as needed. During the movement, the image acquisition unit continuously collects images, the predetermined gesture is continuously recognized from the collected images, and the first position of the predetermined gesture is obtained. The deviation between the first position and the initial position is then calculated in real time, and the gimbal is controlled in real time, according to the deviation, to rotate by the corresponding target yaw angle and target pitch angle.
In an optional embodiment of the present invention, if the user no longer needs to control the gimbal movement, the gimbal may exit the tracking state. In the embodiment of the present invention, there are multiple ways of controlling the gimbal to exit the tracking state. Several of them are listed below; in a specific implementation, they include but are not limited to the following.
The first way:
When the gimbal is in the tracking state, it is determined whether the stationary time of the predetermined gesture reaches an exit time; when the stationary time reaches the exit time, third control information is generated, and the gimbal is controlled to exit the tracking state based on the third control information.
Specifically, when the user needs the gimbal to enter the tracking state, the predetermined gesture can be held until the preset time is reached; similarly, when the user needs the gimbal to exit the tracking state, this can be achieved by keeping the predetermined gesture stationary until the exit time is reached. The way the stationary time is obtained is similar to the way the hold time is obtained and is therefore not repeated. The exit time is, for example, 2 s or 3 s; the present invention does not specifically limit this. When it is determined that the stationary time reaches the exit time, it is determined that the user needs the gimbal to exit the tracking state; the third control information is then generated to control the gimbal to exit the tracking state, and the gimbal keeps the attitude it had when exiting the tracking state.
For example, assume that both the exit time and the preset time are 1 s. At time T1, the user remotely controls the self-balancing vehicle to move in front of him and wishes to control the rotation of the vehicle's gimbal, so the user raises the right hand, makes the predetermined gesture, and holds it. Based on the images collected by the image acquisition unit, the gimbal control apparatus recognizes the predetermined gesture and calculates that the hold time of the predetermined gesture reaches 1 s, so it controls the gimbal to enter the tracking state. After confirming that the gimbal has entered the tracking state, the user starts moving the right hand to control the gimbal's rotation. At time T2 after T1, after the gimbal has rotated to the attitude the user desires, the user again keeps the right hand stationary. The gimbal control apparatus determines that the predetermined gesture has been stationary for 1 s and controls the gimbal to exit the tracking state.
The second way:
When the gimbal is in the tracking state, it is determined whether the predetermined gesture is converted into an exit gesture; when the predetermined gesture is converted into the exit gesture, third control information is generated, and the gimbal is controlled to exit the tracking state based on the third control information.
Specifically, in the embodiment of the present invention, the exit gesture is a gesture different from the predetermined gesture; for example, the predetermined gesture is as shown in FIG. 2a and the exit gesture is as shown in FIG. 2b. When the gimbal is in the tracking state, the collected image is recognized; if the exit gesture is recognized, it is determined that the predetermined gesture has been converted into the exit gesture. Recognition of the exit gesture is similar to recognition of the predetermined gesture. When the predetermined gesture is converted into the exit gesture, it is determined that the user needs the gimbal to exit the tracking state; the third control information is then generated to control the gimbal to exit the tracking state, and the gimbal keeps the attitude it had when exiting the tracking state.
For example, assume the preset time is 1 s. At time T3, the user remotely controls the self-balancing vehicle to move in front of him and wishes to control the rotation of the vehicle's gimbal, so the user raises the right hand, makes the predetermined gesture shown in FIG. 2a, and holds it. Based on the images collected by the image acquisition unit, the gimbal control apparatus recognizes the predetermined gesture and calculates that the hold time of the predetermined gesture reaches 1 s, so it controls the gimbal to enter the tracking state. After confirming that the gimbal has entered the tracking state, the user starts moving the right hand to control the gimbal's rotation. At time T4 after T3, after the gimbal has rotated to the attitude the user desires, the user makes the gesture shown in FIG. 2b with the right hand. The gimbal control apparatus recognizes the exit gesture and controls the gimbal to exit the tracking state.
The third way:
Determining whether the predetermined gesture exists in the collected image;
when the predetermined gesture does not exist in the collected image, generating third control information, and controlling, based on the third control information, the gimbal to exit the tracking state.
Specifically, when the gimbal is in the tracking state, the gimbal control apparatus continuously recognizes the predetermined gesture and obtains the first position, so as to continuously control the rotation of the gimbal. When the user's hand moves out of the acquisition range of the image acquisition unit, the predetermined gesture is no longer recognized in the collected image. When the predetermined gesture cannot be recognized from the collected image, it is determined that no predetermined gesture exists in the collected image and that the user needs the gimbal to exit the tracking state; the third control information is then generated to control the gimbal to exit the tracking state, and the gimbal keeps the attitude it had when exiting the tracking state.
For example, assume the preset time is 1 s. At time T5, the user remotely controls the self-balancing vehicle to move in front of him and wishes to control the rotation of the vehicle's gimbal, so the user raises the right hand and holds the predetermined gesture shown in FIG. 2a. Based on the images collected by the image acquisition unit, the apparatus controlling the gimbal recognizes the predetermined gesture and calculates that the hold time of the predetermined gesture reaches 1 s, so it controls the gimbal to enter the tracking state. After confirming that the gimbal has entered the tracking state, the user starts moving the right hand to control the gimbal's rotation. At time T6 after T5, after the gimbal has rotated to the attitude the user desires, the user lowers the right hand and moves it out of the acquisition range. At time T6, the apparatus controlling the gimbal does not recognize the predetermined gesture, determines that no predetermined gesture exists, and controls the gimbal to exit the tracking state.
Based on the same inventive concept as the gimbal control method in the foregoing embodiments, a second aspect of the present invention further provides a gimbal control apparatus, which, as shown in FIG. 6, includes:
an identification module 101, configured to collect an image through an image acquisition unit mounted on the gimbal, and recognize the collected image to obtain a recognition result, the recognition result indicating that a predetermined gesture exists in the collected image and that the position of the predetermined gesture is at a first position;
a first control module 102, configured to obtain a deviation between the first position and a stored initial position, and control a rotation angle of the gimbal according to the deviation, so that the first position coincides with the initial position.
Specifically, the first control module 102 is specifically configured to determine, according to the deviation, a target yaw angle and a target pitch angle by which the gimbal in the tracking state needs to rotate, generate first control information based on the target yaw angle and the target pitch angle, and control, based on the first control information, the gimbal to rotate by the target yaw angle and the target pitch angle.
In an optional embodiment of the present invention, the apparatus further includes a first determining module (connected to the identification module 101, not shown in FIG. 6), configured to determine, before the deviation between the first position and the stored initial position is obtained, whether the gimbal is in the tracking state;
and a second control module, configured to, when the gimbal is not in the tracking state, generate second control information for controlling the gimbal to enter the tracking state, store the first position, and take the first position as the initial position.
Specifically, the identification module 101 is specifically configured to perform matching in the collected image according to a pre-stored predetermined gesture feature, to obtain a matching result; when the matching result indicates that the predetermined gesture exists in the collected image, obtain, based on the predetermined gesture, a circumscribed geometric figure of the predetermined gesture, and acquire a position of a preset reference point of the circumscribed geometric figure; and determine the position of the preset reference point as the first position, to obtain the recognition result.
In an optional embodiment of the present invention, the identification module 101 is further configured to, after obtaining the circumscribed geometric figure of the predetermined gesture based on the predetermined gesture, acquire a current direction of the circumscribed geometric figure, and determine the current direction as the first position, to obtain the recognition result;
the initial position includes a reference direction of the circumscribed geometric figure, and the first control module 102 is configured to determine, according to an angle difference between the current direction and the reference direction, a target roll angle by which the gimbal in the tracking state needs to rotate, and generate the first control information based on the target roll angle.
In an optional embodiment of the present invention, the apparatus further includes a second determining module (connected to the identification module 101, not shown in FIG. 6), configured to determine, after the deviation between the first position and the stored initial position is obtained and the rotation angle of the gimbal is controlled according to the deviation, whether the predetermined gesture exists in the collected image;
and a third control module, configured to, when the predetermined gesture does not exist in the collected image, generate third control information, and control, based on the third control information, the gimbal to exit the tracking state.
In an optional embodiment of the present invention, the apparatus further includes a third determining module (connected to the identification module 101, not shown in FIG. 6), configured to determine, before the second control information configured to control the gimbal to enter the tracking state is generated, whether a hold time of the predetermined gesture reaches a preset time;
and a determining module, configured to determine, when the hold time of the predetermined gesture reaches the preset time, that the second control information needs to be generated.
The various variations and specific examples of the gimbal control method described above with reference to FIG. 1 to FIG. 4 are also applicable to the gimbal control apparatus of this embodiment. From the foregoing detailed description of the gimbal control method, those skilled in the art can clearly understand how the gimbal control apparatus of this embodiment is implemented; therefore, for brevity of the description, details are not repeated here.
Fig. 7 is a schematic structural diagram of a gimbal control apparatus 200 according to another embodiment of the present invention. The gimbal control apparatus 200 includes at least one processor 201, a memory 202 and at least one communication interface 203. The components of the gimbal control apparatus 200 are coupled together through a bus system 204. It can be understood that the bus system 204 is used to implement connection and communication among these components. In addition to a data bus, the bus system 204 includes a power bus, a control bus and a status signal bus. For clarity of description, however, the various buses are all labeled as the bus system 204 in Fig. 7.
It can be understood that the memory 202 may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM) or an erasable programmable read-only memory (EPROM). The memory 202 described in the embodiment of the present invention is intended to include, without being limited to, these and any other suitable types of memory.
The memory 202 in the embodiment of the present invention is used to store various types of data to support the operation of the gimbal control apparatus 200. Examples of such data include any executable program to be run on the gimbal control apparatus 200, such as an operating system 2021 and application programs 2022, as well as images captured by the gimbal. The operating system 2021 contains various system programs, such as a framework layer, a core library layer and a driver layer, for implementing various basic services and processing hardware-based tasks. The application programs 2022 may contain various application programs, and a program implementing the gimbal control method of the embodiment of the present invention may be contained in the application programs 2022.
The gimbal control method provided by the above embodiments of the present invention may be applied to, or implemented by, the processor 201. The processor 201 may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 201 or by instructions in the form of software. The processor 201 may be a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 201 may implement or execute the methods, steps and logic block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium, the storage medium being located in the memory 202; the processor 201 reads the information in the memory 202 and completes the steps of the foregoing method in combination with its hardware.
In an exemplary embodiment, the gimbal control apparatus 200 may be implemented by one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs) or other electronic elements for performing the foregoing method.
An embodiment of the present invention further provides a readable storage medium, for example a memory 202 including an executable program, where the executable program can be executed by the processor 201 of the gimbal control apparatus 200. The computer-readable storage medium may be a memory such as an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc or CD-ROM, or may be any of various devices including one or any combination of the above memories, such as a mobile phone, a computer, a tablet device or a personal digital assistant.
In an exemplary embodiment, the executable program stored in the readable storage medium is used to cause a processor to perform the following operations:
capturing an image through an image acquisition unit mounted on the gimbal, and recognizing the captured image to obtain a recognition result, the recognition result indicating that a predetermined gesture exists in the captured image and that the position of the predetermined gesture is at a first position;
obtaining a deviation between the first position and a stored initial position, and controlling the rotation angle of the gimbal according to the deviation, so that the first position coincides with the initial position.
In an exemplary embodiment, when the executable program stored in the readable storage medium controls the rotation angle of the gimbal according to the deviation, the operations include:
determining, according to the deviation, a target yaw angle and a target pitch angle by which the gimbal in the tracking state needs to rotate, and generating first control information based on the target yaw angle and the target pitch angle;
controlling, based on the first control information, the gimbal to rotate by the target yaw angle and the target pitch angle.
In an exemplary embodiment, before obtaining the deviation between the first position and the stored initial position, the executable program stored in the readable storage medium further performs:
judging whether the gimbal is in the tracking state;
when the gimbal is not in the tracking state, generating second control information for controlling the gimbal to enter the tracking state, storing the first position, and using the first position as the initial position.
In an exemplary embodiment, when the executable program stored in the readable storage medium recognizes the captured image to obtain the recognition result, the operations include:
performing matching in the captured image according to pre-stored features of the predetermined gesture to obtain a matching result;
when the matching result indicates that the predetermined gesture exists in the captured image, obtaining, based on the predetermined gesture, a circumscribed geometric figure of the predetermined gesture, and acquiring the position of a preset reference point of the circumscribed geometric figure;
determining the position of the preset reference point as the first position, and obtaining the recognition result.
In an exemplary embodiment, after obtaining the circumscribed geometric figure of the predetermined gesture based on the predetermined gesture, the executable program stored in the readable storage medium further performs:
acquiring the current direction of the circumscribed geometric figure, determining the current direction as the first position, and obtaining the recognition result;
the initial position includes a reference direction of the circumscribed geometric figure, and controlling the rotation angle of the gimbal according to the deviation includes:
determining, according to the angle difference between the current direction and the reference direction, a target roll angle by which the gimbal in the tracking state needs to rotate, and generating the first control information based on the target roll angle.
In an exemplary embodiment, after obtaining the deviation between the first position and the stored initial position and controlling the rotation angle of the gimbal according to the deviation, the executable program stored in the readable storage medium further performs:
judging whether the predetermined gesture exists in the captured image;
when the predetermined gesture does not exist in the captured image, generating third control information, and controlling the gimbal to exit the tracking state based on the third control information.
In an exemplary embodiment, before generating the second control information for controlling the gimbal to enter the tracking state, the executable program stored in the readable storage medium further performs:
judging whether the hold time of the predetermined gesture has reached a preset time;
when the hold time of the predetermined gesture reaches the preset time, determining that the second control information needs to be generated.
The beneficial effects of the embodiments of the present invention are as follows:
In the embodiments of the present invention, an image is captured through an image acquisition unit mounted on the gimbal, and the captured image is recognized to obtain a recognition result, the recognition result indicating that a predetermined gesture exists in the captured image and that the position of the predetermined gesture is at a first position. A deviation between the first position and a stored initial position is then obtained, and the rotation angle of the gimbal is controlled according to the deviation so that the first position coincides with the initial position. Since the gimbal controls its rotation angle based on the deviation between the first position and the initial position, the gimbal can track the movement of the predetermined gesture in real time. Thus, while adjusting the attitude of the gimbal, the user can control the motion of the gimbal simply through gestures and can simultaneously observe the real-time attitude of the gimbal, so as to adjust the gesture position promptly and make the gimbal adjust accordingly. The present invention therefore solves the technical problem that controlling the gimbal involves a long lag time and a poor interactive experience, and achieves the technical effect of controlling the gimbal in real time.
Obviously, those skilled in the art can make various modifications and variations to the present invention without departing from the spirit and scope of the present invention. Thus, provided that these modifications and variations fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to encompass them.

Claims (16)

  1. A gimbal control method, comprising:
    capturing an image through an image acquisition unit mounted on a gimbal, and recognizing the captured image to obtain a recognition result, the recognition result indicating that a predetermined gesture exists in the captured image and that a position of the predetermined gesture is at a first position;
    obtaining a deviation between the first position and a stored initial position, and controlling a rotation angle of the gimbal according to the deviation, so that the first position coincides with the initial position.
  2. The method according to claim 1, wherein controlling the rotation angle of the gimbal according to the deviation comprises:
    determining, according to the deviation, a target yaw angle and a target pitch angle by which the gimbal in a tracking state needs to rotate, and generating first control information based on the target yaw angle and the target pitch angle;
    controlling, based on the first control information, the gimbal to rotate by the target yaw angle and the target pitch angle.
  3. The method according to claim 2, wherein, before obtaining the deviation between the first position and the stored initial position, the method further comprises:
    judging whether the gimbal is in the tracking state;
    when the gimbal is not in the tracking state, generating second control information for controlling the gimbal to enter the tracking state, storing the first position, and using the first position as the initial position.
  4. The method according to any one of claims 2 to 3, wherein recognizing the captured image to obtain the recognition result comprises:
    performing matching in the captured image according to pre-stored features of the predetermined gesture to obtain a matching result;
    when the matching result indicates that the predetermined gesture exists in the captured image, obtaining, based on the predetermined gesture, a circumscribed geometric figure of the predetermined gesture, and acquiring a position of a preset reference point of the circumscribed geometric figure;
    determining the position of the preset reference point as the first position, and obtaining the recognition result.
  5. The method according to claim 4, wherein, after obtaining the circumscribed geometric figure of the predetermined gesture based on the predetermined gesture, the method further comprises:
    acquiring a current direction of the circumscribed geometric figure, determining the current direction as the first position, and obtaining the recognition result;
    wherein the initial position comprises a reference direction of the circumscribed geometric figure, and controlling the rotation angle of the gimbal according to the deviation comprises:
    determining, according to an angle difference between the current direction and the reference direction, a target roll angle by which the gimbal in the tracking state needs to rotate, and generating the first control information based on the target roll angle.
  6. The method according to claim 2, wherein, after obtaining the deviation between the first position and the stored initial position and controlling the rotation angle of the gimbal according to the deviation, the method further comprises:
    judging whether the predetermined gesture exists in the captured image;
    when the predetermined gesture does not exist in the captured image, generating third control information, and controlling the gimbal to exit the tracking state based on the third control information.
  7. The method according to claim 3, wherein, before generating the second control information for controlling the gimbal to enter the tracking state, the method further comprises:
    judging whether a hold time of the predetermined gesture has reached a preset time;
    when the hold time of the predetermined gesture reaches the preset time, determining that the second control information needs to be generated.
  8. A gimbal control apparatus, comprising:
    a recognition module, configured to capture an image through an image acquisition unit mounted on a gimbal and recognize the captured image to obtain a recognition result, the recognition result indicating that a predetermined gesture exists in the captured image and that a position of the predetermined gesture is at a first position;
    a first control module, configured to obtain a deviation between the first position and a stored initial position, and control a rotation angle of the gimbal according to the deviation, so that the first position coincides with the initial position.
  9. The apparatus according to claim 8, wherein
    the first control module is configured to determine, according to the deviation, a target yaw angle and a target pitch angle by which the gimbal in a tracking state needs to rotate, generate first control information based on the target yaw angle and the target pitch angle, and control, based on the first control information, the gimbal to rotate by the target yaw angle and the target pitch angle.
  10. The apparatus according to claim 9, further comprising:
    a first judgment module, configured to judge, before the deviation between the first position and the stored initial position is obtained, whether the gimbal is in the tracking state;
    a second control module, configured to generate, when the gimbal is not in the tracking state, second control information for controlling the gimbal to enter the tracking state, store the first position, and then use the first position as the initial position.
  11. The apparatus according to any one of claims 9 to 10, wherein
    the recognition module is configured to perform matching in the captured image according to pre-stored features of the predetermined gesture to obtain a matching result; when the matching result indicates that the predetermined gesture exists in the captured image, obtain, based on the predetermined gesture, a circumscribed geometric figure of the predetermined gesture, and acquire a position of a preset reference point of the circumscribed geometric figure; and determine the position of the preset reference point as the first position, thereby obtaining the recognition result.
  12. The apparatus according to claim 11, wherein
    the initial position comprises a reference direction of the circumscribed geometric figure;
    the recognition module is further configured to, after obtaining the circumscribed geometric figure of the predetermined gesture based on the predetermined gesture, acquire a current direction of the circumscribed geometric figure, determine the current direction as the first position, and obtain the recognition result;
    the first control module is configured to determine, according to an angle difference between the current direction and the reference direction, a target roll angle by which the gimbal in the tracking state needs to rotate, and generate the first control information based on the target roll angle.
  13. The apparatus according to claim 9, further comprising:
    a second judgment module, configured to judge, after the deviation between the first position and the stored initial position is obtained and the rotation angle of the gimbal is controlled according to the deviation, whether the predetermined gesture exists in the captured image;
    a third control module, configured to generate third control information when the predetermined gesture does not exist in the captured image, and control, based on the third control information, the gimbal to exit the tracking state.
  14. The apparatus according to claim 10, wherein
    the apparatus further comprises a third judgment module, configured to judge, before the second control information for controlling the gimbal to enter the tracking state is generated, whether a hold time of the predetermined gesture has reached a preset time;
    and a determination module, configured to determine that the second control information needs to be generated when the hold time of the predetermined gesture reaches the preset time.
  15. A gimbal control apparatus, comprising:
    a memory, configured to store an executable program;
    a processor, configured to run the executable program stored in the memory to implement the gimbal control method according to any one of claims 1 to 7.
  16. A readable storage medium storing an executable program which, when executed by a processor, implements the gimbal control method according to any one of claims 1 to 7.
PCT/CN2017/094882 2016-07-28 2017-07-28 Gimbal control method and apparatus, and storage medium WO2018019290A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610608793.4 2016-07-28
CN201610608793.4A CN106249888A (zh) 2016-07-28 Gimbal control method and apparatus

Publications (1)

Publication Number Publication Date
WO2018019290A1 2018-02-01

Family

ID=57604697

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/094882 WO2018019290A1 (zh) 2016-07-28 2017-07-28 Gimbal control method and apparatus, and storage medium

Country Status (2)

Country Link
CN (1) CN106249888A (zh)
WO (1) WO2018019290A1 (zh)

Also Published As

Publication number Publication date
CN106249888A (zh) 2016-12-21


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 17833593; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 17833593; Country of ref document: EP; Kind code of ref document: A1)