CN113508351A - Control method, intelligent glasses, movable platform, holder, control system and computer-readable storage medium - Google Patents


Info

Publication number
CN113508351A
Authority
CN
China
Prior art keywords
movable platform
control
camera
user
control instruction
Prior art date
Legal status
Pending
Application number
CN201980093254.3A
Other languages
Chinese (zh)
Inventor
黄敏
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN113508351A

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 3/00 Control of position or direction
    • G05D 3/12 Control of position or direction using feedback

Abstract

Embodiments of the present application provide a control method, smart glasses, a movable platform, a gimbal, a control system, and a storage medium. The method is applied to a device equipped with an acquisition device that collects attitude data of a user in real time, and comprises the following steps: acquiring attitude data from the acquisition device, comparing the attitude data with designated attitude data, and determining whether to generate a control instruction according to the comparison result; if so, sending the generated control instruction to a movable platform, where the movable platform is equipped with a camera and the control instruction is used to adjust the field of view captured by the camera. By adjusting the camera through somatosensory data, these embodiments reduce the user's operation steps and improve the user experience.

Description

Control method, smart glasses, movable platform, gimbal, control system and computer-readable storage medium
Technical Field
The present application relates to the technical field of human-computer interaction, and in particular to a control method, smart glasses, a movable platform, a gimbal, a control system, and a computer-readable storage medium.
Background
To obtain a better shooting experience and better shooting works, people are no longer satisfied with shooting only through the camera carried by a mobile terminal such as a mobile phone. With the development of network technology, devices that assist a camera or other shooting apparatus are increasingly common, such as tracking shots by an unmanned vehicle or aerial shots by an unmanned aerial vehicle. Although a better shooting effect can be obtained with such auxiliary devices, the user must control the auxiliary device through a dedicated remote control device during shooting, and in some scenes must control the device while watching a real-time picture. The operation requirements are therefore high, the operation is cumbersome, and the user experience is poor.
Disclosure of Invention
In view of the above, an object of the present application is to provide a control method, smart glasses, a movable platform, a gimbal, a control system, and a computer-readable storage medium.
According to a first aspect of embodiments of the present application, there is provided a control method applied to a device equipped with an acquisition device, where the acquisition device is used to collect attitude data of a user in real time, the method comprising:
acquiring attitude data from the acquisition device, comparing the attitude data with designated attitude data, and determining whether to generate a control instruction according to the comparison result;
if so, sending the generated control instruction to a movable platform, where the movable platform is equipped with a camera and the control instruction is used to adjust the field of view captured by the camera.
According to a second aspect of embodiments of the present application, there is provided smart glasses, comprising:
a processor;
a memory for storing processor-executable instructions;
an acquisition device for collecting attitude data of the user in real time;
a wireless communication device;
wherein the processor invokes the executable instructions, which when executed, are configured to perform the control method of any of the first aspects.
According to a third aspect of embodiments of the present application, there is provided a movable platform mounted with a camera, the movable platform including:
a body;
the power system is arranged in the machine body and used for providing power for the movable platform;
a wireless communication system installed in the body for receiving the control command transmitted by the smart glasses according to the second aspect; and the control system is arranged in the machine body and used for adjusting the visual field range shot by the camera according to the control instruction.
According to a fourth aspect of embodiments of the present application, there is provided a gimbal, comprising:
a gimbal shaft;
an angle sensor for acquiring angle information of the gimbal shaft;
a communication device for receiving control instructions transmitted by the movable platform according to any one of the third aspects; and
a processor for controlling rotation of the gimbal shaft according to the control instruction and the angle information, so as to move from the current joint angle to a target joint angle.
According to a fifth aspect of embodiments of the present application, there is provided a control system, comprising the smart glasses according to any one of the second aspects, the movable platform according to any one of the third aspects, and a camera; the camera is mounted on the movable platform.
According to a sixth aspect of embodiments herein, there is provided a computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the method of any one of the first aspects.
The embodiments of the present application have the following advantages:
the equipment is used for acquiring the attitude data of a user in real time through the acquisition device, and then based on the comparison result of the attitude data and the designated attitude data determines to generate a control instruction for adjusting the visual field range shot by the camera, the adjustment process is realized by utilizing the body feeling, the additional manual operation of the user is not needed, the operation steps of the user are reduced, the use experience of the user is favorably improved, the corresponding remote control equipment is not needed to be additionally designed, and the hardware expenditure cost is favorably saved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure as claimed.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive labor.
Fig. 1A is a flowchart of a control method according to an exemplary embodiment of the present application.
Fig. 1B is a schematic diagram of a head-mounted display device according to an exemplary embodiment of the present application.
Fig. 2 is a flowchart of a second control method according to an exemplary embodiment of the present application.
Fig. 3 is a block diagram of smart glasses according to an exemplary embodiment of the present application.
Fig. 4 is a block diagram of a movable platform according to an exemplary embodiment of the present application.
Fig. 5 is a block diagram of a camera according to an exemplary embodiment of the present application.
Fig. 6 is a block diagram of a gimbal according to an exemplary embodiment of the present application.
Fig. 7 is a block diagram of a control system according to an exemplary embodiment of the present application.
Fig. 8 is a block diagram of another control system according to an exemplary embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative: they need not include all elements and operations/steps, nor must the operations be performed in the order depicted. For example, some operations/steps may be decomposed, combined, or partially combined, so the actual execution order may change according to the actual situation.
Some embodiments of the present description will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
To address the problems in the related art, embodiments of the present application provide a control method that may be used in a device equipped with an acquisition device. The device may be a wearable device with a display function, such as smart glasses, a helmet, a smart bracelet, a watch, a chest strap, an arm strap, a jacket, a belt, or a protective strap. Referring to Fig. 1A, a flowchart of a control method according to an exemplary embodiment of the present application, the method includes:
in step S101, posture data is acquired from the acquisition device, compared with the designated posture data, and it is determined whether to generate a control command according to the comparison result.
In step S102, if so, the generated control instruction is sent to the movable platform; the movable platform is equipped with a camera, and the control instruction is used to adjust the field of view captured by the camera.
In an exemplary application scenario, a camera is mounted on the movable platform to capture, in real time, pictures of the scene as the platform moves and transmit them to the device. The device includes a display; it receives the pictures transmitted by the camera in real time and shows them on the display, so the user can view the first-person real-time picture collected by the camera. Meanwhile, the acquisition device on the device collects the user's attitude data in real time and compares it with the designated attitude data. When it is determined from the comparison result that a control instruction should be generated, the generated control instruction is sent to the movable platform. Control of the movable platform is thus achieved by body motion and the field of view captured by the camera is adjusted, reducing the user's operation steps: the user can see pictures captured by the camera over different fields of view on the device's display without any additional remote control device, which helps reduce hardware cost.
As an example, referring to Fig. 1B, the device may be a head-mounted display device such as smart glasses, whose display shows the pictures taken by the camera in real time, enriching the user's visual experience.
In another exemplary application scenario, the camera mounted on the movable platform shoots a designated user in real time, so that the movable platform tracks and shoots the designated user. Considering that the user may move out of the camera's field of view while moving, so that the camera can no longer capture the designated user, in this embodiment the acquisition device on the device collects the user's attitude data in real time and compares it with the designated attitude data. When it is determined from the comparison result that a control instruction should be generated, the instruction is sent to the movable platform; control of the movable platform is achieved by body motion and the camera's field of view is adjusted, reducing the user's operation steps. The adjusted camera can again capture the designated user, achieving continuous tracking, and no additional remote control device is required, which reduces hardware cost.
Of course, the embodiments of the present application do not limit the installation manner, which may be set according to the actual situation; for example, the camera may be fixedly or detachably mounted on the movable platform. The movable platform includes, but is not limited to, an unmanned aerial vehicle, an unmanned boat, a mobile robot, and the like.
In one embodiment, the acquisition device collects, in real time, attitude data of the user wearing the device. The attitude data may be at least one of a pitch angle, a yaw angle, and a roll angle, and reflects the attitude of the user's head or limbs relative to the ground. It can be understood that the present application does not limit the specific type of acquisition device, which may be configured according to the actual application scenario. For example, the acquisition device may be an inertial measurement unit comprising three single-axis accelerometers and three single-axis gyroscopes: the accelerometers detect the user's acceleration signals along three independent axes of the carrier coordinate system, and the gyroscopes detect the carrier's angular velocity signals relative to the navigation coordinate system, so the inertial measurement unit measures the user's angular velocity and acceleration in three-dimensional space and computes the user's attitude to obtain attitude data. In another example, the acquisition device may be an attitude sensor comprising motion sensors such as a three-axis gyroscope, a three-axis accelerometer, and a three-axis electronic compass.
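As a rough illustration of how an inertial measurement unit's raw signals can be fused into attitude data, the following sketch applies a complementary filter to a single axis (pitch); the function name, coordinate conventions, and blending coefficient are illustrative assumptions, not details from the application.

```python
import math

def estimate_attitude(pitch_prev, gyro_rate_deg_s, accel_xyz, dt, alpha=0.98):
    """Complementary filter for one axis: blend the gyro-integrated angle
    (accurate short-term) with the pitch implied by the accelerometer's
    gravity vector (accurate long-term)."""
    ax, ay, az = accel_xyz
    # Pitch implied by the gravity direction measured by the accelerometer.
    accel_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    # Weighted blend: mostly trust the integrated gyro, correct drift slowly.
    return alpha * (pitch_prev + gyro_rate_deg_s * dt) + (1 - alpha) * accel_pitch
```

Calling this once per sample keeps a drift-corrected pitch estimate that the device can then compare against designated attitude data.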
Illustratively, the device may acquire the attitude data of the user's head or limbs sensed by the inertial measurement unit, such as a pitch angle, a yaw angle, a roll angle, or any combination of the three. The device may pre-store a correspondence between designated attitude data and control instructions. After obtaining the attitude data, the device compares it with the designated attitude data; if the attitude data is the same as the designated attitude data, or falls within a preset range around the designated attitude data, the device generates the corresponding control instruction according to the designated attitude data and the pre-stored correspondence and sends it to the movable platform. The movable platform is equipped with a camera, and the control instruction is used to adjust the field of view captured by the camera.
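The comparison against designated attitude data and the pre-stored correspondence might be sketched as a tolerance check over a lookup table; all angle values, instruction names, and the tolerance below are invented for illustration.

```python
# Pre-stored correspondence between designated attitude data and control
# instructions (hypothetical values, in degrees).
DESIGNATED = {
    "MOVE_FORWARD": {"pitch": -20.0},  # head nodded down
    "TURN_LEFT":    {"yaw":   -30.0},  # head turned left
    "TURN_RIGHT":   {"yaw":    30.0},  # head turned right
}
TOLERANCE = 10.0  # preset range around each designated angle, degrees

def match_instruction(attitude):
    """Return the control instruction whose designated attitude the measured
    attitude falls within; None means no instruction is generated."""
    for instruction, spec in DESIGNATED.items():
        if all(abs(attitude.get(axis, 0.0) - target) <= TOLERANCE
               for axis, target in spec.items()):
            return instruction
    return None
```

For example, a measured pitch of -18 degrees falls within the preset range of the -20 degree designated value, so a movement control instruction is generated; a neutral head pose matches nothing and no instruction is sent.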
It can be understood that the embodiments of the present application do not limit the specific setting of the preset range, which may be set according to the actual situation. In addition, the movable platform may establish a connection with the device in advance to facilitate data transmission between them; the embodiments of the present application do not limit the connection manner. As an example, the movable platform and the device may connect by accessing a wireless network based on a communication standard such as WiFi, 3G, or 4G, or a combination thereof. As another example, the movable platform receives a broadcast signal or broadcast-related information from the device via a broadcast channel, thereby establishing a connection. As yet another example, the movable platform and the device may connect via Near Field Communication (NFC), which may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In another exemplary embodiment, the device may also pre-establish an instruction classification model, which determines the corresponding control instruction according to the comparison result between the attitude data and the designated attitude data; the device may input the attitude data into the instruction classification model to obtain the control instruction.
The instruction classification model can be obtained by training a deep neural network to convergence on a preset sample set. The sample set includes a number of designated attitude data items and their corresponding control instructions, so the trained model can accurately determine the corresponding control instruction from the acquired attitude data.
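To make the idea concrete without reproducing the application's deep neural network, the following stand-in sketch uses a nearest-centroid classifier over a sample set of (attitude vector, instruction) pairs; the sample vectors are hypothetical pitch/yaw/roll triples.

```python
def train_centroids(sample_set):
    """sample_set: list of (attitude_vector, instruction) pairs.
    Collapse each instruction's samples into a mean attitude vector."""
    sums, counts = {}, {}
    for vec, label in sample_set:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, vec):
    """Return the instruction whose centroid is nearest (squared distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], vec))
```

A trained network would play the same role as `classify` here: map freshly acquired attitude data to the control instruction it most resembles.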
In an embodiment, the camera is fixedly mounted on the movable platform, and the control instruction is specifically configured to control the movable platform to adjust its own movement. After receiving the control instruction, the movable platform may adjust its movement, such as changing its moving direction, according to the instruction, thereby indirectly adjusting the field of view captured by the camera.
In another embodiment, the movable platform is provided with a rotatable gimbal that supports the camera. After receiving the control instruction, the movable platform forwards it to the gimbal; the control instruction is specifically configured to control the gimbal to move from the current joint angle to a target joint angle. After receiving the control instruction, the gimbal moves from the current joint angle to the target joint angle according to the instruction, thereby indirectly adjusting the field of view captured by the camera.
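One plausible way the gimbal could move from its current joint angle to the target joint angle, using the angle sensor's feedback, is a rate-limited proportional loop; the gain, rate limit, and time step below are assumptions for illustration, not values from the application.

```python
def gimbal_step(current_deg, target_deg, max_rate_deg, dt, kp=2.0):
    """One feedback iteration: command a rotation rate proportional to the
    remaining angle error, clamped to the axis's maximum rotation rate."""
    error = target_deg - current_deg
    rate = max(-max_rate_deg, min(max_rate_deg, kp * error))
    return current_deg + rate * dt

def move_to_target(current_deg, target_deg, steps=200, dt=0.02):
    """Run the loop for a fixed number of control ticks (4 s at 50 Hz)."""
    for _ in range(steps):
        current_deg = gimbal_step(current_deg, target_deg, 90.0, dt)
    return current_deg
```

The angle sensor supplies `current_deg` each tick, and the loop converges on the target joint angle without ever exceeding the axis's rate limit.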
The movable platform and the gimbal may be connected in a wired or wireless manner; the embodiments of the present application place no limitation on this, and the connection may be set according to the actual application scenario.
In this embodiment, the device collects the user's attitude data in real time through the acquisition device and, based on the attitude data, determines whether to generate a control instruction for adjusting the field of view captured by the camera. The movable platform or the gimbal is controlled by body motion without additional manual operation, which reduces the user's operation steps and improves the user experience; no dedicated remote control device needs to be designed, which saves hardware cost.
It should be noted that, since movable platforms differ in type and the control instructions related to their movement also differ, the embodiments of the present application do not limit the specific instruction types of the control instructions; these may be set according to the actual application scenario.
For example, the control instruction may include at least one of a movement control instruction for controlling the movable platform to move and a direction adjustment instruction for adjusting the moving direction of the movable platform. After acquiring attitude data such as a pitch angle, a yaw angle, a roll angle, or any combination of the three from the acquisition device, the device determines the movement control instruction or the direction adjustment instruction accordingly.
As one implementation, the device compares the attitude data with the pre-stored designated attitude data, determines a movement control instruction or a direction adjustment instruction according to the comparison result and the pre-stored correspondence, and sends the generated instruction to the movable platform, so that the movable platform moves according to the movement control instruction or adjusts its moving direction according to the direction adjustment instruction. In this embodiment, the movement of the movable platform is controlled according to the user's attitude data, reducing the user's operation steps and improving the user experience.
In an embodiment, when the user's head or limbs perform different motions, the parameters in the attitude data collected by the acquisition device change differently. The embodiments of the present application place no limitation on the control instructions generated from the attitude data; the correspondence between the designated attitude data for different motions and different control instructions can be set according to the actual application scenario.
For example, when the user nods or tilts the head up, the acquisition device collects pitch angle data with a large variation, while other angle data may not change or change only slightly; the device may pre-store the correspondence between the designated attitude data for nodding or tilting up and a control instruction (such as a movement control instruction). When the user shakes or turns the head left and right, the acquisition device collects yaw angle data or roll angle data with a large variation, while other angle data may not change or change only slightly; the device may pre-store the correspondence between the designated attitude data for shaking the head and a control instruction (such as a direction adjustment instruction). After collecting the user's attitude data, the device compares it with the designated attitude data to determine whether to generate the corresponding control instruction, thereby controlling the movable platform or the gimbal.
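A minimal sketch of this motion-to-instruction mapping might look at which angle swings most over a recent window of samples; the axis names, the 15-degree noise threshold, and the instruction labels are illustrative assumptions.

```python
def detect_motion(angle_history):
    """angle_history: dict mapping axis name ('pitch'/'yaw'/'roll') to a
    list of recent angle samples in degrees. Find the axis with the largest
    swing and map it to an instruction family."""
    swings = {axis: max(vals) - min(vals) for axis, vals in angle_history.items()}
    axis = max(swings, key=swings.get)
    if swings[axis] < 15.0:           # swing too small: treat as noise
        return None
    if axis == "pitch":               # nod / tilt up -> movement control
        return "MOVEMENT_CONTROL"
    return "DIRECTION_ADJUSTMENT"     # yaw or roll swing -> direction adjustment
```

A pronounced pitch swing (a nod) thus yields a movement control instruction, a pronounced yaw or roll swing (a head shake) yields a direction adjustment instruction, and small wobbles yield nothing.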
It should be noted that the above is only an example and does not limit the embodiments of the present application; correspondences between designated attitude data and control instructions for other actions, such as squatting and walking, may also be set.
In another exemplary application scenario, the camera mounted on the movable platform captures an image of a designated user in real time, and the movable platform performs tracking shooting of the designated user. Further, based on actual application requirements, the camera may capture a part of the designated user, such as the face, a hand, the upper body, or the lower body.
Based on this, as one implementation, the device may also send the attitude data acquired from the acquisition device to the movable platform. The movable platform acquires the user image captured by the camera and, after receiving the attitude data, generates the control instruction according to the attitude data and the user image, so as to adjust the field of view captured by the camera according to the control instruction.
The movable platform may be connected with the camera in advance to facilitate data transmission between them; the embodiments of the present application do not limit the connection manner. For example, the movable platform and the camera may connect by accessing a wireless network based on a communication standard such as WiFi, 3G, or 4G, or a combination thereof.
In an embodiment, the camera is fixedly mounted on the movable platform, and the control instruction is specifically configured to control the movable platform to adjust its own movement. After receiving the control instruction, the movable platform may adjust its movement, such as its moving direction, according to the instruction, thereby indirectly adjusting the field of view captured by the camera and accurately tracking the user.
In another embodiment, the movable platform is provided with a rotatable gimbal that supports the camera. After generating the control instruction, the movable platform sends it to the gimbal; the control instruction is specifically configured to control the gimbal to move from the current joint angle to a target joint angle. After receiving the control instruction, the gimbal moves from the current joint angle to the target joint angle according to the instruction, thereby indirectly adjusting the field of view captured by the camera and accurately tracking the user.
As another implementation, the device may acquire the attitude data from the acquisition device, acquire the user image captured by the camera, generate the control instruction according to the attitude data and the user image, and send the control instruction to the movable platform, so that the movable platform can adjust the field of view captured by the camera according to the control instruction.
Referring to fig. 2, a flowchart of a second control method according to an exemplary embodiment of the present application is shown, where the method includes:
in step S201, posture data is acquired from the acquisition device, compared with the designated posture data, and it is determined whether to generate a control command according to the comparison result. Similar to step S101, the description is omitted here.
In step S202, if so, the generated control instruction is sent to the movable platform; the movable platform is equipped with a camera, and the control instruction is used to adjust the field of view captured by the camera. This step is similar to step S102 and is not described again here.
In step S203, a user image captured by the camera is acquired; the camera is used for shooting a designated user.
In step S204, the control instruction sent to the movable platform is generated according to the user gesture recognized from the user image.
In an embodiment, the movable platform may track a designated user in real time through the camera: the camera shoots the designated user in real time and transmits the captured user image to the device. The device acquires the user image, performs image recognition on it to recognize a user gesture, and generates the control instruction sent to the movable platform according to the gesture recognized from the user image, thereby adjusting the field of view captured by the camera. In this embodiment, no additional remote control device or manual operation is needed; the movable platform can be controlled by recognizing the user's gestures, which reduces the user's operation steps, improves the user experience, and saves hardware cost.
In an embodiment, the camera is fixedly mounted on the movable platform, and the control instruction is specifically configured to control the movable platform to adjust its own movement. After receiving the control instruction, the movable platform may adjust its movement, such as changing its moving direction, according to the instruction, thereby indirectly adjusting the field of view captured by the camera.
In another embodiment, the movable platform is provided with a rotatable gimbal that supports the camera. After receiving the control instruction, the movable platform forwards it to the gimbal; the control instruction is specifically configured to control the gimbal to move from the current joint angle to a target joint angle. After receiving the control instruction, the gimbal moves from the current joint angle to the target joint angle according to the instruction, thereby indirectly adjusting the field of view captured by the camera.
As one implementation, the device may pre-store a correspondence between designated gestures and control instructions, where different designated gestures correspond to different control instructions. After acquiring the user image captured by the camera, the device recognizes the user gesture from the image, compares it with the designated gestures, generates the corresponding control instruction according to the comparison result and the correspondence, and controls the movement of the movable platform or the attitude of the gimbal through the instruction. As an example, the control instruction includes at least one of a movement control instruction for controlling the movable platform to move, a direction adjustment instruction for adjusting the moving direction of the movable platform, and an attitude adjustment instruction for controlling the gimbal to move from the current joint angle to a target joint angle.
It can be understood that the embodiments of the present application place no limitation on the number of designated users or the number of user gestures acquired from the user image: there may be one designated user, or two or more; and one user gesture, or two or more.
Specifically, after acquiring the user image, the device performs hand detection over the full image and outputs the specific position of the hand, for example as a target frame. The target frame may be expressed as the coordinates of its top-left and bottom-right corners on the captured image, or as the coordinates of its center on the captured image together with its width and height.
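The two target-frame representations above are interchangeable; a small sketch of the conversion (the pixel coordinate convention is an assumption added for illustration):

```python
def corners_to_center(x1, y1, x2, y2):
    """Convert a target frame given by its top-left (x1, y1) and
    bottom-right (x2, y2) corners into (cx, cy, width, height)."""
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0, x2 - x1, y2 - y1)

def center_to_corners(cx, cy, w, h):
    """Inverse conversion: center plus width/height back to the two corners."""
    return (cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0)
```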
Then, the device crops the picture of the hand region from the user image according to the hand position, and inputs it into a gesture-detection neural network to obtain the user gesture in the picture. The gesture-detection neural network can be obtained by training a deep neural network to convergence on a gesture sample set, which includes a plurality of hand pictures containing different gestures; the trained network can therefore detect the specific gesture in the hand-region picture more accurately.
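The cropping step that precedes the gesture-detection network can be sketched as follows (the image is modeled as nested lists of pixels purely for illustration; a real implementation would use an image library):

```python
def crop_hand_region(image, box):
    """Crop the hand-region picture from a user image, where `image` is a
    list of pixel rows and `box` is a corner-style target frame
    (x1, y1, x2, y2). A stand-in for the cropping step before the
    gesture-detection neural network."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]
```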
In another embodiment, in order to further shorten the delay so that the movable platform can respond in real time to a control instruction indicated by a user gesture, the device may itself include one or more cameras. The target user extends a hand into the shooting range of the device's camera, which captures the user's hand image in real time; the device then generates the control instruction sent to the movable platform according to the user gesture recognized from the hand image, so as to control the movable platform to adjust its movement mode or the pan-tilt head to adjust its attitude. In this embodiment, the hand image of the target user is captured by the device's own camera and the control instruction is generated from the recognized gesture, which shortens the delay, improves the response speed, reduces the user's operation steps, and improves the user experience.
In an embodiment, the device further includes at least one input device, such as a five-dimensional key, a virtual key, or a touch screen. The user can also control the movable platform or the pan-tilt head through the input device, thereby adjusting the field of view of the camera without arranging additional equipment such as a remote controller, which helps reduce hardware cost.
In an exemplary embodiment, the device may obtain moving-speed indication information input by the user on the input device, where the moving-speed indication information is used to indicate a target speed of the movable platform. The device then generates a speed adjustment instruction according to the moving-speed indication information, and after obtaining the speed adjustment instruction, the movable platform adjusts from its current speed to the target speed. Because the speed control function is integrated on the device, no additional equipment such as a remote controller is required, which helps reduce hardware cost.
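The speed-adjustment flow could look like the following sketch (the clamping limit and the instruction format are assumptions added for safety and illustration, not part of the embodiment):

```python
def make_speed_adjustment(current_speed, indicated_target_speed, max_speed=15.0):
    """Build a speed-adjustment instruction from the user's moving-speed
    indication information. Clamping to [0, max_speed] is an added
    safety assumption."""
    target = max(0.0, min(indicated_target_speed, max_speed))
    return {"type": "speed_adjust", "from": current_speed, "to": target}
```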
As an example, the movable platform may be an unmanned aerial vehicle. Before the unmanned aerial vehicle takes off, the user may input initial speed indication information through the input device, where the initial speed indication information is used to indicate an initial speed of the unmanned aerial vehicle. The device generates a speed setting instruction sent to the unmanned aerial vehicle according to the initial speed indication information, and after receiving the speed setting instruction, the unmanned aerial vehicle flies at the initial speed indicated by it.
In an exemplary embodiment, the movable platform may be an unmanned aerial vehicle. The device may further obtain flight altitude indication information input by the user on the input device, where the flight altitude indication information is used to indicate a target altitude of the unmanned aerial vehicle. The device then generates an altitude adjustment instruction according to the flight altitude indication information and sends it to the unmanned aerial vehicle, so that, after receiving the altitude adjustment instruction, the unmanned aerial vehicle adjusts from the current altitude to the target altitude. Because the altitude control function is integrated on the device, no additional equipment such as a remote controller is required, which helps reduce hardware cost.
In an exemplary embodiment, the movable platform may be an unmanned aerial vehicle, and the device further includes a hover control (which may be a virtual control or a physical button). The user may control the unmanned aerial vehicle to hover (i.e., stay at a certain position in the air) through the hover control according to actual needs. If the device detects that the hover control is triggered, it generates a hover instruction sent to the unmanned aerial vehicle, where the hover instruction is used to control the unmanned aerial vehicle to stay in place; after receiving the hover instruction, the unmanned aerial vehicle stays in place accordingly. Because the hover control function is integrated on the device, no additional equipment such as a remote controller is required, which helps reduce hardware cost.
In an embodiment, the device may be a wearable device with a display function, such as smart glasses, a helmet, a smart bracelet, a watch, a chest strap, an arm strap, a jacket, a belt, or a protective strap. Taking smart glasses 300 as an example, please refer to fig. 3, which is a structural diagram of smart glasses 300 according to an exemplary embodiment of the present application; the smart glasses 300 include:
a processor 301.
A memory 302 for storing instructions executable by the processor 301.
A collecting means 303 for collecting posture data of the user in real time.
A wireless communication device 304.
Wherein the processor 301 invokes the executable instructions, and when executed, performs the following:
acquiring attitude data from the acquisition device 303, comparing the attitude data with the designated attitude data, and determining whether to generate a control instruction according to a comparison result;
if yes, generating a control instruction;
the wireless communication device 304 is configured to send the generated control instruction to a movable platform; the movable platform is provided with a camera, and the control instruction is used for adjusting the visual field range shot by the camera.
The processor 301 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 302 stores a computer program of executable instructions for the control method. The memory 302 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. The smart glasses 300 may also cooperate, through a network connection, with a network storage device that performs the storage function of the memory 302. The memory 302 may be an internal storage unit of the smart glasses 300, such as a hard disk or internal memory of the smart glasses 300, or an external storage device of the smart glasses 300, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the smart glasses 300. Further, the memory 302 may include both an internal storage unit and an external storage device of the smart glasses 300. The memory 302 is used to store the computer program as well as other programs and data required by the smart glasses 300, and may also be used to temporarily store data that has been output or is to be output.
The acquisition device 303 may be an Inertial Measurement Unit (IMU), an attitude sensor, an acceleration sensor, a gyroscope, or other devices for acquiring attitude data.
The wireless communication device 304 enables communication between the smart glasses 300 and the camera and the movable platform. This embodiment does not impose any limitation on the specific communication manner: the smart glasses 300 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the wireless communication device 304 receives a broadcast signal or broadcast-associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the wireless communication device 304 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
The various embodiments described herein may be implemented using a computer-readable medium, such as computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field-Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein. For a software implementation, a procedure or a function may be implemented with a separate software module that performs at least one function or operation. The software code may be implemented by a software application (or program) written in any suitable programming language, stored in the memory 302, and executed by the processor 301.
In an embodiment, the control instruction is specifically configured to: and controlling the movable platform to adjust the self-moving mode.
In one embodiment, the movable platform is provided with a pan-tilt head for supporting the camera.
After receiving the control instruction, the movable platform sends the control instruction to the pan-tilt head, where the control instruction is specifically used for: controlling the pan-tilt head to move from the current joint angle to the target joint angle.
In an embodiment, the pose data comprises at least one of: a pitch angle, a yaw angle, and a roll angle.
In one embodiment, the acquisition device 303 comprises an inertial measurement unit.
The inertial measurement unit is used for sensing the pitch angle and/or yaw angle and/or roll angle of the head or limb of the user.
The processor 301 is further configured to obtain the pitch angle and/or yaw angle and/or roll angle.
In one embodiment, the control instruction includes at least one of: a movement control instruction and a direction adjustment instruction.
The movement control instruction is generated according to the pitch angle, the yaw angle and/or the roll angle, and is used for controlling the movable platform to move.
The direction adjustment instruction is generated according to the pitch angle, the yaw angle and/or the roll angle, and is used for adjusting the moving direction of the movable platform.
In one embodiment, the device further comprises a display; the display is used for displaying, in real time, the pictures transmitted by the camera.
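A hedged sketch of how movement and direction adjustment instructions might be derived from the sensed pitch/yaw/roll angles (the dead zone and the specific angle-to-command mapping are illustrative assumptions, not the claimed generation rule):

```python
def commands_from_angles(pitch, yaw, roll, dead_zone_deg=10.0):
    """Map head/limb angles sensed by the inertial measurement unit to
    movement and direction commands. The dead zone suppresses small,
    unintentional movements (an added assumption)."""
    cmds = []
    if abs(pitch) > dead_zone_deg:  # nod forward/back -> move
        cmds.append(("movement", "forward" if pitch > 0 else "backward"))
    if abs(yaw) > dead_zone_deg:    # turn head -> change direction
        cmds.append(("direction", "right" if yaw > 0 else "left"))
    if abs(roll) > dead_zone_deg:   # tilt head -> sideways movement
        cmds.append(("movement", "strafe_right" if roll > 0 else "strafe_left"))
    return cmds
```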
In an embodiment, an input device is also included.
The processor 301 is further configured to: acquiring moving speed indication information input by a user on the input equipment, wherein the moving speed indication information is used for indicating the target speed of the movable platform; and generating a speed adjusting instruction sent to the movable platform according to the moving speed indication information so as to adjust the movable platform from the current speed to the target speed.
In one embodiment, the input device comprises at least one of: five-dimensional keys, virtual keys, or a touch screen.
In one embodiment, the movable platform includes an unmanned aerial vehicle, an unmanned ship, an unmanned vehicle, and a mobile robot.
In one embodiment, the movable platform is an unmanned aerial vehicle.
The processor 301 is further configured to: acquiring flight altitude indicating information input by a user on the input equipment, wherein the moving speed indicating information is used for indicating the target speed altitude of the unmanned aerial vehicle; and generating an altitude adjusting instruction sent to the unmanned aerial vehicle according to the flight altitude indicating information so as to adjust the unmanned aerial vehicle from the current speed altitude to the target altitude.
In an embodiment, a hover control is also included.
The processor 301 is further configured to: if it is detected that the hover control is triggered, generate a hover instruction sent to the unmanned aerial vehicle, so as to control the unmanned aerial vehicle to stay in place.
In one embodiment, the camera is used for shooting a specified user.
The processor 301 is further configured to: acquiring a user image shot by the camera shooting device; and generating the control instruction sent to the movable platform according to the user gesture recognized from the user image.
In one embodiment, the camera is used for shooting a specified user.
The processor 301 is further configured to: sending the pose data to the movable platform;
the movable platform is used for: and generating the control instruction according to the attitude data and the user image shot by the camera.
In one embodiment, the camera is used for shooting a specified user.
The processor 301 is further configured to: acquiring a user image shot by the camera shooting device; and generating the control instruction according to the attitude data and the user image shot by the camera.
Accordingly, referring to fig. 4, a block diagram of a movable platform 400 according to an exemplary embodiment of the present application is shown, where the movable platform 400 includes:
a body 401.
And a power system 402 installed in the body 401 for providing power to the movable platform 400.
A wireless communication system 403, installed in the main body 401, for receiving the control instruction transmitted from the smart glasses 300; and
and the control system 404 is installed in the body 401 and is used for adjusting the field of view shot by the camera according to the control instruction.
Those skilled in the art will appreciate that fig. 4 is merely an example of a movable platform 400 and does not constitute a limitation on the movable platform 400 and may include more or fewer components than shown, or some components in combination, or different components, e.g., the movable platform 400 may also include input-output devices, network access devices, etc.
By way of example, the movable platform 400 includes: unmanned aerial vehicles, unmanned boats, and mobile robots.
In an embodiment, the control system is specifically configured to: and controlling the power system according to the control instruction so as to control the movable platform to adjust the self-moving mode.
In an embodiment, the control system is specifically configured to: and sending the control command to a holder so that the holder moves from the current joint angle to the target joint angle according to the control command.
In one embodiment, the camera is used for shooting a specified user.
The wireless communication system is also used for receiving the attitude data sent by the intelligent glasses.
The control system is further configured to generate the control instruction according to the attitude data and the user image captured by the camera.
In an exemplary embodiment, the movable platform may be an unmanned aerial vehicle, and the unmanned aerial vehicle further comprises at least one of the following measurement devices: a TOF image sensor and a GPS device. The control system is also used for controlling the power system based on the measurement result of at least one of the measurement devices, so as to maintain the flight altitude of the unmanned aerial vehicle and thereby achieve automatic altitude following.
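A minimal proportional altitude-hold sketch based on a height measurement such as TOF (the gain, base thrust, and thrust range are illustrative assumptions; a real controller would be more elaborate):

```python
def altitude_hold_thrust(measured_height, target_height, base_thrust=0.5, kp=0.05):
    """One proportional control step: compute a normalized thrust command
    from the height error so the vehicle maintains its flight altitude.
    The output is clamped to a plausible [0, 1] thrust range."""
    error = target_height - measured_height
    return max(0.0, min(1.0, base_thrust + kp * error))
```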
In an exemplary embodiment, the control system is further configured to determine whether an obstacle exists in a current flight direction according to a shooting result of the camera, and if so, control the power system to adjust the flight direction or stay in place, so as to avoid damage to the unmanned aerial vehicle.
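The obstacle-handling decision described above, as a sketch (whether a detour direction is available is an added assumption; the embodiment only names the two outcomes):

```python
def obstacle_action(obstacle_detected, can_detour):
    """Decide how the power system should react to the camera's shooting
    result: adjust the flight direction when a detour is available,
    otherwise stay in place to avoid damaging the vehicle."""
    if not obstacle_detected:
        return "continue"
    return "adjust_direction" if can_detour else "hover_in_place"
```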
Accordingly, referring to fig. 5, a structural diagram of a camera 500 according to an exemplary embodiment of the present application is shown, where the camera 500 includes:
a housing 501.
A lens assembly 502 disposed inside the housing 501.
A sensor assembly 503 disposed inside the housing 501 for sensing light passing through the lens assembly 502 and generating an electrical signal.
A processor 504, disposed inside the housing 501, is used for processing the electrical signals to form an image.
A wireless communication component 505, disposed inside the housing 501, for transmitting the image to the smart glasses 300.
Those skilled in the art will appreciate that fig. 5 is merely an example of a camera 500 and does not constitute a limitation on the camera 500, which may include more or fewer components than shown, or combine some components, or use different components; for example, the camera 500 may also include input/output devices, network access devices, etc.
Correspondingly, please refer to fig. 6, which is a structural diagram of a pan/tilt head 600 according to an exemplary embodiment of the present application, wherein the pan/tilt head 600 includes:
a pan and tilt head shaft 601.
And an angle sensor 602 for acquiring the angle information of the holder shaft.
A communication device 603 for receiving the control command sent by the movable platform 400.
And a processor 604, configured to control the pan/tilt head axis to rotate according to the control instruction and the angle information, so as to move from the current joint angle to the target joint angle.
In an embodiment, the processor 604 receives the control instruction, controls the pan/tilt axis to rotate according to the control instruction, and determines whether the pan/tilt axis moves to a target joint angle according to the angle information collected in real time.
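The closed-loop movement toward the target joint angle, using the angle information collected in real time, can be sketched as follows (the per-tick step size, the tolerance, and the tick limit are illustrative assumptions):

```python
def step_gimbal_axis(current_deg, target_deg, max_step_deg=2.0, tol_deg=0.5):
    """One control tick: rotate the pan-tilt shaft toward the target joint
    angle, limited to a per-tick step; returns (new_angle, reached)."""
    error = target_deg - current_deg
    if abs(error) <= tol_deg:
        return current_deg, True
    step = max(-max_step_deg, min(max_step_deg, error))
    return current_deg + step, False

def move_to_target(current_deg, target_deg, max_ticks=1000):
    """Iterate control ticks, checking the real-time angle reading each
    tick, until the target joint angle is reached (or ticks run out)."""
    for _ in range(max_ticks):
        current_deg, reached = step_gimbal_axis(current_deg, target_deg)
        if reached:
            return current_deg
    return current_deg
```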
Those skilled in the art will appreciate that fig. 6 is merely an example of a pan-tilt head 600 and does not constitute a limitation on the pan-tilt head 600, which may include more or fewer components than shown, or combine some components, or use different components; for example, the pan-tilt head 600 may also include input/output devices, network access devices, etc.
The pan-tilt head and the movable platform may be connected in a wired manner or a wireless manner; this embodiment does not impose any limitation on the connection manner, which may be configured according to actual conditions.
Accordingly, please refer to fig. 7, which is a block diagram of a control system according to an exemplary embodiment of the present application, wherein the control system includes the smart glasses 300, the movable platform 400, and the camera 500; the camera 500 is mounted on the movable platform 400.
In an exemplary application scenario, the movable platform 400 moves under the control of the user: a control instruction is generated based on the comparison between the posture data collected by the smart glasses 300 and the designated posture data, and the movable platform 400 adjusts its movement mode based on the control instruction, thereby indirectly adjusting the field of view captured by the camera 500. The camera 500 mounted on the movable platform 400 captures pictures along the way in real time and transmits them to the smart glasses 300, and the display on the smart glasses 300 shows the captured pictures in real time.
In another exemplary application scenario, the camera 500 is configured to photograph a designated user and send the captured user image to the smart glasses 300. During tracking shooting, the smart glasses 300 may generate the control instruction sent to the movable platform according to the collected posture data and the user image, and control the movable platform 400 to adjust its movement mode based on the control instruction, thereby indirectly adjusting the field of view captured by the camera 500 and achieving accurate tracking shooting.
Referring to fig. 8, which is a structural diagram of another control system according to an exemplary embodiment of the present application, the control system further includes the pan-tilt head 600, which is used for supporting the camera 500 and is connected to the movable platform 400; that is, the camera 500 is mounted on the movable platform 400 through the pan-tilt head 600.
In an exemplary application scenario, a control instruction is generated based on the comparison between the posture data collected by the smart glasses 300 and the designated posture data, and is sent to the movable platform 400, which forwards it to the pan-tilt head 600; the pan-tilt head 600 is thereby controlled to move from the current joint angle to the target joint angle, indirectly adjusting the field of view captured by the camera 500. The camera 500 mounted on the movable platform 400 captures pictures along the way in real time and transmits them to the smart glasses 300, and the display on the smart glasses 300 shows the captured pictures in real time.
In another exemplary application scenario, the camera 500 is configured to photograph a designated user and send the captured user image to the smart glasses 300. During tracking shooting, the smart glasses 300 may generate the control instruction sent to the movable platform according to the collected posture data and the user image; the movable platform 400 forwards the control instruction to the pan-tilt head 600, which is controlled to move from the current joint angle to the target joint angle, thereby indirectly adjusting the field of view captured by the camera 500 and achieving accurate tracking shooting.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as a memory comprising instructions, which stores a computer program for performing the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Wherein the instructions in the storage medium, when executed by the smart glasses 300, enable the apparatus to perform the aforementioned control method.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The method and apparatus provided by the embodiments of the present invention are described in detail above, and the principle and the embodiments of the present invention are explained in detail herein by using specific examples, and the description of the embodiments is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (22)

  1. A control method, applied to a device equipped with a collection device for collecting posture data of a user in real time, the method comprising:
    acquiring attitude data from the acquisition device, comparing the attitude data with the designated attitude data, and determining whether to generate a control instruction according to a comparison result;
    if so, sending the generated control command to a movable platform; the movable platform is provided with a camera, and the control instruction is used for adjusting the visual field range shot by the camera.
  2. The method according to claim 1, wherein the control instructions are specifically configured to: and controlling the movable platform to adjust the self-moving mode.
  3. The method of claim 1, wherein the movable platform is equipped with a pan-tilt for supporting the camera;
    after receiving the control instruction, the movable platform sends the control instruction to the holder, where the control instruction is specifically used for: and controlling the holder to move from the current joint angle to the target joint angle.
  4. The method of claim 2, wherein the pose data comprises at least one of: pitch angle, yaw angle, and roll angle.
  5. The method of claim 4, wherein the acquisition device comprises an inertial measurement unit;
    the acquiring pose data from the acquisition device comprises:
    acquiring the pitch angle, yaw angle and/or roll angle of the head or the limb of the user sensed by the inertial measurement unit.
  6. The method of claim 4, wherein the control instructions comprise at least one of: a movement control instruction and a direction adjustment instruction;
    the movement control instruction is generated according to the pitch angle, the yaw angle and/or the roll angle and is used for controlling the movable platform to move;
    and the direction adjusting instruction is generated according to the pitch angle, the yaw angle and/or the roll angle and is used for adjusting the moving direction of the movable platform.
  7. The method of claim 1, wherein the device further comprises an input device;
    the method further comprises:
    acquiring moving speed indication information input by a user on the input equipment, wherein the moving speed indication information is used for indicating the target speed of the movable platform;
    and generating a speed adjusting instruction sent to the movable platform according to the moving speed indication information so as to adjust the movable platform from the current speed to the target speed.
  8. The method of claim 7, wherein the input device comprises at least one of:
    five-dimensional keys, virtual keys, or a touch screen.
  9. The method of claim 7, wherein the movable platform comprises an unmanned aerial vehicle, an unmanned boat, and a mobile robot.
  10. The method of claim 7, wherein the movable platform is an unmanned aerial vehicle;
    the method further comprises the following steps:
    acquiring flight altitude indication information input by a user on the input device, wherein the flight altitude indication information is used for indicating the target altitude of the unmanned aerial vehicle;
    and generating an altitude adjusting instruction sent to the unmanned aerial vehicle according to the flight altitude indicating information so as to adjust the unmanned aerial vehicle from the current altitude to the target altitude.
  11. The method of claim 10, wherein the device further comprises a hover control;
    the method further comprises:
    and if the hovering control is detected to be triggered, generating a hovering instruction sent to the unmanned aerial vehicle so as to control the unmanned aerial vehicle to stay in place.
  12. The method of claim 1, wherein the device is a wearable device, the device further comprising a display;
    the method further comprises the following steps:
    and receiving the picture transmitted by the camera in real time and displaying the picture through the display.
  13. The method of claim 1, wherein the camera is used to photograph a designated user;
    the method further comprises the following steps:
    acquiring a user image shot by the camera;
    and generating the control instruction sent to the movable platform according to the user gesture recognized from the user image.
  14. The method of any one of claims 1 to 13, wherein the device is smart glasses.
  15. A smart eyewear, comprising:
    a processor;
    a memory for storing processor-executable instructions;
    the acquisition device is used for acquiring the posture data of the user in real time;
    a wireless communication device;
    wherein the processor invokes the executable instructions, which when executed, are operable to perform the control method of any one of claims 1 to 14.
  16. A movable platform, wherein the movable platform is provided with a camera, the movable platform comprising:
    a body;
    the power system is arranged in the machine body and used for providing power for the movable platform;
    a wireless communication system installed in the body for receiving the control instruction transmitted from the smart glasses according to claim 15; and
    and the control system is arranged in the machine body and used for adjusting the visual field range shot by the camera according to the control instruction.
  17. The movable platform of claim 16, wherein the movable platform comprises: unmanned aerial vehicles, unmanned boats, and mobile robots.
  18. The movable platform according to claim 16, wherein the control system is specifically configured to: control the power system according to the control instruction, so as to control the movable platform to adjust its own movement mode.
    The movable platform according to claim 16, wherein the control system is specifically configured to: send the control instruction to a pan-tilt head, so that the pan-tilt head moves from the current joint angle to the target joint angle according to the control instruction.
  19. A gimbal, comprising:
    a gimbal shaft;
    an angle sensor for acquiring angle information of the gimbal shaft;
    a communication device for receiving a control instruction sent by the movable platform of any one of claims 16 to 18; and
    a processor for controlling rotation of the gimbal shaft according to the control instruction and the angle information, so as to move from a current joint angle to a target joint angle.
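Claim 19 has the gimbal processor drive the shaft from its current joint angle to the target joint angle using the angle-sensor feedback. One common way to realize this is a rate-limited step toward the target each control tick; the sketch below assumes that approach and an illustrative 2-degree-per-tick limit, neither of which is specified by the patent.

```python
# Hypothetical rate-limited gimbal control tick: move the shaft toward
# the target joint angle by at most max_step_deg per tick. The step
# limit and tick structure are illustrative assumptions.
def step_toward_target(current_deg, target_deg, max_step_deg=2.0):
    """One control tick: return the new shaft angle after moving at
    most max_step_deg toward target_deg."""
    error = target_deg - current_deg
    step = max(-max_step_deg, min(max_step_deg, error))
    return current_deg + step

# Simulate a move from 0 deg (current joint angle) to 30 deg (target).
angle = 0.0
for _ in range(20):
    angle = step_toward_target(angle, 30.0)
```

Because the step is clamped to the remaining error, the shaft settles exactly at the target and never overshoots.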
  20. A control system comprising smart glasses according to claim 15, a movable platform according to any one of claims 16 to 18, and a camera; the camera is mounted on the movable platform.
  21. The control system of claim 20, further comprising the gimbal according to claim 19, wherein the gimbal is coupled to the movable platform and is configured to support the camera.
  22. A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 14.
CN201980093254.3A 2019-12-23 2019-12-23 Control method, intelligent glasses, movable platform, holder, control system and computer-readable storage medium Pending CN113508351A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/127562 WO2021127888A1 (en) 2019-12-23 2019-12-23 Control method, smart glasses, mobile platform, gimbal, control system, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN113508351A (en) 2021-10-15

Family

ID=76572872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980093254.3A Pending CN113508351A (en) 2019-12-23 2019-12-23 Control method, intelligent glasses, movable platform, holder, control system and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN113508351A (en)
WO (1) WO2021127888A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114157806A (en) * 2021-11-23 2022-03-08 深圳市商汤科技有限公司 Holder control method and device, holder and medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113608623B (en) * 2021-08-27 2023-09-26 江西佳铭特实业有限公司 Vehicle VR equipment based on face recognition
CN114137995A (en) * 2021-11-24 2022-03-04 广东电网有限责任公司 Unmanned aerial vehicle control system and control method thereof
CN114879557A (en) * 2022-05-07 2022-08-09 中国人民解放军东部战区总医院 Control method, system, equipment and storage medium for unmanned equipment cluster
CN115190237B (en) * 2022-06-20 2023-12-15 亮风台(上海)信息科技有限公司 Method and device for determining rotation angle information of bearing device
CN115866388B (en) * 2022-11-24 2023-06-30 广州新城建筑设计院有限公司 Intelligent glasses shooting control method and device, storage medium and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104486543A (en) * 2014-12-09 2015-04-01 北京时代沃林科技发展有限公司 Equipment and method for controlling cloud deck camera by intelligent terminal in touch manner
CN106843456A (en) * 2016-08-16 2017-06-13 深圳超多维光电子有限公司 A kind of display methods, device and virtual reality device followed the trail of based on attitude
CN108363415A (en) * 2018-03-29 2018-08-03 燕山大学 A kind of vision remote control servomechanism and method applied to underwater robot
US10303187B1 (en) * 2014-03-21 2019-05-28 Amazon Technologies, Inc. Dynamic inventory management user interface



Also Published As

Publication number Publication date
WO2021127888A1 (en) 2021-07-01

Similar Documents

Publication Publication Date Title
US11385645B2 (en) Remote control method and terminal
CN113508351A (en) Control method, intelligent glasses, movable platform, holder, control system and computer-readable storage medium
US11914370B2 (en) System and method for providing easy-to-use release and auto-positioning for drone applications
US20220374010A1 (en) User Interaction Paradigms For A Flying Digital Assistant
US11632497B2 (en) Systems and methods for controlling an image captured by an imaging device
US10551834B2 (en) Method and electronic device for controlling unmanned aerial vehicle
US9977434B2 (en) Automatic tracking mode for controlling an unmanned aerial vehicle
CN110494360B (en) System and method for providing autonomous photography and photography
US20190243357A1 (en) Wearable uav control device and uav system
KR20180068411A (en) Controlling method for operation of unmanned vehicle and electronic device supporting the same
KR20180064253A (en) Flight controlling method and electronic device supporting the same
WO2015013979A1 (en) Remote control method and terminal
US20210112194A1 (en) Method and device for taking group photo
US20200389593A1 (en) Gimbal servo control method and control device
WO2019051832A1 (en) Movable object control method, device and system
US20220350330A1 (en) Remote control method and terminal
US20210304624A1 (en) Method for setting target flight path of aircraft, target flight path setting system, and program for setting target flight path
CN204287973U (en) flight camera
KR20200020295A (en) AUGMENTED REALITY SERVICE PROVIDING APPARATUS INTERACTING WITH ROBOT and METHOD OF THEREOF
JP6560479B1 (en) Unmanned aircraft control system, unmanned aircraft control method, and program
KR20180025416A (en) Drone flying control system and method using motion recognition and virtual reality
WO2022082440A1 (en) Method, apparatus and system for determining target following strategy, and device and storage medium
KR20180060403A (en) Control apparatus for drone based on image
KR101599149B1 (en) An imaging device with automatic tracing for the object
CN113273174A (en) Method, device, system, equipment and storage medium for determining target to be followed

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination