CN116945176B - Semi-automatic control method and system for photographic robot - Google Patents


Info

Publication number
CN116945176B
Authority
CN
China
Prior art keywords: signal, robot, module, node, instruction
Prior art date
Legal status: Active
Application number
CN202310964915.3A
Other languages
Chinese (zh)
Other versions
CN116945176A (en)
Inventor
高子龙
曾诚
陈宏枢
王羚睿
Current Assignee: Chongqing Yueqian Innovation Technology Co., Ltd.
Original Assignee
Chongqing Yueqian Innovation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Yueqian Innovation Technology Co., Ltd.
Priority to CN202310964915.3A
Publication of CN116945176A
Application granted
Publication of CN116945176B
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/17Mechanical parametric or variational design
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00Details relating to CAD techniques
    • G06F2111/04Constraint-based CAD
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00Details relating to CAD techniques
    • G06F2111/10Numerical modelling


Abstract

The invention relates to the technical field of camera movement, and in particular to a semi-automatic control method and a semi-automatic control system for a photographic robot, wherein the method comprises the following steps: S101, acquiring a first input signal combination; S102, switching or maintaining the remote control module in a corresponding functional state according to the first input signal combination; S103, when the remote control module is in a first functional state, acquiring a second input signal combination associated with the robot module; S104, converting the second input signal combination into a first motion instruction; S105, splitting the first motion instruction into a first instruction and a second instruction, and sending them respectively to a robot unit and a camera unit in the robot module. The semi-automatic control method provided by the invention offers a plurality of flexibly switched functional states, allowing users to address different camera-movement requirements in a real on-set filming environment.

Description

Semi-automatic control method and system for photographic robot
Technical Field
The invention relates to the technical field of camera movement, and in particular to a semi-automatic control method and system for a photographic robot.
Background
With the development of film and television technology, the industry's demand for highly flexible and highly stable camera-movement equipment has also increased significantly. Currently, a common approach to camera movement is to mount the camera on an industrial robot in order to execute photographic camera moves. Methods for performing camera moves with an industrial robot generally fall into two types:
1) Dotting (waypoint teaching) with a virtual camera, then reproducing the dotted path with a camera robot;
For example, invention patent application CN2019106238656 discloses a method, a device and a system for generating and controlling a robot track. In that method, the actual running track of the robot is solved from the pose of a virtual device acquired by a motion capture system. For another example, invention application CN202010842227.6 discloses a method, a system and an electronic device for offline programming of a camera robot. Similarly, that method helps a user complete shooting tasks by discretizing the spatial track of a virtual camera robot and checking the combined virtual-real effect in real time in a virtual shooting system. For another example, invention patent application CN2019101367788 discloses a path planning method for a camera robot and a computer storage medium. In that method, a visual path curve is quickly edited on the motion path, so that a user can visually check the shooting effect of the virtual camera in real time. However, path reproduction methods based on a virtual camera often run into the following problems during actual shooting: first, the dotting data is prone to problems such as lack of smoothness (hand shake while the photographer operates the virtual camera can introduce large operation errors into the dotting data); second, operating a virtual camera is relatively complex and places high demands on the photographer's professional skill (the on-set photographer must ensure the artistic quality of the shot while also controlling every module of the virtual camera, the industrial robot and the real camera, and shooting may even require the joint participation of several people, such as the photographer and a robot engineer).
Therefore, such methods are often better suited to teaching the camera's shooting track, but face the above problems or limitations in a real on-set shooting environment.
2) Controlling the industrial robot through remote control equipment (such as a teleoperation device) to perform camera moves;
For example, patent application CN201410357772.0 discloses a teleoperation-based real-time control method for a camera robot, in which a teleoperation device end, a PC server end, a robot client end and a communication transmission link communicate with one another to control the real-time motion of the robot in response to real-time motion command signals. For another example, application CN202121944975.1 discloses an external trigger device for a film and television shooting robot, which attempts to improve the real-time responsiveness of the shooting robot's signal triggering. However, this real-time control approach requires, on the one hand, a skilled technician to operate it; on the other hand, it needs the photographer's manual coordination to adjust shooting parameters along the camera-movement route, which places very high demands on precise cooperation between the technician and the photographer.
In addition, in order to improve shooting quality during camera moves, the prior art also provides methods for adjusting or optimizing the robot path. For example, CN202010044804.7 discloses a method for computing, in real time, the pose of a six-axis robot tip following a target object; it can solve for and track a target object that has no physical connection to the robot and compute the tip pose adjustment so that the robot tip always stays level with a workbench. For another example, CN202110949804.6 discloses a target tracking method based on a camera robot, which uses a target tracking algorithm to adjust the mechanical arm and the camera in real time along the arm's motion track, so that the camera lens is always aimed at the target object and its focal length always matches the distance to the target. However, such real-time path solving and adjustment places higher demands on software and hardware: for example, the robot controller must have greater data processing capacity, which correspondingly increases the cost of actual shooting.
Disclosure of Invention
The invention aims to provide a semi-automatic control method for a photographic robot which partially solves or alleviates the above defects in the prior art and can improve the stability and accuracy of the robot's camera movement.
In order to solve the technical problems, the invention adopts the following technical scheme: in a first aspect of the present invention, there is provided a semi-automatic control method of a photographing robot, the method comprising:
S101, acquiring a first input signal combination through a remote control module, wherein the first input signal combination comprises one or more of the following signals:
(i) A signal associated with a remote control state of the remote control module;
(ii) Signals associated with the actual operating state of the remote control module and/or the robot module;
S102, switching or maintaining the remote control module to be in a corresponding functional state according to the first input signal combination;
S103, when the remote control module is in the first functional state, acquiring the second input signal combination associated with the robot module, the second input signal combination including: a first sub-combination input through the remote control module;
wherein the first sub-combination comprises one or more of the following signals:
(1) A first rocker signal, the first rocker signal comprising: spatial motion trail data of the first rocker; (2) A second rocker signal, the second rocker signal comprising: euler angle change data of the second rocker; (3) a length signal comprising: data associated with a task execution length of the robot module;
S104, converting the second input signal combination into a first motion instruction through a server connected with the remote control module;
S105, splitting the first motion instruction into a first instruction and a second instruction through a server, and respectively sending the first instruction and the second instruction to a robot unit and a camera unit in the robot module.
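Although the patent gives no code, the conversion-and-split flow of S104-S105 can be sketched as follows. All field names here (`trajectory_nodes`, `camera_params`, and so on) are hypothetical illustrations, not terminology from the claims; the key idea is that both halves of the split instruction carry the same node tags so the robot unit and camera unit stay synchronized.

```python
def convert_and_split(signal_combo):
    """Build a first motion instruction from the rocker/length signals,
    then split it into a robot instruction and a camera instruction."""
    nodes = signal_combo["trajectory_nodes"]   # from the first rocker
    angles = signal_combo["euler_angles"]      # from the second rocker
    motion_instruction = [
        {"tag": i, "position": p, "euler": a}
        for i, (p, a) in enumerate(zip(nodes, angles))
    ]
    # First instruction: spatial pose per node tag, for the robot unit.
    robot_cmd = [dict(n) for n in motion_instruction]
    # Second instruction: camera parameters keyed to the same node tags,
    # so robot motion and camera adjustment can be aligned later.
    camera_cmd = [{"tag": n["tag"], **signal_combo["camera_params"]}
                  for n in motion_instruction]
    return robot_cmd, camera_cmd

combo = {
    "trajectory_nodes": [(0.0, 0.0, 0.5), (0.1, 0.0, 0.6), (0.2, 0.1, 0.6)],
    "euler_angles": [(0, 0, 0), (0, 5, 0), (0, 10, 0)],
    "camera_params": {"aperture": 2.8, "iso": 400, "focal_length": 35},
}
robot_cmd, camera_cmd = convert_and_split(combo)
```

The shared `tag` field plays the role of the node label described later in the embodiments: it lets the server match camera parameter sets to specific points of the robot's track.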
In some embodiments, before S104, the method further comprises the step of:
first node information of at least two first nodes is obtained from the motion trail data, and the first node information comprises: a first node coordinate, and a node tag associated with the first node coordinate;
obtaining a first fitting curve through the node coordinates by adopting a cubic spline interpolation method;
judging whether the first fitting curve matches the working parameters of the robot module; if so, executing S104, and if not, prompting the user to update the first rocker signal.
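The cubic spline interpolation step can be illustrated with a minimal natural cubic spline over one coordinate axis of the dotted nodes, using the node labels as the spline parameter (a real system would fit x, y and z separately the same way). This is a textbook construction offered as a sketch, not the patent's actual implementation.

```python
def natural_cubic_spline(ts, ys):
    """Return a function evaluating the natural cubic spline through (ts, ys)."""
    n = len(ts) - 1
    h = [ts[i + 1] - ts[i] for i in range(n)]
    # Solve the tridiagonal system for the second-derivative coefficients c.
    alpha = [0.0] * (n + 1)
    for i in range(1, n):
        alpha[i] = (3 * (ys[i + 1] - ys[i]) / h[i]
                    - 3 * (ys[i] - ys[i - 1]) / h[i - 1])
    l, mu, z = [1.0] * (n + 1), [0.0] * (n + 1), [0.0] * (n + 1)
    for i in range(1, n):
        l[i] = 2 * (ts[i + 1] - ts[i - 1]) - h[i - 1] * mu[i - 1]
        mu[i] = h[i] / l[i]
        z[i] = (alpha[i] - h[i - 1] * z[i - 1]) / l[i]
    b, c, d = [0.0] * n, [0.0] * (n + 1), [0.0] * n
    for j in range(n - 1, -1, -1):
        c[j] = z[j] - mu[j] * c[j + 1]
        b[j] = (ys[j + 1] - ys[j]) / h[j] - h[j] * (c[j + 1] + 2 * c[j]) / 3
        d[j] = (c[j + 1] - c[j]) / (3 * h[j])

    def evaluate(t):
        # Locate the spline segment containing t (clamped to the last one).
        j = n - 1
        for k in range(n):
            if t <= ts[k + 1]:
                j = k
                break
        dt = t - ts[j]
        return ys[j] + b[j] * dt + c[j] * dt ** 2 + d[j] * dt ** 3

    return evaluate

# Node labels 0..3 with one coordinate axis of the dotted positions.
spline = natural_cubic_spline([0, 1, 2, 3], [0.0, 0.5, 0.4, 0.9])
```

Because the spline interpolates, it passes exactly through every dotted node while remaining smooth between them, which is what makes it a useful "first fitting curve" to validate against the robot's working parameters.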
In some embodiments, the operating parameters include: a workspace boundary; correspondingly, the step of judging whether the first fitting curve is matched with the working parameters of the robot module comprises the following steps:
acquiring a first interval between the first fitting curve and the working space boundary;
judging whether the first interval belongs to a preset interval range or not;
If yes, executing S104, otherwise, prompting a user to update the first rocker signal.
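The boundary-matching step above might look like the following sketch, which samples the fitted curve, computes each sample's clearance to a box-shaped workspace boundary, and accepts the curve only if the minimum clearance (the "first interval") stays inside the preset interval range. The box workspace and all numeric values are illustrative assumptions.

```python
def clearance_to_box(point, box_min, box_max):
    """Smallest distance from a point inside the box to any of its faces
    (negative if the point is outside the box on some axis)."""
    return min(
        min(p - lo, hi - p)
        for p, lo, hi in zip(point, box_min, box_max)
    )

def curve_matches_workspace(samples, box_min, box_max, gap_range):
    """Return True if every sample is inside the workspace and the minimum
    clearance lies within the preset interval range."""
    gaps = [clearance_to_box(p, box_min, box_max) for p in samples]
    if min(gaps) < 0:  # a sample left the workspace entirely
        return False
    lo, hi = gap_range
    return lo <= min(gaps) <= hi

box_min, box_max = (-1.0, -1.0, 0.0), (1.0, 1.0, 1.5)
samples = [(0.0, 0.0, 0.5), (0.2, 0.1, 0.7), (0.4, 0.3, 0.9)]
ok = curve_matches_workspace(samples, box_min, box_max, (0.05, 1.0))
```

A curve that hugs the boundary too closely (clearance below the preset lower bound) would fail the check and trigger the prompt to update the first rocker signal.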
In some embodiments, before S104, the method further comprises the step of:
First node information of at least two first nodes is obtained from the motion trail data, and the first node information comprises: a first node coordinate, and a first node tag associated with the first node coordinate; and obtaining second node information of at least one second node from the euler angle change data, wherein the second node information comprises: a second node pose angle, and a second node label associated with the pose angle;
calculating to obtain a second fitting curve through the first node information and the second node information by adopting a numerical analysis method;
calculating at least one axis motion line of the robot unit from the second fitting curve by an inverse-kinematics method, wherein the axis motion line is a curve reflecting the relationship between the robot's joint axis angle and time;
judging whether the included angle of the joint axis belongs to a preset included angle range or not according to the axis motion line;
If yes, executing S104, otherwise, prompting a user to update the first sub-combination;
And/or, the second input signal combination further comprises: a second sub-combination received or collected from the robotic unit, and the second sub-combination comprising one or more of the following signals:
(1) A first feedback signal, the first feedback signal comprising: the shaft speed of at least one shaft of the robot unit, and a corresponding first feedback tag, the first feedback tag comprising: a time or node number corresponding to the shaft speed;
(2) A second feedback signal, the second feedback signal comprising: world coordinates of a robot end of the robot unit, and a corresponding second feedback tag, and the second feedback tag includes: time or node number corresponding to the world coordinates.
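The inverse-kinematics check can be illustrated with a 2-link planar arm standing in for one pair of robot joints: each point of the fitted curve is converted to joint angles, producing per-axis "axis motion lines" (angle versus node tag) that are then checked against preset joint-angle limits. The link lengths and limits below are assumptions for the sketch, not values from the patent.

```python
import math

L1, L2 = 0.5, 0.4  # assumed link lengths (m)

def two_link_ik(x, y):
    """Elbow-down inverse kinematics for a planar 2-link arm."""
    d2 = x * x + y * y
    cos_q2 = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= cos_q2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(cos_q2)
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2),
                                       L1 + L2 * math.cos(q2))
    return q1, q2

def axis_lines_within_limits(points, limits):
    """Build angle-vs-node lines for both joints, then verify each angle
    stays inside its preset included-angle range."""
    lines = [two_link_ik(x, y) for x, y in points]
    return all(
        limits[axis][0] <= q <= limits[axis][1]
        for q1, q2 in lines
        for axis, q in enumerate((q1, q2))
    )

limits = [(-math.pi / 2, math.pi / 2), (0.0, math.pi)]  # per-joint range
path = [(0.6, 0.2), (0.5, 0.4), (0.4, 0.5)]
ok = axis_lines_within_limits(path, limits)
```

A six-axis robot would use its full kinematic model instead, but the structure of the check is the same: solve joint angles per node, then compare against the preset angle ranges before allowing S104 to proceed.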
In some embodiments, the first instruction comprises: fifth node information of at least two fifth nodes, the fifth node information comprising: a coordinate position of the fifth node, and a node tag corresponding to the coordinate position; correspondingly, the method further comprises the steps of:
when the server receives at least one corresponding feedback signal within a first set time, judging whether the feedback signal matches the first instruction; if so, judging that the actual running state of the robot module is normal, and if not, judging that it is in a first abnormal state;
and/or, when the server receives no feedback signal within a second set time, judging that the actual motion state of the robot module is in a second abnormal state.
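The two branches above can be sketched as a single classification function. The state names, the `pending` case, and the tag structure are illustrative assumptions layered on top of the patent's description.

```python
def classify_robot_state(feedback, instruction_tags, elapsed, t1, t2):
    """Return 'normal', 'abnormal_1' (mismatched or late feedback),
    'abnormal_2' (no feedback within the second set time),
    or 'pending' (still waiting, no verdict yet)."""
    if feedback is None:
        return "abnormal_2" if elapsed > t2 else "pending"
    if elapsed <= t1 and all(tag in instruction_tags
                             for tag in feedback["tags"]):
        return "normal"
    return "abnormal_1"

# Feedback tags must refer to nodes the first instruction actually contains.
state = classify_robot_state({"tags": [0, 1]}, {0, 1, 2, 3},
                             elapsed=0.2, t1=0.5, t2=2.0)
```

Matching feedback tags against the instruction's node tags is what lets the server decide whether the robot is actually tracking the commanded path, rather than merely confirming that it is alive.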
In some embodiments, the method further comprises the step of:
when the server detects that the robot module is in the first abnormal state, correcting the second instruction according to the feedback signal, and correspondingly generating a first correction signal;
the server sending the first correction signal to the camera unit;
and/or the method further comprises the steps of:
when the server detects that the robot module is in the second abnormal state, the server sends a first communication signal to the robot unit so that the robot unit directly sends the feedback signal to the camera unit;
the camera unit adaptively adjusts a camera parameter set according to the feedback signal;
and when the server again receives the feedback signal from the robot unit within a third set time, the server sends a second communication signal to the robot unit so that the robot unit stops directly sending the feedback signal to the camera unit.
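The correction and failover behaviour described above can be sketched as a small server-side state machine; the message names and payload shapes are assumptions, not terminology fixed by the patent.

```python
class ServerLink:
    """Sketch of the server's reaction to the two abnormal states."""

    def __init__(self):
        self.direct_feedback = False   # is the robot->camera bypass enabled?
        self.sent = []                 # (recipient, message) log

    def on_state(self, state, feedback=None, camera_cmd=None):
        if state == "abnormal_1" and feedback is not None:
            # First correction signal: re-time the camera instruction to the
            # node the robot actually reached, per its feedback tag.
            self.sent.append(("camera", dict(camera_cmd, tag=feedback["tag"])))
        elif state == "abnormal_2" and not self.direct_feedback:
            # First communication signal: let the robot unit stream feedback
            # straight to the camera unit while the server link is down.
            self.sent.append(("robot", "enable_direct_feedback"))
            self.direct_feedback = True
        elif state == "recovered" and self.direct_feedback:
            # Second communication signal: revoke the bypass once feedback
            # reaches the server again within the third set time.
            self.sent.append(("robot", "disable_direct_feedback"))
            self.direct_feedback = False

link = ServerLink()
link.on_state("abnormal_1", feedback={"tag": 7},
              camera_cmd={"aperture": 2.8, "tag": 5})
link.on_state("abnormal_2")
link.on_state("recovered")
```

The design point is that camera adjustment never stalls: in the first abnormal state the server re-synchronizes the camera to the robot's real progress, and in the second the robot feeds the camera directly until the server link recovers.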
In some embodiments, the step S103 includes the steps of:
s31, detecting a user-defined signal of a user, wherein the user-defined signal comprises at least one or more of the following:
(1) The signal type of the addition signal to be added;
(2) The signal type of the removal signal to be removed;
(3) Function state information to be customized;
S32 adds the add signal to the corresponding functional state or removes the remove signal from the functional state in response to the custom signal.
In some embodiments, prior to S32, further comprising the step of:
Acquiring at least one of the following priority information:
(1) The first preset priority corresponding to the adding signal or the removing signal;
(2) The second preset priority corresponding to the functional state;
(3) A third preset priority of predefined signal types in the functional state;
Judging whether the first preset priority is matched with the second preset priority or the third preset priority, if so, executing a corresponding adding step or removing step, and if not, sending a corresponding prompt signal to a user.
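The priority-matching gate before S32 can be sketched as a simple comparison, assuming a numeric scale where a higher number means a higher priority (the scale itself is an illustrative assumption).

```python
def custom_signal_allowed(signal_priority, state_priority, type_priority=None):
    """The add/remove signal's first preset priority must match or outrank
    the functional state's priority and, when a predefined signal type is
    being displaced, that type's priority as well."""
    if signal_priority < state_priority:
        return False
    if type_priority is not None and signal_priority < type_priority:
        return False
    return True

# A user signal with priority 3 may modify a state of priority 2...
allowed = custom_signal_allowed(3, 2)
```

If the check fails, the system would send the prompt signal described above instead of performing the add or remove step.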
In some embodiments, the spatial motion profile data comprises: an initial position of the first rocker, the first rocker being offset on at least one axis by at least one first offset compared to the initial position;
In some embodiments, the euler angle change data includes: the original euler angle of the second rocker, the second rocker being offset about at least one axis by at least one second offset;
In some embodiments, the length signal comprises: initial node information of the task that the robot module needs to perform;
In some embodiments, the second instruction includes one or more of the following parameters: aperture, sensitivity, focal length.
The second aspect of the present invention is to provide a semi-automatic control system of a photographic robot, the system comprising:
A first input module configured to obtain a first input signal combination, wherein the first input signal combination comprises one or more of the following signals:
(i) A signal associated with a remote control state of the remote control module;
(ii) Signals associated with the actual operating state of the remote control module and/or the robot module;
A function switching module configured to switch or maintain the remote control module to a corresponding functional state according to the first input signal combination;
A second input module configured for obtaining the second input signal combination associated with the robot module when the remote control module is in a first functional state, the second input signal combination comprising: a first sub-combination input through the remote control module;
wherein the first sub-combination comprises one or more of the following signals:
(1) A first rocker signal, the first rocker signal comprising: spatial motion trail data of the first rocker; (2) A second rocker signal, the second rocker signal comprising: euler angle change data of the second rocker; (3) a length signal comprising: data associated with a task execution length of the robot module;
a first instruction conversion module configured to convert the second input signal combination into a first motion instruction;
And the first instruction transmission module is configured to split the first motion instruction into a first instruction and a second instruction and respectively send the first instruction and the second instruction to a robot unit and a camera unit in the robot module.
The beneficial technical effects are as follows:
The invention provides a semi-automatic decision mode in which manual operation and intelligent control cooperate with each other. Specifically, the semi-automatic decision mode of the invention can be switched to a specific functional state for different application scenarios or stages. For example, the semi-automatic decision mode may be switched to the first functional state or the second functional state for stages such as test shooting and real shooting.
On the one hand, the mode of flexibly switching the function states provides more operation flexibility for users so as to meet the requirements of the users on manual adjustment range and intelligent decision under different scenes. On the other hand, the mode of setting specific function combinations according to specific application scenes (such as specific operation states or control states) is also beneficial to more accurately realizing the mutual coordination of artificial and intelligent decisions.
From the user's point of view, the user only needs to input a limited signal type through the remote control module in a single operation. Therefore, for users, the semi-automatic decision mode also reduces the learning and operation difficulty of the users to a certain extent. From the perspective of computer operation, aiming at limited signal combinations, the invention adopts a mode of carrying out integral processing and split operation on the collected signal combinations, thereby further ensuring that accurate and synchronous operation between the robot unit and the camera unit can be realized.
In another aspect, unlike the conventional technical route in the prior art, the main way the invention improves the stability of camera movement is not by improving the robot's trajectory planning algorithm, but by providing a simple external control method/system that can be quickly matched and docked with existing equipment, such as existing six-axis robots and cameras. On the one hand, the control method can quickly find dotting data matched with the existing equipment through the cooperative manual-plus-intelligent decision mode, based on limited signal input; on the other hand, synchronized processing of the three parties' data is realized through data transmission among the server, the robot unit and the camera unit. Moreover, the method avoids heavily modifying the trajectory planning algorithm inside the robot. Therefore, in practical application, the invention can be simply docked with a variety of robot products, and its application range is relatively wide.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. Like elements or portions are generally identified by like reference numerals throughout the several figures. In the drawings, elements or portions thereof are not necessarily drawn to scale. It will be apparent to those of ordinary skill in the art that the drawings in the following description are of some embodiments of the invention and that other drawings may be derived from these drawings without inventive faculty.
FIG. 1 is a flow chart of a robot control method according to an exemplary embodiment of the present invention;
FIG. 2 is a block diagram of a robotic control system in an exemplary embodiment of the invention;
FIG. 3 is a diagram illustrating a relationship between a remote control module and a server in a robot control system according to an exemplary embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating an update process of a motion profile in a third control state according to an exemplary embodiment of the present invention;
Fig. 5 is a flow chart of a method for synchronizing data of a camera and a robot cell in an exemplary embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more clear, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It will be apparent that the described embodiments are some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In this document, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the description of the invention and have no particular meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
The terms "upper," "lower," "inner," "outer," "front," "rear," "one end," "the other end," and the like herein refer to an orientation or positional relationship based on that shown in the drawings, merely for convenience of description and to simplify the description, and do not denote or imply that the devices or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the invention. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted," "configured to," "connected," and the like, herein, are to be construed broadly as, for example, "connected," whether fixedly, detachably, or integrally connected, unless otherwise specifically defined and limited; the two components can be mechanically connected, can be directly connected or can be indirectly connected through an intermediate medium, and can be communicated with each other. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
Herein, "and/or" includes any and all combinations of one or more of the associated listed items.
Herein, "plurality" means two or more, i.e., it includes two, three, four, five, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As used in this specification, the term "about" typically means +/-5% of the stated value, more typically +/-4%, more typically +/-3%, more typically +/-2%, even more typically +/-1%, and even more typically +/-0.5% of the stated value.
In this specification, certain embodiments may be disclosed in a range format. It should be appreciated that such a description "within a certain range" is merely for convenience and brevity and should not be construed as an inflexible limitation on the disclosed ranges. Accordingly, the description of a range should be considered to have specifically disclosed all possible sub-ranges and individual numerical values within that range. For example, the description of the range 1-6 should be construed as having specifically disclosed sub-ranges such as 1 to 3, 1 to 4, 1 to 5, 2 to 4, 2 to 6, 3 to 6, etc., as well as individual numbers within that range, e.g., 1, 2, 3, 4, 5 and 6. The above rule applies regardless of the breadth of the range.
Herein, "user" generally refers to the actual operator/photographer, or may also be a computer connected to one or more of a remote control module, a server, a robot module.
Example 1
As shown in fig. 1, the present invention provides a semi-automatic control method of a photographic robot, the method comprising:
s101 obtains a first input signal combination, wherein the first input signal combination comprises one or more of the following signals:
(i) A signal a associated with a remote control state of the remote control module;
(ii) A signal B associated with the actual operating state of the remote control module and/or the robot module.
S102, switching or maintaining the remote control module to a corresponding functional state according to the first input signal combination.
In some embodiments, signal A is a switching signal entered manually by a user (i.e., an operator, such as a photographer), for example a signal indicating a switch to the first functional state or the second functional state. Alternatively, signal A may be a default signal preset in the remote control module, where the default signal is associated with a preset functional state; that is, when the remote control module is turned on, it will automatically maintain or switch to the corresponding functional state.
In some embodiments, signal B is historical operation record information of the remote control module (e.g., the functional state during the last operation). Alternatively, signal B is the current actual running state of the robot module (e.g., stationary or moving); when the robot is stationary, the remote control module may preferably be switched to the first functional state.
S103, when the remote control module is in the first functional state, acquiring the second input signal combination associated with the robot module, the second input signal combination including: a first sub-combination entered through the remote control module.
S104 converts the second input signal combination into a first motion command.
S105, splitting the first motion instruction into a first instruction and a second instruction, and respectively sending the first instruction and the second instruction to a robot unit and a camera unit in the robot module.
Preferably, in some embodiments, steps S101-S103 described above may be performed by the remote control module, and steps S104 and S105 may be performed by a server (e.g., a computer) connected to the remote control module.
In some embodiments, a preset trajectory planning algorithm in the robot unit may be used to calculate the first motion instruction (e.g., the position and pose of the robot tip, and the camera parameter set) based on the second input signal combination.
In some embodiments, the first instruction comprises: fifth node information of at least two fifth nodes, the fifth node information including: the coordinate position of the fifth node (such as the spatial position and the Euler angle), and a node tag corresponding to the coordinates.
In some embodiments, the node tag comprises: the time stamp (or time) of the node, or the number of the node (which may be used to reflect the node's relative position in the robot motion track). The node tags can thus be used to identify and associate the corresponding nodes.
In some embodiments, the second instructions comprise: a set of camera parameters associated with at least one of the nodes (e.g., a fifth node), the set of camera parameters comprising: one or more parameters of aperture, sensitivity, focal length, etc., and a time stamp or node number corresponding to the parameters (which may be used to reflect the speed or time of parameter adjustment of the camera).
In some embodiments, the robotic unit is at least one industrial robot, such as a six-axis robot,
Or a three-axis robot, a five-axis robot, etc.
In some embodiments, the camera unit is at least one camera, video camera, or similar device that forms and records images using optical principles.
The following exemplarily describes preferred operation states (as shown in fig. 3) included in the first functional state of the remote control module in the embodiment of the present invention:
1. A first operating state f1 (also referred to as a dotting state)
Preferably, in some embodiments, the first operational state is suited for a user to manually dot the camera-movement path prior to performing the actual shooting task.
To meet the user's manual dotting requirements, the specific signal types acquired in the first operation state are determined by a first sub-combination, and the first sub-combination includes one or more of the following signals:
(1) A first rocker signal, the first rocker signal comprising: spatial motion trail data of the first rocker;
For example, in some embodiments, the first rocker includes three degrees of freedom of movement; correspondingly, the spatial motion trajectory data further includes: the spatial coordinate position M(x, y, z) of at least one point (or node) on the motion trajectory of the first rocker in space (e.g., in the world coordinate system).
(2) A second rocker signal, the second rocker signal comprising: Euler angle change data of the second rocker;
For example, in some embodiments, the second rocker also includes three degrees of freedom of movement; correspondingly, the Euler angle change data further includes: the original pose N1(u1, v1, w1) of the second rocker in space, and the pose N2(u2, v2, w2) of the second rocker after it is moved by the user's operation.
(3) A length signal, the length signal comprising: data associated with a task execution length of the robot module;
For example, in some embodiments, the task execution length is a task execution duration T, an execution track length L, or an execution track interval (for example, a track interval that the user considers more difficult to execute). That is, before executing the actual shooting task, the user may set the execution length of the task so as to first preview the actual motion trajectory of some of the points, after which the photographer or the corresponding server determines whether the preview effect meets expectations.
In some embodiments, a rocking wheel is provided on the remote control module; when a user rotates the rocking wheel, the rotation angle or the number of rotations of the rocking wheel is converted into the corresponding task execution length according to a preset conversion rule.
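A minimal sketch of such a preset conversion rule (the scale factor and the cap are illustrative assumptions, not values from the patent):

```python
def wheel_to_task_length(rotation_deg: float,
                         length_per_turn: float = 1.0,
                         max_length: float = 10.0) -> float:
    """Preset conversion rule: map rocking-wheel rotation (degrees)
    to a task execution length, capped at a maximum length."""
    turns = abs(rotation_deg) / 360.0
    return min(turns * length_per_turn, max_length)

# Two full turns of the wheel -> a task execution length of 2.0 units
preview_length = wheel_to_task_length(720.0)
```

The same shape of rule could equally map rotation to an execution duration T or a track interval, as described above.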
In the embodiment of the invention, requirements such as manual dotting and dotting-effect previewing can be flexibly met by collecting the first sub-combination.
Further, in some embodiments, before S104, the method further includes the steps of:
First node information of at least two first nodes is obtained from the motion trail data, and the first node information comprises: a first node coordinate (x, y, z), and a node tag associated with the first node coordinate, the node tag comprising: a first timestamp, or node number.
A first fitted curve is obtained from the node coordinates using cubic spline interpolation.
And judging whether the first fitting curve is matched with the working parameters (such as the performance parameters of the industrial robot) of the robot module, if so, executing S104, and if not, prompting a user to update the first rocker signal.
In some embodiments, the timestamp may be the actual dotting time time1 of the node (e.g., the actual time at which the first rocker moved to the corresponding node position). Alternatively, the timestamp may be a time2 converted from time1 by a preset conversion rule, where time2 is the expected time at which the camera moves in space to the target location corresponding to the node.
In some embodiments, the operating parameters include: a workspace boundary; correspondingly, the step of judging whether the first fitting curve is matched with the working parameters of the robot module comprises the following steps:
acquiring a first interval between the first fitting curve and the working space boundary;
judging whether the first interval belongs to a preset interval range or not;
If yes, executing S104, otherwise, prompting a user to update the first rocker signal.
For example, in some embodiments, the spacing between each point position on the first fitted curve and the boundary coordinates of the working space of the industrial robot (i.e., the workspace boundary) is calculated separately. When the spacing is too small, the user is advised to re-input the first rocker signal or make local changes to it.
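The workspace-matching judgment above can be sketched as follows, assuming for illustration a box-shaped workspace (real robot workspaces are more complex); the function and parameter names are invented for this example:

```python
def check_workspace_margin(curve_points, bounds, margin_range):
    """Judge whether the spacing between a fitted curve and a box-shaped
    workspace boundary falls within a preset interval range.
    curve_points: iterable of (x, y, z) points sampled from the fitted curve.
    bounds: ((xmin, xmax), (ymin, ymax), (zmin, zmax)) workspace box.
    margin_range: (min_margin, max_margin) preset interval range.
    Returns (ok, worst_margin)."""
    worst = float("inf")
    for p in curve_points:
        # distance from the point to the nearest face of the workspace box
        margin = min(min(c - lo, hi - c) for c, (lo, hi) in zip(p, bounds))
        worst = min(worst, margin)
    lo_m, hi_m = margin_range
    return lo_m <= worst <= hi_m, worst
```

If `ok` is False, the user would be prompted to update the first rocker signal, as in the steps above.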
Therefore, the dotting data in the embodiment of the invention can be determined through the cooperation of user input and intelligent analysis and evaluation, namely, the semi-automatic dotting method with cooperation of manpower and intelligence is provided.
In some embodiments, before S104, the method further comprises the step of:
1) First node information of at least two first nodes is obtained from the motion trail data, and the first node information comprises: a first node coordinate, and a first tag (e.g., including a first timestamp, or node number) associated with the first node coordinate; and obtaining second node information of at least one second node from the euler angle change data, wherein the second node information comprises: a second node pose angle, such as euler angles (u, v, w), and a second label (e.g., including a second timestamp, or node number) associated with the second node pose;
For example, in some embodiments, the first tag and the second tag may be used to associate the first node and the second node information, e.g., node information that is the same/similar in number, or the same/similar in time may be integrated to obtain a complete position pose (x, y, z, u, v, w) of the node.
2) Calculating a second fitting curve from the first node information and the second node information by a numerical analysis method (for example, cubic spline interpolation);
3) Calculating at least one axis motion line (such as the six-axis motion curves of an industrial robot) of the robot unit from the second fitting curve by a kinematic inverse solution method (such as an iterative method or a numerical optimization method), wherein the axis motion line is a curve reflecting the relationship between the joint axis angles of the robot and time;
4) Judging, through the axis motion line, whether the joint axis angle belongs to a preset angle range;
if yes, executing S104, otherwise, prompting the user to update the first sub-combination.
In the embodiment of the invention, the first rocker signal or the second rocker signal can be rapidly analyzed in advance by numerical simulation, thereby reducing the possibility of faults caused by the robot unit moving near a singularity.
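The inverse-solution check above can be illustrated on a toy model. The sketch below substitutes a planar two-link arm with an analytic inverse solution for the six-axis robot in the text; all names, link lengths, and limits are illustrative assumptions:

```python
import math

def ik_2link(x, y, l1=1.0, l2=1.0):
    """Analytic inverse kinematics for a planar 2-link arm (elbow-down).
    Returns (theta1, theta2) in radians, or None if unreachable."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target outside the reachable workspace
    t2 = math.acos(c2)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

def joints_within_limits(path_xy, limits):
    """For every node on the fitted curve, check that an inverse solution
    exists and that each joint angle lies in its preset angle range."""
    for x, y in path_xy:
        sol = ik_2link(x, y)
        if sol is None:
            return False
        if not all(lo <= q <= hi for q, (lo, hi) in zip(sol, limits)):
            return False
    return True
```

Keeping the elbow angle strictly inside an open range such as (0.1, pi - 0.1) is one simple way to stay away from the stretched-out singular configuration.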
In some embodiments, part of the track interval may be updated manually in combination with the determination results obtained by the above automatic analysis (such as which nodes are near a singularity).
In some embodiments, the fifth node in the first instruction may be the first node and/or the second node obtained by user input. Alternatively, in other embodiments, the fifth node may be a node selected from the first fitted curve or the second fitted curve that meets the requirement according to a preset interval.
The first operation state f1 in the present embodiment is preferably applied to scenes in which the motion trajectory of the subject to be photographed is relatively clear. From the user's perspective, the automatic control of the camera-movement process can be completed simply by manually turning the two rockers on the remote control module.
In addition, the manual-and-intelligent coordinated semi-automatic decision mode in the embodiment of the invention provides a certain degree of operational freedom for the user (for example, the rocker adjustment method assists the user in making simple and rapid manual adjustments). On the other hand, the semi-automatic decision mode can assist the photographer in quickly finding a set of dotting data (such as the corresponding node information) that conforms to the working performance of the robot while still meeting the photographer's lens adjustment requirements. That is, the invention reduces the performance requirements on the robot unit to a certain extent, so that the method can be well adapted to more types of robot units (such as different types of mechanical arms).
2. Second operating state f2
In some embodiments, the second operational state is primarily applied when the robot module is operating in real time, to synchronize tasks between the camera unit and the robot unit of the robot module.
In some embodiments, in order to increase the processing speed of task synchronization and ensure the accuracy of the synchronization process, the second input signal combination further includes: a second sub-combination received or collected from the robotic unit, and preferably comprising one or more of the following signals:
(1) A first feedback signal, the first feedback signal comprising: the shaft speed of at least one shaft of the robot unit (i.e., the actual operating parameters of the motors in the industrial robot), and a corresponding first feedback tag, the first feedback tag comprising: a first feedback time T1 corresponding to the shaft speed, and/or a node number corresponding to the shaft speed;
In some embodiments, the first feedback time is a point in time corresponding to the shaft speed.
(2) A second feedback signal, the second feedback signal comprising: world coordinates of the robot end of the robot unit, and a corresponding second feedback tag, the second feedback tag comprising: a second feedback time T2 corresponding to the world coordinates, and/or a node number corresponding to the world coordinates.
In some embodiments, the world coordinates may be the world coordinates of the robot end at the current time, or the world coordinates of the robot end at a future time (e.g., 1 s or 2 s later), which may be automatically predicted by the robot module.
In some embodiments, the second feedback time is the point in time corresponding to the world coordinates (i.e., the time at which the robot end travels to the corresponding location).
As shown in fig. 5, in some embodiments, the method further comprises the steps of:
When the server receives at least one feedback signal in a first set time I, judging whether the feedback signal is matched with the first instruction, if so, judging that the actual running state of the robot module (or the robot unit) is normal, and if not, judging that the actual running state is a first abnormal state;
And when the feedback signal is not received within the second set time II, judging that the actual motion state is a second abnormal state.
In some embodiments, for example, it may be determined whether the coordinates of the node with the same number (which may be obtained from the first motion instruction or the first instruction) are the same as or similar to the actual world coordinates; if so, the actual running state of the robot unit is judged to be normal, and if not, it is judged to be a first abnormal state.
As another example, in some embodiments, it may be determined whether the shaft speed falls within a preset threshold range; if yes, the actual running state is judged to be normal, and if not, it is judged to be a first abnormal state.
It will be appreciated that the decision rule/decision mode in the present invention may be preset by the user.
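A minimal sketch of the state judgment above; the timeout value, tolerance, speed range, and dict layout are assumptions chosen for illustration:

```python
NORMAL, FIRST_ABNORMAL, SECOND_ABNORMAL = "normal", "first_abnormal", "second_abnormal"

def judge_running_state(feedback, commanded_nodes, elapsed_since_last_feedback,
                        timeout_ii=2.0, pos_tol=0.01, speed_range=(0.0, 1.5)):
    """Judge the actual running state of the robot module from one feedback signal.
    feedback: None, or a dict such as {"node": 1, "coords": (x, y, z)}
              or {"node": 1, "axis_speed": 0.8}.
    commanded_nodes: node number -> commanded coordinates from the first instruction.
    elapsed_since_last_feedback: seconds since the last feedback was received."""
    if feedback is None:
        # no feedback within the second set time II -> second abnormal state
        return SECOND_ABNORMAL if elapsed_since_last_feedback > timeout_ii else NORMAL
    coords = feedback.get("coords")
    if coords is not None:
        target = commanded_nodes.get(feedback.get("node"))
        if target is None or any(abs(a - b) > pos_tol for a, b in zip(coords, target)):
            return FIRST_ABNORMAL  # feedback does not match the first instruction
    speed = feedback.get("axis_speed")
    if speed is not None and not (speed_range[0] <= speed <= speed_range[1]):
        return FIRST_ABNORMAL  # shaft speed outside the preset threshold range
    return NORMAL
```

As the text notes, the concrete thresholds and matching rules would be preset by the user rather than fixed as here.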
In some embodiments, step S103 further comprises the steps of:
When the server detects that the robot module is in the first abnormal state, the second instruction is corrected according to the feedback signal, and a first correction signal (e.g., data including a corrected camera parameter set) is correspondingly generated; the server then sends the first correction signal to the camera unit.
In some embodiments, a camera parameter set is calculated from the world coordinates and the actual/expected coordinates of the target object to be photographed (e.g., a person or an object), and the second instruction is updated according to the camera parameter set, so that the updated second instruction is sent to the camera unit.
In some embodiments, the method further comprises the step of:
When the server detects that the robot unit is in the second abnormal state, a first communication signal is sent to the camera unit and/or the robot unit to start data communication between the camera unit and the robot unit, i.e., the robot unit sends the first feedback signal or the second feedback signal directly to the camera unit. At this time, the camera unit switches from single-link communication with the server to dual-link communication with both the server and the robot unit.
In some embodiments, the step S103 further includes the steps of:
When the server receives the feedback signal again within a third set time III, a second communication signal is sent to the camera unit and/or the robot unit to stop the data communication between the camera unit and the robot unit, i.e., the robot unit again sends the first feedback signal or the second feedback signal directly to the server. At this time, the camera unit switches back from dual-link communication with the server and the robot unit to single-link communication with the server, so as to concentrate the task threads on the server.
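The communication switching described above can be sketched as a small router; the class name, signal names, and inbox abstraction are illustrative, not from the patent:

```python
class FeedbackRouter:
    """Routes robot feedback through the server (the normal single-link path)
    or directly to the camera unit (fallback used in the second abnormal state)."""

    def __init__(self, server_inbox, camera_inbox):
        self.server_inbox = server_inbox
        self.camera_inbox = camera_inbox
        self.direct_to_camera = False  # normal path: robot -> server -> camera

    def on_communication_signal(self, signal):
        # "first": enable the direct robot -> camera link;
        # "second": restore the server-centred link.
        if signal == "first":
            self.direct_to_camera = True
        elif signal == "second":
            self.direct_to_camera = False

    def deliver(self, feedback):
        target = self.camera_inbox if self.direct_to_camera else self.server_inbox
        target.append(feedback)
```

The underlying transport (wired, Bluetooth, or network, as the following paragraphs note) is orthogonal to this routing decision.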
Preferably, in some embodiments, the camera unit and the robot unit may perform data transmission by means of wired communication to ensure stability of data transmission when the network environment is relatively poor.
Alternatively, in other embodiments, the camera unit and the robot unit may perform data transmission through wireless communication modes such as bluetooth, network, and the like.
In this embodiment, after the user completes the manual dotting, a limited parameter set (such as an axis-speed or world-coordinate data set) is preferably used in conjunction with a three-party data transmission path (i.e., a data transmission path running in sequence from the robot unit through the server to the camera unit), so that tasks among the server, the robot unit, and the camera unit can proceed synchronously. In addition, the amount of data transmitted and processed in this process is relatively limited, which reduces, to a certain extent, the requirements on the network environment and hardware conditions during the camera-movement process.
Therefore, the operation state in this embodiment is well suited to outdoor shooting environments (e.g., regions with relatively weak network signals, such as mountains, grasslands, and deserts). Moreover, wirelessly controlling the robot via the remote control module reduces, to a certain extent, the situations in which the movement of users (such as camera operators) is restricted by the terrain, so that the camera operator can move flexibly and freely in the area near the robot while completing the camera-movement control of the robot.
3. A third operating state f3 (also referred to as a custom state)
Preferably, in some embodiments, when the remote control module is in any functional state, the third operating state may be started in response to a user operation, in which the user may freely customize the function combination (i.e., the signal acquisition types) of the current functional state according to on-site requirements (such as indoor environment, outdoor environment, fast shots, slow shots, etc.).
In some embodiments, the step S103 includes the steps of:
Detecting a custom signal of a user, the custom signal comprising one or more of the following:
(1) The signal type of the added signal to be added (the signal type may specifically be a signal/data name, such as a feedback signal, a rocker signal, etc.);
(2) The signal type of the removal signal to be removed;
(3) Functional state information to be customized (the functional state information may specifically be a name of a functional state—such as a first functional state, a second functional state, or a first operation state, etc.);
In response to the custom signal, the add signal is added to the corresponding functional state, or the remove signal is removed from that functional state.
In some embodiments, before adding the add signal to the corresponding functional state in response to the custom signal, the method further comprises the steps of:
Acquiring at least one of the following priority information:
(1) The first preset priority corresponding to the adding signal or the removing signal;
(2) The second preset priority corresponding to the functional state;
(3) A third preset priority of predefined signal types in the functional state;
Judging whether the first preset priority is matched with the second preset priority or the third preset priority, if so, executing a corresponding adding step or removing step, and if not, sending a corresponding prompt signal to a user.
In some embodiments, signal types of the same priority may be combined in the same functional state.
In some embodiments, the priorities include: a blacklist including information of signal types, functional states, or operational states that conflict with the signals (i.e., are not suggested to be simultaneously enabled).
For example, in some embodiments, when the signal to be added is signal C, the blacklist of signal C includes signal D, and signal D has already been added by default in the current functional state to be customized, the user should be prompted that signal C cannot be added directly.
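A minimal sketch of the blacklist and priority checks above (the data layout and names are assumptions for illustration):

```python
def can_add_signal(signal, enabled_signals, blacklists,
                   signal_priority, state_priority):
    """Decide whether a signal may be added to a functional state.
    enabled_signals: signal types already enabled in the state.
    blacklists: signal type -> set of conflicting signal types.
    signal_priority / state_priority: the preset priorities to be matched.
    Returns (ok, reason)."""
    # blacklist check: reject if the new signal conflicts with any
    # already-enabled signal (in either direction)
    for enabled in enabled_signals:
        if (enabled in blacklists.get(signal, set())
                or signal in blacklists.get(enabled, set())):
            return False, f"conflicts with already-enabled signal {enabled!r}"
    # priority check: only signals of matching priority may be combined
    # in the same functional state
    if signal_priority != state_priority:
        return False, "priority does not match the functional state"
    return True, "ok"
```

When `ok` is False, the remote control module would emit the corresponding prompt signal to the user instead of performing the add step.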
In some embodiments, the method further comprises the step of:
generating a corresponding indication signal according to the current functional state or operation state;
And according to the indication signal, the corresponding functions on the remote control module are specially displayed (e.g., by indicator lamps, or by text displayed around the function keys/rockers, to indicate whether the corresponding functions/signals are enabled).
In some embodiments, the spatial motion trajectory data comprises: the initial position of the first rocker, and at least one first offset generated on at least one axis (such as the x-axis, y-axis, or z-axis);
In some embodiments, the Euler angle change data includes: the original Euler angles (corresponding to the original pose) of the second rocker, and at least one second offset of the second rocker about at least one axis.
The following describes steps S104 and S105 in the present invention by way of a specific embodiment:
First, the camera position pose (also equivalent to the position pose of the robot end) $P_0=(x_0,y_0,z_0,u_0,v_0,w_0)$ is read;
The world coordinate system {W} of the robot is taken as the reference frame, and the pose is rewritten in homogeneous transformation matrix form:

$${}^{W}_{C}T_0=\begin{bmatrix}{}^{W}_{C}R_0 & {}^{C}P_0\\ 0 & 1\end{bmatrix}$$

where W is the first world coordinate system referenced by the robot, C is the robot end coordinate system, ${}^{W}_{C}T_0$ represents the change from the robot end coordinate system to the first world coordinate system, ${}^{W}_{C}R_0$ is the corresponding rotation matrix, and ${}^{C}P_0$ represents the displacement of the three degrees of freedom of the robot end along the x, y, z axes (as read from the first rocker signal).
For an attitude change (e.g., read from the second rocker signal), the attitude of the remote control module changes from the original attitude A to attitude A', and the corresponding transformation matrix is ${}^{A}_{A'}R$. The relationship between the two attitudes can be expressed as:

$${}^{W'}_{A'}R={}^{W'}_{A}R\cdot{}^{A}_{A'}R$$

where W' denotes the second world coordinate system referenced by the remote control module, A denotes the remote control module coordinate system in the original attitude, and A' denotes the remote control module coordinate system in the changed attitude.
Further, the relative attitude change matrix ${}^{A}_{A'}R$, which takes the remote control module as the reference frame, is treated as equivalent to the attitude change of the camera, so as to obtain the relative attitude change matrix of the camera; the attitude of the camera after the change is:

$${}^{W}_{C}R_t={}^{W}_{C}R_0\cdot{}^{A}_{A'}R$$
The position change rate is obtained from the rocker signal, and the changed position is:

$${}^{C}P_t={}^{C}P_0+k_1\cdot V\cdot\Delta t$$

where ${}^{C}P_t$ is the position of the robot end after the change (also equivalent to the position of the camera after the change), $V$ is the position change rate read from the first rocker signal, $k_1$ is the signal scaling factor, and $\Delta t$ is the sampling time interval.
In summary, the homogeneous transformation matrix of the changed camera pose is:

$${}^{W}_{C}T_t=\begin{bmatrix}{}^{W}_{C}R_t & {}^{C}P_t\\ 0 & 1\end{bmatrix}$$
The matrix is rewritten in the form of position-ZYX Euler angles:

$$P_t=(x_t,y_t,z_t,u_t,v_t,w_t)$$
At least one piece of fifth node information is collected from $P_t$ to correspondingly generate the first instruction, which is sent to the robot unit for execution.
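The derivation above can be sketched numerically. The following illustrative implementation assumes ZYX Euler angles (rotation about z by u, then y by v, then x by w); the function names and conventions are invented for this example:

```python
import numpy as np

def rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def update_pose(P0, delta_R, v_cmd, k1, dt):
    """P0: (x, y, z, u, v, w) current end/camera pose with ZYX Euler angles.
    delta_R: 3x3 relative attitude change matrix (from the second rocker).
    v_cmd: xyz position change rate (from the first rocker).
    k1: signal scaling factor; dt: sampling time interval.
    Returns the updated pose P_t = (x_t, y_t, z_t, u_t, v_t, w_t)."""
    p0 = np.asarray(P0[:3], dtype=float)
    u, v, w = P0[3:]
    R0 = rz(u) @ ry(v) @ rx(w)   # attitude before the change
    Rt = R0 @ delta_R            # attitude after the change
    pt = p0 + k1 * np.asarray(v_cmd, dtype=float) * dt
    # recover ZYX Euler angles from Rt
    ut = np.arctan2(Rt[1, 0], Rt[0, 0])
    vt = np.arcsin(-Rt[2, 0])
    wt = np.arctan2(Rt[2, 1], Rt[2, 2])
    return (*pt, ut, vt, wt)
```

Sampling fifth nodes from the returned poses at a preset interval would then yield the node information sent on to the robot unit.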
Of course, in other embodiments, the trajectory planning algorithm pre-stored in the robot unit (i.e., the internal algorithm of the industrial robot actually selected) may instead be used directly for the processing in step S104.
In some embodiments, the length signal comprises: the length of the task to be executed by the robot module is, for example, the initial node information of the track section to be executed.
In some embodiments, the second instruction includes one or more of the following parameters: aperture, sensitivity, focal length.
It can be understood that, on the one hand, each functional state in the present invention can be customized by the user under corresponding preset rules (e.g., combined according to priority); on the other hand, the individual functional states can also be superimposed on one another, or switched freely in real time, without conflict.
It should be noted that, unlike the conventional technical route, the core route of the present invention is as follows:
Firstly, on the basis of satisfying the photographer's manual lens-movement adjustment, a motion trajectory most suitable for the stable motion of the robot unit is found (rather than the trajectory that best fits the object to be photographed or is most accurate); secondly, specific function combinations are set for different application scenes/states so as to limit the operating functions of the remote controller to a certain extent (rather than simply stacking more operating functions). Through the cooperation of these two core routes, the invention can, on the one hand, simplify the operation of the remote controller and, on the other hand, reduce the requirements on equipment hardware (i.e., the data processing pressure on the camera and the mechanical arm is reduced to a certain extent, and the requirements on the network communication environment are lowered).
On the basis, the invention also adopts a novel data interaction mode (namely, the data processing tasks of the multi-party roles such as the remote controller, the server, the camera and the robot are distributed and managed through the steps of the method) so as to further improve the reliability and the stability of data processing in the real-time operation process.
In addition, the invention can solve or relieve the application limitation possibly generated when the signal input type is limited to a certain extent by a flexible function switching mode.
Example Two
As shown in fig. 2, the present invention further provides a semi-automatic control system of a photographic robot, the system comprising:
A remote control module 10, the remote control module 10 comprising:
A first input module configured to obtain a first input signal combination, wherein the first input signal combination comprises one or more of the following signals:
(i) A signal associated with a remote control state of the remote control module 10;
(ii) Signals associated with the actual operating state of the remote control module 10 and/or the robot module 30;
A function switching module configured to switch or maintain the remote control module to a corresponding functional state according to the first input signal combination;
A second input module configured for obtaining the second input signal combination associated with the robot module when the remote control module is in a first functional state, the second input signal combination comprising: a first sub-combination input through the remote control module;
wherein the first sub-combination comprises one or more of the following signals:
(1) A first rocker signal, the first rocker signal comprising: spatial motion trail data of the first rocker; (2) A second rocker signal, the second rocker signal comprising: euler angle change data of the second rocker; (3) a length signal comprising: data associated with a task execution length of the robot module;
And a server 20 connected to the remote control module, the server comprising:
a first instruction conversion module configured to convert the second input signal combination into a first motion instruction;
A first instruction transmission module configured to split the first motion instruction into a first instruction and a second instruction, and send the first instruction and the second instruction to a robot unit 31 (such as a robotic arm) and a camera unit 32 in the robot module, respectively.
In some embodiments, the server further comprises: the first judging module comprises:
a first node obtaining unit configured to obtain first node information of at least two first nodes from the motion trail data, the first node information including: a first node coordinate, and a node tag associated with the first node coordinate;
a first fitting unit configured to obtain a first fitted curve through the node coordinates using a cubic spline interpolation method;
The first judging unit is configured to judge whether the first fitting curve matches the working parameters of the robot module; if yes, the first instruction transmission module is triggered to perform the corresponding splitting and transmission of the instructions, and if not, the user is prompted to update the first rocker signal.
In some embodiments, the operating parameters include: a workspace boundary; accordingly, the first judgment unit includes:
A first subunit configured to obtain a first spacing between the first fitted curve and the workspace boundary;
A second subunit configured to determine whether the first pitch belongs to a preset pitch range;
If yes, the first instruction transmission module is triggered to perform the corresponding splitting and transmission of the instructions, and if not, the user is prompted to update the first rocker signal.
In some embodiments, the system further comprises:
A second node acquisition unit configured to obtain first node information of at least two first nodes from the motion trajectory data, the first node information comprising: a first node coordinate, and a first timestamp or node number associated with the first node coordinate; and to obtain second node information of at least one second node from the Euler angle change data, the second node information comprising: a second node pose angle, and a second timestamp or node number associated with the pose angle;
The second fitting unit is configured to calculate a second fitting curve through the first node information and the second node information by adopting a numerical analysis method;
The inverse solution unit is configured to calculate at least one axis motion line of the robot unit according to the second fitting curve by adopting a kinematic inverse solution method, wherein the axis motion line is a relation curve for reflecting the joint axis included angle of the robot and time;
The second judging unit is configured to judge, through the axis motion line, whether the joint axis angle belongs to a preset angle range; if yes, the first instruction transmission module is triggered to perform the corresponding splitting and transmission of the instructions, and if not, the user is prompted to update the first sub-combination;
in some embodiments, the second input signal combination further comprises: a second sub-combination received or collected from the robotic unit, and the second sub-combination comprising one or more of the following signals:
(1) A first feedback signal, the first feedback signal comprising: the shaft speed of at least one shaft of the robotic unit, and a corresponding first feedback tag comprising: a time or node number corresponding to the shaft speed;
(2) A second feedback signal, the second feedback signal comprising: world coordinates of a robot end of the robot cell, and a corresponding second feedback tag comprising: time or node number corresponding to the world coordinates.
In some embodiments, the first instruction comprises: fifth node information of at least two fifth nodes, the fifth node information comprising: coordinates of the fifth node, and a node label corresponding to the coordinates; correspondingly, the server further comprises: a state judgment unit;
The state judging unit is configured to judge whether the feedback signal is matched with the first instruction or not when the server receives at least one corresponding feedback signal within a first set time I, if so, the actual running state of the robot module is judged to be normal, and if not, the actual running state is judged to be a first abnormal state;
and/or the state judging unit is configured to judge that the actual motion state of the robot module is a second abnormal state when the server does not receive the feedback signal within a second set time II.
In some embodiments, the server 20 further comprises:
A correction unit configured to correct the second instruction according to the feedback signal and correspondingly generate a first correction signal when the server detects that the robot module is in the first abnormal state; and to transmit the first correction signal to the camera unit;
And/or, in some embodiments, server 20 further comprises:
An adjustment unit configured to transmit a first communication signal to the robot unit when the server detects that the robot module is in the second abnormal state, so that the robot unit directly transmits the feedback signal to the camera unit; and the camera unit adaptively adjusts a camera parameter set according to the feedback signal;
And when the server receives the feedback signal from the robot unit within a third set time III, the server transmits a second communication signal to the robot unit so that the robot unit stops directly transmitting the feedback signal to the camera unit.
In some embodiments, the system further comprises a customization module, and the customization module comprises:
a custom signal detection unit configured to detect a custom signal of a user, the custom signal comprising at least one or more of:
(1) The signal type of the addition signal to be added;
(2) The signal type of the removal signal to be removed;
(3) Function state information to be customized;
A signal adding and removing unit configured to add the add signal to the corresponding functional state, or remove the removal signal from that functional state, in response to the custom signal.
In some embodiments, the customization module further comprises: a custom evaluation unit, and the custom evaluation unit is configured to perform the steps of:
Acquiring at least one of the following priority information:
(1) A first preset priority corresponding to the addition signal or the removal signal;
(2) The second preset priority corresponding to the functional state;
(3) A third preset priority of predefined signal types in the functional state;
Judging whether the first preset priority is matched with the second preset priority or the third preset priority, if so, executing a corresponding adding step or removing step, and if not, sending a corresponding prompt signal to a user.
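The priority-matching step above can be sketched as follows; the numeric priority values, the function name, and the simple greater-or-equal matching rule are all illustrative assumptions, since the embodiment does not fix a concrete comparison rule:

```python
# Illustrative sketch of the priority-matching step for custom signals.
# The matching rule (a >= comparison) and all values are assumptions.

def evaluate_custom_signal(first_priority, second_priority=None, third_priority=None):
    """Return 'execute' when the addition/removal signal's priority matches
    the functional state's priority (or a predefined signal type's priority
    in that state); otherwise return 'prompt' so the user is warned."""
    targets = [p for p in (second_priority, third_priority) if p is not None]
    if any(first_priority >= t for t in targets):
        return "execute"   # perform the corresponding add/remove step
    return "prompt"        # send a prompt signal to the user

result_ok = evaluate_custom_signal(first_priority=5, second_priority=3)
result_blocked = evaluate_custom_signal(first_priority=1, third_priority=4)
```

A real implementation would also decide which of the two target priorities takes precedence when both are present; the sketch treats a match against either as sufficient.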
It will be appreciated that the system of the embodiments of the present invention may implement any of the methods or steps described above, and details are not repeated herein.
Example III
Further, in order to control the robot in real time (for example, to locally adjust the robot's trajectory while the robot unit moves according to the first instruction, or to let the user directly and manually control the robot's motion trajectory in real time), the invention also provides, on the basis of the above embodiments, a real-time control method for the photographic robot.
In some embodiments, the real-time control method comprises:
S101, acquiring a first input signal combination, wherein the first input signal combination comprises one or more of the following signals:
(i) A signal associated with a remote control state of the remote control module;
(ii) Signals associated with the actual operating state of the remote control module and/or the robot module;
S102, switching or maintaining the remote control module to be in a corresponding functional state according to the first input signal combination;
S106, when the remote control module is in a second functional state (corresponding to a real-time control state), acquiring a third input signal combination associated with the robot module, the third input signal combination including: a first input combination input through the remote control module;
Wherein the first input combination comprises one or more of the following signals:
(1) A first rocker signal, the first rocker signal comprising: spatial motion trail data of the first rocker; (2) A second rocker signal, the second rocker signal comprising: Euler angle change data of the second rocker; (3) A rocking wheel signal, the rocking wheel signal comprising: relative position data of the rocking wheel;
S107, converting the third input signal combination into a second motion instruction;
S108, splitting the second motion instruction into a third instruction (e.g., node information associated with a robot motion trajectory) and a fourth instruction (e.g., a camera parameter set), and sending the third instruction and the fourth instruction to a robot unit and a camera unit in the robot module, respectively.
In some embodiments, a preset trajectory planning algorithm in the robot unit may be employed to calculate the second motion instruction (e.g., a position and posture of the robot end, a camera parameter set, etc.) based on the third input signal combination. For example, the second motion instruction may be a fitted curve (in other words, a fitted trajectory) calculated according to a trajectory planning algorithm.
In some embodiments, the second motion instruction may also be a fitted curve calculated from the third input signal combination by using other numerical simulation analysis methods.
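As a rough illustration of the fitted-curve idea above, the following Python sketch interpolates a handful of sampled nodes with a cubic spline; the node values and variable names are invented for demonstration and are not part of the embodiment:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sketch: fit sampled rocker nodes (time tag, end position) into a smooth
# trajectory, as the trajectory-planning step describes. The node values
# below are invented purely for illustration.
t_nodes = np.array([0.0, 1.0, 2.0, 3.0])   # node tags (time)
x_nodes = np.array([0.0, 0.8, 1.5, 1.8])   # one axis of the end position

fitted = CubicSpline(t_nodes, x_nodes)      # the "fitted curve"

# The fitted curve passes through every input node exactly and can be
# sampled at any intermediate time to drive the robot unit.
x_mid = float(fitted(1.5))
```

In practice each Cartesian axis (and each Euler angle) would get its own spline, all sharing the node time tags.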
In some embodiments, the second functional state includes one or more of the following control states:
(i) A first control state (i.e., a real-time control state), and when the remote control module is in the first control state, the remote control module collects related data of the first input combination, such as a rocker signal or a rocking wheel signal, in real time;
(ii) A second control state (i.e., controlling the posture of the robot end by means of a somatosensory module, such as an IMU), and when the remote control module is in the second control state, the remote control module collects related data of a second input combination of the third input signal combination in real time, wherein the second input combination comprises one or more of the following: the first original posture of the remote control module, the first changed posture of the remote control module, and the second original posture of the robot end;
(iii) A third control state (i.e., a real-time correction state), and when the remote control module is in the third control state, the remote control module collects, in real-time, data related to a third input combination of the third input signal combination, wherein the third input combination includes one or more of:
Third node information, and the third node information includes: the position of at least one third node to be inserted, and a third time of insertion;
Fourth node information, and the fourth node information includes: the position of at least one fourth node to be inserted near the third node.
The following exemplarily describes preferred control states included in the second functional state of the remote control module in the embodiment of the present invention:
1. First control state f'1
Preferably, the first control state f'1 in the embodiment of the present invention is suitable for a user to input and adjust a motion track of an industrial robot in real time.
For example, in some embodiments, when the camera movement is relatively simple, such as when following a linearly moving object, the first input combination may be a first rocker signal input in real time.
For example, in some embodiments, the first rocker signal may be rapidly analyzed by a numerical simulation method (e.g., cubic spline interpolation) to determine whether the current real-time input signal meets the operating rules of the robotic unit (e.g., whether it is able to match the corresponding operating parameters of the motor). Specifically, the judging manner or step may refer to the first embodiment and the second embodiment, and will not be described herein.
For example, in some embodiments, when the remote control module is in the first control state f'1, the first input combination includes: first and second rocker signals input synchronously. Similarly, for the rapid analysis and evaluation of the first and second rocker signals, reference may be made to the first and second embodiments, and the details are not repeated here.
For example, in some embodiments, the first input combination includes a first rocker signal, a second rocker signal, and a rocking wheel signal input synchronously. The rocking wheel signal may be the rotation angle or the number of turns of a rocking wheel arranged on the remote control module, and the user can control a motion trajectory segment of the robot unit by rotating the rocking wheel.
2. Second control state f'2
Preferably, in the embodiment of the present invention, the second control state f'2 is suitable for the user to follow the spatial motion trajectory of the robot end, so as to adjust the posture (such as the Euler angles) of the robot end in real time.
Preferably, the second control state in the embodiment of the present invention adjusts the posture of the robot end through a somatosensory sensor disposed on the remote control module, where the sensor detects posture changes of the remote control module in space. The user can thus directly control the posture of the robot end by manually rotating and tilting the remote control module.
In some embodiments, the somatosensory sensor may be an inertial measurement unit (Inertial Measurement Unit, IMU).
3. Third control state f'3
Preferably, the third control state f'3 in the embodiment of the present invention is suitable for locally adjusting the motion track of the robot during the motion of the robot according to the first instruction or the third instruction.
In some embodiments, when the remote control module is in the third control state, S106 includes the steps of:
S61, determining constraint conditions of simulation according to the third node information, wherein the constraint conditions comprise: a range of position thresholds for at least one endpoint, or a range of derivative thresholds for at least one endpoint;
and S62, simulating by adopting a cubic spline interpolation method according to the constraint condition and the fourth node information to obtain a third fitting curve.
For example, in some embodiments, as shown in fig. 4, when the motion trajectory L1 of the robot end needs to be adjusted in real time, the coordinates of the endpoints of the interval to be inserted in the trajectory L1 (i.e., the coordinates of the two third nodes a and b) may first be obtained, and the constraint conditions (e.g., the coordinate ranges of the endpoints on both sides of the trajectory L2, i.e., the position threshold ranges) may be determined according to the position coordinates of the nodes a and b. In addition, the new node information to be inserted (i.e., the coordinates of at least one fourth node inserted between the nodes a and b) needs to be obtained. Finally, a new local trajectory L2 (equivalent to a third fitting curve) is calculated from the constraint conditions and the new node coordinates by using a numerical simulation algorithm. The complete motion trajectory L3 of the robot can then be obtained from the trajectory L2 and the trajectory L1' (i.e., the trajectory L1 with the section between the nodes a and b removed).
For another example, in some embodiments, a preset rule may be employed to determine the constraint conditions for the derivatives at the respective endpoints (i.e., the derivative threshold ranges) based on the derivative values at the nodes a and b.
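The local re-fitting of S61/S62 can be sketched as below, assuming SciPy's `CubicSpline` with clamped (first-derivative) boundary conditions as one possible numerical simulation method; all node coordinates and derivative values are invented for illustration:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sketch of S61/S62: re-fit a local section L2 between the two third nodes
# a and b, clamping both the positions and the derivatives at the endpoints
# so that L2 joins the remaining trajectory L1' smoothly. All values below
# are invented for illustration.
t_a, t_b = 1.0, 3.0        # time tags of nodes a and b (insertion interval)
x_a, x_b = 0.5, 2.0        # positions of nodes a and b on L1
dx_a, dx_b = 0.9, 0.4      # derivatives of L1 at a and b (derivative constraint)

# Fourth nodes inserted by the user between a and b.
t_new = np.array([t_a, 1.8, 2.4, t_b])
x_new = np.array([x_a, 1.6, 1.9, x_b])

# Clamped cubic spline: first-derivative boundary conditions enforce the
# derivative constraint at both endpoints of the local trajectory L2.
l2 = CubicSpline(t_new, x_new, bc_type=((1, dx_a), (1, dx_b)))
```

Because the endpoint positions and slopes match L1, the composite trajectory L3 (L1' plus L2) stays continuous in both position and velocity.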
In some embodiments, the method further comprises the step of:
The relationship between the motion speed of the robot end and the shaft speed is established by a Jacobian matrix method,
wherein the relationship is expressed as: Θ̇ = J⁻¹ · v,
where v denotes the Cartesian velocity vector of the motion speed, Θ denotes the joint angle vector of the robot (e.g., of a robot arm), Θ̇ denotes the joint angular velocity vector of the robot, and J⁻¹ denotes the inverse of the Jacobian matrix;
In some embodiments, the shaft angular velocity of the robot at each moment can be calculated from the fitted curve (such as the third fitted curve) and the motion speed v. When a shaft angular speed is greater than the specified limit speed θmax, it is judged that the robot has entered a singular range prone to shaking; the Cartesian position at that moment is recorded and sent to the user to prompt an adaptive adjustment.
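A minimal sketch of the singularity check described above follows; the 2x2 stand-in Jacobians, the velocity values, and the limit speed are invented for illustration, and a real robot would obtain its Jacobian from its kinematic model:

```python
import numpy as np

# Sketch of the singularity check: joint velocities follow from the
# Cartesian velocity via the inverse Jacobian (Theta_dot = J^-1 * v), and
# any axis whose angular speed exceeds theta_max flags a singular,
# shake-prone range. The 2x2 Jacobians below are invented stand-ins.

def joint_velocities(jacobian, v):
    """Theta_dot = J^-1 * v, computed without explicitly inverting J."""
    return np.linalg.solve(jacobian, v)

def near_singularity(jacobian, v, theta_max):
    theta_dot = joint_velocities(jacobian, v)
    return bool(np.any(np.abs(theta_dot) > theta_max))

# Well-conditioned pose: moderate joint speeds.
J_ok = np.array([[1.0, 0.0], [0.0, 1.0]])
# Nearly singular pose: rows almost linearly dependent -> huge joint speeds.
J_bad = np.array([[1.0, 1.0], [1.0, 1.001]])
v = np.array([0.1, 0.2])

flag_ok = near_singularity(J_ok, v, theta_max=1.0)
flag_bad = near_singularity(J_bad, v, theta_max=1.0)
```

When the flag is raised, the Cartesian position at that moment would be recorded and reported to the user, as the text describes.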
In some embodiments, prior to S61, the method further comprises the steps of:
Confirming an insertion interval according to the third node information and the fourth node information, wherein the insertion interval comprises: a time difference between the third time and the current time, and/or a spatial interval between the third node and the current node;
Judging whether the insertion interval belongs to a preset threshold range; if so, S61 is executed; if not, a prompt signal is sent to the user.
In the embodiment of the present invention, when the user performs real-time operations such as adding, deleting, changing, inserting, and moving nodes, the insertion position or insertion time is first analyzed to avoid damage to the robot motors caused by an improper insertion interval. For example, when the node to be inserted is close to the node the robot is currently executing, undesirable shaking is likely to occur, and the user is advised to modify the inserted node.
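The insertion-interval check preceding S61 might look like the following sketch; the threshold values and helper names are assumptions, not values fixed by the embodiment:

```python
import math

# Sketch of the pre-S61 check: validate the insertion interval (time gap
# and spatial gap between the node to be inserted and the node the robot
# is currently executing) before re-fitting. Thresholds are illustrative.

def check_insertion_interval(third_time, current_time,
                             third_node, current_node,
                             min_time_gap=0.5, min_space_gap=0.05):
    """Return True when the insertion lies far enough ahead of the robot's
    current motion; otherwise a prompt should be sent to the user."""
    time_gap = third_time - current_time
    space_gap = math.dist(third_node, current_node)
    return time_gap >= min_time_gap and space_gap >= min_space_gap

ok = check_insertion_interval(third_time=2.0, current_time=1.0,
                              third_node=(1.0, 0.0, 0.0),
                              current_node=(0.0, 0.0, 0.0))
too_close = check_insertion_interval(third_time=1.1, current_time=1.0,
                                     third_node=(0.01, 0.0, 0.0),
                                     current_node=(0.0, 0.0, 0.0))
```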
In some embodiments, when the remote control module is in the second control state, the second motion instruction includes: Euler angle data. A somatosensory module (such as an attitude sensor) is further arranged on the remote control module to monitor the spatial attitude of the remote control module (dynamic changes such as overturning and tilting); accordingly, S106 includes:
Calculating a first rotation matrix from the first original posture (i.e., the original posture of the remote control module) and the first changed posture (the offset of the remote control module on at least one axis) by a matrix transformation method, wherein the first rotation matrix represents the relative rotation of the first changed posture in the coordinate system of the first original posture;
calculating a second rotation matrix from the first rotation matrix and the second original posture (i.e., the original posture of the robot end) by a matrix transformation method, wherein the second rotation matrix represents the motion of the robot end about the z-y-x axes;
collecting Euler angle data of at least one node from the second rotation matrix.
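The matrix-transformation steps above can be sketched with SciPy's `Rotation` class, assuming the intrinsic z-y-x Euler convention; all poses below are invented for illustration:

```python
from scipy.spatial.transform import Rotation as R

# Sketch of the matrix-transformation steps for the second control state.
# Angles are in degrees; "ZYX" is SciPy's intrinsic z-y-x Euler convention.

# First original posture of the remote control and its changed posture (IMU).
r_orig = R.from_euler("ZYX", [0, 0, 0], degrees=True)
r_changed = R.from_euler("ZYX", [30, 10, 0], degrees=True)

# First rotation matrix: relative rotation of the changed posture expressed
# in the coordinate system of the original posture.
r1 = r_orig.inv() * r_changed

# Second rotation matrix: apply the same relative rotation to the robot
# end's original posture to obtain its commanded posture.
r_end_orig = R.from_euler("ZYX", [90, 0, 0], degrees=True)
r2 = r_end_orig * r1

# Euler angle data (z-y-x) collected for a node from the second rotation matrix.
euler_zyx = r2.as_euler("ZYX", degrees=True)
```

With these sample poses the robot end's commanded orientation comes out as a 120° yaw and 10° pitch, i.e. the remote control's 30°/10° tilt layered onto the end's original 90° yaw.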
In some embodiments, the fourth instruction includes one or more of the following parameters: aperture, sensitivity, focal length.
In some embodiments, the third input signal combination further comprises: a second input combination received or collected from the robotic unit, and the second input combination includes one or more of the following signals:
(1) A first feedback signal, the first feedback signal comprising: the shaft speed of at least one shaft of the robotic unit, and a corresponding first feedback tag comprising: a time or node number corresponding to the shaft speed;
(2) A second feedback signal, the second feedback signal comprising: world coordinates of a robot end of the robot unit, and a corresponding second feedback tag, and the second feedback tag includes: time or node number corresponding to the world coordinates.
In some embodiments, the third instruction comprises: sixth node information of at least two sixth nodes, the sixth node information comprising: the location (e.g., spatial coordinates, and euler angles) of the sixth node, and a node tag corresponding to the location.
For example, in some embodiments, the sixth node information may be at least one node information collected from the third fitted curve. For another example, in some embodiments, the sixth node information may be the corresponding node information obtained directly from the first or second rocker signal. For another example, in some embodiments, the sixth node information may also be at least one node information acquired from the first fitted curve or the second fitted curve.
In some embodiments, the method further comprises the step of:
When the server receives at least one corresponding feedback signal within a first set time I,
Judging whether the feedback signal is matched with the third instruction, if so, judging that the actual running state of the robot module is normal, and if not, judging that the actual running state is a first abnormal state.
In some embodiments, when the server does not receive the feedback signal within the second set time II, the actual running state of the robot module is determined to be a second abnormal state.
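The two timeout-based judgments above can be sketched as a single classification function; the time windows, the dictionary-shaped feedback, and the 'pending' fallback are illustrative assumptions:

```python
# Sketch of the feedback-based state check: within the first set time I the
# server expects a feedback signal matching the third instruction; silence
# past the second set time II means the robot module is unreachable.

def classify_robot_state(feedback, elapsed, time_i=1.0, time_ii=3.0,
                         matches=None):
    """Return 'normal', 'first_abnormal' (mismatched feedback),
    'second_abnormal' (no feedback at all), or 'pending' while waiting."""
    if feedback is not None and elapsed <= time_i:
        return "normal" if matches else "first_abnormal"
    if feedback is None and elapsed > time_ii:
        return "second_abnormal"
    return "pending"   # still inside the allowed waiting window

state_ok = classify_robot_state(feedback={"axis_speed": 0.2}, elapsed=0.5,
                                matches=True)
state_drift = classify_robot_state(feedback={"axis_speed": 9.0}, elapsed=0.5,
                                   matches=False)
state_lost = classify_robot_state(feedback=None, elapsed=4.0)
```

The `matches` flag stands in for the instruction-matching comparison, which the embodiments describe separately.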
In some embodiments, the method further comprises the steps of:
When the server detects that the robot module is in the first abnormal state, correcting the fourth instruction according to the feedback signal, and correspondingly generating a second correction signal;
the server sending the second correction signal to the camera unit;
In some embodiments, the method further comprises the step of:
when the server detects that the robot module is in the second abnormal state, the server sends a first communication signal to the robot unit so that the robot unit directly sends the feedback signal to the camera unit;
the camera unit adaptively adjusts a camera parameter set according to the feedback signal;
And when the server receives the feedback signal from the robot unit within a third set time III, the server transmits a second communication signal to the robot unit so that the robot unit stops directly transmitting the feedback signal to the camera unit.
It will be appreciated that the real-time control method of the present invention may be used in conjunction with the semi-automatic control method of the above embodiments.
The invention is equivalent to providing an independent control method that helps ordinary videography staff control the robot externally. Generally, when a robot is used to control the camera lens, accurate point setting is required; otherwise, smoothness problems such as jamming and shaking easily occur. In addition, before the expected shooting effect is achieved, multiple experimental shots are usually needed, during which the performance style of the shot subject such as an actor (e.g., action changes) and the camera-movement style (e.g., the rotation path, direction, or speed of the camera) are all adjusted substantially.
In the actual shooting process, the camera movement of the robot can be controlled in real time directly through the remote controller to complete shooting. Unlike the traditional automatic camera-movement mode, this robot control method does not need to acquire the route trajectory (such as positioning-point information) in advance, and can acquire such information in real time through external signals.
In addition, in order to help the user quickly input accurate control data, the invention further provides a semi-automatic task decision mode in which manual decisions cooperate with automatic decisions. On the one hand, this task decision mode allows shooting to be completed by a single operator and lowers the professional skill requirements on the operator; on the other hand, it can reduce the precision requirements on the motors to a certain extent, thereby achieving stable and accurate camera movement at a lower cost.
In other words, unlike the conventional technology, the technical route adopted in the present invention to ensure lens stability is not to directly select a higher-precision industrial robot; rather, the present invention preferably adopts a low-precision industrial robot and uses the semi-automatic task decision mechanism to improve the stability and accuracy of the low-precision industrial robot during camera movement.
Example IV
As shown in fig. 2, the present invention further provides a real-time control system of a photographic robot, the system comprising: remote control module 10, and remote control module 10 further comprises:
A first input module configured to obtain a first input signal combination, wherein the first input signal combination comprises one or more of the following signals:
(i) A signal associated with a remote control state of the remote control module;
(ii) Signals associated with the actual operating state of the remote control module and/or the robot module;
A function switching module configured to switch or maintain the remote control module to a corresponding functional state according to the first input signal combination;
A third input module configured to obtain a third input signal combination associated with the robot module when the remote control module is in a second functional state, the third input signal combination comprising: a first input combination input through the remote control module;
Wherein the first input combination comprises one or more of the following signals:
(1) A first rocker signal, the first rocker signal comprising: spatial motion trail data of the first rocker; (2) A second rocker signal, the second rocker signal comprising: Euler angle change data of the second rocker; (3) A rocking wheel signal, the rocking wheel signal comprising: relative position data of the rocking wheel;
and a server 20 connected to the remote control module 10, and the server 20 includes:
a second instruction conversion module configured to convert the third input signal combination into a second motion instruction;
A second instruction transmission module configured to split the second motion instruction into a third instruction and a fourth instruction and send the third instruction and the fourth instruction to the robot unit 31 and the camera unit 32 in the robot module 30, respectively.
In some embodiments, the second functional state includes:
(i) A first control state, and when the remote control module is in the first control state, the remote control module acquires related data of the first input combination in real time;
(ii) A second control state, and when the remote control module is in the second control state, the remote control module collects related data of a second input combination of the third input signal combination in real time, wherein the second input combination comprises one or more of the following: the first original posture of the remote control module, the first changed posture of the remote control module, and the second original posture of the robot end;
(iii) A third control state, and when the remote control module is in the third control state, the remote control module collects related data of a third input combination of the third input signal combination in real time, wherein the third input combination comprises one or more of the following:
Third node information, and the third node information includes: the location of at least one third node to be inserted, and a third time of insertion;
Fourth node information, and the fourth node information includes: and inserting the position of at least one fourth node near the third node.
In some embodiments, the fourth instruction includes one or more of the following parameters: aperture, sensitivity, focal length.
In some embodiments, the server further comprises: a simulation module, and the simulation module is configured to perform the steps of: determining a constraint condition of simulation according to the third node information, wherein the constraint condition comprises: a range of position thresholds for at least one endpoint, or a range of derivative thresholds for at least one endpoint; and simulating by adopting a cubic spline interpolation method according to the constraint condition and the fourth node information to obtain a third fitting curve. In some embodiments, the simulation module is further configured to perform the steps of: confirming an insertion interval according to the third node information and the fourth node information, wherein the insertion interval comprises: a time difference between the third time and the current time, and/or a spatial interval between the third node and the current node;
Judging whether the insertion interval belongs to a preset interval range or not; if not, a prompt signal is sent to the user to prompt the user to modify the third input combination.
It will be appreciated that the present invention may include any functional module/unit in the foregoing embodiments, and may also be used to implement the method or the step in any foregoing embodiment, which is not described herein.
It should be noted that, in addition to the mechanical arm disclosed in the above embodiment, any of the robots/mechanical arms that can be used to implement the above functions may be used in the present invention, for example, a multi-joint mechanical arm, a rectangular coordinate system mechanical arm, a spherical coordinate system mechanical arm, a polar coordinate mechanical arm, a cylindrical coordinate mechanical arm, and so on.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising several instructions for causing a computer terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Many variations may be made by those of ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, and all such variations fall within the protection of the present invention.

Claims (10)

1. A semi-automatic control method of a photographic robot, the method comprising:
S101, acquiring a first input signal combination through a remote control module, wherein the first input signal combination comprises one or more of the following signals:
(i) A signal a associated with a remote control state of the remote control module; the signal A is a switching signal input by a user and used for indicating switching to a first functional state or a second functional state; or the signal A is a default signal preset in the remote control module;
(ii) A signal B related to the actual operating state of the remote control module and/or the robot module;
S102, switching or maintaining the remote control module to be in a corresponding functional state according to the first input signal combination;
S103, when the remote control module is in a first functional state, acquiring a second input signal combination associated with the robot module, wherein the second input signal combination comprises: a first sub-combination input through the remote control module;
wherein the first sub-combination comprises one or more of the following signals:
(1) A first rocker signal, the first rocker signal comprising: spatial motion trail data of the first rocker; (2) A second rocker signal, the second rocker signal comprising: euler angle change data of the second rocker; (3) a length signal comprising: data associated with a task execution length of the robotic module, wherein the length signal comprises: the starting node information of the task to be executed by the robot module;
S104, converting the second input signal combination into a first motion instruction through a server connected with the remote control module;
S105, splitting the first motion instruction into a first instruction and a second instruction through a server, and respectively sending the first instruction and the second instruction to a robot unit and a camera unit in the robot module.
2. The semiautomatic control method according to claim 1, characterized by further comprising, before S104, the steps of:
first node information of at least two first nodes is obtained from the motion trail data, and the first node information comprises: a first node coordinate, and a node tag associated with the first node coordinate;
Obtaining a first fitting curve through the first node coordinates by adopting a cubic spline interpolation method;
and judging whether the first fitting curve is matched with the working parameters of the robot module, if so, executing S104, and if not, prompting a user to update the first rocker signal.
3. The semi-automatic control method according to claim 2, wherein the operating parameters include: a workspace boundary; correspondingly, the step of judging whether the first fitting curve is matched with the working parameters of the robot module comprises the following steps:
acquiring a first interval between the first fitting curve and the working space boundary;
judging whether the first interval belongs to a preset interval range or not;
If yes, executing S104, otherwise, prompting a user to update the first rocker signal.
4. The semiautomatic control method according to claim 1, characterized by further comprising, before S104, the steps of:
First node information of at least two first nodes is obtained from the motion trail data, and the first node information comprises: a first node coordinate, and a first node tag associated with the first node coordinate; and obtaining second node information of at least one second node from the euler angle change data, wherein the second node information comprises: a second node pose angle, and a second node label associated with the second node pose angle;
calculating to obtain a second fitting curve through the first node information and the second node information by adopting a numerical analysis method;
Calculating at least one axis motion line of the robot unit according to the second fitting curve by adopting a kinematic inverse solution method, wherein the axis motion line is a relation curve for reflecting the included angle of the joint axis of the robot and time;
judging whether the included angle of the joint axis belongs to a preset included angle range or not according to the axis motion line;
If yes, executing S104, otherwise, prompting a user to update the first sub-combination;
And/or, the second input signal combination further comprises: a second sub-combination received or collected from the robotic unit, and the second sub-combination comprising one or more of the following signals:
(1) A first feedback signal, the first feedback signal comprising: the shaft speed of at least one shaft of the robotic unit, and a corresponding first feedback tag, and the first feedback tag comprises: a time or node number corresponding to the shaft speed;
(2) A second feedback signal, the second feedback signal comprising: world coordinates of a robot end of the robot unit, and a corresponding second feedback tag, and the second feedback tag includes: time or node number corresponding to the world coordinates.
5. The semi-automatic control method according to claim 4, wherein the first instruction includes: fifth node information of at least two fifth nodes, the fifth node information comprising: a coordinate position of the fifth node, and a node tag corresponding to the coordinate position; correspondingly, the method further comprises the steps of:
When the server receives at least one corresponding feedback signal within a first set time I, judging whether the feedback signal is matched with the first instruction, if so, judging that the actual running state of the robot module is normal, and if not, judging that the actual running state is a first abnormal state;
And/or when the server does not receive the feedback signal within the second set time II, judging that the actual running state of the robot module is a second abnormal state.
6. The semi-automatic control method according to claim 5, characterized in that said method further comprises the step of:
when the server detects that the robot module is in the first abnormal state, correcting the second instruction according to the feedback signal, and correspondingly generating a first correction signal;
the server sending the first correction signal to the camera unit;
and/or the method further comprises the steps of:
when the server detects that the robot module is in the second abnormal state, the server sends a first communication signal to the robot unit so that the robot unit directly sends the feedback signal to the camera unit;
the camera unit adaptively adjusts a camera parameter set according to the feedback signal;
and when the server receives the feedback signal from the robot unit within a third set time, the server sends a second communication signal to the robot unit so that the robot unit stops directly sending the feedback signal to the camera unit.
7. The semi-automatic control method according to claim 1, characterized in that it further comprises the step of:
detecting a custom signal of a user, the custom signal comprising one or more of:
(1) a signal type of an add signal to be added; (2) a signal type of a remove signal to be removed, wherein the signal type is a signal name; and (3) functional state information to be customized, wherein the functional state information is the name of a functional state;
in response to the custom signal, adding the add signal to the corresponding functional state or removing the remove signal from the functional state.
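The add/remove behavior of claim 7 amounts to editing a mapping from functional-state names to sets of signal names. The dictionary shape below is an assumption made for illustration; the patent specifies only names for states and signals, not a concrete representation.

```python
def apply_custom_signal(states, custom):
    """Apply a user custom signal to a {state_name: set(signal_names)} registry.

    custom: dict with a 'state' name, plus optional 'add' and/or 'remove'
    lists of signal names (keys chosen here for illustration).
    Modifies the registry in place and returns it.
    """
    signals = states.setdefault(custom["state"], set())
    for name in custom.get("add", []):
        signals.add(name)
    for name in custom.get("remove", []):
        signals.discard(name)   # discard: no error if the name is absent
    return states
```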
8. The semi-automatic control method according to claim 7, characterized in that said method further comprises the step of:
Acquiring at least one of the following priority information:
(1) a first preset priority corresponding to the add signal or the remove signal;
(2) a second preset priority corresponding to the functional state;
(3) a third preset priority of predefined signal types in the functional state;
determining whether the first preset priority matches the second preset priority or the third preset priority; if so, executing the corresponding adding step or removing step, and if not, sending a corresponding prompt signal to the user.
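The claim does not define what "matches" means for priorities, so any implementation must pick a rule. The sketch below assumes one plausible reading, stated explicitly: a lower number means higher priority, and the signal-side priority must be at least as high as whichever state-side priorities are defined.

```python
def check_priority(first, second=None, third=None):
    """Return True if the add/remove signal's priority is compatible
    with the functional state's priorities.

    Assumption (not from the patent): lower number = higher priority,
    and the signal must not be outranked by any defined state-side priority.
    """
    for state_side in (second, third):
        if state_side is not None and first > state_side:
            return False   # signal outranked: prompt the user instead of executing
    return True
```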
9. The semi-automatic control method according to claim 1, wherein,
the spatial motion trail data comprises: an initial position of the first rocker, and at least one first offset of the first rocker on at least one axis relative to the initial position;
and/or, the Euler angle change data comprises: an original Euler angle of the second rocker, and at least one second offset of the second rocker about at least one axis;
and/or, the second instruction comprises one or more of the following parameters: aperture, sensitivity, focal length.
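Combining an initial rocker position with per-axis offsets, as claim 9 describes, reduces to summing the offsets on each axis. The axis labels and tuple representation below are illustrative assumptions.

```python
def rocker_to_target(initial_xyz, offsets):
    """Combine the first rocker's initial position with per-axis offsets.

    initial_xyz: (x, y, z) initial position.
    offsets: list of (axis_label, offset) pairs, axis_label in {"x", "y", "z"}.
    """
    x, y, z = initial_xyz
    dx = sum(o for axis, o in offsets if axis == "x")
    dy = sum(o for axis, o in offsets if axis == "y")
    dz = sum(o for axis, o in offsets if axis == "z")
    return (x + dx, y + dy, z + dz)
```

The same accumulation pattern would apply to the second rocker's Euler-angle offsets, with rotations about axes instead of translations along them.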
10. A semi-automatic control system for a photographic robot, the system comprising:
A first input module configured to obtain a first input signal combination, wherein the first input signal combination comprises one or more of the following signals:
(i) a signal A associated with a remote control state of the remote control module, wherein the signal A is a switching signal input by a user for indicating switching to a first functional state or a second functional state, or the signal A is a default signal preset in the remote control module;
(ii) a signal B related to the actual operating state of the remote control module and/or the robot module;
A function switching module configured to switch or maintain the remote control module to a corresponding functional state according to the first input signal combination;
a second input module configured to obtain a second input signal combination associated with the robot module when the remote control module is in the first functional state, the second input signal combination comprising: a first sub-combination input through the remote control module;
wherein the first sub-combination comprises one or more of the following signals:
(1) a first rocker signal, the first rocker signal comprising: spatial motion trail data of the first rocker; (2) a second rocker signal, the second rocker signal comprising: Euler angle change data of the second rocker; (3) a length signal comprising: data associated with a task execution length of the robot module, wherein the length signal comprises: starting node information of the task to be executed by the robot module;
a first instruction conversion module configured to convert the second input signal combination into a first motion instruction;
a first instruction transmission module configured to split the first motion instruction into a first instruction and a second instruction, and to send them respectively to a robot unit and a camera unit in the robot module.
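The instruction split performed by the first instruction transmission module can be sketched as routing fields of a combined motion instruction to the robot unit or the camera unit. The key names below are illustrative: the claims only say that node/trajectory data goes to the robot while aperture, sensitivity, and focal length (claim 9) go to the camera.

```python
def split_motion_instruction(motion):
    """Split a combined first motion instruction into a robot-side first
    instruction and a camera-side second instruction (illustrative key names)."""
    robot_keys = {"nodes", "trajectory", "task_length"}
    camera_keys = {"aperture", "sensitivity", "focal_length"}
    first = {k: v for k, v in motion.items() if k in robot_keys}
    second = {k: v for k, v in motion.items() if k in camera_keys}
    return first, second
```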
CN202310964915.3A 2023-07-31 2023-07-31 Semi-automatic control method and system for photographic robot Active CN116945176B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310964915.3A CN116945176B (en) 2023-07-31 2023-07-31 Semi-automatic control method and system for photographic robot

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202410374240.1A Division CN118269087A (en) 2023-07-31 Semi-automatic control method and system for photographic robot

Publications (2)

Publication Number Publication Date
CN116945176A CN116945176A (en) 2023-10-27
CN116945176B true CN116945176B (en) 2024-04-16

Family

ID=88458167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310964915.3A Active CN116945176B (en) 2023-07-31 2023-07-31 Semi-automatic control method and system for photographic robot

Country Status (1)

Country Link
CN (1) CN116945176B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102368803A (en) * 2011-08-19 2012-03-07 深圳一电科技有限公司 Shooting device, remote control method thereof and shooting system
CN105700543A (en) * 2016-04-01 2016-06-22 成都云图秀色科技有限公司 Flight device control system, control method and aerial photographing UAV (Unmanned Aerial Vehicle)
CN106592677A (en) * 2016-12-15 2017-04-26 电子科技大学 Remote control and mechanical control switching method of underground scraper
CN109910010A (en) * 2019-03-23 2019-06-21 广东石油化工学院 A kind of system and method for efficient control robot
CN110900600A (en) * 2019-11-13 2020-03-24 江苏创能智能科技有限公司 Remote control system of live working robot and control method thereof
CN211878727U (en) * 2020-04-16 2020-11-06 深圳市航昇科技有限公司 Remote controller for automatically transporting mirror by holder
TW202123673A (en) * 2019-12-03 2021-06-16 儒園系統股份有限公司 Multi-robotic live production and webcast videography system includes at least one photographing robot, a main control equipment, and at least one operation mode unit
CN116922387A (en) * 2023-07-31 2023-10-24 重庆越千创新科技有限公司 Real-time control method and system for photographic robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200055195A1 (en) * 2017-05-03 2020-02-20 Taiga Robotics Corp. Systems and Methods for Remotely Controlling a Robotic Device

Also Published As

Publication number Publication date
CN116945176A (en) 2023-10-27

Similar Documents

Publication Publication Date Title
CN111897332B (en) Semantic intelligent substation robot humanoid inspection operation method and system
US10245731B2 (en) Programming of a robotic arm using a motion capture system
US11409260B2 (en) Runtime controller for robotic manufacturing system
US9993924B2 (en) Closed-loop control system for robotic operation
US9623559B2 (en) Systems and methods for instructing robotic operation
Fallon et al. An architecture for online affordance‐based perception and whole‐body planning
JP3173042B2 (en) Robot numerical controller
CN116922387B (en) Real-time control method and system for photographic robot
CN109407603B (en) Method and device for controlling mechanical arm to grab object
JP2006110705A (en) Calibration method of robot
WO2019080113A1 (en) Patrol planning method for unmanned aerial vehicle, control terminal, unmanned aerial vehicle, and unmanned aerial vehicle system
US20160117811A1 (en) Method for generating a target trajectory of a camera embarked on a drone and corresponding system
WO2023015566A1 (en) Control method, control device, movable platform, and storage medium
WO2022142078A1 (en) Method and apparatus for action learning, medium, and electronic device
CN114080590A (en) Robotic bin picking system and method using advanced scanning techniques
CN116945176B (en) Semi-automatic control method and system for photographic robot
CN118269087A (en) Semi-automatic control method and system for photographic robot
CN108724183A (en) A kind of control method, system and the relevant apparatus of handling machinery arm
CN113272756A (en) Holder control method and device, holder and storage medium
CN112329530B (en) Method, device and system for detecting mounting state of bracket
CN117156267B (en) Cloud deck camera working mode switching method and system based on environment self-adaption
Rishwaraj et al. Multi-robot formation control using a hybrid posture estimation strategy
JP2020067604A (en) Control device, control method, and program
CN115091465A (en) Mechanical arm path compensation method and device, electronic equipment and storage medium
CN113834479A (en) Map generation method, device, system, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant