CN116175582A - Intelligent mechanical arm control system and control method based on machine vision - Google Patents

Intelligent mechanical arm control system and control method based on machine vision

Info

Publication number
CN116175582A
Authority
CN
China
Prior art keywords
mechanical arm
hand
coordinates
machine vision
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310229492.0A
Other languages
Chinese (zh)
Inventor
王雅红
刘霈
李义
孙洁
孟偌愉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Polytechnic University
Original Assignee
Dalian Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Polytechnic University filed Critical Dalian Polytechnic University
Priority to CN202310229492.0A
Publication of CN116175582A
Legal status: Pending (current)

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1602 - Programme controls characterised by the control system, structure, architecture
    • B25J9/1605 - Simulation of manipulator lay-out, design, modelling of manipulator
    • B25J9/1628 - Programme controls characterised by the control loop
    • B25J9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 - Vision controlled systems
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention provides an intelligent mechanical arm control system and control method based on machine vision, relating to the technical field of mechanical arm control. The control system comprises a mechanical arm, an upper computer, a lower computer and a signal receiver. The upper computer comprises an image processing module, an action recognition module and an inverse kinematics calculation module: the image processing module identifies the positions of key points of the human hand to obtain real-time hand coordinates; the action recognition module identifies hand actions from the real-time hand coordinates; and the inverse kinematics calculation module generates mechanical arm movement instructions from the hand actions and sends them to the signal receiver. The signal receiver forwards the movement instructions to the lower computer, which controls the mechanical arm to perform the corresponding actions. By using machine vision, the invention achieves stable interaction between the operator and the mechanical arm and ensures that operation of the mechanical arm is simple, accurate and real-time.

Description

Intelligent mechanical arm control system and control method based on machine vision
Technical Field
The invention relates to the technical field of mechanical arm control, in particular to an intelligent mechanical arm control system and method based on machine vision.
Background
In modern society, mechanical arms can complete production links such as welding, carrying, packaging, inspection, gluing, grinding and polishing, and are widely applied in industries such as automotive, energy, food, new retail, 3C electronics, precision manufacturing, medical devices and education.
In contrast to this wide application, mechanical arms remain extremely difficult to use. The traditional mechanical arm control mode requires the operator to have professional knowledge of mechanical arm control methods. The existing simulation control methods on the market that aim to reduce the operation difficulty move the mechanical arm either through machine vision recognition of an operation target or through a human hand operating dedicated simulation hardware. The main disadvantage of the former is that its application range is too narrow and too task-specific; the main disadvantage of the latter is that the additional operating hardware increases cost while reducing the portability of the mechanical arm deployment. In summary, what is needed is an intelligent mechanical arm control method that is easy to use, flexible in action, and able to run on common equipment such as mobile phones.
Disclosure of Invention
In view of the technical problems above, namely the limitations of reverse (target-driven) control and the flexibility and cost constraints of simulation control, an intelligent mechanical arm control system and control method based on machine vision are provided. The invention mainly uses machine vision recognition of the human hand, so that the hand directly and positively controls the mechanical arm without any additional operating equipment.
The invention adopts the following technical means:
An intelligent mechanical arm control system based on machine vision, comprising: a mechanical arm, an upper computer, a lower computer and a signal receiver;
the upper computer comprises an image processing module, an action recognition module and an inverse kinematics calculation module; the image processing module is used for identifying the positions of key points of the human hand to obtain real-time hand coordinates; the action recognition module is used for identifying hand actions from the real-time hand coordinates; and the inverse kinematics calculation module is used for generating mechanical arm movement instructions from the hand actions and sending them to the signal receiver;
the signal receiver receives the mechanical arm movement instructions and forwards them to the lower computer;
and the lower computer controls the mechanical arm to perform the corresponding actions according to the received mechanical arm movement instructions.
Further, the image processing module recognizes the hand image obtained by the camera and identifies the positions of the hand key points to obtain real-time hand coordinates.
The invention also provides an intelligent mechanical arm control method based on machine vision, implemented on the basis of any of the above machine vision-based intelligent mechanical arm control systems, comprising the following steps:
the upper computer acquires hand action images by using a camera;
constructing a machine learning model, and training the machine learning model to obtain a trained machine learning model;
processing the hand action image with the trained machine learning model to obtain a three-dimensional spatial model of the hand;
converting the three-dimensional hand model into key point coordinates of all joints of the hand;
performing hand motion recognition on the key point coordinates to obtain the specific motion joint coordinates for the mechanical arm (a mapping sketch follows this list);
performing inverse kinematics calculation on the specific motion joint coordinates to obtain the action data of all steering engines of the mechanical arm;
transmitting the mechanical arm steering engine control instructions to the lower computer;
and the lower computer controls the mechanical arm to complete the specified action according to the steering engine control instructions.
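To make the hand motion recognition step concrete, the following is a minimal sketch of mapping recognized hand key points to an end-effector target and a gripper command. The keypoint indices (0 for the wrist, 4 for the thumb tip, 8 for the index fingertip, following the common 21-point hand-landmark convention), the workspace bounds and the pinch threshold are illustrative assumptions; the patent does not fix this mapping.

```python
import math

# Assumed arm workspace in metres; tune to the actual mechanical arm.
WORKSPACE = {"x": (-0.15, 0.15), "y": (0.05, 0.30), "z": (0.02, 0.25)}
PINCH_CLOSE_THRESHOLD = 0.05  # normalized image units (assumption)

def _scale(v, lo, hi):
    """Map a 0..1 normalized coordinate into the range [lo, hi]."""
    return lo + max(0.0, min(1.0, v)) * (hi - lo)

def hand_to_arm_target(keypoints):
    """keypoints: list of 21 (x, y, z) tuples normalized to the image size.
    Returns an end-effector target (x, y, z) and a gripper-closed flag."""
    wrist = keypoints[0]
    thumb_tip, index_tip = keypoints[4], keypoints[8]
    # The thumb-to-index pinch distance drives the gripper open/close state.
    pinch = math.dist(thumb_tip[:2], index_tip[:2])
    gripper_closed = pinch < PINCH_CLOSE_THRESHOLD
    # The wrist position in the frame drives the end-effector position.
    target = (
        _scale(wrist[0], *WORKSPACE["x"]),
        _scale(1.0 - wrist[1], *WORKSPACE["y"]),  # image y grows downward
        _scale(wrist[2] + 0.5, *WORKSPACE["z"]),  # relative depth, recentred
    )
    return target, gripper_closed
```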
Further, the hand motion recognition performed on the key point coordinates specifically comprises:
adjusting the resolution, flip orientation and aspect ratio of the hand action image;
executing a TensorFlow Lite model on the GPU of the upper computer, which takes the processed hand action image and outputs hand landmark model tensors;
calculating a detection confidence from the output hand landmark model tensors;
judging from the confidence whether a hand is currently present;
comparing the number of currently detected hands with the set maximum number and feeding the result back, so as to prevent redundant detection;
calculating a dominant-hand (handedness) score from the output hand landmark model tensors;
and decoding the output hand landmark model tensors into a list of 21 key point coordinates covering all joints of the hand, and normalizing the coordinates by the size of the input image.
Further, the method further comprises the following steps:
if a hand is present, rescaling the normalized hand coordinates by the size of the original (pre-processing) image and converting them into drawing primitives;
if a hand is present, calculating the bounding box of the hand from the hand coordinates and converting it into drawing primitives;
and drawing the obtained primitives into an annotation overlay, placing the overlay on the uppermost layer of the original image, and outputting the result.
Further, constructing and training the machine learning model comprises the following steps:
since hands lack high-contrast recognition points, first training a palm and fist detector;
modeling the palm/fist in the detector with square bounding-box anchors, ignoring aspect ratio, thereby reducing the total number of anchors;
and annotating various real or rendered hand images and feeding them into the model as the training set.
Further, the inverse kinematics calculation on the specific motion joint coordinates comprises the following steps:
writing out the forward equations for X, Y and Z from the arm lengths and joint angles of the mechanical arm (see the formulas after this list);
iterating once over all available angles of the first steering engine to enumerate all possible solutions;
after the possible angles of the first steering engine are solved, sequentially solving the angle solutions of the remaining steering engines starting from the first;
and outputting the solution whose precision satisfies the use requirement.
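For illustration, under an assumed four-degree-of-freedom geometry (base yaw $\theta_1$, three planar joints $\theta_2, \theta_3, \theta_4$, base height $h$ and link lengths $l_2, l_3, l_4$; these symbols are not fixed by the patent), the forward equations for X, Y and Z can be written as

\[
\begin{aligned}
r &= l_2\cos\theta_2 + l_3\cos(\theta_2+\theta_3) + l_4\cos(\theta_2+\theta_3+\theta_4),\\
X &= r\cos\theta_1, \qquad Y = r\sin\theta_1,\\
Z &= h + l_2\sin\theta_2 + l_3\sin(\theta_2+\theta_3) + l_4\sin(\theta_2+\theta_3+\theta_4).
\end{aligned}
\]

Sweeping $\theta_1$ once and solving the remaining angles against a target $(X, Y, Z)$ then yields the candidate solutions described in the steps above.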
Compared with the prior art, the invention has the following advantages:
compared with the existing mechanical arm patent using machine vision on the market, the mechanical arm moves the mechanical arm through the machine vision recognition of the operation target, but recognizes and restores the hand action of the person through the machine vision, is flexible and wide in application range, and has the adaptability of the person which is not possessed by the former. The invention realizes the stable interaction between an operator and the mechanical arm by using a machine vision technology, ensures the simplicity, the accuracy and the instantaneity of the operation of the mechanical arm, realizes the independent, accurate and quick judgment of the operation gesture of a system and the like by accumulating pictures and independently learning in a certain period, synthesizes the equipment such as a mobile phone and the like, ensures the accurate, accurate and stable operation, realizes the remote networking operation, can realize the remote instant operation, can obtain and derive operation data at any time, can give satisfactory results for both the supply and the demand, reduces the training time, and further reduces the influence of the training personnel on the production rhythm.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a control block diagram of the system of the present invention.
Fig. 2 is an output diagram of the annotation overlay of the present invention after it is placed at the uppermost layer of the original image.
Detailed Description
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other. The invention will be described in detail below with reference to the drawings in connection with embodiments.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present invention. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
The relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise. Meanwhile, it should be clear that the dimensions of the respective parts shown in the drawings are not drawn in actual scale for convenience of description. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate. In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
In the description of the present invention, it should be understood that orientation terms such as "front, rear, upper, lower, left, right", "lateral, vertical, horizontal" and "top, bottom" indicate orientations or positional relationships generally based on those shown in the drawings, merely to facilitate and simplify the description of the invention. These terms do not indicate or imply that the apparatus or elements referred to must have a specific orientation or be constructed and operated in a specific orientation, and thus should not be construed as limiting the scope of protection of the present invention; the terms "inner" and "outer" refer to inner and outer relative to the contour of the respective component itself.
Spatially relative terms, such as "above", "over", "on the upper surface of" and the like, may be used herein for ease of description to describe the spatial position of one device or feature relative to another device or feature as illustrated in the figures. It will be understood that spatially relative terms are intended to encompass different orientations in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is turned over, elements described as "above" or "over" other devices or structures would then be oriented "below" or "beneath" those other devices or structures. Thus, the exemplary term "above" may encompass both the "above" and "below" orientations. The device may also be positioned in other ways (rotated 90 degrees or in other orientations), and the spatially relative descriptors used herein are interpreted accordingly.
In addition, the terms "first", "second", etc. are used only for convenience in distinguishing the corresponding components; unless otherwise stated, they have no special meaning and should not be construed as limiting the scope of the present invention.
The invention provides an intelligent mechanical arm control system and control method based on machine vision, wherein the control system comprises: a mechanical arm, an upper computer, a lower computer and a signal receiver;
the upper computer comprises an image processing module, an action recognition module and an inverse kinematics calculation module; the image processing module recognizes the hand image obtained by the camera and identifies the positions of the hand key points to obtain real-time hand coordinates; the action recognition module identifies hand actions from the real-time hand coordinates; and the inverse kinematics calculation module generates mechanical arm movement instructions from the hand actions and sends them to the signal receiver;
the signal receiver receives the mechanical arm movement instructions and forwards them to the lower computer;
and the lower computer controls the mechanical arm to perform the corresponding actions according to the received mechanical arm movement instructions.
Furthermore, the invention is especially suitable for the case where the upper computer is an intelligent terminal with the application page opened. In this case, the image processing module of the upper computer recognizes images acquired by the camera of the intelligent terminal; the action recognition module generates hand action information from the real-time coordinate information supplied by the image processing module; the inverse kinematics calculation module automatically generates the movement instruction of each mechanical arm steering engine at each moment from the action information and transmits it to the lower computer; and the lower computer drives the end of the mechanical arm to the designated position coordinates at each moment, realizing real-time control of the mechanical arm by the hand. A minimal capture loop is sketched below.
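As a sketch of such a host-side loop, the fragment below captures frames from the terminal's camera and recovers 21 normalized hand key points per frame. The patent names a TensorFlow Lite hand-landmark model; MediaPipe Hands, which wraps such a model, is used here purely as an illustrative stand-in.

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1,
                                 min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)  # camera of the intelligent terminal

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR frames.
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        keypoints = [(p.x, p.y, p.z) for p in lm]  # 21 normalized coords
        # ...hand motion recognition and inverse kinematics follow here,
        # ending with a movement instruction sent to the signal receiver.
cap.release()
```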
As shown in Fig. 1, the invention further provides a machine vision-based intelligent mechanical arm control method, implemented on the basis of the machine vision-based intelligent mechanical arm control system described above, which comprises the following steps:
the upper computer acquires hand action images by using a camera;
constructing a machine learning model, and training the machine learning model to obtain a trained machine learning model;
Constructing and training the machine learning model comprises the following steps (a training sketch follows this list):
since hands lack high-contrast recognition points, first training a palm and fist detector;
modeling the palm/fist in the detector with square bounding-box anchors, ignoring aspect ratio, thereby reducing the total number of anchors;
and annotating various real or rendered hand images and feeding them into the model as the training set.
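The fragment below sketches the square-anchor idea: because a palm or fist is roughly square, one side length per anchor replaces the usual (width, height) pairs over several aspect ratios, which shrinks the anchor set. The image size, strides and scales are illustrative assumptions, not values from the patent.

```python
def square_anchors(image_size=256, strides=(8, 16), scales=(0.5, 1.0)):
    """Generate square bounding-box anchors: (cx, cy, side) in pixels."""
    anchors = []
    for stride in strides:
        cells = image_size // stride
        for gy in range(cells):
            for gx in range(cells):
                cx, cy = (gx + 0.5) * stride, (gy + 0.5) * stride
                for s in scales:
                    # One side length per anchor; aspect ratio is ignored.
                    anchors.append((cx, cy, stride * 4 * s))
    return anchors

print(len(square_anchors()))  # 2560 anchors under the assumed configuration
```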
Processing the hand action image with the trained machine learning model to obtain a three-dimensional spatial model of the hand;
converting the three-dimensional hand model into key point coordinates of all joints of the hand;
performing hand motion recognition on the key point coordinates to obtain the specific motion joint coordinates for the mechanical arm.
The hand motion recognition performed on the key point coordinates specifically comprises the following steps (an inference sketch follows this list):
adjusting the resolution, flip orientation and aspect ratio of the hand action image;
executing a TensorFlow Lite model on the GPU of the upper computer, which takes the processed hand action image and outputs hand landmark model tensors;
calculating a detection confidence from the output hand landmark model tensors;
judging from the confidence whether a hand is currently present;
comparing the number of currently detected hands with the set maximum number and feeding the result back, so as to prevent redundant detection;
calculating a dominant-hand (handedness) score from the output hand landmark model tensors;
and decoding the output hand landmark model tensors into a list of 21 key point coordinates covering all joints of the hand, and normalizing the coordinates by the size of the input image.
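A hedged sketch of this inference step is given below, using the TensorFlow Lite Python interpreter. The output ordering assumed here (one landmark tensor of 21 x 3 values, one hand-presence score, one handedness score) mirrors common hand-landmark models but is an assumption; a real model's tensor layout must be checked against its metadata.

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="hand_landmark.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

def detect_hand(image, conf_threshold=0.5):
    """image: float32 array already resized/flipped to the model's input."""
    interpreter.set_tensor(inp["index"], image[np.newaxis, ...])
    interpreter.invoke()
    landmarks = interpreter.get_tensor(outs[0]["index"]).reshape(21, 3)
    presence = float(interpreter.get_tensor(outs[1]["index"]).squeeze())
    handedness = float(interpreter.get_tensor(outs[2]["index"]).squeeze())
    if presence < conf_threshold:      # no hand in the current frame
        return None
    h, w = image.shape[:2]
    # Normalize the 21 key point coordinates by the input image size.
    keypoints = [(x / w, y / h, z) for x, y, z in landmarks]
    return keypoints, handedness
```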
If a hand is present, the normalized hand coordinates are rescaled by the size of the original (pre-processing) image and converted into drawing primitives;
if a hand is present, the bounding box of the hand is calculated from the hand coordinates and converted into drawing primitives;
the obtained primitives are drawn into an annotation overlay, which is placed on the uppermost layer of the original image and output; the output is shown in Fig. 2 (a drawing sketch follows).
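A minimal sketch of this overlay step, with OpenCV assumed as the drawing backend, converts the key points into circle and rectangle primitives and composites them over the original frame:

```python
import cv2

def annotate(frame, keypoints):
    """keypoints: 21 coordinates normalized to [0, 1] (x, y[, z])."""
    h, w = frame.shape[:2]
    pts = [(int(x * w), int(y * h)) for x, y, *_ in keypoints]
    overlay = frame.copy()
    for px, py in pts:                        # key point primitives
        cv2.circle(overlay, (px, py), 4, (0, 255, 0), -1)
    xs, ys = zip(*pts)                        # bounding box of the hand
    cv2.rectangle(overlay, (min(xs), min(ys)), (max(xs), max(ys)),
                  (255, 0, 0), 2)
    # Blend the annotation layer onto the uppermost layer of the image.
    return cv2.addWeighted(overlay, 0.8, frame, 0.2, 0)
```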
Inverse kinematics calculation is then performed on the specific motion joint coordinates to obtain the action data of all steering engines of the mechanical arm. Taking a four-degree-of-freedom mechanical arm as an example, this comprises the following steps (a solver sketch follows this list):
writing out the forward equations for X, Y and Z from the arm lengths and joint angles of the mechanical arm;
iterating once over all available angles of the first steering engine to enumerate all possible solutions;
after the possible angles of the first steering engine are solved, sequentially solving the angle solutions of the remaining steering engines starting from the first;
and outputting the solution whose precision satisfies the use requirement.
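The sketch below implements this search for an assumed four-degree-of-freedom geometry: the first steering engine's angle is swept once, the remaining joints are searched for each candidate, and the best pose within tolerance is kept. The link lengths, angle ranges, step sizes and the level-gripper constraint that removes one search dimension are all illustrative assumptions.

```python
import math

BASE_H, L2, L3, L4 = 0.07, 0.105, 0.10, 0.06   # metres (assumed geometry)

def forward(t1, t2, t3, t4):
    """Forward equations for X, Y, Z from arm lengths and joint angles."""
    r = L2*math.cos(t2) + L3*math.cos(t2 + t3) + L4*math.cos(t2 + t3 + t4)
    z = (BASE_H + L2*math.sin(t2) + L3*math.sin(t2 + t3)
         + L4*math.sin(t2 + t3 + t4))
    return r*math.cos(t1), r*math.sin(t1), z

def solve_ik(target, tol=0.003):
    """Sweep the first steering engine once; for each candidate angle,
    search the remaining joints and keep the best solution within tol."""
    best = None
    for d1 in range(0, 181):                   # first servo, 1 degree steps
        t1 = math.radians(d1)
        for d2 in range(0, 181, 3):            # remaining servos, coarser
            for d3 in range(-120, 121, 3):
                t2, t3 = math.radians(d2), math.radians(d3)
                t4 = -(t2 + t3)                # keep gripper level (assumed)
                err = math.dist(forward(t1, t2, t3, t4), target)
                if err < tol and (best is None or err < best[0]):
                    best = (err, d1, d2, d3, math.degrees(t4))
    return best    # None when no pose meets the precision requirement
```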
The mechanical arm steering engine control instructions are then transmitted to the lower computer;
and the lower computer controls the mechanical arm to complete the specified action according to the steering engine control instructions (a lower-computer sketch follows).
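For completeness, a speculative sketch of the lower-computer side is given below: it reads steering-engine commands forwarded by the signal receiver and hands them to a hardware-specific servo driver. The 5-byte packet layout (a header byte followed by four servo angles) and the serial link are assumptions for illustration; the patent does not fix a wire format.

```python
import serial  # pyserial

PORT, BAUD, HEADER = "/dev/ttyUSB0", 115200, 0xAA  # assumed link settings

def run(set_servo_angle):
    """set_servo_angle(channel, degrees) is a hardware-specific callback."""
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        while True:
            if link.read(1) != bytes([HEADER]):
                continue                   # resynchronize on the header byte
            angles = link.read(4)          # one byte per steering engine
            if len(angles) == 4:
                for channel, value in enumerate(angles):
                    set_servo_angle(channel, value)   # 0..180 degrees
```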
The system control block diagram of this patent is shown in Fig. 1. The human hand acts; after the camera captures the image information, the upper computer performs image processing to obtain the coordinates of the hand key points. The upper computer then performs the inverse kinematics calculation and, once the specific motion instructions of all the steering engines of the mechanical arm are solved, transmits them wirelessly to the signal receiver, which forwards them to the connected lower computer. The lower computer operates the mechanical arm and finally faithfully reproduces the commanded hand action.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (7)

1. An intelligent mechanical arm control system based on machine vision, characterized by comprising: a mechanical arm, an upper computer, a lower computer and a signal receiver;
the upper computer comprises an image processing module, an action recognition module and an inverse kinematics calculation module; the image processing module is used for identifying the positions of key points of the human hand to obtain real-time hand coordinates; the action recognition module is used for identifying hand actions from the real-time hand coordinates; and the inverse kinematics calculation module is used for generating mechanical arm movement instructions from the hand actions and sending them to the signal receiver;
the signal receiver receives the mechanical arm movement instructions and forwards them to the lower computer;
and the lower computer controls the mechanical arm to perform the corresponding actions according to the received mechanical arm movement instructions.
2. The intelligent mechanical arm control system based on machine vision according to claim 1, wherein the image processing module recognizes the hand image obtained by a camera and identifies the positions of the hand key points to obtain real-time hand coordinates.
3. An intelligent mechanical arm control method based on machine vision, implemented on the basis of the intelligent mechanical arm control system based on machine vision according to any one of claims 1 to 2, characterized by comprising the following steps:
the upper computer acquires hand action images by using a camera;
constructing a machine learning model, and training the machine learning model to obtain a trained machine learning model;
processing the hand action image with the trained machine learning model to obtain a three-dimensional spatial model of the hand;
converting the three-dimensional hand model into key point coordinates of all joints of the hand;
performing hand motion recognition on the key point coordinates to obtain the specific motion joint coordinates for the mechanical arm;
performing inverse kinematics calculation on the specific motion joint coordinates to obtain the action data of all steering engines of the mechanical arm, and converting the action data into directly executable operation instructions;
transmitting the mechanical arm steering engine control instructions to the lower computer;
and the lower computer controls the mechanical arm to complete the specified action according to the steering engine control instructions.
4. The intelligent mechanical arm control method based on machine vision according to claim 3, wherein the hand motion recognition performed on the key point coordinates specifically comprises:
adjusting the resolution, flip orientation and aspect ratio of the hand action image;
executing a TensorFlow Lite model on the GPU of the upper computer, which takes the processed hand action image and outputs hand landmark model tensors;
calculating a detection confidence from the output hand landmark model tensors;
judging from the confidence whether a hand is currently present;
comparing the number of currently detected hands with the set maximum number and feeding the result back, so as to prevent redundant detection;
calculating a dominant-hand (handedness) score from the output hand landmark model tensors;
and decoding the output hand landmark model tensors into a list of 21 key point coordinates covering all joints of the hand, and normalizing the coordinates by the size of the input image.
5. The intelligent mechanical arm control method based on machine vision according to claim 4, further comprising:
if a hand is present, rescaling the normalized hand coordinates by the size of the original (pre-processing) image and converting them into drawing primitives;
if a hand is present, calculating the bounding box of the hand from the hand coordinates and converting it into drawing primitives;
and drawing the obtained primitives into an annotation overlay, placing the overlay on the uppermost layer of the original image, and outputting the result.
6. The intelligent mechanical arm control method based on machine vision according to claim 3, wherein constructing and training the machine learning model comprises the following steps:
since hands lack high-contrast recognition points, first training a palm and fist detector;
modeling the palm/fist in the detector with square bounding-box anchors, ignoring aspect ratio, thereby reducing the total number of anchors;
and annotating various real or rendered hand images and feeding them into the model as the training set.
7. The intelligent mechanical arm control method based on machine vision according to claim 3, wherein the inverse kinematics calculation on the specific motion joint coordinates comprises the following steps:
writing out the forward equations for X, Y and Z from the arm lengths and joint angles of the mechanical arm;
iterating once over all available angles of the first steering engine to enumerate all possible solutions;
after the possible angles of the first steering engine are solved, sequentially solving the angle solutions of the remaining steering engines starting from the first;
and outputting the solution whose precision satisfies the use requirement.
CN202310229492.0A 2023-03-10 2023-03-10 Intelligent mechanical arm control system and control method based on machine vision Pending CN116175582A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310229492.0A CN116175582A (en) 2023-03-10 2023-03-10 Intelligent mechanical arm control system and control method based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310229492.0A CN116175582A (en) 2023-03-10 2023-03-10 Intelligent mechanical arm control system and control method based on machine vision

Publications (1)

Publication Number Publication Date
CN116175582A (en) 2023-05-30

Family

ID=86438428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310229492.0A Pending CN116175582A (en) 2023-03-10 2023-03-10 Intelligent mechanical arm control system and control method based on machine vision

Country Status (1)

Country Link
CN (1) CN116175582A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117283571A (en) * 2023-11-24 2023-12-26 法奥意威(苏州)机器人系统有限公司 Robot real-time control method and device, electronic equipment and readable storage medium
CN117283571B (en) * 2023-11-24 2024-02-20 法奥意威(苏州)机器人系统有限公司 Robot real-time control method and device, electronic equipment and readable storage medium

Similar Documents

Publication Title
CN110480634B (en) Arm guide motion control method for mechanical arm motion control
CN111055281B (en) ROS-based autonomous mobile grabbing system and method
CN106826838B (en) Interaction bionic mechanical arm control method based on Kinect visual depth sensor
CN110900581B (en) Four-degree-of-freedom mechanical arm vision servo control method and device based on RealSense camera
Li Human–robot interaction based on gesture and movement recognition
Do et al. Imitation of human motion on a humanoid robot using non-linear optimization
WO2018137445A1 (en) Ros-based mechanical arm grabbing method and system
US8265791B2 (en) System and method for motion control of humanoid robot
CN112634318B (en) Teleoperation system and method for underwater maintenance robot
Tölgyessy et al. Foundations of visual linear human–robot interaction via pointing gesture navigation
US11648678B2 (en) Systems, devices, articles, and methods for calibration of rangefinders and robots
CN106313049A (en) Somatosensory control system and control method for apery mechanical arm
JP2022542241A (en) Systems and methods for augmenting visual output from robotic devices
US9008442B2 (en) Information processing apparatus, information processing method, and computer program
CN113103230A (en) Human-computer interaction system and method based on remote operation of treatment robot
US20180370038A1 (en) Systems, devices, articles, and methods for stow verification
CN116175582A (en) Intelligent mechanical arm control system and control method based on machine vision
CN114347033A (en) Robot article grabbing method and device, robot and storage medium
CN115576426A (en) Hand interaction method for mixed reality flight simulator
CN112732075B (en) Virtual-real fusion machine teacher teaching method and system for teaching experiments
CN117103277A (en) Mechanical arm sensing method based on multi-mode data fusion
CN111975776A (en) Robot movement tracking system and method based on deep learning and Kalman filtering
Urban et al. Recognition of arm gestures using multiple orientation sensors: Repeatability assessment
CN116206189A (en) Curved surface graphic identification code and identification method thereof
EP3916507A1 (en) Methods and systems for enabling human robot interaction by sharing cognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination