US20220413511A1 - Robot control method, a robot control system and a modular robot - Google Patents

Robot control method, a robot control system and a modular robot

Info

Publication number
US20220413511A1
Authority
US
United States
Prior art keywords
robot
turn
posture
motion
information
Prior art date
Legal status
Pending
Application number
US17/900,878
Other languages
English (en)
Inventor
Jianbo Yang
Current Assignee
Beijing Keyi Technology Co Ltd
Original Assignee
Beijing Keyi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Keyi Technology Co Ltd filed Critical Beijing Keyi Technology Co Ltd
Assigned to BEIJING KEYI TECHNOLOGY CO., LTD. Assignment of assignors interest (see document for details). Assignors: YANG, JIANBO
Publication of US20220413511A1 publication Critical patent/US20220413511A1/en
Pending legal-status Critical Current

Classifications

    • G05D1/0272 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means comprising means for registering the travel distance, e.g. revolutions of wheels
    • G05B19/19 Numerical control [NC] characterised by positioning or contouring control systems, e.g. to control position from one programmed point to another or to control movement along a programmed continuous path
    • B25J5/007 Manipulators mounted on wheels or on carriages, mounted on wheels
    • B25J9/08 Programme-controlled manipulators characterised by modular constructions
    • B25J9/1617 Cellular, reconfigurable manipulator, e.g. cebot
    • G05B19/4155 Numerical control [NC] characterised by programme execution, i.e. part programme or machine function execution, e.g. selection of a programme
    • G05D1/0011 Control of position, course, altitude or attitude of land, water, air or space vehicles associated with a remote control arrangement
    • G05B2219/35349 Display part, programmed locus and tool path, traject, dynamic locus
    • G05B2219/40302 Dynamically reconfigurable robot, adapt structure to tasks, cellular robot, cebot
    • G05B2219/40304 Modular structure
    • G05B2219/45007 Toy
    • G05B2219/50391 Robot

Definitions

  • the disclosure relates to the field of robots, in particular to a robot control method, a robot control system and a modular robot.
  • Robots have been widely used in life and industry, e.g., used for training students' creative thinking skills in teaching and used for welding, spraying, assembling, carrying and other operations in automated production.
  • although a robot has great flexibility to complete different work tasks, an existing robot often has only one main function for specific use purposes and occasions due to its fixed degrees of freedom and configuration and its lack of functional scalability and reconfigurability.
  • it is very expensive to develop a specific robot for each field and each application, which severely restricts the popularization and application of robots. Therefore, reconfigurable robots have come into being.
  • Reconfigurable robots are usually obtained by combining a main module with multiple basic modules that share the same shape, structure, and connecting surfaces.
  • However, the user cannot verify whether the combined structure of a modular robot is correct, which results in a lot of repetitive assembly work and a poor user experience.
  • many reconfigurable robots are also provided with wheels, which control different motion states and drive the main module to different positions to help people complete designated tasks, such as shooting and detection.
  • the main module and the basic modules are usually adjusted individually, or the wheels and the sub-modules are adjusted individually, and it is difficult to adjust the main module and the wheels simultaneously, resulting in limited adjustment postures that can hardly meet the diverse needs of users; alternatively, the procedure for setting action control information is cumbersome, the adjustment speed is slow, and the adjustment operation is complicated, which seriously affects user experience.
  • the present disclosure provides a robot control method, a robot control system and a modular robot.
  • a technical solution of the present disclosure for solving the technical problems is to provide a robot control method, which includes the steps of: T1: providing a robot, with at least one wheel and at least one motion posture; T2: regulating the robot to a motion posture, saving motion-posture information corresponding to the motion posture, and generating preset action control information based on the speed of the wheel and the motion-posture information; T3: constructing and forming an operating model based on the preset action control information; and T4: outputting, by the operating model, actual motion control information of a motion according to user's input to control the robot to perform the motion.
  • the speed of the wheel is obtained by pushing the robot to walk on the road in the corresponding motion posture, or the speed is obtained by controlling the robot to move via an app, and/or the speed is obtained by user customization.
  • the number of the wheels is at least two
  • the motion posture includes a forward posture, a left-turn posture, and a right-turn posture
  • the corresponding motion-posture information includes forward-posture information, left-turn-posture information, and right-turn-posture information.
  • the step T2 includes the steps of: T21: controlling the robot to move forward, obtaining the forward speed ratio of each wheel, and determining forward preset action control information according to the forward-posture information and the forward speed ratio of each wheel; T22: controlling the robot to turn left, obtaining the left-turn speed ratio of each wheel, and determining left-turn preset action control information according to the left-turn-posture information and the left-turn speed ratio of each wheel; and T23: controlling the robot to turn right, obtaining the right-turn speed ratio of each wheel, and determining right-turn preset action control information according to the right-turn-posture information and the right-turn speed ratio of each wheel.
  • the step T2 further includes the steps of: T24: controlling the robot to move forward, saving the forward-posture information and the maximum forward speed thereof, and determining forward preset action control information according to the forward-posture information, the maximum forward speed and/or a customized speed; T25: controlling the robot to turn left, saving the left-turn-posture information and the maximum left-turn speed thereof, and determining left-turn preset action control information according to the left-turn-posture information, the maximum left-turn speed and/or a customized speed; and T26: controlling the robot to turn right, saving the right-turn-posture information and the maximum right-turn speed thereof, and determining right-turn preset action control information according to the right-turn-posture information, the maximum right-turn speed and/or a customized speed.
  • the step T3 includes the steps of: T31: constructing a coordinate system, defining a circular area or a fan-shaped area with the origin as the center on the coordinate system to form a manipulation area, and the fan-shaped area including areas located in the first quadrant and the second quadrant; T32: mapping the forward preset action control information, the left-turn preset action control information, and the right-turn preset action control information to the upper vertex, the left vertex and the right vertex of the manipulation area, respectively.
  • the step T3 further includes the step of: T33: defining user's input as a control point located in the coordinate system, when the control point is located in the first and second quadrants, the control point located in the first quadrant being a forward-right-motion control point, and forward-right preset action control information being obtained by the interpolation operation of the forward preset action control information and the right-turn preset action control information; the control point located in the second quadrant being a forward-left-motion control point, forward-left preset action control information being obtained by the interpolation operation of the forward preset action control information and the left-turn preset action control information; the control point located on the Y-axis of the coordinate system being obtained by converting the forward preset action control information according to the distance between the control point and the center.
  • the fan-shaped area further includes areas located in the third quadrant and the fourth quadrant, and there are blank areas where no control points are disposed between the first quadrant and the fourth quadrant and between the second quadrant and the third quadrant.
  • the step T3 further includes the step of: T34: interpolating to the lower vertex of the manipulation area according to the forward preset action control information to form backward preset action control information; interpolating to the third quadrant of the manipulation area according to the backward preset action control information and the left-turn preset action control information to form backward-left preset action control information; interpolating to the fourth quadrant of the manipulation area according to the backward preset action control information and the right-turn preset action control information to form backward-right preset action control information.
  • an operating disc is provided on a Graphic User Interface for operating the robot, and the operating disc corresponds to the circular area or the fan-shaped area on the coordinate system.
  • the step T4 includes the steps of: T41: operating on the operating disc to form an input; T42: the operating model calculating the actual motion control information according to the control point corresponding to the input to control the robot to perform a motion.
  • At least one of the forward speed ratio, the left-turn speed ratio, and the right-turn speed ratio is adjusted according to a preset adjustment ratio.
  • the present disclosure further provides a robot control system, which includes a robot that is assembled from a main body and at least one wheel connected to the main body and has an initial entity structure, a memory, and one or more programs, where one or more of the programs are stored in the memory, the memory communicates with the main body, and the programs are used to execute the robot control method as described above.
  • the main body includes a plurality of module units
  • the robot control system further includes a controller
  • signal transmission can be performed between the controller and the robot.
  • the controller includes a display screen, which presents at least one operating disc, and the user controls the motion of the robot by manipulating the operating disc.
  • the main body includes a plurality of module units
  • the robot control system further includes a controller
  • signal transmission can be performed between the controller and the robot.
  • the controller includes a display screen, which presents at least one ratio bar, and the user adjusts at least one of the forward speed ratio, the left-turn speed ratio, and the right-turn speed ratio by manipulating the ratio bar.
  • a first edit button and a second edit button are provided on the display screen, the first edit button is used to set the robot to a corresponding motion posture, and the second edit button is used to set the speed of the wheel.
  • the present disclosure further provides a modular robot, which is used to execute the above-mentioned robot control method.
  • the robot control method, the robot control system and the modular robot of the present disclosure have the following beneficial effects.
  • the robot control method includes the steps of: providing a robot, with at least one wheel and at least one motion posture; regulating the robot to a motion posture, saving the motion-posture information corresponding to the motion posture, and generating preset action control information based on the speed and the motion-posture information; constructing and forming an operating model based on the preset action control information; and the operating model outputting actual motion control information of a motion according to user's input to control the robot to perform the motion.
  • the robot can be adjusted to a corresponding motion posture according to the user's needs, and the motion-posture information and speed thereof are saved to generate preset action control information. This greatly improves the convenience of the robot and makes it easy for the user to set motion modes, while greatly enriching the robot's motion modes to meet the diverse needs of users and increasing the design space of the robot, making it suitable for more scenarios.
  • the user can simply set forward preset action control information, left-turn preset action control information, and right-turn preset action control information.
  • the preset action control information of other defined motion postures is obtained through the interpolation algorithm, which makes the setting simple and fast.
  • the preset action control information is mapped to a circular area or a fan-shaped area, so that the user can easily and quickly operate the robot and have a better experience.
  • the setting of action control information can be completed by pushing the robot to walk on the ground.
  • the operation is simple and fast, which improves the user experience; the motion of the robot can be customized according to the needs of the user to meet different needs and make the robot more fun to play with.
  • the step T2 includes the steps of: T21: controlling the robot to move forward, obtaining the forward speed ratio of each wheel, and determining the forward preset action control information according to the forward-posture information and the forward speed ratio of each wheel; T22: controlling the robot to turn left, obtaining the left-turn speed ratio of each wheel, and determining the left-turn preset action control information according to the left-turn-posture information and the left-turn speed ratio of each wheel; and T23: controlling the robot to turn right, obtaining the right-turn speed ratio of each wheel, and determining the right-turn preset action control information according to the right-turn-posture information and the right-turn speed ratio of each wheel.
  • the forward speed ratio, the left-turn speed ratio and the right-turn speed ratio are all obtained in real time by pushing the robot to move. Therefore, the forward preset action control information, the left-turn preset action control information and the right-turn preset action control information set according to the obtained forward speed ratio, left-turn speed ratio and right-turn speed ratio can be better adapted to the robot, which can improve the adaptability of motions of the robot and obtain a better motion state.
  • At least one of the forward speed ratio, the left-turn speed ratio, and the right-turn speed ratio is adjusted according to a preset adjustment ratio, so that the forward speed ratio, the left-turn speed ratio and the right-turn speed ratio can be well adjusted according to different operating environments, which further improves the stability and flexibility of motions of the robot, and ensures a better user experience.
  • At least one of the forward speed ratio, the left-turn speed ratio and the right-turn speed ratio is adjusted by direct editing or input.
  • the speed obtained by pushing the robot to move is adjusted, which can further improve the adaptability of the speed and improve the stability and coordination of motions of the robot.
  • the robot control system and the modular robot of the present disclosure also have the same beneficial effects as the robot control method described above.
  • FIG. 1 is a schematic flowchart of a robot control method according to a first embodiment of the present disclosure.
  • FIG. 2A is a schematic diagram of the robot in a straight posture according to the first embodiment of the present disclosure.
  • FIG. 2B is a schematic diagram of the robot in a right-turn posture according to the first embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram of a sub-flowchart of step T0 of the robot control method of the first embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of a sub-flowchart of step T2 of the robot control method of the first embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of a sub-flowchart of step T2 of a variant embodiment of the robot control method according to the first embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of a ratio bar of the robot control method according to the first embodiment of the present disclosure when sliding the ratio bar to set the speed.
  • FIG. 7 is a schematic diagram of a sub-flowchart of the robot control method according to the first embodiment of the present disclosure when the speed is obtained through user customization.
  • FIG. 8 is a schematic diagram of a fan-shaped area formed in the robot control method of the first embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of a circular area formed in the robot control method of the first embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of a module structure of a robot control system according to a second embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of a module structure of a robot control system according to a third embodiment of the present disclosure.
  • FIG. 12A is a first schematic diagram of an interface of a display screen of a controller of the robot control system of the third embodiment of the present disclosure.
  • FIG. 12B is a second schematic diagram of the interface of the display screen of the controller of the robot control system of the third embodiment of the present disclosure.
  • FIG. 12C is a third schematic diagram of the interface of the display screen of the controller of the robot control system of the third embodiment of the present disclosure.
  • a first embodiment of the present disclosure provides a robot control method for controlling a robot to perform corresponding actions.
  • the robot includes at least one wheel and a main body connected to the wheel.
  • the robot has at least one motion posture.
  • the robot is a modular robot, and the main body of the robot includes at least two module units.
  • One of the at least two module units is connected to the wheel, and each of the module units includes at least one sub-module connected to and rotatable with respect to one of the wheels. That is, when the number of the sub-modules and the number of the wheels are both one, one end of the sub-module is connected to the wheel, and the wheel and the sub-module can rotate relative to each other.
  • when the numbers of the sub-modules and the wheels are one and two respectively, the two ends of the sub-module are each connected to one of the wheels.
  • each of the module units includes at least two sub-modules, and any two adjacent sub-modules are connected to and rotatable relative to each other.
  • each module unit includes an upper hemisphere and a lower hemisphere that can rotate relative to each other, and the two ends of one hemisphere are each connected to one of the two wheels.
  • Each sub-module includes at least a docking part; each docking part is provided with an interface; each interface has unique interface identification information; and the module units are connected through the docking part. It can be understood that when each sub-module includes at least two docking parts, the two module units are connected by one docking part of one of the two module units and one docking part of the other of the two module units to form a virtual connection surface at the connection of the two module units.
  • the two module units can rotate based on the virtual connection surface, and a plane where at least another docking part on at least one of the two module units is located intersects with the virtual connection surface.
  • each wheel is provided with a docking part; each docking part is provided with a docking opening; each wheel has corresponding identification information; each docking opening has unique interface identification information; and the modular unit is connected to the wheel through the docking openings thereof.
  • Configuration information includes but is not limited to one or more of module type information, position information, module quantity information, and initial angle information between the two sub-modules.
  • the configuration information also includes one or more of type information of the wheel, position information of the wheel, quantity information of the wheel, and initial angle information between the sub-module and the wheel.
  • when the number of the wheels is two, type information of the wheels can be defined as left wheel and right wheel; when the number of the wheels is four, type information of the wheels can be defined as forward-left wheel, forward-right wheel, backward-left wheel and backward-right wheel.
  • the type information of the wheels can also be other names, as long as the wheels can be marked and identified.
  • the configuration information is configured to define a connection relationship between adjacent module units and/or between wheels and sub-modules.
  • the position information is configured to record interface identification information of the two docking parts by which the adjacent module units are connected and interface identification information of the two docking parts by which the wheel and the sub-module are connected.
  • the interface identification information of each docking part represents the position of the docking part on the module unit in which the docking part is located and the position of the wheel with respect to the sub-module. Therefore, the position information of each module unit and the position information of the wheel with respect to the sub-module represent their absolute positions in a three-dimensional spatial configuration or a planar configuration.
  • Module units of the same type are set with the same module type identifier.
  • cell bodies share one module type identifier and single cell bodies share another, and the module type identifier of the cell bodies differs from that of the single cell bodies.
  • each type of single cell bodies has the same module type identifier, and different types of single cell bodies have different module type identifiers, such that the module type information of the module units can be obtained by recognizing the module type identifiers.
  • the initial angle information between the two sub-modules refers to a relative angle value between the upper and lower sub-modules in each module unit.
  • the module quantity information refers to the quantity of module units.
  • the process by which two adjacent module units recognize the interface identification information of the two docking parts connecting them is referred to as face recognition, and the position information of the module units can be obtained by performing the face recognition. It may be understood that the definitions here are also applicable to other embodiments of this specification.
  • the robot control method includes the steps of:
  • T1 providing a robot, with at least one wheel and at least one motion posture
  • T2 regulating the robot to a motion posture, saving motion-posture information corresponding to the motion posture, and generating preset action control information based on the speed of the wheel and the motion-posture information;
  • T3 constructing and forming an operating model based on the preset action control information
  • T4 outputting, by the operating model, actual motion control information of a motion according to user's input to control the robot to perform the motion.
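As a rough illustration of how the four steps above might fit together in code, here is a minimal Python sketch; the class names, fields, and placeholder numbers are illustrative assumptions, not structures defined by the disclosure.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PresetActionControl:
    posture_info: Dict[str, float]  # motion-posture information saved in step T2
    wheel_speeds: List[float]       # wheel speed(s) paired with that posture

class OperatingModel:
    """Step T3: built from the preset action control information."""
    def __init__(self, presets: Dict[str, PresetActionControl]):
        self.presets = presets

    def output(self, command: str) -> PresetActionControl:
        # Step T4: output actual motion control information for the
        # user's input (the full method interpolates between presets;
        # see step T33 below).
        return self.presets[command]

# Steps T1/T2: posture info and wheel speeds would be captured from the
# robot itself; the literal numbers here are placeholders.
presets = {
    "forward":    PresetActionControl({"wheel_angle_deg": 0.0},   [1.0, 1.0]),
    "left_turn":  PresetActionControl({"wheel_angle_deg": 120.0}, [0.5, 1.0]),
    "right_turn": PresetActionControl({"wheel_angle_deg": 60.0},  [1.0, 0.5]),
}
model = OperatingModel(presets)
print(model.output("left_turn").wheel_speeds)  # [0.5, 1.0]
```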
  • the robot can communicate with a remote terminal.
  • alternatively, the module units of the robot can perform the subsequent steps by themselves.
  • the step T2 is: regulating the robot to a motion posture, saving motion-posture information corresponding to the motion posture, and generating preset action control information based on the speed of the wheel and the motion-posture information.
  • the motion postures of the robot include any one or more of a forward posture, a left-turn posture, a right-turn posture, a backward posture, a forward-left posture, a forward-right posture, a backward-left posture, a backward-right posture and a stop posture.
  • the corresponding motion-posture information includes forward-posture information, left-turn-posture information, and right-turn-posture information.
  • the motion posture is illustrated with the number of the wheels being four.
  • Establish a coordinate system including an X-axis and a Y-axis perpendicular to each other. When the center line of two wheels on the same row coincides with the X-axis, the motion posture is defined as the forward posture or the backward posture of the wheels (as shown in FIG. 2A); when the angle V between the center line of the two wheels and the X-axis is an acute angle (as shown in FIG. 2B), the motion posture is defined as the right-turn posture or the forward-right posture of the wheels; when the angle between the center line of the two wheels and the X-axis is an obtuse angle, the motion posture is defined as the left-turn posture or the forward-left posture of the wheels (not shown).
  • the backward-left posture and the backward-right posture can also be defined by defining the deflection angle of the wheel. It should be noted that the definition of the motion posture in this embodiment is only one possible method, and the motion posture can also be defined through other definition methods.
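A minimal sketch of the posture definition above, assuming the angle V is measured in degrees between the wheels' center line and the X-axis; the function simply encodes "coincides", "acute", and "obtuse".

```python
def classify_posture(angle_v_deg: float) -> str:
    """Classify the motion posture from the angle V between the center
    line of two wheels on the same row and the X-axis (a sketch of the
    definition in this embodiment)."""
    a = angle_v_deg % 180.0          # the center line has no direction
    if a == 0.0:
        return "forward/backward posture"          # coincides with X-axis
    if a < 90.0:
        return "right-turn/forward-right posture"  # acute angle
    if a > 90.0:
        return "left-turn/forward-left posture"    # obtuse angle
    return "undefined (center line perpendicular to X-axis)"

print(classify_posture(0))    # forward/backward posture
print(classify_posture(60))   # right-turn/forward-right posture
print(classify_posture(120))  # left-turn/forward-left posture
```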
  • Step T10 obtaining the motion-posture information of the robot.
  • the step T10 is between the step T1 and the step T2. It can be understood that in the step T10, the remote terminal obtains initial virtual configuration information of the robot in the motion posture, and the initial virtual configuration information represents the motion-posture information of the robot in the motion posture; or at least one of the plurality of modular units or at least one wheel obtains and stores the initial virtual configuration information of the robot instead of transmitting it to the remote terminal.
  • the initial virtual configuration information includes one or more of position information, module type information, module quantity information, initial angle information between the upper and lower sub-modules, type information of the wheel, position information of the wheel, quantity information of the wheel, and the initial angle between the sub-module and the wheel; the initial virtual configuration information also includes one or more of other information that defines the connection relationship between adjacent module units.
  • the module unit or the wheel transmits its own module type information to the cell body through wireless transmission; after all the module units and the wheels transmit their position information to the remote terminal, the remote terminal obtains the module quantity information and wheel quantity information of the robot; the module unit detects the initial angles of the upper and lower sub-modules thereof and wirelessly transmits their initial angle information to the remote terminal, and the module unit detects the initial angles of the wheels and the sub-modules and wirelessly transmits their initial angle information to the remote terminal.
  • the initial virtual configuration information corresponds to the posture information in the initial virtual configuration.
  • the corresponding motion-posture information also includes one or more of position information, module type information, module quantity information, initial angle information between the upper and lower sub-modules, type information of the wheel, position information of the wheel, quantity information of the wheel, and the initial angle between the sub-module and the wheel; the corresponding motion-posture information also includes one or more of other information that defines the connection relationship between adjacent module units.
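The configuration fields listed above could be gathered in a single container; the following sketch is one plausible layout, with field names invented for illustration rather than taken from the disclosure.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class VirtualConfiguration:
    """Container for the initial virtual configuration information;
    all field names are illustrative assumptions."""
    module_types: Dict[int, str]                  # module number -> module type
    module_positions: Dict[int, Tuple[int, int]]  # (sending port, receiving port)
    module_count: int
    sub_module_angles: Dict[int, float]  # initial angle between upper/lower sub-modules
    wheel_types: Dict[int, str]          # e.g. "forward-left", "backward-right"
    wheel_positions: Dict[int, Tuple[int, int]]
    wheel_count: int
    wheel_angles: Dict[int, float]       # initial angle between sub-module and wheel
```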
  • the plurality of module units of the robot may include a plurality of identical or different module units, and the plurality of module units include at least one module unit that can communicate with the remote terminal.
  • the plurality of module units includes one cell body and at least one single cell body, that is, the robot includes one cell body and at least one single cell body.
  • the signal transmission process between the cell body and at least one single cell body is described as follows:
  • the cell body is configured to communicate with the remote terminal
  • the single cell body directly connected to the cell body is defined as a first-level single cell body
  • the single cell body connected to the first-level single cell body is defined as a second-level single cell body
  • the single cell body connected to an Mth-level single cell body is defined as an (M+1)th-level single cell body, M being an integer greater than or equal to 1.
  • T101 transmitting a signal by the cell body to the first-level single cell body connected thereto through the docking part;
  • T102 receiving the signal and then performing face recognition by the first-level single cell body to obtain the interface identification information of the sending docking part of the cell body; and transmitting, by the first-level single cell body, the interface identification information of the sending docking part of the cell body together with the interface identification information of its own receiving docking part to the cell body, so as to obtain the position information of the first-level single cell body;
  • T103 sending a signal by the Mth-level single cell body to the (M+1)th-level single cell body;
  • T104 receiving the signal and then performing face recognition by the (M+1)th-level single cell body to obtain the interface identification information of the sending docking part of the Mth-level single cell body; and transmitting, by the (M+1)th-level single cell body, the interface identification information of the sending docking part of the Mth-level single cell body together with the interface identification information of its own receiving docking part to the cell body.
  • the signal transmitted from the cell body to the first-level single cell body and the signal transmitted from the Mth-level single cell body to the (M+1)th-level single cell body are preferably electrical signals or wireless signals.
  • the steps T103 and T104 can be omitted.
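Viewed abstractly, steps T101 to T104 amount to a level-by-level traversal of the connection graph, with each receiving module reporting the sender's docking-part interface ID together with its own. The sketch below is one way to model that flow; the data layout and names are assumptions, not the patent's protocol.

```python
from collections import deque

def discover_positions(cell_body, links):
    """links: {module: [(own_port_id, neighbor, neighbor_port_id), ...]}.
    Returns position info as {module: (sender_port_id, receiver_port_id)},
    mimicking the level-by-level face recognition of steps T101-T104."""
    positions = {}
    visited = {cell_body}
    queue = deque([cell_body])
    while queue:                       # level M first, then level M+1, ...
        sender = queue.popleft()
        for port, neighbor, neighbor_port in links.get(sender, []):
            if neighbor in visited:
                continue
            # T102/T104: the receiver learns the sending docking part's
            # interface ID and reports it with its own receiving port.
            positions[neighbor] = (port, neighbor_port)
            visited.add(neighbor)
            queue.append(neighbor)
    return positions

# Cell body "C" connected on its port 1 to single cell body "S1" (port 3);
# "S1" connected on its port 2 to "S2" (port 4).
links = {"C": [(1, "S1", 3)], "S1": [(2, "S2", 4)]}
print(discover_positions("C", links))  # {'S1': (1, 3), 'S2': (2, 4)}
```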
  • one of the module units is defined as a main module unit, that is, the above-mentioned cell body; the module unit directly connected to the main module unit is a first-level single cell body, the module unit connected to the first-level single cell body is defined as a second-level single cell body, and the module unit connected to an Mth-level single cell body is defined as an (M+1)th-level single cell body, M being an integer greater than or equal to 1.
  • the above steps T101-T104 are also performed.
  • the single cell bodies at each level can also transmit their respective position information directly to the remote terminal instead of transmitting it to the main module unit.
  • each module unit recognizes the interface identification information of the docking part of the adjacent module unit connected thereto, and obtains its position information based on the interface identification information of the docking part of the adjacent module unit and the interface identification information of the docking part by which the module unit itself is connected to the adjacent module unit.
  • the process of obtaining the position information of the wheels of the robot may be the same as the process of obtaining the position information of the plurality of module units. That is, one of the wheels can be defined as the cell body, and the module unit or the wheel directly connected to the cell body is the first-level single cell body. The specific identification process is the same as above, which will not be repeated here.
  • the position information between the wheels of the robot and the main body and the angle information can also be obtained according to the method used between the plurality of modules to represent the posture information of the robot.
  • the remote terminal can also directly obtain the angle information, position information, etc. of each module unit and wheel, and the remote terminal processes the corresponding angle information and position information to obtain the current configuration information of the modular robot to complete the recognition of the current entity configuration of the modular robot.
  • the following step is performed before or at the same time as the step T101:
  • T100 sending broadcast signals by the module unit or the wheel to instruct the respective single cell bodies to prepare for face recognition.
  • wireless communication can be performed between module units.
  • the wireless communication may be Wi-Fi communication, Bluetooth communication, or ZigBee communication, preferably ZigBee communication.
  • the module unit or the wheel first instructs other module units connected thereto to enter a face recognition preparation state in a form of broadcast signals, and then performs the face recognition action after the module units connected thereto receive the electrical signals.
  • each docking part on the cell body sends different electrical signals to a plurality of first-level single cell bodies.
  • the plurality of first-level single cell bodies obtains interface identification information of the docking parts of the cell body connected thereto according to the received different electrical signals.
  • Each first-level single cell body responds to the cell body with the interface identification information of the docking part, which transmits the electrical signals, of the cell body and the interface identification information of the docking part, which receives the electrical signal, of the first-level single cell body itself.
  • the cell body calculates position information of this first-level single cell body through an algorithm. After the plurality of first-level single cell bodies performs the same action, the cell body obtains position information of the plurality of first-level single cell bodies.
  • each docking part on the Mth-level single cell body sends different electrical signals to a plurality of (M+1)th-level single cell bodies.
  • the plurality of (M+1)th-level single cell bodies obtain the interface identification information of the docking parts of the Mth-level single cell body connected thereto according to the received different electrical signals.
  • Each (M+1)th-level single cell body responds to the cell body with the interface identification information of the docking part, which transmits the electrical signal, of the Mth-level single cell body and the interface identification information of the docking part, which receives the electrical signal, of the (M+1)th-level single cell body itself.
  • the cell body calculates the position information of the (M+1)th-level single cell body through an algorithm. After the plurality of (M+1)th-level single cell bodies perform the same action, the cell body obtains the position information of the plurality of (M+1)th-level single cell bodies. After this series of face recognition, the cell body obtains the position information of all single cell bodies, thereby obtaining the configuration information of the robot, that is, the motion-posture information of the robot.
  • the plurality of lower-level single cell bodies respond to the cell body with their position information based on a time sequence according to the interface identification information of the docking parts, which transmit different electrical signals, of the cell body or a higher-level single cell body; or, when the cell body or a single cell body sends the same or different electrical signals to a plurality of lower-level single cell bodies based on a time sequence, the plurality of lower-level single cell bodies respond to the cell body with their position information sequentially, according to the time sequence in which the electrical signals are received.
  • For example, when the interface identification information of two docking parts is defined as 1 and 2, respectively, and the cell body simultaneously sends two different electrical signals to the two first-level single cell bodies connected thereto, it may be set that the first-level single cell body connected to docking part 1 first responds with its position information, and after a wait of 10 s (the specific time may be adjusted), the first-level single cell body connected to docking part 2 responds with its position information, as in the toy illustration below.
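A toy sketch of the time-sequenced responses just described; the scheduling mechanism and names are assumptions, and the 10 s wait is shortened here so the snippet runs quickly.

```python
import time

def respond_in_sequence(responders, wait_s=10.0):
    """responders: {docking_part_id: callable returning position info}.
    The single cell body on docking part 1 responds first; each later
    one responds after a fixed wait."""
    for i, port in enumerate(sorted(responders)):
        if i > 0:
            time.sleep(wait_s)          # wait before the next response
        print(f"docking part {port}: {responders[port]()}")

respond_in_sequence({1: lambda: "position info 1",
                     2: lambda: "position info 2"}, wait_s=0.1)
```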
  • T102a stopping sending electrical signals by the cell body, and instructing, by the cell body, the first-level single cell body connected directly to the cell body to send electrical signals to the second-level single cell body connected to the first-level single cell body.
  • the cell body preferably instructs the first-level single cell body in a form of broadcast signals.
  • the cell body controls the M th -level single cell body to send electrical signals to a plurality of (M+1) th -level single cell bodies based on a time sequence in a form of broadcast signals according to the interface identification information of a plurality of docking parts of the M th -level single cell body.
  • the electrical signals sent by the M th -level single cell body to the plurality of (M+1) th -level single cell bodies may be the same or different, and it is preferable that the plurality of docking parts of the M th -level single cell body sends different electrical signals.
  • after receiving the position information transmitted from the single cell bodies, the cell body individually numbers the respective single cell bodies and stores the position information of each single cell body in association with the corresponding number.
  • when the cell body communicates with the remote terminal, the cell body transmits the position information of each single cell body and its number to the remote terminal.
  • after the remote terminal sends action control information to the cell body, the cell body decomposes the control information according to the numbers and transmits the decomposed control information to the respective single cell bodies according to their numbers, as in the sketch below.
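A minimal sketch of the numbering-and-dispatch behavior; the message format and the assignment of numbers by order of arrival are assumptions for illustration.

```python
def number_cells(position_reports):
    """Assign a sequential number to each single cell body as its
    position information arrives (order of arrival is assumed)."""
    return {i: cell for i, cell in enumerate(position_reports, start=1)}

def decompose(action_control_info, numbering):
    """Split overall control info into per-module commands keyed by the
    numbers assigned above."""
    return {num: action_control_info.get(cell, {})
            for num, cell in numbering.items()}

numbering = number_cells(["S1", "S2"])         # {1: 'S1', 2: 'S2'}
commands = decompose({"S1": {"angle": 30.0},
                      "S2": {"angle": -15.0}}, numbering)
print(commands)  # {1: {'angle': 30.0}, 2: {'angle': -15.0}}
```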
  • in the step T10, the remote terminal or a module unit storing the initial virtual configuration information may generate the initial virtual configuration of the robot according to the initial virtual configuration information.
  • the remote terminal generates the initial virtual configuration of the robot through three-dimensional simulation or three-dimensional modeling according to the obtained initial virtual configuration information.
  • the initial virtual configuration can be displayed on the remote terminal to facilitate the user to view the motion posture of the robot at any time.
  • the speed is associated with the motion speed of the robot and/or obtained through user customization.
  • being associated with the motion speed of the robot means that the speed is obtained by pushing the robot to move in a corresponding motion posture.
  • the speed of user customization can be understood as editing, inputting or other definition methods on the operation interface of the robot. It should be noted that when the number of wheels is at least two, the speed of some wheels may be obtained through motion, and the speed of other wheels may be obtained through customization.
  • the actual test speed of the wheels is obtained by manually pushing the robot to walk on a road, which is called the test-drive mode and is used to obtain the speed ratio of each wheel. The actual speed of the wheels during actual walking is then set by the user according to the speed ratio. By obtaining the speed ratio of each wheel through the test-drive mode, the wheels can be better balanced during actual motion.
  • speed sensors can be installed on each wheel, and the actual speed of each wheel during motion can be obtained through the speed sensors.
  • the motion of the robot can also be completed via the app software operating system.
  • the robot control method includes the steps of:
  • T1 providing a robot, with at least one wheel and at least one motion posture
  • T2 regulating the robot to a corresponding motion posture, saving motion-posture information corresponding to the motion posture, pushing the robot to walk on a road in the corresponding motion posture to obtain the speed of the wheel, and generating preset action control information according to the speed and the motion-posture information;
  • T3 constructing and forming an operating model based on the preset action control information
  • T4 outputting, by the operating model, actual motion control information of a motion according to user's input to control the robot to perform the motion.
  • the step T2 includes the steps of:
  • T21 controlling the robot to move forward, obtaining the forward speed ratio of each wheel, and determining the forward preset action control information according to the forward-posture information and the forward speed ratio of each wheel;
  • T22 controlling the robot to turn left, obtaining the left-turn speed ratio of each wheel, and determining the left-turn preset action control information according to the left-turn-posture information and the left-turn speed ratio of each wheel;
  • T23 controlling the robot to turn right, obtaining the right-turn speed ratio of each wheel, and determining the right-turn preset action control information according to the right-turn-posture information and the right-turn speed ratio of each wheel.
  • controlling the robot to move forward, controlling the robot to turn left, and controlling the robot to turn right include pushing the robot to walk on a road or manipulating the robot via the app.
  • the speed obtained by controlling the motion of the robot under the corresponding motion posture may be realized in the following manner.
  • the forward speed ratio can be obtained in the following manner: first, obtaining the actual test speed corresponding to each wheel; then calculating the speed ratio between the wheels according to the actual test speed of each wheel.
  • in the step T21, after the forward speed ratio is obtained, the actual speed of one of the wheels can be given, and the speed of each wheel is then calculated according to the speed ratio to obtain the speeds of all wheels, which are used as the speeds for generating the preset action control information.
  • left-turn speed ratio and the right-turn speed ratio are obtained in the same manner as the forward speed ratio, which will not be repeated here.
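The ratio computation just described can be made concrete as follows; the choice of the first wheel as the reference and the function names are assumptions for this sketch, and the measured values are placeholders.

```python
def speed_ratio(test_speeds):
    """Ratio of each wheel's measured test speed to the first wheel's."""
    ref = test_speeds[0]
    return [v / ref for v in test_speeds]

def wheel_speeds(ratio, given_speed, given_index=0):
    """Given the actual speed of one wheel, recover the speeds of all
    wheels so that they keep the measured ratio (step T21)."""
    scale = given_speed / ratio[given_index]
    return [r * scale for r in ratio]

# Speeds measured while pushing the robot forward (arbitrary units).
ratio = speed_ratio([10.0, 9.0, 8.0, 9.0])   # [1.0, 0.9, 0.8, 0.9]
print(wheel_speeds(ratio, given_speed=0.5))  # [0.5, 0.45, 0.4, 0.45]
```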
  • the step T2 includes the steps of:
  • T24 controlling the robot to move forward, saving the forward-posture information and the maximum forward speed thereof, and determining forward preset action control information according to the forward-posture information, the maximum forward speed and/or a customized speed;
  • T25 controlling the robot to turn left, saving the left-turn-posture information and the maximum left-turn speed thereof, and determining left-turn preset action control information according to the left-turn-posture information, the maximum left-turn speed and/or a customized speed;
  • T26 controlling the robot to turn right, saving the right-turn-posture information and the maximum right-turn speed thereof, and determining right-turn preset action control information according to the right-turn-posture information, the maximum right-turn speed and/or a customized speed.
  • controlling the robot to move forward, controlling the robot to turn left, and controlling the robot to turn right include pushing the robot to walk on a road or manipulating the robot via the app.
  • the speed obtained by controlling the motion of the robot in a corresponding motion posture can be specifically realized by the following manner: setting the speed of one of the wheels as the actual maximum speed during the actual motion of the wheel, and converting the actual maximum speed of other wheels according to the speed ratio to obtain the maximum forward speed, the maximum left-turn speed, and the maximum right-turn speed of all wheels.
  • the maximum speed can also be obtained directly according to the actual motion speed of the robot.
  • a visual ratio bar Z can be provided on the operation interface; the control point F on the ratio bar Z can slide between the end points N and M along the ratio bar, and the point M represents the maximum speed ratio of each wheel.
  • the speed ratio of each wheel is scaled according to the proportion of the segment NF to the entire sliding bar NM.
  • for example, if the maximum speed ratio at the beginning is 10:9:8:9, then when F slides to the midpoint of the ratio bar Z, the speed ratio of each wheel becomes 5:4.5:4:4.5, as in the sketch below.
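The scaling behavior reduces to one multiplication per wheel; this small sketch reproduces the 10:9:8:9 example above (the function name and the 0-to-1 slider convention are assumptions).

```python
def scale_ratio(max_ratio, slider_pos):
    """Scale each wheel's maximum speed ratio by the position of the
    control point F on the ratio bar Z (0.0 at N, 1.0 at M)."""
    return [r * slider_pos for r in max_ratio]

print(scale_ratio([10, 9, 8, 9], 0.5))  # [5.0, 4.5, 4.0, 4.5]
```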
  • in the step T2, when the number of the wheels is at least two, the speed in the step T24, T25 or T26 may be obtained in the user-customization manner through the following steps:
  • the maximum actual speed of one of the wheels and the speed ratio of each wheel may be set on a controller that transmits signals to the robot.
  • the controller may be an app loaded on a remote terminal.
  • the remote terminal can be a mobile phone, an iPad, a computer or other equipment.
  • the speed of one of the wheels can be adjusted separately after the speed of each wheel is set according to the speed ratio; or, the speed of each wheel can be set directly without setting the speed ratio of each wheel.
  • a coordinate system including X-axis and Y-axis perpendicular to each other is constructed.
  • a circular area or a fan-shaped area is defined on the coordinate system with the origin as the center to form a manipulation area, and the fan-shaped area includes areas located in the first quadrant and the second quadrant.
  • the fan-shaped area in FIG. 8 is OPQ.
  • the forward motion of the robot is defined as: pushing the robot to move from the center O to the vertex A.
  • the speed thereof at the center O is 0, and the speed thereof at the vertex A is the maximum forward speed.
  • the maximum speed thereof is defined as V1.
  • the left-turn motion of the robot is defined as: pushing the robot to move from the vertex A to the left-end point Q along the arc.
  • the speed when moving to the left-end point Q is the maximum left-turn speed.
  • the maximum speed thereof is defined as V2.
  • the right-turn motion of the robot is defined as: pushing the robot to move from vertex A to the right-end point P along the arc.
  • the speed when moving to the right-end point P is the maximum right-turn speed.
  • the maximum speed thereof is defined as V3.
  • the step T3 includes the steps of:
  • T31 constructing a coordinate system, defining a circular area or a fan-shaped area on the coordinate system with the origin as the center to form a manipulation area; the fan-shaped area including areas located in the first and second quadrants; taking the manipulation area as the fan-shaped area OPQ as an example.
  • T32 mapping the forward preset action control information, the left-turn preset action control information, and the right-turn preset action control information to the upper vertex, the left vertex, and the right vertex of the manipulation area, respectively; that is, the preset action control information of each vertex includes the speed, the posture information, etc.
  • the step T3 further includes the step of:
  • T33 defining user's input as a control point located in the coordinate system; when the control point is located in the first and second quadrants, the control point located in the first quadrant being a forward-right-motion control point, and forward-right preset action control information being obtained by the interpolation operation of the forward preset action control information and the right-turn preset action control information;
  • control point located in the second quadrant being a forward-left-motion control point;
  • forward-left preset action control information being obtained by the interpolation operation of the forward preset action control information and the left-turn preset action control information;
  • control point located on the Y-axis of the coordinate system being obtained by converting the forward preset action control information according to the distance between the control point and the center;
  • control point located on the arc AQ being obtained by converting the left-turn preset action control information according to the arc length between the control point and the vertex A;
  • the control point located on the arc AP being obtained by converting the right-turn preset action control information according to the arc length between the control point and the vertex A.
  • the control point X1 located in the second quadrant is taken as an example for explanation.
  • the speed calculation process is as follows: first, the speed value VA0 at the point A0 is obtained by conversion according to the arc length AA0; then the speed at the control point is obtained by converting VA0 according to the distance between the point O1 and the center; the posture information is gradually deflected from the straight posture according to the angle θ1. A sketch of this conversion follows.
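The sketch below assumes linear interpolation along the arc and linear scaling with distance from the center; the patent states only that interpolation and arc-length/distance conversion are used, so the exact formulas and the fan's half-span of 75 degrees are assumptions.

```python
import math

def interpolate_speed(point, v_forward, v_left, v_right,
                      radius, half_span_deg=75.0):
    """Step-T33-style conversion of a control point (x, y) in the fan
    OPQ to a speed value."""
    x, y = point
    d = math.hypot(x, y)
    if d == 0.0 or d > radius:
        return 0.0                              # center O or outside the area
    phi = math.degrees(math.atan2(y, x))        # direction of the control point
    t = abs(phi - 90.0) / half_span_deg         # 0 at vertex A, 1 at P or Q
    if t > 1.0:
        return 0.0                              # outside the fan OPQ
    v_end = v_left if phi > 90.0 else v_right
    v_arc = v_forward + t * (v_end - v_forward) # speed at the arc projection A0
    return v_arc * (d / radius)                 # scale by distance from O

# A control point in the second quadrant (forward-left motion).
print(interpolate_speed((-0.35, 0.35), v_forward=1.0, v_left=0.6,
                        v_right=0.6, radius=1.0))
```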
  • step T3 further includes the steps of:
  • T34 interpolating to the lower vertex of the manipulation area according to the forward preset action control information to form backward preset action control information
  • the fan-shaped area further includes areas located in the third and fourth quadrants.
  • in the fan-shaped area, there are blank areas QOQ1 between the second quadrant and the third quadrant and POP1 between the first quadrant and the fourth quadrant, where no control points are disposed. The robot cannot be manipulated through the blank areas.
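A control point can therefore be validated before any interpolation; in this sketch the forward and backward sector widths are illustrative assumptions, chosen only so that blank sectors remain near the X-axis.

```python
import math

def in_manipulation_area(x, y, radius, half_span=75.0, rear_half_span=60.0):
    """Return True if the control point lies in the active forward or
    backward fan, i.e. outside the blank sectors QOQ1 and POP1."""
    d = math.hypot(x, y)
    if d == 0.0 or d > radius:
        return False
    phi = math.degrees(math.atan2(y, x))
    forward = abs(phi - 90.0) <= half_span        # fan OPQ (upper sectors)
    backward = abs(phi + 90.0) <= rear_half_span  # rear fan (lower sectors)
    return forward or backward

print(in_manipulation_area(0.5, 0.5, 1.0))  # True (forward-right sector)
print(in_manipulation_area(0.7, 0.0, 1.0))  # False (blank area near X-axis)
```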
  • an operating disc is provided on a Graphic User Interface for operating the robot, and the operating disc corresponds to the circular area or the fan-shaped area on the coordinate system.
  • the step T4 includes the steps of:
  • T41 operating the operating disc to form an input;
  • T42 calculating, by the operating model, the actual motion control information according to the control point corresponding to the input, so as to control the robot to perform a motion.
  • a manipulation signal can be formed by the user touching the circular or fan-shaped manipulation area on the operating disc.
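As a rough sketch of steps T41 and T42, assuming touch input in screen pixels and reusing the hypothetical action_for_point() from the sketch above (DISC_CENTER and DISC_RADIUS are illustrative values, not from the patent):

```python
# Illustrative disc geometry in screen pixels (assumptions).
DISC_CENTER = (160.0, 160.0)
DISC_RADIUS = 120.0

def on_disc_touch(px: float, py: float):
    """T41/T42: turn a touch on the operating disc into a motion command."""
    # T41: normalize the touch to a control point (+Y up, unit radius).
    x = (px - DISC_CENTER[0]) / DISC_RADIUS
    y = (DISC_CENTER[1] - py) / DISC_RADIUS   # screen Y grows downward
    try:
        # T42: the operating model computes actual motion control info.
        speed, deflection = action_for_point(x, y)
    except ValueError:
        return None                           # blank area: no manipulation
    return {"speed": speed, "deflection": deflection}
```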
  • angle sensors and speed sensors are provided on each single cell body and on each wheel.
  • a second embodiment of the present disclosure further provides a robot control system 30, which is used to control a robot assembled from a main body and at least one wheel.
  • the robot control system 30 includes:
  • a storage module 31 for storing motion-posture information and preset action control information corresponding to the motion posture of the robot;
  • a model-generating module 33 for constructing and forming an operating model according to the preset action control information.
  • the robot control system 30 further includes speed sensors provided on each of the wheels.
  • the speed sensor may be a code-disc encoder for measuring the actual speed of the wheel.
  • the actual speed of the wheel includes the linear speed and the angular speed of the wheel.
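A code-disc encoder yields tick counts, from which both speeds follow. A minimal sketch, assuming a fixed sampling period and illustrative encoder resolution and wheel radius (none of these constants come from the patent):

```python
import math

TICKS_PER_REV = 1024        # assumed encoder resolution
WHEEL_RADIUS_M = 0.03       # assumed wheel radius in meters
SAMPLE_PERIOD_S = 0.02      # assumed sampling period

def wheel_speeds(tick_delta: int):
    """Return (angular speed in rad/s, linear speed in m/s) for one wheel."""
    revs = tick_delta / TICKS_PER_REV
    omega = 2 * math.pi * revs / SAMPLE_PERIOD_S   # angular speed
    return omega, omega * WHEEL_RADIUS_M           # linear speed v = omega * r
```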
  • a third embodiment of the present disclosure further provides a robot control system 40 , and the robot control system 40 further includes:
  • a robot 41 which is assembled from a main body and at least one wheel connected to the main body and has an initial physical structure;
  • the robot is a modular robot
  • the main body includes a plurality of module units
  • the wheels are connected to the module units.
  • the plurality of module units include a cell body and at least one single cell body; each docking part has unique interface identification information; a single cell body directly connected to the cell body is defined as a first-level single cell body; obtaining the initial virtual configuration information of the robot specifically includes the following steps:
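The enumerated steps themselves are not reproduced in this excerpt. Purely as a hypothetical illustration of how unique interface identification information could support a level-by-level walk outward from the cell body, consider the sketch below; connected_modules() is an assumed accessor, not an API named in the patent.

```python
from collections import deque

def discover_configuration(cell_body):
    """Return {module: level}; level 1 = first-level single cell bodies.

    `module.connected_modules()` is assumed to yield
    (interface_id, neighbor_module) pairs for each docking part.
    """
    levels = {cell_body: 0}
    queue = deque([cell_body])
    while queue:
        module = queue.popleft()
        for _interface_id, neighbor in module.connected_modules():
            if neighbor not in levels:
                levels[neighbor] = levels[module] + 1   # one level further out
                queue.append(neighbor)
    return levels
```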
  • the robot control system 40 further includes a controller 45 , and signals can be transmitted between the controller 45 and the robot 41 .
  • the controller 45 may be provided on a remote terminal such as a mobile phone, a tablet computer or a computer. Referring to FIG. 12 A, FIG. 12 B and FIG. 12 C, the controller 45 includes a display screen 451. By manipulating the display screen 451, the preset motion control information under the corresponding motion posture can be edited, saved or set.
  • the display screen 451 includes a motion-posture setting area A corresponding to the first editing button, a speed setting area B corresponding to the second editing button, a virtual-configuration display area C, an operation-video-playing area D, and an editing steering wheel E.
  • the operation process is described below in conjunction with the display interface of the display screen 451. To set the forward-posture preset action control information: first, click the motion-posture setting area A to enter the interface shown in FIG. 12 B; next, move the entity configuration according to the tutorial in the operation-video-playing area D so that the robot assumes the forward posture, and save the corresponding motion-posture information once the adjustment is complete; after the posture frame is saved, click the speed setting area B, then push the robot forward to obtain the motion speed, or obtain the speed through user customization, and save it. The first editing button sets the robot to a corresponding motion posture, and the second editing button sets the speed of the wheel.
  • the setting methods of the left-turn posture and the right-turn posture are the same as the setting methods of the forward posture, which will not be repeated here.
  • the formation of the preset motion control information of the forward-left posture, the forward-right posture, the backward-left posture, and the backward-right posture is the same as the method of the first embodiment, which will not be repeated here.
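As an illustration of what this save flow might produce, the following sketch models a preset action control record holding a posture frame plus a speed; the record type and field names are assumptions, not the patent's data format.

```python
from dataclasses import dataclass, field

@dataclass
class PresetActionControl:
    posture_frame: dict = field(default_factory=dict)  # joint name -> angle
    speed: float = 0.0                                 # wheel speed

presets: dict[str, PresetActionControl] = {}

def save_posture(name: str, joint_angles: dict):
    """Area A step: store the adjusted posture as a posture frame."""
    presets.setdefault(name, PresetActionControl()).posture_frame = dict(joint_angles)

def save_speed(name: str, measured_speed: float):
    """Area B step: attach the measured or user-customized speed."""
    presets[name].speed = measured_speed

# e.g. save_posture("forward", {"joint1": 0.0}); save_speed("forward", 0.8)
```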
  • an operating disc is provided on the controller 45 , and the operating disc corresponds to the circular area or the fan-shaped area on the coordinate system.
  • the user operates the operating disc to form an input; the operating model then calculates the actual motion control information according to the control point corresponding to the input, so as to control the robot to execute the motion.
  • the display screen 451 presents at least one ratio bar, and at least one of the forward speed ratio, the left-turn speed ratio and the right-turn speed ratio is adjusted by manipulating the ratio bar.
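Assuming each ratio bar simply scales the corresponding maximum speed V1/V2/V3 (the patent does not specify the scaling law), a minimal sketch:

```python
ratios = {"forward": 1.0, "left": 1.0, "right": 1.0}

def set_ratio(kind: str, value: float):
    """Update one ratio bar, clamped to [0, 1]."""
    ratios[kind] = max(0.0, min(1.0, value))

def effective_max_speeds(V1: float, V2: float, V3: float):
    """Maximum speeds after applying the ratio bars."""
    return (V1 * ratios["forward"], V2 * ratios["left"], V3 * ratios["right"])
```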
  • a modular robot is further provided, which is configured to execute the robot control method of the first embodiment and its variant embodiments.


Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN202010015756.9A CN111190387B (zh) 2020-01-07 2020-01-07 Robot control method and system
CN202010015756.9 2020-01-07
PCT/CN2021/070428 WO2021139671A1 (zh) 2020-01-07 2021-01-06 Robot control method, control system and modular robot

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/070428 Continuation WO2021139671A1 (zh) 2020-01-07 2021-01-06 Robot control method, control system and modular robot

Publications (1)

Publication Number Publication Date
US20220413511A1 2022-12-29

Family

ID=70706089

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/900,878 Pending US20220413511A1 (en) 2020-01-07 2022-09-01 Robot control method, a robot control system and a modular robot

Country Status (5)

Country Link
US (1) US20220413511A1 (en)
EP (1) EP4089490A4 (en)
JP (1) JP7457814B2 (ja)
CN (1) CN111190387B (zh)
WO (1) WO2021139671A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111190387B (zh) * 2020-01-07 2022-07-05 Beijing Keyi Technology Co., Ltd. Robot control method and system
CN114633259B (zh) * 2022-04-26 2023-08-08 Institute of Medical Support Technology, Academy of Systems Engineering, Academy of Military Sciences Step-distance optimization method and device for a parallel mobile robot

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3257113B2 (ja) * 1993-01-06 2002-02-18 Sony Corporation Remote control system
JP2006313455A (ja) * 2005-05-09 2006-11-16 Funai Electric Co., Ltd. Self-propelled cleaning robot, self-propelled robot, and program for controlling travel of a self-propelled robot
CN101826251A (zh) * 2009-03-05 2010-09-08 Leng Zhongan Remote control method and system
CN205752715U (zh) * 2016-03-31 2016-11-30 Shenzhen Bell Creative Science and Education Co., Ltd. Connection structure and electronic device using the same
CN106054953B (zh) * 2016-05-26 2019-12-17 Guangdong Boliwei Technology Co., Ltd. Scooter speed control system and control method with a safety mechanism
CN108326846B (zh) * 2017-12-19 2020-11-03 Beijing Keyi Technology Co., Ltd. Modular robot and method for calculating positions of its module units
CN108189029B (zh) * 2017-12-19 2020-10-30 Beijing Keyi Technology Co., Ltd. Control system for a modular robot, modular robot system, and method for controlling a modular robot
CN108356806B (zh) * 2017-12-19 2020-12-18 Beijing Keyi Technology Co., Ltd. Modular robot control method and system
CN109648568B (zh) * 2019-01-30 2022-01-04 Shenzhen MegaRobo Technology Co., Ltd. Robot control method, system and storage medium
CN111190387B (zh) * 2020-01-07 2022-07-05 Beijing Keyi Technology Co., Ltd. Robot control method and system

Also Published As

Publication number Publication date
CN111190387A (zh) 2020-05-22
EP4089490A4 (en) 2024-02-28
EP4089490A1 (en) 2022-11-16
JP7457814B2 (ja) 2024-03-28
CN111190387B (zh) 2022-07-05
WO2021139671A1 (zh) 2021-07-15
JP2023509088A (ja) 2023-03-06


Legal Events

Date Code Title Description
AS Assignment

Owner name: BEIJING KEYI TECHNOLOGY CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:YANG, JIANBO;REEL/FRAME:060961/0209

Effective date: 20220829

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION