CN106406328B - Motion control method based on robot development platform - Google Patents


Info

Publication number
CN106406328B
CN106406328B (application CN201610969071.1A)
Authority
CN
China
Prior art keywords
action
instruction
response
motion
data
Prior art date
Legal status
Active
Application number
CN201610969071.1A
Other languages
Chinese (zh)
Other versions
CN106406328A (en)
Inventor
陈辉
于赛赛
洪定安
何仁渊
Current Assignee
Hangzhou Changdong Intelligent Technology Co ltd
Original Assignee
Hangzhou Changdong Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Changdong Intelligent Technology Co ltd filed Critical Hangzhou Changdong Intelligent Technology Co ltd
Priority to CN201610969071.1A
Publication of CN106406328A
Application granted
Publication of CN106406328B
Status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/08: Control of attitude, i.e. control of roll, pitch, or yaw
    • G05D1/0875: Control of attitude, i.e. control of roll, pitch, or yaw specially adapted to water vehicles
    • G05D1/10: Simultaneous control of position or course in three dimensions

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a motion control method based on a robot development platform, comprising an action response method and a motion communication method. The action response method comprises a user action response and a free action response: the user action response handles action instructions defined at the user side, and the free action response handles free-action instructions issued while the robot is idle. Motion communication is transmitted over a bus communication protocol and comprises instruction sending and instruction receiving; instruction receiving accepts the action instructions, and instruction sending transmits them in the motion instruction format. The invention offers flexible, high-precision motion control and greatly improves the lifelikeness of the robot's actions.

Description

Motion control method based on robot development platform
Technical Field
The invention relates to the motion module of a robot development platform, and in particular to a motion control method based on a robot development platform.
Background
In 2016, Google's AlphaGo astonished the world; that year is widely regarded as the first year of the artificial-intelligence era. The robot is a natural carrier of artificial-intelligence technology, so research institutes and companies around the world have launched many robots, from humanoid and service robots to military armed robots; the robot industry today is as vigorous as the mobile internet was in its day. Robots on the market are mainly industrial and service robots, generally customized for specific functions such as welding or floor sweeping; further modification and upgrading of such robots is difficult and essentially requires a redesign. General-purpose robot development platforms have a high barrier to entry, are largely monopolized, and are expensive, so small enterprises and individuals basically cannot afford them. The small robot development platforms that do exist often provide only a few development boards or modules, suitable only for learning or for making simple toys. At present, a robot development platform that is convenient to develop with and highly modular is urgently needed. Within such a platform, motion processing is the core of robot movement, and how to progress from single repetitive motions to intelligent, highly lifelike motion remains a blank in the current research field.
Disclosure of Invention
The invention aims to overcome the problems in the prior art and provides a motion control method based on a robot development platform.
In order to achieve the technical purpose and achieve the technical effect, the invention is realized by the following technical scheme:
a motion control method based on a robot development platform comprises an action response method and a motion communication method, wherein the action response method comprises a user action response and a free action response; the user action response is used for responding to an action instruction defined by the user side; the free motion response is used for responding to the free motion command of the robot in the idle state; the motion communication adopts a bus communication protocol for transmission; the motion communication comprises instruction sending and instruction receiving; the instruction receiving module is used for receiving the action instruction; the command sending is sent in a motion command format.
Further, the user action response comprises the following steps: variable initialization; action instruction analysis (action type judgment, action state judgment, action instruction sending, action instruction execution, action synchronization judgment, action timeout judgment, and action cycle judgment); robot state updating; error judgment; and action instruction deletion.
Furthermore, the user action response mode comprises single action response and continuous action response.
Further, the free action response comprises the following steps: variable initialization, delay setting, idle-state judgment, user-configuration judgment, contextual model setting, contextual model judgment (contextual model execution), free-state model judgment, obstacle avoidance judgment (obstacle avoidance execution), random-number generation and execution-probability judgment, and random action execution.
Further, the action instruction format is frame header + machine code + instruction type + data length + sub command data + check frame + frame tail.
Further, the instruction sending process is as follows: the user side sends an action instruction and waits for a response flag; if the wait times out, the user side is requested to resend the action instruction. After the response flag arrives, the action instruction data are checked: if the action instruction data packet contains errors, the user side is requested to resend it; if there are no errors, the response flag is returned and instruction sending is finished.
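The send-and-retry flow above can be sketched as follows. This is a minimal illustration only: the function names, retry limit, timeout value, and flag encoding are assumptions, not specified in the patent.

```python
def send_instruction(packet, send, wait_for_ack, max_retries=3, timeout=0.5):
    """Send an action-instruction packet and wait for a response flag.

    `send` transmits the raw bytes; `wait_for_ack` blocks for up to
    `timeout` seconds and returns b'ACK', b'NAK', or None on timeout.
    (All names and values here are illustrative, not from the patent.)
    """
    for _ in range(max_retries):
        send(packet)
        flag = wait_for_ack(timeout)
        if flag is None:        # response timed out: resend
            continue
        if flag == b'NAK':      # receiver found a data error: resend
            continue
        if flag == b'ACK':      # data verified correct: sending finished
            return True
    return False                # gave up after max_retries attempts
```

A caller would supply `send` and `wait_for_ack` bound to the actual wireless link; the retry loop itself is what the patent's flow describes.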
Further, the instruction receiving steps are as follows:
wait for reception: wait to receive action instruction data from the user side;
frame header check: once the frame header is verified, begin receiving and storing the user-side action instruction data; if it is wrong, continue waiting;
data reception: receive and store the user-side action instruction data packet;
frame tail check: reception of the user-side action instruction data packet finishes upon receiving a correct frame tail;
data check: check the data length and the check frame contained in the packet to judge whether its data are correct; return a correct response flag if so, and an error response flag if not.
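The receive steps above can be sketched as a small parser. This is an assumed layout, not the patent's implementation: the length field is taken as the third byte of the middle section, there is no byte-stuffing (the patent describes none, so a 0xFD 0xDF pair inside the payload would falsely end the frame), and the CRC check from the data-check step is omitted for brevity.

```python
def receive_packet(stream):
    """Sync on the 0xFE 0xEF frame header, store bytes until the
    0xFD 0xDF frame tail, then verify the declared data length.
    `stream` is any iterable of byte values; returns the middle
    section as bytes, or None on a sync failure or length mismatch."""
    it = iter(stream)
    # frame-header check: wait for 0xFE immediately followed by 0xEF
    prev = None
    for b in it:
        if prev == 0xFE and b == 0xEF:
            break
        prev = b
    else:
        return None             # stream ended before a header appeared
    # data reception: store bytes until the frame tail 0xFD 0xDF
    buf = []
    prev = None
    for b in it:
        if prev == 0xFD and b == 0xDF:
            buf.pop()           # drop the 0xFD that belongs to the tail
            break
        buf.append(b)
        prev = b
    else:
        return None             # stream ended before a tail appeared
    # data check: declared length must equal the bytes actually stored
    if len(buf) < 3 or buf[2] != len(buf):
        return None
    return bytes(buf)
```

The length comparison mirrors the patent's rule that data length counts everything except the four header/tail bytes.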
Further, the sub-command data are transmitted in an action data packet; the action data packet format is action execution value A + action execution value B + action execution value C + priority bit + fault bit + instruction + action execution reserved bit A + action execution reserved bit B.
Furthermore, the bus communication protocol provides sending-priority control, the level of sending priority being determined by the value of the priority bit. When multiple action execution units send execution feedback at the same time, the bus responds first to the unit with the higher sending priority: that unit sends its execution feedback, while the units with lower sending priority abandon sending and switch to receiving instructions.
Further, the action execution values are an angle value, a speed value, and a current value.
The invention provides a motion control method based on a robot development platform, comprising an action response method and a motion communication method. The action response method comprises a user action response and a free action response: the user action response handles action instructions defined at the user side, and the free action response handles free-action instructions issued while the robot is idle. Motion communication is transmitted over a bus communication protocol and comprises instruction sending and instruction receiving; instruction receiving accepts the action instructions, and instruction sending transmits them in the motion instruction format. The invention offers flexible, high-precision motion control and greatly improves the lifelikeness of the robot's actions.
The foregoing is only an overview of the technical solutions of the present invention. To make them more clearly understood and implementable according to the contents of this description, the invention is described in detail below through its preferred embodiments and the accompanying drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of a frame of a motion control method based on a robot development platform according to the present invention;
FIG. 2 is a user action response flow diagram of the present invention;
FIG. 3 is a free-form action response flow diagram of the present invention;
FIG. 4 is an exploded view of a robot of the present invention;
FIG. 5 is a schematic diagram of the internal structure of a robot according to the present invention;
FIG. 6 is a schematic turning diagram of a robot of the present invention;
FIG. 7 is a schematic view of a robot continuous motion response visualization window of the present invention;
reference numbers in the figures: 1, robot; 2, head; 3, tail; 4, bionic fish fin; 5, framework; 6, direction steering engine; 7, auxiliary steering engine; 21, lower jaw; 41, fish fin framework; 60, power steering engine.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Referring to figs. 1 to 7, and as shown in fig. 1, a motion control method based on a robot development platform includes an action response method and a motion communication method. The action response method includes a user action response and a free action response: the user action response handles action instructions defined at the user side, and the free action response handles free-action instructions issued while the robot is idle. Motion communication is transmitted over a bus communication protocol and includes instruction sending and instruction receiving; instruction receiving accepts the action instructions, and instruction sending transmits them in the motion instruction format.
As shown in fig. 2, the processing unit includes a user instruction processing unit configured to respond to instructions issued by the user terminal. User instruction processing comprises: variable initialization, followed by action instruction analysis (action type judgment, action state judgment, action instruction sending, action instruction execution, action synchronization judgment, action timeout judgment, and action cycle judgment), which determines whether the action is a real-time test action and assigns different pointer values to the action-structure pointer to be analyzed according to its classification. The action is then analyzed step by step: the steering engine angle and speed are set according to the action type, timeout, repetition count, the synchronization bit of each action state, the action control mode, and so on. After one round of action analysis, the action type is judged again, the semaphore is released to notify the processing unit that it may continue, the robot state is updated, errors are judged, and the action instruction is deleted.
As shown in fig. 3, the processing unit further includes free action processing, which responds with random instructions while the robot is idle of user-side instructions. The free action processing flow is as follows. Among the priorities, the special case of the contextual model ranks highest: the sensing unit data are read and judged comprehensively, and if a contextual model is matched, the action instruction configured for that model is executed directly. If no contextual model is matched, the sensing unit values are checked first for obstacle avoidance. If no obstacle avoidance is needed, the current robot state is judged and the corresponding action is executed. If the robot is in the free state, a random number within the range of action instruction numbers is generated, the execution probability of the random number is evaluated to decide whether to act, and once the occurrence count is reached, the instruction corresponding to that random number is executed.
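One pass of the priority chain above can be sketched as follows. All names are illustrative, and the occurrence-count mechanic is simplified into a single probability gate, which is an assumption rather than the patent's exact mechanism.

```python
import random

def free_action_step(context_matched, obstacle_ahead, robot_state,
                     num_actions, exec_probability=0.3):
    """One pass of the free-action response, following the priority
    order in the text: contextual model > obstacle avoidance >
    state-specific action > random action. Returns a label naming
    the chosen action (labels and the 0.3 probability are assumed)."""
    if context_matched:                  # contextual model: highest priority
        return "contextual_action"
    if obstacle_ahead:                   # next: obstacle avoidance
        return "avoid_obstacle"
    if robot_state != "free":            # next: action tied to current state
        return f"state_action:{robot_state}"
    # free state: pick a random instruction, gated by execution probability
    if random.random() < exec_probability:
        return f"random_action:{random.randrange(num_actions)}"
    return "idle"
```

The early returns make the priority ordering explicit: a matched contextual model preempts everything else, exactly as the text requires.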
As shown in fig. 4, a schematic exploded view of the robot 1 of a specific embodiment: the robot 1 includes a head 2, a tail 3, bionic fish fins 4, and a skeleton 5. As shown in fig. 5, a direction steering engine 6 is installed on the front part of the skeleton 5, and a power steering engine 60 is installed at both the middle and the tail of the skeleton 5; the direction steering engine responds to direction instructions from the processing unit, and the power steering engines respond to forward instructions from the processing unit. The skeleton 5 also carries 4 auxiliary steering engines 7, arranged symmetrically in pairs, front and rear, inside the bionic fish fins 4; each auxiliary steering engine 7 is connected to a bionic fish fin 4. As shown in fig. 6, when the robot 1 turns, the auxiliary steering engine 7 on one side moves and drives the bionic fish fin 4 on that side: to turn right, the left auxiliary steering engine 7 drives the left bionic fish fin 4, completing an assisted right turn; turning left works the same way, mirrored. The bionic fish fins 4 also serve as auxiliary propulsion: when the robot 1 is not turning, the auxiliary steering engines 7 on both sides drive the bionic fish fins 4 on both sides, moving either synchronously or asynchronously.
Preferably, the action execution unit further comprises an action customizing unit, which includes a sinking-and-floating action unit and a reverse power action unit. The sinking-and-floating action unit responds to floating or sinking instructions from the processing unit; the reverse power action unit responds to backward instructions from the processing unit. When the sinking-and-floating action unit responds to a floating or sinking instruction, the auxiliary steering engines 7 on both sides drive the bionic fish fins 4 on both sides, in synchronous or asynchronous floating modes, realizing both lifelike motion and assisted floating and sinking.
Preferably, the sensing unit comprises a gyroscope sensor, an acceleration sensor, a distance sensor, an angle sensor, a humidity sensor, a temperature sensor, a water pressure sensor, a current sensor, and a Hall sensor. The gyroscope sensor senses the robot's position and attitude information and feeds it back to the processing unit; the acceleration sensor senses the robot's motion acceleration; the distance sensor, which includes infrared, ultrasonic, and sonar ranging sensors, senses the distance between the robot and the external environment; the angle sensor senses the rotation angle of the steering engine; the humidity sensor senses the humidity inside the steering engine; the temperature sensor senses the temperature inside the steering engine; the water pressure sensor senses the water pressure in the sinking-and-floating action unit; the current sensor senses the phase current in the steering engine; and the Hall sensor senses the position of the motor rotor in the steering engine. Each feeds its reading back to the processing unit. The sonar sensor is dedicated to underwater ranging: sonar signals attenuate little as they propagate through water, so the signal is very stable.
In this application, two sonar sensors are installed below the fish head at an included angle of about 20 degrees, to detect distance changes ahead-left and ahead-right respectively. While the robotic fish moves, the processing unit intermittently commands the sonar sensors to emit sonar signals and receives the return signals in real time. From the measured time difference and the propagation speed of sonar in water, the distance between the robotic fish and an obstacle can be computed: the interval from emission to reception of the reflected signal is the sonar round-trip time, and half the product of this time and the propagation speed in water is the distance. The processing unit judges distance changes in real time and steers the robot 1 left or right in advance, achieving obstacle avoidance. The gyroscope, acceleration, and water pressure sensors, combined with sonar ranging in two directions (ahead and below), allow the motion attitude and planned motion path of the robot 1 to be detected in real time. A flowmeter measures the water inflow and outflow of the water storage tank, a level meter detects the depth of the robot in the pool, and the water pressure sensor detects the amount of water stored in the tank. Several visible-light sensors are mounted on each side of the robot 1's body; they detect illumination intensity, and by comparing the light intensity on the two sides the heading is controlled, realizing phototactic or photophobic movement.
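The range computation described above (half the product of round-trip time and propagation speed) is a one-liner; the 1500 m/s default below is a typical speed of sound in water and is an assumed constant, not a value from the patent.

```python
def sonar_distance(round_trip_time_s, speed_in_water_m_s=1500.0):
    """Distance to an obstacle from the sonar round-trip time.
    The echo covers the gap twice, hence distance = time * speed / 2.
    (1500 m/s is an assumed typical speed of sound in water.)"""
    return round_trip_time_s * speed_in_water_m_s / 2.0
```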
The robot body is provided with laser receivers that receive specially modulated laser beams; by pointing a laser pen at different receivers, a user can trigger the robot to perform the corresponding actions. Some moving parts of the robot 1 may contact or injure people and other animals, so pressure sensors and microswitches are added at those parts to prevent accidents. The current sensor detects the phase current of the brushless motor in the steering engine; the Hall sensor detects the rotor position of the brushless motor in the steering engine; the temperature sensor detects the temperature of the motor module; the humidity sensor detects whether water has entered the motor in the steering engine; and the magnetic angle sensor detects the rotation angle of the motor in the steering engine.
Preferably, the motion communication includes data sending and data receiving and is transmitted using an action instruction packet; as shown in table 1, the action instruction packet format is frame header + machine code + instruction type + data length + sub-command data + check frame + frame tail.
TABLE 1
frame header (2 bytes) | machine code | instruction type | data length | sub-command | sub-data | check frame (CRC) | frame tail (2 bytes)
Description of the action instruction packet format:
(1) frame header: two bytes, 0xfe and 0xef;
(2) machine code: different devices define different codes to distinguish them;
(3) instruction type: indicates the function the data packet is to perform; user-definable;
(4) data length: the length of the data after removing the four bytes of the frame header and frame tail;
(5) sub-command: a sub-command class under the instruction type, i.e. a secondary command;
(6) sub-data: the transmitted user-side data;
(7) check frame: a CRC, computed over the data from the frame header up to the position of the CRC value;
(8) frame tail: two bytes, 0xfd and 0xdf.
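A packet builder following the format above might look like this. The patent fixes the header/tail bytes and says a CRC covers everything before the CRC field, but it names neither the CRC variant nor the field widths; CRC-16/MODBUS, one-byte fields, and a little-endian 2-byte CRC are assumptions here.

```python
HEADER = bytes([0xFE, 0xEF])   # frame header per the format description
TAIL   = bytes([0xFD, 0xDF])   # frame tail

def crc16(data: bytes) -> int:
    """CRC-16/MODBUS (init 0xFFFF, reflected polynomial 0xA001).
    The patent only says 'CRC'; this variant is an assumption."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def build_packet(machine_code: int, instr_type: int,
                 subcommand: int, subdata: bytes) -> bytes:
    """Assemble header + machine code + instruction type + data length +
    sub-command + sub-data + CRC + tail."""
    # data length counts everything except the 4 header/tail bytes:
    # machine code, type, the length byte itself, sub-command, sub-data, CRC
    length = 4 + len(subdata) + 2
    pre_crc = (HEADER
               + bytes([machine_code, instr_type, length, subcommand])
               + subdata)
    crc = crc16(pre_crc)       # CRC over header through end of sub-data
    return pre_crc + crc.to_bytes(2, "little") + TAIL
```

The `length` rule is the only subtle point: it includes its own byte and the CRC, because the patent defines data length as the total minus the four framing bytes.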
As shown in fig. 2, in data sending, upon a send request from the user side, the data are assembled into a series of data packets according to the action instruction data format and transmitted wirelessly; the sender then waits for a response flag and starts a response-timeout count. If the timeout elapses without a response flag, or the received response flag indicates an error, the data are resent; if a correct response flag is received before the timeout, sending ends.
The data receiving end, after receiving the first byte, judges whether it is 0xfe; if so, it receives the second byte and judges whether it is 0xef. With the frame header thus verified, it begins storing the valid data that follow and records the length of the received data. When a received byte is 0xfd and the next is 0xdf, the end of the data packet has been reached. The length value in the data packet is then compared with the length recorded during reception: if they are unequal, data were lost during sending, and the packet is erroneous and unusable. If they are equal, the packet length is correct; a check value is then computed by CRC and compared with the CRC value in the packet. If they match, the data are correct; if not, the data were corrupted during sending for some other reason, the packet is unusable, and a resend is requested. If the received data pass these checks, a correct response flag is returned; otherwise an error response flag is returned and the sender is requested to resend, ensuring that the data of every communication are correct.
The instruction types are shown in Table 2.
TABLE 2
Instruction type value  Definition
0x01  User side 1 (mobile phone/pad) issues a read command
0x02  User side 2 (action simulator) issues a read command
0x03  User side 3 (PC configuration software) issues a read command
0x04  User side 1 (mobile phone/pad) issues a write command
0x05  User side 2 (action simulator) issues a write command
0x06  User side 3 (PC configuration software) issues a write command
0x07  User side 1 (mobile phone/pad) issues a control command
0x11  The terminal feeds back the read-command result to user side 1 (mobile phone/pad)
0x12  The terminal feeds back the read-command result to user side 2 (action simulator)
0x13  The terminal feeds back the read-command result to user side 3 (PC configuration software)
0x14  The terminal feeds back the write-command result to user side 1 (mobile phone/pad)
0x15  The terminal feeds back the write-command result to user side 2 (action simulator)
0x16  The terminal feeds back the write-command result to user side 3 (PC configuration software)
0x17  The terminal feeds back control-command result 1 to user side 1 (mobile phone/pad)
0x18  The terminal feeds back control-command result 2 to user side 1 (mobile phone/pad)
0x19  The terminal feeds back control-command result 3 to user side 1 (mobile phone/pad)
Table 3 lists the sub-command types.
TABLE 3
Sub-command type value  Definition
0x01  Robot state update
0x02  At rest
0x03  Free play
0x11  Query robot information
0x12  Query steering engine information
0x13  Query sinking/floating module information
0x14  Query CAN communication error rate
0x21  Real-time steering engine control
0x31  Write basic action combination
0x32  Test basic action combination
0x33  Write user configuration data
0x34  Erase command flash
0x35  IAP upgrade command
0x36  Read user configuration
0x37  Read action command
0x38  Set steering engine absolute position
0x39  Put the robot into free mode
0x3A  Put the robot into the idle state
0x41  Robot error code
0x41  Put the device into CAN flashing mode
0x42  Put the master control into CAN programming mode
Preferably, a data structure struct { data1; data2; data3; data4; data5; data6; data7; data8; } is defined. Its 8 bytes of data correspond to action execution value A + action execution value B + action execution value C + priority bit + fault bit + instruction + action execution reserved bit A + action execution reserved bit B, and the meaning of each byte differs between action execution units:
(1) steering engine execution module: data1 is the angle of the steering engine; data2 its speed; data3 its current; data4 the steering engine module ID; data5 a command; data6 the steering engine's fault information; data7 and data8 are reserved.
(2) servo motor execution module: data1 is the angle of the servo motor; data2 its speed; data3 its current; data4 the servo motor module ID; data5 a command; data6 the servo motor's fault information; data7 and data8 are reserved.
(3) stepping motor module: data1 is the angle of the stepping motor; data2 its speed; data3 its current; data4 the stepping motor module ID; data5 a command; data6 the stepping motor's fault information; data7 and data8 are reserved.
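Packing such an 8-byte frame is straightforward with the standard-library `struct` module. The field order follows the steering-engine mapping in (1); one unsigned byte per field is an assumption, since the patent fixes only the total of 8 bytes.

```python
import struct

def pack_action_frame(angle, speed, current, module_id,
                      command, fault=0, reserved_a=0, reserved_b=0):
    """Pack the 8-byte action data frame for a steering-engine unit:
    angle, speed, current, module ID, command, fault information,
    and two reserved bytes. (Field widths of one byte each are an
    assumption; the patent only specifies 8 bytes in total.)"""
    return struct.pack("8B", angle, speed, current, module_id,
                       command, fault, reserved_a, reserved_b)
```

The same packer serves the servo and stepping motor modules, since all three share the byte layout and differ only in what the values describe.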
Taking the robot shown in fig. 3 as an example (hereinafter called the robotic fish), the forward-motion instruction code is as follows:
FE FF 00 06 45 00 32 00 00 BF FB 00 00 00 00 40 00 02 00 02 00 00 00 03 00 18 00 03 00 40 0D 07 00 00 00 01 50 3C 00 02 3C 32 00 03 1E 14 00 03 00 18 00 03 00 40 0D 07 00 00 00 01 50 C4 FF 02 3C CE FF 03 1E EC FF 00 00 00 DA F3 9A 40 FD DF
the command controls the steering engine of the fish head of the robot fish to move forward at a speed of 30 degrees within an angle range of-20 degrees to 20 degrees, the steering engine of the fish body of the robot fish to move forward at a speed of 60 degrees within an angle range of-50 degrees to 50 degrees, and the steering engine of the fish tail of the robot fish to swing at a speed of 80 degrees within an angle range of-60 degrees to 60 degrees.
When the bus is idle, any action execution unit may send data, and the unit that accesses the bus first obtains control. In the bus protocol, all messages are sent in a fixed format. When two or more action execution units start sending messages at the same time, priority is decided by the identifier ID: the ID indicates not a destination address but the priority of the message accessing the bus. Arbitration is then performed bit by bit over each message ID. The unit that wins arbitration (judged highest priority) continues sending its feedback information to the main control module, while the units that lose arbitration immediately stop sending and enter the receiving state.
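The bitwise arbitration described above can be modeled as follows. This sketch assumes CAN-style conventions, which the patent does not state explicitly: an 11-bit ID where a dominant 0 bit beats a recessive 1, so the numerically lowest ID wins the bus.

```python
def arbitrate(pending_ids, id_bits=11):
    """Bit-by-bit arbitration over message IDs, most significant bit
    first. At each bit, units driving a dominant 0 win over those
    driving a recessive 1; the surviving unit is the numerically
    lowest ID. Returns (winner, losers); losers stop sending and
    switch to receiving. (11-bit IDs and dominant-0 semantics are
    assumptions modeled on CAN.)"""
    contenders = set(pending_ids)
    for bit in range(id_bits - 1, -1, -1):
        dominant = {i for i in contenders if not (i >> bit) & 1}
        if dominant:              # any dominant bit silences the rest
            contenders = dominant
    (winner,) = contenders        # exactly one unit survives arbitration
    losers = sorted(set(pending_ids) - {winner})
    return winner, losers
```

Because dominant bits always survive, the loop is equivalent to taking the minimum ID; writing it bitwise shows why arbitration loses no bus time and damages no frame.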
The bus can determine whether an error is a temporary data error (e.g., external noise) or a persistent one (e.g., a failure inside a unit, a driver failure, or a broken wire). With this function, when a persistent data error occurs on the bus, the unit causing the failure can be isolated from it. Multiple units can be connected to the bus simultaneously, and the total number of connectable units is theoretically unlimited; in practice it is limited by signal delay on the bus and by the electrical load. Lowering the communication speed increases the number of connectable units; raising it reduces them.
During communication, the main action execution unit transmits data packets, defined by a structure, onto the bus. The bus filter of each secondary action execution unit mounted on the bus automatically inspects the ID contained in the data frame: if the ID does not belong to the main action execution unit, the packet is filtered out and the remaining data is not received. If the filter identifies the frame ID as that of the main action execution unit, it receives the whole frame, parses the command in the data packet, executes the corresponding action according to the command code and data, and feeds the result back to the user side.
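The filtering step can be sketched as below. The frame layout (a one-byte ID followed by the payload) is an illustrative assumption for the sketch; the patent only states that the filter inspects the ID field and drops non-matching frames.

```python
def filter_frame(frame, accepted_id):
    """Accept or reject a raw frame by its ID field.

    Minimal sketch of a secondary unit's bus filter. Assumed layout
    (not specified in the patent): byte 0 is the sender ID, the rest
    is the payload handed on to command parsing.
    """
    if not frame or frame[0] != accepted_id:
        return None          # filtered out: stop receiving this frame
    return bytes(frame[1:])  # pass the payload on for command parsing
```

In hardware this comparison typically happens in the controller's acceptance filter, so frames from other senders never interrupt the host processor at all.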
Preferably, the user action response mode comprises a single action response and a continuous action response; the command code format for each specific type is shown in Table 4:
TABLE 4
[Table 4 is reproduced as an image in the original publication; its contents are not available in this text.]
As shown in Fig. 7, when the user side needs to produce various action combinations during continuous action response, command codes already established on the user side can be selected in the work control window to form an action group, and several action groups can in turn be selected to form a combined command code. After a combined command code is generated, the user can click the test button to check whether the resulting action achieves the expected effect and, if not, adjust the parameters above until the requirement is met. A sensor can also be associated with an action group, so that the corresponding action group command code is executed when the sensor value falls within a set threshold range. Once a combined command code meeting the requirements has been generated, it is named and saved to a local database; at the same time, command codes from other users' action groups can be downloaded and updated to the user's robot hardware platform, where the command codes are translated by the hardware system. The user may also choose to upload the combined command code to a cloud server for sharing or sale.
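The sensor/action-group association described above can be sketched as a small rule table. The class and method names here are illustrative, not taken from the patent; the point is only the mechanism of firing an action group's command codes when a reading enters its threshold range.

```python
class ActionGroupTrigger:
    """Run an action group when a sensor reading enters a threshold range.

    A sketch of the sensor/action-group association; names are assumed
    for illustration.
    """

    def __init__(self):
        self.rules = []  # (sensor_name, low, high, command_codes)

    def associate(self, sensor, low, high, command_codes):
        """Bind an action group's command codes to a sensor threshold range."""
        self.rules.append((sensor, low, high, list(command_codes)))

    def on_reading(self, sensor, value):
        """Return the command codes of every action group whose range matches."""
        fired = []
        for name, low, high, codes in self.rules:
            if name == sensor and low <= value <= high:
                fired.extend(codes)
        return fired
```

A reading outside every registered range fires nothing, which matches the description: the action group executes only when the sensor value meets the set threshold range.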
The invention provides a motion control method based on a robot development platform, comprising an action response method and a motion communication method. The action response method comprises a user action response and a free action response: the user action response responds to action instructions defined by the user side, and the free action response responds to free action instructions issued when the robot is in an idle state. Motion communication is transmitted over a bus communication protocol and comprises instruction sending and instruction receiving; instruction receiving is used for receiving the action instructions, and instruction sending transmits them in the action instruction format. The invention offers flexible motion control and high control precision, and greatly improves the lifelikeness of the robot's actions.
The foregoing is merely a preferred embodiment of the invention and is not intended to limit it in any way; the invention may be readily implemented by those of ordinary skill in the art as illustrated in the accompanying drawings and described above. However, those skilled in the art will appreciate that the disclosed conception and specific embodiments may readily serve as a basis for designing or modifying other structures for carrying out the same purposes without departing from the scope of the invention as defined by the appended claims; likewise, any equivalent changes, modifications, and evolutions of the above embodiments remain within the protection scope of the technical solution of the invention.

Claims (9)

1. A motion control method based on a robot development platform comprises an action response method and a motion communication method, and is characterized in that: the action response method comprises a user action response and a free action response; the user action response is used for responding to an action instruction defined by the user side; the free motion response is used for responding to the free motion command of the robot in the idle state; the motion communication adopts a bus communication protocol for transmission; the motion communication comprises instruction sending and instruction receiving; the instruction receiving is used for receiving the action instruction; the instruction sending is sent in an action instruction format; the free-motion response comprises the following steps: initializing variables, setting delay, judging an idle state, judging user configuration, setting a contextual model, judging the contextual model, judging a free state model, avoiding obstacles, generating random numbers, judging execution probability and executing random actions; when a free motion processing flow needs to be carried out, the contextual model is judged first, with the following steps: detecting data of the sensing units, and judging whether the data are in a set contextual model; when the result of the contextual model judgment accords with the set contextual model, directly executing an action command of the set contextual model; when the result of the contextual model judgment does not accord with the set contextual model, performing the obstacle avoidance judgment operation; when the judgment result of the obstacle avoidance judgment is that obstacle avoidance is needed, performing the obstacle avoidance operation; when the judgment result of the obstacle avoidance judgment is that obstacle avoidance is not needed, judging that the current robot is in a free state; and when the robot is in a free state, generating a random number to judge the execution probability and executing random actions.
2. The method of claim 1, wherein the user action response comprises the steps of: initializing variables, analyzing action instructions, updating the state of the robot, judging errors and deleting the action instructions.
3. The motion control method based on the robot development platform according to claim 2, characterized in that: the user action response mode comprises single action response and continuous action response.
4. The motion control method based on the robot development platform according to claim 1, characterized in that: the action instruction format is frame head + machine code number + instruction type + data length + sub-command data + check frame + frame tail.
5. The motion control method based on the robot development platform according to claim 4, characterized in that: the instruction sending process comprises the following steps: the user side sends an action command, waits for a response mark, waits for overtime, requests the user side to resend the action command, checks action command data after receiving the response mark, requests the user side to resend an action command data packet if the action command data packet has errors, returns the response mark if the action command data packet has no errors, and finishes command sending.
6. The method of claim 4, wherein the command receiving step comprises:
waiting for receiving, and waiting for receiving action instruction data of a user side;
frame header checking, starting to receive and store user side action instruction data after no error exists, and continuing to wait if the error exists;
receiving data, and receiving and storing a user side action instruction data packet;
frame end checking to finish receiving the user side action instruction data packet by receiving a correct frame end;
and data checking, namely checking the data length and the check frame contained in the user side action instruction data packet, judging whether the data contained in the user side action instruction data packet is correct or not, returning a response correct mark if the data is correct, and returning a response error mark if the data is not correct.
7. The motion control method based on the robot development platform according to claim 4, characterized in that: the sub-command data is transmitted in an action data packet, and the action data packet format comprises an action execution value A + an action execution value B + an action execution value C + a priority bit + a fault bit + an instruction + an action execution reservation bit A + an action execution reservation bit B.
8. The motion control method based on the robot development platform according to claim 7, characterized in that: the bus communication protocol provides send-priority control, the send priority level being determined by the size of the priority bit; when a plurality of action execution units send execution-action feedback information, the bus responds first to the action execution unit with the higher send priority, which transmits its feedback information, while the action execution units with lower send priority abandon sending and switch to receiving instructions.
9. The motion control method based on the robot development platform according to claim 8, characterized in that: the action execution values are an angle value, a speed value and a current value.
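The action instruction format and checking steps of claims 4–6 can be sketched as follows. The patent specifies only the field order (frame head + machine code number + instruction type + data length + sub-command data + check frame + frame tail); the concrete byte values for the head/tail markers and the XOR checksum are illustrative assumptions.

```python
# Illustrative constants: the patent fixes the field order but not the
# byte values, so these markers are assumptions for the sketch.
FRAME_HEAD, FRAME_TAIL = 0xAA, 0x55

def encode_instruction(machine_code, instr_type, payload):
    """Build one action instruction frame; the XOR check frame is assumed."""
    body = bytes([machine_code, instr_type, len(payload)]) + bytes(payload)
    check = 0
    for b in body:
        check ^= b
    return bytes([FRAME_HEAD]) + body + bytes([check, FRAME_TAIL])

def decode_instruction(frame):
    """Validate frame head, frame tail, data length and check frame.

    Returns (machine_code, instr_type, payload) on success, or None so the
    receiver can return a response-error mark and request a resend.
    """
    if len(frame) < 6 or frame[0] != FRAME_HEAD or frame[-1] != FRAME_TAIL:
        return None                      # frame head / frame tail check failed
    body, check = frame[1:-2], frame[-2]
    x = 0
    for b in body:
        x ^= b
    if x != check or body[2] != len(body) - 3:
        return None                      # check frame or data length mismatch
    return body[0], body[1], bytes(body[3:])
```

The sub-command payload would carry the action data packet of claim 7 (execution values A/B/C, priority bit, fault bit, instruction, and the two reserved bits); a `None` result from `decode_instruction` corresponds to returning the response-error mark and asking the user side to resend the packet.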
CN201610969071.1A 2016-11-05 2016-11-05 Motion control method based on robot development platform Active CN106406328B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610969071.1A CN106406328B (en) 2016-11-05 2016-11-05 Motion control method based on robot development platform


Publications (2)

Publication Number Publication Date
CN106406328A CN106406328A (en) 2017-02-15
CN106406328B true CN106406328B (en) 2020-04-03

Family

ID=58014395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610969071.1A Active CN106406328B (en) 2016-11-05 2016-11-05 Motion control method based on robot development platform

Country Status (1)

Country Link
CN (1) CN106406328B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107135049B (en) * 2017-04-19 2020-08-14 北京航天自动控制研究所 Reliable asynchronous communication method facing discrete data stream
CN107562017B (en) * 2017-08-06 2019-08-16 苏州镁伽智能制造科技有限公司 Parameter edit methods, computer-readable medium and the computer of control parts of motion
CN109960178B (en) * 2017-12-26 2020-10-16 深圳市优必选科技有限公司 Robot and joint motion control method and device thereof

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1310795A (en) * 1998-05-20 2001-08-29 创造者有限公司 Intelligent toy
CN1416381A (en) * 2000-12-06 2003-05-07 索尼公司 Robot device, method for controlling motion of robot device, and system for controlling motion of robot device
CN1553845A (en) * 2001-11-07 2004-12-08 索尼公司 Robot system and robot apparatus control method
CN1612768A (en) * 2001-03-29 2005-05-04 布拉布兰德技术开发有限公司 Remote control system
CN101262437A (en) * 2008-04-17 2008-09-10 中兴通讯股份有限公司 A method for state migration of stream control transmission protocol
CN101362334A (en) * 2008-09-25 2009-02-11 塔米智能科技(北京)有限公司 Intelligent robot and operation method thereof
CN102046337A (en) * 2008-04-09 2011-05-04 奥尔德巴伦机器人公司 Control-command architecture for a mobile robot using articulated limbs

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101227366B (en) * 2007-12-11 2011-02-02 北京豪沃尔科技发展股份有限公司 Anti-conflict method of bus transfer data



Similar Documents

Publication Publication Date Title
CN106383521B (en) Motion processing module based on robot development platform
CN111123963B (en) Unknown environment autonomous navigation system and method based on reinforcement learning
CN112325884B (en) DWA-based ROS robot local path planning method
CN106406328B (en) Motion control method based on robot development platform
CN113341935A (en) Vehicle testing method, device, testing equipment, system and storage medium
CN112631314B (en) Robot control method and system based on multi-line laser radar and event camera SLAM
US10996678B2 (en) Obstacle avoidance method and system for robot and robot using the same
CN109163724A (en) Multiple target point autonomous navigation method based on Turtlebot2 robot building map
CN204423154U (en) A kind of automatic charging toy robot based on independent navigation
CN106648614B (en) Robot development system architecture based on modular platform and main control unit thereof
CN106547558A (en) It is a kind of to be based on modularization robot platform development system
CN106364650A (en) Bionic robot fish
US20210141381A1 (en) Information processing device, information processing system, behavior planning method, and computer program
CN116534059B (en) Adaptive perception path decision method, device, computer equipment and storage medium
CN115619869B (en) Positioning method and device of automatic guiding transport vehicle and automatic guiding transport vehicle
CN115657494A (en) Virtual object simulation method, device, equipment and storage medium
US11216966B2 (en) Systems and methods for automated product measurement
CN110764511A (en) Mobile robot with multi-sensor fusion and control method thereof
Shang et al. Design and Implementation of a Two-Car Tracking System
CN110857861A (en) Trajectory planning method and system
KR102652927B1 (en) A drone for surveying
del Rosario et al. Development of a Vision Based Parking Monitoring System Using Quadrotor UAV
Caprari et al. Robot navigation in centimeter range labyrinths
KR102366333B1 (en) Method for providing artificial intelligence platform
CN113158779B (en) Walking method, walking device and computer storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant