CN108051001A - Robot movement control method, system and inertial sensing control device - Google Patents

Robot movement control method, system and inertial sensing control device Download PDF

Info

Publication number
CN108051001A
CN108051001A (application CN201711232485.7A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711232485.7A
Other languages
Chinese (zh)
Other versions
CN108051001B (en)
Inventor
金学波
项娜
杨念香
孔建磊
苏婷立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Technology and Business University
Original Assignee
Beijing Technology and Business University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Technology and Business University filed Critical Beijing Technology and Business University
Priority to CN201711232485.7A priority Critical patent/CN108051001B/en
Publication of CN108051001A publication Critical patent/CN108051001A/en
Application granted granted Critical
Publication of CN108051001B publication Critical patent/CN108051001B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The present invention relates to a robot movement control method, comprising: obtaining angular-velocity data from an inertial sensor and filtering it as a pre-processing step; establishing a quaternion differential equation from the angular-velocity data and solving it with the Runge-Kutta method to obtain an attitude matrix containing the target attitude angles; converting the target attitude angles from the carrier coordinate system to the navigation coordinate system; excluding the target attitude angles in the navigation coordinate system that fall outside a threshold range; and controlling the robot motion according to the target attitude angles within the threshold range. The invention can control a robot with an inertial sensor, achieves high accuracy and a good online recognition effect, has strong generality and promising application prospects. The present invention also provides a robot movement control system and an inertial sensing control device.

Description

Robot movement control method, system and inertial sensing control device
Technical field
The present invention relates to the technical field of robot control, and more particularly to a robot movement control method and system and an inertial sensing control device.
Background art
Human-computer interaction plays an important role in many emerging applications, and interaction between people and robots is an important topic in robotics, especially in the field of assistive robots. The search for more natural and more human-friendly ways of interaction has never stopped. Using gestures instead of complicated control procedures — operating a robot simply and conveniently, issuing commands to it and interacting with it — has become a research hotspot.
In essence, gesture recognition perceives the user's operating intention from small hand movements and belongs to the field of multimodal interaction; its research involves a series of related disciplines such as pattern recognition, robotics, image processing and computer vision. Research on gesture recognition not only promotes the development of these disciplines to a certain extent, but also has great practical significance owing to the inherent advantages of gesture-based interaction.
At present there are two main approaches to gesture recognition. The first is vision-based; it was developed earlier and is relatively mature, but it imposes strict requirements on equipment and environment, so its use is rather limited. The second is based on inertial sensors; it is not affected by the environment or lighting and recognizes gestures mainly by measuring changes in acceleration and angular velocity, but inertial sensors suffer from drift errors, so for small motions such as finger movements the judgement can still be inaccurate and the precision low.
Summary of the invention
The technical problem to be solved by the present invention is, in view of the deficiencies of the prior art, to provide a robot movement control method, a system and an inertial sensing control device.
The technical solution of the present invention for solving the above technical problem is as follows: a robot movement control method, comprising:
S1, obtaining the angular-velocity data of an inertial sensor;
S2, applying online adaptive filtering to the angular-velocity data as a pre-processing step;
S3, establishing a quaternion differential equation from the online-adaptively filtered angular-velocity data and solving it with the Runge-Kutta method to obtain an attitude matrix containing the target attitude angles;
S4, converting the target attitude angles from the carrier coordinate system to the navigation coordinate system;
S5, excluding the target attitude angles in the navigation coordinate system that fall outside the threshold range;
S6, controlling the robot motion according to the target attitude angles within the threshold range.
Another technical solution of the present invention for solving the above technical problem is as follows: a robot movement control system, comprising:
an acquisition unit for obtaining the angular-velocity data of an inertial sensor;
a pre-processing unit for applying online adaptive filtering to the angular-velocity data;
a processing unit for establishing a quaternion differential equation from the online-adaptively filtered angular-velocity data and solving it with the Runge-Kutta method to obtain an attitude matrix containing the target attitude angles;
a coordinate-system conversion unit for converting the target attitude angles from the carrier coordinate system to the navigation coordinate system;
a screening unit for excluding the target attitude angles in the navigation coordinate system that fall outside the threshold range;
a control unit for controlling the robot motion according to the target attitude angles within the threshold range.
Another technical solution of the present invention for solving the above technical problem is as follows: an inertial sensing control device comprising the robot movement control system of the above technical solution, the inertial sensing control device communicating wirelessly with the robot.
The beneficial effects of the invention are as follows: to process the sensor data in real time, the online adaptive filtering method realizes online denoising, which reduces the influence of noise on later attitude-matrix updates and makes the attitude angles solved from the attitude matrix more accurate. The attitude matrix is described with the quaternion method and obtained by solving a differential equation, which requires little computation, achieves high precision and avoids being trapped at a "singularity". Thresholds exclude faulty finger operations, so different small gesture motions can be recognized. The invention can control a robot with an inertial sensor, achieves high accuracy and a good online recognition effect, has strong generality and promising application prospects.
Description of the drawings
Fig. 1 is a schematic flow chart of a robot movement control method provided by an embodiment of the present invention;
Fig. 2 is the signal-processing flow chart of the system provided by an embodiment of the present invention;
Fig. 3 shows the variation of the pitch angle and the heading angle while the finger moves up and down, in an embodiment of the present invention;
Fig. 4 shows the variation of the pitch angle and the heading angle while the finger moves left and right, in an embodiment of the present invention;
Fig. 5 is a schematic block diagram of a robot movement control system provided by an embodiment of the present invention.
Detailed description of the embodiments
The principles and features of the present invention are described below with reference to the accompanying drawings; the given examples serve only to explain the invention and are not intended to limit its scope.
Fig. 1 is a schematic flow chart of a robot movement control method provided by an embodiment of the present invention. As shown in Fig. 1, the method includes:
S1, obtaining the angular-velocity data of the inertial sensor, which may be worn on the user's finger;
S2, applying online adaptive filtering to the angular-velocity data as a pre-processing step;
S3, establishing a quaternion differential equation from the online-adaptively filtered angular-velocity data and solving it with the Runge-Kutta method to obtain an attitude matrix containing the target attitude angles;
S4, converting the target attitude angles from the carrier coordinate system to the navigation coordinate system;
S5, excluding the target attitude angles in the navigation coordinate system that fall outside the threshold range;
S6, controlling the robot motion according to the target attitude angles within the threshold range.
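The six steps above can be sketched end-to-end for a single axis. The following Python sketch is illustrative only: a 3-point moving average stands in for the adaptive filter of S2, and direct integration of one gyro axis stands in for the quaternion/Runge-Kutta attitude update of S3-S4; the function name and the threshold band are hypothetical.

```python
import numpy as np

def control_commands(gyro, h=0.01, band=(20.0, 50.0)):
    """Illustrative S1-S6 loop for one axis: smooth the angular-rate
    stream (S2 stand-in), integrate it to a pitch angle in degrees
    (S3-S4 stand-in), then keep only samples inside the threshold band
    (S5) and map them to a forward command "G" (S6)."""
    gyro = np.asarray(gyro, dtype=float)                          # S1: raw rates, rad/s
    smoothed = np.convolve(gyro, np.ones(3) / 3.0, mode="same")   # S2 stand-in
    pitch = np.degrees(np.cumsum(smoothed) * h)                   # S3-S4 stand-in
    lo, hi = band
    cmds = ["G" if lo <= p <= hi else None for p in pitch]        # S5-S6: None = excluded
    return pitch, cmds
```

With a constant 1 rad/s rate, the integrated pitch crosses the band once, so commands are emitted only in the middle of the stream and rejected before and after it.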
In this embodiment, to process the sensor data in real time, online denoising is realized with the online adaptive filtering method; the attitude matrix is described with the quaternion method and obtained by solving the differential equation, which requires little computation, achieves high precision and avoids being trapped at a "singularity"; thresholds exclude faulty finger operations, so different small gesture motions can be recognized. The invention can control a robot with an inertial sensor, achieves high accuracy and a good online recognition effect, has strong generality and promising application prospects.
Optionally, in one embodiment, applying online adaptive filtering to the angular-velocity data includes:
S2.1, initializing the state vector and the system adaptive parameters. Specifically:
2.1.1, the initial state x̂(0|0) is set, normally to a 3-dimensional all-zero column vector; its dimension is that of the state vector of the state process model, whose components are the angular velocity, its first derivative and its second derivative;
2.1.2, the initial values of the system adaptive parameters, α = α0 and the variance σa², take positive values;
2.1.3, the initial values r0(0) and r0(1) of the autocorrelation functions are set;
2.1.4, the initial value of the angular-velocity second derivative is generally taken as 0.
S2.2, establishing the state process model with the system adaptive parameters. Specifically:
2.2.1, the motion characteristics of the target are described as follows.
Similarly to the Singer model, the second derivative of the angular velocity is taken as a time-correlated random process with non-zero mean, ω̈(t) = ā + a(t), where ā is the mean of the second derivative and a(t) is a zero-mean, exponentially correlated colored-noise process with correlation function
Ra(τ) = σa²·e^(−α|τ|) (1)
where t is an arbitrary sampling instant, τ is the correlation lag, Ra(τ) denotes the correlation function, σa² denotes the acceleration variance, and α is the maneuvering frequency, which reflects the random variation of the state.
Whitening the colored noise a(t) gives
ȧ(t) = −α·a(t) + w(t) (2)
where w(t) is zero-mean white noise with variance 2ασa².
From ω̈(t) = ā + a(t) and ȧ(t) = −α·a(t) + w(t), the continuous-time state equation of the state change is
ẋ(t) = [[0, 1, 0], [0, 0, 1], [0, 0, −α]]·x(t) + [0, 0, α]ᵀ·ā + [0, 0, 1]ᵀ·w(t) (3)
Sampling with period T, the discretized system state satisfies
x(k+1) = A(k+1, k)·x(k) + U(k)·ā(k) + w(k) (4)
where x = [ω, ω̇, ω̈]ᵀ is the 3-dimensional state column vector, whose components are the angular velocity and its first and second derivatives; x(k+1) is the state vector at time k+1 and k is the sampling instant; A(k+1, k) is the state-transition matrix; x(k) is the state vector of the target at time k; U(k) is the control matrix; ā(k) is the mean of the angular-velocity second derivative from time 0 to time k; and w(k) is the process noise with mean 0 and variance Q(k). A(k+1, k), U(k) and Q(k) contain the maneuvering frequency α and the second-derivative variance σa², and therefore change as the system adaptive parameters change.
The state-transition matrix A(k+1, k) is
A(k+1, k) = [[1, T, (αT − 1 + e^(−αT))/α²], [0, 1, (1 − e^(−αT))/α], [0, 0, e^(−αT)]] (5)
The control matrix U(k) is
U(k) = [(−T + αT²/2 + (1 − e^(−αT))/α)/α, T − (1 − e^(−αT))/α, 1 − e^(−αT)]ᵀ (6)
The variance Q(k) of the process noise w(k) has the form Q(k) = 2ασa²·[qij] (7), where the entries qij are functions of α and T.
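Under the assumption that the discretization follows the standard current-statistical (Singer-type) model — which the surrounding definitions suggest — the transition and control matrices can be computed as below. This is a sketch; the helper name is ours.

```python
import math
import numpy as np

def cs_model_matrices(alpha, T):
    """Discretized current-statistical-model matrices for the state
    [w, w', w''] with maneuvering frequency alpha and sample period T:
    A is the state-transition matrix, U the control (mean-input) vector."""
    e = math.exp(-alpha * T)
    A = np.array([
        [1.0, T,   (alpha * T - 1.0 + e) / alpha**2],
        [0.0, 1.0, (1.0 - e) / alpha],
        [0.0, 0.0, e]])
    U = np.array([
        (-T + alpha * T**2 / 2.0 + (1.0 - e) / alpha) / alpha,
        T - (1.0 - e) / alpha,
        1.0 - e])
    return A, U
```

For small αT, A approaches the constant-acceleration model (A[0, 2] ≈ T²/2), which is a quick sanity check on the expressions.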
2.2.2, the measurement equation is
y(k) = H(k)·x(k) + v(k) (8)
where k is the sampling instant, y(k) is the measurement at time k, H(k) is the measurement matrix, x(k) is the state vector of the target at time k, and v(k) is Gaussian white measurement noise with variance R, independent of the process noise w(k).
S2.3, predicting the target motion state according to the established state process model with the system adaptive parameters, obtaining the state prediction and the state-covariance prediction.
2.3.1, the one-step state prediction is completed from the established model and the initial values according to
x̂(k|k−1) = A(k, k−1)·x̂(k−1|k−1) + U(k−1)·ā(k−1) (9)
where x̂(k|k−1) denotes the state of the target at time k predicted at time k−1, k is the sampling instant, A(k, k−1) is the state-transition matrix, x̂(k−1|k−1) is the state estimate at time k−1, U(k−1) is the control matrix, and ā(k−1) is the mean of the angular-velocity second derivative from time 0 to time k−1.
2.3.2, the one-step prediction of the state covariance is completed according to
P(k|k−1) = A(k, k−1)·P(k−1|k−1)·Aᵀ(k, k−1) + Q(k−1) (10)
where P(k|k−1) denotes the state covariance at time k predicted at time k−1, k is the sampling instant, | denotes the conditioning operator, P(k−1|k−1) is the estimate of the state covariance at time k−1, A(k, k−1) is the state-transition matrix, and Q(k−1) is the process-noise covariance.
S2.4, updating the target motion state from the state prediction, the measured data and the state-covariance prediction, obtaining the state estimate.
2.4.1, the filter gain is computed from the state-covariance prediction, the measurement matrix and the measurement-noise variance according to
K(k) = P(k|k−1)·Hᵀ(k)·[H(k)·P(k|k−1)·Hᵀ(k) + R]⁻¹ (11)
where K(k) is the filter gain, k is the sampling instant, P(k|k−1) is the state covariance at time k predicted at time k−1, H(k) is the measurement matrix at time k, R is the variance of the Gaussian white measurement noise, and Hᵀ(k) is the transpose of the measurement matrix at time k.
2.4.2, the current state estimate of the target is computed from the state prediction and the observation according to
x̂(k|k) = x̂(k|k−1) + K(k)·[y(k) − H(k)·x̂(k|k−1)] (12)
where x̂(k|k) denotes the state estimate at time k, x̂(k|k−1) the state at time k predicted at time k−1, k the sampling instant, K(k) the filter gain at time k, y(k) the observation at time k, and H(k) the measurement matrix at time k.
2.4.3, the estimate of the state covariance is computed according to
P(k|k) = (I − K(k)·H(k))·P(k|k−1) (13)
where I is the 3-dimensional identity matrix, P(k|k) denotes the estimate of the state covariance at time k, k is the sampling instant, K(k) is the filter gain at time k, H(k) is the measurement matrix at time k, and P(k|k−1) denotes the state covariance at time k predicted at time k−1.
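The prediction and update steps of S2.3-S2.4 form a standard Kalman filter cycle. A minimal NumPy sketch under that reading (function names are ours; the bracketed innovation covariance in the gain is inverted, as in the standard Kalman gain):

```python
import numpy as np

def kf_predict(x, P, A, Q, U=None, abar=0.0):
    """One-step prediction: x(k|k-1) = A x + U*abar, P(k|k-1) = A P A^T + Q."""
    x_pred = A @ x + (U * abar if U is not None else 0.0)
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, y, H, R):
    """Measurement update: gain, state estimate, covariance estimate."""
    S = H @ P_pred @ H.T + R                          # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)               # filter gain
    x_est = x_pred + K @ (y - H @ x_pred)             # state estimate
    P_est = (np.eye(len(x_pred)) - K @ H) @ P_pred    # covariance estimate
    return x_est, P_est
```

One cycle pulls the estimate toward the measurement and shrinks the covariance, which is the behavior the adaptive filter relies on.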
S2.5, computing the mean and the estimate of the angular-velocity second derivative from the state estimate.
The mean of the angular-velocity second derivative is computed as the average of the estimates obtained so far:
ā(k) = (1/k)·Σ(i = 1..k) x̂₃(i|i) (14)
where ā(k) is the mean of the angular-velocity second derivative from time 0 to time k, x̂₃(k|k) is the third component of the state estimate x̂(k|k), and k is the sampling instant. The estimates of the angular-velocity second derivative at times k−1 and k are obtained as
â(k−1) = x̂₃(k−1|k−1), â(k) = x̂₃(k|k) (15)
where x̂₃(k−1|k−1) is the third component of the state estimate at time k−1 and x̂₃(k|k) is the third component of the state estimate at time k.
S2.6, correcting the system adaptive parameters from the angular-velocity second-derivative estimate.
The method for updating the adaptive parameters α and σa² is chosen according to the sampling instant k: if k ≤ 4, go to step 2.6.1; if k > 4, go to step 2.6.2.
2.6.1, when the sampling instant k ≤ 4, because few samples are available, the parameter values of the current statistical model are used, and the adaptive parameters α and σa² are computed as follows:
α = α0, where α0 is the initial value of the system adaptive parameter α;
if â(k) > 0, then σa² = ((4 − π)/π)·(aM − â(k))²;
if â(k) < 0, then σa² = ((4 − π)/π)·(a−M − â(k))²;
if â(k) = 0, then σa² takes an arbitrary value in (0, 10];
where â(k) is the angular-velocity second-derivative estimate at time k, π is the circle constant (taken as 3.14), aM is a positive constant (taken as 3), and a−M is the negative constant of equal absolute value (taken as −3).
2.6.2, when the sampling instant k > 4, the adaptive parameters α and σa² are computed from recursively updated correlation functions (eqs. (16)–(18)), where b > 1 is a constant (fading factor); rk(1) is the lag-one correlation function of the angular-velocity second derivative at time k, and rk−1(1) that at time k−1; â(k−1) and â(k) are the second-derivative estimates at times k−1 and k respectively; rk(0) is the autocorrelation function of the angular-velocity second derivative at time k, and rk−1(0) that at time k−1.
For example, b in eqs. (17) and (18) may be taken as 10.
According to the system equation ȧ(t) = −α·a(t) + w(t), the discretized angular-velocity second derivative satisfies the first-order Markov random sequence
a(k+1) = β·a(k) + wa(k) (19)
where a(k+1) is the angular-velocity second derivative at time k+1, a(k) that at time k, β = e^(−αT) is the maneuvering frequency of the discretized acceleration sequence, and wa(k) is a zero-mean white discrete sequence whose variance is determined by the variance σw² of the zero-mean white noise w(t).
This first-order Markov acceleration sequence satisfies the parameter relation
rk(1) = β·rk(0) (20)
where rk(1) is the lag-one autocorrelation function of the acceleration at time k, rk(0) is the autocorrelation function of the acceleration at time k, and α and β are respectively the maneuvering frequency of the acceleration and that of the discretized acceleration sequence. The adaptive parameters α and σa² can therefore be computed as
α = (1/T)·ln(rk(0)/rk(1)), σa² = rk(0) (21)
where ln is the natural logarithm, α and σa² are the system adaptive parameters, and T is the sampling interval.
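The parameter recovery just described can be illustrated directly: for a first-order Markov sequence with β = e^(−αT), the ratio of the lag-one to the lag-zero correlation yields α. A small sketch (the function name is ours):

```python
import math

def markov_params(r0, r1, T):
    """Recover the maneuvering frequency alpha and the variance from the
    lag-zero and lag-one autocorrelations of a first-order Markov
    sequence a(k+1) = beta*a(k) + w(k), where beta = exp(-alpha*T)."""
    beta = r1 / r0               # lag-one / lag-zero correlation ratio
    alpha = -math.log(beta) / T  # equivalently (1/T)*ln(r0/r1)
    sigma2 = r0                  # stationary variance of the sequence
    return alpha, sigma2
```

Feeding in correlations generated from a known α returns that α, which is the consistency property the adaptive update relies on.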
S2.7, updating the state process model with the angular-velocity second-derivative mean and the corrected system adaptive parameters, obtaining the online-adaptively filtered angular-velocity data.
Optionally, in one embodiment, establishing the quaternion differential equation from the angular-velocity data, solving it with the Runge-Kutta method and obtaining the attitude matrix containing the target attitude angles includes:
S3.1, establishing the quaternion differential equation from the filtered angular-velocity data and the quaternion.
Specifically, using the angular-velocity information of the gyroscope in the filtered inertial-sensor data, the following differential equation is established with the quaternion:
q̇(t) = (1/2)·q(t) ∘ ω(t) (22)
where the left side is the derivative of the quaternion, q(t) denotes the quaternion, the symbol ∘ denotes quaternion multiplication, and ω(t) is the matrix expression of the three axis angular rates.
S3.2, solving the quaternion differential equation with the fourth-order Runge-Kutta method to obtain the attitude matrix expressed by the quaternion; the target attitude angles are updated by first updating the value of the quaternion and thereby updating the attitude matrix.
The initial values of the slopes needed by the fourth-order Runge-Kutta solution of the quaternion differential equation are determined by the initial value of the quaternion, and the initial value of the quaternion is determined by the initial values of the target attitude angles.
Solving the above differential equation with the fourth-order Runge-Kutta method, the initial slope needed in the first step of the iterative update is determined by the initial value of the quaternion, and the quaternion initial value is determined by the attitude-angle initial values. After each element of the quaternion of the next step has been computed, it can be substituted into the attitude matrix expressed by the quaternion, and the attitude-angle information of the current time can then be solved. One update step reads
K1 = f(t, qt), K2 = f(t + h/2, qt + (h/2)·K1), K3 = f(t + h/2, qt + (h/2)·K2), K4 = f(t + h, qt + h·K3), qt+h = qt + (h/6)·(K1 + 2·K2 + 2·K3 + K4)
where K denotes the slopes, t is the current time, h is the update step (usually equal to the sensor sampling period), and f(t, q) = (1/2)·q ∘ ω(t), with ω(t) the matrix expression of the three axis angular rates.
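The fourth-order Runge-Kutta update of the quaternion kinematics can be sketched as follows. The quaternion convention (scalar-first, body rates treated as a pure quaternion, rate held constant over the step) is our assumption, since the text does not fix one:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of two scalar-first quaternions."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def q_dot(q, omega):
    """qdot = 0.5 * q o [0, omega], with omega the body angular rate."""
    return 0.5 * quat_mul(q, np.concatenate(([0.0], omega)))

def rk4_step(q, omega, h):
    """One fourth-order Runge-Kutta step of the quaternion kinematics,
    followed by renormalization to keep q a unit quaternion."""
    k1 = q_dot(q, omega)
    k2 = q_dot(q + 0.5 * h * k1, omega)
    k3 = q_dot(q + 0.5 * h * k2, omega)
    k4 = q_dot(q + h * k3, omega)
    q_new = q + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return q_new / np.linalg.norm(q_new)
```

Integrating a constant 1 rad/s rate about the z-axis for 1 s should give a 1 rad rotation, i.e. q ≈ [cos 0.5, 0, 0, sin 0.5], which checks both the kinematics and the step.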
Optionally, in one embodiment, converting the target attitude angles from the carrier coordinate system to the navigation coordinate system includes solving the attitude matrix C with the quaternion method and then solving the attitude angles from the attitude matrix, e.g.
θ_main = arcsin C32
In order to determine the true values of the attitude angles ψ, θ, γ, the domains of the attitude angles are defined as follows: the domain of the pitch angle is (−90°, 90°), the domain of the roll angle is (−180°, 180°), and the domain of the heading angle is (0°, 360°). θ_main, γ_main and ψ_main are respectively the pitch, roll and heading angles computed from the attitude matrix, while θ, γ and ψ denote the angle values transformed into these domains, e.g.
θ = θ_main
Since the inertial components of a strapdown inertial navigation system are rigidly mounted on the carrier, the sensor outputs are values in the carrier coordinate system; these measurements must therefore be converted into a coordinate system convenient for computing the required navigation parameters, i.e. the navigation coordinate system. The attitude matrix is precisely the transformation that takes the data measured in the carrier coordinate system to the navigation coordinate system; once the attitude matrix of the carrier is determined, the attitude angles of the carrier can be represented.
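Extracting Euler angles from a quaternion-derived attitude matrix can be sketched as below. The element indexing follows one common aerospace Z-Y-X convention and need not match the patent's C32 numbering, which depends on its own matrix layout:

```python
import numpy as np

def quat_to_dcm(q):
    """Direction cosine matrix from a unit scalar-first quaternion
    (one common aerospace convention; an assumption here)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def dcm_to_euler(C):
    """Heading (yaw), pitch and roll in degrees from the DCM,
    using the Z-Y-X rotation sequence."""
    pitch = np.degrees(np.arcsin(-C[2, 0]))
    roll = np.degrees(np.arctan2(C[2, 1], C[2, 2]))
    yaw = np.degrees(np.arctan2(C[1, 0], C[0, 0]))
    return yaw, pitch, roll
```

A pure 30° heading rotation round-trips cleanly, mirroring how the method recovers the heading and pitch angles used for gesture recognition.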
Fig. 2 shows the signal-processing flow of the system: the data collected from the inertial sensing control device are filtered, the quaternion is solved to obtain the attitude angles, and finally the control commands of the robot are obtained.
Fig. 3 shows the variation of the pitch angle and the heading angle while the finger moves up and down; Pitch denotes the pitch angle and Yaw the heading angle. It can be seen that the pitch angle varies periodically with the up-and-down motion of the finger, while the heading angle changes little.
Fig. 4 shows the variation of the pitch angle and the heading angle while the finger moves left and right; Pitch denotes the pitch angle and Yaw the heading angle. It can be seen that the heading angle varies periodically with the side-to-side motion of the finger, while the pitch angle changes little.
Optionally, in one embodiment, excluding the target attitude angles outside the threshold range in the navigation coordinate system includes:
The gestures are defined as four actions: finger up, finger down, finger right and finger left. Experiments show that when the finger is lifted, the pitch angle is positive; this gesture is defined to make the robot advance. When the finger is lowered, the pitch angle is negative; this gesture is defined to make the robot stop. When the finger points right, the heading angle is positive; this gesture makes the robot turn right. When the finger points left, the heading angle is negative; this gesture makes the robot turn left.
In this process, when the finger wears the inertial sensor, both slight and large motions of the finger can make the recognition inaccurate. Experiments show that when the finger is lifted (Finger_Up) and the pitch angle lies between 20 degrees (threshold 1, denoted T1) and 50 degrees (threshold 2, denoted T2), the movement is within the normal range of finger motion and is the valid gesture for making the robot advance; when the finger is lowered (Finger_Down) and the pitch angle lies between −50 and −20 degrees, it is within the normal range and is the valid gesture for stopping the robot. When the finger turns right (Finger_Right) and the heading angle lies between 20 degrees (threshold 3, T3) and 40 degrees (threshold 4, T4), it is within the normal steering range and is the valid gesture for turning the robot right; when the finger turns left (Finger_Left) and the heading angle lies between −40 and −20 degrees, it is within the normal steering range and is the valid gesture for turning the robot left.
Using the above thresholds, the following control-gesture rules are obtained:
Finger_Up: T1 ≤ Pitch ≤ T2 → command "G" (go forward);
Finger_Down: −T2 ≤ Pitch ≤ −T1 → command "S" (stop);
Finger_Right: T3 ≤ Yaw ≤ T4 → command "R" (turn right);
Finger_Left: −T4 ≤ Yaw ≤ −T3 → command "L" (turn left);
where Pitch denotes the pitch angle and Yaw denotes the heading angle.
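The threshold bands map directly to a small classifier in which angles outside every band are rejected as mis-operation. A sketch (the function name, default threshold values and the priority order between the pitch and yaw checks are our assumptions):

```python
def classify_gesture(pitch, yaw, t1=20.0, t2=50.0, t3=20.0, t4=40.0):
    """Map pitch/yaw angles in degrees to a robot command using the
    threshold bands T1..T4; anything outside every band is treated as
    accidental motion and ignored (returns None)."""
    if t1 <= pitch <= t2:
        return "G"   # Finger_Up -> go forward
    if -t2 <= pitch <= -t1:
        return "S"   # Finger_Down -> stop
    if t3 <= yaw <= t4:
        return "R"   # Finger_Right -> turn right
    if -t4 <= yaw <= -t3:
        return "L"   # Finger_Left -> turn left
    return None      # outside every band: excluded as mis-operation
```

Small jitters (e.g. a 5-degree pitch wobble) fall outside every band and produce no command, which is exactly the mis-operation exclusion the thresholds are for.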
Optionally, in one embodiment, controlling the robot motion according to the target attitude angles within the threshold range includes:
After the finger wears the sensor, the sensor communicates with the robot over a WiFi connection. The inertial sensor collects the motion signal of the finger; the computer acquires the signal, judges the posture of the finger, and then transmits the motion command to the robot through a wireless module. Table 1 lists the four commands "G", "S", "R" and "L". When the gesture is up, the sensor captures the motion of the finger and sends it to the computer for posture judgement; when the set threshold range is satisfied, the computer sends the "G" command to the robot and the robot advances. When the gesture is down, the sensor likewise captures the motion and, when the threshold range is satisfied, the computer sends the "S" command and the robot stops. When the gesture is to the right, the computer sends the "R" command and the robot turns right; when the gesture is to the left, the computer sends the "L" command and the robot turns left.
To verify that the inertial sensor identifies finger movements accurately, an experiment with finger-controlled robot movement was carried out; the experimental flow is shown in Fig. 1. A typical robot platform, the NAO robot, was selected. This robot has a complete self-balancing module: once a command is input, NAO walks steadily by itself. Therefore only whether NAO receives accurate gesture-based commands needs to be considered.
The gesture recognition method provided by the present invention relies mainly on inertial-sensor data acquisition and is realized by controlling the movement of the robot. First, the gyroscope and the accelerometer respectively measure the angular and linear motion of the carrier, and online adaptive filtering removes the noise. Then the differential equation described with the quaternion method is solved with the fourth-order Runge-Kutta method, and the pitch and heading angles are computed from the elements of the quaternion via inverse trigonometric functions. Finally, the recognition thresholds of the pitch and heading angles are determined adaptively according to the range of finger motion, so that faulty finger operations are excluded.
The present invention collects small-range finger-motion data with an inertial sensor, applies online adaptive filtering and denoising to the data, builds the quaternion attitude matrix, solves the quaternion differential equation with the Runge-Kutta method, and updates the quaternion to obtain real-time attitude angles and thereby update the attitude matrix; since small-range finger motions are being recognized, thresholds are set to exclude finger mis-operations.
Above in association with Fig. 1 to 4, it is described in detail and robot movement control method is provided according to embodiments of the present invention.Below With reference to Fig. 5, the embodiment of the present invention is described in detail, robot mobile control system is provided.
Fig. 5 is a schematic block diagram of a robot movement control system provided by an embodiment of the present invention. As shown in Fig. 5, the system includes: a collecting unit 510, a pretreatment unit 520, a processing unit 530, a coordinate-system converting unit 540, a screening unit 550, and a control unit 560.
The collecting unit 510 obtains angular velocity data from the inertial sensor, which can be worn on the user's finger. The pretreatment unit 520 applies online adaptive filtering to the angular velocity data. The processing unit 530 establishes the quaternion differential equation from the adaptively filtered angular velocity data and solves it using the Runge-Kutta method to obtain the attitude matrix containing the target attitude angles. The coordinate-system converting unit 540 converts the target attitude angles from the carrier coordinate frame to the navigation coordinate frame. The screening unit 550 excludes target attitude angles in the navigation coordinate frame that are not within the threshold range. The control unit 560 controls robot movement according to the target attitude angles within the threshold range.
In this embodiment, online adaptive filtering addresses the real-time processing of sensor data, achieving online denoising. The attitude matrix is described with quaternions and the differential equation is solved in that form, which requires little computation, gives high precision, and avoids the "singular points" of gimbal lock. Thresholds exclude finger misoperation, completing the recognition of different small gesture motions. The invention can control robot movement with an inertial sensor, offers fairly high precision, good online recognition performance, and strong universality, and has promising application prospects.
The system is realized mainly by combining attitude solution with adaptive-threshold analysis. First, adaptive filtering denoises the measured data; then a strapdown inertial navigation attitude algorithm based on quaternions carries out quaternion attitude-matrix construction, navigation-parameter extraction and calculation, and the setting of initial conditions and initial data, continuously computing course, attitude, velocity, and position information by updating the attitude matrix. Threshold analysis then achieves noise suppression and misoperation discrimination, detecting and identifying minute finger-gesture movements in real time, and the robot's movement is controlled accordingly.
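The threshold-analysis stage can be sketched as a simple screening function: attitude angles outside the configured windows are treated as finger misoperation and produce no command. The command names and the mapping from angle sign to direction are hypothetical; the patent fixes only the threshold ranges (claim 5: pitch 20 to 50 degrees in absolute value, course 20 to 40 degrees).

```python
def screen_gesture(pitch_deg, heading_deg,
                   pitch_range=(20.0, 50.0), heading_range=(20.0, 40.0)):
    """Map attitude angles to a robot command, excluding misoperation.

    Angles whose magnitudes fall outside the threshold windows are treated
    as unintended finger jitter and yield no command (None).  Command names
    and the sign-to-direction mapping are illustrative assumptions.
    """
    lo_p, hi_p = pitch_range
    lo_h, hi_h = heading_range
    if lo_p <= abs(pitch_deg) <= hi_p:
        return "forward" if pitch_deg > 0 else "backward"
    if lo_h <= abs(heading_deg) <= hi_h:
        return "turn_left" if heading_deg > 0 else "turn_right"
    return None   # jitter / misoperation: issue no command
```

A small pitch or course excursion (say 5 degrees) is thus ignored, while a deliberate 30-degree pitch maps to a forward or backward command.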
The embodiment of the present invention also provides an inertial sensing control device, comprising the robot movement control system described in the above technical solution; the inertial sensing control device communicates wirelessly with the robot.
It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the devices and units described above may refer to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method can be realized in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be realized in the form of hardware or in the form of a software functional unit.
If the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disc, or any other medium that can store program code.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the invention. Any modifications, equivalent replacements, and improvements made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A robot movement control method, characterized by comprising:
S1: obtaining angular velocity data from an inertial sensor;
S2: performing online adaptive-filtering pretreatment on the angular velocity data;
S3: establishing a quaternion differential equation according to the angular velocity data pretreated by online adaptive filtering, and solving the quaternion differential equation using the Runge-Kutta method to obtain an attitude matrix containing target attitude angles;
S4: converting the target attitude angles from the carrier coordinate frame to the navigation coordinate frame;
S5: excluding, from the target attitude angles in the navigation coordinate frame, those not within a threshold range;
S6: controlling robot movement according to the target attitude angles within the threshold range.
2. The method according to claim 1, characterized in that S2 comprises:
S2.1: initializing state quantities and system adaptive parameters, the state quantities comprising the angular velocity, its first derivative, and its second derivative;
S2.2: establishing a state process model with the system adaptive parameters;
S2.3: predicting the state quantities according to the established state process model with the system adaptive parameters, to obtain a state prediction value and a state covariance prediction value;
S2.4: updating the state quantities according to the state prediction value, the measured data values, and the state covariance prediction value, to obtain a state estimate;
S2.5: calculating the mean and the estimate of the angular-velocity second derivative from the state estimate;
S2.6: correcting the system adaptive parameters according to the angular-velocity second-derivative estimate;
S2.7: updating the state process model according to the angular-velocity second-derivative mean and the corrected system adaptive parameters, to obtain the online adaptively filtered angular velocity data.
3. The method according to claim 2, characterized in that S3 comprises:
S3.1: establishing the quaternion differential equation using the filtered angular velocity data and the quaternion;
S3.2: solving the quaternion differential equation using the fourth-order Runge-Kutta method to obtain the attitude matrix expressed by the quaternion, and updating the target attitude angles, and thereby the attitude matrix, by updating the elements of the quaternion;
wherein the initial slope values required for solving the quaternion differential equation with the fourth-order Runge-Kutta method are determined by the initial value of the quaternion, and the initial value of the quaternion is determined by the initial values of the target attitude angles.
4. The method according to claim 3, characterized in that solving the quaternion differential equation using the Runge-Kutta method obtains the attitude matrix expressed by the quaternion as follows:
$$\begin{bmatrix} \dot{q}_0 \\ \dot{q}_1 \\ \dot{q}_2 \\ \dot{q}_3 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 0 & -\omega_{nbx} & -\omega_{nby} & -\omega_{nbz} \\ \omega_{nbx} & 0 & \omega_{nbz} & -\omega_{nby} \\ \omega_{nby} & -\omega_{nbz} & 0 & \omega_{nbx} \\ \omega_{nbz} & \omega_{nby} & -\omega_{nbx} & 0 \end{bmatrix} \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix}$$
$$\left.\begin{aligned} K_1 &= \Gamma_b(t)\,q(t) \\ K_2 &= \Gamma_b\!\left(t+\tfrac{h}{2}\right)\left(q(t)+\tfrac{h}{2}K_1\right) \\ K_3 &= \Gamma_b\!\left(t+\tfrac{h}{2}\right)\left(q(t)+\tfrac{h}{2}K_2\right) \\ K_4 &= \Gamma_b(t+h)\left(q(t)+hK_3\right) \end{aligned}\right\}$$
$$q(t+h) = q(t) + \frac{h}{6}\left(K_1 + 2K_2 + 2K_3 + K_4\right)$$
where K denotes the slopes, t is the current time, h is the update step, $\Gamma_b(t)$ is the matrix expression of the three-axis angular velocity (the 4×4 matrix above), and q(t) denotes the quaternion.
5. The method according to any one of claims 1 to 4, characterized in that the target attitude angles comprise a pitch angle and a course angle; the threshold range for the absolute value of the pitch angle is 20 to 50 degrees, and the threshold range for the course angle is 20 to 40 degrees.
6. A robot movement control system, characterized by comprising:
a collecting unit for obtaining angular velocity data from an inertial sensor;
a pretreatment unit for performing online adaptive-filtering pretreatment on the angular velocity data;
a processing unit for establishing a quaternion differential equation according to the angular velocity data pretreated by online adaptive filtering, and solving the quaternion differential equation using the Runge-Kutta method to obtain an attitude matrix containing target attitude angles;
a coordinate-system converting unit for converting the target attitude angles from the carrier coordinate frame to the navigation coordinate frame;
a screening unit for excluding, from the target attitude angles in the navigation coordinate frame, those not within a threshold range;
a control unit for controlling robot movement according to the target attitude angles within the threshold range.
7. The system according to claim 6, characterized in that the pretreatment unit is specifically configured to:
initialize state quantities and system adaptive parameters, the state quantities comprising the angular velocity, its first derivative, and its second derivative;
establish a state process model with the system adaptive parameters;
predict the state quantities according to the established state process model with the system adaptive parameters, to obtain a state prediction value and a state covariance prediction value;
update the state quantities according to the state prediction value, the measured data values, and the state covariance prediction value, to obtain a state estimate;
calculate the mean and the estimate of the angular-velocity second derivative from the state estimate;
correct the system adaptive parameters according to the angular-velocity second-derivative estimate;
update the state process model according to the angular-velocity second-derivative mean and the corrected system adaptive parameters, to obtain the online adaptively filtered angular velocity data.
8. The system according to claim 7, characterized in that the processing unit is specifically configured to:
establish the quaternion differential equation using the filtered angular velocity data and the quaternion;
solve the quaternion differential equation using the fourth-order Runge-Kutta method to obtain the attitude matrix expressed by the quaternion, updating the target attitude angles, and thereby the attitude matrix, by updating the elements of the quaternion;
wherein the initial slope values required for solving the quaternion differential equation with the fourth-order Runge-Kutta method are determined by the initial value of the quaternion, and the initial value of the quaternion is determined by the initial values of the target attitude angles.
9. The system according to any one of claims 6 to 8, characterized in that solving the quaternion differential equation using the Runge-Kutta method obtains the attitude matrix expressed by the quaternion as follows:
$$\begin{bmatrix} \dot{q}_0 \\ \dot{q}_1 \\ \dot{q}_2 \\ \dot{q}_3 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 0 & -\omega_{nbx} & -\omega_{nby} & -\omega_{nbz} \\ \omega_{nbx} & 0 & \omega_{nbz} & -\omega_{nby} \\ \omega_{nby} & -\omega_{nbz} & 0 & \omega_{nbx} \\ \omega_{nbz} & \omega_{nby} & -\omega_{nbx} & 0 \end{bmatrix} \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix}$$
$$\left.\begin{aligned} K_1 &= \Gamma_b(t)\,q(t) \\ K_2 &= \Gamma_b\!\left(t+\tfrac{h}{2}\right)\left(q(t)+\tfrac{h}{2}K_1\right) \\ K_3 &= \Gamma_b\!\left(t+\tfrac{h}{2}\right)\left(q(t)+\tfrac{h}{2}K_2\right) \\ K_4 &= \Gamma_b(t+h)\left(q(t)+hK_3\right) \end{aligned}\right\}$$
$$q(t+h) = q(t) + \frac{h}{6}\left(K_1 + 2K_2 + 2K_3 + K_4\right)$$
where K denotes the slopes, t is the current time, h is the update step, $\Gamma_b(t)$ is the matrix expression of the three-axis angular velocity (the 4×4 matrix above), and q(t) denotes the quaternion.
10. An inertial sensing control device, characterized by comprising the robot movement control system according to any one of claims 6 to 9, wherein the inertial sensing control device communicates wirelessly with the robot.
CN201711232485.7A 2017-11-30 2017-11-30 Robot movement control method and system and inertial sensing control device Active CN108051001B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711232485.7A CN108051001B (en) 2017-11-30 2017-11-30 Robot movement control method and system and inertial sensing control device


Publications (2)

Publication Number Publication Date
CN108051001A true CN108051001A (en) 2018-05-18
CN108051001B CN108051001B (en) 2020-09-04

Family

ID=62121365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711232485.7A Active CN108051001B (en) 2017-11-30 2017-11-30 Robot movement control method and system and inertial sensing control device

Country Status (1)

Country Link
CN (1) CN108051001B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140371906A1 (en) * 2013-06-13 2014-12-18 GM Global Technology Operations LLC Method and Apparatus for Controlling a Robotic Device via Wearable Sensors
CN107122724A (en) * 2017-04-18 2017-09-01 北京工商大学 A kind of method of the online denoising of sensing data based on adaptive-filtering


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
刘梁: "Design and Implementation of a Gesture-Controlled Car Motion System", Digital Technology and Application *
周宏仁: Maneuvering Target Tracking, National Defense Industry Press, 31 August 1991 *
左国玉 et al.: "Research on an Operator Attitude Calculation Method for a Teleoperated Nursing Robot System", Acta Automatica Sinica *
陈伟: "Research and Application of Attitude Angle Estimation Algorithms Based on Quaternions and Kalman Filtering", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020220284A1 (en) * 2019-04-30 2020-11-05 深圳市大疆创新科技有限公司 Aiming control method, mobile robot and computer-readable storage medium
CN113496165A (en) * 2020-04-01 2021-10-12 北京海益同展信息科技有限公司 User gesture recognition method and device, hand intelligent wearable device and storage medium
CN113496165B (en) * 2020-04-01 2024-04-16 京东科技信息技术有限公司 User gesture recognition method and device, hand intelligent wearable device and storage medium
CN114102600A (en) * 2021-12-02 2022-03-01 西安交通大学 Multi-space fusion man-machine skill migration and parameter compensation method and system
CN114102600B (en) * 2021-12-02 2023-08-04 西安交通大学 Multi-space fusion human-machine skill migration and parameter compensation method and system

Also Published As

Publication number Publication date
CN108051001B (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN107635204B (en) Indoor fusion positioning method and device assisted by exercise behaviors and storage medium
Ferris et al. Wifi-slam using gaussian process latent variable models.
CN105102928B (en) Inertial device, methods and procedures
CN107084714B (en) A kind of multi-robot Cooperation object localization method based on RoboCup3D
CN109117893A (en) A kind of action identification method and device based on human body attitude
CN109959381A (en) A kind of localization method, device, robot and computer readable storage medium
CN107016342A (en) A kind of action identification method and system
CN109176512A (en) A kind of method, robot and the control device of motion sensing control robot
CN109579853A (en) Inertial navigation indoor orientation method based on BP neural network
CN105814609A (en) Fusing device and image motion for user identification, tracking and device association
CN110118560A (en) A kind of indoor orientation method based on LSTM and Multi-sensor Fusion
CN108762309A (en) Human body target following method based on hypothesis Kalman filtering
CN106643715A (en) Indoor inertial navigation method based on bp neural network improvement
Li et al. Vision-aided inertial navigation for resource-constrained systems
US20200319721A1 (en) Kinematic Chain Motion Predictions using Results from Multiple Approaches Combined via an Artificial Neural Network
CN108051001A (en) A kind of robot movement control method, system and inertia sensing control device
CN111178170B (en) Gesture recognition method and electronic equipment
Rao et al. Ctin: Robust contextual transformer network for inertial navigation
CN110928432A (en) Ring mouse, mouse control device and mouse control system
Garg et al. Learning to grasp under uncertainty using POMDPs
CN115410233B (en) Gesture attitude estimation method based on Kalman filtering and deep learning
CN108882169A (en) The acquisition methods and device and robot of a kind of WiFi location fingerprint data
Du et al. A novel human–manipulators interface using hybrid sensors with Kalman filter and particle filter
CN103839280B (en) A kind of human body attitude tracking of view-based access control model information
CN107203271B (en) Double-hand recognition method based on multi-sensor fusion technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant