CN109676583A - Deep learning visual acquisition method based on target posture, learning system and storage medium - Google Patents
Deep learning visual acquisition method based on target posture, learning system and storage medium
- Publication number
- CN109676583A (application CN201811466680.0A; granted as CN109676583B)
- Authority
- CN
- China
- Prior art keywords
- teaching
- image information
- movement
- function
- target posture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/0081—Programme-controlled manipulators with master teach-in means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
- Numerical Control (AREA)
Abstract
The present invention relates to the technical field of robotics and discloses a deep learning visual acquisition method based on target posture, a learning system, and a storage medium for controlling a robot to learn a taught action. The visual acquisition method comprises the following steps: capturing teaching image information of the taught action process from multiple directions; analyzing the teaching image information, selecting multiple reference points of the taught action, and fitting the movement over time into at least two functions, a posture function and a displacement function; and generating a control program so that the robot can reproduce the taught action according to the posture function and the displacement function. The present invention simplifies the taught action into functions, reduces the computational load of point selection, and generates the drive program directly from the captured teaching action of the target, lowering the need for manual participation. It offers advantages such as a high degree of intelligence and high fidelity of imitation.
Description
Technical field
The present invention relates to the technical field of robotics, and more particularly to a deep learning visual acquisition method based on target posture, a learning system, and a storage medium.
Background art
A robot is a high-tech product with built-in programs or guiding principles. After receiving signals or instructions, it can make judgments and take actions to a certain extent, such as moving, grasping, or swinging its limbs. The task of robots is mainly to assist with, or in some situations even replace, human work. The actions and information judgments involved in real operating scenarios are often very complex and are difficult to record in the robot in advance in the form of programs. How a robot can learn on its own from existing knowledge and improve its adaptability and level of intelligence, namely robot learning, has therefore become a very popular research focus in the robotics industry.
In the prior art, the process by which a robot imitates a human taught action mainly comprises: 1. digitally acquiring the coordinates of multiple key points of the taught action; 2. inversely solving the selected points into a robot control program. Both steps require extensive manual participation. In step 1 in particular, key points must be chosen and the taught action must be simplified, for example into "move from point A to point B, then rise or fall at point B". The more the taught action is simplified, the lower the fidelity of the robot's imitation; the less it is simplified, the larger the associated point-selection computation. As a result, it is difficult for a robot to imitate a human taught action with high fidelity.
Summary of the invention
The purpose of the present invention is to provide a deep learning visual acquisition method based on target posture, a learning system, and a storage medium, aiming to solve the problems in the prior art that, when a robot imitates a human taught action, the fidelity of the action is low, the point-selection computation is heavy, manual participation is extensive, and the degree of intelligence is low.
The present invention provides a deep learning visual acquisition method based on target posture, used for imitating the taught action of a target, comprising the following steps: capturing teaching image information of the taught action process from multiple directions; analyzing the teaching image information, selecting multiple reference points of the taught action, and fitting the movement over time into at least two functions: a posture function describing how the target's posture changes over time, and a displacement function describing how the target's position changes over time; and generating a control program so that the robot can reproduce the taught action according to the posture function and the displacement function.
The present invention also provides a learning system for controlling a robot to learn a taught action, the robot having an execution end, the system comprising: an image acquisition unit, which photographs the taught action process from multiple directions and captures its teaching image information; a data analysis unit, which analyzes the teaching image information after receiving it to obtain the motion functions of the taught action; and a drive control unit, which generates a drive program after receiving the motion functions and controls the execution end to perform the imitated action.
The present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the aforementioned deep learning visual acquisition method based on target posture.
Compared with the prior art, the present invention reduces the taught action of the target to at least two functions for description: a displacement function describing displacement over time, and a posture function describing posture over time. After the action is simplified, the point-selection computation is reduced, and the drive program is generated from the captured teaching action of the target, lowering the need for manual participation. The invention thus has the advantages of a high degree of intelligence and high fidelity of imitation.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the deep learning visual acquisition method based on target posture provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the swing angle calculation of the posture function in the deep learning visual acquisition method based on target posture provided by an embodiment of the present invention.
Detailed description of the embodiments
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are merely illustrative of the present invention and are not intended to limit it.
In the description of the present invention, it should be understood that terms indicating orientation or positional relationships, such as "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer", are based on the orientations or positional relationships shown in the drawings. They are used merely to facilitate and simplify the description of the present invention, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present invention.
In the description of the present invention, "plurality" means two or more, unless specifically defined otherwise.
In the present invention, unless otherwise expressly specified and limited, terms such as "mounted", "connected", "coupled", and "fixed" should be understood broadly. For example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; it may be direct, indirect through an intermediary, an internal communication between two elements, or an interaction between two elements. For a person of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
The implementation of this embodiment is described in detail below with reference to the drawings. For ease of narration, a spatial coordinate system (x, y, z) is established, in which the x-axis and y-axis lie in the horizontal plane and are perpendicular to each other, and the z-axis is vertical.
The deep learning visual acquisition method based on target posture provided in this embodiment, used for learning the taught action of a target, comprises the following steps:
101. Capture teaching image information of the taught action process from multiple directions. The target in this embodiment may be the whole of a human, an animal, or another mechanical device, or a specific part thereof, such as a person's hand, a bird's wing, or the execution end of another robot. Specifically, this embodiment is illustrated with the example of a person writing a Chinese character with a brush: the brush is the target, and the movement of the brush itself during writing is the taught action. While the writer writes the character, the teaching image information of the brush is photographed and captured from multiple directions; since it spans a time axis, the teaching image information is a multi-segment video file. It is easy to understand that in other embodiments the hand could also be chosen as the target.
102. Analyze the teaching image information, select multiple reference points of the taught action, and fit the movement over time into at least two functions: a posture function describing how the target's posture changes over time, and a displacement function describing how the target's position changes over time. In this step, the multi-view image information obtained in step 101 is subjected to pattern-recognition analysis. In this embodiment, as shown in Fig. 1, three points A, B, and C on the brush are captured as reference points, where point B is the center of rotation of the brush during writing. A certain interval is set as the shooting unit, for example t = 0.5 s; the change in position of each reference point every 0.5 s is then analyzed and fitted into at least two functions: the posture function and the displacement function. The posture function describes the change of the target's own posture during movement, such as rotating by a certain angle about the vertical direction. In the displacement function, the target is regarded as a particle and its change in displacement is described, for example moving from a first point to a second point and then rising to a third point. In other embodiments, the number of functions can be increased, for example with an action function that outputs a signal at specific time points to perform a specified operation, such as welding or pressing at time t. It should be understood that if the taught action of the target involves no change of posture at all, only a change of displacement, the posture function is fitted as a constant function assigned the value 0; conversely, if the whole process involves only posture change and no displacement, the displacement function is fitted as a constant function assigned the value 0. "At least two functions comprising a posture function and a displacement function" obviously covers both of these cases. The position information captured at each shot is recorded together with its corresponding time point.
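Although the disclosure contains no code, the fitting in step 102 can be sketched as follows. This is a purely illustrative, non-authoritative example: it assumes the reference-point positions and swing angles have already been extracted from the images at the 0.5 s shooting interval, and it uses polynomial fitting as one possible (assumed, not disclosed) choice of function family.

```python
import numpy as np

def fit_motion_functions(times, b_positions, angles, deg=3):
    """Fit a displacement function (position of reference point B over time)
    and a posture function (swing angle over time) as polynomials.

    times       : 1-D array of sample instants (s), e.g. every 0.5 s
    b_positions : (N, 3) array of (x, y, z) coordinates of point B
    angles      : 1-D array of swing angles (rad) at each instant
    """
    times = np.asarray(times, dtype=float)
    b_positions = np.asarray(b_positions, dtype=float)
    # One polynomial per spatial axis -> the displacement function
    displacement = [np.polynomial.Polynomial.fit(times, b_positions[:, k], deg)
                    for k in range(3)]
    # One polynomial for the swing angle -> the posture function
    posture = np.polynomial.Polynomial.fit(times, np.asarray(angles, float), deg)
    return displacement, posture

# Example: a target moving linearly along x while rotating at a constant rate.
t = np.arange(0, 5, 0.5)
pos = np.stack([2 * t, 0 * t, 1 + 0 * t], axis=1)   # moves along x, constant z
ang = 0.3 * t                                        # steady rotation
disp, post = fit_motion_functions(t, pos, ang, deg=1)
print(round(disp[0](4.0), 3))   # x position at t = 4 s, here 8.0
print(round(post(4.0), 3))      # swing angle at t = 4 s, here 1.2
```

If the taught action has no posture change, the fitted posture polynomial simply collapses to the constant 0, matching the constant-function case described above.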
As shown in Fig. 1 and Fig. 2, in this embodiment, since point B is the center of rotation, that is, if the displacement of the brush is ignored, point B can be regarded as stationary during writing, the change of point B over time can serve as the displacement function. From the change in position of point A within the time t and the relative distance between points A and B (the length l1 in the figure), the swing angle, namely the change in posture, can be calculated. There are many possible specific calculation methods. For example, let the distance between points A and B be l1 and the distance between points B and C be l2. The brush photographed at an interval of time t is reduced to two straight lines at t1 and t2, and the two B points are superimposed; the distance X1 between the A points at t1 and t2 is then measured. From X1 and l1, the angle α can be calculated by the law of cosines, and the change of angle α with time t is the posture function at that moment. Similarly, from the distance X2 between the C points at t1 and t2 and l2, an angle β can be calculated by the law of cosines. In theory angle β should equal angle α, so the two values can be used to cross-validate the data in the calculation.
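The law-of-cosines step above reduces to a one-line formula: with both sides of the triangle equal to the segment length l and the chord between the two endpoint positions equal to x, the swing angle is arccos(1 − x²/2l²). A minimal sketch (illustrative only, not part of the disclosure):

```python
import math

def swing_angle(l, x):
    """Swing angle between two poses of a rigid segment of length l whose
    pivot (point B) is superimposed, given the chord x between the two
    positions of the far endpoint. Law of cosines with both sides equal:
    x^2 = 2*l^2 - 2*l^2*cos(angle)  ->  angle = acos(1 - x^2 / (2*l^2))."""
    return math.acos(1.0 - x * x / (2.0 * l * l))

# A 90-degree swing of a unit-length segment gives a chord of sqrt(2):
alpha = swing_angle(1.0, math.sqrt(2.0))
print(round(math.degrees(alpha), 1))   # 90.0

# Cross-validation as in the text: the same formula applied to l2 and X2
# (here a segment twice as long swinging by the same angle) should agree.
beta = swing_angle(2.0, 2.0 * math.sqrt(2.0))
print(round(math.degrees(beta), 1))    # 90.0, consistent with alpha
```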
The posture function and the displacement function share the same variable, time. On the one hand, this allows the two to be combined to jointly describe the motion of the target; on the other hand, the target's velocity and acceleration at a specific position or time can be obtained from the increments per unit time, serving as reference data for controlling the robot.
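The velocity and acceleration mentioned here follow from finite differences of the sampled displacement function. A small illustrative sketch (an assumption about how the increments might be computed, not a disclosed implementation):

```python
import numpy as np

def velocity_acceleration(times, positions):
    """Finite-difference estimates of per-axis velocity, per-axis
    acceleration, and scalar speed from sampled target positions."""
    t = np.asarray(times, float)
    p = np.asarray(positions, float)
    v = np.gradient(p, t, axis=0)       # velocity along each axis
    a = np.gradient(v, t, axis=0)       # acceleration along each axis
    speed = np.linalg.norm(v, axis=1)   # scalar speed at each sample
    return v, a, speed

t = np.arange(0, 3, 0.5)
# Constant-velocity motion with velocity vector (3, 0, 4):
pos = np.stack([3 * t, np.zeros_like(t), 4 * t], axis=1)
v, a, speed = velocity_acceleration(t, pos)
print(speed[2])   # |(3, 0, 4)| = 5.0
```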
During brush writing, the displacement function records the movement of the pen along the three spatial coordinate directions as time t varies. The changes of the x and y coordinates can serve as rough data describing stroke direction, character size, writing range, and similar aspects of the writing movement. The change of the z coordinate can serve as an approximate function describing stroke thickness: taking the paper surface as z = 0, the closer the z coordinate is to 0, the greater the pressing force on the pen tip, the thicker the stroke, and the greater the corresponding writing force; the larger the z coordinate, the smaller the force on the pen tip and the thinner the stroke. Portions of the displacement function where the z coordinate exceeds a threshold indicate that the pen tip has left the paper; these are identified as invalid for the writing operation and are recorded only as repositioning moves.
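The pen-lift rule above is a simple thresholding of the z samples. The following sketch (illustrative; the threshold value and data layout are assumptions) splits a trajectory into valid stroke segments, dropping the pen-lift portions from the writing record:

```python
def stroke_segments(z_values, z_threshold):
    """Split a writing trajectory into valid stroke segments: samples whose
    z coordinate exceeds z_threshold are pen lifts (tip off the paper) and
    are treated as repositioning moves, not as writing."""
    segments, current = [], []
    for i, z in enumerate(z_values):
        if z <= z_threshold:
            current.append(i)          # tip on or near the paper: in a stroke
        elif current:
            segments.append(current)   # pen lifted: close the open stroke
            current = []
    if current:
        segments.append(current)
    return segments

z = [0.1, 0.2, 0.1, 2.0, 2.5, 0.3, 0.2]   # two strokes separated by a lift
print(stroke_segments(z, 1.0))             # [[0, 1, 2], [5, 6]]
```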
The posture function records the rotation of the pen about the three axes x, y, and z as time t varies. It can be used to describe the change in the attitude of the pen holder during writing, which in calligraphy terms can be understood as the changing posture of the brushwork.
103. Generate a control program so that the robot can reproduce the taught action according to the posture function and the displacement function. The robot performs the imitated action according to the drive program, moving in the desired manner: the movement of the execution end over time follows the displacement function, and while moving according to the displacement function, the execution end's own change of posture follows the posture function, thereby imitating the taught action of the target.
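Generating the control program in step 103 amounts to sampling the two fitted functions into timed set-points for the execution end. A minimal, purely illustrative sketch (not part of the original disclosure; the callable function representation and the waypoint tuple layout are assumptions):

```python
def build_control_waypoints(displacement, posture, t_end, dt=0.05):
    """Sample the fitted displacement and posture functions into a list of
    timed waypoints (t, x, y, z, angle) that a robot controller can follow.
    `displacement` maps t -> (x, y, z); `posture` maps t -> swing angle."""
    waypoints = []
    t = 0.0
    while t <= t_end + 1e-9:
        x, y, z = displacement(t)
        waypoints.append((round(t, 3), x, y, z, posture(t)))
        t += dt
    return waypoints

# Hypothetical motion: straight-line displacement with a steady rotation.
wps = build_control_waypoints(lambda t: (2 * t, 0.0, 1.0),
                              lambda t: 0.3 * t, t_end=1.0, dt=0.5)
print(len(wps))     # 3 waypoints: t = 0.0, 0.5, 1.0
print(wps[-1][0])   # 1.0
```

Because both functions share the time variable, each waypoint carries a consistent displacement and posture for the same instant, which is exactly the coupling the method relies on.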
As can be seen from the above, the deep learning visual acquisition method based on target posture provided in this embodiment first determines multiple reference points, then visually photographs the taught action process of the target to capture the raw action data. After the raw actions are sorted, two functions of the time variable are constructed to describe the taught action process of the target. The two functions are mutually independent: the posture function records only the change of the target's own posture over time, while the displacement function regards the target as a particle and records only the change of the target's position over time. The action data is thus simplified and inversely solved by fitting into the two functions, from which the control program is generated; by running the control program, the robot can imitate the target's operating process. Because the action data is simplified, the point-selection computation when imitating a relatively complex taught action is reduced, a high-fidelity imitation of the taught action can be guaranteed, and no manual participation is needed to judge how to simplify the action, so the imitation learning process requires little manual participation and has a high degree of intelligence.
Preferably, as shown in Fig. 1, the following steps are further included after step 103:
104. Drive the robot to perform the imitated action according to the control program.
105. Capture imitation image information of the imitated action process from multiple directions.
106. Compare the imitation image information with the teaching image information, and correct the control program.
Since the generated control program is based only on data acquisition and automatic calculation, the executed action may not fully meet the imitation requirements. The control program is therefore trial-run in step 104; during execution, the imitation image information is recorded in the same manner as step 101, the imitation image information is then compared with the teaching image information, and the control program is corrected, forming a control loop, namely the robot learning process.
There are many possible specific comparison methods. For example, the process of steps 101 to 103 can be repeated with the execution end of the robot as the acquisition target of the taught action, generating a second, new displacement function and posture function; these are compared with the displacement function and posture function generated from the raw action data to check for deviations exceeding a threshold. Alternatively, the images can be compared directly: the imitation image information and the teaching image are superimposed after adjusting their transparency, and the error on the combined image is evaluated to judge similarity. If an error exceeding the threshold is found, the direction and magnitude of the correction are determined, and the control program is inversely solved and corrected.
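The direct image-comparison variant can be sketched as follows. This is an illustrative assumption (grayscale arrays, mean absolute difference as the error metric, an arbitrary threshold), not the disclosed implementation:

```python
import numpy as np

def imitation_error(teach_img, imitate_img, alpha=0.5):
    """Blend the teaching and imitation images (the superimposed,
    transparency-adjusted view) and measure their mean absolute pixel
    difference (0 = identical). Inputs are same-shape grayscale arrays
    scaled to [0, 1]."""
    teach = np.asarray(teach_img, float)
    imitate = np.asarray(imitate_img, float)
    blended = alpha * teach + (1 - alpha) * imitate  # overlay for inspection
    error = np.abs(teach - imitate).mean()           # similarity metric
    return blended, error

def needs_correction(error, threshold=0.05):
    """The control program is corrected only when the deviation
    exceeds the threshold."""
    return bool(error > threshold)

a = np.zeros((4, 4))
b = np.zeros((4, 4)); b[0, 0] = 1.0   # one differing pixel out of 16
_, err = imitation_error(a, b)
print(needs_correction(err))   # 1/16 = 0.0625 > 0.05, so True
```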
Steps 103 to 106 above can be repeated. Through multiple rounds of trial running, capturing, comparison, and correction, the learning continues until the difference between the finally executed action result and the original action result is less than the threshold, completing the entire learning process.
Preferably, before step 101, points of a special color may be drawn on the target, patterns of a special shape may be pasted on it, or parts emitting special light may be installed on it as markers; when image recognition is performed after shooting, the markers are directly taken as the reference points. In other embodiments, the reference points can also be determined after the images are captured, being handled as digital information during image recognition within the system, so that no actually marked points need exist on the target.
Preferably, in step 102, the center of rotation B that remains stationary during rotation may not actually exist. In that case, the reference point with the smallest swing angle can be chosen and the influence of its swinging on the displacement corrected, so that it serves as the reference point for the displacement function.
Preferably, in step 102, the calculation of the posture function described above applies only to swinging within a single plane. In this embodiment, the specific swing angle α is therefore calculated as follows: images of the object in the xy plane, the xz plane, and the yz plane, namely the projected figures of the target on these planes, are captured respectively; the swing angles of the projected figures in the three planes are calculated and then fitted into the spatial swing angle α. It is easy to understand that in a specific posture function, three equations can also be combined directly, each describing how the swing angle in one of the three planes changes over time.
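The projection idea can be sketched as follows: compute the swing sub-angle of the target's direction vector in each coordinate plane, with the true spatial angle available as a cross-check. This is an illustrative example (small swings below π are assumed so the 2-D angle difference is unambiguous), not the disclosed fitting procedure:

```python
import math

def plane_angles(u, v):
    """Swing sub-angles of the projections of directions u and v onto the
    xy, xz and yz planes (angle between the 2-D projected vectors).
    Assumes sub-angles below pi, so the absolute difference is valid."""
    def angle2d(a, b):
        return abs(math.atan2(b[1], b[0]) - math.atan2(a[1], a[0]))
    (ux, uy, uz), (vx, vy, vz) = u, v
    return (angle2d((ux, uy), (vx, vy)),   # xy-plane projection
            angle2d((ux, uz), (vx, vz)),   # xz-plane projection
            angle2d((uy, uz), (vy, vz)))   # yz-plane projection

def spatial_angle(u, v):
    """The spatial swing angle itself, for cross-checking the fit."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(dot / (nu * nv))

u, v = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)   # a 90-degree swing in the xy plane
print(round(math.degrees(plane_angles(u, v)[0]), 1))  # xy sub-angle: 90.0
print(round(math.degrees(spatial_angle(u, v)), 1))    # spatial angle: 90.0
```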
Preferably, relevant sensors, such as acceleration sensors, can be installed on the target to capture data during the taught action process, and sensors can likewise be installed on the execution end of the robot to record data while the control program is executed. Comparing the two serves as an index for judging the fidelity of the imitation.
This embodiment further provides a learning system for controlling a robot to learn a taught action. The robot comprises an image acquisition unit, a data analysis unit, a drive control unit, and an execution end. The image acquisition unit photographs the taught action process from multiple directions and captures its teaching image information; the data analysis unit analyzes the teaching image information after receiving it and obtains the motion functions of the taught action, namely the displacement function and posture function described above; the drive control unit generates the control program after receiving the motion functions and controls the execution end to perform the imitated action.
With the learning system in this embodiment, the taught action can be captured and the motion functions constructed and analyzed autonomously; the control program is then generated, and after the control program is run, the execution end performs the imitated action, imitating the taught action. Because the action data is simplified, the point-selection computation when imitating a relatively complex taught action is reduced, a high-fidelity imitation of the taught action can be guaranteed, and no manual participation is needed to judge how to simplify the action, so the imitation learning process requires little manual participation and has a high degree of intelligence.
Preferably, the image acquisition unit captures not only the taught action but also the imitation image information of the imitated action. The learning system further includes a learning unit which, after comparing the imitation image information with the teaching image information, corrects the control program, namely carries out the robot learning process. Through repeated learning and correction, the fidelity of the imitated action can be improved, enabling the robot to imitate and reproduce the taught action of the target with higher precision.
In this embodiment, the image acquisition unit specifically includes multiple cameras placed in various directions, which shoot simultaneously to capture and record image information.
This embodiment further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the deep learning visual acquisition method based on target posture described above.
The above are merely preferred embodiments of the present invention and are not intended to limit the invention. Any modifications, equivalent replacements, and improvements made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (10)
1. A deep learning visual acquisition method based on target posture, for controlling a robot to learn a taught action, characterized by comprising the following steps:
capturing teaching image information of the taught action process from multiple directions;
analyzing the teaching image information, selecting multiple reference points of the taught action, and fitting the movement over time into at least two functions: a posture function describing how the target's posture changes over time, and a displacement function describing how the target's position changes over time; and
generating a control program so that the robot can reproduce the taught action according to the posture function and the displacement function.
2. The deep learning visual acquisition method based on target posture according to claim 1, characterized by further comprising, after generating the control program, the following steps:
driving the robot to perform an imitated action according to the control program;
capturing imitation image information of the imitated action process from multiple directions; and
comparing the imitation image information with the teaching image information, and correcting the control program.
3. The deep learning visual acquisition method based on target posture according to claim 1, characterized by further comprising, after capturing the teaching image information of the taught action process from multiple directions, the following step:
selecting the reference point with the smallest swing angle, and correcting the influence of its swinging on the displacement, so that it serves as the reference point for the displacement function.
4. The deep learning visual acquisition method based on target posture according to claim 1, characterized in that selecting multiple reference points of the taught action and fitting the movement over time into at least two functions specifically comprises the following step:
superimposing the images taken at an interval of time t, measuring the distance between the same reference points, and calculating the swing angle; the posture function at that moment can be obtained from the swing angle and the time t.
5. The deep learning visual acquisition method based on target posture according to claim 3, characterized in that superimposing the images of the target taken at an interval of time t, measuring the distance between the same reference points, and calculating the swing angle specifically comprises the following step:
capturing the projected figures of the target in three mutually perpendicular planes, calculating the swing sub-angle of the projected figure in each plane, and fitting them spatially into the swing angle.
6. The deep learning visual acquisition method based on target posture according to claim 1, characterized in that the target is provided with markers that facilitate the observation of the selected points.
7. A learning system, for controlling a robot to learn a taught action, the robot having an execution end, characterized by comprising:
an image acquisition unit, which photographs the taught action process from multiple directions and captures its teaching image information;
a data analysis unit, which analyzes the teaching image information after receiving it to obtain the motion functions of the taught action; and
a drive control unit, which generates a drive program after receiving the motion functions and controls the execution end to perform an imitated action.
8. The learning system according to claim 7, characterized by further comprising a learning unit; the image acquisition unit is further used to capture imitation image information of the imitated action from multiple directions, and the learning unit compares the teaching image information with the imitation image information and corrects the control program.
9. The learning system according to claim 7, characterized in that the image acquisition unit includes multiple cameras, which shoot simultaneously from multiple directions and record image information.
10. A storage medium, the computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the deep learning visual acquisition method based on target posture according to any one of claims 1 to 6 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811466680.0A CN109676583B (en) | 2018-12-03 | 2018-12-03 | Deep learning visual acquisition method based on target posture, learning system and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109676583A true CN109676583A (en) | 2019-04-26 |
CN109676583B CN109676583B (en) | 2021-08-24 |
Family
ID=66186069
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811466680.0A Active CN109676583B (en) | 2018-12-03 | 2018-12-03 | Deep learning visual acquisition method based on target posture, learning system and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109676583B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111230862A (en) * | 2020-01-10 | 2020-06-05 | 上海发那科机器人有限公司 | Handheld workpiece deburring method and system based on visual recognition function |
CN114789470A (en) * | 2022-01-25 | 2022-07-26 | 北京萌特博智能机器人科技有限公司 | Method and device for adjusting simulation robot |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2993002A1 (en) * | 2014-09-03 | 2016-03-09 | Canon Kabushiki Kaisha | Robot apparatus and method for controlling robot apparatus |
CN106182003A (en) * | 2016-08-01 | 2016-12-07 | 清华大学 | A kind of mechanical arm teaching method, Apparatus and system |
CN107309882A (en) * | 2017-08-14 | 2017-11-03 | 青岛理工大学 | A kind of robot teaching programming system and method |
US20170361464A1 (en) * | 2016-06-20 | 2017-12-21 | Canon Kabushiki Kaisha | Method of controlling robot apparatus, robot apparatus, and method of manufacturing article |
CN108527319A (en) * | 2018-03-28 | 2018-09-14 | 广州瑞松北斗汽车装备有限公司 | The robot teaching method and system of view-based access control model system |
- 2018-12-03: CN application CN201811466680.0A filed; granted as CN109676583B (status: Active)
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111230862A (en) * | 2020-01-10 | 2020-06-05 | 上海发那科机器人有限公司 | Handheld workpiece deburring method and system based on visual recognition function |
CN111230862B (en) * | 2020-01-10 | 2021-05-04 | 上海发那科机器人有限公司 | Handheld workpiece deburring method and system based on visual recognition function |
CN114789470A (en) * | 2022-01-25 | 2022-07-26 | 北京萌特博智能机器人科技有限公司 | Method and device for adjusting simulation robot |
Also Published As
Publication number | Publication date |
---|---|
CN109676583B (en) | 2021-08-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107833271B (en) | Skeleton redirection method and device based on Kinect | |
Hager | Task-directed sensor fusion and planning: a computational approach | |
Riley et al. | Enabling real-time full-body imitation: a natural way of transferring human movement to humanoids | |
KR101810415B1 (en) | Information processing device, information processing system, block system, and information processing method | |
CN105528082A (en) | Three-dimensional space and hand gesture recognition tracing interactive method, device and system | |
CN102169366A (en) | Multi-target tracking method in three-dimensional space | |
CN109993073A (en) | A kind of complicated dynamic gesture identification method based on Leap Motion | |
CN109676583A (en) | Based on targeted attitude deep learning vision collecting method, learning system and storage medium | |
Taylor et al. | Visual perception and robotic manipulation: 3D object recognition, tracking and hand-eye coordination | |
CN110293552A (en) | Mechanical arm control method, device, control equipment and storage medium | |
CN109655059B (en) | Vision-inertia fusion navigation system and method based on theta-increment learning | |
CN109590987A (en) | Semi-intelligent learning from instruction method, intelligent robot and storage medium | |
CN108536314A (en) | Method for identifying ID and device | |
CN108279773A (en) | A kind of data glove based on MARG sensors and Magnetic oriented technology | |
CN109974853A (en) | Based on the multispectral compound detection of bionical sensation target and tracking | |
CN113284192A (en) | Motion capture method and device, electronic equipment and mechanical arm control system | |
WO2024094227A1 (en) | Gesture pose estimation method based on kalman filtering and deep learning | |
CN111433783A (en) | Hand model generation method and device, terminal device and hand motion capture method | |
Banerjee et al. | 3D face authentication software test automation | |
CN109685828A (en) | Based on targeted attitude deep learning tracking acquisition method, learning system and storage medium | |
CN105917385A (en) | Information processing device and information processing method | |
CN107101632A (en) | Space positioning apparatus and method based on multi-cam and many markers | |
CN116485953A (en) | Data processing method, device, equipment and readable storage medium | |
CN115129162A (en) | Picture event driving method and system based on human body image change | |
WO2023103145A1 (en) | Head pose truth value acquisition method, apparatus and device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||