CN104298345A - Control method for man-machine interaction system - Google Patents

Control method for man-machine interaction system

Info

Publication number: CN104298345A (application CN201410364071.XA)
Authority: CN (China)
Prior art keywords: identification point, marker, frame image, geometric configuration, movement
Legal status: Granted; currently Active
Other languages: Chinese (zh)
Other versions: CN104298345B (en)
Inventors: 李江 (Li Jiang), 王卫红 (Wang Weihong), 杨洁 (Yang Jie)
Current and original assignee: Zhejiang University of Technology (ZJUT)
Application filed by Zhejiang University of Technology, with priority to CN201410364071.XA; published as CN104298345A and, after grant, as CN104298345B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality


Abstract

The invention discloses a control method for a man-machine interaction system. The method includes: obtaining an image of a plurality of markers, wherein a plurality of identification points are arranged on each marker and the identification points on different markers have different shapes; extracting the identification points of different shapes from the image; grouping the identification points according to their shapes to obtain the geometric shape formed by each group; obtaining the motion state of each marker by comparing the geometric shape in the current frame image with the corresponding geometric shape in the previous frame image; and outputting a corresponding control command to the man-machine interaction system according to the motion state. The method controls the man-machine interaction system at low cost and in a relatively simple manner.

Description

Control method for a man-machine interaction system

Technical field

The present invention relates to the field of human-computer interaction, and in particular to a control method for a man-machine interaction system.

Background technology

With the development of science and technology, virtual reality environments are widely used in fields such as video and computer games, simulators, and CAD tools. Most virtual reality environments allow the user to control an object in six degrees of freedom: moving along a horizontal axis, moving along a vertical axis, moving along a zoom axis, rotating around the horizontal axis (pitch), rotating around the vertical axis (yaw), and rotating around the zoom axis (roll).

In existing man-machine interaction systems for virtual reality environments, one control method is to mount a motion sensor, for example a motion accelerometer, on the tracking target (for example, the user's head). With this method, the accelerometer must be connected by a cable to the computer that tracks the motion of the user's head, and the cable hinders the user's movement and thus degrades the user experience. Moreover, the user's motion accelerates the wear of the working accelerometer, which increases the cost of the system. Finally, the measurement accuracy of the accelerometer is low, which limits the user's ability to perform particular tasks in the virtual reality environment.
Summary of the invention

The technical problem mainly solved by the present invention is to provide a control method for a man-machine interaction system that can control the system at low cost and in a relatively simple manner.

To solve the above technical problem, the technical solution adopted by the present invention is a control method for a man-machine interaction system, comprising:

Step 1: obtaining an image of a plurality of markers, wherein a plurality of identification points are arranged on each marker, and the identification points arranged on different markers have different shapes;

Step 2: extracting the identification points of different shapes from the image of step 1;

Step 3: grouping the identification points according to the shapes described in step 2 and obtaining the geometric shape formed by each group of identification points;

Step 4: obtaining the motion state of the marker by comparing the geometric shape of the current frame image with the corresponding geometric shape of the previous frame image;

Step 5: outputting a corresponding control command to the man-machine interaction system according to the motion state.
Further, step 2 specifically comprises:

21. searching for light spots in the image;

22. calculating the optical density of each light spot;

23. selecting the light spots whose optical density lies within a preset range as the identification points.

Further, the optical density in step 22 is calculated according to the following formula:

M = 4π * S / L²;

where M is the optical density of the spot, S is the area of the spot, and L is the perimeter of the spot.
Further, step 3 specifically comprises:

31. grouping the identification points according to their shapes, wherein the identification points in each group have the same shape;

32. obtaining the triangle formed by each group of identification points, wherein the triangle is formed by a first identification point, a second identification point and a third identification point arranged in ascending order of horizontal coordinate.
Further, step 4 specifically comprises:

411. obtaining the average horizontal coordinate of the geometric shape of the current frame image as a first horizontal coordinate, and the average horizontal coordinate of the corresponding geometric shape of the previous frame image as a second horizontal coordinate;

412. determining the movement of the marker along the horizontal axis by comparing the first horizontal coordinate with the second horizontal coordinate;

wherein the average horizontal coordinate of the geometric shape is the mean of the horizontal coordinates of the first, second and third identification points, and the movement of the marker along the horizontal axis can be determined according to the following formula:

Movement_horizontal = (q1x + q2x + q3x)/3 - (p1x + p2x + p3x)/3;

where q1x, q2x and q3x are the horizontal coordinates of the first, second and third identification points in the current frame image, and p1x, p2x and p3x are the horizontal coordinates of the first, second and third identification points in the previous frame image;

when Movement_horizontal is positive, the marker has moved right along the horizontal axis; when Movement_horizontal is negative, the marker has moved left along the horizontal axis.
Alternatively, step 4 specifically comprises:

421. obtaining the average vertical coordinate of the geometric shape of the current frame image as a first average vertical coordinate, and the average vertical coordinate of the corresponding geometric shape of the previous frame image as a second average vertical coordinate;

422. determining the movement of the marker along the vertical axis by comparing the first average vertical coordinate with the second average vertical coordinate;

wherein the average vertical coordinate of the geometric shape is the mean of the vertical coordinates of the first, second and third identification points, and the movement of the marker along the vertical axis can be determined according to the following formula:

Movement_vertical = (q1y + q2y + q3y)/3 - (p1y + p2y + p3y)/3;

where q1y, q2y and q3y are the vertical coordinates of the first, second and third identification points in the current frame image, and p1y, p2y and p3y are the vertical coordinates of the first, second and third identification points in the previous frame image;

when Movement_vertical is positive, the marker has moved up along the vertical axis; when Movement_vertical is negative, the marker has moved down along the vertical axis.
Alternatively, step 4 specifically comprises:

431. obtaining the area of the geometric shape of the current frame image as a first area, and the area of the corresponding geometric shape of the previous frame image as a second area;

432. determining the movement of the marker along the zoom axis by comparing the first area with the second area.

The movement of the marker along the zoom axis can be determined according to the following formula:

Movement_zoom = qarea - parea;

where qarea is the area of the geometric shape of the current frame image, and parea is the area of the geometric shape of the previous frame image;

when Movement_zoom is positive, the marker has moved along the zoom axis in the zoom-in direction; when Movement_zoom is negative, the marker has moved along the zoom axis in the zoom-out direction.

In addition, the area marea of the geometric shape can be calculated according to the following formulas:

ma = sqrt((m1x - m2x)² + (m1y - m2y)²);
mb = sqrt((m3x - m2x)² + (m3y - m2y)²);
mc = sqrt((m1x - m3x)² + (m1y - m3y)²);
ms = 0.5 * (ma + mb + mc);
marea = sqrt(ms * (ms - ma) * (ms - mb) * (ms - mc));

where m denotes either the geometric shape q of the current frame image or the geometric shape p of the previous frame image; ma, mb and mc are the three sides of the geometric shape; m1x, m2x and m3x are the horizontal coordinates of its first, second and third identification points; and m1y, m2y and m3y are the vertical coordinates of its first, second and third identification points.
Alternatively, step 4 specifically comprises:

441. obtaining the angle of the triangle of the current frame image at the second identification point as a first angle, and the angle of the triangle of the previous frame image at the second identification point as a second angle;

442. determining the rotation of the marker around the horizontal axis by comparing the first angle with the second angle.

The rotation of the marker around the horizontal axis can be determined according to the following formula:

Movement_pitch = q2angle - p2angle;

where q2angle is the angle of the triangle of the current frame image at the second identification point, and p2angle is the angle of the triangle of the previous frame image at the second identification point;

when Movement_pitch is positive, the marker has rotated right around the horizontal axis; when Movement_pitch is negative, the marker has rotated left around the horizontal axis.

In addition, the angle m2angle of the triangle at the second identification point can be calculated according to the following formulas:

ma = sqrt((m1x - m2x)² + (m1y - m2y)²);
mb = sqrt((m3x - m2x)² + (m3y - m2y)²);
mc = sqrt((m1x - m3x)² + (m1y - m3y)²);
m2angle = arccos((ma² + mb² - mc²) / (2 * ma * mb));

where m denotes either the geometric shape q of the current frame image or the geometric shape p of the previous frame image; ma, mb and mc are the three sides of the geometric shape; m1x, m2x and m3x are the horizontal coordinates of its first, second and third identification points; and m1y, m2y and m3y are the vertical coordinates of its first, second and third identification points.
Alternatively, step 4 specifically comprises:

451. obtaining the angle of the triangle of the current frame image at the first or third identification point as a first angle, and the angle of the triangle of the previous frame image at the same identification point as a second angle;

452. determining the rotation of the marker around the vertical axis by comparing the first angle with the second angle.

Specifically, for the angle at the first identification point, the rotation of the marker around the vertical axis can be determined according to the following formula:

Movement_yaw = q1angle - p1angle;

where q1angle is the angle of the triangle of the current frame image at the first identification point, and p1angle is the angle of the triangle of the previous frame image at the first identification point;

when Movement_yaw is positive, the marker has rotated right around the vertical axis; when Movement_yaw is negative, the marker has rotated left around the vertical axis.

Specifically, for the angle at the third identification point, the rotation of the marker around the vertical axis can be determined according to the following formula:

Movement_yaw = q3angle - p3angle;

where q3angle is the angle of the triangle of the current frame image at the third identification point, and p3angle is the angle of the triangle of the previous frame image at the third identification point;

when Movement_yaw is positive, the marker has rotated left around the vertical axis; when Movement_yaw is negative, the marker has rotated right around the vertical axis.

In addition, the angle m1angle of the triangle at the first identification point and the angle m3angle of the triangle at the third identification point can be calculated according to the following formulas:

ma = sqrt((m1x - m2x)² + (m1y - m2y)²);
mb = sqrt((m3x - m2x)² + (m3y - m2y)²);
mc = sqrt((m1x - m3x)² + (m1y - m3y)²);
m1angle = arccos((ma² + mc² - mb²) / (2 * ma * mc));
m3angle = arccos((mc² + mb² - ma²) / (2 * mb * mc));

where m denotes either the geometric shape q of the current frame image or the geometric shape p of the previous frame image; ma, mb and mc are the three sides of the geometric shape; m1x, m2x and m3x are the horizontal coordinates of its first, second and third identification points; and m1y, m2y and m3y are the vertical coordinates of its first, second and third identification points.
Alternatively, step 4 comprises:

461. obtaining the difference between the vertical coordinates of the first and third identification points of the current frame image as a first difference, and the difference between the vertical coordinates of the first and third identification points of the previous frame image as a second difference;

462. determining the rotation of the marker around the zoom axis by comparing the first difference with the second difference.

The rotation of the marker around the zoom axis can be determined according to the following formula:

Movement_roll = (q1y - q3y) - (p1y - p3y);

where q1y and q3y are the vertical coordinates of the first and third identification points of the current frame image, and p1y and p3y are the vertical coordinates of the first and third identification points of the previous frame image;

when Movement_roll is positive, the marker has rotated right around the zoom axis; when Movement_roll is negative, the marker has rotated left around the zoom axis.
The beneficial effects of the invention are as follows. Unlike the prior art, the present invention obtains an image of a plurality of markers, extracts from the image the differently shaped identification points arranged on the markers, groups the identification points according to their shapes to obtain the geometric shape formed by each group, compares the geometric shape formed by the identification points in the current frame image with the corresponding geometric shape in the previous frame image to obtain the motion state of each marker, and finally outputs a corresponding control command to the man-machine interaction system according to the motion state. Compared with the prior art, the present invention controls the man-machine interaction system wirelessly and therefore does not degrade the user experience. Furthermore, the present invention controls the man-machine interaction system by processing the images of the markers with simple mathematical algorithms, so it is simple to implement and easy to popularize. In addition, the present invention can track multiple markers simultaneously, and can therefore control multiple targets in six degrees of freedom, or a single target in more than six degrees of freedom, in the virtual reality environment.
Brief description of the drawings

Fig. 1 is an application-scenario diagram of the control method for a man-machine interaction system provided by an embodiment of the present invention;

Fig. 2 is a flowchart of the control method for a man-machine interaction system provided by an embodiment of the present invention;

Fig. 3 is a schematic structural diagram of the image of the markers in an embodiment of the present invention;

Figs. 4-15 are schematic diagrams of the motion of the geometric shapes corresponding to the current frame image and the previous frame image in embodiments of the present invention.
Embodiments

To enable those skilled in the art to better understand the solution of the present invention, the technical solution in the embodiments of the present invention is described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.
Fig. 1 is an application-scenario diagram of the control method for a man-machine interaction system provided by an embodiment of the present invention. The scenario can comprise a computer 105, a display 110, a keyboard 120, a camera 115, a marker 130 and a marker 140. The control method for the man-machine interaction system of the present invention is deployed on the computer 105.

In this embodiment, the computer 105 runs the software of a man-machine interaction system based on a virtual reality environment, and controls the virtual reality environment according to the images containing the markers 130 and 140 obtained by the camera 115. The display 110 is connected to the computer 105 and displays the virtual reality environment. The keyboard 120 is connected to the computer and receives commands input by the user to set parameters of the virtual reality environment.

In this embodiment, the marker 130 is a pair of glasses that can be worn by the user. Three identification points 135 are arranged on the marker 130: one identification point 135 is arranged at the middle of the top of the glasses, and the other two are arranged at the edges on the two sides of the glasses, so that the three identification points 135 form a triangular geometric shape. The marker 140 is a circular handheld device that can be held in the user's hand; three identification points 145 are arranged on the marker 140 and form a triangular geometric shape. When the user holds the marker 140, the camera 115 can capture an image of the marker containing the three identification points 145.

In this embodiment, the identification points 135 arranged on the marker 130 and the identification points 145 arranged on the marker 140 have different shapes.

When the camera 115 captures an image containing the markers 130 and 140 and passes it to the computer 105, the computer 105 extracts the differently shaped identification points from the image, that is, the identification points 135 and the identification points 145; groups the identification points according to their shapes to obtain the geometric shape formed by each group, that is, the triangle formed by the three identification points 135 and the triangle formed by the three identification points 145; obtains the motion state of each marker by comparing the geometric shape of the current frame image with the corresponding geometric shape of the previous frame image, that is, compares the triangles formed by the identification points 135 in the current and previous frame images to obtain the motion state of the marker 130, and compares the triangles formed by the identification points 145 in the current and previous frame images to obtain the motion state of the marker 140; and outputs corresponding control commands to the man-machine interaction system according to the motion states, so that the user controls the virtual reality environment.

Those skilled in the art will appreciate that although the computer 105 in Fig. 1 is a desktop computer, the computer 105 can also be any other computing system capable of running the software, and the present invention is not limited in this respect. Although the display 110 in Fig. 1 is a liquid crystal display, the display 110 can also be another display device, such as a flexible display or a television, and the present invention is not limited in this respect. Although Fig. 1 is described with the markers 130 and 140, those skilled in the art will understand that the present invention can also comprise more than two markers, and is not limited to two markers.
Fig. 2 is a flowchart of the control method for a man-machine interaction system provided by an embodiment of the present invention. It should be noted that, provided the result is substantially the same, the method of the present invention is not limited to the order of the flow shown in Fig. 2. As shown in Fig. 2, the method comprises the following steps.

Step S101: obtaining an image of a plurality of markers, wherein a plurality of identification points are arranged on each marker, and the identification points arranged on different markers have different shapes.

In step S101, the markers are arranged on the tracking targets, and the identification points are arranged on the markers. An identification point can be, for example, an infrared light-emitting diode (LED) or a laser emitter.

Take as an example two markers, each provided with three identification points. As shown in Fig. 3, the obtained image contains a marker 1950 and a marker 1960. Specifically, the marker 1950 comprises a first identification point 1910, a second identification point 1915 and a third identification point 1920, arranged in ascending order of horizontal coordinate, and the identification points 1910, 1915 and 1920 are triangular in shape. The marker 1960 comprises a first identification point 1925, a second identification point 1930 and a third identification point 1935, arranged in ascending order of horizontal coordinate, and the identification points 1925, 1930 and 1935 are circular in shape.
Step S102: extracting the differently shaped identification points from the image.

In step S102, light spots are first searched for in the image, where a light spot is a connected set of adjacent pixels with the same or substantially the same color; differently shaped identification points produce differently shaped light spots.

Then, the optical density of each spot found is calculated, where the optical density measures the compactness of the pixels in the spot; differently shaped spots have different optical densities. Specifically, calculating the optical density of a spot comprises: obtaining the area of the spot, calculated from the total number of pixels in the spot; obtaining the perimeter of the spot, calculated from the number of pixels on the periphery of the spot; and obtaining the optical density from the area and the corresponding perimeter.

Preferably, the optical density can be calculated according to the following formula:

M = 4π * S / L²;

where M is the optical density of the spot, S is the area of the spot, and L is the perimeter of the spot.

Finally, the spots whose optical density lies within a preset range are selected as identification points. Continuing the foregoing example, when the identification points on the marker 1950 are triangular, the predetermined optical density of a triangular identification point is π/3. The calculated optical density of each spot is compared with the predetermined optical density, and a spot is selected as a triangular identification point when its optical density lies within the preset range between a first threshold and a second threshold centered on the predetermined optical density. For example, when the first threshold is 1.030 and the second threshold is 1.062, the spots whose optical density is greater than 1.030 and less than 1.062 are selected as triangular identification points.

When the identification points on the marker 1960 are circular, the predetermined optical density of a circular identification point is 1. The calculated optical density of each spot is compared with the predetermined optical density, and a spot is selected as a circular identification point when its optical density lies within the preset range between a third threshold and a fourth threshold centered on the predetermined optical density. For example, when the third threshold is 0.99 and the fourth threshold is 1.01, the spots whose optical density is greater than 0.99 and less than 1.01 are selected as circular identification points.

In addition, after the spots whose optical density lies within the preset range are selected as identification points, the centroid of each such spot is obtained. The centroid is the center of mass, located at the center of the spot, and its two-dimensional coordinates are used as the two-dimensional coordinates of the identification point referred to hereinafter.

Those skilled in the art will appreciate that identification points can also be extracted by other methods, and the invention is not limited to the above method of extracting identification points according to the optical density of the spots.
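As a concrete illustration of step S102, here is a minimal Python sketch (an assumption for illustration, not the patent's implementation; the binarized-image input and all helper names are hypothetical). It flood-fills connected light spots, computes each spot's optical density M = 4π*S/L² from its pixel count and boundary-pixel count, and returns the centroids of the spots whose density lies in a preset range:

```python
import math
from collections import deque

def extract_points(binary, lo, hi):
    """Return (cx, cy, density) for each light spot in a binary image
    (list of rows of 0/1) whose optical density lies in (lo, hi)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    points = []
    for sy in range(h):
        for sx in range(w):
            if not binary[sy][sx] or seen[sy][sx]:
                continue
            spot, queue = [], deque([(sx, sy)])  # flood-fill one spot
            seen[sy][sx] = True
            while queue:
                x, y = queue.popleft()
                spot.append((x, y))
                for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                    if 0 <= nx < w and 0 <= ny < h and binary[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((nx, ny))
            area = len(spot)  # S: total number of pixels in the spot
            # L: pixels on the periphery, i.e. with at least one background neighbour.
            perimeter = sum(
                1 for (x, y) in spot
                if any(not (0 <= nx < w and 0 <= ny < h and binary[ny][nx])
                       for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1))))
            density = 4 * math.pi * area / perimeter ** 2
            if lo < density < hi:
                cx = sum(x for x, _ in spot) / area  # centroid gives the 2-D
                cy = sum(y for _, y in spot) / area  # coordinate of the point
                points.append((cx, cy, density))
    return points
```

For instance, extract_points(img, 0.99, 1.01) would keep only the roughly circular spots, matching the third and fourth thresholds in the example above.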
Step S103: grouping the identification points according to their shapes and obtaining the geometric shape formed by each group of identification points.

In step S103, the identification points in each group have the same shape, and the geometric shape formed by a group of identification points is determined by the number and positions of those points. Continuing the foregoing example, the triangular first identification point 1910, second identification point 1915 and third identification point 1920 are placed in one group, and the triangle formed by them is obtained. The circular first identification point 1925, second identification point 1930 and third identification point 1935 are placed in another group, and the triangle formed by them is obtained. Those skilled in the art will appreciate that the triangular geometric shape is only an example, and the present invention is not limited to it.
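Step S103 then reduces to bucketing detected points by shape and sorting each bucket by horizontal coordinate. A minimal sketch (the shape labels are assumed to come from the optical-density test above; names are illustrative):

```python
def group_points(points):
    """Bucket (shape, x, y) detections by shape; order each bucket by
    ascending x so indices 0/1/2 are the first/second/third identification points."""
    groups = {}
    for shape, x, y in points:
        groups.setdefault(shape, []).append((x, y))
    return {shape: sorted(pts) for shape, pts in groups.items()}  # sorts by x first

detections = [("circle", 40, 12), ("triangle", 3, 8), ("circle", 10, 30),
              ("circle", 25, 5), ("triangle", 30, 9), ("triangle", 17, 2)]
shapes = group_points(detections)
print(shapes["circle"])  # [(10, 30), (25, 5), (40, 12)]
```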
Step S104: obtaining the motion state of the marker by comparing the geometric shape of the current frame image with the corresponding geometric shape of the previous frame image.

In step S104, the motion state of a marker is one of six different motion states: movement along the horizontal axis, movement along the vertical axis, movement along the zoom axis, rotation around the horizontal axis, rotation around the vertical axis, and rotation around the zoom axis. The horizontal axis, vertical axis and zoom axis are mutually perpendicular and form a right-handed three-dimensional coordinate system.

Please refer to Figs. 4-15, which are schematic diagrams of the motion of the geometric shapes corresponding to the current frame image and the previous frame image. The solid line represents the geometric shape of the current frame image, the dotted line represents the geometric shape corresponding to the previous frame image, and a triangular geometric shape is used as the example.
As shown in Fig. 4 and Fig. 5, when the motion state of the marker is movement along the horizontal axis, the size of the geometric shape does not change between the current and previous frame images, and only the horizontal coordinate of each identification point in the geometric shape changes. The movement can therefore be detected by obtaining the average horizontal coordinate of the geometric shape of the current frame image as a first horizontal coordinate and the average horizontal coordinate of the corresponding geometric shape of the previous frame image as a second horizontal coordinate, and comparing the two, where the average horizontal coordinate of the geometric shape is the mean of the horizontal coordinates of the first, second and third identification points.

Specifically, the movement of the marker along the horizontal axis can be determined according to the following formula:

Movement_horizontal = (q1x + q2x + q3x)/3 - (p1x + p2x + p3x)/3;

where q1x, q2x and q3x are the horizontal coordinates of the first, second and third identification points in the current frame image, and p1x, p2x and p3x are the horizontal coordinates of the first, second and third identification points in the previous frame image;

when Movement_horizontal is positive, the marker has moved right along the horizontal axis; when Movement_horizontal is negative, the marker has moved left along the horizontal axis.
As shown in Fig. 6 and Fig. 7, when the motion state of the marker is movement along the vertical axis, the size of the geometric shape does not change between the current and previous frame images, and only the vertical coordinate of each identification point in the geometric shape changes. The movement can therefore be detected by obtaining the average vertical coordinate of the geometric shape of the current frame image as a first average vertical coordinate and the average vertical coordinate of the corresponding geometric shape of the previous frame image as a second average vertical coordinate, and comparing the two, where the average vertical coordinate of the geometric shape is the mean of the vertical coordinates of the first, second and third identification points.

Specifically, the movement of the marker along the vertical axis can be determined according to the following formula:

Movement_vertical = (q1y + q2y + q3y)/3 - (p1y + p2y + p3y)/3;

where q1y, q2y and q3y are the vertical coordinates of the first, second and third identification points in the current frame image, and p1y, p2y and p3y are the vertical coordinates of the first, second and third identification points in the previous frame image;

when Movement_vertical is positive, the marker has moved up along the vertical axis; when Movement_vertical is negative, the marker has moved down along the vertical axis.
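Both translation tests compare mean coordinates of the triangle between frames. A minimal sketch, assuming q and p are the current-frame and previous-frame triangles as lists of three (x, y) centroids ordered by horizontal coordinate:

```python
def movement_horizontal(q, p):
    """Positive: the marker moved right along the horizontal axis; negative: left."""
    return sum(x for x, _ in q) / 3 - sum(x for x, _ in p) / 3

def movement_vertical(q, p):
    """Positive: the marker moved up along the vertical axis; negative: down."""
    return sum(y for _, y in q) / 3 - sum(y for _, y in p) / 3

# Example: the whole triangle shifted 2 units right and 1 unit up.
q = [(2, 1), (5, 7), (8, 1)]
p = [(0, 0), (3, 6), (6, 0)]
print(movement_horizontal(q, p), movement_vertical(q, p))  # 2.0 1.0
```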
As shown in Fig. 8 and Fig. 9, when the motion state of the marker is movement along the zoom axis, the size of the geometric shape changes between the current and previous frame images, while the angle at each identification point does not change. The movement can therefore be detected by obtaining the area of the geometric shape of the current frame image as a first area and the area of the corresponding geometric shape of the previous frame image as a second area, and comparing the two.

Specifically, the movement of the marker along the zoom axis can be determined according to the following formula:

Movement_zoom = qarea - parea;

where qarea is the area of the geometric shape of the current frame image, and parea is the area of the geometric shape of the previous frame image;

when Movement_zoom is positive, the marker has moved along the zoom axis in the zoom-in direction; when Movement_zoom is negative, the marker has moved along the zoom axis in the zoom-out direction.

In addition, the area marea of the geometric shape can be calculated according to the following formulas:

ma = sqrt((m1x - m2x)² + (m1y - m2y)²);
mb = sqrt((m3x - m2x)² + (m3y - m2y)²);
mc = sqrt((m1x - m3x)² + (m1y - m3y)²);
ms = 0.5 * (ma + mb + mc);
marea = sqrt(ms * (ms - ma) * (ms - mb) * (ms - mc));

where m denotes either the geometric shape q of the current frame image or the geometric shape p of the previous frame image; ma, mb and mc are the three sides of the geometric shape; m1x, m2x and m3x are the horizontal coordinates of its first, second and third identification points; and m1y, m2y and m3y are the vertical coordinates of its first, second and third identification points.
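The zoom test in code, under the same assumptions; triangle_area implements Heron's formula from the side lengths given above:

```python
import math

def triangle_area(m):
    """Area of triangle m = [(x, y)] * 3 via Heron's formula."""
    ma = math.dist(m[0], m[1])
    mb = math.dist(m[2], m[1])
    mc = math.dist(m[0], m[2])
    ms = 0.5 * (ma + mb + mc)
    return math.sqrt(ms * (ms - ma) * (ms - mb) * (ms - mc))

def movement_zoom(q, p):
    """Positive: the shape enlarged (zoom-in); negative: it shrank (zoom-out)."""
    return triangle_area(q) - triangle_area(p)

print(movement_zoom([(0, 0), (2, 4), (4, 0)], [(0, 0), (1, 2), (2, 0)]))  # ≈ 6.0
```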
As shown in Fig. 10 and Fig. 11, when the motion state of the marker is rotation around the horizontal axis, the size of the geometric shape changes between the current and previous frame images. The rotation can therefore be detected by obtaining the angle of the triangle of the current frame image at the second identification point as a first angle and the angle of the triangle of the previous frame image at the second identification point as a second angle, and comparing the two.

Specifically, the rotation of the marker around the horizontal axis can be determined according to the following formula:

Movement_pitch = q2angle - p2angle;

where q2angle is the angle of the triangle of the current frame image at the second identification point, and p2angle is the angle of the triangle of the previous frame image at the second identification point;

when Movement_pitch is positive, the marker has rotated right around the horizontal axis; when Movement_pitch is negative, the marker has rotated left around the horizontal axis.

In addition, the angle m2angle of the triangle at the second identification point can be calculated according to the following formulas:

ma = sqrt((m1x - m2x)² + (m1y - m2y)²);
mb = sqrt((m3x - m2x)² + (m3y - m2y)²);
mc = sqrt((m1x - m3x)² + (m1y - m3y)²);
m2angle = arccos((ma² + mb² - mc²) / (2 * ma * mb));

where m denotes either the geometric shape q of the current frame image or the geometric shape p of the previous frame image; ma, mb and mc are the three sides of the geometric shape; m1x, m2x and m3x are the horizontal coordinates of its first, second and third identification points; and m1y, m2y and m3y are the vertical coordinates of its first, second and third identification points.
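The pitch test computes the triangle's angle at the second identification point by the law of cosines and differences it between frames. A minimal sketch under the same assumptions:

```python
import math

def angle_at_second(m):
    """Law of cosines: the triangle's angle at the second identification point."""
    ma = math.dist(m[0], m[1])  # side between first and second points
    mb = math.dist(m[2], m[1])  # side between third and second points
    mc = math.dist(m[0], m[2])  # side between first and third points
    return math.acos((ma**2 + mb**2 - mc**2) / (2 * ma * mb))

def movement_pitch(q, p):
    """Positive: rotated right around the horizontal axis; negative: left."""
    return angle_at_second(q) - angle_at_second(p)
```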
As shown in Fig. 12 and Fig. 13, when the motion state of the marker is rotation around the zoom axis, the size of the geometric shape does not change between the current and previous frame images, while the vertical and horizontal coordinates of each identification point change simultaneously. The rotation can therefore be detected by obtaining the difference between the vertical coordinates of the first and third identification points of the current frame image as a first difference and the corresponding difference in the previous frame image as a second difference, and comparing the two.

Specifically, the rotation of the marker around the zoom axis can be determined according to the following formula:

Movement_roll = (q1y - q3y) - (p1y - p3y);

where q1y and q3y are the vertical coordinates of the first and third identification points of the current frame image, and p1y and p3y are the vertical coordinates of the first and third identification points of the previous frame image;

when Movement_roll is positive, the marker has rotated right around the zoom axis; when Movement_roll is negative, the marker has rotated left around the zoom axis.
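The roll test needs only the vertical offset between the first and third identification points. A minimal sketch under the same assumptions:

```python
def movement_roll(q, p):
    """(q1y - q3y) - (p1y - p3y): positive means the marker rotated right
    around the zoom axis; negative means it rotated left."""
    return (q[0][1] - q[2][1]) - (p[0][1] - p[2][1])

# Tilting the triangle so its left vertex rises relative to the right one:
print(movement_roll([(0, 2), (3, 6), (6, 0)], [(0, 0), (3, 6), (6, 0)]))  # 2
```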
As shown in Fig. 14 and Fig. 15, when the motion state of the marker is rotation around the vertical axis, the size of the geometric shape changes between the current and previous frame images. The rotation can therefore be detected by obtaining the angle of the triangle of the current frame image at the first or third identification point as a first angle and the angle of the triangle of the previous frame image at the same identification point as a second angle, and comparing the two.

Specifically, for the angle at the first identification point, the rotation of the marker around the vertical axis can be determined according to the following formula:

Movement_yaw = q1angle - p1angle;

where q1angle is the angle of the triangle of the current frame image at the first identification point, and p1angle is the angle of the triangle of the previous frame image at the first identification point;

when Movement_yaw is positive, the marker has rotated right around the vertical axis; when Movement_yaw is negative, the marker has rotated left around the vertical axis.

Specifically, for the angle at the third identification point, the rotation of the marker around the vertical axis can be determined according to the following formula:

Movement_yaw = q3angle - p3angle;

where q3angle is the angle of the triangle of the current frame image at the third identification point, and p3angle is the angle of the triangle of the previous frame image at the third identification point;

when Movement_yaw is positive, the marker has rotated left around the vertical axis; when Movement_yaw is negative, the marker has rotated right around the vertical axis.

In addition, the angle m1angle of the triangle at the first identification point and the angle m3angle of the triangle at the third identification point can be calculated according to the following formulas:

ma = sqrt((m1x - m2x)² + (m1y - m2y)²);
mb = sqrt((m3x - m2x)² + (m3y - m2y)²);
mc = sqrt((m1x - m3x)² + (m1y - m3y)²);
m1angle = arccos((ma² + mc² - mb²) / (2 * ma * mc));
m3angle = arccos((mc² + mb² - ma²) / (2 * mb * mc));

where m denotes either the geometric shape q of the current frame image or the geometric shape p of the previous frame image; ma, mb and mc are the three sides of the geometric shape; m1x, m2x and m3x are the horizontal coordinates of its first, second and third identification points; and m1y, m2y and m3y are the vertical coordinates of its first, second and third identification points.
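The yaw test differences the vertex angle at the first or the third identification point; as stated above, the sign convention is opposite at the two vertices. A minimal sketch under the same assumptions:

```python
import math

def vertex_angle(m, i):
    """Angle of triangle m at vertex i (0 = first, 2 = third identification point)."""
    ma = math.dist(m[0], m[1])
    mb = math.dist(m[2], m[1])
    mc = math.dist(m[0], m[2])
    if i == 0:
        return math.acos((ma**2 + mc**2 - mb**2) / (2 * ma * mc))
    if i == 2:
        return math.acos((mc**2 + mb**2 - ma**2) / (2 * mb * mc))
    raise ValueError("yaw uses the first (0) or third (2) identification point")

def movement_yaw(q, p, i=0):
    """Angle difference at vertex i; per the text, a positive value means a
    right turn when i == 0 and a left turn when i == 2."""
    return vertex_angle(q, i) - vertex_angle(p, i)
```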
Preferably, note that among the six motion states above, the size of the geometric shape changes in three of them: movement along the zoom axis, rotation around the horizontal axis, and rotation around the vertical axis. To simplify detection, after determining that the size of the geometric shape has changed, the angles at the first, second or third identification point can be calculated, and the motion state of movement along the zoom axis is determined only after compensating for these angles.
Step S105: outputting a corresponding control command to the man-machine interaction system according to the motion state.

In step S105, according to the correspondence between motion states and control commands, the detected motion state of the marker is converted into the corresponding control command, which is output to the man-machine interaction system.
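Step S105 is a lookup from detected motion to command. The patent leaves the mapping to the application, so the following sketch is purely illustrative: the dead-zone threshold, command strings, and output hook are all assumptions, not from the patent.

```python
def send_to_interaction_system(cmd):
    print("command:", cmd)  # stand-in for the real output channel

THRESHOLD = 2.0  # assumed dead zone to suppress frame-to-frame jitter

def to_command(axis, value):
    """Map a detected motion value on one axis to a control command string."""
    if abs(value) < THRESHOLD:
        return None
    return axis + ("+" if value > 0 else "-")

for axis, value in [("horizontal", 5.1), ("pitch", -0.4), ("yaw", 3.2)]:
    cmd = to_command(axis, value)
    if cmd is not None:
        send_to_interaction_system(cmd)  # prints "horizontal+" and "yaw+"
```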
In this embodiment, continuing the foregoing example, when the markers comprise the marker 1950 and the marker 1960, the motion states of the two markers can be used to control the same target in the man-machine interaction system simultaneously, thereby controlling the target in more than six degrees of freedom, for example in seven or eight degrees of freedom. Alternatively, the motion states of the markers 1950 and 1960 can control different targets in the man-machine interaction system, making it possible, for example, to change the viewing angle of the virtual reality environment while aiming at a target in six degrees of freedom.
Through the above embodiments, the control method for a man-machine interaction system of the present invention obtains an image of a plurality of markers, extracts from the image the differently shaped identification points arranged on the markers, groups the identification points according to their shapes to obtain the geometric shape formed by each group, compares the geometric shape formed by the identification points in the current frame image with the corresponding geometric shape in the previous frame image to obtain the motion state of each marker, and finally outputs a corresponding control command to the man-machine interaction system according to the motion state. Compared with the prior art, the present invention controls the man-machine interaction system wirelessly and therefore does not degrade the user experience. Furthermore, the present invention controls the man-machine interaction system by processing the images of the markers with simple mathematical algorithms, so it is simple to implement and easy to popularize. In addition, the present invention can track multiple markers simultaneously, and can therefore control multiple targets in six degrees of freedom, or a single target in more than six degrees of freedom, in the virtual reality environment.

The foregoing is only embodiments of the present invention and does not thereby limit the scope of the claims of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included in the patent protection scope of the present invention.

Claims (10)

1. A control method for a man-machine interaction system, characterized in that the method comprises:

Step 1: obtaining an image of a plurality of markers, wherein a plurality of identification points are arranged on each marker, and the identification points arranged on different markers have different shapes;

Step 2: extracting the identification points of different shapes from the image of step 1;

Step 3: grouping the identification points according to the shapes described in step 2 and obtaining the geometric shape formed by each group of identification points;

Step 4: obtaining the motion state of the marker by comparing the geometric shape of the current frame image with the corresponding geometric shape of the previous frame image;

Step 5: outputting a corresponding control command to the man-machine interaction system according to the motion state.

2. The method according to claim 1, characterized in that step 2 specifically comprises:

21. searching for light spots in the image;

22. calculating the optical density of each light spot;

23. selecting the light spots whose optical density lies within a preset range as the identification points.
3. The method according to claim 2, characterized in that the optical density in step 22 is calculated according to the following formula:

M = 4π * S / L²;

where M is the optical density of the spot, S is the area of the spot, and L is the perimeter of the spot.
4. The method according to claim 2, characterized in that step 3 specifically comprises:

31. grouping the identification points according to their shapes, wherein the identification points in each group have the same shape;

32. obtaining the triangle formed by each group of identification points, wherein the triangle is formed by a first identification point, a second identification point and a third identification point arranged in ascending order of horizontal coordinate.
5. The method according to claim 4, characterized in that step 4 specifically comprises:

411. obtaining the average horizontal coordinate of the geometric shape of the current frame image as a first horizontal coordinate, and the average horizontal coordinate of the corresponding geometric shape of the previous frame image as a second horizontal coordinate;

412. determining the movement of the marker along the horizontal axis by comparing the first horizontal coordinate with the second horizontal coordinate;

wherein the average horizontal coordinate of the geometric shape is the mean of the horizontal coordinates of the first, second and third identification points, and the movement of the marker along the horizontal axis can be determined according to the following formula:

Movement_horizontal = (q1x + q2x + q3x)/3 - (p1x + p2x + p3x)/3;

where q1x, q2x and q3x are the horizontal coordinates of the first, second and third identification points in the current frame image, and p1x, p2x and p3x are the horizontal coordinates of the first, second and third identification points in the previous frame image;

when Movement_horizontal is positive, the marker has moved right along the horizontal axis; when Movement_horizontal is negative, the marker has moved left along the horizontal axis.
6. The method according to claim 4, characterized in that step 4 specifically comprises:

421. obtaining the average vertical coordinate of the geometric shape of the current frame image as a first average vertical coordinate, and the average vertical coordinate of the corresponding geometric shape of the previous frame image as a second average vertical coordinate;

422. determining the movement of the marker along the vertical axis by comparing the first average vertical coordinate with the second average vertical coordinate;

wherein the average vertical coordinate of the geometric shape is the mean of the vertical coordinates of the first, second and third identification points, and the movement of the marker along the vertical axis can be determined according to the following formula:

Movement_vertical = (q1y + q2y + q3y)/3 - (p1y + p2y + p3y)/3;

where q1y, q2y and q3y are the vertical coordinates of the first, second and third identification points in the current frame image, and p1y, p2y and p3y are the vertical coordinates of the first, second and third identification points in the previous frame image;

when Movement_vertical is positive, the marker has moved up along the vertical axis; when Movement_vertical is negative, the marker has moved down along the vertical axis.
7. The method according to claim 4, characterized in that step 4 specifically comprises:

431. obtaining the area of the geometric shape of the current frame image as a first area, and the area of the corresponding geometric shape of the previous frame image as a second area;

432. determining the movement of the marker along the zoom axis by comparing the first area with the second area;

wherein the movement of the marker along the zoom axis can be determined according to the following formula:

Movement_zoom = qarea - parea;

where qarea is the area of the geometric shape of the current frame image, and parea is the area of the geometric shape of the previous frame image;

when Movement_zoom is positive, the marker has moved along the zoom axis in the zoom-in direction; when Movement_zoom is negative, the marker has moved along the zoom axis in the zoom-out direction;

and wherein the area marea of the geometric shape can be calculated according to the following formulas:

ma = sqrt((m1x - m2x)² + (m1y - m2y)²);
mb = sqrt((m3x - m2x)² + (m3y - m2y)²);
mc = sqrt((m1x - m3x)² + (m1y - m3y)²);
ms = 0.5 * (ma + mb + mc);
marea = sqrt(ms * (ms - ma) * (ms - mb) * (ms - mc));

where m denotes either the geometric shape q of the current frame image or the geometric shape p of the previous frame image; ma, mb and mc are the three sides of the geometric shape; m1x, m2x and m3x are the horizontal coordinates of its first, second and third identification points; and m1y, m2y and m3y are the vertical coordinates of its first, second and third identification points.
8. The method according to claim 4, characterized in that step 4 specifically comprises:

441. obtaining the angle of the triangle of the current frame image at the second identification point as a first angle, and the angle of the triangle of the previous frame image at the second identification point as a second angle;

442. determining the rotation of the marker around the horizontal axis by comparing the first angle with the second angle;

wherein the rotation of the marker around the horizontal axis can be determined according to the following formula:

Movement_pitch = q2angle - p2angle;

where q2angle is the angle of the triangle of the current frame image at the second identification point, and p2angle is the angle of the triangle of the previous frame image at the second identification point;

when Movement_pitch is positive, the marker has rotated right around the horizontal axis; when Movement_pitch is negative, the marker has rotated left around the horizontal axis;

and wherein the angle m2angle of the triangle at the second identification point can be calculated according to the following formulas:

ma = sqrt((m1x - m2x)² + (m1y - m2y)²);
mb = sqrt((m3x - m2x)² + (m3y - m2y)²);
mc = sqrt((m1x - m3x)² + (m1y - m3y)²);
m2angle = arccos((ma² + mb² - mc²) / (2 * ma * mb));

where m denotes either the geometric shape q of the current frame image or the geometric shape p of the previous frame image; ma, mb and mc are the three sides of the geometric shape; m1x, m2x and m3x are the horizontal coordinates of its first, second and third identification points; and m1y, m2y and m3y are the vertical coordinates of its first, second and third identification points.
9. method according to claim 4, is characterized in that: step 4 specifically comprises:
451 obtain using described first identification point or the described 3rd identification point angle that is summit as the first angle in the described triangle of current frame images, and using described first identification point or the described 3rd identification point angle that is summit as the second angle in the described triangle obtaining previous frame image;
452 determine the rotation of described marker around Z-axis by more described first angle and described second angle.
Specifically, the angle being summit for the first identification point, can according to the rotation of following formula determination marker along Z-axis:
Movement_yaw=q1angle-p1angle;
Wherein, q1angle is the angle of the triangle of the current frame image at the vertex of the first identification point, and p1angle is the angle of the triangle of the previous frame image at the vertex of the first identification point;
Wherein, when Movement_yaw is a positive value, the marker rotates to the right around the Z-axis; when Movement_yaw is a negative value, the marker rotates to the left around the Z-axis.
Specifically, for the angle at the vertex of the third identification point, the rotation of the marker around the Z-axis can be determined according to the following formula:
Movement_yaw=q3angle-p3angle;
Wherein, q3angle is the angle of the triangle of the current frame image at the vertex of the third identification point, and p3angle is the angle of the triangle of the previous frame image at the vertex of the third identification point;
Wherein, when Movement_yaw is a positive value, the marker rotates to the left around the Z-axis; when Movement_yaw is a negative value, the marker rotates to the right around the Z-axis.
In addition, the angle m1angle of the triangle at the vertex of the first identification point and the angle m3angle of the triangle at the vertex of the third identification point can be calculated according to the following formulas:
ma = sqrt((m1x - m2x)^2 + (m1y - m2y)^2);
mb = sqrt((m3x - m2x)^2 + (m3y - m2y)^2);
mc = sqrt((m1x - m3x)^2 + (m1y - m3y)^2);
m1angle = arccos(((ma)^2 + (mc)^2 - (mb)^2) / (2*ma*mc));
m3angle = arccos(((mc)^2 + (mb)^2 - (ma)^2) / (2*mb*mc));
Wherein, m denotes the geometric configuration q of the current frame image or the geometric configuration p of the previous frame image; ma, mb and mc are the three sides of the geometric configuration; m1x, m2x and m3x are the horizontal coordinates of the first, second and third identification points of the geometric configuration; and m1y, m2y and m3y are the vertical coordinates of the first, second and third identification points of the geometric configuration.
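The yaw computation can be sketched the same way (Python; names are illustrative, not from the patent). As stated above, the sign convention flips depending on whether the angle at the first or at the third identification point is compared.

import math

def _side_lengths(m1, m2, m3):
    ma = math.hypot(m1[0] - m2[0], m1[1] - m2[1])
    mb = math.hypot(m3[0] - m2[0], m3[1] - m2[1])
    mc = math.hypot(m1[0] - m3[0], m1[1] - m3[1])
    return ma, mb, mc

def angle_at_first_point(m1, m2, m3):
    # m1angle: law of cosines at the vertex of the first identification point
    ma, mb, mc = _side_lengths(m1, m2, m3)
    return math.acos(max(-1.0, min(1.0, (ma**2 + mc**2 - mb**2) / (2 * ma * mc))))

def angle_at_third_point(m1, m2, m3):
    # m3angle: law of cosines at the vertex of the third identification point
    ma, mb, mc = _side_lengths(m1, m2, m3)
    return math.acos(max(-1.0, min(1.0, (mc**2 + mb**2 - ma**2) / (2 * mb * mc))))

def movement_yaw(q_points, p_points):
    # Movement_yaw = q1angle - p1angle; positive = rightward rotation around
    # the Z-axis (using angle_at_third_point instead reverses the convention)
    return angle_at_first_point(*q_points) - angle_at_first_point(*p_points)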
10. The method according to claim 4, characterized in that step 4 comprises:
461. obtaining the difference between the vertical coordinates of the first identification point and the third identification point of the current frame image as a first difference, and obtaining the difference between the vertical coordinates of the first identification point and the third identification point of the previous frame image as a second difference;
462. determining the rotation of the marker around the longitudinal axis by comparing the first difference with the second difference.
The rotation of the marker around the longitudinal axis can be determined according to the following formula:
Movement_roll=(q1y-q3y)-(p1y-p3y);
Wherein, q1y and q3y are the vertical coordinates of the first identification point and the third identification point of the current frame image, and p1y and p3y are the vertical coordinates of the first identification point and the third identification point of the previous frame image;
Wherein, when Movement_roll is a positive value, the marker rotates to the right around the longitudinal axis; when Movement_roll is a negative value, the marker rotates to the left around the longitudinal axis.
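The roll computation is a plain coordinate difference and needs no trigonometry; a one-line Python sketch (names are illustrative, not from the patent):

def movement_roll(q1y, q3y, p1y, p3y):
    # Movement_roll = (q1y - q3y) - (p1y - p3y); positive = rightward
    # rotation around the longitudinal axis, negative = leftward rotation
    return (q1y - q3y) - (p1y - p3y)

For example, movement_roll(q1y=120, q3y=100, p1y=110, p3y=105) returns 15, a positive value, indicating a rightward rotation around the longitudinal axis under the stated sign convention.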
CN201410364071.XA 2014-07-28 2014-07-28 Control method for man-machine interaction system Active CN104298345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410364071.XA CN104298345B (en) 2014-07-28 2014-07-28 Control method for man-machine interaction system

Publications (2)

Publication Number Publication Date
CN104298345A (en) 2015-01-21
CN104298345B CN104298345B (en) 2017-05-17

Family

ID=52318108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410364071.XA Active CN104298345B (en) 2014-07-28 2014-07-28 Control method for man-machine interaction system

Country Status (1)

Country Link
CN (1) CN104298345B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103409A (en) * 2011-01-20 2011-06-22 桂林理工大学 Man-machine interaction method and device based on motion trail identification
CN103186226A (en) * 2011-12-28 2013-07-03 北京德信互动网络技术有限公司 Man-machine interaction system and method
CN103336575A (en) * 2013-06-27 2013-10-02 深圳先进技术研究院 Man-machine interaction intelligent glasses system and interaction method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105892638A (en) * 2015-12-01 2016-08-24 乐视致新电子科技(天津)有限公司 Virtual reality interaction method, device and system
WO2017143745A1 (en) * 2016-02-22 2017-08-31 上海乐相科技有限公司 Method and apparatus for determining movement information of to-be-detected object
CN106200964A (en) * 2016-07-06 2016-12-07 浙江大学 A kind of method carrying out man-machine interaction based on motion track identification in virtual reality
CN106200964B (en) * 2016-07-06 2018-10-26 浙江大学 The method for carrying out human-computer interaction is identified in a kind of virtual reality based on motion track
CN107340965A (en) * 2017-06-28 2017-11-10 丝路视觉科技股份有限公司 Desktop display and its control method, object to be identified and its recognition methods

Also Published As

Publication number Publication date
CN104298345B (en) 2017-05-17

Similar Documents

Publication Publication Date Title
CN106200679B (en) Single operation person's multiple no-manned plane mixing Active Control Method based on multi-modal natural interaction
CN101715581B (en) Volume recognition method and system
CN102999152B (en) A kind of gesture motion recognition methods and system
CN111694428B (en) Gesture and track remote control robot system based on Kinect
CN104050859A (en) Interactive digital stereoscopic sand table system
CN103677240B (en) Virtual touch exchange method and virtual touch interactive device
CN102053702A (en) Dynamic gesture control system and method
CN104570731A (en) Uncalibrated human-computer interaction control system and method based on Kinect
CN104317391A (en) Stereoscopic vision-based three-dimensional palm posture recognition interactive method and system
CN102243687A (en) Physical education teaching auxiliary system based on motion identification technology and implementation method of physical education teaching auxiliary system
CN104298345A (en) Control method for man-machine interaction system
CN103714322A (en) Real-time gesture recognition method and device
CN103049934A (en) Roam mode realizing method in three-dimensional scene simulation system
CN106708270A (en) Display method and apparatus for virtual reality device, and virtual reality device
CN105631901A (en) Method and device for determining movement information of to-be-detected object
WO2016035941A1 (en) Pose recognizing system and method using 3d spatial data on human model
CN110705385B (en) Method, device, equipment and medium for detecting angle of obstacle
CN105867613A (en) Head control interaction method and apparatus based on virtual reality system
CN110433467A (en) Picking up table tennis ball robot operation method and equipment based on binocular vision and ant group algorithm
CN103761011B (en) A kind of method of virtual touch screen, system and the equipment of calculating
CN111158362A (en) Charging pile, robot charging method and device and robot system
CN106650628A (en) Fingertip detection method based on three-dimensional K curvature
US20150103080A1 (en) Computing device and method for simulating point clouds
CN103644894B (en) A kind of method that complex-curved target identification and three-dimensional pose are measured
CN107688431B (en) Man-machine interaction method based on radar positioning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant