CN104298345B - Control method for man-machine interaction system - Google Patents

Control method for man-machine interaction system

Info

Publication number
CN104298345B
CN104298345B (Application CN201410364071.XA)
Authority
CN
China
Prior art keywords
identification point
marker
geometry
frame image
movement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410364071.XA
Other languages
Chinese (zh)
Other versions
CN104298345A (en)
Inventor
李江
王卫红
杨洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201410364071.XA priority Critical patent/CN104298345B/en
Publication of CN104298345A publication Critical patent/CN104298345A/en
Application granted granted Critical
Publication of CN104298345B publication Critical patent/CN104298345B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a control method for a man-machine interaction system. The method comprises: obtaining images of a plurality of markers, wherein a plurality of identification points are arranged on each marker and the identification points on different markers have different shapes; extracting the identification points of different shapes from the images; grouping the identification points according to their shapes to obtain the geometric figure formed by each group; obtaining the movement state of each marker by comparing the geometric figure of the current frame image with the corresponding geometric figure of the previous frame image; and outputting a corresponding control command to the man-machine interaction system according to the movement state. With this method, the man-machine interaction system is controlled at low cost and in a relatively simple manner.

Description

Control method for a man-machine interaction system
Technical field
The present invention relates to the field of human-computer interaction, and more particularly to a control method for a man-machine interaction system.
Background art
With the development of science and technology, virtual reality environments have been widely used in fields such as video/computer games, simulators and CAD tools. Most virtual reality environments allow the user to control an object with six degrees of freedom, that is, to move the object along a horizontal axis, along a vertical axis and along a zoom axis, and to rotate it about the horizontal axis (pitch), about the vertical axis (yaw) and about the zoom axis (roll).
In a man-machine interaction system for a virtual reality environment, one existing control method attaches a motion sensor, for example a motion accelerometer, to the tracked target (for example, the user's head). With this method the accelerometer must be connected to the computer by a cable in order to track the movement of the user's head, and the cable hinders the user's movement and degrades the user experience. In addition, the user's movement accelerates the wear of the working accelerometer, which increases the cost of the system. Finally, the measurement accuracy of the accelerometer is relatively low, which limits the user's ability to perform particular tasks in the virtual reality environment.
Summary of the invention
The technical problem mainly solved by the invention is to provide a control method for a man-machine interaction system that controls the system at relatively low cost and in a relatively simple manner.
To solve the above technical problem, one aspect of the invention provides a control method for a man-machine interaction system, characterized in that the method comprises:
Step 1: obtain images of a plurality of markers, wherein a plurality of identification points are arranged on each marker, and the identification points arranged on different markers have different shapes;
Step 2: extract the identification points of different shapes from the images of step 1;
Step 3: group the identification points of step 2 according to their shapes to obtain the geometric figure formed by each group of identification points;
Step 4: obtain the movement state of each marker by comparing the geometric figure of the current frame image with the corresponding geometric figure of the previous frame image;
Step 5: output a corresponding control command to the man-machine interaction system according to the movement state.
Further, step 2 specifically comprises:
Step 21: search for light spots in the image;
Step 22: calculate the optical density of each light spot;
Step 23: select the light spots whose optical density falls within a predetermined range as the identification points.
Further, the optical density in step 22 is calculated from the area and the perimeter of the light spot, wherein M is the optical density of the light spot, S is the area of the light spot, and L is the perimeter of the light spot.
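As an illustrative sketch of steps 21-23, the spot filter below uses the circularity form M = L² / (4πS), which equals 1 for a perfect circle and grows for less circular spots. This concrete formula is an assumption: the patent defines M only as a function of the spot area S and perimeter L, normalized so that a circle gives 1. The function and field names are likewise illustrative.

```python
import math

def optical_density(area: float, perimeter: float) -> float:
    # Assumed circularity measure M = L^2 / (4*pi*S): 1.0 for a perfect
    # circle, larger for less circular spots.
    return perimeter ** 2 / (4 * math.pi * area)

def select_identification_points(spots, lower, upper):
    # Step 23: keep only spots whose optical density lies in [lower, upper].
    return [s for s in spots
            if lower <= optical_density(s["area"], s["perimeter"]) <= upper]

# A circular spot of radius 10: S = pi*r^2, L = 2*pi*r, so M is about 1.0
circle = {"area": math.pi * 100, "perimeter": 2 * math.pi * 10}
# The circular spot passes the 0.99-1.01 range given in the description
print(select_identification_points([circle], 0.99, 1.01))
```

The same filter with the triangular range (1.030-1.062 in the description's example) would reject this spot.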
Further, step 3 specifically comprises:
Step 31: group the identification points according to their shapes, wherein the identification points in each group have the same shape;
Step 32: obtain the triangle formed by each group of identification points, wherein the triangle is formed by a first identification point, a second identification point and a third identification point, and the first, second and third identification points are the identification points arranged in ascending order of horizontal coordinate.
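Steps 31-32 can be sketched as follows, assuming each extracted identification point arrives as a (shape, x, y) tuple; the tuple layout and function name are illustrative:

```python
def group_and_order(points):
    # Step 31: group identification points by shape (points in one group
    # share one shape). Step 32: order each group by ascending horizontal
    # coordinate so the points can be labelled first, second and third.
    groups = {}
    for shape, x, y in points:
        groups.setdefault(shape, []).append((x, y))
    # sorted() on (x, y) tuples orders by x first, as step 32 requires
    return {shape: sorted(pts) for shape, pts in groups.items()}

detected = [("triangle", 5.0, 2.0), ("circle", 1.0, 1.0),
            ("triangle", 2.0, 3.0), ("circle", 4.0, 0.5),
            ("triangle", 9.0, 2.5), ("circle", 7.0, 1.5)]
grouped = group_and_order(detected)
print(grouped["triangle"])  # -> [(2.0, 3.0), (5.0, 2.0), (9.0, 2.5)]
```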
Further, step 4 specifically comprises:
Step 411: obtain the average horizontal coordinate of the geometric figure of the current frame image as a first horizontal coordinate, and obtain the average horizontal coordinate of the corresponding geometric figure of the previous frame image as a second horizontal coordinate;
Step 412: determine the movement of the marker along the horizontal axis by comparing the first horizontal coordinate and the second horizontal coordinate;
wherein the average horizontal coordinate of the geometric figure is the mean of the horizontal coordinates of the first, second and third identification points. The movement of the marker along the horizontal axis can be determined according to the following formula:
Movement_horizontal = (q1x + q2x + q3x)/3 - (p1x + p2x + p3x)/3;
wherein q1x, q2x and q3x are the horizontal coordinates of the first, second and third identification points of the current frame image, and p1x, p2x and p3x are the horizontal coordinates of the first, second and third identification points of the previous frame image;
wherein, when Movement_horizontal is positive, the marker has moved rightward along the horizontal axis, and when Movement_horizontal is negative, the marker has moved leftward along the horizontal axis.
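The horizontal-movement formula of steps 411-412 can be sketched directly; each frame's triangle is assumed to be given as three (x, y) points:

```python
def movement_horizontal(q, p):
    # Mean x of the current-frame triangle q minus mean x of the
    # previous-frame triangle p; positive means the marker moved right.
    return sum(x for x, _ in q) / 3 - sum(x for x, _ in p) / 3

prev = [(1.0, 0.0), (2.0, 1.0), (3.0, 0.0)]
curr = [(2.5, 0.0), (3.5, 1.0), (4.5, 0.0)]  # same shape, shifted right
print(movement_horizontal(curr, prev))  # -> 1.5 (moved right)
```

The vertical-axis case is identical with the y coordinates in place of x.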
Alternatively, step 4 specifically comprises:
Step 421: obtain the average vertical coordinate of the geometric figure of the current frame image as a first average vertical coordinate, and obtain the average vertical coordinate of the corresponding geometric figure of the previous frame image as a second average vertical coordinate;
Step 422: determine the movement of the marker along the vertical axis by comparing the first average vertical coordinate and the second average vertical coordinate;
wherein the average vertical coordinate of the geometric figure is the mean of the vertical coordinates of the first, second and third identification points. The movement of the marker along the vertical axis can be determined according to the following formula:
Movement_vertical = (q1y + q2y + q3y)/3 - (p1y + p2y + p3y)/3;
wherein q1y, q2y and q3y are the vertical coordinates of the first, second and third identification points of the current frame image, and p1y, p2y and p3y are the vertical coordinates of the first, second and third identification points of the previous frame image;
wherein, when Movement_vertical is positive, the marker has moved upward along the vertical axis, and when Movement_vertical is negative, the marker has moved downward along the vertical axis.
Alternatively, step 4 specifically comprises:
Step 431: obtain the area of the geometric figure of the current frame image as a first area, and obtain the area of the corresponding geometric figure of the previous frame image as a second area;
Step 432: determine the movement of the marker along the zoom axis by comparing the first area and the second area.
The movement of the marker along the zoom axis can be determined according to the following formula:
Movement_zoom = qarea - parea;
wherein qarea is the area of the geometric figure of the current frame image, and parea is the area of the geometric figure of the previous frame image;
wherein, when Movement_zoom is positive, the marker has moved along the zoom axis in the zoom-in direction, and when Movement_zoom is negative, the marker has moved along the zoom axis in the zoom-out direction.
In addition, the area marea of the geometric figure can be calculated according to the following formulas:
ms = 0.5*(ma + mb + mc);
marea = sqrt(ms*(ms - ma)*(ms - mb)*(ms - mc));
wherein m is the geometric figure q of the current frame image or the geometric figure p of the previous frame image; ma, mb and mc are the three sides of the geometric figure, with ma connecting the first and second identification points, mb connecting the second and third identification points, and mc connecting the third and first identification points; m1x, m2x and m3x are the horizontal coordinates of the first, second and third identification points of the geometric figure, and m1y, m2y and m3y are the vertical coordinates of the first, second and third identification points of the geometric figure, from which the side lengths are computed.
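For steps 431-432, the triangle area can be computed with Heron's formula. In this sketch the side lengths are taken as Euclidean distances between the ordered identification points, an assumption consistent with the angle formulas given later (the side-length expressions appear only as images in the original):

```python
import math

def triangle_area(p1, p2, p3):
    # Heron's formula on the identification-point triangle.
    ma = math.dist(p1, p2)   # side between first and second points
    mb = math.dist(p2, p3)   # side between second and third points
    mc = math.dist(p3, p1)   # side between third and first points
    ms = 0.5 * (ma + mb + mc)   # semi-perimeter
    return math.sqrt(ms * (ms - ma) * (ms - mb) * (ms - mc))

def movement_zoom(q, p):
    # Positive: current-frame triangle q is larger (marker moved toward
    # the camera); negative: smaller (marker moved away).
    return triangle_area(*q) - triangle_area(*p)

print(triangle_area((0, 0), (3, 0), (0, 4)))  # -> 6.0 (3-4-5 right triangle)
```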
Alternatively, step 4 specifically comprises:
Step 441: obtain the angle of the triangle of the current frame image that has the second identification point as its vertex as a first angle, and obtain the angle of the triangle of the previous frame image that has the second identification point as its vertex as a second angle;
Step 442: determine the rotation of the marker about the horizontal axis by comparing the first angle and the second angle.
The rotation of the marker about the horizontal axis can be determined according to the following formula:
Movement_pitch = q2angle - p2angle;
wherein q2angle is the angle of the triangle of the current frame image with the second identification point as vertex, and p2angle is the angle of the triangle of the previous frame image with the second identification point as vertex;
wherein, when Movement_pitch is positive, the marker has rotated rightward about the horizontal axis, and when Movement_pitch is negative, the marker has rotated leftward about the horizontal axis.
In addition, the angle m2angle of the triangle with the second identification point as vertex can be calculated according to the following formula:
m2angle = arccos((ma² + mb² - mc²)/(2*ma*mb));
wherein m is the geometric figure q of the current frame image or the geometric figure p of the previous frame image, ma, mb and mc are the three sides of the geometric figure, m1x, m2x and m3x are the horizontal coordinates of the first, second and third identification points of the geometric figure, and m1y, m2y and m3y are the vertical coordinates of the first, second and third identification points of the geometric figure.
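The law-of-cosines angle used for pitch (and, with the other vertices, for yaw) can be sketched as below. vertex_angle is a hypothetical helper that recomputes the side lengths from the point coordinates rather than taking ma, mb and mc as inputs:

```python
import math

def vertex_angle(p1, p2, p3):
    # Interior angle at p2: arccos((a^2 + b^2 - c^2) / (2*a*b)), where a
    # and b are the sides meeting at p2 and c is the opposite side.
    a = math.dist(p2, p1)
    b = math.dist(p2, p3)
    c = math.dist(p1, p3)
    return math.acos((a * a + b * b - c * c) / (2 * a * b))

def movement_pitch(q, p):
    # Change of the angle at the second identification point between the
    # current frame q and the previous frame p (three (x, y) points each).
    return vertex_angle(*q) - vertex_angle(*p)

print(vertex_angle((1, 0), (0, 0), (0, 1)))  # -> about 1.5708 (pi/2)
```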
Alternatively, step 4 specifically comprises:
Step 451: obtain the angle of the triangle of the current frame image that has the first identification point or the third identification point as its vertex as a first angle, and obtain the angle of the triangle of the previous frame image that has the first identification point or the third identification point as its vertex as a second angle;
Step 452: determine the rotation of the marker about the vertical axis by comparing the first angle and the second angle.
Specifically, taking the angle with the first identification point as vertex as an example, the rotation of the marker about the vertical axis can be determined according to the following formula:
Movement_yaw = q1angle - p1angle;
wherein q1angle is the angle of the triangle of the current frame image with the first identification point as vertex, and p1angle is the angle of the triangle of the previous frame image with the first identification point as vertex;
wherein, when Movement_yaw is positive, the marker has rotated rightward about the vertical axis, and when Movement_yaw is negative, the marker has rotated leftward about the vertical axis.
Specifically, taking the angle with the third identification point as vertex as an example, the rotation of the marker about the vertical axis can be determined according to the following formula:
Movement_yaw = q3angle - p3angle;
wherein q3angle is the angle of the triangle of the current frame image with the third identification point as vertex, and p3angle is the angle of the triangle of the previous frame image with the third identification point as vertex;
wherein, when Movement_yaw is positive, the marker has rotated leftward about the vertical axis, and when Movement_yaw is negative, the marker has rotated rightward about the vertical axis.
In addition, the angle m1angle of the triangle with the first identification point as vertex and the angle m3angle of the triangle with the third identification point as vertex can be calculated according to the following formulas:
m1angle = arccos((ma² + mc² - mb²)/(2*ma*mc));
m3angle = arccos((mc² + mb² - ma²)/(2*mb*mc));
wherein m is the geometric figure q of the current frame image or the geometric figure p of the previous frame image, ma, mb and mc are the three sides of the geometric figure, m1x, m2x and m3x are the horizontal coordinates of the first, second and third identification points of the geometric figure, and m1y, m2y and m3y are the vertical coordinates of the first, second and third identification points of the geometric figure.
Alternatively, step 4 comprises:
Step 461: obtain the difference between the vertical coordinates of the first identification point and the third identification point of the current frame image as a first difference, and obtain the difference between the vertical coordinates of the corresponding first identification point and third identification point of the previous frame image as a second difference;
Step 462: determine the rotation of the marker about the zoom axis by comparing the first difference and the second difference.
The rotation of the marker about the zoom axis can be determined according to the following formula:
Movement_roll = (q1y - q3y) - (p1y - p3y);
wherein q1y and q3y are the vertical coordinates of the first and third identification points of the current frame image, and p1y and p3y are the vertical coordinates of the first and third identification points of the previous frame image;
wherein, when Movement_roll is positive, the marker has rotated rightward about the zoom axis, and when Movement_roll is negative, the marker has rotated leftward about the zoom axis.
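Steps 461-462 reduce to comparing the vertical gap between the first and third identification points across frames; a minimal sketch, assuming each triangle is already ordered by ascending horizontal coordinate:

```python
def movement_roll(q, p):
    # (q1y - q3y) - (p1y - p3y): positive means the marker rolled to the
    # right about the zoom axis in the patent's sign convention.
    return (q[0][1] - q[2][1]) - (p[0][1] - p[2][1])

prev = [(0.0, 1.0), (1.0, 2.0), (2.0, 1.0)]  # level triangle
curr = [(0.0, 1.5), (1.0, 2.0), (2.0, 0.5)]  # first point up, third down
print(movement_roll(curr, prev))  # -> 1.0 (rolled right)
```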
The beneficial effects of the invention are as follows. Unlike the prior art, the invention obtains images of a plurality of markers, extracts from the images the differently shaped identification points arranged on the markers, groups the identification points according to their shapes to obtain the geometric figure formed by each group, obtains the movement state of each marker by comparing the geometric figure formed in the current frame image with the corresponding geometric figure of the previous frame image, and finally outputs a corresponding control command to the man-machine interaction system according to the movement state. Compared with the prior art, the invention controls the man-machine interaction system wirelessly and therefore does not interfere with the user experience. Further, the invention controls the man-machine interaction system by processing the images of the markers with simple mathematical algorithms, and is thus easy to implement and to popularize. In addition, the invention can track a plurality of markers simultaneously, and can therefore control multiple targets with six degrees of freedom each, or a single target with more than six degrees of freedom, in the virtual reality environment.
Description of the drawings
Fig. 1 is an application scenario diagram of the control method of the man-machine interaction system provided by an embodiment of the present invention;
Fig. 2 is a flow chart of the control method of the man-machine interaction system provided by an embodiment of the present invention;
Fig. 3 is a structural diagram of an image of markers in an embodiment of the present invention;
Figs. 4-15 are movement diagrams of the geometric figure of the current frame image and the corresponding geometric figure of the previous frame image in embodiments of the present invention.
Specific embodiment
In order that those skilled in the art may better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present invention without creative work shall fall within the scope of protection of the present invention.
Fig. 1 is an application scenario diagram of a control method of a man-machine interaction system provided by an embodiment of the present invention. The scenario may include: a computer 105, a display 110, a keyboard 120, a camera 115, a marker 130 and a marker 140. The control method of the man-machine interaction system of the present invention is deployed on the computer 105.
In the present embodiment, the computer 105 runs software of the man-machine interaction system based on a virtual reality environment, and controls the virtual reality environment according to the images, including the marker 130 and the marker 140, obtained by the camera 115. The display 110 is connected to the computer 105 and displays the virtual reality environment. The keyboard 120 is connected to the computer and receives commands input by the user to set parameters of the virtual reality environment.
In the present embodiment, the marker 130 is a pair of glasses that the user can wear, and three identification points 135 are provided on the marker 130, wherein one identification point 135 is arranged at the middle of the top of the glasses and the other two identification points 135 are arranged at the edges of the two sides of the glasses, so that the three identification points 135 form a triangular geometric figure. The marker 140 is a circular handheld device that the user can hold in the hand; three identification points 145 are provided on the marker 140, and the three identification points 145 form a triangular geometric figure. When the user holds the marker 140, the camera 115 can capture an image of the marker including the three identification points 145.
In the present embodiment, the identification points 135 arranged on the marker 130 and the identification points 145 arranged on the marker 140 have different shapes.
After the camera 115 captures an image including the marker 130 and the marker 140 and passes it to the computer 105, the computer 105 extracts the differently shaped identification points from the image, that is, extracts the identification points 135 and the identification points 145; groups the identification points according to their shapes to obtain the geometric figures formed by the groups, that is, obtains the triangle formed by the three identification points 135 and the triangle formed by the three identification points 145; obtains the movement state of each marker by comparing the geometric figure of the current frame image with the corresponding geometric figure of the previous frame image, that is, compares the triangles formed by the identification points 135 in the current frame and the previous frame image to obtain the movement state of the marker 130, and compares the triangles formed by the identification points 145 in the current frame and the previous frame image to obtain the movement state of the marker 140; and outputs a corresponding control command to the man-machine interaction system according to the movement state, so as to realize the user's control of the virtual reality environment.
Those skilled in the art will understand that the computer 105 in Fig. 1 is a desktop computer; the computer 105 may also be any other computing system capable of running software, and the present invention is not limited in this respect. The display 110 in Fig. 1 is a liquid crystal display; the display 110 may also be another display device, such as a flexible display or a television set, and the present invention is not limited in this respect. Fig. 1 is illustrated with the marker 130 and the marker 140 as an example; those skilled in the art will understand that the present invention may also include a number of markers different from two, and the present invention is not limited to two markers.
Fig. 2 is a flow chart of the control method of the man-machine interaction system provided by an embodiment of the present invention. It should be noted that, provided substantially the same result is obtained, the method of the present invention is not limited to the order of the flow shown in Fig. 2. As shown in Fig. 2, the method includes the following steps:
Step S101: obtain images of a plurality of markers, wherein a plurality of identification points are arranged on each marker and the identification points arranged on different markers have different shapes;
In step S101, the markers are arranged on the tracked targets, and the identification points are arranged on the markers. The identification points may be infrared light emitting diodes (LEDs), laser emitters, or the like.
Taking two markers, each provided with three identification points, as an example, as shown in Fig. 3, the obtained image includes a marker 1950 and a marker 1960. Specifically, the marker 1950 includes a first identification point 1910, a second identification point 1915 and a third identification point 1920; the first identification point 1910, the second identification point 1915 and the third identification point 1920 are identification points arranged in ascending order of horizontal coordinate, and are triangular in shape. The marker 1960 includes a first identification point 1925, a second identification point 1930 and a third identification point 1935; the first identification point 1925, the second identification point 1930 and the third identification point 1935 are identification points arranged in ascending order of horizontal coordinate, and are circular in shape.
Step S102: extract the identification points of different shapes from the image;
In step S102, first, light spots are searched for in the image, wherein a light spot is a set of adjacent pixels with the same or substantially the same color, and identification points of different shapes produce light spots of different shapes.
Then, the optical density of each found light spot is calculated, wherein the optical density measures the compactness of the pixels in the light spot, and light spots of different shapes have different optical densities. Specifically, the step of calculating the optical density of a found light spot includes: obtaining the area of each light spot, wherein the area of a light spot is calculated from the total number of pixels in the light spot; obtaining the perimeter of each light spot, wherein the perimeter of a light spot is calculated from the number of pixels on the periphery of the light spot; and obtaining the optical density from the area and the corresponding perimeter of each light spot.
Preferably, the optical density is calculated from the area and the perimeter of the light spot, wherein M is the optical density of the light spot, S is the area of the light spot, and L is the perimeter of the light spot.
Finally, the light spots whose optical density falls within a predetermined range are selected as the identification points. Continuing the foregoing example, when the identification points on the marker 1950 are triangular, the predetermined optical density of a triangular identification point is π/3. The calculated optical density of each light spot is compared with the predetermined optical density, and when the optical density of a light spot falls within the predetermined range between a first threshold and a second threshold centered on the predetermined optical density, that light spot is selected as a triangular identification point. For example, when the first threshold is 1.030 and the second threshold is 1.062, the light spots whose optical density is greater than 1.030 and less than 1.062 are selected as triangular identification points.
When the identification points on the marker 1960 are circular, the predetermined optical density of a circular identification point is 1. The calculated optical density of each light spot is compared with the predetermined optical density, and when the optical density of a light spot falls within the predetermined range between a third threshold and a fourth threshold centered on the predetermined optical density, that light spot is selected as a circular identification point. For example, when the third threshold is 0.99 and the fourth threshold is 1.01, the light spots whose optical density is greater than 0.99 and less than 1.01 are selected as circular identification points.
In addition, after the light spots whose optical density falls within the predetermined range are selected as the identification points, the centroid of each such light spot is obtained. The centroid is the center of mass of the light spot and is located at its center; the two-dimensional coordinates of the centroid are the two-dimensional coordinates of the identification point referred to hereinafter.
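The spot measurements described above (area from the pixel count, perimeter from the peripheral pixels, centroid as the spot's center of mass) can be sketched on a pixel-set representation; the 4-neighbour rule for deciding which pixels are peripheral is an assumption, since the patent only says the perimeter is computed from the number of peripheral pixels:

```python
def spot_properties(pixels):
    # pixels: set of (x, y) integer coordinates belonging to one light spot.
    area = len(pixels)  # area = total number of pixels in the spot
    # perimeter = number of pixels with at least one 4-neighbour outside
    perimeter = sum(
        1 for (x, y) in pixels
        if any((x + dx, y + dy) not in pixels
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))))
    # centroid = mean pixel coordinate (the spot's center of mass)
    cx = sum(x for x, _ in pixels) / area
    cy = sum(y for _, y in pixels) / area
    return area, perimeter, (cx, cy)

square = {(x, y) for x in range(3) for y in range(3)}  # a 3x3 square spot
print(spot_properties(square))  # -> (9, 8, (1.0, 1.0))
```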
Those skilled in the art will understand that the present invention may also extract the identification points by other methods, and is not limited to the above method of extracting identification points according to the optical density of light spots.
Step S103: group the identification points according to their shapes to obtain the geometric figure formed by each group of identification points.
In step S103, the identification points in each group have the same shape, and the geometric figure formed by a group of identification points is determined by the number and positions of the identification points. Continuing the foregoing example, the triangular first identification point 1910, second identification point 1915 and third identification point 1920 are grouped into one group, and the triangle formed by the first identification point 1910, the second identification point 1915 and the third identification point 1920 is obtained. The circular first identification point 1925, second identification point 1930 and third identification point 1935 are grouped into another group, and the triangle formed by the first identification point 1925, the second identification point 1930 and the third identification point 1935 is obtained. Those skilled in the art will understand that the triangular geometric figure is only an example, and the present invention is not limited thereto.
Step S104: obtain the movement state of each marker by comparing the geometric figure of the current frame image with the corresponding geometric figure of the previous frame image;
In step S104, the movement state of the marker includes six different movement states, namely movement of the marker along the horizontal axis, movement along the vertical axis, movement along the zoom axis, rotation about the horizontal axis, rotation about the vertical axis and rotation about the zoom axis. The horizontal axis, the vertical axis and the zoom axis are arranged mutually perpendicular and form a right-handed three-dimensional coordinate system.
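Once the six movement values are available, the output step of the flow maps each signed value to a command. In this sketch the command names and the dead-zone threshold are illustrative, since the patent specifies only the sign conventions:

```python
POSITIVE = {"horizontal": "move_right", "vertical": "move_up", "zoom": "zoom_in",
            "pitch": "pitch_right", "yaw": "yaw_right", "roll": "roll_right"}
NEGATIVE = {"horizontal": "move_left", "vertical": "move_down", "zoom": "zoom_out",
            "pitch": "pitch_left", "yaw": "yaw_left", "roll": "roll_left"}

def command(axis, value, dead_zone=0.5):
    # Ignore jitter below the dead zone, otherwise dispatch by sign.
    if abs(value) < dead_zone:
        return None
    return POSITIVE[axis] if value > 0 else NEGATIVE[axis]

print(command("horizontal", 1.5))  # -> move_right
print(command("zoom", -2.0))       # -> zoom_out
print(command("pitch", 0.1))       # -> None (within the dead zone)
```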
Referring to Figs. 4-15, which are movement diagrams of the geometric figure of the current frame image and the corresponding geometric figure of the previous frame image, the solid line represents the geometric figure of the current frame image, the dotted line represents the corresponding geometric figure of the previous frame image, and a triangular geometric figure is taken as an example.
As shown in Fig. 4 and Fig. 5, when the movement state of the marker is movement along the horizontal axis, the sizes of the geometric figure of the current frame image and the corresponding geometric figure of the previous frame image do not change, and only the horizontal coordinate of each identification point in the geometric figure changes. The movement can therefore be determined by obtaining the average horizontal coordinate of the geometric figure of the current frame image as a first horizontal coordinate, obtaining the average horizontal coordinate of the corresponding geometric figure of the previous frame image as a second horizontal coordinate, and comparing the first horizontal coordinate and the second horizontal coordinate, wherein the average horizontal coordinate of the geometric figure is the mean of the horizontal coordinates of the first, second and third identification points.
Specifically, the movement of the marker along the horizontal axis can be determined according to the following formula:
Movement_horizontal = (q1x + q2x + q3x)/3 - (p1x + p2x + p3x)/3;
where q1x, q2x, and q3x are the horizontal coordinates of the first, second, and third identification points of the current frame image, and p1x, p2x, and p3x are the horizontal coordinates of the first, second, and third identification points of the previous frame image.
When Movement_horizontal is positive, the marker has moved right along the horizontal axis; when Movement_horizontal is negative, the marker has moved left along the horizontal axis.
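The centroid comparison described above, and its vertical-axis analogue described next, can be sketched in a few lines of Python. The function names and the representation of identification points as (x, y) tuples are illustrative assumptions, not part of the patent:

```python
def movement_horizontal(q, p):
    """Mean horizontal coordinate of current triangle q minus that of
    previous triangle p; positive -> the marker moved right."""
    return sum(x for x, _ in q) / 3.0 - sum(x for x, _ in p) / 3.0

def movement_vertical(q, p):
    """Analogous test on the mean vertical coordinate; positive -> up."""
    return sum(y for _, y in q) / 3.0 - sum(y for _, y in p) / 3.0

# q and p are each a triangle's three identification points as (x, y)
# tuples, listed in ascending order of horizontal coordinate.
```

Because only the centroid is compared, the test is insensitive to rotation of the triangle about its own center, which is handled by the separate rotation tests below.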
As shown in Fig. 6 and Fig. 7, when the motion state of the marker is movement along the vertical axis, the sizes of the geometry of the current frame image and of the corresponding geometry of the previous frame image do not change; only the vertical coordinates of the identification points in the geometry change. The movement of the marker along the vertical axis can therefore be determined by obtaining the average vertical coordinate of the geometry of the current frame image as a first average vertical coordinate, obtaining the average vertical coordinate of the corresponding geometry of the previous frame image as a second average vertical coordinate, and comparing the first average vertical coordinate with the second average vertical coordinate. Here, the average vertical coordinate of a geometry is the mean of the vertical coordinates of the first, second, and third identification points.
Specifically, the movement of the marker along the vertical axis can be determined according to the following formula:
Movement_vertical = (q1y + q2y + q3y)/3 - (p1y + p2y + p3y)/3;
where q1y, q2y, and q3y are the vertical coordinates of the first, second, and third identification points of the current frame image, and p1y, p2y, and p3y are the vertical coordinates of the first, second, and third identification points of the previous frame image.
When Movement_vertical is positive, the marker has moved up along the vertical axis; when Movement_vertical is negative, the marker has moved down along the vertical axis.
As shown in Fig. 8 and Fig. 9, when the motion state of the marker is movement along the zoom axis, the sizes of the geometry of the current frame image and of the corresponding geometry of the previous frame image change, while the angles with the identification points as vertices do not change in either geometry. The movement of the marker along the zoom axis can therefore be determined by obtaining the area of the geometry of the current frame image as a first area, obtaining the area of the corresponding geometry of the previous frame image as a second area, and comparing the first area with the second area.
Specifically, the movement of the marker along the zoom axis can be determined according to the following formula:
Movement_zoom = qarea - parea;
where qarea is the area of the geometry of the current frame image and parea is the area of the corresponding geometry of the previous frame image.
When Movement_zoom is positive, the marker has moved along the zoom axis in the direction that magnifies the geometry; when Movement_zoom is negative, the marker has moved in the direction that shrinks it.
In addition, the area marea of a geometry can be calculated according to the following formulas:
ma = sqrt((m1x - m2x)^2 + (m1y - m2y)^2);
mb = sqrt((m3x - m2x)^2 + (m3y - m2y)^2);
mc = sqrt((m1x - m3x)^2 + (m1y - m3y)^2);
ms = 0.5*(ma + mb + mc);
marea = sqrt(ms*(ms - ma)*(ms - mb)*(ms - mc));
where the last line is Heron's formula; m denotes the geometry q of the current frame image or the geometry p of the previous frame image; ma, mb, and mc are the three sides of the geometry; m1x, m2x, and m3x are the horizontal coordinates of the first, second, and third identification points of the geometry; and m1y, m2y, and m3y are their vertical coordinates.
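A minimal sketch of the zoom test; `triangle_area` implements Heron's formula over the three side lengths, and the function names and (x, y)-tuple point representation are assumptions for illustration:

```python
import math

def triangle_area(m):
    """Area of triangle m (three (x, y) points) via Heron's formula."""
    (m1x, m1y), (m2x, m2y), (m3x, m3y) = m
    ma = math.hypot(m1x - m2x, m1y - m2y)  # side |P1P2|
    mb = math.hypot(m3x - m2x, m3y - m2y)  # side |P2P3|
    mc = math.hypot(m1x - m3x, m1y - m3y)  # side |P1P3|
    ms = 0.5 * (ma + mb + mc)              # semi-perimeter
    return math.sqrt(ms * (ms - ma) * (ms - mb) * (ms - mc))

def movement_zoom(q, p):
    """Positive -> the projected triangle grew between frames, i.e. the
    marker moved along the zoom axis in the magnifying direction."""
    return triangle_area(q) - triangle_area(p)
```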
As shown in Fig. 10 and Fig. 11, when the motion state of the marker is rotation about the horizontal axis, the sizes of the geometries of the current frame image and the previous frame image change. The rotation can be determined by obtaining the angle with the second identification point as vertex in the triangle of the current frame image as a first angle, obtaining the angle with the second identification point as vertex in the triangle of the previous frame image as a second angle, and comparing the first angle with the second angle.
Specifically, the rotation of the marker about the horizontal axis can be determined according to the following formula:
Movement_pitch = q2angle - p2angle;
where q2angle is the angle of the triangle of the current frame image with the second identification point as vertex, and p2angle is the angle of the triangle of the previous frame image with the second identification point as vertex.
When Movement_pitch is positive, the marker has rotated right about the horizontal axis; when Movement_pitch is negative, the marker has rotated left about the horizontal axis.
In addition, the angle m2angle of the triangle with the second identification point as vertex can be calculated from the law of cosines:
m2angle = arccos((ma^2 + mb^2 - mc^2) / (2*ma*mb));
where m denotes the geometry q of the current frame image or the geometry p of the previous frame image, and ma, mb, and mc are the three sides of the geometry, computed from the horizontal coordinates m1x, m2x, m3x and the vertical coordinates m1y, m2y, m3y of the first, second, and third identification points as in the area calculation above.
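The pitch test can be sketched as follows; the angle at the second identification point is the one between sides |P1P2| and |P2P3|, opposite side |P1P3|, and the names are illustrative assumptions:

```python
import math

def angle_at_second_point(m):
    """Angle (radians) at the second identification point m[1], via the
    law of cosines: cos A = (ma^2 + mb^2 - mc^2) / (2*ma*mb)."""
    (m1x, m1y), (m2x, m2y), (m3x, m3y) = m
    ma = math.hypot(m1x - m2x, m1y - m2y)  # |P1P2|
    mb = math.hypot(m3x - m2x, m3y - m2y)  # |P2P3|
    mc = math.hypot(m1x - m3x, m1y - m3y)  # |P1P3|, opposite the vertex
    return math.acos((ma**2 + mb**2 - mc**2) / (2 * ma * mb))

def movement_pitch(q, p):
    """Positive -> rotation right about the horizontal axis, following
    the sign convention of the text."""
    return angle_at_second_point(q) - angle_at_second_point(p)
```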
As shown in Fig. 12 and Fig. 13, when the motion state of the marker is rotation about the zoom axis, the sizes of the geometries of the current frame image and the previous frame image do not change, but the vertical and horizontal coordinates of the identification points change simultaneously. The rotation can be determined by obtaining the difference between the vertical coordinates of the first and third identification points of the current frame image as a first difference, obtaining the difference between the vertical coordinates of the corresponding first and third identification points of the previous frame image as a second difference, and comparing the first difference with the second difference.
Specifically, the rotation of the marker about the zoom axis can be determined according to the following formula:
Movement_roll = (q1y - q3y) - (p1y - p3y);
where q1y and q3y are the vertical coordinates of the first and third identification points of the current frame image, and p1y and p3y are the vertical coordinates of the first and third identification points of the previous frame image.
When Movement_roll is positive, the marker has rotated right about the zoom axis; when Movement_roll is negative, the marker has rotated left about the zoom axis.
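The roll test reduces to comparing, across frames, the vertical gap between the first and third identification points; a minimal sketch under the same assumed (x, y)-tuple representation:

```python
def movement_roll(q, p):
    """(q1y - q3y) - (p1y - p3y): positive -> the marker rotated right
    about the zoom axis, per the text's sign convention."""
    return (q[0][1] - q[2][1]) - (p[0][1] - p[2][1])
```

Note that this works because roll leaves the triangle's size unchanged, so only the relative heights of the outer two identification points carry the rotation information.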
As shown in Fig. 14 and Fig. 15, when the motion state of the marker is rotation about the vertical axis, the sizes of the geometries of the current frame image and the previous frame image change. The rotation can be determined by obtaining the angle with the first or the third identification point as vertex in the triangle of the current frame image as a first angle, obtaining the corresponding angle in the triangle of the previous frame image as a second angle, and comparing the first angle with the second angle.
Specifically, taking the angle with the first identification point as vertex as an example, the rotation of the marker about the vertical axis can be determined according to the following formula:
Movement_yaw = q1angle - p1angle;
where q1angle is the angle of the triangle of the current frame image with the first identification point as vertex, and p1angle is the corresponding angle for the previous frame image.
When Movement_yaw is positive, the marker has rotated right about the vertical axis; when Movement_yaw is negative, the marker has rotated left about the vertical axis.
Taking instead the angle with the third identification point as vertex, the rotation of the marker about the vertical axis can be determined according to the following formula:
Movement_yaw = q3angle - p3angle;
where q3angle is the angle of the triangle of the current frame image with the third identification point as vertex, and p3angle is the corresponding angle for the previous frame image.
In this case, when Movement_yaw is positive, the marker has rotated left about the vertical axis; when Movement_yaw is negative, the marker has rotated right about the vertical axis.
In addition, the angle m1angle of the triangle with the first identification point as vertex and the angle m3angle with the third identification point as vertex can be calculated from the law of cosines:
m1angle = arccos((ma^2 + mc^2 - mb^2) / (2*ma*mc));
m3angle = arccos((mc^2 + mb^2 - ma^2) / (2*mb*mc));
where m denotes the geometry q of the current frame image or the geometry p of the previous frame image, and ma, mb, and mc are the three sides of the geometry, computed from the horizontal coordinates m1x, m2x, m3x and the vertical coordinates m1y, m2y, m3y of the first, second, and third identification points as in the area calculation above.
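The yaw test uses the vertex angles at the first or third identification point; a sketch with assumed names, where the sides ma, mb, mc follow the same definitions as in the area calculation:

```python
import math

def _sides(m):
    """Side lengths of triangle m of three (x, y) points."""
    (m1x, m1y), (m2x, m2y), (m3x, m3y) = m
    ma = math.hypot(m1x - m2x, m1y - m2y)  # |P1P2|
    mb = math.hypot(m3x - m2x, m3y - m2y)  # |P2P3|
    mc = math.hypot(m1x - m3x, m1y - m3y)  # |P1P3|
    return ma, mb, mc

def angle_at_first_point(m):
    ma, mb, mc = _sides(m)
    return math.acos((ma**2 + mc**2 - mb**2) / (2 * ma * mc))

def angle_at_third_point(m):
    ma, mb, mc = _sides(m)
    return math.acos((mc**2 + mb**2 - ma**2) / (2 * mb * mc))

def movement_yaw(q, p):
    """Using the first-point angle: positive -> rotated right about the
    vertical axis (the third-point angle carries the opposite sign)."""
    return angle_at_first_point(q) - angle_at_first_point(p)
```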
Preferably, among the six motion states above, for the three states in which the size of the geometry changes (rotation about the horizontal axis, rotation about the vertical axis, and movement along the zoom axis), the detection steps can be simplified: after judging that the size of the geometry has changed, the angles with the first, second, or third identification point as vertex are calculated, and the movement of the marker along the zoom axis is determined after compensating for these angles.
Step S105: outputting a corresponding control command to the man-machine interaction system according to the motion state.
In step S105, according to the correspondence between motion states and control commands, the detected motion state of the marker is converted into the corresponding control command, which is output to the man-machine interaction system.
In the present embodiment, continuing the foregoing example, when the markers include marker 1950 and marker 1960, the motion states of marker 1950 and marker 1960 can be used simultaneously to control the same target in the man-machine interaction system, thereby controlling the target in more than six degrees of freedom, for example seven or eight; alternatively, the motion states of marker 1950 and marker 1960 can control different targets in the man-machine interaction system, so that, for example, the display angle of the virtual reality environment can be changed while a target is aimed at and controlled in six degrees of freedom.
Through the above embodiments, the control method of the man-machine interaction system of the present invention obtains images of multiple markers, extracts from the images the identification points of different shapes arranged on the markers, groups the identification points according to their shapes to obtain the geometry formed by each group of identification points, compares the geometry formed by the identification points of the current frame image with the corresponding geometry of the previous frame image to obtain the motion state of the marker, and finally outputs a corresponding control command to the man-machine interaction system according to the motion state, thereby realizing control of the man-machine interaction system. Compared with the prior art, the present invention realizes control of the man-machine interaction system wirelessly, without affecting the user's experience. Further, the present invention processes the images of the markers with simple mathematical algorithms, so it is simple to implement and easy to popularize. In addition, the present invention can track multiple markers simultaneously, so that multiple targets can be controlled in six degrees of freedom in a virtual reality environment, or a single target can be controlled in more than six degrees of freedom.
The foregoing describes only embodiments of the present invention and does not thereby limit the scope of the claims of the present invention; any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, whether used directly or indirectly in other related technical fields, is likewise included within the scope of protection of the present invention.

Claims (1)

1. A control method of a man-machine interaction system, characterized in that the method comprises:
Step 1: obtaining images of multiple markers, wherein multiple identification points are arranged on each marker, and the identification points arranged on different markers have different shapes;
Step 2: extracting the identification points of different shapes from the images of step 1;
Step 3: grouping the identification points according to their shapes as described in step 2, to obtain the geometry formed by the identification points of each group;
Step 4: obtaining the motion state of the marker by comparing the geometry of the current frame image with the corresponding geometry of the previous frame image;
Step 5: outputting a corresponding control command to the man-machine interaction system according to the motion state;
Step 2 specifically comprises:
21) searching for light spots in said images;
22) calculating the optical density of each light spot;
23) selecting the light spots whose optical density lies within a predetermined range as the identification points;
wherein the optical density in step 22 is calculated according to the following formula:
M = 4π*S / L^2;
where M is the optical density of the light spot, S is the area of the light spot, and L is the perimeter of the light spot;
Step 3 specifically comprises:
31) grouping the identification points according to their shapes, wherein the identification points within a group have the same shape;
32) obtaining the triangle formed by the identification points of each group, wherein the triangle is formed by a first identification point, a second identification point, and a third identification point, which are the identification points arranged in ascending order of horizontal coordinate;
Step 4 specifically comprises:
411) obtaining the average horizontal coordinate of the geometry of the current frame image as a first horizontal coordinate, and obtaining the average horizontal coordinate of the corresponding geometry of the previous frame image as a second horizontal coordinate;
412) determining the movement of the marker along the horizontal axis by comparing said first horizontal coordinate with said second horizontal coordinate;
wherein the average horizontal coordinate of the geometry is the mean of the horizontal coordinates of said first, second, and third identification points; the movement of the marker along the horizontal axis is determined according to the following formula:
Movement_horizontal = (q1x + q2x + q3x)/3 - (p1x + p2x + p3x)/3;
where q1x, q2x, and q3x are the horizontal coordinates of the first, second, and third identification points of the current frame image, and p1x, p2x, and p3x are those of the previous frame image;
when Movement_horizontal is positive, the marker has moved right along the horizontal axis; when negative, it has moved left;
421) obtaining the average vertical coordinate of the geometry of the current frame image as a first average vertical coordinate, and obtaining the average vertical coordinate of the corresponding geometry of the previous frame image as a second average vertical coordinate;
422) determining the movement of the marker along the vertical axis by comparing said first average vertical coordinate with said second average vertical coordinate;
wherein the average vertical coordinate of the geometry is the mean of the vertical coordinates of said first, second, and third identification points; the movement of the marker along the vertical axis is determined according to the following formula:
Movement_vertical = (q1y + q2y + q3y)/3 - (p1y + p2y + p3y)/3;
where q1y, q2y, and q3y are the vertical coordinates of the first, second, and third identification points of the current frame image, and p1y, p2y, and p3y are those of the previous frame image;
when Movement_vertical is positive, the marker has moved up along the vertical axis; when negative, it has moved down;
431) obtaining the area of the geometry of the current frame image as a first area, and obtaining the area of the corresponding geometry of the previous frame image as a second area;
432) determining the movement of the marker along the zoom axis by comparing said first area with said second area;
the movement of the marker along the zoom axis is determined according to the following formula:
Movement_zoom = qarea - parea;
where qarea is the area of the geometry of the current frame image and parea is the area of the geometry of the previous frame image;
when Movement_zoom is positive, the marker has moved along the zoom axis in the direction that magnifies the geometry; when negative, in the direction that shrinks it;
in addition, the area marea of the geometry is calculated according to the following formulas:
ma = sqrt((m1x - m2x)^2 + (m1y - m2y)^2);
mb = sqrt((m3x - m2x)^2 + (m3y - m2y)^2);
mc = sqrt((m1x - m3x)^2 + (m1y - m3y)^2);
ms = 0.5*(ma + mb + mc);
marea = sqrt(ms*(ms - ma)*(ms - mb)*(ms - mc));
where m denotes the geometry q of the current frame image or the geometry p of the previous frame image; ma, mb, and mc are the three sides of the geometry; m1x, m2x, and m3x are the horizontal coordinates of the first, second, and third identification points of the geometry; and m1y, m2y, and m3y are their vertical coordinates;
441) obtaining the angle with said second identification point as vertex in the triangle of the current frame image as a first angle, and obtaining the angle with said second identification point as vertex in the triangle of the previous frame image as a second angle;
442) determining the rotation of the marker about the horizontal axis by comparing said first angle with said second angle;
the rotation of the marker about the horizontal axis is determined according to the following formula:
Movement_pitch = q2angle - p2angle;
where q2angle is the angle of the triangle of the current frame image with the second identification point as vertex, and p2angle is the corresponding angle for the previous frame image;
when Movement_pitch is positive, the marker has rotated right about the horizontal axis; when negative, it has rotated left;
in addition, the angle m2angle of the triangle with the second identification point as vertex is calculated according to the following formulas:
ma = sqrt((m1x - m2x)^2 + (m1y - m2y)^2);
mb = sqrt((m3x - m2x)^2 + (m3y - m2y)^2);
mc = sqrt((m1x - m3x)^2 + (m1y - m3y)^2);
m2angle = arccos((ma^2 + mb^2 - mc^2) / (2*ma*mb));
where m denotes the geometry q of the current frame image or the geometry p of the previous frame image; ma, mb, and mc are the three sides of the geometry; m1x, m2x, and m3x are the horizontal coordinates of the first, second, and third identification points of the geometry; and m1y, m2y, and m3y are their vertical coordinates;
451) obtaining the angle with said first identification point or said third identification point as vertex in the triangle of the current frame image as a first angle, and obtaining the angle with said first identification point or said third identification point as vertex in the triangle of the previous frame image as a second angle;
452) determining the rotation of the marker about the vertical axis by comparing said first angle with said second angle;
specifically, taking the angle with the first identification point as vertex as an example, the rotation of the marker about the vertical axis is determined according to the following formula:
Movement_yaw = q1angle - p1angle;
where q1angle is the angle of the triangle of the current frame image with the first identification point as vertex, and p1angle is the corresponding angle for the previous frame image;
when Movement_yaw is positive, the marker has rotated right about the vertical axis; when negative, it has rotated left;
taking instead the angle with the third identification point as vertex, the rotation is determined according to the following formula:
Movement_yaw = q3angle - p3angle;
where q3angle is the angle of the triangle of the current frame image with the third identification point as vertex, and p3angle is the corresponding angle for the previous frame image;
in this case, when Movement_yaw is positive, the marker has rotated left about the vertical axis; when negative, it has rotated right;
in addition, the angle m1angle with the first identification point as vertex and the angle m3angle with the third identification point as vertex are calculated according to the following formulas:
ma = sqrt((m1x - m2x)^2 + (m1y - m2y)^2);
mb = sqrt((m3x - m2x)^2 + (m3y - m2y)^2);
mc = sqrt((m1x - m3x)^2 + (m1y - m3y)^2);
m1angle = arccos((ma^2 + mc^2 - mb^2) / (2*ma*mc));
m3angle = arccos((mc^2 + mb^2 - ma^2) / (2*mb*mc));
where m denotes the geometry q of the current frame image or the geometry p of the previous frame image; ma, mb, and mc are the three sides of the geometry; m1x, m2x, and m3x are the horizontal coordinates of the first, second, and third identification points of the geometry; and m1y, m2y, and m3y are their vertical coordinates;
461) obtaining the difference between the vertical coordinates of the first and third identification points of the current frame image as a first difference, and obtaining the difference between the vertical coordinates of the corresponding first and third identification points of the previous frame image as a second difference;
462) determining the rotation of the marker about the zoom axis by comparing said first difference with said second difference;
the rotation of the marker about the zoom axis is determined according to the following formula:
Movement_roll = (q1y - q3y) - (p1y - p3y);
where q1y and q3y are the vertical coordinates of the first and third identification points of the current frame image, and p1y and p3y are those of the previous frame image;
when Movement_roll is positive, the marker has rotated right about the zoom axis; when negative, it has rotated left.
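The spot-selection criterion of steps 22 and 23 (the "optical density" M = 4π*S/L^2, which equals 1 for a perfect circle and falls toward 0 for elongated shapes) can be sketched as follows; the threshold values `lo` and `hi` are illustrative assumptions, not values from the patent:

```python
import math

def optical_density(area, perimeter):
    """M = 4*pi*S / L**2: 1.0 for a perfect circle, smaller otherwise."""
    return 4.0 * math.pi * area / perimeter ** 2

def select_identification_points(spots, lo=0.8, hi=1.0):
    """Keep the spots whose optical density lies in [lo, hi].
    Each spot is an (area, perimeter) pair; lo/hi are assumed values."""
    return [s for s in spots if lo <= optical_density(*s) <= hi]
```

For a circle of radius r, S = πr² and L = 2πr, so M = 4π·πr²/(2πr)² = 1 independent of r, which is what makes the measure a scale-invariant roundness filter; a square, by contrast, scores π/4 ≈ 0.785.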
CN201410364071.XA 2014-07-28 2014-07-28 Control method for man-machine interaction system Active CN104298345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410364071.XA CN104298345B (en) 2014-07-28 2014-07-28 Control method for man-machine interaction system


Publications (2)

Publication Number Publication Date
CN104298345A CN104298345A (en) 2015-01-21
CN104298345B true CN104298345B (en) 2017-05-17



Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103409A (en) * 2011-01-20 2011-06-22 桂林理工大学 Man-machine interaction method and device based on motion trail identification
CN103186226A (en) * 2011-12-28 2013-07-03 北京德信互动网络技术有限公司 Man-machine interaction system and method
CN103336575A (en) * 2013-06-27 2013-10-02 深圳先进技术研究院 Man-machine interaction intelligent glasses system and interaction method




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant