Disclosure of Invention
Therefore, in order to remedy the technical defects and shortcomings of the prior art, the invention provides an auxiliary selection method for players in an esports competition, which comprises the following steps:
after a player wears VR glasses, causing N target points to appear in a VR scene according to a preset rule so as to guide the player's observation, wherein N is an integer greater than or equal to 1;
and counting the player's response speed in observing the N target points, so as to complete the auxiliary selection of the player.
On the basis of the above embodiment, the N target points appearing in the VR scene according to the preset rule comprises:
the N target points appearing in the VR scene simultaneously;
correspondingly, counting the player's response speed in observing the N target points comprises:
counting the total time taken by the player to finish observing all N target points in sequence.
On the basis of the above embodiment, the N target points appearing in the VR scene according to the preset rule comprises: the N target points appearing in the VR scene in sequence;
correspondingly, counting the player's response speed in observing the N target points comprises:
separately counting the time the player requires to observe each target point.
On the basis of the above embodiment, counting the time the player requires to observe each target point comprises:
after the player wears the VR glasses, recording the moment at which each target point appears in the VR scene as a first moment, and recording the moment at which the player observes that target point as a second moment;
and counting, for each target point, the time interval between the first moment and the second moment.
On the basis of the above embodiment, before the N target points appear in the VR scene in sequence according to the preset rule, the method further comprises:
determining the pupil center of the player;
and determining a matching relationship between the player's pupil center and the player's observation point.
On the basis of the above embodiment, determining the pupil center of the player comprises:
the player wearing the VR glasses, the VR glasses comprising an infrared camera;
the infrared camera photographing the player's eyes to form an eye information image;
and processing the eye information image to obtain the pupil center of the player.
On the basis of the above embodiment, processing the eye information image comprises:
performing graying processing on the eye information image to obtain a grayscale image;
determining the point with the minimum gray value in the grayscale image;
and determining the pupil center of the player according to the point with the minimum gray value.
On the basis of the above embodiment, determining the matching relationship between the pupil center of the player and the player's observation point comprises:
establishing a matching model between the pupil center of the player and the observation point of the player, wherein the input of the matching model is the pupil center of the player and the output is the player's observation point obtained from that pupil center.
On the basis of the above embodiment, establishing the matching model between the pupil center of the player and the observation point of the player comprises:
the player wearing the VR glasses and observing a VR scene, the VR scene comprising a screen on which K observation points with known coordinates appear in sequence;
the player observing the K observation points in sequence, and recording the pupil center as the player observes each observation point;
and obtaining the matching model from the pupil centers and the coordinates of the observation points.
On the basis of the above embodiment, counting the player's response speed in observing the N target points comprises:
determining the position of the player's current observation point from the pupil center of the player and the matching relationship;
if the difference between the position of the player's current observation point and the position of a target point appearing according to the preset rule does not exceed a threshold value, determining that the player has successfully observed the target point;
and recording the interval between the moment the target point appears and the moment the player successfully observes it, so as to complete the statistics of the response speed.
Other aspects and features of the present invention will become apparent from the following detailed description, which proceeds with reference to the accompanying drawings. It is to be understood, however, that the drawings are designed solely for purposes of illustration and not as a definition of the limits of the invention, for which reference should be made to the appended claims. It should be further understood that the drawings are not necessarily drawn to scale and that, unless otherwise indicated, they are merely intended to conceptually illustrate the structures and procedures described herein.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
The invention provides an auxiliary selection method for players of an esports competition, which comprises the following steps:
after a player wears VR glasses, causing N target points to appear in a VR scene according to a preset rule so as to guide the player's observation, wherein N is an integer greater than or equal to 1;
and counting the player's response speed in observing the N target points, so as to complete the auxiliary selection of the player.
Further, on the basis of the above embodiment, the N target points may appear in the VR scene according to the preset rule as follows:
the N target points appear in the VR scene simultaneously;
correspondingly, counting the player's response speed in observing the N target points comprises:
counting the total time taken by the player to finish observing all N target points in sequence.
Alternatively, on the basis of the above embodiment, the N target points may appear in the VR scene according to the preset rule as follows: the N target points appear in the VR scene in sequence;
correspondingly, counting the player's response speed in observing the N target points comprises:
separately counting the time the player requires to observe each target point.
Further, on the basis of the above embodiment, the time the player requires to observe each target point may specifically be counted as follows:
after the player wears the VR glasses, the moment at which each target point appears in the VR scene is recorded as a first moment, and the moment at which the player observes that target point is recorded as a second moment;
the time interval between the first moment and the second moment is then counted for each target point.
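As an illustration (not part of the patent), the first-moment/second-moment bookkeeping described above can be sketched in Python; the `TargetTimer` class and its method names are placeholders:

```python
class TargetTimer:
    """Records, per target point, the moment it appears (first moment)
    and the moment the player observes it (second moment)."""

    def __init__(self):
        self.first = {}   # target id -> appearance time (first moment)
        self.second = {}  # target id -> observation time (second moment)

    def target_appeared(self, target_id, t):
        self.first[target_id] = t

    def target_observed(self, target_id, t):
        self.second[target_id] = t

    def reaction_time(self, target_id):
        # Time interval between the first and second moments for this target
        return self.second[target_id] - self.first[target_id]


timer = TargetTimer()
timer.target_appeared("A", 0.00)
timer.target_observed("A", 0.35)
timer.target_appeared("B", 1.00)
timer.target_observed("B", 1.52)
print(timer.reaction_time("A"))  # 0.35
```

For simultaneously appearing targets, the total time is simply the interval from the common first moment to the last second moment.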
Further, on the basis of the above embodiment, before the N target points appear in the VR scene in sequence according to the preset rule, the following steps are performed:
determining the pupil center of the player;
and determining a matching relationship between the player's pupil center and the player's observation point.
Further, on the basis of the above embodiment, the pupil center of the player may be determined in the following manner:
the player wears the VR glasses, and the VR glasses comprise an infrared camera;
the infrared camera photographs the player's eyes to form an eye information image;
the eye information image is then processed to obtain the pupil center of the player.
Further, on the basis of the above embodiment, the eye information image is processed by the following steps:
performing graying processing on the eye information image to obtain a grayscale image;
determining the point with the minimum gray value in the grayscale image;
and determining the pupil center of the player according to the point with the minimum gray value.
The graying processing of the eye information image to obtain the grayscale image may specifically proceed as follows:
a coordinate system is established on the grayscale image, and f0(x, y) is taken as the original gray value of the point with coordinates (x, y). The eye information image may be pre-processed with an enhancement operator to increase the contrast between the player's pupil region and the non-pupil region. One expression of the enhancement operator En is:
En = c * lg(1 + double(f0));
where En is the enhancement operator, f0 is the original gray value, and c is a constant coefficient.
Preferably, after the eye information image has been pre-processed, the image may be filtered using the Laplacian algorithm, so that the image is effectively denoised.
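A minimal sketch of the enhancement operator and a Laplacian filtering pass, assuming NumPy, a 3x3 Laplacian kernel, and c = 1.0 (all illustrative choices not fixed by the text):

```python
import numpy as np

def enhance(f0, c=1.0):
    """Logarithmic enhancement En = c * log10(1 + f0), boosting the
    contrast between pupil and non-pupil regions."""
    return c * np.log10(1.0 + f0.astype(np.float64))

def laplacian_filter(img):
    """Apply a 3x3 Laplacian kernel (border pixels are left unfiltered)."""
    out = img.astype(np.float64).copy()
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.sum(img[i - 1:i + 2, j - 1:j + 2] * k)
    return out

gray = np.array([[10, 10, 10], [10, 200, 10], [10, 10, 10]], dtype=np.float64)
en = enhance(gray)
print(en[1, 1] > en[0, 0])  # the bright pixel stays the brightest
```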
On the basis of the above embodiment, the point with the minimum gray value in the grayscale image may be determined by the following steps:
estimating the point with the minimum gray value in the grayscale image and denoting its coordinates as (xmin, ymin), where xmin and ymin are given by:
(xmin, ymin) = arg min over (x, y) of sum of f(i, j) taken over a neighborhood of (x, y);
where min denotes the minimum operation, sum denotes the summation operation, and f(i, j) denotes the gray value of the image at coordinates (i, j).
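The minimum-gray-value search might be sketched as follows; the 3x3 neighborhood window is an assumption, since the text does not fix the summation range:

```python
import numpy as np

def min_gray_point(gray, k=1):
    """Return (xmin, ymin): the pixel whose (2k+1)x(2k+1) neighborhood has
    the smallest summed gray value, estimating a point inside the dark pupil."""
    h, w = gray.shape
    best_sum, best_xy = None, None
    for x in range(k, h - k):
        for y in range(k, w - k):
            s = gray[x - k:x + k + 1, y - k:y + k + 1].sum()
            if best_sum is None or s < best_sum:
                best_sum, best_xy = s, (x, y)
    return best_xy

img = np.full((5, 5), 200.0)
img[2:4, 2:4] = 5.0  # dark pupil-like patch
print(min_gray_point(img))  # a point inside the dark patch
```

Summing over a neighborhood rather than taking a single pixel makes the estimate robust against isolated dark noise pixels.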
Further, on the basis of the above embodiment, the pupil center of the player may be determined from the point with the minimum gray value by the following steps:
taking the coordinates (xmin, ymin) as a starting point, computing the gradient of the gray value on the grayscale image along a specified ray, and determining the position at which the gradient reaches its maximum as the position of a pupil edge point of the player;
and obtaining the pupil center of the player from the positions of the pupil edge points.
The position at which the gradient reaches its maximum is taken as the position of a pupil edge point for the following reason: the pupil and its surroundings reflect infrared light very differently. In the captured image the pupil region is dark, with low brightness, while the non-pupil region is bright; the gray value of the pupil region is therefore significantly lower than that of other regions, and the gradient changes sharply at the edge. In the processed image, along the specified direction, the gray value changes abruptly at the boundary between the pupil region and the non-pupil region, and the gradient reaches its maximum there; the position of the pupil edge point is judged accordingly.
For example, let f(i, j) be the gray value of the image at coordinates (i, j); the gray-value change along the specified direction is then:
D(i, j) = |f(i', j') - f(i, j)|,
where (i', j') is the next point along the ray. The point at which D reaches its maximum is the pupil edge point.
The pupil center of the player may be obtained from the positions of the pupil edge points by the following method:
calculating the central position of all pupil edge points; specifically, calculating the mean value xmean of all pupil edge points along the horizontal axis of the coordinate system and the mean value ymean along the vertical axis;
and determining the pupil center coordinates of the player as (xmean, ymean).
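The centroid computation is a one-liner per axis; the edge points below are hypothetical values, not from the patent:

```python
import numpy as np

# Hypothetical pupil edge points (x, y) found along several rays
edge_points = np.array([[10, 20], [30, 20], [20, 10], [20, 30]], dtype=np.float64)

# Pupil center as the mean of the edge-point coordinates on each axis
xmean, ymean = edge_points[:, 0].mean(), edge_points[:, 1].mean()
print((xmean, ymean))  # (20.0, 20.0)
```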
Alternatively, the pupil center of the player may be obtained from the positions of the pupil edge points as follows:
fitting the pupil edge points to form an ellipse-like curve, and taking the center of that curve as the pupil center.
Specifically, the invention fits the pupil edge points into an ellipse-like curve using a hybrid iteration of the least squares method (LSM) and random sample consensus (RANSAC). This approach compensates for the limited accuracy of a plain least-squares fit and for the poor timeliness of a plain RANSAC algorithm. The specific steps are as follows:
step a: selecting any 5 of the N feature points and performing an ellipse fit by the least squares method to form a first ellipse equation;
step b: using the random sample consensus algorithm, screening the N feature points into inliers and outliers against the first ellipse equation, and counting M inliers and N - M outliers;
in this embodiment, points falling on the ellipse-like curve are regarded as inliers; of course, the invention is not limited thereto;
step c: judging whether the inlier ratio M/N is less than a first threshold t1; if so, the 5 points are regarded as atypical feature points and the fitted ellipse as an atypical feature ellipse, and step a is executed again; if not, the 5 points are regarded as typical feature points and step d is executed;
step d: selecting 5 points at random from the M inliers, refining the first ellipse equation by the least squares method to form a second ellipse equation, screening the N feature points into inliers and outliers against the second ellipse equation by the random sample consensus algorithm, and counting M1 inliers and N - M1 outliers;
step e: judging whether the inlier ratio M1/N is greater than a second threshold t2; if so, terminating the iteration and taking the second ellipse equation as the optimal equation; if not, executing step d again.
Optionally, an embodiment of the present invention further provides another hybrid iteration of the least squares method (LSM) and random sample consensus (RANSAC) for fitting the pupil edge points into an ellipse-like curve. The specific steps are as follows:
step S1: select any 5 of the N feature points and perform an ellipse fit by LSM; substitute the remaining N - 5 points into the ellipse equation and discriminate inliers from outliers by RANSAC, counting M inliers and N - M outliers among the N points;
step S2: if the inlier ratio M/N is less than the threshold t1, regard the 5 points selected in step S1 as atypical feature points and the fitted ellipse as an atypical feature ellipse, and jump back to step S1; if M/N >= t1, regard the 5 points selected in step S1 as typical feature points and jump to step S3;
step S3: substitute all M inliers screened in step S1 into the LSM equation to obtain an optimized ellipse equation; substitute the N - M remaining feature points into the new ellipse equation and re-screen by the RANSAC algorithm, obtaining M1 inliers and N - M1 outliers;
step S4: compute the inlier ratio M1/N; if it is greater than the threshold t2, terminate the iteration and take the current ellipse equation as the optimal equation; if M1/N <= t2, the inlier ratio is deemed not to satisfy the threshold, and jump back to step S3 to iterate again.
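A compact sketch of the hybrid LSM/RANSAC iteration on synthetic edge points. The ellipse is fitted as an algebraic conic, and the values of eps, t1 and t2 are illustrative assumptions not specified by the text:

```python
import numpy as np

def fit_conic(points):
    """Least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 (LSM step)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(A, np.ones(len(points)), rcond=None)
    return coeffs

def residuals(coeffs, points):
    """Algebraic residual of each point against the conic equation."""
    x, y = points[:, 0], points[:, 1]
    a, b, c, d, e = coeffs
    return np.abs(a * x * x + b * x * y + c * y * y + d * x + e * y - 1.0)

def ransac_lsm_ellipse(points, eps=0.05, t1=0.5, t2=0.9, max_iter=200, seed=0):
    """Hybrid LSM/RANSAC iteration loosely following steps a-e: sample 5
    points, fit, count inliers, refit on the inliers, and accept when the
    inlier ratio exceeds t2."""
    rng = np.random.default_rng(seed)
    n = len(points)
    coeffs = fit_conic(points)
    for _ in range(max_iter):
        sample = points[rng.choice(n, 5, replace=False)]
        coeffs = fit_conic(sample)                       # step a
        inliers = points[residuals(coeffs, points) < eps]
        if len(inliers) / n < t1:                        # step c: atypical sample
            continue
        coeffs = fit_conic(inliers)                      # step d: refine on inliers
        inliers = points[residuals(coeffs, points) < eps]
        if len(inliers) / n > t2:                        # step e: accept
            break
    return coeffs

def ellipse_center(coeffs):
    """Center of the fitted conic: the point where its gradient vanishes."""
    a, b, c, d, e = coeffs
    return np.linalg.solve(np.array([[2 * a, b], [b, 2 * c]]),
                           np.array([-d, -e]))

# Synthetic pupil edge points on an ellipse centered at (3, 4), semi-axes 2 and 1
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = np.column_stack([3 + 2 * np.cos(t), 4 + np.sin(t)])
center = ellipse_center(ransac_lsm_ellipse(pts))
print(np.round(center, 3))  # approximately [3. 4.]
```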
Further, on the basis of the above embodiment, determining the matching relationship between the pupil center of the player and the player's observation point may comprise:
establishing a matching model between the pupil center of the player and the observation point of the player, wherein the input of the matching model is the pupil center of the player and the output is the player's observation point obtained from that pupil center.
Specifically, the matching model between the pupil center of the player and the observation point of the player is established as follows:
the player wears the VR glasses and observes a VR scene; the VR scene comprises a screen on which K observation points with known coordinates appear in sequence;
the player observes the K observation points in sequence, and the pupil center is recorded as the player observes each observation point;
and the matching model is obtained from the pupil centers and the coordinates of the observation points.
Specifically, a matching model (X, Y) = F(x, y) is obtained from the coordinates (X, Y) of the K observation points and the coordinates (x, y) of the pupil center, and the following method may be adopted:
selecting the matching model as:
X = a + b*x + c*y + d*x*y + e*x^2 + f*y^2;
Y = g + h*x + k*y + l*x*y + m*x^2 + n*y^2;
and determining the parameters a, b, c, d, e, f, g, h, k, l, m and n of the matching model from the coordinates (X, Y) of the K observation points and the corresponding pupil center coordinates (x, y), to complete the establishment of the matching model.
Specifically, the coordinates of the K points appearing in sequence on the screen are recorded as X = (X1, X2, X3 ... Xk), Y = (Y1, Y2, Y3 ... Yk), and the corresponding pupil center coordinates as x = (x1, x2, x3 ... xk), y = (y1, y2, y3 ... yk). The model can then be written in matrix form as:
[X; Y] = M * [1, x, y, x*y, x^2, y^2]^T,
where F(x, y) expresses the relationship between x and y, and M is the 2 x 6 matrix of mapping parameters between the pupil center position and the screen coordinates:
M = | a  b  c  d  e  f |
    | g  h  k  l  m  n |.
In the present model, when K = 6, X and Y correspond to 6 screen coordinates and the pupil likewise corresponds to 6 center coordinates: X = (X1, X2, X3, X4, X5, X6), Y = (Y1, Y2, Y3, Y4, Y5, Y6), and similarly x = (x1, x2, x3, x4, x5, x6), y = (y1, y2, y3, y4, y5, y6). The matrix equation then expands to 12 linear equations, from which a, b, c, d, e, f, g, h, k, l, m and n are solved to obtain the matching model.
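A sketch of the calibration solve, assuming the quadratic term set {1, x, y, x*y, x^2, y^2} and a synthetic ground-truth mapping; the function names and calibration values are placeholders, not from the patent:

```python
import numpy as np

def design_row(x, y):
    """Terms of the quadratic mapping: 1, x, y, x*y, x^2, y^2."""
    return [1.0, x, y, x * y, x * x, y * y]

def calibrate(pupil, screen):
    """Solve the 12 parameters from K pupil-center / screen-point pairs.
    With K = 6 the system is exactly determined; K > 6 gives a
    least-squares solution."""
    A = np.array([design_row(x, y) for x, y in pupil])
    px, *_ = np.linalg.lstsq(A, screen[:, 0], rcond=None)  # a..f
    py, *_ = np.linalg.lstsq(A, screen[:, 1], rcond=None)  # g..n
    return px, py

def gaze_point(px, py, x, y):
    """Map a pupil center (x, y) to a screen observation point (X, Y)."""
    r = np.array(design_row(x, y))
    return float(r @ px), float(r @ py)

# Synthetic calibration: assume the true mapping X = 100 + 50x, Y = 80 + 40y
pupil = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1], [1, 2]], dtype=float)
screen = np.column_stack([100 + 50 * pupil[:, 0], 80 + 40 * pupil[:, 1]])
px, py = calibrate(pupil, screen)
print(gaze_point(px, py, 0.5, 0.5))  # approximately (125.0, 100.0)
```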
Further, on the basis of the above embodiment, counting the player's response speed in observing the N target points may be performed as follows:
determining the position of the player's current observation point from the pupil center of the player and the matching relationship;
if the difference between the position of the player's current observation point and the position of a target point appearing according to the preset rule does not exceed a threshold value, determining that the player has successfully observed the target point;
and recording the interval between the moment the target point appears and the moment the player successfully observes it, so as to complete the statistics of the response speed.
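A sketch of the threshold test and reaction-time statistic; the 20-pixel threshold and the gaze-track samples below are illustrative assumptions:

```python
import numpy as np

def observed(gaze_xy, target_xy, threshold=20.0):
    """The player is deemed to have observed the target when the distance
    between the current observation point and the target position does
    not exceed the threshold."""
    return float(np.hypot(gaze_xy[0] - target_xy[0],
                          gaze_xy[1] - target_xy[1])) <= threshold

def reaction_time(appear_t, gaze_track, target_xy, threshold=20.0):
    """Return the interval between the moment the target appears and the
    first gaze sample that successfully lands on it, or None if it is
    never observed."""
    for t, xy in gaze_track:
        if t >= appear_t and observed(xy, target_xy, threshold):
            return t - appear_t
    return None

# Gaze track: (time, observation point) pairs produced by the matching model
track = [(0.0, (0, 0)), (0.2, (150, 90)), (0.4, (205, 98)), (0.6, (200, 100))]
print(reaction_time(0.1, track, (200, 100)))  # the target is first observed at t = 0.4
```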
The auxiliary selection method for esports players provided by the embodiments of the invention can accurately and quantitatively measure a player's reaction speed to the target points, and avoids subjective evaluation in the selection of esports players.
In summary, the principle and embodiments of the present invention have been explained herein through specific examples; the above description of the embodiments is provided only to help in understanding the method and core idea of the invention. Meanwhile, a person skilled in the art may, according to the idea of the present invention, make variations in the specific embodiments and the scope of application. In conclusion, the content of this specification should not be construed as limiting the present invention, and the scope of the present invention shall be subject to the appended claims.