CN106669139B - Auxiliary selection method for e-sports players - Google Patents

Auxiliary selection method for e-sports players

Info

Publication number
CN106669139B
CN106669139B (application CN201611098201.5A)
Authority
CN
China
Prior art keywords
player
point
pupil center
image
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611098201.5A
Other languages
Chinese (zh)
Other versions
CN106669139A (en)
Inventor
左瑜
张捷
单瑚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhou Chun
Original Assignee
Research Institute of Zhejiang University Taizhou
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Research Institute of Zhejiang University Taizhou filed Critical Research Institute of Zhejiang University Taizhou
Priority to CN201611098201.5A priority Critical patent/CN106669139B/en
Publication of CN106669139A publication Critical patent/CN106669139A/en
Application granted granted Critical
Publication of CN106669139B publication Critical patent/CN106669139B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F9/00: Games not otherwise provided for
    • A63F9/0096: Reaction time games
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63B: APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B69/00: Training appliances or apparatus for special sports
    • A63B69/0053: Apparatus generating random stimulus signals for reaction-time training involving a substantial physical effort

Abstract

The invention relates to an auxiliary selection method for e-sports players, comprising the following steps: after a player puts on VR glasses, making N target points appear in a VR scene according to a preset rule to guide the player's observation, wherein N is an integer greater than or equal to 1; and measuring the player's response speed in observing the N target points, to complete the auxiliary selection of the player. The auxiliary selection method for e-sports players provided by the embodiments of the invention can accurately and quantitatively measure a player's reaction speed to target points, avoiding subjective evaluation in the selection of e-sports players.

Description

Auxiliary selection method for e-sports players
Technical Field
The invention relates to the field of electronic equipment, and in particular to an auxiliary selection method for e-sports players.
Background
With the Department of Vocational and Adult Education of the Ministry of Education issuing its notice on majors newly approved for enrollment at higher vocational schools in 2017, "e-sports and management" was added as a new major in 2016. The push from national policy is increasingly evident, public acceptance of e-sports has risen markedly, and more and more people are drawn to the field. Many e-sports players hope to represent their country in top-level international competitions and win honor for it, so identifying the e-sports players with the greatest potential has gradually attracted more attention from the industry.
For an e-sports athlete, reaction speed is one manifestation of speed ability. In a competitive match, a very important aspect of reaction speed is the time needed to find a target in the field of view. For example, when a person with a reaction time of 0.25 s faces a person with a reaction time of 0.15 s, the latter can locate the target and act within a very short time, while the former has only just found the target's position; the advantage of the faster player is therefore very pronounced.
At present, there is no method for accurately measuring how fast an e-sports player can find a target during player selection, so evaluation of this ability remains rough.
Disclosure of Invention
Therefore, to remedy the technical defects and shortcomings of the prior art, the invention provides an auxiliary selection method for e-sports players, comprising the following steps:
after a player puts on VR glasses, making N target points appear in a VR scene according to a preset rule to guide the player's observation, wherein N is an integer greater than or equal to 1;
and measuring the player's response speed in observing the N target points, to complete the auxiliary selection of the player.
On the basis of the above embodiment, making the N target points appear in the VR scene according to the preset rule includes:
the N target points appearing in the VR scene simultaneously;
correspondingly, measuring the player's response speed in observing the N target points includes:
counting the total time for the player to finish observing all N target points in sequence.
On the basis of the above embodiment, making the N target points appear in the VR scene according to the preset rule may instead include: the N target points appearing in the VR scene one after another;
correspondingly, measuring the player's response speed in observing the N target points includes:
counting, for each target point, the time required for the player to observe it.
On the basis of the above embodiment, counting the time required for the player to observe each target point includes:
after the player puts on the VR glasses, recording the moment when each target point appears in the VR scene as a first moment, and recording the moment when the player observes that target point as a second moment;
and counting, for each target point, the time interval between the first moment and the second moment.
On the basis of the above embodiment, before the N target points appear in the VR scene according to the preset rule, the method further includes:
determining a pupil center of the player;
determining a matching relationship between the player's pupil center and the player's observation point.
On the basis of the above embodiment, determining the pupil center of the player includes:
the player wearing VR glasses, the VR glasses including an infrared camera;
the infrared camera photographing the player's eyes to form an eye information image;
and processing the eye information image to obtain the player's pupil center.
On the basis of the above embodiment, processing the eye information image includes:
graying the eye information image to obtain a grayscale image;
determining the point with the minimum gray value in the grayscale image;
and determining the player's pupil center from the point with the minimum gray value.
On the basis of the above embodiment, determining a matching relationship between the player's pupil center and the player's observation point includes:
establishing a matching model between the player's pupil center and the player's observation point, wherein the input of the matching model is the player's pupil center and the output is the player's observation point obtained from that pupil center.
On the basis of the above embodiment, establishing a matching model between the player's pupil center and the player's observation point includes:
the player wearing VR glasses and observing a VR scene that includes a screen, on which K observation points with known coordinates appear in sequence;
the player observing the K observation points in sequence, the pupil center being recorded as the player observes each observation point;
and obtaining the matching model from the pupil centers and the coordinates of the observation points.
On the basis of the above embodiment, measuring the player's response speed in observing the N target points includes:
determining the position of the player's current observation point from the player's pupil center and the matching relationship;
if the difference between the position of the current observation point and the position of a target point appearing according to the preset rule does not exceed a threshold, determining that the player has successfully observed that target point;
and recording the interval between the moment the target point appears and the moment the player successfully observes it, completing the response speed statistics.
Other aspects and features of the present invention will become apparent from the following detailed description, which proceeds with reference to the accompanying drawings. It is to be understood, however, that the drawings are designed solely for purposes of illustration and not as a definition of the limits of the invention, for which reference should be made to the appended claims. It should be further understood that the drawings are not necessarily drawn to scale and that, unless otherwise indicated, they are merely intended to conceptually illustrate the structures and procedures described herein.
Drawings
The following detailed description of embodiments of the invention will be made with reference to the accompanying drawings.
Fig. 1 is a flowchart of an auxiliary selection method for e-sports players according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
The invention provides an auxiliary selection method for e-sports players, comprising the following steps:
after a player puts on VR glasses, making N target points appear in a VR scene according to a preset rule to guide the player's observation, wherein N is an integer greater than or equal to 1;
and measuring the player's response speed in observing the N target points, to complete the auxiliary selection of the player.
Further, on the basis of the above embodiment, making the N target points appear in the VR scene according to the preset rule may be implemented as follows:
the N target points appear in the VR scene simultaneously;
correspondingly, measuring the player's response speed in observing the N target points includes:
counting the total time for the player to finish observing all N target points in sequence.
Alternatively, on the basis of the above embodiment, making the N target points appear in the VR scene according to the preset rule may be implemented as follows: the N target points appear in the VR scene one after another;
correspondingly, measuring the player's response speed in observing the N target points includes:
counting, for each target point, the time required for the player to observe it.
Further, on the basis of the above embodiment, counting the time required for the player to observe each target point may proceed as follows (the sketch after these steps illustrates the bookkeeping):
after the player puts on the VR glasses, the moment when each target point appears in the VR scene is recorded as a first moment, and the moment when the player observes that target point is recorded as a second moment;
and, for each target point, the time interval between the first moment and the second moment is counted.
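As a minimal illustration of this first-moment/second-moment bookkeeping, a Python sketch follows; the class name, the use of time.monotonic(), and the dictionary layout are illustrative assumptions rather than the patent's implementation:

    import time

    class TargetTimer:
        """Per-target bookkeeping of the first moment (appearance) and
        the second moment (observation), as described above."""

        def __init__(self):
            self.first_moment = {}   # target id -> appearance time
            self.intervals = {}      # target id -> reaction time in seconds

        def target_appeared(self, target_id):
            self.first_moment[target_id] = time.monotonic()

        def target_observed(self, target_id):
            # Interval between the second moment and the first moment.
            self.intervals[target_id] = time.monotonic() - self.first_moment[target_id]

    timer = TargetTimer()
    timer.target_appeared("point-1")
    # ... the gaze tracker later reports that the player looked at point-1 ...
    timer.target_observed("point-1")
    print(timer.intervals)

For sequential targets this yields one latency per target; for simultaneous targets, the total time is simply the interval up to the last successful observation.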
Further, on the basis of the above embodiment, before the N target points appear in the VR scene according to the preset rule, the following steps are also performed:
determining a pupil center of the player;
determining a matching relationship between the player's pupil center and the player's observation point.
Further, on the basis of the above embodiment, the pupil center of the player is determined as follows:
the player wears VR glasses, and the VR glasses include an infrared camera;
the infrared camera photographs the player's eyes to form an eye information image;
and the eye information image is processed to obtain the player's pupil center.
Further, on the basis of the above embodiment, the eye information image is processed through the following steps:
graying the eye information image to obtain a grayscale image;
determining the point with the minimum gray value in the grayscale image;
and determining the player's pupil center from the point with the minimum gray value.
Graying the eye information image to obtain the grayscale image may specifically adopt the following method:
a coordinate system is established on the grayscale image, and f0(x, y) denotes the original gray value of the point at coordinates (x, y). The eye information image may be preprocessed with an enhancement operator to increase the contrast between the pupil region and the non-pupil region of the image. One form of the enhancement operator En is:
En=c*lg(1+double(f0));
where En is the enhancement operator, f0 is the original gray scale value, and c is a constant coefficient.
Preferably, after the preprocessing of the eye information image, the image may be filtered with a Laplacian algorithm so that it is effectively denoised. A sketch of this preprocessing chain follows.
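The following Python sketch strings the two steps together under stated assumptions: the input is an RGB eye image, the constant c is chosen so the enhanced values span [0, 255] again, and the Laplacian output is subtracted as a simple filtering surrogate; none of these choices is fixed by the text.

    import numpy as np
    from scipy import ndimage

    def preprocess_eye_image(eye_rgb):
        """Gray the eye image, apply En = c * lg(1 + f0) to raise the
        pupil / non-pupil contrast, then Laplacian-based filtering."""
        # Graying: plain channel average (a weighted sum is equally valid).
        f0 = eye_rgb.astype(np.float64).mean(axis=2)
        # Choose c so the enhanced values span [0, 255] again (an assumption).
        c = 255.0 / np.log10(1.0 + 255.0)
        enhanced = c * np.log10(1.0 + f0)      # En = c * lg(1 + double(f0))
        # Laplacian filtering after the preprocessing, as described above.
        filtered = enhanced - ndimage.laplace(enhanced)
        return np.clip(filtered, 0.0, 255.0).astype(np.uint8)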
On the basis of the above embodiment, the point with the minimum gray value in the grayscale image may be determined through the following steps (a sketch follows the formulas):
estimating the point with the minimum gray value in the grayscale image and letting its coordinates be (xmin, ymin), where xmin and ymin are given by
xmin = arg min_x Σ_y f(x, y),
ymin = arg min_y Σ_x f(x, y);
where min represents the minimum operation, Σ represents the summation operation, and f(x, y) represents the gray value of the image at coordinates (x, y).
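Read as gray integral projections (an assumption on my part; the original formula images are not preserved), the estimate reduces to two argmin calls:

    import numpy as np

    def darkest_point(gray):
        """Estimate (xmin, ymin) from gray integral projections: the column
        and the row with the smallest summed gray value."""
        col_sums = gray.sum(axis=0)   # Σ_y f(x, y), one value per column x
        row_sums = gray.sum(axis=1)   # Σ_x f(x, y), one value per row y
        return int(np.argmin(col_sums)), int(np.argmin(row_sums))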
Further, on the basis of the above embodiment, the player's pupil center is determined from the point with the minimum gray value; specifically, the following steps may be adopted:
taking the coordinates (xmin, ymin) as a starting point, computing the gradient of the gray value on the grayscale image along a specified ray direction, and determining the position where the gradient reaches its maximum as the position of a pupil edge point of the player;
and obtaining the player's pupil center from the positions of the pupil edge points.
The position where the gradient reaches its maximum is taken as the position of a pupil edge point because the pupil and its surroundings reflect infrared light very differently: in the captured image the pupil region is dark with low brightness, while the non-pupil region is brighter. The gray level of the pupil region is therefore significantly lower than that of other regions, and the gradient changes sharply at the boundary. In the processed image, along the specified direction, the gray value jumps abruptly at the boundary between the pupil and non-pupil regions, where the gradient also reaches its maximum; this is how the position of the pupil edge point is judged.
For example, let f(i, j) be the gray value of the image at coordinates (i, j), and let f(i_k, j_k) be the successive samples taken along the specified ray. The gray-level difference between successive samples,
D(k) = | f(i_{k+1}, j_{k+1}) - f(i_k, j_k) |,
is the gray gradient in that direction, and the point where D is maximal is the pupil edge point (see the sketch below).
The player's pupil center may be obtained from the positions of the pupil edge points by the following method:
computing the central position of all pupil edge points; specifically, computing the mean value xmean of all pupil edge points along the horizontal axis of the coordinate system and the mean value ymean along the vertical axis;
and determining the player's pupil center coordinates as (xmean, ymean).
Alternatively, the player's pupil center may be obtained from the positions of the pupil edge points by another method, as follows:
fitting the pupil edge points into an ellipse-like curve and taking the center of that curve as the pupil center point.
Specifically, the invention fits the pupil edge points into an ellipse-like curve with a hybrid iteration of the least squares method (LSM) and random sample consensus (RANSAC). The hybrid overcomes the limited accuracy of the least squares method used alone and improves the poor time efficiency of the RANSAC algorithm. The specific steps are as follows:
step a, selecting any 5 points from the N characteristic points, and carrying out ellipse fitting by using a least square method to form a first type of ellipse equation;
b, screening local interior points and local exterior points of the N feature points through the first type of elliptical equation by using a random sampling consistency algorithm, and counting to obtain M local interior points and N-M local exterior points;
in this embodiment, the points falling on the ellipse-like are regarded as local points. Of course, the invention is not limited thereto.
Step c, judging the occupation rate of the local pointWhether it is less than a first threshold t 1; if yes, determining the 5 points as atypical feature points, and if the fitting ellipse is an atypical feature ellipse, re-executing the step a; if not, determining the 5 points as classicalC, executing step d if the type characteristic points are the type characteristic points;
d, randomly selecting 5 points according to the M local interior points, optimizing the first type of elliptic equation by using a least square method to form a second type of elliptic equation, screening the local interior points and the local exterior points of the N characteristic points by using a random sampling consistency algorithm through the second type of elliptic equation, and finally counting to obtain M1 local interior points and N-M1 local exterior points;
step e, judging the occupation rate of the local pointWhether it is greater than a second threshold t 2; if so, terminating the iteration, and considering the second type of elliptic equation as an optimal equation; if not, executing step d.
Optionally, an embodiment of the invention further provides another algorithm that fits the pupil edge points into an ellipse-like curve by a hybrid iteration of the least squares method (LSM) and random sample consensus (RANSAC). The specific steps are as follows (a sketch of the shared iteration scheme follows these steps):
Step S1: selecting any 5 of the N feature points, performing an ellipse fit with LSM, substituting the remaining N - 5 points into the ellipse equation and classifying them with RANSAC as inliers or outliers, so that M inliers and N - M outliers are counted among the N points;
Step S2: if the inlier ratio M/N is less than the threshold t1, regarding the 5 points selected in step S1 as atypical feature points and the fitted ellipse as an atypical feature ellipse, and jumping back to step S1; if M/N is not less than t1, regarding the 5 points selected in step S1 as typical feature points and jumping to step S3;
Step S3: substituting all M inlier feature points screened in step S1 into the LSM equations to obtain an optimized ellipse equation, then substituting the N - M remaining feature points into the new ellipse equation with the RANSAC algorithm to re-screen inliers, giving M1 inliers and N - M1 outliers;
Step S4: computing the inlier ratio M1/N; if it is greater than the threshold t2, terminating the iteration and taking the current ellipse equation as the optimal equation; if M1/N is not greater than t2, regarding the inlier ratio as not meeting the threshold and jumping back to step S3 for another iteration.
Further, on the basis of the above embodiment, determining the matching relationship between the player's pupil center and the player's observation point may be done as follows:
establishing a matching model between the player's pupil center and the player's observation point, wherein the input of the matching model is the player's pupil center and the output is the player's observation point obtained from that pupil center.
Specifically, the matching model between the player's pupil center and the player's observation point is established through the following steps:
the player wears VR glasses and observes a VR scene that includes a screen, on which K observation points with known coordinates appear in sequence;
the player observes the K observation points in sequence, and the pupil center is recorded as the player observes each observation point;
and the matching model is obtained from the pupil centers and the coordinates of the observation points.
Specifically, a matching model (X, Y) = F(x, y) is obtained from the coordinates (X, Y) of the K observation points and the coordinates (x, y) of the corresponding pupil centers; the following method may be adopted:
selecting the matching model as the second-order polynomial mapping
X = a + b·x + c·y + d·x·y + e·x² + f·y²,
Y = g + h·x + k·y + l·x·y + m·x² + n·y²,
and determining the parameters a, b, c, d, e, f, g, h, k, l, m and n of the matching model from the coordinates (X, Y) of the K points and the corresponding pupil center coordinates (x, y), which completes the establishment of the matching model.
Specifically, the coordinates of the K points appearing in sequence on the screen are recorded as X = (X1, X2, X3, ..., XK) and Y = (Y1, Y2, Y3, ..., YK), and the corresponding pupil center coordinates as x = (x1, x2, x3, ..., xK) and y = (y1, y2, y3, ..., yK). The model can then be built from the matrix relation
(X, Y)ᵀ = M · (1, x, y, x·y, x², y²)ᵀ,
where F(x, y) expresses the relationship between x and y, and M = [a b c d e f; g h k l m n] is the matrix of mapping parameters between the pupil center position and the screen coordinates.
When K = 6, X and Y correspond to six screen coordinates and the pupil to six center coordinates: X = (X1, X2, X3, X4, X5, X6), Y = (Y1, Y2, Y3, Y4, Y5, Y6), and likewise x = (x1, x2, x3, x4, x5, x6), y = (y1, y2, y3, y4, y5, y6). Substituting these into the matrix relation yields a system of twelve equations in the twelve unknowns, and solving that system for a, b, c, d, e, f, g, h, k, l, m and n gives the matching model (a sketch of the solve follows).
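Assuming the quadratic form reconstructed above, the twelve parameters reduce to one linear solve over the K calibration pairs. The function names and data layout below are illustrative; with K = 6 the system is square, and with K > 6 lstsq returns the least-squares fit:

    import numpy as np

    def design_row(x, y):
        # Basis of the assumed model: [1, x, y, x*y, x^2, y^2].
        return [1.0, x, y, x * y, x * x, y * y]

    def fit_matching_model(pupil_xy, screen_XY):
        """Solve for the 6x2 parameter matrix M from K >= 6 calibration pairs.
        Column 0 holds a..f (for X), column 1 holds g..n (for Y)."""
        A = np.array([design_row(px, py) for px, py in pupil_xy])
        B = np.asarray(screen_XY, dtype=np.float64)   # one (X, Y) row per point
        M, *_ = np.linalg.lstsq(A, B, rcond=None)
        return M

    def gaze_point(M, px, py):
        """Map a pupil center (px, py) to the screen observation point (X, Y)."""
        return np.array(design_row(px, py)) @ M

For the K = 6 case of the text, pupil_xy and screen_XY each hold six pairs, and lstsq solves the twelve equations exactly (up to numerical conditioning).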
Further, on the basis of the above embodiment, measuring the player's response speed in observing the N target points may proceed as follows (a sketch follows these steps):
determining the position of the player's current observation point from the player's pupil center and the matching relationship;
if the difference between the position of the current observation point and the position of a target point appearing according to the preset rule does not exceed a threshold, determining that the player has successfully observed that target point;
and recording the interval between the moment the target point appears and the moment the player successfully observes it, completing the response speed statistics.
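A sketch of this matching-and-timing step over a stream of time-stamped gaze points; the tuple layout and the pixel radius standing in for the unspecified threshold are assumptions:

    import numpy as np

    def reaction_times(targets, gaze_samples, radius=30.0):
        """targets: id -> (t_appear, X, Y); gaze_samples: list of (t, gx, gy)
        already mapped to screen coordinates by the matching model.
        Returns id -> latency between appearance and first successful look."""
        latencies = {}
        for tid, (t_appear, tx, ty) in targets.items():
            for t, gx, gy in gaze_samples:
                on_target = np.hypot(gx - tx, gy - ty) <= radius
                if t >= t_appear and on_target:
                    latencies[tid] = t - t_appear
                    break
        return latencies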
The auxiliary selection method for e-sports players provided by the embodiments of the invention can accurately and quantitatively measure a player's reaction speed to target points, avoiding subjective evaluation in the selection of e-sports players.
In summary, the principle and embodiments of the present invention have been explained through specific examples, and the above description of the embodiments is intended only to help in understanding the method and its core idea; meanwhile, a person skilled in the art may, following the idea of the invention, vary the specific embodiments and the scope of application. Accordingly, the content of this specification should not be construed as limiting the invention, whose scope of protection is defined by the appended claims.

Claims (7)

1. An auxiliary selection method for e-sports players, comprising:
determining a pupil center of a player;
determining a matching relationship between the player's pupil center and the player's observation point;
after the player puts on VR glasses, making N target points appear in a VR scene according to a preset rule to guide the player's observation, wherein N is an integer greater than or equal to 1;
measuring the player's response speed in observing the N target points, to complete the auxiliary selection of the player,
wherein determining the pupil center of the player comprises:
the player wearing the VR glasses, the VR glasses including an infrared camera;
the infrared camera photographing the player's eyes to form an eye information image;
processing the eye information image to obtain the player's pupil center,
wherein processing the eye information image comprises:
graying the eye information image to obtain a grayscale image;
determining the point with the minimum gray value in the grayscale image;
determining the player's pupil center from the point with the minimum gray value;
wherein graying the eye information image to obtain the grayscale image comprises:
establishing a coordinate system on the grayscale image, letting f0(x, y) be the original gray value of the point at coordinates (x, y), and preprocessing the eye information image with an enhancement operator En expressed as:
En=c*lg(1+double(f0));
wherein En is the enhancement operator, f0 is the original gray value, and c is a constant coefficient;
and after the preprocessing of the eye information image, performing image filtering on the image with a Laplacian algorithm;
wherein determining the point with the minimum gray value in the grayscale image comprises:
letting the coordinates of the minimum point be (xmin, ymin), where
xmin = arg min_x Σ_y f(x, y),
ymin = arg min_y Σ_x f(x, y);
wherein min represents the minimum operation, Σ represents the summation operation, and f(x, y) represents the gray value of the image at coordinates (x, y).
2. The method of claim 1, wherein making the N target points appear in the VR scene according to a preset rule comprises:
the N target points appearing in the VR scene simultaneously;
and wherein measuring the player's response speed in observing the N target points comprises:
counting the total time for the player to finish observing all N target points in sequence.
3. The method of claim 1, wherein making the N target points appear in the VR scene according to a preset rule comprises: the N target points appearing in the VR scene one after another;
and wherein measuring the player's response speed in observing the N target points comprises:
counting, for each target point, the time required for the player to observe it.
4. The method of claim 3, wherein counting the time required for the player to observe each target point comprises:
after the player puts on the VR glasses, recording the moment when each target point appears in the VR scene as a first moment, and recording the moment when the player observes that target point as a second moment;
and counting, for each target point, the time interval between the first moment and the second moment.
5. The method of claim 1, wherein determining a matching relationship between the player's pupil center and the player's observation point comprises:
establishing a matching model between the player's pupil center and the player's observation point, wherein the input of the matching model is the player's pupil center and the output is the player's observation point obtained from that pupil center.
6. The method of claim 5, wherein establishing a matching model between the player's pupil center and the player's observation point comprises:
the player wearing VR glasses and observing a VR scene that includes a screen, on which K observation points with known coordinates appear in sequence;
the player observing the K observation points in sequence, the pupil center being recorded as the player observes each observation point;
and obtaining the matching model from the pupil centers and the coordinates of the observation points.
7. The method of claim 1, wherein measuring the player's response speed in observing the N target points comprises:
determining the position of the player's current observation point from the player's pupil center and the matching relationship;
if the difference between the position of the current observation point and the position of a target point appearing according to the preset rule does not exceed a threshold, determining that the player has successfully observed that target point;
and recording the interval between the moment the target point appears and the moment the player successfully observes it, completing the response speed statistics.
CN201611098201.5A 2016-12-03 2016-12-03 Auxiliary selection method for e-sports players Active CN106669139B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611098201.5A CN106669139B (en) 2016-12-03 2016-12-03 Auxiliary selection method for e-sports players

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611098201.5A CN106669139B (en) 2016-12-03 2016-12-03 Auxiliary selection method for e-sports players

Publications (2)

Publication Number Publication Date
CN106669139A CN106669139A (en) 2017-05-17
CN106669139B true CN106669139B (en) 2019-12-27

Family

ID=58867509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611098201.5A Active CN106669139B (en) 2016-12-03 2016-12-03 Auxiliary selection method for e-sports players

Country Status (1)

Country Link
CN (1) CN106669139B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108205395B (en) * 2018-01-16 2021-03-23 安徽慧视金瞳科技有限公司 Method for accurately positioning center coordinates of calibration points

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1584917A (en) * 2004-06-11 2005-02-23 清华大学 Living body iris patterns collecting method and collector
CN101779960A (en) * 2010-02-24 2010-07-21 沃建中 Test system and method of stimulus information cognition ability value
CN102076387A (en) * 2008-04-29 2011-05-25 百利游戏有限公司 Biofeedback for gaming device, such as electronic gaming machine (egm)
CN103748599A (en) * 2011-05-20 2014-04-23 爱福露恩斯公司 Systems and methods for measuring reactions of head, eyes, eyelids and pupils
CN104983435A (en) * 2015-08-07 2015-10-21 北京环度智慧智能技术研究所有限公司 Stimulus information establishing method for interest orientation value test

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120244935A1 (en) * 2011-03-24 2012-09-27 International Business Machines Corporation Synchronizing game character display with game player viewing detection

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1584917A (en) * 2004-06-11 2005-02-23 清华大学 Living body iris patterns collecting method and collector
CN102076387A (en) * 2008-04-29 2011-05-25 百利游戏有限公司 Biofeedback for gaming device, such as electronic gaming machine (egm)
CN101779960A (en) * 2010-02-24 2010-07-21 沃建中 Test system and method of stimulus information cognition ability value
CN103748599A (en) * 2011-05-20 2014-04-23 爱福露恩斯公司 Systems and methods for measuring reactions of head, eyes, eyelids and pupils
CN104983435A (en) * 2015-08-07 2015-10-21 北京环度智慧智能技术研究所有限公司 Stimulus information establishing method for interest orientation value test

Also Published As

Publication number Publication date
CN106669139A (en) 2017-05-17

Similar Documents

Publication Publication Date Title
WO2020151489A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN108549874B (en) Target detection method, target detection equipment and computer-readable storage medium
CN102113335B (en) Image processing apparatus and method
CN109191428B (en) Masking texture feature-based full-reference image quality evaluation method
CN111079740A (en) Image quality evaluation method, electronic device, and computer-readable storage medium
CN107423678A (en) A kind of training method and face identification method of the convolutional neural networks for extracting feature
CN102567734B (en) Specific value based retina thin blood vessel segmentation method
Orhan Particle filtering
US9480925B2 (en) Image construction game
CN109584198A (en) A kind of quality of human face image evaluation method, device and computer readable storage medium
CN106780446A (en) It is a kind of to mix distorted image quality evaluating method without reference
CN103458765A (en) Image processing apparatus
CN111415304A (en) Underwater vision enhancement method and device based on cascade deep network
CN110866872A (en) Pavement crack image preprocessing intelligent selection method and device and electronic equipment
CN110910365A (en) Quality evaluation method for multi-exposure fusion image of dynamic scene and static scene simultaneously
CN116311061A (en) Material identification method based on image processing
CN111191542B (en) Method, device, medium and electronic equipment for identifying abnormal actions in virtual scene
CN106669139B (en) Auxiliary selection method for e-sports players
CN109558797B (en) Method for distinguishing human body balance disorder based on gravity center area model under visual stimulation
CN112733581A (en) Vehicle attribute identification method and system
CN110874835A (en) Crop leaf disease resistance identification method and system, electronic equipment and storage medium
CN112396016B (en) Face recognition system based on big data technology
CN111784660B (en) Method and system for analyzing frontal face degree of face image
CN105872516A (en) Method and device for obtaining parallax parameters of three-dimensional film source
CN114342411B (en) Method for providing one or more sets of graphics parameters, computer executing program for implementing method for providing one or more sets of graphics parameters

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20191031

Address after: 318000, No. 618, west section of Taizhou Avenue, Jiaojiang District, Zhejiang

Applicant after: RESEARCH INSTITUTE OF ZHEJIANG University TAIZHOU

Address before: 710071 Shaanxi city of Xi'an province high tech Zone Road No. 86 leading Times Square (B) second building 1 unit 22 floor room 12202 No. 51

Applicant before: XI'AN CREATION KEJI Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201118

Address after: Room 201, No.208 Xinggang West Road, Damaiyu street, Taizhou City, Zhejiang Province

Patentee after: Qihuan Technology Co.,Ltd.

Address before: 318000, No. 618, west section of Taizhou Avenue, Jiaojiang District, Zhejiang

Patentee before: RESEARCH INSTITUTE OF ZHEJIANG University TAIZHOU

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220909

Address after: Room 2704, Building 2, Wanda Huafu, No. 888, Maocheng Middle Road, Yinzhou District, Ningbo City, Zhejiang Province, 315100

Patentee after: Zhou Chun

Address before: 317604 Room 201, 208 Xinggang West Road, Damaiyu street, Taizhou City, Zhejiang Province

Patentee before: Qihuan Technology Co.,Ltd.

TR01 Transfer of patent right