CN114939272B - Vehicle-mounted interactive game method and system based on HUD - Google Patents

Vehicle-mounted interactive game method and system based on HUD

Info

Publication number
CN114939272B
CN114939272B (application CN202210670676.6A)
Authority
CN
China
Prior art keywords
action
user
hud
game
preset
Prior art date
Legal status
Active
Application number
CN202210670676.6A
Other languages
Chinese (zh)
Other versions
CN114939272A (en)
Inventor
胡素君
朱光欢
刘关林
Current Assignee
Guangzhou Automobile Group Co Ltd
Original Assignee
Guangzhou Automobile Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Automobile Group Co Ltd filed Critical Guangzhou Automobile Group Co Ltd
Priority to CN202210670676.6A
Publication of CN114939272A
Application granted
Publication of CN114939272B
Legal status: Active (current)
Anticipated expiration

Links

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25 Output arrangements for video game devices
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/303 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device for displaying additional data, e.g. simulating a Head Up Display
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a vehicle-mounted interactive game method and system based on HUD. The method comprises the following steps: controlling the HUD to display a game preparation picture; starting a game according to a first user instruction input by a user and controlling the HUD to sequentially display a plurality of dynamic images in a preset display mode, where each dynamic image contains preset action content that instructs the user's eyes to move according to an action requirement; identifying a plurality of pieces of action information describing how the user's eyes move in response to the preset action content of the dynamic images; and comparing the pieces of action information one-to-one with the preset action content of the dynamic images, and obtaining a game result from the comparison result. Unlike games played on a mobile phone or game console, the invention provides a distinct entertainment experience while helping to relieve eye fatigue and protect the eyes.

Description

Vehicle-mounted interactive game method and system based on HUD
Technical Field
The invention relates to the technical field of vehicle-mounted interactive games, in particular to a vehicle-mounted interactive game method and system based on HUD.
Background
With the continuous development of the automobile industry, a car is no longer merely a means of transportation; it can also provide entertainment for the driver, for example through vehicle-mounted games. Current vehicle-mounted games are played mainly by touching the head-unit screen or by connecting an external gamepad, so they do not differ meaningfully from games on a mobile phone or game console. Meanwhile, a driver is prone to eye fatigue after long periods of continuous driving, and at present no scheme is available that helps the driver relieve eye fatigue and protect the eyes.
Disclosure of Invention
The invention aims to provide a vehicle-mounted interactive game method and system based on HUD which differ from the game mode of a mobile phone or game console, offer an entertainment experience distinct from traditional games, and at the same time help to relieve eye fatigue and protect the eyes.
To achieve the above object, a first embodiment of the present invention provides a vehicle-mounted interactive game method based on HUD, the method including:
controlling the HUD to display a game preparation picture;
starting a game according to a first user instruction input by a user, and controlling the HUD to sequentially display a plurality of dynamic images according to a preset display mode; each dynamic image comprises preset action content for indicating the eyes of a user to act according to the action requirement;
identifying a plurality of pieces of motion information of the user's eyes according to the preset motion content of the plurality of dynamic images;
and comparing the action information with the preset action content of the dynamic images in a one-to-one correspondence manner, and obtaining a game result according to the comparison result.
Preferably, the action information at least comprises action content;
the one-to-one correspondence comparison between the plurality of motion information and the preset motion content of the plurality of dynamic images comprises the following steps:
comparing the action content of each action information with the preset action content of the corresponding dynamic image to obtain action similarity;
the comparison result at least comprises action similarity of each piece of action information.
Preferably, the motion content of each motion information includes a motion track of the pupil center of the user's eye.
Preferably, the method further comprises:
controlling the HUD to display the calibration point, acquiring the eye focus coordinate when the user's eyes watch the calibration point, and establishing a mapping relation between the eye focus coordinate and the coordinate of the calibration point;
the step of comparing the motion content of each motion information with the preset motion content of the corresponding dynamic image to obtain the motion similarity comprises the following steps:
and obtaining the presentation condition of the action content of the eyes of the user in the HUD according to the mapping relation and the eye movement track, and comparing the presentation condition with the preset action content of the dynamic image to obtain the action similarity.
Preferably, the action information further comprises action time;
the one-to-one correspondence comparison between the plurality of motion information and the preset motion content of the plurality of dynamic images comprises the following steps:
comparing the action time of each action information with the time of the corresponding dynamic image to start displaying to obtain the action following degree;
the comparison result comprises the action following degree and the action similarity of each piece of action information.
Preferably, the game result obtaining step includes:
scoring the game completion condition of the user according to the comparison result, and accumulating the scoring to the user account score; wherein the user account points are used to redeem prizes or to exchange for preferential services.
Preferably, the method further comprises:
determining game difficulty according to a second user instruction input by a user;
the game is started according to a first user instruction input by a user, the HUD is controlled to sequentially display a plurality of dynamic images according to a preset display mode, and the method comprises the following steps:
acquiring a plurality of dynamic images corresponding to the game difficulty according to the game difficulty, and controlling the HUD to sequentially display the plurality of dynamic images according to a preset display mode;
or, alternatively,
acquiring a plurality of dynamic images, and controlling the HUD to sequentially display the plurality of dynamic images according to a preset display mode corresponding to the game difficulty.
As the same inventive concept, corresponding to the above method, a second embodiment of the present invention proposes a HUD-based vehicle-mounted interactive game system, the system comprising:
a first control unit for controlling the HUD to display a game preparation screen;
the second control unit is used for starting the game according to a first user instruction input by a user and controlling the HUD to sequentially display a plurality of dynamic images according to a preset display mode; each dynamic image comprises preset action content for indicating the eyes of a user to act according to the action requirement;
an eye recognition unit for recognizing a plurality of pieces of motion information describing how the user's eyes move according to the preset motion content of the plurality of dynamic images;
and the comparison unit is used for carrying out one-to-one correspondence comparison on the plurality of action information and the preset action content of the plurality of dynamic images, and obtaining a game result according to the comparison result.
Preferably, the system further comprises:
the third control unit is used for controlling the HUD to display the calibration points, acquiring the eye focus coordinates of the user when the eyes watch the calibration points, and establishing a mapping relation between the eye focus coordinates and the coordinates of the calibration points;
wherein, the comparison unit is specifically configured to:
and obtaining the presentation condition of the action content of the eyes of the user in the HUD according to the mapping relation and the eye movement track, and comparing the presentation condition with the preset action content of the dynamic image to obtain the action similarity.
Preferably, the system further comprises:
the difficulty setting unit is used for determining game difficulty according to a second user instruction input by a user;
wherein, the second control unit is specifically configured to:
acquiring a plurality of dynamic images corresponding to the game difficulty according to the game difficulty, and controlling the HUD to sequentially display the plurality of dynamic images according to a preset display mode;
or, alternatively,
acquiring a plurality of dynamic images, and controlling the HUD to sequentially display the plurality of dynamic images according to a preset display mode corresponding to the game difficulty.
The method and the system have at least the following beneficial effects:
the vehicle-mounted interactive game is realized based on the vehicle-mounted HUD (head-up display, also called as a usual display system) and the vehicle-mounted camera system, is different from a game mode of a mobile phone or a game machine, and can be played only by a user according to the motion requirement of the dynamic image indication, so that the entertainment experience different from the traditional game can be obtained, and meanwhile, the eye fatigue can be relieved and eyes can be protected.
Additional features and advantages of the invention will be set forth in the description which follows.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the invention, and that a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of a vehicle-mounted interactive game method based on HUD in an embodiment of the present invention.
Fig. 2 is a schematic diagram of a HUD display interface and calibration points according to an embodiment of the present invention.
Fig. 3 is a schematic diagram showing how the action content of the user's eyes is presented on the HUD display interface according to the mapping relationship and the eye movement track, in an embodiment of the present invention.
Fig. 4 is a frame diagram of a HUD-based vehicle-mounted interactive game system according to an embodiment of the present invention.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In addition, numerous specific details are set forth in the following examples in order to provide a better illustration of the invention. It will be understood by those skilled in the art that the present invention may be practiced without some of these specific details. In some instances, well known means have not been described in detail in order to not obscure the present invention.
An embodiment of the present invention provides a vehicle-mounted interactive game method based on HUD, referring to fig. 1, the method of the present embodiment includes the following steps:
step S10, controlling the HUD to display a game preparation picture;
specifically, in one example, the user may actively enter the game, start the game according to the user instruction M input by the user, and after starting the game, control the HUD to display a game preparation screen; the input mode of the user command M may be a physical key input mode, a virtual key input mode, a voice control input mode, etc., which is not limited to a certain mode in the embodiment; when receiving a user instruction M input by a user, judging to enter a game, and controlling the HUD to display a game preparation picture; the head-up display is abbreviated as HUD, and is called as head-up display system, which is a multifunctional instrument panel with a vehicle driver as a center and with blind operation, and is generally used for projecting important driving information such as speed per hour, navigation and the like onto windshield glass in front of the driver, so that the driver can see the important driving information such as speed per hour, navigation and the like without lowering the head or turning the head as much as possible; in the embodiment, the application of the HUD is further expanded, and the HUD is combined with the vehicle-mounted camera system, so that the game method of the embodiment is provided;
the user in the present embodiment refers to a driver or an occupant of a co-driver;
step S20, starting a game according to a first user instruction input by a user, and controlling the HUD to sequentially display a plurality of dynamic images according to a preset display mode; each dynamic image comprises preset action content for indicating the eyes of a user to act according to the action requirement;
specifically, the first user command may be the same as the input mode of the first user command, that is, for example, a physical key input mode, a virtual key input mode, a voice control input mode, and the like; further, in this embodiment, an instruction input manner is further provided, that is, after the eyes of the user are adjusted to a proper position, blink or other eye-like actions are performed, the vehicle-mounted camera system collects the images of the eyes of the user and performs image recognition, when the image recognition result is that the user inputs the first user instruction, the game is confirmed to start, and a plurality of dynamic images are extracted from a preset action library; the preset display mode can be random sequential display of images, or other set fixed sequences, for example, the images are displayed in sequence according to image numbers;
preferably, the preset action content is, for example, arrow 3s repeatedly moving upwards, arrow 3s repeatedly moving downwards, 3 weeks clockwise around a central point, 3 weeks anticlockwise around the central point, etc.;
preferably, the display mode of the dynamic images may be that the display time of each dynamic image is a fixed value, and when the display time reaches a preset fixed value, for example, 5 seconds, the display of the next dynamic image is automatically entered;
step S30, identifying a plurality of pieces of motion information of the user's eyes according to the preset motion content of the plurality of dynamic images;
specifically, when a game is played, the eyes of a user need to follow the preset action content of a dynamic image displayed by the HUD to act, in the step, the action of the eyes of the user is specifically identified through a vehicle-mounted camera system, the vehicle-mounted camera system has the functions of shooting the images of the user and identifying the eyes of the user in the shot images, for example, a fatigue driving early warning system is adopted as the vehicle-mounted camera system, at present, a plurality of automobiles provided with the fatigue driving early warning function are provided with the fatigue driving early warning system (Driver Fatigue Monitor System), the fatigue driving early warning system is generally formed by an ECU and a camera based on physiological image response of the driver, the fatigue state of the driver is inferred by utilizing the facial features, eye signals, head motility and the like of the driver, and devices for alarming, prompting and taking corresponding measures are adopted to give active intelligent safety guarantee to the driver; for the method of the embodiment, the step S30 in the embodiment may be implemented by means of the existing fatigue driving early warning system for identifying a part of the eye signals of the driver, so that the user eye motion identification part is not repeated here;
based on the above, it can be known that the game method of the embodiment can be widely applied to automobiles provided with HUDs and fatigue driving early warning systems, new hardware is not needed to be added, only software is needed to be developed, new hardware cost is not caused, and the game method can be regarded as function supplement of the original fatigue driving early warning system, and effects of relieving eye fatigue and protecting eyes are achieved;
step S40, comparing the action information with the preset action content of the dynamic images in a one-to-one correspondence manner, and obtaining a game result according to the comparison result.
As can be seen from the description of the above embodiments, the vehicle-mounted interactive game method of the present embodiment is implemented based on a vehicle-mounted HUD (head-up display, also called as a usual display system) and a vehicle-mounted camera system, which is different from a game mode of a mobile phone or a game console, and a user can play a game only by performing eye movements according to motion requirements indicated by dynamic images, so that entertainment experience different from that of a conventional game can be obtained, and eye fatigue can be relieved and eyes can be protected.
In some more specific embodiments, the action information includes at least action content. The preset action content of a dynamic image is, for example, an arrow moving upwards repeatedly for 3 s, and the user's eyes are expected to move as that arrow indicates; the action content actually performed by the user's eyes, obtained through the vehicle-mounted camera system, may be consistent with the indication (i.e. the eyes do move upwards repeatedly for 3 s) or may deviate from it;
wherein, the step S40 includes:
comparing the action content of each action information with the preset action content of the corresponding dynamic image to obtain action similarity;
the comparison result at least comprises action similarity of each piece of action information.
In some more specific embodiments, the motion content of each motion information includes a motion trajectory of a pupil center of an eye of the user.
Specifically, referring to fig. 3 and taking a horizontally rightward arrow movement as an example: the above-mentioned vehicle-mounted camera system captures an eye video, from which the movement track P of the pupil center of the user's eye is extracted; scaled up to the display interface of the vehicle-mounted HUD, track P becomes curve P'. The preset action content of the dynamic image displayed by the HUD is an arrow pattern moving horizontally to the right, whose center curve is Q. The starting points of curve P' and curve Q are made to coincide, coordinate point sequences B and A are sampled uniformly from curve P' and curve Q respectively, LCSS(B, A) is computed with the longest common subsequence (LCSS) algorithm, and the similarity is set to C = LCSS(B, A).
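A minimal sketch of this similarity computation, assuming the two curves have already been sampled uniformly into coordinate point sequences B (from P') and A (from Q). The matching tolerance eps and the start-point alignment step are assumptions added for illustration; the text above only specifies C = LCSS(B, A).

```python
def lcss(seq_b, seq_a, eps=0.05):
    """Longest common subsequence length of two 2-D point sequences.

    Two points are treated as matching when both coordinate differences are
    within eps (an assumed tolerance, in HUD display-plane units).
    """
    n, m = len(seq_b), len(seq_a)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            (xb, yb), (xa, ya) = seq_b[i - 1], seq_a[j - 1]
            if abs(xb - xa) <= eps and abs(yb - ya) <= eps:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

def action_similarity(curve_p_prime, curve_q, eps=0.05):
    """Similarity C = LCSS(B, A) after making the two start points coincide."""
    dx = curve_q[0][0] - curve_p_prime[0][0]
    dy = curve_q[0][1] - curve_p_prime[0][1]
    seq_b = [(x + dx, y + dy) for x, y in curve_p_prime]  # shifted samples of P'
    seq_a = list(curve_q)                                 # samples of Q
    return lcss(seq_b, seq_a, eps)
```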
In some more specific embodiments, the method further comprises:
s50, controlling the HUD to display the calibration points, acquiring eye focus coordinates when the user eyes watch the calibration points, and establishing a mapping relation between the eye focus coordinates and the coordinates of the calibration points;
specifically, referring to fig. 2, the display of calibration points may sequentially occur at the center and four corners of the display interface of the HUD, where the user's eyes are required to look at the calibration points for a period of time, for example, within a period of seconds acceptable to the user, after the user image is collected by the vehicle-mounted camera system disposed in the vehicle, the user's eye focus coordinates are identified by the image processing technology, and the user's eye focus coordinates and the current calibration point coordinates corresponding to the HUD are mapped, so that the mapping relationship is established, and calibration of the system to the eye position is completed after the 5 calibration points of the center and four corners are completed;
the step S40 is to compare the motion content of each motion information with the preset motion content of the corresponding dynamic image to obtain the motion similarity, and specifically includes:
and obtaining the presentation condition of the action content of the eyes of the user in the HUD according to the mapping relation and the eye movement track, and comparing the presentation condition with the preset action content of the dynamic image to obtain the action similarity.
It should be noted that the mapping relationship is used to determine how the action content actually performed by the user's eyes is presented on the HUD display interface: in fig. 3, for example, the movement track P becomes curve P' after being scaled up to the display interface of the vehicle-mounted HUD. Specifically, referring again to fig. 3 and the horizontally rightward arrow movement: the vehicle-mounted camera system captures an eye video, the movement track P of the pupil center of the user's eye is extracted and becomes curve P' on the HUD display interface, while the preset action content of the dynamic image displayed by the HUD is an arrow pattern moving horizontally to the right with center curve Q. The starting points of curve P' and curve Q are made to coincide, coordinate point sequences B and A are sampled uniformly from curve P' and curve Q respectively, LCSS(B, A) is computed with the longest common subsequence (LCSS) algorithm, and the similarity is set to C = LCSS(B, A).
In some more specific embodiments, the action information further includes an action time T1, where T1 is determined by the time at which the vehicle-mounted camera system captures the start of the user's eye action;
wherein, the step S40 includes:
comparing the action time of each action information with the time of the corresponding dynamic image to start displaying to obtain the action following degree;
specifically, the action time T1 of each piece of action information is compared with the time T2 at which the preset action content of the corresponding dynamic image (for example, a rightward arrow) starts to be displayed, to obtain the time difference between the two; that is, the action following degree may be defined as D = (T1 - T2), which represents how quickly the user's eye action follows the action indicated by the HUD dynamic image;
the comparison result comprises a motion following degree D and a motion similarity C of each piece of motion information.
In some more specific embodiments, the step S40 further includes:
scoring the user's game completion according to the comparison result, and adding the score to the user's account points; the account points can be used to redeem prizes or exchange for preferential services, so as to increase users' enthusiasm for participating in the game.
Specifically, the scoring in this step includes scoring the action following degree D and the action similarity C of each piece of action information separately; assuming five pieces of action information, there are five corresponding action following degrees D = [D1, D2, D3, D4, D5] and action similarities C = [C1, C2, C3, C4, C5].
Specifically, the score for the action following degree may be set as S1 = a/D, where a is a preset weight value; the smaller the following degree, the higher the score, and for several action following degrees the average of their scores may be taken as the final result. The score for the action similarity may be set as S2 = b·C, where b is a preset weight value; the larger the similarity, the higher the score, and for several action similarities the average of their scores may be taken as the final result.
The values of a and b are chosen so that S1 and S2 are of the same order of magnitude and S1 is not more than 1/3 of S.
The final score for the user's game completion according to the comparison result is S = S1 + S2.
Points are credited only when the score exceeds a certain threshold; the higher the score, the more points are awarded, and upper limits may be set on the points obtainable in a single game and within a given period of time.
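A compact sketch of this scoring rule: S1 averages a/D over the actions, S2 averages b·C, and S = S1 + S2. The concrete weight values below are placeholders chosen only so that S1 and S2 fall in comparable ranges with S1 below one third of S; the actual preset weights are not given in the specification.

```python
def game_score(follow_degrees, similarities, a=3.0, b=0.5):
    """Score one round as S = S1 + S2 (a, b are placeholder weights)."""
    # Smaller following delay -> higher score; average over the actions.
    s1 = sum(a / max(d, 1e-6) for d in follow_degrees) / len(follow_degrees)
    # Larger similarity -> higher score; average over the actions.
    s2 = sum(b * c for c in similarities) / len(similarities)
    return s1 + s2

# Five actions: following delays D (seconds) and LCSS similarities C.
D = [0.8, 1.2, 0.9, 1.5, 1.0]
C = [18, 22, 15, 20, 19]
print(game_score(D, C))
```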
In some more specific embodiments, the method further comprises:
step S60, determining game difficulty according to a second user instruction input by a user;
for example, the game difficulty is set to, for example, primary, medium, high, and the like, and is not particularly limited in this embodiment. Wherein, the game difficulty can be determined by the action difficulty of the dynamic image and/or the update frequency of the dynamic image;
wherein, the step S20 includes:
acquiring a plurality of dynamic images corresponding to the game difficulty according to the game difficulty, and controlling the HUD to sequentially display the plurality of dynamic images according to a preset display mode; specifically, the higher the game difficulty, the higher the action difficulty, so that, for example, "three anticlockwise turns around the central point" is harder than "an arrow moving downwards repeatedly for 3 s";
or, alternatively,
acquiring a plurality of dynamic images, and controlling the HUD to sequentially display the plurality of dynamic images according to a preset display mode corresponding to the game difficulty; specifically, the higher the game difficulty, the higher the update frequency of the dynamic images, i.e. the images change faster and the user's eyes find it harder to follow them.
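As a sketch of how a difficulty setting could drive either of the two options above, the snippet below uses hypothetical presets that both raise the minimum action level and shorten the per-image display time; the preset names, the level field and the concrete numbers are illustrative assumptions, not values from the specification.

```python
# Hypothetical difficulty presets covering both options described above:
# harder action content (higher minimum action level) and a faster update
# frequency (shorter display time per dynamic image).
DIFFICULTY_PRESETS = {
    "easy":   {"min_action_level": 1, "display_time_s": 6},
    "medium": {"min_action_level": 2, "display_time_s": 4},
    "hard":   {"min_action_level": 3, "display_time_s": 3},
}

def images_for_difficulty(library, difficulty):
    """Filter the action library by an assumed 'level' field and stamp the display time."""
    preset = DIFFICULTY_PRESETS[difficulty]
    chosen = [dict(img) for img in library if img.get("level", 1) >= preset["min_action_level"]]
    for img in chosen:
        img["duration_s"] = preset["display_time_s"]
    return chosen
```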
Furthermore, the duration of the game should be set within a range that exercises the eyes without tiring them.
In correspondence to the method described in the above embodiment, another embodiment of the present invention proposes a HUD-based vehicle-mounted interactive game system, which includes a plurality of functional units, where the plurality of functional units may be used to execute the corresponding steps of the method described in the above embodiment; referring to fig. 4, the system of the present embodiment includes:
a first control unit 1 for controlling the HUD to display a game preparation screen;
a second control unit 2, configured to start a game according to a first user instruction input by a user, and control the HUD to sequentially display a plurality of dynamic images according to a preset display mode; each dynamic image comprises preset action content for indicating the eyes of a user to act according to the action requirement;
an eye recognition unit 3 for recognizing a plurality of pieces of motion information describing how the user's eyes move according to the preset motion content of the plurality of dynamic images;
and the comparison unit 4 is used for carrying out one-to-one correspondence comparison on the plurality of action information and the preset action content of the plurality of dynamic images, and obtaining a game result according to the comparison result.
In some more specific embodiments, the action information includes at least action content;
wherein, the comparing unit 4 is specifically configured to:
comparing the action content of each action information with the preset action content of the corresponding dynamic image to obtain action similarity;
the comparison result at least comprises action similarity of each piece of action information.
In some more specific embodiments, the motion content of each motion information includes a motion trajectory of a pupil center of an eye of the user.
In some more specific embodiments, the system further comprises:
a third control unit 5, configured to control the HUD to display a calibration point, obtain an eye focus coordinate when the user's eye gazes at the calibration point, and establish a mapping relationship between the eye focus coordinate and a coordinate of the calibration point;
wherein, the comparing unit 4 is specifically configured to:
and obtaining the presentation condition of the action content of the eyes of the user in the HUD according to the mapping relation and the eye movement track, and comparing the presentation condition with the preset action content of the dynamic image to obtain the action similarity.
In some more specific embodiments, the action information further comprises an action time;
wherein, the comparing unit 4 is specifically configured to:
comparing the action time of each action information with the time of the corresponding dynamic image to start displaying to obtain the action following degree;
the comparison result comprises the action following degree and the action similarity of each piece of action information.
In some more specific embodiments, the comparing unit 4 is specifically configured to:
scoring the game completion condition of the user according to the comparison result, and accumulating the scoring to the user account score; wherein the user account points are used to redeem prizes or to exchange for preferential services.
In some more specific embodiments, the system further comprises:
a difficulty setting unit 6 for determining game difficulty according to a second user instruction input by the user;
wherein, the second control unit 2 is specifically configured to:
acquiring a plurality of dynamic images corresponding to the game difficulty according to the game difficulty, and controlling the HUD to sequentially display the plurality of dynamic images according to a preset display mode;
or, alternatively,
acquiring a plurality of dynamic images, and controlling the HUD to sequentially display the plurality of dynamic images according to a preset display mode corresponding to the game difficulty.
The division of the system of the above embodiment into units is merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
It should be noted that the system of the foregoing embodiment corresponds to the method of the foregoing embodiment; any part of the system not described in detail can therefore be understood by referring to the method embodiment, i.e. the specific steps described in the method embodiment can be understood as functions implemented by the system embodiment, and they are not repeated here.
Moreover, if implemented in the form of software functional units and sold or used as an independent product, the system of the above embodiment may be stored in a computer-readable storage medium. Accordingly, as another embodiment, the present invention also proposes a computer-readable storage medium on which a computer program is stored, the program implementing the steps of the HUD-based vehicle-mounted interactive game method of the above embodiment when executed by a processor.
The foregoing description of embodiments of the invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvements in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (6)

1. A HUD-based vehicle-mounted interactive game method, the method comprising:
controlling the HUD to display a game preparation picture;
starting a game according to a first user instruction input by a user, and controlling the HUD to sequentially display a plurality of dynamic images according to a preset display mode; each dynamic image comprises preset action content for indicating the eyes of a user to act according to the action requirement;
identifying a plurality of pieces of motion information of the user's eyes according to the preset motion content of the plurality of dynamic images; the action information comprises action content and action time, wherein the action content comprises a motion track of the pupil center of the eyes of the user;
the action information and the preset action content of the dynamic images are compared in a one-to-one correspondence manner, the game completion condition of the user is scored according to the comparison result, and the scoring is accumulated to the user account score; wherein the user account points are used for exchanging prizes or exchanging preferential service;
the one-to-one correspondence comparison between the plurality of motion information and the preset motion content of the plurality of dynamic images comprises the following steps: comparing the action content of each action information with the preset action content of the corresponding dynamic image to obtain action similarity, and comparing the action time of each action information with the display starting time of the corresponding dynamic image to obtain action following degree; the comparison result comprises the action similarity and the action following degree of each piece of action information.
2. The HUD-based on-board interactive game method according to claim 1, wherein said method further comprises:
controlling the HUD to display the calibration point, acquiring the eye focus coordinate when the user's eyes watch the calibration point, and establishing a mapping relation between the eye focus coordinate and the coordinate of the calibration point;
the step of comparing the motion content of each motion information with the preset motion content of the corresponding dynamic image to obtain the motion similarity comprises the following steps:
and obtaining the presentation condition of the action content of the eyes of the user in the HUD according to the mapping relation and the eye movement track, and comparing the presentation condition with the preset action content of the dynamic image to obtain the action similarity.
3. The HUD-based on-board interactive game method according to any one of claims 1-2, wherein said method further comprises:
determining game difficulty according to a second user instruction input by a user;
the game is started according to a first user instruction input by a user, the HUD is controlled to sequentially display a plurality of dynamic images according to a preset display mode, and the method comprises the following steps:
acquiring a plurality of dynamic images corresponding to the game difficulty according to the game difficulty, and controlling the HUD to sequentially display the plurality of dynamic images according to a preset display mode;
or, alternatively,
acquiring a plurality of dynamic images, and controlling the HUD to sequentially display the plurality of dynamic images according to a preset display mode corresponding to the game difficulty.
4. A HUD-based in-vehicle interactive game system, the system comprising:
a first control unit for controlling the HUD to display a game preparation screen;
the second control unit is used for starting the game according to a first user instruction input by a user and controlling the HUD to sequentially display a plurality of dynamic images according to a preset display mode; each dynamic image comprises preset action content for indicating the eyes of a user to act according to the action requirement;
an eye recognition unit for recognizing a plurality of pieces of motion information describing how the user's eyes move according to the preset motion content of the plurality of dynamic images; the action information comprises action content and action time, wherein the action content comprises a motion track of the pupil center of the eyes of the user;
the comparison unit is used for carrying out one-to-one correspondence comparison on the plurality of action information and preset action contents of the plurality of dynamic images, scoring the game completion condition of the user according to the comparison result, and accumulating the scoring to the user account score; wherein the user account points are used for exchanging prizes or exchanging preferential service; the one-to-one correspondence comparison between the plurality of motion information and the preset motion content of the plurality of dynamic images comprises the following steps: comparing the action content of each action information with the preset action content of the corresponding dynamic image to obtain action similarity, and comparing the action time of each action information with the display starting time of the corresponding dynamic image to obtain action following degree; the comparison result comprises the action similarity and the action following degree of each piece of action information.
5. The HUD-based on-board interactive game system according to claim 4, wherein said system further comprises:
the third control unit is used for controlling the HUD to display the calibration points, acquiring the eye focus coordinates of the user when the eyes watch the calibration points, and establishing a mapping relation between the eye focus coordinates and the coordinates of the calibration points;
wherein, the comparison unit is specifically configured to:
and obtaining the presentation condition of the action content of the eyes of the user in the HUD according to the mapping relation and the eye movement track, and comparing the presentation condition with the preset action content of the dynamic image to obtain the action similarity.
6. The HUD-based on-board interactive game system according to claim 4 or 5, wherein said system further comprises:
the difficulty setting unit is used for determining game difficulty according to a second user instruction input by a user;
wherein, the second control unit is specifically configured to:
acquiring a plurality of dynamic images corresponding to the game difficulty according to the game difficulty, and controlling the HUD to sequentially display the plurality of dynamic images according to a preset display mode;
or, alternatively,
acquiring a plurality of dynamic images, and controlling the HUD to sequentially display the plurality of dynamic images according to a preset display mode corresponding to the game difficulty.
CN202210670676.6A 2022-06-15 2022-06-15 Vehicle-mounted interactive game method and system based on HUD Active CN114939272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210670676.6A CN114939272B (en) 2022-06-15 2022-06-15 Vehicle-mounted interactive game method and system based on HUD

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210670676.6A CN114939272B (en) 2022-06-15 2022-06-15 Vehicle-mounted interactive game method and system based on HUD

Publications (2)

Publication Number Publication Date
CN114939272A CN114939272A (en) 2022-08-26
CN114939272B true CN114939272B (en) 2023-08-04

Family

ID=82908383

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210670676.6A Active CN114939272B (en) 2022-06-15 2022-06-15 Vehicle-mounted interactive game method and system based on HUD

Country Status (1)

Country Link
CN (1) CN114939272B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003265858A (en) * 2003-03-24 2003-09-24 Namco Ltd 3-d simulator apparatus and image-synthesizing method
CN104750232A (en) * 2013-12-28 2015-07-01 华为技术有限公司 Eye tracking method and eye tracking device
CN106095111A (en) * 2016-06-24 2016-11-09 北京奇思信息技术有限公司 The method that virtual reality is mutual is controlled according to user's eye motion
KR20170029166A (en) * 2015-09-07 2017-03-15 삼성전자주식회사 Method and apparatus for eye tracking
CN106569590A (en) * 2015-10-10 2017-04-19 华为技术有限公司 Object selection method and device
CN107115669A (en) * 2017-05-26 2017-09-01 合肥充盈信息科技有限公司 A kind of eyeshield games system and its implementation
CN108310759A (en) * 2018-02-11 2018-07-24 广东欧珀移动通信有限公司 Information processing method and related product
CN108681403A (en) * 2018-05-18 2018-10-19 吉林大学 A kind of trolley control method using eye tracking
CN113421346A (en) * 2021-06-30 2021-09-21 暨南大学 Design method of AR-HUD head-up display interface for enhancing driving feeling
CN114004867A (en) * 2021-11-01 2022-02-01 上海交通大学 Method and terminal for measuring, calculating and predicting eye movement consistency among dynamic observers
CN114125376A (en) * 2020-09-01 2022-03-01 通用汽车环球科技运作有限责任公司 Environment interaction system for providing augmented reality for in-vehicle infotainment and entertainment
CN114210045A (en) * 2021-12-14 2022-03-22 深圳创维-Rgb电子有限公司 Intelligent eye protection method and device and computer readable storage medium
CN114299569A (en) * 2021-12-16 2022-04-08 武汉大学 Safe face authentication method based on eyeball motion

Also Published As

Publication number Publication date
CN114939272A (en) 2022-08-26

Similar Documents

Publication Publication Date Title
US20180286268A1 (en) Virtual reality driver training and assessment system
US8520901B2 (en) Image generation system, image generation method, and information storage medium
US9043042B2 (en) Method to map gaze position to information display in vehicle
CN109847337A (en) Game control method and device, storage medium
CN107844194A (en) Training Methodology, device and computer-readable recording medium based on VR technologies
JP6097377B1 (en) Image display method and program
US20230218999A1 (en) Information control method and apparatus in game, and electronic device
KR100648539B1 (en) Game machine
CN110155072B (en) Carsickness prevention method and carsickness prevention device
JP7322971B2 (en) vehicle driving system
CN111045587B (en) Game control method, electronic device, and computer-readable storage medium
CN114939272B (en) Vehicle-mounted interactive game method and system based on HUD
JP5136948B2 (en) Vehicle control device
JP6057738B2 (en) GAME PROGRAM, GAME DEVICE, GAME SYSTEM, AND GAME PROCESSING METHOD
WO2023179006A1 (en) Information processing method and apparatus, electronic device, and storage medium
CN114434466B (en) Automobile intelligent cockpit performance evaluation simulation robot
CN114967128A (en) Sight tracking system and method applied to VR glasses
JPH11347249A (en) Image generating device and information memorizing medium
CN110548292A (en) Multi-identity tracking capability training method and device
JP3179739B2 (en) Driving game machine and recording medium storing driving game program
US20230331162A1 (en) Display controller
CN112947761B (en) Virtual image position adjustment method, device and storage medium of AR-HUD system
US11908097B2 (en) Information processing system, program, and information processing method
CN116139478A (en) Vehicle-mounted game system integrating neck health care, operation method and electronic equipment
TWI683261B (en) Computer cockpit and authentication method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant