CN114939272A - Vehicle-mounted interactive game method and system based on HUD - Google Patents
Vehicle-mounted interactive game method and system based on HUD
- Publication number
- CN114939272A (application CN202210670676.6A)
- Authority
- CN
- China
- Prior art keywords
- action
- hud
- user
- game
- preset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/25—Output arrangements for video game devices
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/30—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
- A63F2300/303—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device for displaying additional data, e.g. simulating a Head Up Display
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention relates to a HUD-based vehicle-mounted interactive game method and system, comprising the following steps: controlling the HUD to display a game preparation screen; starting a game according to a first user instruction input by a user, and controlling the HUD to sequentially display a plurality of dynamic images according to a preset display mode, each dynamic image comprising preset action content that instructs the user's eyes to move according to an action requirement; identifying a plurality of pieces of action information of the user's eyes moving according to the preset action contents of the plurality of dynamic images; and comparing the pieces of action information with the preset action contents of the dynamic images in one-to-one correspondence, and obtaining a game result according to the comparison result. Different from playing games on a mobile phone or a game console, the invention provides an entertainment experience while helping to relieve eye fatigue and protect the eyes.
Description
Technical Field
The invention relates to the technical field of vehicle-mounted interactive games, in particular to a vehicle-mounted interactive game method and system based on HUD.
Background
With the continuous development of the automobile industry, an automobile is no longer merely a means of transportation; it can also provide entertainment for the driver, for example through vehicle-mounted games. Current vehicle-mounted games are mainly played by touching the head-unit screen or by connecting an external gamepad, which is not significantly different from playing games on a mobile phone or a game console. Meanwhile, a driver is prone to eyestrain after driving continuously for a long time, and existing automobiles offer no scheme for relieving the driver's eyestrain and protecting the eyes.
Disclosure of Invention
The invention aims to provide a HUD-based vehicle-mounted interactive game method and system which differ from the game modes of a mobile phone or a game console, can provide an entertainment experience distinct from that of a traditional game, and help to relieve eye fatigue and protect the eyes.
In order to achieve the above object, a first embodiment of the present invention provides a HUD-based vehicle-mounted interactive game method, which includes:
controlling the HUD to display a game preparation screen;
starting a game according to a first user instruction input by a user, and controlling the HUD to sequentially display a plurality of dynamic images according to a preset display mode; each dynamic image comprises preset action content for indicating the eyes of a user to act according to action requirements;
identifying a plurality of action information of the user's eyes acting according to the preset action contents of the plurality of dynamic images;
and comparing the action information with preset action contents of the dynamic images in a one-to-one correspondence manner, and obtaining a game result according to a comparison result.
Preferably, the action information includes at least action content;
wherein, the comparing the plurality of action information with preset action contents of the plurality of dynamic images in a one-to-one correspondence manner includes:
comparing the action content of each action information with the preset action content of the corresponding dynamic image to obtain action similarity;
wherein the comparison result at least comprises the action similarity of each action information.
Preferably, the action content of each action message includes a movement track of the pupil center of the eye of the user.
Preferably, the method further comprises:
controlling the HUD to display the calibration point, acquiring an eye focus coordinate when the eye of a user gazes at the calibration point, and establishing a mapping relation between the eye focus coordinate and the coordinate of the calibration point;
wherein, the comparing the action content of each action information with the preset action content of the corresponding dynamic image to obtain the action similarity includes:
and obtaining the presenting condition of the action content of the eyes of the user in the HUD according to the mapping relation and the eye movement track, and comparing the presenting condition with the preset action content of the dynamic image to obtain action similarity.
Preferably, the action information further includes an action time;
wherein, the comparing the plurality of action information with preset action contents of the plurality of dynamic images in a one-to-one correspondence manner includes:
comparing the action time of each action information with the corresponding time for starting to display the dynamic image to obtain action following degree;
wherein the comparison result comprises the action following degree and the action similarity of each action information.
Preferably, the deriving the game result according to the comparison result includes:
scoring the game completion condition of the user according to the comparison result, and accumulating the score to the user account point; wherein the user account credits are used for redeeming prizes or for redeeming a coupon service.
Preferably, the method further comprises:
determining the game difficulty according to a second user instruction input by the user;
wherein, the starting game according to the first user instruction input by the user, and controlling the HUD to display a plurality of dynamic images in sequence according to a preset display mode, comprises:
acquiring a plurality of dynamic images corresponding to the game difficulty according to the game difficulty, and controlling the HUD to display the plurality of dynamic images in sequence according to a preset display mode;
or,
and acquiring a plurality of dynamic images, and controlling the HUD to display the plurality of dynamic images in sequence according to a preset display mode corresponding to the game difficulty.
Based on the same inventive concept and corresponding to the above method, a second embodiment of the present invention provides a HUD-based vehicle-mounted interactive game system, the system comprising:
a first control unit for controlling the HUD to display a game preparation screen;
the second control unit is used for starting a game according to a first user instruction input by a user and controlling the HUD to sequentially display a plurality of dynamic images according to a preset display mode; each dynamic image comprises preset action content for indicating the eyes of a user to act according to action requirements;
the eye identification unit is used for identifying a plurality of pieces of action information of the user eyes acting according to the preset action contents of the plurality of dynamic images;
and the comparison unit is used for carrying out one-to-one corresponding comparison on the action information and the preset action contents of the dynamic images and obtaining a game result according to a comparison result.
Preferably, the system further comprises:
the third control unit is used for controlling the HUD to display the calibration point, acquiring the eye focus coordinate when the eyes of the user watch the calibration point, and establishing a mapping relation between the eye focus coordinate and the coordinate of the calibration point;
wherein, the comparing unit is specifically configured to:
and obtaining the presenting condition of the action content of the eyes of the user in the HUD according to the mapping relation and the eye movement track, and comparing the presenting condition with the preset action content of the dynamic image to obtain action similarity.
Preferably, the system further comprises:
the difficulty setting unit is used for determining the game difficulty according to a second user instruction input by the user;
the second control unit is specifically configured to:
acquiring a plurality of dynamic images corresponding to the game difficulty according to the game difficulty, and controlling the HUD to display the plurality of dynamic images in sequence according to a preset display mode;
or,
and acquiring a plurality of dynamic images, and controlling the HUD to sequentially display the plurality of dynamic images according to a preset display mode corresponding to the game difficulty.
The method and the system of the invention have at least the following beneficial effects:
the vehicle-mounted interactive game is realized based on a vehicle-mounted HUD (head-up display, also called a normal display system) and a vehicle-mounted camera system, is different from the game mode of a mobile phone or a game machine, and can be played only by a user needing to move eyes according to the action requirement indicated by a dynamic image, so that the entertainment experience different from the traditional game can be obtained, the eye fatigue can be relieved, and the eyes can be protected.
Additional features and advantages of the invention will be set forth in the description which follows.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a HUD-based vehicle-mounted interactive game method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a HUD display interface and calibration points in an embodiment of the invention.
Fig. 3 is a schematic view of a presentation situation of action content of the user's eyes on the HUD display interface according to the mapping relationship and the eye movement trajectory in the embodiment of the present invention.
FIG. 4 is a block diagram of an embodiment of a HUD-based in-vehicle interactive game system.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In addition, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present invention. It will be understood by those skilled in the art that the present invention may be practiced without some of these specific details. In some instances, well known means have not been described in detail so as not to obscure the present invention.
An embodiment of the present invention provides a HUD-based vehicle-mounted interactive game method, and referring to fig. 1, the method of this embodiment includes the following steps:
step S10, controlling the HUD to display a game preparation screen;
specifically, in one example, the user may actively enter the game, start the game according to the user instruction M input by the user, and control the HUD to display the game preparation screen after the game is started; the input mode of the user instruction M may be a physical key input mode, a virtual key input mode, a voice control input mode, and the like, and is not limited to a certain mode in this embodiment; when a user instruction M input by a user is received, judging to enter a game, and controlling the HUD to display a game preparation picture; the head-up display is called HUD for short, and is also called head-up display system, which refers to a multifunctional instrument panel which is operated by a vehicle driver in a blind mode and is centered, and generally has the function of projecting important driving information such as speed per hour, navigation and the like onto a windshield in front of the driver, so that the driver can see the important driving information such as speed per hour, navigation and the like without lowering head or turning head as much as possible; in the embodiment, the application use of the HUD is further expanded, and the HUD is combined with a vehicle-mounted camera system to provide the game method of the embodiment;
wherein, the user in the present embodiment refers to a driver or a passenger in a co-driver;
step S20, starting a game according to a first user instruction input by a user, and controlling the HUD to sequentially display a plurality of dynamic images according to a preset display mode; each dynamic image comprises preset action content for indicating the eyes of a user to act according to action requirements;
specifically, the first user instruction may be input in the same manner as the first user instruction, that is, in a physical key input manner, a virtual key input manner, a voice control input manner, or the like; further, the embodiment further provides an instruction input method, that is, after the eyes of the user are adjusted to a proper position, blinking or other similar eye movements are performed, the vehicle-mounted camera system collects the eye images of the user and performs image recognition, when the image recognition result is that the user inputs the first user instruction, the game is confirmed to start, and a plurality of dynamic images are extracted from a preset action library; the preset display mode may be that images are displayed in sequence at random, or may be other set fixed sequence, for example, the images are displayed in sequence according to the image numbers;
preferably, the preset action content is, for example, an arrow 3s repeating upward movement, for example, a downward arrow 3s repeating downward movement, for example, clockwise rotation around a central point for 3 cycles, for example, counterclockwise rotation around a central point for 3 cycles, or the like;
preferably, the display mode of the dynamic images may be that the display time of each dynamic image is a fixed value, and when the display time reaches a preset fixed value, for example, 5 seconds, the display of the next dynamic image is automatically started;
step S30, identifying a plurality of action information of the user' S eyes acting according to the preset action contents of the plurality of dynamic images;
specifically, when a game is played, the eyes of a user need to act along with preset action content of a dynamic image displayed by an HUD, the action of the eyes of the user is specifically identified through a vehicle-mounted camera System, the vehicle-mounted camera System has functions of shooting images of the user and identifying the eyes of the user in the shot images, the vehicle-mounted camera System is a Fatigue driving early warning System, at present, a plurality of automobiles with the Fatigue driving early warning function are provided with the Fatigue driving early warning System (Driver Fatigue Monitor System), generally based on physiological image reaction of the Driver, and composed of an ECU and a camera, the Fatigue state of the Driver is inferred by utilizing facial features, eye signals, head motility and the like of the Driver, and devices for alarming, prompting and taking corresponding measures are adopted to provide active intelligent safety guarantee for the Driver; for the method of this embodiment, step S30 in this embodiment can be implemented by using a partial function of recognizing eye signals of a driver by using an existing fatigue driving warning system, so details of the recognition part of the eye actions of the user are not described herein;
based on the above contents, the game method of the embodiment can be widely applied to the automobile equipped with the HUD and the fatigue driving early warning system, without adding new hardware, only software development is needed, without causing new hardware cost, and the game method can also be regarded as a function supplement of the original fatigue driving early warning system, and has the effects of relieving eye fatigue and protecting eyes;
and step S40, comparing the action information with the preset action contents of the dynamic images in a one-to-one correspondence manner, and obtaining a game result according to the comparison result.
It can be seen from the above description that the vehicle-mounted interactive game method of this embodiment is implemented based on a vehicle-mounted HUD (head-up display system) and a vehicle-mounted camera system. Different from playing games on a mobile phone or a game console, the user only needs to move the eyes according to the action requirements indicated by the dynamic images to play, so that an entertainment experience different from that of a traditional game is obtained while eye fatigue is relieved and the eyes are protected.
In some more specific embodiments, the action information includes at least action content; the action content refers to the eye action the user actually performs following the preset action content of the dynamic image. For example, if the instruction is an arrow repeatedly moving upward for 3 s, the action content of the user's eyes is the eye movement performed following that instruction, and the result may be consistent with the instruction (repeatedly moving upward for 3 s) or deviate from it (for example, repeatedly moving upward for only 2 s); the action content is obtained through the vehicle-mounted camera system;
wherein, the step S40 includes:
comparing the action content of each action information with the preset action content of the corresponding dynamic image to obtain action similarity;
wherein the comparison result at least comprises the action similarity of each action information.
In some more specific embodiments, the action content of each action information includes a motion track of the pupil center of the eye of the user.
Specifically, referring to fig. 3 and taking a horizontal rightward arrow motion as an example, the vehicle-mounted camera system captures an eye video image, recognizes and extracts the motion trajectory P of the center of the user's eye pupil, and scales it up onto the display interface of the vehicle-mounted HUD, where it becomes a curve P'. The preset action content of the dynamic image displayed by the HUD is an arrow pattern moving horizontally to the right, whose center curve is Q. The starting end points of curve P' and curve Q are made to coincide, a coordinate point sequence B and a coordinate point sequence A are uniformly sampled from curve P' and curve Q respectively, LCSS(B, A) is then computed with the longest common subsequence (LCSS) algorithm, and the similarity C is set to LCSS(B, A).
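A minimal sketch of this comparison is given below, assuming a plain dynamic-programming LCSS with a spatial matching tolerance eps and a normalisation of C by the sample count; neither the tolerance value nor the normalisation is fixed by the text, which only sets C = LCSS(B, A).

```python
def lcss_length(seq_b, seq_a, eps=0.05):
    """Length of the longest common subsequence of two 2-D point sequences.

    Two points match when both coordinate differences are below eps; the
    tolerance value is an assumption, the text only names the LCSS algorithm.
    """
    n, m = len(seq_b), len(seq_a)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            (bx, by), (ax, ay) = seq_b[i - 1], seq_a[j - 1]
            if abs(bx - ax) <= eps and abs(by - ay) <= eps:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

def action_similarity(curve_p_prime, curve_q, num_samples=50, eps=0.05):
    """Similarity C between the projected eye trajectory P' and the template
    centre curve Q: uniformly sample both curves into coordinate point
    sequences B and A (starting points already made to coincide) and set
    C = LCSS(B, A), here divided by the sample count so C lies in [0, 1]."""
    def resample(curve, k):
        step = (len(curve) - 1) / (k - 1)
        return [curve[round(i * step)] for i in range(k)]
    b = resample(curve_p_prime, num_samples)
    a = resample(curve_q, num_samples)
    return lcss_length(b, a, eps) / num_samples
```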
In some more specific embodiments, the method further comprises:
step S50, controlling the HUD to display the calibration point, acquiring the eye focus coordinate when the eyes of the user fix the calibration point, and establishing the mapping relation between the eye focus coordinate and the coordinate of the calibration point;
specifically, referring to fig. 2, calibration points may appear in the center and four corners of the display interface of the HUD in sequence, and it is necessary for the user to watch the calibration points with his eyes for a period of time, for example, within several seconds acceptable to the user, the vehicle-mounted camera system arranged in the vehicle collects the user image, and then identifies the coordinates of the focal point of the user's eyes by using the image processing technology, and forms a mapping with the coordinates of the current calibration point corresponding to the HUD, so as to establish the mapping relationship, and complete the calibration of the system for the positions of the eyes after completing the 5 calibration points of the center and four corners;
in step S40, comparing the action content of each piece of action information with the preset action content of the corresponding dynamic image to obtain an action similarity, specifically including:
and obtaining the presenting condition of the action content of the eyes of the user in the HUD according to the mapping relation and the eye movement track, and comparing the presenting condition with the preset action content of the dynamic image to obtain action similarity.
It should be noted that the mapping relationship is used to determine how the action content actually performed by the user's eyes is presented on the HUD display interface; for example, the motion trajectory P in fig. 3 becomes the curve P' after being scaled up onto the display interface of the vehicle-mounted HUD. Specifically, referring to fig. 3 and taking a horizontal rightward arrow as an example, the motion trajectory P of the center of the user's eye pupil, extracted after the eye video image is captured and processed by the vehicle-mounted camera system, is scaled up onto the display interface of the vehicle-mounted HUD and becomes the curve P'; the preset action content of the dynamic image displayed by the HUD is an arrow pattern moving horizontally to the right, whose center curve is Q. The starting end points of curve P' and curve Q are made to coincide, a coordinate point sequence B and a coordinate point sequence A are uniformly sampled from curve P' and curve Q respectively, LCSS(B, A) is then computed with the longest common subsequence (LCSS) algorithm, and the similarity C is set to LCSS(B, A).
In some more specific embodiments, the action information further includes an action time T1; the action time T1 is determined as the starting moment at which the vehicle-mounted camera system captures the user's eye action;
wherein, the step S40 includes:
comparing the action time of each action information with the corresponding time for starting to display the dynamic image to obtain action following degree;
specifically, the motion time T1 of each piece of motion information is compared with the time T2 at which the preset motion content (for example, a right arrow) of the corresponding dynamic image starts to be displayed, to obtain a time difference therebetween, that is, a motion following degree may be defined as D ═ T1-T2, which represents a following speed of the user's eye motion with respect to the pointing motion of the HUD dynamic image;
and the comparison result comprises the action following degree D and the action similarity C of each piece of action information.
In some more specific embodiments, the step S40 further includes:
scoring the game completion condition of the user according to the comparison result, and accumulating the score into the user's account points; the user account points can be redeemed for prizes or discount services, so as to increase the user's motivation to participate in the game.
Specifically, the scoring includes scoring the action following degree D and the action similarity C of each piece of action information; assuming there are five pieces of action information, there are five corresponding action following degrees D = [D1, D2, D3, D4, D5] and five action similarities C = [C1, C2, C3, C4, C5];
Specifically, the score S1 of the action following degree is computed using a preset weight value a; the smaller the action following degree, the higher the score, and for a plurality of action following degrees the average of their scores may be taken as the final result. The score S2 of the action similarity is computed using a preset weight value b; the larger the action similarity, the higher the score, and for a plurality of action similarities the average of their scores may be taken as the final result;
wherein, it should be noted that a and b take values such that S1 and S2 are of the same order of magnitude, and S1 does not exceed 1/3 of S;
and finally, according to the comparison result, the user's game completion is scored as: S = S1 + S2;
the number of the bonus points is above a certain grade, the bonus points are more when the grade is higher, and the upper limit of the bonus points in a single game and a period of time can be set.
In some more specific embodiments, the method further comprises:
step S60, determining the difficulty of the game according to a second user instruction input by the user;
for example, the game difficulty is set to be a primary level, a middle level, a high level, and the like, and is not particularly limited in this embodiment. Wherein, the game difficulty can be determined by the action difficulty of the dynamic image and/or the update frequency of the dynamic image;
wherein, the step S20 includes:
acquiring a plurality of dynamic images corresponding to the game difficulty according to the game difficulty, and controlling the HUD to display the plurality of dynamic images in sequence according to a preset display mode; specifically, the higher the game difficulty, the higher the action difficulty; for example, "rotating counterclockwise around a central point for 3 cycles" is more difficult than "an arrow repeatedly moving downward for 3 s";
or,
acquiring a plurality of dynamic images, and controlling the HUD to display the plurality of dynamic images in sequence according to a preset display mode corresponding to the game difficulty; specifically, the higher the game difficulty, the higher the update frequency of the dynamic images, and the faster the images change, the harder it is for the user's eyes to follow.
In addition, the duration of the game should be set within a range that exercises the eyes without causing fatigue.
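A minimal sketch of the two difficulty strategies described above is given below; the difficulty labels, the `level` attribute and the dwell times are illustrative assumptions, and only the two alternatives themselves (harder actions, or the same actions shown at a higher update frequency) come from the embodiment.

```python
def configure_round(action_library, difficulty):
    """Select dynamic images and a display time from the chosen difficulty.

    The labels, the `level` attribute and the concrete timings are
    placeholders for this sketch.
    """
    if difficulty == "beginner":
        images = [a for a in action_library if a.level == 1]
        display_seconds = 6            # slow updates, easy to follow
    elif difficulty == "intermediate":
        images = [a for a in action_library if a.level <= 2]
        display_seconds = 5
    else:                              # "advanced"
        images = list(action_library)  # include the hardest actions
        display_seconds = 3            # fast updates, harder to follow
    return images, display_seconds
```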
Corresponding to the method described in the above embodiment, another embodiment of the present invention provides a HUD-based vehicle-mounted interactive game system, which includes a plurality of functional units, where the functional units can be used to execute the corresponding steps of the method described in the above embodiment; referring to fig. 4, the system of the present embodiment includes:
a first control unit 1 for controlling the HUD to display a game preparation screen;
the second control unit 2 is used for starting a game according to a first user instruction input by a user and controlling the HUD to sequentially display a plurality of dynamic images according to a preset display mode; each dynamic image comprises preset action content for indicating the eyes of a user to act according to action requirements;
an eye recognition unit 3, configured to recognize a plurality of pieces of motion information of a user's eyes performing a motion according to preset motion contents of the plurality of dynamic images;
and the comparison unit 4 is used for carrying out one-to-one corresponding comparison on the action information and the preset action contents of the dynamic images and obtaining a game result according to a comparison result.
In some more specific embodiments, the action information includes at least action content;
wherein, the comparing unit 4 is specifically configured to:
comparing the action content of each action information with the preset action content of the corresponding dynamic image to obtain action similarity;
wherein, the comparison result at least comprises the action similarity of each action information.
In some more specific embodiments, the action content of each action information includes a motion track of the pupil center of the eye of the user.
In some more specific embodiments, the system further comprises:
the third control unit 5 is used for controlling the HUD to display the calibration point, acquiring the eye focus coordinate when the eye of the user gazes at the calibration point, and establishing a mapping relation between the eye focus coordinate and the coordinate of the calibration point;
wherein, the comparing unit 4 is specifically configured to:
and obtaining the presenting condition of the action content of the eyes of the user in the HUD according to the mapping relation and the eye movement track, and comparing the presenting condition with the preset action content of the dynamic image to obtain action similarity.
In some more specific embodiments, the action information further includes an action time;
wherein, the comparing unit 4 is specifically configured to:
comparing the action time of each action information with the corresponding time for starting to display the dynamic image to obtain action following degree;
and the comparison result comprises the action following degree and the action similarity of each piece of action information.
In some more specific embodiments, the comparing unit 4 is specifically configured to:
scoring the game completion condition of the user according to the comparison result, and accumulating the score to the user account point; wherein the user account credits are used for redeeming prizes or for redeeming a coupon service.
In some more specific embodiments, the system further comprises:
the difficulty setting unit 6 is used for determining the game difficulty according to a second user instruction input by the user;
wherein, the second control unit 2 is specifically configured to:
acquiring a plurality of dynamic images corresponding to the game difficulty according to the game difficulty, and controlling the HUD to display the plurality of dynamic images in sequence according to a preset display mode;
or,
and acquiring a plurality of dynamic images, and controlling the HUD to display the plurality of dynamic images in sequence according to a preset display mode corresponding to the game difficulty.
The system of the above-described embodiment is only illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the scheme of the system of the embodiment.
It should be noted that the system of the foregoing embodiment corresponds to the method of the foregoing embodiment, and therefore, a part of the system of the foregoing embodiment that is not described in detail can be obtained by referring to the content of the method of the foregoing embodiment, that is, the specific step content described in the method of the foregoing embodiment can be understood as the function that can be realized by the system of the foregoing embodiment, and is not described again here.
Furthermore, if the system of the above embodiment is implemented in the form of a software functional unit and sold or used as a standalone product, it may be stored in a computer-readable storage medium. Accordingly, as another embodiment, the invention further provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the HUD-based vehicle-mounted interactive game method of the above embodiment.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (10)
1. A vehicle-mounted interactive game method based on HUD is characterized by comprising the following steps:
controlling the HUD to display a game preparation screen;
starting a game according to a first user instruction input by a user, and controlling the HUD to sequentially display a plurality of dynamic images according to a preset display mode; each dynamic image comprises preset action content for indicating the eyes of a user to act according to action requirements;
identifying a plurality of action information of the user's eyes acting according to the preset action contents of the plurality of dynamic images;
and comparing the action information with preset action contents of the dynamic images in a one-to-one correspondence manner, and obtaining a game result according to a comparison result.
2. The HUD-based in-vehicle interactive game method according to claim 1, wherein the motion information includes at least motion contents;
wherein, the comparing the plurality of action information with preset action contents of the plurality of dynamic images in a one-to-one correspondence manner includes:
comparing the action content of each action information with the preset action content of the corresponding dynamic image to obtain action similarity;
wherein, the comparison result at least comprises the action similarity of each action information.
3. The HUD-based vehicle-mounted interactive game method according to claim 2, wherein the action content of each action message comprises a movement track of the pupil center of the user's eye.
4. The HUD-based in-vehicle interactive game method of claim 3, further comprising:
controlling the HUD to display the calibration point, acquiring an eye focus coordinate when the eye of a user gazes at the calibration point, and establishing a mapping relation between the eye focus coordinate and the coordinate of the calibration point;
wherein, the comparing the action content of each action information with the preset action content of the corresponding dynamic image to obtain the action similarity includes:
and obtaining the presenting condition of the action content of the eyes of the user in the HUD according to the mapping relation and the eye movement track, and comparing the presenting condition with the preset action content of the dynamic image to obtain action similarity.
5. The HUD-based in-vehicle interactive game method of claim 2, wherein the action information further includes an action time;
wherein, the comparing the plurality of action information with preset action contents of the plurality of dynamic images in a one-to-one correspondence manner includes:
comparing the action time of each action information with the corresponding time for starting to display the dynamic image to obtain action following degree;
wherein the comparison result comprises the action following degree and the action similarity of each action information.
6. The HUD-based in-vehicle interactive game method according to any one of claims 1-5, wherein the deriving the game result according to the comparison result comprises:
scoring the game completion condition of the user according to the comparison result, and accumulating the score to the user account point; wherein the user account credits are used for redeeming prizes or for redeeming a coupon service.
7. The HUD-based in-vehicle interactive game method according to any one of claims 1-5, further comprising:
determining the game difficulty according to a second user instruction input by the user;
wherein, the starting a game according to the first user instruction input by the user, and controlling the HUD to sequentially display a plurality of dynamic images according to a preset display mode, comprises:
acquiring a plurality of dynamic images corresponding to the game difficulty according to the game difficulty, and controlling the HUD to display the plurality of dynamic images in sequence according to a preset display mode;
or,
and acquiring a plurality of dynamic images, and controlling the HUD to display the plurality of dynamic images in sequence according to a preset display mode corresponding to the game difficulty.
8. An on-vehicle interactive game system based on HUD, characterized in that, the system includes:
a first control unit for controlling the HUD to display a game preparation screen;
the second control unit is used for starting a game according to a first user instruction input by a user and controlling the HUD to sequentially display a plurality of dynamic images according to a preset display mode; each dynamic image comprises preset action content for indicating the eyes of a user to act according to action requirements;
the eye identification unit is used for identifying a plurality of pieces of action information of the user eyes acting according to the preset action contents of the plurality of dynamic images;
and the comparison unit is used for carrying out one-to-one corresponding comparison on the action information and the preset action contents of the dynamic images and obtaining a game result according to a comparison result.
9. The HUD-based in-vehicle interactive game system of claim 8, further comprising:
the third control unit is used for controlling the HUD to display the calibration point, acquiring the eye focus coordinate when the eyes of the user watch the calibration point, and establishing a mapping relation between the eye focus coordinate and the coordinate of the calibration point;
wherein, the comparing unit is specifically configured to:
and obtaining the presenting condition of the action content of the eyes of the user in the HUD according to the mapping relation and the eye movement track, and comparing the presenting condition with the preset action content of the dynamic image to obtain action similarity.
10. A HUD-based in-vehicle interactive game system according to claim 8 or 9, wherein the system further comprises:
the difficulty setting unit is used for determining the game difficulty according to a second user instruction input by the user;
the second control unit is specifically configured to:
acquiring a plurality of dynamic images corresponding to the game difficulty according to the game difficulty, and controlling the HUD to display the plurality of dynamic images in sequence according to a preset display mode;
or,
and acquiring a plurality of dynamic images, and controlling the HUD to display the plurality of dynamic images in sequence according to a preset display mode corresponding to the game difficulty.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210670676.6A CN114939272B (en) | 2022-06-15 | 2022-06-15 | Vehicle-mounted interactive game method and system based on HUD |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210670676.6A CN114939272B (en) | 2022-06-15 | 2022-06-15 | Vehicle-mounted interactive game method and system based on HUD |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114939272A true CN114939272A (en) | 2022-08-26 |
CN114939272B CN114939272B (en) | 2023-08-04 |
Family
ID=82908383
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210670676.6A Active CN114939272B (en) | 2022-06-15 | 2022-06-15 | Vehicle-mounted interactive game method and system based on HUD |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114939272B (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003265858A (en) * | 2003-03-24 | 2003-09-24 | Namco Ltd | 3-d simulator apparatus and image-synthesizing method |
CN104750232A (en) * | 2013-12-28 | 2015-07-01 | 华为技术有限公司 | Eye tracking method and eye tracking device |
CN106095111A (en) * | 2016-06-24 | 2016-11-09 | 北京奇思信息技术有限公司 | The method that virtual reality is mutual is controlled according to user's eye motion |
KR20170029166A (en) * | 2015-09-07 | 2017-03-15 | 삼성전자주식회사 | Method and apparatus for eye tracking |
CN106569590A (en) * | 2015-10-10 | 2017-04-19 | 华为技术有限公司 | Object selection method and device |
CN107115669A (en) * | 2017-05-26 | 2017-09-01 | 合肥充盈信息科技有限公司 | A kind of eyeshield games system and its implementation |
CN108310759A (en) * | 2018-02-11 | 2018-07-24 | 广东欧珀移动通信有限公司 | Information processing method and related product |
CN108681403A (en) * | 2018-05-18 | 2018-10-19 | 吉林大学 | A kind of trolley control method using eye tracking |
CN113421346A (en) * | 2021-06-30 | 2021-09-21 | 暨南大学 | Design method of AR-HUD head-up display interface for enhancing driving feeling |
CN114004867A (en) * | 2021-11-01 | 2022-02-01 | 上海交通大学 | Method and terminal for measuring, calculating and predicting eye movement consistency among dynamic observers |
CN114125376A (en) * | 2020-09-01 | 2022-03-01 | 通用汽车环球科技运作有限责任公司 | Environment interaction system for providing augmented reality for in-vehicle infotainment and entertainment |
CN114210045A (en) * | 2021-12-14 | 2022-03-22 | 深圳创维-Rgb电子有限公司 | Intelligent eye protection method and device and computer readable storage medium |
CN114299569A (en) * | 2021-12-16 | 2022-04-08 | 武汉大学 | Safe face authentication method based on eyeball motion |
Also Published As
Publication number | Publication date |
---|---|
CN114939272B (en) | 2023-08-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8520901B2 (en) | Image generation system, image generation method, and information storage medium | |
US7855713B2 (en) | Program, input evaluation system, and input evaluation method | |
US20230218999A1 (en) | Information control method and apparatus in game, and electronic device | |
CN112770819A (en) | Implementing graphical overlays for streaming games based on current game scenes | |
KR100648539B1 (en) | Game machine | |
CN110155072B (en) | Carsickness prevention method and carsickness prevention device | |
Kun et al. | Calling while driving: An initial experiment with HoloLens | |
US20190286216A1 (en) | Attention-based rendering and fidelity | |
US20170075417A1 (en) | Data processing apparatus and method of controlling display | |
JP2000126457A (en) | Display control of game screen, character movement control, game machine, and recording medium storing program | |
JP3415416B2 (en) | GAME DEVICE, IMAGE DATA FORMING METHOD, AND MEDIUM | |
JP2011258158A (en) | Program, information storage medium and image generation system | |
CN111045587B (en) | Game control method, electronic device, and computer-readable storage medium | |
CN115525152A (en) | Image processing method, system, device, electronic equipment and storage medium | |
JP5136948B2 (en) | Vehicle control device | |
US10747308B2 (en) | Line-of-sight operation apparatus, method, and medical device | |
CN114939272B (en) | Vehicle-mounted interactive game method and system based on HUD | |
JP6057738B2 (en) | GAME PROGRAM, GAME DEVICE, GAME SYSTEM, AND GAME PROCESSING METHOD | |
CN114434466B (en) | Automobile intelligent cockpit performance evaluation simulation robot | |
CN110548292A (en) | Multi-identity tracking capability training method and device | |
CN114967128A (en) | Sight tracking system and method applied to VR glasses | |
JP5213913B2 (en) | Program and image generation system | |
JP3179739B2 (en) | Driving game machine and recording medium storing driving game program | |
US20230331162A1 (en) | Display controller | |
TWI683261B (en) | Computer cockpit and authentication method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||