CN111931702A - Target pushing method, system and equipment based on eyeball tracking - Google Patents

Target pushing method, system and equipment based on eyeball tracking

Info

Publication number
CN111931702A
Authority
CN
China
Prior art keywords
server
gps data
target object
vehicle
new target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010958016.9A
Other languages
Chinese (zh)
Other versions
CN111931702B (en)
Inventor
陈翔
陈豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Joynext Technology Corp
Original Assignee
Ningbo Joynext Technology Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Joynext Technology Corp filed Critical Ningbo Joynext Technology Corp
Priority to CN202010958016.9A
Publication of CN111931702A
Application granted
Publication of CN111931702B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The embodiments of the present application disclose a target pushing method, system and device based on eyeball tracking. The method comprises the following steps: a first server receives a first picture of the user's line of sight sent by a first camera and a second picture of the vehicle foreground sent by a second camera; it identifies the user's eyeball steering angle in the first picture, calculates the depth-of-field data of the second picture, acquires the vehicle's GPS data at the trigger moment, and calculates target GPS data from the eyeball steering angle, the depth-of-field data and the vehicle's GPS data; the target GPS data is loaded into a retrieved map to judge whether a target object exists; and when the target object exists, the related information corresponding to the target object is pushed to the vehicle terminal. With the method and device, a point of interest can be located merely by capturing eyeball rotation information, which simplifies an otherwise complex interaction process and also improves safety during driving.

Description

Target pushing method, system and equipment based on eyeball tracking
Technical Field
The invention belongs to the technical field of information, and particularly relates to a target pushing method, a target pushing system and target pushing equipment based on eyeball tracking.
Background
At present, when a user wants to learn about a point of interest while driving, the user usually has to tap into navigation or enter a search engine to perform a search. Although this approach is reasonably accurate and meets the needs of most scenarios, the whole operation requires multi-level interaction and is cumbersome to perform; carrying out such interaction while the vehicle is moving endangers the driver's personal safety and may cause traffic accidents.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a target pushing method, a target pushing system and a target pushing device based on eyeball tracking.
The embodiment of the invention provides the following specific technical scheme:
in a first aspect, the present invention provides a target pushing method based on eye tracking, where the method includes:
the method comprises the steps that a first server receives a first picture about a user sight line sent by a first camera and a second picture about a vehicle foreground sent by a second camera, wherein the first picture and the second picture are obtained by shooting when a shooting instruction triggered by a user is obtained through the first camera and the second camera respectively;
identifying the eyeball steering angle of the user in the first picture, simultaneously calculating the depth of field data of the second picture and acquiring the GPS data of the vehicle at the trigger moment, and calculating to obtain target GPS data according to the eyeball steering angle, the depth of field data and the GPS data of the vehicle;
loading the target GPS data into a map obtained by calling, and judging whether a target object exists or not;
and when the target object exists, pushing the relevant information corresponding to the target object to the vehicle terminal.
Preferably, the method further comprises:
the first server sends the eyeball steering angle, the depth of field data and the GPS data of the vehicle to a second server, wherein the second server is a cloud server;
the second server calculates the received eyeball steering angle, the received depth of field data and the received GPS data of the vehicle to obtain new target GPS data, judges whether the new target GPS data is the same as the target GPS data or not, loads the new target GPS data into a high-definition map when the new target GPS data is different from the target GPS data, judges whether a new target object exists or not, and sends related information corresponding to the new target object to the first server when the new target object exists;
the pushing of the relevant information corresponding to the target object to the vehicle terminal by the first server specifically includes:
and the first server pushes the related information corresponding to the new target object to the vehicle terminal.
Preferably, when the second server determines that there are new target objects and the number of the new target objects is greater than one, that is, it is determined that there are multiple new target objects, before sending the related information corresponding to the new target objects to the first server, the method further includes:
the second server sending confirmation requests to the first server regarding the plurality of new target objects;
the first server sends the received confirmation requests of the plurality of new target objects to the vehicle terminal, receives a confirmation instruction of the new target object returned by the vehicle terminal and sends the confirmation instruction to the second server;
the second server acquires relevant information corresponding to the confirmed new target object according to the received confirmation instruction;
the sending, by the second server, the relevant information corresponding to the new target object to the first server specifically includes:
the second server sends the relevant information corresponding to the confirmed new target object to the first server;
the pushing, by the first server, the related information corresponding to the new target object to the vehicle terminal specifically includes:
and the first server sends the relevant information corresponding to the confirmed new target object to the vehicle terminal.
Preferably, the method further comprises:
when the new target GPS data is different from the target GPS data, the second server sends the new target GPS data to the first server;
the first server updates a pre-constructed incidence relation table according to the received new target GPS data and the related information of the new target object;
the incidence relation table is used for expressing incidence relations among the eyeball steering angle, the depth of field data, the GPS data of the vehicle, the target GPS data and relevant information corresponding to the target object.
Preferably, the method further comprises:
the first server communicates with the second server according to priority rules defined in a communication protocol;
the method specifically comprises the following steps:
when the first server detects that a roadside unit exists, judging whether the roadside unit with the communication efficiency meeting the preset condition exists, if so, sending the eyeball turning angle, the depth of field data and the GPS data of the vehicle to the second server through the determined roadside unit; meanwhile, the determined roadside unit sends GPS data and surrounding environment information of the roadside unit to the second server;
and when the first server judges that no roadside unit with the communication efficiency meeting the preset condition exists or no roadside unit is detected, the eyeball turning angle, the depth of field data and the GPS data of the vehicle are sent to the second server through the base station.
Preferably, when the second server loads the new target GPS data into a high definition map and then does not identify any target object, the method further includes:
and the second server loads the GPS data of the roadside unit, the surrounding environment information and the calculated new target GPS data into the high-definition map for relocation.
Preferably, the method further comprises:
and when the first server does not identify any target object, sending a voice prompt to the vehicle terminal so that the user resends the voice request according to the voice prompt.
Preferably, the step of calculating, by the first server, a target GPS data according to the eyeball steering angle, the depth of field data, and the GPS data of the vehicle specifically includes:
the first server calculates an offset according to the eyeball turning angle and the depth of field data;
and calculating the target GPS data according to the offset and the GPS data of the vehicle.
In a second aspect, the invention provides an eyeball tracking-based target pushing system, which comprises a first camera, a second camera, a vehicle terminal and a first server;
the first server is configured to:
receiving a first picture about the sight of a user and a second picture about the vehicle foreground, wherein the first picture and the second picture are sent by the first camera and the second camera respectively, and the first picture and the second picture are obtained by shooting when a shooting instruction triggered by the user is obtained by the first camera and the second camera;
identifying the eyeball steering angle of the user in the first picture, simultaneously calculating the depth of field data of the second picture and acquiring the GPS data of the vehicle at the trigger moment, and calculating to obtain target GPS data according to the eyeball steering angle, the depth of field data and the GPS data of the vehicle;
loading the target GPS data into a map obtained by calling, and judging whether a target object exists or not;
when the target object exists, pushing related information corresponding to the target object to a vehicle terminal;
the vehicle terminal is used for:
and receiving and displaying the related information corresponding to the target object returned by the first server.
In a third aspect, the present invention provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein:
the processor, when executing the computer program, implements the eye tracking-based target push method according to the first aspect.
The embodiment of the invention has the following beneficial effects:
1. According to the method, the position of the target object, i.e., the point of interest, can be calculated by tracking the user's eyeball rotation angle together with the depth-of-field information of the vehicle-foreground picture and the vehicle's GPS data. Compared with the traditional approach of searching by manual input, this is clearly more convenient, the interaction is simpler, and the user's personal safety while driving is protected to a great extent;
2. To make the identified target object more accurate and better match the user's expectation, the first server forwards the relevant data to the cloud server after receiving it; since the cloud server has stronger computing power and a more accurate map, the resulting target object position is more precise;
3. In the invention, when the first server communicates with the cloud server, data is transmitted according to a predefined communication protocol: when a roadside unit with good communication efficiency exists, the data is sent to the cloud server through the roadside unit, which improves data transmission efficiency and relieves network pressure;
4. In the invention, when the calculation result (GPS data) of the cloud server does not match that of the first server, the cloud server's result prevails and the association relation table is updated, so that the same target can be located quickly and accurately the next time it is identified;
5. In the invention, when the cloud server cannot identify the target object, it can re-identify the target according to the roadside unit's GPS data, surrounding-environment information and the like sent by the roadside unit; the richer information obtained is more conducive to accurate identification.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a target pushing method based on eye tracking according to embodiment 1 of the present application;
fig. 2 is a relationship diagram of an eyeball steering angle, depth of field data, and GPS data of a vehicle and target GPS data provided in embodiment 1 of the present application;
fig. 3 is a schematic structural diagram of a computer device provided in embodiment 3 of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As described in the background art, when a user wants to learn about a point of interest while driving, the user usually has to tap into navigation or enter a search engine to perform a search. This approach requires multi-level interaction, is overly cumbersome to operate, endangers the driver's personal safety when carried out while the vehicle is moving, and may cause traffic accidents.
Example 1
In order to achieve the above object, as shown in fig. 1, a target pushing method based on eyeball tracking is provided, which includes the following steps:
110. the first server receives a first picture about the sight of the user sent by the first camera and a second picture about the vehicle foreground sent by the second camera, and the first picture and the second picture are obtained by shooting when the first camera and the second camera obtain a shooting instruction triggered by the user.
Specifically, the user can issue a shooting instruction such as "What is that?" while looking at the target object; at this moment, the first camera and the second camera are triggered to start shooting.
The first camera is an intelligent camera installed inside the vehicle and aimed at the driver; the second camera can likewise be installed inside the vehicle but aimed at the outside, so as to capture the vehicle foreground.
Before the user issues the shooting instruction, the first camera and the second camera can be woken up first, so that they are ready to shoot at any time.
After receiving the shooting instruction, the first camera captures the user's line of sight and photographs the user's face (the eye region in particular), while the second camera photographs the vehicle foreground; once shooting is finished, the pictures are sent to the first server.
The first server may be a mobile edge computing server. A mobile edge computing server is built on a mobile edge computing architecture: with the rapid development of Internet of Things technology, the massive data generated at the network edge can no longer be processed efficiently by a centralized cloud computing center, making it hard to guarantee users' real-time requirements. Sinking the computing and storage capabilities of the cloud computing platform to the network edge (the base station) therefore relieves the pressure that large numbers of requests place on the cloud platform's network, computing and storage resources.
120. And identifying the eyeball steering angle of the user in the first picture, simultaneously calculating the depth of field data of the second picture and acquiring the GPS data of the vehicle at the trigger moment, and calculating to obtain target GPS data according to the eyeball steering angle, the depth of field data and the GPS data of the vehicle.
The depth-of-field data of the second picture can be calculated with a traditional method or with a neural network model, a deep learning algorithm or the like; since depth-of-field estimation is a relatively mature technology in the prior art, it is not described in detail here.
In the above step, the step of calculating a target GPS data according to the eyeball steering angle, the depth of field data, and the GPS data of the vehicle may specifically include the steps of:
1. the first server calculates the offset according to the eyeball steering angle and the depth of field data;
2. and calculating target GPS data according to the offset and the GPS data of the vehicle.
Referring to fig. 2, GPS1 is the vehicle's GPS data and θ is the identified eyeball steering angle; an offset can be calculated from the depth-of-field data and the eyeball steering angle, and GPS2 (i.e., the target GPS data) can then be calculated from GPS1 and this offset.
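The patent does not give the exact formula, so the following is only a minimal sketch of this offset calculation, assuming a local flat-earth approximation and that the eyeball steering angle θ is measured clockwise from the vehicle heading; all names and thresholds are illustrative:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius used for the local flat-earth approximation


def target_gps(vehicle_lat, vehicle_lon, heading_deg, eye_angle_deg, depth_m):
    """Estimate GPS2 (target GPS data) from GPS1 (vehicle GPS data),
    the eyeball steering angle and the depth-of-field distance.

    Assumptions (not specified in the patent): the angle is measured
    clockwise from the vehicle heading and distances are small enough
    for a flat-earth approximation.
    """
    bearing = math.radians(heading_deg + eye_angle_deg)   # absolute bearing of the line of sight
    north_m = depth_m * math.cos(bearing)                 # northward component of the offset
    east_m = depth_m * math.sin(bearing)                  # eastward component of the offset

    d_lat = math.degrees(north_m / EARTH_RADIUS_M)
    d_lon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(vehicle_lat))))
    return vehicle_lat + d_lat, vehicle_lon + d_lon


# Example: vehicle heading north, gaze 30 degrees to the right, object about 80 m away.
print(target_gps(29.8683, 121.5440, 0.0, 30.0, 80.0))
```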
130. And loading the target GPS data into a map obtained by calling, and judging whether a target object exists or not.
140. And when the target object exists, pushing the relevant information corresponding to the target object to the vehicle terminal for displaying.
The first server can call map information by opening map software, so that calculated target GPS data is loaded into a map to identify a target object.
The related information of the target object may be data describing the target object, such as its name, a brief introduction and evaluation information, pre-stored in a corresponding database. When the target object is a restaurant, information such as its cuisine, signature dishes, rating and per-capita price can be retrieved from the database and pushed to the vehicle terminal, specifically to the AR HUD or the central control screen.
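As a brief sketch of such a lookup and push, assuming a simple in-memory database and a print statement standing in for the AR HUD / central control push (all field names and the example entry are illustrative, not from the patent):

```python
# Illustrative pre-stored records keyed by the target object's GPS coordinates (rounded).
POI_DATABASE = {
    (29.86896, 121.54471): {
        "name": "Example Restaurant",          # hypothetical entry for illustration
        "cuisine": "Ningbo seafood",
        "signature_dish": "Steamed yellow croaker",
        "rating": 4.6,
        "per_capita_price_cny": 85,
    },
}


def push_related_info(target_gps, precision=5):
    """Look up the related information of the identified target object and push
    it to the vehicle terminal (here simply printed in place of the AR HUD)."""
    key = (round(target_gps[0], precision), round(target_gps[1], precision))
    info = POI_DATABASE.get(key)
    if info is None:
        return None                             # no target object identified at this position
    print(f"[vehicle terminal] {info['name']}: {info}")
    return info


push_related_info((29.868961, 121.544712))
```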
150. And when the first server does not identify any target object, sending a voice prompt to the vehicle terminal so that the user resends the voice request according to the voice prompt.
Uncontrollable conditions such as bumps can occur while the vehicle is moving; in that case the line of sight captured by the first camera may be inaccurate, the target GPS data obtained in the subsequent calculation is then naturally also inaccurate, and no target object can be identified. At this point the first server can send a voice prompt to the vehicle terminal so that the user can resend the voice request according to the prompt and trigger the first camera and the second camera to shoot again.
In addition, when the first server identifies the target object, several target objects may overlap, for example two buildings overlapping front to back. In this case, the scheme further comprises the following implementation steps:
210. When the first server determines that target objects exist and their number is greater than one, i.e., a plurality of target objects exist, it sends confirmation requests about the plurality of target objects to the vehicle terminal before pushing the related information of the target objects.
220. The first server receives the confirmation instruction about the target object returned by the vehicle terminal, acquires the related information corresponding to the confirmed target object, and pushes it to the vehicle terminal for display.
In this way, the scheme can ultimately identify the target object; when several target objects exist, the user is queried through multi-modal interaction to determine the specific object, and the corresponding related information is sent to the vehicle terminal once the answer is obtained. Compared with the traditional approach in which the user has to search by manual input, this is clearly more convenient, the interaction is simpler, and the user's personal safety while driving is protected to the greatest extent.
Because both the amount of information in the local database and the local computing capability are limited, the scheme further provides the following implementation steps to improve the accuracy of target object identification:
310. when the first server acquires the eyeball steering angle, the depth of field data and the GPS data of the vehicle, the eyeball steering angle, the depth of field data and the GPS data of the vehicle are sent to a second server, wherein the second server is a cloud server.
320. And the second server calculates the received data to obtain new target GPS data, judges whether the new target GPS data is the same as the target GPS data or not, loads the new target GPS data into a high-definition map when the new target GPS data is different from the target GPS data, judges whether a new target object exists or not, and sends related information corresponding to the new target object to the first server when the new target object exists.
In the above step, when the second server calculates a new target GPS data according to the received eyeball steering angle, depth of field data, and GPS data of the vehicle, the same method as that in the first server may be selected for calculation, that is, the offset is calculated first, and then the new target GPS data is calculated according to the offset.
330. The first server pushes the relevant information corresponding to the new target object to the vehicle terminal for displaying.
The second server has stronger computing power than the first server and provides more comprehensive map information, so the target object it obtains is more accurate.
It should be noted that if the second server determines that the new target GPS data differs from the target GPS data, the first server's calculation result may be inaccurate; to ensure the user obtains accurate information, the result is subsequently pushed through the first server. From the user's perspective, two pieces of information are received: first, the related information of the target object pushed by the first server, and second, the information of the target object computed by the second server.
In addition, if the second server determines that the new target GPS data is the same as the target GPS data, the first server's calculation is accurate and no further pushing is required.
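In practice, "the same" GPS result usually means agreement within some tolerance; the patent does not specify one, so the 5 m threshold in this sketch is an assumption for illustration:

```python
import math


def same_target_gps(gps_a, gps_b, tolerance_m=5.0):
    """Treat two (lat, lon) results as identical if they lie within a few metres
    of each other; the 5 m tolerance is an illustrative assumption."""
    lat_a, lon_a = gps_a
    lat_b, lon_b = gps_b
    meters_per_deg_lat = 111320.0
    meters_per_deg_lon = 111320.0 * math.cos(math.radians((lat_a + lat_b) / 2.0))
    d_north = (lat_b - lat_a) * meters_per_deg_lat
    d_east = (lon_b - lon_a) * meters_per_deg_lon
    return math.hypot(d_north, d_east) <= tolerance_m


# The second server only continues with its own result when it differs from the first server's.
if not same_target_gps((29.86896, 121.54471), (29.86902, 121.54480)):
    print("results differ: load the new target GPS data into the high-definition map")
```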
Also, in this case, there may be a case where a plurality of target objects exist, and therefore, the present solution may further include the following implementation steps:
410. When the second server determines that new target objects exist and their number is greater than one, i.e., a plurality of new target objects exist, it sends confirmation requests about the plurality of new target objects to the first server before sending the related information corresponding to the new target objects to the first server.
420. The first server sends the received confirmation requests of the plurality of new target objects to the vehicle terminal, receives confirmation instructions of the new target objects returned by the vehicle terminal and sends the confirmation instructions to the second server.
For example, the vehicle terminal displays: "Please confirm whether you mean the building in front or the building behind." After the user's voice confirmation is obtained, it is sent to the first server, and the first server forwards it to the second server.
430. And the second server acquires the relevant information corresponding to the confirmed new target object according to the received confirmation instruction, and sends the relevant information corresponding to the confirmed new target object to the first server.
440. And the first server sends the relevant information corresponding to the confirmed new target object to the vehicle terminal for displaying.
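A rough sketch of how steps 410-440 could be chained is shown below; the message shapes and the console-based confirmation are assumptions, since in the actual system these would be network messages and a voice dialogue:

```python
def second_server_flow(new_targets, first_server_confirm):
    """If several new target objects exist, ask the first server (and through it
    the vehicle terminal / user) which one is meant, then return its related info."""
    if len(new_targets) == 1:
        return new_targets[0]["related_info"]
    names = [t["name"] for t in new_targets]
    chosen_name = first_server_confirm(names)             # confirmation request -> instruction
    for t in new_targets:
        if t["name"] == chosen_name:
            return t["related_info"]
    return None


def first_server_confirm(names):
    """Stand-in for forwarding the confirmation request to the vehicle terminal and
    returning the user's confirmation instruction (here: always the first option)."""
    print(f"[vehicle terminal] Please confirm which object you mean: {names}")
    return names[0]


candidates = [
    {"name": "front building", "related_info": {"type": "office tower"}},
    {"name": "rear building", "related_info": {"type": "shopping mall"}},
]
print(second_server_flow(candidates, first_server_confirm))
```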
In order to improve the target identification effect of the first server and quickly identify the target object in the subsequent driving process, the scheme further comprises the following steps:
510. when the new target GPS data is different from the target GPS data, the second server sends the new target GPS data to the first server.
520. And the first server updates the pre-constructed association relation table according to the received new target GPS data and the related information of the new target object.
The incidence relation table is used for expressing incidence relations among the eyeball steering angle, the depth of field data, the GPS data of the vehicle, the target GPS data and the relevant information corresponding to the target object.
Specifically, the new target GPS data and the related information of the new target object replace the original target GPS data and related information associated with the eyeball steering angle, the depth-of-field data and the vehicle's GPS data, and the association table is stored in the database. Thus, when other vehicles later query the same target object from the same position, the updated information can be obtained directly from the database.
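A minimal sketch of such an association table and its update step could look as follows; the key layout and rounding precision are illustrative assumptions:

```python
# key: (eyeball steering angle, depth of field, vehicle GPS) -> value: target GPS + related info
association_table = {}


def make_key(eye_angle_deg, depth_m, vehicle_gps):
    # Round the inputs so that near-identical queries from other vehicles hit the same entry.
    return (round(eye_angle_deg, 1), round(depth_m, 1),
            (round(vehicle_gps[0], 5), round(vehicle_gps[1], 5)))


def update_association(eye_angle_deg, depth_m, vehicle_gps, new_target_gps, new_related_info):
    """Overwrite the locally computed entry with the cloud server's result so that
    the same target can later be resolved directly from the database."""
    key = make_key(eye_angle_deg, depth_m, vehicle_gps)
    association_table[key] = {
        "target_gps": new_target_gps,
        "related_info": new_related_info,
    }


update_association(30.0, 80.0, (29.8683, 121.5440), (29.86902, 121.54480),
                   {"name": "Example Restaurant", "rating": 4.6})
```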
With the rapid growth of global mobile data, simply adding cellular network infrastructure can hardly satisfy users' increasing demands, so other connection methods need to be considered. Taking this into account, the scheme proposes communicating preferentially through roadside units. Roadside units are facilities deployed at the roadside to assist communication; they effectively relieve network pressure and improve communication efficiency, and they also support high-speed message transmission between vehicles.
In particular communication, the first server communicates with the second server according to priority rules defined in the communication protocol.
For example, the defined priority rule is: when a roadside unit (RSU) meeting the condition exists, communication is carried out through the roadside unit; when no roadside unit meets the condition or no roadside unit exists, communication is carried out through the base station. The specific steps are as follows:
610. when the first server detects that the roadside unit exists, judging whether the roadside unit with the communication efficiency meeting the preset condition exists, if so, sending eyeball steering angle, depth of field data and GPS data of the vehicle to the second server through the determined roadside unit; meanwhile, the determined roadside unit transmits the GPS data of the roadside unit and the surrounding environment information to the second server.
620. And when the first server judges that no roadside unit with the communication efficiency meeting the preset condition exists or no roadside unit is detected, transmitting the eyeball steering angle, the depth of field data and the GPS data of the vehicle to the second server through the base station.
Based on this, the scheme first selects, according to factors such as distance and bandwidth, a roadside unit whose communication efficiency meets the preset condition and communicates through it; roadside units that are far away or communicate poorly are not selected, and in that case communication goes directly through the base station.
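A sketch of this selection rule is given below; the distance and bandwidth thresholds standing in for the "preset condition" are assumptions, since the patent does not quantify it:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class RoadsideUnit:
    rsu_id: str
    distance_m: float
    bandwidth_mbps: float


def pick_channel(rsus: List[RoadsideUnit],
                 max_distance_m: float = 300.0,
                 min_bandwidth_mbps: float = 10.0) -> Optional[RoadsideUnit]:
    """Return the roadside unit to use, or None to fall back to the base station."""
    qualifying = [r for r in rsus
                  if r.distance_m <= max_distance_m and r.bandwidth_mbps >= min_bandwidth_mbps]
    if not qualifying:
        return None                       # no suitable RSU: communicate via the base station
    return min(qualifying, key=lambda r: r.distance_m)   # prefer the nearest qualifying RSU


rsu = pick_channel([RoadsideUnit("rsu-1", 450.0, 40.0), RoadsideUnit("rsu-2", 120.0, 25.0)])
print(rsu.rsu_id if rsu else "base station")
```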
Furthermore, when the roadside unit communicates, it also sends its own GPS data and some surrounding-environment information to the second server, which can further assist the second server in target identification, specifically through the following steps:
when the second server loads the new target GPS data to a high-definition map and then identifies no target object, the second server loads the GPS data of the roadside unit, the surrounding environment information and the calculated new target GPS data to the high-definition map for relocation.
Thus, the roadside unit not only improves communication efficiency but also helps the second server perform relocation and identification, improving the identification of the target object.
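A compact sketch of that fallback is shown below, with the map lookup abstracted as a callable because the patent does not define the high-definition-map interface; the callable and its context argument are assumptions:

```python
def identify_with_relocation(map_lookup, new_target_gps, rsu_gps=None, rsu_environment=None):
    """map_lookup(gps, context) -> target object or None; 'context' carries the
    roadside unit's GPS data and surrounding-environment information when available."""
    target = map_lookup(new_target_gps, context=None)
    if target is None and rsu_gps is not None:
        # Relocate: retry with the RSU position and environment information as extra context.
        target = map_lookup(new_target_gps, context={"rsu_gps": rsu_gps,
                                                     "environment": rsu_environment})
    return target


# Toy lookup that only succeeds when the extra roadside-unit context is supplied.
demo_lookup = lambda gps, context: {"name": "Example Mall"} if context else None
print(identify_with_relocation(demo_lookup, (29.869, 121.5448),
                               rsu_gps=(29.8688, 121.5445), rsu_environment="crossroads"))
```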
Example 2
Corresponding to embodiment 1, the present application provides a target push system based on eye tracking, including a first camera, a second camera, a vehicle terminal, and a first server;
the first server is used for:
receiving a first picture about the sight of a user and sent by a first camera and a second picture about the vehicle foreground and sent by a second camera, wherein the first picture and the second picture are obtained by shooting when the first camera and the second camera acquire a shooting instruction triggered by the user respectively;
identifying the eyeball steering angle of the user in the first picture, simultaneously calculating the depth of field data of the second picture and acquiring the GPS data of the vehicle at the trigger moment, and calculating to obtain target GPS data according to the eyeball steering angle, the depth of field data and the GPS data of the vehicle;
loading target GPS data into a map obtained by calling, and judging whether a target object exists or not;
when the target object exists, pushing related information corresponding to the target object to a vehicle terminal;
the vehicle terminal is used for: and receiving and displaying the related information corresponding to the target object returned by the first server.
Preferably, the first server is further configured to: sending the eyeball steering angle, the depth of field data and the GPS data of the vehicle to a second server, wherein the second server is a cloud server;
the second server is used for: calculating the received eyeball steering angle, the received depth of field data and the GPS data of the vehicle to obtain new target GPS data, judging whether the new target GPS data is the same as the target GPS data, loading the new target GPS data into a high-definition map when the new target GPS data is different from the target GPS data, judging whether a new target object exists, and sending related information corresponding to the new target object to a first server when the new target object exists;
the first server is further configured to: and pushing the related information corresponding to the new target object to the vehicle terminal.
Preferably, the second server is further configured to: when judging that a new target object exists and the number of the new target objects is more than one, namely judging that a plurality of new target objects exist, and sending a confirmation request about the plurality of new target objects to a first server before sending related information corresponding to the new target objects to the first server;
the first server is further configured to: sending the received confirmation requests of the plurality of new target objects to the vehicle terminal, receiving confirmation instructions of the new target objects returned by the vehicle terminal and sending the confirmation instructions to the second server;
the second server is further configured to: acquiring relevant information corresponding to the confirmed new target object according to the received confirmation instruction, and sending the relevant information corresponding to the confirmed new target object to the first server;
the first server is further configured to: and sending the relevant information corresponding to the confirmed new target object to the vehicle terminal.
Preferably, the second server is further configured to: when the new target GPS data is different from the target GPS data, sending the new target GPS data to the first server;
the first server is further configured to: updating a pre-constructed incidence relation table according to the received new target GPS data and the related information of the new target object;
the incidence relation table is used for expressing incidence relations among the eyeball steering angle, the depth of field data, the GPS data of the vehicle, the target GPS data and the relevant information corresponding to the target object.
Preferably, the first server is further configured to: communicating with a second server according to a priority rule defined in a communication protocol, specifically for:
when the roadside unit is detected to exist, judging whether the roadside unit with the communication efficiency meeting the preset condition exists or not, and if the roadside unit exists, sending eyeball steering angle, depth of field data and GPS data of the vehicle to a second server through the determined roadside unit; meanwhile, the determined roadside unit is used for: sending GPS data and surrounding environment information of the roadside unit to a second server;
and when the roadside unit with the communication efficiency meeting the preset condition does not exist or the roadside unit is not detected, transmitting the eyeball steering angle, the depth of field data and the GPS data of the vehicle to the second server through the base station.
Preferably, the second server is further configured to: and if no target object can be identified after the new target GPS data is loaded to a high-definition map, loading the GPS data of the roadside unit, the surrounding environment information and the new target GPS data obtained by calculation into the high-definition map for relocation.
Preferably, the first server is further configured to: and when any target object is not identified, sending a voice prompt to the vehicle terminal so that the user resends the voice request according to the voice prompt.
Preferably, the first server is specifically configured to: calculating the offset according to the eyeball steering angle and the field depth data;
and calculating target GPS data according to the offset and the GPS data of the vehicle.
Example 3
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing all the methods described in embodiment 1 when executing the computer program.
Fig. 3 is an internal structural diagram of a computer device according to an embodiment of the present invention. The computer device may be a server, and its internal structure diagram may be as shown in fig. 3. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an eye tracking based target push method.
Those skilled in the art will appreciate that the configuration shown in fig. 3 is a block diagram of only a portion of the configuration associated with aspects of the present invention and is not intended to limit the computing devices to which aspects of the present invention may be applied, and that a particular computing device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above examples only show some embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A target pushing method based on eyeball tracking is characterized by comprising the following steps:
the method comprises the steps that a first server receives a first picture about a user sight line sent by a first camera and a second picture about a vehicle foreground sent by a second camera, wherein the first picture and the second picture are obtained by shooting when a shooting instruction triggered by a user is obtained through the first camera and the second camera respectively;
identifying the eyeball steering angle of the user in the first picture, simultaneously calculating the depth of field data of the second picture and acquiring the GPS data of the vehicle at the trigger moment, and calculating to obtain target GPS data according to the eyeball steering angle, the depth of field data and the GPS data of the vehicle;
loading the target GPS data into a map obtained by calling, and judging whether a target object exists or not;
and when the target object exists, pushing the relevant information corresponding to the target object to the vehicle terminal.
2. The method of claim 1, further comprising:
the first server sends the eyeball steering angle, the depth of field data and the GPS data of the vehicle to a second server, wherein the second server is a cloud server;
the second server calculates the received eyeball steering angle, the received depth of field data and the received GPS data of the vehicle to obtain new target GPS data, judges whether the new target GPS data is the same as the target GPS data or not, loads the new target GPS data into a high-definition map when the new target GPS data is different from the target GPS data, judges whether a new target object exists or not, and sends related information corresponding to the new target object to the first server when the new target object exists;
the pushing of the relevant information corresponding to the target object to the vehicle terminal by the first server specifically includes:
and the first server pushes the related information corresponding to the new target object to the vehicle terminal.
3. The method according to claim 2, wherein when the second server determines that there are new target objects and the number of the new target objects is greater than one, that is, it is determined that there are a plurality of new target objects, before sending the related information corresponding to the new target objects to the first server, the method further comprises:
the second server sending confirmation requests to the first server regarding the plurality of new target objects;
the first server sends the received confirmation requests of the plurality of new target objects to the vehicle terminal, receives a confirmation instruction of the new target object returned by the vehicle terminal and sends the confirmation instruction to the second server;
the second server acquires relevant information corresponding to the confirmed new target object according to the received confirmation instruction;
the sending, by the second server, the relevant information corresponding to the new target object to the first server specifically includes:
the second server sends the relevant information corresponding to the confirmed new target object to the first server;
the pushing, by the first server, the related information corresponding to the new target object to the vehicle terminal specifically includes:
and the first server sends the relevant information corresponding to the confirmed new target object to the vehicle terminal.
4. The method of claim 2, further comprising:
when the new target GPS data is different from the target GPS data, the second server sends the new target GPS data to the first server;
the first server updates a pre-constructed incidence relation table according to the received new target GPS data and the related information of the new target object;
the incidence relation table is used for expressing incidence relations among the eyeball steering angle, the depth of field data, the GPS data of the vehicle, the target GPS data and relevant information corresponding to the target object.
5. The method according to any one of claims 2 to 4, further comprising:
the first server communicates with the second server according to priority rules defined in a communication protocol;
the method specifically comprises the following steps:
when the first server detects that a roadside unit exists, judging whether the roadside unit with the communication efficiency meeting the preset condition exists, if so, sending the eyeball turning angle, the depth of field data and the GPS data of the vehicle to the second server through the determined roadside unit; meanwhile, the determined roadside unit sends GPS data and surrounding environment information of the roadside unit to the second server;
and when the first server judges that no roadside unit with the communication efficiency meeting the preset condition exists or no roadside unit is detected, the eyeball turning angle, the depth of field data and the GPS data of the vehicle are sent to the second server through the base station.
6. The method of claim 5, wherein when no target object is identified after the second server loads the new target GPS data into a high definition map, the method further comprises:
and the second server loads the GPS data of the roadside unit, the surrounding environment information and the calculated new target GPS data into the high-definition map for relocation.
7. The method of claim 1, further comprising:
and when the first server does not identify any target object, sending a voice prompt to the vehicle terminal so that the user resends the voice request according to the voice prompt.
8. The method according to any one of claims 1 to 4, 6 and 7, wherein the calculating, by the first server, a target GPS data according to the eyeball steering angle, the depth of field data and the GPS data of the vehicle specifically comprises:
the first server calculates an offset according to the eyeball turning angle and the depth of field data;
and calculating the target GPS data according to the offset and the GPS data of the vehicle.
9. A target pushing system based on eyeball tracking is characterized by comprising a first camera, a second camera, a vehicle terminal and a first server;
the first server is configured to:
receiving a first picture about the sight of a user and a second picture about the vehicle foreground, wherein the first picture and the second picture are sent by the first camera and the second camera respectively, and the first picture and the second picture are obtained by shooting when a shooting instruction triggered by the user is obtained by the first camera and the second camera;
identifying the eyeball steering angle of the user in the first picture, simultaneously calculating the depth of field data of the second picture and acquiring the GPS data of the vehicle at the trigger moment, and calculating to obtain target GPS data according to the eyeball steering angle, the depth of field data and the GPS data of the vehicle;
loading the target GPS data into a map obtained by calling, and judging whether a target object exists or not;
when the target object exists, pushing related information corresponding to the target object to a vehicle terminal;
and the vehicle terminal is used for receiving and displaying the related information corresponding to the target object returned by the first server.
10. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that:
the processor, when executing the computer program, implements the eye tracking based target pushing method according to any one of claims 1 to 8.
CN202010958016.9A 2020-09-14 2020-09-14 Target pushing method, system and equipment based on eyeball tracking Active CN111931702B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010958016.9A CN111931702B (en) 2020-09-14 2020-09-14 Target pushing method, system and equipment based on eyeball tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010958016.9A CN111931702B (en) 2020-09-14 2020-09-14 Target pushing method, system and equipment based on eyeball tracking

Publications (2)

Publication Number Publication Date
CN111931702A true CN111931702A (en) 2020-11-13
CN111931702B CN111931702B (en) 2021-02-26

Family

ID=73309891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010958016.9A Active CN111931702B (en) 2020-09-14 2020-09-14 Target pushing method, system and equipment based on eyeball tracking

Country Status (1)

Country Link
CN (1) CN111931702B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112758099A (en) * 2020-12-31 2021-05-07 福瑞泰克智能系统有限公司 Driving assistance method and device, computer equipment and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104729485A (en) * 2015-03-03 2015-06-24 北京空间机电研究所 Visual positioning method based on vehicle-mounted panorama image and streetscape matching
CN110703904A (en) * 2019-08-26 2020-01-17 深圳疆程技术有限公司 Augmented virtual reality projection method and system based on sight tracking
CN110929703A (en) * 2020-02-04 2020-03-27 北京未动科技有限公司 Information determination method and device and electronic equipment
CN111159459A (en) * 2019-12-04 2020-05-15 恒大新能源汽车科技(广东)有限公司 Landmark positioning method, device, computer equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104729485A (en) * 2015-03-03 2015-06-24 北京空间机电研究所 Visual positioning method based on vehicle-mounted panorama image and streetscape matching
CN110703904A (en) * 2019-08-26 2020-01-17 深圳疆程技术有限公司 Augmented virtual reality projection method and system based on sight tracking
CN111159459A (en) * 2019-12-04 2020-05-15 恒大新能源汽车科技(广东)有限公司 Landmark positioning method, device, computer equipment and storage medium
CN110929703A (en) * 2020-02-04 2020-03-27 北京未动科技有限公司 Information determination method and device and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112758099A (en) * 2020-12-31 2021-05-07 福瑞泰克智能系统有限公司 Driving assistance method and device, computer equipment and readable storage medium
CN112758099B (en) * 2020-12-31 2022-08-09 福瑞泰克智能系统有限公司 Driving assistance method and device, computer equipment and readable storage medium

Also Published As

Publication number Publication date
CN111931702B (en) 2021-02-26

Similar Documents

Publication Publication Date Title
EP3550479A1 (en) Augmented-reality-based offline interaction method and apparatus
US20220020127A1 (en) Dynamic image recognition model updates
EP3480627A1 (en) Positioning method and device
US20210272306A1 (en) Method for training image depth estimation model and method for processing image depth information
US10939240B2 (en) Location information processing method and apparatus, storage medium and processor
CN105318881A (en) Map navigation method, and apparatus and system thereof
CN110363735B (en) Internet of vehicles image data fusion method and related device
CN107690149B (en) Method for triggering network policy update, management function entity and core network equipment
US11989400B2 (en) Data sharing method and device
WO2019138597A1 (en) System and method for assigning semantic label to three-dimensional point of point cloud
WO2020135065A1 (en) User information management method and apparatus, and identification method and apparatus
US20220074743A1 (en) Aerial survey method, aircraft, and storage medium
CN111931702B (en) Target pushing method, system and equipment based on eyeball tracking
EP3800443B1 (en) Database construction method, positioning method and relevant device therefor
CN114554391A (en) Parking lot vehicle searching method, device, equipment and storage medium
CN114925295A (en) Method for determining guide point of interest point, related device and computer program product
CN111126209A (en) Lane line detection method and related equipment
CN109633725B (en) Processing method and device for positioning initialization and readable storage medium
EP4098978A2 (en) Data processing method and apparatus for vehicle, electronic device, and medium
CN111105641A (en) BIM model-based vehicle searching method and device and readable storage medium
US9079309B2 (en) Terminal positioning method and system, and mobile terminal
JP7478831B2 (en) Autonomous driving based riding method, device, equipment and storage medium
US20230162309A1 (en) Traffic accident handling method and device, and storage medium
CN113240839A (en) Vehicle unlocking method, device, equipment, server, medium and product
CN114153312B (en) VPA control method, device, equipment, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 4 / F, building 5, 555 Dongqing Road, hi tech Zone, Ningbo City, Zhejiang Province

Applicant after: Ningbo Junlian Zhixing Technology Co.,Ltd.

Address before: 4 / F, building 5, 555 Dongqing Road, hi tech Zone, Ningbo City, Zhejiang Province

Applicant before: Ningbo Junlian Zhixing Technology Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant