CN115752481A - AR navigation method, AR glasses, medium and equipment based on image recognition

Publication number: CN115752481A (granted as CN115752481B)
Application number: CN202211589057.0A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 郭文艺 (Guo Wenyi)
Assignee (original and current): Guangdong Icar Guard Information Technology Co., Ltd.
Legal status: Active (granted)

Abstract

The invention relates to an AR navigation method based on image recognition, AR glasses, a medium, and a device. By automatically acquiring the position of an object on the current forward route and predicting collision position coordinates from the object's position and the wearer's forward speed relative to it, the method realizes trajectory prediction and speed calculation for a moving object using AR technology. A second navigation image is projected into a target area in front of the glasses frame, so that the predicted object motion trajectory and the navigation route indication information are displayed together in the target area. Real-time object path updating and object path collision prediction are thus achieved through intuitive AR rendering, allowing the wearer to change the forward route in time according to the object's predicted trajectory and avoid potential safety accidents or hidden dangers, which greatly improves the flexibility and portability of AR navigation.

Description

AR navigation method, AR glasses, medium and equipment based on image recognition
Technical Field
The present application relates to the technical field of smart glasses, and in particular to an AR navigation method based on image recognition, AR glasses, a medium, and a device.
Background
Smart wearable devices have become a trend in the technology industry. A smart wearable device is the general term for everyday wearable items that have been intelligently designed and developed using wearable technology. The arrival of the wearable smart device era extends human capability: through such devices, people can better perceive external and internal information, process information more efficiently with the assistance of computers, networks, and even other people, and communicate more seamlessly.
Existing smart glasses can already perform route navigation in combination with AR technology. However, limited by the state of the technology, current AR navigation glasses can only present a route within the current area; they cannot predict a dynamic object appearing in the scene or analyze its trajectory, so the application of AR navigation remains limited.
Disclosure of Invention
In view of the above problems, the present application provides an AR navigation method based on image recognition, AR glasses, a medium, and a device, which solve the inability of current AR navigation glasses to recognize dynamic objects and analyze their trajectories.
In order to achieve the above object, in a first aspect, the present invention provides an AR navigation method based on image recognition, suitable for AR navigation glasses. The AR navigation glasses comprise a glasses frame in which an intelligent navigation system is disposed; the intelligent navigation system comprises a navigation module, a projection module, and a processing module. The method comprises the following steps:
the navigation module acquires positioning information, generates a navigation route according to the input starting point position and the input end point position, and generates a first navigation image based on the positioning information and the navigation route;
the processing module receives, from the infrared image acquisition module, the real-time position of an object on the current forward route of the wearer of the AR navigation glasses, judges whether a collision is possible according to the wearer's forward speed relative to the object and the object's position, and, if a collision is possible, predicts the collision position coordinates and marks them in the first navigation image to obtain a second navigation image;
the projection module projects the second navigation image to a target area in front of the glasses frame.
In some embodiments, receiving from the infrared image acquisition module the real-time position of an object on the current forward route of the wearer of the AR navigation glasses, and judging whether a collision is possible according to the wearer's forward speed relative to the object and the object's position, includes:
after the infrared image acquisition module detects an object on the current forward route of the wearer of the AR navigation glasses, acquiring the object's motion trajectory within a predetermined time period; estimating, from the object's motion trajectory, the object's coordinate position after a first time period; and judging, from that coordinate position, whether a collision is possible, the first time period being the time the wearer needs to reach the object's position, predicted from the wearer's forward speed relative to the object and the object's position.
In some embodiments, the intelligent navigation system further comprises a voice prompt module; the method further comprises the following steps:
the processing module sends a first control signal to the voice prompt module so that the voice prompt module issues first prompt information, the first prompt information comprising the collision coordinate position and/or the time interval between the predicted collision time and the current system timestamp.
In some embodiments, the intelligent navigation system further comprises an infrared image acquisition module; the method further comprises the following steps:
the infrared image acquisition module acquires an object image on the current advancing route of a wearer of the AR navigation glasses in real time;
the processing module recognizes the object image acquired by the infrared image acquisition module and determines the object type; if the recognized object type matches a preset type, it executes the step of judging whether a collision is possible according to the wearer's forward speed relative to the object and the object's position; otherwise, the judging step is not executed.
In some embodiments, the intelligent navigation system further comprises a voice prompt module; the method further comprises the following steps:
the processing module sends a second control signal to the voice prompt module so that the voice prompt module issues second prompt information, the second prompt information comprising the object type, together with the collision coordinate position and/or the time interval between the predicted collision time and the current system timestamp.
In some embodiments, the infrared image acquisition module is disposed within the glasses frame or in front of the wearer's vehicle; when the infrared image acquisition module is disposed in front of the wearer's vehicle, it is communicatively connected to the processing module.
In some embodiments, the intelligent navigation system further comprises an eyeball tracking module and a communication module, the eyeball tracking module is arranged in the eyeglass frame around the target area, and the AR navigation eyeglasses are further connected with the vehicle-mounted control module through the communication module; the method comprises the following steps:
the eyeball tracking module collects the wearer's eyeball trajectory information, and when the processing module judges that the collected eyeball trajectory information matches preset eyeball trajectory information, it sends a vehicle control signal to the vehicle-mounted control module, so that the vehicle-mounted control module adjusts the current vehicle's operating parameters according to the vehicle control signal.
In order to achieve the above object, in a second aspect, the present invention provides AR navigation glasses. The AR navigation glasses comprise a glasses frame in which an intelligent navigation system is disposed; the intelligent navigation system comprises a navigation module, a projection module, and a processing module. The navigation module is configured to acquire positioning information, generate a navigation route according to an input starting point position and an input end point position, and generate a first navigation image based on the positioning information and the navigation route. The processing module is configured to receive, from the infrared image acquisition module, the real-time position of an object on the current forward route of the wearer of the AR navigation glasses, judge whether a collision is possible according to the wearer's forward speed relative to the object and the object's position, and, if a collision is possible, predict the collision position coordinates and mark them in the first navigation image to obtain a second navigation image. The projection module is configured to project the second navigation image to a target area in front of the glasses frame.
To achieve the above object, in a third aspect, the present invention provides a computer-readable storage medium having stored thereon a computer program, which, when executed by a processor, performs the steps of the AR navigation method based on image recognition as set forth in the first aspect.
To achieve the above object, in a fourth aspect, the present invention provides an electronic device comprising a processor, a storage medium, and a bus. The storage medium stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the storage medium through the bus, and the processor executes the machine-readable instructions to perform the steps of the image-recognition-based AR navigation method of the first aspect.
In contrast to the prior art, this technical solution automatically acquires the object position on the current forward route and predicts the collision position coordinates from the object's forward speed and position, thereby realizing trajectory prediction and speed calculation for a moving object using AR technology. The second navigation image is projected into a target area in front of the glasses frame, so that the predicted object motion trajectory and the navigation route indication information are displayed together in that area. Real-time object path updating and collision prediction are achieved through intuitive AR rendering, allowing the wearer to change the forward route in time according to the object's predicted trajectory and avoid potential safety accidents or hidden dangers, which greatly improves the flexibility and portability of AR navigation.
The above description is only an outline of the technical solution of the present invention. To make the technical solution clearer to those skilled in the art, the invention may be implemented according to the contents of the specification text and drawings; and to make the above and other objects, features, and advantages of the present invention easier to understand, the following description proceeds with reference to the embodiments of the present application and the accompanying drawings.
Drawings
The drawings are only intended to illustrate the principles, implementations, application scenarios, features, and effects of particular embodiments of the invention and related matters, and are not to be construed as limiting the application.
In the drawings of the specification:
FIG. 1 is a schematic diagram of the steps of a first exemplary AR navigation method;
FIG. 2 is a schematic diagram of the steps of a second exemplary AR navigation method;
FIG. 3 is a schematic diagram of the steps of a third exemplary AR navigation method;
FIG. 4 is a first schematic view of the AR navigation glasses according to an embodiment;
FIG. 5 is a second schematic view of the AR navigation glasses according to an embodiment.
The reference numerals referred to in the above figures are explained below:
1. an intelligent navigation system;
11. a navigation module;
12. a projection module;
13. a processing module;
14. a voice prompt module;
15. an infrared image acquisition module;
16. an eyeball tracking module;
17. a communication module.
Detailed Description
In order to explain in detail possible application scenarios, technical principles, practical embodiments, and the like of the present application, the following detailed description is given with reference to the accompanying drawings in conjunction with the listed embodiments. The embodiments described herein are merely for more clearly illustrating the technical solutions of the present application, and therefore, the embodiments are only used as examples, and the scope of the present application is not limited thereby.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. Appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor to separate or alternative embodiments mutually exclusive of other embodiments. In principle, the technical features mentioned in the embodiments may be combined in any manner to form a corresponding implementable solution, as long as there is no technical contradiction or conflict.
Unless defined otherwise, technical terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the use of relational terms herein is intended to describe specific embodiments only and is not intended to limit the present application.
In the description of the present application, the term "and/or" describes a logical relationship between objects and covers three cases: for example, "A and/or B" means A alone, B alone, or both A and B. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
In this application, terms such as "first" and "second" are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Without further limitation, the use of "including", "comprising", "having", or other similar open-ended expressions in this application is intended to cover a non-exclusive inclusion; such expressions do not exclude the presence of additional elements in a process, method, or article that includes the recited elements, so that a process, method, or article that includes a list of elements may include not only those elements but also other elements not expressly listed or inherent to it.
Consistent with examination guidelines, the terms "greater than", "less than", "exceeding", and the like in this application are understood to exclude the stated number, while "above", "below", "within", and the like are understood to include it. In addition, in the description of the embodiments of the present application, "a plurality" means two or more (including two), and analogous expressions such as "a plurality of groups" and "a plurality of times" are understood likewise, unless specifically defined otherwise.
In the description of the embodiments of the present application, spatially relative expressions such as "central", "longitudinal", "lateral", "length", "width", "thickness", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial", and "circumferential" indicate orientations or positional relationships based on those shown in the specific embodiments or drawings. They are used only for convenience in describing the embodiments or for the reader's ease of reading, and do not indicate or imply that the device or component in question must have a specific position or orientation, or be constructed or operated in a specific orientation; they should therefore not be construed as limiting the embodiments of the present application.
Unless specifically stated or limited otherwise, the terms "mounted", "connected", "secured", and "disposed" used in the description of the embodiments of the present application are to be construed broadly. For example, a connection may be a fixed connection, a detachable connection, or an integral arrangement; it may be a mechanical connection, an electrical connection, or a communication connection; it may be direct, or indirect through an intermediary; and it may be internal to two elements or an interaction between two elements. The specific meanings of the above terms in the embodiments can be understood by those skilled in the art according to the specific situation.
Referring to fig. 1, in a first aspect, the present invention provides an AR navigation method based on image recognition, suitable for AR navigation glasses. The AR navigation glasses comprise a glasses frame in which an intelligent navigation system is disposed; the intelligent navigation system comprises a navigation module, a projection module, and a processing module. The method comprises the following steps:
s1, a navigation module acquires positioning information, generates a navigation route according to an input starting point position and an input end point position, and generates a first navigation image based on the positioning information and the navigation route;
s2, the processing module receives the infrared image acquisition module to acquire the position of an object on the current advancing route of the wearer of the AR navigation glasses in real time, judges whether the possibility of collision exists according to the advancing speed of the wearer of the AR navigation glasses relative to the object and the position of the object, predicts the coordinate of the collision position if the possibility exists, and marks the coordinate of the collision position in the first navigation image to obtain a second navigation image;
and S3, the projection module projects the second navigation image to a target area in front of the glasses frame.
The glasses frame also includes a lens portion onto which the projection module 12 projects the image, so that the wearer can perform AR navigation in combination with the actual scene. Optionally, the lens portion is a digital screen, and a camera is provided at the top of each of the two lenses. The two cameras simultaneously capture the view in the direction of travel; their images are integrated into a single image, which is treated as the actual scene at the current positioning information and projected onto the lenses. Providing two cameras simulates the image acquisition of a pair of eyes and widens the field of view, while integrating the two cameras' images avoids the blur a single camera produces when capturing a fast-moving object, improving the recognizability and accuracy of the captured image.
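A minimal sketch of the two-camera integration, assuming the frames are already aligned; the patent does not name a fusion algorithm, so the per-pixel average here is only an illustrative stand-in (NumPy assumed):

```python
import numpy as np

def fuse_frames(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Fuse the two lens-top camera frames into one scene image.

    Averaging two simultaneous exposures suppresses the motion blur a
    single camera would show on a fast-moving object. A real system
    would rectify and register the frames first (assumed aligned here).
    """
    assert left.shape == right.shape, "frames must share one geometry"
    # Widen to uint16 before summing to avoid uint8 overflow.
    return ((left.astype(np.uint16) + right.astype(np.uint16)) // 2).astype(np.uint8)
```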
The navigation module 11 has a built-in GPS or BeiDou positioning system and can acquire the current position of the AR navigation glasses in real time. Optionally, the navigation module 11 also contains a wireless communication module 17 such as Bluetooth or WiFi, which can establish a connection with the wearer's electronic terminal. After the connection is established, the navigation module 11 acquires the map information on the terminal, including the starting point position and the end point position; the electronic terminal may be a car head unit, a mobile phone, a smart watch, or another wearable electronic device. The navigation module 11 generates a navigation route according to the input starting point and end point positions, acquires the wearer's current positioning information, and combines the positioning information with the navigation route and the current scene image to generate the first navigation image. Specifically, it judges the wearer's position on the navigation route from the positioning information, acquires the current scene image, and generates a navigation indicator on that image according to the navigation route; the image containing the current scene image and the navigation indicator is the first navigation image.
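As a rough illustration of assembling the first navigation image, a sketch under stated assumptions: nearest-route-point matching stands in for whatever map matching the navigation module 11 actually performs, and `FirstNavImage` is a hypothetical container, not a structure named in the patent:

```python
from dataclasses import dataclass, field

Point = tuple[float, float]  # (x, y) in an arbitrary planar frame

@dataclass
class FirstNavImage:
    scene_image: object        # fused camera frame (see sketch above)
    route: list[Point]         # generated navigation route
    wearer_index: int          # wearer's matched position on the route
    indicators: list[str] = field(default_factory=list)

def build_first_nav_image(position: Point, route: list[Point],
                          scene_image: object) -> FirstNavImage:
    # Match the wearer to the nearest route point (a stand-in for real
    # map matching), then attach a navigation indicator for the next leg.
    idx = min(range(len(route)),
              key=lambda i: (route[i][0] - position[0]) ** 2
                          + (route[i][1] - position[1]) ** 2)
    indicator = ("arrive" if idx + 1 >= len(route)
                 else f"head toward {route[idx + 1]}")
    return FirstNavImage(scene_image, route, idx, [indicator])
```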
As a preferred embodiment, the update frequency of the first navigation image varies with the current route direction: when the route is straight, the first navigation image is updated at a first preset frequency; when the route turns by less than 90 degrees, at a second preset frequency; and when the route turns by more than 90 degrees, at a third preset frequency. The first, second, and third preset frequencies increase in that order. Dynamically allocating the update frequency of the first navigation image saves the AR navigation glasses' power consumption and the amount of image-simulation computation, increases the operating rate, and improves the continuity of the first navigation image updates.
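A minimal sketch of this selection; the three frequency values are assumptions, since the patent requires only that they increase in order:

```python
def update_frequency_hz(turn_angle_deg: float) -> float:
    """Refresh rate of the first navigation image by route direction.

    1 / 5 / 10 Hz are illustrative values only; the patent merely
    requires first < second < third preset frequency.
    """
    if turn_angle_deg == 0.0:    # straight segment
        return 1.0               # first preset frequency
    if turn_angle_deg < 90.0:    # gentle turn
        return 5.0               # second preset frequency
    return 10.0                  # sharp turn: third preset frequency
```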
The infrared image acquisition module 15 can be integrated on the vehicle. It is a detector based on the infrared spectrum, such as an infrared radar or an infrared sensor, and uses infrared image generation: the heat of an object yields its thermal imaging distribution in real time, from which the object's specific position and shape information are obtained. In this embodiment, a dynamic object is a moving object that gives off heat; it may be an animal or a heated object. The position and shape information of the current object refer to the object's position on the current forward route of the wearer of the AR glasses, where the current forward route includes the road currently being travelled and the environment within a certain distance around it. For example, when the wearer is in the middle of three same-direction lanes, the current forward route includes the environment of all three lanes; when the wearer is on a single-lane road, it includes the pedestrian environment on both sides of the road, and so on. After acquiring the position and shape information of the current object, the infrared image acquisition module 15 sends the information to the processing module 13.
After receiving the position and shape information of the current object, the processing module 13 judges whether a collision is possible according to the object's forward speed and position. If a collision is possible, the collision position coordinates are obtained by trajectory prediction and calculation from the object's forward speed and position; the number of collision position coordinates equals the number of predicted trajectories, so if there are three possible collision trajectories, three sets of collision position coordinates are calculated. After the collision position coordinates are predicted, they are marked in the first navigation image to obtain the second navigation image; that is, the second navigation image is generated on the basis of the first navigation image combined with the collision position coordinates.
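A hedged sketch of deriving one collision coordinate per predicted trajectory. The patent leaves the trajectory model unspecified; here each trajectory and the wearer's path are assumed to be lists of `(time, (x, y))` samples on a shared clock, and the 0.5 m threshold is an illustrative parameter:

```python
Sample = tuple[float, tuple[float, float]]   # (time in s, (x, y) in m)

def predict_collision_points(trajectories: list[list[Sample]],
                             wearer_path: list[Sample],
                             tolerance_m: float = 0.5) -> list[tuple[float, float]]:
    """Return at most one collision coordinate per predicted trajectory:
    the first object position that coincides in time and space (within
    tolerance_m) with the wearer's own predicted path."""
    collisions = []
    for trajectory in trajectories:
        for (t_obj, (xo, yo)), (t_w, (xw, yw)) in zip(trajectory, wearer_path):
            same_time = abs(t_obj - t_w) < 1e-6
            close = ((xo - xw) ** 2 + (yo - yw) ** 2) ** 0.5 < tolerance_m
            if same_time and close:
                collisions.append((xo, yo))
                break
    return collisions
```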
The projection module 12 projects the second navigation image to a target area in front of the glasses frame, where the target area is a position where the lens is located in the glasses frame.
By automatically acquiring the object position on the current forward route and predicting the collision position coordinates from the object's forward speed and position, trajectory prediction and speed calculation for a moving object are realized using AR technology. The second navigation image is projected into the target area in front of the glasses frame, so that the predicted object motion trajectory and the navigation route indication information are displayed together in the target area. Real-time object path updating and collision prediction are achieved through intuitive AR rendering, allowing the wearer to change the forward route in time according to the object's predicted trajectory and avoid potential safety accidents or hidden dangers, which greatly improves the safety and portability of AR navigation.
Referring to fig. 2, in some embodiments, receiving from the infrared image acquisition module 15 the real-time position of an object on the current forward route of the wearer of the AR navigation glasses, and judging whether a collision is possible according to the wearer's forward speed relative to the object and the object's position, includes:
s21, after the infrared image acquisition module detects that an object exists on the current advancing route of a wearer of the AR navigation glasses, acquiring a motion track of the object in a preset time period;
s22, judging the coordinate position of the object after the first time period according to the motion track of the object;
and S23, judging whether the possibility of collision exists or not according to the coordinate position of the object after a first time period, wherein the first time period is the time required by the wearer to reach the object position predicted according to the advancing speed of the wearer relative to the object and the object position.
In this embodiment, the predetermined time period may be set manually, for example to 50 ms or 90 ms. Optionally, it may be determined by the current object's rate of motion: the object positions acquired by the infrared image acquisition module 15 are sampled at a fixed frequency, and the object's rate and displacement are calculated; if the rate exceeds 10 m/s, the sampling interval of the predetermined time period is shortened to 50 ms. The moving object's trajectory is then acquired, the trajectory the object may follow within the first time period is plotted from it, and the object's coordinate position after the first time period is read off that trajectory. Whether a collision is possible is judged from that coordinate position, the first time period being the predicted time the wearer needs to reach the object's position given the wearer's forward speed relative to the object and the object's position.
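A sketch of the speed-dependent sampling interval and the trajectory extrapolation, under the assumption of constant velocity over the first time period (the patent only says the trajectory is measured and plotted); the 90 ms default mirrors the example values above:

```python
def sampling_period_ms(object_speed_mps: float) -> int:
    # Shorten the sampling interval for fast objects, per the text:
    # above 10 m/s drop to 50 ms, otherwise keep the slower 90 ms.
    return 50 if object_speed_mps > 10.0 else 90

def extrapolate(track: list[tuple[float, float]],
                dt_s: float, horizon_s: float) -> tuple[float, float]:
    """Project the object's position horizon_s seconds ahead from its
    last two samples, dt_s apart (constant velocity is an assumption)."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt_s, (y1 - y0) / dt_s
    return x1 + vx * horizon_s, y1 + vy * horizon_s
```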
For example, suppose the wearer's normal travel speed is 20 m/s and, combining this with the object's own travel speed, the wearer's forward speed relative to the object is 15 m/s; that is, the object moves in the same direction at 5 m/s. If the distance between the wearer and the object's current position is 50 m, the wearer, travelling at 20 m/s, needs a first time period of 2.5 s to reach that position. During that period the object moves further, ending up 62.5 m from the wearer's starting point, and it is judged whether the object may collide with the current travel route at the 62.5 m mark.
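This arithmetic can be reproduced in a few lines (same-direction travel assumed, as in the example):

```python
wearer_speed = 20.0      # m/s, wearer's own travel speed
relative_speed = 15.0    # m/s, wearer's speed relative to the object
object_speed = wearer_speed - relative_speed   # 5 m/s, same direction
gap_m = 50.0             # m, current distance to the object

first_period_s = gap_m / wearer_speed                  # 2.5 s
object_mark_m = gap_m + object_speed * first_period_s  # 62.5 m

print(first_period_s, object_mark_m)   # 2.5 62.5, matching the example
```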
Acquiring and plotting the object's motion trajectory within the predetermined time period and judging whether it interferes with the wearer's forward route facilitates the subsequent collision calculations, including the collision position coordinates and an avoidance route, reserves time for the wearer to execute an avoidance operation, and improves the safety of AR navigation.
Referring to fig. 5, in some embodiments, the intelligent navigation system 1 further includes a voice prompt module 14; the method further comprises the following steps:
the processing module sends a first control signal to the voice prompt module so that the voice prompt module issues first prompt information, the first prompt information comprising the collision coordinate position and/or the time interval between the predicted collision time and the current system timestamp.
When the collision coordinate position is prompted, what is announced is the marker used in the second navigation image rather than the raw coordinates, as the following example illustrates: the collision coordinate position is represented as A (11, 12) in the second navigation image, and when the voice prompt module 14 issues the first prompt information it broadcasts "the collision coordinate position is point A", without reciting the concrete coordinate data, so that the wearer can quickly locate point A in combination with the AR image. In this embodiment the coordinates are positions within the current second navigation image, so the specific coordinate values have no reference value to the wearer, whereas the marker symbol does. Optionally, when the voice prompt module 14 issues the first prompt information, the letter A may flash or be bolded so that the position of point A in the second navigation image is more conspicuous and easier to find.
The predicted collision time is the timestamp at which the wearer and the object are predicted to collide on the predicted trajectory. As a preferred embodiment, it can be combined with the current system timestamp to obtain the time interval, which the voice prompt module 14 then announces; this conveys the interval to the wearer more intuitively. The wearer quickly locates point A in the lens via the voice prompt, while the interval is reported synchronously by voice, shortening the wearer's reaction time, reserving more time for avoidance operations, improving the avoidance success rate, and further improving the wearer's safety while driving.
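A sketch of composing the first prompt information; the exact wording is an assumption, since the patent specifies only the contents (the collision-position marker and/or the interval to the predicted collision):

```python
import time

def first_prompt(marker: str, predicted_collision_ts: float) -> str:
    """Build the spoken first prompt: the marker letter, not raw
    coordinates, plus the interval between the predicted collision
    time and the current system timestamp."""
    interval_s = predicted_collision_ts - time.time()
    return (f"Collision coordinate position is point {marker}, "
            f"predicted in {interval_s:.1f} seconds")
```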
Referring to fig. 5 and 3, in some embodiments, the intelligent navigation system further includes an infrared image acquisition module; the method further comprises the following steps:
s211, an infrared image acquisition module acquires an object image on the current advancing route of a wearer of the AR navigation glasses in real time;
s212, the processing module identifies the object image acquired by the infrared image acquisition module, determines the type of the object, and if the identified type of the object is in accordance with the preset type, the step is executed to judge whether the possibility of collision exists according to the advancing speed of the wearer of the AR navigation glasses relative to the object and the position of the object, otherwise, the step is not executed.
In this embodiment, determining the object type is a rough classification: for example, objects may be divided by weight into light, medium-heavy, and heavy, or by size and shape into small, medium, and large.
As a preferred embodiment, objects are divided by weight. Light objects are those that pose no threat to the wearer even in a collision, such as plastic bags, cardboard or sheets of paper, and bags. Medium-heavy objects are those of some weight whose collision would have a certain effect on the wearer, such as damage to the vehicle's exterior (when driving) or minor injury (when riding); examples are rabbits, squirrels, and packing boxes containing goods. Heavy objects are those whose collision would cause significant harm to the wearer or to the struck object, such as a running child, racing riding equipment, a pedestrian, or a large or medium-sized pet. In this embodiment the preset types are the medium-heavy and heavy object types, i.e., objects whose collision with the wearer would clearly affect the wearer. Only when the recognized object falls into the preset types do the AR navigation glasses perform trajectory prediction and collision-point analysis from the object's movement speed and trajectory; a light object is disregarded. This optimizes the computational load of the AR navigation glasses, reduces unnecessary program operations, increases the operating speed, and further improves the fluency of the intelligent navigation system 1.
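A minimal sketch of the preset-type gate, using the weight classes named above:

```python
PRESET_TYPES = {"medium-heavy", "heavy"}   # light objects are disregarded

def should_predict_collision(recognized_type: str) -> bool:
    """Run trajectory prediction and collision-point analysis only for
    objects whose recognized weight class is in the preset types."""
    return recognized_type in PRESET_TYPES
```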
As a preferred embodiment, the processing module 13 pre-screens, according to the current positioning information, the categories of dynamic objects that may appear from the database; determines the dynamic object's category from its acquired shape and size; obtains the motion speed range for that category; quickly narrows down the object's specific type from the speed range; and determines the object's motion trajectory from its specific type. For example, if the current positioning shows passage through a park, the categories of dynamic objects corresponding to the park while the wearer is within it include toys, pets, children, and adults. After obtaining the object shape from the infrared image acquisition module 15 (suppose a medium-sized dog), the dynamic object's category is judged from the shape to be a pet, and the speed range a pet may have, 5 m/s to 20 m/s, is obtained. The current object's travel speed is measured as 15 m/s, it is judged that the pet is a medium-sized dog, and the medium-sized dog's motion trajectory is obtained according to its shape and size. Pre-classifying objects by scene reduces the computation of image recognition, while the scene-specific classification pins down the dynamic object's category more accurately, improving the accuracy of the object's trajectory simulation and avoiding program-computed trajectories that violate normal behavior.
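A hedged sketch of the scene-based pre-screening; the category table and speed ranges are illustrative stand-ins for the patent's database, with the park/pet figures taken from the example above:

```python
# Scene -> {category: (min, max) plausible speed in m/s}; entries other
# than "pet" are assumed values for illustration.
SCENE_CATEGORIES: dict[str, dict[str, tuple[float, float]]] = {
    "park": {
        "toy":   (0.0, 5.0),
        "pet":   (5.0, 20.0),
        "child": (1.0, 8.0),
        "adult": (0.5, 10.0),
    },
}

def screen_categories(scene: str, observed_speed_mps: float) -> list[str]:
    """Narrow the candidate categories to those whose speed range covers
    the observed travel speed, reducing the image-recognition workload."""
    table = SCENE_CATEGORIES.get(scene, {})
    return [c for c, (lo, hi) in table.items() if lo <= observed_speed_mps <= hi]

print(screen_categories("park", 15.0))   # ['pet'] (the medium-dog case)
```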
In some embodiments, the intelligent navigation system 1 further comprises a voice prompt module 14; the method further comprises the following steps:
the processing module 13 sends a second control signal to the voice prompt module 14, so that the voice prompt module 14 sends a second prompt message, where the second prompt message includes the object type, and the collision coordinate position and/or the time interval between the predicted collision time and the current system timestamp.
The object type may be a general category (such as light, medium-heavy, or heavy by weight) or a specific category, such as a pet, a small pet, a cat, a dog, or a child. After confirming that a collision is possible, the voice prompt module 14 may also broadcast the predicted collision coordinate position, which can be understood with the following example: the collision coordinate position is represented as B (11, 12) in the second navigation image, and when the voice prompt module 14 issues the prompt information it broadcasts "the collision coordinate position is point B", without reciting the concrete coordinate data, so that the wearer can quickly locate point B in combination with the AR image. As before, the coordinates are positions within the current second navigation image, so the specific values have no reference value to the wearer while the marker symbol does; optionally the letter B may flash or be bolded so that it is more conspicuous and easier to find. The predicted collision time is the timestamp at which the wearer and the object are predicted to collide on the predicted trajectory; combined with the current system timestamp it yields the time interval announced by the voice prompt module 14. The wearer quickly locks onto point B in the lens via the voice prompt while the interval is reported synchronously by voice, shortening reaction time, reserving more time for avoidance operations, improving the avoidance success rate, and further improving the wearer's safety while driving.
In some embodiments, the infrared image acquisition module 15 is disposed within the glasses frame or in front of the wearer's vehicle; when disposed in front of the wearer's vehicle, the infrared image acquisition module 15 is communicatively connected to the processing module 13.
The vehicle is whatever the wearer currently travels with, including automobiles, scooters, and riding equipment such as electric bicycles, bicycles, and motorcycles. The infrared image acquisition module 15 may be disposed within the glasses frame or in front of the wearer's vehicle. When it is in front of the vehicle, the infrared image acquisition module 15 communicates with the processing module 13 wirelessly, for example via Bluetooth or WiFi, or through a wired connection. Adding the infrared image acquisition module 15 lets the AR navigation glasses work with an existing vehicle without structural or substantive modification, preserving the vehicle's safety while meeting the glasses' requirements and widening their range of application.
In some embodiments, the intelligent navigation system 1 further includes an eyeball tracking module 16 and a communication module 17, the eyeball tracking module 16 is disposed in a glasses frame around the target area, and the AR navigation glasses are further connected with the vehicle-mounted control module through the communication module 17; the method comprises the following steps:
the eyeball tracking module collects the wearer's eyeball trajectory information, and when the processing module judges that the collected eyeball trajectory information matches preset eyeball trajectory information, it sends a vehicle control signal to the vehicle-mounted control module, so that the vehicle-mounted control module adjusts the current vehicle's operating parameters according to the vehicle control signal.
The eyeball tracking module 16 receives the current eye movement information. As a preferred embodiment there are two eyeball tracking modules 16, disposed around the frame of the left and right eyes respectively; the trajectory information collected by both is integrated and corrected to obtain the corrected eyeball trajectory information, which is transmitted to the processing module 13. The processing module 13 judges whether the acquired eyeball trajectory information matches preset eyeball trajectory information; if so, it sends a vehicle control signal to the vehicle-mounted control module, which adjusts the vehicle's current operating parameters accordingly, including acceleration, deceleration, and emergency braking. The preset eyeball trajectory information may be stored in the glasses' database in advance, for example an eyeball trajectory of "0" indicating emergency braking and "S" indicating acceleration. Optionally, the preset trajectories may be collected and corrected against the individual's own eye movements: during the initialization stage of the AR navigation glasses, the trajectory information the wearer produces when executing a preset trajectory is collected, and after the same trajectory has been entered several times, the wearer's personal preset trajectory is computed. This personalizes the AR glasses, better fits each wearer's eye movement habits, and improves the accuracy and sensitivity of eyeball trajectory recognition. Especially on equipment that does not require the hands, such as a scooter used as the vehicle, controlling the vehicle's operating parameters by eyeball trajectory improves the user's convenience and comfort while driving.
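A minimal sketch of the trajectory-to-command lookup; reducing raw gaze samples to a shape label such as "0" or "S" is assumed to happen upstream in the eyeball tracking module 16:

```python
PRESET_GESTURES = {"0": "emergency_brake", "S": "accelerate"}

def eye_gesture_command(shape_label: str) -> str | None:
    """Map a recognized eyeball-trajectory shape to a vehicle control
    signal per the examples above; None means no signal is sent."""
    return PRESET_GESTURES.get(shape_label)
```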
As a preferred embodiment, the eyeball tracking module 16 collects eyeball trajectory information only when a collision is possible ahead on the forward route; in all other states the vehicle operates in the normal driving state. This prevents eyeball trajectories the wearer produces incidentally while travelling from erroneously controlling the vehicle's operating parameters and improves the safety of the AR navigation glasses.
Referring to fig. 4 and fig. 5, in a second aspect, the present invention provides AR navigation glasses. The AR navigation glasses comprise a glasses frame in which an intelligent navigation system 1 is disposed; the intelligent navigation system 1 comprises a navigation module 11, a projection module 12, and a processing module 13. The navigation module 11 is configured to acquire positioning information, generate a navigation route according to an input starting point position and an input end point position, and generate a first navigation image based on the positioning information and the navigation route. The processing module 13 is configured to receive, from the infrared image acquisition module 15, the real-time position of an object on the current forward route of the wearer, judge whether a collision is possible according to the wearer's forward speed relative to the object and the object's position, and, if a collision is possible, predict the collision position coordinates and mark them in the first navigation image to obtain a second navigation image. The projection module 12 is configured to project the second navigation image to a target area in front of the glasses frame.
The structure and operation of the device correspond to the method embodiment described above: the glasses frame includes a lens portion with a camera at the top of each lens, whose two simultaneously captured images are integrated into the actual scene image; the navigation module 11 uses its built-in GPS or BeiDou positioning, optionally connects to the wearer's electronic terminal over Bluetooth or WiFi to obtain the starting point and end point positions, and generates the first navigation image, whose update frequency is allocated dynamically by route direction; the infrared image acquisition module 15 acquires the current object's position and shape information and sends them to the processing module 13; the processing module 13 judges whether a collision is possible and, if so, obtains one set of collision position coordinates per predicted trajectory and marks them in the first navigation image to obtain the second navigation image; and the projection module 12 projects the second navigation image to the target area where the lenses are located. In addition, the coordinate range of the collision positions can optionally be enlarged, and the wearer prompted to keep clear of the current collision-position coordinate range as far as possible. As before, combining the predicted object motion trajectory with the navigation route indication in the target area lets the wearer change the forward route in time and avoid potential safety accidents or hidden dangers, greatly improving the safety and portability of AR navigation.
In some embodiments, the AR navigation glasses further comprise a vibrator and an eyeball tracking module 16, both disposed on the glasses frame. Optionally the vibrator is placed near the temple, or near the forehead and the bridge of the nose, to facilitate transmitting the vibration to the wearer. The eyeball tracking module 16 is disposed on the part of the frame surrounding the eye socket. When it is judged that the wearer may be dozing off, the vibrator is activated in time to alert the wearer, rousing them and avoiding fatigued driving.
Judging that the wearer may be dozing off specifically comprises: the eyeball tracking module 16 acquires eyeball trajectory information in real time and uploads it to the processing module 13; the processing module 13 monitors the information, and when an abnormal eyeball trajectory is detected, or no eyeball trajectory information arrives within a time window, it judges that the wearer may be dozing off. The processing module 13 then sends vibration information to the vibrator, which vibrates at a fixed frequency to alert the wearer, rousing them and avoiding fatigued driving.
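A sketch of the dormancy check; the 2-second silence window is an assumed value, since the patent speaks only of the absence of eyeball trajectory information within a time window:

```python
import time

class DormancyMonitor:
    """Trigger the vibrator when no eyeball-trajectory samples have
    arrived within the silence window (assumed here to be 2 s)."""

    def __init__(self, window_s: float = 2.0) -> None:
        self.window_s = window_s
        self.last_sample_ts = time.time()

    def on_eye_sample(self) -> None:
        self.last_sample_ts = time.time()   # called per tracking sample

    def should_vibrate(self) -> bool:
        return time.time() - self.last_sample_ts > self.window_s
```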
In a third aspect, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the AR navigation method based on image recognition as set forth in the first aspect.
In a fourth aspect, the present invention provides an electronic device comprising a processor, a storage medium and a bus. The storage medium stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the storage medium communicate through the bus, and the processor executes the machine-readable instructions to perform the steps of the image recognition-based AR navigation method of the first aspect.
The storage medium/memory includes, but is not limited to: RAM, ROM, magnetic disks, magnetic tape, optical disks, flash memory, USB flash drives, removable hard disks, memory cards, memory sticks, network server storage, network cloud storage, and the like. The processor includes, but is not limited to, a CPU (central processing unit), a GPU (graphics processing unit), an MCU (microcontroller unit), and the like.
In summary, by automatically acquiring the object position on the current advancing route and predicting collision position coordinates from the advancing speed and the object position, the scheme realizes trajectory prediction and speed calculation for a moving object with AR technology, displays the predicted object motion trajectory together with the navigation route indication in the target area in front of the glasses frame, and presents real-time object path updating and collision prediction through vivid AR rendering, so that the wearer can change the advancing route in time according to the predicted trajectory of the object and avoid potential safety accidents or hidden dangers, greatly improving the flexibility and portability of AR navigation.
Finally, it should be noted that although the above embodiments are described in the text and drawings of the present application, the scope of patent protection of the present application is not limited thereby. All technical solutions formed by equivalent structural or process substitutions or modifications based on the contents of the text and drawings of the present application, whether implemented directly or indirectly in other related technical fields, fall within the scope of protection of the present application.

Claims (10)

1. An AR navigation method based on image recognition, applicable to AR navigation glasses, the AR navigation glasses comprising a glasses frame in which an intelligent navigation system is arranged, characterized in that the intelligent navigation system comprises a navigation module, a projection module and a processing module;
the method comprises the following steps:
the navigation module acquires positioning information, generates a navigation route according to an input start point and an input end point, and generates a first navigation image based on the positioning information and the navigation route;
the processing module receives, from an infrared image acquisition module, the position of an object on the current advancing route of the wearer of the AR navigation glasses acquired in real time, judges whether there is a possibility of collision according to the advancing speed of the wearer of the AR navigation glasses relative to the object and the object position, and, if there is a possibility of collision, predicts collision position coordinates and marks the collision position coordinates in the first navigation image to obtain a second navigation image;
the projection module projects the second navigation image to a target area in front of the glasses frame.
2. The image recognition-based AR navigation method of claim 1, wherein receiving, from the infrared image acquisition module, the position of the object on the current advancing route of the wearer of the AR navigation glasses acquired in real time, and judging whether there is a possibility of collision according to the advancing speed of the wearer of the AR navigation glasses relative to the object and the object position, comprises:
after the infrared image acquisition module detects that an object exists on the current advancing route of the wearer of the AR navigation glasses, acquiring the motion trajectory of the object within a preset time period, judging the coordinate position of the object after a first time period according to the motion trajectory, and judging whether there is a possibility of collision according to that coordinate position, wherein the first time period is the time the wearer needs to reach the object position, predicted from the advancing speed of the wearer relative to the object and the object position.
3. The image recognition-based AR navigation method of claim 1, wherein the intelligent navigation system further comprises a voice prompt module;
the method further comprises the following steps:
the processing module sends a first control signal to the voice prompt module so that the voice prompt module sends first prompt information, wherein the first prompt information comprises a collision coordinate position and/or a time interval between a predicted collision time and a current system timestamp.
4. The image recognition-based AR navigation method of claim 1, wherein the intelligent navigation system further comprises an infrared image acquisition module;
the method further comprises the following steps:
the infrared image acquisition module acquires an image of an object on the current advancing route of the wearer of the AR navigation glasses in real time;
and the processing module identifies the object image acquired by the infrared image acquisition module and determines the type of the object; if the identified object type matches a preset type, the step of judging whether there is a possibility of collision according to the advancing speed of the wearer of the AR navigation glasses relative to the object and the object position is executed; otherwise, the judging step is not executed.
5. The image recognition-based AR navigation method of claim 4, wherein the intelligent navigation system further comprises a voice prompt module;
the method further comprises the following steps:
and the processing module sends a second control signal to the voice prompt module so as to enable the voice prompt module to send out second prompt information, wherein the second prompt information comprises the type of the object, the collision coordinate position and/or the time interval between the predicted collision time and the current system timestamp.
6. The image recognition-based AR navigation method of any one of claims 1 to 5, wherein the infrared image acquisition module is disposed within the glasses frame or at the front of the wearer's vehicle; when the infrared image acquisition module is disposed at the front of the wearer's vehicle, it is in communication connection with the processing module.
7. The image recognition-based AR navigation method of any one of claims 1 to 5, wherein the intelligent navigation system further comprises an eyeball tracking module and a communication module, the eyeball tracking module is arranged in the glasses frame around the target area, and the AR navigation glasses are further connected with a vehicle-mounted control module through the communication module;
the method further comprises the following steps:
the eyeball tracking module collects eyeball trajectory information of the wearer, and when the processing module judges that the collected eyeball trajectory information matches preset eyeball trajectory information, it sends a vehicle control signal to the vehicle-mounted control module, so that the vehicle-mounted control module adjusts the running parameters of the current vehicle according to the vehicle control signal.
8. AR navigation glasses, comprising a glasses frame in which an intelligent navigation system is arranged, characterized in that the intelligent navigation system comprises a navigation module, a projection module and a processing module;
the navigation module is used for acquiring positioning information, generating a navigation route according to an input start point and an input end point, and generating a first navigation image based on the positioning information and the navigation route;
the processing module is used for receiving, from an infrared image acquisition module, the position of an object on the current advancing route of the wearer of the AR navigation glasses acquired in real time, judging whether there is a possibility of collision according to the advancing speed of the wearer of the AR navigation glasses relative to the object and the object position, and, if there is a possibility of collision, predicting collision position coordinates and marking them in the first navigation image to obtain a second navigation image;
the projection module is used for projecting the second navigation image to a target area in front of the glasses frame.
9. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the image recognition based AR navigation method according to any one of claims 1 to 7.
10. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating via the bus when the electronic device is operating, the processor executing the machine-readable instructions to perform the steps of the image recognition based AR navigation method according to any one of claims 1 to 7.
CN202211589057.0A 2022-12-09 2022-12-09 AR navigation method, AR glasses, medium and equipment based on image recognition Active CN115752481B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211589057.0A CN115752481B (en) 2022-12-09 2022-12-09 AR navigation method, AR glasses, medium and equipment based on image recognition

Publications (2)

Publication Number Publication Date
CN115752481A true CN115752481A (en) 2023-03-07
CN115752481B CN115752481B (en) 2023-09-01

Family

ID=85345392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211589057.0A Active CN115752481B (en) 2022-12-09 2022-12-09 AR navigation method, AR glasses, medium and equipment based on image recognition

Country Status (1)

Country Link
CN (1) CN115752481B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109154499A (en) * 2016-08-18 2019-01-04 深圳市大疆创新科技有限公司 System and method for enhancing stereoscopic display
CN108592937A (en) * 2018-05-09 2018-09-28 何辉 A kind of night flight or navigation road conditions for field exploration identify system
CN109606377A (en) * 2018-12-29 2019-04-12 安徽中科美络信息技术有限公司 A kind of emergency running behavior defence reminding method and system
CN111595346A (en) * 2020-06-02 2020-08-28 浙江商汤科技开发有限公司 Navigation reminding method and device, electronic equipment and storage medium
CN114063769A (en) * 2020-08-04 2022-02-18 费希尔-罗斯蒙特系统公司 Fast activation techniques for industrial augmented reality applications
CN114136318A (en) * 2020-08-13 2022-03-04 科沃斯商用机器人有限公司 Intelligent navigation method and device for machine
CN112363494A (en) * 2020-09-24 2021-02-12 深圳优地科技有限公司 Method and device for planning advancing path of robot and storage medium
CN114310834A (en) * 2020-10-09 2022-04-12 福特全球技术公司 System and method for nighttime delivery of mobile robots
CN112799426A (en) * 2020-12-25 2021-05-14 陈南方 Unmanned aerial vehicle navigation control system and method based on big data analysis
CN113284240A (en) * 2021-06-18 2021-08-20 深圳市商汤科技有限公司 Map construction method and device, electronic equipment and storage medium
CN114152253A (en) * 2021-12-06 2022-03-08 华东师范大学 All-weather hiking auxiliary system and method based on deep learning and big data
CN115273035A (en) * 2022-08-18 2022-11-01 广州文远知行科技有限公司 Method, device and equipment for detecting air floating object and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI, YING: "Research on Interactive Design and Cultural Experience of Virtual Ancient Architecture", Beijing: Geological Publishing House, page 26 *

Also Published As

Publication number Publication date
CN115752481B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
CN110167823B (en) System and method for driver monitoring
US20200074200A1 (en) Training of Vehicles to Improve Autonomous Capabilities
JP4694440B2 (en) Driver support system
JP6841843B2 (en) Vehicle control systems, vehicle control methods, and vehicle control programs
US11945435B2 (en) Devices and methods for predicting collisions and/or intersection violations
JP4372804B2 (en) Image processing device
CN111033512A (en) Motion control device for communication with autonomous vehicle based on simple two-dimensional plane camera device
CN110892351A (en) Vehicle control device and vehicle control method
CN114666499A (en) Image processing apparatus, image processing method, and movable body
CN109151204B (en) Navigation method and device based on mobile terminal and mobile terminal
KR20170010645A (en) Autonomous vehicle and autonomous vehicle system including the same
WO2022041869A1 (en) Road condition prompt method and apparatus, and electronic device, storage medium and program product
CN109849788B (en) Information providing method, device and system
US20210312193A1 (en) Devices and methods for predicting intersection violations and/or collisions
CN113260547A (en) Information processing apparatus, mobile apparatus, method, and program
CN114340970A (en) Information processing device, mobile device, information processing system, method, and program
US11814041B2 (en) Vehicle control device, vehicle control method, and storage medium that performs risk calculation for traffic participant
US11904893B2 (en) Operating a vehicle
CN110803170B (en) Driving assistance system with intelligent user interface
CN115752481B (en) AR navigation method, AR glasses, medium and equipment based on image recognition
JP7136538B2 (en) electronic device
CN116071949A (en) Augmented reality method and device for driving assistance
Han et al. HeadMon: Head Dynamics Enabled Riding Maneuver Prediction
WO2021085371A1 (en) Driving assistance device, driving assistance method, and driving assistance program
CN114693727A (en) Information processing method and device, and information processing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant