CN109204326B - Driving reminding method and system based on augmented reality - Google Patents


Info

Publication number
CN109204326B
CN109204326B (application CN201710518390.5A)
Authority
CN
China
Prior art keywords
vehicle
target
state information
target vehicle
virtual image
Prior art date
Legal status
Active
Application number
CN201710518390.5A
Other languages
Chinese (zh)
Other versions
CN109204326A (en
Inventor
孙其民
李炜
Current Assignee
Inlife Handnet Co Ltd
Original Assignee
Inlife Handnet Co Ltd
Priority date
Filing date
Publication date
Application filed by Inlife Handnet Co Ltd filed Critical Inlife Handnet Co Ltd
Priority to CN201710518390.5A priority Critical patent/CN109204326B/en
Publication of CN109204326A publication Critical patent/CN109204326A/en
Application granted granted Critical
Publication of CN109204326B publication Critical patent/CN109204326B/en

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W 50/08 - Interaction between the driver and the control system
    • B60W 50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/80 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R 2300/802 - for monitoring and displaying vehicle exterior blind spot views
    • B60R 2300/8026 - in addition to a rear-view mirror system
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/80 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R 2300/8073 - for vehicle security, e.g. parked vehicle surveillance, burglar detection
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W 50/08 - Interaction between the driver and the control system
    • B60W 50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W 2050/146 - Display means
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2554/00 - Input parameters relating to objects
    • B60W 2554/80 - Spatial relation or speed relative to objects
    • B60W 2554/801 - Lateral distance
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2554/00 - Input parameters relating to objects
    • B60W 2554/80 - Spatial relation or speed relative to objects
    • B60W 2554/804 - Relative longitudinal speed

Abstract

Embodiments of the invention disclose a driving reminding method and system based on augmented reality. The method includes: acquiring state information of a target vehicle in front of a traveling vehicle, the state information including the target vehicle's current speed and its distance from the traveling vehicle; generating a virtual image of the target vehicle from the state information; acquiring the viewing-angle position information of the driver in the traveling vehicle; and projecting the virtual image of the target vehicle onto a window of the traveling vehicle according to the driver's viewing-angle position information. When the driver's line of sight is blocked by a vehicle ahead, the invention generates a virtual image of the occluded vehicles and projects it onto a window of the traveling vehicle, providing an augmented-reality view superimposed on the scene through the window, so that the driver can intuitively and easily grasp the state of vehicles hidden from sight, further improving driving safety.

Description

Driving reminding method and system based on augmented reality
Technical Field
The invention relates to the field of data processing, and in particular to a driving reminding method and system based on augmented reality.
Background
Augmented reality (AR) is a technology that computes the position and angle of camera images in real time and overlays corresponding images, video, or 3D models; its purpose is to superimpose a virtual world on the real world on a screen and allow interaction with it.
Conventionally, a driver's judgment of other vehicles is based on the indicator lights of nearby vehicles and on the driver's experience. Besides the safety hazard of misjudgment due to the driver's attention and other subjective factors, the driving state of nearby vehicles hidden in blind spots is also an important potential hazard. For example, during actual driving, when another vehicle ahead blocks the driver's line of sight, the driver cannot see the road conditions beyond that vehicle; if the driver then needs to overtake or change lanes, there is a real traffic hazard.
In the prior art there are methods that remind the user of the vehicle's surroundings by voice or warning lights; for example, when an obstacle approaches the vehicle, the vehicle may sound a voice alarm or flash a warning light. However, such methods are not intuitive enough: if an obstacle appears suddenly while the vehicle is driving at high speed, the reaction time left to the driver is often insufficient, traffic accidents can result, and safety is poor.
Disclosure of Invention
The embodiment of the invention provides a driving reminding method and system based on augmented reality, which can display the road condition in front of a vehicle, thereby improving the driving safety.
In a first aspect, an embodiment of the present invention provides a driving reminding method based on augmented reality, including:
acquiring state information of a target vehicle in front of a running vehicle, wherein the state information comprises the current speed of the target vehicle and the distance between the target vehicle and the running vehicle;
generating a virtual image of the target vehicle according to the state information;
acquiring visual angle position information of a driver in the running vehicle; and
and projecting the virtual image of the target vehicle onto the window of the running vehicle according to the visual angle position information of the driver.
In a second aspect, an embodiment of the present invention further provides a driving reminding system based on augmented reality, including: the system comprises a state information acquisition module, an image generation module, a visual angle information acquisition module and a projection module;
the state information acquisition module is used for acquiring state information of a target vehicle in front of a running vehicle, wherein the state information comprises the current speed of the target vehicle and the distance between the target vehicle and the running vehicle;
the image generation module is used for generating a virtual image of the target vehicle according to the state information;
the visual angle information acquisition module is used for acquiring visual angle position information of a driver in the running vehicle;
the projection module is used for projecting the virtual image of the target vehicle onto the window of the running vehicle according to the visual angle position information of the driver.
In the embodiments of the invention, state information of a target vehicle in front of a traveling vehicle is first acquired, the state information including the target vehicle's current speed and its distance from the traveling vehicle; a virtual image of the target vehicle is generated from the state information; the viewing-angle position information of the driver in the traveling vehicle is acquired; and the virtual image of the target vehicle is projected onto a window of the traveling vehicle according to the driver's viewing-angle position information. When the driver's sight is blocked by a vehicle ahead, the invention generates a virtual image of the occluded vehicles and projects it onto the window, providing augmented reality superimposed on the through-window view, so that the driver can intuitively and easily grasp the state of vehicles hidden from sight, further improving driving safety.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a driving reminding method based on augmented reality according to an embodiment of the present invention.
Fig. 2 is a schematic view of an application scenario of the driving reminding method based on augmented reality according to the embodiment of the present invention.
Fig. 3 is a schematic flow chart of another driving reminding method based on augmented reality according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a driving reminding system based on augmented reality according to an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of another driving reminding system based on augmented reality according to an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
Referring to the drawings, in which like reference numbers refer to like elements, the principles of the present invention are illustrated as implemented in a suitable computing environment. The following description is based on the illustrated embodiments of the invention and should not be taken as limiting with regard to other embodiments not detailed herein.
In the description that follows, unless otherwise indicated, specific embodiments of the present invention are described with reference to steps and symbols executed by one or more computers. These steps and operations are at times said to be computer-executed: they involve manipulation, by the computer's processing unit, of electrical signals representing data in structured form. This manipulation transforms the data or maintains it at locations in the computer's memory system, which reconfigures or otherwise alters the computer's operation in a manner well known to those skilled in the art. The data structures in which the data is maintained are physical locations of memory with particular properties defined by the data format. However, while the principles of the invention are described in these terms, this is not meant as a limitation; as those skilled in the art will appreciate, the various steps and operations described hereinafter may also be implemented in hardware.
The principles of the present invention are operational with numerous other general purpose or special purpose computing, communication environments or configurations. Examples of well known computing systems, environments, and configurations that may be suitable for use with the invention include, but are not limited to, hand-held telephones, personal computers, servers, multiprocessor systems, microcomputer-based systems, mainframe-based computers, and distributed computing environments that include any of the above systems or devices.
The details will be described below separately.
The embodiment will be described from the perspective of an augmented reality-based driving reminder system, which may be specifically integrated in a terminal.
Referring to fig. 1, fig. 1 is a schematic flow chart of a driving reminding method based on augmented reality according to an embodiment of the present invention, where the driving reminding method based on augmented reality according to the embodiment includes:
in step S101, target vehicle state information in front of the traveling vehicle is acquired.
The state information may include information such as a current speed of the target vehicle and a distance between the target vehicle and the traveling vehicle, but is not limited to the above information. The current speed of the target vehicle includes traveling speed information of the target vehicle, and the distance between the target vehicle and the traveling vehicle includes a straight-line distance between the target vehicle and the traveling vehicle.
Specifically, an infrared sensor, a radar sensor, an ultrasonic sensor, or the like may be employed to collect target vehicle state information in front of the running vehicle.
In one embodiment, taking an infrared sensor as an example, the sensor may be installed at the front of the traveling vehicle to acquire the state information of the vehicle ahead. The sensor includes a light-emitting element that emits infrared rays, which are reflected when they encounter an obstacle, i.e., the preceding vehicle, and a light-receiving element that receives the rays reflected by the preceding vehicle. The distance between the preceding vehicle and the traveling vehicle, i.e., the position information of the target vehicle, is calculated from the time between the emission and the reception of the infrared rays and from their propagation speed. The rays may be transmitted and received at predetermined intervals, and the traveling speed of the target vehicle may be calculated by combining the current speed information of the traveling vehicle with the position information of the target vehicle.
By continuously acquiring the positional relationship between the target vehicle and the traveling vehicle, the speed of the target vehicle can be calculated from the speed of the traveling vehicle. The infrared sensor may include a signal processing device that processes the signal information acquired by the sensor to calculate the state information of the target vehicle.
Step S102, generating a virtual image of the target vehicle according to the state information.
In an embodiment of the present invention, the state information of the target vehicle may be visualized to generate a virtual image, i.e., the data is processed into a figure or image suited to the purpose.
For example, a plurality of image resources may be stored in advance, and an image resource suited to representing the state information of the target vehicle may be selected and laid out on the screen. The layout may then be output and visualized through an imaging device so that the driver can intuitively recognize the state information of the target vehicle.
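The resource-selection idea can be sketched as a lookup keyed on coarse state. The asset names, keys, and the closing/steady distinction below are invented for illustration; the patent does not specify them.

```python
# Hypothetical pre-stored image resources, keyed by coarse target state.
ASSETS = {"closing": "car_red.png", "steady": "car_green.png"}


def pick_asset(own_speed_mps: float, target_speed_mps: float) -> str:
    """Choose the image resource representing the target vehicle:
    a 'closing' asset when the target is slower than the ego vehicle
    (the gap is shrinking), otherwise a 'steady' asset."""
    key = "closing" if target_speed_mps < own_speed_mps else "steady"
    return ASSETS[key]
```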
Step S103, obtaining the visual angle position information of the driver in the running vehicle.
First, observing the same object from two points a certain distance apart produces a directional difference, i.e., parallax; the images seen from the driver's seat and the front passenger seat may therefore differ. In the embodiment of the present invention, the virtual image is viewed mainly by the driver, so the driver's viewing-angle position information is obtained first, and the projection is then performed.
The driver's viewing-angle position information may include the position of the driver's eyes. Specifically, an in-vehicle image may be captured inside the traveling vehicle and the driver's eyeball positions determined from it. For example, face recognition technology may be used to locate the face in the image, and the eye positions then extracted from the face region. Face recognition is a biometric technology that identifies a person from facial feature information: a camera or video camera captures an image or video stream containing a face, the face is automatically detected and tracked in the image, and recognition is then performed on the detected face. The face may be detected in the original image with an adaptive boosting (AdaBoost) algorithm based on Haar features, or with another detection algorithm; this embodiment does not limit the choice.
And step S104, projecting the virtual image of the target vehicle onto the window of the running vehicle according to the visual angle position information of the driver.
In an embodiment, before the virtual image is projected onto the window, it may be rendered so that it finally conforms to the 3D scene. Various rendering software exists: each CG package has its own rendering engine, and standalone renderers such as RenderMan are also available.
As shown in fig. 2, the traveling vehicle is A, and target vehicles B, C, and D are ahead of it. Because target vehicle B is directly in front of traveling vehicle A, it blocks the view of target vehicles C and D, which are therefore invisible to the driver of vehicle A. At this time, vehicle A may acquire the state information of target vehicles B, C, and D through its infrared sensor, such as their current speeds and their respective distances from vehicle A, then generate a virtual image of the target vehicles and project it onto the front windshield of vehicle A, so that the driver can observe the situation ahead. The infrared sensor may be disposed at the front end a of traveling vehicle A.
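Placing the virtual image consistently with the driver's viewpoint amounts to intersecting the eye-to-target ray with the windscreen plane. This is a geometric sketch under assumed coordinates (x lateral, y forward, z up, in metres, with the windscreen idealized as a vertical plane), not the patent's algorithm.

```python
# Intersect the ray from the driver's eye to a hidden target vehicle with
# the windscreen plane y = windshield_y (idealized as vertical and planar).
def windshield_point(eye, target, windshield_y):
    """Return the point on the windscreen plane where the driver's line of
    sight to the target crosses it, so the virtual image can be drawn there."""
    ex, ey, ez = eye
    tx, ty, tz = target
    t = (windshield_y - ey) / (ty - ey)  # ray parameter at the plane
    return (ex + t * (tx - ex), windshield_y, ez + t * (tz - ez))
```

For an eye at (0, 0, 1.2) and a target at (2, 20, 1.0), a windscreen 1 m ahead yields a point only 0.1 m laterally off-centre: moving the eye shifts the projected image, which is why the driver's viewing-angle position is acquired first.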
As can be seen from the above, the driving reminding method based on augmented reality provided by the embodiment of the present invention acquires the state information of a target vehicle in front of the traveling vehicle, the state information including the target vehicle's current speed and its distance from the traveling vehicle; generates a virtual image of the target vehicle from the state information; acquires the viewing-angle position information of the driver in the traveling vehicle; and projects the virtual image onto a window of the traveling vehicle accordingly. When the driver's sight is blocked by a vehicle ahead, a virtual image of the occluded vehicles is generated and projected onto the window, providing augmented reality superimposed on the through-window view, so that the driver can intuitively and easily grasp the state of vehicles hidden from sight, further improving driving safety.
In light of the above description, the augmented reality-based driving reminding method of the present invention is described below with specific embodiments.
Referring to fig. 3, fig. 3 is a schematic flow chart of another driving reminding method based on augmented reality according to an embodiment of the present invention, described taking a smartphone as an example, and including:
in step S201, target vehicle state information in front of the traveling vehicle is acquired.
The state information may include information such as a current speed of the target vehicle and a distance between the target vehicle and the traveling vehicle, but is not limited to the above information.
In practical applications, when the target vehicle in front of the traveling vehicle is far away, there is no need to start the augmented-reality driving reminding function of the present invention. In an embodiment, before acquiring the state information of the target vehicle in front of the traveling vehicle, the method may therefore further include:
judging whether a vehicle exists in a first preset distance range or not;
if the vehicle exists, the vehicle is taken as a target vehicle, and the step of acquiring the state information of the target vehicle in front of the running vehicle is executed.
The first preset distance may be set by the user according to actual requirements, for example 20 meters or 30 meters, or may be set by the system; the present invention does not further limit this.
In addition, there may be many target vehicles in front of the traveling vehicle, and virtual images need not be generated for distant ones; for example, vehicles 200 meters away may be ignored. Therefore, in an embodiment, the step of acquiring the state information of the target vehicle in front of the traveling vehicle may specifically include:
screening target vehicles existing in a second preset distance range;
state information of the target vehicle is acquired.
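The two screening sub-steps above can be sketched as a simple range filter. The data shape (a list of detection dicts) and the 30 m threshold are assumed example values; the patent leaves the second preset distance to the user or system.

```python
# Illustrative sketch of the screening step: keep only detected vehicles
# within the second preset distance, then hand back their state information.
def screen_targets(detections, max_range_m=30.0):
    """Filter detections to those inside the second preset distance range."""
    return [d for d in detections if d["distance_m"] <= max_range_m]
```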
In step S202, a virtual image of the target vehicle is generated based on the state information.
In an embodiment of the present invention, the state information of the target vehicle may be visualized to generate a virtual image, i.e., the data is processed into a figure or image suited to the purpose.
For example, a plurality of image resources may be stored in advance, and an image resource suited to representing the state information of the target vehicle may be selected and laid out on the screen. The layout may then be output and visualized through an imaging device so that the driver can intuitively recognize the state information of the target vehicle.
In step S203, a sub-image corresponding to the vehicle with the minimum distance to the traveling vehicle in the virtual image of the target vehicle is acquired.
Step S204, determining the images except the sub-images in the virtual image as the target virtual image.
In the embodiment of the present invention, the target vehicle closest to the traveling vehicle appears directly in the driver's field of view, such as target vehicle B in fig. 2: the driver of vehicle A can see vehicle B directly, so vehicle B need not appear in the virtual image.
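Steps S203 and S204 can be sketched as follows, under an assumed data shape in which each sub-image record carries the distance of the vehicle it depicts. The record of the nearest (directly visible) vehicle is dropped and the rest form the target virtual image.

```python
# Sketch of steps S203-S204: exclude the sub-image of the vehicle closest
# to the traveling vehicle, since the driver can already see it directly.
def target_virtual_image(sub_images):
    """Return the virtual image minus the nearest vehicle's sub-image."""
    nearest = min(sub_images, key=lambda s: s["distance_m"])
    return [s for s in sub_images if s is not nearest]
```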
In step S205, the viewing angle position information of the driver in the traveling vehicle is acquired.
First, observing the same object from two points a certain distance apart produces a directional difference, i.e., parallax; the images seen from the driver's seat and the front passenger seat may therefore differ. In the embodiment of the present invention, the virtual image is viewed mainly by the driver, so the driver's viewing-angle position information is obtained first, and the projection is then performed.
And step S206, projecting the target virtual image onto the window of the running vehicle according to the visual angle position information of the driver.
In an embodiment, the virtual image may be rendered before the projection. Rendering may specifically include light processing and texture processing: light processing simulates lighting effects on the model, and texture processing simulates texture effects on it. Specifically, an illumination model can be constructed by the ray processing module. In the basic lighting model, the surface color of an object is the sum of lighting effects such as emission (emissive), ambient reflection (ambient), diffuse reflection (diffuse), and specular reflection (specular). Each effect depends on the combined action of the properties of the surface material (e.g., brightness and material color) and the properties of the light source (e.g., color and position of the light). The module supports multiple light-source models, including parallel light, spot light, and floodlight, and the lighting effect can be checked in real time by adjusting parameters; illumination is then simulated using GPU shader technology. The texture data of the virtual scene is managed and scheduled by the texture processing module. At its core is a texture manager (TextureManager), which supports common texture data formats including tga, png, jpg, bmp, and dds. Texture data loaded into memory is parsed and extracted, and a texture reference-counting technique manages the textures shared by the rendering engine, avoiding repeated loading of the same texture and saving memory and video memory. The module also supports rendering of various GPU texture effects and, combined with display-caching technologies such as FBO (frame buffer object) and PBO (pixel buffer object), performs more realistic effect simulation on model textures, including bump textures, glow effects, and AVI video textures.
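The basic lighting model above can be sketched per color channel as emissive + ambient + diffuse + specular (the classic Phong terms). The vectors and material coefficients below are illustrative assumptions; a real engine would evaluate this per pixel in a GPU shader.

```python
# Scalar Phong-style shading for one color channel; all direction vectors
# are assumed to be unit length and pointing away from the surface point.
def phong(normal, light_dir, view_dir, *, emissive=0.0, ambient=0.1,
          diffuse_k=0.7, specular_k=0.2, shininess=16):
    """Surface color = emissive + ambient + diffuse + specular."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    diff = max(dot(normal, light_dir), 0.0)
    # Reflect the light direction about the normal: r = 2(n.l)n - l.
    r = tuple(2.0 * dot(normal, light_dir) * n - l
              for n, l in zip(normal, light_dir))
    spec = max(dot(r, view_dir), 0.0) ** shininess if diff > 0.0 else 0.0
    return emissive + ambient + diffuse_k * diff + specular_k * spec
```

With the light and viewer both along the surface normal, the terms sum to 0.0 + 0.1 + 0.7 + 0.2 = 1.0, the fully lit case.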
After rendering, the virtual image is projected onto a window of the traveling vehicle, preferably the front windshield.
And step S207, acquiring the position relation between the running vehicle and the target vehicle in real time.
In step S208, the virtual image on the window of the traveling vehicle is updated according to the positional relationship.
As can be seen from the above, the driving reminding method based on augmented reality provided by the embodiment of the present invention acquires the state information of the target vehicle in front of the traveling vehicle, generates a virtual image of the target vehicle from the state information, obtains the sub-image corresponding to the vehicle closest to the traveling vehicle, determines the remainder of the virtual image as the target virtual image, acquires the driver's viewing-angle position information, projects the target virtual image onto the window of the traveling vehicle accordingly, acquires the positional relationship between the traveling vehicle and the target vehicle in real time, and updates the virtual image on the window according to that relationship.
In order to better implement the driving reminding method based on augmented reality provided by the embodiment of the present invention, an embodiment of the present invention also provides a system based on that method. The terms used have the same meanings as in the above driving reminding method, and for implementation details reference may be made to the description in the method embodiments.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a driving reminding system based on augmented reality according to an embodiment of the present invention, where the driving reminding system 30 based on augmented reality includes: a state information acquisition module 301, an image generation module 302, a viewing angle information acquisition module 303, and a projection module 304;
a state information obtaining module 301, configured to obtain state information of a target vehicle in front of a running vehicle, where the state information includes a current speed of the target vehicle and a distance between the target vehicle and the running vehicle;
an image generation module 302, configured to generate a virtual image of the target vehicle according to the state information;
a view angle information acquiring module 303, configured to acquire view angle position information of a driver in the running vehicle;
a projection module 304, configured to project the virtual image of the target vehicle onto a window of the traveling vehicle according to the viewing angle position information of the driver.
In one embodiment, as shown in fig. 5, the system 30 further comprises: a judging module 305 and a determining module 306;
a judging module 305, configured to judge whether a vehicle exists within a first preset distance range before the state information acquisition module 301 acquires the state information of a target vehicle ahead of the traveling vehicle;
a determining module 306, configured to take the vehicle as the target vehicle.
Further, the state information acquiring module 301 specifically includes: a screening submodule 3011 and an acquisition submodule 3012;
the screening submodule 3011 is configured to screen a target vehicle existing within a second preset distance range;
the obtaining sub-module 3012 is configured to obtain state information of the target vehicle.
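The screening submodule's behavior — retaining only vehicles inside the second preset distance range — amounts to a simple filter over the detected vehicles. The sketch below is illustrative only; the helper name `screen_targets` and the 100 m default range are assumptions, not values from the disclosure:

```python
def screen_targets(vehicles, second_preset_distance=100.0):
    """Keep only vehicles whose measured distance from the traveling
    vehicle falls within the second preset distance range; vehicles
    farther ahead are not treated as target vehicles."""
    return [v for v in vehicles if v["distance"] <= second_preset_distance]
```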
In one embodiment, the system 30 further comprises: the device comprises a sub-image acquisition module and a target image determination module;
the sub-image obtaining module is configured to obtain a sub-image corresponding to the vehicle with the minimum distance to the traveling vehicle in the virtual image of the target vehicle after the image generating module 302 generates the virtual image of the target vehicle according to the state information;
the target image determining module is used for determining images except the sub-images in the virtual images as target virtual images;
the projection module 304 is specifically configured to project the target virtual image onto a window of the traveling vehicle according to the viewing angle position information of the driver.
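Projecting according to the driver's viewing angle position reduces to a line–plane intersection: the target virtual image is drawn where the sight line from the driver's eye to the occluded vehicle crosses the window. A minimal sketch, under the assumptions (not stated in the disclosure) of a vehicle-fixed frame with (forward, lateral, height) coordinates and a vertical window plane at a fixed forward coordinate:

```python
def window_projection(eye, target, window_x):
    """Return the point on the window plane (at forward coordinate
    window_x) where the line from the driver's eye position to the
    target vehicle's position crosses it.  Both positions are
    (forward, lateral, height) tuples in the traveling vehicle's frame."""
    t = (window_x - eye[0]) / (target[0] - eye[0])  # fraction of sight line at the window
    return tuple(e + t * (tg - e) for e, tg in zip(eye, target))
```

Because the intersection moves whenever the eye or the target moves, the embodiment re-acquires the driver's viewing angle position information rather than assuming a fixed seat position.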
In one embodiment, the system further comprises: the device comprises a position acquisition module and an updating module;
a position obtaining module, configured to obtain a position relationship between the traveling vehicle and the target vehicle in real time after the projection module 304 projects the virtual image of the target vehicle onto a window of the traveling vehicle according to the viewing angle position information of the driver;
and the updating module is used for updating the virtual image on the window of the running vehicle according to the position relation.
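The cooperation of the position acquisition module and the updating module can be sketched as a polling loop; `get_relation`, `reproject`, and `stop` are hypothetical callables standing in for the sensor interface, the projection module, and a shutdown condition, none of which are named in the disclosure:

```python
import time

def update_loop(get_relation, reproject, stop, interval=0.1):
    """Poll the positional relationship between the traveling vehicle
    and the target vehicle, and refresh the window overlay whenever
    that relationship changes."""
    last = None
    while not stop():
        relation = get_relation()
        if relation != last:   # only redraw when something moved
            reproject(relation)
            last = relation
        time.sleep(interval)
```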
As can be seen from the above, the driving reminding system based on augmented reality provided by the embodiment of the present invention can acquire the state information of the target vehicle in front of the traveling vehicle (the state information including the current speed of the target vehicle and the distance between the target vehicle and the traveling vehicle), generate a virtual image of the target vehicle according to the state information, acquire the viewing angle position information of the driver in the traveling vehicle, and project the virtual image of the target vehicle onto a window of the traveling vehicle according to that information. When the driver's line of sight is blocked by the vehicle directly ahead, the invention generates a virtual image of the occluded vehicles and projects it onto the window, superimposing augmented reality content on the real scene, so that the driver can intuitively and easily grasp the running condition of vehicles ahead whose view is blocked, further improving driving safety.
Accordingly, an embodiment of the present invention further provides a server 500. As shown in fig. 6, the server 500 includes a radio frequency (RF) circuit 501, a memory 502 including one or more computer-readable storage media, an input unit 503, a power supply 504, a wireless fidelity (WiFi) module 505, a processor 506 including one or more processing cores, and other components. Those skilled in the art will appreciate that the server architecture shown in fig. 6 is not intended to be limiting; the server may include more or fewer components than those shown, combine some components, or arrange the components differently.
The RF circuit 501 may be used for receiving and transmitting information, or for receiving and transmitting signals during a call; in particular, it receives downlink information from a base station and passes it to one or more processors 506 for processing, and transmits uplink data to the base station. In general, the RF circuit 501 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 501 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The memory 502 may be used to store applications and data. The applications stored in the memory 502 contain executable code and may constitute various functional modules; the processor 506 executes various functional applications and performs data processing by running them. The memory 502 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function), while the data storage area may store data created according to the use of the server (such as audio data or a phonebook). Further, the memory 502 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 502 may also include a memory controller to provide the processor 506 and the input unit 503 with access to the memory 502.
The input unit 503 may be used to receive information sent by other devices and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. It forwards the input information to the processor 506 and can receive and execute commands sent by the processor 506. The input unit 503 may also include other input devices, including but not limited to one or more of a physical keyboard, function keys (such as volume control keys or a switch key), a mouse, a joystick, and the like.
The server also includes a power supply 504 (such as a battery) that powers the various components. Preferably, the power supply is logically connected to the processor 506 through a power management system, so that charging, discharging, and power consumption are managed through the power management system. The power supply 504 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other such components.
Wireless fidelity (WiFi) is a short-range wireless transmission technology. Through the wireless fidelity module 505, the server can help the user send and receive e-mail, browse web pages, access streaming media, and the like, providing wireless broadband Internet access. Although fig. 6 shows the wireless fidelity module 505, it does not necessarily form part of the server and may be omitted as needed without changing the essence of the invention.
The processor 506 is the control center of the server: it connects the various parts using interfaces and lines, and performs the server's functions and processes its data by running or executing the application programs stored in the memory 502 and calling the data stored there, thereby monitoring the server as a whole. Optionally, the processor 506 may include one or more processing cores; preferably, the processor 506 may integrate an application processor, which mainly handles the operating system, user interfaces, and applications, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 506.
The processor 506 is configured to implement the following functions:
acquiring state information of a target vehicle in front of the traveling vehicle, wherein the state information includes the current speed of the target vehicle and the distance between the target vehicle and the traveling vehicle; generating a virtual image of the target vehicle according to the state information; acquiring the viewing angle position information of the driver in the traveling vehicle; and projecting the virtual image of the target vehicle onto a window of the traveling vehicle according to the viewing angle position information of the driver.
In specific implementation, the above modules may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and specific implementation of the above modules may refer to the foregoing method embodiments, which are not described herein again.
It should be noted that, as one of ordinary skill in the art will understand, all or part of the steps in the various methods of the above embodiments may be implemented by relevant hardware instructed by a program. The program may be stored in a computer-readable storage medium, such as a memory of a terminal, and executed by at least one processor in the terminal; during execution, the flow of embodiments such as the driving reminding method may be included. The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic or optical disk, and the like.
The driving reminding method and system based on augmented reality provided by the embodiment of the invention are described in detail above, and each functional module can be integrated in one processing chip, or each module can exist alone physically, or two or more modules can be integrated in one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for those skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (8)

1. A driving reminding method based on augmented reality is characterized by comprising the following steps:
acquiring state information of a target vehicle in front of a running vehicle, wherein the state information comprises the current speed of the target vehicle and the position information of the target vehicle;
generating a virtual image of the target vehicle according to the state information;
acquiring a sub-image corresponding to a vehicle with the minimum distance to the running vehicle in the virtual image of the target vehicle;
determining images other than the sub-images in the virtual images as target virtual images;
acquiring visual angle position information of a driver in the running vehicle; and
and calculating the actual position of the vehicle corresponding to the target virtual image, relative to the visual angle position of the driver, on the window of the running vehicle according to the position information of the vehicle corresponding to the target virtual image, and projecting the target virtual image to the actual position.
2. The augmented reality-based driving alert method as recited in claim 1, wherein prior to obtaining target vehicle state information ahead of a traveling vehicle, the method further comprises:
judging whether a vehicle exists in a first preset distance range or not;
and if so, taking the vehicle as a target vehicle, and executing the step of acquiring the state information of the target vehicle in front of the running vehicle.
3. The augmented reality-based driving alert method according to claim 1, wherein the step of acquiring the target vehicle state information in front of the running vehicle specifically includes:
screening target vehicles existing in a second preset distance range;
and acquiring the state information of the target vehicle.
4. The augmented reality-based driving reminder method of claim 1, wherein after projecting the target virtual image to the actual location, the method further comprises:
acquiring the position relation between the running vehicle and the target vehicle in real time;
and updating the virtual image on the window of the running vehicle according to the position relation.
5. An augmented reality based driving reminder system, comprising: the system comprises a state information acquisition module, an image generation module, a visual angle information acquisition module, a sub-image acquisition module, a target image determination module and a projection module;
the state information acquisition module is used for acquiring state information of a target vehicle in front of a running vehicle, wherein the state information comprises the current speed of the target vehicle and the position information of the target vehicle;
the image generation module is used for generating a virtual image of the target vehicle according to the state information;
the sub-image acquisition module is used for acquiring a sub-image corresponding to a vehicle with the minimum distance to the running vehicle in the virtual image of the target vehicle;
the target image determining module is used for determining images except the sub-images in the virtual images as target virtual images;
the visual angle information acquisition module is used for acquiring visual angle position information of a driver in the running vehicle;
the projection module is used for calculating the actual position of the vehicle corresponding to the target virtual image relative to the visual angle position of the driver on the window of the running vehicle according to the position information of the vehicle corresponding to the target virtual image, and projecting the target virtual image to the actual position.
6. The augmented reality-based driving reminder system of claim 5, further comprising: the device comprises a judging module and a determining module;
the judging module is used for judging whether a vehicle exists in a first preset distance range before the state information acquiring module acquires the state information of a target vehicle in front of the running vehicle;
the determining module is used for taking the vehicle as a target vehicle.
7. The augmented reality-based driving reminder system of claim 5, wherein the state information acquisition module specifically includes: a screening submodule and an acquisition submodule;
the screening submodule is used for screening target vehicles existing in a second preset distance range;
the obtaining submodule is used for obtaining the state information of the target vehicle.
8. The augmented reality-based driving reminder system of claim 5, further comprising: the device comprises a position acquisition module and an updating module;
the position acquisition module is used for acquiring the position relation between the running vehicle and the target vehicle in real time after the projection module projects the target virtual image to the actual position;
and the updating module is used for updating the virtual image on the window of the running vehicle according to the position relation.
CN201710518390.5A 2017-06-29 2017-06-29 Driving reminding method and system based on augmented reality Active CN109204326B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710518390.5A CN109204326B (en) 2017-06-29 2017-06-29 Driving reminding method and system based on augmented reality


Publications (2)

Publication Number Publication Date
CN109204326A CN109204326A (en) 2019-01-15
CN109204326B true CN109204326B (en) 2020-06-12

Family

ID=64960903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710518390.5A Active CN109204326B (en) 2017-06-29 2017-06-29 Driving reminding method and system based on augmented reality

Country Status (1)

Country Link
CN (1) CN109204326B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111752373A (en) * 2019-03-28 2020-10-09 上海擎感智能科技有限公司 Vehicle-mounted interaction method, device and system
CN110286754B (en) * 2019-06-11 2022-06-24 Oppo广东移动通信有限公司 Projection method based on eyeball tracking and related equipment
CN111391855A (en) * 2020-02-18 2020-07-10 北京聚利科技有限公司 Auxiliary control method and device for vehicle
CN114407903B (en) * 2020-05-15 2022-10-28 华为技术有限公司 Cabin system adjusting device and method for adjusting a cabin system
CN112184920B (en) * 2020-10-12 2023-06-06 中国联合网络通信集团有限公司 AR-based skiing blind area display method, device and storage medium
US11948227B1 (en) 2023-04-18 2024-04-02 Toyota Motor Engineering & Manufacturing North America, Inc. Eliminating the appearance of vehicles and/or other objects when operating an autonomous vehicle

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1985266B (en) * 2004-07-26 2010-05-05 奥普提克斯晶硅有限公司 Panoramic vision system and method
WO2012071858A1 (en) * 2010-12-02 2012-06-07 Liu Ansheng Driving safety device
KR101478135B1 (en) * 2013-12-02 2014-12-31 현대모비스(주) Augmented reality lane change helper system using projection unit
CN103895572B (en) * 2014-04-22 2016-05-18 北京汽车研究总院有限公司 Vehicle traveling information display unit and method in a kind of field of front vision
CN104260669B (en) * 2014-09-17 2016-08-31 北京理工大学 A kind of intelligent automobile HUD
KR102270578B1 (en) * 2014-11-18 2021-06-29 현대모비스 주식회사 Apparatus and method for controlling displaying forward information of vehicle

Also Published As

Publication number Publication date
CN109204326A (en) 2019-01-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant