CN116208758A - Vehicle panoramic perspective display method, system and storage medium - Google Patents


Info

Publication number
CN116208758A
CN116208758A (application CN202310159190.0A)
Authority
CN
China
Prior art keywords
user
vehicle
obstacle
module
target vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310159190.0A
Other languages
Chinese (zh)
Inventor
蔡远馨
韩旭
伍荣茂
邱家玲
唐恺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou Desay SV Automotive Co Ltd
Original Assignee
Huizhou Desay SV Automotive Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou Desay SV Automotive Co Ltd filed Critical Huizhou Desay SV Automotive Co Ltd
Priority to CN202310159190.0A
Publication of CN116208758A
Legal status: Pending

Classifications

    • H04N 13/383 — Image reproducers using viewer tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • H04N 13/332 — Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • B60Q 9/008 — Arrangement or adaptation of signal devices for anti-collision purposes
    • B60R 1/27 — Real-time viewing arrangements providing all-round vision, e.g. using omnidirectional cameras
    • B60R 2300/105 — Viewing arrangements characterised by the use of multiple cameras
    • B60R 2300/8073 — Viewing arrangements for vehicle security, e.g. parked vehicle surveillance, burglar detection
    • B60R 2300/8093 — Viewing arrangements for obstacle warning
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G10L 15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 — Execution procedure of a spoken command
    • G10L 2015/225 — Feedback of the input speech
    • Y02T 10/40 — Engine management systems


Abstract

The invention provides a vehicle panoramic perspective display method, system, and storage medium. The vehicle panoramic perspective display method comprises the following steps: after networking is completed, the AR client enters a working mode; a plurality of surrounding-environment images of a target vehicle are collected by the cameras and stitched into a panoramic image, and the portion of the panoramic image corresponding to the user's current viewing angle is obtained from the real-time coordinates and pose data of the AR client; at the same time, obstacles around the target vehicle are detected, and the corresponding indication animations and prompt instructions are generated for them; the panoramic image and the indication animation are then fused and presented on the AR client. The AR client adopted by the invention can perceive the position of an obstacle, quickly judge its degree of influence, and generate the corresponding indication animation and prompt instruction, so that the user can perceive the direction of risk and take appropriate driving action. A user can also remotely assist the driver inside the target vehicle, improving driving safety, while the system further provides a three-dimensional interactive experience.

Description

Vehicle panoramic perspective display method, system and storage medium
Technical Field
The invention relates to the technical field of panoramic display, and in particular to a vehicle panoramic perspective display method, system, and storage medium.
Background
An intelligent automobile is a new generation of automobile that achieves automatic driving by carrying advanced sensors and applying new technologies such as artificial intelligence, and it is gradually becoming an intelligent mobile space and application terminal. It is usually equipped with cameras, ultrasonic radars, millimetre-wave radars, lidar, and other sensors to "perceive" the vehicle body and its surroundings, and with smart antennas to obtain information from the Internet.
An augmented reality head-up display (AR-HUD) presents information for the user to perceive. In the field of intelligent automobiles, stationary AR-HUDs use display carriers such as display screens and transparent glass, while wearable devices include AR glasses and display helmets. The AR-HUD improves the safety, convenience, and comfort of the vehicle. An intelligent automobile photographs calibration patterns with fisheye or ultra-wide-angle lenses and obtains an all-round panoramic image through distortion correction, affine transformation, and image-stitching techniques. This technical route is mature, stable, and widely applied.
Existing wearable AR-HUD systems are built around the requirements of driving safety, information acquisition, and convenient interaction, and mainly focus on the vehicle's forward-looking driving scene. When facing complex scenes other than medium- and high-speed driving, such a system cannot provide a 3D panoramic perspective, so the user finds it difficult to directly observe the external environment blocked by the vehicle body. Moreover, the information-interaction mode of existing wearable AR-HUD systems carries a risk of failure: prompt tones are hard to distinguish in noisy situations because of interference, and the warning picture cannot indicate a direction according to the user's body posture, which does not help shorten the user's reaction time. Such systems also mainly display two-dimensional icons and numeric information, making it difficult to provide a three-dimensional interactive experience, and they are used only inside the vehicle, so they cannot remotely monitor the vehicle's condition and the potential risks of the external environment.
Disclosure of Invention
The invention provides a vehicle panoramic perspective display method, system, and storage medium to solve the above technical problems. By networking an AR client with a target vehicle, the surroundings of the target vehicle are captured, the corresponding panoramic image and indication animation are generated, and the two are then fused and presented on the AR client. The AR client adopted by the invention can perceive the position of an obstacle, quickly judge its degree of influence, and generate the corresponding indication animation and prompt instruction, so that the user can perceive the direction of risk and take appropriate driving action. A user can also remotely assist the driver inside the target vehicle, improving driving safety, while the system further provides a three-dimensional interactive experience.
Specifically, the invention provides a vehicle panoramic perspective display method, which comprises the following steps:
step S10: the AR user side enters a working mode after networking is completed.
Step S20: a plurality of surrounding environment images of a target vehicle are acquired, and the surrounding environment images are spliced to obtain a panoramic image.
Step S30: acquiring real-time coordinates of the AR user side, calculating pose data of the AR user side and detecting obstacles around a target vehicle in real time, acquiring panoramic images corresponding to the current visual angle of the user according to the real-time coordinates and the pose data, and generating corresponding indication animation and prompt instructions according to the obstacles.
Step S40: and merging and presenting the panoramic image and the indication animation to the AR user side.
In step S10, before the AR client enters the working mode, the method further includes: authenticating the identity of the current user and configuring the user's driving preferences according to the authentication result; the AR client is bound to at least one target vehicle and configured with at least one user.
The step S10 further includes: an AR user end coordinate system is defined, wherein the AR user end coordinate system takes the center of a target vehicle as an origin, the width direction is an X axis, the length direction is a Y axis, and the height direction is a Z axis.
The step S20 specifically includes:
step S21: cameras are respectively arranged on the periphery, the bottom and the top of a vehicle body of the target vehicle, and calibration work is carried out on each camera.
Step S22: and selecting a projection model, and generating a corresponding mapping table according to the projection model.
Step S23: and acquiring a plurality of vehicle surrounding environment images through each camera, and performing optical flow tracking operation on the plurality of vehicle surrounding environment images to obtain environment parameters in the projection model.
Step S24: according to the environment parameter matching mapping table, performing texture mapping on a plurality of vehicle surrounding environment images according to the mapping table so as to obtain panoramic images; the panoramic image comprises a perspective picture and a vehicle surrounding environment image which is filled and displayed.
In the step S30, corresponding instruction animation and prompt instructions are generated according to the obstacle, specifically: calculating the relative position of an obstacle in the AR user side coordinate system, judging the relative distance between the obstacle and a target vehicle according to the relative position, and generating an indication animation and a prompt instruction corresponding to the obstacle if the relative distance is smaller than or equal to a preset threshold value; otherwise, continuing to detect the obstacle around the target vehicle.
The step S40 further includes:
When the user is in the in-vehicle environment, the user interacts with the panoramic image and the indication animation through gestures, voice, or touch, and the system interacts with the user through voice broadcasts of obstacle information.
Wherein the gesture includes at least click, grab and expand.
When the user is in a remote environment, the user assists the driver to drive the vehicle through voice, and the driver is in the target vehicle bound with the AR user side configured by the user.
Based on the same inventive concept, the invention also provides a vehicle panoramic perspective display system, which comprises:
and the acquisition module is used for: the vehicle body comprises at least cameras respectively arranged at the periphery of the vehicle body and at the bottom and the top, wherein the cameras are used for acquiring the surrounding environment images of the target vehicle.
And a storage module: for storing the user identity and its corresponding ride preference.
And an identification module: and the user authentication module is used for authenticating the user so as to configure the driving preference of the user according to the user identity in the storage module.
And a display module: the AR user terminal is used for displaying panoramic images and indication animations; the panoramic image comprises a perspective picture and a vehicle surrounding environment image which is filled and displayed.
A first communication module: the method is used for connecting the AR user end with the target vehicle through WIFI so as to carry out in-vehicle communication or remote communication.
And a second communication module: for assisting in-vehicle communication or remote communication through a microphone and a speaker; the microphone is used for voice interaction of a user; the loudspeaker is used for voice broadcasting of the obstacle.
And an interaction module: when the user is in the in-car environment, the user interacts with the panoramic image and the indication animation through gestures, voice or touch; the second communication module interacts with the user through voice broadcasting of the obstacle; the gesture includes at least click, grab, and expand.
When the user is in a remote environment, the method is used for assisting the driver to drive the vehicle through voice, and the driver is in a target vehicle bound with the AR user side configured by the user.
The system further comprises:
and a measurement module: and the method is used for measuring the pose data of the AR user side in real time.
The acquisition module is used for: and the real-time coordinates of the AR user side are obtained, and a panoramic image corresponding to the current view angle of the user is obtained according to the real-time coordinates and the pose data.
And a detection module: for detecting obstacles around the target vehicle in real time.
A first generation module: and the camera is used for carrying out optical flow tracking operation and texture mapping on a plurality of vehicle surrounding environment images acquired by the camera which completes the calibration work so as to generate panoramic images corresponding to the plurality of vehicle surrounding environment images.
And a second generation module: and the method is used for calculating the relative position of the obstacle in the AR user side coordinate system and generating indication animation and a prompt instruction corresponding to the obstacle according to the relative position.
The display module at least comprises a first display area, a second display area and a third display area, and the display areas are matched to display the indication animation according to the relative positions of the obstacles; the first display area and the second display area are used for displaying perspective pictures; and the third display area is used for filling and displaying the vehicle surrounding environment image acquired by the camera.
Based on the same inventive concept, the invention also provides a storage medium, being a computer-readable storage medium, on which a computer program is stored; when the computer program is executed by a processor, the vehicle panoramic perspective display method described above is realized.
Compared with the prior art, the invention has the following beneficial effects:
1. According to the real-time coordinates and pose data of the AR client, a panoramic image corresponding to the user's current viewing angle can be obtained, comprising a perspective picture and a filled-in display of the vehicle's surroundings. This solves the prior-art problem that, in complex scenes other than medium- and high-speed driving, no 3D panoramic perspective is available, so the user has difficulty directly observing the external environment blocked by the vehicle body.
2. The invention obtains, in real time, the relative position of an obstacle in the AR-client coordinate system and, from it, the obstacle's relative distance to the target vehicle, and generates the corresponding indication animation and prompt instruction from that distance, so that the user can perceive the direction of risk, shorten their reaction time, and drive more safely. This solves the prior-art problems that prompt tones are hard to distinguish in noisy situations and that the warning picture cannot indicate a direction according to the user's body posture.
3. The invention allows interaction with the acquired panoramic image and indication animation through gestures, voice, or touch, and assists driving with voice broadcasts of obstacle information, solving the prior-art problem that systems mainly display two-dimensional icons and numeric information.
4. By connecting the AR client to the target vehicle in a remote environment, the invention enables remote assistance of the driver inside the target vehicle through voice communication, solving the prior-art problem that the vehicle's condition and the potential risks of the external environment cannot be monitored remotely.
Drawings
Fig. 1 is a flowchart of a vehicle panorama perspective display method according to the present invention.
Fig. 2 is a flowchart of a method for obtaining a panoramic image as described in fig. 1.
Fig. 3 is a system frame diagram of the vehicle panorama perspective display method of fig. 1.
FIG. 4 is a diagram illustrating an environment before using an AR client in an embodiment.
Fig. 5 is a schematic view of an environment screen after using an AR client in an embodiment.
Fig. 6 is a schematic diagram of a display area of the AR client shown in fig. 1.
Detailed Description
The embodiments of the invention provide a vehicle panoramic perspective display method, system, and storage medium, solving the prior-art problems that 3D panoramic perspective cannot be achieved, the direction of risk cannot be perceived, no three-dimensional interactive experience is provided, and remote assistance is impossible.
The technical scheme in the embodiment of the invention aims to solve the technical problems, and the overall thought is as follows:
the AR user end completes networking, performs identity authentication on the current user, and configures driving preference corresponding to the user according to an identity authentication result; then entering a working mode, acquiring a plurality of surrounding environment images of a target vehicle through each camera to obtain a panoramic image, and obtaining the panoramic image corresponding to the current visual angle of the user according to real-time coordinates and pose data of the AR user side; detecting an obstacle around the target vehicle at the same time, and generating corresponding instruction animation and prompt instructions according to the obstacle; and merging the panoramic image and the indication animation and presenting the merged panoramic image and the indication animation to the AR user side.
The vehicle panoramic perspective display method, system, and storage medium of the invention are described in further detail below with reference to specific embodiments and the drawings.
Referring to fig. 1, the invention provides a vehicle panorama perspective display method, which comprises the following steps:
step S10: the AR user side enters a working mode after networking is completed.
The AR client is an AR-HUD, i.e. an augmented reality head-up display, which may be AR glasses or a display helmet.
Before the AR user side enters the working mode, the method further comprises the following steps: and carrying out identity authentication on the current user, and configuring the driving preference of the user according to the identity authentication result.
The AR user end binds at least 1 target vehicle and configures at least 1 user.
It should be noted that, when the user uses the AR user terminal for the first time, personal characteristic information needs to be recorded, where the personal characteristic information may be voice, fingerprint or face.
After the AR client is interconnected with the target vehicle over WIFI, identity authentication proceeds as follows. If voice authentication is used, the user speaks any utterance and the system compares it with the stored voice data; if the comparison succeeds, authentication passes. If fingerprint authentication is used, the user touches the designated key with the finger whose information was enrolled to complete authentication. If face authentication is used, the system authenticates automatically once the user puts on the AR client; if the face information matches the stored face data, authentication passes.
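The dispatch over the three authentication modalities can be sketched as below. `UserProfile` and the exact-match comparison are hypothetical stand-ins: a real system would compare voice, fingerprint, or face feature embeddings against enrolled templates under a similarity threshold rather than test equality.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    name: str
    # Enrolled templates keyed by modality ("voice", "fingerprint", "face").
    # In practice these would be feature vectors, not plain strings.
    templates: dict = field(default_factory=dict)

def authenticate(profile: UserProfile, modality: str, captured: str) -> bool:
    """Return True when the captured sample matches the enrolled template.

    Exact string equality stands in for the embedding comparison that a
    real voiceprint/fingerprint/face matcher would perform.
    """
    stored = profile.templates.get(modality)
    return stored is not None and stored == captured

user = UserProfile("driver_a", {"voice": "vp_001", "face": "fc_001"})
assert authenticate(user, "voice", "vp_001")        # enrolled, matches
assert not authenticate(user, "fingerprint", "x")   # modality not enrolled
```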
After the identity authentication of the user is completed, the system automatically configures the driving preference corresponding to the user; the driving preference includes at least data of vehicle body mirror position information, driving seat position information, multimedia control information, and the like.
The step S10 further includes: an AR user end coordinate system is defined, wherein the AR user end coordinate system takes the center of a target vehicle as an origin, the width direction is an X axis, the length direction is a Y axis, and the height direction is a Z axis.
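As a minimal sketch of this vehicle-centred coordinate system, a world-frame point can be transformed into the AR-client frame; the convention that +X points along the vehicle's width (to its right) and +Y along its length (forward), and the yaw-only rotation, are simplifying assumptions not stated in the patent.

```python
import math

def world_to_vehicle(point_w, vehicle_center_w, yaw_rad):
    """Transform a world-frame point into the AR-client coordinate system:
    origin at the target vehicle's centre, X along the width, Y along the
    length, Z along the height (yaw-only rotation for simplicity)."""
    dx = point_w[0] - vehicle_center_w[0]
    dy = point_w[1] - vehicle_center_w[1]
    dz = point_w[2] - vehicle_center_w[2]
    c, s = math.cos(-yaw_rad), math.sin(-yaw_rad)
    return (c * dx - s * dy, s * dx + c * dy, dz)

# Vehicle centred at (10, 5, 0), heading along world +Y (yaw 0): a point
# 2 m to its right and 3 m ahead maps to (2, 3, 0) in the vehicle frame.
assert world_to_vehicle((12, 8, 0), (10, 5, 0), 0.0) == (2.0, 3.0, 0.0)
```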
After the interconnection between the AR client and the target vehicle is completed, step S20 may be executed.
Step S20: a plurality of surrounding environment images of a target vehicle are acquired, and the surrounding environment images are spliced to obtain a panoramic image.
Referring to fig. 2, the step S20 specifically includes:
step S21: the method comprises the steps of respectively arranging fisheye cameras around a body of a target vehicle, respectively arranging the bottom and the top, acquiring images through six paths of fisheye cameras, carrying out distortion elimination processing on the acquired images by combining a fisheye image distortion elimination algorithm and built-in matrix parameters and deformation coefficients of the fisheye cameras, then calibrating the cameras, and carrying out corresponding projection transformation according to real position information of objects in the images.
It should be noted that adding bottom and top cameras to the cameras arranged around the vehicle body avoids blind spots as far as possible and improves information-interaction efficiency.
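As a hedged illustration of the de-distortion step, a point under the equidistant fisheye model (radius r = f·θ) can be remapped to an ideal pinhole projection (r′ = f·tan θ). The patent does not name a specific camera model, so the equidistant model here is only one common choice; real calibration also involves the full intrinsic matrix and polynomial distortion coefficients.

```python
import math

def fisheye_to_pinhole(x, y, f):
    """Map a pixel offset (x, y) from the principal point of an equidistant
    fisheye image (r = f * theta) to the corresponding offset in an ideal
    pinhole image (r' = f * tan(theta)). Sketch of one de-distortion step;
    assumes the equidistant model, which the patent does not specify."""
    r = math.hypot(x, y)
    if r == 0:
        return 0.0, 0.0
    theta = r / f                 # incidence angle under the equidistant model
    scale = (f * math.tan(theta)) / r
    return x * scale, y * scale

# The point on the optical axis is unchanged; off-axis points move outward,
# which is exactly the "stretching" that removes fisheye compression.
assert fisheye_to_pinhole(0, 0, 300.0) == (0.0, 0.0)
ux, uy = fisheye_to_pinhole(100.0, 0.0, 300.0)
assert ux > 100.0 and uy == 0.0
```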
Step S22: selecting a panoramic overlooking projection model as the bottom of the model, and constructing a position mapping table from a fisheye image to an undistorted image according to the projection model as the rest of the panoramic overlooking projection model.
Step S23: acquiring a plurality of vehicle surrounding environment videos through each camera, acquiring image frames from the videos, carrying out characteristic point tracking matching, judging characteristic points on an obstacle main plane according to tracking results, and acquiring light current values and relative depth values of the characteristic points according to different distances between the obstacle and a target vehicle and different light flow sizes and change rules; and then acquiring parameter values of the main plane of the obstacle, namely environmental parameters, according to the light current value and the relative depth value.
Step S24: matching the position mapping table according to the environment parameters, and performing texture mapping on a plurality of vehicle surrounding environment images according to the position mapping table to obtain panoramic images; the panoramic image comprises a perspective picture and a vehicle surrounding environment image which is filled and displayed.
When texture mapping is performed, in order to make the boundary regions transition smoothly, an overlapping region is reserved between the stitched images of adjacent cameras, and fusion processing is applied to that region.
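The fusion of the overlapping region can be sketched as linear feathering, a common (though here assumed, not patent-specified) choice: the weight of one camera falls from 1 to 0 across the overlap band while the other's rises, so the seam transitions smoothly.

```python
def feather_blend(row_a, row_b):
    """Blend one row of the overlap band between two adjacent camera images
    with linearly varying weights: camera A dominates at the left edge of
    the band, camera B at the right edge, removing the visible seam."""
    n = len(row_a)
    out = []
    for i in range(n):
        w = i / (n - 1) if n > 1 else 0.5   # weight given to camera B
        out.append((1 - w) * row_a[i] + w * row_b[i])
    return out

# Intensities ramp smoothly from camera A's value to camera B's value.
blended = feather_blend([100, 100, 100], [200, 200, 200])
assert blended == [100.0, 150.0, 200.0]
```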
After the panoramic image is acquired, step S30 may be performed.
Step S30: acquiring real-time coordinates of the AR user side, calculating pose data of the AR user side in real time, detecting obstacles around a target vehicle in real time, acquiring panoramic images corresponding to the current view angle of the user according to the real-time coordinates and the pose data, calculating the relative positions of the obstacles in the AR user side coordinate system, judging the relative distance between the obstacles and the target vehicle according to the relative positions, and generating indication animation and prompt instructions corresponding to the obstacles if the relative distance is smaller than or equal to a preset threshold value; otherwise, continuing to detect the obstacle around the target vehicle.
The pose data are acquired with an attitude sensor and comprise displacement data and angle data; the attitude sensor comprises several accelerometers and gyroscopes.
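One standard way (assumed here, not prescribed by the patent) to fuse those accelerometer and gyroscope readings into an orientation angle is a complementary filter: the gyroscope integrates smoothly over short horizons while the accelerometer corrects its long-term drift.

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into one orientation angle.
    `alpha` weights the drift-prone but smooth gyro integral against the
    noisy but drift-free accelerometer angle (0.98 is a typical value)."""
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle

# Stationary device: gyro reads 0 and the accelerometer agrees with the
# current angle, so the estimate stays put.
est = complementary_filter(10.0, 0.0, 10.0, 0.01)
assert abs(est - 10.0) < 1e-9
```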
The indication animation is a virtual object generated by the system and superimposed into the user's three-dimensional environment. The virtual objects include at least a digital road image, direction arrows, step distance, speed limit, speed, engine speed, battery voltage, engine faults, tyre indicator lamps, fatigue-driving warnings, water-temperature warnings, music playback, weather forecasts, voice interaction, fuel consumption, and obstacle information.
In one possible embodiment, regarding lane keeping, which is especially necessary in severe weather and dark conditions, the AR user side may highlight the driving path so that the current lane stands out, reducing vehicle deviation.
In one possible embodiment, regarding the detection of critical road events: the driver must keep track of vehicles, road hazards, lanes, pedestrians, and traffic signs while controlling the vehicle's speed and direction. These tasks add physical and mental burden and are especially dangerous for the elderly and for those with slower cognitive responses. A vivid, three-dimensional alarm that reminds the driver of danger on the road can therefore help reduce the driver's workload and, in turn, the occurrence of traffic accidents; displaying obstacle information and indication arrows can help the driver identify dangerous events and, to some extent, shorten reaction time.
In one possible embodiment, regarding night vision, the AR user side may indicate the position information of a pedestrian or vehicle so that the driver can grasp the information directly, making information transfer more efficient.
In one possible embodiment, the prompt instruction is a vibration instruction, so that the user can perceive both the degree of risk the obstacle poses to the target vehicle and the direction of that risk. The degree of risk is conveyed mainly by vibration intensity; the direction of risk is conveyed by vibration at the corresponding position of the AR user side.
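One way to realize such a vibration instruction is to derive the intensity from the obstacle's distance and the vibration site from its bearing. The mapping below is purely illustrative; the linear intensity ramp and the four-zone layout are assumptions, not details from the patent:

```python
import math

def vibration_command(x, y, threshold_m=2.0):
    """Map an obstacle's position (AR user-side coordinates: vehicle
    center at the origin, X along the width axis, Y along the length
    axis) to a vibration command.

    Intensity ramps linearly from 0.0 at the threshold distance to
    1.0 at the vehicle itself; the vibration site is one of four
    zones (front/rear/left/right) chosen by the obstacle's bearing.
    Returns None when the obstacle is beyond the threshold.
    """
    distance = math.hypot(x, y)
    if distance > threshold_m:
        return None
    intensity = 1.0 - distance / threshold_m
    if abs(y) >= abs(x):
        site = "front" if y > 0 else "rear"
    else:
        site = "right" if x > 0 else "left"
    return {"site": site, "intensity": round(intensity, 2)}

# An obstacle 1 m directly ahead -> front zone at half intensity.
cmd = vibration_command(0.0, 1.0)
```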
Step S40: and merging and presenting the panoramic image and the indication animation to the AR user side.
The step S40 further includes:
when the user is in the in-car environment, the user interacts with the panoramic image and the indication animation through gestures, voice or touch; and the system interacts with the user through the obstacle voice broadcast.
Wherein the gesture includes at least click, grab and expand.
It should be noted that the AR user side is further provided with a depth camera and a pressure sensor to determine the interaction the user intends to perform: the depth camera collects point cloud data of the user's gestures, and the pressure sensor collects touch information. Both the point cloud data and the touch information are sent to the target vehicle over WIFI, where the corresponding interaction information is recognized and the interaction is completed.
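As an illustration of this data path, the sketch below packages point-cloud and touch samples into a single message of the kind that could be sent over the WIFI link; the JSON layout and all field names are assumptions introduced for the example, not part of the patent:

```python
import json

def pack_interaction_frame(points, touch_pressure, timestamp_ms):
    """Serialize one frame of interaction data for transmission.

    points         : list of (x, y, z) gesture points from the depth camera
    touch_pressure : reading from the pressure sensor (arbitrary units)
    timestamp_ms   : capture time, letting the vehicle order frames
    Returns a UTF-8 encoded JSON payload ready for a WIFI socket.
    """
    frame = {
        "ts": timestamp_ms,
        "point_cloud": [list(p) for p in points],
        "touch": touch_pressure,
    }
    return json.dumps(frame).encode("utf-8")

payload = pack_interaction_frame([(0.1, 0.2, 0.5)], touch_pressure=0.8,
                                 timestamp_ms=1700000000000)
```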
When the user is in a remote environment, the user assists the driver by voice; the driver is inside the target vehicle bound to the AR user side configured by that user.
It should be noted that the microphone and the speaker cooperate to realize the obstacle voice broadcasting and voice assistance described above.
Based on the same inventive concept, the present invention further provides a vehicle panoramic perspective display system; referring to fig. 3, the system comprises:
and the acquisition module is used for: the vehicle body comprises at least cameras respectively arranged at the periphery of the vehicle body and at the bottom and the top, wherein the cameras are used for acquiring the surrounding environment images of the target vehicle.
A storage module: used to store user identities and their corresponding driving preferences.
The driving preference includes at least vehicle mirror position information, driving seat position information, and multimedia control information.
An identification module: used to authenticate the user's identity so as to configure the user's driving preference according to the user identity held in the storage module.
A display module: namely the AR user side, used to display the panoramic image and the indication animation; the panoramic image comprises a perspective picture and a fill-displayed vehicle surrounding environment image.
A first communication module: used to connect the AR user side to the target vehicle over WIFI for in-vehicle or remote communication.
A second communication module: used to assist in-vehicle or remote communication through a microphone and a speaker; the microphone serves the user's voice interaction, and the speaker serves the obstacle voice broadcasting.
An interaction module: when the user is in the in-vehicle environment, the user interacts with the panoramic image and the indication animation through gestures, voice, or touch, while the second communication module interacts with the user through obstacle voice broadcasting; the gestures include at least click, grab, and expand.
When the user is in a remote environment, the module is used to assist the driver by voice; the driver is inside the target vehicle bound to the AR user side configured by that user.
The system further comprises:
A measurement module: used to measure the pose data of the AR user side in real time.
The acquisition module is further used to obtain the real-time coordinates of the AR user side and to obtain the panoramic image corresponding to the user's current view angle according to the real-time coordinates and the pose data.
In one possible embodiment, the current user wants to observe the environmental information on the right side of the driver's seat, including both in-vehicle and out-of-vehicle information. Fig. 4 shows the environmental image the user can observe before using the AR user side: only the in-vehicle environment and the portion of the out-of-vehicle environment visible through the windows appear. Fig. 5 shows the environmental image after using the AR user side: it includes not only the in-vehicle environment but also the out-of-vehicle environment otherwise hidden by the vehicle door, thereby realizing the perspective display function.
A detection module: used to detect obstacles around the target vehicle in real time.
A first generation module: used to perform the optical flow tracking operation and texture mapping on the plurality of vehicle surrounding environment images acquired by the calibrated cameras, so as to generate the corresponding panoramic image.
A second generation module: used to calculate the relative position of an obstacle in the AR user-side coordinate system and to generate the indication animation and prompt instruction corresponding to the obstacle according to that relative position.
Referring to fig. 6, the display module W4 includes at least a first display area W4-1, a second display area W4-2, and a third display area W4-3; each display area cooperatively displays the indication animation according to the relative position of the obstacle. The first display area W4-1 and the second display area W4-2 display perspective pictures; the third display area W4-3 fill-displays the vehicle surrounding environment images acquired by the cameras.
Based on the same inventive concept, the invention also provides a storage medium, which is a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the vehicle panoramic perspective display method described above is realized.
In summary, the invention provides a vehicle panoramic perspective display method, system, and storage medium. By networking the AR user side with the target vehicle, the method captures the target vehicle's surroundings, generates the corresponding panoramic image and indication animation, and fuses and presents both on the AR user side. The AR user side can perceive the position of an obstacle, quickly judge its degree of influence, and generate the corresponding indication animation and prompt instruction, so that the user can perceive the direction of risk and take appropriate driving action; a remote user can also assist the driver inside the target vehicle by voice, improving driving safety while providing a three-dimensional interactive experience.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present invention thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the division of the elements is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted or not performed.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the modules according to embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
While the invention has been described in conjunction with the specific embodiments above, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, all such alternatives, modifications, and variations are included within the spirit and scope of the following claims.

Claims (10)

1. A vehicle panorama perspective display method, comprising the steps of:
s10: the AR user terminal enters a working mode after networking is completed;
s20: collecting a plurality of surrounding environment images of a target vehicle, and splicing the surrounding environment images to obtain a panoramic image;
s30: acquiring real-time coordinates of the AR user side, calculating pose data of the AR user side and detecting obstacles around a target vehicle in real time, acquiring a panoramic image corresponding to a current view angle of a user according to the real-time coordinates and the pose data, and generating corresponding indication animation and prompt instructions according to the obstacles;
s40: and merging and presenting the panoramic image and the indication animation to the AR user side.
2. The vehicle panorama perspective display method according to claim 1, wherein in step S10, before the AR user side enters the operation mode, the method further comprises: authenticating the identity of the current user, and configuring the driving preference of the user according to the identity authentication result; the AR user end binds at least 1 target vehicle and configures at least 1 user.
3. The vehicle panorama perspective display method according to claim 2, wherein the step S10 further comprises: an AR user end coordinate system is defined, wherein the AR user end coordinate system takes the center of a target vehicle as an origin, the width direction is an X axis, the length direction is a Y axis, and the height direction is a Z axis.
4. The vehicle panorama perspective display method according to claim 3, wherein the step S20 is specifically:
s21: cameras are respectively arranged at the periphery, the bottom and the top of a vehicle body of a target vehicle, and calibration work is carried out on each camera;
s22: selecting a projection model, and generating a corresponding mapping table according to the projection model;
s23: collecting a plurality of vehicle surrounding environment images through each camera, and performing optical flow tracking operation on the plurality of vehicle surrounding environment images to obtain environment parameters in the projection model;
s24: according to the environment parameter matching mapping table, performing texture mapping on a plurality of vehicle surrounding environment images according to the mapping table so as to obtain panoramic images; the panoramic image comprises a perspective picture and a vehicle surrounding environment image which is filled and displayed.
5. The vehicle panorama perspective display method according to claim 4, wherein the generating of the corresponding instruction animation and prompt instruction according to the obstacle in step S30 comprises: calculating the relative position of an obstacle in the AR user side coordinate system, judging the relative distance between the obstacle and a target vehicle according to the relative position, and generating an indication animation and a prompt instruction corresponding to the obstacle if the relative distance is smaller than or equal to a preset threshold value; otherwise, continuing to detect the obstacle around the target vehicle.
6. The vehicle panorama perspective display method according to claim 5, wherein the step S40 further comprises:
when the user is in the in-car environment, the user interacts with the panoramic image and the indication animation through gestures, voice or touch; the system interacts with the user through the voice broadcasting of the obstacle; the gesture at least comprises clicking, grabbing and expanding;
when the user is in a remote environment, the user assists the driver to drive the vehicle through voice, and the driver is in the target vehicle bound with the AR user side configured by the user.
7. A system employing the vehicle panorama perspective display method according to any one of claims 1-6, wherein the system comprises:
and the acquisition module is used for: the system at least comprises cameras respectively arranged at the periphery of a vehicle body and at the bottom and the top, wherein the cameras are used for acquiring the surrounding environment images of a target vehicle;
and a storage module: for storing the user identity and its corresponding ride preference;
and an identification module: the user authentication module is used for authenticating the user to configure the driving preference of the user according to the user identity in the storage module;
and a display module: the AR user terminal is used for displaying panoramic images and indication animations; the panoramic image comprises a perspective picture and a vehicle surrounding environment image which is filled and displayed;
a first communication module: the method comprises the steps of connecting an AR user end with a target vehicle through WIFI to perform in-vehicle communication or remote communication;
and a second communication module: for assisting in-vehicle communication or remote communication through a microphone and a speaker; the microphone is used for voice interaction of a user; the loudspeaker is used for voice broadcasting of the obstacle;
and an interaction module: when the user is in the in-car environment, the user interacts with the panoramic image and the indication animation through gestures, voice or touch; the second communication module interacts with the user through voice broadcasting of the obstacle; the gesture at least comprises clicking, grabbing and expanding;
when the user is in a remote environment, the method is used for assisting the driver to drive the vehicle through voice, and the driver is in a target vehicle bound with the AR user side configured by the user.
8. The system of claim 7, wherein the system further comprises:
and a measurement module: the method is used for measuring the pose data of the AR user side in real time;
the acquisition module is used for: the real-time coordinate acquiring module is used for acquiring real-time coordinates of the AR user side and acquiring a panoramic image corresponding to the current view angle of the user according to the real-time coordinates and the pose data;
and a detection module: the obstacle detection device is used for detecting obstacles around a target vehicle in real time;
a first generation module: the method comprises the steps of performing optical flow tracking operation and texture mapping on a plurality of vehicle surrounding environment images acquired by cameras completing calibration work so as to generate panoramic images corresponding to the plurality of vehicle surrounding environment images;
and a second generation module: and the method is used for calculating the relative position of the obstacle in the AR user side coordinate system and generating indication animation and a prompt instruction corresponding to the obstacle according to the relative position.
9. The system of claim 8, wherein the display module comprises at least a first display area, a second display area, and a third display area, each display area cooperatively displaying the indication animation according to a relative position of the obstacle;
the first display area and the second display area are used for displaying perspective pictures;
and the third display area is used for filling and displaying the vehicle surrounding environment image acquired by the camera.
10. A storage medium, being one of computer readable storage media, characterized in that a computer program is stored on the storage medium, and when the computer program is executed by a processor, the vehicle panorama perspective display method according to any one of claims 1-6 is implemented.
CN202310159190.0A 2023-02-24 2023-02-24 Vehicle panoramic perspective display method, system and storage medium Pending CN116208758A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310159190.0A CN116208758A (en) 2023-02-24 2023-02-24 Vehicle panoramic perspective display method, system and storage medium

Publications (1)

Publication Number Publication Date
CN116208758A true CN116208758A (en) 2023-06-02



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination