CN115626173A - Vehicle state display method and device, storage medium and vehicle - Google Patents

Vehicle state display method and device, storage medium and vehicle Download PDF

Info

Publication number
CN115626173A
CN115626173A (application CN202211286450.2A)
Authority
CN
China
Prior art keywords
vehicle
model
digital twin
driving scene
state information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211286450.2A
Other languages
Chinese (zh)
Inventor
高宝连
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Great Wall Motor Co Ltd
Original Assignee
Great Wall Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Great Wall Motor Co Ltd filed Critical Great Wall Motor Co Ltd
Priority to CN202211286450.2A priority Critical patent/CN115626173A/en
Publication of CN115626173A publication Critical patent/CN115626173A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/12Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to parameters of the vehicle itself, e.g. tyre models
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146Display means

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a vehicle state display method and device, a storage medium, and a vehicle, belonging to the technical field of vehicles. A digital twin driving scene that changes with the actual environment information is constructed from a three-dimensional model of the vehicle and the actual environment information around the vehicle; virtual state information of the part model corresponding to each vehicle part can then be displayed in the digital twin driving scene based on the actual state information of that part. By displaying the virtual state information of each part model in the digital twin driving scene, the actual state of every vehicle part is reflected directly and comprehensively, and the actual environment during driving is faithfully fed back, so that users can review their driving process more clearly. This improves the driving experience and also provides comprehensive, accurate data support for analysis of driving behavior, vehicle faults, and accidents.

Description

Vehicle state display method and device, storage medium and vehicle
Technical Field
The present application relates to the field of vehicle technologies, and in particular, to a vehicle state display method, apparatus, storage medium, and vehicle.
Background
Digital twin technology fully exploits data such as physical models, sensor updates, and operating history in a simulation process that integrates multiple disciplines, physical quantities, scales, and probabilities. It enables interaction and fusion between the real physical world and the virtual digital world, so as to reflect the entire life cycle of the corresponding physical entity.
Existing digital twin methods usually integrate a map with a positioning track. The vehicle model constructed in such methods is generally a generic vehicle contour model: only the real-time position of the vehicle can be presented on the map, while the vehicle state and the environmental information around the vehicle cannot be presented intuitively on the vehicle model.
Disclosure of Invention
The application provides a vehicle state display method, a vehicle state display device, a storage medium, and a vehicle, and aims to solve the prior-art problem that the vehicle state and the environmental information around the vehicle cannot be presented visually on a vehicle model.
In order to solve the above problems, the present application adopts the following technical solutions:
in a first aspect, an embodiment of the present application provides a vehicle state display method, where the method includes:
constructing a digital twin driving scene of a vehicle based on a three-dimensional model of the vehicle and actual environmental information around the vehicle; the digital twin driving scene changes along with the change of the actual environment information; the three-dimensional model comprises a plurality of part models, and different part models correspond to different vehicle parts;
acquiring actual state information of the vehicle parts;
and displaying virtual state information of a part model corresponding to the vehicle part in the digital twin driving scene based on the actual state information of the vehicle part.
In an embodiment of the present application, the three-dimensional model is constructed by the following steps:
determining a part model and a texture mapping relation matched with the identification information based on the identification information of the vehicle; different identification information corresponds to different part models and texture mapping relations;
and constructing the three-dimensional model based on the part model and the texture mapping relation so that the three-dimensional model has the same texture characteristics as the vehicle.
In an embodiment of the present application, constructing a digital twin driving scene of a vehicle based on a three-dimensional model of the vehicle and actual environmental information around the vehicle includes:
generating a three-dimensional scene graph of the vehicle based on actual environment information around the vehicle;
and fusing the three-dimensional model and the three-dimensional scene graph to obtain a digital twin driving scene of the vehicle.
In an embodiment of the present application, generating a three-dimensional scene graph of the vehicle based on actual environment information around the vehicle includes:
acquiring map data of a road section where the vehicle is located and/or image data of the vehicle in at least four preset directions;
and generating the three-dimensional scene graph based on the map data and/or the image data.
In one embodiment of the present application, the part model includes a first dynamic model and a second dynamic model;
the displaying of virtual state information of the part model corresponding to the vehicle part in the digital twin driving scene, based on the actual state information of the vehicle part, includes:
in a case where the vehicle part corresponds to the first dynamic model, displaying virtual state information of the part model corresponding to the first dynamic model on the three-dimensional model in the digital twin driving scene; and/or
in a case where the vehicle part corresponds to the second dynamic model, displaying virtual state information of the part model corresponding to the second dynamic model in a preset area of the digital twin driving scene.
In an embodiment of the present application, the method further includes:
acquiring a monitoring video of a user and a display picture of a host system;
and displaying the monitoring video and the display picture in the digital twin driving scene.
In an embodiment of the present application, after the monitoring video and the display picture are shown in the digital twin driving scene, the method further includes:
generating a driving behavior analysis report based on the digital twin driving scene in a case where it is detected that the vehicle engine has been turned off.
In a second aspect, based on the same inventive concept, embodiments of the present application provide a vehicle state display apparatus, including:
the driving scene construction module is used for constructing a digital twin driving scene of the vehicle based on a three-dimensional model of the vehicle and actual environment information around the vehicle; the digital twin driving scene changes along with the change of the actual environment information; the three-dimensional model comprises a plurality of part models, and different part models correspond to different vehicle parts;
the acquisition module is used for acquiring the actual state information of the vehicle parts;
and the display module is used for displaying the virtual state information of the part model corresponding to the vehicle part in the digital twin driving scene based on the actual state information of the vehicle part.
In an embodiment of the present application, the apparatus further comprises a vehicle model building module, the vehicle model building module comprising:
the determining submodule is used for determining a part model and a texture mapping relation matched with the identification information based on the identification information of the vehicle; different identification information corresponds to different part models and texture mapping relations;
and the model construction sub-module is used for constructing the three-dimensional model based on the part model and the texture mapping relation so that the three-dimensional model has the same texture characteristics as the vehicle.
In an embodiment of the present application, the driving scenario construction module includes:
the three-dimensional scene graph generation submodule is used for generating a three-dimensional scene graph of the vehicle based on actual environment information around the vehicle;
and the fusion submodule is used for fusing the three-dimensional model and the three-dimensional scene graph to obtain a digital twin driving scene of the vehicle.
In an embodiment of the present application, the three-dimensional scene graph generation sub-module includes:
the data acquisition unit is used for acquiring map data of a road section where the vehicle is located and/or image data of the vehicle in at least four preset directions;
a generating unit configured to generate the three-dimensional scene graph based on the map data and/or the image data.
In one embodiment of the present application, the part model includes a first dynamic model and a second dynamic model; the display module comprises:
the first display submodule is used for displaying virtual state information of the part model corresponding to the first dynamic model on the three-dimensional model in the digital twin driving scene in a case where the vehicle part corresponds to the first dynamic model;
and the second display submodule is used for displaying virtual state information of the part model corresponding to the second dynamic model in a preset area of the digital twin driving scene in a case where the vehicle part corresponds to the second dynamic model.
In an embodiment of the present application, the apparatus further includes:
the image acquisition module is used for acquiring a monitoring video of a user and a display image of the host system;
and the picture display module is used for displaying the monitoring video and the display picture in the digital twin driving scene.
In an embodiment of the present application, the apparatus further includes:
a report generating module for generating a driving behavior analysis report based on the digital twin driving scene when it is detected that the vehicle engine has been turned off, after the monitoring video and the display picture have been shown in the digital twin driving scene.
In a third aspect, based on the same inventive concept, embodiments of the present application provide a storage medium, where machine-executable instructions are stored in the storage medium, and when executed by a processor, the machine-executable instructions implement the vehicle status display method proposed in the first aspect of the present application.
In a fourth aspect, based on the same inventive concept, embodiments of the present application provide a vehicle, including a processor and a memory, where the memory stores machine executable instructions that can be executed by the processor, and the processor is configured to execute the machine executable instructions to implement the vehicle state displaying method provided in the first aspect of the present application.
Compared with the prior art, the method has the following advantages:
According to the vehicle state display method provided by the embodiments of the application, a digital twin driving scene that changes with the actual environment information is constructed from the three-dimensional model of the vehicle and the actual environment information around the vehicle, and virtual state information of the part model corresponding to each vehicle part can then be displayed in the digital twin driving scene based on the actual state information of that part. By splitting the three-dimensional model of the vehicle into different part models and displaying the virtual state information of each part model in the digital twin driving scene, the actual state of the vehicle parts is reflected directly and comprehensively, and the environment information during driving is faithfully fed back, so that users can review their driving process more clearly. This improves the driving experience and also provides comprehensive, accurate data support for analysis of the user's driving behavior, vehicle faults, and accidents.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating steps of a vehicle status displaying method according to an embodiment of the present application.
Fig. 2 is a functional block diagram of a vehicle status display device according to an embodiment of the present application.
Reference numerals: 200-vehicle status display means; 201-driving scene construction module; 202-an obtaining module; 203-display module.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. The described embodiments are obviously only a part of the embodiments of the present invention rather than all of them; all other embodiments obtained by a person skilled in the art without creative effort based on the embodiments herein fall within the protection scope of the present invention.
Referring to fig. 1, a vehicle state display method according to the present application is shown, which may specifically include the following steps:
s101: the method comprises the steps of constructing a digital twin driving scene of a vehicle based on a three-dimensional model of the vehicle and actual environment information around the vehicle, wherein the digital twin driving scene changes along with the change of the actual environment information, the three-dimensional model comprises a plurality of part models, and different part models correspond to different vehicle parts.
It should be noted that the execution subject in this embodiment may be a computing service device with data processing, network communication, and program running functions, such as a cloud server, or an electronic device with the above functions, such as an ECU (Electronic Control Unit), a BCM (Body Control Module), or a VCU (Vehicle Control Unit).
In the present embodiment, the vehicle can be decomposed into a plurality of vehicle components, such as the body contour, doors, windows, sunroof, rearview mirrors, turn signals, headlights, tail lights, fog lights, steering wheel, center screen, dashboard, seats, and engine, and a digital model can be constructed for each vehicle component to obtain the corresponding component model. Specifically, lightweight models can be built with a three-dimensional engine tool such as 3DMAX or C4D to realize the simulated construction of the component models.
It should be noted that the component models of the vehicles may be stored in a pre-configured digital twin engine, and a separate model management module is provided in the digital twin engine to manage the component models of the vehicles of different models, where the vehicles of different models correspond to different component models.
Specifically, the VCU may obtain the model information of the vehicle by identifying its Vehicle Identification Number (VIN) and transmit the model information to the digital twin engine, so that the digital twin engine matches the component models corresponding to that vehicle model in the model management module and synthesizes the matched component models to obtain the three-dimensional model of the vehicle.
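The matching step above can be sketched in code. This is a minimal illustration only: the part library, the model codes, and the VIN-decoding rule are all hypothetical and not taken from the patent.

```python
# Hypothetical sketch of matching part models by vehicle model.
# PART_LIBRARY, decode_model_code, and the VIN prefixes are invented
# for illustration; a real system would decode the VIN against the
# manufacturer's own tables.

PART_LIBRARY = {
    "model_a": ["body_shell", "door_x4", "sunroof", "headlight_led"],
    "model_b": ["body_shell", "door_x2", "headlight_halogen"],
}

def decode_model_code(vin: str) -> str:
    # Toy lookup keyed on the first three VIN characters (the WMI).
    return {"LGW": "model_a", "LSV": "model_b"}.get(vin[:3], "model_a")

def match_part_models(vin: str) -> list[str]:
    """Return the part-model identifiers to assemble for this vehicle."""
    return PART_LIBRARY[decode_model_code(vin)]
```

Synthesizing the matched part models into a complete three-dimensional model would then be the job of the digital twin engine.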
It should be noted that the component model of the vehicle may be divided into a static component model and a dynamic component model according to whether the state information changes, where the static component model is a component model whose state information does not change, and the dynamic component model is a component model whose state information changes. When the three-dimensional model of the vehicle is constructed, the state information of the dynamic part model is set as a default initial state, and after the three-dimensional model and the digital twin driving scene are constructed, the state information of the dynamic part model is changed along with the change of the actual state of the vehicle part corresponding to the dynamic part model.
In the present embodiment, the digital twin driving scene may be constructed by a digital twin engine stored in advance. The digital twin engine can be stored in a cloud server or locally in the vehicle.
In this embodiment, a camera, a lidar, a millimeter-wave radar, and/or similar sensors can be used to acquire the actual environment information around the vehicle in real time. The VCU transmits the acquired actual environment information to the digital twin engine, which maps the actual environment information from the physical space into the virtual space and then superimposes and fuses it with the three-dimensional model, so that the digital twin driving scene of the vehicle is constructed in the virtual space. The actual environment information around the vehicle may include, but is not limited to, road information, traffic information, obstacle information, and building information.
In the embodiment, after the digital twin engine completes construction of the digital twin driving scene, data including the digital twin driving scene is returned to the VCU, and the digital twin driving scene is displayed on the vehicle-mounted display terminal.
It should be noted that, the VCU may control the vehicle-mounted display terminal to display the digital twin driving scene, and may also send the digital twin driving scene to a preset terminal such as a mobile phone terminal and a background server terminal to be displayed and/or stored.
In the embodiment, the actual environment information around the vehicle is added into the construction of the digital twin driving scene of the vehicle, so that the digital twin driving scene can change along with the change of the actual environment information, the environment characteristics of a physical space can be truly restored, and meanwhile, as the three-dimensional model of the vehicle is split into different part models, more vehicle details can be displayed in the digital twin driving scene, and the real restoration of the vehicle state can be realized.
S102: actual state information of the vehicle component is acquired.
In this embodiment, the actual state information of a vehicle component refers to the operating-state data of that component during driving, such as engine speed, vehicle speed, accelerator-pedal force, braking force, turn-signal state, and window state. In a specific implementation, a preprocessing module can further be provided to filter redundant and junk data out of the operating-state data, so that only real and valid data are retained.
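Such preprocessing could, for instance, drop consecutive duplicate samples and out-of-range readings. The field name and the valid range below are assumptions for illustration, not values from the patent.

```python
def preprocess(samples):
    """Drop junk readings (None or out-of-range engine speed) and
    consecutive duplicate samples, keeping only real, valid data."""
    cleaned = []
    last = None
    for s in samples:
        if s is None or not (0 <= s["engine_rpm"] <= 10000):
            continue  # junk or physically impossible value
        if s == last:
            continue  # redundant duplicate of the previous sample
        cleaned.append(s)
        last = s
    return cleaned
```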
S103: and displaying virtual state information of a part model corresponding to the vehicle part in a digital twin driving scene based on the actual state information of the vehicle part.
In the present embodiment, based on the actual state information of the vehicle components, the virtual state information of the component model corresponding to each vehicle component in the digital twin driving scenario may be determined. It should be noted that the actual state information represents the state of the vehicle component in the physical space, the virtual state information represents the state of the component model corresponding to the vehicle component in the virtual space, and the actual state information of the vehicle component and the virtual state information of the corresponding component model are consistent.
Illustratively, when a user turns on the left turn signal with the light control lever, the VCU acquires the user's driving behavior data of triggering the left turn signal via the lever and, at the same time, acquires that the actual state of the left turn signal is on. It accordingly determines that the virtual state information of the left-turn-signal model in the three-dimensional model is also on, and transmits this virtual state information to the digital twin engine, which assigns the state to the left-turn-signal model in the digital twin driving scene, thereby completing the state mapping from the left turn signal in the physical space to its model in the virtual space.
In this embodiment, after acquiring the virtual state information of a component model, the digital twin engine updates the display state of the component model to the state corresponding to the virtual state information, thereby updating the digital twin driving scene. The updated scene is returned to the VCU, which displays it synchronously on the vehicle-mounted display terminal, so that the user can view the actual state information of the vehicle components more clearly on the terminal, realizing real-time vehicle state feedback.
Specifically, when the virtual state information of the component model corresponding to a vehicle component changes from its historical value to a new value, an animation of the component model changing from the historical state to the new state is displayed in the digital twin driving scene on the vehicle-mounted display terminal, such as the rotation of the steering wheel model, the opening or closing of a window model, or the movement of the accelerator pedal model. Illustratively, when the user turns the steering wheel, the turning angle and turning rate of the steering wheel are collected, and the turning motion of the steering wheel model is then played in the digital twin driving scene based on the collected angle and rate, so as to feed back the real motion of the steering wheel in the physical space.
In this embodiment, playing an animation that reflects the change process of the actual state information provides a more realistic and fluid viewing experience, further improving the user experience.
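Playing such a change as an animation amounts to interpolating between the historical and the new state at the display frame rate. A minimal sketch for the steering-wheel case follows; the function name, the fixed frame rate, and the assumption of a non-zero turning rate are illustrative choices, not details from the patent.

```python
def animation_frames(start_deg, target_deg, rate_deg_s, fps=30):
    """Return the per-frame steering-wheel angles so the virtual model
    turns at the same rate as the physical wheel (rate must be non-zero)."""
    step = rate_deg_s / fps          # degrees advanced per displayed frame
    direction = 1 if target_deg >= start_deg else -1
    angle = start_deg
    frames = [angle]
    while abs(target_deg - angle) > abs(step):
        angle += direction * abs(step)
        frames.append(round(angle, 3))
    frames.append(target_deg)        # land exactly on the final angle
    return frames
```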
In a specific implementation, the part models can be numbered and each number mapped to a data interface, with the relation between a part-model action and the value obtained from that data interface defined in advance; when the acquired data of a vehicle part changes, the part model changes synchronously and is displayed in real time to reflect the real action of the vehicle part.
For example, the part models may be numbered and each model number linked to a data interface ID; that is, different data interfaces form the data transmission channels between the part models and their corresponding vehicle parts. For the linkage between the brake lamp and the brake lamp model, the relation between the action of the brake lamp model and the value obtained from the corresponding data interface may be set as follows: when the value representing the brake-lamp state that the VCU transmits to the data interface bound to the brake lamp is 1, the brake lamp model turns on; when the value transmitted by the VCU is 0, the brake lamp model turns off.
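The brake-lamp linkage above can be sketched as a small binding table from interface IDs to part-model actions. The interface IDs and model numbers below are hypothetical; only the 1-on / 0-off convention comes from the text.

```python
# Hypothetical linkage table: data interface ID -> (part-model number,
# mapping from raw interface value to model action). Follows the
# brake-lamp example in the text, where 1 means on and 0 means off.
INTERFACE_BINDINGS = {
    "IF_BRAKE_LAMP": ("brake_lamp_model", {1: "on", 0: "off"}),
    "IF_LEFT_TURN":  ("left_turn_model",  {1: "blink", 0: "off"}),
}

def apply_interface_value(interface_id: str, value: int):
    """Map a raw value from the VCU to (model_number, action) so the
    digital twin engine can update the part model synchronously."""
    model, actions = INTERFACE_BINDINGS[interface_id]
    return model, actions[value]
```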
In this embodiment, splitting the three-dimensional model of the vehicle into different part models allows the actual state information of each vehicle part to be fed back visually and comprehensively in the digital twin driving scene, while the actual environment information during driving is faithfully reproduced, so that users can review their driving process more clearly and enjoy a better driving experience. At the same time, comprehensive and accurate data support can be provided for driving behavior analysis, vehicle fault analysis, and accident analysis. For example, after a traffic accident, the display record of the digital twin driving scene can be retrieved to obtain the state change process of each vehicle part and the actual environment information during the drive; the accident scene can thus be faithfully restored, providing real and effective data support for judging whether the user made an erroneous operation, whether other vehicles were driving illegally, and whether obstacles were present.
In one possible embodiment, the three-dimensional model may be specifically constructed by the following steps:
s201: and determining a part model and a texture mapping relation matched with the identification information based on the identification information of the vehicle.
It should be noted that the identification information represents the body type of the vehicle, that is, the general structure or shape of the vehicle, such as the number of doors and windows; the body type can be obtained by identifying the VIN of the vehicle. Different identification information corresponds to different part models and texture mapping relations.
S202: and constructing the three-dimensional model based on the part model and the texture mapping relation so that the three-dimensional model has the same texture characteristics as the vehicle.
In the present embodiment, since the appearances of vehicles of different vehicle types are greatly different, in order to enable the three-dimensional model to truly restore the vehicle, the component model corresponding to the vehicle is matched based on the identification information of the vehicle, and meanwhile, the component model is subjected to texture mapping, so that the three-dimensional model has the same texture characteristics as the vehicle.
In a specific implementation, the three-dimensional model can be given the same appearance as the vehicle through texture mapping, specifically UV mapping. Analogous to the X, Y and Z axes of a spatial model, "UV" stands for the u and v texture-mapping coordinates, which define the position of each point on the image. UV mapping makes each point of the image correspond precisely to a point on the surface of the three-dimensional digital vehicle model, and smooth image interpolation fills the gaps between points.
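The interpolation step mentioned above can be illustrated with a minimal bilinear texture lookup: a surface point carries (u, v) coordinates in [0, 1] × [0, 1], and values between exact texel centres are smoothly interpolated. This is a generic sketch of the technique, not code from the patent.

```python
def sample_texture(texture, u: float, v: float):
    """Bilinearly sample a 2-D list of texel values at (u, v) in [0, 1]^2."""
    h, w = len(texture), len(texture[0])
    # Map normalized UV coordinates onto texel indices.
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Interpolate horizontally on the two bracketing rows, then vertically.
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bot = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Sampling at a gap between texels, e.g. (0.5, 0.5) on a 2×2 texture, returns the smoothed average of its neighbours, which is the "image smooth interpolation" the description refers to.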
In a possible implementation, S101 may specifically include the following steps:
s101-1: and generating a three-dimensional scene graph of the vehicle based on the actual environment information around the vehicle.
In this embodiment, the vehicle-mounted front camera, rear camera and the cameras in the two side rearview mirrors can collect video data of the vehicle in at least four preset directions; a multimedia serial-link deserializer synchronises the video signals of the four cameras, and image data of the vehicle in the at least four preset directions at the same moment is obtained through image recognition.
In this embodiment, the digital twin engine may further obtain, from the VCU, map data of the road section where the vehicle is located and the GPS (Global Positioning System) position of the vehicle, convert the GPS position into the coordinate system of the digital twin map through coordinate conversion and position correction, load the three-dimensional model at the converted position, and move the three-dimensional model linearly as the GPS position changes.
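The coordinate conversion and linear movement can be sketched as follows. A production system would use the map's real datum and projection; the equirectangular approximation and the chosen origin here are assumptions for illustration.

```python
import math

EARTH_R = 6371000.0  # mean Earth radius in metres

def gps_to_map(lat, lon, origin_lat, origin_lon):
    """Project GPS (degrees) into a flat local map frame, metres from origin."""
    x = math.radians(lon - origin_lon) * EARTH_R * math.cos(math.radians(origin_lat))
    y = math.radians(lat - origin_lat) * EARTH_R
    return x, y

def lerp_position(prev, curr, t):
    """Move the model linearly between two map positions, t in [0, 1]."""
    return (prev[0] + (curr[0] - prev[0]) * t,
            prev[1] + (curr[1] - prev[1]) * t)
```

Between successive GPS fixes the engine can call `lerp_position` each render frame, so the three-dimensional model glides smoothly rather than jumping from fix to fix.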
In this embodiment, a three-dimensional scene graph of the vehicle may be generated from the acquired map data and/or image data, and the user may choose which view to present according to actual needs: a three-dimensional navigation map built from the map data alone, a three-dimensional panoramic view built from the image data alone, or both at once. In a specific implementation, when the three-dimensional scene graph is presented visually, the three-dimensional navigation map and the three-dimensional panoramic view can be placed in different areas; for example, the navigation map can occupy the left area of the display interface to show the vehicle's map navigation information in real time, while the panoramic view occupies the right area to show the environment around the vehicle in real time. It should be noted that, when computing power is sufficient, the two can instead be fused, with the environment images around the vehicle displayed synchronously inside the vehicle's navigation interface, giving a more realistic three-dimensional navigation effect.
S101-2: and fusing the three-dimensional model and the three-dimensional scene graph to obtain a digital twin driving scene of the vehicle.
In this embodiment, the digital twin driving scene obtained by fusing the three-dimensional model and the three-dimensional scene graph not only truly reflects the physical scene during driving, but also provides the map data required for navigation, assisting the user's driving.
In a possible embodiment, the part model includes a first dynamic model and a second dynamic model, and S103 may specifically include the following steps:
s103-1: and under the condition that the vehicle part is the first dynamic model, displaying the virtual state information of the part model corresponding to the first dynamic model on the three-dimensional model in the digital twin driving scene.
S103-2: and under the condition that the vehicle part is the second dynamic model, displaying the virtual state information of the part model corresponding to the second dynamic model in a preset area in the digital twin driving scene.
In this embodiment, the part models of the vehicle may be divided into static and dynamic part models according to whether their state information changes, and the dynamic part models may be further divided into a first dynamic model and a second dynamic model according to whether the state is updated on the three-dimensional model itself. Specifically, the first dynamic model is a preset part model whose state is displayed on the three-dimensional model, such as the doors, windows, sunroof, rearview mirrors, turn lights, headlamps, tail lamps and fog lamps; the second dynamic model is a preset part model whose state is not displayed on the three-dimensional model, such as the brake pedal, accelerator pedal, steering wheel, central control screen and instrument panel.
In this embodiment, for the first dynamic model, the actual state information of the corresponding vehicle part, such as the brake lights and turn lights, can be displayed directly on the three-dimensional model. For the second dynamic model, the actual state information is displayed in a preset area of the interface presenting the digital twin driving scene: for the steering wheel, accelerator pedal and brake pedal, for example, states such as steering-wheel rotation and pedal force can be presented visually in the preset area as animations, with the steering wheel's motion reflected by playing a rotation animation of the steering-wheel model, and the accelerator and brake pedal forces presented in real time as 0-100% progress bars.
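The dispatch between the two dynamic model types, and the 0-100% progress-bar rendering, can be sketched as below. The part names, category sets and the text-based bar are assumptions made for the example; a real implementation would drive 3-D animations instead.

```python
# Illustrative classification: "first dynamic" parts render on the twin
# model itself, "second dynamic" parts render in the preset screen area.
FIRST_DYNAMIC = {"door", "window", "sunroof", "mirror", "turn_light", "headlamp"}
SECOND_DYNAMIC = {"brake_pedal", "accelerator_pedal", "steering_wheel"}

def display_target(part: str) -> str:
    if part in FIRST_DYNAMIC:
        return "on_model"       # animate state directly on the 3-D model
    if part in SECOND_DYNAMIC:
        return "preset_area"    # animate in the reserved interface region
    return "static"             # static parts need no state updates

def pedal_progress_bar(force_pct: float, width: int = 10) -> str:
    """Render a 0-100% pedal reading as a simple progress bar."""
    filled = round(max(0.0, min(100.0, force_pct)) / 100 * width)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {force_pct:.0f}%"
```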
In one possible embodiment, the vehicle state display method may further include the steps of:
s301: and acquiring a monitoring video of a user and a display picture of a host system.
S302: and displaying the monitoring video and the display picture in the digital twin driving scene.
In this embodiment, although the actual state information of the vehicle parts reflects the user's operations to some extent, it cannot restore them completely: the vehicle's lights and steering wheel may, for example, be controlled automatically by a driver-assistance function, as in an automatic driving scenario. Therefore, to restore the user's operation information completely and accurately, the monitoring video of the user and the display picture of the host system are also acquired.
In this embodiment, the monitoring video of the user can be acquired by a Driver Monitor System (DMS). The DMS uses the system's sensing capability to monitor and understand the driving state of the human driver, detecting undesirable behaviours such as fatigued, distracted or dangerous driving. Specifically, it can collect the monitoring video of the user through an in-vehicle camera and transmit it to the digital twin engine, which displays the video in a preset monitoring-video display area in the digital twin driving scene.
It should be noted that, in this embodiment, the Head Unit (HUT), also called the vehicle-mounted multimedia system, displays terminal information on its screen in response to the user's trigger operations, so the user's operation information on the host system can be obtained by capturing its display picture. Specifically, after the display picture is transferred to the digital twin engine, the engine can display it synchronously in a preset picture display area in the digital twin driving scene.
In this embodiment, displaying the monitoring video and the display picture in the digital twin driving scene allows the user's operating behaviour to be fed back and recorded comprehensively and accurately, which facilitates post-hoc analysis and lets the user review his or her own driving behaviour.
In one possible embodiment, after S302, the vehicle status displaying method may further include the steps of:
s303: in the event that a vehicle misfire is detected, a driving behavior analysis report is generated based on the digital twin driving scenario.
In this embodiment, during driving the generated digital twin driving scene is uploaded to the cloud or stored locally in real time; it contains the state data of every vehicle part, the user's operation information and the actual environment information around the vehicle. When vehicle flameout is detected, that is, after the user has finished driving, data analysis is performed on the uploaded digital twin driving scene and a driving behavior analysis report is output. The user can register in the cloud, verify and log in by scanning a code or similar means, and then view and share the driving behavior analysis report on a mobile phone.
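The post-drive analysis step can be sketched as a summary pass over the logged scene frames. The record fields (`speed_kmh`, `brake_pct`) and the hard-braking threshold are made up for illustration; the patent does not specify the log format or the report's contents.

```python
def generate_driving_report(frames):
    """Summarise logged digital-twin frames into a simple behaviour report.

    frames: list of dicts, each with 'speed_kmh' and 'brake_pct' samples
    captured during the drive (hypothetical schema).
    """
    if not frames:
        return {"frames": 0}
    speeds = [f["speed_kmh"] for f in frames]
    # Count heavy braking events as a crude driving-style indicator.
    hard_brakes = sum(1 for f in frames if f["brake_pct"] >= 80)
    return {
        "frames": len(frames),
        "max_speed_kmh": max(speeds),
        "avg_speed_kmh": sum(speeds) / len(speeds),
        "hard_brake_events": hard_brakes,
    }
```

Triggered on the flameout event, such a report could then be pushed to the cloud account the user views on a mobile phone.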
In this embodiment, digital conversion is completed during driving, realising fully digital monitoring of the driving process and allowing the user to review his or her own driving behaviour. Through this digital twinning of the driving process, when the vehicle suffers a fault or accident, the entire driving behaviour and the surrounding environment can be restored completely, providing a complete evidence chain for accident analysis. By constructing a digital twin driving scene that reflects the vehicle state, the user's driving behaviour and the actual environment around the vehicle, the user can review the driving process more clearly, improving the user experience.
In a second aspect, based on the same inventive concept, referring to fig. 2, an embodiment of the present application provides a vehicle state display apparatus 200, where the vehicle state display apparatus 200 includes:
the driving scene construction module 201 is configured to construct a digital twin driving scene of the vehicle based on the three-dimensional model of the vehicle and actual environment information around the vehicle; the digital twin driving scene changes along with the change of the actual environment information; the three-dimensional model comprises a plurality of part models, and different part models correspond to different vehicle parts;
the acquiring module 202 is used for acquiring actual state information of the vehicle parts;
the displaying module 203 is configured to display virtual state information of a part model corresponding to a vehicle part in a digital twin driving scene based on actual state information of the vehicle part.
In an embodiment of the present application, the vehicle state display apparatus 200 further includes a vehicle model building module, which includes:
the determining submodule is used for determining a part model and a texture mapping relation matched with the identification information based on the identification information of the vehicle; different identification information corresponds to different part models and texture mapping relations;
and the model construction submodule is used for constructing a three-dimensional model based on the part model and the texture mapping relation so that the three-dimensional model has the same texture characteristics as the vehicle.
In an embodiment of the present application, the driving scenario constructing module 201 includes:
the three-dimensional scene graph generation submodule is used for generating a three-dimensional scene graph of the vehicle based on actual environment information around the vehicle;
and the fusion submodule is used for fusing the three-dimensional model and the three-dimensional scene graph to obtain a digital twin driving scene of the vehicle.
In an embodiment of the present application, the three-dimensional scene graph generation sub-module includes:
the data acquisition unit is used for acquiring map data of a road section where the vehicle is located and image data of the vehicle in at least four preset directions;
and the generating unit is used for generating a three-dimensional scene graph based on the map data and the image data.
In one embodiment of the application, the part model comprises a first dynamic model and a second dynamic model; the display module 203 includes:
the first display submodule is used for displaying virtual state information of a part model corresponding to a first dynamic model on a three-dimensional model in a digital twin driving scene under the condition that the vehicle part is the first dynamic model;
and the second display submodule is used for displaying the virtual state information of the part model corresponding to the second dynamic model in a preset area in the digital twin driving scene under the condition that the vehicle part is the second dynamic model.
In an embodiment of the present application, the vehicle status display apparatus 200 further includes:
the image acquisition module is used for acquiring a monitoring video of a user and a display image of the host system;
and the picture display module is used for displaying the monitoring video and the display picture in the digital twin driving scene.
In an embodiment of the present application, the vehicle status display apparatus 200 further includes:
and the report generating module is used for generating a driving behavior analysis report based on the digital twin driving scene under the condition that the vehicle is detected to be flameout after the monitoring video and the display picture are displayed in the digital twin driving scene.
It should be noted that, for the specific implementation of the vehicle state displaying apparatus 200 in the embodiment of the present application, reference is made to the specific implementation of the vehicle state displaying method provided in the first aspect of the embodiment of the present application, and details are not repeated herein.
In a third aspect, based on the same inventive concept, embodiments of the present application provide a storage medium, where machine-executable instructions are stored in the storage medium, and when the machine-executable instructions are executed by a processor, the vehicle state display method provided in the first aspect of the present application is implemented.
It should be noted that, for a specific implementation of the storage medium according to the embodiment of the present application, reference is made to the specific implementation of the vehicle state displaying method provided in the first aspect of the embodiment of the present application, and details are not repeated here.
In a fourth aspect, based on the same inventive concept, embodiments of the present application provide a vehicle, including a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor is configured to execute the machine executable instructions, so as to implement the vehicle state displaying method proposed in the first aspect of the present application.
It should be noted that, for the specific implementation of the vehicle in the embodiment of the present application, reference is made to the specific implementation of the vehicle state display method in the first aspect of the embodiment of the present application, and details are not repeated herein.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or terminal apparatus that comprises the element.
The vehicle state display method, the vehicle state display device, the storage medium and the vehicle provided by the invention are described in detail, specific examples are applied in the description to explain the principle and the implementation of the invention, and the description of the embodiments is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A vehicle state display method, characterized in that the method comprises:
constructing a digital twin driving scene of a vehicle based on a three-dimensional model of the vehicle and actual environmental information around the vehicle; the digital twin driving scene changes along with the change of the actual environment information; the three-dimensional model comprises a plurality of part models, and different part models correspond to different vehicle parts;
acquiring actual state information of the vehicle parts;
and displaying virtual state information of a part model corresponding to the vehicle part in the digital twin driving scene based on the actual state information of the vehicle part.
2. The vehicle state display method according to claim 1, wherein the three-dimensional model is constructed by:
determining a part model and a texture mapping relation matched with the identification information based on the identification information of the vehicle; different identification information corresponds to different part models and texture mapping relations;
and constructing the three-dimensional model based on the part model and the texture mapping relation so that the three-dimensional model has the same texture characteristics as the vehicle.
3. The vehicle state display method according to claim 1, wherein constructing a digital twin driving scene of a vehicle based on a three-dimensional model of the vehicle and actual environmental information around the vehicle comprises:
generating a three-dimensional scene graph of the vehicle based on actual environment information around the vehicle;
and fusing the three-dimensional model and the three-dimensional scene graph to obtain a digital twin driving scene of the vehicle.
4. The vehicle state display method according to claim 3, wherein generating a three-dimensional scene graph of the vehicle based on actual environmental information around the vehicle comprises:
acquiring map data of a road section where the vehicle is located and/or image data of the vehicle in at least four preset directions;
and generating the three-dimensional scene graph based on the map data and/or the image data.
5. The vehicle state display method according to claim 1, wherein the component model includes a first dynamic model and a second dynamic model;
displaying virtual state information of a part model corresponding to the vehicle part in the digital twin driving scene based on the actual state information of the vehicle part, wherein the virtual state information comprises:
under the condition that the vehicle part is the first dynamic model, displaying virtual state information of a part model corresponding to the first dynamic model on a three-dimensional model in the digital twin driving scene; and/or,
and under the condition that the vehicle part is the second dynamic model, displaying virtual state information of the part model corresponding to the second dynamic model in a preset area in the digital twin driving scene.
6. The vehicle state display method according to claim 1, characterized in that the method further comprises:
acquiring a monitoring video of a user and a display picture of a host system;
and displaying the monitoring video and the display picture in the digital twin driving scene.
7. The vehicle state presentation method according to claim 6, characterized in that after presenting the monitor video and the display screen in the digital twin driving scene, the method further comprises:
generating a driving behavior analysis report based on the digital twin driving scenario in the event that the vehicle is detected to be stalled.
8. A vehicle condition displaying apparatus, characterized in that the apparatus comprises:
the driving scene construction module is used for constructing a digital twin driving scene of the vehicle based on a three-dimensional model of the vehicle and actual environment information around the vehicle; the digital twin driving scene changes along with the change of the actual environment information; the three-dimensional model comprises a plurality of part models, and different part models correspond to different vehicle parts;
the acquisition module is used for acquiring the actual state information of the vehicle parts;
and the display module is used for displaying the virtual state information of the part model corresponding to the vehicle part in the digital twin driving scene based on the actual state information of the vehicle part.
9. A storage medium having stored therein machine-executable instructions which, when executed by a processor, implement a vehicle status display method as claimed in any one of claims 1 to 7.
10. A vehicle comprising a processor and a memory, the memory storing machine executable instructions executable by the processor, the processor being configured to execute the machine executable instructions to implement the vehicle status presenting method as claimed in any one of claims 1 to 7.
CN202211286450.2A 2022-10-20 2022-10-20 Vehicle state display method and device, storage medium and vehicle Pending CN115626173A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211286450.2A CN115626173A (en) 2022-10-20 2022-10-20 Vehicle state display method and device, storage medium and vehicle


Publications (1)

Publication Number Publication Date
CN115626173A true CN115626173A (en) 2023-01-20

Family

ID=84905962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211286450.2A Pending CN115626173A (en) 2022-10-20 2022-10-20 Vehicle state display method and device, storage medium and vehicle

Country Status (1)

Country Link
CN (1) CN115626173A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115828145A (en) * 2023-02-09 2023-03-21 深圳市仕瑞达自动化设备有限公司 Online monitoring method, system and medium for electronic equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination