CN117874927A - Display control method and device for initial three-dimensional model of vehicle - Google Patents

Display control method and device for initial three-dimensional model of vehicle

Info

Publication number
CN117874927A
Authority
CN
China
Prior art keywords: vehicle, three-dimensional model, initial, target, interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410174723.7A
Other languages
Chinese (zh)
Inventor
刘亚楼
丁速
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Chang'an Technology Co ltd
Original Assignee
Chongqing Chang'an Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Chongqing Chang'an Technology Co ltd filed Critical Chongqing Chang'an Technology Co ltd
Priority to CN202410174723.7A priority Critical patent/CN117874927A/en
Publication of CN117874927A publication Critical patent/CN117874927A/en
Legal status: Pending (current)


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a display control method and device for an initial three-dimensional model of a vehicle. The method comprises: detecting environment data of the environment in which a target vehicle is currently located and vehicle parameters of the target vehicle; generating an initial three-dimensional model from the environment data and the vehicle parameters; displaying the initial three-dimensional model on a vehicle-mounted screen of the target vehicle and acquiring an update requirement acting on the initial three-dimensional model; and updating the initial three-dimensional model according to the interaction content indicated by the update requirement to obtain a target three-dimensional model, which is then displayed on the vehicle-mounted screen. Because the initial three-dimensional model is generated from the collected environment data and vehicle parameters and then updated according to the interaction content of the acquired update requirement, the target three-dimensional model displayed on the vehicle-mounted screen helps the driver understand the current state of the vehicle and its surroundings more accurately and intuitively, improving driving safety and decision-making ability.

Description

Display control method and device for initial three-dimensional model of vehicle
Technical Field
The invention relates to the technical field of computers, in particular to a display control method and device for an initial three-dimensional model of a vehicle.
Background
Nowadays, the performance of intelligent automobiles is continuously improving and vehicle control functions are growing ever richer; more and more vehicles use an on-board three-dimensional vehicle model or three-dimensional model scene to control vehicle functions quickly and to assist driving. However, the current application scenario is single, and cannot help a user understand the real-time state of the vehicle more intuitively and conveniently.
Disclosure of Invention
In view of the above, an embodiment of the invention provides a display control method and device for an initial three-dimensional model of a vehicle, so as to solve the problem that the application scenario of existing three-dimensional models is single and cannot help a user understand the real-time state of the vehicle more intuitively and conveniently.
In a first aspect, an embodiment of the present invention provides a display control method for an initial three-dimensional model of a vehicle, where the method includes:
detecting environment data of the current environment of a target vehicle and vehicle parameters of the target vehicle;
generating an initial three-dimensional model using the environmental data and the vehicle parameters;
displaying the initial three-dimensional model based on a vehicle-mounted screen of the target vehicle, and acquiring an update requirement acting on the initial three-dimensional model;
and updating the initial three-dimensional model according to the interaction content indicated by the update requirement to obtain a target three-dimensional model, and displaying the target three-dimensional model on the vehicle-mounted screen.
In an embodiment of the present application, the generating an initial three-dimensional model using the environmental data and the vehicle parameters includes:
determining the environment type of the environment where the target vehicle is located according to the environment data;
and processing the vehicle parameters according to the scene processing strategy corresponding to the environment type to generate the initial three-dimensional model.
In an embodiment of the present application, the obtaining the update requirement acting on the initial three-dimensional model includes:
detecting a current usage parameter of a vehicle component in the target vehicle;
determining a component status of the vehicle component according to the current usage parameter;
and if the part state is matched with the preset part state, taking the preset interaction content corresponding to the part state as first interaction content, and generating the update requirement based on the first interaction content.
In an embodiment of the present application, the obtaining the update requirement acting on the initial three-dimensional model includes:
detecting interaction operation triggered by a target user based on the vehicle-mounted screen, and determining second interaction content based on the interaction operation;
and generating the update requirement based on the second interaction content.
In this embodiment of the present application, the detecting an interaction operation triggered by a target user based on the vehicle-mounted screen, and determining second interaction content based on the interaction operation, includes:
detecting gesture operation triggered by the target user based on the vehicle-mounted screen, and taking operation content corresponding to the gesture operation as the second interaction content;
detecting an eye focus of the target user, obtaining a focus position of the eye focus, and taking a view angle corresponding to the focus position as the second interaction content;
and detecting a voice command triggered by the target user based on the vehicle-mounted screen, and taking voice content carried by the voice command as the second interaction content.
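The three detection channels above can be sketched as a single dispatch in Python. This is a minimal, illustrative sketch and not the patent's API: the event structure and content keys are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractionEvent:
    """One raw event from the vehicle-mounted screen or cabin sensors (names are illustrative)."""
    kind: str     # "gesture", "gaze" or "voice"
    payload: str  # gesture name, gaze focus position, or recognized speech content

def second_interaction_content(event: InteractionEvent) -> Optional[dict]:
    """Map a detected interaction operation to the second interaction content described above."""
    if event.kind == "gesture":
        # the operation content corresponding to the gesture becomes the interaction content
        return {"type": "operation", "content": event.payload}
    if event.kind == "gaze":
        # the viewing angle corresponding to the eye-focus position becomes the interaction content
        return {"type": "view_angle", "content": event.payload}
    if event.kind == "voice":
        # the voice content carried by the voice command becomes the interaction content
        return {"type": "voice", "content": event.payload}
    return None  # unrecognized events produce no update requirement
```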
In an embodiment of the present application, the obtaining the update requirement acting on the initial three-dimensional model includes:
receiving an interaction request sent by other vehicles, wherein the interaction request is generated after the other vehicles receive an initial three-dimensional model sent by the target vehicle;
analyzing the interaction request to obtain third interaction content of the other vehicles;
and generating the update requirement according to the third interactive content.
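The inter-vehicle embodiment above could be sketched as follows, assuming (purely for illustration — the patent does not specify a wire format) that the interaction request arrives as JSON:

```python
import json

def parse_interaction_request(raw: bytes) -> dict:
    """Parse a hypothetical JSON-encoded interaction request from another vehicle
    and extract the third interaction content it carries."""
    request = json.loads(raw.decode("utf-8"))
    # minimal validation: a sender identifier and interaction content must be present
    if "sender_id" not in request or "interaction" not in request:
        raise ValueError("malformed interaction request")
    return {"source": request["sender_id"],
            "third_interaction_content": request["interaction"]}
```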
In an embodiment of the present application, after displaying the target three-dimensional model on the vehicle-mounted screen, the method further includes:
detecting whether a safety accident occurs to the target vehicle currently;
if the safety accident occurs to the target vehicle, updating the target three-dimensional model according to the accident type corresponding to the safety accident;
and sending the target three-dimensional model and the current position information of the target vehicle to other vehicles.
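The accident-handling embodiment above might look like the following sketch: update the model according to the accident type, then serialize a payload for nearby vehicles. The accident-to-update mapping and all field names are assumptions for illustration only.

```python
import json
import time

# Illustrative mapping from accident type to a model-update operation
ACCIDENT_UPDATES = {
    "rear_end": {"highlight": "rear_bumper", "color": "red"},
    "side_impact": {"highlight": "doors", "color": "red"},
    "rollover": {"orientation": "inverted", "color": "red"},
}

def build_accident_broadcast(accident_type: str, position: tuple, model_id: str) -> str:
    """Update the target model for the detected accident type and serialize the
    payload that would be sent to other vehicles (field names are assumptions)."""
    update = ACCIDENT_UPDATES.get(accident_type, {"color": "red"})
    payload = {
        "model_id": model_id,
        "model_update": update,
        "position": {"lat": position[0], "lon": position[1]},
        "timestamp": time.time(),
    }
    return json.dumps(payload)
```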
In a second aspect, an embodiment of the present invention provides a display control apparatus for an initial three-dimensional model of a vehicle, the apparatus including:
the detection module is used for detecting environment data of the current environment of the target vehicle and vehicle parameters of the target vehicle;
a generation module for generating an initial three-dimensional model based on the environmental data and the vehicle parameters;
the acquisition module is used for displaying the initial three-dimensional model based on a vehicle-mounted screen of the target vehicle and acquiring the update requirement acting on the initial three-dimensional model;
and the updating module is used for updating the initial three-dimensional model according to the interactive content indicated by the updating requirement to obtain a target three-dimensional model, and displaying the target three-dimensional model on the vehicle-mounted screen.
In a third aspect, an embodiment of the present invention provides a computer apparatus, including: a memory and a processor communicatively connected with each other, where the memory stores computer instructions, and the processor executes the computer instructions to perform the method of the first aspect or any implementation corresponding to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of the first aspect or any of its corresponding embodiments.
The method provided by the embodiment of the application has the following beneficial effects:
the method provided by the embodiment of the application detects the environment data and the vehicle parameters of the target vehicle in real time. The method is beneficial to acquiring key information such as road conditions, weather conditions, vehicle speed and direction and the like, and provides necessary input data for subsequent three-dimensional model generation and updating. An initial three-dimensional model is generated using the collected environmental data and vehicle parameters. An initial visual representation of the target vehicle in the current environment may be provided for preliminary display and interaction. And then acquiring the related update requirement of the initial three-dimensional model, updating the initial three-dimensional model by using the interactive content of the update requirement, and displaying the updated initial three-dimensional model for observation of a driver. This helps the driver to know the current state and surrounding environment of the vehicle more accurately and intuitively, improving driving safety and decision making ability.
The method provided by the embodiment of the application can detect the current use parameters of each vehicle component in the target vehicle in real time. By monitoring the usage status of vehicle components, critical vehicle operating information such as engine speed, fuel consumption, temperature, etc. may be obtained. This helps to understand the overall operation of the vehicle and the operating conditions of the various components. By comparing the states of the components with the preset states of the components, whether each component works normally, is abnormal or fails can be judged. The method is beneficial to timely maintenance and fault diagnosis, and ensures the reliability and safety of the vehicle. When the state of the vehicle component is matched with the preset state, the preset interaction content corresponding to the component state can be used as the first interaction content. Based on the first interactive content, a corresponding update requirement may be generated. The vehicle control system is favorable for personalized setting and adjustment of the vehicle, improves the use experience of a user, and is favorable for meeting personalized requirements of the user and improving the driving comfort and convenience.
The method provided by the embodiment of the application can detect the interactive operation triggered by the target user on the vehicle-mounted screen in real time. By monitoring the operation behaviors of the user, the requirements and the instructions of the user can be acquired, and the preferences of the user on the functions and the settings of the vehicle can be known. According to the operation instructions and intentions of the user, corresponding interactive contents and functions can be provided to meet the personalized requirements of the user. By responding to the interactive operation of the user, the setting, configuration or function of the vehicle can be adjusted, and corresponding updating requirements can be generated. This helps to achieve user-personalized vehicle control and a customized experience. By detecting the user interaction operation and providing corresponding interaction content, the satisfaction degree and driving experience of the user can be improved. The user can realize personalized setting and adjustment of the vehicle functions through simple operation, so that the user can better adapt to the driving preference and the requirements of the user.
The method provided by the embodiment of the application can detect whether the safety accident occurs to the target vehicle in real time. And after the safety accident of the target vehicle, updating the three-dimensional model of the target vehicle according to the accident type. Accurate vehicle state and damage degree information can be provided, so that other vehicles can accurately know the situation of the accident vehicle, and appropriate actions can be taken. By transmitting the three-dimensional model of the target vehicle and the current position information to other vehicles, more comprehensive road information can be provided. Other vehicles can instantly know the state and surrounding environment of the target vehicle, and overall traffic safety is enhanced. Meanwhile, other vehicles can respond to the safety accidents of the target vehicle more quickly. The scheme enables other drivers to quickly adjust own driving route and speed so as to avoid collision or further accident with the target vehicle.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method of display control of an initial three-dimensional model of a vehicle according to some embodiments of the invention;
FIG. 2 is a flow chart of a method of display control of an initial three-dimensional model of a vehicle according to some embodiments of the invention;
FIG. 3 is a flow chart of a method of display control of an initial three-dimensional model of a vehicle according to some embodiments of the invention;
FIG. 4 is a block diagram showing the structure of a display control apparatus of an initial three-dimensional model of a vehicle according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a hardware structure of a computer device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
According to an embodiment of the present invention, a method and apparatus for controlling display of an initial three-dimensional model of a vehicle are provided. It should be noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system, such as by a set of computer-executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from that illustrated herein.
In this embodiment, a display control method for an initial three-dimensional model of a vehicle is provided, which may be applied to a terminal device. Fig. 1 is a flowchart of a display control method for an initial three-dimensional model of a vehicle according to an embodiment of the present invention. As shown in Fig. 1, the flow includes the following steps:
in step S11, environmental data of the current environment of the target vehicle and vehicle parameters of the target vehicle are detected.
In the embodiment of the application, the environmental data of the current environment of the target vehicle may be external road environment information (including but not limited to road conditions, road types), temperature, topography, real-time weather, air particulate matters, and the like.
Specifically, road conditions: road conditions are determined by monitoring the vibration, body attitude, etc. of the vehicle via vehicle sensors, such as suspension sensors and wheel sensors. Road images may also be acquired using cameras inside or outside the vehicle, and road conditions identified through image processing and computer vision techniques.
Road type: the type of the road where the current vehicle is located, such as urban roads, rural roads, expressways, etc., is acquired through a vehicle navigation system or map data.
Ambient temperature: the data of the current ambient temperature is acquired using a temperature sensor or a connected weather station. The temperature sensor may be mounted on a vehicle and the weather station may provide real-time weather data.
Height and gradient: the height and grade information of the location of the vehicle may be determined using a height sensor or GPS system on the vehicle. These data can help to understand topography.
Topography of the ground: and the vehicle-mounted camera, the laser radar and other sensors are used for acquiring the geographic information around the vehicle. Image processing and three-dimensional reconstruction techniques may be used to analyze and identify topographical features such as hills, curves, and the like.
Weather forecast data: and continuously updated weather forecast data, including information such as precipitation, air temperature, wind speed and wind direction, is acquired by connecting a weather station, a weather forecast service or the Internet.
Rain sensor: the rain sensor can be installed on the vehicle, and whether the vehicle rains and the magnitude of the rain are judged by measuring the quantity and the frequency of the rain drops.
Air quality sensor: an air quality sensor is installed inside the vehicle for detecting and measuring the concentration and quality of particulate matter in the air. These sensors can monitor the concentration of air pollutants such as fine particulate matter (PM 2.5, PM 10).
In addition, there is a need to detect vehicle parameters of the vehicle, including: the running speed of the vehicle, the running direction, the vehicle body parameters, the shape, and the like. In particular, radar technology may be utilized to measure the distance, speed and direction of a target vehicle. By continuously measuring the position of the target vehicle relative to the radar and combining the angle and direction of the radar beam, the body parameters of the target vehicle, such as the length, width and height of the vehicle, can be calculated.
Step S12, generating an initial three-dimensional model using the environmental data and the vehicle parameters.
In an embodiment of the present application, generating an initial three-dimensional model using environmental data and vehicle parameters includes: determining the environment type of the environment where the target vehicle is located according to the environment data; and processing the vehicle parameters according to the scene processing strategy corresponding to the environment type to generate an initial three-dimensional model.
Specifically, ground information is acquired through the camera or a laser sensor; the ground can be identified using image processing and computer vision algorithms, and features of the ground such as texture, color and form are detected, so that the environment type of the target vehicle can be determined, such as a flat road surface, a muddy road surface, a road surface with standing water, a gravel road surface, snow or grassland. Alternatively, terrain analysis and processing may be performed using lidar or other high-precision sensors to acquire terrain data. By computing statistics of the curvature, height difference and variation of the terrain, the type of terrain where the target vehicle is located can be determined, such as flat ground, mountains or desert.
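The statistics-based terrain classification described above could be sketched as follows. The thresholds and category names are illustrative assumptions, not values from the patent:

```python
def classify_terrain(heights: list) -> str:
    """Classify terrain type from a strip of height samples (metres) along the road,
    using simple statistics on height difference and variation; thresholds are illustrative."""
    if len(heights) < 2:
        return "flat"
    span = max(heights) - min(heights)
    # mean absolute step between consecutive samples approximates roughness
    steps = [abs(b - a) for a, b in zip(heights, heights[1:])]
    roughness = sum(steps) / len(steps)
    if span > 50.0:
        return "mountain"
    if roughness > 0.5:
        return "rough"
    return "flat"
```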
According to the determined environment type, corresponding scene processing strategies can be adopted to process the vehicle parameters, and a virtual initial three-dimensional model is generated. The specific processing strategies comprise: and the data acquired by different sensors are fused, so that the accuracy and the integrity of the data are improved. The data of multiple sensors such as a radar, a camera, a laser and the like can be fused by using a sensor fusion algorithm such as an Extended Kalman Filter (EKF) or a Particle Filter (PF) and the like, so that more accurate vehicle parameters are obtained. Based on the environment type and vehicle parameters, a virtual initial three-dimensional model may be generated using computer graphics techniques. According to parameters such as length, width, height, shape and the like of the vehicle, three-dimensional modeling software or algorithm can be used for generating an initial three-dimensional model of the vehicle.
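The fusion and strategy steps above can be sketched minimally in Python. Inverse-variance weighting is the simplest form of the multi-sensor fusion mentioned (an Extended Kalman Filter would generalize it to dynamic state); the scene-strategy table and model keys are illustrative assumptions, not the patent's API:

```python
def fuse_measurements(values: list, variances: list) -> float:
    """Inverse-variance weighted fusion of redundant sensor readings of one quantity."""
    weights = [1.0 / v for v in variances]
    return sum(w * x for w, x in zip(weights, values)) / sum(weights)

def generate_initial_model(env_type: str, vehicle_params: dict) -> dict:
    """Apply a per-environment scene processing strategy to the fused vehicle
    parameters and return a minimal model description (keys are illustrative)."""
    strategies = {
        "snow":   {"ground_texture": "snow", "lighting": "overcast"},
        "desert": {"ground_texture": "sand", "lighting": "harsh"},
        "urban":  {"ground_texture": "asphalt", "lighting": "neutral"},
    }
    scene = strategies.get(env_type, {"ground_texture": "asphalt", "lighting": "neutral"})
    return {"vehicle": vehicle_params, "scene": scene}
```

For example, two independent speed readings of 10 m/s and 12 m/s with equal variance fuse to 11 m/s.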
Step S13, displaying the initial three-dimensional model based on the vehicle-mounted screen of the target vehicle, and acquiring the updating requirement acting on the initial three-dimensional model.
In the embodiment of the application, displaying the initial three-dimensional model based on the vehicle-mounted screen of the target vehicle may be carried out as follows: render the generated three-dimensional model, converting it into a visualized image. Rendering may use computer graphics techniques, including lighting, shading, reflection and other effects, making the model more realistic. Appropriate adjustments can be made according to the resolution and display requirements of the vehicle-mounted screen to ensure clarity and visibility of the model on the screen. The rendered three-dimensional model is then output to the vehicle-mounted screen for display; the image data of the virtual model may be transmitted to the screen using a software development tool or graphics library. The viewing angle of the model may be manipulated interactively, e.g., rotated, scaled, or translated, as needed to provide a better viewing experience.
It should be noted that displaying the initial three-dimensional model based on the on-board screen of the target vehicle requires consideration of the size, resolution and display capability of the screen, as well as the observation needs and interaction experience of the vehicle driver. In addition, real-time performance and performance requirements are considered, and the generation and display processes of the model are ensured not to influence the safety and driving experience of vehicle operation.
In the embodiment of the application, the update requirement acting on the initial three-dimensional model is obtained, and the method comprises the following steps of A1-A3:
step A1, detecting a current usage parameter of a vehicle component in a target vehicle.
In the embodiment of the application, the sensor, the control unit or other devices are used to obtain the usage parameters of each component in the target vehicle. These parameters may include engine speed, vehicle speed, brake pressure, vehicle tilt angle, light status, etc. The sensor may be an original sensor of the vehicle or an external sensor added.
And step A2, determining the component state of the vehicle component according to the current use parameters.
In the embodiment of the application, the acquired parameters are processed, and a signal processing or data processing algorithm can be used. The data is converted into a suitable format and unit for further analysis and judgment, depending on the specific components and parameters of the vehicle. And according to the processed use parameters, applying specific rules or algorithms to judge the state of the vehicle component. These rules may be based on predefined thresholds, models, or specifications. For example, the states of the engine and the brake system are determined based on the engine speed and the brake pressure, the state of the steering system is determined based on the vehicle speed and the steering angle, and the like.
And step A3, if the component state is matched with the preset component state, taking the preset interaction content corresponding to the component state as first interaction content, and generating an update requirement based on the first interaction content.
In the embodiment of the application, the detected component state is matched with a preset component state. The preset component status may be defined according to specifications or related criteria provided by the vehicle manufacturer, such as engine failure, brake system anomalies, light failure, etc. And determining whether the component state is matched with the preset state by comparing the component states. For matched part states, the corresponding interactive contents may be predefined. The content may be text, icons, sounds or other forms of information. Based on the first interactive content, an update requirement for updating the three-dimensional model may be generated. Depending on the state of the component, the appearance, shape, or additional details of the model may be adjusted, etc. For example, a flashing of a fault indicator light, a warning sign of a braking system fault, a change of a color or state of a light, etc. are displayed in the three-dimensional model. In addition, the priority and time for updating the model can be determined according to the severity and urgency of the component state.
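Steps A1 to A3 above can be sketched as a threshold-based state check followed by a preset-state lookup. The thresholds, component names, and interaction-content fields are illustrative assumptions only:

```python
# Preset component states and their preset interaction content (values are illustrative)
PRESET_STATES = {
    ("engine", "overheat"): {"icon": "engine_warning", "blink": True},
    ("brakes", "low_pressure"): {"icon": "brake_warning", "blink": True},
    ("lights", "bulb_failed"): {"icon": "light_out", "blink": False},
}

def derive_component_state(component: str, usage: dict) -> str:
    """Step A2: map raw usage parameters to a component state with simple thresholds."""
    if component == "engine":
        return "overheat" if usage.get("coolant_temp_c", 0) > 110 else "normal"
    if component == "brakes":
        return "low_pressure" if usage.get("pressure_kpa", 1000) < 300 else "normal"
    return "normal"

def update_requirement(component: str, usage: dict):
    """Step A3: if the state matches a preset component state, emit an update
    requirement carrying the preset interaction content as first interaction content."""
    state = derive_component_state(component, usage)
    content = PRESET_STATES.get((component, state))
    if content is None:
        return None
    return {"component": component, "state": state, "first_interaction_content": content}
```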
As one example: data relating to vehicle components, such as the mileage of tires, road segment topography, temperature, etc., is acquired via sensors, computer vision techniques, or other means. These data can be used to determine the use and status of the component. The collected data is processed and analyzed to convert it into a suitable form and standard. For example, the wear degree is calculated according to the driving mileage of the tire, and the possible damage degree of the tire is judged according to the road section topography and the temperature. And generating state information of the corresponding parts according to the analyzed data. Depending on the application requirements, wear, loss or damage of the components can be indicated in different ways. For example, the degree of wear is indicated by adjusting the depth of the tire texture, changing the texture of the tire surface, or changing the color.
The method provided by the embodiment of the application can detect the current use parameters of each vehicle component in the target vehicle in real time. By monitoring the usage status of vehicle components, critical vehicle operating information such as engine speed, fuel consumption, temperature, etc. may be obtained. This helps to understand the overall operation of the vehicle and the operating conditions of the various components. By comparing the states of the components with the preset states of the components, whether each component works normally, is abnormal or fails can be judged. The method is beneficial to timely maintenance and fault diagnosis, and ensures the reliability and safety of the vehicle. When the state of the vehicle component is matched with the preset state, the preset interaction content corresponding to the component state can be used as the first interaction content. Based on the first interactive content, a corresponding update requirement may be generated. The vehicle control system is favorable for personalized setting and adjustment of the vehicle, improves the use experience of a user, and is favorable for meeting personalized requirements of the user and improving the driving comfort and convenience.
And S14, updating the initial three-dimensional model according to the interactive content indicated by the updating requirement to obtain a target three-dimensional model, and displaying the target three-dimensional model on the vehicle-mounted screen.
In the embodiment of the application, according to the updating requirement of the interactive content, the specific model updating operation required to be performed is analyzed. This may involve adjustments in texture, shape, color, detail, etc. of the model. And updating the initial three-dimensional model according to the updating requirement and the analyzed updating operation. This may involve making corresponding modifications and adjustments to the model using computer graphics techniques such as texture mapping, polygon modeling, texture settings, etc. And processing the updated initial three-dimensional model to generate a target three-dimensional model. This may include computing the surface normal vector, lighting effects, materials, etc. of the model to enhance the fidelity and realism of the model. And finally, sending the data of the target three-dimensional model to a vehicle-mounted screen for display.
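The update step above amounts to applying a list of parsed operations to the initial model to produce the target model. The operation schema below is an assumption made for illustration:

```python
import copy

def apply_updates(model: dict, operations: list) -> dict:
    """Apply parsed update operations (texture, shape, color, detail...) to a copy
    of the initial model, yielding the target model; the schema is an assumption."""
    target = copy.deepcopy(model)  # the initial model is left untouched
    for op in operations:
        part = target.setdefault("parts", {}).setdefault(op["part"], {})
        part[op["attribute"]] = op["value"]  # e.g. {"part": "tire", "attribute": "texture", ...}
    return target
```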
As an example, the three-dimensional model is updated accordingly based on information generated from the component state. Texture, color, shape, or other details of the vehicle model may be adjusted using graphics rendering techniques to reflect the effects of wear, loss, or damage to the parts. For example, the wear texture of a tire is displayed on the vehicle model, the appearance of the tire is changed, or details of damage are added. The updated three-dimensional vehicle model is displayed on the vehicle-mounted screen or other display equipment so as to be convenient for the driver to observe. Through the interactive operation module, the driver can interact with the vehicle model, such as touching, rotating, zooming in or zooming out the model, so as to better observe and understand the condition of the components.
The method provided by the embodiment of the application can provide accurate environment and vehicle information by detecting the environment data (such as weather, road conditions and the like) and the vehicle parameters (such as speed, direction and the like) of the current environment of the target vehicle, and provide necessary data basis for the subsequent generation and updating of the three-dimensional model. An initial three-dimensional model is generated using the environmental data and the vehicle parameters. Therefore, the target vehicle can display the real-time vehicle model of the target vehicle on the vehicle-mounted screen, the visual effect is improved, and the user experience is improved.
In addition, the environment data and the vehicle parameters can be compared with the initial three-dimensional model to generate the update requirement and the interaction content. This enables the target vehicle to perform corresponding interactive operations, such as updating the vehicle color or replacing vehicle components, according to the received update requirement. The initial three-dimensional model is updated according to the update requirement to obtain the target three-dimensional model, which is displayed on the vehicle-mounted screen. The real-time state and changes of the target vehicle can thus be intuitively displayed to the driver or to other vehicles, improving the efficiency and accuracy of information transmission.
The method provided by the embodiment of the invention can realize real-time display of the three-dimensional model of the target vehicle, generate updating requirements according to the environment and the vehicle parameters, and provide better interaction experience and information transfer effect.
The method provided by the embodiment of the application detects the environment data and the vehicle parameters of the target vehicle in real time, which helps acquire key information such as road conditions, weather conditions, and vehicle speed and direction, and provides the necessary input data for subsequent three-dimensional model generation and updating. An initial three-dimensional model is generated using the collected environment data and vehicle parameters, providing an initial visual representation of the target vehicle in the current environment for preliminary display and interaction. The update requirement related to the initial three-dimensional model is then acquired, the initial three-dimensional model is updated using the interaction content of the update requirement, and the updated model is displayed for the driver to observe. This helps the driver know the current state and surrounding environment of the vehicle more accurately and intuitively, improving driving safety and decision-making ability.
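The four-stage flow described above (detect, generate, update, display) can be sketched end to end as follows. All function bodies, field names, and the "effects" representation are illustrative stand-ins, not the embodiment's actual implementation.

```python
# Stage 1: detect environment data and vehicle parameters (stubbed values).
def detect():
    environment = {"weather": "rain", "road": "wet"}
    vehicle = {"speed": 60, "direction": "north"}
    return environment, vehicle

# Stage 2: generate the initial three-dimensional model from the inputs.
def generate_initial_model(environment, vehicle):
    return {"environment": environment, "vehicle": vehicle, "effects": []}

# Stage 3: update the model according to the interaction content.
def update_model(model, interaction):
    updated = dict(model)
    updated["effects"] = model["effects"] + interaction.get("effects", [])
    return updated

# Stage 4: display the target model on the vehicle-mounted screen (stubbed).
def display(model):
    return f"rendering model with effects {model['effects']}"

env, veh = detect()
initial = generate_initial_model(env, veh)
target = update_model(initial, {"effects": ["water_stain"]})
print(display(target))  # rendering model with effects ['water_stain']
```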
Fig. 2 is a flowchart of a display control method of an initial three-dimensional model of a vehicle according to an embodiment of the present invention. As shown in fig. 2, the flow includes the following steps:
In step S21, environmental data of the current environment of the target vehicle and vehicle parameters of the target vehicle are detected. Step S21 corresponds to step S11 in the above embodiment, which has been specifically described in the foregoing embodiment and will not be repeated herein.
Step S22, generating an initial three-dimensional model by using the environmental data and the vehicle parameters. Step S22 corresponds to step S12 in the above embodiment, which has been specifically described and will not be repeated herein.
Step S23, displaying the initial three-dimensional model based on the vehicle-mounted screen of the target vehicle, and acquiring the update requirement acting on the initial three-dimensional model.
In the embodiment of the application, the update requirement acting on the initial three-dimensional model is obtained through the following steps B1 to B2:
Step B1, detecting the interaction operation triggered by the target user based on the vehicle-mounted screen, and determining second interaction content based on the interaction operation.
Step B2, generating an update requirement based on the second interaction content.
In this embodiment of the present application, detecting the interaction operation triggered by the target user based on the vehicle-mounted screen, and determining second interaction content based on the interaction operation, includes: detecting a gesture operation triggered by the target user based on the vehicle-mounted screen, and taking the operation content corresponding to the gesture operation as the second interaction content.
Specifically, a sensing technology, such as a touch screen, a camera, or a gesture recognizer, is adopted to recognize the gestures of the target user in real time. Computer vision algorithms, machine learning techniques, or deep learning models may be used to detect and identify the types and actions of user gestures. The recognized gesture operation is then mapped to corresponding operation content: different gesture operations are associated with specific functions or interaction content according to the requirements of the application and the designed interaction mode. For example, a swipe gesture is mapped to a page switch, a pinch gesture is mapped to a zoom operation, and a double-tap gesture is mapped to a specific function trigger. Corresponding second interaction content is then generated according to the mapping result of the gesture operation; this may be updating information displayed on the screen, triggering a specific function, or performing another interactive operation. For example, the information displayed on the screen is switched according to the user's sliding gesture, the model is enlarged or reduced according to a pinch gesture, and a specific instruction or operation is triggered according to a double-tap gesture.
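The gesture-to-content mapping described above can be sketched as a lookup table. The gesture names and the associated actions are assumed examples; a real system would populate this table from the application's designed interaction mode.

```python
# Assumed mapping from recognized gestures to second interaction content.
GESTURE_MAP = {
    "swipe":      {"action": "switch_page"},
    "pinch_in":   {"action": "zoom", "factor": 0.8},
    "pinch_out":  {"action": "zoom", "factor": 1.25},
    "double_tap": {"action": "trigger_function"},
}

def gesture_to_interaction(gesture: str) -> dict:
    """Map a recognized gesture to its operation content (second interaction content)."""
    content = GESTURE_MAP.get(gesture)
    if content is None:
        raise ValueError(f"unrecognized gesture: {gesture}")
    return content

print(gesture_to_interaction("pinch_out"))  # {'action': 'zoom', 'factor': 1.25}
```

Unknown gestures raise an error here; depending on the design, they could instead be silently ignored.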
The method provided by the embodiment of the application can detect the interactive operation triggered by the target user on the vehicle-mounted screen in real time. By monitoring the operation behaviors of the user, the requirements and the instructions of the user can be acquired, and the preferences of the user on the functions and the settings of the vehicle can be known. According to the operation instructions and intentions of the user, corresponding interactive contents and functions can be provided to meet the personalized requirements of the user. By responding to the interactive operation of the user, the setting, configuration or function of the vehicle can be adjusted, and corresponding updating requirements can be generated. This helps to achieve user-personalized vehicle control and a customized experience. By detecting the user interaction operation and providing corresponding interaction content, the satisfaction degree and driving experience of the user can be improved. The user can realize personalized setting and adjustment of the vehicle functions through simple operation, so that the user can better adapt to the driving preference and the requirements of the user.
In this embodiment of the present application, detecting the interaction operation triggered by the target user based on the vehicle-mounted screen, and determining second interaction content based on the interaction operation, includes: detecting the eye focus of the target user, obtaining the focus position of the eye focus, and taking the viewing angle corresponding to the focus position as the second interaction content.
Specifically, an eye tracking device, such as an eye tracker or an infrared camera, detects the eye focus of the user. These devices can track the user's eye movement in real time and determine the focus position. The system processes the acquired eyeball data using a corresponding algorithm and determines the user's focus position; common eye tracking algorithms include pupil positioning, pupil center detection, and pupil movement tracking. After the system has determined the user's focus position, the position is converted into corresponding screen coordinates, in particular through screen calibration and coordinate mapping: the system calculates the specific coordinates of the focus position on the screen based on the eye position and the screen layout. The corresponding viewing angle information may then be extracted based on the focus position of the user; this can be achieved through a predetermined viewing angle range and target position. For example, the focus position of the user is mapped to a specific area around the vehicle to acquire the viewing angle information. The extracted viewing angle information is finally taken as the second interaction content.
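The conversion from a focus position to a viewing angle can be sketched as follows. The normalized screen coordinates (0 to 1 on each axis) and the mapping of the focus offset from screen center to a 0-360 degree heading around the model are assumptions for this illustration.

```python
import math

def focus_to_view_angle(focus_x: float, focus_y: float) -> float:
    """Convert a normalized gaze focus point (0..1, 0..1) into a heading
    angle around the vehicle model, measured from screen center."""
    dx = focus_x - 0.5
    dy = focus_y - 0.5
    angle = math.degrees(math.atan2(dy, dx))
    return angle % 360.0  # normalize into [0, 360)

# A focus to the right of center maps to a 0-degree viewing angle here.
second_interaction = {"view_angle": focus_to_view_angle(0.9, 0.5)}
print(second_interaction)  # {'view_angle': 0.0}
```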
In this embodiment of the present application, detecting the interaction operation triggered by the target user based on the vehicle-mounted screen, and determining second interaction content based on the interaction operation, includes: detecting a voice command triggered by the target user based on the vehicle-mounted screen, and taking the voice content carried by the voice command as the second interaction content.
Specifically, the in-vehicle system provides a user interface, such as a button or a touch screen, for the user to trigger the voice command; the user may click a button or activate the voice command function by sliding on the touch screen. After the user triggers the voice command function, the vehicle-mounted system starts recording to capture the user's voice command. After the recording is completed, the vehicle-mounted system recognizes the voice command, in particular by using speech recognition techniques such as natural language processing (NLP) and speech recognition algorithms: the system analyzes the recording and converts the speech into intelligible text. Once the voice command is recognized, the vehicle-mounted system extracts the voice content from it, in particular by using text processing techniques such as keyword extraction or pattern matching; the system analyzes the recognized text and extracts the key information related to the target interaction. The extracted voice content may then be used as the second interaction content.
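The keyword-extraction step, which turns recognized speech text into second interaction content, can be sketched as a pattern match. The keyword table and action names are assumed examples; the upstream speech-to-text stage is taken as given and represented here by a plain string.

```python
import re

# Assumed command vocabulary mapping spoken phrases to actions.
KEYWORDS = {
    "rotate": "rotate_model",
    "zoom in": "zoom_in",
    "zoom out": "zoom_out",
    "show tires": "focus_tires",
}

def voice_to_interaction(transcript: str) -> dict:
    """Pattern-match the recognized text against known command keywords
    and return the second interaction content."""
    text = transcript.lower()
    for phrase, action in KEYWORDS.items():
        if re.search(re.escape(phrase), text):
            return {"action": action, "source_text": transcript}
    return {"action": "unknown", "source_text": transcript}

print(voice_to_interaction("Please zoom in on the model"))
```

A deployed system would use a proper intent classifier rather than substring matching, but the input/output contract is the same.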
Step S24, updating the initial three-dimensional model according to the interaction content indicated by the update requirement to obtain a target three-dimensional model, and displaying the target three-dimensional model on the vehicle-mounted screen. Step S24 corresponds to step S14 in the above embodiment, which has been specifically described in the foregoing embodiment and will not be repeated herein.
As one example, external weather conditions such as rain, snow, wind, and sand are identified according to a preset scene processing manner, and road conditions such as water accumulation, snow, and mud are detected. According to this information, the processing module generates corresponding effects that influence the display of the three-dimensional vehicle model. The display effect of the three-dimensional vehicle model is generated according to the external weather and the road conditions. For example, in rainy weather, the processing module can reduce the brightness of the paint of the three-dimensional vehicle model, add effects such as scratches, stains, and water stains, and even generate a water accumulation effect on the vehicle body or a snow effect on the roof.
The display effect of the generated three-dimensional vehicle model can be displayed through the vehicle-mounted screen, which may present the effect to the user in the form of images or animations. The user can perform gesture operations such as wiping, cleaning, and polishing through the vehicle-mounted screen; this may be accomplished by touching a particular area on the screen or by using gesture recognition techniques. For example, after receiving a gesture operation of the user, the display effect of the three-dimensional vehicle model is regenerated according to the type and position of the operation: when the user swipes a finger over the paint, the vehicle-mounted screen may display a reduced scratch effect on the paint; when the user swipes a finger over the simulated glass, the screen may display the water stains being washed away. The regenerated display effect of the three-dimensional vehicle model is then shown to the user again through the display and operation module, thereby realizing gesture-based interactive operation.
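The weather-driven effects and the wipe gesture that clears them can be sketched with simple set operations. The effect names, region names, and the rule that a wipe over a region clears the effects rendered there are all assumptions for this sketch.

```python
def apply_weather_effects(model_effects: set, weather: str, road: str) -> set:
    """Add display effects implied by the external weather and road conditions."""
    effects = set(model_effects)
    if weather == "rain":
        effects.update({"dim_paint", "water_stain"})
    if weather == "snow":
        effects.add("roof_snow")
    if road == "mud":
        effects.add("body_stain")
    return effects

def wipe_gesture(effects: set, region: str) -> set:
    """A wipe gesture over a screen region clears the effects shown there."""
    cleared = {"glass": {"water_stain"}, "paint": {"dim_paint", "body_stain"}}
    return effects - cleared.get(region, set())

effects = apply_weather_effects(set(), "rain", "mud")
effects = wipe_gesture(effects, "glass")
print(sorted(effects))  # ['body_stain', 'dim_paint']
```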
Fig. 3 is a flowchart of a display control method of an initial three-dimensional model of a vehicle according to an embodiment of the present invention. As shown in fig. 3, the flow includes the following steps:
In step S31, environmental data of the current environment of the target vehicle and vehicle parameters of the target vehicle are detected. Step S31 corresponds to step S11 in the above embodiment, which has been specifically described in the foregoing embodiment and will not be repeated herein.
Step S32, generating an initial three-dimensional model by using the environmental data and the vehicle parameters. Step S32 corresponds to step S12 in the above embodiment, which has been specifically described and will not be repeated herein.
Step S33, displaying the initial three-dimensional model based on the vehicle-mounted screen of the target vehicle, and acquiring the update requirement acting on the initial three-dimensional model.
In the embodiment of the application, the screen can present the three-dimensional car model effect to the user in the form of images or animations.
In the embodiment of the application, the update requirement acting on the initial three-dimensional model is obtained, and the method comprises the following steps of C1-C3:
Step C1, receiving an interaction request sent by another vehicle, wherein the interaction request is generated after the other vehicle receives the initial three-dimensional model sent by the target vehicle.
Step C2, analyzing the interaction request to obtain third interaction content of the other vehicle.
Step C3, generating an update requirement according to the third interaction content.
Specifically, the steps of receiving an interaction request sent by another vehicle, analyzing it to obtain the third interaction content of the other vehicle, and generating an update requirement according to the third interaction content may be performed as follows. The target vehicle needs the capability to receive interaction requests sent by other vehicles; this may be achieved through an in-vehicle communication system, such as vehicle-to-vehicle (V2V) communication or an in-vehicle network connection. After receiving an interaction request sent by another vehicle, the target vehicle analyzes the request, for example by parsing the transmitted data packet or message. After parsing the interaction request, the target vehicle obtains the third interaction content of the other vehicle, which may be information that the other vehicle has modified, added, or deleted on the basis of the initial three-dimensional model. According to the acquired third interaction content, the target vehicle generates the corresponding update requirement, which includes modification, addition, or deletion operations on the three-dimensional model, such as changing the vehicle color, changing vehicle components, or adding new functions.
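The parse-and-generate flow for steps C1 to C3 can be sketched as follows. The JSON message layout, the `interaction` field, and the set of allowed modification keys are assumptions for this sketch; the embodiment does not define a concrete V2V message format.

```python
import json

def parse_interaction_request(raw_message: bytes) -> dict:
    """Step C2: extract the third interaction content from another
    vehicle's request (an assumed JSON payload)."""
    request = json.loads(raw_message.decode("utf-8"))
    return request.get("interaction", {})

def build_update_requirement(third_interaction: dict) -> dict:
    """Step C3: keep only the modification kinds the target vehicle accepts."""
    allowed = {"color", "component", "function"}
    return {k: v for k, v in third_interaction.items() if k in allowed}

# Simulate a message received over V2V (step C1).
msg = json.dumps({"sender": "vehicle_42",
                  "interaction": {"color": "blue", "component": "spoiler"}}).encode()
requirement = build_update_requirement(parse_interaction_request(msg))
print(requirement)  # {'color': 'blue', 'component': 'spoiler'}
```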
Step S34, updating the initial three-dimensional model according to the interaction content indicated by the update requirement to obtain a target three-dimensional model, and displaying the target three-dimensional model on the vehicle-mounted screen. Step S34 corresponds to step S14 in the above embodiment, which has been specifically described and will not be repeated herein.
In the embodiment of the application, after displaying the updated initial three-dimensional model on the vehicle-mounted screen, the method further includes the following steps D1-D3:
Step D1, detecting whether a safety accident currently occurs to the target vehicle.
Step D2, if a safety accident occurs to the target vehicle, updating the target three-dimensional model according to the accident type corresponding to the safety accident.
Step D3, transmitting the target three-dimensional model and the current position information of the target vehicle to other vehicles.
Specifically, the target vehicle needs to be equipped with a safety accident detection system, such as collision sensors and an inertial measurement unit, which can sense whether the vehicle has had a safety accident such as a collision or a rollover. Once a safety accident is detected, the target vehicle determines the accident type based on the characteristics of the accident and the sensor data; this may be achieved by techniques such as accident detection algorithms and pattern matching. According to the determined accident type, the target vehicle updates its three-dimensional model; for example, after a collision, the three-dimensional model of the target vehicle may add effects such as broken body parts or a deformed body structure. The target vehicle also acquires its own current position information, which may be obtained through the Global Positioning System (GPS) or other positioning technology. Finally, the target vehicle sends the updated three-dimensional model and the current position information to other vehicles, for example through an in-vehicle communication system such as vehicle-to-vehicle (V2V) communication or an in-vehicle network connection.
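Steps D1 to D3 can be sketched as classify, update, and package. The accident types, the damage-effect names, and the broadcast message layout are illustrative assumptions; the GPS coordinates are placeholder values.

```python
# Assumed mapping from accident type to the damage effects added to the model.
ACCIDENT_EFFECTS = {
    "collision": ["broken_body_part", "deformed_structure"],
    "rollover":  ["deformed_structure", "shattered_glass"],
}

def update_model_for_accident(model: dict, accident_type: str) -> dict:
    """Step D2: attach the damage effects implied by the accident type."""
    updated = dict(model)
    updated["damage_effects"] = ACCIDENT_EFFECTS.get(accident_type, [])
    return updated

def build_broadcast(model: dict, position: tuple) -> dict:
    """Step D3: package the updated model with the GPS position for V2V."""
    return {"model": model, "position": {"lat": position[0], "lon": position[1]}}

model = update_model_for_accident({"shape": "sedan"}, "collision")
packet = build_broadcast(model, (29.56, 106.55))
print(packet["model"]["damage_effects"])  # ['broken_body_part', 'deformed_structure']
```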
The method provided by the embodiment of the application can monitor in real time whether a safety accident has occurred to the target vehicle, helping other drivers understand the safety conditions on the current road. This helps improve overall road safety and reduce the incidence of accidents. After a safety accident occurs to the target vehicle, the target three-dimensional model is updated according to the accident type, which helps other vehicles more accurately understand the status and location of the accident vehicle and take appropriate action to avoid further accidents. Meanwhile, by transmitting the three-dimensional model and the current position information of the target vehicle to other vehicles, more accurate road condition information can be provided, helping other drivers better predict and avoid potentially dangerous situations and enhancing overall traffic safety.
In addition, by sharing the safety accident and the vehicle position information in real time, the scheme can help other drivers to be more alert and keep track of the situation around the target vehicle. This may raise the driver's safety awareness, causing them to take appropriate action to avoid potential hazards.
The embodiment also provides a display control device for an initial three-dimensional model of a vehicle, which is used for implementing the foregoing embodiments and preferred embodiments, and is not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
The present embodiment provides a display control device for an initial three-dimensional model of a vehicle, as shown in fig. 4, including:
a detection module 41, configured to detect environmental data of an environment in which a target vehicle is currently located and vehicle parameters of the target vehicle;
a generation module 42 for generating an initial three-dimensional model based on the environmental data and the vehicle parameters;
the acquiring module 43 is configured to display an initial three-dimensional model based on a vehicle-mounted screen of the target vehicle, and acquire an update requirement acting on the initial three-dimensional model;
the updating module 44 is configured to update the initial three-dimensional model according to the interactive content indicated by the update requirement, obtain the target three-dimensional model, and display the target three-dimensional model on the vehicle-mounted screen.
In the embodiment of the present application, the generating module 42 is configured to determine, according to the environmental data, an environmental type of an environment in which the target vehicle is located; and processing the vehicle parameters according to the scene processing strategy corresponding to the environment type to generate an initial three-dimensional model.
In the embodiment of the present application, the obtaining module 43 is configured to detect a current usage parameter of a vehicle component in the target vehicle; determining a component state of the vehicle component according to the current usage parameter; if the component state is matched with the preset component state, the preset interaction content corresponding to the component state is used as first interaction content, and the updating requirement is generated based on the first interaction content.
In the embodiment of the present application, the obtaining module 43 is configured to detect an interaction operation triggered by the target user based on the vehicle-mounted screen, and determine a second interaction content based on the interaction operation; an update requirement is generated based on the second interactive content.
In this embodiment of the present application, the obtaining module 43 is configured to detect a gesture operation triggered by the target user based on the vehicle-mounted screen, and use an operation content corresponding to the gesture operation as the second interaction content; detecting an eye focus of a target user, obtaining a focus position of the eye focus, and taking a view angle corresponding to the focus position as second interaction content; and detecting a voice command triggered by the target user based on the vehicle-mounted screen, and taking voice content carried by the voice command as second interaction content.
In this embodiment of the present application, the obtaining module 43 is configured to receive an interaction request sent by another vehicle, where the interaction request is generated after the other vehicle receives an initial three-dimensional model sent by a target vehicle; analyzing the interaction request to obtain third interaction content of other vehicles; and generating an update requirement according to the third interactive content.
In an embodiment of the present application, the apparatus further includes: the transmission module is used for detecting whether a safety accident occurs to the target vehicle currently; if the target vehicle has a safety accident, updating the target three-dimensional model according to the accident type corresponding to the safety accident; and sending the target three-dimensional model and the current position information of the target vehicle to other vehicles.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a computer device according to an alternative embodiment of the present invention. As shown in fig. 5, the computer device includes: one or more processors 10, a memory 20, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the computer device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In some alternative embodiments, multiple processors and/or multiple buses may be used together with multiple memories, if desired. Also, multiple computer devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system).
The processor 10 may be a central processor, a network processor, or a combination thereof. The processor 10 may further include a hardware chip, among others. The hardware chip may be an application specific integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field programmable gate array, a general-purpose array logic, or any combination thereof.
Wherein the memory 20 stores instructions executable by the at least one processor 10 to cause the at least one processor 10 to perform the methods shown in the above embodiments.
The memory 20 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and at least one application program required for functions, and the storage data area may store data created according to the use of the computer device, and the like. In addition, the memory 20 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some alternative embodiments, the memory 20 may optionally include memory located remotely from the processor 10, which may be connected to the computer device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Memory 20 may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as flash memory, hard disk, or solid state disk; the memory 20 may also comprise a combination of the above types of memories.
The computer device also includes a communication interface 30 for the computer device to communicate with other devices or communication networks.
The embodiments of the present invention also provide a computer-readable storage medium. The method according to the embodiments of the present invention described above may be implemented in hardware or firmware, or as computer code recorded on a storage medium, or as computer code originally stored on a remote storage medium or a non-transitory machine-readable storage medium, downloaded through a network, and stored on a local storage medium, so that the method described herein may be processed by software stored on a storage medium using a general-purpose computer, a special-purpose processor, or programmable or special-purpose hardware. The storage medium can be a magnetic disk, an optical disk, a read-only memory, a random access memory, a flash memory, a hard disk, a solid-state disk, or the like; further, the storage medium may also comprise a combination of memories of the kinds described above. It will be appreciated that a computer, processor, microprocessor controller, or programmable hardware includes a storage element that can store or receive software or computer code that, when accessed and executed by the computer, processor, or hardware, implements the methods illustrated by the above embodiments.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope of the invention as defined by the appended claims.

Claims (10)

1. A display control method of an initial three-dimensional model of a vehicle, the method comprising:
detecting environment data of the current environment of a target vehicle and vehicle parameters of the target vehicle;
generating an initial three-dimensional model using the environmental data and the vehicle parameters;
displaying the initial three-dimensional model based on a vehicle-mounted screen of the target vehicle, and acquiring an update requirement acting on the initial three-dimensional model;
and updating the initial three-dimensional model according to the interactive content indicated by the updating requirement to obtain a target three-dimensional model, and displaying the target three-dimensional model on the vehicle-mounted screen.
2. The method of claim 1, wherein the generating an initial three-dimensional model using the environmental data and the vehicle parameters comprises:
determining the environment type of the environment where the target vehicle is located according to the environment data;
And processing the vehicle parameters according to the scene processing strategy corresponding to the environment type to generate the initial three-dimensional model.
3. The method of claim 1, wherein the obtaining the update requirement for the initial three-dimensional model comprises:
detecting a current usage parameter of a vehicle component in the target vehicle;
determining a component status of the vehicle component according to the current usage parameter;
and if the part state is matched with the preset part state, taking the preset interaction content corresponding to the part state as first interaction content, and generating the update requirement based on the first interaction content.
4. The method of claim 1, wherein the obtaining the update requirement for the initial three-dimensional model comprises:
detecting interaction operation triggered by a target user based on the vehicle-mounted screen, and determining second interaction content based on the interaction operation;
the update requirements are generated based on the second interactive content.
5. The method of claim 4, wherein the detecting the interaction operation triggered by the target user based on the vehicle-mounted screen and determining the second interaction content based on the interaction operation comprises:
Detecting gesture operation triggered by the target user based on the vehicle-mounted screen, and taking operation content corresponding to the gesture operation as the second interaction content;
detecting an eye focus of the target user, obtaining a focus position of the eye focus, and taking a view angle corresponding to the focus position as the second interaction content;
and detecting a voice command triggered by the target user based on the vehicle-mounted screen, and taking voice content carried by the voice command as the second interaction content.
6. The method of claim 1, wherein the obtaining the update requirement for the initial three-dimensional model comprises:
receiving an interaction request sent by other vehicles, wherein the interaction request is generated after the other vehicles receive an initial three-dimensional model sent by the target vehicle;
analyzing the interaction request to obtain third interaction content of the other vehicles;
and generating the update requirement according to the third interactive content.
7. The method of claim 1, wherein after displaying the updated initial three-dimensional model on the on-board screen, the method further comprises:
Detecting whether a safety accident occurs to the target vehicle currently;
if the safety accident occurs to the target vehicle, updating the target three-dimensional model according to the accident type corresponding to the safety accident;
and sending the target three-dimensional model and the current position information of the target vehicle to other vehicles.
8. A display control apparatus for an initial three-dimensional model of a vehicle, the apparatus comprising:
the detection module is used for detecting environment data of the current environment of the target vehicle and vehicle parameters of the target vehicle;
a generation module for generating an initial three-dimensional model based on the environmental data and the vehicle parameters;
the acquisition module is used for displaying the initial three-dimensional model based on a vehicle-mounted screen of the target vehicle and acquiring the update requirement acting on the initial three-dimensional model;
and the updating module is used for updating the initial three-dimensional model according to the interactive content indicated by the updating requirement to obtain a target three-dimensional model, and displaying the target three-dimensional model on the vehicle-mounted screen.
9. A computer device, comprising:
a memory and a processor in communication with each other, the memory having stored therein computer instructions which, upon execution, cause the processor to perform the method of any of claims 1 to 7.
10. A computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1 to 7.
CN202410174723.7A 2024-02-07 2024-02-07 Display control method and device for initial three-dimensional model of vehicle Pending CN117874927A (en)

Publications (1)

Publication Number Publication Date
CN117874927A 2024-04-12

Family

ID=90593194



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination