CN111815745B - Driving condition display method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN111815745B
Authority
CN
China
Prior art keywords
running
driving
information
position point
target vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010549903.0A
Other languages
Chinese (zh)
Other versions
CN111815745A (en)
Inventor
王翔宇 (Wang Xiangyu)
Current Assignee
Everything Mirror Beijing Computer System Co ltd
Original Assignee
Everything Mirror Beijing Computer System Co ltd
Priority date
Filing date
Publication date
Application filed by Everything Mirror Beijing Computer System Co., Ltd.
Priority to CN202010549903.0A
Publication of application CN111815745A
Application granted; publication of CN111815745B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation


Abstract

The present disclosure relates to a driving condition display method and device, a storage medium, and an electronic device. The method includes: acquiring running parameter information of at least two running position points of a target vehicle; creating a running path passing through the at least two running position points, and determining the running posture information of the target vehicle at the corresponding running position points according to the pre-acquired running parameter information; and controlling the three-dimensional virtual model corresponding to the target vehicle so that, when it runs to each running position point of the running path, it runs according to the running posture information of the corresponding running position point, generating a running animation. According to the embodiments of the present disclosure, the running condition of a vehicle can be displayed in a three-dimensional dynamic manner, and the running posture of the vehicle is displayed along with its running path, so that the displayed information is more comprehensive and the visual effect is improved.

Description

Driving condition display method and device, storage medium and electronic equipment
Technical Field
The disclosure relates to the technical field of communication, in particular to a driving condition display method, a driving condition display device, a storage medium and electronic equipment.
Background
The running status display of vehicles (such as automobiles, airplanes, and the like) can be used for running monitoring and management of the vehicles. Existing schemes for displaying the running condition of a vehicle are mostly two-dimensional static displays: they show only the position of a certain vehicle on a map at a certain moment, so the displayed information is one-sided and the visual effect is relatively poor.
Disclosure of Invention
The present disclosure aims to provide a driving condition display method and device, a storage medium, and an electronic device, so as to improve the visual effect.
In order to achieve the above object, in a first aspect of the present disclosure, there is provided a driving situation display method including:
acquiring driving parameter information of at least two driving position points of a target vehicle;
creating a running path passing through the at least two running position points, and determining running posture information of the target vehicle at the corresponding running position points according to the pre-acquired running parameter information;
and controlling the three-dimensional virtual model corresponding to the target vehicle to run according to the running posture information of the corresponding running position point when the three-dimensional virtual model runs to each running position point of the running path, and generating a running animation.
Optionally, the creating a travel path through the at least two travel location points includes:
and creating a spline curve passing through the at least two running position points to obtain the running path.
Optionally, the target vehicle includes a vehicle, and determining the driving posture information of the target vehicle at the corresponding driving position point according to the pre-acquired driving parameter information includes:
acquiring speed information and direction information of each driving position point from driving parameter information of the corresponding driving position point;
at least one of a tire rotation speed, a steering wheel deflection angle, and a vehicle body inclination angle of the vehicle at the corresponding driving position point is determined according to the speed information and the direction information of each driving position point.
Optionally, the target vehicle includes an aircraft, and the determining, according to the pre-acquired driving parameter information, driving posture information of the target vehicle at a corresponding driving position point includes:
acquiring speed information, direction information and height information of each driving position point from driving parameter information of the corresponding driving position point;
and determining at least one of the turbine rotation speed, the attitude angle, the steering direction of a rudder, the deflection angle of a tail wing, the retraction state of a landing gear, the retraction state of a flap and the opening and closing state of a speed reducing plate of the aircraft at the corresponding driving position point according to the speed information, the direction information and the height information of each driving position point.
Optionally, controlling the three-dimensional virtual model corresponding to the target vehicle so that, when traveling to each travel position point of the travel path, it travels according to the travel posture information of the corresponding position point, and generating a travel animation, includes:
associating the driving gesture information of the target vehicle at each driving position point with the corresponding driving position point on the driving path;
and controlling the three-dimensional virtual model to run on the running path, rendering running posture information associated with the corresponding running position point to the three-dimensional virtual model when the three-dimensional virtual model runs to each running position point, and generating the running animation.
Optionally, before acquiring the driving parameter information of the at least two driving location points of the target vehicle, the method further includes:
acquiring real-time driving data of each vehicle from a server;
preprocessing the real-time driving data to obtain driving parameter information of the corresponding vehicle at each driving position point;
and storing the driving parameter information of each vehicle at each driving position point into a corresponding type of driving information storage list.
Optionally, the acquiring the driving parameter information of at least two driving position points of the target vehicle includes:
determining the target vehicle selected by the user in a driving condition display list;
and acquiring the driving parameter information of at least two driving position points of the target vehicle from the driving information storage list of the corresponding type according to the target vehicle selected by the user.
Optionally, the method further comprises:
determining animation updating frequency according to the distance between the target vehicle and a preset lens;
and updating the running animation according to the animation updating frequency.
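The distance-based update frequency described above can be sketched as a simple level-of-detail rule. The thresholds and frequencies below are illustrative assumptions for the example, not values specified by this disclosure:

```python
# Hypothetical sketch of distance-based animation update throttling:
# models close to the preset lens (camera) update often, distant ones rarely.
# All thresholds and rates here are invented for illustration.

def animation_update_hz(distance_to_camera: float) -> float:
    """Return an update frequency (Hz) that falls off with camera distance."""
    if distance_to_camera < 100.0:      # near: full-rate updates
        return 60.0
    if distance_to_camera < 1000.0:     # mid-range: reduced rate
        return 15.0
    return 2.0                          # far away: occasional updates
```

The animation loop would then re-render a target vehicle only every `1 / animation_update_hz(d)` seconds.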
Optionally, the method further comprises:
receiving a viewing instruction triggered by the user;
and adjusting the display view angle and the size of the three-dimensional virtual model in response to the viewing instruction.
In a second aspect of the present disclosure, there is provided a driving situation display apparatus, the apparatus comprising:
the first acquisition module is used for acquiring the driving parameter information of at least two driving position points of the target vehicle;
a creation module for creating a travel path through the at least two travel location points;
the first determining module is used for determining the driving posture information of the target vehicle at the corresponding driving position point according to the pre-acquired driving parameter information;
and the control module is used for controlling the three-dimensional virtual model corresponding to the target vehicle to run, when running to each running position point of the running path, according to the running posture information of the corresponding running position point, and generating a running animation.
Optionally, the creating module is specifically configured to create a spline curve passing through the at least two driving location points, so as to obtain the driving path.
Optionally, the target vehicle comprises a vehicle, and the first determining module comprises:
the first acquisition sub-module is used for acquiring speed information and direction information of the corresponding driving position point from the driving parameter information of each driving position point;
and the first determination submodule is used for determining at least one of the tire rotation speed, the steering wheel deflection angle and the vehicle body inclination angle of the vehicle at the corresponding driving position point according to the speed information and the direction information of each driving position point.
Optionally, the target vehicle comprises an aircraft, and the first determining module comprises:
the second acquisition sub-module is used for acquiring speed information, direction information and height information of the corresponding driving position point from the driving parameter information of each driving position point;
and the second determining submodule is used for determining at least one of the turbine rotation speed, the attitude angle, the steering of a rudder, the deflection angle of a tail wing, the retraction state of a landing gear, the retraction state of a flap, and the opening and closing state of a speed reducing plate of the aircraft at the corresponding driving position point according to the speed information, the direction information, and the height information of each driving position point.
Optionally, the control module includes:
the association sub-module is used for associating the running gesture information of the target vehicle at each running position point with the corresponding running position point on the running path;
and the control sub-module is used for controlling the three-dimensional virtual model to run on the running path, rendering the running gesture information associated with the corresponding running position point to the three-dimensional virtual model when the three-dimensional virtual model runs to each running position point, and generating the running animation.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring real-time driving data of each vehicle from the server;
the preprocessing module is used for preprocessing the real-time running data to obtain running parameter information of the corresponding vehicle at each running position point;
and the storage module is used for storing the driving parameter information of each vehicle at each driving position point into a corresponding type of driving information storage list.
Optionally, the first acquisition module includes:
a third determination sub-module for determining the target vehicle selected by the user in the driving situation display list;
and the third acquisition sub-module is used for acquiring the running parameter information of at least two running position points of the target vehicle from the running information storage list of the corresponding type according to the target vehicle selected by the user.
Optionally, the apparatus further comprises:
the second determining module is used for determining the animation updating frequency according to the distance between the target vehicle and the preset lens;
and the updating module is used for updating the running animation according to the animation updating frequency.
Optionally, the apparatus further comprises:
the receiving module is used for receiving the viewing instruction triggered by the user;
and the adjusting module is used for adjusting the display view angle and size of the three-dimensional virtual model in response to the viewing instruction.
In a third aspect of the present disclosure there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method of the first aspect above.
In a fourth aspect of the present disclosure, there is provided an electronic device comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of the first aspect above.
According to the above technical solution, running parameter information of at least two running position points of the target vehicle is acquired; a running path passing through the at least two running position points is created, and the running posture information of the target vehicle at the corresponding running position points is determined according to the pre-acquired running parameter information; and the three-dimensional virtual model corresponding to the target vehicle is controlled so that, when it runs to each running position point of the running path, it runs according to the running posture information of the corresponding running position point, generating a running animation. That is, according to the technical solution shown in the embodiments of the present disclosure, the running condition of a vehicle can be displayed in a three-dimensional dynamic manner, and the running posture of the vehicle is displayed along with its running path, so that the displayed information is more comprehensive and the visual effect is improved.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification. They illustrate the disclosure and, together with the description, serve to explain the disclosure without limiting it. In the drawings:
FIG. 1 is a flow chart of a driving situation presenting method shown in an embodiment of the present disclosure;
FIG. 2 is a flow chart of another driving situation presenting method shown in an embodiment of the present disclosure;
FIG. 3 is a flow chart of a method of processing travel parameter information shown in an embodiment of the present disclosure;
FIG. 4 is a block diagram of a three-dimensional vehicle model shown in an embodiment of the present disclosure;
FIG. 5 is a block diagram of a three-dimensional aircraft model shown in an embodiment of the present disclosure;
FIG. 6 is a presentation effect diagram of a vehicle travel animation shown in an embodiment of the present disclosure;
FIG. 7 is a display effect diagram of an aircraft travel animation shown in an embodiment of the present disclosure;
FIG. 8 is a flow chart of a vehicle travel animation update method shown in an embodiment of the present disclosure;
FIG. 9 is a flow chart of an aircraft travel animation update method shown in an embodiment of the present disclosure;
FIG. 10 is a flow chart illustrating one travel parameter information update monitoring in accordance with an embodiment of the present disclosure;
FIG. 11 is a flowchart of another driving parameter information processing method shown in an embodiment of the present disclosure;
FIG. 12 is a block diagram of a driving situation presenting apparatus according to an embodiment of the present disclosure;
FIG. 13 is a block diagram of another travel situation presenting device shown in an embodiment of the present disclosure;
fig. 14 is a block diagram illustrating a structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the disclosure, are not intended to limit the disclosure.
Before describing specific embodiments of the present disclosure in detail, a brief description of its application scenario is given. The present disclosure may be applied to a vehicle management system, such as an airport management system, in which a manager may monitor and manage the running of each vehicle according to the running condition displayed by the management system, for example, querying or tracking the current position of each vehicle, and performing flight adjustment, trip allocation, and so on based on those positions. The existing method for displaying the running condition of a vehicle is mainly a two-dimensional static display scheme that shows only the position of a certain vehicle on a map at a certain moment; because the displayed information is one-sided, the visual effect is relatively poor.
Having noticed this problem, the inventor proposes a driving condition display method, which is described in detail below.
referring to fig. 1, fig. 1 is a flowchart of a driving situation display method according to an embodiment of the disclosure, where the driving situation display method may be applied to a background management system of a vehicle, and the background management system may be installed on an electronic device, and the electronic device may be a mobile device such as a tablet computer, a smart phone, a smart television, a PDA (english: personal Digital Assistant, chinese: personal digital assistant), a portable computer, or a fixed device such as a desktop computer.
As shown in fig. 1, the method comprises the steps of:
step 101, acquiring driving parameter information of at least two driving position points of a target vehicle.
In this embodiment, a travel information storage list may be established in advance for each type of vehicle managed by the background management system. For example, when the managed vehicles include a vehicle and an aircraft (e.g., an airplane, a helicopter), a vehicle travel information storage list may be established for storing the travel parameter information of each vehicle, and an aircraft travel information storage list may be established for storing the travel parameter information of each aircraft. Each vehicle and aircraft may collect real-time travel data through its own positioning system (such as the Global Positioning System, GPS) and sensors (such as a speed sensor, an acceleration sensor, a rotation angle sensor, a yaw angle sensor, etc.), and upload the collected real-time travel data to a server. In a specific embodiment, the real-time travel data may include position, speed, direction, and similar data; if the vehicle is an aircraft, the real-time travel data may also include altitude data. The background management system can acquire the real-time travel data of each vehicle and aircraft from the server, preprocess the acquired data to obtain the travel parameter information of the corresponding vehicle at each travel position point, and store that information in the established travel information storage list of the corresponding type, thereby forming the travel parameter information of the vehicles and aircraft at each travel position point.
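As a rough illustration, the per-type travel information storage lists described above might be organized as follows; the class and method names are hypothetical, not part of this disclosure:

```python
# Illustrative sketch: one storage list per vehicle type (e.g. "vehicle",
# "aircraft"), each holding per-vehicle sequences of travel position points.

from collections import defaultdict

class TravelInfoStore:
    """Keeps one travel information storage list per vehicle type."""

    def __init__(self):
        # type -> vehicle id -> ordered list of travel-point records
        self._lists = defaultdict(lambda: defaultdict(list))

    def append(self, vehicle_type, vehicle_id, point):
        """Store one travel position point for a given vehicle."""
        self._lists[vehicle_type][vehicle_id].append(point)

    def points(self, vehicle_type, vehicle_id):
        """Return the stored points for a vehicle (empty list if none)."""
        return list(self._lists[vehicle_type][vehicle_id])
```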
Such preprocessing includes, but is not limited to: performing legality recognition on the real-time running data (e.g., recognizing whether the position information in the real-time running data is empty; if so, the data is illegal and is filtered out directly); performing validity recognition on the real-time running data (e.g., recognizing whether the position information is within a preset area; if not, the data is invalid and is filtered out directly); numbering the real-time running data; uniformly converting specific information in the real-time running data (such as converting position information expressed in longitude and latitude into world coordinates); and adding time identifiers to the real-time running data, where a time identifier may include an absolute time identifier (such as the time at which the data was received from the server) and a relative time identifier (such as the time difference between the data of each position point and the data of the first position point).
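A minimal sketch of this preprocessing follows, assuming a simple dict-based sample format and an equirectangular approximation for the longitude/latitude-to-world-coordinate conversion; the field names, bounding-box check, and projection are all illustrative assumptions:

```python
# Sketch of the preprocessing pipeline: legality filtering (empty position),
# validity filtering (outside preset area), numbering, coordinate conversion,
# and absolute/relative time tagging. Names are hypothetical.

import math

def preprocess(samples, area, t0):
    """Filter and normalize raw travel samples.

    samples: list of dicts with 'lat', 'lon', 'speed', 'heading', 't'
    area:    (lat_min, lat_max, lon_min, lon_max) preset bounding box
    t0:      absolute time of the first sample (for relative timestamps)
    """
    lat_min, lat_max, lon_min, lon_max = area
    out = []
    for s in samples:
        if s.get("lat") is None or s.get("lon") is None:
            continue  # illegal: position missing -> filter out directly
        if not (lat_min <= s["lat"] <= lat_max and lon_min <= s["lon"] <= lon_max):
            continue  # invalid: outside the preset area -> filter out directly
        # naive lat/lon -> local "world" coordinates (equirectangular sketch)
        x = s["lon"] * 111_320 * math.cos(math.radians(s["lat"]))
        y = s["lat"] * 110_540
        out.append({
            "number": len(out),          # sequential point number
            "world": (x, y),
            "speed": s["speed"],
            "heading": s["heading"],
            "abs_time": s["t"],          # absolute time identifier
            "rel_time": s["t"] - t0,     # relative to the first point
        })
    return out
```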
In a specific embodiment, the running parameter information of a running position point stored in the running information storage list may specifically include the position information of the point (such as world coordinates), speed information (such as speed, acceleration, angular velocity, etc.), and direction information (such as direction, yaw angle, azimuth angle, etc.); for an aircraft, the stored running parameter information may further include altitude information. In addition, the running parameter information of a running position point may further include the number of the position point, a relative time (such as the difference between the acquisition time of the data of each position point and that of the first position point), an absolute time (such as the time of receiving the data from the server), and the like, which are not particularly limited herein.
In particular, in this embodiment, a running condition display list of the vehicles may be provided on a user interface of the background management system, where the list provides the user with identifiers of the respective vehicles capable of viewing the running condition, and the user may select a vehicle to be viewed (i.e., a target vehicle) in the list, and the background management system obtains running parameter information of at least two running location points of the target vehicle from the running information storage list according to the selection of the user.
Step 102, creating a driving path passing through the at least two driving position points, and determining driving posture information of the target vehicle at the corresponding driving position points according to the pre-acquired driving parameter information.
Specifically, if the travel parameter information of the target vehicle stored in the travel information storage list relates to relatively few travel position points, the existing travel position points may be linearly interpolated according to the point numbers, relative times, absolute times, and the like in the travel parameter information of the existing travel position points to create a travel path passing through the at least two travel position points.
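The interpolation step described above can be sketched as follows. This stdlib-only version densifies sparse points linearly over relative time, whereas the disclosure also contemplates spline curves; the function name and sampling step are assumptions:

```python
# Sketch: densify a sparse sequence of travel points by linear interpolation
# over relative time, producing a travel path through all given points.

def interpolate_path(points, step=1.0):
    """points: list of (rel_time, x, y), sorted by rel_time.

    Returns a denser list of (rel_time, x, y) sampled every `step` seconds,
    always ending exactly at the last given point.
    """
    if len(points) < 2:
        return list(points)
    out = []
    for (t0, x0, y0), (t1, x1, y1) in zip(points, points[1:]):
        if t1 <= t0:
            continue  # skip duplicate or out-of-order timestamps
        t = t0
        while t < t1:
            u = (t - t0) / (t1 - t0)          # interpolation parameter in [0, 1)
            out.append((t, x0 + u * (x1 - x0), y0 + u * (y1 - y0)))
            t += step
    out.append(points[-1])
    return out
```

A production system might replace the linear segments with a Catmull-Rom or cubic spline fitted through the same points.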
Determining driving posture information of the target vehicle at the corresponding driving position point according to the driving parameter information of each driving position point, for example, when the target vehicle is a vehicle, speed information and direction information of the corresponding driving position point can be obtained from the driving parameter information of each driving position point, and at least one of a tire rotation speed, a steering wheel deflection angle and a vehicle body inclination angle of the vehicle at the corresponding driving position point is determined according to the speed information and the direction information of each driving position point; when the target vehicle is an aircraft, acquiring speed information, direction information and altitude information of the corresponding driving position point from driving parameter information of each driving position point; and determining at least one of the turbine rotation speed, the attitude angle, the steering direction of a rudder, the deflection angle of a tail wing, the retraction state of a landing gear, the retraction state of a flap and the opening and closing state of a speed reducing plate of the aircraft at the corresponding driving position point according to the speed information, the direction information and the height information of each driving position point.
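As a hedged illustration of deriving posture from speed and direction data, consider the sketch below. The mappings used (wheel radius, steering gain, gear and flap thresholds) are invented for the example and are not values from this disclosure:

```python
# Hypothetical posture derivation: map speed/direction (and, for aircraft,
# altitude and climb rate) to a few posture quantities. All constants are
# illustrative assumptions.

import math

WHEEL_RADIUS_M = 0.3          # assumed tire radius
STEERING_GAIN = 15.0          # assumed steering-wheel degrees per degree of heading change

def car_posture(speed_mps, heading_change_deg):
    """Derive tire rotation speed and steering-wheel deflection for a car."""
    tire_rps = speed_mps / (2 * math.pi * WHEEL_RADIUS_M)   # revolutions/s
    steering_deg = max(-540.0, min(540.0, heading_change_deg * STEERING_GAIN))
    return {"tire_rps": tire_rps, "steering_deg": steering_deg}

def aircraft_posture(speed_mps, altitude_m, climb_rate_mps):
    """Derive a few aircraft posture states from speed, altitude, climb rate."""
    return {
        "gear_down": altitude_m < 500.0,   # assumed gear-retraction altitude
        "flaps_out": speed_mps < 80.0,     # assumed flap-extension speed
        "pitch_deg": math.degrees(
            math.atan2(climb_rate_mps, max(speed_mps, 1e-6))),
    }
```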
And step 103, controlling the three-dimensional virtual model corresponding to the target vehicle so that, when it runs to each running position point of the running path, it runs according to the running posture information of the corresponding running position point, generating a running animation.
In this embodiment, a three-dimensional virtual model may be previously established for each vehicle managed by the background management system, and the established three-dimensional virtual model may be stored in a corresponding three-dimensional model storage list, for example, a three-dimensional vehicle model storage list and a three-dimensional aircraft model storage list may be established, where the three-dimensional vehicle model storage list is used for storing each three-dimensional vehicle model, and the three-dimensional aircraft model storage list is used for storing each three-dimensional aircraft model.
After the running path is established and the running posture information of the target vehicle at each running position point is determined, the three-dimensional virtual model corresponding to the target vehicle can be obtained from the three-dimensional model storage list, and the model is controlled to drive through each running position point on the running path in sequence, according to the running posture information, in a real-time scene of the user interface (such as a high-precision map or a street view map). For example, when the target vehicle is an airplane and the running posture information at a certain running position point specifies the landing gear and flap states, the corresponding landing-gear and flap effect is generated when the three-dimensional virtual model of the airplane runs to that point; similarly, when the running posture information indicates that the speed reducing plate is open, a speed-reducing-plate opening effect is generated when the model runs to that point. In this way, animation effects are generated by controlling the three-dimensional virtual model, reproducing the running path and running posture of the target vehicle.
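The association-and-playback step can be sketched as keyframes pairing each path point with its posture; the frame layout and the `apply_frame` callback below are hypothetical stand-ins for a real 3D engine:

```python
# Sketch: associate posture information with each travel position point to
# form keyframes, then drive the 3D model through them in order.

def build_animation(path_points, postures):
    """Pair each travel position point with its posture to form keyframes.

    path_points: list of (rel_time, x, y)
    postures:    list of posture dicts, same order/length as path_points
    """
    assert len(path_points) == len(postures)
    return [{"t": t, "pos": (x, y), "posture": p}
            for (t, x, y), p in zip(path_points, postures)]

def play(frames, apply_frame):
    """Drive the model through each keyframe; apply_frame would move the
    3D model to frame["pos"] and render frame["posture"] onto it."""
    for f in frames:
        apply_frame(f)
```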
In addition, after the running animation is generated, the play of the running animation may be controlled according to the operation of the user. For example, the playing, stopping, fast-forwarding or fast-rewinding of the running animation can be controlled according to the playing, pause, fast-forwarding, fast-rewinding and other instructions triggered by the user.
According to the above technical solution, running parameter information of at least two running position points of the target vehicle is acquired; a running path passing through the at least two running position points is created, and the running posture information of the target vehicle at the corresponding running position points is determined according to the pre-acquired running parameter information; and the three-dimensional virtual model corresponding to the target vehicle is controlled so that, when it runs to each running position point of the running path, it runs according to the running posture information of the corresponding running position point, generating a running animation. That is, according to the technical solution shown in the embodiments of the present disclosure, the running condition of a vehicle can be displayed in a three-dimensional dynamic manner, and the running posture of the vehicle is displayed along with its running path, so that the displayed information is more comprehensive and the visual effect is improved.
Fig. 2 is a flow chart of another driving situation presenting method shown in an embodiment of the present disclosure, the method comprising the steps of:
In step 201, real-time travel data of each vehicle is acquired from a server.
By way of example, the vehicle may include vehicles such as cars, taxis, coaches, and the like, and aircraft such as airplanes, helicopters, and the like. Each vehicle and aircraft can acquire real-time travel data through its own positioning system (such as a global positioning system GPS) and sensors (such as a speed sensor, an acceleration sensor, a rotation angle sensor, a yaw angle sensor, etc.), and upload the acquired real-time travel data to a server, from which a background management system can acquire real-time travel data of each vehicle and aircraft. In a specific embodiment, the real-time travel data may include position, speed, direction, etc. data, and if the vehicle is an aircraft, the real-time travel data may also include altitude data.
Step 202, preprocessing the real-time driving data to obtain driving parameter information of the corresponding vehicle at each driving position point.
Such preprocessing includes, but is not limited to: performing legality recognition on the real-time driving data (e.g., recognizing whether the position information in the real-time driving data is empty; if so, the data is illegal and is filtered out directly); performing validity recognition on the real-time driving data (e.g., recognizing whether the position information is within a preset area; if not, the data is invalid and is filtered out directly); numbering the real-time driving data; uniformly converting specific information in the real-time driving data (such as converting position information expressed in longitude and latitude into world coordinates); and adding time identifiers to the real-time driving data, where a time identifier may include an absolute time identifier (such as the time at which the data was received from the server) and a relative time identifier (such as the time difference between the data of each position point and the data of the first position point).
In a specific embodiment, the running parameter information of a running position point stored in the running information storage list may specifically include the position information of the point (such as world coordinates), speed information (such as speed, acceleration, angular velocity, etc.), and direction information (such as direction, yaw angle, azimuth angle, etc.); for the running information storage list of the aircraft, the stored running parameter information may also include altitude information. In addition, the stored running parameter information may also include the number of the position point, a relative time, an absolute time, and the like, which are not particularly limited herein.
And 203, storing the running parameter information of each vehicle at each running position point into a corresponding type of running information storage list.
In this embodiment, a travel information storage list may be established in advance for a vehicle managed by the background management system, for example, when the vehicle managed by the background management system includes a vehicle and an aircraft (e.g., an airplane, a helicopter), a vehicle travel information storage list may be established for storing travel parameter information of each vehicle, and an aircraft travel information storage list may be established for storing travel parameter information of each aircraft.
Step 204, determining a target vehicle selected by the user in the driving situation display list.
In particular, in this embodiment, a running condition display list of the vehicles may be provided on a user interface of the background management system, where the list provides the user with identifiers of the respective vehicles capable of viewing the running condition, and the user may select a vehicle to be viewed (i.e., a target vehicle) in the list, and the background management system obtains running parameter information of at least two running location points of the target vehicle from the running information storage list according to the selection of the user.
Step 205, obtaining running parameter information of at least two running position points of the target vehicle from the corresponding type of running information storage list according to the target vehicle selected by the user.
It should be noted that the running parameter information of at least two running position points of the target vehicle acquired from the running information storage list at this time may be the running parameter information of all running position points of the target vehicle stored in the running information storage list; that is, the acquired running parameter information covers all running position points that the target vehicle has generated so far, of which there may be only one, or two or more. Specifically, after the running parameter information of all the currently generated running position points is acquired, the number of points may be determined as shown in fig. 3: if there is only one running position point at present, its data may be stored in a path queue for later use; if there are two or more running position points, a running path may be created.
And 206, creating a spline curve passing through the at least two running position points to obtain a running path.
Specifically, if the running parameter information of the target vehicle currently stored in the running information storage list covers relatively few (but at least two) running position points, the existing running position points may be linearly interpolated according to the point numbers, relative times, absolute times, and the like in their running parameter information, and a spline curve passing through the at least two running position points may be created, thereby obtaining the running path.
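One possible way to realize a spline curve that passes through every stored running position point is a Catmull-Rom spline, which by construction interpolates its control points. The patent does not name a specific spline type, so the following is only an illustrative sketch:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one Catmull-Rom segment at t in [0, 1].
    The curve passes through p1 (t=0) and p2 (t=1)."""
    return tuple(
        0.5 * (2 * b
               + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t * t
               + (-a + 3 * b - 3 * c + d) * t ** 3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def sample_path(points, samples_per_segment=8):
    """Build a smooth running path through all running position points.
    Endpoints are duplicated so the curve also passes through them."""
    if len(points) < 2:
        raise ValueError("need at least two running position points")
    padded = [points[0]] + list(points) + [points[-1]]
    path = []
    for i in range(len(points) - 1):
        p0, p1, p2, p3 = padded[i], padded[i + 1], padded[i + 2], padded[i + 3]
        for s in range(samples_per_segment):
            path.append(catmull_rom(p0, p1, p2, p3, s / samples_per_segment))
    path.append(points[-1])
    return path
```

In practice a rendering engine's own spline component would typically be used instead; the point here is only that the resulting curve visits every running position point in order.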
Step 207, determining the driving posture information of the target vehicle at the corresponding driving position point according to the pre-acquired driving parameter information.
For example, when the target vehicle is a vehicle, speed information and direction information of each driving position point may be obtained from driving parameter information of the corresponding driving position point, and at least one of a tire rotation speed, a steering wheel deflection angle, and a vehicle body inclination angle of the vehicle at the corresponding driving position point may be determined from the speed information and direction information of each driving position point.
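For illustration, the mapping from speed and direction information to vehicle posture might look like the following, where the tire radius and steering-wheel-to-road-wheel ratio are invented constants (the patent leaves them unspecified):

```python
import math

TIRE_RADIUS_M = 0.32      # assumed tire radius
STEERING_RATIO = 15.0     # assumed steering-wheel-to-road-wheel ratio

def tire_rpm(speed_mps):
    """Tire rotation speed (revolutions per minute) from linear speed."""
    return speed_mps * 60.0 / (2.0 * math.pi * TIRE_RADIUS_M)

def steering_wheel_angle(heading_prev_deg, heading_curr_deg):
    """Approximate steering-wheel deflection (degrees) from the heading
    change between two consecutive running position points, with
    wrap-around handled at the +/-180 degree boundary."""
    delta = (heading_curr_deg - heading_prev_deg + 180.0) % 360.0 - 180.0
    return delta * STEERING_RATIO
```

The body inclination angle could be derived similarly from acceleration, but the patent gives no formula for it, so it is omitted here.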
When the target vehicle is an aircraft, speed information, direction information and altitude information of the corresponding driving position point can be obtained from the driving parameter information of each driving position point; and determining at least one of the turbine rotation speed, the attitude angle, the steering direction of a rudder, the deflection angle of a tail wing, the retraction state of a landing gear, the retraction state of a flap and the opening and closing state of a speed reducing plate of the aircraft at the corresponding driving position point according to the speed information, the direction information and the height information of each driving position point.
Specifically, for example, when the target vehicle is an aircraft, a steering inclination angle θ of the current running position point may be calculated from the speed information of the current running position point as θ = arctan(ω×V/g), where ω is the angular speed, V is the speed, and g is the gravitational acceleration; the attitude angles of the aircraft (such as pitch angle, yaw angle, and roll angle) are then determined according to the calculated steering inclination angle θ. Meanwhile, it is judged whether the altitude of the current running position point is lower than a preset altitude; if so, the aircraft is judged to be in a low airspace state, and the current running attitude information of the aircraft is determined to be lowering the landing gear and flaps. Further, if the height of the current running position point above the ground is 0 (i.e. the aircraft has touched down), the speed is greater than a first preset speed, and the acceleration is negative, the current running attitude information of the aircraft is determined to be opening the speed reducing plate; if the speed of the current running position point is smaller than a second preset speed, the current running attitude information is determined to be retracting the speed reducing plate, and when the current running position point enters a taxiway, the current running attitude information is determined to be retracting the flaps, wherein the first preset speed is greater than the second preset speed.
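The bank-angle formula and the threshold logic of this paragraph can be sketched as below. The concrete thresholds (low-airspace altitude, first/second preset speeds) are illustrative assumptions, since the patent only requires that the first preset speed exceed the second:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def bank_angle(angular_speed, speed):
    """Steering inclination theta = arctan(omega * V / g), in degrees."""
    return math.degrees(math.atan(angular_speed * speed / G))

def attitude_flags(altitude_m, speed_mps, accel_mps2, *,
                   low_airspace_m=300.0, v1=60.0, v2=20.0):
    """Derive discrete posture states for one running position point.
    low_airspace_m, v1 and v2 (with v1 > v2) are assumed thresholds."""
    flags = {
        "gear_down": altitude_m < low_airspace_m,   # low airspace: lower gear
        "flaps_down": altitude_m < low_airspace_m,  # low airspace: lower flaps
        "speed_brake_open": False,
    }
    if altitude_m == 0 and speed_mps > v1 and accel_mps2 < 0:
        flags["speed_brake_open"] = True            # decelerating on the runway
    if altitude_m == 0 and speed_mps < v2:
        flags["speed_brake_open"] = False           # retracted at taxi speed
    return flags
```

With ω = 0 the bank angle is 0, as expected for straight flight; the discrete flags mirror the touchdown/rollout sequence described above.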
Step 208, associating the driving gesture information of the target vehicle at each driving position point with the corresponding driving position point on the driving path.
Step 209, controlling the three-dimensional virtual model corresponding to the target vehicle to run on the running path, and when the three-dimensional virtual model runs to each running position point, rendering the running gesture information associated with the corresponding running position point to the three-dimensional virtual model, and generating a running animation.
In this embodiment, a three-dimensional virtual model may be previously established for each vehicle managed by the background management system, and the established three-dimensional virtual model may be stored in a corresponding three-dimensional model storage list, for example, a three-dimensional vehicle model storage list and a three-dimensional aircraft model storage list may be established, where the three-dimensional vehicle model storage list is used for storing each three-dimensional vehicle model, and the three-dimensional aircraft model storage list is used for storing each three-dimensional aircraft model. In a specific embodiment, the three-dimensional vehicle model may be as shown in FIG. 4 and the three-dimensional aircraft model may be as shown in FIG. 5.
In a specific embodiment, the built three-dimensional virtual model may include the following logical components:
Arrow component (MovingDirection): an arrow pointing in the forward direction of the model, used to determine which way is "forward";
rotation center component (RotateCenter): determines the point around which the lens should rotate after the model is focused;
label component (Widget): displays the name of the model; further operations can be performed on the model by clicking it;
GPS position marker component (GPSMarker): the GPS module of an aircraft is located at the nose rather than at the center of the aircraft, so without GPS correction the displayed model would be shifted forward as a whole, and the larger the model, the larger the error; this component marks the GPS position so that the offset can be corrected;
play management component (VehicleLayer), which drives the model animation: it is initialized once when the model is created; the basic model only initializes a data queue, a tag, and a light switch; if the model is an aircraft, an initial altitude check is also performed to initialize the animation data; a default data queue of a preset number of frames (such as 10 frames) is kept for animation calculation;
in addition, the three-dimensional virtual model corresponding to an aircraft further includes a ground tracker component (GroundTracker), which is used to judge the altitude, send an event notification when the altitude changes, and start ray detection when approaching the ground in order to find the ground position in the real-time scene.
After the running path is created, the running path may be associated with the three-dimensional virtual model corresponding to the target vehicle in the three-dimensional model storage list. During control, the three-dimensional virtual model corresponding to the target vehicle is obtained from the three-dimensional model storage list, and the three-dimensional virtual model is controlled to drive through each running position point on the running path in sequence, according to the running gesture information, in the real-time scene of the user interface. For example, when the target vehicle is an airplane and its running gesture information at a certain running position point is lowering the landing gear and flaps, the corresponding three-dimensional virtual model produces a landing-gear-and-flap-lowering effect when it is controlled to travel to that running position point; likewise, when the running gesture information at a certain running position point is opening the speed reducing plate, the three-dimensional virtual model produces a speed-reducing-plate-opening effect when it travels to that point. In this way, animation effects are generated by controlling the three-dimensional virtual model, restoring the running path and running gesture of the target vehicle. In a specific embodiment, the generated running animation includes a vehicle running animation and an aircraft running animation. The vehicle running animation may be as shown in fig. 6, and can restore in real time the tire rotation speed, steering trend, vehicle body inclination angle, engine state, and the like during the running of the vehicle; the running animation of the aircraft may be as shown in fig. 7, and can restore in real time the turbine rotation speed, attitude angle, steering of the rudder, deflection angle of the tail wing, retraction state of the landing gear, retraction state of the flaps, opening and closing state of the speed reducing plate, and the like during the running of the aircraft.
In addition, after the running animation is generated, the play of the running animation may be controlled according to the operation of the user. For example, the playing, stopping, fast-forwarding or fast-rewinding of the running animation can be controlled according to the playing, pause, fast-forwarding, fast-rewinding and other instructions triggered by the user.
Step 210, determining the animation updating frequency according to the distance between the target vehicle and the preset lens.
The running animation generated in step 209 is built from the running parameter information of all the running position points that the target vehicle has generated so far; since the target vehicle is still running and running parameter information of new running position points continues to be produced, the running animation needs to be updated.
In one embodiment, the running animation may be updated at the update frequency of the running parameter information of new running position points (i.e., the frequency at which the target vehicle collects real-time running data): once running parameter information of a new running position point is generated, the running animation is updated.
In another embodiment, when updating, the distance between the target vehicle and a preset lens (such as a panoramic lens of an airport) may first be determined, the animation update frequency is determined according to the distance, and the running parameter information of the next running position point of the target vehicle is obtained from the running information storage list according to the animation update frequency. Generally speaking, a low animation update frequency may be set when the distance between the target vehicle and the preset lens is long, and a high animation update frequency may be set when the distance is short; the specific frequency may be set according to actual requirements. After the animation update frequency is determined, the time for acquiring the running parameter information of the next running position point from the running information storage list may be determined according to that frequency. For example, if the animation update frequency is 60 frames/second, the running parameter information of the next running position point may be acquired from the running information storage list at a time interval of 1/60 (about 0.017) seconds; if the acquisition time of the current position point is 2020-05-27 11:00:00.000, the acquisition time of the next running position point is 2020-05-27 11:00:00.017.
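A minimal sketch of this distance-dependent update scheduling might look like the following. The near/far distance thresholds and the minimum frequency are assumed values; only the 60 frames/second upper rate is taken from the example above:

```python
def update_frequency_hz(distance_m, *, near_m=200.0, far_m=2000.0,
                        f_min=5.0, f_max=60.0):
    """Map camera-to-vehicle distance to an animation update frequency:
    full rate close to the preset lens, reduced rate far away.
    near_m, far_m and f_min are illustrative assumptions."""
    if distance_m <= near_m:
        return f_max
    if distance_m >= far_m:
        return f_min
    # linear falloff between the near and far thresholds
    t = (distance_m - near_m) / (far_m - near_m)
    return f_max + t * (f_min - f_max)

def next_fetch_time(current_time_s, frequency_hz):
    """Time at which the next running position point should be fetched
    from the running information storage list."""
    return current_time_s + 1.0 / frequency_hz
```

At 60 Hz the fetch interval is 1/60 ≈ 0.017 s, matching the timestamp example above.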
Step 211, updating the running animation according to the animation updating frequency.
After the running parameter information of the next running position point is acquired, the next running position point can be added on the original running path, the running posture information of the target vehicle at the next running position point is determined according to the running parameter information of the next running position point, the three-dimensional virtual model is controlled to run to the next running position point of the running path, and running is carried out on the next running position point according to the corresponding running posture information, so that updating of running animation is realized.
For example, when the target vehicle is a vehicle, the running animation update flow may be as shown in fig. 8: after the running posture information of the next running position point is read, the tire rotation speed, steering wheel deflection angle, vehicle body side-to-side inclination angle, and vehicle body front-to-rear inclination angle of the vehicle are updated.
When the target vehicle is an aircraft, the running animation updating flow may be as shown in fig. 9, and after the running gesture information of the next running position point is read, the turbine rotation speed, the retraction state of the landing gear, the retraction state of the flap, the opening and closing state of the speed reducing plate, the steering direction of the rudder, and the deflection angle of the tail wing of the aircraft may be updated.
Step 212, receiving a view instruction triggered by a user.
The user can trigger the viewing instruction through operations such as clicking and sliding on the user interface with a mouse or finger, so as to switch the display view angle and size of the three-dimensional virtual model.
And step 213, adjusting the display view angle and size of the three-dimensional virtual model in response to the viewing instruction.
For example, the three-dimensional virtual model is rotated and scaled according to the view instruction, and the real-time scene of the three-dimensional virtual model is correspondingly rotated and scaled.
It should be noted that the sequence of the steps shown in fig. 2 is merely illustrative, and does not limit the actual execution sequence, and in practice, the execution sequence of some of the steps may be adjusted according to needs, for example, step 207 may also be performed before step 206.
The model and animation effects shown in fig. 4, 5, 6, and 7 are also merely examples, and do not constitute final limitations on the style and presentation effect of the actual model.
In addition, in this embodiment, a thumbnail of the driving status of each vehicle managed by the background management system may be displayed for the user on the user interface. In the thumbnail, the three-dimensional virtual model corresponding to each vehicle travels on its running path according to its running gesture information in a small image; meanwhile, the identification information of the corresponding vehicle (such as a license plate number or flight number) and its running parameter information (such as position information, altitude information, direction information, speed information, etc.) may be displayed on the three-dimensional virtual model for the user to view. If the user clicks to focus on a certain three-dimensional virtual model, that three-dimensional virtual model is focused on the user interface and its driving process is tracked.
In order to optimize the storage space and save processing resources, the update status of the running parameter information of each vehicle can be monitored. For example, as shown in fig. 10, a timer may be set to periodically check the update status of the running parameter information of each vehicle. If the vehicle is an aircraft, the receiving time of the last data received from the server in the aircraft running information storage list can be checked. If, according to that receiving time, a certain aircraft has had no new running parameter information added for longer than a first preset duration, the engine animation effect of the three-dimensional virtual model corresponding to that aircraft displayed on the user interface can be turned off. If the aircraft has had no new running parameter information added for longer than a second preset duration, the running animation of the three-dimensional virtual model corresponding to that aircraft displayed on the user interface can be removed, the running parameter information corresponding to that aircraft in the running information storage list can be deleted, and the three-dimensional virtual model corresponding to that aircraft in the three-dimensional model storage list can be deleted. Here the first preset duration is smaller than the second preset duration; for example, the first preset duration may be 2 minutes and the second preset duration 20 minutes. When the vehicle is a ground vehicle, the processing method is the same as that for the aircraft, and will not be described here again.
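The two-threshold staleness check described above can be sketched as follows, using the example durations of 2 and 20 minutes:

```python
def check_staleness(last_receive_time_s, now_s, *,
                    engine_off_after_s=120.0, remove_after_s=1200.0):
    """Decide cleanup actions from the last data-receiving time.
    The 2-minute / 20-minute defaults follow the example durations;
    the timer would call this periodically for every vehicle."""
    idle = now_s - last_receive_time_s
    return {
        # first preset duration exceeded: turn off engine animation
        "stop_engine_animation": idle > engine_off_after_s,
        # second preset duration exceeded: remove animation, stored
        # running parameter information, and the 3D model itself
        "remove_model_and_data": idle > remove_after_s,
    }
```

Because the first preset duration is smaller than the second, the engine animation is always stopped before the model and its data are removed.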
The processing procedure of the background management system for the real-time running data received from the server is described below with reference to a specific flowchart. As shown in fig. 11, after the background management system receives real-time running data from the server, it determines whether the data belongs to a ground vehicle or an aircraft: if it belongs to a vehicle, the data is preprocessed and then stored in the vehicle running information storage list, and the data receiving time is recorded; if it belongs to an aircraft, the data is preprocessed and then stored in the aircraft running information storage list, and the data receiving time is recorded. For the data currently stored in the aircraft running information storage list, the background management system judges whether the three-dimensional aircraft model corresponding to the aircraft is displayed in the real-time scene. If the model is displayed in the real-time scene, a running position point is added to the running path according to the currently stored data, the running gesture information of that running position point is determined, and the running animation is then updated according to the newly added running position point and its running gesture information. If the model is not displayed in the real-time scene, a running path is created according to the data in the aircraft running information storage list and the running gesture information is determined; then a three-dimensional aircraft model corresponding to the aircraft is established, a buffer time is set, and the three-dimensional aircraft model is controlled to run on the running path according to the running gesture information, thereby generating a running animation.
For the data currently stored in the vehicle running information storage list, the processing method is the same as that for the data in the aircraft running information storage list, and is not repeated here.
According to the technical solution described above, the running parameter information of at least two running position points of the target vehicle is obtained; a running path passing through the at least two running position points is created, and the running posture information of the target vehicle at the corresponding running position points is determined according to the pre-acquired running parameter information; the three-dimensional virtual model corresponding to the target vehicle is then controlled so that, when it travels to each running position point of the running path, it travels according to the running posture information of the corresponding running position point, thereby generating a running animation. That is, according to the technical solution shown in the embodiments of the present disclosure, the running condition of the vehicle can be displayed in a three-dimensional dynamic manner, showing both the running path and the running posture of the vehicle; the displayed information is more comprehensive, and the visual effect is improved.
Fig. 12 is a block diagram of a running situation presenting apparatus 300 according to an embodiment of the present disclosure, and as shown in fig. 12, the apparatus includes:
a first obtaining module 301, configured to obtain driving parameter information of at least two driving location points of a target vehicle;
a creation module 302, configured to create a travel path passing through the at least two travel location points;
A first determining module 303, configured to determine driving gesture information of the target vehicle at a corresponding driving location point according to the driving parameter information acquired in advance;
and the control module 304 is configured to control the three-dimensional virtual model corresponding to the target vehicle to travel according to the travel posture information of the corresponding travel position point when traveling to each travel position point of the travel path, and generate a travel animation.
In an embodiment, the creating module 302 is specifically configured to create a spline curve passing through the at least two driving location points, so as to obtain the driving path.
In one embodiment, the target vehicle includes a vehicle, and as shown in fig. 13, the first determining module 303 includes:
a first obtaining submodule 3031, configured to obtain speed information and direction information of each driving location point from driving parameter information of the corresponding driving location point;
a first determining submodule 3032 is used for determining at least one of the tire rotation speed, the steering wheel deflection angle and the vehicle body inclination angle of the vehicle at the corresponding running position point according to the speed information and the direction information of each running position point.
In one embodiment, the target vehicle includes an aircraft, and as shown in fig. 13, the first determining module 303 includes:
A second obtaining sub-module 3033, configured to obtain, from the driving parameter information of each driving location point, speed information, direction information and altitude information of the corresponding driving location point;
the second determining submodule 3034 is configured to determine at least one of a turbine rotation speed, an attitude angle, a steering direction of a rudder, a deflection angle of a tail wing, a retraction state of a landing gear, a retraction state of a flap, and an opening and closing state of a speed reducing plate of the aircraft at the corresponding driving position point according to the speed information, the direction information, and the altitude information of each driving position point.
In one embodiment, as shown in fig. 13, the control module 304 includes:
an association submodule 3041, configured to associate driving gesture information of the target vehicle at each driving location point with a corresponding driving location point on the driving path;
and the control submodule 3042 is used for controlling the three-dimensional virtual model to run on the running path, rendering the running gesture information associated with the corresponding running position point to the three-dimensional virtual model when the three-dimensional virtual model runs to each running position point, and generating the running animation.
In one embodiment, as shown in fig. 13, the apparatus further includes:
A second obtaining module 305, configured to obtain real-time driving data of each vehicle from the server;
the preprocessing module 306 is configured to preprocess the real-time driving data to obtain driving parameter information of the corresponding vehicle at each driving location point;
a storage module 307, configured to store the driving parameter information of each vehicle at each driving location point into a corresponding type of driving information storage list.
In one embodiment, as shown in fig. 13, the first obtaining module 301 includes:
a third determination submodule 3011, configured to determine the target vehicle selected by the user in the driving situation display list;
and a third obtaining submodule 3012, configured to obtain, according to the target vehicle selected by the user, running parameter information of at least two running position points of the target vehicle from the running information storage list of a corresponding type.
In one embodiment, as shown in fig. 13, the apparatus further includes:
a second determining module 308, configured to determine an animation update frequency according to a distance between the target vehicle and a preset lens;
and an updating module 309, configured to update the running animation according to the animation updating frequency.
In one embodiment, as shown in fig. 13, the apparatus further includes:
a receiving module 310, configured to receive a view instruction triggered by the user;
and the adjusting module 311 is configured to adjust the display viewing angle and size of the three-dimensional virtual model in response to the viewing instruction.
The specific manner in which the various modules perform the operations in the apparatus of the above embodiments have been described in detail in connection with the embodiments of the method, and will not be described in detail herein.
According to the technical solution described above, the apparatus obtains the running parameter information of at least two running position points of the target vehicle; creates a running path passing through the at least two running position points; determines the running posture information of the target vehicle at the corresponding running position points according to the pre-acquired running parameter information; and controls the three-dimensional virtual model corresponding to the target vehicle so that, when it travels to each running position point of the running path, it travels according to the running posture information of the corresponding running position point, thereby generating a running animation. That is, according to the technical solution shown in the embodiments of the present disclosure, the running condition of the vehicle can be displayed in a three-dimensional dynamic manner, showing both the running path and the running posture of the vehicle; the displayed information is more comprehensive, and the visual effect is improved.
Fig. 14 is a block diagram of an electronic device 400, shown in an embodiment of the disclosure. As shown in fig. 14, the electronic device 400 may include: a processor 401, a memory 402. The electronic device 400 may also include one or more of a multimedia component 403, an input/output (I/O) interface 404, and a communication component 405.
The processor 401 is configured to control the overall operation of the electronic device 400 to complete all or part of the steps in the driving status display method. The memory 402 is used to store various types of data to support operation on the electronic device 400, which may include, for example, instructions for any application or method operating on the electronic device 400, as well as application-related data such as contact data, sent and received messages, pictures, audio, video, and the like. The memory 402 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 403 may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signal may be further stored in the memory 402 or transmitted through the communication component 405. The audio component further comprises at least one speaker for outputting audio signals. The I/O interface 404 provides an interface between the processor 401 and other interface modules, which may be a keyboard, mouse, buttons, etc.; these buttons may be virtual or physical. The communication component 405 is used for wired or wireless communication between the electronic device 400 and other devices.
The wireless communication may be, for example, one or a combination of Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, and the like, which is not limited herein. The corresponding communication component 405 may accordingly comprise a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
In an exemplary embodiment, the electronic device 400 may be implemented by one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated as ASIC), digital signal processor (Digital Signal Processor, abbreviated as DSP), digital signal processing device (Digital Signal Processing Device, abbreviated as DSPD), programmable logic device (Programmable Logic Device, abbreviated as PLD), field programmable gate array (Field Programmable Gate Array, abbreviated as FPGA), controller, microcontroller, microprocessor, or other electronic components for performing the driving condition displaying method described above.
In another exemplary embodiment, a computer readable storage medium is also provided, comprising program instructions which, when executed by a processor, implement the steps of the above-described driving situation presenting method. For example, the computer readable storage medium may be the memory 402 including the program instructions described above, which are executable by the processor 401 of the electronic device 400 to perform the driving situation presenting method described above.
In summary, in the present disclosure, driving parameter information of at least two driving position points of a target vehicle is acquired; a driving path passing through the at least two driving position points is created, and the driving posture information of the target vehicle at each corresponding driving position point is determined according to the pre-acquired driving parameter information; and when a three-dimensional virtual model corresponding to the target vehicle is controlled to travel to each driving position point of the driving path, the model travels according to the driving posture information of the corresponding driving position point, and a driving animation is generated. That is, with the technical solution shown in the embodiments of the present disclosure, the driving condition of a vehicle can be displayed in a three-dimensional dynamic manner: both the driving path and the driving posture of the vehicle are presented, so that the displayed information is more comprehensive and the visual effect is improved.
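The path-creation step summarized above can be illustrated with a minimal sketch. The patent only requires "a spline curve passing through the at least two driving position points"; the Catmull-Rom form, the function names, and the sampling density below are illustrative assumptions, not taken from the patent:

```python
import math


def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one Catmull-Rom spline segment between p1 and p2 at t in [0, 1].

    Catmull-Rom is an interpolating spline: at t = 0 the segment passes
    exactly through p1, so the path visits every recorded position point.
    """
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * (2 * b
               + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )


def build_path(points, samples_per_segment=10):
    """Create a smooth driving path through all recorded position points.

    The first and last points are duplicated so the curve also passes
    through the endpoints of the recording.
    """
    pts = [points[0]] + list(points) + [points[-1]]
    path = []
    for i in range(1, len(pts) - 2):
        for s in range(samples_per_segment):
            path.append(catmull_rom(pts[i - 1], pts[i], pts[i + 1], pts[i + 2],
                                    s / samples_per_segment))
    path.append(points[-1])
    return path
```

Because the spline interpolates rather than merely approximates, each original driving position point appears unchanged in the sampled path (at indices that are multiples of `samples_per_segment`), which is what lets posture information recorded at a point be associated with that same point on the path.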
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the present disclosure is not limited to the specific details of the embodiments described above; other embodiments will be apparent to those skilled in the art from consideration of the specification and from practice within the scope of the technical concept of the present disclosure.
In addition, it should be noted that the specific features described in the foregoing embodiments may, where not contradictory, be combined in any suitable manner. To avoid unnecessary repetition, the present disclosure does not further describe the various possible combinations; as long as they do not depart from the spirit of the present disclosure, such combinations should likewise be regarded as content disclosed herein.

Claims (8)

1. A driving condition display method, comprising:
acquiring driving parameter information of at least two driving position points of a target vehicle;
creating a driving path passing through the at least two driving position points, and determining driving posture information of the target vehicle at the corresponding driving position points according to the pre-acquired driving parameter information;
controlling a three-dimensional virtual model corresponding to the target vehicle to travel, when traveling to each driving position point of the driving path, according to the driving posture information of the corresponding driving position point, and generating a driving animation;
determining an animation update frequency according to the distance between the target vehicle and a preset camera; and
updating the driving animation according to the animation update frequency;
wherein the creating a driving path passing through the at least two driving position points comprises:
creating a spline curve passing through the at least two driving position points to obtain the driving path;
and wherein the controlling the three-dimensional virtual model corresponding to the target vehicle to travel, when traveling to each driving position point of the driving path, according to the driving posture information of the corresponding driving position point, and generating a driving animation comprises:
associating the driving posture information of the target vehicle at each driving position point with the corresponding driving position point on the driving path; and
controlling the three-dimensional virtual model to travel on the driving path, rendering the driving posture information associated with the corresponding driving position point onto the three-dimensional virtual model when the model travels to each driving position point, and generating the driving animation.
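The distance-dependent animation update frequency of claim 1 is essentially a level-of-detail policy: vehicles far from the camera are re-posed less often. A minimal sketch, in which the thresholds, rates, and function name are illustrative assumptions rather than values from the patent:

```python
import math


def update_frequency_hz(vehicle_pos, camera_pos,
                        near=50.0, far=500.0,
                        max_hz=60.0, min_hz=5.0):
    """Map the camera-to-vehicle distance to an animation update rate.

    Within `near` metres the animation updates at the full rate; beyond
    `far` metres it drops to the minimum rate; in between, the rate
    falls off linearly with distance.
    """
    d = math.dist(vehicle_pos, camera_pos)  # Euclidean distance (Python 3.8+)
    if d <= near:
        return max_hz
    if d >= far:
        return min_hz
    t = (d - near) / (far - near)  # 0 at `near`, 1 at `far`
    return max_hz + (min_hz - max_hz) * t
```

For example, a vehicle 10 m from the camera would be updated at 60 Hz, while one 1 km away would be updated at only 5 Hz, saving rendering work where the posture change is not visible anyway.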
2. The driving condition display method according to claim 1, wherein the target vehicle comprises a car, and the determining driving posture information of the target vehicle at the corresponding driving position point according to the pre-acquired driving parameter information comprises:
acquiring speed information and direction information of each driving position point from the driving parameter information of the corresponding driving position point; and
determining at least one of a tire rotation speed, a steering wheel deflection angle, and a vehicle body inclination angle of the car at the corresponding driving position point according to the speed information and the direction information of each driving position point.
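Claim 2's posture terms can all be derived from per-point speed and direction samples. The sketch below uses a kinematic bicycle model for the steering angle and scales lateral acceleration into a visual body-lean angle; the constants, model choice, and function name are illustrative assumptions, not specified by the patent:

```python
import math

WHEEL_RADIUS_M = 0.32    # assumed tire radius
WHEELBASE_M = 2.7        # assumed axle-to-axle distance
STEER_RATIO = 15.0       # assumed steering-wheel-to-road-wheel ratio
ROLL_DEG_PER_MPS2 = 0.6  # assumed visual lean per unit lateral acceleration


def car_posture(speed_mps, heading_deg, next_heading_deg, dt_s):
    """Derive (tire_rpm, steering_wheel_deg, body_roll_deg) for display."""
    # Tire rotation speed: one revolution per 2*pi*r metres travelled.
    tire_rpm = speed_mps * 60.0 / (2 * math.pi * WHEEL_RADIUS_M)

    # Yaw rate from the heading change between consecutive position points.
    yaw_rate = math.radians(next_heading_deg - heading_deg) / dt_s

    if speed_mps > 0.1:
        # Kinematic bicycle model: yaw_rate = v * tan(delta) / L,
        # solved for the road-wheel angle delta.
        road_wheel = math.atan(yaw_rate * WHEELBASE_M / speed_mps)
        steering_wheel_deg = math.degrees(road_wheel) * STEER_RATIO
        # Lateral acceleration v * yaw_rate drives the body-lean angle.
        body_roll_deg = ROLL_DEG_PER_MPS2 * speed_mps * yaw_rate
    else:
        steering_wheel_deg = 0.0
        body_roll_deg = 0.0
    return tire_rpm, steering_wheel_deg, body_roll_deg
```

Driving straight (constant heading) yields a zero steering angle and zero lean, while a heading change between two position points produces both a steering-wheel deflection and a body inclination in the turning direction.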
3. The driving condition display method according to claim 1, wherein the target vehicle comprises an aircraft, and the determining driving posture information of the target vehicle at the corresponding driving position point according to the pre-acquired driving parameter information comprises:
acquiring speed information, direction information, and altitude information of each driving position point from the driving parameter information of the corresponding driving position point; and
determining at least one of a turbine rotation speed, an attitude angle, a rudder steering direction, a tail-wing deflection angle, a landing gear retraction state, a flap retraction state, and a speed brake opening/closing state of the aircraft at the corresponding driving position point according to the speed information, direction information, and altitude information of each driving position point.
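For the aircraft case, altitude samples add the vertical dimension: pitch can be approximated by the flight-path angle, and discrete states such as landing gear and flaps can be switched on altitude and speed thresholds. The thresholds and names below are illustrative assumptions, not values from the patent:

```python
import math

GEAR_DOWN_BELOW_M = 300.0  # assumed altitude below which gear is shown extended
FLAPS_BELOW_MPS = 80.0     # assumed speed below which flaps are shown extended


def aircraft_posture(speed_mps, heading_deg, alt_m, next_alt_m, dt_s):
    """Derive display attitude from speed, direction, and altitude samples.

    Returns (pitch_deg, yaw_deg, gear_down, flaps_out).
    """
    climb_rate = (next_alt_m - alt_m) / dt_s
    # Pitch approximated by the flight-path angle: climb over forward speed.
    pitch_deg = math.degrees(math.atan2(climb_rate, max(speed_mps, 1e-6)))
    yaw_deg = heading_deg % 360.0
    # Discrete posture states switched on simple thresholds.
    gear_down = alt_m < GEAR_DOWN_BELOW_M
    flaps_out = speed_mps < FLAPS_BELOW_MPS
    return pitch_deg, yaw_deg, gear_down, flaps_out
```

In level cruise the derived pitch is zero with gear and flaps retracted, while a slow, descending approach at low altitude yields a nose-down pitch with gear and flaps extended.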
4. The driving condition display method according to any one of claims 1 to 3, further comprising, before the acquiring driving parameter information of at least two driving position points of the target vehicle:
acquiring real-time driving data of each vehicle from a server;
preprocessing the real-time driving data to obtain driving parameter information of the corresponding vehicle at each driving position point; and
storing the driving parameter information of each vehicle at each driving position point into a driving information storage list of the corresponding type.
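The per-type storage lists of claim 4 can be modelled as a mapping from vehicle type to per-vehicle, time-ordered record lists. The record fields and function names below are illustrative assumptions; the patent does not specify the storage format:

```python
from collections import defaultdict

# vehicle type -> vehicle id -> ordered list of driving-parameter records
storage = defaultdict(lambda: defaultdict(list))


def preprocess(raw):
    """Normalize one raw real-time sample into a driving-parameter record."""
    return {
        "t": float(raw["timestamp"]),
        "pos": (float(raw["x"]), float(raw["y"])),
        "speed": float(raw["speed"]),
        "heading": float(raw["heading"]) % 360.0,  # wrap into [0, 360)
    }


def ingest(vehicle_type, vehicle_id, raw_samples):
    """Preprocess real-time data and append it to the typed storage list."""
    for raw in raw_samples:
        storage[vehicle_type][vehicle_id].append(preprocess(raw))
```

Keyed this way, the later lookup of claim 5 reduces to selecting the list for the user-chosen type and vehicle and slicing off at least two position points.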
5. The driving condition display method according to claim 4, wherein the acquiring driving parameter information of at least two driving position points of the target vehicle comprises:
determining the target vehicle selected by a user in a driving condition display list; and
acquiring the driving parameter information of at least two driving position points of the target vehicle from the driving information storage list of the corresponding type according to the target vehicle selected by the user.
6. A driving condition display apparatus, applying the method according to any one of claims 1 to 5, the apparatus comprising:
an acquisition module, configured to acquire driving parameter information of at least two driving position points of a target vehicle;
a creation module, configured to create a driving path passing through the at least two driving position points;
a determination module, configured to determine driving posture information of the target vehicle at the corresponding driving position point according to the pre-acquired driving parameter information; and
a control module, configured to control a three-dimensional virtual model corresponding to the target vehicle to travel, when traveling to each driving position point of the driving path, according to the driving posture information of the corresponding driving position point, and to generate a driving animation; to determine an animation update frequency according to the distance between the target vehicle and a preset camera; and to update the driving animation according to the animation update frequency;
wherein the creating a driving path passing through the at least two driving position points comprises:
creating a spline curve passing through the at least two driving position points to obtain the driving path;
and wherein the controlling the three-dimensional virtual model corresponding to the target vehicle to travel, when traveling to each driving position point of the driving path, according to the driving posture information of the corresponding driving position point, and generating a driving animation comprises:
associating the driving posture information of the target vehicle at each driving position point with the corresponding driving position point on the driving path; and
controlling the three-dimensional virtual model to travel on the driving path, rendering the driving posture information associated with the corresponding driving position point onto the three-dimensional virtual model when the model travels to each driving position point, and generating the driving animation.
7. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
8. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of any one of claims 1 to 5.
CN202010549903.0A 2020-06-16 2020-06-16 Driving condition display method and device, storage medium and electronic equipment Active CN111815745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010549903.0A CN111815745B (en) 2020-06-16 2020-06-16 Driving condition display method and device, storage medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN111815745A CN111815745A (en) 2020-10-23
CN111815745B true CN111815745B (en) 2024-01-12

Family

ID=72845083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010549903.0A Active CN111815745B (en) 2020-06-16 2020-06-16 Driving condition display method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111815745B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381954A (en) * 2020-11-11 2021-02-19 郑州捷安高科股份有限公司 Dynamic model display method, device and equipment
CN113268301B (en) * 2021-05-25 2024-02-13 北京北大方正电子有限公司 Animation generation method, device, equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140144919A (en) * 2013-06-12 2014-12-22 국민대학교산학협력단 Simulation system for autonomous vehicle for applying obstacle information in virtual reality
CN108521788A (en) * 2017-11-07 2018-09-11 深圳市大疆创新科技有限公司 Generate method, the method for simulated flight, equipment and the storage medium in simulation course line
CN111028331A (en) * 2019-11-20 2020-04-17 天津市测绘院 High-performance vehicle dynamic three-dimensional modeling and track real-time rendering method and device
CN111089583A (en) * 2019-11-27 2020-05-01 安徽江淮汽车集团股份有限公司 Three-dimensional navigation method, equipment, storage medium and device in building
CN111125236A (en) * 2019-12-18 2020-05-08 中国东方电气集团有限公司 Three-dimensional dynamic information physical system based on GIS
CN111142552A (en) * 2018-11-06 2020-05-12 宝沃汽车(中国)有限公司 Method and device for controlling unmanned aerial vehicle, storage medium and vehicle

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105378811B (en) * 2013-06-07 2017-12-26 横滨橡胶株式会社 Vehicle line display device, vehicle line display methods, and vehicle line show program
US20190101405A1 (en) * 2017-10-02 2019-04-04 Hua-Chuang Automobile Information Technical Center Co., Ltd. Three-dimensional driving navigation device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Shuiliang; Yang Xinhong; Xie Jinfa. Vehicle driving and acquisition of pose and motion parameters in a virtual environment. Journal of Henan University of Science and Technology (Natural Science Edition). 2010, (03), 25-28. *


Similar Documents

Publication Publication Date Title
US7129887B2 (en) Augmented reality traffic control center
CN111815745B (en) Driving condition display method and device, storage medium and electronic equipment
CN113260430B (en) Scene processing method, device and system and related equipment
US20130034834A1 (en) Electronic device and method for simulating flight of unmanned aerial vehicle
JP2021140822A (en) Vehicle control method, vehicle control device, and vehicle
US11971481B2 (en) Point cloud registration for lidar labeling
US20210239972A1 (en) Methods, systems, apparatuses, and devices for facilitating provisioning of a virtual experience
US20230333842A1 (en) Firmware update mechanism of a power distribution board
JP2022132075A (en) Ground Truth Data Generation for Deep Neural Network Perception in Autonomous Driving Applications
CN114537141A (en) Method, apparatus, device and medium for controlling vehicle
TWI799000B (en) Method, processing device, and display system for information display
CN114820504B (en) Method and device for detecting image fusion deviation, electronic equipment and storage medium
CN115082690B (en) Target recognition method, target recognition model training method and device
US20230196643A1 (en) Synthetic scene generation using spline representations of entity trajectories
CN115675528A (en) Automatic driving method and vehicle based on similar scene mining
CN116642511A (en) AR navigation image rendering method and device, electronic equipment and storage medium
EP4369042A1 (en) Systems and techniques for processing lidar data
US20240010208A1 (en) Map-assisted target detection for sensor calibration
US20230229425A1 (en) Bootloader update
US20240101129A1 (en) Sensor synchronization system
WO2020073270A1 (en) Snapshot image of traffic scenario
US20240017731A1 (en) Drive-through calibration process
US20230005214A1 (en) Use of Real-World Lighting Parameters to Generate Virtual Environments
WO2020073271A1 (en) Snapshot image of traffic scenario
WO2020073272A1 (en) Snapshot image to train an event detector

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 307, 3 / F, supporting public building, Mantingfangyuan community, qingyanli, Haidian District, Beijing 100086

Applicant after: Beijing Wuyi Vision digital twin Technology Co.,Ltd.

Address before: Room 307, 3 / F, supporting public building, Mantingfangyuan community, qingyanli, Haidian District, Beijing 100086

Applicant before: DANGJIA MOBILE GREEN INTERNET TECHNOLOGY GROUP Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20220922

Address after: Room 315, 3rd Floor, Supporting Public Building, Mantingfangyuan Community, Qingyunli, Haidian District, Beijing 100000

Applicant after: Everything mirror (Beijing) computer system Co.,Ltd.

Address before: Room 307, 3 / F, supporting public building, Mantingfangyuan community, qingyanli, Haidian District, Beijing 100086

Applicant before: Beijing Wuyi Vision digital twin Technology Co.,Ltd.

GR01 Patent grant