CN115641426A - Method and device for displaying environment information and computer readable storage medium - Google Patents


Info

Publication number: CN115641426A
Application number: CN202211296214.9A
Authority: CN (China)
Legal status: Pending (status assumed by Google; not a legal conclusion)
Prior art keywords: target, environment, digital twin, terminal, information
Other languages: Chinese (zh)
Inventors: 戴景文, 贺杰
Original and current assignee: Suiguang Technology Beijing Co ltd (assignee list not verified by Google)
Application filed by Suiguang Technology Beijing Co ltd; priority to CN202211296214.9A

Abstract

The application discloses a method and an apparatus for displaying environmental information, and a computer-readable storage medium, which can be applied to scenarios such as cloud technology. Specifically, an environment information image and environment layout parameters corresponding to a target position are obtained; the environment information image is input into a target recognition model, which performs feature classification processing on the image to obtain environment entity category information; an environment digital twin corresponding to the target position is constructed from the environment layout parameters and the environment entity category information; a terminal digital twin corresponding to a target terminal is obtained, and the environment digital twin and the terminal digital twin are fused into a target digital twin; and the environmental information of the target position is displayed in multiple dimensions based on the target digital twin. In this way, the limitation of fixed visual-dimension content is overcome, rich visual effects are provided to users, the method is applicable to both real control services and simulated control services, and user experience is improved.

Description

Method and device for displaying environment information and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for displaying environmental information, and a computer-readable storage medium.
Background
Information technology is widely applied in fields such as industry, transportation, the military, medical care, and the home, and is of great significance to business development in those fields. For example, Augmented Reality (AR) is an information technology that integrates real-world information with virtual-world information: in practical service applications, real environment information and digital information are superimposed, for example, vehicle navigation route information is combined with real-time road-condition information to display real-time road navigation data, thereby improving user experience.
However, in the related art, when augmented reality is used to process real environment information and digital information, content of a fixed visual dimension is mainly generated from the surrounding real-time environment and control information and rendered on a fixed display interface; for example, while a driver drives a vehicle, real-time position navigation information is combined with environment information acquired in real time and displayed on the central-console interface. Although this achieves a combined display of environmental information and digital information to a certain extent, the fixed visual-dimension content limits the visual experience of the controlling user and is only suitable for real control services; it cannot meet the user's need for simulated control services, which affects user experience.
Disclosure of Invention
The embodiment of the application provides a method and a device for displaying environmental information and a computer readable storage medium, which can break through the limitation of fixed visual dimension content, provide rich visual effects, be applicable to real control services and simulated control services, and improve user experience.
The embodiment of the application provides a method for displaying environmental information, which comprises the following steps:
acquiring an environment information image and environment layout parameters corresponding to a target position;
inputting the environmental information image into a target recognition model, so that the target recognition model performs feature classification processing on the environmental information image to obtain environmental entity category information corresponding to the environmental information image;
constructing an environment digital twin body corresponding to the target position according to the environment layout parameters and the environment entity category information;
acquiring a terminal digital twin body corresponding to a target terminal, and fusing the environment digital twin body and the terminal digital twin body to obtain a target digital twin body;
and displaying the environmental information of the target position in a multi-dimension mode based on the target digital twin body.
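The five steps above can be sketched as a minimal pipeline. Everything here is illustrative: the type and function names are not from the patent, and the recognition model is stubbed out where a real system would run the trained target recognition model.

```python
from dataclasses import dataclass

# Hypothetical containers; the patent does not fix concrete data types.
@dataclass
class EnvDigitalTwin:
    entities: dict            # entity category -> layout parameters
    position: str

@dataclass
class TargetDigitalTwin:
    environment: EnvDigitalTwin
    terminal: dict

def classify_entities(env_image):
    """Step 2: feature classification by the target recognition model (stubbed)."""
    # A real system would run the image through the trained model here.
    return {"tree": "tree", "road": "stone road"}

def build_env_twin(position, layout_params, entity_categories):
    """Step 3: combine layout parameters with the recognised entity categories."""
    entities = {cat: layout_params.get(cat, {}) for cat in entity_categories}
    return EnvDigitalTwin(entities=entities, position=position)

def fuse_twins(env_twin, terminal_twin):
    """Step 4: fuse environment and terminal twins into the target twin."""
    return TargetDigitalTwin(environment=env_twin, terminal=terminal_twin)

def display_pipeline(position, env_image, layout_params, terminal_twin):
    """Steps 1-5 end to end; 'display' is reduced to returning the fused twin."""
    categories = classify_entities(env_image)
    env_twin = build_env_twin(position, layout_params, categories)
    return fuse_twins(env_twin, terminal_twin)
```

The multi-dimensional display step itself (rendering the fused twin) is omitted, since it depends entirely on the display device.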
Correspondingly, the embodiment of the present application provides a display device of environmental information, including:
the acquisition unit is used for acquiring an environment information image and an environment layout parameter corresponding to the target position;
the identification unit is used for inputting the environment information image into a target identification model, so that the target identification model carries out feature classification processing on the environment information image to obtain environment entity category information corresponding to the environment information image;
the construction unit is used for constructing an environment digital twin body corresponding to the target position according to the environment layout parameters and the environment entity category information;
the fusion unit is used for acquiring a terminal digital twin body corresponding to a target terminal and fusing the environment digital twin body and the terminal digital twin body to obtain a target digital twin body;
and the display unit is used for carrying out multi-dimensional display on the environment information of the target position based on the target digital twin body.
In some embodiments, the building unit is further configured to:
constructing a three-dimensional environment semantic map corresponding to the target position according to the environment layout parameters and the environment entity category information;
and rendering the three-dimensional environment semantic map to obtain a rendered environment digital twin body.
In some embodiments, the display device of the environmental information may further include a pre-construction unit for:
acquiring structural parameters of a target terminal;
carrying out digital modeling on the target terminal according to the structural parameters to obtain a terminal digital twin model corresponding to the target terminal;
and rendering the terminal digital twin model to obtain a terminal digital twin corresponding to the target terminal.
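The pre-construction unit's flow (structural parameters of the target terminal → digital twin model → rendered terminal digital twin) can be sketched as follows; the parameter names and the toy "mesh" representation are invented for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class TerminalTwin:
    terminal_id: str
    mesh: list = field(default_factory=list)  # stand-in for modelled geometry
    rendered: bool = False

def model_terminal(terminal_id, structural_params):
    """Digital modelling step: turn structural parameters into a (toy) mesh."""
    mesh = [(name, value) for name, value in sorted(structural_params.items())]
    return TerminalTwin(terminal_id=terminal_id, mesh=mesh)

def render_twin(twin):
    """Rendering step, reduced here to marking the model render-ready."""
    twin.rendered = True
    return twin
```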
In some embodiments, the apparatus for displaying environmental information may further include a query unit configured to:
inquiring a local long connection list, wherein the local long connection list comprises a display equipment identifier for establishing long connection with the local;
the display unit is further configured to: and sending the target digital twin to target display equipment corresponding to the display equipment identification to carry out multi-dimensional display on the environmental information of the target position.
In some embodiments, the display unit is further configured to:
acquiring the posture characteristics of a target object corresponding to the display equipment identifier, and determining the face direction parameters of the target object according to the posture characteristics;
estimating spatial layout parameters when performing digital spatial picture display on the target digital twin;
predicting visual range parameters of the target object in the digital space picture according to the space layout parameters and the face direction parameters, and selecting a target sub-digital twin corresponding to the visual range parameters from the target digital twin;
and sending the target sub-digital twin to a target display device corresponding to the display device identification, so that the target display device performs multi-dimensional display on the target sub-digital twin.
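The selection in the display unit (face direction plus spatial layout → predicted visual range → sub-twins inside that range) amounts to a field-of-view test. The sketch below is one possible 2D reading, with all names and the 120-degree default field of view invented for illustration.

```python
import math

def bearing(observer, entity):
    """Bearing (degrees) from the observer to an entity in the digital space."""
    dx, dy = entity[0] - observer[0], entity[1] - observer[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def select_visible_twins(entities, observer, face_direction_deg, fov_deg=120.0):
    """Keep only the sub-twins whose bearing lies within the visual range
    centred on the target object's face direction."""
    half = fov_deg / 2.0
    visible = {}
    for name, pos in entities.items():
        # Signed angular difference, normalised to (-180, 180].
        diff = (bearing(observer, pos) - face_direction_deg + 180.0) % 360.0 - 180.0
        if abs(diff) <= half:
            visible[name] = pos
    return visible
```

Only the selected sub-twins would then be sent to the target display device, which keeps the transmitted digital-space picture bounded by what the target object can actually see.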
In some embodiments, the display device of the environmental information further includes a selecting unit configured to:
acquiring a display mode of target display equipment corresponding to the display equipment identifier;
if the display mode is identified to be an external environment display mode, selecting an external environment digital twin body when a digital space picture is displayed from the target sub digital twin body;
the display unit is further configured to: and sending the external environment digital twin to the target display equipment, so that the target display equipment performs multi-dimensional display on the external environment digital twin.
In some embodiments, the display device of the environmental information further includes a stopping unit for:
if the target display equipment and the local are detected to be in a disconnection state, recording a second environment information image and a second environment layout parameter corresponding to the disconnection state;
constructing a second environment digital twin body corresponding to the second environment information image and the second environment layout parameter, and updating the target digital twin body according to the second environment digital twin body to obtain an updated second target digital twin body;
ceasing transmission of the second target digital twin to the target display device.
In some embodiments, the display device of the environmental information further includes an updating unit configured to:
acquiring a control instruction of the target terminal, and determining a space adjustment parameter of the terminal digital twin body according to the control instruction;
determining a spatial state between the terminal digital twin and the environmental digital twin in the target digital twin;
updating the space state according to the space adjusting parameters to obtain an updated target digital twin body;
and displaying the environmental information of the second target position according to the updated target digital twin.
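The updating unit's flow (control instruction → spatial adjustment parameter → updated spatial state between terminal twin and environment twin) can be sketched as a pose update. The instruction vocabulary, step size, and turn angle here are invented for illustration; the patent does not specify them.

```python
import math

def apply_control(pose, instruction, step=1.0, turn=15.0):
    """Update the terminal twin's pose (x, y, heading in degrees) within the
    environment twin according to a control instruction."""
    x, y, heading = pose
    if instruction == "forward":
        # Move one step along the current heading.
        x += step * math.cos(math.radians(heading))
        y += step * math.sin(math.radians(heading))
    elif instruction == "left":
        heading = (heading - turn) % 360.0
    elif instruction == "right":
        heading = (heading + turn) % 360.0
    return (x, y, heading)
```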
In addition, an embodiment of the present application further provides a computer device, which includes a processor and a memory, where the memory stores a computer program, and the processor is configured to execute the computer program in the memory to implement the steps in any one of the methods for displaying environmental information provided in the embodiments of the present application.
In addition, a computer-readable storage medium is provided, where a plurality of instructions are stored, and the instructions are adapted to be loaded by a processor to perform the steps in any one of the methods for displaying environmental information provided in the embodiments of the present application.
In addition, the embodiment of the present application further provides a computer program product, which includes computer instructions, and when the computer instructions are executed, the steps in any one of the methods for displaying environmental information provided by the embodiment of the present application are implemented.
According to the method and apparatus, an environment information image and environment layout parameters corresponding to a target position can be obtained; the environment information image is input into a target recognition model, which performs feature classification processing to obtain the environment entity category information; an environment digital twin corresponding to the target position is constructed from the environment layout parameters and the environment entity category information; a terminal digital twin corresponding to a target terminal is obtained and fused with the environment digital twin into a target digital twin; and the environmental information of the target position is displayed in multiple dimensions based on the target digital twin. In other words, the scheme identifies the environment entity categories in the environment information image of the target position, constructs an environment digital twin of the three-dimensional environment at that position from the entity categories and the layout parameters, fuses it with the terminal digital twin to obtain the target digital twin of the target terminal at the current position, and performs multi-dimensional visual display of the environment information accordingly. This breaks through the limitation of fixed visual-dimension content, provides users with rich visual effects, and allows the environment digital twin to be constructed from either actual or simulated environment information, making the scheme suitable for both real control services and simulated control services and improving user experience.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic view of a display system of environmental information provided in an embodiment of the present application;
FIG. 2 is a flowchart illustrating steps of a method for displaying environmental information according to an embodiment of the present disclosure;
fig. 3 is a flowchart illustrating another step of a method for displaying environmental information according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an architecture of a display system for displaying environmental information according to an embodiment of the present application;
fig. 5 is a schematic component architecture diagram of a terminal carrier provided in an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a target recognition model provided in an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a display device for displaying environmental information according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiments of the present application provide a method and an apparatus for displaying environmental information, and a computer-readable storage medium. Specifically, the embodiments will be described from the perspective of a display apparatus for environmental information, which may be integrated in a computer device; the computer device may be a server or a user terminal. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data, and artificial-intelligence platforms. The user terminal may be a smartphone, tablet computer, notebook computer, desktop computer, smart speaker, smart watch, smart home appliance, vehicle-mounted terminal, intelligent voice interaction device, aircraft, military device, military vehicle, or the like, but is not limited thereto.
It should be noted that, when the above embodiments of the present application are applied to specific products or technologies, data related to user information, user behavior, business habits, characteristics, and the like must be obtained with the user's permission or consent, and the collection, use, and processing of such data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
The method for displaying environmental information provided by the embodiments of the present application can be applied to various scenarios, including but not limited to cloud technology, artificial intelligence, intelligent transportation, driving assistance, military technology, and aviation technology; these scenarios may be implemented by means of cloud services, big data, and the like. The method is explained in detail through the following embodiments:
for example, referring to fig. 1, a scene diagram of a display system for environment information provided in an embodiment of the present application is shown. The scenario includes a terminal or a server.
The terminal or the server can acquire an environment information image and an environment layout parameter corresponding to the target position; inputting the environment information image into a target recognition model, so that the target recognition model carries out feature classification processing on the environment information image to obtain environment entity category information corresponding to the environment information image; constructing an environment digital twin body corresponding to the target position according to the environment layout parameters and the environment entity category information; acquiring a terminal digital twin body corresponding to a target terminal, and fusing the environment digital twin body and the terminal digital twin body to obtain a target digital twin body; and displaying the environmental information of the target position in a multi-dimension mode based on the target digital twin body.
Wherein, the displaying of the environment information may comprise the following steps: determining an environment information image and environment layout parameters of a target position, identifying environment entity category information in the environment information image, constructing an environment digital twin of the target position, fusing the terminal digital twin and the environment digital twin into a target digital twin, performing multi-dimensional display according to the target digital twin, and the like.
The following are detailed below. The description order of the following embodiments is not intended to limit their preferred order.
In the embodiments of the present application, description will be made from the perspective of a display device of environmental information, which may be specifically integrated in a computer apparatus such as a terminal or a server. Referring to fig. 2, fig. 2 is a schematic step flow diagram of a method for displaying environmental information according to an embodiment of the present disclosure, where an example is that a display device of the environmental information is specifically integrated on a server, and when a processor on the server executes a program instruction corresponding to the method for displaying the environmental information, a specific flow is as follows:
101. Acquire an environment information image and environment layout parameters corresponding to the target position.
The target position may be the real geographical position where the target terminal is currently located, such as a street, a building, or a mountain forest; it may also be a virtual geographical position to be simulated by the target terminal, for example, the target terminal is simulated, by computer technology, as being located at forest A so that simulation computations are performed on the forest-A scene. The target terminal may be any terminal device with computing capability, or a carrier equipped with such a terminal, such as a military combat vehicle (e.g., a tank) carrying the terminal, an aircraft carrying the terminal, a vehicle-mounted terminal, or a simulated transport vehicle (e.g., a simulated airplane or simulated vehicle) carrying the terminal.
The environment information image may be an image of the environment around the target position, and may specifically include information on one or more entities in the environment, that is, the environment information that can be observed in a 360-degree view around the target position. For example, an environment information image of a forest scene may include, without limitation, trees, grass, bushes, boulevards, birds, and the like. As another example, an environment information image of a marine scene may include, without limitation, the sea surface, islands, the distant sky, the sun, ships, and the like. The above are merely examples; the environment information image may also be an image of a specific position in another scene.
The environment layout parameters may be layout parameters of each environment entity corresponding to the target position, reflecting the distribution of one or more environment entities in the environment space around the target position; they may include the height, width, size, location coordinates (or longitude and latitude), color distribution, color proportion, and number of each environment entity. For example, for a position in a forest scene containing trees, grass, bushes, boulevards, birds, and the like, the environment layout parameters may be the current height, width, size, location coordinates (or longitude and latitude), color distribution, and color proportion of those entities. It should be noted that, when the service environment is simulated, the environment layout parameters may be parameters of virtual entities in the simulated environment; for example, in a simulated-driving scenario where the flight-simulation device is located in a ground workroom and a sky environment is selected via a simulated-training sub-mode, the virtual entities may be clouds, the sun, the sky, and so on, and the environment layout parameters may be the chromaticity and size of the clouds, the visibility, the direction of the sun, the light intensity, the sky chromaticity, and the like. The above is merely an example; other real or simulated business environments are equally applicable and are not enumerated here.
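The layout parameters enumerated above can be gathered into a simple per-entity record; the schema below is purely illustrative, since the patent lists the fields informally rather than defining a data structure.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class EntityLayout:
    """One environment entity's layout parameters (illustrative schema)."""
    category: str                  # e.g. "tree", "road", "cloud"
    height: float                  # assumed metres
    width: float
    size: float
    location: Tuple[float, float]  # coordinates, or (longitude, latitude)
    color_ratio: float             # dominant-colour proportion, 0..1
    count: int = 1                 # number of such entities
```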
In the embodiment of the application, in order to perform multidimensional visual display on the environmental information of the target position and implement real control service or simulated control service on the environmental scene based on the target position, an environmental information image and an environmental layout parameter corresponding to the target position need to be obtained in advance, so that a corresponding environmental digital twin is constructed subsequently based on the obtained environmental information image and the environmental layout parameter, and the multidimensional visual display of the environmental information is performed according to the environmental digital twin.
In some embodiments, the obtaining of the environment information image and the environment layout parameter corresponding to the target location may be obtaining the environment information image and the environment layout parameter of a real location address or a simulated location address.
For example, to obtain the environment information image and environment layout parameters of a simulated location address, they may be looked up in a simulated-location environment database, which may contain pre-established environment information images and layout parameters for scenes such as each position in a forest, ocean, city, or sky, with the environment entities freely configurable. It should be noted that the simulated location address may also be a real geographical location at a historical time, in which case the simulated environment information image and layout parameters describe that location at that time, such as the environment information image and entity layout parameters of a Pacific theater (location) during the World War period.
As another example, to obtain the environment information image and environment layout parameters of a real location address, they can be collected around the current position by the target terminal. Taking a military combat vehicle carrying a terminal as an example, a low-illumination camera, a thermal infrared imager, an uncooled infrared sensor, a lidar, a millimeter-wave radar, a BeiDou positioning and timing component, an inertial navigation component, and the like can be arranged around the vehicle body to photograph and detect the surrounding environment of the combat vehicle's position in real time, so that the environment information image and the environment layout parameters are acquired from the sensing components individually or in combination.
Through the above, the environment information image and environment layout parameters corresponding to the target position can be obtained, so that a multi-dimensional environment view scene corresponding to the target position can subsequently be constructed, and real or simulated control services can be carried out based on that scene, improving user experience.
102. Input the environment information image into the target recognition model, so that the target recognition model performs feature classification processing on the image to obtain the environment entity category information corresponding to the environment information image.
It should be noted that the multi-dimensional environment view corresponding to the target position may be provided by the constructed environment digital twin of the target position, and the environment digital twin may be regarded as a three-dimensional environment semantic map. Therefore, to construct the three-dimensional environment semantic map, after the environment information image is obtained, the environment information in the image needs to be converted into semantic labels or category labels, so that the map is built from the label information and the environment layout parameters, yielding the environment digital twin at the target position.
The environment entity category information may be the category information of the environment entities contained in the environment information image, that is, the semantic label or category label of each environment entity. For example, for a target position in a forest scene whose environment information image contains entities such as trees, grass, bushes, boulevards, and birds, the environment entity category information is the category information of each of those entities: the tree entity corresponds to the category label "tree", which can be further refined to the name of the tree, and the boulevard entity corresponds to the category label "road", which can be further refined into "stone road", "dirt road", and the like. The above are examples only.
In the embodiment of the present application, to identify the categories of the environment entities in the environment information image, the image can be recognized by a neural network model, which improves the efficiency and accuracy of recognizing the environment entity information. The target recognition model is a trained convolutional network model, for example a target detection model such as RetinaNet. To recognize the environment information image with the target recognition model, a preset recognition model must first be trained; specifically, training can be performed jointly on sample environment information images and sample environment entity category information to obtain a trained target recognition model for recognizing environment entities in environment information images.
In some embodiments, the training process of the target recognition model is as follows: acquiring a sample environment information image and corresponding sample environment entity category information; inputting the sample environment information image into a preset recognition model, and obtaining predicted environment entity category information output by the preset recognition model; determining the category loss between the predicted environment entity category information and the sample environment entity category information; adjusting the model parameters of the preset recognition model according to the category loss and a backward gradient learning algorithm; and repeatedly executing the model parameter adjustment step for iterative training until the category loss converges, so as to obtain the trained target recognition model.
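As an illustrative sketch only (not the patented model itself), the iterate-until-the-category-loss-converges scheme above can be shown with a toy one-feature logistic classifier standing in for the preset recognition model; the function names, learning rate, and convergence tolerance are assumptions for illustration:

```python
import math

def train_recognizer(samples, labels, lr=0.5, epochs=200, tol=1e-6):
    """Toy stand-in for the preset recognition model: a one-feature logistic
    classifier trained by gradient descent until the category loss converges."""
    w, b, prev_loss = 0.0, 0.0, float("inf")
    n = len(samples)
    for _ in range(epochs):
        loss, gw, gb = 0.0, 0.0, 0.0
        for x, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))    # predicted category probability
            loss += -(y * math.log(max(p, 1e-12)) + (1 - y) * math.log(max(1.0 - p, 1e-12)))
            gw += (p - y) * x                            # gradient of the category loss w.r.t. w
            gb += p - y
        loss /= n
        w -= lr * gw / n                                 # backward-gradient parameter update
        b -= lr * gb / n
        if abs(prev_loss - loss) < tol:                  # stop once the category loss converges
            break
        prev_loss = loss
    return w, b

def predict(w, b, x):
    """Probability that sample x belongs to the positive category."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))
```

A real recognition model replaces the logistic unit with a deep convolutional network, but the loop structure (forward pass, loss, gradient step, convergence check) is the same.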
For example, taking a RetinaNet network model as an example, the RetinaNet network model structurally includes a Convolutional Neural Network (CNN) backbone, a Feature Pyramid Network (FPN), a prediction (or classification) module layer, and the like. The training process for the RetinaNet network model is as follows: a sample environment information image is input into the RetinaNet network model; convolution processing is performed on the sample image through the convolutional neural network layer to obtain a plurality of sample convolution features of different sizes; corresponding pyramid features are constructed by the feature pyramid network layer based on the plurality of sample convolution features; and prediction processing is performed on each sub-feature of the pyramid features by the prediction module layer to obtain predicted environment entity category information. Further, a cross-entropy loss value (such as Focal Loss) between the sample environment entity category information and the predicted environment entity category information is determined, the network parameters of the RetinaNet network model are adjusted according to a regression algorithm based on the cross-entropy loss value, and iterative training is performed until the model loss value converges, so as to obtain the trained target recognition model.
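The Focal Loss mentioned above down-weights well-classified examples relative to plain cross-entropy, which is why it suits the dense-detection setting of RetinaNet. A minimal single-prediction binary version in pure Python (the default gamma/alpha values follow common RetinaNet settings, but are assumptions in this sketch):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal Loss for a single binary prediction.

    p is the predicted probability of the positive class, y the true label (0/1).
    With gamma == 0 this reduces to alpha-weighted cross-entropy.
    """
    pt = p if y == 1 else 1.0 - p          # probability assigned to the true class
    a = alpha if y == 1 else 1.0 - alpha   # class-balancing weight
    return -a * (1.0 - pt) ** gamma * math.log(max(pt, 1e-12))
```

Because of the `(1 - pt) ** gamma` factor, a confident correct prediction (e.g. `p = 0.9`, `y = 1`) contributes almost nothing, while a badly misclassified one dominates the loss.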
In the embodiment of the present application, after the trained target recognition model is obtained, the environment information image can be recognized through it. Specifically, convolution processing is performed on the environment information image through the convolutional neural network layer in the target recognition model. The convolutional neural network layer includes a plurality of sub-convolution layers whose convolution kernels differ in size; for example, the sub-convolution layers may be arranged with kernel sizes from large to small. Each sub-convolution layer performs convolution processing on the environment information image and/or the intermediate image features to obtain a plurality of convolution features of different sizes; corresponding pyramid features are constructed by the feature pyramid network layer based on these convolution features; and finally the prediction module layer performs classification prediction based on the pyramid features to obtain the corresponding environment entity category information, namely the semantic tags or category tags.
Through the method, the environmental information in the environmental information image can be converted into the semantic label or the category label, so that the three-dimensional environmental semantic map is constructed according to the label information and the environmental layout parameters, and the environmental digital twin at the target position is obtained.
103. And constructing an environment digital twin body corresponding to the target position according to the environment layout parameters and the environment entity category information.
It should be noted that a digital twin (Digital Twin) can be understood as a digital mapping or digital simulation of a physical object in a virtual data space; it digitally maps entities in a real or virtual environment (such as buildings, plants, roads, vehicles, human bodies, objects in the sky, and the like) for digital representation. For example, a building in the physical world can be digitally simulated so that it is reproduced in a digital virtual space in a data-driven manner.
The environment digital twin body can be a digital twin body of the environment information of the target position, namely, the environment condition of the target position is subjected to digital mapping and digital simulation, so that the environment information of the target position is constructed into a corresponding digital twin space, and the environment information can be displayed according to the environment digital twin body in the following. It can be understood that the environment digital twin is associated with the environment information of the relevant position, and when the target position changes/moves, the environment digital twin also changes and updates.
In order to obtain an environment digital twin body corresponding to a current target position, after an environment entity is identified in an environment information image, the embodiment of the application can construct the corresponding environment digital twin body according to an environment layout parameter and identified environment entity category information.
In some embodiments, step 103, constructing the environment digital twin corresponding to the target position according to the environment layout parameters and the environment entity category information, may include: constructing a three-dimensional environment semantic map corresponding to the target position according to the environment layout parameters and the environment entity category information; and rendering the three-dimensional environment semantic map to obtain the rendered environment digital twin.
The three-dimensional environment semantic map may be a three-dimensional map corresponding to the environment information of the target position, and it includes the category/tag information of each environment entity at the target position. Specifically, the three-dimensional environment semantic map is a three-dimensional map comprising the three-dimensional view structures of the plurality of environment entities at the target position; each environment entity has corresponding environment entity category information/tags, and this category information serves a labeling function in the three-dimensional environment semantic map.
Specifically, the layout relationships between the environment entities in the surroundings of the target position are determined according to the environment layout parameters, for example by determining the distribution position of each environment entity; a corresponding three-dimensional environment entity distribution map is constructed according to these distribution positions; and the corresponding environment entities in the distribution map are then labeled according to the category information of each environment entity, so that each environment entity has a corresponding semantic identifier, yielding the three-dimensional environment semantic map. Furthermore, so that the environment visual picture subsequently presented to the user matches the environment information of the target position, the three-dimensional environment semantic map needs to be rendered, thereby obtaining the rendered environment digital twin at the target position.
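The merge of distribution positions and semantic labels described above can be sketched as follows; the data shapes are hypothetical (entity id mapped to a position, entity id mapped to a tag), chosen only to illustrate the labeling step:

```python
def build_semantic_map(layout_params, category_info):
    """Combine the entity distribution positions from the environment layout
    parameters with the semantic tags from the recognition model into one
    three-dimensional environment semantic map (dict-of-dicts for illustration)."""
    semantic_map = {}
    for entity_id, position in layout_params.items():
        semantic_map[entity_id] = {
            "position": position,                              # placement in the 3-D map
            "label": category_info.get(entity_id, "unknown"),  # tag from the recognition model
        }
    return semantic_map
```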
When the three-dimensional environment semantic map is rendered, because the rendering process involves considerable computation, the map can be rendered through a cloud server, for example according to the parameters (such as geometry, viewpoint, texture, color, and the like) of each environment entity in the environment layout parameters. Specifically, a six-Degree-of-Freedom (6-DoF) parameter of each environment entity is determined according to the environment layout parameters. The six degrees of freedom can be understood as the six spatial degrees of freedom of an environment entity: taking the three coordinate axes x, y, and z of a three-dimensional space as an example, the degrees of freedom of the entity translating along the x, y, and z axes together with the degrees of freedom of the entity rotating about the x, y, and z axes constitute the six degrees of freedom of each environment entity. Three-dimensional rendering is then performed on the three-dimensional environment semantic map according to the six-degree-of-freedom parameters to obtain the environment digital twin.
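A six-degree-of-freedom pose as described above can be represented minimally as three translations and three rotations; the class and field names below are illustrative, not taken from the specification:

```python
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    """Translation along and rotation about the x, y and z axes
    (angles in degrees for simplicity)."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    roll: float = 0.0
    pitch: float = 0.0
    yaw: float = 0.0

    def translate(self, dx, dy, dz):
        """Move the entity along the three coordinate axes."""
        self.x += dx
        self.y += dy
        self.z += dz

    def rotate(self, droll, dpitch, dyaw):
        """Rotate the entity about the three coordinate axes."""
        self.roll = (self.roll + droll) % 360.0
        self.pitch = (self.pitch + dpitch) % 360.0
        self.yaw = (self.yaw + dyaw) % 360.0
```

A renderer would consume one such pose per environment entity to place and orient it in the digital space.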
It should be noted that, when the environment digital twin at the target position is constructed on the target terminal, in the rendering stage the three-dimensional environment semantic map and related parameters (such as the color distribution parameters in the environment layout parameters) may be sent to a cloud rendering (GPU) server, so as to request the cloud rendering server to render the three-dimensional environment semantic map into the environment digital twin and return it to the target terminal. For another example, when the environment digital twin at the target position is built on the server side, the rendering can be performed directly by the server in the rendering stage; alternatively, the three-dimensional environment semantic map and related parameters (such as the color distribution parameters in the environment layout parameters) can be sent to a cloud edge rendering (GPU) server, so as to request the cloud edge rendering server to render the three-dimensional environment semantic map into the environment digital twin and return it to the server.
In the embodiment of the present application, after the environment digital twin corresponding to the target position is constructed, it can be stored in a corresponding storage space, such as a storage space of the terminal and/or the server, so that the environment digital twin at the target position can subsequently be requested for reuse, for example in a simulated control application scene or a real control application scene, which reduces the amount of computation needed to reproduce the environment digital twin at the target position.
By the method, after the environmental entity type information and the environmental layout parameters of the target position are obtained, the environmental digital twin is constructed according to the environmental entity type information and the environmental layout parameters, so that the construction of the digital space of the environmental information of the current target position is realized, and the visual display of the multi-dimensional environmental information is realized according to the environmental digital twin.
104. And acquiring a terminal digital twin body corresponding to the target terminal, and fusing the environment digital twin body and the terminal digital twin body to obtain the target digital twin body.
The target terminal is a terminal device with computing capability, and the terminal may have corresponding external devices, such as a display device, an audio device, a carrier vehicle, and the like; for example, the target terminal may be a terminal device on a military combat vehicle (such as a tank), or a terminal device mounted on an aircraft. In the embodiment of the present application, the devices associated with the terminal device may be regarded as a whole, that is, as the target terminal.
The terminal digital twin may be a digital twin corresponding to the structure and/or the functional program of the target terminal; that is, the structure and/or functions of the target terminal are digitally mapped and simulated so that the target terminal is constructed as a corresponding digital twin, and the target terminal can then be controlled through its terminal digital twin. For example, taking a military combat vehicle equipped with terminal equipment as an example, after the terminal digital twin corresponding to the military combat vehicle is constructed, the relevant occupant can manipulate the terminal digital twin so that the terminal digital twin in turn controls the military combat vehicle. It should be noted that different terminals/terminal carriers each have an independent terminal digital twin, that is, each terminal/terminal carrier corresponds to one terminal digital twin; the terminal digital twin can be regarded as a digital twin in a digital twin space in a cloud service, and a terminal user can control the digital twin of the terminal so that the digital twin controls the corresponding terminal.
The target digital twin may be a combination of the environment digital twin and the terminal digital twin, that is, a fusion of the two. Specifically, the target digital twin may be a terminal digital twin fused into the digital space of an environment digital twin, so that the terminal digital twin is situated within that digital space. The target digital twin may reflect the environment information scene at the target position of the target terminal in real physical space, or it may represent the environment information scene at a simulated position of the target terminal in a simulated application scenario.
In some embodiments, in order to construct a terminal digital twin corresponding to a target terminal for later use, the terminal digital twin may be constructed according to structural parameters (such as geometry, texture, viewpoint, color, etc.) of the target terminal. Specifically, before "acquiring a terminal digital twin corresponding to a target terminal" in step 104, the method further includes: acquiring structural parameters of a target terminal; carrying out digital modeling on the target terminal according to the structural parameters to obtain a terminal digital twin model corresponding to the target terminal; and rendering the terminal digital twin model to obtain a terminal digital twin corresponding to the target terminal.
The structural parameters may be parameters such as the geometric shape, color, size ratio, and movable components of the target terminal. For example, for an aircraft with a terminal mounted on it, the structural parameters may include the external geometric parameters, color distribution, size ratio, and movable propeller unit of the aircraft, and may further include its control logic program and the like.
Specifically, in order to construct the terminal digital twin corresponding to the target terminal, the structural parameters and/or the terminal control logic of the target terminal may be acquired, and a digital model corresponding to the target terminal may be constructed according to them. In particular, the terminal six-degree-of-freedom information of the target terminal may be determined according to the structural parameters, and a digital model of the target terminal in the digital space, that is, the terminal digital twin model, may be established according to that information. Furthermore, the terminal digital twin model corresponding to the target terminal can be rendered in a cloud-edge rendering manner to obtain the terminal digital twin corresponding to the target terminal, thereby completing its construction. The terminal digital twin can then be stored in a corresponding storage space, such as that of the local terminal or a server, for later use; this improves the efficiency of subsequently obtaining the terminal digital twin of the target terminal, reduces the amount of computation, and saves computing resources.
Further, the terminal digital twin corresponding to the target terminal can be obtained and fused with the environment digital twin corresponding to the target position, so that the terminal digital twin is merged into the digital space of the environment digital twin at the target position to obtain the target digital twin. By fusing the terminal digital twin and the environment digital twin in the digital space, when the multi-dimensional environment information is subsequently displayed according to the target digital twin, the digital space can present the relevant user with the experience of a real scene, which improves the visual effect, enhances the user's sense of immersion in the scene, and improves the overall experience.
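The fusion step of 104 can be sketched as placing the terminal twin, at a pose, inside a copy of the environment twin's digital space; the dict-based representation and function name are assumptions made purely for illustration:

```python
def fuse_twins(env_twin, terminal_twin, terminal_pose):
    """Place the terminal digital twin into the digital space of the environment
    digital twin at the given pose, yielding the target digital twin."""
    target_twin = dict(env_twin)          # shallow copy of the environment digital space
    target_twin["terminal"] = {
        "model": terminal_twin,           # the terminal's digital twin model
        "pose": terminal_pose,            # where the terminal sits inside the space
    }
    return target_twin
```

Because the environment space is copied rather than mutated, the same environment digital twin can be reused to fuse several different terminal twins, matching the one-twin-per-terminal arrangement described above.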
Through the method, the terminal digital twin body of the relevant terminal can be obtained when a real or simulated service operation scene is carried out, and the terminal digital twin body is integrated into the digital space of the environment digital twin body, so that the real operation and control service or the simulated operation and control service can be executed in the digital twin space in the following process, and the requirements and experience of users are met.
105. And displaying the environmental information of the target position in a multi-dimension mode based on the target digital twin body.
When the target digital twin is displayed in a multi-dimensional manner, the target digital twin can be displayed through related display equipment, for example, the target digital twin is displayed in a multi-dimensional manner through a related multi-face display screen, and for example, multi-dimensional environment information is displayed through Mixed Reality (MR) glasses based on the target digital twin, so that related personnel can realize visual experience of multi-dimensional visual contents.
In some embodiments, when the target digital twin is displayed in multiple dimensions, it is displayed mainly through a related display device; therefore, the display device connected with the computer device can be determined, and the target digital twin can be sent to that target display device for display. Specifically, step 105 may include: querying a local long-connection list, where the local long-connection list includes the identifiers of the display devices that have established a long connection with the local device; and (105.a) sending the target digital twin to the target display device corresponding to the display device identifier to perform multi-dimensional display of the environment information of the target position. It should be noted that, for the local long-connection list, after a display device establishes a long connection with the local device (such as a server or a terminal), the identifier of the display device, the connection start time, and the like are added to the list; after the target display device that has established a long connection with the local device is determined according to the device identifiers in the list, the target digital twin is sent to it for multi-dimensional environment information display.
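A minimal sketch of the long-connection bookkeeping described above, assuming a simple dict keyed by device identifier (the function names and record fields are illustrative):

```python
import time

def register_connection(long_conn_list, device_id):
    """Record a display device when it establishes a long connection with the
    local device, keeping its identifier and the connection start time."""
    long_conn_list[device_id] = {"connected_at": time.time()}

def lookup_target_devices(long_conn_list):
    """Return the identifiers of all display devices currently holding a long
    connection, i.e. the candidates to receive the target digital twin."""
    return list(long_conn_list)
```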
In some embodiments, the presentation is performed mainly by a peripheral display device other than the computer device; the peripheral display device is connected to the computer device and is not limited to Mixed Reality (MR) glasses or the like. It should be noted that, when the target digital twin is displayed, in order to let the target object feel situated in a real environment and understand the conditions of the service environment, the environment information content viewed by the target object wearing the peripheral display device may be determined according to the posture, location, and other conditions of the target object, so that the environment information content to be viewed is displayed on the peripheral display device. For example, the step (105.a) "sending the target digital twin to the target display device corresponding to the display device identifier for performing the multidimensional display of the environment information of the target position" may include:
(105.a.1) acquiring the posture characteristics of the target object corresponding to the display equipment identification, and determining the face direction parameters of the target object according to the posture characteristics;
(105.a.2) estimating spatial layout parameters at the time of digital spatial picture display of the target digital twin;
(105.a.3) predicting the visual range parameter of the target object in the digital space picture according to the spatial layout parameter and the face direction parameter, and selecting a target sub digital twin corresponding to the visual range parameter from the target digital twin;
and (105.a.4) sending the target sub-digital twin to a target display device corresponding to the display device identification, so that the target display device performs multi-dimensional display on the target sub-digital twin.
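Steps (105.a.1) to (105.a.3) amount to a field-of-view test: keep only the entities whose direction from the viewer lies within an angular range around the face-direction vector. A 2-D illustrative version (the 90-degree field of view and the flat entity representation are assumptions of this sketch):

```python
import math

def select_visible_entities(entities, face_dir, fov_deg=90.0):
    """Keep the entities inside the visual-range parameter: entities maps an
    id to its (x, y) position relative to the viewer, face_dir is the viewer's
    face-direction vector, fov_deg the assumed total field of view."""
    fx, fy = face_dir
    norm_f = math.hypot(fx, fy)
    half_fov = math.radians(fov_deg) / 2.0
    visible = {}
    for eid, (x, y) in entities.items():
        norm_e = math.hypot(x, y)
        if norm_e == 0.0:
            continue                                   # entity at the viewer itself
        cos_angle = (fx * x + fy * y) / (norm_f * norm_e)
        angle = math.acos(max(-1.0, min(1.0, cos_angle)))
        if angle <= half_fov:                          # inside the visual range
            visible[eid] = (x, y)
    return visible
```

The selected subset plays the role of the target sub digital twin: only it needs to be sent to the display device, and it changes whenever the face direction changes.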
It should be noted that the target digital twin is virtual digital twin data obtained by digitally mapping/simulating the environment information of the target position and the target terminal in a virtual digital space, and it belongs to a three-dimensional digital space; when the target digital twin is displayed through a display device, especially through smart glasses, the environment information to be displayed can be determined in combination with the visual direction of the target object.
The posture feature may be posture information of the target object, which can include information such as the line-of-sight height and direction with which the target object captures the environment view; the posture information can be acquired through related capture devices such as infrared sensors and cameras.
The spatial layout parameter may be the environment data layout parameter of the target digital twin when it is displayed as a three-dimensional space picture. Specifically, since the target digital twin is displayed as a three-dimensional space picture, the spatial layout parameters may be the layout parameters of each environment entity in the three-dimensional scene, such as the position and size at which the environment entity is displayed in the three-dimensional space picture corresponding to the target digital twin. For example, taking a forest environment scene as an example, the layout parameters of each environment entity such as trees, roads, grass, and shrubs in the three-dimensional space scene corresponding to the digital space are the corresponding spatial layout parameters.
Specifically, in order to let the target object feel situated in a realistic environment and understand the conditions of the service environment, the face direction parameter of the target object can be determined according to the posture feature information of the target object, so that the visual environment range of the target object can subsequently be determined to the maximum extent. Further, the spatial layout parameters of the target digital twin when the three-dimensional space picture is displayed are determined through estimation. On this basis, the visual range parameter of the target object in the three-dimensional picture is estimated from the spatial layout parameters and the face direction parameter, so as to select the target sub digital twin corresponding to the visual range parameter. Finally, the target sub digital twin is sent to the target display device corresponding to the display device identifier, so that the target display device displays a visual environment information picture to the target object according to the target sub digital twin. It can be understood that when the visual direction of the target object changes, the target sub digital twin received by the display device differs, and the displayed environment information picture differs accordingly.
In some embodiments, since the target digital twin is formed by fusing the terminal digital twin and the environment digital twin at the target position, and the target digital twin is digital space information in a certain visual direction, the target digital twin may contain environment information inside the target terminal or external environment information at the target position. Therefore, when the target digital twin is displayed, the display mode of the target display device carried by the target object may be determined so as to display the corresponding environment information in a targeted manner. Before the step (105.a.4) "sending the target sub digital twin to the target display device corresponding to the display device identifier", the method may include: acquiring the display mode of the target display device corresponding to the display device identifier; and if the display mode is identified as the external environment display mode, selecting the external environment digital twin for digital space picture display from the target sub digital twin. Step (105.a.4) may then include: sending the external environment digital twin to the target display device, so that the target display device performs multi-dimensional display of the external environment digital twin.
For example, taking a combat vehicle equipped with a terminal as an example, the target object is an occupant at a crew station of the combat vehicle, and the target object wears the target display device (such as MR glasses). When the target digital twin is displayed, the target object can switch the display mode of the target display device according to its visual information requirements: if the external environment information of the target position outside the combat vehicle needs to be viewed, the display mode of the target display device can be switched to the external environment display mode, and the target display device reports this display mode to the computer device, so that the computer device selects the external environment digital twin for digital space picture display from the target digital twin and sends it to the target display device for multi-dimensional display; conversely, the internal environment digital twin corresponding to the internal environment display mode is selected and sent to the target display device for display. In this way, the user's visual requirements for environment information are met, and the user experience is improved.
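The mode-dependent selection above can be sketched as a small dispatcher; the mode strings and the dict layout of the sub twin are assumptions of this sketch, not from the specification:

```python
def select_twin_for_mode(target_sub_twin, display_mode):
    """Pick the external- or internal-environment part of the target sub
    digital twin according to the display mode reported by the display device."""
    if display_mode == "external":
        return target_sub_twin["external_env"]   # view outside the vehicle
    if display_mode == "internal":
        return target_sub_twin["internal_env"]   # view inside the vehicle
    raise ValueError(f"unknown display mode: {display_mode}")
```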
In some embodiments, after the target digital twin is transmitted to the target display device for display, the target digital twin needs to be modified in real time according to the environment information observed by the target object, and it is determined whether the updated target digital twin needs to be transmitted to the target display device for display. Specifically, after the step (105.a.4) "sending the target digital twin to the target display device corresponding to the display device identifier to perform the multidimensional display of the environment information of the target position", the method may include: if the target display equipment is detected to be in a disconnection state with the local, recording a second environment information image and a second environment layout parameter corresponding to the disconnection state; constructing a second environment digital twin body corresponding to the second environment information image and the second environment layout parameter, and updating the target digital twin body according to the second environment digital twin body to obtain an updated second target digital twin body; ceasing transmission of the second target digital twin to the target display device.
Specifically, when the communication connection between the target display device and the local device is detected to be in a disconnected state, second position information of the target display device in the disconnected state is acquired, a second environment information image and second environment layout parameters corresponding to the second position information are obtained, second environment entity category information in the second environment information image is identified through the target recognition model, and the three-dimensional environment semantic map constructed from the second environment entity category information and the second environment layout parameters is rendered to obtain the second environment digital twin. Because the terminal digital twin in the digital twin space has not changed, the target digital twin is updated directly according to the second environment digital twin, obtaining the updated second target digital twin. Furthermore, the transmission of the second target digital twin to the target display device is stopped, because the target display device is disconnected from the local device, indicating that the relevant target object does not currently need the environment information to be displayed. It should be noted that, as the target terminal moves in the environment, the environment digital twin in the digital space can be updated in real time, so that when a user subsequently requests multi-dimensional display of environment information through the digital space technology, response delay and stuttering caused by an excessive instantaneous computation load are avoided.
In some embodiments, after the target digital twin is sent to the target display device for display, the environment information picture to be displayed by the target display device needs to be updated according to the real-time position changes of the target terminal and the changes in the surrounding environment information. To update the environment information picture, the target digital twin needs to be updated and the updated digital twin transmitted to the target display device in real time for display. After "displaying the environmental information of the target position based on the target digital twin" in step 105, the method may include: acquiring a control instruction of the target terminal, and determining the spatial adjustment parameters of the terminal digital twin according to the control instruction; determining the spatial state between the terminal digital twin and the environment digital twin within the target digital twin; updating the spatial state according to the spatial adjustment parameters to obtain an updated target digital twin; and displaying the environment information of the second target position according to the updated target digital twin.
The control instruction may be a control instruction issued by the relevant target object in the passenger station of the target terminal; for example, it may be a forward, U-turn, left-turn, or right-turn control instruction for the target terminal. It should be noted that, as the control instruction is executed, the position of the target terminal in the environment changes, and the environmental view changes accordingly.
The spatial adjustment parameter may be the parameter by which the terminal digital twin needs to be adjusted in the digital space. Specifically, within the digital space structure, after a control instruction is received, the corresponding spatial adjustment parameter is determined relative to the original digital-space layout between the terminal digital twin and the environment digital twin, and the digital space corresponding to the target digital twin is adjusted according to that parameter.
Specifically, the amount of spatial movement of the target terminal relative to the target position is determined from the control instruction, such as moving forward 50 meters, moving left 10 meters, or moving backward 5 meters, and the spatial adjustment parameters of the terminal digital twin are determined from this movement amount. Further, the spatial state between the terminal digital twin and the environment digital twin in the digital space is determined, and the spatial state is updated according to the spatial adjustment parameters to obtain the updated target digital twin. In this way, the environment information picture to be displayed by the target display device is updated according to the real-time position change of the target terminal and changes in the surrounding environment information.
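The mapping from control instruction to spatial adjustment can be sketched as below. The instruction names and the simple 2-D displacement model are assumptions made for this example; the disclosure does not specify a coordinate convention.

```python
# Unit displacement per instruction in a flat (x, y) digital space,
# with +y taken as "forward" (an assumed convention).
CONTROL_DISPLACEMENTS = {
    "forward": (0.0, 1.0),
    "backward": (0.0, -1.0),
    "left": (-1.0, 0.0),
    "right": (1.0, 0.0),
}


def apply_control(terminal_position, instruction, distance_m):
    """Map a control instruction (e.g. 'forward 50 m') to a spatial
    adjustment parameter and update the terminal twin's position."""
    dx, dy = CONTROL_DISPLACEMENTS[instruction]
    adjustment = (dx * distance_m, dy * distance_m)  # spatial adjustment parameter
    x, y = terminal_position
    return (x + adjustment[0], y + adjustment[1])


pos = (0.0, 0.0)
pos = apply_control(pos, "forward", 50)   # move forward 50 meters
pos = apply_control(pos, "left", 10)      # move left 10 meters
pos = apply_control(pos, "backward", 5)   # move backward 5 meters
print(pos)  # → (-10.0, 45.0)
```

Each call produces the spatial adjustment for one instruction; updating the spatial state between the terminal twin and the environment twin then amounts to applying the adjusted position inside the fused target digital twin.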
By this method, after the target digital twin corresponding to the environment and the target terminal is obtained, multi-dimensional display of the environment information can be achieved based on the target digital twin, breaking away from the traditional visual picture with fixed visual dimensions and improving the visual experience of the target object.
In the embodiment of the application, a real target position is obtained, or a simulated geographical position requiring simulated operation is selected, and the environment information image and environment layout parameters corresponding to that position are acquired in order to construct the corresponding digital twin space. Further, the environment entity category information in the environment information image is identified, a three-dimensional environment semantic map is constructed from the environment information image and the environment layout parameters, and the map is rendered to obtain the environment digital twin in the digital space, thereby reconstructing the environment information of the relevant position digitally. Then, the terminal digital twin and the environment digital twin are fused in the digital space, so that the terminal digital twin is merged into the environment digital twin; this reproduces, in the digital space, the situation of the target terminal at the relevant position, achieving a digital twin of the environment entities, the target terminal, and the associated layout between them, so that the displayed environment information is closer to the real scene. Finally, multi-dimensional environment information is displayed according to the target digital twin.
As can be seen from the above, the embodiment of the application can acquire the environment information image and environment layout parameters corresponding to the target position; input the environment information image into the target recognition model, so that the target recognition model performs feature classification processing on the environment information image to obtain the environment entity category information corresponding to the image; construct the environment digital twin corresponding to the target position according to the environment layout parameters and the environment entity category information; acquire the terminal digital twin corresponding to the target terminal, and fuse the environment digital twin and the terminal digital twin to obtain the target digital twin; and display the environment information of the target position in a multi-dimensional manner based on the target digital twin. This scheme can thus identify the environment entity categories in the environment information image of the target position, construct an environment digital twin of the three-dimensional environment architecture at that position from the environment entity categories and the environment layout parameters, fuse the environment digital twin with the terminal digital twin to obtain the target digital twin of the target terminal at the current target position, and visually display the environment information in multiple dimensions according to the target digital twin. The limitation of fixed visual-dimension content can therefore be broken, a rich visual effect can be provided for the user, and the environment digital twin can be constructed from either actual or simulated environment information, making the scheme applicable to both real and simulated control services and improving user experience.
The method described in the above examples is further illustrated in detail below by way of example.
The embodiment of the present application takes the display of the environmental information as an example, and further describes the display method of the environmental information provided in the embodiment of the present application.
Fig. 3 is a schematic flowchart of another step of the method for displaying environment information according to the embodiment of the present application, fig. 4 is a schematic structural diagram of a system for displaying environment information according to the embodiment of the present application, fig. 5 is a schematic structural diagram of a terminal carrier according to the embodiment of the present application, and fig. 6 is a schematic structural diagram of a target identification model according to the embodiment of the present application.
In the embodiments of the present application, the description is given from the perspective of a display apparatus for environment information, which may be integrated in a computer device such as a terminal or a server. The terminal may be a terminal device mounted on a carrier tool such as a sweeping robot, an aircraft, a transport vehicle, or military equipment, and may exchange data with an external display device. The server is the digital-processing server side of the environment information display process. Specifically, when the processors on the terminal and the server execute the programs corresponding to the display method of the environment information, the method comprises the following flow:
201. and the terminal acquires an environmental information image and environmental layout parameters corresponding to the target position.
202. And the terminal sends the environment information image and the environment layout parameters to the server.
203. And the server inputs the environment information image into the target recognition model, so that the target recognition model carries out characteristic classification processing on the environment information image to obtain environment entity category information corresponding to the environment information image.
204. And the server constructs an environment digital twin body corresponding to the target position according to the environment layout parameters and the environment entity category information.
205. And the server acquires a terminal digital twin corresponding to the target terminal, and fuses the environment digital twin and the terminal digital twin to obtain the target digital twin.
206. The server transmits the target digital twin to the target terminal.
207. The target terminal receives the target digital twin and performs multi-dimensional display of the environment information through the target display device.
For convenience of understanding, the implementation description of the above step flow in the embodiment of the present application is similar to the description of the foregoing embodiment, and specific reference is made to the description of the foregoing embodiment, which is not repeated herein.
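The terminal-server flow of steps 201-207 can be walked through with a toy end-to-end sketch. All names here are illustrative assumptions, and the stubbed "recognition model" (a pixel threshold) only stands in for the deep-learning model described in the disclosure.

```python
def server_pipeline(env_image, layout_params, terminal_twin):
    """Server side of steps 203-205: classify, construct, and fuse."""
    # 203: feature classification (stubbed stand-in for the recognition model)
    categories = ["vehicle" if px > 128 else "road" for px in env_image]
    # 204: construct the environment twin from layout parameters + categories
    env_twin = {"layout": layout_params, "categories": categories}
    # 205: fuse the environment twin with the terminal twin
    return {"environment": env_twin, "terminal": terminal_twin}


# 201-202: the terminal captures the image and layout parameters and uploads them
image = [200, 40, 180]          # stand-in for an environment information image
layout = {"area_m2": 400}
# 206-207: the server returns the target twin; the terminal displays it
target_twin = server_pipeline(image, layout, terminal_twin="tank-07")
print(target_twin["environment"]["categories"])  # → ['vehicle', 'road', 'vehicle']
```

The split mirrors the step numbering above: acquisition on the terminal, classification/construction/fusion on the server, and display back on the terminal side.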
To facilitate understanding of the embodiments of the present application, a specific application scenario example is described below. The method and apparatus can be used to implement real control services or simulated control services at relevant positions through the terminal, and the business logic and scene-data processing of such scenarios can be operated by the server, i.e., as a cloud service. The terminal is not limited to a terminal device mounted on a carrier tool such as a sweeping robot, an aircraft, a transport vehicle, or military equipment. For ease of understanding, a military functional scene is taken as the application scenario, and a military vehicle (such as a tank) is taken as the carrier tool equipped with the terminal. Specifically, by performing steps 201-207 above, the following cloud control application scenario for military combat vehicles can be implemented:
To achieve all-day, all-weather environment perception, thermal infrared imagers are added in the front and rear directions of the combat vehicle, and their output is fused with vehicle-mounted lidar and millimeter-wave radar to form environment-perception point-cloud information, improving the reliability of target detection at night and when the vehicle operates under blackout conditions. The combat vehicle's cloud server receives input from the multi-modal sensors, completes edge computing, stitches the data into panoramic imaging, and pushes it to the helmet display system. In addition, an artificial-intelligence processing acceleration engine built into the edge computing system achieves intelligent recognition and detection of drivable areas and targets in the image, can give early warning of dangerous targets and targets of interest, and pushes recognition and warning results to the helmet display and audible warning system. Regions of interest and targets in the image may also be augmented and cued based on augmented reality techniques. Based on cloud-edge intelligent processing of multi-modal environment perception, panoramic battlefield environment sensing is achieved, a regional dynamic map is reconstructed in real time, and the fused panoramic image is pushed to the MR head displays of the passengers, commanders, and vehicle-mounted infantry, providing the passengers with advanced driving-assistance services adapted to the battlefield environment and improving their battlefield awareness and vehicle control capability. The cloud control application scenario of this military combat vehicle may include:
(1) Cloud-service control of the combat vehicle: in this control mode, the passengers enter the combat vehicle's cloud platform, i.e., the digital twin space, through an advanced MR human-machine interface to obtain cloud services, control the digital twin on the combat vehicle's cloud platform, and the digital twin in turn controls the real vehicle in physical space, i.e., an "entity - digital twin (cloud service) - entity" control mode. This control mode completely decouples the passenger station, the vehicle chassis, and the combat mission load in both structure and function, fully meets the requirements of "cloud combat", and fundamentally changes the design and application of combat vehicles.
(1.1) Cloud-service control of the combat vehicle is based on a control mode driven by the cloud digital twin. An entity (the passenger) enters the combat vehicle's local computing cloud platform through the human-machine interface (MR device) via the vehicle-electric network, controls the digital twin of the combat vehicle deployed on the computing cloud platform, and controls the real physical combat vehicle through that digital twin; the working principle is shown in Fig. 4.
(1.2) Technologies such as multi-modal environment perception, edge computing, a cloud service platform, network and communication, a human-machine interface (including an MR head display and a digital control stick), and application software are applied to realize cloud-service control of the combat vehicle according to the new mode of intelligent environment perception, human-machine interaction, cloud-computing processing, and vehicle control.
(2) Constructing the digital space of the environment information and the combat vehicle.
(2.1) constructing a digital space of the environment information:
(2.1.1) Information is collected through multi-modal environment-perception sensors of the combat vehicle, such as lidar, a low-illumination sensor, an infrared sensor, and millimeter-wave radar, and fused to form a panoramic surround-view image. To facilitate subsequent applications, information on non-cooperative targets and roadblocks in the surrounding environment is processed and warning service information is provided, so that service information can be delivered to combat vehicle passengers, vehicle-mounted infantry, commanders, and so on. The multi-modal environment-perception information mainly relies on the system-architecture components of the passenger station, which is not limited to including lidar, a low-illumination sensor, an infrared sensor, millimeter-wave radar, a Beidou positioning and timing unit, a chassis electronic control unit, and an MR head display (MR glasses); see Fig. 5 for details.
(2.1.2) Fusion of the multi-modal environment-perception information: an artificial-intelligence deep-learning algorithm is applied to compute and process the sensor information, extract environment features, and form regional semantic-map information. Specifically, using a deep-learning network structure, obstacles such as vehicles ahead and pedestrians are detected and recognized in real time based on the intelligently perceived multi-modal environment information. Intelligent forward collision warning is realized, and the environment-recognition results are pushed to the MR mixed-reality head display, meeting the requirements of all-weather, all-day assisted driving and vehicle marching under conditions of total darkness, smoke, rain, and snow, and enabling indirect visual driving and driving under blackout conditions. The structure of the deep-learning neural network (i.e., the target recognition model) is shown in Fig. 6.
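The fusion of per-sensor detections into regional semantic-map entries can be sketched as follows. The grid-cell representation, the highest-confidence fusion rule, and the label set are assumptions made for this example; the disclosure does not specify the fusion rule.

```python
def fuse_detections(detections):
    """Merge multi-modal detections of the same grid cell by keeping the
    highest-confidence label, yielding a simple regional semantic map."""
    semantic_map = {}
    for cell, label, confidence in detections:
        best = semantic_map.get(cell)
        if best is None or confidence > best[1]:
            semantic_map[cell] = (label, confidence)
    # Strip confidences: the map stores one label per region cell.
    return {cell: label for cell, (label, _) in semantic_map.items()}


detections = [
    ((3, 4), "pedestrian", 0.72),   # e.g. infrared sensor
    ((3, 4), "pedestrian", 0.91),   # e.g. lidar point cloud
    ((5, 1), "vehicle", 0.88),      # e.g. millimeter-wave radar
]
print(fuse_detections(detections))  # → {(3, 4): 'pedestrian', (5, 1): 'vehicle'}
```

A real system would fuse calibrated sensor frames rather than pre-labeled tuples, but the shape of the output, a label per region cell, is what the semantic-map rendering step below consumes.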
(2.1.3) Rendering the fused three-dimensional regional semantic-map information. Specifically, a cloud edge-rendering technique may be employed: edge rendering supports MR applications with ultra-high polygon counts and complex scenes by utilizing the powerful computing resources of the cloud server. The MR head-display end collects six-degrees-of-freedom (6-DoF) information of the head and hands in real time and transmits it synchronously to the cloud rendering server over Wi-Fi/5G; the rendering server renders the 3D scene according to the received 6-DoF information and pushes the rendered 2D video stream back to the head-display end in real time over the network for display and playback.
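The cloud edge-rendering loop above can be sketched minimally: the headset streams 6-DoF pose samples, the render server produces one frame per pose, and the frames are pushed back as a video stream. The pose fields and the frame encoding here are illustrative placeholders, not the actual rendering pipeline.

```python
from collections import namedtuple

Pose6DoF = namedtuple("Pose6DoF", "x y z roll pitch yaw")


def render_frame(pose):
    # Stand-in for GPU rendering of the 3-D scene from the received pose.
    return f"frame@({pose.x:.1f},{pose.y:.1f},{pose.z:.1f})"


def cloud_render_loop(pose_stream):
    video_stream = []
    for pose in pose_stream:                     # 6-DoF samples sent over Wi-Fi/5G
        video_stream.append(render_frame(pose))  # 2-D frame pushed to the headset
    return video_stream


poses = [Pose6DoF(0, 0, 1.7, 0, 0, 0), Pose6DoF(0.5, 0, 1.7, 0, 0, 15)]
print(cloud_render_loop(poses))
```

The design point is that only lightweight pose data goes upstream while heavyweight rendering stays on the server, which is why thin MR headsets can display ultra-high-polygon scenes.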
(2.2) constructing a digital space of the fighting vehicle:
(2.2.1) The vehicle's multi-modal intelligent environment-perception unit is composed of a lidar group, a millimeter-wave radar group, an uncooled infrared sensor group, a low-illumination sensor group, a Beidou positioning and timing unit, an inertial navigation combination, and other devices, and completes perception and intelligent identification of the vehicle state, spatial position, and combat environment. In addition, the combat vehicle also includes a vehicle-electric network, a chassis electronic control unit, and a vehicle human-machine interaction unit.
The chassis electronic control unit is the drive-by-wire unit of the vehicle chassis. According to the navigation information, vehicle state information, its own internal state, and the vehicle driving instruction provided by the combat vehicle cloud server, it controls actuating components in the chassis such as steering, throttle, brake, gear shifting, and the transfer case, converts the driving instruction into actions of each actuator, coordinates, controls, and supervises the actuators to complete the driving action, and reports the relevant vehicle states to the cloud computing server.
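The drive-by-wire translation described above, decomposing a driving instruction into actuator actions and reporting state back, can be sketched as below. The instruction fields, actuator names, and the throttle-scaling rule are assumptions for this example only.

```python
def execute_driving_instruction(instruction):
    """Translate a cloud-server driving instruction into per-actuator actions
    and produce a status report for the cloud computing server."""
    actions = {}
    if "speed" in instruction:
        # Assumed scaling: full throttle at 60 km/h and above.
        actions["throttle"] = min(instruction["speed"] / 60.0, 1.0)
    if "steer_deg" in instruction:
        actions["steering"] = instruction["steer_deg"]
    if instruction.get("brake"):
        actions["throttle"] = 0.0
        actions["brake"] = 1.0
    # Relevant vehicle states are reported back to the cloud server.
    return {"actions": actions, "reported": True}


print(execute_driving_instruction({"speed": 30, "steer_deg": -5}))
```

The key property mirrored here is that one logical instruction fans out to several coordinated actuator commands, with state reporting closing the loop to the cloud server.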
The vehicle human-machine interaction unit includes the passenger MR head-display device, a multifunctional digital control stick, the vehicle-mounted infantry MR head display, a commander terminal, and so on. The passenger MR head-display device mainly displays virtual and real scenes such as the vehicle state and the environment; the multifunctional digital control stick generates vehicle control instructions and controls the digital twin of the combat vehicle through the network; the vehicle-mounted infantry MR head-display device displays battlefield environment information and command instructions in real time; and the commander terminal, equivalent to a third-party live-broadcast device, displays virtual-real superimposed scene information.
The vehicle-electric network is composed of a gigabit Time-Aware Network (TAN) switch, a broadband ad-hoc-network radio station, and other devices, and completes data routing and switching inside and outside the passenger station. The time-aware network offers strong real-time transmission, time-stamping of data messages, arbitrary flexible topology, intelligent dynamic route switching, and security protection, and can deterministically and reliably transmit high-volume, strongly real-time data within the passenger station. Based on the vehicle-electric network and MR multi-person cooperative operation technology, cooperative operation and control of the digital twin between passengers inside and outside the vehicle is realized and battlefield perception information is shared; the vehicle-mounted infantry can acquire vehicle operating-state information and battlefield situation information formed by the vehicle's intelligent perception in real time through the ad hoc network, and can also operate the cloud digital twin.
(2.2.2) The combat vehicle cloud server is the computing and control brain of the combat vehicle. It stores the digital twin of the combat vehicle, completes intelligent fusion processing of the multi-modal environment sensors, and forms semantic environment information for the passengers, commanders, and others; it receives control instructions generated by the passenger's multifunctional digital control stick, generates vehicle cooperative-control instructions, and drives the digital twin of the combat vehicle, which in turn controls the physical combat vehicle; it completes edge rendering of the digital twin and pushes vehicle state information, environment information, virtual and real scenes, command instructions, and so on to the passenger MR head-display device, the vehicle-mounted infantry MR head-display device, and the commander terminal in the form of a video stream; and it completes management of the vehicle passenger station and monitoring of the vehicle state.
In this way, a high-fidelity digital twin model of the combat vehicle is developed based on three-dimensional modeling technology, unifying three elements: the full-scale geometric structure, the electromechanical vehicle-control model, and the business logic. The MR head display and the multifunctional digital control stick are used to control, in real time, the digital twin deployed on the vehicle's local cloud server; the MR head display can accurately track the six degrees of freedom of the digital control stick, jointly completing chassis control and realizing indirect visual driving and blackout driving of the combat vehicle.
By implementing the above application scenario example, the following effects can be achieved: the human-machine interface of the intelligent passenger station can be separated from the combat vehicle and deployed at the combat base, with the digital twin of the combat vehicle deployed on the battlefield "equipment cloud"; the digital twin is then controlled through the "equipment cloud" from the combat base, thereby remotely controlling the combat vehicle on the battlefield and realizing an unmanned combat vehicle control mode based on the "equipment cloud". Meanwhile, the deployment of combat vehicle passengers becomes more flexible, more specialized "cloud combat services" can be provided to the battlefield, and innovation in combat styles is promoted. In addition, the digital twin of the combat vehicle can be deployed in one's own "equipment cloud" space, becoming a core digital asset in the military field for use in business areas such as simulation training, equipment teaching and training, equipment maintenance support, and combat-effectiveness evaluation.
It should be noted that this application scenario example can be applied not only to real control-service scenarios but also to simulated control-service scenarios. Taking a simulated military service scene as an example, it can be applied to business scenarios such as simulated military training, equipment teaching and training, equipment maintenance support, and combat-effectiveness evaluation, which is of great significance in the military application field. When running a simulated control-service scenario, the geographic position or environment scene to be simulated is selected to construct the environment digital twin of the simulated scene, and the terminal digital twin corresponding to the relevant terminal carrier tool is selected to construct the target digital twin corresponding to the simulated control service.
Through this application scenario example, the digital twin logic of military service functions in the target military environment can be created based on the physical form and digital twin form of the military combat vehicle, thereby realizing a cloud operation mode, specifically "passenger station on the military combat vehicle - digital twin logic - cloud operation of the military combat vehicle platform". This realizes the development capability of full-stack MR technology and introduces the military combat vehicle into a brand-new combat application mode, which is of great significance.
As can be seen from the above, in the embodiment of the present application, the environment entity type in the environment information image of the target position may be identified, the environment digital twin of the three-dimensional environment architecture at the position is constructed according to the environment entity type and the environment layout parameters, and then the environment digital twin and the terminal digital twin are fused to obtain the target digital twin of the target terminal at the current target position, and the environment information is visually displayed in multiple dimensions according to the target digital twin; therefore, the limitation of fixed visual dimension content can be broken through, a rich visual effect can be provided for a user, and an environment digital twin body can be constructed according to actual environment information or simulated environment information so as to be suitable for real control business and simulated control business and improve user experience.
In order to better implement the method, the embodiment of the application also provides a display device of the environmental information. For example, as shown in fig. 7, the display device of the environment information may include an acquisition unit 401, a recognition unit 402, a construction unit 403, a fusion unit 404, and a display unit 405.
An obtaining unit 401, configured to obtain an environment information image and an environment layout parameter corresponding to a target position;
the identification unit 402 is configured to input the environment information image into the target identification model, so that the target identification model performs feature classification processing on the environment information image to obtain environment entity category information corresponding to the environment information image;
a constructing unit 403, configured to construct an environment digital twin corresponding to the target position according to the environment layout parameters and the environment entity category information;
a fusion unit 404, configured to obtain a terminal digital twin corresponding to the target terminal, and fuse the environment digital twin and the terminal digital twin to obtain a target digital twin;
and a display unit 405 for performing multi-dimensional display of the environmental information of the target position based on the target digital twin.
In some embodiments, the construction unit 403 is further configured to: construct a three-dimensional environment semantic map corresponding to the target position according to the environment layout parameters and the environment entity category information; and render the three-dimensional environment semantic map to obtain the rendered environment digital twin.
In some embodiments, the display apparatus of the environment information may further include a pre-construction unit for: acquiring structural parameters of a target terminal; performing digital modeling on the target terminal according to the structural parameters to obtain a terminal digital twin model corresponding to the target terminal; and rendering the terminal digital twin model to obtain a terminal digital twin corresponding to the target terminal.
In some embodiments, the display apparatus of the environment information may further include a query unit to: inquiring a local long connection list, wherein the local long connection list comprises a display equipment identifier for establishing long connection with the local;
the display unit 405 is further configured to: and sending the target digital twin to target display equipment corresponding to the display equipment identification to carry out multi-dimensional display on the environmental information of the target position.
In some embodiments, the display unit 405 is further configured to: acquiring the posture characteristics of the target object corresponding to the display equipment identifier, and determining the face direction parameters of the target object according to the posture characteristics; estimating spatial layout parameters when a digital spatial picture is displayed for the target digital twin; according to the spatial layout parameters and the face direction parameters, visual range parameters of the target object in a digital space picture are estimated, and target sub-digital twins corresponding to the visual range parameters are selected from the target digital twins; and sending the target sub-digital twin to target display equipment corresponding to the display equipment identification, so that the target display equipment performs multi-dimensional display on the target sub-digital twin.
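The visible-range selection just described, choosing the sub-twin inside the target object's visual range from the face-direction and layout parameters, can be sketched with a flat 2-D field-of-view model. The bearing-based test and the 90-degree default field of view are assumptions for this example.

```python
import math


def select_visible(entities, viewer_pos, face_angle_deg, fov_deg=90):
    """Keep entities whose bearing from the viewer lies within the
    field of view centered on the face direction (target sub-twin)."""
    visible = []
    for name, (ex, ey) in entities.items():
        bearing = math.degrees(math.atan2(ey - viewer_pos[1], ex - viewer_pos[0]))
        # Wrap the angular difference into [-180, 180) before comparing.
        diff = (bearing - face_angle_deg + 180) % 360 - 180
        if abs(diff) <= fov_deg / 2:
            visible.append(name)
    return sorted(visible)


entities = {"building": (10, 0), "tree": (0, 10), "bridge": (-10, 0)}
# Viewer at the origin facing +x (0 degrees) with a 90-degree field of view.
print(select_visible(entities, (0, 0), 0))  # → ['building']
```

Only the selected entities would then be sent to the target display device, which matches the motivation of transmitting a target sub-digital twin rather than the whole twin.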
In some embodiments, the display device of environmental information further comprises a selecting unit for: acquiring a display mode of target display equipment corresponding to the display equipment identifier; if the display mode is identified to be the external environment display mode, selecting an external environment digital twin body when the digital space picture is displayed from the target sub digital twin body;
the display unit 405 is further configured to: and sending the external environment digital twins to the target display equipment, so that the target display equipment performs multi-dimensional display on the external environment digital twins.
In some embodiments, the display apparatus for environment information further includes a stopping unit configured to: if a disconnected state between the target display device and the local device is detected, record a second environment information image and second environment layout parameters corresponding to the disconnected state; construct a second environment digital twin corresponding to the second environment information image and the second environment layout parameters, and update the target digital twin according to the second environment digital twin to obtain an updated second target digital twin; and cease transmission of the second target digital twin to the target display device.
In some embodiments, the display device of environmental information further includes an updating unit for: acquiring a control instruction of a target terminal, and determining a space adjustment parameter of a terminal digital twin body according to the control instruction; determining a space state between a terminal digital twin body and an environment digital twin body in the target digital twin body; updating the space state according to the space adjusting parameters to obtain an updated target digital twin body; and displaying the environmental information of the second target position according to the updated target digital twin.
As can be seen from the above, in the embodiment of the present application, the environment entity type in the environment information image of the target position may be identified, the environment digital twin of the three-dimensional environment architecture at the position is constructed according to the environment entity type and the environment layout parameters, and then the environment digital twin and the terminal digital twin are fused to obtain the target digital twin of the target terminal at the current target position, and the environment information is visually displayed in multiple dimensions according to the target digital twin; therefore, the limitation of fixed visual dimension content can be broken through, a rich visual effect can be provided for a user, and an environment digital twin body can be constructed according to actual environment information or simulated environment information so as to be suitable for real control business and simulated control business and improve user experience.
An embodiment of the present application further provides a computer device, as shown in fig. 8, which shows a schematic structural diagram of the computer device according to the embodiment of the present application, specifically:
the computer device may include components such as a processor 501 with one or more processing cores, a memory 502 with one or more computer-readable storage media, a power supply 503, and an input unit 504. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 8 does not constitute a limitation on the computer device, which may include more or fewer components than those illustrated, may combine some components, or may arrange the components differently. Wherein:
the processor 501 is a control center of the computer device, connects various parts of the entire computer device using various interfaces and lines, and performs various functions of the computer device and processes data by running or executing software programs and/or modules stored in the memory 502 and calling data stored in the memory 502. Optionally, processor 501 may include one or more processing cores; preferably, the processor 501 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 501.
The memory 502 may be used to store software programs and modules, and the processor 501 executes various functional applications and the display of environmental information by running the software programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the computer device, and the like. Further, the memory 502 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 502 may also include a memory controller to provide the processor 501 with access to the memory 502.
The computer device further comprises a power supply 503 for supplying power to the various components. Preferably, the power supply 503 may be logically connected to the processor 501 through a power management system, so that functions such as charging management, discharging management, and power consumption management are realized through the power management system. The power supply 503 may also include one or more DC or AC power sources, a recharging system, power failure detection circuitry, a power converter or inverter, a power status indicator, and any other such components.
The computer device may also include an input unit 504, and the input unit 504 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment of the present application, the processor 501 in the computer device loads executable files corresponding to the processes of one or more application programs into the memory 502 according to the following instructions, and the processor 501 runs the application programs stored in the memory 502, thereby implementing the following functions:
acquiring an environment information image and environment layout parameters corresponding to a target position; inputting the environment information image into a target recognition model, so that the target recognition model performs feature classification processing on the environment information image to obtain environment entity category information corresponding to the environment information image; constructing an environment digital twin corresponding to the target position according to the environment layout parameters and the environment entity category information; acquiring a terminal digital twin corresponding to a target terminal, and fusing the environment digital twin and the terminal digital twin to obtain a target digital twin; and displaying the environmental information of the target position in a multi-dimensional manner based on the target digital twin.
The above operations can be implemented in the foregoing embodiments, and are not described herein.
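The five operations listed above can be sketched end to end as follows. This is an illustrative toy only: the recognition model is replaced by a trivial keyword classifier, and every function name and data shape is an assumption, not the patent's implementation.

```python
def recognize_entities(image):
    # Step 2: feature classification; a trivial keyword match stands in for the model.
    return ["desk"] if "desk" in image else ["unknown"]


def build_environment_twin(layout_params, entity_categories):
    # Step 3: combine layout parameters with the recognized entity categories.
    return {"layout": layout_params, "entities": entity_categories}


def build_terminal_twin(terminal_id):
    # Step 4 (first half): terminal digital twin, reduced to an identifier here.
    return {"terminal": terminal_id}


def fuse(env_twin, term_twin):
    # Step 4 (second half): fuse the two twins into one target digital twin.
    return {"environment": env_twin, **term_twin}


def display(target_twin):
    # Step 5: multi-dimensional display, reduced to a one-line summary.
    return (f"{target_twin['terminal']} in a scene containing "
            f"{target_twin['environment']['entities']}")


# Step 1: acquire the environment information image and layout parameters.
image, layout = "photo with desk", {"width_m": 5}
target = fuse(build_environment_twin(layout, recognize_entities(image)),
              build_terminal_twin("drone-01"))
```

The value of the pipeline is in the data flow: image and layout enter separately, meet in the environment twin, and only the fused target twin reaches the display step.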
Therefore, with this scheme, the environment entity categories in the environment information image of the target position can be identified; an environment digital twin of the three-dimensional environment architecture at that position can be constructed from the environment entity categories and the environment layout parameters; the environment digital twin and the terminal digital twin can then be fused to obtain the target digital twin of the target terminal at the current target position; and the environmental information can be visually displayed in multiple dimensions according to the target digital twin. In this way, the limitation of fixed visual-dimension content is overcome and a rich visual effect is provided to the user. Moreover, because the environment digital twin can be constructed from either actual or simulated environment information, the scheme is applicable to both real control services and simulated control services, improving user experience.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present application provides a computer-readable storage medium, in which a plurality of instructions are stored, where the instructions can be loaded by a processor to execute the steps in any one of the methods for displaying environmental information provided in the embodiments of the present application. For example, the instructions may perform the steps of:
acquiring an environment information image and environment layout parameters corresponding to a target position; inputting the environment information image into a target recognition model, so that the target recognition model carries out feature classification processing on the environment information image to obtain environment entity category information corresponding to the environment information image; constructing an environment digital twin body corresponding to the target position according to the environment layout parameters and the environment entity category information; acquiring a terminal digital twin body corresponding to a target terminal, and fusing the environment digital twin body and the terminal digital twin body to obtain a target digital twin body; and performing multi-dimensional display on the environmental information of the target position based on the target digital twin.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the computer-readable storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, and the like.
Since the instructions stored in the computer-readable storage medium can execute the steps of any method for displaying environmental information provided in the embodiments of the present application, they can achieve the beneficial effects of any such method; for details, see the foregoing embodiments, which are not repeated here.
The foregoing has described in detail the method, apparatus, and computer-readable storage medium for displaying environmental information provided by the embodiments of the present application. Specific examples are used herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is only intended to help understand the method and its core ideas. Meanwhile, those skilled in the art may, according to the ideas of the present application, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A method for displaying environmental information, comprising:
acquiring an environment information image and environment layout parameters corresponding to a target position;
inputting the environment information image into a target recognition model, so that the target recognition model carries out feature classification processing on the environment information image to obtain environment entity category information corresponding to the environment information image;
constructing an environment digital twin body corresponding to the target position according to the environment layout parameters and the environment entity category information;
acquiring a terminal digital twin body corresponding to a target terminal, and fusing the environment digital twin body and the terminal digital twin body to obtain a target digital twin body;
and displaying the environmental information of the target position in a multi-dimensional manner based on the target digital twin.
2. The method according to claim 1, wherein the constructing an environmental digital twin corresponding to the target location according to the environmental layout parameters and the environmental entity category information comprises:
constructing a three-dimensional environment semantic map corresponding to the target position according to the environment layout parameters and the environment entity category information;
and rendering the three-dimensional environment semantic map to obtain a rendered environment digital twin body.
3. The method of claim 1, wherein before obtaining the terminal digital twin corresponding to the target terminal, the method further comprises:
acquiring structural parameters of a target terminal;
performing digital modeling on the target terminal according to the structural parameters to obtain a terminal digital twin model corresponding to the target terminal;
and rendering the terminal digital twin model to obtain a terminal digital twin corresponding to the target terminal.
4. The method according to claim 1, wherein before the displaying the environmental information of the target location in a multi-dimensional manner based on the target digital twin, the method further comprises:
inquiring a local long connection list, wherein the local long connection list comprises a display equipment identifier for establishing long connection with the local;
the displaying of the environmental information of the target location based on the target digital twin includes:
and sending the target digital twin to target display equipment corresponding to the display equipment identification to carry out multi-dimensional display on the environmental information of the target position.
5. The method of claim 4, wherein the sending the target digital twin to a target display device corresponding to the display device identifier for multidimensional display of the environmental information of the target location comprises:
acquiring the posture characteristics of a target object corresponding to the display equipment identifier, and determining the face direction parameters of the target object according to the posture characteristics;
estimating spatial layout parameters when performing digital spatial picture display on the target digital twin;
according to the space layout parameter and the face direction parameter, estimating a visual range parameter of the target object in the digital space picture, and selecting a target sub-digital twin body corresponding to the visual range parameter from the target digital twin body;
and sending the target sub-digital twin to target display equipment corresponding to the display equipment identification, so that the target display equipment performs multi-dimensional display on the target sub-digital twin.
6. The method of claim 5, wherein before the sending of the target sub-digital twin to the target display device corresponding to the display device identifier, the method further comprises:
acquiring a display mode of target display equipment corresponding to the display equipment identifier;
if the display mode is identified to be an external environment display mode, selecting an external environment digital twin body when a digital space picture is displayed from the target sub digital twin body;
the sending the target child digital twin to a target display device corresponding to the display device identification so that the target display device performs multi-dimensional display on the target child digital twin includes:
and sending the external environment digital twin to the target display equipment, so that the target display equipment performs multi-dimensional display on the external environment digital twin.
7. The method of claim 4, wherein after sending the target digital twin to the target display device corresponding to the display device identifier for multidimensional display of the environmental information of the target location, further comprising:
if it is detected that the target display device is disconnected from the local end, recording a second environment information image and second environment layout parameters corresponding to the disconnected state;
constructing a second environment digital twin body corresponding to the second environment information image and the second environment layout parameter, and updating the target digital twin body according to the second environment digital twin body to obtain an updated second target digital twin body;
ceasing transmission of the second target digital twin to the target display device.
8. The method according to claim 1, wherein after displaying the environmental information of the target location based on the target digital twin, further comprising:
acquiring a control instruction of the target terminal, and determining a spatial adjustment parameter of the terminal digital twin according to the control instruction;
determining a spatial state between the terminal digital twin and the environmental digital twin in the target digital twin;
updating the spatial state according to the spatial adjustment parameter to obtain an updated target digital twin;
and displaying the environmental information of the second target position according to the updated target digital twin.
9. An environmental information display device, comprising:
the acquisition unit is used for acquiring an environment information image and environment layout parameters corresponding to the target position;
the identification unit is used for inputting the environment information image into a target identification model, so that the target identification model carries out feature classification processing on the environment information image to obtain environment entity category information corresponding to the environment information image;
the construction unit is used for constructing an environment digital twin body corresponding to the target position according to the environment layout parameters and the environment entity category information;
the fusion unit is used for acquiring a terminal digital twin body corresponding to a target terminal and fusing the environment digital twin body and the terminal digital twin body to obtain a target digital twin body;
and the display unit is used for carrying out multi-dimensional display on the environmental information of the target position based on the target digital twin body.
10. A computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the method for displaying environment information according to any one of claims 1 to 8.
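The visual-range selection recited in claim 5 can be illustrated with the following hedged sketch (not the patent's implementation; the 2-D positions, the field-of-view angle, and all names are assumptions): the face direction parameter defines a viewing cone, and only the sub-twins whose bearing falls inside it are kept for display.

```python
import math


def select_visible_subtwins(subtwins, face_dir_deg, fov_deg=90.0):
    """Keep sub-twins whose bearing from the viewer lies within the field of view.

    subtwins: mapping of name -> (x, y) position in the digital space picture.
    face_dir_deg: face direction parameter, in degrees (0 = +x axis).
    fov_deg: assumed visual-range (field-of-view) parameter.
    """
    half = fov_deg / 2.0
    visible = {}
    for name, (x, y) in subtwins.items():
        bearing = math.degrees(math.atan2(y, x))
        # Smallest angular difference between the bearing and the face direction.
        diff = abs((bearing - face_dir_deg + 180.0) % 360.0 - 180.0)
        if diff <= half:
            visible[name] = (x, y)
    return visible
```

With a 90-degree field of view and the viewer facing along +x, a sub-twin directly ahead is selected while sub-twins to the side or behind are filtered out, which is the effect claim 5 attributes to the "target sub-digital twin corresponding to the visual range parameter".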
CN202211296214.9A 2022-10-21 2022-10-21 Method and device for displaying environment information and computer readable storage medium Pending CN115641426A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211296214.9A CN115641426A (en) 2022-10-21 2022-10-21 Method and device for displaying environment information and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211296214.9A CN115641426A (en) 2022-10-21 2022-10-21 Method and device for displaying environment information and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115641426A

Family

ID=84944715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211296214.9A Pending CN115641426A (en) 2022-10-21 2022-10-21 Method and device for displaying environment information and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115641426A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117671447A (en) * 2023-12-18 2024-03-08 河北建工集团有限责任公司 Digital twin and intelligent sensor fusion system for complex scene
CN117671447B (en) * 2023-12-18 2024-05-07 河北建工集团有限责任公司 Digital twin and intelligent sensor fusion system for complex scene
CN117441980A (en) * 2023-12-20 2024-01-26 武汉纺织大学 Intelligent helmet system and method based on intelligent computation of multi-sensor information
CN117441980B (en) * 2023-12-20 2024-03-22 武汉纺织大学 Intelligent helmet system and method based on intelligent computation of multi-sensor information

Similar Documents

Publication Publication Date Title
US10580162B2 (en) Method for determining the pose of a camera and for recognizing an object of a real environment
US11593950B2 (en) System and method for movement detection
US11494937B2 (en) Multi-task multi-sensor fusion for three-dimensional object detection
CN115641426A (en) Method and device for displaying environment information and computer readable storage medium
CN114667437A (en) Map creation and localization for autonomous driving applications
US20220036579A1 (en) Systems and Methods for Simulating Dynamic Objects Based on Real World Data
JP7432595B2 (en) Cooperative virtual interface
CA3158601A1 (en) Systems and methods for vehicle-to-vehicle communications for improved autonomous vehicle operations
CN105324633A (en) Augmented video system providing enhanced situational awareness
US20220137636A1 (en) Systems and Methods for Simultaneous Localization and Mapping Using Asynchronous Multi-View Cameras
Samal et al. Task-driven rgb-lidar fusion for object tracking in resource-efficient autonomous system
Fang et al. Simulating LIDAR point cloud for autonomous driving using real-world scenes and traffic flows
US20200035030A1 (en) Augmented/virtual mapping system
CN113496510A (en) Realistic image perspective transformation using neural networks
CN109891463A (en) Image processing equipment and image processing method
CA3126236A1 (en) Systems and methods for sensor data packet processing and spatial memoryupdating for robotic platforms
CN111223354A (en) Unmanned trolley, and AR and AI technology-based unmanned trolley practical training platform and method
CN111257882A (en) Data fusion method and device, unmanned equipment and readable storage medium
CN114485700A (en) High-precision dynamic map generation method and device
CN117372991A (en) Automatic driving method and system based on multi-view multi-mode fusion
CN114332845A (en) 3D target detection method and device
Yang et al. A semantic SLAM-based method for navigation and landing of UAVs in indoor environments
CN114581748A (en) Multi-agent perception fusion system based on machine learning and implementation method thereof
Li et al. Collaborative positioning for swarms: A brief survey of vision, LiDAR and wireless sensors based methods
CN114882759A (en) Virtual-real hybrid integrated simulation intelligent ship multi-channel interactive simulation system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination