CN116630913A - Method, equipment and storage medium for mapping field environment based on feature fusion - Google Patents

Method, equipment and storage medium for mapping field environment based on feature fusion

Info

Publication number
CN116630913A
Authority
CN
China
Prior art keywords
map
image information
field environment
features
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310341699.7A
Other languages
Chinese (zh)
Inventor
仲元红
陶歆
张靖怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University
Priority to CN202310341699.7A
Publication of CN116630913A
Legal status: Pending (Current)


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The application belongs to the technical field of map modeling, and particularly relates to a field environment mapping method, equipment and storage medium based on feature fusion. A field environment mapping method based on feature fusion comprises the following steps: collecting multiple kinds of image information; deconstructing the image information according to a preset algorithm to obtain multiple image features; and establishing a model and fusing the image features to generate a travelable region feature map. LiDAR and RGB image information are dynamically fused, and a travelable region feature map with fused features is generated through cross-learning attention, so that accurate off-road free space detection is realized.

Description

Method, equipment and storage medium for mapping field environment based on feature fusion
Technical Field
The application belongs to the technical field of map modeling, and particularly relates to a field environment map building method, field environment map building equipment and a storage medium based on feature fusion.
Background
The ultimate goal of autonomous driving development is to free the driver, i.e., to require no human attention in any road environment. To achieve this, various road environments must be studied, not only structured roads, yet existing research is mostly focused on the on-road environment. Free space detection is one of the most important technologies for autonomous driving, and its definition differs between structured road scenes and unstructured off-road scenes. For the former, free space mainly refers to regular roads, while for off-road scenes the concept of free space is relatively blurred. Autonomous vehicles may need to traverse grass, sand or muddy off-road terrain. This poses a great challenge, because off-road environments are complex and diverse; for example, tall and short grass differ greatly in traversability, since tall grass may hide invisible obstacles or holes. To our knowledge, there is relatively little research on free space detection in off-road environments. Data-driven approaches have had great success in the last decade, and the world has entered the deep learning era. With the help of data-driven deep learning methods, many problems of autonomous driving have been solved, and autonomous vehicles are becoming a reality. Many autonomous driving datasets have been published, e.g., KITTI, nuScenes, Waymo, etc., in order to improve the performance of deep learning methods for autonomous driving. However, since mainstream autonomous driving companies concentrate on technology for urban environments, existing published datasets are mainly collected in cities, and little data is collected in off-road environments.
Free space detection, also known as traversable area detection, is an important component of autopilot technology, playing an important role in path planning in both on-road and off-road environments. However, existing off-road datasets are not concerned with traversability analysis in an off-road environment. Thus, a dataset focused on free space detection tasks in an off-road environment is needed.
Disclosure of Invention
The purpose of the application is to provide a field environment mapping method, equipment and storage medium based on feature fusion, which dynamically fuse LiDAR and RGB image information and generate a travelable region feature map with fused features through cross-learning attention, so as to realize accurate off-road free space detection.
In order to achieve the technical purpose, the application adopts the following technical scheme:
in a first aspect, an embodiment of the present application provides a method for mapping a field environment based on feature fusion, where the method includes:
collecting various image information;
deconstructing the image information according to a preset algorithm to obtain various image features;
and establishing a model, and fusing the plurality of image features to generate a travelable region feature map.
With reference to the first aspect, in some optional embodiments, collecting the plurality of image information includes:
RGB image information is collected through a first sensor, and LiDAR elevation map information is collected through a second sensor.
With reference to the first aspect, in some optional embodiments, deconstructing the image information according to a preset algorithm to obtain a plurality of image features includes:
extracting geometric features from the LiDAR elevation map, forming a travelability cost based on the geometric features, planning and identifying a first travelability area by setting a threshold value on the travelability cost, and forming a first travelability map.
With reference to the first aspect, in some optional embodiments, deconstructing the image information according to a preset algorithm to obtain a plurality of image features, further includes:
and dividing the RGB image through a preset Mask R-CNN algorithm to obtain a second travelling map.
With reference to the first aspect, in some optional embodiments, building a model, fusing the plurality of image features to generate a travelable region feature map includes:
and synchronizing the time of the first sensor and the second sensor, selecting a calibration reference object, establishing a unified global coordinate system, fusing the first drivable map and the second drivable map, and completing modeling in the global coordinate system.
With reference to the first aspect, in some optional embodiments, fusing the first travelable map and the second travelable map includes:
inputting the first travelability map and the second travelability map into a preset OFF-Net network, and performing patch embedding processing on the second travelability map and its surface normals; and performing patch stacking of the first travelability map and the second travelability map at a neural network layer of the OFF-Net network, and performing cross learning processing through a preset sigmoid activation function to generate the travelable region feature map.
With reference to the first aspect, in some optional embodiments, the patch embedding process includes:
the multi-head self-attention function is calculated from the preset heads Q, K and V:
Attention(Q, K, V) = Softmax(QK^T / √(d_head)) · V
in the course of the patch embedding process, the position information for the Transformer encoder is captured by a 3×3 convolution:
x_out = MLP(GELU(Conv_3×3(MLP(x_in)))) + x_in
wherein x_in is the feature from the multi-head self-attention part, GELU is an activation function, MLP is a fully connected neural network layer, and Conv_3×3 is a 3×3 convolution layer.
With reference to the first aspect, in some optional embodiments, the cross learning process includes:
Cross Attention = σ(x_img_in + x_sn_in)
x_img_out = Cross Attention * x_img_in + x_img_in
x_sn_out = (1 - Cross Attention) * x_sn_in + x_sn_in
wherein x_img_in and x_sn_in are the second travelability map features and the surface normal features learned after a Transformer block, x_img_out and x_sn_out are the refined RGB image and surface normal features, and σ is the sigmoid activation function.
In a second aspect, an embodiment of the present application provides a field environment mapping apparatus based on feature fusion, where the modeling apparatus includes an image acquisition module, a processing module, a modeling module, and a storage module, where the image acquisition module is configured to acquire RGB image information and LiDAR elevation map information, the processing module is configured to process the acquired RGB image information, the modeling module is configured to generate a travelable area feature map with fusion features of the RGB image information and the LiDAR elevation map information according to the RGB image information and the LiDAR elevation map information, and the storage module stores a computer program, where the computer program is executed by the processing module or the modeling module, so that the modeling apparatus executes the method described above.
In a third aspect, embodiments of the present application also provide a computer readable medium having a computer program stored therein, which when run on a computer causes the computer to perform the above-described method.
The application adopting the technical scheme has the following advantages:
in the scheme, RGB image information and LiDAR elevation map information are collected and processed separately, patch stacking is performed in the OFF-Net network, and cross learning processing is then applied, so that a travelable region feature map is generated. The LiDAR elevation map contains spatial geometric information but lacks semantic information, while the monocular RGB image contains higher-level semantic information of the environment but lacks structural information; the generated travelable region feature map with fused features is therefore more accurate and can meet the free space detection requirements of off-road autonomous driving.
Drawings
The application can be further illustrated by means of non-limiting examples given in the accompanying drawings;
FIG. 1 is a block diagram of a field environment mapping device based on feature fusion;
FIG. 2 is a schematic diagram of steps of a method for creating a map of a field environment based on feature fusion;
fig. 3 is a block flow diagram of a method for creating a map of a field environment based on feature fusion.
The main reference numerals are as follows:
10. modeling equipment based on feature fusion; 11. signal acquisition module; 12. processing module; 13. modeling module.
Detailed Description
The present application will be described in detail below with reference to the drawings and detailed description, wherein like or similar parts are designated by the same reference numerals throughout the accompanying drawings or description, and wherein implementations not shown or described are of a form well known to those of ordinary skill in the art. In the description of the present application, the terms "first," "second," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
As shown in fig. 1, the present application provides a field environment mapping device 10 based on feature fusion, where the modeling device includes an image acquisition module, a processing module 12, a modeling module 13 and a storage module, the image acquisition module is used to acquire RGB image information and LiDAR elevation map information, the processing module 12 is used to process the acquired RGB image information, and the modeling module 13 is used to generate a travelable area feature map with fusion features of the RGB image information and the LiDAR elevation map information according to the RGB image information and the LiDAR elevation map information.
In this embodiment, the signal acquisition module 11 is respectively connected to a first sensor and a second sensor disposed on the host vehicle, and acquires RGB image information through the first sensor and LiDAR elevation map information through the second sensor. The acquired image information is uploaded to the processing module 12, the processing module 12 processes the image information, sensor time is unified in the modeling module 13, a space coordinate system is established, and a travelable region feature map with fusion features is generated.
The memory module stores a computer program which, when executed by the processing module 12 or the modeling module 13, causes the modeling apparatus to perform the method described below.
As shown in fig. 2 and fig. 3, an embodiment of the present application provides a field environment mapping method based on feature fusion, where the modeling method based on feature fusion may include the following steps:
s110: collecting various image information;
s120: deconstructing the image information according to a preset algorithm to obtain various image features;
s130: and establishing a model, and fusing the plurality of image features to generate a travelable region feature map.
In this embodiment, various image information is collected by the image collecting module, the image information is processed by the processing module 12, image features are extracted, a model is built by the modeling module 13, the image features are fused, and a feature map of a travelable region with fused features is generated.
As an alternative embodiment, collecting a plurality of image information includes:
RGB image information is collected through a first sensor, and LiDAR elevation map information is collected through a second sensor.
In this embodiment, RGB image information and LiDAR elevation map information are acquired by the image acquisition module, respectively; the LiDAR point cloud data contain spatial geometric information but lack semantic information, while the monocular RGB image contains higher-level semantic information of the environment but lacks structural information. It can be understood that the travelable region feature map obtained by fusing the RGB image information and the LiDAR elevation map information can better meet the free space detection requirements of off-road autonomous driving.
It will be appreciated that the first sensor may be a camera and the second sensor may be a lidar.
As an optional implementation manner, deconstructing the image information according to a preset algorithm to obtain a plurality of image features, including:
extracting geometric features from the LiDAR elevation map, forming a travelability cost based on the geometric features, planning and identifying a first travelability area by setting a threshold value on the travelability cost, and forming a first travelability map.
In this embodiment, the geometric features include the slope angle s, the elevation difference h and the terrain roughness r, from which a travelability cost v is calculated, wherein the preset values s_crit, h_crit and r_crit respectively represent the maximum slope angle, elevation difference and terrain roughness.
It can be understood that the travelability cost v takes values in [0, 1]; the overall flatness of the terrain is evaluated by the value of v, and the larger v is, the more uneven the terrain.
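The specific cost formula is not reproduced above; as a non-limiting illustration, the following Python sketch assumes a simple weighted sum of the three features normalized by their preset maxima and clipped to [0, 1]. The weights and function names are illustrative, not taken from the patent.

```python
import numpy as np

def travelability_cost(s, h, r, s_crit, h_crit, r_crit, w=(1/3, 1/3, 1/3)):
    """Illustrative travelability cost in [0, 1]: each geometric feature is
    normalized by its preset maximum and combined by a weighted sum.
    The weighted-sum form and the weights w are assumptions, not the patent's formula."""
    t_s = np.clip(np.asarray(s, dtype=float) / s_crit, 0.0, 1.0)
    t_h = np.clip(np.asarray(h, dtype=float) / h_crit, 0.0, 1.0)
    t_r = np.clip(np.asarray(r, dtype=float) / r_crit, 0.0, 1.0)
    return w[0] * t_s + w[1] * t_h + w[2] * t_r

def first_travelability_map(cost_grid, threshold=0.5):
    """Cells whose cost stays below the threshold form the first travelability area."""
    return (cost_grid < threshold).astype(np.uint8)  # 1 = travelable, 0 = not
```

Thresholding the resulting cost grid then yields the first travelability map described above.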
As an alternative embodiment: deconstructing the image information according to a preset algorithm to obtain a plurality of image features, and further comprising:
and dividing the RGB image through a preset Mask R-CNN algorithm to obtain a second travelling map.
In this embodiment, the Mask R-CNN algorithm consists of a faster RCNN and a semantic segmentation algorithm FCN. The master RCNN algorithm completes the target detection task, and the semantic segmentation algorithm FCN can accurately complete the task of semantic segmentation. The FCN is added on the basis of the Faster-RCNN algorithm to generate Mask branches, the network is used for dividing RGB images, and the travelable region is divided and marked.
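As a non-limiting sketch of this step, the following code uses torchvision's pretrained Mask R-CNN to produce a binary travelable mask; the patent's own network would be trained on off-road classes, so the label set, thresholds and function name here are placeholders.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# A generic pretrained Mask R-CNN; TRAVELABLE_LABELS is a hypothetical set of
# class ids treated as travelable terrain, not a value from the patent.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
TRAVELABLE_LABELS = {1}
SCORE_THR, MASK_THR = 0.5, 0.5

@torch.no_grad()
def second_travelability_map(rgb_image):
    """rgb_image: H x W x 3 uint8 array -> H x W uint8 mask (1 = travelable)."""
    x = to_tensor(rgb_image)
    out = model([x])[0]
    mask = torch.zeros(x.shape[1:], dtype=torch.bool)
    for label, score, m in zip(out["labels"], out["scores"], out["masks"]):
        if label.item() in TRAVELABLE_LABELS and score.item() >= SCORE_THR:
            mask |= m[0] > MASK_THR   # merge instance masks into one region
    return mask.to(torch.uint8).numpy()
```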
As an optional implementation manner, a model is built, the multiple image features are fused, and a travelable region feature map is generated, which includes:
and synchronizing the time of the first sensor and the second sensor, selecting a calibration reference object, establishing a unified global coordinate system, fusing the first drivable map and the second drivable map, and completing modeling in the global coordinate system.
In the present embodiment, for convenience of modeling, the coordinate system of the first travelability map is taken as the global coordinate system Om-Xm-Ym-Zm, and the coordinate system of the second travelability map is converted into it.
Let P be a spatial point in the second travelability map; the conversion process is as follows:
wherein u and v respectively represent the horizontal and vertical coordinates of a pixel point g in the second travelability map, d represents the depth of the pixel, f_x and f_y are the focal lengths of the camera in the x and y directions, and c_x and c_y respectively represent the offsets between the camera optical axis and the center of the projection plane in the x and y directions; together these constitute the intrinsic matrix of the camera. The intrinsic matrix can be obtained by a camera intrinsic calibration algorithm; such algorithms are mature at present and include the common Zhang Zhengyou checkerboard calibration method.
R and t represent the rotation matrix and translation vector, respectively; their values can be obtained by calibration with a reference object between the first sensor and the second sensor.
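The conversion formula itself is not reproduced above; the following sketch implements the standard pinhole back-projection consistent with the symbols u, v, d, f_x, f_y, c_x, c_y, R and t described in the text, and should be read as an assumed formulation rather than the patent's exact expression.

```python
import numpy as np

def pixel_to_global(u, v, d, fx, fy, cx, cy, R, t):
    """Back-project pixel (u, v) with depth d into camera coordinates via the
    pinhole model, then map the point into the global frame Om-Xm-Ym-Zm with R, t."""
    x_c = (u - cx) * d / fx
    y_c = (v - cy) * d / fy
    p_cam = np.array([x_c, y_c, d], dtype=float)          # point in camera coordinates
    return np.asarray(R, dtype=float) @ p_cam + np.asarray(t, dtype=float).reshape(3)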
As an alternative embodiment, fusing the first travelable map and the second travelable map includes:
inputting the first travelability map and the second travelability map into a preset OFF-Net network, and performing patch embedding processing on the second travelability map and its surface normals; and performing patch stacking of the first travelability map and the second travelability map at a neural network layer of the OFF-Net network, and performing cross learning processing through a preset sigmoid activation function to generate the travelable region feature map.
As an alternative embodiment, the patch embedding process includes:
the multi-head self-attention function is calculated from the preset heads Q, K and V:
Attention(Q, K, V) = Softmax(QK^T / √(d_head)) · V
in the course of the patch embedding process, the position information for the Transformer encoder is captured by a 3×3 convolution:
x_out = MLP(GELU(Conv_3×3(MLP(x_in)))) + x_in
wherein x_in is the feature from the multi-head self-attention part, GELU is an activation function, MLP is a fully connected neural network layer, and Conv_3×3 is a 3×3 convolution layer.
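As a non-limiting sketch of this feed-forward step, the following PyTorch module realizes x_out = MLP(GELU(Conv_3×3(MLP(x_in)))) + x_in on a token sequence; the channel sizes, the depthwise choice for the 3×3 convolution and the class name are assumptions, not details fixed by the patent.

```python
import torch
import torch.nn as nn

class ConvPositionFFN(nn.Module):
    """Sketch of the 3x3-convolution feed-forward step applied after
    multi-head self-attention: x_out = MLP(GELU(Conv3x3(MLP(x_in)))) + x_in."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.mlp_in = nn.Linear(dim, hidden_dim)
        self.conv3x3 = nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3,
                                 padding=1, groups=hidden_dim)  # injects position information
        self.act = nn.GELU()
        self.mlp_out = nn.Linear(hidden_dim, dim)

    def forward(self, x, height, width):
        # x: (batch, height*width, dim) tokens from the self-attention part
        y = self.mlp_in(x)
        y = y.transpose(1, 2).reshape(x.shape[0], -1, height, width)
        y = self.conv3x3(y)
        y = y.flatten(2).transpose(1, 2)
        y = self.mlp_out(self.act(y))
        return y + x  # residual connection, i.e. "+ x_in"
```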
As an alternative embodiment, the cross learning process includes:
Cross Attention = σ(x_img_in + x_sn_in)
x_img_out = Cross Attention * x_img_in + x_img_in
x_sn_out = (1 - Cross Attention) * x_sn_in + x_sn_in
wherein x_img_in and x_sn_in are the second travelability map features and the surface normal features learned after a Transformer block, x_img_out and x_sn_out are the refined RGB image and surface normal features, and σ is the sigmoid activation function.
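A minimal sketch of the gating defined by these equations, assuming the two feature streams share the same shape (the function name is illustrative):

```python
import torch

def cross_learning(x_img_in, x_sn_in):
    """Gated cross learning between RGB-branch features and surface-normal features:
        A = sigmoid(x_img_in + x_sn_in)
        x_img_out = A * x_img_in + x_img_in
        x_sn_out  = (1 - A) * x_sn_in + x_sn_in
    """
    attn = torch.sigmoid(x_img_in + x_sn_in)
    x_img_out = attn * x_img_in + x_img_in
    x_sn_out = (1.0 - attn) * x_sn_in + x_sn_in
    return x_img_out, x_sn_out
```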
In this embodiment, the memory module may be, but is not limited to, a random access memory, a read-only memory, a programmable read-only memory, an erasable programmable read-only memory, an electrically erasable programmable read-only memory, etc. In this embodiment, the storage module may be configured to store RGB image information acquired by the first sensor, liDAR elevation information acquired by the second sensor, a planned first travelability map, a second travelability map, and so on. Of course, the storage module may also be used to store a program, and the processing module executes the program after receiving the execution instruction.
It will be appreciated that the structure of the feature fusion-based field environment mapping apparatus 10 shown in fig. 1 is merely a schematic structural diagram, and that the feature fusion-based modeling apparatus may also include more components than those shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
It should be noted that, for convenience and brevity of description, specific working processes of the modeling apparatus based on feature fusion described above may refer to corresponding processes of each step in the foregoing method, and will not be described in detail herein.
The embodiment of the application also provides a computer readable storage medium. The computer-readable storage medium has stored therein a computer program which, when run on a computer, causes the computer to perform the modeling method based on feature fusion as described in the above embodiments.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented in hardware, or by means of software plus a necessary general hardware platform, and based on this understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disc, a mobile hard disk, etc.), and includes several instructions for causing a computer device (may be a personal computer, a modeling device, or a network device, etc.) to execute the method described in the respective implementation scenario of the present application.
In summary, the embodiments of the application provide a field environment mapping method, equipment and storage medium based on feature fusion. In the scheme, RGB image information is acquired through a first sensor, and LiDAR elevation map information is acquired through a second sensor. The acquired image information is uploaded to the processing module 12, the processing module 12 processes the image information, the sensor times are unified in the modeling module 13, a spatial coordinate system is established, and a travelable region feature map with fused features is generated. The LiDAR elevation map contains spatial geometric information but lacks semantic information, while the monocular RGB image contains higher-level semantic information of the environment but lacks structural information; the generated travelable region feature map with fused features is therefore more accurate and can meet the free space detection requirements of off-road autonomous driving.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus, system and method may be implemented in other manners as well. The above-described apparatus, system, and method embodiments are merely illustrative, for example, flow charts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. In addition, functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A field environment mapping method based on feature fusion is characterized in that: the method comprises the following steps:
collecting various image information;
deconstructing the image information according to a preset algorithm to obtain various image features;
and establishing a model, and fusing the plurality of image features to generate a travelable region feature map.
2. The field environment mapping method based on feature fusion as claimed in claim 1, wherein the method is characterized in that: collecting a plurality of image information, including:
RGB image information is collected through a first sensor, and LiDAR elevation map information is collected through a second sensor.
3. The field environment mapping method based on feature fusion as claimed in claim 2, wherein the method is characterized in that: deconstructing the image information according to a preset algorithm to obtain a plurality of image features, including:
extracting geometric features from the LiDAR elevation map, forming a travelability cost based on the geometric features, planning and identifying a first travelability area by setting a threshold value on the travelability cost, and forming a first travelability map.
4. A method for mapping a field environment based on feature fusion according to claim 3, wherein: deconstructing the image information according to a preset algorithm to obtain a plurality of image features, and further comprising:
and dividing the RGB image through a preset Mask R-CNN algorithm to obtain a second travelling map.
5. The field environment mapping method based on feature fusion according to claim 4, wherein the method comprises the following steps: establishing a model, fusing the plurality of image features to generate a drivable region feature map, wherein the method comprises the following steps of:
and synchronizing the time of the first sensor and the second sensor, selecting a calibration reference object, establishing a unified global coordinate system, fusing the first drivable map and the second drivable map, and completing modeling in the global coordinate system.
6. The field environment mapping method based on feature fusion according to claim 5, wherein the method is characterized in that: fusing the first travelable map and the second travelable map, comprising:
inputting the first travelability map and the second travelability map into a preset OFF-Net network, and performing patch embedding processing on the second travelability map and its surface normals; and performing patch stacking of the first travelability map and the second travelability map at a neural network layer of the OFF-Net network, and performing cross learning processing through a preset sigmoid activation function to generate the travelable region feature map.
7. The field environment mapping method based on feature fusion of claim 6, wherein the method comprises the following steps: the patch embedding process includes:
the multi-head self-attention function is calculated from the preset heads Q, K and V:
Attention(Q, K, V) = Softmax(QK^T / √(d_head)) · V
in the course of the patch embedding process, the position information for the Transformer encoder is captured by a 3×3 convolution:
x_out = MLP(GELU(Conv_3×3(MLP(x_in)))) + x_in
wherein x_in is the feature from the multi-head self-attention part, GELU is an activation function, MLP is a fully connected neural network layer, and Conv_3×3 is a 3×3 convolution layer.
8. The field environment mapping method based on feature fusion of claim 6, wherein the method comprises the following steps: the cross learning process includes:
Cross Attention = σ(x_img_in + x_sn_in)
x_img_out = Cross Attention * x_img_in + x_img_in
x_sn_out = (1 - Cross Attention) * x_sn_in + x_sn_in
wherein x_img_in and x_sn_in are the second travelability map features and the surface normal features learned after a Transformer block, x_img_out and x_sn_out are the refined RGB image and surface normal features, and σ is the sigmoid activation function.
9. Field environment mapping equipment based on feature fusion, characterized in that: the modeling apparatus comprises an image acquisition module, a processing module, a modeling module and a storage module, wherein the image acquisition module is configured to acquire RGB image information and LiDAR elevation map information, the processing module is configured to process the acquired RGB image information, the modeling module is configured to generate a travelable region feature map having fused features of the RGB image information and the LiDAR elevation map information from the RGB image information and the LiDAR elevation map information, and the storage module stores a computer program which, when executed by the processing module or the modeling module, causes the modeling equipment to perform the method of any one of claims 1-8.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when run on a computer, causes the computer to perform the method according to any of claims 1-8.
CN202310341699.7A 2023-04-03 2023-04-03 Method, equipment and storage medium for mapping field environment based on feature fusion Pending CN116630913A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310341699.7A CN116630913A (en) 2023-04-03 2023-04-03 Method, equipment and storage medium for mapping field environment based on feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310341699.7A CN116630913A (en) 2023-04-03 2023-04-03 Method, equipment and storage medium for mapping field environment based on feature fusion

Publications (1)

Publication Number Publication Date
CN116630913A (en) 2023-08-22

Family

ID=87601522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310341699.7A Pending CN116630913A (en) 2023-04-03 2023-04-03 Method, equipment and storage medium for mapping field environment based on feature fusion

Country Status (1)

Country Link
CN (1) CN116630913A (en)

Similar Documents

Publication Publication Date Title
US11691648B2 (en) Drivable surface identification techniques
US10437252B1 (en) High-precision multi-layer visual and semantic map for autonomous driving
US11328158B2 (en) Visual-inertial positional awareness for autonomous and non-autonomous tracking
US11521009B2 (en) Automatically generating training data for a lidar using simulated vehicles in virtual space
US10794710B1 (en) High-precision multi-layer visual and semantic map by autonomous units
CN108647646B (en) Low-beam radar-based short obstacle optimized detection method and device
US10410328B1 (en) Visual-inertial positional awareness for autonomous and non-autonomous device
US10366508B1 (en) Visual-inertial positional awareness for autonomous and non-autonomous device
CN112740268B (en) Target detection method and device
CN110163930A (en) Lane line generation method, device, equipment, system and readable storage medium storing program for executing
CN113916242B (en) Lane positioning method and device, storage medium and electronic equipment
US20210389133A1 (en) Systems and methods for deriving path-prior data using collected trajectories
CN111089597A (en) Method and apparatus for positioning based on image and map data
CN113359782B (en) Unmanned aerial vehicle autonomous addressing landing method integrating LIDAR point cloud and image data
CN112734765A (en) Mobile robot positioning method, system and medium based on example segmentation and multi-sensor fusion
WO2021017211A1 (en) Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal
CN115639823A (en) Terrain sensing and movement control method and system for robot under rugged and undulating terrain
CN111833443A (en) Landmark position reconstruction in autonomous machine applications
CN112734811B (en) Obstacle tracking method, obstacle tracking device and chip
CN116630913A (en) Method, equipment and storage medium for mapping field environment based on feature fusion
KR102540636B1 (en) Method for create map included direction information and computer program recorded on record-medium for executing method therefor
KR102540634B1 (en) Method for create a projection-based colormap and computer program recorded on record-medium for executing method therefor
KR102540624B1 (en) Method for create map using aviation lidar and computer program recorded on record-medium for executing method therefor
KR102540632B1 (en) Method for create a colormap with color correction applied and computer program recorded on record-medium for executing method therefor
KR102540629B1 (en) Method for generate training data for transportation facility and computer program recorded on record-medium for executing method therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination