CN116977524A - Three-dimensional map construction method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116977524A
CN116977524A (application CN202310949374.7A)
Authority
CN
China
Prior art keywords
environmental information
three-dimensional map
cloud server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310949374.7A
Other languages
Chinese (zh)
Inventor
雒冬梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Softong Intelligent Technology Co ltd
Original Assignee
Beijing Softong Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Softong Intelligent Technology Co ltd filed Critical Beijing Softong Intelligent Technology Co ltd
Priority to CN202310949374.7A priority Critical patent/CN116977524A/en
Publication of CN116977524A publication Critical patent/CN116977524A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network

Abstract

The application discloses a three-dimensional map construction method and device, electronic equipment, and a storage medium. The method comprises the following steps: acquiring environmental information, where the environmental information includes outdoor environmental information; performing feature extraction and instance segmentation on the environmental information through a cloud server to obtain feature data; determining the position and orientation of each object in the environmental information according to the feature data; and constructing a three-dimensional map based on the feature data, positions, and orientations. By combining the instance-segmentation technique with a cloud server, the technical scheme improves the accuracy of object recognition in outdoor environmental information and uses the strong computing power of the cloud server to ensure running efficiency.

Description

Three-dimensional map construction method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of map construction technologies, and in particular, to a three-dimensional map construction method, apparatus, electronic device, and storage medium.
Background
Simultaneous localization and mapping (SLAM) is an important means of performing positioning and map construction at the same time, and is widely applied in fields such as robot navigation, autonomous driving, and augmented reality.
SLAM technology acquires environmental information through various sensors carried by a robot, enabling autonomous navigation and self-learning of environmental perception. In an unknown environment, SLAM obtains environment-sensing data from the robot's external sensors, provides the robot's position in an environment map, and, as the robot moves, incrementally constructs the environment map while continuously localizing the robot; it is therefore the basis for the robot's environmental perception and automatic operation. SLAM comprises four important aspects: feature extraction, feature matching, pose estimation, and loop detection. These four aspects cooperate to form the core of the whole SLAM algorithm.
Traditional SLAM technology relies on semantic-information extraction for modeling. Faced with complex, changeable, large-scale outdoor scenes, however, the recognition accuracy of traditional semantic-information extraction methods struggles to meet requirements, and the large volume of image and point-cloud information that must be processed places high demands on the configuration of local equipment.
Disclosure of Invention
The application provides a three-dimensional map construction method and device, electronic equipment, and a storage medium, which combine the instance-segmentation technique with a cloud server, thereby improving the accuracy of object recognition in outdoor environmental information and using the strong computing power of the cloud server to ensure running efficiency.
According to an aspect of the present application, there is provided a three-dimensional map construction method, the method including:
acquiring environmental information; wherein the environmental information includes outdoor environmental information;
performing feature extraction and instance segmentation on the environment information through a cloud server to obtain feature data;
determining the position and the orientation of each object in the environment information according to the characteristic data;
based on the feature data, position and orientation, a three-dimensional map is constructed.
According to another aspect of the present application, there is provided a three-dimensional map construction apparatus including:
the environment information acquisition module is used for acquiring environment information; wherein the environmental information includes outdoor environmental information;
the feature data obtaining module is used for carrying out feature extraction and instance segmentation on the environment information through the cloud server to obtain feature data;
the position and orientation determining module is used for determining the position and orientation of each object in the environment information according to the characteristic data;
and the three-dimensional map construction module is used for constructing a three-dimensional map based on the characteristic data, the position and the orientation.
According to another aspect of the present application, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform a three-dimensional map construction method according to any one of the embodiments of the present application.
According to another aspect of the present application, there is provided a computer readable storage medium storing computer instructions for causing a processor to implement a three-dimensional map construction method according to any one of the embodiments of the present application when executed.
According to the technical scheme, environmental information is obtained, feature extraction and instance segmentation are performed on it through a cloud server to obtain feature data, the position and orientation of each object in the environmental information are then determined from the feature data, and a three-dimensional map is constructed based on the feature data, positions, and orientations. Combining the instance-segmentation technique with a cloud server improves the accuracy of object recognition in outdoor environmental information and uses the strong computing power of the cloud server to ensure running efficiency.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings described below show only some embodiments of the present application; those skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of a three-dimensional map construction method according to a first embodiment of the present application;
FIG. 2 is a schematic diagram of a three-dimensional map construction process according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a three-dimensional map building device according to a second embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device implementing a three-dimensional map construction method according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art may better understand the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a three-dimensional map construction method according to a first embodiment of the present application. The method is applicable to constructing a three-dimensional map based on the instance-segmentation technique and cloud-server technology, and may be performed by a three-dimensional map construction device, which may be implemented in hardware and/or software and configured in an electronic device. As shown in fig. 1, the method includes:
s110, acquiring environment information; wherein the environmental information includes outdoor environmental information.
In this scheme, the environmental information may refer to complex, changeable, large-scale outdoor environmental information; for example, it may include information about an industrial park, a bridge, or a city.
The environment information can be characterized in the forms of images, videos, point clouds and the like.
In this embodiment, the environmental information may be acquired from a database or captured in real time by a photographing apparatus.
Optionally, acquiring the environmental information includes:
acquiring environmental information based on the multi-source sensor; wherein the multi-source sensor comprises at least one of a three-dimensional lidar, a camera, and an inertial measurement sensor.
In this embodiment, fig. 2 is a schematic diagram of a three-dimensional map construction process according to the first embodiment of the present application, and as shown in fig. 2, environmental information may be acquired by using multi-source sensors such as a three-dimensional laser radar, a camera, and an inertial measurement sensor (Inertial measurement unit, IMU).
By acquiring the environment information, a three-dimensional map can be constructed based on it, realizing the construction of complex, changeable, large-scale outdoor maps.
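As one illustrative sketch of combining data from these multi-source sensors, the snippet below time-aligns each lidar scan with its nearest inertial measurement. The sensor rates, millisecond timestamps, and function name are assumptions for illustration only, not part of the patent.

```python
import numpy as np

def nearest_imu_sample(imu_times, scan_time):
    """Return the index of the IMU sample closest in time to a lidar scan.

    imu_times: sorted 1-D array of IMU timestamps; scan_time: the
    timestamp of one lidar scan (same time unit).
    """
    i = int(np.searchsorted(imu_times, scan_time))
    if i == 0:
        return 0
    if i == len(imu_times):
        return len(imu_times) - 1
    # Pick whichever neighbour is closer in time to the scan.
    return i if imu_times[i] - scan_time < scan_time - imu_times[i - 1] else i - 1

# Assumed rates: IMU at 100 Hz, lidar at 10 Hz (timestamps in milliseconds).
imu_times = np.arange(0, 1000, 10, dtype=float)    # 100 IMU samples
scan_times = np.arange(0, 1000, 100, dtype=float)  # 10 lidar scans
pairs = [nearest_imu_sample(imu_times, t) for t in scan_times]
print(pairs)  # each scan paired with the index of its nearest IMU sample
```

Aligning timestamps this way is a common precondition for fusing lidar, camera, and IMU streams before they are packaged for upload.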
Optionally, after acquiring the environmental information based on the multi-source sensor, the method further comprises:
and uploading the environment information to a cloud server.
In this embodiment, the cloud server has the characteristics of high distribution, high virtualization and the like, and can make full use of network resources.
In the scheme, the environment information can be uploaded to the cloud server through a network.
Uploading the environment information to the cloud server allows it to be processed there, which improves the efficiency of three-dimensional map construction.
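The patent does not specify how the environment information travels over the network; a minimal sketch of one plausible packaging step (JSON serialization plus gzip compression before the upload) might look like the following, where all field names are illustrative assumptions.

```python
import gzip
import json

def pack_environment_info(frame_id, points, image_meta):
    """Serialize one frame of environment information for upload to the
    cloud server. The wire format (JSON + gzip) and the field names are
    illustrative assumptions; the scheme does not fix a transport format.
    """
    payload = json.dumps({
        "frame_id": frame_id,
        "points": points,          # e.g. downsampled lidar points [x, y, z]
        "image_meta": image_meta,  # camera metadata for this frame
    }).encode("utf-8")
    return gzip.compress(payload)  # shrink the payload before the network hop

blob = pack_environment_info(
    frame_id=1,
    points=[[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]],
    image_meta={"width": 1920, "height": 1080},
)
# The compressed blob is what would be sent to the cloud endpoint;
# round-tripping it locally shows the payload survives intact.
restored = json.loads(gzip.decompress(blob))
print(restored["frame_id"])
```

The actual network call (HTTP POST, gRPC, or otherwise) is left out deliberately, since the patent names no protocol.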
And S120, carrying out feature extraction and instance segmentation on the environment information through a cloud server to obtain feature data.
In this embodiment, feature extraction describes and extracts information about the outdoor environment from the data acquired by the multi-source sensors. The extracted features can conveniently be compared with other environmental features or sensor data, enabling the recognition and classification of objects, scenes, and areas; effective feature extraction greatly simplifies subsequent computation and improves the efficiency of the SLAM algorithm.
In the scheme, instance segmentation has the characteristics of semantic segmentation, requiring classification at the pixel level, and also shares some characteristics of object detection: different instances must be localized separately even when they belong to the same class. Instance segmentation outputs both a class label and an instance label for each object.
Further, outdoor scenes are complicated and subject to interference from pedestrians, vehicles, and other sources. Instance segmentation can distinguish similar objects and uniquely mark each object instance, providing the equipment with higher-level, finer-grained visual information and allowing a finer environment map to be constructed.
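As a toy illustration of the "same class label, distinct instance labels" idea described above, the sketch below assigns instance ids to 4-connected blobs in a semantic mask. Real instance segmentation would use a learned model (e.g. a Mask R-CNN variant); this flood-fill stand-in is only a conceptual sketch under that assumption.

```python
import numpy as np

def label_instances(semantic_mask):
    """Assign a distinct instance id to each 4-connected blob of the same
    class in a semantic mask (0 = background). A flood-fill toy stand-in
    for the instance-segmentation step, not a learned model.
    """
    instance = np.zeros_like(semantic_mask)
    next_id = 1
    for y, x in zip(*np.nonzero(semantic_mask)):
        if instance[y, x]:
            continue  # pixel already belongs to a labelled instance
        cls, stack = semantic_mask[y, x], [(y, x)]
        while stack:  # flood-fill one blob of class `cls`
            cy, cx = stack.pop()
            if (0 <= cy < instance.shape[0] and 0 <= cx < instance.shape[1]
                    and instance[cy, cx] == 0 and semantic_mask[cy, cx] == cls):
                instance[cy, cx] = next_id
                stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
        next_id += 1
    return instance

# Two separate blobs of the same class (1, e.g. "vehicle") get distinct ids.
mask = np.array([[1, 1, 0, 1],
                 [0, 0, 0, 1]])
inst = label_instances(mask)
print(inst)
```

The output pairs each pixel's class label (from the mask) with an instance label (from the flood fill), which is exactly the class-plus-instance output described above.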
Optionally, feature extraction and instance segmentation are performed on the environmental information through a cloud server to obtain feature data, including:
extracting the characteristics of the environmental information through a cloud server to obtain the characteristic information of each object in the environmental information; performing instance segmentation on the environment information through a cloud server to obtain semantic information of each object in the environment information;
and performing feature matching on the feature information and the semantic information to obtain feature data.
Feature matching matches the current features in the environment against the features of the previous map or environment model. This can be implemented by setting a matching threshold on feature-point descriptors such as SIFT, SURF, or ORB. The accuracy of feature matching plays a vital role in the computational accuracy and real-time performance of the SLAM algorithm and affects the accuracy and robustness of its subsequent stages.
In this scheme, as shown in fig. 2, feature extraction and instance segmentation can be performed on environmental information through a cloud server, and feature matching is performed on feature information obtained by feature extraction and semantic information obtained by instance segmentation.
By carrying out feature extraction and instance segmentation on the environmental information, the identification accuracy of the environmental information can be improved.
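The threshold-based descriptor matching mentioned above can be sketched as a brute-force Hamming-distance search over ORB-style binary descriptors. The 256-bit descriptor size and the threshold value below are illustrative assumptions, not values specified by the patent.

```python
import numpy as np

def match_binary_descriptors(desc_a, desc_b, max_hamming=40):
    """Brute-force matching of ORB-style 256-bit binary descriptors.

    desc_a, desc_b: uint8 arrays of shape (N, 32) and (M, 32).
    A pair is accepted only if its Hamming distance falls below the
    matching threshold, mirroring the threshold-based matching above.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Hamming distance = number of differing bits after XOR.
        dists = np.unpackbits(np.bitwise_xor(desc_b, d), axis=1).sum(axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_hamming:
            matches.append((i, j))
    return matches

rng = np.random.default_rng(0)
desc_b = rng.integers(0, 256, size=(5, 32), dtype=np.uint8)
desc_a = desc_b.copy()
desc_a[0, 0] ^= 0b00000111  # simulate noise: flip 3 bits of one descriptor
matches = match_binary_descriptors(desc_a, desc_b)
print(matches)
```

A production system would use an optimized matcher (e.g. OpenCV's Hamming-norm brute-force matcher) rather than this loop, but the acceptance logic is the same.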
Optionally, feature extraction and instance segmentation are performed on the environmental information through a cloud server to obtain feature data, and the method further includes:
and dynamically distributing computing power resources through a cloud server to perform feature extraction and instance segmentation on the environment information to obtain feature data.
Specifically, the cloud server can be used for dynamically distributing the computing power resources according to the data volume of the environment information, so that the computing power resources are optimal.
By dynamically adjusting computing resources according to the characteristics of the specific scene, the cloud server improves computing power while avoiding wasted resources and guarantees the success rate of SLAM map construction. Applications and services can also be deployed and configured rapidly, enabling automated process deployment and reducing the hardware and maintenance costs of a local server.
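A toy model of this dynamic allocation might scale the number of cloud workers with the volume of environment data, within the limits of the worker pool. All parameter names and limits below are illustrative assumptions; the patent does not describe a concrete allocation policy.

```python
def allocate_workers(data_mb, mb_per_worker=256, min_workers=1, max_workers=64):
    """Scale cloud workers with the volume of environment data (in MB),
    clipped to the pool limits. A sketch of the dynamic-allocation idea,
    with all thresholds being illustrative assumptions.
    """
    # Ceiling division: one worker per mb_per_worker of data.
    needed = -(-data_mb // mb_per_worker)
    return max(min_workers, min(max_workers, needed))

print(allocate_workers(100))     # small frame: minimum pool
print(allocate_workers(10_000))  # large scene: more workers
print(allocate_workers(10**6))   # capped at the pool maximum
```

The point of the sketch is the shape of the policy (demand-proportional, bounded), not the specific constants.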
S130, determining the position and the orientation of each object in the environment information according to the characteristic data.
In the scheme, the feature data can be processed through a data processing algorithm in the cloud server to determine the position and orientation of each object in the environmental information. This embodiment does not limit the specific data processing algorithm.
Optionally, determining the position and the orientation of each object in the environmental information according to the feature data includes:
and processing the characteristic data based on pose estimation, and determining the position and the orientation of each object in the environment information.
In this embodiment, pose estimation may be implemented based on sensor data, a filter, an optimization method, and the like, and its core task is to calculate the position and orientation of an object with respect to a certain reference coordinate system.
In this scheme, as shown in fig. 2, the feature data may be processed through pose estimation in the cloud server.
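The embodiment leaves the pose-estimation algorithm open. One common closed-form choice for recovering an object's orientation and position from matched 3-D feature points is the Kabsch/SVD rigid alignment, sketched below as an assumed, not prescribed, implementation.

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Estimate an object's orientation R and position t from matched 3-D
    points (observed = R @ model + t) via the Kabsch/SVD method.
    One possible realisation of the pose-estimation step; the patent
    does not prescribe an algorithm.
    """
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t

# Recover a known 90-degree yaw and a translation of (1, 2, 3).
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
obs = model @ R_true.T + t_true
R_est, t_est = estimate_pose(model, obs)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

With noisy real-world correspondences this closed-form step is typically wrapped in a robust estimator such as RANSAC.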
Optionally, after determining the position and orientation of each object in the environmental information, the method further comprises:
and carrying out loop detection on the environment information.
In this scheme, loop detection determines whether the device has returned to a previously visited area. As shown in fig. 2, the environmental information may be processed through loop detection in the cloud server, so that each object in the environmental information can be identified.
Further, due to the accumulation of errors and other factors, the estimate produced by the SLAM algorithm may drift, that is, the position on the map may deviate from the actual position. By comparing the start and end points, the loop-detection algorithm detects whether the device has passed through a visited area and thereby corrects the map drift.
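A minimal sketch of this comparison, assuming each keyframe is summarized by a bag-of-words feature histogram and a loop is declared above a cosine-similarity threshold (both assumptions; the patent does not fix a representation or threshold):

```python
import numpy as np

def detect_loop(history, current, threshold=0.9):
    """Flag a loop closure when the current frame's bag-of-words histogram
    closely matches an earlier keyframe's histogram. A cosine-similarity
    sketch of the start/end comparison, not the patent's exact method.
    """
    best_idx, best_sim = -1, 0.0
    for idx, h in enumerate(history):
        sim = float(np.dot(h, current) /
                    (np.linalg.norm(h) * np.linalg.norm(current)))
        if sim > best_sim:
            best_idx, best_sim = idx, sim
    return (best_idx, best_sim) if best_sim >= threshold else (None, best_sim)

# Keyframe 0 revisited: its histogram closely matches the current frame's.
history = [np.array([5.0, 0.0, 2.0, 1.0]),
           np.array([0.0, 4.0, 0.0, 3.0])]
current = np.array([5.0, 0.0, 2.0, 0.0])
idx, sim = detect_loop(history, current)
print(idx)
```

On a detected loop, a real system would add a constraint between the two poses and re-optimize the trajectory to remove the accumulated drift.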
And S140, constructing a three-dimensional map based on the characteristic data, the position and the orientation.
In this scheme, as shown in fig. 2, after the feature data and the position and orientation of each object are obtained, a three-dimensional map may be constructed based on the feature data and the position and orientation of each object.
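The final assembly can be sketched as transforming each object's local feature points by its estimated orientation and position and merging them into one global cloud. This flat point-cloud accumulation is a simplified stand-in for the patent's map-construction step, which it does not detail.

```python
import numpy as np

def assemble_map(objects):
    """Merge per-object points into one global point cloud.

    objects: list of (points, R, t) where `points` are an object's local
    feature points and (R, t) its estimated orientation and position.
    A simplified stand-in for the three-dimensional map-construction step.
    """
    clouds = [pts @ R.T + t for pts, R, t in objects]
    return np.vstack(clouds)

I = np.eye(3)
obj_a = (np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]), I, np.array([0.0, 0.0, 0.0]))
obj_b = (np.array([[0.0, 0.0, 0.0]]), I, np.array([5.0, 0.0, 0.0]))
world = assemble_map([obj_a, obj_b])
print(world.shape)
```

A production map would additionally attach each point's class and instance labels from the feature data, so the result is a semantic map rather than a bare cloud.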
According to the technical scheme, environmental information is obtained, feature extraction and instance segmentation are performed on it through a cloud server to obtain feature data, the position and orientation of each object in the environmental information are then determined from the feature data, and a three-dimensional map is constructed based on the feature data, positions, and orientations. By combining the instance-segmentation technique with a cloud server, the scheme improves the accuracy of object recognition in outdoor environmental information and uses the strong computing power of the cloud server to ensure running efficiency.
Example two
Fig. 3 is a schematic structural diagram of a three-dimensional map building device according to a second embodiment of the present application.
As shown in fig. 3, the apparatus includes:
an environmental information obtaining module 310, configured to obtain environmental information; wherein the environmental information includes outdoor environmental information;
the feature data obtaining module 320 is configured to perform feature extraction and instance segmentation on the environmental information through a cloud server to obtain feature data;
a position and orientation determining module 330, configured to determine a position and orientation of each object in the environmental information according to the feature data;
the three-dimensional map construction module 340 is configured to construct a three-dimensional map based on the feature data, the position and the orientation.
Optionally, the environmental information obtaining module 310 is specifically configured to:
acquiring environmental information based on the multi-source sensor; wherein the multi-source sensor comprises at least one of a three-dimensional lidar, a camera, and an inertial measurement sensor.
Optionally, the apparatus further includes:
and the environment information uploading module is used for uploading the environment information to the cloud server.
Optionally, the feature data obtaining module 320 is specifically configured to:
extracting the characteristics of the environmental information through a cloud server to obtain the characteristic information of each object in the environmental information; performing instance segmentation on the environment information through a cloud server to obtain semantic information of each object in the environment information;
and performing feature matching on the feature information and the semantic information to obtain feature data.
Optionally, the feature data obtaining module 320 is further configured to:
and dynamically distributing computing power resources through a cloud server to perform feature extraction and instance segmentation on the environment information to obtain feature data.
Optionally, the location and orientation determining module 330 is specifically configured to:
and processing the characteristic data based on pose estimation, and determining the position and the orientation of each object in the environment information.
Optionally, the apparatus further includes:
and the loop detection module is used for carrying out loop detection on the environment information.
The three-dimensional map construction device provided by the embodiment of the application can execute the three-dimensional map construction method provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method.
Example III
Fig. 4 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the applications described and/or claimed herein.
As shown in fig. 4, the electronic device 10 includes at least one processor 11 and memory communicatively connected to it, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13, in which a computer program executable by the at least one processor is stored. The processor 11 may perform various appropriate actions and processes according to the computer program stored in the ROM 12 or loaded from the storage unit 18 into the RAM 13. The RAM 13 may also store various programs and data required for the operation of the electronic device 10. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be any of various general and/or special purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various specialized artificial intelligence (AI) computing chips, various processors running machine learning model algorithms, digital signal processors (DSPs), and any suitable processor, controller, or microcontroller. The processor 11 performs the methods and processes described above, for example, a three-dimensional map construction method.
In some embodiments, a three-dimensional map construction method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of a three-dimensional map construction method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform a three-dimensional map construction method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be special purpose or general purpose and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present application may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present application, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services.
It should be appreciated that the various forms of flow shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present application are achieved; the present application is not limited in this respect.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.

Claims (10)

1. A three-dimensional map construction method, comprising:
acquiring environment information; wherein the environment information includes outdoor environment information;
performing feature extraction and instance segmentation on the environment information through a cloud server to obtain feature data;
determining the position and the orientation of each object in the environment information according to the feature data; and
constructing a three-dimensional map based on the feature data, the position, and the orientation.
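The four steps of claim 1 (acquire, extract and segment, estimate pose, build map) could be sketched as follows. This is purely an illustration of the claimed flow; every function, field, and data layout here is an assumption, not the patent's actual implementation.

```python
# Illustrative sketch of the claimed pipeline; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class ObjectFeature:
    label: str                        # semantic label from instance segmentation
    descriptor: tuple                 # feature vector from feature extraction
    position: tuple = (0.0, 0.0, 0.0)
    orientation: float = 0.0          # heading angle in radians

def extract_features(environment_info):
    # Stand-in for cloud-side feature extraction + instance segmentation.
    return [ObjectFeature(label=o["label"], descriptor=tuple(o["descriptor"]))
            for o in environment_info["objects"]]

def estimate_poses(features, environment_info):
    # Stand-in for pose estimation: copy each object's position/orientation.
    for feat, obj in zip(features, environment_info["objects"]):
        feat.position = tuple(obj["position"])
        feat.orientation = obj.get("orientation", 0.0)
    return features

def build_map(features):
    # The "three-dimensional map" here is simply a dict keyed by object label.
    return {f.label: {"position": f.position, "orientation": f.orientation}
            for f in features}

env = {"objects": [{"label": "tree", "descriptor": [0.1, 0.2],
                    "position": [1.0, 2.0, 0.0], "orientation": 0.5}]}
three_d_map = build_map(estimate_poses(extract_features(env), env))
```

In practice each stage would run on the cloud server against real sensor data; the sketch only shows how the outputs of the three intermediate steps feed the map-construction step.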
2. The method of claim 1, wherein acquiring the environment information comprises:
acquiring the environment information based on a multi-source sensor; wherein the multi-source sensor comprises at least one of a three-dimensional lidar, a camera, and an inertial measurement sensor.
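The multi-source acquisition of claim 2 might be represented by a container that records which of the possible sensors contributed data; the field names below are illustrative assumptions, not the patent's data format.

```python
# Hypothetical container for multi-source sensor readings.
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class EnvironmentInfo:
    lidar_points: Optional[Sequence] = None   # 3-D lidar point cloud
    camera_frame: Optional[Sequence] = None   # camera image data
    imu_reading: Optional[Sequence] = None    # inertial measurement (accel/gyro)

    def sources(self):
        # Report which of the at-least-one sensors actually contributed.
        present = []
        if self.lidar_points is not None:
            present.append("lidar")
        if self.camera_frame is not None:
            present.append("camera")
        if self.imu_reading is not None:
            present.append("imu")
        return present

info = EnvironmentInfo(lidar_points=[(0.0, 1.0, 2.0)], imu_reading=[0.0, 0.0, 9.8])
```

Because the claim requires only "at least one of" the three sensors, any subset of fields may be populated.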
3. The method of claim 2, wherein after acquiring the environment information based on the multi-source sensor, the method further comprises:
uploading the environment information to the cloud server.
4. The method of claim 1, wherein performing the feature extraction and the instance segmentation on the environment information through the cloud server to obtain the feature data comprises:
performing feature extraction on the environment information through the cloud server to obtain feature information of each object in the environment information; and performing instance segmentation on the environment information through the cloud server to obtain semantic information of each object in the environment information;
and performing feature matching on the feature information and the semantic information to obtain feature data.
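The feature-matching step of claim 4 joins per-object feature information with per-object semantic information. One minimal way to sketch it is a join keyed by object identifier; the keying scheme is an assumption for illustration only, not the claimed matching algorithm.

```python
# Hypothetical join of feature information and semantic information.
def match_features(feature_info, semantic_info):
    """Pair each object's feature descriptor with its semantic label."""
    feature_data = {}
    for obj_id, descriptor in feature_info.items():
        if obj_id in semantic_info:      # keep only objects present in both
            feature_data[obj_id] = {
                "descriptor": descriptor,
                "label": semantic_info[obj_id],
            }
    return feature_data

features = {1: [0.3, 0.7], 2: [0.9, 0.1]}    # from feature extraction
semantics = {1: "building", 3: "road"}        # from instance segmentation
matched = match_features(features, semantics)
```

Objects detected by only one of the two stages (ids 2 and 3 above) drop out of the matched feature data.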
5. The method of claim 1, wherein performing the feature extraction and the instance segmentation on the environment information through the cloud server to obtain the feature data further comprises:
dynamically allocating computing power resources through the cloud server to perform the feature extraction and the instance segmentation on the environment information to obtain the feature data.
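The dynamic allocation of computing resources in claim 5 could be sketched, under strong simplifying assumptions, as a worker pool sized to the pending workload and capped by the available CPUs; the cloud server's real scheduler is not disclosed, so this is only an illustration of the idea.

```python
# Hypothetical dynamic allocation: size the pool to the workload.
import os
from concurrent.futures import ThreadPoolExecutor

def extract(chunk):
    # Stand-in for feature extraction / instance segmentation on one chunk.
    return [x * 2 for x in chunk]

def process_in_parallel(chunks):
    # Allocate one worker per chunk, but never more than the CPU count.
    workers = max(1, min(len(chunks), os.cpu_count() or 1))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(extract, chunks))

results = process_in_parallel([[1, 2], [3, 4]])
```

A production cloud scheduler would allocate across machines rather than threads, but the principle of sizing resources to the incoming environment-information workload is the same.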
6. The method of claim 1, wherein determining the position and the orientation of each object in the environment information according to the feature data comprises:
processing the feature data based on pose estimation to determine the position and the orientation of each object in the environment information.
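A toy version of the pose estimation in claim 6, under strong assumptions: take an object's position as the centroid of its points and its orientation as the heading of the dominant axis found by principal component analysis. This is an illustrative stand-in, not the claimed algorithm.

```python
# Hypothetical pose estimate from an object's 3-D points: centroid + PCA axis.
import numpy as np

def estimate_pose(points):
    pts = np.asarray(points, dtype=float)
    position = pts.mean(axis=0)                 # centroid as position
    centered = pts - position
    # Principal axis = eigenvector of the covariance with largest eigenvalue.
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]
    orientation = float(np.arctan2(axis[1], axis[0]))  # heading in x-y plane
    return position, orientation

pts = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (4.0, 0.0, 0.0)]
position, orientation = estimate_pose(pts)
```

Note the PCA axis has a sign ambiguity, so the heading is only determined up to 180 degrees; a real system would resolve this with additional cues.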
7. The method of claim 1, wherein after determining the position and the orientation of each object in the environment information, the method further comprises:
performing loop detection on the environment information.
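The loop detection of claim 7 is commonly realized by comparing the current frame's descriptor against previously stored keyframes. A minimal sketch using cosine similarity follows; the threshold and descriptor form are assumptions for illustration, not the patent's method.

```python
# Hypothetical loop-closure check via descriptor similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def detect_loop(current, keyframes, threshold=0.95):
    """Return the index of the best-matching past keyframe, or None."""
    best_idx, best_sim = None, threshold
    for i, kf in enumerate(keyframes):
        sim = cosine(current, kf)
        if sim >= best_sim:
            best_idx, best_sim = i, sim
    return best_idx

history = [[1.0, 0.0], [0.0, 1.0]]      # descriptors of earlier keyframes
loop_at = detect_loop([0.99, 0.01], history)
```

A detected loop (a revisited place) lets the mapper correct accumulated drift before the three-dimensional map is finalized.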
8. A three-dimensional map construction apparatus, comprising:
the environment information acquisition module is used for acquiring environment information; wherein the environment information includes outdoor environment information;
the feature data obtaining module is used for carrying out feature extraction and instance segmentation on the environment information through the cloud server to obtain feature data;
the position and orientation determining module is used for determining the position and the orientation of each object in the environment information according to the feature data; and
the three-dimensional map construction module is used for constructing a three-dimensional map based on the feature data, the position, and the orientation.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform a three-dimensional map construction method according to any one of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores computer instructions which, when executed, cause a processor to implement the three-dimensional map construction method according to any one of claims 1-7.
CN202310949374.7A 2023-07-31 2023-07-31 Three-dimensional map construction method and device, electronic equipment and storage medium Pending CN116977524A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310949374.7A CN116977524A (en) 2023-07-31 2023-07-31 Three-dimensional map construction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310949374.7A CN116977524A (en) 2023-07-31 2023-07-31 Three-dimensional map construction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116977524A true CN116977524A (en) 2023-10-31

Family

ID=88482766

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310949374.7A Pending CN116977524A (en) 2023-07-31 2023-07-31 Three-dimensional map construction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116977524A (en)

Similar Documents

Publication Publication Date Title
CN109559330B (en) Visual tracking method and device for moving target, electronic equipment and storage medium
CN112560684B (en) Lane line detection method, lane line detection device, electronic equipment, storage medium and vehicle
CN110675635B (en) Method and device for acquiring external parameters of camera, electronic equipment and storage medium
CN113392794B (en) Vehicle line crossing identification method and device, electronic equipment and storage medium
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN113506368B (en) Map data fusion method, map data fusion device, electronic device, map data fusion medium, and program product
CN116758503A (en) Automatic lane line marking method, device, equipment and storage medium
CN116129422A (en) Monocular 3D target detection method, monocular 3D target detection device, electronic equipment and storage medium
CN113762397B (en) Method, equipment, medium and product for training detection model and updating high-precision map
CN116052097A (en) Map element detection method and device, electronic equipment and storage medium
CN116977524A (en) Three-dimensional map construction method and device, electronic equipment and storage medium
CN115131315A (en) Image change detection method, device, equipment and storage medium
CN115147561A (en) Pose graph generation method, high-precision map generation method and device
CN116258769B (en) Positioning verification method and device, electronic equipment and storage medium
CN113570607B (en) Target segmentation method and device and electronic equipment
CN114694138B (en) Road surface detection method, device and equipment applied to intelligent driving
CN113591847B (en) Vehicle positioning method and device, electronic equipment and storage medium
CN115773759A (en) Indoor positioning method, device and equipment of autonomous mobile robot and storage medium
CN116824638A (en) Dynamic object feature point detection method and device, electronic equipment and storage medium
CN117911891A (en) Equipment identification method and device, electronic equipment and storage medium
CN117853614A (en) Method and device for detecting change condition of high-precision map element and vehicle
CN117496113A (en) Feature detection method, device, equipment and storage medium
CN115906001A (en) Multi-sensor fusion target detection method, device and equipment and automatic driving vehicle
CN115797407A (en) Point cloud data filtering method and device, electronic equipment and storage medium
CN118012036A (en) Target pair determining method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination