CN112101177A - Map construction method and device and carrier - Google Patents
- Publication number
- CN112101177A (application number CN202010944705.4A)
- Authority
- CN
- China
- Prior art keywords
- visual map
- environmental condition
- map
- visual
- target route
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/32—Structuring or formatting of map data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/251—Fusion techniques of input or preprocessed data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/582—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
Abstract
The invention provides a map construction method, a map construction device, and a vehicle, and relates to the field of unmanned driving. The method comprises the following steps: determining a target route for the vehicle to travel and the environmental conditions under which it travels, the environmental conditions including at least a first environmental condition and a second environmental condition that differ in at least one environmental factor; respectively constructing a first visual map of the target route under the first environmental condition and a second visual map under the second environmental condition; and finally, fusing the first visual map and the second visual map to construct a full visual map. By acquiring visual maps of the same route under different environmental conditions and fusing them into a full visual map, the full visual map can be used under different environmental conditions, which improves the adaptability of the map to the environment and alleviates the problem of image features failing to match across environmental conditions.
Description
Technical Field
The invention relates to the technical field of unmanned driving, and in particular to a map construction method, a map construction device, and a vehicle.
Background
With the development of unmanned technology, autonomous vehicles have received much attention.
An autonomous vehicle is an intelligent vehicle that realizes unmanned driving through a computer system. It relies on the cooperation of artificial intelligence, cameras, radar, and a global positioning system, so that it can drive automatically without active human intervention.
At present, an autonomous vehicle may be positioned using a visual map, but a visual map built in the daytime cannot be applied at night, because image features do not match across different environments.
Disclosure of Invention
In order to solve the above problems, an object of the present invention is to provide a map construction method, apparatus, and vehicle.
In a first aspect, an embodiment of the present invention provides a map construction method, where the method includes:
determining a target route for the vehicle to travel and the environmental conditions of travel; the environmental conditions include at least a first environmental condition and a second environmental condition, wherein the first environmental condition and the second environmental condition differ in at least one environmental factor;
respectively constructing a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition;
and fusing the first visual map and the second visual map to construct a full visual map.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the step of respectively constructing a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition includes:
applying a visual sensor disposed on the vehicle to construct a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition, respectively.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the step of respectively constructing a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition by using a visual sensor disposed on the vehicle includes:
acquiring first image information of the target route by applying the vision sensor under a first environmental condition; and acquiring second image information of the target route by applying the vision sensor under a second environmental condition;
and constructing and obtaining the first visual map and the second visual map respectively based on the first image information and the second image information.
With reference to the second possible implementation manner of the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the step of constructing and obtaining the first visual map and the second visual map based on the first image information and the second image information respectively includes:
carrying out feature point detection and calculation on the first image information to obtain feature points of the first image information and descriptors of the feature points of the first image information;
and carrying out feature point detection and calculation on the second image information to obtain feature points of the second image information and descriptors of the feature points of the second image information.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the step of fusing the first visual map and the second visual map to construct a full visual map includes:
fusing the characteristic points in the second visual map into the first visual map to obtain a full visual map; or fusing the characteristic points of the first visual map into the second visual map to obtain the full visual map.
With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the vision sensor employs a monocular camera or a binocular camera.
In a second aspect, an embodiment of the present invention provides a map building apparatus, including:
a determination module for determining a target route for travel of the vehicle and the environmental conditions of travel; the environmental conditions include at least a first environmental condition and a second environmental condition, wherein the first environmental condition and the second environmental condition differ in at least one environmental factor;
a construction module for respectively constructing a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition;
and the fusion module is used for fusing the first visual map and the second visual map to construct a full visual map.
With reference to the second aspect, the present invention provides a first possible implementation manner of the second aspect, wherein the building module, when building a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition respectively, is configured to use a visual sensor disposed on the vehicle to build a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition respectively.
In a third aspect, an embodiment of the present invention provides a vehicle, including a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions executable by the processor, and the processor executes the machine-executable instructions to implement the method of any one of the foregoing embodiments.
In a fourth aspect, embodiments of the invention provide a machine-readable storage medium having stored thereon machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement a method as in any one of the preceding embodiments.
According to the map construction method, the map construction device, and the vehicle, a target route of the vehicle and the environmental conditions of travel are first determined; the environmental conditions include at least a first environmental condition and a second environmental condition, which differ in at least one environmental factor. Then a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition are respectively constructed. Finally, the first visual map and the second visual map are fused to construct a full visual map. According to the embodiment of the invention, visual maps under different environmental conditions are obtained on the same route and fused to construct the full visual map, so that the full visual map can be used under different environmental conditions, the adaptability of the map to the environment is improved, the problem of unmatched image features under different environmental conditions is alleviated, and the recognition accuracy is improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart of a map construction method according to an embodiment of the present invention;
FIG. 2 is a flowchart of the sub-steps of mode A according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating the execution of step S202 according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a map building apparatus according to an embodiment of the present invention;
fig. 5 is a schematic view of a vehicle according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
At present, a visual map built under one set of conditions cannot be used under another; for example, a visual map built in the daytime cannot be applied to positioning at night. Based on this, the map construction method, the map construction device, and the vehicle provided by the embodiments of the invention fuse visual maps built under different environmental conditions into a full visual map that can be used under different environmental conditions, which improves the adaptability of the constructed map to the environment, alleviates the problem of unmatched image features under different environmental conditions, and helps improve recognition accuracy.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Fig. 1 shows a flowchart of a map building method provided by an embodiment of the present invention. Referring to fig. 1, a map construction method provided in the embodiment of the present invention mainly includes the following steps:
Step S101, determining a target route for the vehicle to travel and the environmental conditions of travel; the environmental conditions include at least a first environmental condition and a second environmental condition, wherein the first environmental condition and the second environmental condition differ in at least one environmental factor;
in this embodiment, the vehicle may be, for example, an unmanned automobile, an unmanned spacecraft, an unmanned ship, or the like.
The target route may be a route autonomously planned by the vehicle from an externally input starting point and ending point, or a route sent to the vehicle directly from outside, for example, a route entered into the vehicle by a user.
The environmental conditions may be measured by sensors provided on the vehicle or entered into the vehicle directly by the user. Environmental conditions include, but are not limited to, illumination conditions and brightness conditions, and may also include meteorological conditions such as wind.
It should be noted that an environmental condition may also carry time information to distinguish different time periods or day and night, such as a brightness condition during the day (e.g., 6 a.m. to 6 p.m.) versus at night (e.g., 7 p.m. to 5 a.m.); the specific time periods may be determined according to region and season.
It is understood that the environmental conditions may include a variety of different environmental conditions, such as, for example, a third environmental condition, a fourth environmental condition, etc., in addition to the first environmental condition, the second environmental condition, etc., each of which has at least one environmental factor that is distinguishable from the other environmental conditions.
The presence of at least one different environmental factor between the first environmental condition and the second environmental condition means that the two conditions differ in one or more environmental factors: for example, the first and second environmental conditions may have different illumination (e.g., day versus night), different wind, or both.
Step S102, respectively constructing a first visual map of a target route under a first environmental condition and a second visual map of the target route under a second environmental condition;
specifically, the vehicle is controlled to travel along the target route under different environmental conditions, so as to construct different visual maps.
For the convenience of understanding, the vehicle is taken as an unmanned vehicle as an example for explanation, the unmanned vehicle is controlled to run along the target route under the first environmental condition, and a first visual map of the target route is constructed through a visual sensor arranged on the vehicle; the first visual map comprises a plurality of characteristic points; similarly, the unmanned vehicle is controlled to run along the target route under a second environmental condition, and a second visual map of the target route is constructed and obtained through a visual sensor arranged on the vehicle; the second visual map includes a plurality of feature points.
In an alternative embodiment, whether for the first visual map or the second visual map, the selection rule for feature points is generally set to select stationary objects or markings, such as lane lines, traffic signs and road markings, traffic lights, and the like.
As to how the first and second visual maps are constructed, in an alternative embodiment, the above step S102 may be implemented in one of the following ways:
Mode A: build the visual map using only cameras.
For example, a first visual map of the target route under a first environmental condition and a second visual map of the target route under a second environmental condition are respectively constructed using visual sensors disposed on the vehicle.
In alternative embodiments, the vision sensor may be a monocular camera or a binocular camera.
Mode B: apply SLAM (Simultaneous Localization and Mapping) technology to construct the visual map.
For example, 2D/3D SLAM based on lidar, RGB-D SLAM based on a depth camera, visual SLAM (hereinafter abbreviated vSLAM) based on a visual sensor, or visual-inertial odometry (VIO) based on a visual sensor and an inertial measurement unit may be applied to construct a visual map.
In particular implementations, a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition may be constructed by applying a SLAM system or a vSLAM system provided on the vehicle, respectively.
In an alternative embodiment, referring to fig. 2, the step in mode A of building a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition by using the visual sensor disposed on the vehicle may include the following sub-steps:
step S201, a visual sensor is applied to obtain first image information of a target route under a first environmental condition; acquiring second image information of the target route by using the visual sensor under a second environmental condition;
step S202, a first visual map and a second visual map are constructed and obtained based on the first image information and the second image information respectively.
Specifically, the first image information and the second image information are subjected to image processing to obtain a first visual map and a second visual map, and the first visual map and the second visual map are respectively marked with corresponding visual feature points and descriptors thereof.
Referring to fig. 3, the step S202 mainly includes the following sub-steps:
step S301, feature point detection and calculation are carried out on the first image information to obtain feature points of the first image information and descriptors of the feature points of the first image information;
step S302, performing feature point detection and calculation on the second image information to obtain feature points of the second image information and descriptors of the feature points of the second image information.
Specifically, feature point detection and calculation may be performed on the image information (the first image information or the second image information) by using a feature extraction and matching algorithm to obtain feature points of the image information and descriptors corresponding to the feature points.
For example, the ORB (Oriented FAST and Rotated BRIEF) algorithm is applied to detect and describe the feature points of the first image information, so as to obtain the feature points of the first image information and descriptors of those feature points.
Specifically, the FAST (Features from Accelerated Segment Test) algorithm is applied to detect the feature points of the first image information. The feature points of an image can be understood simply as its more prominent points, such as contour points, bright points in darker areas, and dark points in lighter areas. The principle is to examine, based on the gray values of the image around a candidate feature point, a circle of pixel values surrounding the candidate: if enough pixels in that neighborhood differ sufficiently in gray value from the candidate point, the candidate is considered a feature point.
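The criterion just described can be sketched in a few lines of Python. This is a simplified illustration only, not a full FAST implementation (real FAST samples a Bresenham circle of 16 pixels and requires a contiguous arc of them to be brighter or darker than the center); the function name, ring, and threshold values here are hypothetical.

```python
def is_corner(image, r, c, threshold=20, min_diff_count=6):
    """Simplified FAST-style test: (r, c) is a candidate feature point if
    enough pixels on a small ring around it differ from it by more than
    `threshold` gray levels. `image` is a 2D list of grayscale values."""
    ring = [(-2, 0), (-1, 1), (0, 2), (1, 1), (2, 0), (1, -1), (0, -2), (-1, -1)]
    center = image[r][c]
    diffs = sum(abs(image[r + dr][c + dc] - center) > threshold for dr, dc in ring)
    return diffs >= min_diff_count

# A bright dot on a dark background passes the test; a flat region does not.
img = [[0] * 9 for _ in range(9)]
img[4][4] = 255
print(is_corner(img, 4, 4))  # True
print(is_corner(img, 2, 2))  # False
```

The ring radius and the required count trade detection density against robustness; real detectors also apply non-maximum suppression so that clusters of adjacent responses collapse to one feature point.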
After the feature points are obtained, their attributes need to be described; the description output for each feature point's attributes is called the feature point's descriptor (feature descriptor).
Specifically, a BRIEF algorithm is used to calculate a descriptor of a feature point.
The core idea of the BRIEF algorithm is to select N point pairs in a certain pattern around a keypoint P and concatenate the comparison results of the N point pairs into a descriptor.
This method detects feature points with the FAST algorithm within ORB and computes descriptors with the BRIEF algorithm. Because a descriptor is expressed as a binary string, storage space is saved and the time needed for feature point matching is shortened.
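The BRIEF idea described above — comparing N point pairs around a keypoint and concatenating the results into a binary string — can be sketched as follows. This is a minimal illustration with a synthetic image and hypothetical names, not the patent's implementation; real BRIEF samples pairs from a smoothed image with a fixed learned or Gaussian pattern.

```python
import random

def brief_descriptor(image, kp, pairs):
    """Binary descriptor for keypoint `kp` = (row, col): one bit per point
    pair, set to '1' when the first sampled pixel is darker than the second.
    `pairs` is a list of ((dr1, dc1), (dr2, dc2)) offsets around kp."""
    r, c = kp
    bits = []
    for (dr1, dc1), (dr2, dc2) in pairs:
        p1 = image[r + dr1][c + dc1]
        p2 = image[r + dr2][c + dc2]
        bits.append('1' if p1 < p2 else '0')
    return ''.join(bits)

# Fix a sampling pattern of N = 8 pairs (the same pattern must be reused
# for every keypoint so that descriptors are comparable).
random.seed(0)
pairs = [((random.randint(-2, 2), random.randint(-2, 2)),
          (random.randint(-2, 2), random.randint(-2, 2))) for _ in range(8)]

image = [[(i * 7 + j * 13) % 256 for j in range(16)] for i in range(16)]
desc = brief_descriptor(image, (8, 8), pairs)
print(desc)  # an 8-character binary string such as the "10101011" used below
```

Real descriptors use N = 128, 256, or 512 bits; 8 bits are used here only to match the worked example in this document.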
It should be noted that the selection of the feature points may also be determined in other ways.
And step S103, fusing the first visual map and the second visual map to construct a full visual map.
The full-visual map refers to a map obtained by fusing visual maps under different environmental conditions, and the full-visual map is suitable for different environmental conditions, for example, the full-visual map constructed under two illumination conditions of day and night can be applied to day and night.
Map fusion forms a full visual map of the same route covering different environmental conditions. For example, descriptors of visual feature points under the different illumination conditions of day and night are both fused into the map, so that by matching the corresponding feature points against images captured in different periods, the map can be used in different periods; this alleviates the problem of image features differing across times and improves the adaptability of the map to the environment.
In an alternative embodiment, this step S103 may be implemented by one of the following:
in the mode 1, feature points in a second visual map are fused into the first visual map to obtain a full visual map;
for example, the descriptors of the feature points in the second visual map are fused into the first visual map, so that the full visual map is obtained.
Assuming that the first visual map has 10 feature points and the second visual map has 20 feature points, the 20 feature points of the second visual map and the descriptors thereof are added to the first visual map, and the obtained full visual map comprises 30 feature points.
In an alternative embodiment, the second visual map may first be matched against the first visual map feature point by feature point, and the descriptors of the matched feature points added to the first visual map.
For ease of understanding, the matching of feature points is briefly described here by way of example:
for example, the descriptor for feature point A, B is as follows:
A:10101011;
B:10101010;
A threshold value is set, for example 80%. When the similarity of the descriptors of A and B is greater than the threshold (80%), feature points A and B are judged to match. In this example, A and B differ only in the last bit; their similarity is 87.5%, which is greater than 80%, so feature points A and B match.
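The threshold test above can be written directly; the function names are illustrative, and similarity is measured as the fraction of bit positions at which two equal-length binary descriptors agree (the complement of the normalized Hamming distance).

```python
def descriptor_similarity(a, b):
    """Fraction of positions at which two equal-length binary descriptor
    strings agree, e.g. '10101011' vs '10101010' -> 7/8 = 0.875."""
    assert len(a) == len(b), "descriptors must have equal length"
    return sum(x == y for x, y in zip(a, b)) / len(a)

def is_match(a, b, threshold=0.8):
    """Judge two feature points as matching when their descriptor
    similarity exceeds the threshold (80% in the example above)."""
    return descriptor_similarity(a, b) > threshold

A = "10101011"
B = "10101010"
print(descriptor_similarity(A, B))  # 0.875
print(is_match(A, B))               # True
```

With packed integer descriptors the same quantity is usually computed with XOR and a popcount rather than a character comparison, which is where the binary representation's speed advantage comes from.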
And 2, fusing the characteristic points of the first visual map into the second visual map to obtain the full visual map.
Similarly, a descriptor of a feature point in the first visual map may be added to the second visual map, resulting in a full visual map.
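Both fusion modes reduce to folding one map's feature points into the other. The sketch below models a visual map simply as a list of (position, descriptor) records; this representation and the function name are assumptions for illustration, not the patent's data structure.

```python
def fuse_maps(base_map, other_map):
    """Fuse two visual maps: start from `base_map` and add every feature
    point (with its descriptor) from `other_map`. Mode 1 passes the first
    map as the base; mode 2 passes the second map as the base."""
    full_map = list(base_map)
    full_map.extend(other_map)
    return full_map

# 10 feature points in the first (e.g. daytime) map,
# 20 feature points in the second (e.g. nighttime) map.
first_map = [((float(i), 0.0), "10101011") for i in range(10)]
second_map = [((float(i), 1.0), "10101010") for i in range(20)]

full_map = fuse_maps(first_map, second_map)    # mode 1: second into first
print(len(full_map))   # 30, as in the worked example

full_map_2 = fuse_maps(second_map, first_map)  # mode 2: first into second
print(len(full_map_2)) # 30
```

A refinement consistent with the matching-based variant described above would first run `is_match` between the two maps' descriptors and attach the matched descriptors to existing feature points instead of appending duplicates.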
According to the map construction method provided by the embodiment of the invention, a target route of the vehicle and the environmental conditions of travel are first determined; the environmental conditions include at least a first environmental condition and a second environmental condition, which differ in at least one environmental factor. Then a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition are respectively constructed. Finally, the first visual map and the second visual map are fused to construct a full visual map. According to the embodiment of the invention, visual maps under different environmental conditions are obtained on the same route and fused to construct the full visual map, so that the full visual map is suitable for a variety of different environmental conditions, the adaptability of the map to the environment is improved, the problem of unmatched image features under different environmental conditions is alleviated, and the recognition accuracy is improved.
On the basis of the above embodiments, the embodiment of the present invention further provides a map construction apparatus, as shown in fig. 4, the apparatus includes a determination module 401, a construction module 402, and a fusion module 403;
the determination module 401 is configured to determine a target route traveled by a vehicle and the environmental conditions of travel; the environmental conditions include at least a first environmental condition and a second environmental condition, wherein the first environmental condition and the second environmental condition differ in at least one environmental factor;
the construction module 402 is configured to respectively construct a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition;
the fusion module 403 is configured to fuse the first visual map and the second visual map to construct a full visual map.
In an alternative embodiment, the construction module 402 is configured to apply the visual sensor disposed on the vehicle to respectively construct a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition when respectively constructing the first visual map of the target route under the first environmental condition and the second visual map of the target route under the second environmental condition.
In an alternative embodiment, when using the visual sensor disposed on the vehicle to respectively construct the first visual map and the second visual map, the construction module 402 is configured to: acquire first image information of the target route with the visual sensor under the first environmental condition; acquire second image information of the target route with the visual sensor under the second environmental condition; and construct the first visual map and the second visual map based on the first image information and the second image information, respectively.
In an optional embodiment, when constructing the first visual map and the second visual map based on the first image information and the second image information, the construction module 402 is configured to perform feature point detection and calculation on the first image information to obtain feature points of the first image information and descriptors of those feature points, and to perform feature point detection and calculation on the second image information to obtain feature points of the second image information and descriptors of those feature points.
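As a rough illustration of the detection-and-description step, the toy detector below marks high-contrast local maxima as feature points and describes each with its surrounding pixel patch. A real system would use a detector/descriptor such as ORB or SIFT; the patent names none, and every threshold here is illustrative.

```python
import numpy as np

def detect_and_describe(img, contrast=50, r=3):
    """Toy feature detector/descriptor: a pixel is a feature point if it is
    the maximum of its (2r+1)x(2r+1) window and the window's intensity range
    exceeds `contrast`; its descriptor is the flattened window."""
    pts, descs = [], []
    h, w = img.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = img[y - r:y + r + 1, x - r:x + r + 1].astype(int)
            if img[y, x] == win.max() and win.max() - win.min() > contrast:
                pts.append((x, y))
                descs.append(win.flatten())
    return pts, descs

# A dark frame with a single bright spot yields exactly one feature point.
frame = np.zeros((32, 32), dtype=np.uint8)
frame[16, 16] = 255
points, descriptors = detect_and_describe(frame)
```

Run on real day and night images of the same route, such a step would produce the two feature/descriptor sets that the construction module turns into the first and second visual maps.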
In an optional embodiment, when fusing the first visual map and the second visual map to construct the full visual map, the fusion module 403 is configured to fuse the feature points of the second visual map into the first visual map to obtain the full visual map, or to fuse the feature points of the first visual map into the second visual map to obtain the full visual map.
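One plausible reading of "fusing the feature points of one map into the other" is sketched below: descriptors from the second map are appended to the first unless they already match an existing descriptor. The patent leaves the matching criterion unspecified, so the Euclidean distance measure and the `max_dist` threshold are assumptions for illustration only.

```python
import numpy as np

def fuse_feature_points(base, other, max_dist=10.0):
    """Merge feature descriptors from `other` into `base`, skipping any
    descriptor whose nearest neighbour in the (growing) base map lies
    within `max_dist` (treated as an already-mapped landmark)."""
    fused = [np.asarray(d, dtype=float) for d in base]
    for d in other:
        d = np.asarray(d, dtype=float)
        if not fused or min(np.linalg.norm(d - b) for b in fused) > max_dist:
            fused.append(d)
    return fused

day = [[0.0, 0.0], [100.0, 0.0]]
night = [[0.5, 0.0], [0.0, 200.0]]  # first nearly duplicates a day point
full = fuse_feature_points(day, night)  # keeps 3 distinct feature points
```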
In an alternative embodiment, the vision sensor employs a monocular camera or a binocular camera.
The map construction device provided by the embodiment of the invention may be dedicated hardware on the device, or software or firmware installed on the device.
The device provided by the embodiment of the present invention has the same implementation principle and technical effects as the foregoing method embodiments; for brevity, where this device embodiment omits details, reference may be made to the corresponding content in the method embodiments.
Referring to FIG. 5, embodiments of the present invention also provide a vehicle 500, comprising a processor 501, a memory 502, a bus 503, and a communication interface 504; the processor 501, the communication interface 504, and the memory 502 are connected through the bus 503. The memory 502 is used to store a program, and the processor 501 is configured to call the program stored in the memory 502 through the bus 503 to execute the map construction method of the above-described embodiments.
The memory 502 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile memory, such as at least one magnetic disk memory. Communication between the system's network element and at least one other network element is realized through at least one communication interface 504 (wired or wireless), which may use the Internet, a wide area network, a local area network, a metropolitan area network, or the like.
Bus 503 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 5, but this does not indicate only one bus or one type of bus.
The memory 502 stores a program; upon receiving an execution instruction, the processor 501 executes the program. The method performed by the flow-defined apparatus disclosed in any of the foregoing embodiments of the present invention may be applied to, or implemented by, the processor 501.
The processor 501 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 501, or by instructions in the form of software. The processor 501 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be directly executed by a hardware decoding processor, or by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or registers. The storage medium is located in the memory 502; the processor 501 reads the information in the memory 502 and completes the steps of the above method in combination with its hardware.
Embodiments of the present invention also provide a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the map construction method described above.
In the description of the present invention, it should be noted that the terms "first", "second", "third", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A map construction method, comprising:
determining a target route traveled by a vehicle and environmental conditions of the travel; the environmental conditions include at least a first environmental condition and a second environmental condition; wherein at least one different environmental factor exists for the first environmental condition and the second environmental condition;
respectively constructing a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition;
and fusing the first visual map and the second visual map to construct a full visual map.
2. The method of claim 1, wherein the step of separately constructing a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition comprises:
applying a visual sensor disposed on the vehicle to construct a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition, respectively.
3. The method of claim 2, wherein the step of using a visual sensor disposed on the vehicle to construct a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition, respectively, comprises:
acquiring first image information of the target route by applying the vision sensor under a first environmental condition; and acquiring second image information of the target route by applying the vision sensor under a second environmental condition;
and constructing and obtaining the first visual map and the second visual map respectively based on the first image information and the second image information.
4. The method of claim 3, wherein the step of constructing the first visual map and the second visual map based on the first image information and the second image information, respectively, comprises:
carrying out feature point detection and calculation on the first image information to obtain feature points of the first image information and descriptors of the feature points of the first image information;
and performing feature point detection and calculation on the second image information to obtain feature points of the second image information and descriptors of the feature points of the second image information.
5. The method of claim 1, wherein the step of fusing the first visual map and the second visual map to construct the full visual map comprises:
fusing the feature points in the second visual map into the first visual map to obtain the full visual map; or fusing the feature points of the first visual map into the second visual map to obtain the full visual map.
6. The method of claim 2, wherein the vision sensor employs a monocular camera or a binocular camera.
7. A map building apparatus, characterized in that the apparatus comprises:
a determination module for determining a target route for travel of the vehicle and an environmental condition of travel; the environmental conditions include at least a first environmental condition and a second environmental condition; wherein at least one different environmental factor exists for the first environmental condition and the second environmental condition;
a construction module for respectively constructing a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition;
and the fusion module is used for fusing the first visual map and the second visual map to construct a full visual map.
8. The apparatus of claim 7, wherein the construction module, in constructing the first visual map of the target route under the first environmental condition and the second visual map of the target route under the second environmental condition, respectively, is configured to:
applying a visual sensor disposed on the vehicle to construct a first visual map of the target route under the first environmental condition and a second visual map of the target route under the second environmental condition, respectively.
9. A vehicle comprising a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor to perform the method of any one of claims 1-6.
10. A machine-readable storage medium having stored thereon machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the method of any of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010944705.4A CN112101177A (en) | 2020-09-09 | 2020-09-09 | Map construction method and device and carrier |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112101177A true CN112101177A (en) | 2020-12-18 |
Family
ID=73751261
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010944705.4A Pending CN112101177A (en) | 2020-09-09 | 2020-09-09 | Map construction method and device and carrier |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112101177A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112862881A (en) * | 2021-02-24 | 2021-05-28 | 清华大学 | Road map construction and fusion method based on crowd-sourced multi-vehicle camera data |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105973265A (en) * | 2016-05-19 | 2016-09-28 | 杭州申昊科技股份有限公司 | Mileage estimation method based on laser scanning sensor |
CN109461179A (en) * | 2018-10-17 | 2019-03-12 | 河南科技学院 | A kind of robot cooperated detection system of explosive primary and secondary |
CN110268354A (en) * | 2019-05-09 | 2019-09-20 | 珊口(深圳)智能科技有限公司 | Update the method and mobile robot of map |
CN110517355A (en) * | 2018-05-22 | 2019-11-29 | 苹果公司 | Environment for illuminating mixed reality object synthesizes |
CN111459269A (en) * | 2020-03-24 | 2020-07-28 | 视辰信息科技(上海)有限公司 | Augmented reality display method, system and computer readable storage medium |
US20200240793A1 (en) * | 2019-01-28 | 2020-07-30 | Qfeeltech (Beijing) Co., Ltd. | Methods, apparatus, and systems for localization and mapping |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||