CN110132278B - Method and device for instant positioning and mapping - Google Patents
- Publication number
- CN110132278B CN110132278B CN201910400102.5A CN201910400102A CN110132278B CN 110132278 B CN110132278 B CN 110132278B CN 201910400102 A CN201910400102 A CN 201910400102A CN 110132278 B CN110132278 B CN 110132278B
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
Abstract
The application relates to a method and a device for simultaneous localization and mapping. The method comprises the following steps: acquiring a top-view image; determining slot corner points in the top-view image and their image coordinates, wherein each slot corner point corresponds to one positioning point of a slot; determining valid slots based on the image coordinates of the slot corner points and a first preset slot width; matching the valid slots with the slots in a map; determining the pose of the mapping device based on the matching result; and updating the map based on the pose of the mapping device.
Description
Technical Field
The invention relates to the field of computer vision, and in particular to visual localization and mapping.
Background
Simultaneous localization and mapping (SLAM) is a technique that tracks the motion of a robot in real time while building a map of the surrounding environment, thereby achieving the goals of localization and navigation.
SLAM systems currently used in the field of automatic parking are mainly lidar-based or vision-based. Both schemes can localize and map using feature points, but the positioning information they provide is limited and cannot explicitly indicate the global or local position of a parking slot.
Therefore, a new localization and mapping method and apparatus for automatic parking are needed.
Disclosure of Invention
The application aims to provide a method for simultaneous localization and mapping. The method directly uses the semantic information of slots (such as slot width, the positional relationship of slot lines, and the positional relationship of slot corner points) while constructing the map, so that the constructed map provides both the positioning information of the mapping device and the position information of the slots.
One aspect of the present application provides a method for simultaneous localization and mapping. The method comprises the following steps: acquiring a top-view image; determining slot corner points in the top-view image and their image coordinates, wherein each slot corner point corresponds to one positioning point of a slot; determining valid slots based on the image coordinates of the slot corner points and a first preset slot width; matching the valid slots with slots in a map; determining the pose of the mapping device based on the matching result; and updating the map based on the pose of the mapping device.
In some embodiments, acquiring the top-view image comprises: obtaining at least one visual image, converting the at least one visual image into at least one sub-top-view image through inverse perspective transformation, and stitching the at least one sub-top-view image into the top-view image.
In some embodiments, determining the slot corner points and their image coordinates in the top-view image comprises: determining the slot corner points in the top-view image with a deep neural network, wherein the slot corner points are slot entry corner points.
In some embodiments, determining a valid slot based on the image coordinates of the slot corner points and the first preset slot width comprises: determining candidate slots based on the image coordinates of the slot corner points and the first preset slot width, and determining the valid slots among the candidate slots.
In some embodiments, determining the valid slots among the candidate slots comprises: determining a region of interest for each candidate slot; and classifying the candidate slots with a deep neural network based on the region of interest to determine the valid slots.
In some embodiments, matching the valid slot with a slot in the map comprises: matching the corner points of the valid slot with the corner points of the slots in the map; determining at least two pairs of mutually matched slot corner points; and determining the pose of the mapping device based on the at least two pairs of mutually matched slot corner points.
In some embodiments, matching the corner points of the valid slot with the corner points of the slots in the map comprises: for each corner point of the valid slot, determining the distance between that corner point and each slot corner point in the map; and judging whether the distance satisfies a preset condition, and if so, matching the corner point of the valid slot with the corresponding slot corner point in the map. The preset condition is that the distance is within a preset threshold range and is the minimum among the distances within that range.
In some embodiments, determining the pose of the mapping device based on the at least two pairs of mutually matched slot corner points comprises: determining a confidence for each pair of mutually matched slot corner points, wherein the confidence is related to the distance between the matched slot corner points and the mapping device and/or the number of times the matched slot corner points have been observed historically in the map; and determining the pose of the mapping device based on the confidences.
In some embodiments, the method further comprises: determining, among the valid slots of the top-view image, valid slots that do not match any slot in the map; and inserting those unmatched valid slots into the map.
In some embodiments, the method further comprises optimizing the map. The optimization includes at least one of: fitting at least part of the slot corner points in the map according to a preset positional relationship; merging, by weight, slot corner points in the map whose mutual position difference is within a preset threshold range; optimizing the direction vectors of the slot lines in the map, wherein each slot line corresponds to one edge of a slot; and optimizing the slot corner points of the slots in the map based on a second preset slot width.
One aspect of the present application provides an apparatus for simultaneous localization and mapping. The apparatus comprises at least one image acquisition device port; at least one storage device comprising a set of instructions; and at least one processor in communication with the at least one storage device. The at least one processor, when executing the set of instructions, is configured to cause the apparatus to perform the method for simultaneous localization and mapping.
Additional features of the present application will be set forth in part in the description that follows, and in part will become apparent to those of ordinary skill in the art from the descriptions of the figures and examples below, or may be learned through the practice or use of the methods, instrumentalities, and combinations set forth in the detailed examples discussed below.
Drawings
The following drawings describe in detail exemplary embodiments disclosed in the present application. Wherein like reference numerals represent similar structures throughout the several views of the drawings. Those of ordinary skill in the art will understand that the present embodiments are non-limiting, exemplary embodiments and that the accompanying drawings are for illustrative and descriptive purposes only and are not intended to limit the scope of the present disclosure, as other embodiments may equally fulfill the inventive intent of the present application. It should be understood that the drawings are not to scale. Wherein:
FIG. 1 illustrates a system for instant positioning and mapping according to some embodiments of the present application;
FIG. 2 illustrates a flowchart of a method of simultaneous localization and mapping according to some embodiments of the present application;
FIG. 3 illustrates a schematic view of a parking lot according to some embodiments of the present application.
Detailed Description
The following description is presented to enable any person skilled in the art to make and use the present disclosure, and is provided in the context of a particular application and its requirements. Various local modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. For example, as used herein, the singular forms "a", "an" and "the" may include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "comprising," and/or "including," when used in this specification, specify the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof in the system/method.
These and other features of the present disclosure, as well as the operation and function of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description with reference to the accompanying drawings, all of which form a part of this disclosure. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the disclosure.
The flowcharts used in this disclosure illustrate system-implemented operations according to some embodiments of the disclosure. It should be clearly understood that the operations of the flowcharts need not be performed in the order shown; they may be performed in reverse order or simultaneously. In addition, one or more other operations may be added to, or removed from, the flowcharts.
One aspect of the present application relates to a method of simultaneous localization and mapping. Specifically, the method comprises obtaining a top-view image of a specific area (such as a parking lot or a warehouse) and determining slot corner points and their image coordinates from the top-view image; then determining valid slots in the top-view image using a preset slot width; matching the valid slots with the slots already in the map; determining the pose of the mapping device from the matching result; and updating the map according to the pose of the mapping device.
FIG. 1 illustrates a system for simultaneous localization and mapping according to some embodiments of the present application.
The system 100 for simultaneous localization and mapping may acquire a visual image and perform the method for simultaneous localization and mapping, which is described with reference to FIG. 2. As shown, the system 100 may include an image acquisition device 101 and a mapping device 102 (also referred to herein as a simultaneous localization and mapping apparatus).
The image capturing device 101 is used to capture a visual image of the surrounding environment. The image acquisition device 101 may be a camera, such as a fisheye camera, a catadioptric camera, a panoramic imaging camera. Image capture device 101 may be mounted on mapping apparatus 102.
In some embodiments, the mapping device 102 may include a communication port 150 to facilitate data communication. The mapping device 102 may also include a processor 120, in the form of one or more processors, for executing computer instructions. The computer instructions may include, for example, routines, programs, objects, components, data structures, procedures, modules, and functions that perform the particular functions described herein. For example, the processor 120 may determine the valid slots in the top-view image using a preset slot width. As another example, the processor 120 may match a valid slot in the top-view image with a slot in the map, determine the pose of the mapping device based on the match, and then update the map based on that pose.
In some embodiments, processor 120 may include one or more hardware processors, such as microcontrollers, microprocessors, Reduced Instruction Set Computers (RISC), Application Specific Integrated Circuits (ASICs), application specific instruction-set processors (ASIPs), Central Processing Units (CPUs), Graphics Processing Units (GPUs), Physical Processing Units (PPUs), microcontroller units, Digital Signal Processors (DSPs), Field Programmable Gate Arrays (FPGAs), Advanced RISC Machines (ARMs), Programmable Logic Devices (PLDs), any circuit or processor capable of executing one or more functions, or the like, or any combination thereof.
In some embodiments, the mapping device 102 may include an internal communication bus 110, program storage, and different forms of data storage (e.g., a disk 170, Read Only Memory (ROM) 130, or Random Access Memory (RAM) 140). The mapping device 102 may also include program instructions stored in ROM 130, RAM 140, and/or other types of non-transitory storage media to be executed by the processor 120. The methods and/or processes of the present application may be implemented as program instructions. The mapping device 102 also includes I/O components 160 that support input/output between the computer and other components (e.g., user interface elements). The mapping device 102 may also receive programming and data via network communications.
For illustrative purposes only, only one processor is depicted for the mapping device 102. It should be noted, however, that the mapping device 102 may also include multiple processors, and thus operations and/or method steps disclosed herein may be performed by one processor or by a combination of processors. For example, if the processor 120 of the device 102 is described herein as performing steps A and B, it should be understood that steps A and B may also be performed by two different processors, jointly or separately (e.g., a first processor performs step A and a second processor performs step B, or the first and second processors jointly perform steps A and B).
FIG. 2 illustrates a flow diagram of a method of instant positioning and mapping, shown in accordance with some embodiments of the present application. The process 200 may be implemented as a set of instructions in a non-transitory storage medium in the mapping device 102. The mapping device 102 may execute the set of instructions and may perform the steps in the flow 200 accordingly.
The operations of flow 200 presented below are intended to be illustrative and non-limiting. In some embodiments, flow 200 may be implemented with one or more additional operations not described and/or without one or more of the operations described herein. Furthermore, the order of the operations shown in FIG. 2 and described below is not intended to be limiting.
At 210, the mapping device 102 may acquire a top view image.
The mapping device 102 may directly acquire the top view image or may indirectly acquire the top view image. The indirect acquisition of the overhead view image comprises the following three steps:
In the first step, the mapping device 102 may acquire at least one visual image. The at least one visual image may be acquired simultaneously by one or more image acquisition devices 101, and each visual image may correspond to the same or a different scene (e.g., scenes of different local areas of a parking lot).
As an example, the mapping device 102 may acquire four visual images, captured at the same time by four image acquisition devices 101 mounted on the front, rear, left, and right sides of the mapping device 102.
In the second step, the mapping device 102 may convert the at least one visual image into at least one sub-top-view image by inverse perspective transformation. Continuing the example from the first step, the mapping device 102 may convert the four visual images into four sub-top-view images, which correspond one-to-one to the visual images.
In the third step, the mapping device 102 may stitch the at least one sub-top-view image into the top-view image. Continuing the example above, the mapping device 102 may transform the four sub-top-view images into the same image coordinate system using the positional relationship between the four image acquisition devices 101 and the mapping device 102, and then stitch them into the final top-view image. It should be appreciated that the stitched top-view image has a larger field of view than any single sub-top-view image.
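As a rough illustration of the inverse perspective transformation in the second and third steps, each camera pixel can be pushed through a 3x3 ground-plane homography before the warped sub-images are pasted into one shared top-view frame. The homography values here are hypothetical placeholders, not calibration data from the patent:

```python
# Illustrative sketch (not the patent's implementation): applying a 3x3
# inverse-perspective homography H to map a pixel (u, v) of a visual image
# onto the ground-plane (top-view) coordinate system.

def warp_point(H, u, v):
    """Project pixel (u, v) through homography H (row-major 3x3 list)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)  # homogeneous normalization

# The identity homography leaves coordinates unchanged; a real IPM
# homography would come from each camera's calibration, after which the
# four warped sub-top-view images are stitched in one ground frame.
H_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

In practice one homography per camera is applied to every pixel (or the warp is done with an image-processing library), and the four warped images are composited using the known camera-to-device offsets.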
At 220, the mapping device 102 may determine the slot corner points in the top-view image and their image coordinates. Each slot corner point corresponds to one positioning point of a slot.
A slot in this application may be a parking space or another area for placing articles. For example, slots may be the different areas into which a warehouse is divided for placing goods, so the technical scheme disclosed here can also be used for intelligent loading and unloading in a large warehouse. For convenience of illustration, the following takes a parking lot as the application scenario and a parking space as the slot. Parking spaces include angled spaces, in-line (parallel) spaces, and non-in-line spaces; the description below takes non-in-line spaces as an example.
A positioning point of a slot in this application may be a vertex on the slot boundary line. For example, referring to FIG. 3, the boundary line of slot A includes line segments 301, 302, 305, and 308, and the positioning points of the slot include points 311, 312, 315, and 316. In this application, a slot boundary line is simply referred to as a slot line. For non-in-line parking spaces, the slot lines are all straight lines.
The slot corner points may include slot entry corner points and non-entry corner points. A slot entry corner point is a positioning point on the boundary line of the slot entrance; a non-entry corner point is any other slot corner point. For example, referring to FIG. 3, slot A is a non-in-line parking space whose entry corner points are points 315 and 316, and whose non-entry corner points are points 311 and 312.
In some embodiments, the mapping device 102 may determine the slot corner points in the top-view image with a deep neural network and then determine the image coordinates of each corner point. The deep neural network may be obtained by training an initial deep neural network on top-view images annotated with slot corner points. The slot corner points determined by the mapping device 102 may be all of the slot corner points in the top-view image or only part of them. The partial set may include only entry corner points, only non-entry corner points, or a mixture of both. For example, referring to FIG. 3, for a top-view image containing only slot A, the partial slot corner points determined by the mapping device 102 may include only points 315 and 316, only points 311 and 312, only points 311 and 315, or only points 312 and 316.
At 230, the mapping device 102 may determine the valid slots based on the image coordinates of the slot corner points and the first preset slot width.
The first preset slot width may be determined according to national standards, industry standards, practical experience, or the actual slot width. A valid slot is a slot that actually exists in the specific area (e.g., a parking lot or warehouse); it may be in an available or unavailable state (e.g., occupied by a parked vehicle or by goods).
Specifically, the mapping device 102 may determine candidate slots based on the image coordinates of the slot corner points and the first preset slot width, and then determine the valid slots among the candidates. More specifically, the mapping device 102 may determine a region of interest for each candidate slot and classify the candidate slots with a deep neural network based on the region of interest to determine the valid slots. This deep neural network may be obtained by training an initial neural network on annotated top-view images (e.g., top-view images annotated with slots); it is distinct from the corner-detection network described above. The region of interest is the area around the slot corner points, i.e., the area within a certain distance of the corner points.
Of course, the mapping device 102 may also use the deep neural network to classify other attributes of a candidate slot, such as its depth direction and whether it is occupied.
For purposes of illustration only, the slot corner points here are assumed to be slot entry corner points. The mapping device 102 may pair the entry corner points two by two according to their image coordinates and determine the candidate slots using the first preset slot width. For example, referring to FIG. 3, for a top-view image containing only entry corner points 315, 316, and 317, the mapping device 102 may pair them two by two, obtaining three pairings: (315, 316), (315, 317), and (316, 317). Then, using the first preset slot width, the mapping device 102 may exclude the pairing (315, 317) and obtain the candidate slot defined by corner points 315 and 316 and the candidate slot defined by corner points 316 and 317.
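The pairing-and-width test described above can be sketched as follows; the corner coordinates, slot width, and tolerance are illustrative values, not figures from the patent:

```python
import math

# Sketch of step 230: pair detected entry corner points two by two and
# keep only pairs whose separation matches a preset slot width, within a
# tolerance. Pairs that are too far apart (or too close) are excluded.

def candidate_slots(corners, slot_width, tol=0.3):
    """Return index pairs (i, j) whose distance is within tol of slot_width."""
    cands = []
    for i in range(len(corners)):
        for j in range(i + 1, len(corners)):
            d = math.dist(corners[i], corners[j])
            if abs(d - slot_width) <= tol:
                cands.append((i, j))
    return cands

# Three collinear entry corners 2.5 m apart with preset width 2.5 m:
# pairs (0, 1) and (1, 2) survive; the 5.0 m pair (0, 2) is rejected,
# mirroring the exclusion of pairing (315, 317) in the example above.
corners = [(0.0, 0.0), (2.5, 0.0), (5.0, 0.0)]
```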
It should be understood that when the slot corner points are non-entry corner points, or a combination of entry and non-entry corner points, the pairing of the corner points changes accordingly, as does the first preset slot width. Such variations are within the scope of the present application.
At 240, the mapping device 102 may match the valid slots with the slots in the map. The mapping device 102 may perform the following two steps:
In the first step, the mapping device 102 may match the corner points of a valid slot with the corner points of the slots in the map.
Specifically, for each corner point of the valid slot, the mapping device 102 may determine the distance between that corner point and each slot corner point in the map. The distance between two corner points may be the Euclidean distance.
When one or more slot corner points exist in the map, the mapping device 102 may obtain one or more distances when matching each corner point of the valid slot. The mapping device 102 then judges whether a distance satisfies a preset condition; if so, the corner point of the valid slot is matched with the corresponding slot corner point in the map. The preset condition is that the distance is within a preset threshold range and is the smallest of the one or more distances.
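A minimal sketch of this nearest-corner-within-threshold rule (threshold and coordinates are assumed values):

```python
import math

# Sketch of the matching rule in step 240: a detected corner matches the
# nearest map corner, provided the nearest distance is within a preset
# threshold; otherwise there is no match.

def match_corner(corner, map_corners, threshold=0.5):
    """Return the index of the matched map corner, or None."""
    best, best_d = None, float("inf")
    for idx, mc in enumerate(map_corners):
        d = math.dist(corner, mc)
        if d < best_d:
            best, best_d = idx, d
    if best is not None and best_d <= threshold:
        return best
    return None  # no map corner within the threshold (e.g., empty map)

map_corners = [(0.0, 0.0), (2.5, 0.0)]
```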
When the map contains no slot corner points, for example at the initial moment of mapping, the mapping device 102 obtains no distances when matching the corner points of a valid slot. In that case, the mapping device 102 determines that no slot corner point in the map matches the corner points of the valid slot, and the second step below need not be performed.
In the second step, the mapping device 102 may determine at least two pairs of mutually matched slot corner points. It should be appreciated that at least two corner points can define a slot; if a valid slot in the top-view image matches a slot in the map, the mapping device 102 can determine at least two pairs of matched corner points.
At 250, the mapping device 102 may determine its pose based on the matching result, i.e., based on the at least two pairs of mutually matched slot corner points.
Specifically, the mapping device 102 may determine a confidence for each pair of mutually matched corner points and determine its pose based on these confidences. The confidence is related to the distance between the matched corner points and the mapping device 102 and/or the number of times the matched corner points have been observed historically in the map. The greater the distance between the matched corner points (e.g., the corner points in the map) and the mapping device 102, the lower the confidence; the more times the matched corner points (e.g., the corner points in the map) have been observed historically, the higher the confidence.
In some embodiments, the mapping device 102 may determine the confidence of each pair of mutually matched corner points by formula (1):

Ck = Detk · f(Obk) · g(dk)    Formula (1)

where Ck is the confidence of the k-th pair of mutually matched slot corner points; Detk is the detection confidence of the k-th pair in the detection network (i.e., the deep neural network that detects slot corner points); Obk is the number of times the k-th pair has been observed historically; dk is the distance between the k-th pair of matched corner points and the mapping device; f is an increasing function of Obk; and g is a decreasing function of dk.
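Formula (1) fixes only the product form of the confidence; the concrete shapes of f and g below are assumptions chosen to satisfy the stated monotonicity (more historical observations raise the confidence, greater distance lowers it):

```python
import math

# Hedged sketch of formula (1): C_k = Det_k * f(Ob_k) * g(d_k).
# The exponential forms of f and g are illustrative choices, not taken
# from the patent; any increasing f and decreasing g would fit the text.

def confidence(det_k, ob_k, d_k, d0=10.0):
    f = 1.0 - math.exp(-ob_k)   # saturating: more observations -> higher
    g = math.exp(-d_k / d0)     # decaying: farther corner -> lower
    return det_k * f * g
```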
In some embodiments, the mapping device 102 may determine the pose of the mapping device 102 by equation (2). Equation (2) is as follows:
Twv = argmin over Twv of Σk Ck · ‖ Twv · Tvi · Pk_i − Pk ‖²    Formula (2)

Wherein Twv, the pose transformation matrix of the mapping device in the global map coordinate system, is the variable to be optimized; Tvi is the transformation matrix from the image coordinate system to the coordinate system of the mapping device; Pk_i are the coordinates of the k-th pair of mutually matched library position corner points on the image, and Pk are the coordinates of the k-th pair of mutually matched library position corner points in the map.
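Read as a confidence-weighted rigid alignment between matched corner pairs, Formula (2) has a closed-form planar solution. The sketch below is an illustrative 2D simplification, not the patent's actual solver; it assumes the image corners have already been transformed into the device coordinate system (i.e., Tvi has been applied).

```python
import math

def estimate_pose_2d(device_pts, map_pts, weights):
    """Weighted rigid alignment: find rotation theta and translation t
    minimising sum_k Ck * ||R @ Pk_i + t - Pk||^2 (cf. Formula (2))."""
    W = sum(weights)
    # weighted centroids of both point sets
    mx = sum(w * p[0] for w, p in zip(weights, device_pts)) / W
    my = sum(w * p[1] for w, p in zip(weights, device_pts)) / W
    nx = sum(w * q[0] for w, q in zip(weights, map_pts)) / W
    ny = sum(w * q[1] for w, q in zip(weights, map_pts)) / W
    # weighted cross-correlation terms of the centered points
    a = b = 0.0
    for w, p, q in zip(weights, device_pts, map_pts):
        px, py = p[0] - mx, p[1] - my
        qx, qy = q[0] - nx, q[1] - ny
        a += w * (px * qx + py * qy)
        b += w * (px * qy - py * qx)
    theta = math.atan2(b, a)
    c, s = math.cos(theta), math.sin(theta)
    # translation maps the device centroid onto the map centroid
    tx = nx - (c * mx - s * my)
    ty = ny - (s * mx + c * my)
    return theta, (tx, ty)
```

This is the weighted Kabsch/Procrustes solution in 2D; a full SLAM system would instead optimize the SE(3) pose jointly with the map, but the weighting by Ck plays the same role.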
In 260, the mapping device 102 may update the map based on its pose. For example, the mapping device 102 may compute new map points by triangulation and insert them into the map.
In some embodiments, the method for instant positioning and mapping may further include: the mapping device 102 determines the valid library positions of the overhead image that do not match any library position in the map, and inserts those unmatched valid library positions into the map.
As an example, at the initial moment of mapping, no library position exists in the map, so the valid library positions in the overhead image cannot match any library position in the map. At this point, the mapping device 102 may insert the valid library positions in the overhead image into the map.
In some embodiments, the method for instant positioning and mapping may further include: the mapping device 102 optimizes the map. The optimization includes at least one of:
First, at least some of the library position corner points in the map are fitted according to a preset position relationship. For 非-shaped parking spaces (rows of slots on both sides of an aisle, arranged like the character 非), the preset position relationship is that all library position corner points are distributed on several straight lines. Thus, the mapping device 102 may perform a straight-line fit to at least some of the library position corner points in the map.
By way of example, referring to FIG. 3, library position corner points 311, 312, 313, and 314 lie on one straight line; corner points 315, 316, 317, and 318 lie on another; and corner points 311 and 315, 312 and 316, 313 and 317, and 314 and 318 each lie on a straight line. The mapping device 102 may perform a straight-line fit on library position corner points 311 to 318 according to the above position relationships.
In some embodiments, the mapping device 102 may fit a straight line to the library corner points according to equation (3). Equation (3) is as follows:
(ni, di) = argmin Σj Cij · dist(Pij, line(ni, di))²    Formula (3)

Wherein ni and di are respectively the direction vector and the offset of the i-th straight line; Pij is the j-th library position corner point on the i-th straight line, and Cij is the confidence of the j-th library position corner point on the i-th straight line.
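A confidence-weighted straight-line fit in the spirit of Formula (3) can be sketched with a weighted 2D scatter matrix; the parameterization by principal-axis angle is an illustrative choice, and the function name is invented:

```python
import math

def fit_line_weighted(points, confidences):
    """Confidence-weighted straight-line fit: returns a unit direction
    vector and a point on the line minimising the weighted sum of
    squared perpendicular point-to-line distances (cf. Formula (3))."""
    W = sum(confidences)
    # weighted centroid -- the fitted line passes through it
    cx = sum(w * p[0] for w, p in zip(confidences, points)) / W
    cy = sum(w * p[1] for w, p in zip(confidences, points)) / W
    # weighted 2x2 scatter matrix of the centered points
    sxx = syy = sxy = 0.0
    for w, (x, y) in zip(confidences, points):
        dx, dy = x - cx, y - cy
        sxx += w * dx * dx
        syy += w * dy * dy
        sxy += w * dx * dy
    # angle of the principal axis (direction of largest spread)
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return (math.cos(theta), math.sin(theta)), (cx, cy)
```

High-confidence corner points pull the fitted line toward themselves, so a noisy low-confidence detection distorts the line less.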
Second, library position corner points of the map whose mutual position difference is within a preset threshold range are merged by weighting. When the position difference between two or more library position corner points in the map is small, the mapping device 102 may perform a weighted merge of those corner points.
In some embodiments, the mapping device 102 may weight the bin corner points according to equation (4). Equation (4) is as follows:
Pmerge = Σi Ci_norm · Pi    Formula (4)

Wherein Pmerge is the merged corner point coordinate; Pi is the coordinate of the i-th corner point; and Ci_norm, the weight of the i-th corner point, is its confidence normalized over the confidences of all I corner points to be merged.
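Formula (4) is a confidence-weighted average of nearby corner observations; a direct sketch (the function name is illustrative):

```python
def merge_corners(points, confidences):
    """Weighted merge of nearby corner observations, Formula (4):
    Pmerge = sum_i Ci_norm * Pi with Ci_norm = Ci / sum(C)."""
    total = sum(confidences)
    x = sum(c * p[0] for c, p in zip(confidences, points)) / total
    y = sum(c * p[1] for c, p in zip(confidences, points)) / total
    return (x, y)
```

The merged point is pulled toward the observations with higher confidence rather than the plain geometric mean of the cluster.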
Third, the direction vectors of the library bit lines in the map are optimized. In some embodiments, the library bit lines are in a parallel or intersecting relationship, and the mapping device 102 may optimize their direction vectors based on that relationship (e.g., a particular angle of intersection). The case where the library bit lines are mutually parallel or perpendicular is described as an example with reference to FIG. 3.
Referring to FIG. 3, the library bit lines 301, 302, 303, and 304 are parallel to one another, forming a first set of library bit lines; the library bit lines 305 to 310 are parallel to one another, forming a second set. Any library bit line in the first set is perpendicular to any library bit line in the second set. The mapping device 102 may optimize the direction vectors of the library bit lines in the map according to these parallel and/or perpendicular relationships.
In some embodiments, mapping device 102 may optimize the direction vectors of the library bit lines in the map according to equation (5), equation (6), and equation (7). Equations (5) to (7) are as follows:
n̄i = Σj Cj_line · nj + Σk Ck_line · nk⊥    Formula (5)

ni_op = n̄i / ‖n̄i‖    Formula (6)

nk⊥ = (−nk,y , nk,x)    Formula (7)

Wherein ni_op is the desired unit direction vector of the i-th library bit line; nj are the unit direction vectors of all library bit lines that may be parallel to it, and nk are the unit direction vectors of all library bit lines that may be perpendicular to it; nk⊥ is the orthogonal vector of nk, obtained in Formula (7) by rotating nk by 90 degrees. The parallel and perpendicular groups can be obtained by a clustering method. Cj_line and Ck_line are the confidences of the corresponding library bit lines, which are related to the confidences of all library position corner points present on each line. Because nk may be perpendicular to ni_op, its contribution to ni_op is generated through its orthogonal vector nk⊥.
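Formulas (5) to (7) amount to a confidence-weighted average of the parallel lines' directions and the 90-degree-rotated directions of the perpendicular lines. The sketch below assumes all input unit vectors are consistently oriented (opposite-pointing but parallel vectors would partially cancel); the patent does not state how orientation is normalized, so that is an assumption.

```python
import math

def optimize_direction(parallel, perpendicular, w_par, w_perp):
    """Optimised unit direction of a library bit line: weighted sum of
    parallel directions plus 90-degree-rotated perpendicular directions,
    then normalised (cf. Formulas (5)-(7))."""
    sx = sy = 0.0
    for w, (x, y) in zip(w_par, parallel):
        sx += w * x
        sy += w * y
    for w, (x, y) in zip(w_perp, perpendicular):
        sx += w * (-y)   # rotate the perpendicular direction by 90 degrees
        sy += w * x
    norm = math.hypot(sx, sy)
    return (sx / norm, sy / norm)
```

Lines whose corner points carry high confidence therefore dominate the shared direction estimate for the whole parallel group.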
Fourth, the library position corner points of the library positions in the map are optimized based on a second preset library position width.
The second preset width may be determined according to national standards, industry standards, practical experience, or actual parking space widths. The second preset width and the first preset width above may be the same or different.
Specifically, the mapping device 102 may adjust a distance between two library location corner points in the map according to the second preset library location width, so as to optimize positions of the two library location corner points.
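The width adjustment can be sketched as moving the two corner points symmetrically about their midpoint until their distance equals the preset width; whether the patent adjusts symmetrically is not stated, so symmetry is an assumption here.

```python
import math

def enforce_slot_width(p1, p2, preset_width):
    """Move two corner points of one slot symmetrically about their
    midpoint so that their distance equals the preset slot width."""
    mx, my = (p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    ux, uy = dx / d, dy / d          # unit vector from p1 toward p2
    h = preset_width / 2.0
    return (mx - h * ux, my - h * uy), (mx + h * ux, my + h * uy)
```

The midpoint and the entrance-line direction are preserved; only the separation of the two corner points changes.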
In some embodiments, the mapping device 102 may perform global optimization on the map by combining the above four optimization methods. As an example, the mapping device 102 may globally optimize the map according to equation (8).
Formula (8) jointly minimizes, over the variables to be optimized, the sum of the cost terms given by Formulas (9) to (14). Wherein Nop is the set of all library bit line direction vectors; Dop is the set of all library bit line offsets; Pop is the set of all library position corner points; and Twv_op is the set of all pose key frames of the mapping device.
The above four sets are all variables to be optimized. In some embodiments, the mapping device 102 may optimize them subject to the cost terms of Formulas (9) to (14).
Formulas (9) to (11) ensure that the optimized library bit lines, library position corner points, and poses of the mapping device remain close to their values before optimization, so that no sudden change is produced. Formula (12) is the width constraint of the library position: Pr1_op and Pr2_op are two corner points belonging to the same library position (e.g., its two entrance corner points), and Lot_W is the preset library position width, giving a term of the form (‖Pr1_op − Pr2_op‖ − Lot_W)². Formula (13) is the projection error constraint between the pose of the mapping device and the observed library position corner points, ensuring that the optimized pose and corner points satisfy the observation and projection relationship. Formula (14) ensures that a corner point Pij_op belonging to the i-th library bit line still lies on that line after optimization.
With reference to the above description, during mapping and optimization, the mapping device 102 may operate on all library position corner points or only on some of them. For example, it may operate only on the library position entrance corner points. It should be appreciated that in that case, when the mapping device 102 completes mapping and optimization, each valid library position has only its two entrance corner points on the map rather than the full four corner points. The mapping device 102 may then recover the complete four corner points using the library position depth direction, the mutually perpendicular or parallel relationships between library bit lines, and a preset library position depth value.
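Recovering the two rear corner points from the two entrance corner points and a preset depth can be sketched as follows, assuming a rectangular (non-slanted) slot whose depth direction is perpendicular to the entrance line. The side on which the slot lies (the sign of the normal) must come from map context; the code simply picks one side.

```python
import math

def recover_full_corners(entry1, entry2, depth):
    """Recover the two rear corner points of a slot from its two
    entrance corners and a preset depth, assuming the depth direction
    is perpendicular to the entrance line."""
    dx, dy = entry2[0] - entry1[0], entry2[1] - entry1[1]
    d = math.hypot(dx, dy)
    nx, ny = -dy / d, dx / d        # unit normal to the entrance line
    rear1 = (entry1[0] + depth * nx, entry1[1] + depth * ny)
    rear2 = (entry2[0] + depth * nx, entry2[1] + depth * ny)
    return rear1, rear2
```

For slanted (diagonal) slots the same idea applies, but the depth direction would be rotated by the known slot angle instead of being the perpendicular.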
The above takes the 非-shaped parking space as an example to describe the method for instant positioning and mapping in detail with reference to FIG. 3. Of course, the parking space may be of a type other than the 非-shaped parking space. Depending on the type of library position, some changes can be made to the instant positioning and mapping method in combination with the actual situation. It should be understood that such changes require no inventive effort and still fall within the scope of the present application.
For example, when the boundary line of the library position is a rounded curve (e.g., a circle), no vertex exists on the boundary line. The mapping device 102 may determine a point in a specific direction on the boundary line as an anchor point, and thereby determine a library position corner point. Meanwhile, the mapping device 102 may fit the library position corner points according to the position relationship that they all lie on the circle.
For another example, when the parking space is a 一-shaped (parallel) parking space, its entrance differs from that of a 非-shaped parking space, and the corresponding preset library position width also needs to be modified accordingly.
For another example, when the library position is a slanted (diagonal) parking space, the intersecting library position boundary lines are not perpendicular to each other. The mapping device 102 may optimize the direction vectors of the library bit lines in the map according to the angle between the intersecting boundary lines.
In conclusion, upon reading this detailed disclosure, those skilled in the art will appreciate that the foregoing is presented by way of example only and is not limiting. Although not explicitly stated here, the present application is intended to cover various reasonable variations, adaptations, and modifications of the embodiments described herein. Such alterations, improvements, and modifications are suggested by this disclosure and are within the spirit and scope of its exemplary embodiments.
Furthermore, certain terminology has been used in this application to describe embodiments of the disclosure. For example, "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined as suitable in one or more embodiments of the disclosure.
It should be appreciated that in the foregoing description of embodiments of the disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of it. This is not to be taken as an admission that all of those features are essential; a person skilled in the art may, on reading the present application, extract some of them as separate embodiments. That is, embodiments in the present application may also be understood as an integration of multiple sub-embodiments, and each sub-embodiment may rely on fewer than all features of a single foregoing disclosed embodiment.
In some embodiments, numbers expressing quantities or properties used to describe and claim certain embodiments of the application are to be understood as being modified in certain instances by the term "about", "approximately" or "substantially". For example, "about," "approximately," or "substantially" can mean a ± 20% variation of the value it describes, unless otherwise specified. Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the embodiments of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as possible.
Each patent, patent application, publication of a patent application, and other material, such as articles, books, specifications, publications, and documents cited herein is hereby incorporated by reference, except for any prosecution history associated therewith, any of the same that is inconsistent with or conflicting with this document, and any of the same that may have a limiting effect on the broadest scope of the claims now or later associated with this document. For example, if there is any inconsistency or conflict between the description, definition, and/or use of a term associated with any of the incorporated materials and that associated with this document, the description, definition, and/or use of the term in this document shall prevail.
Finally, it should be understood that the embodiments disclosed herein are illustrative of the principles of the embodiments of the present application. Other modified embodiments are also within the scope of the present application. Accordingly, the disclosed embodiments are presented by way of example only and are not limiting. Those skilled in the art may implement the invention in alternative configurations according to the embodiments in the present application. Thus, the embodiments of the present application are not limited to those precisely described herein.
Claims (11)
1. A method for instant positioning and mapping, comprising:
acquiring a top view image;
determining the library position corner points in the top view image based on a deep neural network, wherein each library position corner point corresponds to one positioning point of a library position;
determining the image coordinates of the corner points of the library;
determining an effective library position based on the image coordinates of the library position corner points and the width of a first preset library position;
matching the library position angular points of the effective library positions with the library position angular points of the library positions in the map;
determining at least two pairs of mutually matched library position angular points;
determining the confidence of each pair of mutually matched library-site corners based on the confidence of each pair of mutually matched library-site corners in the deep neural network, the distance between each pair of mutually matched library-site corners and a mapping device, and the number of times each pair of mutually matched library-site corners is historically observed in the map;
calculating a pose transformation matrix based on the following formula:
Twv = argmin over Twv of Σk Ck · ‖ Twv · Tvi · Pk_i − Pk ‖²
wherein Twv is the pose transformation matrix of the mapping device in the global map coordinate system; Ck is the determined confidence of the k-th pair of mutually matched library position corner points; Tvi is the transformation matrix from the image coordinate system to the coordinate system of the mapping device; Pk_i are the image coordinates of the k-th pair of mutually matched library position corner points, and Pk are the coordinates of the k-th pair of mutually matched library position corner points on the map;
determining a pose of the mapping device based on the pose transformation matrix;
and updating the map based on the pose of the mapping equipment.
2. The method of instant positioning and mapping of claim 1, wherein said obtaining a top view image comprises:
acquiring at least one visual image;
converting the at least one visual image into at least one sub-top-view image by inverse perspective transformation; and
stitching the at least one sub-top-view image into the top view image.
3. The method of instant positioning and mapping of claim 1, wherein the library position corner point is a library position entrance corner point.
4. The method of instant positioning and mapping of claim 1, wherein the determining a valid library position based on the image coordinates of the library position corner points and a first preset library position width comprises:
determining candidate library positions based on the image coordinates of the library position corner points and the first preset library position width, and determining the valid library positions among the candidate library positions.
5. The method of instant positioning and mapping of claim 4, wherein said determining the valid library positions among the candidate library positions comprises:
determining a region of interest of the candidate library position;
and classifying the candidate library positions through a deep neural network based on the region of interest to determine the effective library positions.
6. The method of instant positioning and mapping of claim 1, wherein matching the library position corner points of the valid library position with the library position corner points of the library positions in the map comprises:
for each bin corner point of the active bin,
determining the distance between the library position corner point and the library position corner point of the library position in the map;
and judging whether the distance meets a preset condition, if so, matching the library position corner point of the effective library position with the library position corner point of the library position in the map.
7. The method of instant positioning and mapping of claim 6, wherein the preset condition is that the distance is within a preset threshold and is the smallest among the distances within the preset threshold.
8. The method of instant positioning and mapping of claim 1, wherein the confidence of each pair of mutually matched library position corner points is inversely related to the distance and directly related to the number of times.
9. The method of instant positioning and mapping of claim 1, further comprising:
determining, among the valid library positions of the overhead image, those that do not match any library position in the map; and
inserting the unmatched valid library positions into the map.
10. The method of instant positioning and mapping of claim 1, further comprising optimizing the map, the optimizing comprising at least one of:
fitting at least part of the library position angle points in the map according to a preset position relation;
carrying out weight combination on the library position corner points of which the mutual position difference is within a preset threshold range in the map;
optimizing a direction vector of a library bit line in the map;
and optimizing the library position corner points of the library positions in the map based on a second preset library position width.
11. An instant positioning and mapping device, comprising:
at least one image acquisition device port;
at least one storage device comprising a set of instructions; and
at least one processor in communication with the at least one storage device, wherein the at least one processor, when executing the set of instructions, causes the instant positioning and mapping device to perform the method of any of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910400102.5A CN110132278B (en) | 2019-05-14 | 2019-05-14 | Method and device for instant positioning and mapping |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110132278A CN110132278A (en) | 2019-08-16 |
CN110132278B true CN110132278B (en) | 2021-07-02 |
Family
ID=67573934
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910400102.5A Active CN110132278B (en) | 2019-05-14 | 2019-05-14 | Method and device for instant positioning and mapping |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110132278B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111337947B (en) * | 2020-05-18 | 2020-09-22 | 深圳市智绘科技有限公司 | Instant mapping and positioning method, device, system and storage medium |
CN111862673B (en) * | 2020-06-24 | 2021-10-15 | 北京易航远智科技有限公司 | Parking lot vehicle self-positioning and map construction method based on top view |
CN115407355B (en) * | 2022-11-01 | 2023-01-10 | 小米汽车科技有限公司 | Library position map verification method and device and terminal equipment |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105469405B (en) * | 2015-11-26 | 2018-08-03 | 清华大学 | Positioning and map constructing method while view-based access control model ranging |
CN109389013B (en) * | 2017-08-10 | 2024-02-09 | 纵目科技(上海)股份有限公司 | Parking space combination algorithm and medium based on parking space main direction and template response points |
CN107856667B (en) * | 2017-11-08 | 2020-02-14 | 科大讯飞股份有限公司 | Parking assist system and method |
CN108682027A (en) * | 2018-05-11 | 2018-10-19 | 北京华捷艾米科技有限公司 | VSLAM realization method and systems based on point, line Fusion Features |
CN108875911B (en) * | 2018-05-25 | 2021-06-18 | 同济大学 | Parking space detection method |
CN108564814B (en) * | 2018-06-06 | 2020-11-17 | 清华大学苏州汽车研究院(吴江) | Image-based parking lot parking space detection method and device |
CN109243289B (en) * | 2018-09-05 | 2021-02-05 | 武汉中海庭数据技术有限公司 | Method and system for extracting parking spaces of underground garage in high-precision map manufacturing |
- 2019-05-14: CN application CN201910400102.5A granted as patent CN110132278B (status: Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110132278B (en) | Method and device for instant positioning and mapping | |
US9483703B2 (en) | Online coupled camera pose estimation and dense reconstruction from video | |
US20140072217A1 (en) | Template matching with histogram of gradient orientations | |
Gong et al. | A Frustum-based probabilistic framework for 3D object detection by fusion of LiDAR and camera data | |
US10460472B2 (en) | System and method for model adaptation | |
Hanek et al. | The contracting curve density algorithm: Fitting parametric curve models to images using local self-adapting separation criteria | |
CN110097064B (en) | Picture construction method and device | |
Son et al. | A multi-vision sensor-based fast localization system with image matching for challenging outdoor environments | |
KR20210074163A (en) | Joint detection and description systems and methods | |
CN107194896B (en) | Background suppression method and system based on neighborhood structure | |
CN111915657A (en) | Point cloud registration method and device, electronic equipment and storage medium | |
CN109543694A (en) | A kind of visual synchronization positioning and map constructing method based on the sparse strategy of point feature | |
CN111753858B (en) | Point cloud matching method, device and repositioning system | |
CN113822996B (en) | Pose estimation method and device for robot, electronic device and storage medium | |
CN115147576A (en) | Underwater robot docking monocular vision guiding method based on key characteristics | |
WO2024147898A1 (en) | Parking space detection method and system | |
Sohn et al. | Sequential modelling of building rooftops by integrating airborne LiDAR data and optical imagery: preliminary results | |
CN113673288A (en) | Idle parking space detection method and device, computer equipment and storage medium | |
US9259840B1 (en) | Device and method to localize and control a tool tip with a robot arm | |
Liu et al. | Online object-level SLAM with dual bundle adjustment | |
Stentoumis et al. | Implementing an adaptive approach for dense stereo-matching | |
CN108917768B (en) | Unmanned aerial vehicle positioning navigation method and system | |
CN113222999A (en) | Remote sensing target segmentation and automatic splicing method based on voting mechanism | |
CN118262336B (en) | Indoor parking lot positioning method and system based on visual SLAM | |
CN117542008B (en) | Semantic point cloud fusion automatic driving scene identification method and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |