CN110097064A - Mapping method and device - Google Patents

Mapping method and device

Info

Publication number
CN110097064A
CN110097064A (Application CN201910400106.3A)
Authority
CN
China
Prior art keywords
map
library
semantic feature
mapping
feature points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910400106.3A
Other languages
Chinese (zh)
Other versions
CN110097064B (en)
Inventor
何潇
张丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Uisee Technologies Beijing Co Ltd
Original Assignee
Uisee Technologies Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Uisee Technologies Beijing Co Ltd filed Critical Uisee Technologies Beijing Co Ltd
Priority to CN201910400106.3A priority Critical patent/CN110097064B/en
Publication of CN110097064A publication Critical patent/CN110097064A/en
Application granted granted Critical
Publication of CN110097064B publication Critical patent/CN110097064B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00, with correlation of navigation data from several sources, e.g. map or contour matching
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments specially adapted for navigation in a road network
    • G01C21/28 - Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 - Map- or contour-matching
    • G01C21/32 - Structuring or formatting of map data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 - Local feature extraction by matching or filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

This application relates to a mapping method and device. The mapping method includes: acquiring a map, the map including semantic feature points; and updating the position information of at least some of the semantic feature points in the map based on a preset positional relationship, to obtain an optimized map.

Description

Mapping method and device
Technical Field
The present application relates to the field of computer vision, and in particular to the fields of visual positioning and mapping.
Background
Simultaneous Localization and Mapping (SLAM) is a technology that tracks the motion of a robot in real time while simultaneously building a map of the surrounding environment, thereby achieving positioning and navigation.
At present, SLAM technology is applied in many navigation and positioning fields, but maps built by SLAM methods generally suffer from low precision and positioning errors. Therefore, it is necessary to provide a method for optimizing a map.
Disclosure of Invention
The application aims to provide a mapping method. The method can optimize a map using semantic information of objects (for example, for parking slots: the slot width, the positional relationship between slots, and the positional relationship between slot corner points), and the optimized map can in turn improve the mapping process.
One aspect of the present application provides a mapping method, comprising: acquiring a map, the map comprising semantic feature points; and updating the position information of at least some of the semantic feature points in the map based on a preset positional relationship, to obtain an optimized map.
In some embodiments, the semantic feature points in the map are corner points of parking slots. The method further comprises: based on the slot depth direction and a preset slot depth, determining the position information of the remaining slot corner points in the optimized map from the updated position information of the slot corner points.
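As a minimal illustration of recovering the remaining corner points from the entrance corners, the depth direction, and a preset depth, consider a rectangular slot whose depth direction is the normal of its entrance line. The function name and the choice of normal side are illustrative assumptions, not part of the patent.

```python
import math

def rear_corners(p1, p2, depth):
    """Given the two entrance corner points of a rectangular parking slot
    (as (x, y) tuples) and a preset slot depth, compute the two rear
    corner points along the depth direction, taken here as the unit
    normal of the entrance line (the sign of the normal, i.e. which
    side of the entrance the slot lies on, is an assumption)."""
    ex, ey = p2[0] - p1[0], p2[1] - p1[1]   # entrance direction
    norm = math.hypot(ex, ey)
    nx, ny = -ey / norm, ex / norm          # unit normal = depth direction
    r1 = (p1[0] + depth * nx, p1[1] + depth * ny)
    r2 = (p2[0] + depth * nx, p2[1] + depth * ny)
    return r1, r2
```

For an entrance from (0, 0) to (2, 0) with depth 5, this places the rear corners at (0, 5) and (2, 5).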
In some embodiments, the preset positional relationship includes that semantic feature points in the map whose mutual position difference is within a preset threshold range correspond to the same semantic feature point. Updating the position information of at least some semantic feature points in the map then comprises: determining the semantic feature points whose mutual position difference is within the preset threshold range, and merging them.
In some embodiments, the preset positional relationship includes that at least some semantic feature points in the map are constrained by the same positional relationship function.
In some embodiments, the positional relationship function is a straight-line function. Updating the position information of at least some semantic feature points in the map based on this function comprises: performing a straight-line fit on those semantic feature points based on the straight-line function.
In some embodiments, the preset positional relationship includes that at least some semantic feature points in the map are distributed on at least two straight lines, and the at least two straight lines are parallel or intersect. Updating the position information then comprises: optimizing the direction vector of at least one of the two lines based on the angle at which they are parallel or intersect.
In some embodiments, the preset positional relationship includes that at least some semantic feature points in the map are distributed on one straight line, and the distance between at least two of these feature points is a preset distance. Updating the position information then comprises: updating the position information of at least some semantic feature points in the map based on the preset distance.
In some embodiments, the map is created by: acquiring a top-view image; matching semantic feature points in the top-view image with semantic feature points in the map; determining the pose of the mapping device based on the matching result; and updating the map based on the pose of the mapping device.
In some embodiments, the method further comprises: back-projecting at least some semantic feature points in the optimized map onto the top-view image, and correcting the position information of the semantic feature points in the top-view image.
In some embodiments, the method further comprises: determining the number of times an object in the optimized map has been observed; judging whether the object has been observed a preset number of times; and determining, based on the judgment result, whether the detection of the object in the top-view image is correct.
In some embodiments, the semantic feature points in the top-view image and in the map are corner points of parking slots, each slot including slot lines on which the corner points lie. The method further comprises: judging whether a slot corner point in the top-view image lies on a slot line in the optimized map, and/or whether the corner point is associated with two or fewer slots in the optimized map; and determining, based on the judgment result, whether the detection of the slot corner point in the top-view image is correct.
In some embodiments, the method further comprises: counting the initial detection states of objects on the same positional relationship function in the optimized map, and determining the final detection state of those objects according to the counting result; and correcting the detection states of the objects in the top-view image according to the final detection state.
In some embodiments, the method further comprises: counting the number of times an object in the optimized map appears in each detection state, and determining the final detection state of the object according to the counting result; and correcting the detection state of the object in the top-view image according to the final detection state.
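The state correction by counting described above amounts to a majority vote over the recorded detection states. A minimal sketch (the function name and state labels are illustrative assumptions):

```python
from collections import Counter

def final_state(observed_states):
    """Majority vote over the detection states recorded for one object
    in the optimized map: the most frequently observed state becomes
    the final state used to correct the top-view image detections."""
    return Counter(observed_states).most_common(1)[0][0]
```

For example, an object observed as "occupied", "free", "occupied" would be assigned the final state "occupied".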
One aspect of the present application provides a mapping device, comprising: at least one port for an image acquisition device; at least one storage device comprising a set of instructions; and at least one processor in communication with the at least one storage device, wherein the at least one processor, when executing the set of instructions, is configured to cause the mapping device to perform the mapping method.
Additional features of the present application will be set forth in part in the description that follows, and in part will become apparent to those of ordinary skill in the art from this disclosure and its figures and examples. The inventive aspects of the present application can be fully understood through the practice or use of the methods, instrumentalities, and combinations set forth in the detailed examples discussed below.
Drawings
The following drawings describe in detail exemplary embodiments disclosed in the present application. Wherein like reference numerals represent similar structures throughout the several views of the drawings. Those of ordinary skill in the art will understand that the present embodiments are non-limiting, exemplary embodiments and that the accompanying drawings are for illustrative and descriptive purposes only and are not intended to limit the scope of the present disclosure, as other embodiments may equally fulfill the inventive intent of the present application. It should be understood that the drawings are not to scale. Wherein:
FIG. 1 illustrates a mapping system according to some embodiments of the present application;
FIG. 2 illustrates a flow diagram of a mapping method according to some embodiments of the present application;
FIG. 3 illustrates a flow diagram of a method of optimizing a map according to some embodiments of the present application;
FIG. 4 illustrates a schematic view of a parking lot according to some embodiments of the present application.
Detailed Description
The following description is presented to enable any person skilled in the art to make and use the present disclosure, and is provided in the context of a particular application and its requirements. Various local modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. For example, as used herein, the singular forms "a", "an" and "the" may include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "comprising," and/or "including," when used in this specification, mean that the associated integers, steps, operations, elements, and/or components are present, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof in the system/method.
These and other features of the present disclosure, as well as the operation and function of the related elements of the structure, and the combination of parts and economies of manufacture, may be particularly improved upon in view of the following description. All of which form a part of the present disclosure, with reference to the accompanying drawings. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the disclosure.
The flow diagrams used in this disclosure illustrate system-implemented operations according to some embodiments of the disclosure. It should be clearly understood that the operations of the flow diagrams may be performed out of order. Rather, the operations may be performed in reverse order or simultaneously. In addition, one or more other operations may be added to the flowchart. One or more operations may be removed from the flowchart.
One aspect of the present application relates to a mapping method. Specifically, the method comprises a map building method and a map optimizing method. The map building method may be an ordinary SLAM method, or a method that uses semantic information of objects. The map optimizing method optimizes the map according to preset positional relationships between semantic feature points. The mapping method disclosed in this application can be applied in many fields, for example mapping parking lots, warehouses, factories, or any scene with specific positional relationships among objects.
FIG. 1 illustrates a mapping system according to some embodiments of the present application.
The mapping system 100 may acquire a visual image and perform a mapping method. The mapping method can refer to the description of fig. 2 and fig. 3. As shown, the mapping system 100 may include an image acquisition apparatus 101 and a mapping device 102.
The image acquisition device 101 is used to capture visual images of the surrounding environment. The image acquisition device 101 may be a camera, such as a monocular camera, a binocular camera, a stereo camera, a fisheye camera, a catadioptric camera, or a panoramic imaging camera. The image acquisition device 101 may be mounted on the mapping device 102.
The mapping device 102 is an exemplary computing device that can execute the mapping method. As an example, the mapping device 102 may be a vehicle. When the image acquisition device 101 is mounted on the vehicle, it may be mounted on at least one of the front, rear, left side, and right side of the vehicle. Accordingly, the number of image acquisition devices 101 may be one or more.
In some embodiments, the mapping device 102 may include a communication port 150 to facilitate data communication. The mapping device 102 may also include a processor 120, in the form of one or more processors, for executing computer instructions. The computer instructions may include, for example, routines, programs, objects, components, data structures, procedures, modules, and functions that perform the particular functions described herein. For example, the processor 120 may acquire a top-view image and construct a map based on it. As another example, the processor 120 may update the position information of semantic feature points in the map according to the preset positional relationship, thereby optimizing the map.
In some embodiments, processor 120 may include one or more hardware processors, such as microcontrollers, microprocessors, Reduced Instruction Set Computers (RISC), Application Specific Integrated Circuits (ASICs), application specific instruction-set processors (ASIPs), Central Processing Units (CPUs), Graphics Processing Units (GPUs), Physical Processing Units (PPUs), microcontroller units, Digital Signal Processors (DSPs), Field Programmable Gate Arrays (FPGAs), Advanced RISC Machines (ARMs), Programmable Logic Devices (PLDs), any circuit or processor capable of executing one or more functions, or the like, or any combination thereof.
In some embodiments, the mapping device 102 may include an internal communication bus 110, program storage, and different forms of data storage (e.g., a disk 170, Read Only Memory (ROM) 130, or Random Access Memory (RAM) 140). The mapping device 102 may also include program instructions stored in the ROM 130, RAM 140, and/or other types of non-transitory storage media, to be executed by the processor 120. The methods and/or processes of the present application may be implemented as such program instructions. The mapping device 102 also includes I/O components 160 that support input/output between the computer and other components (e.g., user interface elements). The mapping device 102 may also receive programming and data via network communications.
For illustrative purposes only, only one processor is depicted in the mapping device 102. It should be noted, however, that the mapping device 102 may also include multiple processors; thus, operations and/or method steps disclosed herein may be performed by one processor or by a combination of processors. For example, if the processor 120 of the mapping device 102 is described as performing steps A and B, it should be understood that steps A and B may also be performed by two different processors, jointly or separately (e.g., a first processor performing step A and a second processor performing step B, or both processors jointly performing steps A and B).
FIG. 2 illustrates a flow diagram of a mapping method according to some embodiments of the present application. The flow 200 may be implemented as a set of instructions in a non-transitory storage medium in the mapping device 102. The mapping device 102 may execute the set of instructions and accordingly perform the steps of flow 200.
The operations of flow 200 presented below are intended to be illustrative and not limiting. In some embodiments, flow 200 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed herein. Further, the order of the operations shown in FIG. 2 and described below is not intended to be limiting.
At 210, the mapping apparatus 102 may acquire a top view image.
The mapping device 102 may acquire the top-view image directly, or may obtain it indirectly through the following three steps:
In a first step, the mapping device 102 may acquire at least one visual image. The at least one visual image may be acquired by one or more image acquisition devices 101 at the same time, and each visual image may correspond to the same or a different scene, e.g., scenes from different local areas of a parking lot.
As an example, the mapping device 102 may acquire four visual images, captured at the same time by four image acquisition devices 101 mounted on the front, rear, left, and right sides of the mapping device 102.
In a second step, the mapping device 102 may convert the at least one visual image into at least one sub top-view image by inverse perspective transformation. Continuing the example from the first step, the mapping device 102 may convert the four visual images into four sub top-view images, which correspond one-to-one to the visual images.
In a third step, the mapping device 102 may stitch the at least one sub top-view image into the top-view image. Continuing the example from the first and second steps, the mapping device 102 may transform the four sub top-view images into the same image coordinate system using the known positional relationships between the four image acquisition devices 101 and the mapping device 102, and then stitch them into the final top-view image. It should be appreciated that the stitched top-view image has a greater field of view than any of the sub top-view images.
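The third step can be sketched as follows. For brevity the sub top-views are represented here as sparse dicts of pixel coordinates rather than dense images, and each camera's placement is reduced to a 2D offset; a real implementation would warp and blend dense images (e.g., with an image-processing library). These simplifications are assumptions for illustration only.

```python
def stitch(sub_views, offsets):
    """Stitch sub top-view 'images' (sparse dicts mapping (x, y) pixel
    coordinates to values) into one top view, by moving every sub-view
    into a common coordinate system using the known offset of its
    camera relative to the mapping device."""
    top = {}
    for view, (dx, dy) in zip(sub_views, offsets):
        for (x, y), v in view.items():
            # first writer wins where sub-views overlap
            top.setdefault((x + dx, y + dy), v)
    return top
```

Stitching two single-pixel sub-views with offsets (0, 0) and (5, 0) yields one combined view covering both pixel positions, illustrating the larger field of view of the result.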
In 220, the mapping device 102 may match semantic feature points in the top view image with semantic feature points in the map.
Semantic feature points are feature points that carry semantics. For example, when the mapping device 102 maps a parking lot, the semantic feature points may be slot corner points or slot entrance corner points. When the mapping device 102 maps a warehouse, the semantic feature points may be anchor points of specific areas of the warehouse. When the mapping device 102 maps a surveyed region, the semantic feature points may be corner points of buildings. In the following, semantic feature points are described in more detail using parking-lot mapping as the example.
A slot corner point is an anchor point of a parking slot, and may be a vertex on the slot's boundary lines. For example, referring to FIG. 4, the boundary lines of parking slot A include line segment 401, line segment 402, line segment 405, and line segment 408, and its anchor points include point 411, point 412, point 415, and point 416. In this application, a slot boundary line is simply referred to as a slot line.
A slot entrance corner point is a type of slot corner point, namely an anchor point on the boundary line at the slot entrance. For example, referring to FIG. 4, the entrance corner points of parking slot A are point 415 and point 416.
For illustrative purposes, the mapping device 102 may match the slot entrance corner points in the top-view image with the slot entrance corner points in the map, and thereby determine the pairs of entrance corner points that match between the two.
In some embodiments, the mapping device 102 may determine matching entrance corner points from the distance between a corner point in the top-view image and a corner point in the map. When the map contains one or more entrance corner points, the mapping device 102 obtains one or more such distances for each entrance corner point in the top-view image. The mapping device 102 then checks whether the determined distance satisfies a preset condition; if so, the corresponding corner points in the top-view image and the map are matched with each other. The preset condition is that the distance is within a preset threshold range and is the smallest among the one or more distances.
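The matching rule just described (nearest map corner wins, but only within the preset threshold) can be sketched as follows; the function name is an illustrative assumption.

```python
import math

def match_corner(corner, map_corners, threshold):
    """Match one slot entrance corner from the top-view image against the
    map: take the map corner at minimal Euclidean distance, and accept
    the match only if that distance is within the preset threshold.
    Returns the matched map corner, or None if no corner qualifies."""
    best, best_d = None, float("inf")
    for mc in map_corners:
        d = math.dist(corner, mc)
        if d < best_d:
            best, best_d = mc, d
    return best if best_d <= threshold else None
```

With a threshold of 0.5, a corner at the origin matches a map corner at (0.1, 0) but not one at (3, 3).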
In 230, the mapping device 102 may determine the pose of the mapping device based on the matching result.
The matching result consists of the semantic feature points in the top-view image that match feature points in the map. Continuing the example from step 220, the matching result is the slot entrance corner points in the top-view image that match corner points in the map. Typically, the matching result contains at least two pairs of mutually matched entrance corner points, since at least two entrance corner points are needed to determine a slot. The mapping device 102 may determine the pose of the mapping device from the mutually matched corner points.
In some embodiments, the mapping device 102 may determine a confidence for each pair of mutually matched entrance corner points, and determine the pose of the mapping device based on these confidences. The confidence is related to the distance between the matched corner points and the mapping device, and/or to the number of times the matched corner points have been historically observed in the map. The greater the distance between matched corner points (e.g., corner points in the map) and the mapping device, the lower the confidence; the more times matched corner points (e.g., corner points in the map) have been historically observed, the higher the confidence.
In some embodiments, the mapping device 102 may determine the confidence of each pair of mutually matched slot corner points by formula (1):
C_k = Det_k · f(Ob_k) · g(d_k)    Formula (1)
where C_k is the confidence of the k-th pair of mutually matched slot corner points; Det_k is the confidence of the k-th pair in the detection network (i.e., the deep neural network used to detect slot corner points); Ob_k is the number of times the k-th pair of matched corner points has been historically observed; and d_k is the distance between the k-th pair of matched corner points and the mapping device.
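Formula (1) can be sketched directly. The source does not define f and g; consistent with the description above (more observations raise confidence, greater distance lowers it), the choices below of a saturating exponential for f and a reciprocal decay for g, including their constants, are illustrative assumptions.

```python
import math

def corner_confidence(det_conf, times_observed, dist_to_device):
    """Confidence of one pair of matched slot corners per formula (1):
    C_k = Det_k * f(Ob_k) * g(d_k), with f increasing in the historical
    observation count and g decreasing in the distance to the mapping
    device (f and g are assumed forms, not given in the source)."""
    f = 1.0 - math.exp(-0.5 * times_observed)   # more observations -> higher
    g = 1.0 / (1.0 + dist_to_device)            # farther away -> lower
    return det_conf * f * g
```

With a fixed detection confidence, the value rises with the observation count and falls with distance, as the surrounding text requires.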
In some embodiments, the mapping device 102 may determine the pose of the mapping device 102 by formula (2):
T_wv* = argmin over T_wv of Σ_k C_k · ‖T_wv · T_vi · P_k_i − P_k‖²    Formula (2)
where T_wv is the pose transformation matrix of the mapping device in the global map coordinate system and is the variable to be optimized; T_vi is the transformation matrix from the image coordinate system to the mapping-device coordinate system; P_k_i is the coordinate of the k-th pair of mutually matched slot corner points in the image; and P_k is the coordinate of the k-th pair of mutually matched slot corner points in the map.
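The pose optimization, finding the transform that minimizes the confidence-weighted distances between matched corner points, admits a closed form in the planar case. The following is an illustrative 2D sketch (a weighted rigid alignment), not the patent's exact formulation; it assumes the image corners have already been transformed into the device frame.

```python
import math

def fit_pose_2d(dev_pts, map_pts, weights):
    """Closed-form weighted rigid alignment in 2D: find the rotation theta
    and translation (tx, ty) minimizing sum_k C_k * ||R p_k + t - q_k||^2,
    where p_k are matched corner coordinates in the device frame and q_k
    the corresponding coordinates in the map frame."""
    W = sum(weights)
    cx_p = sum(w * p[0] for w, p in zip(weights, dev_pts)) / W
    cy_p = sum(w * p[1] for w, p in zip(weights, dev_pts)) / W
    cx_q = sum(w * q[0] for w, q in zip(weights, map_pts)) / W
    cy_q = sum(w * q[1] for w, q in zip(weights, map_pts)) / W
    # weighted cross-covariance of the centered point sets
    sxx = sxy = syx = syy = 0.0
    for w, p, q in zip(weights, dev_pts, map_pts):
        px, py = p[0] - cx_p, p[1] - cy_p
        qx, qy = q[0] - cx_q, q[1] - cy_q
        sxx += w * px * qx; sxy += w * px * qy
        syx += w * py * qx; syy += w * py * qy
    theta = math.atan2(sxy - syx, sxx + syy)   # optimal rotation angle
    tx = cx_q - (math.cos(theta) * cx_p - math.sin(theta) * cy_p)
    ty = cy_q - (math.sin(theta) * cx_p + math.cos(theta) * cy_p)
    return theta, tx, ty
```

For map points that are a pure translation of the device points, the recovered rotation is zero and the translation is exactly the offset.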
In 240, the mapping device 102 may update the map based on the pose of the mapping device. For example, the mapping device 102 may triangulate new map points and insert them into the map.
It should be noted that the map building method described above is for illustrative purposes only and does not limit the scope of the present application. The map building method in this application may be general SLAM mapping, or mapping that incorporates scene semantic information.
FIG. 3 illustrates a flow diagram of a method of optimizing a map according to some embodiments of the present application. The flow 300 may be implemented as a set of instructions in a non-transitory storage medium in the mapping device 102. The mapping device 102 may execute the set of instructions and accordingly perform the steps of flow 300.
The operations of flow 300 presented below are intended to be illustrative and not limiting. In some embodiments, flow 300 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed herein. Further, the order of the operations shown in FIG. 3 and described below is not intended to be limiting.
In 310, the mapping device 102 may obtain a map. The map includes semantic feature points. The map may be obtained by the process 200.
In 320, the mapping device 102 may update the position information of at least some semantic feature points in the map based on a preset positional relationship, obtaining an optimized map.
The preset positional relationship is a positional relationship between semantic feature points in the map; for example, at least some semantic feature points lie on two parallel (or perpendicular) straight lines, or two semantic feature points whose positions are close correspond to the same semantic feature point. The preset positional relationship can be set according to the positional relationships between semantic feature points of the actual scene being mapped.
In some embodiments, the preset positional relationship includes that semantic feature points in the map whose mutual position difference is within a preset threshold range correspond to the same semantic feature point. The mapping device 102 may determine the semantic feature points whose mutual position difference is within the preset threshold range and merge them. The mapping device 102 may then update the position information of the semantic feature points in the map, obtaining an optimized map.
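One simple way to realize this duplicate removal is a greedy clustering: each feature point joins the first existing cluster whose centroid is within the threshold, and each cluster is then reduced to its centroid. The greedy strategy and the centroid reduction are illustrative assumptions.

```python
import math

def merge_close_points(points, threshold):
    """Merge semantic feature points whose mutual position difference is
    within the preset threshold: greedily assign each point to the first
    cluster whose centroid is close enough, then replace every cluster
    by its centroid."""
    clusters = []
    for p in points:
        for c in clusters:
            cx = sum(x for x, _ in c) / len(c)
            cy = sum(y for _, y in c) / len(c)
            if math.dist(p, (cx, cy)) <= threshold:
                c.append(p)
                break
        else:
            clusters.append([p])   # no nearby cluster: start a new one
    return [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            for c in clusters]
```

Two points 0.1 apart collapse into one merged point, while a distant point survives as its own feature.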
In some embodiments, the preset positional relationship includes that at least some semantic feature points in the map are constrained by the same positional relationship function. A positional relationship function is a function that constrains the positions of semantic feature points, such as a straight-line function. The mapping device 102 may perform a straight-line fit on at least some semantic feature points in the map based on the straight-line function.
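A straight-line fit of this kind can be sketched via the principal direction of the centered points, which, unlike a y = ax + b regression, also handles vertical lines. The parameterization as centroid plus unit direction is an illustrative choice.

```python
import math

def fit_line(points):
    """Least-squares straight-line fit through semantic feature points:
    returns the centroid and a unit direction vector along the principal
    axis of the centered point set."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    sxx = sum((x - cx) ** 2 for x, _ in points)
    syy = sum((y - cy) ** 2 for _, y in points)
    sxy = sum((x - cx) * (y - cy) for x, y in points)
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)   # principal-axis angle
    return (cx, cy), (math.cos(theta), math.sin(theta))
```

Points lying on the diagonal y = x yield a direction at 45 degrees through their centroid.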
In some embodiments, the preset positional relationship includes that at least some semantic feature points in the map are distributed on at least two straight lines, and the at least two straight lines are parallel or intersect. The mapping device 102 may optimize the direction vector of at least one of the two lines based on the constraint that they are parallel or intersect at a given angle.
In some embodiments, the preset position relationship includes that at least some semantic feature points in the map are distributed on a straight line and that the distance between at least two of these semantic feature points is a preset distance. In some embodiments, the mapping apparatus 102 may update the position information of at least part of the semantic feature points in the map based on the preset distance.
It should be understood that the mapping device 102 may optimize the map according to one or more of the preset position relationships described above. Taking the preset position relationships between library position corner points in a parking lot map as an example, the mapping device 102 updates the position information of the library position corner points in the map and obtains the optimized map.
First, for a parking lot map, one preset position relationship is that library position corner points in the map whose mutual position differences are within a preset threshold range correspond to the same library position corner point. When the position difference between two or more library position corner points in the map is small, the mapping apparatus 102 may merge those corner points, for example by weighted merging.
In some embodiments, the mapping apparatus 102 may merge the library position corner points by a confidence-weighted average according to Equation (3). Equation (3) is as follows:

P_merge = Σ_{i=1}^{I} C_{i_norm} · P_i    (3)

wherein P_merge is the coordinate of the merged corner point, P_i is the coordinate of the i-th corner point, and the weight C_{i_norm} of the i-th library position corner point is its confidence normalized over the confidences of all (i.e., I) corner points to be merged.
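A minimal sketch of this weighted merge (the function name is illustrative; the formula follows the confidence-normalized average described for Equation (3)):

```python
import numpy as np

def weighted_merge(corners, confidences):
    """Merge corner coordinates by a confidence-weighted average:
    P_merge = sum_i C_i_norm * P_i, with weights normalized to sum to 1."""
    corners = np.asarray(corners, dtype=float)
    weights = np.asarray(confidences, dtype=float)
    weights = weights / weights.sum()   # C_i_norm: normalized confidences
    return weights @ corners            # P_merge
```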
Secondly, for the parking lot map, the preset position relationship includes that at least part of the library position corner points in the map are constrained by the same straight-line function, i.e., at least part of the corner points are distributed on the same straight line. The mapping apparatus 102 may therefore perform a straight-line fit on at least some of the library position corner points in the map.
By way of example, referring to FIG. 4, library position corner points 411, 412, 413, and 414 lie on one straight line; corner points 415, 416, 417, and 418 lie on another; corner points 411 and 415 lie on a straight line, corner points 412 and 416 lie on a straight line, corner points 413 and 417 lie on a straight line, and corner points 414 and 418 lie on a straight line. The mapping apparatus 102 may perform straight-line fitting on library position corner points 411 to 418 according to these position relationships.
In some embodiments, the mapping apparatus 102 may perform a straight-line fit on the library position corner points according to Equation (4), a confidence-weighted least-squares objective. Equation (4) is as follows:

(v_i, d_i) = argmin Σ_j C_ij · dist(P_ij; v_i, d_i)²    (4)

wherein v_i and d_i are, respectively, the direction vector and offset of the i-th straight line, P_ij is the j-th library position corner point on the i-th straight line, C_ij is the confidence of the j-th corner point on the i-th straight line, and dist(·) denotes the distance from the point to the line determined by (v_i, d_i).
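One common way to solve such a confidence-weighted line fit is weighted total least squares via the principal eigenvector of the weighted scatter matrix; the sketch below is an illustrative stand-in for the patent's unspecified solver:

```python
import numpy as np

def fit_line_weighted(points, confidences):
    """Confidence-weighted total-least-squares line fit.
    Returns a point on the line (the weighted centroid) and a unit
    direction vector (principal eigenvector of the weighted scatter)."""
    P = np.asarray(points, dtype=float)
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()
    centroid = w @ P
    centered = P - centroid
    scatter = (centered * w[:, None]).T @ centered
    eigvals, eigvecs = np.linalg.eigh(scatter)   # ascending eigenvalues
    direction = eigvecs[:, -1]                   # largest-eigenvalue direction
    return centroid, direction
```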
Thirdly, for the parking lot map, the preset position relationship includes that the library position corner points in the map are distributed on at least two straight lines, and the two straight lines are parallel or intersect. When the two lines intersect, the included angle may be 30°, 60°, 90° (i.e., the two lines are perpendicular), 120°, or any other angle. The mapping apparatus 102 may then optimize the direction vectors of the straight lines (i.e., the library position lines) in the map based on the parallel relationship or the specific angle.
For a parking lot with a "非"-shaped layout (perpendicular spaces on both sides of a central aisle), the preset position relationship includes that the library position corner points in the map are distributed on at least two straight lines, and the two straight lines are parallel or perpendicular to each other. The mapping apparatus 102 can therefore optimize the direction vectors of the straight lines in the map based on the parallel or perpendicular relationship.
As an example, referring to FIG. 4, the library position corner points in the map are distributed over library position lines 401 to 410; lines 401 to 404 are parallel to each other, forming a first set of library position lines, and lines 405 to 410 are parallel to each other, forming a second set. Any line in the first set is perpendicular to any line in the second set. The mapping apparatus 102 optimizes the direction vectors of the library position lines in the map according to these parallel and/or perpendicular relationships.
In some embodiments, the mapping apparatus 102 may optimize the direction vectors of the library position lines in the map according to Equations (5) to (7), which express the desired direction of each line as a confidence-weighted combination:

v_i* ∝ Σ_j C_{j_line} · v_j + Σ_k C_{k_line} · v_k^⊥

wherein v_i* is the desired unit direction vector of the i-th library position line; v_j are the unit direction vectors of all library position lines that may be parallel to it; v_k are the unit direction vectors of all library position lines that may be perpendicular to it; v_k^⊥ is the unit vector orthogonal to v_k; the parallel and perpendicular groups can be obtained by a clustering method; and C_{j_line} and C_{k_line} are weights related to the confidences of all library position corner points lying on the respective lines. Because v_k may be perpendicular to v_i*, its contribution is taken via its orthogonal vector v_k^⊥.
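A minimal 2-D sketch of this combination (illustrative only; the sign alignment against a reference direction is an assumption, since the patent does not specify how oppositely oriented direction vectors are reconciled):

```python
import numpy as np

def harmonize_direction(parallel_dirs, perp_dirs, parallel_conf, perp_conf):
    """Combine unit directions of possibly-parallel lines with the
    orthogonal vectors of possibly-perpendicular lines, weighted by
    line confidence, to get a desired unit direction."""
    ref = None
    acc = np.zeros(2)

    def add(u, c):
        nonlocal ref, acc
        u = u / np.linalg.norm(u)
        if ref is None:
            ref = u
        if np.dot(u, ref) < 0:        # flip so all contributions agree in sign
            u = -u
        acc += c * u

    for v, c in zip(parallel_dirs, parallel_conf):
        add(np.asarray(v, float), c)
    for v, c in zip(perp_dirs, perp_conf):
        v = np.asarray(v, float)
        add(np.array([-v[1], v[0]]), c)   # orthogonal of a perpendicular line
    return acc / np.linalg.norm(acc)
```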
Fourthly, for the parking lot map, the preset position relationship includes that the library position corner points in the map are distributed on at least one straight line (namely, a library position line) and that the distance between at least two library position corner points (namely, the width or depth of the library position) is a preset distance. The mapping apparatus 102 may therefore optimize the library position corner points in the map based on a preset library position width or depth. The preset width or depth can be determined from national standards, industry standards, common practice, or the actual width or depth of the library positions.
Specifically, the mapping apparatus 102 may adjust a distance between two library location corner points in the map according to a preset library location width or depth, and then update the position information of the two library location corner points.
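One simple way to perform this adjustment (a sketch under the assumption that both corner points are moved symmetrically about their midpoint, which the patent does not specify):

```python
import numpy as np

def snap_to_width(p1, p2, preset_width):
    """Move two corner points symmetrically along their connecting
    direction so their distance equals the preset library position width."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    mid = (p1 + p2) / 2
    direction = (p2 - p1) / np.linalg.norm(p2 - p1)
    half = preset_width / 2
    return mid - half * direction, mid + half * direction
```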
In some embodiments, the map building apparatus 102 may perform global optimization on the map by combining the above four preset position relationships. As an example, the mapping apparatus 102 may perform global optimization on the map according to equation (8).
wherein the four sets are: the set of all library position line direction vectors; the set D_op of all library position line offsets; the set P_op of all library position corner points; and the set T_wv_op of all pose keyframes of the mapping equipment.
The above four sets are all variables to be optimized. In some embodiments, the mapping apparatus 102 may optimize the four sets according to equations (9) to (14).
Equations (9) to (11) ensure that the optimized library position lines, library position corner points, and mapping-equipment poses stay close to their values before optimization, without abrupt changes. Equation (12) is the width constraint of the library position: P_r1_op and P_r2_op are two library position corner points (e.g., the two entrance corner points) belonging to the same library position, and Lot_W is the preset library position width. Equation (13) is the projection-error constraint between the mapping-equipment poses and the observed library position corner points, ensuring that the optimized poses and corner points still satisfy the observation and projection relationship. Equation (14) ensures that a library position corner point P_ij_op belonging to the i-th library position line still lies on that line after optimization.
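The combined objective of Equations (8) to (14) is a joint least-squares problem. As a loose illustration only (not the patent's solver; the restriction to two corner points, the prior and width weights, and the plain gradient-descent scheme are all assumptions), the interplay of the prior terms of Equations (9) to (11) with the width constraint of Equation (12) can be sketched as:

```python
import numpy as np

def optimize_corners(p1, p2, lot_w, prior_weight=1.0, width_weight=10.0,
                     iters=500, lr=0.01):
    """Minimize prior residuals ||p - p_init||^2 plus the width constraint
    (||p1 - p2|| - Lot_W)^2 by simple gradient descent; an illustrative
    stand-in for the full joint optimization."""
    p1_init, p2_init = np.asarray(p1, float), np.asarray(p2, float)
    q1, q2 = p1_init.copy(), p2_init.copy()
    for _ in range(iters):
        diff = q1 - q2
        dist = np.linalg.norm(diff)
        # Gradient of the width term w.r.t. q1 (and its negation for q2).
        g_w = 2 * width_weight * (dist - lot_w) * diff / dist
        g1 = 2 * prior_weight * (q1 - p1_init) + g_w
        g2 = 2 * prior_weight * (q2 - p2_init) - g_w
        q1 -= lr * g1
        q2 -= lr * g2
    return q1, q2
```

With a larger width weight than prior weight, the optimized distance settles near the preset width while the midpoint of the pair is preserved.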
It should be noted that, when performing map optimization with the parking lot map as an example, the mapping device 102 may operate on all the library position corner points or only on some of them. For example, the mapping apparatus 102 may operate only on the library position entrance corner points. When the mapping device 102 completes mapping, each library position may then have only its two entrance corner points on the map rather than the complete four corner points. In that case, the mapping device 102 may determine the position information of the remaining corner points in the optimized map from the updated positions of the entrance corner points, based on the library position depth direction and the preset library position depth.
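Completing a library position from its two entrance corner points can be sketched as follows (a minimal illustration; the function name and the assumption that the depth direction is given as a unit vector into the space are ours):

```python
import numpy as np

def back_corners(entrance1, entrance2, depth_dir, preset_depth):
    """Given the two entrance corner points of a library position, a vector
    pointing into the space (depth direction), and a preset depth, compute
    the two remaining corner points by offsetting along the depth direction."""
    d = np.asarray(depth_dir, float)
    d = d / np.linalg.norm(d)
    offset = preset_depth * d
    return (np.asarray(entrance1, float) + offset,
            np.asarray(entrance2, float) + offset)
```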
It should be noted that the above description takes the "非"-shaped library position layout shown in FIG. 4 as an example. Of course, the library positions may be of other types. Depending on the type of library position, the simultaneous localization and mapping method can be adapted to the actual situation. It should be understood that such adaptations do not require inventive effort and still fall within the scope of the present application.
For example, when the boundary line of a library position is a smooth curve (e.g., a circular arc), no vertex exists on the boundary line. The mapping device 102 may take a point in a specific direction on the boundary line as an anchor point and use it as a library position corner point. The mapping device 102 may also fit the corner points with a circle function.
For another example, when the library space is a diagonal parking space, there is no vertical relationship between the intersecting library space lines. The mapping apparatus 102 may optimize the direction vector of the library bit lines in the map according to the size of the included angle between the intersecting library bit lines.
It should be noted that the above optimization of the parking lot map is merely an example. The method of optimizing a map disclosed herein may be applicable to a variety of maps. For example, the mapping apparatus 102 may optimize the warehouse map according to the location relationship between the areas in the warehouse. For another example, the mapping apparatus 102 may optimize the building area map according to the position relationship between the buildings. As another example, the mapping apparatus 102 may optimize the restaurant map according to the position relationship between the tables in the restaurant. It will be appreciated that maps having specific relationships between the locations of objects within an area, or maps having objects with specific characteristics (e.g., circles, rectangles) themselves, may be optimized using the optimization methods disclosed herein.
Meanwhile, the optimized map may be used to correct the detection and classification results of the top-view images in process 200. Still taking the parking lot map as an example, the mapping device 102 may correct the detection and classification results of the library positions in the top-view images during parking lot mapping.
In some embodiments, the mapping device 102 may update the position information of the semantic feature points detected in the top view image based on the optimized map. For example, the mapping device 102 may back-project at least a part of the semantic feature points in the optimized map onto the corresponding top-view image, and modify the position information of the semantic feature points in the top-view image. That is, the mapping device 102 may back-project the library position corner points in the optimized map to the corresponding overhead image, and correct the position information of the library position corner points in the overhead image.
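For a top-view image, the back-projection reduces to a 2-D rigid transform plus a metric-to-pixel scaling. The sketch below is illustrative only; the parameter names and the assumption that each image's pose is given as a planar position and yaw are ours, not the patent's:

```python
import numpy as np

def world_to_topview(p_world, pose_xy, pose_yaw, meters_per_pixel,
                     image_origin_px):
    """Back-project a 2-D map point into a top-view image whose frame is a
    rigid transform (pose_xy, pose_yaw) of the map frame, then convert
    meters to pixels relative to the image origin."""
    c, s = np.cos(pose_yaw), np.sin(pose_yaw)
    R = np.array([[c, -s], [s, c]])
    # Express the map point in the image's local metric frame.
    p_local = R.T @ (np.asarray(p_world, float) - np.asarray(pose_xy, float))
    return p_local / meters_per_pixel + np.asarray(image_origin_px, float)
```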
In some embodiments, the mapping device 102 may determine whether the detection of an object in the top-view image is correct based on the optimized map. For example, the mapping apparatus 102 may count the number of times the object in the optimized map has been observed and judge whether it has been observed a preset number of times: if so, the object is determined to be correctly detected; if not, incorrectly detected. In some embodiments, the mapping device 102 may determine the number of times a library position has been observed in the optimized map and judge whether this count reaches a preset number: if so, the library position is correctly detected; otherwise, it is incorrectly detected.
This is because a library position that has been observed at least N times (N being an integer greater than 1) in the optimized map has relatively high accuracy. When such a library position is judged to match the library position in the corresponding top-view image, the mapping device 102 may determine that the library position in the top-view image is correctly detected. Otherwise, the mapping device 102 may determine that the library position in the top-view image is incorrectly detected.
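The observation-count filter can be sketched in a few lines (illustrative; the function name and the use of object identifiers are assumptions):

```python
from collections import Counter

def validate_detections(observed_ids, preset_times):
    """Count how often each object id was observed across frames and keep
    only the ids seen at least `preset_times` times as correct detections."""
    counts = Counter(observed_ids)
    return {oid for oid, n in counts.items() if n >= preset_times}
```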
In some embodiments, the mapping device 102 may determine whether the library position corner points in the top-view image are correctly detected based on the optimized map. For example, the mapping device 102 may judge whether a corner point in the top-view image lies on a library position line in the optimized map, and/or whether it is associated with two or fewer library positions in the optimized map, and determine from the judgment result whether the corner point is correctly detected. This is because, for a typical library position, all corner points lie on library position lines, and each corner point can belong to at most two library positions. When a corner point in the top-view image lies on a library position line of the optimized map, and/or is associated with two or fewer library positions, the mapping device 102 may determine that the corner point is correctly detected. Otherwise, the mapping device 102 may determine that the corner point in the top-view image is incorrectly detected.
In some embodiments, the mapping device 102 may determine whether the detection state of a library position corner point in the top-view image is correct based on the optimized map. In some embodiments, the mapping device 102 counts the number of occurrences of each detection state of the corner point and takes the most frequently counted state as the detection state of the current corner point. The detection state of a corner point may include the library position depth determined by the corner point, whether the library position determined by the corner point is occupied, and so on. The detection states accumulated during detection are counted, the state with the larger count is taken as the library position detection state determined by the corner point, and any misclassification is corrected accordingly.
For example, the mapping device 102 may determine whether the depth direction of a library position in the top-view image is correct based on the optimized map. The mapping apparatus 102 may count the number of times the library position is detected with each depth direction (e.g., the number of times it faces south versus north) and take the direction with the larger count (e.g., facing south) as the depth direction of the library position.
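This majority vote over accumulated detection states is a one-liner in practice (illustrative function name):

```python
from collections import Counter

def majority_detection_state(states):
    """Take the detection state observed most often (e.g., a library
    position's depth direction) as the final state, correcting outliers."""
    return Counter(states).most_common(1)[0][0]
```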
In some embodiments, the mapping device 102 may determine whether the classification of an object's detection state in the top-view image is correct based on the optimized map. The classification of the detection state of an object may include the object's depth direction (also referred to as its orientation; for a library position, the direction in which its entrance lies), whether the object is occupied, and the like. As an example, the mapping device 102 may determine whether the classification of the depth direction of an object in the top-view image is correct based on the optimized map. Specifically, the mapping apparatus 102 may count the initial depth directions of the objects constrained by the same position relationship function in the optimized map, determine the final depth direction of those objects from the statistics, and correct the depth direction of the objects in the top-view image according to the final depth direction. That is, the mapping apparatus 102 may count the initial depth directions of the library positions on the same position relationship function in the optimized map, determine their final depth direction from the statistics, and correct the depth direction of the corresponding library positions in the top-view image accordingly.
The position relation function refers to a function for restraining the position of the object. For the bin, the positional relationship function may be a straight line function. That is, the mapping device 102 may count the initial depth directions of the library positions on the same straight line in the optimized map.
It should be emphasized again that the steps of the mapping method disclosed in this application are not limited to a particular order. For example, when the mapping apparatus 102 optimizes the map based on a plurality of preset position relationships, the optimization operations for the different relationships may or may not be performed simultaneously, and their order is not limited. Likewise, when the mapping apparatus 102 corrects the detection and classification results of the top-view images based on the optimized map, the correction operations may or may not be performed simultaneously, and their order is not limited.
In conclusion, upon reading the present detailed disclosure, those skilled in the art will appreciate that the foregoing detailed disclosure can be presented by way of example only, and not limitation. Those skilled in the art will appreciate that the present application is intended to cover various reasonable variations, adaptations, and modifications of the embodiments described herein, although not explicitly described herein. Such alterations, improvements, and modifications are intended to be suggested by this disclosure, and are within the spirit and scope of the exemplary embodiments of this disclosure.
Furthermore, certain terminology has been used in this application to describe embodiments of the disclosure. For example, "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined as suitable in one or more embodiments of the disclosure.
It should be appreciated that in the foregoing description of embodiments of the disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding understanding of the subject matter. This application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this invention pertains. This is not to be taken as an admission that any of the claimed features are essential, and a person skilled in the art may, on reading the present application, extract some of them as separate embodiments. That is, embodiments in the present application may also be understood as an integration of multiple sub-embodiments. And each sub-embodiment remains valid with fewer than all features of a single foregoing disclosed embodiment.
In some embodiments, numbers expressing quantities or properties used to describe and claim certain embodiments of the application are to be understood as being modified in certain instances by the term "about", "approximately" or "substantially". For example, "about," "approximately," or "substantially" can mean a ± 20% variation of the value it describes, unless otherwise specified. Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the embodiments of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as possible.
Each patent, patent application, publication of a patent application, and other material, such as articles, books, specifications, publications, documents, and the like, cited herein is hereby incorporated by reference, except for any associated prosecution history, any matter inconsistent with or conflicting with this document, and any matter that may have a limiting effect on the broadest scope of the claims now or later associated with this document. For example, if there is any inconsistency or conflict between the description, definition, and/or use of a term in any incorporated material and those associated with this document, the term in this document prevails.
Finally, it should be understood that the embodiments disclosed herein are illustrative of the principles of the embodiments of the present application. Other modified embodiments are also within the scope of the present application. Accordingly, the disclosed embodiments are presented by way of example only, and not limitation. Those skilled in the art can implement the invention in alternative configurations according to the embodiments in the present application. Thus, the embodiments of the present application are not limited to those precisely described herein.

Claims (14)

1. A method of mapping operating on an electronic device, the method comprising:
acquiring a map, wherein the map comprises semantic feature points;
and updating the position information of at least part of semantic feature points in the map based on a preset position relation, and obtaining the optimized map.
2. The method of mapping as claimed in claim 1, wherein the semantic feature points in the map are library site corner points of a library site, the method further comprising:
and determining the position information of other library position angular points in the optimized map through the updated position information of the library position angular points based on the library position depth direction and the preset library position depth.
3. The mapping method of claim 1,
the preset position relation comprises that semantic feature points with mutual position difference within a preset threshold range in the map correspond to the same semantic feature point;
the updating of the position information of at least part of semantic feature points in the map based on the preset position relationship comprises:
determining semantic feature points of which the mutual position difference is within a preset threshold value range,
and merging the semantic feature points with the mutual position difference within a preset threshold range.
4. The mapping method according to claim 1, wherein the preset positional relationship includes that at least part of semantic feature points in the map are constrained by the same positional relationship function.
5. The mapping method according to claim 4, wherein the same positional relationship function is a straight-line function;
the updating the position information of at least part of semantic feature points in the map based on the same position relation function comprises:
and performing straight line fitting on at least part of semantic feature points in the map based on the straight line function.
6. The mapping method of claim 1,
the preset position relation comprises that at least part of semantic feature points in the map are distributed on at least two straight lines, and the at least two straight lines are parallel or intersected with each other;
the updating of the position information of at least part of semantic feature points in the map based on the preset position relationship comprises:
and optimizing the direction vector of at least one straight line of the at least two straight lines based on the angle at which the at least two straight lines are parallel or intersected with each other.
7. The mapping method of claim 1,
the preset position relation comprises that at least part of semantic feature points in the map are distributed on a straight line, and the distance between at least two semantic feature points is a preset distance;
the updating of the position information of at least part of semantic feature points in the map based on the preset position relationship comprises:
and updating the position information of at least part of semantic feature points in the map based on the preset distance.
8. The mapping method of claim 1, wherein the map is created by:
acquiring a top view image;
matching semantic feature points in the top view image with semantic feature points in the map;
determining the pose of the mapping equipment based on the matching result;
and updating the map based on the pose of the mapping equipment.
9. The method of mapping according to claim 8, the method further comprising:
and at least partial semantic feature points in the optimized map are back projected onto the overhead view image, and the position information of the semantic feature points in the overhead view image is corrected.
10. The method of mapping according to claim 8, the method further comprising:
determining the number of times that an object in the optimized map is observed;
judging whether the object is observed for a preset number of times;
determining whether the object detection in the overhead image is correct based on the determination result.
11. The mapping method of claim 8,
the semantic feature points in the overlook images and the semantic feature points in the map are library position corner points of a library position, the library position comprises a library bit line, and the library position corner points are positioned on the library bit line;
the method further comprises:
judging whether the library position angular points in the overlook images are positioned on the library position lines in the optimized map and/or judging whether the library position angular points in the overlook images are associated with two or less library positions in the optimized map;
and determining whether the detection of the corner points of the library in the overlooking image is correct or not based on the judgment result.
12. The method of mapping according to claim 8, the method further comprising:
counting the initial detection state of the object on the same position relation function in the optimized map, and determining the final detection state of the object on the same position relation function according to the counting result;
and correcting the detection state of the object in the overhead view image according to the final detection state.
13. The method of mapping according to claim 8, the method further comprising:
counting the times of different detection states of the object in the optimized map, and determining the final detection state of the object according to the counting result;
and correcting the detection state of the object in the overhead view image according to the final detection state.
14. An apparatus for creating a map, comprising:
at least one storage device comprising a set of instructions; and
at least one processor in communication with the at least one memory device, wherein the at least one processor, when executing the set of instructions, causes the mapping apparatus to perform the method of any of claims 1-13.
CN201910400106.3A 2019-05-14 2019-05-14 Picture construction method and device Active CN110097064B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910400106.3A CN110097064B (en) 2019-05-14 2019-05-14 Picture construction method and device

Publications (2)

Publication Number Publication Date
CN110097064A true CN110097064A (en) 2019-08-06
CN110097064B CN110097064B (en) 2021-05-11

Family

ID=67448051

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910400106.3A Active CN110097064B (en) 2019-05-14 2019-05-14 Picture construction method and device

Country Status (1)

Country Link
CN (1) CN110097064B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110823225A (en) * 2019-10-29 2020-02-21 北京影谱科技股份有限公司 Positioning method and device under indoor dynamic situation
CN110956846A (en) * 2019-12-11 2020-04-03 济宁市众帮来袭信息科技有限公司 Parking service method, device and system and storage medium
CN111780771A (en) * 2020-05-12 2020-10-16 驭势科技(北京)有限公司 Positioning method, positioning device, electronic equipment and computer readable storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070293985A1 (en) * 2006-06-20 2007-12-20 Samsung Electronics Co., Ltd. Method, apparatus, and medium for building grid map in mobile robot and method, apparatus, and medium for cell decomposition that uses grid map
CN102663357A (en) * 2012-03-28 2012-09-12 北京工业大学 Color characteristic-based detection algorithm for stall at parking lot
CN102963355A (en) * 2012-11-01 2013-03-13 同济大学 Intelligent auxiliary parking method and implementation system thereof
CN103577484A (en) * 2012-08-07 2014-02-12 上海市测绘院 Spatial orientation method of any deformation map
CN104417615A (en) * 2013-09-06 2015-03-18 现代摩比斯株式会社 Method for controlling steering wheel and system therefor
CN105469405A (en) * 2015-11-26 2016-04-06 清华大学 Visual ranging-based simultaneous localization and map construction method
CN105674993A (en) * 2016-01-15 2016-06-15 武汉光庭科技有限公司 Binocular camera-based high-precision visual sense positioning map generation system and method
CN106772389A (en) * 2016-11-07 2017-05-31 纵目科技(上海)股份有限公司 A kind of warehouse compartment detection method, system and mobile device
CN106910217A (en) * 2017-03-17 2017-06-30 驭势科技(北京)有限公司 Vision map method for building up, computing device, computer-readable storage medium and intelligent vehicle
CN107180215A (en) * 2017-05-31 2017-09-19 同济大学 Figure and high-precision locating method are built in parking lot based on warehouse compartment and Quick Response Code automatically
CN108318043A (en) * 2017-12-29 2018-07-24 百度在线网络技术(北京)有限公司 Method, apparatus for updating electronic map and computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BJOERN SONDERMANN et al.: "Simultaneous Localization and Mapping Based on Semantic World Modelling", 2014 European Modelling Symposium *
YU JINSHAN et al.: "Cloud-based semantic library design and robot semantic map construction", Robot *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110823225A (en) * 2019-10-29 2020-02-21 北京影谱科技股份有限公司 Positioning method and device under indoor dynamic situation
CN110956846A (en) * 2019-12-11 2020-04-03 济宁市众帮来袭信息科技有限公司 Parking service method, device and system and storage medium
CN111780771A (en) * 2020-05-12 2020-10-16 驭势科技(北京)有限公司 Positioning method, positioning device, electronic equipment and computer readable storage medium
CN111780771B (en) * 2020-05-12 2022-09-23 驭势科技(北京)有限公司 Positioning method, positioning device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN110097064B (en) 2021-05-11

Similar Documents

Publication Publication Date Title
KR102367361B1 (en) Method and device for simultaneous localization and mapping
CN112444242B (en) Pose optimization method and device
EP2678824B1 (en) Determining model parameters based on transforming a model of an object
CN110097064B (en) Mapping method and device
CN103700099B (en) Rotation- and scale-invariant wide-baseline stereo matching method
CN110807350A (en) System and method for visual SLAM for scan matching
EP3547256A1 (en) Extracting a feature descriptor for an image feature
CN105303514A (en) Image processing method and apparatus
JP2017091079A (en) Image processing device and method for extracting image of object to be detected from input data
Pintore et al. Omnidirectional image capture on mobile devices for fast automatic generation of 2.5 D indoor maps
EP3028252A1 (en) Rolling sequential bundle adjustment
KR102608956B1 (en) A method for rectifying a sequence of stereo images and a system thereof
CN107980138A (en) A kind of false-alarm obstacle detection method and device
CN108717709A (en) Image processing system and image processing method
Kim et al. Planar structures from line correspondences in a manhattan world
CN110132278B (en) Method and device for simultaneous localization and mapping
Kuhl Comparison of stereo matching algorithms for mobile robots
CN114022560A (en) Calibration method and related device and equipment
CN113052907A (en) Positioning method of mobile robot in dynamic environment
CN111383264A (en) Positioning method, positioning device, terminal and computer storage medium
CN110673607A (en) Feature point extraction method and device in dynamic scene and terminal equipment
CN114066930A (en) Planar target tracking method and device, terminal equipment and storage medium
CN113570667B (en) Visual inertial navigation compensation method and device and storage medium
CN115937002A (en) Method, apparatus, electronic device and storage medium for estimating video rotation
von Schmude et al. Relative pose estimation from straight lines using optical flow-based line matching and parallel line clustering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant