CN111862206A - Visual positioning method and device, electronic equipment and readable storage medium


Info

Publication number: CN111862206A
Application number: CN201911404674.7A
Authority: CN (China)
Legal status: Pending
Prior art keywords: target image, feature points, visual, target, line segment
Other languages: Chinese (zh)
Inventors: 包灵, 徐斌, 杜宪策, 张军, 滕晓强, 李荣浩, 许鹏飞, 胡润波
Current Assignee: Ditu Beijing Technology Co Ltd
Original Assignee: Ditu Beijing Technology Co Ltd
Application filed by Ditu Beijing Technology Co Ltd
Priority: CN201911404674.7A
Publication: CN111862206A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods


Abstract

The application provides a visual positioning method and apparatus, an electronic device, and a readable storage medium. A plurality of visual feature points are extracted from a target image; a reserved area and a removal area containing a preset picture are determined for the target image according to a target vanishing point detected from the target image; the reserved area is divided into a plurality of grids of equal size; and a preset number of visual feature points are screened out of the visual feature points contained in each grid as credible feature points. Screening credible feature points only within the reserved area reduces the number of feature points used for positioning, improves their reliability, and keeps them approximately uniformly distributed across the reserved area, which reduces the time consumed by positioning while improving the success rate of visual positioning, so that the geographical position of the physical scene in the target image can be determined accurately and quickly according to the credible feature points.

Description

Visual positioning method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of visual positioning technologies, and in particular, to a visual positioning method and apparatus, an electronic device, and a readable storage medium.
Background
In the corridor areas of scenes such as airports, railway stations, subway stations, and shopping malls, GPS signals are unstable because they are blocked by buildings, so users experience positioning errors of varying degrees when locating their position through GPS. Other technologies are therefore needed to determine position in such areas, and a visual positioning method is currently proposed to achieve effective positioning there. Here, visual positioning is a technique that acquires an image of an object using a visual sensor and then performs image processing with a computer to obtain the position information of the object.
However, existing visual feature point extraction methods select visual feature points mainly by response value and scale value and neglect the influence of the distribution of the visual feature points on the success rate of visual positioning. As a result, some areas contain an extremely large number of visual feature points while others contain extremely few, and the success rate of visual positioning is low.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide a visual positioning method, a visual positioning apparatus, an electronic device, and a readable storage medium, which can accurately and quickly determine the geographical position of the physical scene in a target image.
The embodiments mainly comprise the following aspects:
in a first aspect, an embodiment of the present application provides a visual positioning method, where the visual positioning method includes:
acquiring a target image to be visually positioned, and extracting a plurality of visual feature points from the target image;
detecting a target vanishing point from the target image; the target vanishing point is an intersection point of extension lines of all target line segments detected from the target image along the depth of field direction;
determining a removal area and a reserved area corresponding to the target image according to the coordinate position of the target vanishing point in the target image; the removal area comprises an area with a preset picture;
dividing the reserved area of the target image into a plurality of grids with equal size, and respectively screening out a preset number of visual feature points from the visual feature points contained in each grid of the plurality of grids as credible feature points;
and determining the geographical position of the physical scene in the target image according to the credible feature points.
In one possible embodiment, detecting a target vanishing point from the target image includes:
detecting a plurality of line segments from the target image;
calculating a slope angle of each of the plurality of line segments;
screening each target line segment along the depth of field direction in the target image from the plurality of line segments according to the slope angle of each line segment in the plurality of line segments;
and determining the intersection point of the extension lines of all the target line segments as the target vanishing point of the target image.
In a possible implementation manner, the screening of each target line segment along the depth of field direction in the target image from the plurality of line segments according to the slope angle of each line segment in the plurality of line segments includes:
respectively judging whether the slope angle of each line segment in the line segments is smaller than or equal to a preset angle;
and determining the line segment with the slope angle smaller than or equal to the preset angle as the target line segment.
In a possible implementation, after detecting a plurality of line segments from the target image, the visual positioning method further includes:
calculating the number of pixel points contained in each line segment in the plurality of line segments;
respectively judging whether the number of pixel points of each line segment in the line segments is greater than or equal to a preset number;
the calculating the slope angle of each line segment of the plurality of line segments comprises:
and calculating the slope angle of the line segment of which the number of the pixel points is greater than or equal to the preset number in the plurality of line segments.
In one possible embodiment, for each of the plurality of line segments, the slope angle of each line segment is calculated according to the following steps:
acquiring a starting point coordinate and an end point coordinate of each line segment;
calculating the slope of each line segment according to the starting point coordinate and the end point coordinate of each line segment;
and calculating the slope angle of each line segment according to the slope of each line segment.
In a possible implementation manner, the determining, according to the coordinate position of the target vanishing point in the target image, a removal area and a reserved area corresponding to the target image includes:
determining a dividing line which passes through the target vanishing point and is perpendicular to the depth of field direction on the target image according to the coordinate position of the target vanishing point in the target image;
determining an area with the preset picture on one side of the dividing line as the removal area; and determining the region except the removal region in the target image as the reserved region.
In a possible embodiment, the preset picture comprises a picture with both ground and pedestrians.
In one possible embodiment, the dividing the reserved area of the target image into a plurality of grids of equal size includes:
dividing the reserved area into M × N grids with equal size according to a preset grid division rule;
wherein M is the number of grids in the horizontal direction in the reserved area, and N is the number of grids in the vertical direction in the reserved area.
In a possible implementation manner, the screening out, from the visual feature points included in each of the multiple grids, a preset number of visual feature points as credible feature points includes:
calculating, for each mesh of the plurality of meshes, a response value of a visual feature point included in each mesh;
and sequencing the response values of the visual feature points in each grid according to the sequence of the response values from large to small, and determining a first preset number of visual feature points which are ranked at the top as the credible feature points in each grid.
In a possible implementation manner, the screening out, from the visual feature points included in each of the multiple grids, a preset number of visual feature points as credible feature points includes:
calculating, for each mesh of the plurality of meshes, a scale value of a visual feature point included in each mesh;
and sequencing the scale values of the visual feature points in each grid according to the sequence of the scale values from large to small, and determining a second preset number of visual feature points which are ranked at the top as credible feature points in each grid.
In a second aspect, embodiments of the present application further provide a visual positioning device, where the visual positioning device includes:
the extraction module is used for acquiring a target image to be visually positioned and extracting a plurality of visual feature points from the target image;
the detection module is used for detecting a target vanishing point from the target image; the target vanishing point is an intersection point of extension lines of all target line segments detected from the target image along the depth of field direction;
the first determining module is used for determining a removal area and a reserved area corresponding to the target image according to the coordinate position of the target vanishing point in the target image; the removal area comprises an area with a preset picture;
the screening module is used for dividing the reserved area of the target image into a plurality of grids with equal size, and screening out a preset number of visual feature points from the visual feature points contained in each grid of the plurality of grids as credible feature points;
and the second determining module is used for determining the geographical position of the physical scene in the target image according to the credible feature points.
In one possible embodiment, the detection module comprises:
a detection unit configured to detect a plurality of line segments from the target image;
a first calculation unit for calculating a slope angle of each of the plurality of line segments;
the screening unit is used for screening each target line segment in the target image along the depth of field direction from the plurality of line segments according to the slope angle of each line segment in the plurality of line segments;
and the determining unit is used for determining the intersection point of the extension lines of all the target line segments as the target vanishing point of the target image.
In a possible embodiment, the filtering unit is configured to filter out each target line segment according to the following steps:
respectively judging whether the slope angle of each line segment in the line segments is smaller than or equal to a preset angle;
and determining the line segment with the slope angle smaller than or equal to the preset angle as the target line segment.
In a possible implementation, the detection module further includes:
the second calculating unit is used for calculating the number of pixel points contained in each line segment in the line segments;
the judging unit is used for respectively judging whether the number of the pixel points of each line segment in the line segments is greater than or equal to the preset number;
the first calculating unit is specifically configured to calculate slope angles of the line segments of which the number of pixels is greater than or equal to the preset number among the plurality of line segments.
In a possible implementation, the first calculation unit is configured to calculate a slope angle of each line segment according to the following steps:
acquiring a starting point coordinate and an end point coordinate of each line segment;
calculating the slope of each line segment according to the starting point coordinate and the end point coordinate of each line segment;
and calculating the slope angle of each line segment according to the slope of each line segment.
In a possible implementation manner, the first determining module is configured to determine the removed area and the reserved area corresponding to the target image according to the following steps:
determining a dividing line which passes through the target vanishing point and is perpendicular to the depth of field direction on the target image according to the coordinate position of the target vanishing point in the target image;
determining an area with the preset picture on one side of the dividing line as the removal area; and determining the region except the removal region in the target image as the reserved region.
In a possible embodiment, the preset picture comprises a picture with both ground and pedestrians.
In a possible implementation, the screening module is configured to divide the reserved area into a plurality of grids of equal size according to the following steps:
dividing the reserved area into M × N grids with equal size according to a preset grid division rule;
wherein M is the number of grids in the horizontal direction in the reserved area, and N is the number of grids in the vertical direction in the reserved area.
In a possible implementation, the screening module is configured to screen the credible feature points according to the following steps:
calculating, for each mesh of the plurality of meshes, a response value of a visual feature point included in each mesh;
and sequencing the response values of the visual feature points in each grid according to the sequence of the response values from large to small, and determining a first preset number of visual feature points which are ranked at the top as the credible feature points in each grid.
In a possible implementation, the screening module is configured to screen the credible feature points according to the following steps:
calculating, for each mesh of the plurality of meshes, a scale value of a visual feature point included in each mesh;
and sequencing the scale values of the visual feature points in each grid according to the sequence of the scale values from large to small, and determining a second preset number of visual feature points which are ranked at the top as credible feature points in each grid.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor, and when the electronic device is running, the processor and the memory communicate via the bus, and the machine-readable instructions when executed by the processor perform the steps of the visual positioning method according to the first aspect or any one of the possible implementation manners of the first aspect.
In a fourth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the visual positioning method described in the first aspect or any one of the possible implementation manners of the first aspect.
In the embodiments of the application, a plurality of visual feature points are extracted from the target image, and a reserved area and a removal area containing a preset picture are determined according to the target vanishing point detected from the target image. The reserved area is then divided into a plurality of grids of equal size, and a preset number of visual feature points are screened out of the visual feature points contained in each grid as credible feature points. Because the credible feature points are screened only within the reserved area, the number of feature points used for positioning is reduced, their reliability is improved, and the credible feature points are distributed approximately uniformly across the reserved area. The time consumed by positioning is therefore reduced while the success rate of visual positioning is improved, so that the geographical position of the physical scene in the target image can be determined accurately and quickly according to the credible feature points.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a flowchart illustrating a visual positioning method according to an embodiment of the present application;
FIG. 2 is a flow chart of a visual positioning method provided in the second embodiment of the present application;
FIG. 3 is a functional block diagram of a visual positioning apparatus according to a third embodiment of the present application;
FIG. 4 shows a first functional block diagram of the detection module in FIG. 3;
FIG. 5 shows a second functional block diagram of the detection module in FIG. 3;
fig. 6 shows a schematic structural diagram of an electronic device according to a fourth embodiment of the present application.
Detailed Description
To make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and that steps without logical context may be performed in reverse order or concurrently. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
To enable those skilled in the art to use the present disclosure in conjunction with a specific application scenario "utilize visual positioning technology to identify the geographic location of a physical scene in a target image", the following embodiments are given, and it will be apparent to those skilled in the art that the general principles defined herein may be applied to other embodiments and application scenarios without departing from the spirit and scope of the present disclosure.
The following method, apparatus, electronic device or computer-readable storage medium in the embodiments of the present application may be applied to any scene that needs to be visually located, and the embodiments of the present application do not limit specific application scenes, and any scheme that uses the visual locating method and apparatus provided in the embodiments of the present application is within the scope of the present application.
Before the present application was made, the visual feature point extraction methods in existing schemes selected visual feature points mainly by response value and scale value and ignored the influence of the distribution of the visual feature points on the visual positioning success rate, so that some areas contained many visual feature points while others contained few, and the visual positioning success rate was low.
In view of the above problem, in the embodiments of the present application, a plurality of visual feature points are extracted from the target image, and a reserved area and a removal area containing a preset picture are determined according to the target vanishing point detected from the target image. The reserved area is then divided into a plurality of grids of equal size, and a preset number of visual feature points are screened out of the visual feature points contained in each grid as credible feature points. Because the credible feature points are screened only within the reserved area, the number of feature points used for positioning is reduced, their reliability is improved, and the credible feature points are distributed approximately uniformly across the reserved area. The time consumed by positioning is therefore reduced while the success rate of visual positioning is improved, so that the geographical position of the physical scene in the target image can be determined accurately and quickly according to the credible feature points.
The visual positioning technology is a technology for acquiring an image of an object by a visual sensor (a camera, a terminal device, or the like), and then performing image processing by a computer to obtain position information of the object.
For the convenience of understanding of the present application, the technical solutions provided in the present application will be described in detail below with reference to specific embodiments.
Example one
Fig. 1 is a flowchart of a visual positioning method according to an embodiment of the present application. As shown in fig. 1, a visual positioning method provided in an embodiment of the present application includes the following steps:
S101: acquiring a target image to be visually positioned, and extracting a plurality of visual feature points from the target image.
In specific implementation, a target image needing to be visually positioned is obtained first, and a plurality of visual feature points are extracted from the target image through a visual feature extraction method.
Here, the visual feature points may be extracted using any visual feature extraction method commonly used in the art, including, but not limited to, Scale-Invariant Feature Transform (SIFT), Histogram of Oriented Gradients (HOG), and Oriented FAST and Rotated BRIEF (ORB).
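As an illustration, the following is a minimal sketch of this extraction step assuming OpenCV's ORB implementation; the application does not mandate a particular extractor, so the detector choice and the parameter values here are assumptions.

```python
import cv2

def extract_visual_feature_points(image_path: str, max_features: int = 2000):
    """Extract visual feature points from the target image (assumed ORB extractor)."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=max_features)
    # Each cv2.KeyPoint carries .pt (x, y), .response (response value) and
    # .size (scale value), which the later grid-based screening steps use.
    keypoints, descriptors = orb.detectAndCompute(image, None)
    return keypoints, descriptors
```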
In one example, a user hails a car at a railway station through ride-hailing software, but because the corridor area of the railway station is shielded by buildings, the GPS signal is unstable and GPS positioning produces errors of varying degrees, so the user chooses visual positioning technology to locate the current geographical position. Specifically, the user first takes a photo and uploads it to the ride-hailing software; the background server of the ride-hailing software determines from the photo the geographical position of the physical scene it depicts, and then plans a travel route from that position to the pick-up location agreed with the driver, so that the user and the driver can meet quickly.
S102: detecting a target vanishing point from the target image; the target vanishing point is an intersection point of extension lines of each target line segment detected from the target image along the depth of field direction.
In a specific implementation, a vanishing point detection algorithm may be used to detect the target vanishing point from the target image. First, a plurality of types of line segments are detected from the target image; the target line segments, namely the line segments along the depth of field direction in the target image, are then screened out from them.
Here, the vanishing point detection algorithm is essentially a clustering algorithm: it detects the straight line segments in the target image as well as the intersections between them, and once the classification of the line segments is known, the position of a vanishing point can be detected. The target vanishing point detected in the present application is the intersection of the extension lines of the target line segments along the depth of field direction, that is, the point where those extension lines meet.
It should be noted that depth of field refers to the range of distances in front of and behind a subject within which the image formed by a camera lens or other imager remains sharp; it can be understood as the range before and after the focal point within which a clear image appears once focusing is complete.
In one example, the exit corridor of a railway station contains buildings with edges on both sides, transverse beams on the ceiling, longitudinal beams along the depth direction, and the ground. From an image taken of such a corridor, line segments perpendicular to the ground, transverse segments of the ceiling, longitudinal segments along the depth direction, and the like can be detected.
S103: determining a removal area and a reserved area corresponding to the target image according to the coordinate position of the target vanishing point in the target image; the removal area is an area containing a preset picture.
In a specific implementation, after the target vanishing point is detected from the target image, the removal area and the reserved area corresponding to the target image can be determined according to the coordinate position of the target vanishing point in the target image. Specifically, a dividing line passing through the target vanishing point is found, and the target image is divided by this line into two parts: one part is the removal area and the other is the reserved area. The geographical position of the physical scene in the target image is then determined only from the visual feature points detected in the reserved area. On the one hand, because visual positioning is performed only with the credible feature points obtained from the reserved area, which is favorable for visual positioning, the reliability of the feature points used for visual positioning is improved, and the success rate of visual positioning rises; on the other hand, discarding the visual feature points of the removal area reduces the number of feature points and therefore reduces the time consumed by positioning.
Here, the removal area contains a preset picture, and the visual feature points detected in the preset picture are feature points unfavorable for visual positioning, for example, feature points detected in dynamic regions (such as pedestrians) or in regions that repeat continuously across the image (such as the ground); such feature points cause visual ambiguity and lead to positioning errors.
It should be noted that, in a visual positioning task, the reliability of the feature points affects the positioning accuracy and success rate, while the number of feature points affects the positioning speed.
S104: dividing the reserved area of the target image into a plurality of grids with equal size, and respectively screening out a preset number of visual feature points from the visual feature points contained in each grid of the plurality of grids as credible feature points.
In a specific implementation, after the reserved area is determined, visual positioning is performed only with the credible feature points acquired from the reserved area, which is favorable for visual positioning. Specifically, the reserved area is first divided into a plurality of grids of equal size, and from the visual feature points contained in each grid, a preset number of visual feature points are selected with a common feature point screening method as the credible feature points finally used for visual positioning.
S105: determining the geographical position of the physical scene in the target image according to the credible feature points.
In a specific implementation, because the credible feature points are screened only from the visual feature points in the reserved area of the target image, which is favorable for visual positioning, and are distributed approximately uniformly across the reserved area, the geographical position of the physical scene in the target image can be determined accurately through the credible feature points.
In the embodiments of the application, a plurality of visual feature points are extracted from the target image, and a reserved area and a removal area containing a preset picture are determined according to the target vanishing point detected from the target image. The reserved area is then divided into a plurality of grids of equal size, and a preset number of visual feature points are screened out of the visual feature points contained in each grid as credible feature points. Because the credible feature points are screened only within the reserved area, the number of feature points used for positioning is reduced, their reliability is improved, and the credible feature points are distributed approximately uniformly across the reserved area. The time consumed by positioning is therefore reduced while the success rate of visual positioning is improved, so that the geographical position of the physical scene in the target image can be determined accurately and quickly according to the credible feature points.
Example two
Fig. 2 is a flowchart of a visual positioning method according to a second embodiment of the present application. As shown in fig. 2, a visual positioning method provided in an embodiment of the present application includes the following steps:
S201: acquiring a target image to be visually positioned, and extracting a plurality of visual feature points from the target image.
The description of S201 may refer to the description of S101, and the same technical effect may be achieved, which is not described in detail herein.
S202: detecting a plurality of line segments from the target image, calculating a slope angle of each line segment of the plurality of line segments, screening each target line segment along the depth direction in the target image from the plurality of line segments according to the slope angle of each line segment of the plurality of line segments, and determining an intersection point of extension lines of each target line segment as the target vanishing point of the target image.
In a specific implementation, a vanishing point detection algorithm may be adopted to detect multiple types of line segments from the target image, from which the target line segments along the depth of field direction are to be screened. Specifically, the slope angle of each of the plurality of line segments is calculated first, the target line segments are screened from the plurality of line segments according to those slope angles, and the intersection of the extension lines of the target line segments is then determined as the target vanishing point of the target image.
Further, the screening in step S202 of each target line segment along the depth of field direction in the target image from the plurality of line segments according to the slope angle of each line segment includes the following steps:
respectively judging whether the slope angle of each line segment in the line segments is smaller than or equal to a preset angle; and determining the line segment with the slope angle smaller than or equal to the preset angle as the target line segment.
In a specific implementation, a screening condition on the slope angle, that is, a preset angle, may be set in advance; when target line segments are screened, only a line segment whose slope angle is less than or equal to the preset angle is determined as a target line segment.
It should be noted that, since the slope angle of the target line segment in the depth direction is generally between 1 degree and 10 degrees, the preset angle may be set to 10 degrees, so that the target line segment may be selected from a plurality of line segments.
Further, after detecting a plurality of line segments from the target image, the visual positioning method further includes the following steps:
calculating the number of pixel points contained in each line segment in the plurality of line segments; respectively judging whether the number of pixel points of each line segment in the line segments is greater than or equal to a preset number; and calculating the slope angle of the line segment of which the number of the pixel points is greater than or equal to the preset number in the plurality of line segments.
In a specific implementation, the target line segments do not need to be screened from all detected line segments; they can be screened only from the line segments of high quality, which reduces the amount of calculation and improves the positioning accuracy and speed. Specifically, the high-quality line segments are first screened from all detected line segments: the number of pixel points contained in each line segment is calculated, and the line segments whose number of pixel points is greater than or equal to the preset number are determined as high-quality line segments. The target line segments are then screened only from these high-quality line segments.
It should be noted that the more pixel points a line segment contains, that is, the longer the line segment, the better its stability in vanishing point detection. The preset number is mainly set to ensure the stability of the line segments and can be set according to actual needs.
Further, for each of the plurality of line segments, calculating a slope angle for each line segment according to the following steps:
acquiring a starting point coordinate and an end point coordinate of each line segment; calculating the slope of each line segment according to the starting point coordinate and the end point coordinate of each line segment; and calculating the slope angle of each line segment according to the slope of each line segment.
In a specific implementation, the slope of each line segment may be calculated according to the start point coordinate and the end point coordinate of the line segment, and then the slope angle of the line segment may be calculated according to the slope of the line segment.
In one example, if the start point and end point coordinates of a line segment a are a(0, 0) and b(1, 1), the slope of line segment a is k = (1 - 0)/(1 - 0) = 1, and the slope angle of line segment a is α = arctan 1 = 45 degrees.
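Putting S202 together, the following is a minimal sketch assuming OpenCV's LSD line segment detector (available in recent OpenCV builds). The length threshold (the preset number of pixel points) and the 10-degree preset angle follow the examples in the text, while the least-squares intersection of the extended segments is an illustrative assumption rather than a formula prescribed by the application.

```python
import math
import numpy as np
import cv2

def detect_target_vanishing_point(gray, min_pixels=50, preset_angle_deg=10.0):
    """Detect line segments, screen target segments by length and slope angle,
    and estimate the target vanishing point as the point closest to all of
    their extension lines (least squares)."""
    lsd = cv2.createLineSegmentDetector()
    lines = lsd.detect(gray)[0]              # shape (N, 1, 4): x1, y1, x2, y2
    targets = []
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        # Screen out short (low-quality) segments by approximate pixel count.
        if math.hypot(x2 - x1, y2 - y1) < min_pixels:
            continue
        # Slope angle from the start and end point coordinates.
        angle = abs(math.degrees(math.atan2(y2 - y1, x2 - x1)))
        angle = min(angle, 180.0 - angle)    # fold into [0, 90] degrees
        if angle <= preset_angle_deg:
            targets.append((x1, y1, x2, y2))
    # Each extended segment is a line n . p = c with unit normal n;
    # solve for the point p minimizing the squared distances to all lines.
    A, b = [], []
    for x1, y1, x2, y2 in targets:
        nx, ny = y2 - y1, x1 - x2            # normal to the direction vector
        norm = math.hypot(nx, ny)
        A.append((nx / norm, ny / norm))
        b.append((nx * x1 + ny * y1) / norm)
    vx, vy = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)[0]
    return vx, vy
```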
S203: determining a removal area and a reserved area corresponding to the target image according to the coordinate position of the target vanishing point in the target image; the removal area is an area containing a preset picture.
The description of S203 may refer to the description of S103, and the same technical effect may be achieved, which is not described in detail herein.
Further, in step S203, determining a removal area and a reserved area corresponding to the target image according to the coordinate position of the target vanishing point in the target image, includes the following steps:
step a: and determining a dividing line which passes through the target vanishing point and is perpendicular to the depth of field direction on the target image according to the coordinate position of the target vanishing point in the target image.
In a specific implementation, after the target vanishing point is detected from the target image, a dividing line that divides the target image into the removal area and the reserved area may be drawn on the target image. The target vanishing point is the intersection of the extension lines of the target line segments detected from the target image, and the dividing line can be determined according to the coordinate position of the target vanishing point in the target image; specifically, the dividing line is a straight line passing through the target vanishing point and perpendicular to the depth of field direction, where the depth of field direction can be understood as the shooting direction of the target image, generally the longitudinal direction of the image.
In one example, if the position coordinate of the target vanishing point is M(3, 4), the line y = 4 may be determined as the dividing line, that is, a line perpendicular to the y-axis (the depth of field direction being understood as the y-axis direction) and parallel to the x-axis.
Step b: determining an area with the preset picture on one side of the dividing line as the removal area; and determining the region except the removal region in the target image as the reserved region.
In a specific implementation, the target image may be divided into two parts by the dividing line: the region on one side of the dividing line is the reserved area, and the region on the other side is the removal area; specifically, the region with the preset picture is determined as the removal area.
Here, the preset picture includes a picture with both the ground and the pedestrian.
It should be noted that the preset picture contains a large number of visual feature points that are unfavorable for visual positioning, such as feature points on the ground and feature points on pedestrians. For example, where only the texture of the corridor floor is visible, the tiles near the corridor entrance are almost identical to the tiles near its middle and exit; positioning with such features easily causes positioning errors. Pedestrians are dynamic objects whose features can appear in different areas as they move, which also causes visual ambiguity. For example, if a pedestrian moves forward in the same direction as the sensor, a feature of the pedestrian captured at time t0 also appears in the image at time t1; if that feature is used as a reference, the sensor is considered not to have moved, while in reality both have moved a certain distance in the world coordinate system, leading to inaccurate visual positioning. In the prior art, visual feature points are extracted directly from the target image and credible feature points are screened only by response value or scale value, so a large number of feature points unfavorable for visual positioning in the removal area are retained, the feature points are unevenly distributed across the areas of the target image, and the success rate of visual positioning is low.
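As an illustration of step a and step b, the following minimal sketch splits the image along a horizontal dividing line through the vanishing point; it assumes the depth of field direction is the image's vertical axis and that the preset picture (ground and pedestrians) lies below the dividing line, as in the corridor examples above.

```python
import numpy as np

def split_by_vanishing_point(image: np.ndarray, vanishing_y: float):
    """Split the target image into reserved and removal areas at y = vanishing_y."""
    row = int(round(vanishing_y))
    reserved_area = image[:row, :]   # above the dividing line: kept for positioning
    removal_area = image[row:, :]    # below the line: assumed ground / pedestrians
    return reserved_area, removal_area
```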
S204: dividing the reserved area of the target image into a plurality of grids with equal size, and respectively screening out a preset number of visual feature points from the visual feature points contained in each grid of the plurality of grids as credible feature points.
The description of S204 may refer to the description of S104, and the same technical effect may be achieved, which is not described in detail herein.
Further, the step S204 of dividing the reserved area of the target image into a plurality of grids with equal size includes the following steps:
dividing the reserved area into M × N grids with equal size according to a preset grid division rule; wherein M is the number of grids in the horizontal direction in the reserved area, and N is the number of grids in the vertical direction in the reserved area.
In a specific implementation, the grids divided over the reserved area of the target image are arranged in the form of an M × N matrix, where the grids are of equal size, M grids are arranged along the horizontal direction of the reserved area, and N grids are arranged along the vertical direction of the reserved area.
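For illustration, a minimal sketch of the grid assignment follows; the values of M and N here are assumptions standing in for the preset grid division rule.

```python
def grid_index(x: float, y: float, width: int, height: int, M: int = 8, N: int = 6):
    """Map a feature point (x, y) inside a reserved area of the given size
    to its (column, row) cell among M x N equal-sized grids."""
    col = min(int(x * M / width), M - 1)
    row = min(int(y * N / height), N - 1)
    return col, row
```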
Further, in step S204, a preset number of visual feature points are respectively screened out from the visual feature points included in each of the multiple grids as the credible feature points, and the following two embodiments may be adopted:
The first method is as follows: calculating, for each mesh of the plurality of meshes, a response value of a visual feature point included in each mesh; and sequencing the response values of the visual feature points in each grid according to the sequence of the response values from large to small, and determining a first preset number of visual feature points which are ranked at the top as the credible feature points in each grid.
In a specific implementation, when the credible feature points are screened from the visual feature points in each grid, they can be screened according to the response values of the visual feature points. The response value represents the strength of a visual feature point's feature: the larger the response value, the stronger, and thus the higher the quality of, the feature point. Using higher-quality credible feature points to position the physical scene in the target image improves the success rate and accuracy of visual positioning.
The second method comprises the following steps: calculating, for each mesh of the plurality of meshes, a scale value of a visual feature point included in each mesh; and sequencing the scale values of the visual feature points in each grid according to the sequence of the scale values from large to small, and determining a second preset number of visual feature points which are ranked at the top as credible feature points in each grid.
In a specific implementation, when the credible feature points are screened from the visual feature points in each grid, they can also be screened according to the scale values of the visual feature points. The scale value represents the size of a visual feature point's feature area: the larger the scale value, the more stable the feature point. Using more stable credible feature points to position the physical scene in the target image improves the success rate and accuracy of visual positioning.
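The following minimal sketch implements the first manner (top response values per grid); sorting by kp.size instead of kp.response gives the second manner (scale values). It assumes the grid_index sketch above and OpenCV-style keypoints, and the preset number per grid is an assumed value.

```python
from collections import defaultdict

def screen_credible_points(keypoints, width, height, M=8, N=6, preset_number=5):
    """Keep the top preset_number feature points per grid, ranked by response value."""
    cells = defaultdict(list)
    for kp in keypoints:
        cells[grid_index(kp.pt[0], kp.pt[1], width, height, M, N)].append(kp)
    credible = []
    for points in cells.values():
        points.sort(key=lambda kp: kp.response, reverse=True)
        credible.extend(points[:preset_number])   # top preset number per grid
    return credible
```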
S205: determining the geographical position of the physical scene in the target image according to the credible feature points.
The description of S205 may refer to the description of S105, and the same technical effect may be achieved, which is not described in detail herein.
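The application does not prescribe how the geographical position is computed from the credible feature points; one common realization, sketched here purely as an assumption, matches their descriptors against a database of geo-tagged reference images and returns the position of the best match.

```python
import cv2

def locate(query_descriptors, reference_db):
    """reference_db: iterable of (descriptors, geo_position) pairs (assumed format)."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)  # for binary ORB descriptors
    best_position, best_score = None, -1
    for ref_descriptors, geo_position in reference_db:
        matches = matcher.match(query_descriptors, ref_descriptors)
        good = [m for m in matches if m.distance < 50]  # assumed distance threshold
        if len(good) > best_score:
            best_score, best_position = len(good), geo_position
    return best_position
```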
In the embodiments of the application, a plurality of visual feature points are extracted from the target image, and a reserved area and a removal area containing a preset picture are determined according to the target vanishing point detected from the target image. The reserved area is then divided into a plurality of grids of equal size, and a preset number of visual feature points are screened out of the visual feature points contained in each grid as credible feature points. Because the credible feature points are screened only within the reserved area, the number of feature points used for positioning is reduced, their reliability is improved, and the credible feature points are distributed approximately uniformly across the reserved area. The time consumed by positioning is therefore reduced while the success rate of visual positioning is improved, so that the geographical position of the physical scene in the target image can be determined accurately and quickly according to the credible feature points.
EXAMPLE III
Based on the same application concept, a visual positioning device corresponding to the visual positioning method provided in the first embodiment is also provided in the third embodiment of the present application.
As shown in fig. 3 to 5, fig. 3 is a functional block diagram of a visual positioning apparatus 300 according to the third embodiment of the present application, fig. 4 is a first functional block diagram of the detection module 320 in fig. 3, and fig. 5 is a second functional block diagram of the detection module 320 in fig. 3.
As shown in fig. 3, the visual positioning apparatus 300 includes:
an extraction module 310, configured to obtain a target image to be visually located, and extract a plurality of visual feature points from the target image;
a detection module 320, configured to detect a target vanishing point from the target image; the target vanishing point is an intersection point of extension lines of all target line segments detected from the target image along the depth of field direction;
a first determining module 330, configured to determine, according to a coordinate position of the target vanishing point in the target image, a removal area and a reserved area corresponding to the target image; the removal area comprises an area with a preset picture;
a screening module 340, configured to divide the reserved area of the target image into a plurality of grids of equal size, and to screen out a preset number of visual feature points from the visual feature points contained in each grid of the plurality of grids as credible feature points;
a second determining module 350, configured to determine, according to the credible feature points, the geographical position where the physical scene in the target image is located.
In one possible implementation, as shown in fig. 4, the detection module 320 includes:
a detection unit 321 configured to detect a plurality of line segments from the target image;
a first calculating unit 322 for calculating a slope angle of each of the plurality of line segments;
the screening unit 323 is configured to screen, according to a slope angle of each of the plurality of line segments, each target line segment in the target image along the depth of field direction from the plurality of line segments;
the determining unit 324 is configured to determine an intersection of the extension lines of the target line segments as the target vanishing point of the target image.
In a possible implementation, the filtering unit 323 shown in fig. 4 is configured to filter out the respective target line segments according to the following steps:
respectively judging whether the slope angle of each line segment in the plurality of line segments is smaller than or equal to a preset angle;
and determining the line segment with the slope angle smaller than or equal to the preset angle as the target line segment.
In a possible implementation, as shown in fig. 5, the detection module 320 further includes:
a second calculating unit 325, configured to calculate the number of pixels included in each of the line segments;
a determining unit 326, configured to determine whether the number of pixels in each of the plurality of line segments is greater than or equal to a preset number;
the first calculating unit 322 is specifically configured to calculate slope angles of the line segments of which the number of pixels is greater than or equal to the preset number among the plurality of line segments.
In a possible implementation, the first calculating unit 322 shown in fig. 4 is configured to calculate the slope angle of each line segment according to the following steps:
acquiring a starting point coordinate and an end point coordinate of each line segment;
calculating the slope of each line segment according to the starting point coordinate and the end point coordinate of each line segment;
and calculating the slope angle of each line segment according to the slope of each line segment.
In a possible implementation, the first determining module 330 shown in fig. 3 is configured to determine the removed area and the reserved area corresponding to the target image according to the following steps:
determining a dividing line which passes through the target vanishing point and is perpendicular to the depth of field direction on the target image according to the coordinate position of the target vanishing point in the target image;
determining an area with the preset picture on one side of the dividing line as the removal area; and determining the region except the removal region in the target image as the reserved region.
In a possible embodiment, the preset picture comprises a picture with both ground and pedestrians.
In one possible implementation, the filtering module 340 shown in fig. 3 is configured to divide the reserved area into a plurality of grids with equal size according to the following steps:
dividing the reserved area into M × N grids with equal size according to a preset grid division rule;
wherein M is the number of grids in the horizontal direction in the reserved area, and N is the number of grids in the vertical direction in the reserved area.
In a possible implementation, the filtering module 340 shown in fig. 3 is configured to filter the trusted feature points according to the following steps:
calculating, for each mesh of the plurality of meshes, a response value of a visual feature point included in each mesh;
and sequencing the response values of the visual feature points in each grid according to the sequence of the response values from large to small, and determining a first preset number of visual feature points which are ranked at the top as the credible feature points in each grid.
In a possible implementation, the filtering module 340 shown in fig. 3 is configured to filter the trusted feature points according to the following steps:
calculating, for each mesh of the plurality of meshes, a scale value of a visual feature point included in each mesh;
and sequencing the scale values of the visual feature points in each grid according to the sequence of the scale values from large to small, and determining a second preset number of visual feature points which are ranked at the top as credible feature points in each grid.
In the embodiments of the application, a plurality of visual feature points are extracted from the target image, and a reserved area and a removal area containing a preset picture are determined according to the target vanishing point detected from the target image. The reserved area is then divided into a plurality of grids of equal size, and a preset number of visual feature points are screened out of the visual feature points contained in each grid as credible feature points. Because the credible feature points are screened only within the reserved area, the number of feature points used for positioning is reduced, their reliability is improved, and the credible feature points are distributed approximately uniformly across the reserved area. The time consumed by positioning is therefore reduced while the success rate of visual positioning is improved, so that the geographical position of the physical scene in the target image can be determined accurately and quickly according to the credible feature points.
Example four
Based on the same application concept, referring to fig. 6, a schematic structural diagram of an electronic device 600 provided in the fourth embodiment of the present application includes: a processor 610, a memory 620 and a bus 630, wherein the memory 620 stores machine-readable instructions executable by the processor 610, when the electronic device 600 is running, the processor 610 and the memory 620 communicate via the bus 630, and the machine-readable instructions are executed by the processor 610 to perform the steps of the visual positioning method according to any one of the embodiments.
In particular, the machine readable instructions, when executed by the processor 610, may perform the following:
acquiring a target image to be visually positioned, and extracting a plurality of visual feature points from the target image;
detecting a target vanishing point from the target image; the target vanishing point is an intersection point of extension lines of all target line segments detected from the target image along the depth of field direction;
determining a removal area and a reserved area corresponding to the target image according to the coordinate position of the target vanishing point in the target image; the removal area comprises an area with a preset picture;
dividing the reserved area of the target image into a plurality of grids with equal size, and respectively screening out a preset number of visual feature points from the visual feature points contained in each grid of the plurality of grids as credible feature points;
and determining the geographical position of the entity scene in the target image according to the credible feature point.
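For orientation only, the five steps above can be strung together as in the following minimal Python sketch. The ORB detector is one real OpenCV choice for the feature-extraction step; `detect_vanishing_point`, `split_regions` and `match_to_map` are hypothetical placeholders for operations this application describes, and `screen_credible_points` refers to the screening sketch given earlier.

```python
import cv2

def visual_positioning(image_bgr):
    """Hypothetical end-to-end flow of the visual positioning method."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Extract a plurality of visual feature points (ORB is one possibility).
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints = orb.detect(gray, None)

    # Detect the target vanishing point from depth-direction line segments.
    vx, vy = detect_vanishing_point(gray)                      # placeholder

    # Split the image at the vanishing point into a removal area (preset
    # picture, e.g. ground and pedestrians) and a reserved area.
    reserved_area, removal_area = split_regions(gray.shape, (vx, vy))  # placeholder

    # Grid the reserved area and keep only the top-ranked points per cell
    # (the reserved area's origin coincides with the image origin here).
    in_reserved = [kp for kp in keypoints if kp.pt[1] < vy]
    credible = screen_credible_points(in_reserved, cols=8, rows=4,
                                      region_w=gray.shape[1],
                                      region_h=vy, top_k=5)

    # Match the credible feature points against a prebuilt visual map to
    # obtain the geographical position of the scene in the image.
    return match_to_map(credible)                              # placeholder
```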
In the embodiment of the application, a plurality of visual feature points are extracted from the target image, and a reserved area of the target image and a removal area containing a preset picture are determined according to the target vanishing point detected from the target image. The reserved area is then divided into a plurality of grids of equal size, and a preset number of visual feature points are screened out of the visual feature points contained in each grid as credible feature points. Screening the credible feature points only within the reserved area reduces the number of credible feature points used for positioning, improves their reliability, and keeps the credible feature points approximately uniformly distributed across the reserved area, so that the success rate of visual positioning is improved while the time consumed by positioning is reduced. In this way, the geographical position of the entity scene in the target image can be determined accurately and quickly according to the credible feature points.
Example Five
Based on the same application concept, a fifth embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the visual positioning method provided in the first embodiment are performed.
Specifically, the storage medium may be a general-purpose storage medium such as a removable disk or a hard disk. When the computer program on the storage medium is run, the visual positioning method described above can be executed. By screening the credible feature points only within the reserved area, the number of credible feature points used for positioning is reduced and their reliability is improved, and the credible feature points are kept approximately uniformly distributed across the reserved area, so that the success rate of visual positioning is improved while the time consumed by positioning is reduced. In this way, the geographical position of the entity scene in the target image can be determined accurately and quickly according to the credible feature points.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only one logical division, and other divisions are possible in actual implementation; for instance, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections of devices or units through communication interfaces, and may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (22)

1. A visual positioning method, characterized in that the visual positioning method comprises:
acquiring a target image to be visually positioned, and extracting a plurality of visual feature points from the target image;
detecting a target vanishing point from the target image; the target vanishing point is an intersection point of extension lines of all target line segments detected from the target image along the depth of field direction;
determining a removal area and a reserved area corresponding to the target image according to the coordinate position of the target vanishing point in the target image; the removal area comprises an area with a preset picture;
dividing the reserved area of the target image into a plurality of grids of equal size, and screening out a preset number of visual feature points from the visual feature points contained in each grid of the plurality of grids as credible feature points;
and determining the geographical position of the entity scene in the target image according to the credible feature points.
2. The visual positioning method of claim 1, wherein detecting a target vanishing point from the target image comprises:
detecting a plurality of line segments from the target image;
calculating a slope angle of each of the plurality of line segments;
screening each target line segment along the depth of field direction in the target image from the plurality of line segments according to the slope angle of each line segment in the plurality of line segments;
and determining the intersection point of the extension lines of all the target line segments as the target vanishing point of the target image.
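As an illustration of claim 2 only: with noisy detections the extension lines of several target segments rarely meet in one exact point, so one reasonable reading of "intersection point of extension lines" is the point minimizing the summed squared distance to all the lines. A minimal NumPy sketch under that assumption:

```python
import numpy as np

def vanishing_point_from_segments(segments):
    """Least-squares intersection of the extension lines of the target
    segments; `segments` is a list of ((x1, y1), (x2, y2)) endpoint pairs."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for (x1, y1), (x2, y2) in segments:
        d = np.array([x2 - x1, y2 - y1], dtype=float)
        d /= np.linalg.norm(d)
        # Projector onto the normal of the segment's direction: the
        # residual P @ (x - p) is the offset of x from the infinite line.
        P = np.eye(2) - np.outer(d, d)
        A += P
        b += P @ np.array([x1, y1], dtype=float)
    # Normal equations of sum_i ||P_i (x - p_i)||^2; requires at least
    # two non-parallel segments for A to be invertible.
    return np.linalg.solve(A, b)   # (x, y) of the vanishing point
```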
3. The visual positioning method of claim 2, wherein the screening, from the plurality of line segments, of each target line segment in the target image along the depth of field direction according to the slope angle of each line segment of the plurality of line segments comprises:
respectively judging whether the slope angle of each line segment in the line segments is smaller than or equal to a preset angle;
and determining the line segment with the slope angle smaller than or equal to the preset angle as the target line segment.
4. The visual positioning method according to claim 2, wherein after detecting a plurality of line segments from the target image, the visual positioning method further comprises:
calculating the number of pixel points contained in each line segment of the plurality of line segments;
respectively judging whether the number of pixel points of each line segment of the plurality of line segments is greater than or equal to a preset number;
the calculating of the slope angle of each line segment of the plurality of line segments comprises:
calculating the slope angle of those line segments, among the plurality of line segments, whose number of pixel points is greater than or equal to the preset number.
5. The visual positioning method of claim 2, wherein for each of the plurality of line segments, the slope angle of each line segment is calculated according to the following steps:
acquiring a starting point coordinate and an end point coordinate of each line segment;
calculating the slope of each line segment according to the starting point coordinate and the end point coordinate of each line segment;
and calculating the slope angle of each line segment according to the slope of each line segment.
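Claims 3 to 5 together amount to a simple per-segment filter. A minimal Python sketch follows; the angle and pixel-count thresholds are illustrative assumptions, since the application leaves the preset angle and preset number unspecified.

```python
import math

def select_target_segments(segments, preset_angle_deg=45.0, preset_count=30):
    """Keep segments that are long enough and run roughly along the
    depth of field direction; `segments` holds ((x1, y1), (x2, y2)) pairs."""
    targets = []
    for (x1, y1), (x2, y2) in segments:
        # Claim 4: the pixel count of a rasterized segment is roughly its
        # Chebyshev length; discard segments below the preset number.
        if max(abs(x2 - x1), abs(y2 - y1)) + 1 < preset_count:
            continue
        # Claim 5: slope angle from the endpoint coordinates (atan2 folds
        # the slope computation and the angle computation into one step).
        slope_angle = math.degrees(math.atan2(abs(y2 - y1), abs(x2 - x1)))
        # Claim 3: keep segments whose slope angle does not exceed the
        # preset angle.
        if slope_angle <= preset_angle_deg:
            targets.append(((x1, y1), (x2, y2)))
    return targets
```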
6. The visual positioning method of claim 1, wherein the determining the corresponding removed area and the reserved area of the target image according to the coordinate position of the target vanishing point in the target image comprises:
determining a dividing line which passes through the target vanishing point and is perpendicular to the depth of field direction on the target image according to the coordinate position of the target vanishing point in the target image;
determining an area with the preset picture on one side of the dividing line as the removal area; and determining the region of the target image other than the removal area as the reserved area.
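A minimal sketch of claim 6 follows, assuming a roughly level camera so that the preset picture (ground, pedestrians) lies below the dividing line; representing the two areas as rectangles is an illustrative choice, not something fixed by the application.

```python
def split_regions(image_shape, vanishing_point):
    """Split the image at a horizontal dividing line through the target
    vanishing point; returns (reserved_area, removal_area) as
    (x, y, w, h) rectangles."""
    h, w = image_shape[:2]
    # The dividing line is perpendicular to the depth of field direction,
    # i.e. the image row passing through the vanishing point.
    y_div = int(round(vanishing_point[1]))
    y_div = max(0, min(y_div, h))
    removal_area = (0, y_div, w, h - y_div)  # side with the preset picture
    reserved_area = (0, 0, w, y_div)         # everything else
    return reserved_area, removal_area
```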
7. The visual positioning method of claim 1, wherein the preset picture comprises a picture containing both the ground and pedestrians.
8. The visual positioning method of claim 1, wherein the dividing of the reserved area of the target image into a plurality of grids of equal size comprises:
dividing the reserved area into M × N grids of equal size according to a preset grid division rule;
wherein M is the number of grids in the horizontal direction in the reserved area, and N is the number of grids in the vertical direction in the reserved area.
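A minimal sketch of the M × N division of claim 8; the exact preset grid division rule is not fixed by the application, so plain equal partitioning with rounding to integer pixel boundaries is assumed here.

```python
def make_grid(region_w, region_h, m, n):
    """Divide a region of size region_w x region_h into M x N equal-size
    cells; M counts cells horizontally and N vertically, as in claim 8.
    Returns (x, y, w, h) tuples in row-major order."""
    xs = [round(i * region_w / m) for i in range(m + 1)]
    ys = [round(j * region_h / n) for j in range(n + 1)]
    return [(xs[i], ys[j], xs[i + 1] - xs[i], ys[j + 1] - ys[j])
            for j in range(n) for i in range(m)]
```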
9. The visual positioning method according to claim 1, wherein the screening out of a preset number of visual feature points from the visual feature points contained in each grid of the plurality of grids as the credible feature points comprises:
calculating, for each grid of the plurality of grids, a response value of each visual feature point contained in the grid;
and sorting the response values of the visual feature points in each grid in descending order, and determining a first preset number of top-ranked visual feature points as the credible feature points in the grid.
10. The visual positioning method according to claim 1, wherein the screening out of a preset number of visual feature points from the visual feature points contained in each grid of the plurality of grids as the credible feature points comprises:
calculating, for each grid of the plurality of grids, a scale value of each visual feature point contained in the grid;
and sorting the scale values of the visual feature points in each grid in descending order, and determining a second preset number of top-ranked visual feature points as the credible feature points in the grid.
11. A visual positioning device, comprising:
the extraction module is used for acquiring a target image to be visually positioned and extracting a plurality of visual feature points from the target image;
the detection module is used for detecting a target vanishing point from the target image; the target vanishing point is an intersection point of extension lines of all target line segments detected from the target image along the depth of field direction;
the first determining module is used for determining a removal area and a reserved area corresponding to the target image according to the coordinate position of the target vanishing point in the target image; the removal area comprises an area with a preset picture;
the screening module is used for dividing the reserved area of the target image into a plurality of grids of equal size, and screening out a preset number of visual feature points from the visual feature points contained in each grid of the plurality of grids as credible feature points;
and the second determining module is used for determining the geographical position of the entity scene in the target image according to the credible feature point.
12. The visual positioning apparatus of claim 11, wherein the detection module comprises:
a detection unit configured to detect a plurality of line segments from the target image;
a first calculation unit for calculating a slope angle of each of the plurality of line segments;
the screening unit is used for screening each target line segment in the target image along the depth of field direction from the plurality of line segments according to the slope angle of each line segment in the plurality of line segments;
and the determining unit is used for determining the intersection point of the extension lines of all the target line segments as the target vanishing point of the target image.
13. The visual positioning apparatus of claim 12, wherein the screening unit is configured to screen out each target line segment according to the following steps:
respectively judging whether the slope angle of each line segment in the line segments is smaller than or equal to a preset angle;
and determining the line segment with the slope angle smaller than or equal to the preset angle as the target line segment.
14. The visual positioning apparatus of claim 12, wherein the detection module further comprises:
the second calculating unit is used for calculating the number of pixel points contained in each line segment of the plurality of line segments;
the judging unit is used for respectively judging whether the number of pixel points of each line segment of the plurality of line segments is greater than or equal to the preset number;
the first calculating unit is specifically configured to calculate slope angles of the line segments of which the number of pixels is greater than or equal to the preset number among the plurality of line segments.
15. The visual positioning apparatus of claim 12, wherein the first computing unit is configured to compute the slope angle of each line segment according to the following steps:
acquiring a starting point coordinate and an end point coordinate of each line segment;
calculating the slope of each line segment according to the starting point coordinate and the end point coordinate of each line segment;
and calculating the slope angle of each line segment according to the slope of each line segment.
16. The visual positioning apparatus of claim 11, wherein the first determining module is configured to determine the removed area and the reserved area corresponding to the target image according to the following steps:
determining a dividing line which passes through the target vanishing point and is perpendicular to the depth of field direction on the target image according to the coordinate position of the target vanishing point in the target image;
determining an area with the preset picture on one side of the dividing line as the removal area; and determining the region of the target image other than the removal area as the reserved area.
17. The visual positioning apparatus of claim 11, wherein the preset picture comprises a picture containing both the ground and pedestrians.
18. The visual positioning apparatus of claim 11, wherein the screening module is configured to divide the reserved area into a plurality of grids of equal size according to the following steps:
dividing the reserved area into M × N grids of equal size according to a preset grid division rule;
wherein M is the number of grids in the horizontal direction in the reserved area, and N is the number of grids in the vertical direction in the reserved area.
19. The visual positioning apparatus of claim 11, wherein the screening module is configured to screen out the credible feature points according to the following steps:
calculating, for each grid of the plurality of grids, a response value of each visual feature point contained in the grid;
and sorting the response values of the visual feature points in each grid in descending order, and determining a first preset number of top-ranked visual feature points as the credible feature points in the grid.
20. The visual positioning apparatus of claim 11, wherein the screening module is configured to screen out the credible feature points according to the following steps:
calculating, for each grid of the plurality of grids, a scale value of each visual feature point contained in the grid;
and sorting the scale values of the visual feature points in each grid in descending order, and determining a second preset number of top-ranked visual feature points as the credible feature points in the grid.
21. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is run, the machine-readable instructions when executed by the processor performing the steps of the visual positioning method of any of claims 1 to 10.
22. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, performs the steps of the visual positioning method of any one of claims 1 to 10.
CN201911404674.7A 2019-12-31 2019-12-31 Visual positioning method and device, electronic equipment and readable storage medium Pending CN111862206A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911404674.7A CN111862206A (en) 2019-12-31 2019-12-31 Visual positioning method and device, electronic equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN111862206A true CN111862206A (en) 2020-10-30

Family

ID=72970778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911404674.7A Pending CN111862206A (en) 2019-12-31 2019-12-31 Visual positioning method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111862206A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180136057A (en) * 2017-06-14 2018-12-24 현대모비스 주식회사 Camera angle estimation method for around view monitoring system
CN110503740A (en) * 2018-05-18 2019-11-26 杭州海康威视数字技术股份有限公司 A kind of vehicle-state determination method, device, computer equipment and system
CN108961182A (en) * 2018-06-25 2018-12-07 北京大学 Vertical direction vanishing point detection method and video positive twist method for video image
CN109146972A (en) * 2018-08-21 2019-01-04 南京师范大学镇江创新发展研究院 Vision navigation method based on rapid characteristic points extraction and gridding triangle restriction
CN109900254A (en) * 2019-03-28 2019-06-18 合肥工业大学 A kind of the road gradient calculation method and its computing device of monocular vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Wuwei; JIANG Yuting; TAN Dongkui: "A Fast Lane Line Recognition Algorithm Based on Edge Point Projection", Automotive Engineering, no. 03, pages 1-3 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114638774A (en) * 2020-12-01 2022-06-17 珠海碳云智能科技有限公司 Image data processing method and device, and nonvolatile storage medium
CN114638774B (en) * 2020-12-01 2024-02-02 珠海碳云智能科技有限公司 Image data processing method and device and nonvolatile storage medium
WO2022160101A1 (en) * 2021-01-26 2022-08-04 深圳市大疆创新科技有限公司 Orientation estimation method and apparatus, movable platform, and readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination