CN111553342B - Visual positioning method, visual positioning device, computer equipment and storage medium - Google Patents

Visual positioning method, visual positioning device, computer equipment and storage medium

Info

Publication number
CN111553342B
CN111553342B (application CN202010249184.0A)
Authority
CN
China
Prior art keywords: image, moving object, road sign, interest, region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010249184.0A
Other languages
Chinese (zh)
Other versions
CN111553342A (en)
Inventor
王溯恺
黄淮扬
王恒立
蔡培德
薛博桓
于洋
王鲁佳
刘明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yiqing Innovation Technology Co ltd
Original Assignee
Shenzhen Yiqing Innovation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yiqing Innovation Technology Co ltd filed Critical Shenzhen Yiqing Innovation Technology Co ltd
Priority to CN202010249184.0A priority Critical patent/CN111553342B/en
Publication of CN111553342A publication Critical patent/CN111553342A/en
Application granted granted Critical
Publication of CN111553342B publication Critical patent/CN111553342B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a visual positioning method, a visual positioning device, computer equipment and a storage medium. The method comprises the following steps: acquiring a first position coordinate of a road sign within the visual area range of a moving object, wherein the first position coordinate is a coordinate in the world coordinate system; acquiring an image of the road sign within the visual area range captured by an image capturing device; calculating pixel coordinates of the road sign feature points in the image; determining the pose of the image capturing device according to the pixel coordinates and the first position coordinate; and calculating a second position coordinate of the moving object in the world coordinate system according to the pose and the pose transformation relation between the image capturing device and the moving object. By adopting the method, the accuracy of visual positioning can be improved.

Description

Visual positioning method, visual positioning device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision, and in particular to a visual positioning method and apparatus, a computer device, and a storage medium based on road signs.
Background
With the development of computer and digital image processing technologies, visual positioning technology has emerged and is now widely applied in fields such as robotics and unmanned aerial vehicles. However, current visual odometry methods based on consecutive frames accumulate positioning error as the system runs, which makes the positioning increasingly inaccurate.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a visual positioning method, apparatus, computer device and storage medium capable of positioning accurately in application environments that lack stable and distinctive visual feature points.
A visual positioning method, the method comprising:
acquiring a first position coordinate of a road sign within the visual area range of a moving object, wherein the first position coordinate is a coordinate in the world coordinate system;
acquiring an image of the road sign within the visual area range captured by the image capturing device;
calculating pixel coordinates of the road sign feature points in the image;
determining the pose of the image capturing device according to the pixel coordinates and the first position coordinate;
and calculating a second position coordinate of the moving object in the world coordinate system according to the pose and the pose transformation relation between the image capturing device and the moving object.
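The final step composes the camera pose with the fixed camera-to-object transform; it can be sketched with 4x4 homogeneous transformation matrices (the matrix names and numbers below are illustrative assumptions, not values from the application):

```python
import numpy as np

def body_position_in_world(T_wc, T_cb):
    """Compose the camera pose T_wc (camera frame -> world frame) with
    the fixed extrinsic transform T_cb (body frame -> camera frame) to
    obtain the moving object's position in the world coordinate system."""
    T_wb = T_wc @ T_cb      # body frame -> world frame
    return T_wb[:3, 3]      # translation part = body origin in world

# Illustrative numbers: camera 2 m above the world origin,
# body origin 0.5 m below the camera along the camera z-axis,
# so the body sits at height 1.5 m.
T_wc = np.eye(4); T_wc[2, 3] = 2.0
T_cb = np.eye(4); T_cb[2, 3] = -0.5
print(body_position_in_world(T_wc, T_cb))
```

The pose transformation relation `T_cb` is fixed by how the image capturing device is mounted on the moving object, so it can be calibrated once in advance.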
In one embodiment, before the acquiring of the first position coordinate of the road sign within the visual area range of the moving object, the method further includes:
detecting the angular velocity and acceleration of the moving object by a sensor;
calculating an estimated position of the moving object according to the angular velocity, the acceleration and an initial position of the moving object;
the acquiring of the first position coordinate of the road sign within the visual area range of the moving object then includes:
determining the number of a road sign within the visual area range of the estimated position of the moving object;
and acquiring the first position coordinate of the road sign with that number.
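The sensor-based position estimation above can be sketched as planar dead reckoning, assuming (hypothetically) a single yaw-rate gyro and a forward accelerometer sampled at a fixed interval:

```python
import math

def dead_reckon(x0, y0, heading0, samples, dt):
    """Integrate yaw rate (rad/s) and forward acceleration (m/s^2) to
    estimate a planar position.  `samples` is a list of
    (angular_velocity, acceleration) pairs taken every `dt` seconds."""
    x, y, heading, speed = x0, y0, heading0, 0.0
    for w, a in samples:
        heading += w * dt                    # integrate angular velocity
        speed += a * dt                      # integrate acceleration
        x += speed * math.cos(heading) * dt  # advance along heading
        y += speed * math.sin(heading) * dt
    return x, y, heading

# Two seconds of 1 m/s^2 straight-line acceleration from rest.
print(dead_reckon(0.0, 0.0, 0.0, [(0.0, 1.0), (0.0, 1.0)], 1.0))  # -> (3.0, 0.0, 0.0)
```

Such an estimate drifts over time, which is why the method later corrects it with the road-sign-based second position coordinate.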
In one embodiment, after the second position coordinate of the moving object in the world coordinate system is calculated, the method further includes:
updating the estimated position with the second position coordinate.
In one embodiment, the calculating of the pixel coordinates of the road sign feature points in the image includes:
determining a region of interest in the image according to the estimated position;
extracting a region-of-interest image from the image according to the region of interest;
determining whether the region-of-interest image contains a complete road sign;
if not, adjusting the region of interest and re-extracting a region-of-interest image that contains the complete road sign;
and calculating the pixel coordinates of the road sign feature points from the re-extracted region-of-interest image.
In one embodiment, the determining whether the region-of-interest image contains a complete road sign includes:
performing binarization on the region-of-interest image to obtain a binarized image;
extracting road sign image features from the binarized image;
comparing the extracted road sign image features with preset road sign features;
and determining whether the region-of-interest image contains a complete road sign according to the comparison result.
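One minimal way to realize this check is to binarize the region-of-interest image and compare a simple extracted feature, here the foreground pixel area, against a preset value; a road sign cut off by the ROI border yields too small an area. The threshold, expected area and tolerance below are illustrative assumptions:

```python
import numpy as np

def roi_contains_complete_sign(roi, threshold=128, expected_area=400, tol=0.2):
    """Binarize a grayscale ROI and compare the extracted foreground
    area against a preset road-sign area.  All numbers are placeholders."""
    binary = (roi >= threshold).astype(np.uint8)   # binarization
    area = int(binary.sum())                       # simple image feature
    return abs(area - expected_area) <= tol * expected_area

roi = np.zeros((50, 50)); roi[10:30, 10:30] = 255  # complete 20x20 sign
print(roi_contains_complete_sign(roi))  # -> True
```

A real system would compare richer features (shape, corner count, aspect ratio) in the same fashion; only the comparison structure is shown here.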
In one embodiment, the adjusting of the region of interest and re-extracting of a region-of-interest image containing the complete road sign includes:
determining the center of gravity of the road sign pattern in the binarized image;
and translating the region of interest within the binarized image, and extracting a region-of-interest image containing the complete road sign from the binarized image according to the translated region of interest.
In one embodiment, the image capturing device is a binocular camera, and the determining of the pose of the image capturing device according to the pixel coordinates and the first position coordinate includes:
calculating the left-eye and right-eye reprojection errors of the binocular camera;
and calculating the pose of the image capturing device according to the reprojection errors.
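The per-point reprojection error that such a pose computation minimizes can be sketched as follows; the intrinsic matrix `K`, the pose `(R, t)` and the example numbers are illustrative assumptions, not values from the application:

```python
import numpy as np

def reprojection_error(K, R, t, point_world, pixel_observed):
    """Project a road-sign point (world frame) into the image using the
    camera intrinsics K and pose (R, t), then return the distance in
    pixels between the projection and the observed pixel coordinate."""
    p_cam = R @ point_world + t          # world -> camera frame
    uv = (K @ p_cam)[:2] / p_cam[2]      # perspective projection
    return float(np.linalg.norm(uv - pixel_observed))

# A point 2 m straight ahead projects to the principal point, so the
# error against an observation at the principal point is zero.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
err_left = reprojection_error(K, np.eye(3), np.zeros(3),
                              np.array([0.0, 0.0, 2.0]),
                              np.array([320.0, 240.0]))
print(err_left)  # -> 0.0
```

For the binocular camera this function is evaluated once per eye (with each eye's intrinsics and pose), and the pose minimizing the summed left and right errors is taken as the pose of the image capturing device.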
A visual positioning device, the device comprising:
an acquisition module, configured to acquire a first position coordinate of a road sign within the visual area range of a moving object, wherein the first position coordinate is a coordinate in the world coordinate system;
the acquisition module is further configured to acquire an image of the road sign within the visual area range captured by the image capturing device;
a calculation module, configured to calculate pixel coordinates of the road sign feature points in the image;
a determination module, configured to determine the pose of the image capturing device according to the pixel coordinates and the first position coordinate;
the calculation module is further configured to calculate a second position coordinate of the moving object in the world coordinate system according to the pose and the pose transformation relation between the image capturing device and the moving object.
In one embodiment, the device further comprises:
a detection module, configured to detect the angular velocity and acceleration of the moving object by a sensor;
the calculation module is further configured to calculate an estimated position of the moving object according to the angular velocity, the acceleration and an initial position of the moving object;
the acquisition module is further configured to determine the number of a road sign within the visual area range of the estimated position of the moving object, and to acquire the first position coordinate of the road sign with that number.
In one embodiment, the device further comprises:
an updating module, configured to update the estimated position with the second position coordinate.
In one embodiment, the calculation module is further configured to:
determine a region of interest in the image according to the estimated position;
extract a region-of-interest image from the image according to the region of interest;
determine whether the region-of-interest image contains a complete road sign;
if not, adjust the region of interest and re-extract a region-of-interest image that contains the complete road sign;
and calculate the pixel coordinates of the road sign feature points from the re-extracted region-of-interest image.
In one embodiment, the calculation module is further configured to:
perform binarization on the region-of-interest image to obtain a binarized image;
extract road sign image features from the binarized image;
compare the extracted road sign image features with preset road sign features;
and determine whether the region-of-interest image contains a complete road sign according to the comparison result.
In one embodiment, the calculation module is further configured to:
determine the center of gravity of the road sign pattern in the binarized image;
and translate the region of interest within the binarized image, and extract a region-of-interest image containing the complete road sign from the binarized image according to the translated region of interest.
In one embodiment, the image capturing device is a binocular camera, and the calculation module is further configured to:
calculate the left-eye and right-eye reprojection errors of the binocular camera;
and calculate the pose of the image capturing device according to the reprojection errors.
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements a visual positioning method comprising:
acquiring a first position coordinate of a road sign within the visual area range of a moving object, wherein the first position coordinate is a coordinate in the world coordinate system;
acquiring an image of the road sign within the visual area range captured by the image capturing device;
calculating pixel coordinates of the road sign feature points in the image;
determining the pose of the image capturing device according to the pixel coordinates and the first position coordinate;
and calculating a second position coordinate of the moving object in the world coordinate system according to the pose and the pose transformation relation between the image capturing device and the moving object.
In one embodiment, before the acquiring of the first position coordinate of the road sign within the visual area range of the moving object, the method further includes:
detecting the angular velocity and acceleration of the moving object by a sensor;
calculating an estimated position of the moving object according to the angular velocity, the acceleration and an initial position of the moving object;
the acquiring of the first position coordinate of the road sign within the visual area range of the moving object then includes:
determining the number of a road sign within the visual area range of the estimated position of the moving object;
and acquiring the first position coordinate of the road sign with that number.
In one embodiment, after the second position coordinate of the moving object in the world coordinate system is calculated, the method further includes:
updating the estimated position with the second position coordinate.
In one embodiment, the calculating of the pixel coordinates of the road sign feature points in the image includes:
determining a region of interest in the image according to the estimated position;
extracting a region-of-interest image from the image according to the region of interest;
determining whether the region-of-interest image contains a complete road sign;
if not, adjusting the region of interest and re-extracting a region-of-interest image that contains the complete road sign;
and calculating the pixel coordinates of the road sign feature points from the re-extracted region-of-interest image.
In one embodiment, the determining whether the region-of-interest image contains a complete road sign includes:
performing binarization on the region-of-interest image to obtain a binarized image;
extracting road sign image features from the binarized image;
comparing the extracted road sign image features with preset road sign features;
and determining whether the region-of-interest image contains a complete road sign according to the comparison result.
In one embodiment, the adjusting of the region of interest and re-extracting of a region-of-interest image containing the complete road sign includes:
determining the center of gravity of the road sign pattern in the binarized image;
and translating the region of interest within the binarized image, and extracting a region-of-interest image containing the complete road sign from the binarized image according to the translated region of interest.
In one embodiment, the image capturing device is a binocular camera, and the determining of the pose of the image capturing device according to the pixel coordinates and the first position coordinate includes:
calculating the left-eye and right-eye reprojection errors of the binocular camera;
and calculating the pose of the image capturing device according to the reprojection errors.
A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements a visual positioning method comprising:
acquiring a first position coordinate of a road sign within the visual area range of a moving object, wherein the first position coordinate is a coordinate in the world coordinate system;
acquiring an image of the road sign within the visual area range captured by the image capturing device;
calculating pixel coordinates of the road sign feature points in the image;
determining the pose of the image capturing device according to the pixel coordinates and the first position coordinate;
and calculating a second position coordinate of the moving object in the world coordinate system according to the pose and the pose transformation relation between the image capturing device and the moving object.
In one embodiment, before the acquiring of the first position coordinate of the road sign within the visual area range of the moving object, the method further includes:
detecting the angular velocity and acceleration of the moving object by a sensor;
calculating an estimated position of the moving object according to the angular velocity, the acceleration and an initial position of the moving object;
the acquiring of the first position coordinate of the road sign within the visual area range of the moving object then includes:
determining the number of a road sign within the visual area range of the estimated position of the moving object;
and acquiring the first position coordinate of the road sign with that number.
In one embodiment, after the second position coordinate of the moving object in the world coordinate system is calculated, the method further includes:
updating the estimated position with the second position coordinate.
In one embodiment, the calculating of the pixel coordinates of the road sign feature points in the image includes:
determining a region of interest in the image according to the estimated position;
extracting a region-of-interest image from the image according to the region of interest;
determining whether the region-of-interest image contains a complete road sign;
if not, adjusting the region of interest and re-extracting a region-of-interest image that contains the complete road sign;
and calculating the pixel coordinates of the road sign feature points from the re-extracted region-of-interest image.
In one embodiment, the determining whether the region-of-interest image contains a complete road sign includes:
performing binarization on the region-of-interest image to obtain a binarized image;
extracting road sign image features from the binarized image;
comparing the extracted road sign image features with preset road sign features;
and determining whether the region-of-interest image contains a complete road sign according to the comparison result.
In one embodiment, the adjusting of the region of interest and re-extracting of a region-of-interest image containing the complete road sign includes:
determining the center of gravity of the road sign pattern in the binarized image;
and translating the region of interest within the binarized image, and extracting a region-of-interest image containing the complete road sign from the binarized image according to the translated region of interest.
In one embodiment, the image capturing device is a binocular camera, and the determining of the pose of the image capturing device according to the pixel coordinates and the first position coordinate includes:
calculating the left-eye and right-eye reprojection errors of the binocular camera;
and calculating the pose of the image capturing device according to the reprojection errors.
According to the visual positioning method, visual positioning device, computer equipment and storage medium described above, an image of the road sign within the visual area range is captured by the image capturing device, and the pixel coordinates of the road sign in the image are extracted. The pose of the image capturing device is obtained by computing with the road sign pixel coordinates and the pre-surveyed world coordinates, and the position coordinate of the moving object in the world coordinate system is then calculated from that pose and the pose transformation relation between the image capturing device and the moving object. Because each position coordinate is computed from the road signs currently within the visual area range, and those road signs are updated in real time as the object moves, the positioning error does not accumulate over time. In addition, since the road signs are placed at fixed intervals, visual positioning can be carried out from them even when the visual range of the moving object contains few other feature points: the position coordinate of the moving object can still be calculated from the road signs spaced at fixed distances. The method is therefore applicable to environments without stable and distinctive visual feature points, runs stably, and positions with high accuracy.
Drawings
FIG. 1 is a schematic view of an application environment of a visual positioning method in one embodiment;
FIG. 2 is a flow chart of a visual positioning method according to an embodiment;
FIG. 3 is a schematic diagram of a visual positioning system in one embodiment;
FIG. 4 is a flowchart of determining whether an image contains a complete road sign in one embodiment;
FIG. 5 is a flow chart of a visual positioning method according to an embodiment;
FIG. 6 is a flow chart of a visual positioning method according to another embodiment;
FIG. 7 is a block diagram of a visual positioning device in one embodiment;
FIG. 8 is a block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The visual positioning method provided by the application can be applied to the application environment shown in fig. 1, which includes a moving object 102 and a road sign 104. The moving object 102 acquires a first position coordinate of the road sign 104 within its visual area range, wherein the first position coordinate is a coordinate in the world coordinate system; acquires an image of the road sign 104 within the visual area range captured by the image capturing device; calculates pixel coordinates of the road sign feature points in the image; determines the pose of the image capturing device according to the pixel coordinates and the first position coordinate; and calculates a second position coordinate of the moving object in the world coordinate system according to the pose and the pose transformation relation between the image capturing device and the moving object.
The moving object 102 may be an unmanned vehicle, an unmanned aerial vehicle, a robot, or an unmanned ship. The road sign 104 is a marker with an indicating function. It may be a figure drawn on the ground (such as the diamond road sign in fig. 1), a sign standing on the ground, a buoy floating on the water surface, or a marker placed in some other way, such as hung overhead.
In one embodiment, as shown in fig. 2, a method for visually locating a moving object includes the steps of:
s202, acquiring a first position coordinate of a road sign in a visual area range of a moving object, wherein the first position coordinate is a coordinate of a world coordinate system.
The first position coordinates of the road signs in the visual area of the moving object are coordinates in a world coordinate system. The coordinates of the world coordinate system are absolute coordinates, and the coordinates of all points on the screen are determined by the origin of the world coordinate system before the user coordinate system is not established.
The visual area range is the range which can be shot by a camera on the moving object or the range which can be scanned by a radar on the moving object.
In one embodiment, the road sign may be a diamond road sign; the moving object accordingly acquires the first position coordinate of the diamond road sign within its own visual area range.
In one embodiment, the moving object locates itself through a built-in GPS positioning system, or through a mobile terminal arranged on it, to obtain its position information; the number of a road sign within its visual area range can then be determined from this position information, and the first position coordinate of the road sign is obtained from the number.
In another embodiment, the road signs are marked with numbers, and each number corresponds to a first position coordinate measured in advance. The moving object captures an image of the road sign within its visual area range through the image capturing device, recognizes the number in the image using digital image processing, thereby obtaining the number of the road sign within the visual area range, and then obtains the first position coordinate of that road sign from the number.
In one embodiment, before S202, the method further includes the following steps: the moving object detects its angular velocity and acceleration through a sensor; calculates an estimated position of the moving object according to the angular velocity, the acceleration and the initial position of the moving object; determines the number of a road sign within the visual area range of the estimated position; and acquires the first position coordinate of the road sign with that number.
The sensor is a motion-data detection device that measures the angular velocity and acceleration of the moving object. From the measured angular velocity and acceleration, the displacement of the moving object relative to its initial position can be computed, yielding its current (estimated) position. The angular velocity may be, for example, the rotational velocity of the wheels of the moving object.
In one embodiment, the moving object converts the angular velocity into a linear velocity and obtains the estimated position from the linear velocity, the acceleration, and the initial position.
In one embodiment, the sensor includes a gyroscope and an accelerometer. The gyroscope measures the angular velocity about the X, Y and Z axes according to the principle of conservation of angular momentum, and the accelerometer measures the acceleration along the X, Y and Z axes.
In one embodiment, based on the estimated position, the moving object obtains the number of the nearest road sign ahead of it.
In one embodiment, when the road sign numbers and their corresponding position coordinates are stored on the moving object, the moving object, after obtaining the number of a road sign, reads the corresponding first position coordinate from its storage area.
In one embodiment, when the road sign numbers and their corresponding position coordinates are stored on a server, the moving object generates a position coordinate acquisition request carrying the number and sends it to the server; the server looks up the corresponding first position coordinate according to the number in the request and returns it to the moving object.
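Either storage scheme reduces to a number-to-coordinate lookup, whether the table lives on the moving object or behind a server request; a sketch with hypothetical numbers and coordinates:

```python
# Hypothetical pre-surveyed landmark table: sign number -> world coordinate.
LANDMARK_MAP = {
    17: (12.5, 0.0, 0.0),
    18: (15.0, 0.0, 0.0),
}

def first_position_coordinate(number):
    """Return the pre-surveyed world coordinate for a road sign number,
    raising KeyError when the sign is unknown (e.g. mis-recognized)."""
    coord = LANDMARK_MAP.get(number)
    if coord is None:
        raise KeyError(f"no surveyed coordinate for road sign {number}")
    return coord

print(first_position_coordinate(17))  # -> (12.5, 0.0, 0.0)
```

In the server variant, the body of `first_position_coordinate` would be replaced by the request/response exchange described above; the interface seen by the rest of the pipeline stays the same.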
S204, acquiring an image of the road sign within the visual area range captured by the image capturing device.
The image capturing device is part of the visual positioning system of the moving object and is the device that obtains the road sign image. Besides the image capturing device 302, the visual positioning system may include a sensor 304 and a processor 306, as shown in fig. 3.
In one embodiment, the moving object captures images of the road sign through the image capturing device 302, while the sensor 304 collects motion data. The image capturing device 302 and the sensor 304 transmit their acquired data to the processor 306 for processing.
In one embodiment, the image capture device may be a video camera. The camera converts the optical signal into an electrical signal through the photosensitive element.
In one embodiment, the image capturing device may be a lidar. The moving object emits a laser beam through the laser transmitter of the lidar; the beam strikes surrounding objects and is reflected back. The moving object receives the reflected laser through the laser receiver of the lidar, perceives the external environment from the reflections, and reads the point cloud data of the surroundings. By processing the point cloud data, the moving object obtains a grayscale image of its environment.
In one embodiment, the image capturing device may be a binocular camera, i.e. a camera with two lenses. The images captured by the two lenses resemble what a person's left and right eyes see; the two images exhibit parallax, and the farther an object is, the smaller the parallax, while the closer it is, the greater the parallax. The same point therefore has different pixel positions in the left-eye and right-eye images of the binocular camera, and the moving object can calculate the distance between an object and the image capturing device from the object's pixel positions in the left-eye and right-eye images.
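The distance computation relies on the stereo relation Z = f * b / d, where f is the focal length in pixels, b the baseline between the two lenses, and d the disparity between the left and right pixel positions; a sketch with illustrative values:

```python
def depth_from_disparity(f_px, baseline_m, u_left, u_right):
    """Stereo depth Z = f * b / d, with disparity d = u_left - u_right
    in pixels.  The farther the point, the smaller the disparity."""
    d = u_left - u_right
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f_px * baseline_m / d

# Illustrative rig: 700 px focal length, 12 cm baseline, 10 px disparity.
print(depth_from_disparity(700.0, 0.12, 350.0, 340.0))  # ~ 8.4 m
```

The inverse relation also shows why the parallax shrinks with distance: doubling Z halves d for the same rig.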
The camera has a limited field of view, so the area it can capture is bounded. To keep the road sign nearest to the moving object clearly in view for as long as possible, in one embodiment the camera is mounted at a slightly downward-tilted angle.
S206, calculating pixel coordinates of the road sign feature points in the image.
The pixel coordinates of the road sign feature points are their coordinates relative to the image captured by the image capturing device.
In one embodiment, the moving object first performs edge detection on the image to extract the outline of the road sign, then extracts road sign feature points from the outline and calculates their pixel coordinates in the image.
In another embodiment, after the image of the visual area is captured by the image capturing device, the moving object sharpens the image to enhance its edges and gray-level transitions and make the outlines clear. It then extracts the outline of the road sign from the sharpened image with an edge detection algorithm, extracts road sign feature points from the outline, and calculates their pixel coordinates in the image. The edge detection algorithm may be, for example, the Sobel, Roberts, Canny, Prewitt or Laplacian operator.
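As a concrete illustration of one such operator, a minimal pure-NumPy Sobel edge detector (a sketch for clarity, not the application's implementation):

```python
import numpy as np

def sobel_edges(img):
    """Convolve a grayscale image with the 3x3 Sobel kernels and
    return the gradient magnitude at each interior pixel."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # horizontal gradient kernel
    ky = kx.T                                   # vertical gradient kernel
    h, w = img.shape
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = (patch * kx).sum()
            gy = (patch * ky).sum()
            mag[y, x] = np.hypot(gx, gy)
    return mag

# A vertical step edge yields a strong response at the boundary only.
img = np.zeros((5, 5)); img[:, 3:] = 1.0
edges = sobel_edges(img)
```

Production code would use a vectorized or library convolution; the nested loops here just make the operator's definition explicit.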
In one embodiment, the FAST (Features from Accelerated Segment Test) algorithm is used to extract the road sign feature points in the image. The FAST algorithm decides whether a candidate point qualifies as a feature point based on the difference between its gray value and the gray values of the pixels on a circle around it.
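A minimal sketch of the FAST segment test follows, using the standard 16-pixel radius-3 circle and the FAST-9 contiguity criterion; the threshold `t` and the synthetic test image are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Offsets of the 16 pixels on a radius-3 circle, in circular order.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def _max_circular_run(flags):
    """Longest circular run of True values in a list of booleans."""
    if all(flags):
        return len(flags)
    best = cur = 0
    for f in flags + flags:  # doubling the list handles wrap-around
        cur = cur + 1 if f else 0
        best = max(best, cur)
    return best

def is_fast_corner(img, y, x, t=20):
    """FAST-9 segment test: (y, x) is a corner if at least 9 contiguous circle
    pixels are all brighter than p + t or all darker than p - t."""
    p = float(img[y, x])
    darker = [float(img[y + dy, x + dx]) < p - t for dy, dx in CIRCLE]
    brighter = [float(img[y + dy, x + dx]) > p + t for dy, dx in CIRCLE]
    return _max_circular_run(darker) >= 9 or _max_circular_run(brighter) >= 9

# Bright quadrant: its corner passes the test; an edge midpoint and a flat
# interior point do not.
img = np.zeros((21, 21)); img[10:, 10:] = 255
```

At the quadrant corner, 11 contiguous circle pixels are darker than the center, so the test fires; along a straight edge only 7 are, so edges are rejected, which is what makes the segment test a corner detector.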
The image acquired by the image capturing device contains not only the road sign but also irrelevant background. To reduce the interference of this background with the algorithm, in one embodiment the moving object first predicts the region of the image in which the road sign lies and takes that predicted region as the region of interest. The moving object performs edge detection and feature extraction on the region of interest using digital image processing techniques to obtain the road sign feature points, and then calculates their pixel coordinates.
In one embodiment, the moving object determines a region of interest in the image from the first position coordinates of the landmark.
In one embodiment, the moving object determines the region of interest based on its estimated position. Once the estimated position of the moving object is known, the first position coordinate of the road sign nearest to the moving object within the visual area can be determined, and the moving object then derives the region of interest in the image from that first position coordinate.
In a captured image it may happen that part of the road sign lies inside the region of interest while the rest lies outside it. If the region of interest selected from the estimated position does not contain a complete road sign, the moving object adjusts the region of interest so that the road sign lies inside the region-of-interest image, which facilitates the road sign detection performed in the next step.
In one embodiment, the moving object extracts a region-of-interest image from the image acquired by the camera according to the region of interest. Then S402 of fig. 4 is performed, i.e. it is determined whether the region-of-interest image contains a complete road sign. If not, S404 is performed: the region of interest is adjusted and a region-of-interest image containing the complete road sign is re-extracted. If a complete road sign is contained, S406 is performed: the pixel coordinates of the visual feature points of the road sign are calculated.
In one embodiment, the moving object extracts all feature points of the landmarks in the region of interest image according to the extraction rule, and if the number of extracted feature points of the landmarks is less than the preset number, it is determined that the region of interest image does not contain a complete landmark.
In another embodiment, the step of determining whether the region-of-interest image contains a complete road sign specifically includes: the moving object binarizes the region-of-interest image to obtain a binarized image; road sign image features are extracted from the binarized image; the extracted road sign image features are compared with preset road sign features; and whether the region-of-interest image contains a complete road sign is determined from the comparison result. If the extracted road sign features match the preset features, the image acquired by the image capturing device contains a complete road sign; if they do not match, it does not.
The binarization algorithm decides by a threshold whether a pixel belongs to the background or the foreground: a background pixel is assigned 0, and a foreground pixel is assigned 1 (or 255). Because the color and brightness of the road sign differ considerably from the background area, binarizing the region-of-interest image yields a binarized image in which the road sign is separated from the background. In one embodiment, the moving object binarizes the region-of-interest image using an adaptive threshold algorithm, which iterates by computing the average threshold of image regions and applies different thresholds to different parts, giving it stronger robustness.
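A minimal sketch of adaptive thresholding follows, using a neighborhood-mean rule with an integral image for the window sums. The block size and offset `c` are illustrative assumptions; the patent's iterative-mean variant differs in how the per-region threshold is obtained, but the per-region principle is the same.

```python
import numpy as np

def adaptive_threshold(img, block=15, c=5.0):
    """Binarize: a pixel becomes 255 (foreground) where it exceeds the mean of
    its block x block neighborhood minus c, else 0 (background).
    Note: in perfectly flat regions every pixel exceeds (mean - c) and maps to 255."""
    h, w = img.shape
    r = block // 2
    # Integral image (zero-padded) gives each window sum in O(1).
    ii = np.zeros((h + 1, w + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        for x in range(w):
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
            mean = s / ((y1 - y0) * (x1 - x0))
            out[y, x] = 255 if img[y, x] > mean - c else 0
    return out

# Bright road sign (200) on a dark background (0):
img = np.zeros((40, 40)); img[10:30, 10:30] = 200.0
out = adaptive_threshold(img)
```

Pixels inside the bright sign stay at 255 while dark pixels adjacent to the sign's edge are driven to 0, which is the foreground/background separation the paragraph describes.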
S208, determining the pose of the image capturing device according to the pixel coordinates and the first position coordinates.
The pixel coordinates of the road sign are its coordinates within the image of the visual area acquired by the image capturing device, while the first position coordinates of the road sign are its absolute coordinates in the world coordinate system.
In one embodiment, the image capturing device is a binocular camera, and the moving object can calculate the distance from the road sign to the camera from the parallax of the road sign between the left and right images captured by the binocular camera. Since the first position coordinates of the road sign are known, the pose of the camera can then be calculated from the distance of the road sign from the camera.
In one embodiment, a diamond-shaped road sign is adopted; the diamond shape keeps the computational complexity low and gives good real-time performance. The method for calculating the pose of the image capturing device is as follows:
The coordinates of the four corner points of the diamond road sign in the world coordinate system, $P_i\ (i = 1, \dots, 4)$, are known, and their corresponding pixel coordinates in the left- and right-eye images acquired by the binocular camera are $u_i^l$ and $u_i^r$. The pose $T$ of the camera in the world coordinate system is calculated by minimizing the reprojection error from the three-dimensional points to the images. The left-eye and right-eye reprojection errors are expressed as:

$$e_i^l = u_i^l - \pi\left(K_l\, T\, P_i\right), \qquad e_i^r = u_i^r - \pi\left(K_r\, T_{rl}\, T\, P_i\right)$$

where $K_l$ and $K_r$ are the intrinsic matrices of the left and right cameras respectively, $\pi(\cdot)$ denotes perspective projection, and $T_{rl}$ is the pre-calibrated transformation from the left camera to the right camera. Solving this nonlinear least-squares problem with the ceres-solver yields the pose:

$$T^* = \arg\min_{T} \sum_{i=1}^{4}\left(\lVert e_i^l\rVert^2 + \lVert e_i^r\rVert^2\right)$$

where $T^*$ is the estimate of the camera pose obtained by optimization, which is approximately equal to the true value $T$.
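The reprojection errors above can be sketched as follows. The intrinsics, baseline and diamond-corner coordinates are illustrative assumptions, and the actual minimization (done with ceres-solver in the patent) is omitted; the sketch only verifies that the stacked residuals vanish at the true pose.

```python
import numpy as np

def project(K, T, P_w):
    """Pinhole projection of a world point P_w (3,) under 4x4 pose T (world -> camera)."""
    P_c = T[:3, :3] @ P_w + T[:3, 3]
    uvw = K @ P_c
    return uvw[:2] / uvw[2]

def stereo_residuals(K_l, K_r, T_rl, T, pts_w, uv_l, uv_r):
    """Stacked left/right reprojection errors e_l = u_l - pi(K_l T P) and
    e_r = u_r - pi(K_r T_rl T P) for the four diamond corners."""
    res = []
    for P, ul, ur in zip(pts_w, uv_l, uv_r):
        res.append(ul - project(K_l, T, P))
        res.append(ur - project(K_r, T_rl @ T, P))  # right eye via left->right transform
    return np.concatenate(res)

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1.0]])  # assumed intrinsics
T_rl = np.eye(4); T_rl[0, 3] = -0.12      # assumed left->right baseline transform
T_true = np.eye(4); T_true[2, 3] = 5.0    # landmark 5 m in front of the camera
corners = [np.array(p) for p in
           [(0.2, 0, 0), (0, 0.2, 0), (-0.2, 0, 0), (0, -0.2, 0)]]
uv_l = [project(K, T_true, P) for P in corners]
uv_r = [project(K, T_rl @ T_true, P) for P in corners]
res = stereo_residuals(K, K, T_rl, T_true, corners, uv_l, uv_r)
```

An optimizer such as ceres-solver would start from a perturbed `T` and drive these residuals toward zero, yielding the pose estimate $T^*$.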
S210, calculating a second position coordinate of the moving object under the world coordinate system according to the pose and the pose transformation relation between the image capturing device and the moving object.
In one embodiment, the pose of the image capturing device has a certain pose transformation relation with the center position of the moving object, and after the pose of the image capturing device is obtained, the pose of the moving object can be obtained according to the pose transformation relation.
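The fixed-transform step of S210 can be sketched with homogeneous matrices. The mounting offset below is an assumed value: if $T_{bc}$ is the calibrated pose of the camera in the body (moving-object) frame and $T_{wc}$ the camera pose in the world, the body pose follows as $T_{wb} = T_{wc}\, T_{bc}^{-1}$.

```python
import numpy as np

def body_pose(T_wc, T_bc):
    """Pose of the moving object (body) in the world frame, given the camera's
    world pose T_wc and the fixed camera-in-body calibration T_bc."""
    return T_wc @ np.linalg.inv(T_bc)

# Camera mounted 0.5 m ahead of the body centre (assumed), with no rotation:
T_bc = np.eye(4); T_bc[0, 3] = 0.5
T_wc = np.eye(4); T_wc[:3, 3] = [10.0, 2.0, 0.0]
T_wb = body_pose(T_wc, T_bc)
```

The body position is simply the camera position pulled back through the mounting offset, which is the "pose transformation relation" the paragraph refers to.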
In the above embodiment, the image of the road sign in the visual area is collected by the image capturing device, and the pixel coordinates of the road sign in the image are extracted. The pose of the image capturing device is obtained by operating on the road sign pixel coordinates and the predetermined world-coordinate-system coordinates, and the position coordinates of the moving object in the world coordinate system are then calculated from this pose and the pose transformation relation between the image capturing device and the moving object. Because the calculated position coordinates are based on the landmarks currently in the visual area of the moving object, and those landmarks are updated in real time as the object moves, no error accumulates in the calculated position coordinates. In addition, the road signs are placed at fixed intervals; by performing visual positioning against these regularly spaced signs, the position coordinates of the moving object can be calculated even when few other feature points exist in its visual range. The method is therefore suitable for environments without stable, unique visual feature points, runs stably, and achieves high positioning accuracy.
In one embodiment, as shown in fig. 5, a method for visually locating a moving object includes the steps of:
S502, detecting the angular velocity and acceleration of the moving object by the sensor.
S504, calculating the estimated position of the moving object according to the angular velocity, the acceleration and the initial position of the moving object.
S506, determining the road sign number in the visual area range of the estimated position of the moving object.
S508, obtaining the first position coordinates of the numbered road signs.
S510, acquiring an image of the road sign in the visual area range acquired by the image capturing device.
S512, calculating pixel coordinates of the landmark feature points in the image.
S514, determining the pose of the image capturing device according to the pixel coordinates and the first position coordinates.
S516, calculating a second position coordinate of the moving object under the world coordinate system according to the pose and the pose transformation relation between the image capturing device and the moving object.
In S502 to S516, the method by which the moving object calculates the second position coordinate is as described above.
S518, reliability of the road sign feature points is calculated.
Because the estimated position of the moving object, calculated from the angular velocity and acceleration acquired by the sensor, carries a certain error, and this error accumulates over time, the estimated position must be updated to maintain positioning accuracy. During motion, the limited visual area of the image capturing device means the whole road sign cannot always be captured, and the outline obtained by image processing may differ from the actual outline, so the extracted road sign feature points may be unreliable. Consequently, the pose of the image capturing device, calculated from the pixel coordinates of the road sign feature points in the region-of-interest image and their first position coordinates, may be inaccurate. Therefore, the reliability of the road sign feature points is calculated first.
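The inertial dead reckoning behind the estimated position (S504) can be sketched in 2D as below. Planar motion and the timestep are assumptions for illustration; the unbounded growth of this estimate's error is exactly why the visual update is needed.

```python
import math

def dead_reckon(pose, imu, dt):
    """One 2D dead-reckoning step. pose = (x, y, theta, vx, vy);
    imu = (ax_body, ay_body, omega) from the accelerometer and gyroscope."""
    x, y, th, vx, vy = pose
    ax_b, ay_b, om = imu
    c, s = math.cos(th), math.sin(th)
    ax, ay = c * ax_b - s * ay_b, s * ax_b + c * ay_b  # rotate accel body -> world
    x += vx * dt + 0.5 * ax * dt * dt
    y += vy * dt + 0.5 * ay * dt * dt
    return (x, y, th + om * dt, vx + ax * dt, vy + ay * dt)

# Constant 1 m/s^2 forward acceleration from rest for 1 s (10 steps of 0.1 s):
pose = (0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(10):
    pose = dead_reckon(pose, (1.0, 0.0, 0.0), 0.1)
```

Any bias in the measured acceleration or angular velocity is integrated twice, so the position error of this scheme grows over time until a visual fix replaces the estimate.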
In one embodiment, the shape of the road sign is a diamond. The reliability of the road sign feature points is measured by the pixel difference between the detected road sign contour and the contour of an ideal road sign passing through its vertices. Specifically, the algorithm outputs a road sign contour whose four sides are not perfectly straight lines; a figure with four straight sides can be constructed from the four vertices of that contour, and empirically, the closer the two contours are, the more reliable the detected diamond is. The average pixel difference between the two contours is used as the measure.
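This contour-versus-ideal-quadrilateral measure can be sketched as the mean distance from the detected contour points to the quadrilateral through the four vertices. The sample points below are synthetic and the function names are illustrative, not the patent's.

```python
import numpy as np

def point_seg_dist(p, a, b):
    """Distance from 2D point p to segment ab."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def contour_error_px(contour, vertices):
    """Mean pixel distance from detected contour points to the ideal
    quadrilateral through the four vertices; smaller means more reliable."""
    edges = [(vertices[i], vertices[(i + 1) % 4]) for i in range(4)]
    return float(np.mean([min(point_seg_dist(p, a, b) for a, b in edges)
                          for p in contour]))

verts = [np.array(v, float) for v in [(10, 0), (20, 10), (10, 20), (0, 10)]]
on_edges = [np.array(p, float) for p in [(15, 5), (15, 15), (5, 15), (5, 5)]]
perfect = contour_error_px(on_edges, verts)                     # contour on the diamond
noisy = contour_error_px(on_edges + [np.array([15.0, 7.0])], verts)
```

A detection is then accepted when this average falls below the chosen threshold, such as the 5-pixel value used in the embodiment that follows.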
S520, judging whether the reliability of the road sign feature points is larger than a preset threshold value.
In one embodiment, the threshold value is chosen to be 5 pixels, i.e. if the average of the pixel differences of the two contours is less than 5 pixels, then the diamond detected at this time is considered reliable and can be used to calculate the position of the moving object.
If the reliability is greater than the preset threshold, the road sign is considered highly reliable and S522 is executed: the estimated position is updated with the second position coordinates. If the reliability is smaller than the preset threshold, the road sign is considered unreliable and the moving object is positioned by the estimated position.
When the reliability of a road sign exceeds the threshold, the moving object calculates the second position coordinate, and when the reliability between two road signs exceeds the preset threshold, the second position coordinate of the moving object is used for the update. That is, when road sign reliability is low, the position of the moving object is estimated from the motion information collected by the sensor; but the error of this estimate grows gradually over time. Updating the estimated position after a period of time therefore reduces the accumulated error of the estimated position coordinates and improves the accuracy of positioning the moving object.
In one embodiment, as shown in fig. 6, the methods of S602 through S612 are as described above. If the reliability of the road sign is greater than the preset threshold, S616 through S620 are performed to calculate the second position coordinates of the moving object; the methods of S616 through S620 are as described above. Then S622 is performed to determine whether the moving object has passed a new road sign. Each time a road sign is passed, the positioning result with the highest reliability obtained between the previous road sign and this one is selected to update the second position coordinates of the moving object.
Updating the estimated position with the second position coordinate of highest reliability obtained between two road signs improves the positioning accuracy to the greatest extent.
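The between-landmarks selection rule can be sketched as keeping, among the fixes gathered since the last road sign, the one with the smallest contour error (i.e. highest reliability). The field names here are illustrative assumptions.

```python
def best_fix(fixes):
    """Pick the visual fix with the highest reliability (smallest contour
    error in pixels) among those gathered between two road-sign passages."""
    if not fixes:
        return None  # no reliable detection: keep dead reckoning
    return min(fixes, key=lambda f: f["contour_error_px"])

fixes = [
    {"contour_error_px": 4.2, "position": (12.0, 3.1)},
    {"contour_error_px": 1.1, "position": (12.4, 3.0)},  # most reliable fix
    {"contour_error_px": 3.7, "position": (12.9, 3.2)},
]
chosen = best_fix(fixes)
```

On passing a new road sign, `chosen["position"]` would overwrite the estimated position and the buffer of fixes would be cleared for the next segment.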
It should be understood that, although the steps in the flowcharts of figs. 2-4 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited in their order of execution and may be performed in other orders. Moreover, at least some of the steps in figs. 2-4 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments; nor must these sub-steps or stages be performed in sequence, as they may be performed in turn or alternately with at least a portion of other steps or of the sub-steps or stages of other steps.
In one embodiment, a visual positioning device is provided, the visual positioning device specifically comprising: an acquisition module 702, a determination module 704, and a calculation module 706; wherein:
the obtaining module 702 is configured to obtain a first position coordinate of a landmark in a view area of the moving object, where the first position coordinate is a coordinate of a world coordinate system;
the acquisition module 702 is further configured to acquire an image of a landmark in the view area acquired by the image capturing device;
A calculating module 706, configured to calculate pixel coordinates of landmark feature points in the image;
a determining module 704, configured to determine a pose of the image capturing device according to the pixel coordinates and the first position coordinates;
the calculating module 706 is further configured to calculate a second position coordinate of the moving object in the world coordinate system according to the pose and the pose transformation relationship between the image capturing device and the moving object.
In one embodiment, as shown in fig. 7, the apparatus further comprises:
a detection module 708 for detecting an angular velocity and an acceleration of the moving object by the sensor;
the calculating module 706 is further configured to calculate an estimated position of the moving object according to the angular velocity, the acceleration, and the initial position of the moving object;
the obtaining module 702 is further configured to determine a landmark number in the visual area range of the estimated position of the moving object; and acquiring a first position coordinate of the numbered road sign.
In one embodiment, as shown in fig. 7, the apparatus further comprises:
and an updating module 710, configured to update the estimated position with the second position coordinate.
In one embodiment, the computing module 706 is further to:
determining a region of interest in the image according to the estimated position;
extracting an image of the region of interest from the image according to the region of interest;
judging whether the image of the region of interest contains a complete road sign or not;
If not, the region of interest is adjusted, and the region of interest image containing the complete road sign is re-extracted;
and calculating pixel coordinates of the landmark feature points according to the re-extracted region-of-interest image.
In one embodiment, the computing module 706 is further to:
performing binarization processing on the region-of-interest image to obtain a binarized image;
extracting road sign image features from the binarized image;
comparing the extracted road sign image characteristics with preset road sign characteristics;
and determining whether the image of the region of interest contains a complete road sign according to the comparison result.
In one embodiment, the computing module 706 is further to:
determining the gravity center of a road sign graph in the binarized image;
and translating the region of interest in the binarized image, and extracting the region of interest image containing the complete road sign from the binarized image according to the translated region of interest.
In one embodiment, the image capture device is a binocular camera; the computing module is also for:
calculating left and right eye re-projection errors of the binocular camera;
and calculating the pose of the image capturing device according to the reprojection error.
For specific limitations of the visual positioning device, reference may be made to the above limitations of the visual positioning method, which are not repeated here. The various modules in the visual positioning device described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
As shown in fig. 8, in one embodiment, a computer device is provided that includes a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the visual positioning method described above.
In one embodiment, a computer device is provided, which may be a moving object, and the internal structure of which may be as shown in fig. 8. The computer device comprises a processor, a memory, a display screen, an input device and a communication interface which are connected through a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a visual positioning method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of: acquiring a first position coordinate of a road sign in a visual area range of a moving object, wherein the first position coordinate is a coordinate of a world coordinate system; acquiring an image of a road sign in a visual area range acquired by an image acquisition device; calculating pixel coordinates of landmark feature points in the image; determining the pose of the image capturing device according to the pixel coordinates and the first position coordinates; and calculating a second position coordinate of the moving object under the world coordinate system according to the pose and the pose transformation relation between the image capturing device and the moving object.
In one embodiment, the processor when executing the computer program further performs the steps of: detecting the angular velocity and the acceleration of the moving object by a sensor; calculating the estimated position of the moving object according to the angular speed, the acceleration and the initial position of the moving object; determining a road sign number in a visual area range of the estimated position of the moving object; and acquiring a first position coordinate of the numbered road sign.
In one embodiment, the processor when executing the computer program further performs the steps of: and updating the estimated position by using the second position coordinates.
In one embodiment, the processor when executing the computer program further performs the steps of: determining a region of interest in the image according to the estimated position; extracting an image of the region of interest from the image according to the region of interest; judging whether the image of the region of interest contains a complete road sign or not; if not, the region of interest is adjusted, and the region of interest image containing the complete road sign is re-extracted; and calculating pixel coordinates of the landmark feature points according to the re-extracted region-of-interest image.
In one embodiment, the processor when executing the computer program further performs the steps of: performing binarization processing on the region-of-interest image to obtain a binarized image; extracting road sign image features from the binarized image; comparing the extracted road sign image characteristics with preset road sign characteristics; and determining whether the image of the region of interest contains a complete road sign according to the comparison result.
In one embodiment, the processor when executing the computer program further performs the steps of: determining the gravity center of a road sign graph in the binarized image; and translating the region of interest in the binarized image, and extracting the region of interest image containing the complete road sign from the binarized image according to the translated region of interest.
In one embodiment, the processor when executing the computer program further performs the steps of: calculating left and right eye re-projection errors of the binocular camera; and calculating the pose of the image capturing device according to the reprojection error.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a first position coordinate of a road sign in a visual area range of a moving object, wherein the first position coordinate is a coordinate of a world coordinate system; acquiring an image of a road sign in a visual area range acquired by an image acquisition device; calculating pixel coordinates of landmark feature points in the image; determining the pose of the image capturing device according to the pixel coordinates and the first position coordinates; and calculating a second position coordinate of the moving object under the world coordinate system according to the pose and the pose transformation relation between the image capturing device and the moving object.
In one embodiment, the computer program when executed by the processor further performs the steps of: detecting the angular velocity and the acceleration of the moving object by a sensor; calculating the estimated position of the moving object according to the angular speed, the acceleration and the initial position of the moving object; the method for acquiring the first position coordinates of the road sign in the visual area of the moving object comprises the following steps: determining a road sign number in a visual area range of the estimated position of the moving object; and acquiring a first position coordinate of the numbered road sign.
In one embodiment, the computer program when executed by the processor further performs the steps of: and updating the estimated position by using the second position coordinates.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining a region of interest in the image according to the estimated position; extracting an image of the region of interest from the image according to the region of interest; judging whether the image of the region of interest contains a complete road sign or not; if not, the region of interest is adjusted, and the region of interest image containing the complete road sign is re-extracted; and calculating pixel coordinates of the landmark feature points according to the re-extracted region-of-interest image.
In one embodiment, the computer program when executed by the processor further performs the steps of: performing binarization processing on the region-of-interest image to obtain a binarized image; extracting road sign image features from the binarized image; comparing the extracted road sign image characteristics with preset road sign characteristics; and determining whether the image of the region of interest contains a complete road sign according to the comparison result.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining the gravity center of a road sign graph in the binarized image; and translating the region of interest in the binarized image, and extracting the region of interest image containing the complete road sign from the binarized image according to the translated region of interest.
In one embodiment, the computer program when executed by the processor further performs the steps of: calculating left and right eye re-projection errors of the binocular camera; and calculating the pose of the image capturing device according to the reprojection error.
Those skilled in the art will appreciate that implementing all or part of the methods of the above embodiments may be accomplished by a computer program stored on a non-volatile computer-readable storage medium, which, when executed, may include the steps of the method embodiments described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. The volatile memory may include random access memory (Random Access Memory, RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered to be within the scope of this specification.
The foregoing examples represent only a few embodiments of the present application, which are described in relative detail but are not therefore to be construed as limiting the scope of the invention. It should be noted that those skilled in the art could make various modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (10)

1. A method of visual localization, the method comprising:
acquiring a first position coordinate of a road sign in a visual area range of a moving object, wherein the first position coordinate is a coordinate of a world coordinate system; the first position coordinates are determined according to road sign numbers in the visual area range of the estimated position of the moving object; the estimated position is determined according to the angular velocity and the acceleration of the moving object; the road marks in the visual area range are graphical marks with indication function;
Acquiring an image of a road sign in the visual area range acquired by an image capturing device, wherein the image capturing device is a binocular camera and the installation angle of the image capturing device is inclined downwards;
performing edge detection on an image to extract the outline of a road sign, extracting road sign feature points from the outline, and calculating pixel coordinates of the road sign feature points in the image;
determining the pose of the image capturing device according to the pixel coordinates and the first position coordinates;
calculating a second position coordinate of the moving object under the world coordinate system according to the pose and the pose transformation relation between the image capturing device and the moving object;
calculating the reliability of the road sign feature points, and judging whether the reliability of the road sign feature points is larger than a preset threshold value or not;
if the reliability of the landmark feature points is greater than a preset threshold value, updating the estimated position by using the second position coordinates; and if the reliability of the road sign feature points is smaller than a preset threshold value, positioning the moving object according to the estimated position.
2. The method of claim 1, further comprising, prior to the acquiring of the first position coordinates of the road sign within the visual area range of the moving object:
detecting the angular velocity and the acceleration of the moving object by a sensor; and
calculating the estimated position of the moving object according to the angular velocity, the acceleration, and an initial position of the moving object;
wherein the acquiring of the first position coordinates of the road sign within the visual area range of the moving object comprises:
determining the numbers of road signs within the visual area range of the estimated position of the moving object; and
acquiring the first position coordinates of the road signs with the determined numbers.
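The dead-reckoning step of claim 2 admits many motion models; the sketch below uses a minimal planar model (Euler integration of a yaw rate and a forward acceleration), which is an assumption for illustration only — the patent does not prescribe a particular model:

```python
import math

def dead_reckon(x, y, heading, speed, angular_velocity, acceleration, dt):
    """One Euler-integration step of a planar dead-reckoning estimate:
    the heading is advanced by the gyroscope's angular velocity, the
    forward speed by the accelerometer reading, and the position by the
    speed along the current heading."""
    heading += angular_velocity * dt
    speed += acceleration * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading, speed
```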
3. The method of claim 2, further comprising, after the calculating of the second position coordinates of the moving object in the world coordinate system:
updating the estimated position by using the second position coordinates.
4. The method of claim 2, wherein the calculating of the pixel coordinates of the road sign feature points in the image comprises:
determining a region of interest in the image according to the estimated position;
extracting a region-of-interest image from the image according to the region of interest;
judging whether the region-of-interest image contains a complete road sign;
if not, adjusting the region of interest and re-extracting a region-of-interest image containing the complete road sign; and
calculating the pixel coordinates of the road sign feature points according to the re-extracted region-of-interest image.
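Claim 4's adjust-and-re-extract loop can be illustrated with axis-aligned rectangles; the containment test and the minimal-shift translation policy below are assumptions, since the patent only requires that the re-extracted region contain the full road sign:

```python
def marker_inside(roi, marker):
    """True if the marker bounding box (x, y, w, h) lies fully inside
    the region of interest (x, y, w, h)."""
    rx, ry, rw, rh = roi
    mx, my, mw, mh = marker
    return rx <= mx and ry <= my and mx + mw <= rx + rw and my + mh <= ry + rh

def adjust_roi(roi, marker):
    """Translate the ROI just far enough that the complete marker falls
    inside it, keeping the ROI size unchanged (assumes the marker is no
    larger than the ROI)."""
    rx, ry, rw, rh = roi
    mx, my, mw, mh = marker
    nx = min(max(rx, mx + mw - rw), mx)   # minimal horizontal shift
    ny = min(max(ry, my + mh - rh), my)   # minimal vertical shift
    return (nx, ny, rw, rh)
```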
5. The method of claim 4, wherein the judging whether the region-of-interest image contains a complete road sign comprises:
performing binarization processing on the region-of-interest image to obtain a binarized image;
extracting road sign image features from the binarized image;
comparing the extracted road sign image features with preset road sign features; and
determining, according to the comparison result, whether the region-of-interest image contains a complete road sign.
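A toy version of the binarization and feature-comparison steps of claim 5, operating on a grayscale image stored as nested lists; comparing only the foreground pixel count is a deliberate simplification for illustration (a real system would compare richer shape features):

```python
def binarize(gray, threshold=128):
    """Threshold a grayscale image (rows of 0-255 intensities) into 0/1."""
    return [[1 if p >= threshold else 0 for p in row] for row in gray]

def contains_complete_marker(binary, expected_pixels, tolerance=0.1):
    """Stand-in for the comparison with preset road sign features: a
    complete marker is assumed to occupy a known foreground area, so a
    large deviation suggests the marker is clipped by the ROI."""
    count = sum(sum(row) for row in binary)
    return abs(count - expected_pixels) <= tolerance * expected_pixels
```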
6. The method of claim 5, wherein the adjusting of the region of interest and the re-extracting of the region-of-interest image containing the complete road sign comprise:
determining a center of gravity of the road sign graphic in the binarized image; and
translating the region of interest in the binarized image, and extracting a region-of-interest image containing the complete road sign from the binarized image according to the translated region of interest.
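The center-of-gravity computation of claim 6 reduces to averaging the coordinates of the foreground pixels; a minimal sketch on a 0/1 image (names are illustrative):

```python
def centroid(binary):
    """Center of gravity (x, y) of the foreground pixels of a binarized
    image, used to decide how far to translate the region of interest.
    Returns None if the image has no foreground pixels."""
    sx = sy = n = 0
    for y, row in enumerate(binary):
        for x, v in enumerate(row):
            if v:
                sx += x
                sy += y
                n += 1
    return (sx / n, sy / n) if n else None
```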
7. The method of any one of claims 1 to 6, wherein the determining of the pose of the image capturing device according to the pixel coordinates and the first position coordinates comprises:
calculating left-eye and right-eye reprojection errors of the binocular camera; and
calculating the pose of the image capturing device according to the reprojection errors.
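For claim 7, the left- and right-eye reprojection error of a single road sign point can be written down directly for a rectified stereo pair (pinhole model, right camera offset from the left by the baseline along x); the pose would then be found by minimizing this error over all feature points. The rectified-pair assumption and all names here are illustrative, not taken from the patent:

```python
def project(point, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame 3-D point to pixel coordinates."""
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

def stereo_reprojection_error(point_left, baseline, obs_left, obs_right,
                              fx, fy, cx, cy):
    """Sum of squared left/right reprojection errors for one road sign
    feature point, given its 3-D coordinates in the left-camera frame."""
    X, Y, Z = point_left
    ul, vl = project((X, Y, Z), fx, fy, cx, cy)
    ur, vr = project((X - baseline, Y, Z), fx, fy, cx, cy)  # rectified pair
    return ((ul - obs_left[0]) ** 2 + (vl - obs_left[1]) ** 2 +
            (ur - obs_right[0]) ** 2 + (vr - obs_right[1]) ** 2)
```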
8. A visual positioning device, the device comprising:
an acquisition module, used for acquiring first position coordinates of a road sign within a visual area range of a moving object, wherein the first position coordinates are coordinates in a world coordinate system; the first position coordinates are determined according to the numbers of road signs within the visual area range of an estimated position of the moving object; the estimated position is determined according to an angular velocity and an acceleration of the moving object; and the road signs within the visual area range are graphical markers having an indicating function;
the acquisition module being further used for acquiring an image of the road sign within the visual area range captured by an image capturing device, wherein the image capturing device is a binocular camera mounted at a downward-inclined angle;
a calculation module, used for performing edge detection on the image to extract an outline of the road sign, extracting road sign feature points from the outline, and calculating pixel coordinates of the road sign feature points in the image; and
a determining module, used for determining the pose of the image capturing device according to the pixel coordinates and the first position coordinates;
wherein the calculation module is further used for calculating second position coordinates of the moving object in the world coordinate system according to the pose and a pose transformation relation between the image capturing device and the moving object;
calculating a reliability of the road sign feature points, and judging whether the reliability of the road sign feature points is greater than a preset threshold; and
if the reliability of the road sign feature points is greater than the preset threshold, updating the estimated position by using the second position coordinates; and if the reliability of the road sign feature points is less than the preset threshold, positioning the moving object according to the estimated position.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
CN202010249184.0A 2020-04-01 2020-04-01 Visual positioning method, visual positioning device, computer equipment and storage medium Active CN111553342B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010249184.0A CN111553342B (en) 2020-04-01 2020-04-01 Visual positioning method, visual positioning device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111553342A CN111553342A (en) 2020-08-18
CN111553342B true CN111553342B (en) 2023-08-08

Family

ID=72007321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010249184.0A Active CN111553342B (en) 2020-04-01 2020-04-01 Visual positioning method, visual positioning device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111553342B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112785648B (en) * 2021-04-12 2021-07-06 成都新西旺自动化科技有限公司 Visual alignment method, device and equipment based on to-be-imaged area and storage medium
CN113870296B (en) * 2021-12-02 2022-02-22 暨南大学 Image edge detection method, device and medium based on rigid body collision optimization algorithm

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0749950A (en) * 1993-08-06 1995-02-21 Sharp Corp Method and device for extracting target object in image
JP2012007922A (en) * 2010-06-23 2012-01-12 Nec Corp Road structure measuring method and road surface measuring device
CN104008377A (en) * 2014-06-07 2014-08-27 北京联合大学 Ground traffic sign real-time detection and recognition method based on space-time correlation
WO2014160027A1 (en) * 2013-03-13 2014-10-02 Image Sensing Systems, Inc. Roadway sensing systems
CN107992793A (en) * 2017-10-20 2018-05-04 深圳华侨城卡乐技术有限公司 A kind of indoor orientation method, device and storage medium
CN108180917A (en) * 2017-12-31 2018-06-19 芜湖哈特机器人产业技术研究院有限公司 A kind of top mark map constructing method based on the optimization of pose figure
CN108198216A (en) * 2017-12-12 2018-06-22 深圳市神州云海智能科技有限公司 A kind of robot and its position and orientation estimation method and device based on marker
WO2018124337A1 (en) * 2016-12-28 2018-07-05 주식회사 에이다스원 Object detection method and apparatus utilizing adaptive area of interest and discovery window

Also Published As

Publication number Publication date
CN111553342A (en) 2020-08-18

Similar Documents

Publication Publication Date Title
CN110322500B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
US20200334843A1 (en) Information processing apparatus, control method for same, non-transitory computer-readable storage medium, and vehicle driving support system
CN108419446B (en) System and method for laser depth map sampling
US9157757B1 (en) Methods and systems for mobile-agent navigation
US9625912B2 (en) Methods and systems for mobile-agent navigation
EP2614487B1 (en) Online reference generation and tracking for multi-user augmented reality
CN111210477B (en) Method and system for positioning moving object
KR102052114B1 (en) Object change detection system for high definition electronic map upgrade and method thereof
CN110472553B (en) Target tracking method, computing device and medium for fusion of image and laser point cloud
US20140253679A1 (en) Depth measurement quality enhancement
US20150138310A1 (en) Automatic scene parsing
US8503730B2 (en) System and method of extracting plane features
EP3070430B1 (en) Moving body position estimation device and moving body position estimation method
US20140254874A1 (en) Method of detecting and describing features from an intensity image
CN110349212B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
EP3716210B1 (en) Three-dimensional point group data generation method, position estimation method, three-dimensional point group data generation device, and position estimation device
US10607350B2 (en) Method of detecting and describing features from an intensity image
CN112106111A (en) Calibration method, calibration equipment, movable platform and storage medium
JPWO2019021569A1 (en) Information processing apparatus, information processing method, and program
CN111553342B (en) Visual positioning method, visual positioning device, computer equipment and storage medium
KR101280392B1 (en) Apparatus for managing map of mobile robot based on slam and method thereof
CN116468786A (en) Semantic SLAM method based on point-line combination and oriented to dynamic environment
CN113344906A (en) Vehicle-road cooperative camera evaluation method and device, road side equipment and cloud control platform
KR20160125803A (en) Apparatus for defining an area in interest, apparatus for detecting object in an area in interest and method for defining an area in interest
KR20110038971A (en) Apparatus and method for tracking image patch in consideration of scale

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant