CN113096182A - Method and device for positioning mobile object, electronic equipment and storage medium

Method and device for positioning mobile object, electronic equipment and storage medium

Info

Publication number: CN113096182A
Application number: CN202110236385.1A
Authority: CN (China)
Legal status: Pending
Prior art keywords: target, feature point, coordinate, pixel
Other languages: Chinese (zh)
Inventors: 焦继超, 王晨旭
Current Assignee: Beijing University of Posts and Telecommunications
Original Assignee: Beijing University of Posts and Telecommunications
Application filed by Beijing University of Posts and Telecommunications
Priority to CN202110236385.1A
Publication of CN113096182A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G06T 2207/20084 - Artificial neural networks [ANN]


Abstract

The embodiment of the present invention provides a method and an apparatus for positioning a moving object, an electronic device, and a storage medium, applied to the field of computer technology. The method includes: acquiring a target image and a reference image when the moving object needs to be positioned; determining a static object region and a dynamic object region in the reference image; determining a plurality of first feature points from the static object region; determining a first spatial coordinate of the position of the moving object at the current moment by using the pixel coordinate of each first feature point and the pixel coordinate of the corresponding target feature point; determining a plurality of second feature points from the dynamic object region; calculating the predicted coordinates of the target feature points corresponding to the second feature points; screening out, as supplementary feature points, the target feature points whose difference between the predicted coordinate and the pixel coordinate is within a preset range; and determining a second spatial coordinate of the position of the moving object at the current moment as the positioning result of the moving object. The positioning accuracy of the moving object can thereby be improved.

Description

Method and device for positioning mobile object, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for positioning a moving object, an electronic device, and a storage medium.
Background
Simultaneous localization and mapping (SLAM) is a method that enables autonomous positioning and navigation of a robot, for example, autonomous positioning and navigation of an intelligent robot in dynamic scenes such as shopping malls.
In the prior art, when positioning is performed by the SLAM method, each feature point in the static object region of a reference image acquired at the previous moment is identified, for example, the pixel points in regions such as a desk or a chair; the target feature points matching these feature points are determined in a target image acquired at the current moment; and the position change between each reference feature point and its corresponding target feature point is determined, from which the position of the mobile object to be positioned at the current moment is derived.
However, in the prior art, only the position change of the feature points in the static object region is considered when determining the position of the moving object to be positioned at the current moment; that is, the reference information used in positioning is not comprehensive enough, so the positioning accuracy of the moving object is not high.
Disclosure of Invention
An embodiment of the present invention provides a method and an apparatus for positioning a moving object, an electronic device, and a storage medium, so as to solve the problem of low positioning accuracy of the moving object. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for positioning a moving object, including:
when a moving object needs to be positioned, a target image and a reference image are obtained; the target image and the reference image are respectively images acquired at the current moment and the last moment at the position of the moving object;
determining a static object region and a dynamic object region in the reference image;
determining a plurality of first feature points from the static object region, and determining pixel points matched with the first feature points in the target image as target feature points corresponding to the first feature points aiming at each first feature point;
determining a first space coordinate of the current position of the moving object by using the pixel coordinate of each first characteristic point and the pixel coordinate of the corresponding target characteristic point;
determining a plurality of second feature points from the dynamic object region, and determining pixel points matched with the second feature points in the target image as target feature points corresponding to the second feature points aiming at each second feature point;
calculating the predicted coordinates of the target feature points corresponding to the second feature points according to the first space coordinates and the pixel coordinates of the second feature points in the dynamic object region;
screening target feature points with the difference value between the predicted coordinate and the pixel coordinate within a preset range from the target feature points corresponding to the second feature points to serve as supplementary feature points;
and determining a second space coordinate of the current position of the moving object by using the pixel coordinate of each first characteristic point, the pixel coordinate of the target characteristic point corresponding to each first characteristic point, the pixel coordinate of the supplementary characteristic point and the pixel coordinate of the second characteristic point corresponding to the supplementary characteristic point, and taking the second space coordinate as the positioning result of the moving object.
Optionally, the determining, by using the pixel coordinate of each first feature point and the pixel coordinate of the corresponding target feature point, a first spatial coordinate of the position of the moving object at the current moment includes:
determining a first space coordinate of the position of the moving object at the current moment by using the pixel coordinate of each first characteristic point and the pixel coordinate of the corresponding target characteristic point and adopting a preset position calculation formula;
the preset position calculation formula includes:
$$ z \begin{bmatrix} P_x \\ P_y \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \left( \begin{bmatrix} K_x \\ K_y \\ K_z \end{bmatrix} + T \right) $$
wherein P_x characterizes the set of x-axis coordinates of each feature point to be utilized in the target image, and P_y characterizes the set of y-axis coordinates of each feature point to be utilized in the target image; K_x, K_y and K_z characterize the sets of x-axis, y-axis and z-axis coordinates, respectively, of each feature point to be utilized in the reference image after mapping to a preset three-dimensional coordinate system; T characterizes the first spatial coordinate of the position of the moving object at the current moment; f_x, f_y, c_x, c_y and z are built-in parameters of the target camera used to capture the images; and the preset three-dimensional coordinate system is a three-dimensional coordinate system constructed in advance for the scene where the moving object is located.
Optionally, the screening, from the target feature points corresponding to each second feature point, a target feature point of which a difference between the predicted coordinate and the pixel coordinate is within a preset range, as a supplementary feature point, includes:
for the target feature point corresponding to each second feature point, calculating the difference between the pixel coordinate and the predicted coordinate of that target feature point according to a preset difference calculation formula, to obtain the target difference of the target feature point corresponding to that second feature point;
screening target feature points with the difference value between the predicted coordinate and the pixel coordinate within a preset range according to the target difference value of the target feature point corresponding to each second feature point to serve as supplementary feature points;
wherein the preset difference calculation formula comprises:
$$ W_i(t) = w \cdot W_i(t-1) + (1 - w) \cdot \frac{\operatorname{dis}(x_{i,\mathrm{last}},\, x_{i,\mathrm{cur}})}{\max \operatorname{dis}(x_{i,\mathrm{last}},\, x_{i,\mathrm{cur}})} $$
wherein W_i(t) characterizes the target difference of the target feature point i corresponding to a second feature point in the target image acquired at the current moment t; W_i(t-1) characterizes the difference between the predicted coordinate and the pixel coordinate of the target feature point i corresponding to the second feature point in the reference image acquired at the previous moment t-1; dis(x_ilast, x_icur) characterizes the Euclidean distance between the predicted coordinate x_ilast and the pixel coordinate x_icur of the target feature point i corresponding to the second feature point; max dis(x_ilast, x_icur) characterizes the maximum Euclidean distance between predicted coordinates and pixel coordinates among the target feature points corresponding to the second feature points in the target image; and w is a weight coefficient.
Optionally, the calculating, according to the first spatial coordinate and the pixel coordinate of each second feature point in the dynamic object region, a predicted coordinate of a target feature point corresponding to each second feature point includes:
and calculating the predicted coordinates of the target characteristic points corresponding to the second characteristic points according to the first space coordinates and the pixel coordinates of the second characteristic points in the dynamic object region by using the preset position calculation formula.
In a second aspect, an embodiment of the present invention provides a positioning apparatus for a moving object, including:
the image acquisition module is used for acquiring a target image and a reference image when a moving object needs to be positioned; the target image and the reference image are respectively images acquired at the current moment and the last moment at the position of the moving object;
the region confirmation module is used for determining a static object region and a dynamic object region in the reference image;
the first confirming module is used for determining a plurality of first characteristic points from the static object area, and determining pixel points matched with the first characteristic points in the target image as target characteristic points corresponding to the first characteristic points aiming at each first characteristic point;
the first calculation module is used for determining a first space coordinate of the position of the moving object at the current moment by using the pixel coordinate of each first characteristic point and the pixel coordinate of the corresponding target characteristic point;
the second confirming module is used for determining a plurality of second characteristic points from the dynamic object region, and determining pixel points matched with the second characteristic points in the target image as target characteristic points corresponding to the second characteristic points aiming at each second characteristic point;
the second calculation module is used for calculating the prediction coordinates of the target characteristic points corresponding to the second characteristic points according to the first space coordinates and the pixel coordinates of the second characteristic points in the dynamic object area;
the difference value screening module is used for screening target feature points with the difference value between the predicted coordinate and the pixel coordinate within a preset range from the target feature points corresponding to the second feature points as supplementary feature points;
and the result confirming module is used for determining a second space coordinate of the current position of the moving object as the positioning result of the moving object by utilizing the pixel coordinate of each first characteristic point, the pixel coordinate of the target characteristic point corresponding to each first characteristic point, the pixel coordinate of the supplementary characteristic point and the pixel coordinate of the second characteristic point corresponding to the supplementary characteristic point.
Optionally, the first calculating module is specifically configured to determine, by using the pixel coordinate of each first feature point and the pixel coordinate of the corresponding target feature point, a first spatial coordinate of a position where the moving object is located at the current time by using a preset position calculation formula;
the preset position calculation formula includes:
$$ z \begin{bmatrix} P_x \\ P_y \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \left( \begin{bmatrix} K_x \\ K_y \\ K_z \end{bmatrix} + T \right) $$
wherein P_x characterizes the set of x-axis coordinates of each feature point to be utilized in the target image, and P_y characterizes the set of y-axis coordinates of each feature point to be utilized in the target image; K_x, K_y and K_z characterize the sets of x-axis, y-axis and z-axis coordinates, respectively, of each feature point to be utilized in the reference image after mapping to a preset three-dimensional coordinate system; T characterizes the first spatial coordinate of the position of the moving object at the current moment; f_x, f_y, c_x, c_y and z are built-in parameters of the target camera used to capture the images; and the preset three-dimensional coordinate system is a three-dimensional coordinate system constructed in advance for the scene where the moving object is located.
Optionally, the difference screening module is specifically configured to, for a target feature point corresponding to each second feature point, calculate a difference between a pixel coordinate and a predicted coordinate of the target feature point corresponding to the second feature point according to a preset difference calculation formula, so as to obtain a target difference of the target feature point corresponding to the second feature point;
screening target feature points with the difference value between the predicted coordinate and the pixel coordinate within a preset range according to the target difference value of the target feature point corresponding to each second feature point to serve as supplementary feature points;
wherein the preset difference calculation formula comprises:
$$ W_i(t) = w \cdot W_i(t-1) + (1 - w) \cdot \frac{\operatorname{dis}(x_{i,\mathrm{last}},\, x_{i,\mathrm{cur}})}{\max \operatorname{dis}(x_{i,\mathrm{last}},\, x_{i,\mathrm{cur}})} $$
wherein W_i(t) characterizes the target difference of the target feature point i corresponding to a second feature point in the target image acquired at the current moment t; W_i(t-1) characterizes the difference between the predicted coordinate and the pixel coordinate of the target feature point i corresponding to the second feature point in the reference image acquired at the previous moment t-1; dis(x_ilast, x_icur) characterizes the Euclidean distance between the predicted coordinate x_ilast and the pixel coordinate x_icur of the target feature point i corresponding to the second feature point; max dis(x_ilast, x_icur) characterizes the maximum Euclidean distance between predicted coordinates and pixel coordinates among the target feature points corresponding to the second feature points in the target image; and w is a weight coefficient.
Optionally, the second calculating module is specifically configured to calculate, by using the preset position calculation formula, the predicted coordinates of the target feature point corresponding to each second feature point according to the first spatial coordinate and the pixel coordinate of each second feature point in the dynamic object region.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
and the processor is used for implementing the steps of any one of the above methods for positioning a moving object when executing the program stored in the memory.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the positioning method for a moving object.
The embodiment of the invention has the following beneficial effects:
according to the positioning method provided by the embodiment of the invention, when a mobile object needs to be positioned, a target image and a reference image are obtained; determining a static object region and a dynamic object region in the reference image, identifying each pixel point in the static object region as a reference feature point, and determining a feature pixel point matched with the reference feature point in the target image as a target feature point corresponding to the reference feature point aiming at each reference feature point; determining a first space coordinate of the position of the moving object at the current moment by using the pixel coordinate of each reference characteristic point and the pixel coordinate of the corresponding target characteristic point; and calculating the coordinates of all pixel points in the dynamic object region in the reference image according to the reference space coordinates to obtain the predicted coordinates of all the pixel points, and determining the pixel points with the difference value between the predicted coordinates and the pixel coordinates within a preset range from all the pixel points in the dynamic object region to serve as supplementary feature points. Therefore, when the space coordinate of the moving object is determined, the characteristic points in the static object area are considered, meanwhile, the pixel points of which the difference value between the prediction coordinate and the pixel coordinate is within the preset range are screened from the dynamic object area to serve as supplementary characteristic points, the moving object is positioned through the characteristic points of the static extraction area and the supplementary characteristic points of the dynamic object area, and the positioning accuracy of the moving object can be improved.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
Fig. 1 is a flowchart of a method for positioning a moving object according to an embodiment of the present invention;
fig. 2 is another flowchart of a method for positioning a moving object according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a positioning apparatus for a moving object according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived from the embodiments given herein by one of ordinary skill in the art, are within the scope of the invention.
In order to solve the problem that the positioning accuracy of a moving object is not high, embodiments of the present invention provide a method and an apparatus for positioning a moving object, an electronic device, and a storage medium.
It should be noted that, in the positioning method provided by the embodiment of the present invention, the mobile object may be an intelligent robot, an unmanned aerial vehicle, or the like; the positioning method provided by the embodiment of the present invention can be used whenever such a mobile object needs to perform autonomous positioning and navigation.
First, a method for positioning a moving object according to an embodiment of the present invention is described below.
As shown in fig. 1, a method for positioning a moving object according to an embodiment of the present invention may include the following steps:
s101, when a moving object needs to be positioned, a target image and a reference image are obtained; the target image and the reference image are respectively images acquired at the current moment and the last moment at the position of the moving object. Namely, the target image is an image acquired at the position of the moving object at the current moment; the reference image is an image acquired at the position where the moving object is located at the previous moment.
It can be understood that the moving object may use its own image capturing module, such as a camera, to capture an image along any specified direction, and when positioning is required, the process of positioning the moving object is performed through the target image captured at the current time and the reference image captured at the previous time. Of course, the mobile object may also perform the process of locating the mobile object by receiving the image acquired by the target acquisition device. The target acquisition device may be any image acquisition device that is connected to a moving object and can communicate with the moving object. In addition, when the image is acquired at the position of the moving object, the acquisition interval of the image may be any specified interval, for example: 5 seconds, 10 seconds, etc.
S102, determining a static object region and a dynamic object region in the reference image;
as in the acquired image, static objects may be present, for example: tables, stools, etc., dynamic objects may also be present, such as: a car, a person, etc., in order to facilitate the localization of the moving object, the static object region and the dynamic object region in the reference image may be distinguished.
There are multiple ways of determining the static object region and the dynamic object region in the reference image. For example, in one implementation, determining the static object region and the dynamic object region in the reference image may include: performing target detection and classification on the reference image using a preset target detection algorithm to obtain the static object region and the dynamic object region in the reference image. The preset target detection algorithm can be implemented in various ways, for example: performing object detection and classification on the reference image using a YOLOv3 model. The embodiment of the present invention does not specifically limit the implementation of the preset YOLOv3 model. In addition, the YOLOv3 model can be trained on the COCO dataset and can classify objects into 80 classes.
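By way of an illustrative sketch (not part of the patent itself), the region split can be performed on the output of an off-the-shelf detector; the detection format and the choice of which COCO classes count as dynamic are assumptions made here for illustration:

```python
import numpy as np

# COCO classes assumed to be dynamic; this split is an illustrative
# assumption, not prescribed by the patent.
DYNAMIC_CLASSES = {"person", "car", "bus", "bicycle", "motorbike", "truck", "cat", "dog"}

def split_regions(detections, image_shape):
    """detections: list of (class_name, (x1, y1, x2, y2)) from a YOLOv3-style
    detector (the detector call itself is assumed to exist elsewhere).
    Returns boolean masks for the static and dynamic object regions."""
    dynamic_mask = np.zeros(image_shape[:2], dtype=bool)
    for class_name, (x1, y1, x2, y2) in detections:
        if class_name in DYNAMIC_CLASSES:
            dynamic_mask[y1:y2, x1:x2] = True
    # Static object region: everything not covered by a dynamic detection.
    static_mask = ~dynamic_mask
    return static_mask, dynamic_mask
```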
For another example, in another implementation, determining the static object region and the dynamic object region in the reference image may include: performing semantic segmentation on the reference image using a preset UNet model to obtain the static object region and the dynamic object region in the reference image.
S103, determining a plurality of first feature points from the static object area, and determining pixel points matched with the first feature points in the target image as target feature points corresponding to the first feature points aiming at each first feature point;
for example, in one implementation, determining a plurality of first feature points from the static object region may include: and identifying a plurality of pixel points of which the pixel coordinates are located in the static area as a plurality of first characteristic points according to the coordinates of the static object area and the pixel coordinates of each pixel point in the reference image.
In one implementation, determining the pixel points matched with the first feature points from the target image may include: determining the pixel points matched with the first feature points from the target image using a preset feature point matching algorithm. The preset feature point matching algorithm may be of various kinds, for example: the ORB (Oriented FAST and Rotated BRIEF) algorithm, the SURF (Speeded-Up Robust Features) algorithm, and so on.
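As a minimal OpenCV sketch, assuming the static-region mask produced earlier, the ORB-based extraction and matching could look as follows (an illustration, not the patent's prescribed implementation):

```python
import cv2
import numpy as np

def match_static_features(reference_img, target_img, static_mask):
    """Detect ORB features inside the static object region of the reference
    image and match them to the target image. Returns the matched pixel
    coordinates (first feature points, target feature points)."""
    orb = cv2.ORB_create(nfeatures=1000)
    mask = static_mask.astype(np.uint8) * 255  # OpenCV expects an 8-bit mask
    kp_ref, des_ref = orb.detectAndCompute(reference_img, mask)
    kp_tgt, des_tgt = orb.detectAndCompute(target_img, None)
    # Hamming distance is the appropriate metric for ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_tgt), key=lambda m: m.distance)
    pts_ref = np.float32([kp_ref[m.queryIdx].pt for m in matches])
    pts_tgt = np.float32([kp_tgt[m.trainIdx].pt for m in matches])
    return pts_ref, pts_tgt
```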
S104, determining a first space coordinate of the current position of the moving object by using the pixel coordinate of each first characteristic point and the pixel coordinate of the corresponding target characteristic point;
it is understood that each feature point may correspond to a pixel coordinate in the image to which the feature point belongs, and then, using the pixel coordinate of each first feature point and the pixel coordinate of the corresponding target feature point, the first spatial coordinate of the position where the moving object is located at the current time may be obtained.
For example, in an implementation manner, determining the first spatial coordinates of the position of the moving object at the current time by using the pixel coordinates of each first feature point and the pixel coordinates of the corresponding target feature point may include:
determining a first space coordinate of the position of the moving object at the current moment by using the pixel coordinate of each first characteristic point and the pixel coordinate of the corresponding target characteristic point and adopting a preset position calculation formula;
the preset position calculation formula includes:
$$ z \begin{bmatrix} P_x \\ P_y \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \left( \begin{bmatrix} K_x \\ K_y \\ K_z \end{bmatrix} + T \right) $$
wherein P_x characterizes the set of x-axis coordinates of each feature point to be utilized in the target image, and P_y characterizes the set of y-axis coordinates of each feature point to be utilized in the target image; K_x, K_y and K_z characterize the sets of x-axis, y-axis and z-axis coordinates, respectively, of each feature point to be utilized in the reference image after mapping to a preset three-dimensional coordinate system; T characterizes the first spatial coordinate of the position of the moving object at the current moment; f_x, f_y, c_x, c_y and z are built-in parameters of the target camera used to capture the images; and the preset three-dimensional coordinate system is a three-dimensional coordinate system constructed in advance for the scene where the moving object is located.
Illustratively, when solving for the first spatial coordinate of the position of the moving object at the current moment, P_x can characterize the set of x-axis coordinates of the target feature points corresponding to the first feature points in the target image, and P_y the set of their y-axis coordinates; K_x, K_y and K_z can characterize the sets of x-axis, y-axis and z-axis coordinates, respectively, of each first feature point in the reference image after mapping to the preset three-dimensional coordinate system.
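The variable definitions above amount to the standard pinhole projection relation; one common way to recover the pose T from such 2D-3D correspondences is a PnP solver, sketched below. The use of cv2.solvePnP is an assumption made for illustration; the patent only specifies the projection relation itself:

```python
import cv2
import numpy as np

def solve_first_spatial_coordinate(pts_3d, pts_2d, fx, fy, cx, cy):
    """pts_3d: Nx3 array of first feature points mapped into the preset
    three-dimensional coordinate system (K_x, K_y, K_z).
    pts_2d: Nx2 array of matched target feature point pixel coordinates
    (P_x, P_y). Returns the rotation vector and the translation."""
    camera_matrix = np.array([[fx, 0, cx],
                              [0, fy, cy],
                              [0,  0,  1]], dtype=np.float64)
    dist_coeffs = np.zeros(4)  # assuming an undistorted image
    ok, rvec, tvec = cv2.solvePnP(
        pts_3d.astype(np.float64), pts_2d.astype(np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP failed: not enough valid correspondences")
    return rvec, tvec  # tvec plays the role of the first spatial coordinate T
```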
S105, determining a plurality of second feature points from the dynamic object region, and determining pixel points matched with the second feature points in the target image as target feature points corresponding to the second feature points aiming at each second feature point;
for the implementation process of determining a plurality of second feature points from the dynamic object region and the implementation process of determining the pixel points in the target image that match the second feature points, reference may be made to step S103 in the foregoing embodiment, which is not described herein again.
S106, calculating the predicted coordinates of the target feature points corresponding to the second feature points according to the first space coordinates and the pixel coordinates of the second feature points in the dynamic object region;
it is considered that there may be dynamic objects that do not move within the dynamic object region of the reference image when the reference image is acquired, for example: in order to improve the positioning accuracy of the moving object, feature points included in the dynamic object which does not move can be determined from the dynamic object region in the reference image, and the feature points participate in positioning the moving object. Therefore, after the first space coordinate is determined, the predicted coordinate of the target feature point corresponding to each second feature point can be derived according to the determined first space coordinate and the pixel coordinate of each second feature point in the dynamic object region through the determination mode of the first space coordinate.
For example, in an implementation manner, calculating the predicted coordinates of the target feature point corresponding to each second feature point according to the first spatial coordinates and the pixel coordinates of each second feature point in the dynamic object region may include: and calculating the predicted coordinates of the target characteristic points corresponding to the second characteristic points according to the first space coordinates and the pixel coordinates of the second characteristic points in the dynamic object region by using the preset position calculation formula.
Wherein the preset position calculation formula may include:
$$ z \begin{bmatrix} P_x \\ P_y \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \left( \begin{bmatrix} K_x \\ K_y \\ K_z \end{bmatrix} + T \right) $$
in the preset position calculation formula, P is the predicted coordinate of the target feature point corresponding to each second feature pointxA set of x-axis coordinates, P, which can characterize the predicted coordinates of the target feature points corresponding to each second feature point to be calculatedyA set of y-axis coordinates, K, that can characterize the predicted coordinates of the target feature points corresponding to each second feature point to be calculatedxRepresenting a set of x-axis coordinates after each second characteristic point in the reference image is mapped to a preset three-dimensional coordinate system, KyRepresenting a set of y-axis coordinates after each second characteristic point in the reference image is mapped to a preset three-dimensional coordinate system, KzRepresenting a set of z-axis coordinates of each second feature point in the reference image after mapping to a preset three-dimensional coordinate system, T representing a first space coordinate of a current time position of the moving object, fx、fy、cx、cyAnd z is a built-in parameter of a target camera used for acquiring images, and the preset three-dimensional space is a three-dimensional coordinate system which is constructed for a scene where the moving object is located in advance.
S107, screening target feature points with the difference value between the predicted coordinate and the pixel coordinate within a preset range from the target feature points corresponding to the second feature points as supplementary feature points;
it can be understood that, after the predicted coordinates of the target feature points corresponding to the second feature points are calculated, the difference between the calculated predicted coordinates and the pixel coordinates of the target feature points corresponding to the second feature points can be used as an error value between the predicted coordinates and the pixel coordinates of the target feature points corresponding to the second feature points, so that the supplementary feature points can be screened by using the error value.
For example, in an implementation manner, screening target feature points, of which the difference between the predicted coordinate and the pixel coordinate is within a preset range, from among the target feature points corresponding to the respective second feature points, as supplementary feature points, may include:
calculating, for the target feature point corresponding to each second feature point, the difference between the pixel coordinate and the predicted coordinate of that target feature point according to a preset difference calculation formula, to obtain the target difference of the target feature point corresponding to that second feature point;
screening target feature points with the difference value between the predicted coordinate and the pixel coordinate within a preset range according to the target difference value of the target feature point corresponding to each second feature point to serve as supplementary feature points;
wherein the preset difference calculation formula comprises:
$$ W_i(t) = w \cdot W_i(t-1) + (1 - w) \cdot \frac{\operatorname{dis}(x_{i,\mathrm{last}},\, x_{i,\mathrm{cur}})}{\max \operatorname{dis}(x_{i,\mathrm{last}},\, x_{i,\mathrm{cur}})} $$
wherein W_i(t) characterizes the target difference of the target feature point i corresponding to a second feature point in the target image acquired at the current moment t; W_i(t-1) characterizes the difference between the predicted coordinate and the pixel coordinate of the target feature point i corresponding to the second feature point in the reference image acquired at the previous moment t-1; dis(x_ilast, x_icur) characterizes the Euclidean distance between the predicted coordinate x_ilast and the pixel coordinate x_icur of the target feature point i corresponding to the second feature point; max dis(x_ilast, x_icur) characterizes the maximum Euclidean distance between predicted coordinates and pixel coordinates among the target feature points corresponding to the second feature points in the target image; and w is a weight coefficient.
The calculated W_i(t) values may be further normalized, the normalized results sorted in ascending order, and the supplementary feature points screened out from the sorted result. For example: the target feature points ranked within the top 10 are taken as supplementary feature points.
It will be understood that the larger the Euclidean distance between x_ilast and x_icur, the larger the deviation between the predicted position and the actual position of the target feature point i corresponding to the second feature point in the target image, and the greater the possibility that this target feature point lies in a dynamic object region that is actually moving; that is, the greater its dynamic possibility. Conversely, the smaller the Euclidean distance between x_ilast and x_icur, the greater the possibility that the target feature point i corresponding to the second feature point lies in a dynamic object region that has not actually moved; that is, the smaller its dynamic possibility.
Further, the greater the dynamic possibility of a second feature point in the reference image, the greater the dynamic possibility of its corresponding target feature point in the target image. With the positioning accuracy of the moving object in mind, the dynamic possibility of each second feature point in the reference image may therefore be used as a reference when screening the supplementary feature points, as in the sketch below.
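A sketch of the screening step under the formula as reconstructed above; the (1 - w) weighting, the normalization, and the top-k cutoff follow the description, but their exact form here is an assumption:

```python
import numpy as np

def screen_supplementary_points(predicted, observed, w_prev, w=0.5, top_k=10):
    """predicted, observed: Nx2 arrays of predicted and actual pixel
    coordinates of the target feature points corresponding to the second
    feature points; w_prev: the N difference values W_i(t-1) from the
    previous moment. Returns indices of the supplementary feature points
    and the updated difference values W_i(t)."""
    dist = np.linalg.norm(predicted - observed, axis=1)   # dis(x_ilast, x_icur)
    w_t = w * w_prev + (1 - w) * dist / dist.max()        # W_i(t)
    # Further normalization, then ascending sort: most static points first.
    normalized = (w_t - w_t.min()) / (w_t.max() - w_t.min() + 1e-12)
    order = np.argsort(normalized)
    return order[:top_k], w_t
```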
And S108, determining a second space coordinate of the current position of the moving object by using the pixel coordinate of each first characteristic point, the pixel coordinate of the target characteristic point corresponding to each first characteristic point, the pixel coordinate of the supplementary characteristic point and the pixel coordinate of the second characteristic point corresponding to the supplementary characteristic point, and taking the second space coordinate as the positioning result of the moving object.
It can be understood that, after the supplementary feature point is screened out, a feature point matching with the supplementary feature point may be determined from the reference image as a second feature point corresponding to the supplementary feature point, and a second spatial coordinate of the current position of the moving object may be determined by using the pixel coordinate of each first feature point, the pixel coordinate of the target feature point corresponding to each first feature point, the pixel coordinate of the supplementary feature point, and the pixel coordinate of the second feature point corresponding to the supplementary feature point.
For example, in an implementation manner, determining the second spatial coordinate of the position where the moving object is located at the current time by using the pixel coordinate of each first feature point, the pixel coordinate of the target feature point corresponding to each first feature point, the pixel coordinate of the supplementary feature point, and the pixel coordinate of the second feature point corresponding to the supplementary feature point may include: determining a second space coordinate of the current position of the moving object by using the preset position calculation formula and the pixel coordinates of each first characteristic point, the pixel coordinates of the target characteristic point corresponding to each first characteristic point, the pixel coordinates of the supplementary characteristic point and the pixel coordinates of the second characteristic point corresponding to the supplementary characteristic point;
wherein the preset position calculation formula may include:
$$ z \begin{bmatrix} P_x \\ P_y \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \left( \begin{bmatrix} K_x \\ K_y \\ K_z \end{bmatrix} + T \right) $$
wherein P_x characterizes the set of x-axis coordinates of the target feature points corresponding to the first feature points together with the supplementary feature points in the target image, and P_y the set of their y-axis coordinates; K_x, K_y and K_z characterize the sets of x-axis, y-axis and z-axis coordinates, respectively, of the first feature points and of the second feature points corresponding to the supplementary feature points after mapping to the preset three-dimensional coordinate system; T characterizes the second spatial coordinate to be determined; f_x, f_y, c_x, c_y and z are built-in parameters of the target camera used to capture the images; and the preset three-dimensional coordinate system is a three-dimensional coordinate system constructed in advance for the scene where the moving object is located.
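Combining both point sets, the second spatial coordinate can be solved in the same way as the first, for example by reusing the illustrative PnP helper from the earlier sketch (all variable names below are assumptions carried over from the previous sketches):

```python
import numpy as np

# pts_3d_static / pts_2d_static: first feature points (3D) and their target
# feature points (2D); pts_3d_suppl / pts_2d_suppl: second feature points
# corresponding to the supplementary feature points (3D) and the supplementary
# feature points themselves (2D). Names are illustrative.
pts_3d_all = np.vstack([pts_3d_static, pts_3d_suppl])
pts_2d_all = np.vstack([pts_2d_static, pts_2d_suppl])
rvec2, second_spatial_coordinate = solve_first_spatial_coordinate(
    pts_3d_all, pts_2d_all, fx, fy, cx, cy)  # positioning result T
```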
In the embodiment of the present invention, when determining the spatial coordinate of the moving object, not only are the feature points in the static object region considered, but pixel points whose difference between the predicted coordinate and the pixel coordinate is within the preset range are also screened from the dynamic object region as supplementary feature points; positioning of the moving object is achieved using both the feature points of the static object region and the supplementary feature points of the dynamic object region, so the positioning accuracy of the moving object can be improved.
For clarity, a method for positioning a moving object according to an embodiment of the present invention is described below with reference to fig. 2.
As shown in fig. 2, a method for positioning a moving object according to an embodiment of the present invention may include the following steps S201 to S209:
step S201, when a moving object needs to be positioned, a target image and a reference image are obtained;
the target image and the reference image are respectively images acquired at the current moment and the last moment at the position of the moving object;
step S202, a plurality of feature points in the target image are extracted by adopting a preset ORB algorithm, and feature points matched with each feature point in the target image are determined from the reference image;
step S203, recognizing the static object region and the dynamic object region in the reference image using a preset target detection algorithm, taking the feature points in the static object region as first feature points and the feature points in the dynamic object region as second feature points;
step S204, determining a first space coordinate of the current position of the moving object by using the pixel coordinate of each first characteristic point and the pixel coordinate of a target characteristic point matched with each first characteristic point in the target image and adopting a preset position calculation formula;
step S205, calculating the predicted coordinate of each second feature point according to the first space coordinate and the pixel coordinate of each second feature point by using the preset position calculation formula;
step S206, screening target feature points with the difference value between the predicted coordinate and the pixel coordinate within a preset range by adopting a preset difference value calculation formula to serve as supplementary feature points;
step S207, determining a second space coordinate of the current position of the moving object as a positioning result of the moving object by using the pixel coordinate of each first feature point, the pixel coordinate of the target feature point corresponding to each first feature point, the pixel coordinate of the supplementary feature point and the pixel coordinate of the second feature point corresponding to the supplementary feature point;
step S208, judging whether the target image meets the preset key frame condition, if so, executing step S209; wherein, the preset key frame conditions include: the difference value between the second space coordinate of the position of the moving object at the current moment and the second space coordinate of the position of the moving object at the previous moment is within a preset range;
and step S209, local mapping and loop closing are performed using the target image.
In the embodiment of the present invention, the implementation process of performing local mapping and loop closing with the target image is not specifically limited.
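A sketch of the keyframe decision in step S208; the threshold value and its units are illustrative assumptions:

```python
import numpy as np

def is_keyframe(second_coord_now, second_coord_prev, threshold=0.2):
    """Preset keyframe condition: the difference between the second spatial
    coordinates at the current and previous moments is within a preset range.
    The 0.2 threshold (in map units) is an arbitrary illustrative value."""
    return np.linalg.norm(np.asarray(second_coord_now) -
                          np.asarray(second_coord_prev)) <= threshold
```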
With respect to the above method embodiment, as shown in fig. 3, an embodiment of the present invention further provides a positioning apparatus for a moving object, including:
an image obtaining module 310, configured to obtain a target image and a reference image when a moving object needs to be located; the target image and the reference image are respectively images acquired at the current moment and the last moment at the position of the moving object;
a region confirmation module 320 for determining a static object region and a dynamic object region in the reference image;
a first confirming module 330, configured to determine a plurality of first feature points from the static object region, and determine, for each first feature point, a pixel point in the target image that matches the first feature point, as a target feature point corresponding to the first feature point;
the first calculating module 340 is configured to determine a first spatial coordinate of a current position of the moving object by using the pixel coordinate of each first feature point and the pixel coordinate of the corresponding target feature point;
a second determining module 350, configured to determine a plurality of second feature points from the dynamic object region, and determine, for each second feature point, a pixel point in the target image that matches the second feature point, as a target feature point corresponding to the second feature point;
a second calculating module 360, configured to calculate, according to the first spatial coordinate and the pixel coordinate of each second feature point in the dynamic object region, a predicted coordinate of a target feature point corresponding to each second feature point;
a difference value screening module 370, configured to screen, from the target feature points corresponding to the second feature points, target feature points whose difference between the predicted coordinate and the pixel coordinate is within a preset range, as supplementary feature points;
and a result confirming module 380, configured to determine, by using the pixel coordinate of each first feature point, the pixel coordinate of the target feature point corresponding to each first feature point, the pixel coordinates of the supplementary feature points and the pixel coordinates of the second feature points corresponding to the supplementary feature points, a second spatial coordinate of the position of the moving object at the current moment as the positioning result of the moving object.
In the embodiment of the present invention, when determining the spatial coordinate of the moving object, not only are the feature points in the static object region considered, but pixel points whose difference between the predicted coordinate and the pixel coordinate is within the preset range are also screened from the dynamic object region as supplementary feature points; positioning of the moving object is achieved using both the feature points of the static object region and the supplementary feature points of the dynamic object region, so the positioning accuracy of the moving object can be improved.
Optionally, the first calculating module is specifically configured to determine, by using the pixel coordinate of each first feature point and the pixel coordinate of the corresponding target feature point, a first spatial coordinate of a position where the moving object is located at the current time by using a preset position calculation formula;
the preset position calculation formula includes:
$$ z \begin{bmatrix} P_x \\ P_y \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \left( \begin{bmatrix} K_x \\ K_y \\ K_z \end{bmatrix} + T \right) $$
wherein P_x characterizes the set of x-axis coordinates of each feature point to be utilized in the target image, and P_y characterizes the set of y-axis coordinates of each feature point to be utilized in the target image; K_x, K_y and K_z characterize the sets of x-axis, y-axis and z-axis coordinates, respectively, of each feature point to be utilized in the reference image after mapping to a preset three-dimensional coordinate system; T characterizes the first spatial coordinate of the position of the moving object at the current moment; f_x, f_y, c_x, c_y and z are built-in parameters of the target camera used to capture the images; and the preset three-dimensional coordinate system is a three-dimensional coordinate system constructed in advance for the scene where the moving object is located.
Optionally, the difference screening module is specifically configured to, for a target feature point corresponding to each second feature point, calculate a difference between a pixel coordinate and a predicted coordinate of the target feature point corresponding to the second feature point according to a preset difference calculation formula, so as to obtain a target difference of the target feature point corresponding to the second feature point;
screening target feature points with the difference value between the predicted coordinate and the pixel coordinate within a preset range according to the target difference value of the target feature point corresponding to each second feature point to serve as supplementary feature points;
wherein the preset difference calculation formula comprises:
$$ W_i(t) = w \cdot W_i(t-1) + (1 - w) \cdot \frac{\operatorname{dis}(x_{i,\mathrm{last}},\, x_{i,\mathrm{cur}})}{\max \operatorname{dis}(x_{i,\mathrm{last}},\, x_{i,\mathrm{cur}})} $$
wherein W_i(t) characterizes the target difference of the target feature point i corresponding to a second feature point in the target image acquired at the current moment t; W_i(t-1) characterizes the difference between the predicted coordinate and the pixel coordinate of the target feature point i corresponding to the second feature point in the reference image acquired at the previous moment t-1; dis(x_ilast, x_icur) characterizes the Euclidean distance between the predicted coordinate x_ilast and the pixel coordinate x_icur of the target feature point i corresponding to the second feature point; max dis(x_ilast, x_icur) characterizes the maximum Euclidean distance between predicted coordinates and pixel coordinates among the target feature points corresponding to the second feature points in the target image; and w is a weight coefficient.
Optionally, the second calculating module is specifically configured to calculate, by using the preset position calculation formula, the predicted coordinates of the target feature point corresponding to each second feature point according to the first spatial coordinate and the pixel coordinate of each second feature point in the dynamic object region.
An embodiment of the present invention further provides an electronic device, as shown in fig. 4, including a processor 401, a communication interface 402, a memory 403, and a communication bus 404, where the processor 401, the communication interface 402, and the memory 403 complete mutual communication through the communication bus 404,
a memory 403 for storing a computer program;
the processor 401 is configured to implement any of the above-described steps of the method for positioning a moving object when executing the program stored in the memory 403.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
In a further embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the method for positioning a moving object as described in any one of the above.
In a further embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method for positioning a moving object as described in any of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises it.
The embodiments in this specification are described in a correlated manner; the same or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the embodiments of the apparatus, the electronic device, and the storage medium are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, reference may be made to the corresponding parts of the description of the method embodiments.
The above description covers only preferred embodiments of the present invention and is not intended to limit its scope. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method for positioning a moving object, comprising:
acquiring a target image and a reference image when a moving object needs to be positioned, wherein the target image and the reference image are images acquired at the position of the moving object at the current moment and at the previous moment, respectively;
determining a static object region and a dynamic object region in the reference image;
determining a plurality of first feature points from the static object region and, for each first feature point, determining the pixel point in the target image that matches the first feature point as the target feature point corresponding to that first feature point;
determining a first spatial coordinate of the position of the moving object at the current moment by using the pixel coordinate of each first feature point and the pixel coordinate of the corresponding target feature point;
determining a plurality of second feature points from the dynamic object region and, for each second feature point, determining the pixel point in the target image that matches the second feature point as the target feature point corresponding to that second feature point;
calculating the predicted coordinate of the target feature point corresponding to each second feature point according to the first spatial coordinate and the pixel coordinate of that second feature point in the dynamic object region;
screening, from the target feature points corresponding to the second feature points, those target feature points whose difference between predicted coordinate and pixel coordinate lies within a preset range, as supplementary feature points;
and determining a second spatial coordinate of the position of the moving object at the current moment, as the positioning result of the moving object, by using the pixel coordinate of each first feature point, the pixel coordinate of the target feature point corresponding to each first feature point, the pixel coordinate of each supplementary feature point, and the pixel coordinate of the second feature point corresponding to each supplementary feature point.
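For orientation, a minimal Python sketch of the static-region half of this pipeline (up to the first spatial coordinate) follows. It assumes OpenCV ORB features, a depth map for the reference image, and PnP as the pose solver; the claim prescribes none of these specific choices, and every name below is illustrative only. The dynamic-region prediction and screening steps are sketched after claims 2 to 4.

import cv2
import numpy as np

def first_spatial_coordinate(target_img, reference_img, static_mask, K, depth):
    # static_mask: binary mask of the static object region of the reference
    # image (e.g. from a segmentation network); K: 3x3 intrinsic matrix;
    # depth: per-pixel depth of the reference image. All assumptions.
    orb = cv2.ORB_create()
    # First feature points: keypoints restricted to the static region.
    kp_ref, des_ref = orb.detectAndCompute(reference_img, static_mask)
    kp_tgt, des_tgt = orb.detectAndCompute(target_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_ref, des_tgt)

    # Map each matched reference point into the 3D coordinate system and
    # pair it with the pixel coordinate of its target feature point.
    pts3d, pts2d = [], []
    for m in matches:
        u, v = kp_ref[m.queryIdx].pt
        z = float(depth[int(v), int(u)])
        if z <= 0:
            continue  # skip pixels without a valid depth value
        x = (u - K[0, 2]) * z / K[0, 0]
        y = (v - K[1, 2]) * z / K[1, 1]
        pts3d.append((x, y, z))
        pts2d.append(kp_tgt[m.trainIdx].pt)

    # First spatial coordinate: solve the pose from static points only.
    ok, rvec, tvec = cv2.solvePnP(np.float32(pts3d), np.float32(pts2d), K, None)
    return tvec  # translation of the camera / moving object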
2. The method according to claim 1, wherein determining the first spatial coordinate of the position of the moving object at the current moment by using the pixel coordinate of each first feature point and the pixel coordinate of the corresponding target feature point comprises:
determining the first spatial coordinate of the position of the moving object at the current moment from the pixel coordinate of each first feature point and the pixel coordinate of the corresponding target feature point by means of a preset position calculation formula;
the preset position calculation formula comprises:
[Formula FDA0002960345440000021: the preset position calculation formula (image not reproduced in the text)]
wherein P_x denotes the set of x-axis coordinates of the feature points to be utilized in the target image; P_y denotes the set of their y-axis coordinates; K_x, K_y, and K_z denote the sets of x-, y-, and z-axis coordinates obtained after the feature points to be utilized in the reference image are mapped into a preset three-dimensional coordinate system; T denotes the first spatial coordinate of the position of the moving object at the current moment; f_x, f_y, c_x, c_y, and z are built-in parameters of the target camera used for acquiring the images; and the preset three-dimensional coordinate system is a three-dimensional coordinate system constructed in advance for the scene in which the moving object is located.
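The formula image itself is not reproduced above, so the sketch below assumes one standard reading of these definitions: a pinhole projection with a translation-only motion model, with T recovered from the point sets by nonlinear least squares. The function name and the use of SciPy are assumptions for illustration, not part of the patent.

import numpy as np
from scipy.optimize import least_squares

def solve_first_spatial_coordinate(Px, Py, Kx, Ky, Kz, fx, fy, cx, cy):
    # Px, Py: pixel coordinates of the feature points in the target image.
    # Kx, Ky, Kz: the same points mapped into the preset 3D coordinate system.
    # Returns T = (Tx, Ty, Tz), chosen so that the 3D points, shifted by T,
    # project onto the observed target pixels under the pinhole model.
    def residuals(T):
        X, Y, Z = Kx - T[0], Ky - T[1], Kz - T[2]
        u = fx * X / Z + cx
        v = fy * Y / Z + cy
        return np.concatenate([u - Px, v - Py])
    return least_squares(residuals, x0=np.zeros(3)).x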
3. The method according to claim 1 or 2, wherein screening, from the target feature points corresponding to the second feature points, those target feature points whose difference between predicted coordinate and pixel coordinate lies within a preset range, as supplementary feature points, comprises:
for the target feature point corresponding to each second feature point, calculating the difference between its pixel coordinate and its predicted coordinate according to a preset difference calculation formula, to obtain the target difference value of that target feature point;
screening, according to the target difference value of the target feature point corresponding to each second feature point, those target feature points whose difference between predicted coordinate and pixel coordinate lies within the preset range, as supplementary feature points;
wherein the preset difference calculation formula comprises:
[Formula FDA0002960345440000022: the preset difference calculation formula (image not reproduced in the text)]
wherein W_i(t) denotes the target difference value of the target feature point i corresponding to a second feature point in the target image acquired at the current moment t; W_i(t-1) denotes the difference value between the predicted coordinate and the pixel coordinate of the target feature point i corresponding to the second feature point in the reference image acquired at the previous moment t-1; dis(x_ilast, x_icur) denotes the Euclidean distance between the predicted coordinate x_ilast of the target feature point i corresponding to the second feature point and its pixel coordinate x_icur; max dis(x_ilast, x_icur) denotes the maximum Euclidean distance between the predicted coordinates and the pixel coordinates over the target feature points corresponding to the second feature points in the target image; and w is a weight coefficient.
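Because the difference formula is likewise only available as an image, the sketch below infers a plausible form from these definitions: a weighted (exponentially smoothed) update of the normalized Euclidean reprojection error, followed by thresholding. Both the assumed update rule and the threshold value are illustrations, not the patent's stated formula.

import numpy as np

def update_target_difference(W_prev, x_pred, x_obs, w=0.5):
    # W_prev: target difference values W_i(t-1) from the previous frame, (N,).
    # x_pred, x_obs: predicted and observed pixel coordinates, (N, 2).
    d = np.linalg.norm(x_pred - x_obs, axis=1)   # dis(x_ilast, x_icur)
    d_max = d.max() if d.max() > 0 else 1.0      # max dis(...), guarded
    return w * W_prev + (1.0 - w) * d / d_max    # assumed weighted update

def select_supplementary_points(W, preset_range=0.3):
    # Keep the target feature points whose difference lies within the range.
    return np.where(W < preset_range)[0]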
4. The method according to claim 2, wherein calculating the predicted coordinate of the target feature point corresponding to each second feature point according to the first spatial coordinate and the pixel coordinate of that second feature point in the dynamic object region comprises:
calculating, by using the preset position calculation formula, the predicted coordinate of the target feature point corresponding to each second feature point according to the first spatial coordinate and the pixel coordinate of that second feature point in the dynamic object region.
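Under the same assumed projection model as in the claim 2 sketch, predicting a coordinate amounts to reprojecting a dynamic-region point with the already-solved T; a dynamic-region point that is in fact consistent with the camera motion (i.e., not truly moving) will then land close to its matched target feature point, which is what the claim 3 screening exploits.

import numpy as np

def predict_coordinates(Kx, Ky, Kz, T, fx, fy, cx, cy):
    # Project the reference-frame 3D points, shifted by the first spatial
    # coordinate T, into the target image (translation-only assumption).
    X, Y, Z = Kx - T[0], Ky - T[1], Kz - T[2]
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return np.stack([u, v], axis=1)  # predicted pixel coordinates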
5. An apparatus for positioning a moving object, comprising:
an image acquisition module, configured to acquire a target image and a reference image when a moving object needs to be positioned, wherein the target image and the reference image are images acquired at the position of the moving object at the current moment and at the previous moment, respectively;
a region determination module, configured to determine a static object region and a dynamic object region in the reference image;
a first determination module, configured to determine a plurality of first feature points from the static object region and, for each first feature point, determine the pixel point in the target image that matches the first feature point as the target feature point corresponding to that first feature point;
a first calculation module, configured to determine a first spatial coordinate of the position of the moving object at the current moment by using the pixel coordinate of each first feature point and the pixel coordinate of the corresponding target feature point;
a second determination module, configured to determine a plurality of second feature points from the dynamic object region and, for each second feature point, determine the pixel point in the target image that matches the second feature point as the target feature point corresponding to that second feature point;
a second calculation module, configured to calculate the predicted coordinate of the target feature point corresponding to each second feature point according to the first spatial coordinate and the pixel coordinate of that second feature point in the dynamic object region;
a difference screening module, configured to screen, from the target feature points corresponding to the second feature points, those target feature points whose difference between predicted coordinate and pixel coordinate lies within a preset range, as supplementary feature points;
and a result determination module, configured to determine a second spatial coordinate of the position of the moving object at the current moment, as the positioning result of the moving object, by using the pixel coordinate of each first feature point, the pixel coordinate of the target feature point corresponding to each first feature point, the pixel coordinate of each supplementary feature point, and the pixel coordinate of the second feature point corresponding to each supplementary feature point.
6. The apparatus according to claim 5, wherein the first calculation module is specifically configured to determine the first spatial coordinate of the position of the moving object at the current moment from the pixel coordinate of each first feature point and the pixel coordinate of the corresponding target feature point by means of a preset position calculation formula;
the preset position calculation formula comprises:
[Formula FDA0002960345440000041: the preset position calculation formula (image not reproduced in the text)]
wherein P_x denotes the set of x-axis coordinates of the feature points to be utilized in the target image; P_y denotes the set of their y-axis coordinates; K_x, K_y, and K_z denote the sets of x-, y-, and z-axis coordinates obtained after the feature points to be utilized in the reference image are mapped into a preset three-dimensional coordinate system; T denotes the first spatial coordinate of the position of the moving object at the current moment; f_x, f_y, c_x, c_y, and z are built-in parameters of the target camera used for acquiring the images; and the preset three-dimensional coordinate system is a three-dimensional coordinate system constructed in advance for the scene in which the moving object is located.
7. The apparatus according to claim 5 or 6, wherein the difference screening module is specifically configured to: for the target feature point corresponding to each second feature point, calculate the difference between its pixel coordinate and its predicted coordinate according to a preset difference calculation formula, to obtain the target difference value of that target feature point;
and screen, according to the target difference value of the target feature point corresponding to each second feature point, those target feature points whose difference between predicted coordinate and pixel coordinate lies within the preset range, as supplementary feature points;
wherein the preset difference calculation formula comprises:
[Formula FDA0002960345440000042: the preset difference calculation formula (image not reproduced in the text)]
wherein W_i(t) denotes the target difference value of the target feature point i corresponding to a second feature point in the target image acquired at the current moment t; W_i(t-1) denotes the difference value between the predicted coordinate and the pixel coordinate of the target feature point i corresponding to the second feature point in the reference image acquired at the previous moment t-1; dis(x_ilast, x_icur) denotes the Euclidean distance between the predicted coordinate x_ilast of the target feature point i corresponding to the second feature point and its pixel coordinate x_icur; max dis(x_ilast, x_icur) denotes the maximum Euclidean distance between the predicted coordinates and the pixel coordinates over the target feature points corresponding to the second feature points in the target image; and w is a weight coefficient.
8. The apparatus according to claim 6, wherein the second calculation module is specifically configured to calculate, by using the preset position calculation formula, the predicted coordinate of the target feature point corresponding to each second feature point according to the first spatial coordinate and the pixel coordinate of that second feature point in the dynamic object region.
9. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the method steps of any one of claims 1 to 4 when executing the program stored in the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored therein which, when executed by a processor, carries out the method steps of any one of claims 1 to 4.
CN202110236385.1A 2021-03-03 2021-03-03 Method and device for positioning mobile object, electronic equipment and storage medium Pending CN113096182A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110236385.1A CN113096182A (en) 2021-03-03 2021-03-03 Method and device for positioning mobile object, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110236385.1A CN113096182A (en) 2021-03-03 2021-03-03 Method and device for positioning mobile object, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113096182A true CN113096182A (en) 2021-07-09

Family

ID=76666468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110236385.1A Pending CN113096182A (en) 2021-03-03 2021-03-03 Method and device for positioning mobile object, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113096182A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200300639A1 (en) * 2017-10-31 2020-09-24 Hewlett-Packard Development Company, L.P. Mobile robots to generate reference maps for localization
CN109829947A (en) * 2019-02-25 2019-05-31 北京旷视科技有限公司 Pose determines method, tray loading method, apparatus, medium and electronic equipment
CN111080805A (en) * 2019-11-26 2020-04-28 北京云聚智慧科技有限公司 Method and device for generating three-dimensional block diagram of marked object, electronic equipment and storage medium
CN111813984A (en) * 2020-06-23 2020-10-23 北京邮电大学 Method and device for realizing indoor positioning by using homography matrix and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JICHAO JIAO et al.: "An adaptive visual Dynamic-SLAM method based on fusing the semantic information", IEEE Sensors Journal (Early Access) *
丁斗建 et al.: "Vision-based autonomous localization and obstacle detection method for a robot" (基于视觉的机器人自主定位与障碍物检测方法), Journal of Computer Applications (计算机应用) *

Similar Documents

Publication Publication Date Title
CN109961009B (en) Pedestrian detection method, system, device and storage medium based on deep learning
CN109255352B (en) Target detection method, device and system
CN111079570B (en) Human body key point identification method and device and electronic equipment
CN108154171B (en) Figure identification method and device and electronic equipment
CN107358596B (en) Vehicle loss assessment method and device based on image, electronic equipment and system
CN109815843B (en) Image processing method and related product
CN105164700B (en) Detecting objects in visual data using a probabilistic model
CN109325456B (en) Target identification method, target identification device, target identification equipment and storage medium
CN109426785B (en) Human body target identity recognition method and device
CN108154099B (en) Figure identification method and device and electronic equipment
CN110969045B (en) Behavior detection method and device, electronic equipment and storage medium
CN110969100B (en) Human body key point identification method and device and electronic equipment
CN111027412B (en) Human body key point identification method and device and electronic equipment
CN113971653A (en) Target detection method, device and equipment for remote sensing image and storage medium
CN111738120B (en) Character recognition method, character recognition device, electronic equipment and storage medium
CN110909664A (en) Human body key point identification method and device and electronic equipment
CN115294063A (en) Method, device, system, electronic device and medium for detecting defect of electronic component
CN108229583B (en) Method and device for fast template matching based on main direction difference characteristics
CN113011398A (en) Target change detection method and device for multi-temporal remote sensing image
CN110910445A (en) Object size detection method and device, detection equipment and storage medium
CN113420848A (en) Neural network model training method and device and gesture recognition method and device
CN114120221A (en) Environment checking method based on deep learning, electronic equipment and storage medium
CN110287361B (en) Figure picture screening method and device
CN114595352A (en) Image identification method and device, electronic equipment and readable storage medium
CN112418159A (en) Attention mask based diner monitoring method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination