CN111578938A - Target object positioning method and device - Google Patents

Target object positioning method and device


Publication number
CN111578938A
CN111578938A (application CN201910124689.1A)
Authority
CN
China
Prior art keywords
particle
target object
indoor map
map model
indoor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910124689.1A
Other languages
Chinese (zh)
Other versions
CN111578938B
Inventor
陈彦宇
马雅奇
赵尹发
谭泽汉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai
Priority to CN201910124689.1A
Publication of CN111578938A
Application granted
Publication of CN111578938B
Legal status: Active

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G01C21/206 - Instruments for performing navigational calculations specially adapted for indoor navigation

Abstract

The application discloses a method and a device for positioning a target object. The method comprises: positioning a target object to obtain its initial position information; constructing an indoor map model of the location of the target object; and correcting the initial position information according to a particle filter algorithm and the indoor map model to obtain the position information of the target object. The method solves the technical problems of unstable or inaccurate positioning in current indoor positioning methods caused by complex indoor environments and non-ranging errors.

Description

Target object positioning method and device
Technical Field
The application relates to the technical field of indoor positioning, in particular to a target object positioning method and device.
Background
With the popularization of the mobile internet and the rise of internet-of-things applications, technologies such as cloud computing, big data, robotics, and intelligent sensing have gradually entered public view, and positioning, as one of the key technologies of the sensing layer, plays a significant role. Outdoor positioning has matured on the basis of satellite technologies such as GPS and Beidou. Indoors, however, occlusion by buildings makes satellite positioning insufficiently accurate, so outdoor positioning technology no longer meets requirements, and indoor positioning, as a foundation of the internet of things and of location big data, has gradually become a necessity.
At present, traditional indoor positioning methods suffer from technical problems such as unstable or inaccurate positioning, owing to complex indoor environments and the presence of non-ranging errors.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the present application provide a target object positioning method and device, aiming to at least solve the technical problems of unstable or inaccurate positioning caused by complex indoor environments and non-ranging errors in current indoor positioning methods.
According to an aspect of an embodiment of the present application, there is provided a method for locating a target object, including: positioning a target object to obtain initial position information of the target object; constructing an indoor map model of the position of the target object; and correcting the initial position information according to a particle filter algorithm and an indoor map model to obtain the position information of the target object.
Optionally, the correcting the initial position information according to a particle filter algorithm and an indoor map model includes: performing particle through-wall detection by using an indoor map model to obtain the position of a particle in the indoor map model, wherein the particle in the particle through-wall detection is used for representing a predicted position and a weight corresponding to the predicted position; determining the weight of the particles according to the positions of the particles in the indoor map model; resampling the particles according to the determined weights; and performing through-wall detection on the resampled particles again, and correcting the initial position information according to the result of performing through-wall detection on the particles again.
Optionally, determining the weight of the particle according to the position of the particle in the indoor map model includes: if the fact that the destination of the particle transfer is located in an unreachable area in the indoor map model is detected, a first weight is given to the predicted position corresponding to the particle; and if the destination of the particle transfer is detected to be positioned in the reachable area, giving a second weight to the predicted position corresponding to the particle, wherein the second weight is larger than the first weight.
Optionally, resampling the particles according to the determined weights includes: filtering out the particles with the first weight and keeping the particles with the second weight. Performing through-wall detection again on the resampled particles includes: performing through-wall detection again using the particles with the second weight to obtain a positioning result.
Optionally, the method of correcting the initial position information according to a particle filter algorithm and an indoor map model further includes: when the positioning result indicates that the destination of the particle transfer has moved from the current area to another area, adjusting the positioning result to a connecting position between the current area and the other area; and updating the initial position information of the target object to that connecting position.
Optionally, constructing an indoor map model of a location of the target object includes: the indoor map model is constructed according to an indoor plan, wherein the indoor map model comprises at least one of the following information: point, line, plane.
Optionally, the method further includes updating the indoor map model by: dividing an indoor plane graph into a plurality of grid areas, wherein each grid area is a pixel point; counting the times of the target object falling in each grid area, and converting the times into pixel values of pixel points; removing background noise of the plan, and dividing the plan into a background area and a target area, wherein the target area is an area needing edge detection; removing random noise of the plan; calculating gradients of the filtered pixel points in the width direction and the height direction to obtain a gradient amplitude image; labeling the gradient amplitude image by using a non-maximum suppression method to obtain a labeled image; and removing isolated noise points in the marked image according to an edge detection algorithm.
According to another aspect of the embodiments of the present application, there is also provided a target object positioning apparatus, including: the positioning module is used for positioning the target object to obtain initial position information of the target object; the building module is used for building an indoor map model of the position of the target object; and the correction module is used for correcting the initial position information according to the particle filter algorithm and the indoor map model to obtain the position information of the target object.
According to still another aspect of the embodiments of the present application, there is provided a storage medium including a stored program, where the program, when executed, controls a device on which the storage medium is located to perform the above method for locating an object.
According to still another aspect of the embodiments of the present application, there is provided a processor, configured to execute a program, where the program executes the above method for locating an object.
In the embodiments of the present application, the target object is positioned to obtain its initial position information; an indoor map model of the target object's location is constructed; and the initial position information is corrected according to the particle filter algorithm and the indoor map model to obtain the position information of the target object. By constructing an indoor map model, fusing it into the particle filter algorithm, and then correcting the positioning result of the target object, the technical effect of improving the precision and stability of indoor positioning is achieved, and the technical problems of unstable or inaccurate positioning caused by complex indoor environments and non-ranging errors in existing indoor positioning methods are solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow chart of a method for locating an object according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of updating an indoor map model according to an embodiment of the present application;
FIG. 3 is a flow chart of another method for locating an object according to an embodiment of the present application;
fig. 4 is a block diagram of a target object positioning device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present application, there is provided a method embodiment of a method for locating an object, it should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than that illustrated herein.
Fig. 1 is a flowchart of a method for locating an object according to an embodiment of the present application, as shown in fig. 1, the method includes the following steps:
and S102, positioning the target object to obtain initial position information of the target object.
According to an optional embodiment of the present application, when step S102 is executed, an indoor positioning method based on received signal strength indication (RSSI) is used to position the target object. The RSSI method estimates the distance between a signal point and a receiving point from the strength of the received signal and then performs positioning on the corresponding data. The signal used for positioning may be a Bluetooth, WiFi, or WLAN signal, or another signal widely used in indoor positioning; which signal to apply depends on the characteristics of the indoor environment.
Optionally, after the indoor positioning deployment scheme is selected, the network connectivity of the environment needs to be further tested, and parameters such as environment complexity and reference signal strength are estimated by a maximum likelihood estimation method.
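The estimated parameters feed directly into ranging. As an illustration, a log-distance path-loss model is a common way to turn an RSSI reading into a distance estimate; the function name and the default values for the reference signal strength at 1 m and the environment-complexity (path-loss) exponent below are assumptions for this sketch, not values from the application.

```python
def rssi_to_distance(rssi_dbm, ref_rssi_dbm=-40.0, path_loss_exponent=2.5):
    """Estimate the distance (in metres) between a signal point and a
    receiving point from received signal strength, using the log-distance
    path-loss model: RSSI = ref_rssi - 10 * n * log10(d).

    ref_rssi_dbm is the reference signal strength expected at 1 m and
    path_loss_exponent (n) reflects environment complexity -- the two
    parameters the text says are estimated by maximum likelihood.
    """
    return 10.0 ** ((ref_rssi_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

print(rssi_to_distance(-40.0))  # at the reference strength the distance is 1.0 m
```

With these assumed parameters, an RSSI of -65 dBm maps to 10 m; in practice the per-anchor distance estimates would then feed a trilateration or fingerprinting step.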
The target object may be a person or an object moving indoors, or may be a stationary tag.
And step S104, constructing an indoor map model of the position of the target object.
In some embodiments of the present application, the step S104 may be executed to construct an indoor map model according to a plan view of an indoor, wherein the indoor map model includes at least one of the following information: point, line, plane.
In the embodiments of the present application, a mixed vector representation and a layer representation are mainly adopted to represent the geographic information of the indoor map, which comprises three types: point, line, and plane. Points represent turning points, splicing points, and the like in the map; lines represent wall entities, door entities, and walkable path track entities; and planes mainly represent indoor spatial areas such as rooms, corridors, and stairs.
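The point/line/plane scheme can be captured with a few small data structures. This is a minimal sketch under assumed names (the `Wall`/`Area` classes and the `kind` and `reachable` fields are illustrative, not structures defined in the application):

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]  # turning/splicing points in the map

@dataclass
class Wall:            # line entity: wall, door, or walkable path track
    start: Point
    end: Point
    kind: str          # "wall", "door", or "path"

@dataclass
class Area:            # plane entity: room, corridor, stairs, ...
    name: str
    polygon: List[Point]   # boundary vertices in order
    reachable: bool = True

# A minimal one-room model: four sides, with one door on the south side.
room = Area("room1", [(0, 0), (5, 0), (5, 4), (0, 4)])
walls = [
    Wall((0, 0), (2, 0), "wall"), Wall((2, 0), (3, 0), "door"),
    Wall((3, 0), (5, 0), "wall"), Wall((5, 0), (5, 4), "wall"),
    Wall((5, 4), (0, 4), "wall"), Wall((0, 4), (0, 0), "wall"),
]
```

A layer representation then amounts to keeping the point, line, and plane collections as separate layers over the same coordinate frame.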
And S106, correcting the initial position information according to the particle filter algorithm and the indoor map model to obtain the position information of the target object.
In some embodiments of the present application, in executing step S106, the indoor map model constructed in step S104 is fused in a particle filtering algorithm, and then the initial positioning result of the target object is corrected.
A particle filter (PF) approximates a probability density function by a set of random samples ("particles") propagating through the state space, replacing the integral operation with the sample mean to obtain a minimum-variance estimate of the system state. The idea is based on the Monte Carlo method: the distribution of random state particles drawn from the posterior probability represents the distribution itself, a procedure known as sequential importance sampling. The method can be applied to a state-space model of any form, and as the number of samples N → ∞ it can approximate any probability density distribution.
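The sequential-importance-sampling cycle described above can be sketched in one dimension as follows; the Gaussian likelihood, the noise level, and the function name are assumptions chosen to keep the example minimal, not details from the application:

```python
import math
import random

def particle_filter_step(particles, move, measurement, noise=0.5):
    """One predict-weight-resample cycle of a 1-D particle filter
    (sequential importance sampling). The weighted sample mean replaces
    the integral in the minimum-variance state estimate."""
    # Predict: propagate each particle with the motion model plus process noise.
    particles = [p + move + random.gauss(0.0, noise) for p in particles]
    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = [math.exp(-((measurement - p) ** 2) / (2.0 * noise ** 2))
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set in proportion to the weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    # Estimate: the sample mean approximates the posterior mean.
    return particles, sum(particles) / len(particles)
```

`random.choices` draws with replacement in proportion to the weights, i.e. standard multinomial resampling; as the number of particles grows, the sample mean converges toward the posterior mean.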
Through the steps, the indoor map model is constructed, then the indoor map model is fused in the particle filter algorithm, and then the positioning result of the target object is corrected, so that the technical effects of improving the accuracy and stability of indoor positioning are achieved.
In some embodiments of the present application, step S106 is implemented as follows: performing particle through-wall detection using the indoor map model to obtain the position of each particle in the model, wherein a particle in through-wall detection represents a predicted position and the weight corresponding to that predicted position; determining the weight of each particle according to its position in the indoor map model; resampling the particles according to the determined weights; performing through-wall detection again on the resampled particles; and correcting the initial position information according to the result of this second detection.
Through the steps, the indoor map is fused in the particle filtering to carry out particle through-wall detection, and the fused positioning result is subjected to through-wall detection again, so that the positioning accuracy and stability of the target object are improved.
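For a single particle, the through-wall test reduces to a segment-intersection check between the particle's transfer (previous position to predicted position) and each wall segment in the map model. A standard orientation-based test, sketched here with assumed function names:

```python
def _ccw(a, b, c):
    # Cross product sign: positive if a -> b -> c turns counter-clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def crosses_wall(prev_pos, new_pos, wall_start, wall_end):
    """True if the particle's transfer from prev_pos to new_pos crosses the
    wall segment -- the through-wall test applied to one particle move."""
    d1 = _ccw(wall_start, wall_end, prev_pos)
    d2 = _ccw(wall_start, wall_end, new_pos)
    d3 = _ccw(prev_pos, new_pos, wall_start)
    d4 = _ccw(prev_pos, new_pos, wall_end)
    # Proper crossing: the two endpoints lie on opposite sides of the wall,
    # and the wall's endpoints lie on opposite sides of the move.
    return (d1 * d2 < 0) and (d3 * d4 < 0)
```

For example, `crosses_wall((0, 1), (2, 1), (1, 0), (1, 2))` is `True` (the move cuts the wall), while a move that stops short of the wall returns `False`.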
According to an alternative embodiment of the present application, determining the weight of the particle according to the position of the particle in the indoor map model comprises: if the fact that the destination of the particle transfer is located in an unreachable area in the indoor map model is detected, a first weight is given to the predicted position corresponding to the particle; and if the destination of the particle transfer is detected to be positioned in the reachable area, giving a second weight to the predicted position corresponding to the particle, wherein the second weight is larger than the first weight.
Particle transfer is constrained using the position information of the indoor map: if the destination of a particle transfer lies in an unreachable (or not directly reachable) area, a low weight value is assigned to the particle's predicted position; if the destination lies in a reachable area, a high weight value is assigned to the particle.
In some embodiments of the present application, resampling the particles according to the determined weights comprises: filtering out the particles with the first weight and keeping the particles with the second weight. Performing through-wall detection again on the resampled particles comprises: performing through-wall detection again using the particles with the second weight to obtain a positioning result.
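Taken together, the weighting and resampling described above can be sketched as follows; the concrete weight values and the `reachable` callback (a test against the indoor map model) are assumptions for illustration:

```python
LOW_WEIGHT, HIGH_WEIGHT = 1e-6, 1.0  # the "first" and "second" weights (assumed values)

def weight_and_resample(particles, reachable):
    """Assign the first (low) weight to particles whose transfer ended in an
    unreachable area and the second (high) weight otherwise, then resample by
    keeping only the high-weight particles.

    `particles` is a list of predicted positions; `reachable(p)` is a
    caller-supplied test against the indoor map model (hypothetical helper).
    """
    weighted = [(p, HIGH_WEIGHT if reachable(p) else LOW_WEIGHT)
                for p in particles]
    # With only two weight levels, resampling reduces to filtering out
    # the low-weight particles and keeping the high-weight ones.
    return [p for p, w in weighted if w == HIGH_WEIGHT]
```

The surviving particles are then run through the through-wall detection a second time to produce the positioning result.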
Optionally, executing step S106 further includes: when the positioning result indicates that the destination of the particle transfer has moved from the current area to another area, adjusting the positioning result to a connecting position between the two areas, and updating the initial position information of the target object to that connecting position. The connecting position may be the position of a door. Optionally, if the estimated track after filtering and fusion passes through a wall into another area, the positioning result is rolled back N steps, the result N steps back is adjusted to the nearest door, and the results of the following N-1 steps are adjusted using the difference between the adjusted result and the actual positioning result.
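The adjustment to a connecting position can be sketched as a nearest-door lookup; the `doors` mapping keyed by area pairs is a hypothetical schema for the sketch, not a structure defined in the application:

```python
import math

def snap_to_door(position, current_area_id, new_area_id, doors):
    """If the fused estimate jumped from one area to another through a wall,
    move it to the nearest connecting position (door) between the two areas.

    `doors` maps a frozenset of two area ids to a list of door coordinates
    (assumed schema). If no door connects the areas, the position is kept.
    """
    candidates = doors.get(frozenset((current_area_id, new_area_id)), [])
    if not candidates:
        return position
    return min(candidates, key=lambda d: math.dist(d, position))
```

The N-step rollback described above would then replace the result N steps back with this snapped position and shift the following N-1 results by the same offset.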
Fig. 2 is a flowchart of a method for updating an indoor map model according to an embodiment of the present application, as shown in fig. 2, the method including the steps of:
step S201, dividing an indoor plane map into a plurality of grid regions, where each grid region is a pixel point. The whole indoor environment is regarded as a two-dimensional image, the indoor space is divided into grids with proper sizes according to certain intervals, such as 1 meter and 0.5 meter, and each grid is used as a pixel point.
Step S202, counting the number of times the target object falls in each grid area and converting the counts into pixel values. Positioning information for the indoor environment over a recent period is acquired, the number of times the target object falls on each grid is counted, and the count is converted into an image pixel value, i.e., mapped into the range [0, 255], by the formula:

y = 255 × (x − x_min) / (x_max − x_min)

where x_max is the maximum of the current data, x_min is the minimum of the current data, x is any value in the current data, and y is the normalized mapped value.
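The min-max normalization of step S202 can be sketched directly; scaling onto [0, 255] with a factor of 255 is the natural reading of the formula, and rounding to integer pixel values is an added assumption:

```python
def counts_to_pixels(counts):
    """Map per-grid visit counts onto [0, 255] with min-max normalization:
    y = 255 * (x - x_min) / (x_max - x_min)."""
    x_min, x_max = min(counts), max(counts)
    if x_max == x_min:
        # All grids visited equally often: no contrast to normalize.
        return [0 for _ in counts]
    return [round(255 * (x - x_min) / (x_max - x_min)) for x in counts]

print(counts_to_pixels([0, 5, 10]))  # → [0, 128, 255]
```

The resulting pixel image is what the subsequent denoising and edge-detection steps operate on.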
Step S203, removing background noise from the plan and dividing it into a background area and a target area, where the target area is the area in which edges need to be detected. In an optional embodiment of the present application, a method based on fuzzy mathematics is used to automatically remove the interference of image background noise; the image is then divided into a background area and a target area (the area where edges need to be detected), and the target area is automatically extracted according to the maximum-membership principle of fuzzy mathematics.
And step S204, removing random noise of the plan view. According to an alternative embodiment of the present application, the random noise is removed using median filtering and gaussian smoothing filtering.
Step S205, calculating gradients of the filtered pixel points in the width direction and the height direction to obtain a gradient amplitude image; and labeling the gradient amplitude image by using a non-maximum suppression method to obtain a labeled image.
In some optional embodiments of the present application, gradients in the width and height directions are first computed for each filtered pixel to obtain a gradient magnitude image, which is then labeled using non-maximum suppression. Non-maximum suppression is an edge refinement method: the gradient edge obtained directly is usually several pixels wide rather than one, so the gradient map appears "blurred", whereas the edge region should be only one pixel wide. Non-maximum suppression preserves the local maximum gradient while suppressing all other gradient values, so that only the sharpest positions of the gradient change remain. The algorithm compares the gradient strength of the current point with that of the points in the positive and negative gradient directions; if the current point's gradient strength is the maximum among points in the same direction, its value is retained, otherwise it is suppressed, i.e., set to 0. For example, if the gradient direction of the current point is 90°, pointing directly upward, it is compared with the pixels in the vertical direction (directly above and directly below).
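The gradient-plus-non-maximum-suppression step can be sketched on a plain list-of-lists image; using central differences for the gradients and quantizing the comparison to four directions are simplifying assumptions of the sketch:

```python
import math

def gradient_and_nms(img):
    """Compute width/height gradients with central differences, then keep a
    pixel's gradient magnitude only if it is not exceeded by its two
    neighbours along the (4-way quantized) gradient direction."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    ang = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # gradient in width direction
            gy = img[y + 1][x] - img[y - 1][x]   # gradient in height direction
            mag[y][x] = math.hypot(gx, gy)
            ang[y][x] = math.degrees(math.atan2(gy, gx)) % 180
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = ang[y][x]
            if a < 22.5 or a >= 157.5:           # horizontal gradient
                n1, n2 = mag[y][x - 1], mag[y][x + 1]
            elif a < 67.5:                       # diagonal
                n1, n2 = mag[y - 1][x - 1], mag[y + 1][x + 1]
            elif a < 112.5:                      # vertical gradient (90° case)
                n1, n2 = mag[y - 1][x], mag[y + 1][x]
            else:                                # other diagonal
                n1, n2 = mag[y - 1][x + 1], mag[y + 1][x - 1]
            if mag[y][x] >= n1 and mag[y][x] >= n2:
                out[y][x] = mag[y][x]            # local maximum: keep
    return out
```

On a vertical step edge the suppressed image keeps only the pixels whose magnitude is not exceeded along the gradient direction, thinning the blurred gradient band.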
And step S206, removing isolated noise points in the marked image according to an edge detection algorithm.
Typical edge detection algorithms use a threshold to filter out small gradient values caused by noise or color changes while retaining large ones. The Canny algorithm applies double thresholds, one high and one low, to distinguish edge pixels: if an edge pixel's gradient value is greater than the high threshold, it is considered a strong edge point; if it lies between the low and high thresholds, it is marked as a weak edge point; and points below the low threshold are suppressed. Strong edge points can be taken as true edges, while weak edge points may be true edges or may be caused by noise or color changes; for an accurate result, the latter should be removed. Weak edge points caused by real edges are generally connected to strong edge points, whereas those caused by noise are not. In this embodiment, a hysteresis boundary tracking algorithm checks the pixels in each weak edge point's eight-connected neighborhood; as long as a strong edge point is present, the weak edge point is kept as a true edge. Finally, for each edge point, a 3×3 area centered on it is taken and the total number m of edge points in that area is counted; if m = 1, the point is removed as an isolated noise point. This yields the final edge image, from which the edge positions are obtained and the indoor map is updated.
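The double-threshold check, the eight-connected test of weak edge points, and the m = 1 isolated-point removal can be sketched together on a gradient-magnitude grid; representing edge points as a set of coordinates is an implementation choice for the sketch:

```python
def hysteresis_and_clean(mag, low, high):
    """Canny-style double thresholding: strong points (> high) are edges;
    weak points (between low and high) are kept only if a strong point lies
    in their eight-connected neighbourhood; finally, any edge point with no
    other edge point in its 3x3 window (m == 1) is removed as isolated
    noise."""
    h, w = len(mag), len(mag[0])
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    strong = {(y, x) for y in range(h) for x in range(w) if mag[y][x] > high}
    edges = set(strong)
    for y in range(h):
        for x in range(w):
            if low < mag[y][x] <= high:                    # weak edge point
                if any((y + dy, x + dx) in strong for dy, dx in nbrs):
                    edges.add((y, x))                      # linked to a strong point
    # Remove isolated points: keep only points with another edge point
    # somewhere in their 3x3 window.
    return {p for p in edges
            if any((p[0] + dy, p[1] + dx) in edges for dy, dx in nbrs)}
```

Applied to the visit-count image, the surviving coordinate set marks the detected wall edges used to update the indoor map.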
And repeating the steps S201 to S206 to update the indoor map in real time.
In some optional embodiments of the present application, the edge information of the indoor space may also be identified using a minimum bounding rectangle algorithm to update the indoor map.
Fig. 3 is a flowchart of another method for locating an object according to an embodiment of the present application, as shown in fig. 3, the method includes the following steps:
step S301, determining an indoor positioning deployment scheme, and determining the indoor positioning deployment scheme according to the characteristics of the indoor environment, wherein the indoor positioning deployment scheme deploys the indoor positioning environment by using indoor positioning technologies with wide application such as Bluetooth, WIFI, WLAN and the like.
And S302, estimating system parameters such as environment complexity and reference signal strength, testing network connectivity after indoor positioning environment deployment is finished, and estimating the system parameters such as the environment complexity and the reference signal strength by a maximum likelihood estimation method.
Step S303, positioning the target object by the indoor positioning algorithm based on the RSSI, collecting data, and positioning the target object by the indoor positioning algorithm based on the RSSI.
Step S304, constructing an indoor map model from the indoor plan. As in step S104, the model contains at least one of the following kinds of information: point, line, plane, represented using a mixed vector representation and a layer representation (points for turning and splicing points; lines for wall, door, and walkable path entities; planes for areas such as rooms, corridors, and stairs).
In step S305, the indoor map is updated using an edge detection algorithm.
The update follows steps S201 to S206 described above: the indoor plan is divided into grid areas and treated as a two-dimensional image; the number of times the target object falls in each grid is counted and normalized to pixel values in [0, 255]; background noise is removed and the target area extracted by the fuzzy-mathematics method; random noise is removed with median filtering and Gaussian smoothing filtering; width- and height-direction gradients are computed and the gradient magnitude image is refined by non-maximum suppression; and double-threshold edge detection with hysteresis boundary tracking removes isolated noise points, yielding the edge positions used to update the indoor map.
Step S306, fusing the indoor map into the particle filter algorithm to perform particle through-wall detection;
Step S307, performing through-wall detection again on the fused positioning result;
Step S308, correcting the positioning result.
Through steps S306 to S308, the indoor map is fused into the particle filter algorithm, and particle through-wall detection is carried out using the indoor map model to obtain the position of each particle in the indoor map model, where a particle in the through-wall detection represents a predicted position and a weight corresponding to that predicted position. The weight of each particle is determined according to its position in the indoor map model; the particles are resampled according to the determined weights; through-wall detection is then performed again on the resampled particles, and the initial position information is corrected according to the result of this second through-wall detection.
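For illustration only, the through-wall weighting and resampling loop of steps S306 to S308 could be sketched as below on a binary occupancy grid. The weight values, noise level, and helper names are assumptions of this sketch, not the patent's implementation:

```python
import numpy as np

LOW_W, HIGH_W = 1e-6, 1.0   # hypothetical first / second weights

def crosses_wall(grid, p_from, p_to, steps=20):
    """Return True if the straight segment from p_from to p_to passes
    through a wall cell. grid is a 2-D occupancy map where 1 marks an
    unreachable (wall) cell and 0 a free cell; points are (x, y)."""
    for t in np.linspace(0.0, 1.0, steps):
        x, y = p_from + t * (p_to - p_from)
        if grid[int(round(y)), int(round(x))] == 1:
            return True
    return False

def particle_filter_step(grid, particles, motion, sigma=0.3, rng=None):
    """One correction step in the spirit of S306-S308: propagate the
    particles by the measured motion plus noise, down-weight any particle
    whose transfer crosses a wall, resample by weight, and return the
    corrected position estimate (mean of the surviving particles)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    moved = particles + motion + rng.normal(0.0, sigma, particles.shape)
    h, w = grid.shape
    moved = np.clip(moved, 0.0, [w - 1.0, h - 1.0])  # stay on the map
    weights = np.array([LOW_W if crosses_wall(grid, p0, p1) else HIGH_W
                        for p0, p1 in zip(particles, moved)])
    weights /= weights.sum()
    idx = rng.choice(len(moved), size=len(moved), p=weights)  # resample
    resampled = moved[idx]
    return resampled, resampled.mean(axis=0)
```

Particles whose predicted transfer would pass through a wall receive the negligible first weight and effectively disappear at resampling, so the position estimate is pulled back into the reachable area, which is the corrective effect the method describes.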
It should be noted that, for a preferred implementation of the embodiment shown in fig. 3, reference may be made to the description of the embodiment shown in fig. 1, and details are not repeated here.
Fig. 4 is a block diagram of a positioning apparatus for a target object according to an embodiment of the present application. As shown in fig. 4, the apparatus includes:
and the positioning module 40 is used for positioning the target object to obtain initial position information of the target object.
And the building module 42 is used for building an indoor map model of the position of the target object.
And the correction module 44 is configured to correct the initial position information according to a particle filter algorithm and an indoor map model, so as to obtain position information of the target object.
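Purely as an illustration of the module structure (all class and method names here are hypothetical, not from the patent), the three modules of fig. 4 could be composed as:

```python
class TargetLocator:
    """Sketch of the fig. 4 apparatus: positioning, building, and
    correction modules composed into a single locator."""

    def __init__(self, positioning_module, building_module, correction_module):
        self.positioning = positioning_module    # module 40
        self.building = building_module          # module 42
        self.correction = correction_module      # module 44

    def locate(self, target):
        initial = self.positioning.locate(target)        # initial position
        indoor_map = self.building.build_model(target)   # indoor map model
        # Particle filter + map model correct the initial estimate.
        return self.correction.correct(initial, indoor_map)
```

Any concrete positioning backend (e.g. a radio-based ranger), map builder, and particle-filter corrector implementing these three call points could be plugged in.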
It should be noted that, reference may be made to the description related to the embodiment shown in fig. 1 for a preferred implementation of the embodiment shown in fig. 4, and details are not described here again.
The embodiment of the application further provides a storage medium comprising a stored program, where the program, when run, controls a device on which the storage medium is located to execute the above positioning method for the target object.
The storage medium stores a program for executing the following functions: positioning a target object to obtain initial position information of the target object; constructing an indoor map model of the position of the target object; and correcting the initial position information according to a particle filter algorithm and an indoor map model to obtain the position information of the target object.
The embodiment of the application further provides a processor configured to run a program, where the program, when run, executes the above positioning method for the target object.
The processor is used for running a program for executing the following functions: positioning a target object to obtain initial position information of the target object; constructing an indoor map model of the position of the target object; and correcting the initial position information according to a particle filter algorithm and an indoor map model to obtain the position information of the target object.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (10)

1. A method for locating an object, comprising:
positioning a target object to obtain initial position information of the target object;
constructing an indoor map model of the position of the target object;
and correcting the initial position information according to a particle filter algorithm and the indoor map model to obtain the position information of the target object.
2. The method of claim 1, wherein modifying the initial location information according to a particle filtering algorithm and the indoor map model comprises:
performing particle through-wall detection by using the indoor map model to obtain the position of the particle in the indoor map model, wherein the particle in the particle through-wall detection is used for representing a predicted position and a weight corresponding to the predicted position;
determining the weight of the particle according to the position of the particle in an indoor map model;
resampling the particles according to the determined weights; and performing through-wall detection on the resampled particles again, and correcting the initial position information according to the result of performing through-wall detection on the particles again.
3. The method of claim 2, wherein determining the weight of the particle based on the position of the particle in the indoor map model comprises:
if it is detected that the destination of the particle transfer is located in an unreachable area in the indoor map model, giving a first weight to the predicted position corresponding to the particle; and if it is detected that the destination of the particle transfer is located in a reachable area, giving a second weight to the predicted position corresponding to the particle, wherein the second weight is larger than the first weight.
4. The method of claim 3,
resampling the particles in accordance with the determined weights, comprising: filtering out the particles with the first weight and retaining the particles with the second weight;
and performing through-wall detection again on the resampled particles comprises: performing through-wall detection again by using the particles with the second weight to obtain a positioning result.
5. The method of claim 4, wherein the initial location information is modified according to a particle filtering algorithm and the indoor map model, further comprising:
when the positioning result indicates that the destination of the particle transfer has changed from a current area to another area, adjusting the positioning result to a communication position between the current area and the other area;
and updating the initial position information of the target object to the communication position.
6. The method of claim 1, wherein constructing an indoor map model of the location of the object comprises:
constructing the indoor map model according to an indoor plan, wherein the indoor map model comprises at least one of the following: a point, a line, and a plane.
7. The method of claim 6, further comprising updating the indoor map model by:
dividing the indoor plan into a plurality of grid areas, wherein each grid area is a pixel point;
counting the times of the target object falling in each grid area, and converting the times into pixel values of the pixel points;
removing background noise of the plan, and dividing the plan into a background area and a target area, wherein the target area is an area needing edge detection;
removing random noise of the plan;
calculating gradients of the filtered pixel points in the width direction and the height direction to obtain a gradient amplitude image;
labeling the gradient amplitude image by using a non-maximum suppression method to obtain a labeled image;
and removing the isolated noise points in the marked image according to the edge detection algorithm.
8. An apparatus for locating an object, comprising:
the positioning module is used for positioning a target object to obtain initial position information of the target object;
the building module is used for building an indoor map model of the position of the target object;
and the correction module is used for correcting the initial position information according to a particle filter algorithm and the indoor map model to obtain the position information of the target object.
9. A storage medium comprising a stored program, wherein the program when executed controls a device on which the storage medium is located to perform the method of locating an object according to any one of claims 1 to 7.
10. A processor, characterized in that the processor is configured to run a program, wherein the program is configured to perform the method for locating an object according to any one of claims 1 to 7 when running.
CN201910124689.1A 2019-02-19 2019-02-19 Target object positioning method and device Active CN111578938B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910124689.1A CN111578938B (en) 2019-02-19 2019-02-19 Target object positioning method and device


Publications (2)

Publication Number Publication Date
CN111578938A true CN111578938A (en) 2020-08-25
CN111578938B CN111578938B (en) 2022-08-02

Family

ID=72111362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910124689.1A Active CN111578938B (en) 2019-02-19 2019-02-19 Target object positioning method and device

Country Status (1)

Country Link
CN (1) CN111578938B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114615740A (en) * 2022-05-11 2022-06-10 中冶智诚(武汉)工程技术有限公司 Indoor personnel positioning method based on Bluetooth, PDR and map matching fusion

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140011518A1 (en) * 2012-06-26 2014-01-09 The Governing Council Of The University Of Toronto System, method and computer program for dynamic generation of a radio map
CN103791906A (en) * 2014-02-21 2014-05-14 南京北大工道创新有限公司 Indoor positioning position correction method based on indoor positioning device
CN106382931A (en) * 2016-08-19 2017-02-08 北京羲和科技有限公司 An indoor positioning method and a device therefor
US20170097237A1 (en) * 2014-06-19 2017-04-06 Chigoo Interactive Technology Co., Ltd. Method and device for real-time object locating and mapping
CN106643724A (en) * 2016-11-16 2017-05-10 浙江工业大学 Method for particle filter indoor positioning based on map information and position self-adaption correction
CN107990900A (en) * 2017-11-24 2018-05-04 江苏信息职业技术学院 A kind of particle filter design methods of pedestrian's indoor positioning data
US20180172451A1 (en) * 2015-08-14 2018-06-21 Beijing Evolver Robotics Co., Ltd Method and system for mobile robot to self-establish map indoors




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant