CN111537954A - Real-time high-dynamic fusion positioning method and device - Google Patents

Info

Publication number
CN111537954A
Authority
CN
China
Prior art keywords: identification, positioning, image, position coordinates, real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010309959.9A
Other languages
Chinese (zh)
Inventor
任德旗
孙剑
任俊儒
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by: Individual
Priority: CN202010309959.9A
Publication of CN111537954A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 5/00: Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S 5/16: Position-fixing using electromagnetic waves other than radio waves
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/20: Instruments for performing navigational calculations
    • G01C 21/206: Instruments for performing navigational calculations specially adapted for indoor navigation

Abstract

The invention provides a real-time high-dynamic fusion positioning method and device, applied in the technical field of positioning. With this method, a location to be positioned can be determined merely by presetting positioning identifiers at designated indoor positions; no large amount of hardware equipment needs to be deployed, so compared with the prior art the implementation cost is effectively reduced.

Description

Real-time high-dynamic fusion positioning method and device
Technical Field
The invention belongs to the technical field of positioning, and particularly relates to a real-time high-dynamic fusion positioning method and device.
Background
With the ever-growing scale of buildings such as shopping malls, stations and large underground parking garages, indoor positioning technology is finding ever wider use, because outdoor positioning technologies such as GPS cannot position accurately indoors: walls block the positioning signals and attenuate them severely. At present, common indoor positioning methods are mainly based on technologies such as Wi-Fi, Bluetooth, infrared, ultra-wideband (UWB), RFID, ZigBee and ultrasound.
Take indoor positioning based on Wi-Fi as an example. It uses the known positions of wireless access points as its basis and premise, and locates a connected mobile device by combining empirical measurements with a signal-propagation model; the achievable accuracy is roughly 1 to 20 meters. If the indoor environment is large and the building layout complex, a large number of wireless access points must be deployed to provide the positioning function.
Like the method above, existing indoor positioning methods all require a large amount of hardware to be deployed indoors in advance, which makes their implementation cost too high.
Disclosure of Invention
In view of this, an object of the present invention is to provide a real-time high-dynamic fusion positioning method and apparatus that achieve positioning based on positioning identifiers arranged indoors in advance, thereby saving a large amount of hardware equipment and effectively reducing the implementation cost. The specific scheme is as follows:
in a first aspect, the present invention provides a real-time high dynamic fusion positioning method, including:
acquiring a reference image which is acquired at a to-be-positioned place and comprises a plurality of positioning marks, wherein the positioning marks are preset at indoor designated positions;
determining the position coordinates of each positioning identifier in the reference image;
converting the reference image into a depth map, and calculating an estimated distance between the position to be positioned and the position of each positioning identifier based on the depth map;
and determining the position coordinates of the to-be-positioned location according to the estimated distances and the position coordinates of the positioning marks to finish positioning.
Optionally, the determining the position coordinates of each positioning identifier in the reference image includes:
respectively identifying the identification information of each positioning identification in the reference image to obtain the identification code of each positioning identification;
and respectively determining the position coordinates corresponding to each identification code according to a preset mapping relation, and taking the position coordinates corresponding to each identification code as the position coordinates of the corresponding positioning identification, wherein the corresponding relation between the identification codes and the position coordinates of the positioning identification is recorded in the preset mapping relation.
Optionally, the respectively identifying the identification information of each positioning identifier in the reference image to obtain an identification code of each positioning identifier includes:
extracting identification images of the positioning identifications in the reference image;
respectively carrying out preset enhanced denoising processing on each identification image to obtain a processed identification image;
and respectively identifying the identification information in each processed identification image to obtain an identification code corresponding to each positioning identification.
Optionally, the respectively performing preset enhanced denoising processing on each identification image to obtain a processed identification image includes:
for each of said identification images:
carrying out perspective transformation reduction on the identification image to obtain a reduced identification image;
converting the restored identification image into a gray image to obtain a gray image of the restored identification image;
and carrying out binarization processing on the gray-scale image of the restored identification image, and executing closing operation on a processing result to obtain the processed identification image.
Optionally, the identifying the identification information in each processed identification image respectively to obtain an identification code corresponding to each positioning identification includes:
calling a pre-trained identification recognition model, wherein the identification recognition model is obtained by training a neural network model with images of positioning identifications as training samples and the identification codes of the positioning identifications as training labels;
and respectively inputting each processed identification image into the identification recognition model to obtain identification codes corresponding to each positioning identification.
Optionally, the respectively inputting each processed identification image into the identification recognition model to obtain an identification code corresponding to each positioning identification includes:
for each of said processed identification images,
performing character separation on the processed identification image to obtain a plurality of character images;
respectively inputting each character image into the identification recognition model to obtain identification content corresponding to each character image;
and arranging the identification content corresponding to each character image according to the position of each character image in the processed identification image to obtain the identification code corresponding to the positioning identification.
Optionally, if the identification code is composed of letters and numbers, the identification recognition model includes a letter recognition model and a number recognition model, and the step of inputting each character image into the identification recognition model to obtain the identification content corresponding to each character image includes:
inputting each character image into the letter recognition model respectively to obtain a first recognition result of each character image and the accuracy rate of the recognition result;
inputting each character image into the digital recognition model respectively to obtain a second recognition result of each character image and the accuracy rate of the recognition result;
and for each character image, determining the recognition result with the highest recognition result accuracy rate as the identification content of the character image in the first recognition result and the second recognition result of the character image.
Optionally, the acquiring a reference image which is acquired at a to-be-positioned location and includes a plurality of positioning identifiers includes:
acquiring a video file acquired at a to-be-positioned place;
analyzing the video file to obtain a plurality of frames of images;
and taking the image which has the definition meeting the preset requirement and contains a plurality of positioning marks in the multi-frame image as a reference image.
Optionally, after the determining the position coordinates of the location to be positioned according to each estimated distance and the position coordinates of each positioning identifier, and completing positioning, the method further includes:
acquiring a target position coordinate input by a user;
and generating a navigation route between the position coordinates of the to-be-positioned location and the target position coordinates.
In a second aspect, the present invention provides a real-time high dynamic fusion positioning apparatus, including:
the device comprises a first acquisition unit, a second acquisition unit and a positioning unit, wherein the first acquisition unit is used for acquiring a reference image which is acquired at a to-be-positioned place and comprises a plurality of positioning marks, and the positioning marks are preset at indoor designated positions;
a determining unit, configured to determine position coordinates of each positioning identifier in the reference image;
the calculation unit is used for converting the reference image into a depth map and calculating an estimated distance between the position to be positioned and the position of each positioning identifier based on the depth map;
and the positioning unit is used for determining the position coordinates of the to-be-positioned location according to the estimated distances and the position coordinates of the positioning marks so as to complete positioning.
In the real-time high-dynamic fusion positioning method provided by the invention, positioning identifiers are preset at designated indoor positions. After a reference image containing a plurality of positioning identifiers is acquired at the location to be positioned, the position coordinates of each positioning identifier in the reference image are determined; the reference image is then converted into a depth map, from which the estimated distance between the location to be positioned and each positioning identifier is calculated; finally, the position coordinates of the location to be positioned are determined from the estimated distances and the identifier coordinates, completing the positioning. The method can therefore position a location merely with identifiers preset at designated indoor positions, without deploying a large amount of hardware equipment, and compared with the prior art it effectively reduces the implementation cost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of a real-time high dynamic fusion positioning method according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating a principle of triangulation in the real-time high dynamic fusion positioning method according to the embodiment of the present invention;
fig. 3 is a block diagram of a real-time high dynamic fusion positioning apparatus according to an embodiment of the present invention;
FIG. 4 is a block diagram of another real-time high dynamic fusion positioning apparatus according to an embodiment of the present invention;
fig. 5 is a block diagram of a server according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a real-time high dynamic fusion positioning method provided in an embodiment of the present invention. The method is applicable to an electronic device with image- and data-processing capability, such as a notebook computer, a smartphone, a PC (personal computer) or a data server; in some cases it may also be carried out by a server on the network side. Referring to fig. 1, the real-time high dynamic fusion positioning method provided in an embodiment of the present invention may include:
s100, acquiring a reference image which is acquired at a position to be positioned and contains a plurality of positioning marks.
In order to implement the real-time high dynamic fusion positioning method provided by the embodiment of the invention, a positioning identifier needs to be set at an indoor specified position.
Optionally, an embodiment of the present invention provides a set of complete setting methods for a positioning identifier, which mainly includes content such as identifier composition and identifier setting position, and the specific content is as follows:
firstly, regarding the specific structure of the positioning mark, the background color of the mark should be a pure background, which is convenient for positioning and identifying the positioning mark in the subsequent steps. The material of the positioning mark should avoid reflecting light, and the chromatic aberration under the environment light should be considered at the same time. Of course, when the positioning mark is specifically designed, other factors of the specific structure of the mark, such as the uniform appearance of the mark, the size of the mark that can be neither too large nor too small, etc., need to be comprehensively considered in combination with a specific installation location. For other matters not mentioned in the embodiments of the present invention, reference may be made to the prior art.
Regarding the composition of the identifier, the most important point is the content it carries. The identification codes of the positioning identifiers should follow a fixed coding rule, so that the codes exhibit a definite regularity; at the same time, each positioning identifier must correspond to a unique identification code, and different positioning identifiers must never share the same code.
For example, the identification code may be provided by a combination of numbers and capital letters, and letters or numbers identifying different positions in the code may be given different definitions as needed. Referring to table 1, table 1 shows an encoding rule of a positioning identifier according to an embodiment of the present invention.
TABLE 1

Character name         Positions   Characters    Meaning
Floor characters       1st-2nd     L1, L2        Floor designation
Partition character    3rd         A-Z           Floor-partition designation
Column characters      4th-5th     01-99         Column serial number
Azimuth character      6th         A, B, C, D    A = east; B = south; C = west; D = north
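As an illustration only, the coding rule of Table 1 can be turned into a small parser. The example code "L2C07A", the function name and the returned field names below are hypothetical, not part of the claimed scheme:

```python
# Hypothetical parser for the 6-character identification code of Table 1.
# Assumed layout: positions 1-2 floor ("L1"/"L2"), position 3 partition (A-Z),
# positions 4-5 column number (01-99), position 6 azimuth (A/B/C/D).

AZIMUTH = {"A": "east", "B": "south", "C": "west", "D": "north"}

def parse_identification_code(code: str) -> dict:
    """Split a code such as 'L2C07A' into its semantic fields."""
    if len(code) != 6 or code[:2] not in ("L1", "L2"):
        raise ValueError(f"malformed identification code: {code!r}")
    return {
        "floor": code[:2],          # floor designation, e.g. "L2"
        "partition": code[2],       # floor-partition letter
        "column": int(code[3:5]),   # column serial number
        "azimuth": AZIMUTH[code[5]],
    }
```

A code that violates the rule (wrong length, unknown floor prefix or azimuth letter) is rejected, which matches the requirement that every identifier carry a well-formed, unique code.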
Secondly, regarding the placement of the positioning identifiers: they may be mounted on indoor columns, walls and other spots that are easy to locate, and they must also be easy for vehicles or pedestrians to find and photograph. The mounting height should therefore generally exceed 2 m; at most, it should not exceed the shooting range of a vehicle's dash camera or of a pedestrian's mobile terminal such as a phone, and at least, it should not be so low that it can be blocked by a vehicle parked nearby or by other objects.
The columns here are the indoor load-bearing supports of the building; in underground parking garages in particular, many columns are present in the indoor environment. Of course, depending on the actual application scenario, other convenient mounting spots may be used, or dedicated mounting platforms may be installed for the identifiers, as long as this does not add significant cost.
Furthermore, to offer pedestrians and vehicles pictures containing positioning identifiers as conveniently as possible, identifiers with different numbers can be placed on several faces of multi-sided structures such as the columns, so that identifiers can be photographed from different angles. The identifiers should also be deployed in sufficient numbers, so that a picture taken anywhere indoors covers several of them, which facilitates the positioning in the subsequent steps.
The server implementing the real-time high-dynamic fusion positioning method obtains the reference image collected at the location to be positioned; because enough positioning identifiers have been deployed indoors, having several identifiers appear in one reference image presents practically no difficulty. The device that supplies the reference image to the server can be any mobile terminal held by the user, such as a mobile phone or a tablet computer, or an electronic device such as a dash camera. These devices deliver the reference image to the server over a wireless link, for example a 5G network.
Optionally, the reference image collected at the location to be positioned may be a picture provided directly by a pedestrian or a vehicle. However, since a user will not always deliberately frame several positioning identifiers in a single shot, the method provided by the embodiment of the present invention may instead obtain a video file recorded at the location, parse it into individual frames, and select as the reference image a frame whose sharpness meets a preset requirement and which contains several positioning identifiers. Selecting a frame from a video can be implemented with reference to the prior art; the invention places no restriction on this.
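The frame-selection step can be sketched as follows. This is a minimal illustration with an assumed sharpness metric (mean gradient energy over a grayscale frame); a production system would more likely use something like the variance of a Laplacian and a tuned threshold:

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Toy focus measure: mean energy of horizontal plus vertical gradients.
    A stand-in for, e.g., the variance of a Laplacian."""
    gx = np.diff(frame.astype(float), axis=1)
    gy = np.diff(frame.astype(float), axis=0)
    return float((gx ** 2).mean() + (gy ** 2).mean())

def pick_reference_frame(frames, min_sharpness=0.0):
    """Return the sharpest frame whose score meets the preset requirement,
    or None if no frame qualifies."""
    scored = [(sharpness(f), i) for i, f in enumerate(frames)]
    best_score, best_i = max(scored)
    if best_score < min_sharpness:
        return None
    return frames[best_i]
```

The check that the chosen frame also contains several positioning identifiers would run on top of this, using the identifier detection described in the next step.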
And S110, determining the position coordinates of each positioning mark in the reference image.
Determining the position coordinates of each positioning identifier in the reference image is a crucial step of the method: it decides whether the location to be positioned can subsequently be computed correctly. In this step the positioning identifiers are detected and located within the reference image, so that their identification codes can be recognized afterwards.
Optionally, an embodiment of the present invention provides an identifier positioning model, obtained by training a neural network with sample images containing positioning identifiers as training samples and the positioning-identifier image regions within those samples as training labels. In actual use, the acquired reference image is input into the identifier positioning model, which detects the identifier regions in the reference image; the detected regions are then extracted, yielding the identifier image of each positioning identifier in the reference image.
Next, each identifier image undergoes the preset enhancement-and-denoising processing, which makes its content clearer and its code easier to recognize. The processed identifier images are then recognized one by one to obtain the identification code of each positioning identifier.
Optionally, in the embodiment of the present invention, a mapping relationship is preset, and a corresponding relationship between the identifier code and the position coordinate of the positioning identifier is recorded in the preset mapping relationship. In a specific implementation, the preset mapping relationship may be implemented in the form of a database, and the position coordinates of each positioning identifier in the room are recorded in the database, that is, the corresponding relationship between the positioning identifier code and the positioning identifier position coordinates is recorded.
After the identification code of each positioning identifier has been obtained, the preset mapping relation is queried to find the position coordinate corresponding to each code; as described above, that coordinate is the position coordinate of the corresponding positioning identifier.
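A minimal sketch of such a preset mapping relation follows; the dictionary stands in for the database, and all codes and coordinates below are invented for illustration:

```python
# Hypothetical in-memory version of the preset mapping relation: keys are
# identification codes, values are indoor position coordinates (x, y, floor).
# In a real deployment this would live in a database; the values here are
# illustrative only.
PRESET_MAPPING = {
    "L1A01B": (3.0, 12.5, 1),
    "L1A02B": (9.0, 12.5, 1),
    "L2C07A": (21.0, 4.0, 2),
}

def coordinates_of(codes):
    """Look up the position coordinate of each recognized identification code."""
    return [PRESET_MAPPING[c] for c in codes]
```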
It is conceivable that, once the position coordinates of the positioning identifiers are obtained, the location to be positioned can already be roughly determined from the indoor floor plan and the coordinates of each identifier. For example, the identifier content, i.e. the identification code, reveals on which floor the location lies, which column it is closest to, its approximate orientation, and so on.
The following describes the pre-set enhanced denoising process of the identification image and the identification process of the identification code in detail, and since the processing processes of the identification images are consistent, only one processing process of the identification image is taken as an example for explanation.
Due to the influence of the shooting angle, the lens, and other factors, the marker image in the reference image may have deformation such as horizontal tilt, vertical tilt, or keystone distortion, and therefore, a certain corrective measure needs to be taken for the marker image.
Optionally, the identifier image is restored by a perspective transformation, yielding the restored identifier image. The perspective transformation projects the identifier image onto a new viewing plane: using the four detected vertices of the identifier image, the longest edge is held fixed and the shorter edges are stretched back to it, finally producing the restored identifier image. The core formula of the perspective restoration is:
[x', y', w'] = [u, v, 1] · [[a11, a12, a13], [a21, a22, a23], [a31, a32, a33]],  x = x'/w',  y = y'/w'
where (u, v) is a pixel in the source identifier image, the 3x3 matrix is the perspective-transformation matrix, and (x, y) is the restored pixel position.
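The perspective mapping can be exercised with a short sketch. The matrix H is supplied by hand here for illustration; in practice it would be derived from the four detected vertices, for example with OpenCV's getPerspectiveTransform:

```python
import numpy as np

def apply_perspective(H, u, v):
    """Map pixel (u, v) through the 3x3 perspective matrix H and
    normalize by the homogeneous coordinate w'."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w
```

The division by w' is what distinguishes a perspective transformation from an affine one: when the last row of H is not (0, 0, 1), parallel lines in the source image may converge after mapping.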
further, the restored identification image is converted into a gray image to obtain a gray image of the restored identification image, the gray image of the restored identification image is subjected to binarization processing, and a closing operation is performed on a processing result to obtain a processed identification image.
This processing retains as much of the image information as possible while improving image quality; it increases the distinguishability of similar identifier contents and thereby secures the accuracy of the recognition result.
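The binarization and closing steps can be sketched as follows. This is a toy version with a fixed threshold and a hand-rolled 3x3 closing; real code would use an image-processing library such as OpenCV:

```python
import numpy as np

def binarize(gray: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Fixed-threshold binarization of a grayscale image."""
    return (gray >= threshold).astype(np.uint8)

def _filter3x3(img, reduce_fn, pad_value):
    """Apply a 3x3 sliding-window reduction (max for dilation, min for erosion)."""
    padded = np.pad(img, 1, constant_values=pad_value)
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = reduce_fn(padded[i:i + 3, j:j + 3])
    return out

def closing(binary: np.ndarray) -> np.ndarray:
    """Morphological closing: dilation (3x3 max) then erosion (3x3 min).
    Closing fills small holes and gaps inside the identifier strokes."""
    dilated = _filter3x3(binary, np.max, 0)
    return _filter3x3(dilated, np.min, 1)
```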
After the processed identification image is obtained, the content of the processed identification image can be identified, and then identification codes corresponding to the positioning identifications are obtained.
In consideration of the specific advantages of the deep neural network model in image recognition processing, the embodiment of the invention provides an identification recognition model obtained based on neural network model pre-training. The identification recognition model is obtained by training a neural network model by taking an image of a positioning identification as a training sample and taking an identification code of the positioning identification as a training label. For a specific training process, reference may be made to a training method in the prior art, which is not described herein again.
And calling the identification recognition model, and respectively inputting the processed identification images into the identification recognition model to obtain the identification codes of the positioning identifications.
Optionally, as described above, the identification code of the positioning identifier may consist of several characters of multiple types, such as letters and digits. To further improve the accuracy of the code recognition result, the identification recognition model provided in the embodiment of the present invention can be subdivided into a letter recognition model and a digit recognition model.
Optionally, before recognition proper, each processed identifier image is segmented into characters, yielding several character images. Character segmentation exploits the structural features of the characters, the similarity between characters and the spacing between them: on the one hand, individual characters are extracted, including the handling of special cases such as touching or broken characters; on the other hand, regions of similar width and height are grouped together, which removes the identifier's border frame and small noise specks.
Character segmentation scans the processed identifier image horizontally and vertically: the horizontal scan determines the upper and lower bounds of the characters and the edges, and the vertical scan determines the left and right coordinates of each character in the image. The details of character segmentation can follow the prior art.
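The vertical scan can be sketched as a projection-profile segmentation; the input format here (a list of binary rows, 1 = ink) is an assumption made for illustration:

```python
def segment_columns(binary_rows):
    """Vertical-scan character segmentation on a binary image given as a
    list of rows. Returns the inclusive (left, right) column range of each
    run of non-empty columns, i.e. of each candidate character."""
    width = len(binary_rows[0])
    # Vertical projection: does column c contain any ink?
    ink = [any(row[c] for row in binary_rows) for c in range(width)]
    segments, start = [], None
    for c, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = c                       # a character run begins
        elif not has_ink and start is not None:
            segments.append((start, c - 1))  # the run just ended
            start = None
    if start is not None:
        segments.append((start, width - 1))
    return segments
```

The horizontal scan is the same procedure applied to rows instead of columns, giving each character's upper and lower bounds.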
After any processed identification image is subjected to character segmentation, a plurality of character images can be obtained, and then an alphabet recognition model and a number recognition model are respectively input aiming at each character image to perform the recognition work of character contents.
Specifically, inputting each character image into a letter recognition model respectively to obtain a first recognition result of each character image and the accuracy rate of the recognition result; and then, inputting each character image into the digital recognition model respectively to obtain a second recognition result of each character image and the accuracy of the recognition result.
And for each character image, determining the recognition result with the highest recognition result accuracy rate as the identification content of the character image in the first recognition result and the second recognition result of the character image.
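The arbitration between the two models reduces to keeping, per character image, the prediction with the higher reported confidence. A sketch with stubbed model outputs follows; the (character, confidence) pair format is an assumption:

```python
def arbitrate(letter_results, digit_results):
    """letter_results / digit_results: for each character image, a
    (predicted_character, confidence) pair from the letter model and the
    digit model respectively. Keep the higher-confidence prediction."""
    out = []
    for (l_char, l_conf), (d_char, d_conf) in zip(letter_results, digit_results):
        out.append(l_char if l_conf >= d_conf else d_char)
    return out
```

Running both specialized models and comparing confidences sidesteps visually ambiguous pairs such as "O"/"0" or "I"/"1", since whichever model is more certain wins for that character position.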
And finally, arranging the identification content corresponding to each character image according to the position of each character image in the processed identification image to obtain the identification code corresponding to the positioning identification.
Optionally, the letter recognition model and the number recognition model described in the embodiment of the present invention may be obtained based on the training of the ResNet101 neural network.
And S120, converting the reference image into a depth map, and calculating an estimated distance between the to-be-positioned location and the position of each positioning mark based on the depth map.
And after the position coordinates of each positioning mark are obtained, the reference image is further converted into a depth map, and the estimated distance between the to-be-positioned location and the position of each positioning mark is calculated based on the depth map. For the specific process of calculating the estimated distance according to the depth map, reference may be made to the prior art implementation, and details thereof are not described here.
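The text leaves the depth-to-distance computation to the prior art; one simple possibility, shown purely as an assumption, is to take the median depth inside each identifier's bounding box:

```python
import numpy as np

def estimated_distance(depth_map, bbox):
    """Estimate the camera-to-identifier distance as the median depth
    inside the identifier's bounding box. bbox = (top, left, bottom, right),
    with bottom/right exclusive; the convention is assumed for this sketch.
    The median resists outliers in the depth map."""
    top, left, bottom, right = bbox
    return float(np.median(depth_map[top:bottom, left:right]))
```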
And S130, determining the position coordinates of the to-be-positioned location according to the estimated distances and the position coordinates of the positioning marks, and completing positioning.
Optionally, referring to fig. 2, fig. 2 is a schematic diagram of the triangulation principle in the real-time high-dynamic fusion positioning method provided by the embodiment of the present invention. With reference to fig. 2, after the estimated distance between the location to be positioned and each positioning identifier and the position coordinates of each positioning identifier are obtained, the position coordinates of the location to be positioned can be determined based on the triangulation principle, completing the positioning.
Based on the triangulation principle, the specific process of determining the position coordinates of the location to be positioned may follow existing implementations, and the present invention does not limit it.
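One conventional way to realize this triangulation (trilateration) step — offered as a sketch under stated assumptions, not as the embodiment's prescribed implementation — is to linearize the three circle equations against the first anchor and solve the resulting 2x2 linear system:

```python
def trilaterate(anchors, dists):
    """Solve for (x, y) given distances to three positioning identifiers at
    known coordinates, by subtracting the first circle equation from the
    other two and solving the resulting linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # 2*(xi-x1)*x + 2*(yi-y1)*y = d1^2 - di^2 + xi^2 - x1^2 + yi^2 - y1^2
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # assumes the anchors are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# identifiers at hypothetical indoor coordinates; the true point is (1, 1)
print(trilaterate([(0, 0), (4, 0), (0, 4)], [2**0.5, 10**0.5, 10**0.5]))
```

With noisy estimated distances, the same linear system is usually solved in a least-squares sense over more than three identifiers.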
Optionally, to facilitate subsequent navigation route generation, the position coordinates determined for the location to be positioned may be corrected. For example, in a parking-lot scene, if the determined position coordinates fall at the edge of a lane, they may be corrected to the position of the lane centerline.
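The centerline correction can be illustrated as an orthogonal projection of the determined coordinates onto a lane centerline segment; the segment endpoints below are hypothetical:

```python
def snap_to_centerline(p, a, b):
    """Project point p onto the segment a-b (a lane centerline),
    clamping the projection to the segment's endpoints."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # stay within the segment
    return (ax + t * dx, ay + t * dy)

# a point at the lane edge snapped onto a horizontal centerline
print(snap_to_centerline((3, 1), (0, 0), (10, 0)))  # (3.0, 0.0)
```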
In summary, in the real-time high-dynamic fusion positioning method provided by the embodiment of the present invention, positioning identifiers are preset at indoor designated positions. After a reference image acquired at the location to be positioned and containing a plurality of positioning identifiers is obtained, the position coordinates of each positioning identifier in the reference image are determined; the reference image is then converted into a depth map, and the estimated distance between the location to be positioned and each positioning identifier is calculated based on the depth map; finally, the position coordinates of the location to be positioned are determined from the estimated distances and the position coordinates of the positioning identifiers, completing the positioning. The method only requires presetting positioning identifiers at indoor designated positions, without deploying a large amount of hardware equipment, and can therefore effectively reduce implementation cost compared with the prior art.
Optionally, after determining the position coordinates of the location to be positioned, the server executing the real-time high-dynamic fusion positioning method provided by the embodiment of the present invention may further acquire target position coordinates input by a user (the target position coordinates may, of course, also be coordinates recommended to the user), and then generate, according to a pre-established electronic map, a navigation route between the position coordinates of the location to be positioned and the target position coordinates.
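Route generation over a pre-established electronic map is typically a shortest-path search; the following Dijkstra sketch over a hypothetical parking-lot graph is an illustration, not the embodiment's specified algorithm:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra over an adjacency dict {node: [(neighbor, cost), ...]},
    returning the node sequence of the cheapest route, or None."""
    heap = [(0.0, start, [start])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return None

# hypothetical electronic map of a parking lot (node names made up)
parking_map = {
    "entrance": [("A1", 5), ("B1", 9)],
    "A1": [("A2", 4)],
    "B1": [("A2", 1)],
    "A2": [],
}
print(shortest_route(parking_map, "entrance", "A2"))  # ['entrance', 'A1', 'A2']
```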
The real-time high dynamic fusion positioning device provided by the embodiment of the present invention is described below. The device described below may be regarded as the functional module architecture that the central device needs to be provided with in order to implement the real-time high dynamic fusion positioning method provided by the embodiment of the present invention; the descriptions below and above may be referred to in correspondence with each other.
Optionally, referring to fig. 3, fig. 3 is a block diagram of a real-time high dynamic fusion positioning apparatus provided in an embodiment of the present invention, where the apparatus may include:
a first acquiring unit 10, configured to acquire a reference image which is acquired at the location to be positioned and includes a plurality of positioning identifiers, wherein the positioning identifiers are preset at indoor designated positions;
a determining unit 20, configured to determine position coordinates of each positioning identifier in the reference image;
a calculating unit 30, configured to convert the reference image into a depth map, and calculate an estimated distance between the location to be positioned and the position of each positioning identifier based on the depth map;
and the positioning unit 40 is configured to determine the position coordinates of the location to be positioned according to each estimated distance and the position coordinates of each positioning identifier, so as to complete positioning.
Optionally, the determining unit 20 is configured to, when determining the position coordinate of each positioning identifier in the reference image, specifically include:
respectively identifying the identification information of each positioning identification in the reference image to obtain the identification code of each positioning identification;
and respectively determining the position coordinates corresponding to each identification code according to a preset mapping relation, and taking the position coordinates corresponding to each identification code as the position coordinates of the corresponding positioning identification, wherein the corresponding relation between the identification codes and the position coordinates of the positioning identification is recorded in the preset mapping relation.
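The preset mapping relation can be sketched as a simple lookup table; the identification codes and coordinates below are made up for illustration:

```python
# preset mapping relation: identification code -> position coordinates
CODE_TO_COORD = {"A12": (3.0, 7.5), "B04": (12.0, 7.5)}

def coords_for(codes):
    """Return the position coordinates recorded for each identification code."""
    return [CODE_TO_COORD[c] for c in codes]

print(coords_for(["A12", "B04"]))  # [(3.0, 7.5), (12.0, 7.5)]
```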
Optionally, the determining unit 20 is configured to respectively identify the identifier information of each positioning identifier in the reference image, and when obtaining the identifier code of each positioning identifier, specifically include:
extracting identification images of the positioning identifications in the reference image;
respectively carrying out preset enhanced denoising processing on each identification image to obtain a processed identification image;
and respectively identifying the identification information in each processed identification image to obtain an identification code corresponding to each positioning identification.
Optionally, the determining unit 20 is configured to perform preset enhanced denoising processing on each identification image, and when a processed identification image is obtained, the method specifically includes:
for each of said identification images,
carrying out perspective transformation reduction on the identification image to obtain a reduced identification image;
converting the restored identification image into a gray image to obtain a gray image of the restored identification image;
and carrying out binarization processing on the gray-scale image of the restored identification image, and executing closing operation on a processing result to obtain the processed identification image.
Optionally, the determining unit 20 is configured to identify the identifier information in each processed identifier image, and when obtaining the identifier code corresponding to each positioning identifier, specifically include:
calling a pre-trained identification recognition model, wherein the identification recognition model is obtained by training a neural network model by taking an image of a positioning identification as a training sample and taking an identification code of the positioning identification as a training label;
and respectively inputting each processed identification image into the identification recognition model to obtain identification codes corresponding to each positioning identification.
Optionally, the determining unit 20 is configured to, when each processed identification image is input into the identification recognition model to obtain an identification code corresponding to each positioning identification, specifically include:
for each of said processed identification images,
performing character separation on the processed identification image to obtain a plurality of character images;
respectively inputting each character image into the identification recognition model to obtain identification content corresponding to each character image;
and arranging the identification content corresponding to each character image according to the position of each character image in the processed identification image to obtain the identification code corresponding to the positioning identification.
Optionally, if the identification code is composed of letters and numbers, the identification recognition model includes a letter recognition model and a number recognition model, and the determining unit 20 is configured to input each of the character images into the identification recognition model, so as to obtain the identification content corresponding to each of the character images, and specifically includes:
inputting each character image into the letter recognition model respectively to obtain a first recognition result of each character image and the accuracy rate of the recognition result;
inputting each character image into the number recognition model respectively to obtain a second recognition result of each character image and the accuracy rate of the recognition result;
and for each character image, determining the recognition result with the highest recognition result accuracy rate as the identification content of the character image in the first recognition result and the second recognition result of the character image.
Optionally, the acquiring unit 10 is configured to, when acquiring a reference image that is acquired at a to-be-positioned location and includes a plurality of positioning identifiers, specifically include:
acquiring a video file acquired at a to-be-positioned place;
analyzing the video file to obtain a plurality of frames of images;
and taking the image which has the definition meeting the preset requirement and contains a plurality of positioning marks in the multi-frame image as a reference image.
Optionally, referring to fig. 4, fig. 4 is a block diagram of another real-time high dynamic fusion positioning apparatus provided in the embodiment of the present invention, and on the basis of the embodiment shown in fig. 3, the apparatus further includes:
a second acquiring unit 50 for acquiring a target position coordinate input by a user;
a generating unit 60, configured to generate a navigation route between the position coordinates of the location to be positioned and the target position coordinates.
Fig. 5 is a block diagram of a server according to an embodiment of the present invention. As shown in fig. 5, the server may include: at least one processor 100, at least one communication interface 200, at least one memory 300, and at least one communication bus 400;
in the embodiment of the present invention, there is at least one of each of the processor 100, the communication interface 200, the memory 300, and the communication bus 400, and the processor 100, the communication interface 200, and the memory 300 communicate with each other through the communication bus 400; obviously, the communication connection among the processor 100, the communication interface 200, the memory 300, and the communication bus 400 shown in fig. 5 is merely one alternative;
optionally, the communication interface 200 may be an interface of a communication module, such as an interface of a GSM module;
the processor 100 may be a central processing unit CPU or an application specific Integrated circuit asic or one or more Integrated circuits configured to implement embodiments of the present invention.
The memory 300, which stores an application program, may include a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk memory.
The processor 100 is specifically configured to execute an application program in the memory to implement any embodiment of the real-time high dynamic fusion positioning method described above.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A real-time high dynamic fusion positioning method is characterized by comprising the following steps:
acquiring a reference image which is acquired at a to-be-positioned place and comprises a plurality of positioning marks, wherein the positioning marks are preset at indoor designated positions;
determining the position coordinates of each positioning identifier in the reference image;
converting the reference image into a depth map, and calculating an estimated distance between the position to be positioned and the position of each positioning identifier based on the depth map;
and determining the position coordinates of the to-be-positioned location according to the estimated distances and the position coordinates of the positioning marks to finish positioning.
2. The real-time high-dynamic fusion positioning method according to claim 1, wherein the determining the position coordinates of each positioning identifier in the reference image comprises:
respectively identifying the identification information of each positioning identification in the reference image to obtain the identification code of each positioning identification;
and respectively determining the position coordinates corresponding to each identification code according to a preset mapping relation, and taking the position coordinates corresponding to each identification code as the position coordinates of the corresponding positioning identification, wherein the corresponding relation between the identification codes and the position coordinates of the positioning identification is recorded in the preset mapping relation.
3. The real-time high-dynamic fusion positioning method according to claim 2, wherein the identifying information for respectively identifying each positioning identifier in the reference image to obtain an identifier code for each positioning identifier comprises:
identifying and extracting the identification image of each positioning identification in the reference image;
respectively carrying out preset enhanced denoising processing on each identification image to obtain a processed identification image;
and respectively identifying the identification information in each processed identification image to obtain an identification code corresponding to each positioning identification.
4. The real-time high-dynamic fusion positioning method according to claim 3, wherein the step of performing preset enhanced denoising processing on each identification image to obtain a processed identification image comprises:
for each of said identification images,
carrying out perspective transformation reduction on the identification image to obtain a reduced identification image;
converting the restored identification image into a gray image to obtain a gray image of the restored identification image;
and carrying out binarization processing on the gray-scale image of the restored identification image, and executing closing operation on a processing result to obtain the processed identification image.
5. The real-time high-dynamic fusion positioning method according to claim 3, wherein the identifying the identification information in each processed identification image respectively to obtain the identification code corresponding to each positioning identification comprises:
calling a pre-trained identification recognition model, wherein the identification recognition model is obtained by training a neural network model by taking an image of a positioning identification as a training sample and taking an identification code of the positioning identification as a training label;
and respectively inputting each processed identification image into the identification recognition model to obtain identification codes corresponding to each positioning identification.
6. The real-time high-dynamic fusion positioning method according to claim 5, wherein the respectively inputting each processed identification image into the identification recognition model to obtain an identification code corresponding to each positioning identification comprises:
for each of said processed identification images,
performing character separation on the processed identification image to obtain a plurality of character images;
respectively inputting each character image into the identification recognition model to obtain identification content corresponding to each character image;
and arranging the identification content corresponding to each character image according to the position of each character image in the processed identification image to obtain the identification code corresponding to the positioning identification.
7. The real-time high-dynamic fusion positioning method according to claim 6, wherein if the identification code is composed of letters and numbers, the identification recognition model includes a letter recognition model and a number recognition model, and the step of inputting each character image into the identification recognition model to obtain the identification content corresponding to each character image includes:
inputting each character image into the letter recognition model respectively to obtain a first recognition result of each character image and the accuracy rate of the recognition result;
inputting each character image into the number recognition model respectively to obtain a second recognition result of each character image and the accuracy rate of the recognition result;
and for each character image, determining the recognition result with the highest recognition result accuracy rate as the identification content of the character image in the first recognition result and the second recognition result of the character image.
8. The real-time high-dynamic fusion positioning method according to claim 1, wherein the acquiring a reference image which is acquired at a location to be positioned and includes a plurality of positioning identifiers comprises:
acquiring a video file acquired at a to-be-positioned place;
analyzing the video file to obtain a plurality of frames of images;
and taking the image which has the definition meeting the preset requirement and contains a plurality of positioning marks in the multi-frame image as a reference image.
9. The real-time high-dynamic fusion positioning method according to any one of claims 1 to 8, wherein after determining the position coordinates of the location to be positioned according to each estimated distance and the position coordinates of each positioning identifier, the method further comprises:
acquiring a target position coordinate input by a user;
and generating a navigation route between the position coordinates of the to-be-positioned location and the target position coordinates.
10. A real-time high dynamic fusion positioning device, comprising:
a first acquiring unit, used for acquiring a reference image which is acquired at a to-be-positioned place and comprises a plurality of positioning marks, wherein the positioning marks are preset at indoor designated positions;
a determining unit, configured to determine position coordinates of each positioning identifier in the reference image;
the calculation unit is used for converting the reference image into a depth map and calculating an estimated distance between the position to be positioned and the position of each positioning identifier based on the depth map;
and the positioning unit is used for determining the position coordinates of the to-be-positioned location according to the estimated distances and the position coordinates of the positioning marks so as to complete positioning.
CN202010309959.9A 2020-04-20 2020-04-20 Real-time high-dynamic fusion positioning method and device Pending CN111537954A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010309959.9A CN111537954A (en) 2020-04-20 2020-04-20 Real-time high-dynamic fusion positioning method and device

Publications (1)

Publication Number Publication Date
CN111537954A true CN111537954A (en) 2020-08-14

Family

ID=71972962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010309959.9A Pending CN111537954A (en) 2020-04-20 2020-04-20 Real-time high-dynamic fusion positioning method and device

Country Status (1)

Country Link
CN (1) CN111537954A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115880953A (en) * 2023-03-08 2023-03-31 北京熙捷科技有限公司 Unmanned aerial vehicle control method and intelligent street lamp system

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102697498A (en) * 2012-04-28 2012-10-03 孙剑 System and method for collecting remote data based on body composition measuring device
CN105571583A (en) * 2014-10-16 2016-05-11 华为技术有限公司 User location positioning method and server
CN105631445A (en) * 2014-11-06 2016-06-01 通号通信信息集团有限公司 Character recognition method and system for license plate with Chinese characters
CN105718872A (en) * 2016-01-15 2016-06-29 武汉光庭科技有限公司 Auxiliary method and system for rapid positioning of two-side lanes and detection of deflection angle of vehicle
CN106291635A (en) * 2016-07-25 2017-01-04 无锡知谷网络科技有限公司 Method and system for indoor positioning
CN106845478A (en) * 2016-12-30 2017-06-13 同观科技(深圳)有限公司 The secondary licence plate recognition method and device of a kind of character confidence level
CN107588767A (en) * 2012-04-18 2018-01-16 知谷(上海)网络科技有限公司 A kind of indoor intelligent positioning navigation method
CN108510545A (en) * 2018-03-30 2018-09-07 京东方科技集团股份有限公司 Space-location method, space orientation equipment, space positioning system and computer readable storage medium
CN108734734A (en) * 2018-05-18 2018-11-02 中国科学院光电研究院 Indoor orientation method and system
CN109341691A (en) * 2018-09-30 2019-02-15 百色学院 Intelligent indoor positioning system and its localization method based on icon-based programming
CN109459034A (en) * 2018-07-05 2019-03-12 北京中广通业信息科技股份有限公司 A kind of indoor bootstrap technique and system based on wireless network
CN110132274A (en) * 2019-04-26 2019-08-16 中国铁道科学研究院集团有限公司电子计算技术研究所 A kind of indoor orientation method, device, computer equipment and storage medium
CN111028358A (en) * 2018-10-09 2020-04-17 香港理工大学深圳研究院 Augmented reality display method and device for indoor environment and terminal equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SUN Jian et al., "Simulation optimization of dynamic traffic OD estimation based on video license plate recognition", Journal of Highway and Transportation Research and Development *
CHI Xiaojun, "A license plate character recognition method based on support vector machines", Information Technology and Informatization *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200814