CN109238286B - Intelligent navigation method, intelligent navigation device, computer equipment and storage medium - Google Patents


Info

Publication number
CN109238286B
CN109238286B (application CN201811008410.5A)
Authority
CN
China
Prior art date
Legal status
Active
Application number
CN201811008410.5A
Other languages
Chinese (zh)
Other versions
CN109238286A (en
Inventor
秦勇
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201811008410.5A
Publication of CN109238286A
Application granted
Publication of CN109238286B
Legal status: Active


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34: Route searching; Route guidance


Abstract

The invention discloses an intelligent navigation method and device, a computer device, and a storage medium. The intelligent navigation method comprises the following steps: setting the movable directions of a mobile shooting end at its current position; acquiring, based on the movable directions, a real-shot fixed-point image captured by the mobile shooting end in an unobstructed direction; acquiring, from a preset image query library, at least one preset fixed-point image taken in the same direction as the real-shot fixed-point image; obtaining, from the at least one preset fixed-point image, the target fixed-point image most similar to the real-shot fixed-point image, and taking the fixed-point coordinates corresponding to the target fixed-point image as the start-point coordinates; and acquiring end-point coordinates and, according to the start-point and end-point coordinates, selecting the unobstructed recommended navigation route with the shortest path as the target navigation route. The method is not limited by a hardware detection range; positioning is simple and fast, and navigation is flexible and reliable.

Description

Intelligent navigation method, intelligent navigation device, computer equipment and storage medium
Technical Field
The present invention relates to the field of indoor positioning, and in particular, to an intelligent navigation method, apparatus, computer device, and storage medium.
Background
Path planning is one of the key links in navigation research for obstacle avoidance vehicles. When executing a task, the vehicle must search, according to current road conditions at any time, for an optimal path from its current position to the target location in the working environment. Locating the current position of the obstacle avoidance vehicle is therefore the primary problem in path planning.
Existing indoor positioning for obstacle avoidance vehicles mainly uses Bluetooth positioning, RFID (Radio Frequency Identification) positioning, infrared positioning, and similar techniques. However, Bluetooth positioning systems have poor stability, RFID positioning lacks communication capability, and infrared positioning penetrates obstacles poorly. How to ensure the indoor positioning stability and timely communication capability of the obstacle avoidance vehicle, so that its optimal driving route can be acquired in real time according to road conditions, has become a problem to be solved urgently.
Disclosure of Invention
The embodiments of the invention provide an intelligent navigation method, an intelligent navigation device, a computer device, and a storage medium, to address the problem of ensuring the indoor positioning stability and timely communication capability of an obstacle avoidance vehicle.
An intelligent navigation method, comprising:
acquiring at least one movable direction of a mobile shooting end on a preset navigation map;
acquiring, based on the at least one movable direction, a real-shot fixed-point image captured by the mobile shooting end in an unobstructed direction;
acquiring, based on a preset image query library, at least one preset fixed-point image in the same direction as the real-shot fixed-point image;
obtaining, with a feature extraction algorithm, the target fixed-point image most similar to the real-shot fixed-point image from the at least one preset fixed-point image, and taking the fixed-point coordinates corresponding to the target fixed-point image as the start-point coordinates;
acquiring end-point coordinates and generating at least two recommended navigation routes according to the start-point and end-point coordinates;
and acquiring obstacle avoidance detection results of the mobile shooting end on the at least two recommended navigation routes, selecting the unobstructed recommended navigation route with the shortest distance as the target navigation route, sending the target navigation route to the mobile shooting end, and controlling the mobile shooting end to move along it.
An intelligent navigation device, comprising:
a moving direction setting module, configured to acquire at least one movable direction of the mobile shooting end on a preset navigation map;
a real-shot image acquisition module, configured to acquire, based on the at least one movable direction, a real-shot fixed-point image captured by the mobile shooting end in an unobstructed direction;
a fixed-point image acquisition module, configured to acquire, based on a preset image query library, at least one preset fixed-point image in the same direction as the real-shot fixed-point image;
a start-point coordinate acquisition module, configured to obtain, with a feature extraction algorithm, the target fixed-point image most similar to the real-shot fixed-point image from the at least one preset fixed-point image, and to take the fixed-point coordinates corresponding to the target fixed-point image as the start-point coordinates;
a recommended route generation module, configured to acquire the end-point coordinates and generate at least two recommended navigation routes according to the start-point and end-point coordinates;
and a mobile-end movement control module, configured to acquire obstacle avoidance detection results of the mobile shooting end on the at least two recommended navigation routes, select the unobstructed recommended navigation route with the shortest path as the target navigation route, send the target navigation route to the mobile shooting end, and control the mobile shooting end to move along it.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above-described intelligent navigation method when the computer program is executed.
A computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the intelligent navigation method described above.
According to the intelligent navigation method and device, computer device, and storage medium above, the real-shot fixed-point image captured at the current position by the mobile shooting end is compared with each preset fixed-point image in the preset image query library to obtain the most similar target fixed-point image, thereby confirming the start-point coordinates corresponding to the current position of the mobile shooting end. Because the current position is located by image comparison, the method is not limited by a hardware detection range, and positioning is simple and fast. Meanwhile, a target navigation route that avoids obstacles can be planned from the start-point coordinates corresponding to the current position and the end-point coordinates, so that the mobile shooting end moves along the target navigation route, which can be adjusted in real time according to road conditions; the navigation process is not affected by the hardware detection range, and the navigation mode is flexible and reliable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of an application environment of an intelligent navigation method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of intelligent navigation in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a real shot fixed point image corresponding to a preset fixed point according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of two recommended navigation routes generated by a server in an embodiment of the invention;
FIG. 5 is another flow chart of a method of intelligent navigation in an embodiment of the present invention;
FIG. 6 is another flow chart of a method of intelligent navigation in an embodiment of the present invention;
FIG. 7 is a diagram of data of a preset fixed-point image corresponding to a fixed-point coordinate in a preset image query library according to an embodiment of the present invention;
FIG. 8 is another flow chart of a method of intelligent navigation in an embodiment of the present invention;
FIG. 9 is a schematic diagram of surrounding pixels around a candidate point in an embodiment of the invention;
FIG. 10 is a schematic diagram of four point pairs within a circle centered around a feature point in an embodiment of the invention;
FIG. 11 is another flow chart of a method of intelligent navigation in an embodiment of the present invention;
FIG. 12 is another flow chart of a method of intelligent navigation in an embodiment of the present invention;
FIG. 13 is a schematic diagram of an intelligent navigation device according to an embodiment of the present invention;
FIG. 14 is a schematic diagram of a computer device in accordance with an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The intelligent navigation method provided by the embodiments of the invention can be applied in an application environment as shown in fig. 1: an intelligent navigation system comprising a client and a server, where the client communicates with the server through a network. The client (user side) is the program that, paired with the server, provides local services to the user. The client may be installed on, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, portable wearable devices, and other computer devices. The server may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, an intelligent navigation method is provided, and the method is applied to the server in fig. 1 for illustration, and includes the following steps:
s10, acquiring at least one movable direction of the movable shooting end on a preset navigation map.
The mobile shooting end is a signal receiving end that is equipped with a camera and can receive WIFI signals while moving indoors. It includes, but is not limited to, an intelligent robot or an obstacle avoidance vehicle fitted with a camera.
The preset navigation map is a grid map with a coordinate system and preset points (namely grid intersection points) which are preset on a server and are established for an indoor feasible region. Wherein each preset point (i.e. grid intersection) corresponds to a fixed point coordinate in the coordinate system. In this embodiment, a preset navigation map is preset in the server, so as to set a moving direction for the mobile shooting end, and a movable route of the mobile shooting end may also be displayed on the preset navigation map.
The movable direction is a direction, set by the server with reference to the current position of the mobile shooting end on the preset navigation map and with the map's coordinate system as the reference frame, in which the mobile shooting end can move: for example, a horizontal moving direction parallel to the horizontal axis of the coordinate system, or a vertical moving direction parallel to the vertical axis.
In step S10, the server can obtain at least one movable direction of the mobile shooting end relative to the coordinate system by moving the mobile shooting end's position on the preset navigation map, laying the technical foundation for capturing a real-shot fixed-point image in a movable direction.
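The grid structure described above can be sketched in a few lines. This is an illustrative model only: the function name, the direction labels, and the grid representation are assumptions, not taken from the patent.

```python
# Hypothetical sketch of the preset navigation map: a grid of preset fixed
# points where the movable directions at a position are the grid axes.

def movable_directions(position, grid_width, grid_height):
    """Return the grid-axis directions in which the mobile shooting end
    could move from `position` (col, row) without leaving the map."""
    col, row = position
    directions = []
    if col > 0:
        directions.append("horizontal-negative")  # toward smaller x
    if col < grid_width - 1:
        directions.append("horizontal-positive")
    if row > 0:
        directions.append("vertical-negative")
    if row < grid_height - 1:
        directions.append("vertical-positive")
    return directions

# A point in the middle of a 5x5 grid can move along both axes;
# a corner point has only two movable directions.
print(movable_directions((2, 2), 5, 5))
print(movable_directions((0, 0), 5, 5))
```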
S20, acquiring a real shooting fixed point image shot by the mobile shooting end in the barrier-free direction based on at least one movable direction.
The barrier-free direction is a direction in which the mobile shooting end detects along each movable direction and no barrier exists in one detected movable direction.
The real shot fixed point image is a ground image shot along the barrier-free direction and comprises a current position and a next preset fixed point, wherein the preset fixed point is each intersection point in a grid formed by a preset navigation map based on a coordinate system and is used for guiding the mobile shooting end to move along the appointed preset fixed point direction, as shown in fig. 3.
In step S20, the server detects along each movable direction through an infrared detector whose detection distance is limited to 1 meter; in this embodiment, the distance between every two preset fixed points is set to 1 meter, so the server can detect whether an obstacle exists between the current position of the mobile shooting end and the adjacent preset fixed point. It can be appreciated that the distance between any two adjacent preset fixed points in the preset navigation map should be less than or equal to the detection distance of the infrared detector, so that the detector can determine whether an obstacle exists between the current position and the next preset fixed point. The server then acquires the real-shot fixed-point image captured by the mobile shooting end in the unobstructed direction, so that the position of the mobile shooting end can be located through that image.
S30, acquiring at least one preset fixed point image in the same direction as the real shot fixed point image based on a preset image query library.
The preset image query library is a database preset in the server for storing each preset fixed point together with the preset fixed-point images captured in its movable directions. In this embodiment, since each preset fixed point has two movable directions (horizontal and vertical), each preset fixed point corresponds to two preset fixed-point images: one shot along the horizontal moving direction and one shot along the vertical moving direction.
The preset point images are standard ground images between two adjacent preset points, wherein each preset point image further comprises an image direction identifier for declaring the shooting direction of the preset point image. In this embodiment, the image direction indicator includes horizontal and vertical.
In step S30, the server may query all preset fixed point images corresponding to the image direction identifier identical to the target direction identifier in the preset image query library with the unobstructed direction as the target direction identifier, and lock the query range, so as to reduce the calculation time of the server, and facilitate fast searching of the preset fixed point images most similar to the real shot fixed point images.
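The lookup in step S30 amounts to filtering the library on the image direction identifier. A minimal sketch follows; the library layout, field names, and image identifiers are assumptions for illustration only.

```python
# Illustrative preset image query library: each entry associates fixed-point
# coordinates, an image direction identifier, and a stored image reference.
query_library = [
    {"coords": (0, 0), "direction": "horizontal", "image_id": "p00_h"},
    {"coords": (0, 0), "direction": "vertical",   "image_id": "p00_v"},
    {"coords": (1, 0), "direction": "horizontal", "image_id": "p10_h"},
]

def candidates_for_direction(library, target_direction):
    """Lock the query range to one image direction identifier (step S30),
    so only same-direction images are compared against the real shot."""
    return [entry for entry in library if entry["direction"] == target_direction]

horizontal_candidates = candidates_for_direction(query_library, "horizontal")
print([e["image_id"] for e in horizontal_candidates])
```

Restricting the comparison to one direction halves the candidate set here, which is exactly the query-range locking the text describes.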
S40, acquiring a target fixed-point image which is most similar to the real shot fixed-point image from at least one preset fixed-point image by adopting a characteristic extraction algorithm, and taking fixed-point coordinates corresponding to the target fixed-point image as starting point coordinates.
The feature extraction algorithm (Oriented FAST and Rotated BRIEF, hereinafter the ORB algorithm) is an algorithm for fast feature point extraction and description. It was described by Ethan Rublee, Vincent Rabaud, Kurt Konolige and Gary R. Bradski in "ORB: An Efficient Alternative to SIFT or SURF" and is used here to compare the image feature points of two images to obtain their similarity. Image feature points can be understood as the more salient points in an image, such as contour points, bright points in darker areas, and dark points in lighter areas.
The target fixed-point image is a preset fixed-point image corresponding to a preset fixed point having the most common feature points of the real shot fixed-point image.
Specifically, a feature extraction algorithm is adopted to obtain the features of the real shot image and the preset image corresponding to each real shot fixed point image respectively. And acquiring a preset fixed point image corresponding to the preset image feature with the most common feature of the real shot image features from each preset image feature as a target fixed point image. Based on the target fixed-point image, corresponding fixed-point coordinates can be obtained from a preset image query library to serve as starting point coordinates.
In step S40, the server may obtain the target fixed point image according to the real shot fixed point image matching, and in the preset image query library, the server is facilitated to confirm the corresponding position (i.e. the corresponding fixed point coordinate) of the mobile shooting end in the preset navigation map according to the fixed point coordinate corresponding to the target fixed point image as the starting point coordinate, and provide a technical basis for planning the target navigation route for the server.
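The ORB comparison in step S40 reduces to counting matching binary descriptors: each feature point is described by a bit string (256 bits in ORB), and two descriptors match when their Hamming distance is small. The sketch below illustrates that idea with toy 8-bit descriptors and no external dependencies; a real implementation would typically use OpenCV's `cv2.ORB_create()` with a brute-force matcher under the Hamming norm. All names and thresholds here are illustrative assumptions.

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def count_common_features(desc_a, desc_b, max_dist=2):
    """Count descriptors in desc_a that have a close match in desc_b."""
    return sum(1 for d in desc_a if any(hamming(d, e) <= max_dist for e in desc_b))

def most_similar_image(real_shot_desc, preset_images):
    """Pick the preset fixed-point image sharing the most feature points
    with the real-shot image: the target fixed-point image of step S40."""
    return max(preset_images,
               key=lambda name: count_common_features(real_shot_desc, preset_images[name]))

# Toy 8-bit descriptors standing in for 256-bit ORB descriptors.
real_shot = [0b10110010, 0b01001101, 0b11110000]
presets = {
    "p00_h": [0b10110011, 0b01001101],  # two near matches with the real shot
    "p10_h": [0b00000001],              # no matches
}
print(most_similar_image(real_shot, presets))
```

The fixed-point coordinates stored alongside the winning image then serve as the start-point coordinates.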
S50, acquiring an end point coordinate, and generating at least two recommended navigation routes according to the start point coordinate and the end point coordinate.
The end-point coordinates are coordinates sent to the server by the user through the client, used to determine the destination the mobile shooting end is expected to finally reach, i.e. the place it should arrive at after moving along the specified preset fixed points on the preset navigation map. A recommended navigation route is a route that starts from the start-point coordinates along one of the movable directions and reaches the end-point coordinates; among all such routes at least one has the shortest path, as shown in fig. 4.
Specifically, as established in step S10, the mobile shooting end has at least one movable direction in which to travel from the start-point coordinates to the end-point coordinates. The server inputs the start-point and end-point coordinates into the A* algorithm, obtaining at least two recommended navigation routes for the mobile shooting end across the movable directions. The A* algorithm is a popular heuristic search algorithm widely applied in the field of path optimization. Its distinguishing feature is that, when examining each candidate preset fixed point on a possible shortest path, it uses global information from the preset navigation map to estimate the distance from the current point to the end-point coordinates, and uses that estimate to evaluate how likely the point is to lie on the shortest route.
In step S50, the server processes the start point coordinate and the end point coordinate of the mobile shooting end by using an a-star algorithm, so as to obtain at least two recommended navigation routes of the mobile shooting end in each movable direction, and then prepare a technical basis for screening target navigation routes from all the recommended navigation routes.
S60, obtaining obstacle avoidance detection results of the mobile shooting end on at least two recommended navigation routes, selecting the recommended navigation route with the shortest path in the obstacle avoidance detection result as a target navigation route, sending the target navigation route to the mobile shooting end, and controlling the mobile shooting end to move according to the target navigation route.
The target navigation route is a route with no obstacle between the starting point coordinate and the adjacent fixed point coordinate in the movable direction in the recommended navigation route, and the shortest path between the starting point coordinate and the end point coordinate.
In this embodiment, the infrared detector is used to detect the obstacle, and the detection distance is limited between the current starting point coordinate and the adjacent fixed point coordinate in the next movable direction, that is, when the infrared detector does not detect the obstacle, it is indicated that no obstacle exists between the starting point coordinate and the adjacent fixed point coordinate in the next movable direction, and the movable shooting end can be controlled to continue to move from the starting point coordinate to the next fixed point coordinate according to the target navigation route.
In step S60, the server may detect whether an obstacle exists between the start point coordinate and the next fixed point coordinate in the movable direction through an infrared detector installed on the mobile photographing end, and select a recommended navigation route, which does not exist an obstacle and has the shortest path from the start point coordinate to the end point coordinate, as the target navigation route, so as to guide the mobile photographing end to move to the end point coordinate. The step ensures that the mobile shooting end is not influenced by the obstacle in the moving process, and the mobile shooting end smoothly moves from the starting point coordinate to the ending point coordinate to finish the indoor moving task.
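The selection rule of step S60 can be sketched as: keep the recommended routes whose obstacle-avoidance detection reported no obstacle, then take the shortest of them. The route and detection-result shapes below are illustrative assumptions.

```python
def choose_target_route(routes, detection_results):
    """routes: {name: list of fixed-point coordinates along the route};
    detection_results: {name: True if an obstacle was detected on it}.
    Returns the name of the target navigation route, or None if all blocked."""
    clear = [name for name in routes if not detection_results.get(name, True)]
    if not clear:
        return None  # every recommended route is obstructed
    return min(clear, key=lambda name: len(routes[name]))

routes = {
    "A": [(0, 0), (1, 0), (2, 0)],                   # shorter, but blocked
    "B": [(0, 0), (0, 1), (1, 1), (2, 1), (2, 0)],   # longer, unobstructed
}
results = {"A": True, "B": False}
print(choose_target_route(routes, results))
```

Note that an unknown detection result is treated as blocked here, a conservative default rather than anything specified in the patent.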
In the intelligent navigation method provided in steps S10 to S60, the real-shot fixed-point image captured at the current position by the mobile shooting end is compared with each preset fixed-point image in the preset image query library to obtain the most similar target fixed-point image, thereby confirming the start-point coordinates corresponding to the current position. Because the current position is located by image comparison, the method is not limited by a hardware detection range, and positioning is simple and fast. Meanwhile, a target navigation route that avoids obstacles can be planned from the start-point coordinates corresponding to the current position and the end-point coordinates, so that the mobile shooting end moves along the target navigation route, which can be adjusted in real time according to road conditions; navigation is not affected by the hardware detection range and is flexible and reliable.
In one embodiment, as shown in fig. 5, in step S20, that is, based on at least one movable direction, a real shot fixed point image shot by a movable shooting end in an unobstructed direction is acquired, which specifically includes the following steps:
s201, detecting each movable direction of the movable shooting end based on at least one movable direction, and acquiring at least one barrier-free direction.
The purpose of this embodiment is to locate the mobile shooting end's position on the preset navigation map by comparing the real-shot fixed-point image captured at the current position with the preset fixed-point images in the preset image query library. To increase the processing speed of the server, only one real-shot fixed-point image, taken in a single unobstructed direction, is selected for comparison; that is, a single unobstructed direction is sufficient.
In step S201, the server may select, as the unobstructed direction, a direction in which no obstacle exists according to whether the infrared detector installed on the mobile photographing terminal detects the current position and whether an obstacle exists within a detection distance in each of the mobile directions, so that the server photographs based on the unobstructed direction.
S202, controlling the movable shooting end to take the unobstructed direction as the shooting direction to shoot, and obtaining the real shooting fixed-point image.
Specifically, the server controls the mobile shooting end to shoot the real shooting fixed-point image in the unobstructed direction. In order to acquire the corresponding fixed point coordinates on the preset navigation map where the mobile shooting end is currently located, the shooting distance of the real shooting fixed point image should be greater than the distance between the current position and the next adjacent fixed point coordinates in the barrier-free direction.
In this embodiment, the preset navigation map may specify that the distance between every two fixed point coordinates is 1 meter, that is, the distance for capturing the real fixed point image is at least 1 meter. Specifying the shooting distance of the real shooting fixed-point image is also beneficial to more accurately positioning the position where the mobile shooting end is located. It will be appreciated that providing more detailed image features is more advantageous for the server to analyze and match images to similar images.
In the embodiment provided in steps S201 to S202, the server may select, as the unobstructed direction, a direction in which no obstacle exists according to whether the infrared detector installed on the mobile photographing terminal detects the current position and whether an obstacle exists within the detection distance in each movable direction, so that the server photographs based on the unobstructed direction. Meanwhile, the server specifies the shooting distance of the real shooting fixed point image, so that the position of the mobile shooting end is positioned more accurately, and the real shooting fixed point image with more image features is beneficial to improving the accuracy of image analysis.
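Since a single unobstructed direction suffices, the probe of step S201 can stop at the first clear direction. The sketch below mocks the infrared detector as a lookup table; the names and the dictionary shape are illustrative assumptions.

```python
def first_unobstructed(movable_directions, detector_readings):
    """detector_readings maps a direction to True when an obstacle is
    detected within the infrared detector's range in that direction.
    Returns the first clear direction, or None if all are blocked."""
    for direction in movable_directions:
        if not detector_readings.get(direction, True):
            return direction
    return None

readings = {"horizontal": True, "vertical": False}
print(first_unobstructed(["horizontal", "vertical"], readings))
```

The mobile shooting end then captures the real-shot fixed-point image along the returned direction (step S202).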
In an embodiment, the preset navigation map comprises at least two preset fixed points, each with fixed-point coordinates and at least one movable direction. As shown in fig. 6, before step S30, that is, before acquiring at least one preset fixed-point image in the same direction as the real-shot fixed-point image based on the preset image query library, the intelligent navigation method further includes the following steps:
S301, acquiring preset fixed point images shot by each preset fixed point in each movable direction.
The preset points are each intersection point in a grid formed by the preset navigation map based on a coordinate system and are used for guiding the mobile shooting end to move along the appointed preset points.
The preset fixed point image is a standard ground image comprising two adjacent preset fixed points, wherein each preset fixed point image further comprises an image direction identifier for declaring the shooting direction of the preset fixed point image. In this embodiment, the image direction indicator includes horizontal and vertical.
In step S301, the server shoots a preset fixed point image for each preset fixed point in each movable direction, each preset fixed point image carries a fixed point identifier and an image direction identifier, the fixed point identifier is used for uniquely specifying a corresponding preset fixed point, and the image direction identifier is used for declaring the shooting direction of the preset fixed point image, so that the subsequent server can screen the preset fixed point image based on different image direction identifiers, reduce the comparison range and increase the searching speed.
S302, associating and storing fixed point coordinates corresponding to each preset fixed point, an image direction identifier and a preset fixed point image corresponding to the image direction identifier to form a preset image query library.
The fixed point coordinates are coordinate positions of each preset fixed point corresponding to a coordinate system in a preset navigation map.
The preset image query library is a database preset in the server, used for storing each preset fixed point together with its preset fixed point images in each movable direction. In this embodiment, since each preset fixed point includes two movable directions (horizontal and vertical), each preset fixed point corresponds to two preset fixed point images: one shot along the horizontal moving direction and one shot along the vertical moving direction (shown in fig. 7). The two are distinguished by the image direction identifier.
In step S302, the server stores the fixed point coordinates of each preset fixed point, the image direction identifier and the preset fixed point image corresponding to that identifier in an associated manner, establishing a preset image query library. The subsequent server can then locate a position by matching the real shot fixed point image against the preset fixed point images in the library; this locating method is fast and simple.
In steps S301 to S302, the server shoots a preset fixed point image for each preset fixed point in each movable direction, so that the subsequent server can screen preset fixed point images by image direction identifier, reducing the comparison range and speeding up the search. The server also establishes the preset image query library, so that position locating can subsequently be performed by matching the real shot fixed point image against each preset fixed point image in the library.
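The association and storage in steps S301 to S302 can be sketched as a minimal in-memory library; the class, identifiers and image paths below are hypothetical illustrations, not the patent's implementation:

```python
from dataclasses import dataclass

@dataclass
class PresetFixedPointImage:
    point_id: str    # fixed point identifier, unique per preset fixed point
    coords: tuple    # fixed point coordinates on the navigation map grid
    direction: str   # image direction identifier: "horizontal" or "vertical"
    image_path: str  # path of the preset fixed point image (hypothetical)

class PresetImageQueryLibrary:
    def __init__(self):
        self._entries = {}

    def add(self, entry: PresetFixedPointImage):
        # Associate fixed point coordinates, direction identifier and image.
        self._entries[(entry.point_id, entry.direction)] = entry

    def query_by_direction(self, direction: str):
        # Screen preset fixed point images by image direction identifier,
        # reducing the comparison range for subsequent matching.
        return [e for e in self._entries.values() if e.direction == direction]

lib = PresetImageQueryLibrary()
lib.add(PresetFixedPointImage("P00", (0, 0), "horizontal", "img/p00_h.png"))
lib.add(PresetFixedPointImage("P00", (0, 0), "vertical", "img/p00_v.png"))
lib.add(PresetFixedPointImage("P01", (0, 1), "horizontal", "img/p01_h.png"))
print(len(lib.query_by_direction("horizontal")))  # 2
```

Keying entries by (fixed point identifier, image direction identifier) mirrors the two-image-per-point layout of fig. 7.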
In one embodiment, as shown in fig. 8, in step S40, a feature extraction algorithm is adopted to obtain a target fixed-point image most similar to a real shot fixed-point image from at least one preset fixed-point image, and the fixed-point coordinates corresponding to the target fixed-point image are used as starting point coordinates, which specifically includes the following steps:
S41, comparing at least one preset fixed point image with the real shot fixed point image by adopting a feature extraction algorithm, and obtaining the feature matching degree corresponding to each preset fixed point image.
The feature extraction algorithm used here is ORB (Oriented FAST and Rotated BRIEF, hereinafter referred to as the ORB algorithm), an algorithm for fast feature point extraction and description. Feature points of an image can be understood as its more salient points, such as contour points, bright points in darker areas, and dark points in lighter areas. The ORB algorithm is divided into two parts: feature point extraction and feature point matching. The preset fixed point image is a standard ground image covering two adjacent preset fixed points. The real shot fixed point image is a ground image shot in an unobstructed direction, covering the stretch between the current position and the next preset fixed point. The feature matching degree is the percentage similarity between the real shot fixed point image and a preset fixed point image.
Specifically, the process of comparing at least one preset fixed point image and the real shot fixed point image by using a feature extraction algorithm (Oriented FAST and Rotated BRIEF, hereinafter referred to as ORB algorithm) is as follows:
1. Extract the feature points of each preset fixed point image and of the real shot fixed point image respectively.
Extracting the feature points of a preset fixed point image comprises the following steps: take the more salient points of the preset fixed point image, such as contour points, bright points in darker areas and dark points in lighter areas, as candidate points; detect the pixel values on a circle of specified radius around each candidate point; and if enough pixels in the neighborhood around a candidate point differ sufficiently from the gray value of the candidate point, consider the candidate point a feature point.
To obtain faster results, the following detection acceleration method may also be employed: first test whether at least 3 of the points around the candidate point differ sufficiently in gray value from the candidate point; if not, the other points need not be computed and the candidate point is directly rejected as a feature point. The radius of the circle around the candidate point is an important parameter; for simplicity and efficiency, the detection radius can be set to 3, giving 16 peripheral pixels to compare, as shown in fig. 9. To further increase comparison efficiency, typically only a contiguous run of N surrounding pixels is required to differ, namely FAST-N; FAST-9 is generally recommended.
The process of extracting the feature points of the real shot fixed point image is consistent with the process of extracting the feature points of the preset fixed point image, and the description is omitted here.
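The FAST-style detection described above can be sketched as follows, assuming a grayscale image stored as a list of rows, a detection radius of 3 (16 circle pixels), the 4-compass-point acceleration pre-test, and the FAST-9 contiguity criterion; the threshold value and helper names are hypothetical:

```python
# Offsets of the 16 pixels on a circle of radius 3 around the candidate point.
CIRCLE16 = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
            (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=9):
    p = img[y][x]
    # Acceleration: test the 4 compass points first; at least 3 of them
    # must differ from the candidate by more than the threshold t.
    compass = [img[y - 3][x], img[y][x + 3], img[y + 3][x], img[y][x - 3]]
    if sum(1 for q in compass if abs(q - p) > t) < 3:
        return False
    # Full FAST-N test: require n contiguous circle pixels that are all
    # sufficiently brighter, or all sufficiently darker, than the candidate.
    diffs = [img[y + dy][x + dx] - p for dx, dy in CIRCLE16]
    for sign in (1, -1):
        run = 0
        for d in diffs * 2:  # duplicated list handles wrap-around runs
            run = run + 1 if sign * d > t else 0
            if run >= n:
                return True
    return False

img = [[0] * 7 for _ in range(7)]
img[3][3] = 100  # a bright point on a dark background
print(is_fast_corner(img, 3, 3))  # True
```

A flat image yields no feature points, since the compass pre-test already fails.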
2. Calculate and store the feature point descriptors of each preset fixed point image and of the real shot fixed point image respectively.
Calculating the feature point descriptors of a preset fixed point image comprises the following steps: after the feature points of the preset fixed point image are obtained, their attributes need to be described in some way. The output describing a feature point's attributes is called the descriptor of the feature point (feature descriptors). The ORB algorithm acquires the attributes of a feature point through the following steps:
(1) Take the feature point P as the center and d as the radius to draw a circle O.
(2) Select N point pairs within the circle O. For convenience of explanation, N = 4 is selected in this embodiment, as shown in fig. 10; in practical application N may be 512.
The 4 point pairs currently selected are denoted P1(A,B), P2(A,B), P3(A,B) and P4(A,B).
(3) Define the T operation: T(P(A,B)) = 1 if I_A > I_B, and T(P(A,B)) = 0 otherwise, where I_A denotes the gray value of point A and I_B denotes the gray value of point B.
(4) Perform the T operation on each of the selected point pairs and concatenate the results. Continuing with the four point pairs above as an example:
T(P1(A,B))=1
T(P2(A,B))=0
T(P3(A,B))=1
T(P4(A,B))=1
The final descriptor of the feature point P is 1011.
The process of calculating the feature point descriptors of the real shot fixed point images is consistent with the process of calculating the feature point descriptors of the preset fixed point images, and the description is omitted here.
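The T operation and descriptor construction can be sketched as follows; the gray values and point-pair names are hypothetical, chosen to reproduce the example descriptor 1011:

```python
def t_op(gray_a, gray_b):
    # T operation: output 1 when I_A > I_B, otherwise 0.
    return 1 if gray_a > gray_b else 0

# Hypothetical gray values for the 4 selected point pairs P1..P4.
gray = {"A1": 30, "B1": 10,   # I_A > I_B -> 1
        "A2": 10, "B2": 30,   # -> 0
        "A3": 50, "B3": 20,   # -> 1
        "A4": 40, "B4": 15}   # -> 1
pairs = [("A1", "B1"), ("A2", "B2"), ("A3", "B3"), ("A4", "B4")]

# Concatenating the T results gives the descriptor of feature point P.
descriptor = "".join(str(t_op(gray[a], gray[b])) for a, b in pairs)
print(descriptor)  # 1011
```

With N = 512 point pairs, the same procedure yields a 512-bit binary descriptor per feature point.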
3. Compare the feature point descriptors of the real shot fixed point image with those of each preset fixed point image one by one to obtain the feature matching degree. The comparison between the descriptor of the real shot fixed point image and that of one preset fixed point image is illustrated:
Feature point descriptor A of real shot fixed point image: 10101011
A characteristic point descriptor B of a preset point image: 10101010
In this example, only the last bit of A and B differs, so the feature matching degree is 87.5% (7 of the 8 bits match). The feature matching degree of A and B is calculated by performing an exclusive OR (XOR) operation on A and B and counting the matching bits. The XOR operation can be completed by hardware, so it is highly efficient and speeds up matching.
By repeating this comparison between the feature point descriptors of the real shot fixed point image and those of each preset fixed point image, the feature matching degree of every preset fixed point image relative to the real shot fixed point image is obtained and stored.
In step S41, the server adopts the ORB algorithm and, with hardware assistance, obtains the feature matching degree of the feature point descriptors of each preset fixed point image relative to those of the real shot fixed point image, providing the basis for selecting the preset fixed point image with the highest feature matching degree.
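The XOR-based feature matching degree can be sketched as follows (a software illustration of the comparison; the patent notes the XOR itself can also be completed by hardware):

```python
def feature_matching_degree(desc_a: str, desc_b: str) -> float:
    """Percentage of matching bits between two binary descriptors."""
    # XOR the descriptors: matching bits give 0, differing bits give 1.
    differing = bin(int(desc_a, 2) ^ int(desc_b, 2)).count("1")
    return 100.0 * (len(desc_a) - differing) / len(desc_a)

# The example from the text: only the last bit differs.
print(feature_matching_degree("10101011", "10101010"))  # 87.5
```

Counting set bits of the XOR result is the Hamming distance; the matching degree is simply its complement expressed as a percentage.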
S42, selecting a preset fixed point image with highest feature matching degree as a target fixed point image.
The target fixed point image is an image with highest similarity with the real shooting fixed point image in the preset fixed point image.
Specifically, as described in step S41, the server has recorded the feature matching degree of the feature point descriptors of each preset fixed point image relative to those of the real shot fixed point image; it only needs to select, among all feature matching degrees, the preset fixed point image corresponding to the highest percentage and determine it as the target fixed point image.
In step S42, the server sorts the preset fixed point images by their feature matching degree relative to the real shot fixed point image and selects the one with the highest matching degree as the target fixed point image, facilitating the subsequent positioning of the mobile shooting end based on the target fixed point image.
S43, taking fixed point coordinates of the target fixed point image in a preset image query library as starting point coordinates.
The starting point coordinates are coordinate positions corresponding to the current position of the mobile shooting end in a preset navigation map.
In step S43, the server may query the fixed point coordinates corresponding to the fixed point image of the target in the preset image query library as the starting point coordinates, so as to complete the positioning of the position of the mobile shooting end, and facilitate the navigation route planning based on the starting point position.
In steps S41 to S43, the server acquires the feature matching degree of the feature point descriptors of each preset fixed point image relative to the feature point descriptors of the real shot fixed point images by means of the ORB algorithm and hardware, and selects the preset fixed point image with the largest feature matching degree as the target fixed point image after the feature matching degree is ranked, and acquires the fixed point coordinates corresponding to the target fixed point image in the preset image query library as the starting point coordinates, thereby completing the position location of the mobile shooting end.
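Steps S41 to S43 can be sketched end to end under the simplifying assumption that each image is summarized by a single binary descriptor (real images yield many descriptors per image); the library contents below are hypothetical:

```python
def locate_start_point(real_desc, library):
    """library: {point_id: (descriptor, fixed point coordinates)}.
    Pick the preset fixed point image with the highest feature matching
    degree and return its fixed point coordinates as the start point."""
    def degree(d):
        # XOR-based matching degree, as in step S41.
        diff = bin(int(real_desc, 2) ^ int(d, 2)).count("1")
        return 100.0 * (len(real_desc) - diff) / len(real_desc)

    best = max(library, key=lambda pid: degree(library[pid][0]))
    return best, library[best][1]

lib = {"P00": ("10101010", (0, 0)),
       "P01": ("10101011", (0, 1)),   # identical to the real shot descriptor
       "P02": ("00000000", (0, 2))}
print(locate_start_point("10101011", lib))  # ('P01', (0, 1))
```

The returned fixed point coordinates play the role of the start point coordinates in the subsequent route planning of step S50.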
In one embodiment, as shown in fig. 11, step S50, namely obtaining the end point coordinates and generating at least two recommended navigation routes according to the start point coordinates and the end point coordinates, specifically includes the following steps:
S51, determining the start point coordinates and the end point coordinates on the preset fixed point navigation map.
The end point coordinates are the coordinates of the destination that the mobile shooting end reaches after moving along specified preset fixed points on the preset navigation map.
In step S51, the server may identify the start point coordinate and the end point coordinate on the preset navigation map, so that a background controller of the server can intuitively learn the current position of the mobile shooting end and the end point coordinate to be reached by the current movement.
S52, acquiring at least two recommended navigation routes on a preset fixed-point navigation map by adopting an A star algorithm.
A recommended navigation route is a route that departs from the start point coordinates along one of its movable directions and reaches the end point coordinates; among all such routes, at least one has the shortest path, as shown in fig. 4.
Specifically, the implementation process of acquiring the recommended navigation route in a movable direction on the preset fixed point navigation map by adopting the A star algorithm is as follows:
Set F = G + H, where F is the estimated total cost of a node, G is the cost of the path already traveled from the start point coordinates to the preset fixed point currently being processed, and H is the estimated cost of moving from that preset fixed point to the end point coordinates.
1. Add the start point coordinates to the walkable node list (each node is a preset fixed point on the preset navigation map).
2. The following procedure was repeated:
a. Traversing the walkable node list, searching the node with the minimum F value, and taking the searched node as the preset fixed point to be processed currently.
B. Move this preset fixed point to the infeasible list.
C. Analyze each of the four nodes adjacent to this preset fixed point:
If the adjacent node is unreachable or already in the infeasible list, ignore it. Otherwise, perform the following operations:
If the adjacent node is not in the walkable node list, add it to the walkable node list, set the current node as its parent node, and record the F, G and H values of the node.
If the adjacent node is already in the walkable node list, check whether reaching it via the current node yields a smaller G value. If so, set its parent node to the current node and recalculate its G and F values.
D. When the end point coordinates are added to the walkable node list, the search for the optimal navigation path is complete.
3. Starting from the end point coordinates, move from each node to its parent node until the start point coordinates are reached; the nodes traversed form the recommended navigation route.
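The A star procedure above can be sketched on a small grid of preset fixed points; the grid size and the blocked point below are hypothetical:

```python
import heapq

def a_star(start, goal, blocked, width, height):
    """A star search on a grid of preset fixed points with 4 movable
    directions, using F = G + H with a Manhattan-distance estimate H."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # walkable node list, ordered by F
    parent = {start: None}
    g = {start: 0}
    closed = set()                      # the infeasible list
    while open_heap:
        _f, _g, node = heapq.heappop(open_heap)
        if node == goal:
            # Walk back along parent nodes to recover the recommended route.
            route = []
            while node is not None:
                route.append(node)
                node = parent[node]
            return route[::-1]
        if node in closed:
            continue
        closed.add(node)
        x, y = node
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nb[0] < width and 0 <= nb[1] < height):
                continue
            if nb in blocked or nb in closed:
                continue
            ng = g[node] + 1
            if ng < g.get(nb, float("inf")):  # smaller G value found
                g[nb] = ng
                parent[nb] = node
                heapq.heappush(open_heap, (ng + h(nb), ng, nb))
    return None

route = a_star((0, 0), (2, 2), blocked={(1, 1)}, width=3, height=3)
print(route[0], route[-1], len(route))  # (0, 0) (2, 2) 5
```

Different unobstructed starting directions simply seed the search with different first legs, yielding the multiple recommended navigation routes of step S52.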
In step S52, the server acquires a recommended navigation route in each movable direction on the preset fixed point navigation map by adopting the A star algorithm, so that the mobile shooting end can subsequently replace or adjust the route in real time according to road conditions (for example, when an obstacle appears in the road), enhancing the movement flexibility of the mobile shooting end.
In steps S51 to S52, the server identifies the start point coordinates and the end point coordinates on the preset navigation map, so that background control personnel of the server can intuitively learn the current position of the mobile shooting end and the end point coordinates to be reached by the current movement. The server then acquires the recommended navigation route in each movable direction by adopting the A star algorithm, so that the mobile shooting end can subsequently replace or adjust the route in real time according to road conditions, enhancing its movement flexibility.
In one embodiment, the target navigation route includes at least one route point. As shown in fig. 12, in step S60, the moving capturing end is controlled to move according to the target navigation route, which specifically includes the following steps:
S61, controlling the mobile shooting end to move from the starting point coordinates to the next passing fixed point according to the target navigation route.
The target navigation route is a route with no obstacle between the starting point coordinate and the adjacent fixed point coordinate in the movable direction in the recommended navigation route, and the shortest path between the starting point coordinate and the end point coordinate.
The path fixed point is the next preset fixed point to which the mobile shooting end moves according to the direction of the target navigation route.
In step S61, the server sends the target navigation route to the mobile shooting end through the wireless network, and after the mobile shooting end receives the target navigation route, the mobile shooting end can move from the current position to the fixed point of the next path according to the guidance of the target navigation route, so that the safety and reliability of the mobile shooting end in the moving process are improved.
S62, updating the next passing fixed point to which the mobile shooting end moves into a new starting point coordinate, and if the new starting point coordinate is not an end point coordinate, repeatedly executing the step of acquiring the end point coordinate, and generating at least two recommended navigation routes according to the starting point coordinate and the end point coordinate.
Specifically, since the range over which the mobile shooting end detects obstacles each time is the distance between two preset fixed points, when the mobile shooting end moves to the next route fixed point according to the target navigation route, the safety of the path needs to be determined again: the next route fixed point is updated to the start point coordinates, and whether an obstacle exists between the new start point coordinates and the next route fixed point is determined afresh.
It can be understood that when no obstacle exists between the starting point coordinate and the next path fixed point, the mobile shooting end can continue to move according to the target navigation route; when there is an obstacle between the start point coordinates and the next route fixed point, the server is required to re-plan the target navigation route, that is, repeatedly execute the step of obtaining the end point coordinates, and generate at least two recommended navigation routes according to the start point coordinates and the end point coordinates. The steps that are repeatedly executed are identical to steps S50 to S60, and will not be described here again.
In step S62, whenever the mobile shooting end moves to the next path fixed point and does not reach the destination coordinate yet, it is detected whether an obstacle exists between the current position and the next path fixed point, so that flexibility and movement safety of adjusting the target navigation route according to road conditions in real time during the movement process are improved, and smooth arrival of the mobile shooting end at the destination coordinate position can be ensured.
In step S61 to step S62, the server improves the safety and reliability of the mobile shooting end in the moving process by controlling the mobile shooting end to move from the current position to the next preset point according to the target navigation route. When the mobile shooting end moves to the next path fixed point and does not reach the end point coordinate yet, whether an obstacle exists between the current position and the next path fixed point or not is detected, so that the flexibility and the movement safety of adjusting the target navigation route according to road conditions in real time in the moving process are improved, and the mobile shooting end can be ensured to smoothly reach the position of the end point coordinate.
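The move-and-replan loop of steps S61 to S62 can be sketched as follows; the obstacle set and the replanned detour are hypothetical stubs standing in for the obstacle avoidance detection and for steps S50 to S60:

```python
def move_along_route(route, detect_obstacle, replan):
    """Move fixed point by fixed point along the target navigation route;
    whenever the next leg is blocked, ask the server to replan from the
    current position (repeating steps S50 to S60)."""
    position = route[0]
    while position != route[-1]:
        nxt = route[route.index(position) + 1]
        if detect_obstacle(position, nxt):
            route = replan(position, route[-1])  # new target navigation route
            continue
        position = nxt                           # update start point coordinates
    return position

# Hypothetical stubs: one blocked leg and a fixed detour around it.
blocked = {((0, 1), (0, 2))}
def detect(a, b):
    return (a, b) in blocked
def replan(cur, end):
    return [cur, (1, 1), (1, 2), end]

final = move_along_route([(0, 0), (0, 1), (0, 2)], detect, replan)
print(final)  # (0, 2)
```

The loop mirrors step S62: the mobile shooting end checks only one leg at a time, so replanning happens exactly when the next leg between two adjacent preset fixed points turns out to be obstructed.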
According to the intelligent navigation method provided by this embodiment, the real shot fixed point image shot by the mobile shooting end at its current position is compared with each preset fixed point image in the preset image query library to obtain the most similar target fixed point image, thereby confirming the start point coordinates corresponding to the current position of the mobile shooting end. Since the current position is located by image comparison, the method is not limited by a hardware detection range, and the locating mode is simple and fast. The intelligent navigation method can further plan a target navigation route that avoids obstacles according to the start point coordinates corresponding to the current position and the end point coordinates, so that the mobile shooting end moves based on the target navigation route and the route can be adjusted in real time according to road conditions; the navigation is not affected by the hardware detection range, and the navigation mode is flexible and reliable.
Further, the server may select a direction in which no obstacle exists as the unobstructed direction, so that shooting is performed based on the unobstructed direction. Meanwhile, the server specifies the shooting distance of the real shot fixed point image, so that the position of the mobile shooting end is located more accurately, and a real shot fixed point image with more image features helps improve the accuracy of image analysis. The server shoots a preset fixed point image for each preset fixed point in each movable direction, so that the subsequent server can screen preset fixed point images by image direction identifier, reducing the comparison range and speeding up the search. The server establishes a preset image query library, so that position locating can subsequently be performed by matching the real shot fixed point image against each preset fixed point image in the library; the locating method is fast and simple. The server adopts the ORB algorithm and completes the position locating of the mobile shooting end with hardware assistance, which also benefits navigation route planning based on the start point position. The server can mark the start point coordinates and the end point coordinates on the preset navigation map, so that background control personnel of the server can intuitively learn the current position of the mobile shooting end and the end point coordinates to be reached by the current movement. The server acquires the recommended navigation route in each movable direction on the preset fixed point navigation map by adopting the A star algorithm, so that the mobile shooting end can subsequently replace or adjust the route in real time according to road conditions, enhancing its movement flexibility.
By controlling the mobile shooting end to move to the next route fixed point and, whenever the end point coordinates have not yet been reached, detecting whether an obstacle exists between the current position and the next route fixed point, the server improves the flexibility of adjusting the target navigation route in real time according to road conditions during movement as well as movement safety, and ensures that the mobile shooting end smoothly reaches the position of the end point coordinates.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In an embodiment, an intelligent navigation device is provided, and the intelligent navigation device corresponds to the intelligent navigation method in the embodiment one by one. As shown in fig. 13, the intelligent navigation apparatus includes a set moving direction module 10, an acquisition real shot image module 20, an acquisition fixed point image module 30, an acquisition starting point coordinate module 40, a generation recommended route module 50, and a control moving end moving module 60. The functional modules are described in detail as follows:
The mobile direction module 10 is configured to obtain at least one movable direction of the mobile shooting end on a preset navigation map.
The capturing real shot image module 20 is configured to capture a real shot fixed point image captured by the mobile capturing terminal in the unobstructed direction based on at least one movable direction.
The fixed point image acquisition module 30 is configured to acquire at least one preset fixed point image in the same direction as the real shot fixed point image based on the preset image query library.
The start point coordinate acquiring module 40 is configured to acquire a target fixed point image that is most similar to the real shot fixed point image from at least one preset fixed point image by using a feature extraction algorithm, and take a fixed point coordinate corresponding to the target fixed point image as a start point coordinate.
The recommended route generation module 50 is configured to obtain an end point coordinate, and generate at least two recommended navigation routes according to the start point coordinate and the end point coordinate.
The control mobile terminal moving module 60 is configured to obtain obstacle avoidance detection results of the mobile photographing terminal on at least two recommended navigation routes, select a recommended navigation route with a shortest path and no obstacle avoidance detection result as a target navigation route, send the target navigation route to the mobile photographing terminal, and control the mobile photographing terminal to move according to the target navigation route.
Preferably, the capturing real shot image module 20 includes an obstacle-free direction unit 21 and a capturing real shot image unit 22.
An obstacle-free direction unit 21 is provided for detecting each movable direction of the moving photographing terminal based on at least one movable direction, and for obtaining at least one obstacle-free direction.
The capturing real shot image unit 22 is configured to control the mobile shooting end to take the unobstructed direction as the shooting direction, and obtain a real shot fixed point image.
Preferably, the intelligent navigation apparatus further comprises a module 301 for acquiring a preset fixed point image and a module 302 for forming an image query library.
The preset fixed point image acquiring module 301 is configured to acquire a preset fixed point image captured by each preset fixed point in each movable direction.
The image query library forming module 302 is configured to store fixed point coordinates corresponding to each preset fixed point, an image direction identifier, and a preset fixed point image corresponding to the image direction identifier in a correlated manner, so as to form a preset image query library.
Preferably, the acquisition start point coordinate module 40 includes an acquisition feature matching degree unit 401, a target fixed point image unit 402, and a start point coordinate unit 403.
The feature matching degree obtaining unit 401 is configured to compare at least one preset fixed point image with the real shot fixed point image by using a feature extraction algorithm, and obtain a feature matching degree corresponding to each preset fixed point image.
The target fixed-point image unit 402 is configured to select a preset fixed-point image with the highest feature matching degree as a target fixed-point image.
A start point coordinate unit 403 for taking the fixed point coordinates of the target fixed point image in the preset image query library as the start point coordinates.
Preferably, generating the recommended route module 50 includes determining a starting point coordinates unit 501 and acquiring a recommended route unit 502.
A start point coordinate determination unit 501 for determining start point coordinates and end point coordinates on a preset point navigation map.
The recommended route obtaining unit 502 is configured to obtain at least two recommended navigation routes on a preset fixed-point navigation map by using an a-star algorithm.
Preferably, the control mobile terminal moving module 60 includes a shooting end movement control unit 601 and a recommended navigation route generation unit 602.
The shooting end movement control unit 601 is used for controlling the mobile shooting end to move from the start point coordinates to the next route fixed point according to the target navigation route.
And a recommended navigation route generation unit 602, configured to update a next route fixed point to which the mobile shooting end moves to a new start point coordinate, and if the new start point coordinate is not an end point coordinate, repeatedly perform the step of acquiring the end point coordinate, and generate at least two recommended navigation routes according to the start point coordinate and the end point coordinate.
For specific limitations of the intelligent navigation apparatus, reference may be made to the above limitation of the intelligent navigation method, and no further description is given here. The above-described respective modules in the intelligent navigation apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 14. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and the computer programs in the non-volatile storage medium. The database of the computer device is used for storing data related to the intelligent navigation method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement an intelligent navigation method.
In one embodiment, a computer device is provided that includes a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program: acquiring at least one movable direction of a mobile shooting end on a preset navigation map; based on at least one movable direction, acquiring a real shooting fixed point image shot by a movable shooting end in an unobstructed direction; acquiring at least one preset fixed point image in the same direction as the real shot fixed point image based on a preset image query library; a feature extraction algorithm is adopted, a target fixed-point image which is most similar to the real shot fixed-point image is obtained from at least one preset fixed-point image, and fixed-point coordinates corresponding to the target fixed-point image are used as starting point coordinates; acquiring an end point coordinate, and generating at least two recommended navigation routes according to the start point coordinate and the end point coordinate; and acquiring obstacle avoidance detection results of the mobile shooting end on at least two recommended navigation routes, selecting the recommended navigation route which is in an obstacle-free state and has the shortest distance as a target navigation route, sending the target navigation route to the mobile shooting end, and controlling the mobile shooting end to move according to the target navigation route.
In an embodiment, based on at least one movable direction, acquiring a real shot fixed point image shot by a movable shooting end in an unobstructed direction includes: detecting each movable direction of the movable shooting end based on at least one movable direction, and acquiring at least one barrier-free direction; and controlling the mobile shooting end to take the barrier-free direction as the shooting direction to shoot, and acquiring the real shooting fixed-point image.
In an embodiment, the preset fixed point navigation map comprises at least two preset fixed points, each preset fixed point comprising fixed point coordinates and at least one movable direction; before the step of acquiring at least one preset fixed point image in the same direction as the real shot fixed point image based on the preset image query library, the processor further implements the following steps when executing the computer program: acquiring the preset fixed point images shot at each preset fixed point in each movable direction; and storing, in an associated manner, the fixed point coordinates corresponding to each preset fixed point, an image direction identifier, and the preset fixed point image corresponding to the image direction identifier, to form the preset image query library.
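A minimal sketch of the preset image query library described above; the point identifiers, coordinates, and stand-in string payloads for the images are hypothetical (a real system would store image files or extracted feature descriptors).

```python
# Build a lookup keyed by (preset fixed point, image direction identifier)
# that associates fixed point coordinates with the preset fixed point
# image shot in that direction.

def build_query_library(preset_points):
    """preset_points: iterable of (point_id, coords, {direction: image})."""
    library = {}
    for point_id, coords, images_by_direction in preset_points:
        for direction, image in images_by_direction.items():
            library[(point_id, direction)] = {
                "coords": coords,          # fixed point coordinates
                "direction_id": direction,  # image direction identifier
                "image": image,             # preset fixed point image
            }
    return library

lib = build_query_library([
    ("P1", (0, 0), {"north": "img_p1_n", "east": "img_p1_e"}),
    ("P2", (0, 1), {"south": "img_p2_s"}),
])
print(lib[("P1", "east")]["coords"])  # (0, 0)
```

Querying by direction then reduces to collecting all entries whose `direction_id` matches the real shot image's direction.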
In one embodiment, adopting a feature extraction algorithm to acquire, from at least one preset fixed point image, the target fixed point image most similar to the real shot fixed point image, and taking the fixed point coordinates corresponding to the target fixed point image as the starting point coordinates, includes: comparing the at least one preset fixed point image with the real shot fixed point image by adopting the feature extraction algorithm, to obtain the feature matching degree corresponding to each preset fixed point image; selecting the preset fixed point image with the highest feature matching degree as the target fixed point image; and taking the fixed point coordinates of the target fixed point image in the preset image query library as the starting point coordinates.
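The matching step might look like the following sketch. A real system would extract descriptors such as ORB or SIFT from the images; here each image is reduced to an assumed precomputed feature vector, and the "feature matching degree" is modelled as cosine similarity.

```python
# Pick the preset fixed point image whose (assumed) feature vector is
# most similar to the real shot image's feature vector; the coordinates
# of that image become the starting point coordinates.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def best_match(real_features, preset_images):
    """preset_images: {fixed_point_coords: feature_vector}.
    Returns the coords of the best-matching preset fixed point image."""
    return max(preset_images,
               key=lambda coords: cosine_similarity(real_features,
                                                    preset_images[coords]))

presets = {(0, 0): [1.0, 0.0, 0.2], (0, 1): [0.1, 0.9, 0.4]}
start = best_match([0.9, 0.1, 0.3], presets)
print(start)  # (0, 0)
```

Any similarity measure that orders the candidates would do; cosine similarity is chosen only for illustration.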
In an embodiment, acquiring the end point coordinates and generating at least two recommended navigation routes according to the starting point coordinates and the end point coordinates includes: determining the starting point coordinates and the end point coordinates on the preset fixed point navigation map; and acquiring at least two recommended navigation routes on the preset fixed point navigation map by adopting the A* (A star) algorithm.
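The route-search step can be illustrated with a small A* search on a 4-connected grid standing in for the preset fixed point navigation map. The grid size, blocked cell, and unit step costs are assumptions, and this sketch returns a single shortest route rather than the two or more candidates the embodiment generates.

```python
# A* on a 4-connected grid with the admissible Manhattan-distance
# heuristic; grid cells stand in for the preset fixed points.
import heapq

def a_star(start, goal, blocked=frozenset(), size=(5, 5)):
    def h(p):  # Manhattan distance, never overestimates on this grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size[0] and 0 <= ny < size[1] \
                    and (nx, ny) not in blocked:
                heapq.heappush(open_heap, (g + 1 + h((nx, ny)), g + 1,
                                           (nx, ny), path + [(nx, ny)]))
    return None  # no route exists

route = a_star((0, 0), (2, 2), blocked={(1, 1)})
print(len(route) - 1)  # number of moves: 4 on this grid
```

To obtain multiple candidate routes one could, for example, rerun the search with the edges of the first route penalised; the patent does not specify the exact mechanism.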
In one embodiment, the target navigation route includes at least one passing fixed point; controlling the mobile shooting end to move according to the target navigation route includes: controlling the mobile shooting end to move from the starting point coordinates to the next passing fixed point according to the target navigation route; and updating the next passing fixed point to which the mobile shooting end moves to be the new starting point coordinates, and if the new starting point coordinates are not the end point coordinates, repeatedly executing the step of acquiring the end point coordinates and generating at least two recommended navigation routes according to the starting point coordinates and the end point coordinates.
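The replanning loop above can be sketched as follows; the `NEXT_FIXED_POINT` table is a hypothetical stand-in for the recommended-route generation step, which in the embodiment would be recomputed from the updated starting point coordinates after every hop.

```python
# Sketch of the passing-fixed-point loop: after each hop the reached
# fixed point becomes the new starting point, and planning repeats
# until the end point is reached. NEXT_FIXED_POINT plays the role of
# the next via point on a freshly planned route (illustrative only).
NEXT_FIXED_POINT = {"S": "A", "A": "B", "B": "T"}

def navigate(start, end, max_hops=10):
    position = start
    visited = [position]
    hops = 0
    while position != end and hops < max_hops:
        # "Replan" from the updated starting point and advance one
        # passing fixed point along the route.
        position = NEXT_FIXED_POINT[position]
        visited.append(position)
        hops += 1
    return visited

print(navigate("S", "T"))  # ['S', 'A', 'B', 'T']
```

The `max_hops` guard is a defensive addition, not part of the patent's description.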
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which, when executed by a processor, implements the following steps: acquiring at least one movable direction of a mobile shooting end on a preset navigation map; acquiring, based on the at least one movable direction, a real shot fixed point image shot by the mobile shooting end in an unobstructed direction; acquiring, based on a preset image query library, at least one preset fixed point image in the same direction as the real shot fixed point image; adopting a feature extraction algorithm to acquire, from the at least one preset fixed point image, a target fixed point image most similar to the real shot fixed point image, and taking the fixed point coordinates corresponding to the target fixed point image as the starting point coordinates; acquiring the end point coordinates, and generating at least two recommended navigation routes according to the starting point coordinates and the end point coordinates; and acquiring obstacle avoidance detection results of the mobile shooting end on the at least two recommended navigation routes, selecting the recommended navigation route that is in an obstacle-free state and has the shortest distance as a target navigation route, sending the target navigation route to the mobile shooting end, and controlling the mobile shooting end to move according to the target navigation route.
In an embodiment, acquiring, based on at least one movable direction, the real shot fixed point image shot by the mobile shooting end in an unobstructed direction includes: detecting, based on the at least one movable direction, each movable direction of the mobile shooting end, and acquiring at least one obstacle-free direction; and controlling the mobile shooting end to shoot with the obstacle-free direction as the shooting direction, and acquiring the real shot fixed point image.
In an embodiment, the preset fixed point navigation map comprises at least two preset fixed points, each preset fixed point comprising fixed point coordinates and at least one movable direction; before the step of acquiring at least one preset fixed point image in the same direction as the real shot fixed point image based on the preset image query library, the computer program, when executed by the processor, further implements the following steps: acquiring the preset fixed point images shot at each preset fixed point in each movable direction; and storing, in an associated manner, the fixed point coordinates corresponding to each preset fixed point, an image direction identifier, and the preset fixed point image corresponding to the image direction identifier, to form the preset image query library.
In one embodiment, adopting a feature extraction algorithm to acquire, from at least one preset fixed point image, the target fixed point image most similar to the real shot fixed point image, and taking the fixed point coordinates corresponding to the target fixed point image as the starting point coordinates, includes: comparing the at least one preset fixed point image with the real shot fixed point image by adopting the feature extraction algorithm, to obtain the feature matching degree corresponding to each preset fixed point image; selecting the preset fixed point image with the highest feature matching degree as the target fixed point image; and taking the fixed point coordinates of the target fixed point image in the preset image query library as the starting point coordinates.
In an embodiment, acquiring the end point coordinates and generating at least two recommended navigation routes according to the starting point coordinates and the end point coordinates includes: determining the starting point coordinates and the end point coordinates on the preset fixed point navigation map; and acquiring at least two recommended navigation routes on the preset fixed point navigation map by adopting the A* (A star) algorithm.
In one embodiment, the target navigation route includes at least one passing fixed point; controlling the mobile shooting end to move according to the target navigation route includes: controlling the mobile shooting end to move from the starting point coordinates to the next passing fixed point according to the target navigation route; and updating the next passing fixed point to which the mobile shooting end moves to be the new starting point coordinates, and if the new starting point coordinates are not the end point coordinates, repeatedly executing the step of acquiring the end point coordinates and generating at least two recommended navigation routes according to the starting point coordinates and the end point coordinates.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by a computer program stored on a non-transitory computer readable storage medium, which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments of the application may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated; in practical applications, the above functions may be allocated to different functional units and modules as needed, i.e., the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be included within the scope of the present invention.

Claims (10)

1. An intelligent navigation method is characterized by comprising the following steps:
acquiring at least one movable direction of a mobile shooting end on a preset navigation map;
Acquiring, based on at least one movable direction, a real shot fixed point image shot by the mobile shooting end in an obstacle-free direction, wherein the movable direction is a direction in which the mobile shooting end can move, set by a server with reference to the current position of the mobile shooting end in the preset navigation map and taking a coordinate system on the preset navigation map as a reference; the obstacle-free direction is a movable direction in which, when the mobile shooting end detects along each movable direction, no obstacle is detected; and the real shot fixed point image is a ground image between the current position and the next preset fixed point;
Acquiring at least one preset fixed point image in the same direction as the real shot fixed point image based on a preset image query library, wherein the preset fixed point image is a standard ground image between two adjacent preset fixed points, and each preset fixed point image further comprises an image direction identifier for declaring the shooting direction of the preset fixed point image;
adopting a feature extraction algorithm, acquiring, from at least one preset fixed point image, a target fixed point image most similar to the real shot fixed point image, and taking fixed point coordinates corresponding to the target fixed point image as starting point coordinates;
acquiring an end point coordinate, and generating at least two recommended navigation routes according to the start point coordinate and the end point coordinate;
And acquiring obstacle avoidance detection results of the mobile shooting end on at least two recommended navigation routes, selecting a recommended navigation route which is in an obstacle-free state and has the shortest distance as a target navigation route, sending the target navigation route to the mobile shooting end, and controlling the mobile shooting end to move according to the target navigation route.
2. The intelligent navigation method according to claim 1, wherein the acquiring, based on at least one of the movable directions, the real shot fixed point image shot by the movable shooting end in the unobstructed direction includes:
Detecting each movable direction of the movable shooting end based on at least one movable direction, and acquiring at least one barrier-free direction;
and controlling the mobile shooting end to take the barrier-free direction as a shooting direction to shoot, and acquiring a real shooting fixed-point image.
3. The intelligent navigation method of claim 1, wherein the preset fixed point navigation map comprises at least two preset fixed points, each of the preset fixed points comprising fixed point coordinates and at least one movable direction;
Before the step of acquiring at least one preset fixed point image in the same direction as the real shot fixed point image based on the preset image query library, the intelligent navigation method further comprises the following steps:
acquiring a preset fixed point image shot by each preset fixed point in each movable direction;
And storing fixed point coordinates corresponding to each preset fixed point, a movable direction and preset fixed point images corresponding to the movable direction in a correlated mode to form a preset image query library.
4. The intelligent navigation method according to claim 3, wherein the step of acquiring a target fixed-point image most similar to the real shot fixed-point image from at least one preset fixed-point image by using a feature extraction algorithm, and taking fixed-point coordinates corresponding to the target fixed-point image as starting point coordinates comprises:
comparing at least one preset fixed point image with the real shot fixed point image by adopting a feature extraction algorithm to obtain the feature matching degree corresponding to each preset fixed point image;
selecting a preset fixed point image with the highest characteristic matching degree as the target fixed point image;
And taking the fixed point coordinates of the target fixed point image in the preset image query library as starting point coordinates.
5. The intelligent navigation method of claim 3, wherein the obtaining the destination coordinates, generating at least two recommended navigation routes based on the start coordinates and the destination coordinates, comprises:
determining the starting point coordinates and the ending point coordinates on the preset fixed point navigation map;
And acquiring at least two recommended navigation routes on the preset fixed point navigation map by adopting an A star algorithm.
6. The intelligent navigation method according to claim 1, wherein the target navigation route includes at least one route point;
The controlling the mobile shooting end to move according to the target navigation route comprises the following steps:
controlling the mobile shooting end to move from a starting point coordinate to a next passing fixed point according to the target navigation route;
Updating the next passing fixed point to which the mobile shooting end moves into a new starting point coordinate, and if the new starting point coordinate is not the terminal point coordinate, repeatedly executing the step of acquiring the terminal point coordinate, and generating at least two recommended navigation routes according to the starting point coordinate and the terminal point coordinate.
7. An intelligent navigation device, comprising:
the mobile direction setting module is used for acquiring at least one movable direction of the mobile shooting end on a preset navigation map;
The real shooting image acquisition module is used for acquiring, based on at least one movable direction, a real shot fixed point image shot by the mobile shooting end in an obstacle-free direction, wherein the movable direction is a direction in which the mobile shooting end can move, set by a server with reference to the current position of the mobile shooting end in the preset navigation map and taking a coordinate system on the preset navigation map as a reference; the obstacle-free direction is a movable direction in which, when the mobile shooting end detects along each movable direction, no obstacle is detected; and the real shot fixed point image is a ground image between the current position and the next preset fixed point;
The fixed point image acquisition module is used for acquiring at least one preset fixed point image in the same direction as the real shooting fixed point image based on a preset image query library, wherein the preset fixed point image is a standard ground image between two adjacent preset fixed points, and each preset fixed point image further comprises an image direction identifier used for declaring the shooting direction of the preset fixed point image;
The starting point coordinate acquisition module is used for acquiring a target fixed point image which is most similar to the real shooting fixed point image from at least one preset fixed point image by adopting a characteristic extraction algorithm, and taking fixed point coordinates corresponding to the target fixed point image as starting point coordinates;
the recommended route generation module is used for acquiring an end point coordinate, and generating at least two recommended navigation routes according to the start point coordinate and the end point coordinate;
the mobile terminal moving module is used for acquiring obstacle avoidance detection results of the mobile shooting terminal on at least two recommended navigation routes, selecting the recommended navigation route which is in an obstacle-free state and has the shortest distance as a target navigation route, sending the target navigation route to the mobile shooting terminal, and controlling the mobile shooting terminal to move according to the target navigation route.
8. The intelligent navigation apparatus of claim 7, wherein the means for acquiring the live image comprises:
The barrier-free direction obtaining unit is used for detecting each movable direction of the movable shooting end based on at least one movable direction and obtaining at least one barrier-free direction;
and the real shooting image acquisition unit is used for controlling the mobile shooting end to shoot by taking the barrier-free direction as the shooting direction, and acquiring a real shooting fixed-point image.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the intelligent navigation method according to any one of claims 1 to 6 when the computer program is executed by the processor.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the intelligent navigation method according to any one of claims 1 to 6.
CN201811008410.5A 2018-08-31 2018-08-31 Intelligent navigation method, intelligent navigation device, computer equipment and storage medium Active CN109238286B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811008410.5A CN109238286B (en) 2018-08-31 2018-08-31 Intelligent navigation method, intelligent navigation device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN109238286A CN109238286A (en) 2019-01-18
CN109238286B true CN109238286B (en) 2024-05-03

Family

ID=65069338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811008410.5A Active CN109238286B (en) 2018-08-31 2018-08-31 Intelligent navigation method, intelligent navigation device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109238286B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109916408A (en) * 2019-02-28 2019-06-21 深圳市鑫益嘉科技股份有限公司 Robot indoor positioning and air navigation aid, device, equipment and storage medium
CN110599089B (en) * 2019-08-30 2020-11-03 北京三快在线科技有限公司 Isolation strip position determining method and device, storage medium and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004163241A (en) * 2002-11-13 2004-06-10 Nec Access Technica Ltd Wireless portable terminal, navigation method used for the same, and its program
CN103424113A (en) * 2013-08-01 2013-12-04 毛蔚青 Indoor positioning and navigating method of mobile terminal based on image recognition technology
CN104422439A (en) * 2013-08-21 2015-03-18 希姆通信息技术(上海)有限公司 Navigation method, apparatus, server, navigation system and use method of system
CN105318881A (en) * 2014-07-07 2016-02-10 腾讯科技(深圳)有限公司 Map navigation method, and apparatus and system thereof
CN106382930A (en) * 2016-08-18 2017-02-08 广东工业大学 An indoor AGV wireless navigation method and a device therefor
CN107490377A (en) * 2017-07-17 2017-12-19 五邑大学 Indoor map-free navigation system and navigation method
CN107544507A (en) * 2017-09-28 2018-01-05 速感科技(北京)有限公司 Mobile robot control method for movement and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101329111B1 (en) * 2012-05-02 2013-11-14 한국과학기술연구원 System and method for indoor navigation


Also Published As

Publication number Publication date
CN109238286A (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN109239659B (en) Indoor navigation method, device, computer equipment and storage medium
CN109062207B (en) Charging seat positioning method and device, robot and storage medium
CN109068278B (en) Indoor obstacle avoidance method and device, computer equipment and storage medium
CN108297115B (en) Autonomous repositioning method for robot
CN109035299A (en) Method for tracking target, device, computer equipment and storage medium
CN107742304B (en) Method and device for determining movement track, mobile robot and storage medium
CN109959376B (en) Trajectory correction method, and navigation route drawing method and device related to indoor route
US20210097103A1 (en) Method and system for automatically collecting and updating information about point of interest in real space
US20150338497A1 (en) Target tracking device using handover between cameras and method thereof
CN112258567A (en) Visual positioning method and device for object grabbing point, storage medium and electronic equipment
CN110134117B (en) Mobile robot repositioning method, mobile robot and electronic equipment
CN109099889B (en) Close-range photogrammetry system and method
CN109238286B (en) Intelligent navigation method, intelligent navigation device, computer equipment and storage medium
US9239965B2 (en) Method and system of tracking object
JP2010033447A (en) Image processor and image processing method
CN115346256A (en) Robot searching method and system
CN109685062A (en) A kind of object detection method, device, equipment and storage medium
CN114353807B (en) Robot positioning method and positioning device
Lee et al. Intelligent robot for worker safety surveillance: Deep learning perception and visual navigation
EP4040400A1 (en) Guided inspection with object recognition models and navigation planning
CN109579793B (en) Terrain mapping method, apparatus, flight platform, computer device and storage medium
CN112689234B (en) Indoor vehicle positioning method, device, computer equipment and storage medium
CN112631333B (en) Target tracking method and device of unmanned aerial vehicle and image processing chip
CN106204516B (en) Automatic charging method and device for robot
CN114661049A (en) Inspection method, inspection device and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant