Disclosure of Invention
The invention aims to provide a method and a system for reminding a navigation object about its traveling route, which can promptly remind the navigation object whether its traveling route is correct and reduce wasted time for the user.
The technical scheme provided by the invention is as follows:
the invention provides a method for reminding a navigation object of a traveling route, which comprises the following steps:
S1000, acquiring the current position of the navigation object and route-asking request information; the route-asking request information comprises a target position;
S2000, generating a navigation route according to the current position and the target position;
S3000, acquiring the traveling route of the navigation object; the traveling route is the route along which the navigation object, starting from the current position, moves according to the navigation route;
S4000, judging whether the traveling route matches the navigation route; if not, executing step S5000;
S5000, generating traveling error information.
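The steps above can be sketched as a short Python routine; every argument name (get_current_position, plan_route, track_travel_route, routes_match) is a hypothetical stand-in for a positioning, planning, tracking or matching component, not part of the disclosed implementation:

```python
def remind_travel_route(get_current_position, route_request,
                        plan_route, track_travel_route, routes_match):
    """Sketch of steps S1000-S5000; every argument is a hypothetical
    callable or mapping standing in for a component of the system."""
    current_pos = get_current_position()                    # S1000
    target_pos = route_request["target_position"]           # S1000
    navigation_route = plan_route(current_pos, target_pos)  # S2000
    travel_route = track_travel_route(current_pos)          # S3000
    if not routes_match(travel_route, navigation_route):    # S4000
        return "traveling error information"                # S5000
    return None  # matched: no reminder needed
```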
Further, the step S3000 includes the steps of:
S3010, acquiring target image frames; each target image frame comprises the navigation object;
S3020, extracting features from the corresponding target image frames through a plurality of classifiers to obtain target boxes;
S3030, calculating the response value of each target box, and confirming the position corresponding to the target box with the maximum response value as the spatial position of the navigation object;
S3040, generating the traveling route of the navigation object from all the spatial positions.
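Steps S3020 and S3030 amount to an argmax over classifier responses; a minimal Python sketch, in which the candidate boxes and the classifier callables are assumed inputs:

```python
def locate_navigation_object(candidate_boxes, classifiers):
    """Sketch of S3020-S3030: each classifier scores its candidate box,
    and the box with the maximum response value gives the spatial
    position of the navigation object. Boxes are arbitrary position
    objects; classifiers are callables returning a response value."""
    best_box, best_response = None, float("-inf")
    for box, clf in zip(candidate_boxes, classifiers):
        response = clf(box)
        if response > best_response:
            best_box, best_response = box, response
    return best_box
```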
Further, the step S3010 includes the steps of:
S3001, acquiring a first video image within the current detection range;
S3002, when the first video image is acquired within the current detection range and the navigation object is not lost, performing image processing on the first video image to obtain the target image frames;
S3003, when the first video image is acquired within the current detection range and the navigation object is lost, expanding the detection range, up to a rated detection range;
S3004, when the navigation object is re-detected from its apparent information within a preset time length using the expanded detection range, acquiring a second video image within the expanded detection range, and performing image processing on the first video image and the second video image to obtain the target image frames;
S3005, when the navigation object is not re-detected from its apparent information within the preset time length using the expanded detection range, continuing to expand the detection range; if the navigation object is still not re-detected within the preset time length after the detection range has been expanded to the rated detection range, stopping tracking the navigation object.
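The expand-and-retry logic of steps S3003 to S3005 can be sketched as follows; detect_within_timeout is a hypothetical callable that tries to re-detect the object from its apparent information within the preset time length:

```python
def reacquire_target(detect_within_timeout, current_range, rated_range, step):
    """Sketch of S3003-S3005: after losing the navigation object, expand
    the detection range step by step. detect_within_timeout(rng) is a
    hypothetical callable that attempts re-detection from apparent
    information within the preset time length at detection range rng."""
    rng = current_range
    while True:
        rng = min(rng + step, rated_range)  # S3003: expand the range
        if detect_within_timeout(rng):      # S3004: re-detected
            return rng                      # keep using this range
        if rng >= rated_range:              # S3005: rated range exhausted
            return None                     # stop tracking
```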
Further, the step S4000 includes the steps of:
S4100, acquiring the moving path s1 of the navigation object; the traveling route S = s1 + s2 + … + si, i ∈ N, where s1 is the first moving path of the traveling route starting from the current position, s2 is the second moving path of the traveling route, and si is the i-th moving path of the traveling route;
S4200, judging whether the bearing difference between the moving path s1 and the comparison path d1 is within a preset difference range; the navigation route D = d1 + d2 + … + di, i ∈ N, where d1 is the first moving path of the navigation route starting from the current position, d2 is the second moving path of the navigation route, and di is the i-th moving path of the navigation route; if not, executing step S5000.
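A minimal sketch of the bearing comparison in step S4200, representing each path by its start and end points; the 30-degree tolerance is an assumed example of the preset difference range, not a value fixed by the invention:

```python
import math

def bearing(p, q):
    """Bearing (degrees, 0-360) of the segment from point p to point q."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0])) % 360

def first_segment_matches(s1, d1, max_diff_deg=30.0):
    """Sketch of S4200: compare the bearing of the first traveled
    segment s1 with the first navigation segment d1. Each segment is a
    (start, end) pair of (x, y) points; max_diff_deg is an assumed
    preset difference range."""
    diff = abs(bearing(*s1) - bearing(*d1)) % 360
    diff = min(diff, 360 - diff)  # wrap the difference into [0, 180]
    return diff <= max_diff_deg
```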
Further, before step S1000, the method further comprises the steps of:
S0100, acquiring user voice information, and recognizing the voice information to obtain a key field;
S0200, judging whether the key field comprises a preset path inquiry field; if yes, executing step S0300; otherwise, returning to step S0100;
S0300, rotating the camera to face a target direction; the target direction is the direction from which the user voice information corresponding to the preset path inquiry field came;
S0400, within the preset acquisition range corresponding to the camera, determining that the user with the largest captured image size in the capture frame is the navigation object.
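Step S0400 reduces to picking the detection with the largest bounding-box area, i.e. the user closest to the camera; a minimal sketch, assuming each detection is a (user_id, (x, y, w, h)) tuple:

```python
def pick_navigation_object(detections):
    """Sketch of S0400: among the users detected within the camera's
    preset acquisition range, choose the one whose capture frame
    (bounding box) has the largest area. Each detection is assumed to
    be a (user_id, (x, y, w, h)) tuple."""
    return max(detections, key=lambda d: d[1][2] * d[1][3])[0]
```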
The invention also provides a system for reminding the navigation object of the traveling route, which comprises:
the information acquisition module is used for acquiring the current position of the navigation object and route-asking request information; the route-asking request information comprises a target position;
The route generation module is used for generating a navigation route according to the current position and the target position;
the route acquisition module is used for acquiring the traveling route of the navigation object; the traveling route is the route along which the navigation object, starting from the current position, moves according to the navigation route;
the matching judgment module is used for judging whether the traveling route is matched with the navigation route;
and the information generation module, which generates traveling error information when the traveling route does not match the navigation route.
Further, the route acquisition module includes:
the image acquisition unit, which acquires target image frames; each target image frame comprises the navigation object;
the box acquisition unit, which extracts features from the corresponding target image frames through a plurality of classifiers to obtain target boxes;
the response value acquisition unit, which calculates the response value of each target box and confirms the position corresponding to the target box with the maximum response value as the spatial position of the navigation object;
and the traveling route generation unit, which generates the traveling route of the navigation object from all the spatial positions.
Further, the route obtaining module further includes:
the acquisition unit, which acquires a first video image within the current detection range, and acquires a second video image within the expanded detection range when the navigation object is re-detected from its apparent information within the preset time length;
the control unit, which controls the acquisition unit to expand the detection range, up to a rated detection range, when the first video image is acquired within the current detection range and the navigation object is lost;
the image processing unit, which processes the first video image to obtain the target image frames when the first video image is acquired within the current detection range and the navigation object is not lost;
the image processing unit is further used, when the navigation object is re-detected from its apparent information within the preset time length using the expanded detection range, for performing image processing on the first video image and the second video image to obtain the target image frames;
and the control unit is further used, when the navigation object is not re-detected from its apparent information within the preset time length using the expanded detection range, for continuing to expand the detection range, and for stopping tracking of the navigation object when the navigation object is still not re-detected within the preset time length after the detection range has been expanded to the rated detection range.
Further, the matching judgment module includes:
the path acquisition unit, which acquires the moving path s1 of the navigation object; the traveling route S = s1 + s2 + … + si, i ∈ N, where s1 is the first moving path of the traveling route starting from the current position, s2 is the second moving path of the traveling route, and si is the i-th moving path of the traveling route;
the comparison judgment unit, which judges whether the bearing difference between the moving path s1 and the comparison path d1 is within a preset difference range; the navigation route D = d1 + d2 + … + di, i ∈ N, where d1 is the first moving path of the navigation route starting from the current position, d2 is the second moving path of the navigation route, and di is the i-th moving path of the navigation route;
and the information generation module generates the traveling error information when the bearing difference between the moving path s1 and the comparison path d1 is outside the preset difference range.
Further, the system further comprises:
the voice acquisition module is used for acquiring the voice information of the user and identifying the voice information to obtain a key field;
The identification module is used for judging whether the key field comprises a preset path inquiry field or not;
the voice acquisition module is further used for acquiring new user voice information when the key field does not comprise the preset path inquiry field;
the rotating module is used for rotating the direction of the camera to the target direction when the key field comprises a preset path inquiry field; the target direction is the direction of the user voice information corresponding to the preset path inquiry field;
and the navigation object determination module, which determines, within the preset acquisition range corresponding to the camera, that the user with the largest captured image size in the capture frame is the navigation object.
The method and the system for reminding the traveling route of the navigation object can bring at least one of the following beneficial effects:
1) By comparing the traveling route of the navigation object with the navigation route, the invention can promptly remind the navigation object whether its traveling route is correct, reducing wasted time and improving the user's navigation experience.
2) The invention extracts features from the corresponding target image frames through a plurality of classifiers to obtain target boxes, and determines the spatial position of the navigation object from the response value of each target box to generate the traveling route of the navigation object; this overcomes interference from the external environment and improves the reliability and accuracy of tracking detection of the navigation object.
3) The invention can judge whether the traveling direction and path of the navigation object are correct by comparing only the first sections of the navigation route and the traveling route starting from the current position, which speeds up judgment and analysis, reduces the time needed to generate traveling error information to remind the navigation object, and further accelerates prompting.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure as a product. In addition, in order to make the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically illustrated or only labeled. In this document, "one" means not only "only one" but also a case of "more than one".
An embodiment of a method for reminding a navigation object of a travel route according to the present invention is shown in fig. 1, and includes:
S1000, acquiring the current position of the navigation object and route-asking request information; the route-asking request information comprises a target position;
S2000, generating a navigation route according to the current position and the target position;
S3000, acquiring the traveling route of the navigation object; the traveling route is the route along which the navigation object, starting from the current position, moves according to the navigation route;
S4000, judging whether the traveling route matches the navigation route; if not, executing step S5000;
S5000, generating traveling error information.
Specifically, in this embodiment, a navigation device such as a navigation robot acquires the current position of the navigation object. The current position may be acquired by GPS positioning, or by three-point positioning through a plurality of camera nodes; any indoor positioning method falls within the protection scope of the present invention. The navigation device also acquires the route-asking request information input by the navigation object; the route-asking request information may be acquired by voice input, or manually input by the user through a human-computer interaction interface. After acquiring the current position of the navigation object and the route-asking request information, the navigation device performs route planning according to the current position and the target position to generate a navigation route, and the navigation object starts moving toward the target position according to the navigation route. When the navigation object leaves the current position and moves along the navigation route, the navigation device collects the traveling route of the navigation object in real time, compares the traveling route with the navigation route, and judges whether they match. If the traveling route matches the navigation route, the navigation device may generate no prompt information, or may generate correct-traveling information to inform the navigation object that its traveling direction and path are correct; if the traveling route does not match the navigation route, the navigation device generates traveling error information to remind the navigation object that its traveling direction and path are wrong.
The method and the device can promptly remind the navigation object whether its traveling route is correct, reducing wasted time and improving the user's navigation experience.
Another embodiment of the method for reminding a navigation object of the travel route according to the present invention, as shown in fig. 2, includes:
S1000, acquiring the current position of the navigation object and route-asking request information; the route-asking request information comprises a target position;
S2000, generating a navigation route according to the current position and the target position;
S3010, acquiring target image frames; each target image frame comprises the navigation object;
S3020, extracting features from the corresponding target image frames through a plurality of classifiers to obtain target boxes;
S3030, calculating the response value of each target box, and confirming the position corresponding to the target box with the maximum response value as the spatial position of the navigation object;
S3040, generating the traveling route of the navigation object from all the spatial positions;
S4000, judging whether the traveling route matches the navigation route; if not, executing step S5000;
S5000, generating traveling error information.
Specifically, in this embodiment, the navigation object is detected and tracked based on the YOLO V2 algorithm and the KCF algorithm. Tracking detection on the navigation device proceeds as follows:
Navigation object detection
The image frames are subjected to navigation object detection by the YOLO V2 algorithm. YOLO V2 adopts a 32-layer neural network structure (comprising convolutional layers and pooling layers) and performs pre-training detection on the image frames with a network input of 416 × 416 resolution. Localization is predicted with anchor boxes of 5 sizes (fewer wide, flat boxes and more tall, narrow boxes, matching the shape of a person). A shallow feature map (e.g. 26 × 26 resolution) is connected to a deep feature map (13 × 13 resolution): the two feature maps of different resolutions are linked by a passthrough connection that stacks adjacent spatial features into different channels (rather than spatial positions), turning the 26 × 26 × 512 feature map into a 13 × 13 × 2048 feature map, similar to the shortcut connection of ResNet, which is then concatenated with the original deep feature map. The model input size is changed every few iterations (every 10 batches); this multi-scale training method gives the detector good robustness. After the input size of the model is changed, training continues, and this training mechanism forces the 32-layer neural network structure to learn to predict over various input dimensions, meaning the same network can predict detection results at different resolutions. Because the YOLO V2 model runs fast on small-scale inputs, YOLO V2 offers a trade-off between speed and accuracy, and can improve processing speed while maintaining accuracy when detecting low-resolution images.
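The passthrough connection described above (26 × 26 × 512 reorganized into 13 × 13 × 2048) is a space-to-depth rearrangement; a NumPy sketch, independent of any particular deep-learning framework:

```python
import numpy as np

def passthrough(feature_map, stride=2):
    """Space-to-depth rearrangement used by the YOLO V2 passthrough
    layer: each stride x stride spatial neighborhood is stacked into the
    channel axis, so a 26x26x512 map becomes 13x13x2048 and can be
    concatenated with the deep 13x13 feature map."""
    h, w, c = feature_map.shape
    out = feature_map.reshape(h // stride, stride, w // stride, stride, c)
    out = out.transpose(0, 2, 1, 3, 4)  # group the stride offsets together
    return out.reshape(h // stride, w // stride, c * stride * stride)
```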
Navigation object tracking
Discriminative tracking is performed by the KCF algorithm: a classifier is trained during tracking, the classifier is used to detect whether the predicted position in the next frame contains the navigation object, and the new detection result is used to update the training set and hence the classifier. When training the classifier, the navigation object region is generally taken as the positive sample and the regions surrounding the navigation object as negative samples, with regions closer to the navigation object more likely to be positive. Positive and negative samples are generated with a circulant matrix of the region around the target, and the classifier is trained by ridge regression. Using the diagonalization property of circulant matrices in Fourier space, matrix operations are converted into element-wise products, which greatly reduces the amount of computation, raises the computation speed, and lets the algorithm meet real-time requirements. Ridge regression in a linear space is mapped to a nonlinear space through a kernel function; in the nonlinear space, the computation can likewise be simplified by solving the dual problem under some common constraints, again using the Fourier-space diagonalization of circulant matrices.
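The circulant-matrix trick can be illustrated for the linear-kernel case: when the training matrix is the circulant matrix X = C(x) built from a base sample x, the ridge-regression solution diagonalizes in Fourier space and the matrix inverse becomes an element-wise division. A NumPy sketch (the regularization value is an assumed example):

```python
import numpy as np

def ridge_regression_fft(x, y, lam=1e-2):
    """Linear-kernel sketch of the KCF training step: with the circulant
    training matrix X = C(x) built from the base sample x, the ridge
    solution w = (X^T X + lam*I)^-1 X^T y diagonalizes in Fourier space,
    so the matrix inverse reduces to element-wise division."""
    xf = np.fft.fft(x)
    yf = np.fft.fft(y)
    wf = np.conj(xf) * yf / (np.conj(xf) * xf + lam)  # element-wise only
    return np.real(np.fft.ifft(wf))
```

For an n-sample base vector this costs O(n log n) instead of the O(n^3) of a direct solve, which is what lets the tracker meet the real-time requirement; the kernelized version in KCF applies the same diagonalization to the dual variables.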
Position determination of navigation object
To judge whether a tracked region is the navigation object or surrounding background information, samples are collected mainly using the circulant matrix, and the algorithm is accelerated with the fast Fourier transform. Tracking is based on the detected navigation object: before tracking, the navigation object is first detected to obtain its position, the navigation object is then learned and tracked, the response value of each target box is calculated, and the position corresponding to the target box with the maximum response value is confirmed as the spatial position of the navigation object. As shown in fig. 2, the navigation object starts to move from the current position along the navigation route, and the navigation device shoots video. The image on the left side of fig. 3 is the current image frame P1; the navigation object is framed by the dashed box 6 in the current image frame P1, and the pixel coordinate Q1 of the navigation object in the imaged image is obtained. The solid box 3 is the sample target box containing the navigation object, and the other solid boxes (solid box 1, solid box 2, solid box 4 and solid box 5) are boxes corresponding to the sample target box, namely samples obtained by cyclically shifting the sample target box; a classifier is trained with these samples. After the classifier is trained, for the next image frame P2 (the image on the right side of fig. 3), the region corresponding to the sample target box (solid box 3) is first sampled, and the samples are cyclically shifted, shown aligned to the target in the right image of fig. 3 (drawn aligned for ease of understanding; they are not actually aligned). The classifier is used to calculate the response value of each target box; the response value of the target box corresponding to solid box 1 is clearly the largest, so the pixel coordinate Q2 of the navigation object in the imaged image is obtained from the position of the target box corresponding to solid box 1. The next image frame Pj, j ∈ N, is then processed, and the above steps are repeated to measure the pixel coordinate Qj of the navigation object in the imaged image. According to the conversion relation between the world coordinate system and the image coordinate system, the coordinate positions of the navigation object are obtained (including the coordinate position M1 corresponding to the pixel coordinate Q1, i.e. the current position, the coordinate position M2 corresponding to the pixel coordinate Q2, and the coordinate position Mj corresponding to the pixel coordinate Qj), and the traveling route of the navigation object is drawn from all the coordinate positions.
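The pixel-to-world conversion mentioned above can be sketched with a ground-plane homography; the 3 × 3 matrix H is an assumed pre-calibrated mapping (the document does not specify how the conversion relation is calibrated):

```python
import numpy as np

def pixel_to_ground(H, q):
    """Map a pixel coordinate q = (u, v) to a ground-plane world
    position using a 3x3 homography H (assumed pre-calibrated, e.g.
    with OpenCV's findHomography; the calibration method is not
    specified by the document)."""
    u, v = q
    X, Y, W = H @ np.array([u, v, 1.0])
    return (X / W, Y / W)

def pixels_to_route(H, pixel_coords):
    """Convert tracked pixel coordinates Q1, Q2, ..., Qj into world
    positions M1, M2, ..., Mj forming the traveling route."""
    return [pixel_to_ground(H, q) for q in pixel_coords]
```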
In this embodiment, under a tracking-by-detection framework, the accuracy, efficiency and reliability of tracking detection are improved by using the YOLO V2 algorithm together with the KCF algorithm. KCF is fast and can naturally exploit multiple feature channels, and each layer of the detection network has many feature channels. Different layers of the neural network structure describe the target at different levels of abstraction: shallow-layer features are simple, while high-layer semantic features are better suited for localization. Because the target changes continuously during tracking, the difficulty of tracking also changes, and shallow-layer features may become inaccurate; at that point several trackers are cascaded. Of course, if a shallow-layer tracker already tracks well, the tracking result is good, the subsequent computations are unnecessary, and time is saved; whether a tracker performs well is judged by the currently computed response value, a large response value indicating a good tracking result. Therefore, different layers of the neural network structure are selected to build several cascaded KCFs, and several independent classifiers are established from the multiple network layers, improving the reliability and accuracy of navigation object tracking.
Another embodiment of the method for reminding a navigation object of the travel route according to the present invention, as shown in fig. 4, includes:
S1000, acquiring the current position of the navigation object and route-asking request information; the route-asking request information comprises a target position;
S2000, generating a navigation route according to the current position and the target position;
S3001, acquiring a first video image within the current detection range;
S3002, when the first video image is acquired within the current detection range and the navigation object is not lost, performing image processing on the first video image to obtain the target image frames;
S3003, when the first video image is acquired within the current detection range and the navigation object is lost, expanding the detection range, up to a rated detection range;
S3004, when the navigation object is re-detected from its apparent information within a preset time length using the expanded detection range, acquiring a second video image within the expanded detection range, and performing image processing on the first video image and the second video image to obtain the target image frames;
S3005, when the navigation object is not re-detected from its apparent information within the preset time length using the expanded detection range, continuing to expand the detection range; if the navigation object is still not re-detected within the preset time length after the detection range has been expanded to the rated detection range, stopping tracking the navigation object;
S3010, acquiring target image frames; each target image frame comprises the navigation object;
S3020, extracting features from the corresponding target image frames through a plurality of classifiers to obtain target boxes;
S3030, calculating the response value of each target box, and confirming the position corresponding to the target box with the maximum response value as the spatial position of the navigation object;
S3040, generating the traveling route of the navigation object from all the spatial positions;
S4000, judging whether the traveling route matches the navigation route; if not, executing step S5000;
S5000, generating traveling error information.
Specifically, in this embodiment, a camera of the navigation device acquires the first video image within the current detection range and judges in real time whether the navigation object is lost. The navigation object may be lost for various reasons, such as a missed detection by a classifier, or tracking loss caused by occluding background objects in the scene (such as walls and trees). When the navigation object is not lost, the first video image is directly processed to obtain the target image frames. When the navigation object is lost, the detection range is expanded, and the device judges, from the apparent information of the navigation object, whether the navigation object is re-detected within a preset time length in the expanded detection range. The apparent information comprises the distinguishing characteristics between the navigation object and the background, such as head position, volume, height, skin colour, hair style, clothing colour and clothing texture. If the lost navigation object is re-detected within the preset time length, a second video image is acquired within the expanded detection range, and the first video image and the second video image are processed to obtain the target image frames. If the lost navigation object is not re-detected within the preset time length, the detection range continues to be expanded until it reaches the rated detection range; if the lost navigation object still cannot be detected within the preset time length after the detection range reaches the rated detection range, tracking detection of the navigation object is stopped.
Usually, a lost navigation object reappears within a certain range of the position where it disappeared, within a certain period of time; the aim of this stage is therefore to find the lost navigation object again. The lost navigation object is retained for a certain period, the apparent-information similarity between the lost navigation object and each re-detected object is compared within a certain range of the disappearance position, and if the similarity exceeds a certain threshold, the re-detected object within the detection frame is the navigation object that disappeared earlier. This avoids the problem that the navigation object cannot be continuously tracked and detected after being lost, and improves the robustness of detecting the traveling route of the navigation object.
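The re-identification rule above can be sketched as a similarity test on appearance feature vectors; cosine similarity and the 0.8 threshold are assumed choices for illustration, not values specified by the document:

```python
import numpy as np

def is_same_object(lost_appearance, candidate_appearance, threshold=0.8):
    """Sketch of the re-identification rule: compare the apparent
    information of the lost navigation object (a feature vector, e.g.
    height, skin colour, clothing colour histogram) with that of a
    re-detected candidate. Cosine similarity and the 0.8 threshold are
    assumed, not prescribed by the document."""
    a = np.asarray(lost_appearance, dtype=float)
    b = np.asarray(candidate_appearance, dtype=float)
    sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return sim > threshold
```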
Another embodiment of the method for reminding a navigation object of the travel route according to the present invention, as shown in fig. 5, includes:
s1000, acquiring the current position of the navigation object and the request information of asking for a way; the route asking request information comprises a target position;
s2000, generating a navigation route according to the current position and the target position;
s3001, acquiring a first video image according to the current detection range;
s3002, when the first video image is obtained according to the current detection range and the navigation object is not lost, performing image processing on the first video image to obtain the target image frame;
s3003, when the first video image is obtained according to the current detection range and the navigation object is lost, expanding the detection range until the detection range is expanded to a rated detection range;
s3004, when the navigation object is detected again according to the apparent information of the navigation object within the preset time length according to the expanded detection range, acquiring a second video image according to the expanded detection range, and performing image processing on the first video image and the second video image to obtain the target image frame;
s3005 when the navigation object is not detected again according to the apparent information of the navigation object within the preset time length according to the expanded detection range, continuing to expand the detection range until the navigation object is not detected again according to the apparent information of the navigation object within the preset time length after the navigation object is expanded to the rated detection range, and stopping tracking the navigation object.
S3010, acquiring a target image frame; the target image frame comprises a navigation object;
S3020, extracting features from the corresponding target image frames through a plurality of classifiers to obtain target block diagrams;
S3030, calculating a response value of each target block diagram, and confirming the position corresponding to the target block diagram with the maximum response value as the spatial position of the navigation object;
S3040, generating a traveling route of the navigation object according to all the spatial positions;
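Steps S3020 to S3040 amount to scoring candidate target block diagrams with an ensemble of classifiers and taking, in each frame, the position with the maximum combined response. A minimal sketch follows; the box format and scoring functions are illustrative assumptions, not the patent's classifier design:

```python
def locate_object(frame_boxes, classifiers):
    """S3020/S3030: score each candidate box with every classifier and
    return the position of the box with the maximum summed response value."""
    best_pos, best_resp = None, float("-inf")
    for box in frame_boxes:
        resp = sum(clf(box) for clf in classifiers)   # combined response value
        if resp > best_resp:
            best_pos, best_resp = box["pos"], resp
    return best_pos

def build_travel_route(frames, classifiers):
    """S3040: the traveling route is the per-frame sequence of spatial positions."""
    return [locate_object(boxes, classifiers) for boxes in frames]
```

Each element of `frames` stands for the set of candidate boxes extracted from one target image frame.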
S4100, acquiring a moving path s1 of the navigation object; the traveling route S = s1 + s2 + … + si, i ∈ N, wherein s1 is the first moving path in the traveling route with the current position as a starting point, s2 is the second moving path in the traveling route, and si is the i-th moving path in the traveling route;
S4200, determining whether the bearing difference between the moving path s1 and the comparison path d1 is within a preset difference range; the navigation route D = d1 + d2 + … + di, i ∈ N, wherein d1 is the first moving path in the navigation route with the current position as a starting point, d2 is the second moving path in the navigation route, and di is the i-th moving path in the navigation route; if not, executing step S5000;
S5000, generating traveling error information.
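The bearing comparison of steps S4100 to S5000 can be sketched as follows. The point format, the bearing convention, and the 30-degree threshold are assumptions for illustration; the patent does not fix a concrete difference range:

```python
import math

def bearing(path):
    """Bearing in degrees (0-360) from the first to the last point of a path."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    return math.degrees(math.atan2(y1 - y0, x1 - x0)) % 360.0

def travel_error(s1, d1, max_diff_deg=30.0):
    """S4200/S5000: return traveling error information when the bearing
    difference between moving path s1 and comparison path d1 falls outside
    the preset difference range; return None when the paths match."""
    diff = abs(bearing(s1) - bearing(d1))
    diff = min(diff, 360.0 - diff)        # wrap-around angular difference
    if diff > max_diff_deg:
        return f"traveling error: bearing off by {diff:.1f} degrees"
    return None
```

Because only the first segments s1 and d1 are compared, the check can fire as soon as the navigation object finishes its first stretch of movement.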
Specifically, in the above embodiment, the complete traveling route of the navigation object is tracked and detected throughout, where the traveling route is the route of the user acquired within the effective acquisition range of the image acquisition device (such as a camera) of the navigation device, and the complete traveling route is compared and matched with the navigation route. In the present embodiment, however, as shown in fig. 6, only the first section of the traveling route of the navigation object, i.e. the moving path s1, and the first section of the navigation route, i.e. the comparison path d1, are compared. Whether the traveling direction and traveling route of the navigation object are correct can thus be determined by comparing only the first section of the navigation route with the front section of the traveling route starting from the current position. This accelerates determination and analysis, reduces the time needed to generate the traveling error information that reminds the navigation object, and thereby improves prompting efficiency: the navigation object is reminded in time of whether its traveling route is correct, wasted time is reduced, and the user's navigation experience is improved.
One embodiment of a reminding system for navigating a route of an object according to the present invention is shown in fig. 7, and includes:
an information acquisition module 100 for acquiring the current position of the navigation object and the request information of asking for a way; the route asking request information comprises a target position;
A route generation module 200, which generates a navigation route according to the current position and the target position;
a route acquisition module 300 that acquires a travel route of the navigation object; the traveling route is a route in which a navigation object starts to move by taking the current position as a starting point according to the navigation route;
a matching judgment module 400 for judging whether the travel route matches with the navigation route;
an information generating module 500 that generates traveling error information when the traveling route does not match the navigation route.
Specifically, this embodiment is a system embodiment corresponding to the above method embodiment, and specific effects refer to the above corresponding method embodiment, which is not described in detail herein.
Another embodiment of a reminding system for navigating a route of an object according to the present invention, as shown in fig. 8, comprises:
an information acquisition module 100 for acquiring the current position of the navigation object and the request information of asking for a way; the route asking request information comprises a target position;
a route generation module 200, which generates a navigation route according to the current position and the target position;
a route acquisition module 300 that acquires a travel route of the navigation object; the traveling route is a route in which a navigation object starts to move by taking the current position as a starting point according to the navigation route;
A matching judgment module 400 for judging whether the travel route matches the navigation route;
an information generating module 500 that generates traveling error information when the traveling route does not match the navigation route;
the route acquisition module 300 includes:
an image acquisition unit 310 that acquires a target image frame; the target image frame comprises a navigation object;
the block diagram obtaining unit 320 is configured to perform feature extraction on the corresponding target image frames through a plurality of classifiers to obtain target block diagrams;
the response value obtaining unit 330 calculates a response value of each target block diagram, and determines a position corresponding to the target block diagram with the largest response value as a spatial position of the navigation object;
the travel route generation unit 340 generates a travel route of the navigation object based on all the spatial positions.
Preferably, the route obtaining module 300 further includes:
the acquisition unit 350 acquires a first video image according to the current detection range, and, when the navigation object is re-detected based on its apparent information within the preset time length, acquires a second video image according to the expanded detection range;
the control unit 360 is used for controlling the acquisition unit to expand the detection range until the detection range is expanded to a rated detection range when the first video image is acquired according to the current detection range and the navigation object is lost;
the image processing unit 370 performs image processing on the first video image to obtain the target image frame when the first video image is acquired according to the current detection range and the navigation object is not lost;
the image processing unit 370 further performs image processing on the first video image and the second video image to obtain the target image frame when the navigation object is re-detected, based on its apparent information, within the preset time length in the expanded detection range, the second video image being acquired according to the expanded detection range;
the control unit 360 further continues to expand the detection range when the navigation object is not re-detected, based on its apparent information, within the preset time length in the expanded detection range; if the navigation object is still not re-detected within the preset time length after the detection range reaches the rated detection range, tracking of the navigation object is stopped.
Preferably, the matching determining module 400 includes:
a route acquisition unit 410 that acquires the moving path s1 of the navigation object; the traveling route S = s1 + s2 + … + si, i ∈ N, wherein s1 is the first moving path in the traveling route with the current position as a starting point, s2 is the second moving path in the traveling route, and si is the i-th moving path in the traveling route;
a comparison and determination unit 420 for determining whether the bearing difference between the moving path s1 and the comparison path d1 is within a preset difference range; the navigation route D = d1 + d2 + … + di, i ∈ N, wherein d1 is the first moving path in the navigation route with the current position as a starting point, d2 is the second moving path in the navigation route, and di is the i-th moving path in the navigation route;
the information generating module 500 generates traveling error information when the bearing difference between the moving path s1 and the comparison path d1 is outside the preset difference range.
Preferably, the system further comprises:
the voice acquiring module 600 acquires voice information of a user, and identifies the voice information to obtain a key field;
an identification module 700 that determines whether the key field includes a preset path inquiry field;
the voice acquiring module 600, when the key field does not include the preset path inquiry field, reacquires new voice information of the user;
a rotation module 800, configured to rotate the camera to a target direction when the key field includes the preset path inquiry field; the target direction is the direction from which the user voice information corresponding to the preset path inquiry field was received;
a navigation object determining module 900 that determines the acquired user with the largest capture frame within the preset acquisition range of the camera as the navigation object.
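The selection rule of the navigation object determining module 900 can be sketched as choosing the largest bounding box among in-range detections. The detection dictionary format below is an illustrative assumption, not the patent's data structure:

```python
def pick_navigation_object(detections):
    """Choose the detected user whose capture frame (bounding box) has the
    largest area inside the camera's preset acquisition range; return that
    user's identifier, or None when no user is in range."""
    in_range = [d for d in detections if d["in_range"]]
    if not in_range:
        return None
    return max(in_range, key=lambda d: d["w"] * d["h"])["id"]
```

The largest box serves as a proxy for the nearest user, i.e. the person who just asked for directions.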
Specifically, this embodiment is a system embodiment corresponding to the above method embodiment, and specific effects refer to the above corresponding method embodiment, which is not described in detail herein.
According to the above embodiment, by way of example, as shown in fig. 9,
in an indoor place with a complicated environment, such as a mall, a hospital, a station, an airport, etc., the navigation robot may provide a path indicating service for a specific target location for a navigation object.
The main content of the task is summarized as follows:
1) the navigation object inquires of the navigation robot about the walking route to a specific nearby place (such as a certain exit, a certain restaurant, a toilet and the like);
2) the navigation robot gives a navigation route from the local position, namely the current position, to the target position according to the map;
3) the navigation robot models and tracks the inquirer, i.e. the navigation object, for a certain distance; if the moving path of the navigation object over this distance does not match the comparison path given by the robot, the navigation object is corrected and prompted by voice.
The sequence of events is as follows: the navigation object asks for a route; the navigation object walks according to the navigation route provided by the navigation robot; the navigation robot tracks the navigation object and compares the trajectory of the navigation object, i.e. the traveling route, with the navigation route provided by the robot for consistency. The difficulty of the target-tracking stage is this: when asking for directions, the navigation object is close to the robot and facing it, whereas afterwards the whole body of the navigation object is tracked from behind, so changes in target scale, target rotation, and target appearance must be handled; and because the navigation object is in a crowded place, interference from other pedestrians must also be overcome. The KCF algorithm and the YOLOv2 algorithm may be adopted for detection and tracking.
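One common way to combine a detector and a correlation tracker, as the previous paragraph suggests, is a detect-then-track loop: the detector initialises the tracker and takes over again whenever tracking confidence collapses (e.g. due to scale change or pedestrian occlusion). The sketch below uses plain callables as stand-ins for YOLOv2-style detection and KCF-style tracking; all interfaces and the confidence threshold are assumptions, not the patent's implementation:

```python
def detect_and_track(frames, detector, make_tracker, conf_threshold=0.5):
    """Hypothetical detect-then-track loop.

    `detector(frame)` returns a bounding box or None; `make_tracker(box)`
    returns a callable that maps a frame to (box, confidence). The tracker
    is re-initialised from detection whenever confidence drops."""
    positions, tracker = [], None
    for frame in frames:
        if tracker is None:
            box = detector(frame)              # detector-style re-detection
            if box is not None:
                tracker = make_tracker(box)    # tracker initialised on the box
            positions.append(box)              # None while the object is lost
            continue
        box, conf = tracker(frame)             # per-frame tracking
        if conf < conf_threshold:
            tracker, box = None, None          # hand control back to detector
        positions.append(box)
    return positions
```

The resulting per-frame positions correspond to the spatial positions from which the traveling route is generated.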
In practice, the navigation route will usually include multiple segments. To improve the working efficiency of the navigation robot, the tracking-and-correction range is limited to the moving path s1; that is, when the traveling direction and traveling path of the navigation object are correct within the moving path s1, no further tracking-and-correction task is performed on the navigation object.
It should be noted that the above embodiments can be freely combined as necessary. The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.