CN109000634B - Navigation object traveling route reminding method and system - Google Patents

Info

Publication number
CN109000634B
Authority
CN
China
Prior art keywords
route
navigation object
navigation
target
detection range
Prior art date
Legal status
Active
Application number
CN201810565075.2A
Other languages
Chinese (zh)
Other versions
CN109000634A (en)
Inventor
蒋化冰
苏合检
何家飞
康力方
邹武林
米万珠
谭舟
严婷
Current Assignee
Shanghai Noah Wood Robot Technology Co ltd
Original Assignee
Shanghai Zhihuilin Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Zhihuilin Medical Technology Co ltd
Priority to CN201810565075.2A
Publication of CN109000634A
Application granted
Publication of CN109000634B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00

Abstract

The invention provides a method and a system for reminding a navigation object about its traveling route. The method comprises the following steps: S1000, acquiring the current position of the navigation object and the route-asking request information, the route-asking request information comprising a target position; S2000, generating a navigation route according to the current position and the target position; S3000, acquiring the traveling route of the navigation object, the traveling route being the route along which the navigation object moves according to the navigation route, taking the current position as the starting point; S4000, judging whether the traveling route matches the navigation route, and if not, executing step S5000; S5000, generating traveling error information. The invention reminds the navigation object in time whether its traveling route is correct, reducing wasted time for the user.

Description

Navigation object traveling route reminding method and system
Technical Field
The invention relates to the field of video tracking, and in particular to a method and a system for reminding a navigation object about its traveling route.
Background
In indoor places with complicated environments, such as malls, hospitals, stations and airports, an indoor navigation device such as a navigation robot may provide pedestrians with route navigation to a specific target location.
However, some navigation objects have a poor sense of direction. Even when such a user tries to walk along the navigation route generated by the indoor navigation device, the user may walk the wrong way and fail to find the target position. Even if the user later realizes the mistake, having deviated from the initial correct direction of the navigation route, the user may have to return to the original position or find another indoor navigation device to ask for navigation help again. This causes the navigation object to waste a large amount of time finding the correct route to the target position and degrades the user experience.
How to remind the navigation object in time whether the route it travels according to the navigation route is correct is therefore a problem that urgently needs to be solved.
Disclosure of Invention
The invention aims to provide a method and a system for reminding a navigation object about its traveling route, which can remind the navigation object in time whether its traveling route is correct and reduce wasted time for the user.
The technical scheme provided by the invention is as follows:
the invention provides a method for reminding a navigation object of a traveling route, which comprises the following steps:
S1000, acquiring the current position of the navigation object and the route-asking request information; the route-asking request information comprises a target position;
s2000, generating a navigation route according to the current position and the target position;
s3000, acquiring the traveling route of the navigation object; the traveling route is the route along which the navigation object moves according to the navigation route, taking the current position as the starting point;
s4000, judging whether the traveling route is matched with the navigation route; if not, executing step S5000;
s5000 generates traveling error information.
Further, the step S3000 includes the steps of:
s3010, acquiring a target image frame; the target image frame comprises a navigation object;
s3020 extracting features of the corresponding target image frames through a plurality of classifiers to obtain a target block diagram;
s3030 calculating a response value of each target block diagram, and confirming that the position corresponding to the target block diagram with the maximum response value is the space position of the navigation object;
s3040 generates a travel route of the navigation object from all the spatial positions.
Further, the step S3010 includes the steps of:
s3001, acquiring a first video image according to the current detection range;
s3002, when the first video image is obtained according to the current detection range and the navigation object is not lost, performing image processing on the first video image to obtain the target image frame;
S3003, when the first video image is obtained according to the current detection range and the navigation object is lost, expanding the detection range until the detection range is expanded to a rated detection range;
s3004, when the navigation object is detected again according to the apparent information of the navigation object within the preset time length according to the expanded detection range, acquiring a second video image according to the expanded detection range, and performing image processing on the first video image and the second video image to obtain the target image frame;
s3005, when the navigation object is not detected again according to its apparent information within the preset time length in the expanded detection range, continuing to expand the detection range; and when the navigation object is still not detected again according to its apparent information within the preset time length after the detection range has been expanded to the rated detection range, stopping tracking the navigation object.
Further, the step S4000 includes the steps of:
s4100, acquiring a movement path s1 of the navigation object; the travel route S = s1 + s2 + … + si, i ∈ N, where s1 is the first movement path in the travel route taking the current position as the starting point, s2 is the second movement path in the travel route, and si is the i-th movement path in the travel route;
S4200, judging whether the bearing difference between the movement path s1 and the comparison path d1 is within a preset difference range; the navigation route D = d1 + d2 + … + di, i ∈ N, where d1 is the first movement path in the navigation route taking the current position as the starting point, d2 is the second movement path in the navigation route, and di is the i-th movement path in the navigation route; if not, executing step S5000.
Further, the step S1000 includes, before the step, the steps of:
s0100, acquiring user voice information, and recognizing the voice information to obtain a key field;
s0200 judges whether the key field comprises a preset path inquiry field; if yes, executing step S0300; otherwise, returning to step S0100;
s0300 rotates the direction of the camera to the target direction; the target direction is the direction of the user voice information corresponding to the preset path inquiry field;
s0400, within the preset acquisition range corresponding to the camera, determining that the user whose acquired capture frame has the largest size is the navigation object.
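For illustration only (this sketch is not part of the original disclosure), the key-field check of step S0200 and the largest-capture-frame rule of step S0400 described above can be expressed as a small Python routine. The phrases in PATH_QUERY_FIELDS and the box format are assumptions, and the camera rotation of step S0300 is assumed to have happened before the capture frames are collected.

```python
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]  # (x, y, w, h) of a capture frame

# Preset path-inquiry fields; the actual field list is not given in the patent,
# these phrases are placeholders.
PATH_QUERY_FIELDS = ("where is", "how do i get to", "which way to")

def pick_navigation_object(recognized_text: str,
                           boxes_after_rotation: List[Box]) -> Optional[Box]:
    """S0200 + S0400 sketch: check the key field, then take the largest capture frame.

    The caller is assumed to have already rotated the camera toward the
    direction of the speech (S0300) before collecting boxes_after_rotation.
    """
    text = recognized_text.lower()
    if not any(field in text for field in PATH_QUERY_FIELDS):
        return None                                   # back to S0100: keep listening
    if not boxes_after_rotation:
        return None
    # S0400: the user whose capture frame has the largest area is the navigation object
    return max(boxes_after_rotation, key=lambda b: b[2] * b[3])

# toy usage: the second person has the larger capture frame and is selected
print(pick_navigation_object("Where is the restroom?",
                             [(10, 20, 40, 90), (200, 30, 60, 150)]))
```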
The invention also provides a system for reminding the navigation object of the traveling route, which comprises:
the information acquisition module is used for acquiring the current position of the navigation object and the route-asking request information; the route-asking request information comprises a target position;
The route generation module is used for generating a navigation route according to the current position and the target position;
the route acquisition module is used for acquiring the traveling route of the navigation object; the traveling route is the route along which the navigation object moves according to the navigation route, taking the current position as the starting point;
the matching judgment module is used for judging whether the traveling route is matched with the navigation route;
and the information generation module generates traveling error information when the traveling route does not match with the navigation route.
Further, the route acquisition module includes:
an image acquisition unit that acquires a target image frame; the target image frame comprises a navigation object;
the block diagram acquisition unit is used for extracting the characteristics of the corresponding target image frames through a plurality of classifiers to obtain a target block diagram;
the response value acquisition unit is used for calculating the response value of each target block diagram and confirming that the position corresponding to the target block diagram with the maximum response value is the space position of the navigation object;
and a travel route generation unit which generates a travel route of the navigation object according to all the spatial positions.
Further, the route obtaining module further includes:
the acquisition unit acquires a first video image according to the current detection range and, when the navigation object is detected again according to its apparent information within the preset time length, acquires a second video image according to the expanded detection range;
The control unit is used for controlling the acquisition unit to expand the detection range until the detection range is expanded to a rated detection range when the first video image is acquired according to the current detection range and a navigation object is lost;
the image processing unit is used for processing the first video image to obtain the target image frame when the first video image is obtained according to the current detection range and the navigation object is not lost;
the image processing unit is further used for acquiring a second video image according to the expanded detection range when the navigation object is re-detected according to the apparent information of the navigation object within the preset time length according to the expanded detection range, and performing image processing on the first video image and the second video image to obtain the target image frame;
and the image processing unit is also used for continuing to expand the detection range when the navigation object is not re-detected according to its apparent information within the preset time length in the expanded detection range, and for stopping tracking the navigation object when it is still not re-detected according to its apparent information within the preset time length after the detection range has been expanded to the rated detection range.
Further, the matching judgment module includes:
a path acquisition unit, which acquires the target movement path s1 of the navigation object; the traveling route S = s1 + s2 + … + si, i ∈ N, where s1 is the first movement path in the traveling route taking the current position as the starting point, s2 is the second movement path in the traveling route, and si is the i-th movement path in the traveling route;
the comparison and judgment unit is used for judging whether the bearing difference between the movement path s1 and the comparison path d1 is within a preset difference range; the navigation route D = d1 + d2 + … + di, i ∈ N, where d1 is the first movement path in the navigation route taking the current position as the starting point, d2 is the second movement path in the navigation route, and di is the i-th movement path in the navigation route;
the information generating module generates the traveling error information when the bearing difference between the moving path s1 and the comparison path d1 is out of a preset difference range.
Further, the method also comprises the following steps:
the voice acquisition module is used for acquiring the voice information of the user and identifying the voice information to obtain a key field;
The identification module is used for judging whether the key field comprises a preset path inquiry field or not;
the voice acquisition module is used for acquiring new user voice information again when the key field does not comprise a preset path inquiry field;
the rotating module is used for rotating the direction of the camera to the target direction when the key field comprises a preset path inquiry field; the target direction is the direction of the user voice information corresponding to the preset path inquiry field;
and the navigation object determining module is used for determining, within the preset acquisition range corresponding to the camera, that the user whose acquired capture frame is largest is the navigation object.
The method and the system for reminding the traveling route of the navigation object can bring at least one of the following beneficial effects:
1) By comparing the traveling route of the navigation object with the navigation route, the invention can remind the navigation object in time whether its traveling route is correct, reducing wasted time and improving the user's navigation experience.
2) The target block diagrams are obtained by extracting features of the corresponding target image frames through a plurality of classifiers, and the spatial position of the navigation object is determined according to the response value of each target block diagram so as to generate the traveling route of the navigation object; this overcomes interference from the external environment and improves the reliability and accuracy of tracking and detecting the navigation object.
3) The invention can judge whether the traveling direction and traveling path of the navigation object are correct by comparing only the first sections of the navigation route and of the traveling route, both taking the current position as the starting point. This speeds up judgment and analysis, reduces the time needed to generate the traveling error information that reminds the navigation object, and thus makes the prompt faster.
Drawings
The above features, technical features, and advantages of a method and system for prompting a navigation object to travel a route, and implementations thereof will be further described in the following detailed description of preferred embodiments in a clearly understandable manner in conjunction with the accompanying drawings.
FIG. 1 is a flow chart of one embodiment of a method for reminding a navigation object of a travel route of the present invention;
FIG. 2 is a flow chart of another embodiment of a method for reminding a navigation object of a travel route according to the present invention;
FIG. 3 is a schematic diagram of the navigation object position determination of a method for reminding a navigation object of a travel route according to the present invention;
FIG. 4 is a flow chart of another embodiment of a method for reminding a navigation object of a travel route according to the present invention;
FIG. 5 is a flow chart of another embodiment of a method for prompting a navigation object to travel a route of the present invention;
FIG. 6 is a schematic diagram of a travel route and a comparison route of a method for reminding a navigation object of the present invention;
FIG. 7 is a schematic diagram of an embodiment of a reminding system for a traveling route of a navigation object according to the present invention;
FIG. 8 is a schematic diagram of another embodiment of a reminding system for a traveling route of a navigation object according to the present invention;
fig. 9 is a flowchart of an example of a method for reminding a navigation object to travel a route according to the present invention.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure as a product. In addition, in order to make the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically illustrated or only labeled. In this document, "one" means not only "only one" but also a case of "more than one".
An embodiment of a method for reminding a navigation object of a travel route according to the present invention is shown in fig. 1, and includes:
s1000, acquiring the current position of the navigation object and the route-asking request information; the route-asking request information comprises a target position;
s2000, generating a navigation route according to the current position and the target position;
s3000, acquiring the traveling route of the navigation object; the traveling route is the route along which the navigation object moves according to the navigation route, taking the current position as the starting point;
s4000, judging whether the traveling route is matched with the navigation route; if not, executing the step S5000;
s5000 generates traveling error information.
Specifically, in this embodiment a navigation device such as a navigation robot acquires the current position of the navigation object. The current position may be acquired by GPS positioning or by three-point positioning through a plurality of camera nodes; any indoor positioning method falls within the protection scope of the present invention. The navigation device also acquires the route-asking request information input by the navigation object, which may be input by voice or entered manually by the user through a human-computer interaction interface. After acquiring the current position of the navigation object and the route-asking request information, the navigation device performs route planning according to the current position and the target position to generate a navigation route, and the navigation object starts moving toward the target position according to the navigation route. When the navigation object leaves the current position and moves along the direction of the navigation route, the navigation device collects the traveling route of the navigation object in real time, compares and analyses the traveling route against the navigation route, and judges whether the traveling route matches the navigation route. If the traveling route matches the navigation route, the navigation device may generate no prompt information, or may generate correct-traveling information to inform the navigation object that its traveling direction and path are correct; if the traveling route does not match the navigation route, traveling error information is generated to remind the navigation object that its traveling direction and path are wrong. The invention can thus remind the navigation object in time whether its traveling route is correct, reducing wasted time and improving the user's navigation experience.
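As an editorial illustration of the flow described above (not part of the original disclosure), the following minimal Python sketch wires steps S2000-S5000 together. The callables plan_route, observe_position and routes_match are hypothetical stand-ins for the device's route planner, tracker and matching check.

```python
from typing import Callable, List, Tuple

Point = Tuple[float, float]

def travel_route_reminder(current_pos: Point,
                          target_pos: Point,
                          plan_route: Callable[[Point, Point], List[Point]],
                          observe_position: Callable[[], Point],
                          routes_match: Callable[[List[Point], List[Point]], bool],
                          max_observations: int = 100) -> str:
    """Minimal sketch of steps S2000-S5000 around injected placeholder callables."""
    navigation_route = plan_route(current_pos, target_pos)      # S2000
    travel_route = [current_pos]
    for _ in range(max_observations):
        travel_route.append(observe_position())                 # S3000: track the user
        if not routes_match(travel_route, navigation_route):    # S4000
            return "travel error information"                    # S5000
    return "travel direction and path are correct"

# toy usage: straight-line "planning"; the user drifts sideways on the third step
result = travel_route_reminder(
    (0.0, 0.0), (0.0, 10.0),
    plan_route=lambda a, b: [a, b],
    observe_position=iter([(0.0, 1.0), (0.0, 2.0), (3.0, 2.0)]).__next__,
    routes_match=lambda t, n: abs(t[-1][0] - n[0][0]) < 1.0,
    max_observations=3)
print(result)   # travel error information
```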
Another embodiment of the method for reminding a navigation object of the travel route according to the present invention, as shown in fig. 2, includes:
s1000, acquiring the current position of the navigation object and the route-asking request information; the route-asking request information comprises a target position;
s2000, generating a navigation route according to the current position and the target position;
s3010, acquiring a target image frame; the target image frame comprises a navigation object;
s3020 extracting features of the corresponding target image frames through a plurality of classifiers to obtain a target block diagram;
s3030 calculating a response value of each target block diagram, and confirming that the position corresponding to the target block diagram with the maximum response value is the space position of the navigation object;
s3040 generating a traveling route of the navigation object according to all the spatial positions;
s4000, judging whether the traveling route is matched with the navigation route; if not, executing step S5000;
s5000 generates traveling error information.
Specifically, in this embodiment the navigation object is detected and tracked based on the YOLO V2 algorithm and the KCF algorithm. Tracking and detection on the navigation device proceed as follows.
detecting and tracking navigation object
The image frames are subjected to navigation object detection by the YOLO V2 algorithm. YOLO V2 adopts a 32-layer neural network structure (comprising convolution layers and pooling layers) and pre-trains detection on image frames with a network input of 416 x 416 resolution. Localization is predicted with anchor boxes of 5 sizes (fewer flat, wide boxes and more thin, tall boxes, to match the shape of a person). A shallow target feature map (for example at 26 x 26 resolution) is connected to the deep feature map (13 x 13 resolution): the high-resolution and low-resolution feature maps are linked by stacking adjacent features into different channels rather than spatial positions, turning the 26 x 26 x 512 feature map into a 13 x 13 x 2048 feature map, similar to the 'shortcut connections' of ResNet, which is then concatenated with the original deep feature map. The model input size is changed every few rounds (every 10 batches); thanks to this multi-scale training and detection method, the approach has good robustness. After the input size is changed, training continues, and this training regime forces the 32-layer network to learn to predict over a variety of input dimensions, which means the same network can produce detection results at different resolutions. Because the YOLO V2 model runs fast on small-scale input, it offers a trade-off between speed and accuracy and can increase processing speed while maintaining accuracy when detecting low-resolution images.
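The passthrough reorganisation described above (26 x 26 x 512 stacked into 13 x 13 x 2048 and concatenated with the deep map) can be illustrated with a short NumPy sketch. This is an editorial example, not code from the patent, and the 1024-channel deep feature map is an assumed, typical YOLO V2 configuration.

```python
import numpy as np

def passthrough(shallow: np.ndarray, stride: int = 2) -> np.ndarray:
    """Space-to-depth reorganisation of an (H, W, C) feature map.

    Groups each stride x stride spatial block into the channel axis, so a
    26x26x512 shallow map becomes 13x13x2048 and can be concatenated with the
    13x13 deep map along the channel dimension.
    """
    h, w, c = shallow.shape
    assert h % stride == 0 and w % stride == 0
    x = shallow.reshape(h // stride, stride, w // stride, stride, c)
    x = x.transpose(0, 2, 1, 3, 4)                      # gather the 2x2 blocks
    return x.reshape(h // stride, w // stride, c * stride * stride)

shallow = np.zeros((26, 26, 512), dtype=np.float32)     # shallow, high-resolution map
deep = np.zeros((13, 13, 1024), dtype=np.float32)       # assumed deep map size
fused = np.concatenate([passthrough(shallow), deep], axis=-1)
print(fused.shape)   # (13, 13, 3072), fed to the final detection convolutions
```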
Navigation object tracking
Discriminative tracking is performed by the KCF algorithm: a classifier is trained during tracking, the classifier is used to detect whether the predicted position in the next frame is the navigation object, and the new detection result is then used to update the training set and thereby the classifier. When training the classifier, the navigation object region is generally selected as the positive sample and the region surrounding the navigation object as negative samples, and the closer a region is to the navigation object, the higher the probability that it is a positive sample. The positive and negative samples are collected using the circulant matrix of the region around the target, and the classifier is trained by ridge regression. Using the property that circulant matrices are diagonalized in Fourier space, matrix operations are converted into element-wise products, which greatly reduces the amount of computation, raises the computation speed, and allows the algorithm to meet real-time requirements. The ridge regression in linear space is mapped to a nonlinear space through a kernel function; in the nonlinear space, solving the dual problem and some common constraints can likewise be simplified by the Fourier-space diagonalization of the circulant matrix.
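The Fourier-domain ridge regression mentioned above can be illustrated with a minimal one-dimensional sketch (editorial example, not from the patent): because the training samples are the cyclic shifts of one base signal, the normal equations diagonalise under the DFT and the solve becomes an element-wise division.

```python
import numpy as np

def train_filter(x: np.ndarray, y: np.ndarray, lam: float = 1e-2) -> np.ndarray:
    """Ridge regression over all cyclic shifts of x, solved in the Fourier domain.

    The circulant structure turns the matrix solve into element-wise operations
    on the DFTs, which is the trick the KCF tracker relies on.
    """
    X, Y = np.fft.fft(x), np.fft.fft(y)
    return np.conj(X) * Y / (np.conj(X) * X + lam)        # filter in the Fourier domain

def respond(w_hat: np.ndarray, z: np.ndarray) -> np.ndarray:
    """Response of the trained filter to every cyclic shift of a new patch z."""
    return np.real(np.fft.ifft(w_hat * np.fft.fft(z)))

# toy example: the peak of the response map recovers the shift of the target
np.random.seed(0)
x = np.random.rand(64)
n = np.arange(64)
d = np.minimum(n, 64 - n)                 # circular distance from position 0
y = np.exp(-0.5 * (d / 2.0) ** 2)         # Gaussian regression label peaked at shift 0
w_hat = train_filter(x, y)
z = np.roll(x, 5)                         # the target moved by 5 samples
print(int(np.argmax(respond(w_hat, z))))  # 5: the detected displacement
```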
Position determination of navigation object
To judge whether a tracked region is the navigation object or surrounding background information, samples are collected mainly by means of a circulant matrix and the algorithm is accelerated with the fast Fourier transform. Tracking is based on the detected navigation object: before tracking, the navigation object is first detected to obtain its position, and the navigation object is then learned and tracked. The response value of each target block diagram is calculated, and the position corresponding to the target block diagram with the maximum response value is confirmed as the spatial position of the navigation object. As shown in fig. 3, the navigation object starts moving from the current position according to the navigation route, and the navigation device begins to shoot and obtain the video. The image on the left of fig. 3 is the current image frame P1; in P1 the navigation object is framed by the dashed frame 6, and the pixel coordinate Q1 of the navigation object on the imaged image is obtained. The solid frame 3 is the sample target frame containing the navigation object, and the other solid frames (solid frames 1, 2, 4 and 5) are frames corresponding to the sample target frame, namely samples obtained by cyclically shifting the sample target frame; a classifier is trained with these samples. After the classifier is trained, the next image frame P2, namely the image on the right of fig. 3, is processed: the region corresponding to the sample target frame (solid frame 3) is sampled and the samples are cyclically shifted (shown aligned with the target in the right image of fig. 3 for ease of understanding; in reality they are not aligned). The classifier is used to calculate the response value of each target frame; the response value of the target frame corresponding to solid frame 1 is clearly the largest, so the pixel coordinate Q2 of the navigation object on the imaged image is obtained by calculating the position of the target frame corresponding to solid frame 1. The next image frame Pj, j ∈ N, is then switched to and the above steps are continued to measure the pixel coordinate Qj of the navigation object on the imaged image. According to the conversion relation between the world coordinate system and the image coordinate system, the coordinate positions of the navigation object are obtained (including the coordinate position M1 corresponding to pixel coordinate Q1, i.e. the current position, the coordinate position M2 corresponding to pixel coordinate Q2, and the coordinate position Mj corresponding to pixel coordinate Qj), and the traveling route of the navigation object is drawn and generated from all the coordinate positions.
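As an editorial illustration of the last step above, the conversion from the highest-response target frame to a coordinate position can be sketched as follows. The planar-floor homography H and the bottom-centre-of-box convention are assumptions; the patent only states that a conversion relation between the world and image coordinate systems is used.

```python
import numpy as np

def pixel_to_floor(q_pixel, H):
    """Map an image pixel to floor-plane coordinates with a 3x3 homography H.

    H is assumed to come from calibrating the camera against the floor plane.
    """
    p = H @ np.array([q_pixel[0], q_pixel[1], 1.0])
    return p[:2] / p[2]

def travel_route_from_responses(best_boxes, H):
    """best_boxes: per-frame (x, y, w, h) of the highest-response target frame."""
    route = []
    for (x, y, w, h) in best_boxes:
        q = (x + w / 2.0, y + h)          # bottom centre of the box, roughly the feet
        route.append(pixel_to_floor(q, H))
    return route

# toy example with an identity homography (pixel units reused as floor units)
H = np.eye(3)
print(travel_route_from_responses([(100, 50, 40, 120), (130, 52, 40, 118)], H))
```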
In this embodiment, under a tracking-detection framework, the accuracy, efficiency and reliability of tracking are improved by using the YOLO V2 algorithm and the KCF algorithm. KCF is fast and can naturally make use of multiple feature channels, and each layer of the detection network has a large number of feature channels. Different layers of the neural network describe the target at different levels of abstraction: the features of the lower layers are simple, while the high-level semantic features are better suited to localization. Because the target changes continuously during tracking, the difficulty of tracking also changes and the lower-layer features can become inaccurate, so several trackers are cascaded. Of course, if the tracker on a shallow layer already tracks well, the tracking result is good, the subsequent computation is unnecessary and time is saved; whether a tracker performs well is judged from the currently computed response value, and a large response value indicates a good tracking result. Therefore different layers of the neural network structure are selected to construct several cascaded KCFs, and several independent classifiers are established from the different network layers, which improves the reliability and accuracy of tracking the navigation object.
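An editorial sketch of the cascade idea (not code from the patent): shallow trackers are tried first, and the deeper, more expensive ones are only consulted when the response value is not convincing. The confidence threshold is an arbitrary placeholder.

```python
def cascaded_position(trackers, search_patch, confident_response=0.35):
    """Run shallow-to-deep trackers in turn and stop at the first confident one.

    trackers is a list of callables ordered from shallow to deep feature layers;
    each returns (response_value, position). The 0.35 threshold is an assumed
    placeholder, not a value taken from the patent.
    """
    best_response, best_position = float("-inf"), None
    for track in trackers:
        response, position = track(search_patch)
        if response >= confident_response:
            return position              # shallow layer already tracks well: stop early
        if response > best_response:
            best_response, best_position = response, position
    return best_position                 # otherwise fall back to the strongest response

# toy usage with two stand-in trackers
print(cascaded_position([lambda p: (0.2, (10, 12)), lambda p: (0.6, (11, 13))], None))
```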
Another embodiment of the method for reminding a navigation object of the travel route according to the present invention, as shown in fig. 4, includes:
s1000, acquiring the current position of the navigation object and the route-asking request information; the route-asking request information comprises a target position;
s2000, generating a navigation route according to the current position and the target position;
s3001, acquiring a first video image according to the current detection range;
s3002, when the first video image is obtained according to the current detection range and the navigation object is not lost, performing image processing on the first video image to obtain the target image frame;
s3003, when the first video image is obtained according to the current detection range and the navigation object is lost, expanding the detection range until the detection range is expanded to a rated detection range;
s3004, when the navigation object is detected again according to the apparent information of the navigation object within the preset time length according to the expanded detection range, acquiring a second video image according to the expanded detection range, and performing image processing on the first video image and the second video image to obtain the target image frame;
s3005, when the navigation object is not detected again according to its apparent information within the preset time length in the expanded detection range, continuing to expand the detection range; and when the navigation object is still not detected again according to its apparent information within the preset time length after the detection range has been expanded to the rated detection range, stopping tracking the navigation object.
S3010, acquiring a target image frame; the target image frame comprises a navigation object;
s3020 extracting features of the corresponding target image frames through a plurality of classifiers to obtain a target block diagram;
s3030 calculating a response value of each target block diagram, and confirming that the position corresponding to the target block diagram with the maximum response value is the space position of the navigation object;
s3040 generating a traveling route of the navigation object according to all the spatial positions;
s4000, judging whether the traveling route is matched with the navigation route; if not, executing step S5000;
s5000 generates traveling error information.
Specifically, in this embodiment a camera of the navigation device acquires the first video image within the current detection range and judges in real time whether the navigation object is lost while the first video image is being acquired. The navigation object can be lost for various reasons, such as a missed detection by the classifier or occlusion by background objects in the scene (walls, trees and the like). When the navigation object is not lost, the first video image is directly image-processed to obtain the target image frame. When the navigation object is lost, the detection range is expanded and the navigation object is looked for again according to its apparent information; the apparent information is the set of features that distinguish the navigation object from the background, such as head position, build, height, skin colour, hair style, clothing colour and clothing texture. Whether the navigation object is re-detected within the preset time length in the expanded detection range is then judged: if the lost navigation object is re-detected within the preset time length, a second video image is acquired in the expanded detection range, and the first and second video images are image-processed to obtain the target image frame; if the lost navigation object is not re-detected within the preset time length, the detection range continues to be expanded until it reaches the rated detection range, and if the lost navigation object still cannot be detected within the preset time length after the detection range has reached the rated detection range, the navigation object is no longer tracked and detected. Usually a lost navigation object reappears within a certain range of the position where it disappeared within a certain period of time, so the aim of this stage is to find the lost navigation object again. The lost navigation object is retained for a certain period, the apparent-information similarity between the lost navigation object and re-detected objects is compared within a certain range of the disappearance position, and if the similarity is greater than a certain threshold, the re-detected object within the detection frame is the navigation object that disappeared before. This avoids the problem that the navigation object cannot continue to be tracked and detected after being lost and improves the robustness of detecting the traveling route of the navigation object.
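The expand-and-re-detect behaviour of steps S3001-S3005 can be sketched as follows (editorial illustration only; all numeric values, the frame interface and the appearance-similarity function are assumed placeholders).

```python
def track_with_expanding_range(frames, detect_in_range, appearance_similarity,
                               initial_range=2.0, rated_range=8.0, step=1.0,
                               patience=30, sim_threshold=0.7):
    """Sketch of S3001-S3005 with placeholder callables and thresholds.

    detect_in_range(frame, r) returns a candidate (or None) within detection
    range r; appearance_similarity(candidate) scores the match against the
    stored apparent information (height, clothing colour, hair style, ...).
    """
    r, lost_for, positions = initial_range, 0, []
    for frame in frames:
        candidate = detect_in_range(frame, r)
        if candidate is not None and appearance_similarity(candidate) >= sim_threshold:
            positions.append(candidate)          # S3002 / S3004: object (re)acquired
            r, lost_for = initial_range, 0
            continue
        lost_for += 1                            # S3003: object lost, widen the range
        r = min(r + step, rated_range)
        if r >= rated_range and lost_for > patience:
            break                                # S3005: stop tracking this object
    return positions

# toy usage: the object is visible in frames 0-1 and lost afterwards
print(track_with_expanding_range(
    list(range(6)),
    detect_in_range=lambda f, r: (f, f) if f < 2 else None,
    appearance_similarity=lambda c: 1.0,
    patience=2))
```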
Another embodiment of the method for reminding a navigation object of the travel route according to the present invention, as shown in fig. 5, includes:
s1000, acquiring the current position of the navigation object and the route-asking request information; the route-asking request information comprises a target position;
s2000, generating a navigation route according to the current position and the target position;
s3001, acquiring a first video image according to the current detection range;
s3002, when the first video image is obtained according to the current detection range and the navigation object is not lost, performing image processing on the first video image to obtain the target image frame;
s3003, when the first video image is obtained according to the current detection range and the navigation object is lost, expanding the detection range until the detection range is expanded to a rated detection range;
s3004, when the navigation object is detected again according to the apparent information of the navigation object within the preset time length according to the expanded detection range, acquiring a second video image according to the expanded detection range, and performing image processing on the first video image and the second video image to obtain the target image frame;
s3005, when the navigation object is not detected again according to its apparent information within the preset time length in the expanded detection range, continuing to expand the detection range; and when the navigation object is still not detected again according to its apparent information within the preset time length after the detection range has been expanded to the rated detection range, stopping tracking the navigation object.
S3010, acquiring a target image frame; the target image frame comprises a navigation object;
s3020 extracting features of the corresponding target image frames through a plurality of classifiers to obtain a target block diagram;
s3030 calculating a response value of each target block diagram, and confirming that the position corresponding to the target block diagram with the maximum response value is the space position of the navigation object;
s3040 generating a traveling route of the navigation object according to all the spatial positions;
s4100, acquiring a movement path s1 of the navigation object; the traveling route S = s1 + s2 + … + si, i ∈ N, where s1 is the first movement path in the traveling route taking the current position as the starting point, s2 is the second movement path in the traveling route, and si is the i-th movement path in the traveling route;
s4200, judging whether the bearing difference between the movement path s1 and the comparison path d1 is within a preset difference range; the navigation route D = d1 + d2 + … + di, i ∈ N, where d1 is the first movement path in the navigation route taking the current position as the starting point, d2 is the second movement path in the navigation route, and di is the i-th movement path in the navigation route; if not, executing step S5000;
S5000 generates traveling error information.
Specifically, in the above embodiments the complete traveling route of the navigation object is tracked and detected throughout, where the traveling route is the route of the user acquired within the effective acquisition range of the image acquisition device (such as a camera) of the navigation device, and the complete traveling route is compared and matched against the navigation route. In the present embodiment, as shown in fig. 6, only the first section of the traveling route of the navigation object, i.e. the moving path s1, and the first section of the navigation route, i.e. the comparison path d1, are obtained for comparison and judgment. Whether the traveling direction and traveling path of the navigation object are correct can thus be determined by comparing only the first sections of the navigation route and the traveling route, both taking the current position as the starting point. This speeds up judgment and analysis, reduces the time needed to generate the traveling error information that reminds the navigation object, and makes the prompt faster; the navigation object is reminded in time whether its traveling route is correct, reducing wasted time and improving the user's navigation experience.
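An editorial sketch of the first-segment comparison of steps S4100-S4200 (not from the patent): the bearing of the first movement path s1 is compared against the bearing of the comparison path d1, and traveling error information is generated when the difference falls outside the preset range. The 30-degree tolerance below is an assumed value for the preset difference range.

```python
import math

def bearing(p_from, p_to):
    """Compass-style bearing of a segment in degrees, measured from the y axis."""
    dx, dy = p_to[0] - p_from[0], p_to[1] - p_from[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def first_segment_matches(s1, d1, max_diff_deg=30.0):
    """S4100/S4200 sketch: compare only the first segments s1 and d1.

    s1 and d1 are ((x0, y0), (x1, y1)) pairs starting at the current position;
    max_diff_deg stands in for the preset difference range.
    """
    diff = abs(bearing(*s1) - bearing(*d1)) % 360.0
    diff = min(diff, 360.0 - diff)            # wrap to the 0-180 degree range
    return diff <= max_diff_deg

# the planned first leg heads straight "north" along the y axis
print(first_segment_matches(((0, 0), (1, 5)), ((0, 0), (0, 5))))   # True, about 11 degrees off
print(first_segment_matches(((0, 0), (5, 0)), ((0, 0), (0, 5))))   # False, so S5000 fires
```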
One embodiment of a reminding system for a traveling route of a navigation object according to the present invention is shown in fig. 7 and comprises:
an information acquisition module 100 that acquires the current position of the navigation object and the route-asking request information; the route-asking request information comprises a target position;
A route generation module 200, which generates a navigation route according to the current position and the target position;
a route acquisition module 300 that acquires the traveling route of the navigation object; the traveling route is the route along which the navigation object moves according to the navigation route, taking the current position as the starting point;
a matching judgment module 400 for judging whether the travel route matches with the navigation route;
the information generating module 500 generates a travel error information when the travel route does not match the navigation route.
Specifically, this embodiment is a system embodiment corresponding to the above method embodiment, and specific effects refer to the above corresponding method embodiment, which is not described in detail herein.
Another embodiment of the reminding system for a traveling route of a navigation object according to the present invention, as shown in fig. 8, comprises:
an information acquisition module 100 that acquires the current position of the navigation object and the route-asking request information; the route-asking request information comprises a target position;
a route generation module 200, which generates a navigation route according to the current position and the target position;
a route acquisition module 300 that acquires the traveling route of the navigation object; the traveling route is the route along which the navigation object moves according to the navigation route, taking the current position as the starting point;
A matching judgment module 400 for judging whether the travel route matches the navigation route;
an information generating module 500 that generates a travel error information when the travel route does not match the navigation route;
the route acquisition module 300 includes:
an image acquisition unit 310 that acquires a target image frame; the target image frame comprises a navigation object;
the block diagram obtaining unit 320 is configured to perform feature extraction on the corresponding target image frames through a plurality of classifiers to obtain target block diagrams;
the response value obtaining unit 330 calculates a response value of each target block diagram, and determines a position corresponding to the target block diagram with the largest response value as a spatial position of the navigation object;
the travel route generation unit 340 generates a travel route of the navigation object based on all the spatial positions.
Preferably, the route obtaining module 300 further includes:
the acquisition unit 350 acquires a first video image according to the current detection range and, when the navigation object is detected again according to its apparent information within the preset time length, acquires a second video image according to the expanded detection range;
the control unit 360 is used for controlling the acquisition unit to expand the detection range until the detection range is expanded to a rated detection range when the first video image is acquired according to the current detection range and the navigation object is lost;
The image processing unit 370, when the first video image is obtained according to the current detection range and the navigation object is not lost, performing image processing on the first video image to obtain the target image frame;
the image processing unit 370 further obtains a second video image according to the expanded detection range when the navigation object is re-detected according to the apparent information of the navigation object within the preset time length according to the expanded detection range, and performs image processing on the first video image and the second video image to obtain the target image frame;
the image processing unit 370 also continues to expand the detection range when the navigation object is not re-detected according to its apparent information within the preset time length in the expanded detection range, and stops tracking the navigation object when it is still not re-detected according to its apparent information within the preset time length after the detection range has been expanded to the rated detection range.
Preferably, the matching determining module 400 includes:
a path acquisition unit 410 that acquires the target movement path s1 of the navigation object; the traveling route S = s1 + s2 + … + si, i ∈ N, where s1 is the first movement path in the traveling route taking the current position as the starting point, s2 is the second movement path in the traveling route, and si is the i-th movement path in the traveling route;
a comparison and judgment unit 420 that judges whether the bearing difference between the movement path s1 and the comparison path d1 is within a preset difference range; the navigation route D = d1 + d2 + … + di, i ∈ N, where d1 is the first movement path in the navigation route taking the current position as the starting point, d2 is the second movement path in the navigation route, and di is the i-th movement path in the navigation route;
the information generating module 500 generates a traveling error information when the bearing difference between the moving path s1 and the comparing path d1 is outside a preset difference range.
Preferably, the method further comprises the following steps:
the voice acquiring module 600 acquires voice information of a user, and identifies the voice information to obtain a key field;
the identification module 700 determines whether the key field includes a preset path inquiry field;
the voice acquiring module 600, when the key field does not include the preset path query field, reacquires new voice information of the user;
a rotation module 800, configured to rotate the direction of the camera to a target direction when the key field includes a preset path query field; the target direction is the direction of the user voice information corresponding to the preset path inquiry field;
The navigation object determining module 900 determines, within the preset acquisition range corresponding to the camera, that the user whose acquired capture frame is largest is the navigation object.
Specifically, this embodiment is a system embodiment corresponding to the above method embodiment, and specific effects refer to the above corresponding method embodiment, which is not described in detail herein.
An example based on the above embodiments is described below with reference to fig. 9.
In an indoor place with a complicated environment, such as a mall, a hospital, a station or an airport, the navigation robot may provide a navigation object with a route-indicating service to a specific target location.
The main content of the task is summarized as follows:
1) the navigation object asks the navigation robot for the walking route to a specific nearby place (such as a certain exit, a certain restaurant, a toilet and the like);
2) the navigation robot gives a navigation route from the local position, namely the current position, to the target position according to the map;
3) the navigation robot models and tracks the navigation object for a certain distance, and if the moving path of the navigation object over this distance does not match the comparison path given by the robot, the navigation object is corrected and reminded by a voice prompt.
The sequence of events is as follows: the navigation object asks for a route; the navigation object walks according to the navigation route provided by the navigation robot; the navigation robot tracks the navigation object and compares the track of the navigation object, i.e. the traveling route, against the navigation route provided by the navigation robot for consistency. The difficulty in the target tracking stage is that when asking for directions the navigation object is nearby and facing the robot, while afterwards the whole body is tracked from behind, so changes in target scale, rotation and appearance must be handled; and because the navigation object is in a crowded place, interference from other pedestrians must be overcome. The KCF algorithm and the YOLO V2 algorithm can be used for detection and tracking.
In practice, the navigation route will usually include multiple segments. To improve the working efficiency of the navigation robot, the tracking-correction range is limited to the moving path s1; that is, once the traveling direction and traveling path of the navigation object are correct within the moving path s1, no further tracking-correction task is performed for the navigation object.
It should be noted that the above embodiments can be freely combined as necessary. The foregoing describes only preferred embodiments of the present invention; for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A method for reminding a traveling route of a navigation object is applied to navigation equipment and comprises the following steps:
s1000, acquiring the current position of the navigation object and the route-asking request information; the route-asking request information comprises a target position;
s2000, generating a navigation route according to the current position and the target position;
s3000, acquiring the traveling route of the navigation object based on a YOLO V2 algorithm and a KCF algorithm; the traveling route is the route along which the navigation object moves according to the navigation route, taking the current position as the starting point; the method specifically comprises the following steps:
navigation object detection and tracking:
navigation object detection is carried out through a YOLO V2 algorithm, namely, a 32-layer neural network structure is adopted, and network entries with 416 multiplied by 416 resolution are used for carrying out pre-training detection on image frames; performing positioning prediction by using 5 sizes of box dimensions; the square frame comprises a thin and high frame and a flat and long frame, and the number of the thin and high frames is larger than that of the flat and long frames; connecting the shallow target block diagram with the resolution of 26 x 26 to the deep target block diagram with the resolution of 13 x 13, wherein the connection mode links the feature diagrams obtained by the high resolution and the low resolution, and superposes the adjacent features to different channels; changing the input size of the model every several rounds, and continuing training the model;
Performing discriminant tracking through a KCF algorithm, training a classifier in the tracking process, detecting whether the next frame of predicted position is a navigation object by using the classifier, and then updating a training set by using a new detection result so as to update the classifier; selecting a navigation object area as a positive sample and a surrounding area of the navigation object as a negative sample when training the classifier, collecting the positive and negative samples by using a circulation matrix of the surrounding area of a target, and training the classifier by using ridge regression;
navigation object position determination:
performing feature extraction on image frames through a plurality of classifiers to obtain target block diagrams, calculating response values of the target block diagrams, confirming the target block diagram with the maximum response value, calculating and obtaining pixel coordinates of a navigation object on the current image frame through the position of the target frame with the maximum response value, switching the next image frame, continuously measuring the pixel coordinates of the navigation object on an imaging image, obtaining the coordinate position of the navigation object according to the conversion relation between a world coordinate system and an image coordinate system, and drawing and generating a traveling route of the navigation object according to all the coordinate positions;
s4000, judging whether the traveling route is matched with the navigation route; if not, executing the step S5000;
S5000 generates traveling error information.
2. The method for reminding the navigation object of the traveling route according to claim 1, wherein the step S3000 comprises the steps of:
s3010, acquiring a target image frame; the target image frame comprises a navigation object;
s3020 extracting features of the corresponding target image frames through a plurality of classifiers to obtain a target block diagram;
s3030 calculating a response value of each target block diagram, and confirming that the position corresponding to the target block diagram with the maximum response value is the space position of the navigation object;
s3040 generates a travel route of the navigation object from all the spatial positions.
3. A method for reminding a navigation object of traveling a route according to claim 2, wherein said step S3010 is preceded by the steps of:
s3001, acquiring a first video image according to the current detection range;
s3002, when the first video image is obtained according to the current detection range and the navigation object is not lost, performing image processing on the first video image to obtain the target image frame;
s3003, when the first video image is obtained according to the current detection range and the navigation object is lost, expanding the detection range until the detection range is expanded to a rated detection range;
S3004, when the navigation object is detected again according to the apparent information of the navigation object within the preset time length according to the expanded detection range, acquiring a second video image according to the expanded detection range, and performing image processing on the first video image and the second video image to obtain the target image frame;
s3005 when the navigation object is not detected again according to the apparent information of the navigation object within the preset time length according to the expanded detection range, continuing to expand the detection range until the navigation object is not detected again according to the apparent information of the navigation object within the preset time length after the navigation object is expanded to the rated detection range, and stopping tracking the navigation object.
4. A method for reminding a person of a travel route of a navigation object according to claim 1, wherein the step S4000 includes the steps of:
s4100, acquiring a movement path s1 of the navigation object; the travel route S = s1 + s2 + … + si, i ∈ N, where s1 is the first movement path in the travel route taking the current position as the starting point, s2 is the second movement path in the travel route, and si is the i-th movement path in the travel route;
S4200, judging whether the bearing difference between the movement path s1 and the comparison path d1 is within a preset difference range; the navigation route D = d1 + d2 + … + di, i ∈ N, where d1 is the first movement path in the navigation route taking the current position as the starting point, d2 is the second movement path in the navigation route, and di is the i-th movement path in the navigation route; if not, go to step S5000.
5. The method for reminding the navigation object of the traveling route according to any one of claims 1 to 4, wherein step S1000 is preceded by the steps of:
S0100, acquiring user voice information, and recognizing the voice information to obtain key fields;
S0200, judging whether the key fields comprise a preset route inquiry field; if yes, executing step S0300; otherwise, returning to step S0100;
S0300, rotating the camera to face the target direction; the target direction is the direction from which the user voice information containing the preset route inquiry field originates;
S0400, within the preset acquisition range corresponding to the camera, determining that the user with the largest image size in the captured frame is the navigation object.
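The sequencing of steps S0100-S0400 in claim 5 above might look like the sketch below; the microphone, camera and person-detector interfaces as well as the keyword list are hypothetical and serve only to illustrate the control flow, they are not part of the claim.

ROUTE_INQUIRY_KEYWORDS = ("where is", "how do i get to", "route to")   # assumed keyword set

def contains_route_inquiry(transcript):
    """S0200: does the recognised text contain a preset route-inquiry field?"""
    text = transcript.lower()
    return any(k in text for k in ROUTE_INQUIRY_KEYWORDS)

def pick_navigation_object(detections):
    """S0400: the person whose bounding box has the largest area is the navigation object."""
    return max(detections, key=lambda d: d["w"] * d["h"])

def wait_for_navigation_object(microphone, camera, detector):
    while True:
        transcript, direction = microphone.listen()   # S0100: voice plus direction of arrival
        if not contains_route_inquiry(transcript):
            continue                                  # back to S0100
        camera.rotate_to(direction)                   # S0300: face the speaker
        people = detector.detect(camera.capture())
        if people:
            return pick_navigation_object(people)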
6. A navigation object traveling route reminding system, comprising:
an information acquisition module, configured to acquire the current position of the navigation object and route inquiry request information; the route inquiry request information comprises a target position;
a route generation module, configured to generate a navigation route according to the current position and the target position;
a route acquisition module, configured to acquire the traveling route of the navigation object on the basis of the YOLO V2 algorithm and the KCF algorithm; the traveling route is the route along which the navigation object moves, taking the current position as the starting point, while following the navigation route;
wherein the acquisition of the traveling route specifically comprises the following steps:
navigation object detection and tracking:
performing navigation object detection through the YOLO V2 algorithm, namely adopting a 32-layer neural network structure and a 416 × 416 network input to detect the image frames with the pre-trained network; performing localization prediction with anchor boxes of 5 sizes, the anchor boxes comprising tall narrow boxes and short wide boxes, the number of tall narrow boxes being larger than the number of short wide boxes; connecting the shallow 26 × 26 feature map to the deep 13 × 13 feature map, the connection linking the feature maps obtained at high and low resolution and stacking adjacent features into different channels; and changing the input size of the model every few training rounds and continuing to train the model;
performing discriminative tracking through the KCF algorithm, namely training a classifier during tracking, using the classifier to detect whether the predicted position in the next frame is the navigation object, and then updating the training set with the new detection result so as to update the classifier; when training the classifier, selecting the navigation object region as positive samples and the region surrounding the navigation object as negative samples, collecting the positive and negative samples with a circulant matrix of the region around the target, and training the classifier by ridge regression;
navigation object position determination:
performing feature extraction on the image frames through a plurality of classifiers to obtain target bounding boxes, calculating the response value of each target bounding box, and identifying the target bounding box with the maximum response value;
calculating the pixel coordinates of the navigation object in the current image frame from the position of the target bounding box with the maximum response value, switching to the next image frame and continuing to measure the pixel coordinates of the navigation object on the imaged image, obtaining the coordinate position of the navigation object according to the conversion relation between the world coordinate system and the image coordinate system, and drawing the traveling route of the navigation object from all the coordinate positions;
a matching judgment module, configured to judge whether the traveling route matches the navigation route;
and an information generation module, configured to generate traveling error information when the traveling route does not match the navigation route.
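The circulant-matrix ridge regression named in the KCF step of claim 6 above has a closed-form solution in the Fourier domain. The following single-channel, linear-kernel Python sketch shows that training and detection step; the Gaussian label width, the regularisation constant and the omission of multi-channel features are simplifying assumptions, so this is an illustration of the technique rather than the patented implementation.

import numpy as np

LAMBDA = 1e-4   # ridge regression regulariser (assumed value)
SIGMA = 2.0     # width of the desired Gaussian response peak (assumed value)

def gaussian_label(shape, sigma=SIGMA):
    """Desired response: a Gaussian peak centred on the target, wrapped to the corner."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    ys = (ys + h // 2) % h - h // 2
    xs = (xs + w // 2) % w - w // 2
    return np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma ** 2))

def train(patch):
    """Ridge regression over all cyclic shifts of `patch`, solved element-wise in the Fourier domain."""
    x_hat = np.fft.fft2(patch.astype(np.float32))
    y_hat = np.fft.fft2(gaussian_label(patch.shape))
    k_hat = np.conj(x_hat) * x_hat                 # linear-kernel auto-correlation
    alpha_hat = y_hat / (k_hat + LAMBDA)
    return x_hat, alpha_hat

def detect(x_hat, alpha_hat, new_patch):
    """Response map over all shifts of the new patch; its argmax is the predicted offset."""
    z_hat = np.fft.fft2(new_patch.astype(np.float32))
    response = np.real(np.fft.ifft2(np.conj(x_hat) * z_hat * alpha_hat))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return response, (dy, dx)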
7. The navigation object traveling route reminding system according to claim 6, wherein the route acquisition module comprises:
an image acquisition unit, configured to acquire a target image frame; the target image frame comprises the navigation object;
a bounding box acquisition unit, configured to perform feature extraction on the corresponding target image frame through a plurality of classifiers to obtain target bounding boxes;
a response value acquisition unit, configured to calculate the response value of each target bounding box and to take the position corresponding to the target bounding box with the maximum response value as the spatial position of the navigation object;
and a traveling route generation unit, configured to generate the traveling route of the navigation object from all the spatial positions.
8. The navigation object traveling route reminding system according to claim 7, wherein the route acquisition module further comprises:
an acquisition unit, configured to acquire a first video image according to the current detection range;
an image processing unit, configured to perform image processing on the first video image to obtain the target image frame when the first video image is acquired according to the current detection range and the navigation object is not lost;
a control unit, configured to control the acquisition unit to expand the detection range, up to a rated detection range, when the first video image is acquired according to the current detection range and the navigation object is lost;
wherein the image processing unit is further configured to acquire a second video image according to the expanded detection range when the navigation object is re-detected from its appearance information within a preset time length under the expanded detection range, and to perform image processing on the first video image and the second video image to obtain the target image frame;
and the image processing unit is further configured to continue expanding the detection range when the navigation object is not re-detected from its appearance information within the preset time length under the expanded detection range, and to stop tracking the navigation object when it is still not re-detected within the preset time length after the detection range has been expanded to the rated detection range.
9. The navigation object traveling route reminding system according to claim 6, wherein the matching judgment module comprises:
a path acquisition unit, configured to acquire the first movement path s1 of the navigation object; the traveling route S = s1 + s2 + … + si, i ∈ N, wherein s1 is the first movement path in the traveling route with the current position as the starting point, s2 is the second movement path in the traveling route, and si is the i-th movement path in the traveling route;
a comparison judgment unit, configured to judge whether the bearing difference between the movement path s1 and the comparison path d1 is within a preset difference range; the navigation route D = d1 + d2 + … + di, i ∈ N, wherein d1 is the first path segment in the navigation route with the current position as the starting point, d2 is the second path segment in the navigation route, and di is the i-th path segment in the navigation route;
wherein the information generation module generates the traveling error information when the bearing difference between the movement path s1 and the comparison path d1 falls outside the preset difference range.
10. The navigation object traveling route reminding system according to any one of claims 6 to 9, further comprising:
a voice acquisition module, configured to acquire user voice information and to recognize the voice information to obtain key fields;
a recognition module, configured to judge whether the key fields comprise a preset route inquiry field;
wherein the voice acquisition module is further configured to acquire new user voice information when the key fields do not comprise the preset route inquiry field;
a rotation module, configured to rotate the camera to face the target direction when the key fields comprise the preset route inquiry field; the target direction is the direction from which the user voice information containing the preset route inquiry field originates;
and a navigation object determination module, configured to determine, within the preset acquisition range corresponding to the camera, that the user with the largest image size in the captured frame is the navigation object.
CN201810565075.2A 2018-06-04 2018-06-04 Navigation object traveling route reminding method and system Active CN109000634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810565075.2A CN109000634B (en) 2018-06-04 2018-06-04 Navigation object traveling route reminding method and system

Publications (2)

Publication Number Publication Date
CN109000634A (en) 2018-12-14
CN109000634B (en) 2022-06-03

Family

ID=64573653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810565075.2A Active CN109000634B (en) 2018-06-04 2018-06-04 Navigation object traveling route reminding method and system

Country Status (1)

Country Link
CN (1) CN109000634B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197461B (en) * 2019-06-06 2022-12-30 上海木木聚枞机器人科技有限公司 Coordinate conversion relation determining method, device, equipment and storage medium
CN110926476B (en) * 2019-12-04 2023-09-01 三星电子(中国)研发中心 Accompanying service method and device for intelligent robot

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW550256B (en) * 1996-09-06 2003-09-01 Asta Pharman Ag N-substituted indole-3-glyoxylamides having anti-asthmatic, antiallergic and immunosuppressant/immuno-modulating action
CN103674038A (en) * 2013-12-04 2014-03-26 奇瑞汽车股份有限公司 Navigation system based on combination of local navigation and on-line navigation, and navigation method
CN103888719A (en) * 2012-12-21 2014-06-25 索尼公司 Display control system and recording medium
CN105004343A (en) * 2015-07-27 2015-10-28 上海美琦浦悦通讯科技有限公司 Indoor wireless navigation system and method
CN105975930A (en) * 2016-05-04 2016-09-28 南靖万利达科技有限公司 Camera angle calibration method during robot speech localization process
CN107316317A (en) * 2017-05-23 2017-11-03 深圳市深网视界科技有限公司 A kind of pedestrian's multi-object tracking method and device
CN206786674U (en) * 2017-06-17 2017-12-22 乌鲁木齐中亚环地卫星科技服务有限公司 It is a kind of can sound positioning intelligent lamp cap

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI550256B (en) * 2015-04-01 2016-09-21 國立臺灣大學 Bim-based indoor navigation method, indoor navigation information generation method, computer readable recording medium, and indoor navigation apparatus

Similar Documents

Publication Publication Date Title
He et al. Bounding box regression with uncertainty for accurate object detection
US10706285B2 (en) Automatic ship tracking method and system based on deep learning network and mean shift
US10614310B2 (en) Behavior recognition
CN107967473B (en) Robot autonomous positioning and navigation based on image-text recognition and semantics
CN103208008B (en) Based on the quick adaptive method of traffic video monitoring target detection of machine vision
CN109919977B (en) Video motion person tracking and identity recognition method based on time characteristics
CN108803617A (en) Trajectory predictions method and device
CN111797657A (en) Vehicle peripheral obstacle detection method, device, storage medium, and electronic apparatus
CN110781964A (en) Human body target detection method and system based on video image
US20220180534A1 (en) Pedestrian tracking method, computing device, pedestrian tracking system and storage medium
CN111860352B (en) Multi-lens vehicle track full tracking system and method
CN103677274A (en) Interactive projection method and system based on active vision
CN112949366B (en) Obstacle identification method and device
CN109000634B (en) Navigation object traveling route reminding method and system
CN113406659A (en) Mobile robot position re-identification method based on laser radar information
CN115376034A (en) Motion video acquisition and editing method and device based on human body three-dimensional posture space-time correlation action recognition
US20170053172A1 (en) Image processing apparatus, and image processing method
CN111353429A (en) Interest degree method and system based on eyeball turning
CN112989889A (en) Gait recognition method based on posture guidance
Hilario et al. Pedestrian detection for intelligent vehicles based on active contour models and stereo vision
CN113269038A (en) Multi-scale-based pedestrian detection method
CN112613668A (en) Scenic spot dangerous area management and control method based on artificial intelligence
CN116912763A (en) Multi-pedestrian re-recognition method integrating gait face modes
CN112541403B (en) Indoor personnel falling detection method by utilizing infrared camera
CN115880332A (en) Target tracking method for low-altitude aircraft visual angle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 200335 402 rooms, No. 33, No. 33, Guang Shun Road, Shanghai

Applicant after: Shanghai Zhihui Medical Technology Co.,Ltd.

Address before: 200335 402 rooms, No. 33, No. 33, Guang Shun Road, Shanghai

Applicant before: SHANGHAI MROBOT TECHNOLOGY Co.,Ltd.

Address after: 200335 402 rooms, No. 33, No. 33, Guang Shun Road, Shanghai

Applicant after: Shanghai zhihuilin Medical Technology Co.,Ltd.

Address before: 200335 402 rooms, No. 33, No. 33, Guang Shun Road, Shanghai

Applicant before: Shanghai Zhihui Medical Technology Co.,Ltd.

GR01 Patent grant
CP03 Change of name, title or address

Address after: 202150 room 205, zone W, second floor, building 3, No. 8, Xiushan Road, Chengqiao Town, Chongming District, Shanghai (Shanghai Chongming Industrial Park)

Patentee after: Shanghai Noah Wood Robot Technology Co.,Ltd.

Address before: 200335 402 rooms, No. 33, No. 33, Guang Shun Road, Shanghai

Patentee before: Shanghai zhihuilin Medical Technology Co.,Ltd.
