CN115993829A - Machine dog blind guiding movement control method based on blind road recognition - Google Patents
Machine dog blind guiding movement control method based on blind road recognition
- Publication number
- CN115993829A (application number CN202310272896.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- blind
- machine dog
- pixel
- dog
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
Abstract
The invention discloses a blind-guiding motion control method for a robot dog based on blind road (tactile paving) recognition, comprising the following steps: a fisheye camera on the robot dog acquires a fisheye image of the blind road, and a fisheye image correction module corrects the fisheye image to obtain an image under a normal lens; graying, binarization and erosion are applied in sequence to the image under the normal lens, the contour of the blind road in the image is extracted, and the blind-road center points in the image are determined; the pixel difference vector formed by a blind-road center point and the image center point is calculated, and a robot-dog blind-guiding motion control module based on a proportional-integral-derivative (PID) controller issues steering speed commands along the blind road in real time according to the pixel difference vector. The invention suits continuous movement of the robot dog along the blind road and improves the robot dog's perception of the blind road, thereby further improving the safety of robot-dog blind guiding.
Description
Technical Field
The invention belongs to the technical field of mobile robot navigation, and in particular relates to a blind-guiding motion control method for a robot dog based on blind road recognition.
Background
According to statistics from the China Association of the Blind, there are about 17 million blind people in China. As urban traffic keeps developing, safe travel for the blind has become a major social problem. A guide dog can help a blind person travel well, but the training cost of guide dogs is extremely high and their number is very small, so most blind people cannot be equipped with one. Compared with a guide dog, a robot dog is inexpensive, and its control algorithm can be replicated at scale, making it available to a large number of blind people. The blind-guiding motion control method for robot dogs has therefore become a current research hotspot. To realize the blind-guiding function on a robot dog, the robot dog must recognize the blind road from the images read by its camera and then walk continuously and stably along it. The stability and anti-interference capability of the robot dog directly affect the life safety of the user.
The existing robot-dog blind-guiding motion control algorithms mainly have the following problems: (1) An ordinary camera is used, whose field of view is too small, so the robot dog works only when it is already close to the blind road; in practical applications, the robot dog must be able to recognize and approach the blind road even when it is some distance away. (2) Anti-interference capability is poor: interference from complex road-surface conditions cannot be fully eliminated from blind road recognition, and once the blind road is misidentified, a huge safety hazard arises. (3) With linear motion control, the robot dog easily oscillates and swings on the blind road, which is unhelpful for guiding the blind person.
Because of these problems, existing blind-guiding robot dogs are difficult to popularize. If the problems can be solved, a safe and stable robot-dog control method would address the safe-travel problem of the blind and bring high social value.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a simple and effective robot-dog blind-guiding motion control method based on blind road identification. It exploits the color difference between the blind road and the surrounding environment in the robot dog's visual image, identifies the blind road region through binarization processing, analyzes the blind-road path with a PID algorithm, and realizes effective motion control of the robot dog on the blind road, with great application prospects.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
a machine dog blind guiding movement control method based on blind road identification comprises the following steps:
step 1: a fisheye camera on the robot dog acquires a fisheye image of the blind road, and a fisheye image correction module corrects the fisheye image to obtain an image under a normal lens;
step 2: graying, binarization and erosion are applied in sequence to the image under the normal lens, the contour of the blind road in the image is extracted, and the blind-road center points in the image are determined;
step 3: the pixel difference vector formed by a blind-road center point and the image center point is calculated, and a robot-dog blind-guiding motion control module based on a proportional-integral-derivative (PID) controller issues steering speed commands along the blind road in real time according to the pixel difference vector.
In order to optimize the technical scheme, the specific measures adopted further comprise:
the fisheye image correction module in step 1 performs fisheye image correction according to a pixel mapping relation in which a pixel position (u_f, v_f) on the fisheye image corresponds to the pixel coordinate (u, v) in the normal image coordinate system; f is the focal length of the fisheye lens, and k is the distortion coefficient of the fisheye lens.
The graying of the image under the normal lens in step 2 includes:
taking a weighted average of the three R, G, B components of the image to obtain a grayscale image, e.g. with the standard luma weights: Gray = 0.299R + 0.587G + 0.114B.
The binarization in step 2 of the grayscale image obtained by graying includes:
dividing the grayscale image into a background part and a foreground part based on a threshold T, realizing binarization of the grayscale image.
Determining the threshold T for binarization of the grayscale image using the inter-class variance includes:
For each gray level t of the grayscale image I, assume it to be the threshold T. Let the image size be M×N, let N0 denote the number of pixels whose gray value is smaller than T, and let N1 denote the number of pixels whose gray value is greater than or equal to T. The proportion of pixels belonging to the foreground over the whole image is ω0 = N1/(M×N), with average gray μ0; the proportion of pixels belonging to the background is ω1 = N0/(M×N), with average gray μ1; the total average gray of the image is μ, and the inter-class variance is σ². Then:
μ = ω0·μ0 + ω1·μ1    (5)
σ² = ω0·(μ0 − μ)² + ω1·(μ1 − μ)²    (6)
Substituting formula (5) into formula (6) gives the equivalent formula:
σ² = ω0·ω1·(μ0 − μ1)²
The gray level t that maximizes the inter-class variance σ² is the threshold T for binarization of the grayscale image.
The erosion treatment in step 2 refers to eroding the binarized image to filter out white regions whose area is smaller than a set value.
Step 2 calculates the gradient of each pixel of the eroded image; positions where the gradient is greater than 0 are the pixels on the blind-road contour. In the image coordinate system, the y axis is divided at equal intervals, and the center pixel of the contour pixels in each interval is calculated as a blind-road center point.
In step 3, the robot-dog blind-guiding motion control module based on the proportional-integral-derivative controller obtains the position deviation of the robot dog from the blind-guiding center according to the pixel difference vector, combines it with a time interval constant τ to obtain the steering speed of the robot dog at the next moment, and issues steering speed commands along the blind road in real time.
The formula by which the PID-based blind-guiding motion control module obtains the position deviation of the robot dog from the blind-guiding center from the pixel difference vector is the discrete PID law:
e(t) = Kp·d(t) + Ki·Σ_{j=0..t} d(j) + Kd·[d(t) − d(t−1)]
where e(t) is the position deviation of the robot dog from the blind-guiding center calculated by the controller at time t;
d(t) is the pixel difference vector at time t, formed by the blind-road center point with the largest pixel ordinate and the image center point;
Kp, Ki, Kd are the three tunable parameters of the controller, namely the proportional gain, integral gain and derivative gain.
Step 3 sets the time interval constant τ and calculates the steering speed of the robot dog at the next moment according to the position deviation vector e(t) of the robot dog from the blind-road center.
The invention has the following beneficial effects:
By adopting a fisheye lens, the invention observes more environment information and can cope with more complex use environments; PID control effectively avoids left-right swinging of the robot dog and effectively strengthens the anti-interference capability.
The blind-guiding speed control method suits continuous movement of the robot dog along the blind road and improves the robot dog's perception of the blind road, thereby further improving the safety of robot-dog blind guiding.
Drawings
Fig. 1 is a flowchart of a machine dog blind guiding movement control method based on blind road recognition according to an embodiment of the invention;
fig. 2 is a schematic diagram of a blind-road scene image obtained by the robot dog's fisheye camera, the corresponding correction result, and the image binarization result, according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a blind track contour extraction and center point determination result according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Although the steps of the present invention are numbered, the numbering does not limit their order; the relative order of the steps may be adjusted unless an order is explicitly stated or the execution of one step requires another as its basis. It is to be understood that the term "and/or" as used herein relates to and encompasses any and all possible combinations of one or more of the associated listed items.
As shown in fig. 1-3, a machine dog blind guiding movement control method based on blind road identification includes:
step 1: a fisheye camera on the robot dog obtains a blind road image, namely a fisheye image, and a fisheye image correction module corrects the fisheye image to obtain an image under a normal lens;
step 2: graying, binarization and erosion are applied in sequence to the image under the normal lens, the contour of the blind road in the image is extracted, and the positions of the blind-road center points in the image are determined;
step 3: the pixel difference vector formed by a blind-road center point and the image center point is calculated, and the robot-dog blind-guiding motion control module based on a proportional-integral-derivative controller issues steering speed commands along the blind road in real time according to the pixel difference vector.
In an embodiment, the fisheye image correction module in step 1 first determines the pixel mapping relation between the fisheye image acquired by the fisheye camera and the image under a normal lens; the image acquired by the fisheye camera is a fisheye image, and the image under the normal lens is an image in the normal coordinate system. The fisheye image is then corrected according to this pixel mapping relation.
The pixel mapping relation is specifically as follows:
For any pixel position (u_f, v_f) on the acquired fisheye image, the corresponding pixel coordinate in the normal image coordinate system is (u, v); the pixel mapping relation between the two is determined according to the fisheye camera imaging principle,
where f is the focal length of the fisheye lens and k is the distortion coefficient of the fisheye lens, both being intrinsic parameters of the fisheye camera.
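As a concrete illustration of such a correction, the remapping can be sketched with a generic equidistant fisheye model. The function below is a hedged sketch, not the patent's (unreproduced) formula: the single-coefficient distortion term k, the function name, and all default values are assumptions. For every pixel of the desired normal (pinhole) image it computes the source coordinate to sample from the fisheye image:

```python
import numpy as np

def fisheye_to_normal_map(h, w, f, k, cx=None, cy=None):
    """For each pixel (u, v) of the desired normal (pinhole) image, compute the
    source coordinate (u_f, v_f) on the fisheye image under an equidistant
    projection model with a single distortion coefficient k (assumed model)."""
    cx = w / 2.0 if cx is None else cx
    cy = h / 2.0 if cy is None else cy
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    du, dv = u - cx, v - cy
    r = np.hypot(du, dv)                      # radius in the pinhole image
    theta = np.arctan2(r, f)                  # incidence angle of the ray
    r_f = f * theta * (1.0 + k * theta ** 2)  # distorted (fisheye) radius
    scale = np.where(r > 0, r_f / np.maximum(r, 1e-12), 1.0)
    return cx + du * scale, cy + dv * scale   # sampling maps into fisheye image

map_x, map_y = fisheye_to_normal_map(480, 640, f=300.0, k=0.1)
```

The two maps could then be fed to any bilinear remapping routine; pixels near the border of the normal image sample closer to the fisheye image center, which is what undoes the fisheye compression.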
In an embodiment, the step 2 specifically includes:
step 21, graying the image under the normal lens:
After correction of the fisheye image is completed, the normal image data needs to be grayed: the three R, G, B components are averaged with different weights according to their importance.
Since the human eye is most sensitive to green and least sensitive to blue, the green component receives the largest weight and the blue component the smallest; a weighted average of the three components then gives a reasonable grayscale image (e.g. the standard luma weighting Gray = 0.299R + 0.587G + 0.114B).
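The weighting described above can be sketched as follows; the specific BT.601 weights (0.299, 0.587, 0.114) are the common standard choice matching the eye-sensitivity argument and are assumed here, since the patent text does not reproduce its exact formula:

```python
import numpy as np

def to_gray(rgb):
    """Weighted average of the R, G, B channels using the standard BT.601
    luma weights (assumed). Green gets the largest weight, blue the
    smallest, matching the eye-sensitivity argument in the text."""
    weights = np.array([0.299, 0.587, 0.114])
    return np.rint(rgb.astype(np.float64) @ weights).astype(np.uint8)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (255, 255, 255)   # white pixel
img[0, 1] = (0, 255, 0)       # pure green pixel
gray = to_gray(img)
```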
Step 22, binarizing the gray level image obtained by graying:
For each gray level t of the grayscale image I, assume it to be the threshold T. Let the image size be M×N, let N0 denote the number of pixels whose gray value is smaller than T, and let N1 denote the number of pixels whose gray value is greater than or equal to T. The proportion of pixels belonging to the foreground over the whole image is ω0 = N1/(M×N), with average gray μ0; the proportion of pixels belonging to the background is ω1 = N0/(M×N), with average gray μ1; the total average gray of the image is μ, and the inter-class variance is σ². Then:
μ = ω0·μ0 + ω1·μ1    (5)
σ² = ω0·(μ0 − μ)² + ω1·(μ1 − μ)²    (6)
Substituting formula (5) into formula (6) gives the equivalent formula:
σ² = ω0·ω1·(μ0 − μ1)²
The gray level t that maximizes the inter-class variance σ² is taken as the threshold T.
(2) The grayscale image is divided into a background part (black) and a foreground part (white) based on the threshold T, achieving binarization of the grayscale image, as shown in fig. 2(c).
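Step 22 can be sketched directly from formulas (5)-(6): exhaustively search the gray level that maximizes the inter-class variance, then threshold. The toy image and all names below are illustrative:

```python
import numpy as np

def otsu_threshold(gray):
    """Search the gray level T that maximizes the inter-class variance
    sigma^2 = w0*w1*(mu0 - mu1)^2, the equivalent form of (5)-(6)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum() / total   # proportion of pixels below T
        w1 = 1.0 - w0                 # proportion of pixels >= T
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (hist[:t] * np.arange(t)).sum() / (w0 * total)
        mu1 = (hist[t:] * np.arange(t, 256)).sum() / (w1 * total)
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# toy bimodal image: dark pavement around a bright blind-road stripe
gray = np.full((20, 20), 40, dtype=np.uint8)
gray[:, 8:12] = 200
T = otsu_threshold(gray)
binary = (gray >= T).astype(np.uint8) * 255   # foreground white, background black
```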
Step 23, erosion: the binarized image is eroded to filter out white regions of small area.
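A minimal erosion sketch in pure NumPy, assuming a 3x3 structuring element (the patent does not specify one); small white specks vanish while the large blind-road block survives, shrunk by one pixel per pass:

```python
import numpy as np

def erode(binary, iterations=1):
    """3x3 binary erosion: a pixel stays white only if its whole 3x3
    neighborhood is white, so thin noise specks disappear."""
    img = (binary > 0).astype(np.uint8)
    for _ in range(iterations):
        padded = np.pad(img, 1, mode="constant")
        # minimum over the 3x3 neighborhood via the nine shifted views
        stack = [padded[i:i + img.shape[0], j:j + img.shape[1]]
                 for i in range(3) for j in range(3)]
        img = np.minimum.reduce(stack)
    return img * 255

noisy = np.zeros((10, 10), dtype=np.uint8)
noisy[2:8, 2:8] = 255   # large blind-road block
noisy[0, 9] = 255       # isolated 1-pixel speck
clean = erode(noisy)
```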
Step 24, after a cleaner black-and-white binary image is obtained, the blind-road contour is extracted and the positions of the blind-road center points in the image are determined: the gradient of each pixel of the eroded image is calculated, and positions where the gradient is greater than 0 are the pixels on the blind-road contour; in the image coordinate system, the y axis is divided at equal intervals and the center pixel of the contour pixels in each interval is calculated as a blind-road center point. The center-point analysis result is shown in fig. 3.
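Step 24 can be sketched as follows; the gradient test and the per-interval averaging follow the description, while the interval count and all names are assumed for illustration:

```python
import numpy as np

def blind_road_centers(binary, n_intervals=5):
    """Mark contour pixels where the gradient magnitude is > 0, then split
    the y axis into equal intervals and take the mean contour position in
    each interval as a blind-road center point (y, x)."""
    gy, gx = np.gradient(binary.astype(np.float64))
    contour = np.hypot(gx, gy) > 0
    h = binary.shape[0]
    centers = []
    for i in range(n_intervals):
        y0, y1 = i * h // n_intervals, (i + 1) * h // n_intervals
        ys, xs = np.nonzero(contour[y0:y1])
        if xs.size:                       # skip intervals with no contour
            centers.append((y0 + ys.mean(), xs.mean()))
    return centers

binary = np.zeros((20, 20), dtype=np.uint8)
binary[:, 8:12] = 255                     # vertical blind-road stripe
centers = blind_road_centers(binary)      # x centers sit mid-stripe
```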
In an embodiment, in step 3 the robot-dog blind-guiding motion control module based on the proportional-integral-derivative controller obtains the position deviation of the robot dog from the blind-guiding center according to the pixel difference vector, combines it with a time interval constant τ to obtain the steering speed of the robot dog at the next moment, and issues steering speed commands along the blind road in real time;
the formula by which the PID-based blind-guiding motion control module obtains the position deviation of the robot dog from the blind-guiding center from the pixel difference vector is the discrete PID law:
e(t) = Kp·d(t) + Ki·Σ_{j=0..t} d(j) + Kd·[d(t) − d(t−1)]
where e(t) is the position deviation of the robot dog from the blind-guiding center calculated by the controller at time t; d(t) is the pixel difference vector at time t, formed by the blind-road center point with the largest pixel ordinate (obtained from the result of step 2) and the image center point;
Kp, Ki, Kd are the three tunable parameters of the controller, namely the proportional gain, integral gain and derivative gain, obtained through continuous tuning on the blind-guiding site.
To ensure that the robot dog can travel along the blind road, the position deviation vector e(t) should be driven as close to 0 as possible; the user sets the time interval constant τ, and the steering speed of the robot dog at the next moment is calculated from the position deviation vector e(t) of the robot dog from the blind-road center.
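Under the reading above, the control loop can be sketched as a small discrete PID whose output is spread over one control interval τ. The gain values, and the division by τ that turns a positional correction into a steering speed, are illustrative guesses; the patent gives no numeric parameters:

```python
class BlindRoadPID:
    """Discrete PID on the pixel deviation d between the nearest blind-road
    center point and the image center. Gains and the e/tau step are
    assumptions for illustration only."""

    def __init__(self, kp=0.8, ki=0.05, kd=0.2, tau=0.1):
        self.kp, self.ki, self.kd, self.tau = kp, ki, kd, tau
        self.integral = 0.0   # running sum of deviations (integral term)
        self.prev = 0.0       # previous deviation (for the derivative term)

    def steering_speed(self, deviation_px):
        self.integral += deviation_px
        derivative = deviation_px - self.prev
        self.prev = deviation_px
        e = (self.kp * deviation_px + self.ki * self.integral
             + self.kd * derivative)
        return e / self.tau   # spread the correction over one interval tau

pid = BlindRoadPID()
w = pid.steering_speed(10.0)   # blind road 10 px right of image center
```

A positive deviation yields a positive steering speed toward the blind road; once the deviation returns to zero, the derivative term actively damps the motion, which is how PID control suppresses the left-right swinging mentioned in the background section.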
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although this description is organized by embodiments, each embodiment does not necessarily contain only one independent technical solution; the description is organized this way merely for clarity, and the embodiments may be combined as appropriate to form other implementations understandable to those skilled in the art.
Claims (10)
1. A machine dog blind guiding movement control method based on blind road identification, characterized by comprising the following steps:
step 1: a fisheye camera on the robot dog acquires a fisheye image of the blind road, and a fisheye image correction module corrects the fisheye image to obtain an image under a normal lens;
step 2: graying, binarization and erosion are applied in sequence to the image under the normal lens, the contour of the blind road in the image is extracted, and the blind-road center points in the image are determined;
step 3: the pixel difference vector formed by a blind-road center point and the image center point is calculated, and a robot-dog blind-guiding motion control module based on a proportional-integral-derivative controller issues steering speed commands along the blind road in real time according to the pixel difference vector.
2. The method for controlling blind guiding movement of a machine dog based on blind road recognition according to claim 1, wherein the fisheye image correction module in step 1 performs fisheye image correction according to a pixel mapping relation in which a pixel position (u_f, v_f) on the fisheye image corresponds to the pixel coordinate (u, v) in the normal image coordinate system, f being the focal length of the fisheye lens and k its distortion coefficient.
3. The method for controlling blind guiding movement of a machine dog based on blind road recognition according to claim 1, wherein the graying of the image under the normal lens in step 2 comprises:
taking a weighted average of the three R, G, B components of the image to obtain a grayscale image.
4. The method for controlling blind guiding movement of a machine dog based on blind road recognition according to claim 1, wherein the binarization in step 2 of the grayscale image obtained by graying comprises:
dividing the grayscale image into a background part and a foreground part based on a threshold T, realizing binarization of the grayscale image.
5. The method for controlling blind guiding movement of a machine dog based on blind road recognition according to claim 4, wherein determining the threshold T for binarization of the grayscale image using the inter-class variance comprises:
for each gray level t of the grayscale image I, assuming it to be the threshold T: let the image size be M×N, let N0 denote the number of pixels whose gray value is smaller than T, and let N1 denote the number of pixels whose gray value is greater than or equal to T; the proportion of pixels belonging to the foreground over the whole image is ω0 = N1/(M×N), with average gray μ0; the proportion of pixels belonging to the background is ω1 = N0/(M×N), with average gray μ1; the total average gray of the image is μ, and the inter-class variance is σ²; then:
μ = ω0·μ0 + ω1·μ1    (5)
σ² = ω0·(μ0 − μ)² + ω1·(μ1 − μ)²    (6)
substituting formula (5) into formula (6) gives the equivalent formula:
σ² = ω0·ω1·(μ0 − μ1)²
and the gray level t that maximizes σ² is taken as the threshold T.
6. The method for controlling blind guiding movement of a machine dog based on blind road recognition according to claim 1, wherein the erosion treatment in step 2 is to erode the binarized image to filter out white areas whose area is smaller than a set value.
7. The method for controlling blind guiding movement of a machine dog based on blind road recognition according to claim 1, wherein step 2 calculates the gradient of each pixel of the eroded image, positions where the gradient is greater than 0 being the pixels on the blind-road contour; in the image coordinate system, the y axis is divided at equal intervals, and the center pixel of the contour pixels in each interval is calculated as a blind-road center point.
8. The method for controlling blind guiding movement of a machine dog based on blind road recognition according to claim 1, wherein the robot-dog blind-guiding motion control module based on the proportional-integral-derivative controller in step 3 obtains the position deviation of the robot dog from the blind-guiding center according to the pixel difference vector, combines it with a time interval constant τ to obtain the steering speed of the robot dog at the next moment, and issues steering speed instructions along the blind road in real time.
9. The method for controlling blind guiding movement of a machine dog based on blind road recognition according to claim 8, wherein the formula by which the robot-dog blind-guiding motion control module based on the proportional-integral-derivative controller obtains the position deviation of the robot dog from the blind-guiding center from the pixel difference vector is the discrete PID law:
e(t) = Kp·d(t) + Ki·Σ_{j=0..t} d(j) + Kd·[d(t) − d(t−1)]
where e(t) is the position deviation of the robot dog from the blind-guiding center calculated by the controller at time t; d(t) is the pixel difference vector at time t, formed by the blind-road center point with the largest pixel ordinate and the image center point; and Kp, Ki, Kd are the proportional gain, integral gain and derivative gain of the controller.
10. The method for controlling blind guiding movement of a machine dog based on blind road recognition according to claim 1, wherein step 3 sets a time interval constant τ and calculates the steering speed of the robot dog at the next moment according to the position deviation vector e(t) of the robot dog from the blind-road center.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310272896.8A CN115993829A (en) | 2023-03-21 | 2023-03-21 | Machine dog blind guiding movement control method based on blind road recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115993829A true CN115993829A (en) | 2023-04-21 |
Family
ID=85992252
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310272896.8A Pending CN115993829A (en) | 2023-03-21 | 2023-03-21 | Machine dog blind guiding movement control method based on blind road recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115993829A (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102509098A (en) * | 2011-10-08 | 2012-06-20 | 天津大学 | Fisheye image vehicle identification method |
CN104173174A (en) * | 2014-08-22 | 2014-12-03 | 河海大学常州校区 | Portable walking aid blind guide device and method |
CN104573655A (en) * | 2015-01-09 | 2015-04-29 | 安徽清新互联信息科技有限公司 | Blind sidewalk direction detection method based on video |
US20180150944A1 (en) * | 2016-01-18 | 2018-05-31 | Shenzhen Arashi Vision Company Limited | Method and Device For Rectifying Image Photographed by Fish-Eye Lens |
CN109726681A (en) * | 2018-12-29 | 2019-05-07 | 北京航空航天大学 | It is a kind of that location algorithm is identified based on the blind way of machine learning identification and image segmentation |
CN110262495A (en) * | 2019-06-26 | 2019-09-20 | 山东大学 | Mobile robot autonomous navigation and pinpoint control system and method can be achieved |
CN110448436A (en) * | 2019-07-29 | 2019-11-15 | 清华大学 | A kind of intelligent guiding walking stick for blind person and blind-guiding method with blind way detection positioning function |
CN110488823A (en) * | 2019-08-15 | 2019-11-22 | 华南理工大学 | A kind of novel intelligent line walking trolley control method |
CN112274399A (en) * | 2020-10-25 | 2021-01-29 | 贵州大学 | Intelligent sensing machine blind guiding control method, storage medium, system and device |
CN113075926A (en) * | 2021-03-15 | 2021-07-06 | 南通大学 | Blind guiding robot dog based on artificial intelligence |
CN113813146A (en) * | 2021-09-30 | 2021-12-21 | 紫清智行科技(北京)有限公司 | Outdoor blind guiding method and system based on combination of navigation and blind track tracking |
CN115416047A (en) * | 2022-09-02 | 2022-12-02 | 北京化工大学 | Blind assisting system and method based on multi-sensor quadruped robot |
Non-Patent Citations (3)
- 刘雪峰 et al.: "Research on multi-angle fisheye image correction methods" (鱼眼图像多角度校正方法研究), Modern Electronics Technique (《现代电子技术》), vol. 45, no. 14, pages 89-94 *
- 徐燕丽: "A real-time correction algorithm for fisheye-lens distorted images" (一种鱼眼镜头畸变图像实时校正算法), Information & Communications (信息通信), no. 2016, pages 25-26 *
- 程增木: "A Complete Primer on Intelligent Connected Vehicle Technology, Color Edition" (《智能网联汽车技术入门一本通 彩色版》), China Machine Press (机械工业出版社), pages 124-128 *
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20230421 |