CN115993829A - Machine dog blind guiding movement control method based on blind road recognition - Google Patents


Info

Publication number
CN115993829A
Authority
CN
China
Prior art keywords
image
blind
machine dog
pixel
dog
Prior art date
Legal status
Pending
Application number
CN202310272896.8A
Other languages
Chinese (zh)
Inventor
吴巧云
周云
许柯
胡晓洁
王宇柯
余永泰
Current Assignee
Anhui University
Original Assignee
Anhui University
Application filed by Anhui University filed Critical Anhui University
Priority to CN202310272896.8A
Publication of CN115993829A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a blind-guiding motion control method for a robot dog based on blind road (tactile paving) recognition, comprising the following steps: a fisheye camera on the robot dog captures a fisheye image of the blind road, and a fisheye-image correction module rectifies it into an image under a normal lens; the rectified image is grayed, binarized and eroded in sequence, the contour of the blind road is extracted, and the blind-road center points in the image are determined; a pixel difference vector between the blind-road center point and the image center point is computed, and a blind-guiding motion control module based on a proportional-integral-derivative (PID) controller issues, in real time, steering-speed commands that keep the robot dog moving along the blind road. The method enables the robot dog to move continuously along the blind road and improves its perception of the blind road, thereby further improving the safety of blind guiding.

Description

Machine dog blind guiding movement control method based on blind road recognition
Technical Field
The invention belongs to the technical field of mobile robot navigation, and particularly relates to a robot dog blind guiding motion control method based on blind road identification.
Background
According to statistics from the China Association of the Blind, there are about 17 million blind people in China. As urban traffic keeps developing, safe travel for the blind has become a major social problem. A guide dog can greatly help a blind person travel, but training costs are extremely high and guide dogs are scarce, so most blind people cannot be equipped with one. Compared with a guide dog, a robot dog is inexpensive, and its control algorithm can be replicated at scale, so robot dogs can be provided to a large number of blind people. Therefore, blind-guiding motion control for robot dogs has become a current research hotspot. To realize blind guiding on a robot dog, the robot must recognize the blind road in the image read from its camera and then walk continuously and stably along it. The stability and interference resistance of the robot dog directly affect the user's safety.
Existing blind-guiding motion control algorithms for robot dogs mainly have the following problems: (1) They use an ordinary camera whose field of view is too small, so the robot dog works only when it is already close to the blind road; in practice, the robot dog should be able to recognize and approach the blind road from some distance away. (2) Their interference resistance is poor: interference from complex road-surface conditions on blind-road recognition cannot be fully eliminated, and a recognition error brings a huge safety hazard. (3) With linear motion control, the robot dog easily oscillates and swings on the blind road, which is bad for guiding a blind person.
Because of these problems, existing blind-guiding robot dogs are difficult to popularize. Solving them would provide a safe and stable control method for robot dogs, address the problem of safe travel for the blind, and bring high social value.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a simple and effective blind-guiding motion control method for a robot dog based on blind-road recognition. It exploits the color difference between the blind road and its surroundings in the robot dog's visual image, identifies the extent of the blind road through binarization, analyzes the blind-road path, and uses a PID algorithm to realize effective motion control of the robot dog on the blind road; it therefore has great application prospects.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
a machine dog blind guiding movement control method based on blind road identification comprises the following steps:
step 1: the method comprises the steps that a fisheye camera of a robot dog acquires a fisheye image of a blind road, and a fisheye image correction module corrects the fisheye image to obtain an image under a normal lens;
step 2: sequentially carrying out graying, binarization and corrosion treatment on the image under the normal lens, extracting the outline of the blind road in the image and determining the central point of the blind road in the image;
step 3: and calculating a pixel difference vector formed by the blind road center point and the image center point, and issuing a steering speed instruction of the machine dog along the blind road in real time by a machine dog blind guiding motion control module based on the proportional-integral-derivative controller according to the pixel difference vector.
In order to optimize the technical scheme, the specific measures adopted further comprise:
the fisheye image correction module in step 1 performs fisheye image correction according to the pixel mapping relation

$(u, v) = \Phi_{f,k}(u', v')$

in the formula, $(u', v')$ is any pixel position on the fisheye image; $(u, v)$ is the pixel coordinate in the normal image coordinate system corresponding to the pixel position $(u', v')$; $f$ is the focal length of the fisheye lens, and $k$ is the distortion coefficient of the fisheye lens.
The graying of the image under the normal lens in step 2 includes: weighting and averaging the three components $R$, $G$, $B$ of the image $I$ according to the following formula to obtain a gray image:

$\mathrm{Gray}(i,j) = 0.299\,R(i,j) + 0.587\,G(i,j) + 0.114\,B(i,j)$

where $\mathrm{Gray}$ is the gray image obtained by graying, and $R(i,j)$, $G(i,j)$, $B(i,j)$ are the pixel values of the three components of image $I$ at pixel $(i,j)$.
The binarization in step 2 of the gray image obtained by the graying includes:
(1) determining the threshold $T$ for binarization of the gray image by using the Otsu method (maximum inter-class variance method);
(2) dividing the gray image into a background part and a foreground part based on the threshold $T$, realizing the binarization of the gray image.
Using the Otsu method to determine the threshold $T$ comprises: for each gray level $t$ of the gray image $I$, assume it is the threshold $T$; the proportion of the pixels belonging to the foreground in the whole image is recorded as $\omega_0$ and their average gray as $\mu_0$; the proportion of the pixels belonging to the background in the whole image is $\omega_1$ and their average gray is $\mu_1$; the total average gray of the image is recorded as $\mu$ and the inter-class variance as $g$; the size of the image is $M \times N$; the number of pixels whose gray value is smaller than the threshold $T$ is recorded as $N_1$, and the number of pixels whose gray value is larger than the threshold $T$ is recorded as $N_0$; then:

$\omega_0 = N_0 / (M \times N)$ (1)

$\omega_1 = N_1 / (M \times N)$ (2)

$N_0 + N_1 = M \times N$ (3)

$\omega_0 + \omega_1 = 1$ (4)

$\mu = \omega_0 \mu_0 + \omega_1 \mu_1$ (5)

$g = \omega_0 (\mu_0 - \mu)^2 + \omega_1 (\mu_1 - \mu)^2$ (6)

Substituting formula (5) into formula (6) yields the equivalent formula

$g = \omega_0 \omega_1 (\mu_0 - \mu_1)^2$ (7)

The gray level $t$ that maximizes the inter-class variance $g$ is the threshold $T$ for binarization of the gray image.
The erosion treatment in step 2 refers to performing morphological erosion on the binarized image to filter out white regions whose area is smaller than a set value.
Step 2 calculates the gradient of each pixel of the eroded image; the positions where the gradient is larger than 0 are the pixels where the blind-road contour is located. In the image coordinate system, the $y$ axis is divided at equal intervals, and the central pixel point of the contour pixels in each interval is calculated; these central pixel points are the blind-road center points.
The blind-guiding motion control module of the robot dog based on the proportional-integral-derivative controller in step 3 obtains the position deviation of the robot dog from the blind-guiding center according to the pixel difference vector, combines it with the time interval constant $\Delta t$ to obtain the steering speed of the robot dog at the next moment, and issues steering-speed instructions of the robot dog along the blind road in real time.
The formula by which the blind-guiding motion control module obtains the position deviation of the robot dog from the blind-guiding center according to the pixel difference vector is:

$s_t = s_{t-1} + e_t$

$u_t = K_p e_t + K_i s_t + K_d (e_t - e_{t-1})$

wherein, $u_t$ is the position deviation of the robot dog from the blind-guiding center calculated by the controller; $s_t$ is an intermediate parameter; $t$ is the current moment; $e_t$ is the pixel difference vector formed by the blind-road center point with the largest pixel coordinate at moment $t$ and the image center point; $K_p$, $K_i$ and $K_d$ are the three adaptation parameters of the controller, namely the proportional gain, the integral gain and the derivative gain.
Step 3 sets the time interval constant $\Delta t$ and calculates the steering speed $v_{t+1}$ of the robot dog at the next moment according to the position deviation vector $u_t$ of the robot dog from the center of the blind road.
The invention has the following beneficial effects:
according to the invention, more environment information can be observed by adopting the fisheye lens, more complex use environments can be dealt with, the pid control can effectively avoid the left and right swing of the robot dog, and the anti-interference capability is effectively enhanced.
The blind guiding speed control method is suitable for the machine dog to continuously move along the blind sidewalk, and improves the perception capability of the machine dog on the blind sidewalk, so that the safety of the machine dog in blind guiding is further improved.
Drawings
Fig. 1 is a flowchart of a machine dog blind guiding movement control method based on blind road recognition according to an embodiment of the invention;
fig. 2 is a schematic diagram of a blind guiding field image, a corresponding correction result and an image binarization result obtained by a robot dog fisheye camera according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a blind track contour extraction and center point determination result according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Although the steps of the present invention are arranged by reference numerals, the order of the steps is not limited, and the relative order of the steps may be adjusted unless the order of the steps is explicitly stated or the execution of a step requires other steps as a basis. It is to be understood that the term "and/or" as used herein relates to and encompasses any and all possible combinations of one or more of the associated listed items.
As shown in fig. 1-3, a machine dog blind guiding movement control method based on blind road identification includes:
step 1: the method comprises the steps that a fisheye camera of a robot dog obtains a blind road image, namely a fisheye image, and a fisheye image correction module corrects the fisheye image to obtain an image under a normal lens;
step 2: sequentially carrying out graying, binarization and corrosion treatment on the image under the normal lens, extracting the outline of the blind road in the image and determining the position of the central point of the blind road in the image;
step 3: and calculating a pixel difference vector formed by the blind road center point and the image center point, and issuing a steering speed instruction of the machine dog along the blind road in real time by a machine dog blind guiding motion control module based on the proportional-integral-derivative controller according to the pixel difference vector.
In an embodiment, the fisheye image correction module in step 1 first determines the pixel mapping relationship between the fisheye image obtained by the fisheye camera and the image under a normal lens; the image acquired by the fisheye camera is a fisheye image, and the image under the normal lens is an image under the normal coordinate system; the fisheye image is then corrected according to this pixel mapping relation.
The pixel mapping relation specifically comprises the following steps:
for any pixel point position on the acquired fisheye image
Figure SMS_48
The pixel coordinates of the point corresponding to the normal image coordinate system are +.>
Figure SMS_49
According to the fisheye camera imaging principle, the pixel mapping relation formula is determined as follows:
Figure SMS_50
in the formula ,
Figure SMS_51
is the focal length of the fish eye lens, < >>
Figure SMS_52
The distortion coefficient of the fisheye lens is the internal reference of the fisheye camera.
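The specific mapping formula is rendered as an image in the published text, so only the roles of $f$ and $k$ are recoverable here. As an illustration only, the sketch below assumes one common single-coefficient fisheye model, the equidistant projection $r_d = f\,\theta\,(1 + k\theta^2)$ with $\theta = \arctan(r/f)$; the model choice and all function names are assumptions, not necessarily the patent's exact formula.

```python
import math

def fisheye_source_pixel(u, v, f, k, cx, cy):
    """For a target pixel (u, v) in the corrected (pinhole) image, return the
    source pixel (u', v') in the fisheye image, under an assumed equidistant
    model r_d = f * theta * (1 + k * theta**2), theta = atan(r / f)."""
    dx, dy = u - cx, v - cy
    r = math.hypot(dx, dy)
    if r == 0:
        return (cx, cy)                      # principal point maps to itself
    theta = math.atan2(r, f)                 # angle of the incoming ray
    r_d = f * theta * (1 + k * theta ** 2)   # distorted radius on the sensor
    s = r_d / r
    return (cx + dx * s, cy + dy * s)

def rectify(fisheye_img, w, h, f, k, cx, cy):
    """Build the corrected image by nearest-neighbour lookup into the fisheye
    image (represented as a dict {(x, y): value} for this illustration)."""
    out = {}
    for v in range(h):
        for u in range(w):
            su, sv = fisheye_source_pixel(u, v, f, k, cx, cy)
            out[(u, v)] = fisheye_img.get((round(su), round(sv)), 0)
    return out
```

In a real system this per-pixel map would be precomputed once and applied to every frame, since $f$, $k$ and the image size do not change.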
In an embodiment, the step 2 specifically includes:
step 21, graying the image under the normal lens:
after the conversion of the fisheye image is completed, the normal image data needs to be grayed, and the three components are weighted and averaged by different weights according to importance and other indexes.
For humans, the eye is most sensitive to green and least sensitive to blue; the three components $R$, $G$, $B$ of the image $I$ are therefore weighted and averaged according to the following formula to obtain a reasonable gray image:

$\mathrm{Gray}(i,j) = 0.299\,R(i,j) + 0.587\,G(i,j) + 0.114\,B(i,j)$

where $\mathrm{Gray}$ is the gray image obtained by graying, and $R(i,j)$, $G(i,j)$, $B(i,j)$ are the pixel values of the three components of image $I$ at pixel $(i,j)$.
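The weighted average above can be sketched directly; the 0.299/0.587/0.114 weights are the standard luminance weights consistent with the green-sensitivity remark (the patent's exact weights are rendered as an image, so these are an assumption), and the function name is hypothetical.

```python
def to_gray(rgb_image):
    """Convert an RGB image (nested lists of (R, G, B) tuples) to a gray
    image using the standard luminance weights 0.299, 0.587, 0.114."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]
```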
Step 22, binarizing the gray level image obtained by graying:
(1) Using the Otsu method (maximum inter-class variance method), determine the threshold $T$ for binarization of the gray image.
For each gray level $t$ of the gray image $I$, assume it is the threshold $T$; the proportion of the pixels belonging to the foreground in the whole image is recorded as $\omega_0$ and their average gray as $\mu_0$; the proportion of the pixels belonging to the background in the whole image is $\omega_1$ and their average gray is $\mu_1$; the total average gray of the image is recorded as $\mu$ and the inter-class variance as $g$; the size of the image is $M \times N$; the number of pixels whose gray value is smaller than the threshold $T$ is recorded as $N_1$, and the number of pixels whose gray value is larger than the threshold $T$ is recorded as $N_0$. Then:

$\omega_0 = N_0 / (M \times N)$ (1)

$\omega_1 = N_1 / (M \times N)$ (2)

$N_0 + N_1 = M \times N$ (3)

$\omega_0 + \omega_1 = 1$ (4)

$\mu = \omega_0 \mu_0 + \omega_1 \mu_1$ (5)

$g = \omega_0 (\mu_0 - \mu)^2 + \omega_1 (\mu_1 - \mu)^2$ (6)

Substituting formula (5) into formula (6) yields the equivalent formula

$g = \omega_0 \omega_1 (\mu_0 - \mu_1)^2$ (7)

The gray level $t$ that maximizes the inter-class variance $g$ is the threshold $T$.
(2) The gray image is divided into two parts of background (black) and foreground (white) based on the threshold T, and binarization of the gray image is achieved as shown in fig. 2 (c).
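The exhaustive threshold search described above can be sketched in a few lines of pure Python (the function name is hypothetical); it maximizes the equivalent inter-class variance $g = \omega_0\,\omega_1\,(\mu_0 - \mu_1)^2$ over all candidate gray levels.

```python
def otsu_threshold(gray_pixels, levels=256):
    """Return the threshold t maximizing the inter-class variance
    g = w0 * w1 * (mu0 - mu1)**2 over all candidate gray levels."""
    hist = [0] * levels
    for p in gray_pixels:
        hist[p] += 1
    total = len(gray_pixels)
    best_t, best_g = 0, -1.0
    for t in range(levels):
        n_bg = sum(hist[:t])          # pixels with gray < t (background)
        n_fg = total - n_bg           # pixels with gray >= t (foreground)
        if n_bg == 0 or n_fg == 0:
            continue                  # all pixels on one side: skip
        w1 = n_bg / total
        w0 = n_fg / total
        mu1 = sum(i * hist[i] for i in range(t)) / n_bg
        mu0 = sum(i * hist[i] for i in range(t, levels)) / n_fg
        g = w0 * w1 * (mu0 - mu1) ** 2
        if g > best_g:
            best_g, best_t = g, t
    return best_t
```

On a strongly bimodal histogram (bright blind road on darker pavement), the maximizing threshold lands between the two modes, which is exactly what the foreground/background split above requires.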
Step 23, erosion treatment: morphological erosion is applied to the binarized image to filter out white regions with small areas.
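A minimal sketch of the erosion step, assuming a 3x3 all-ones structuring element (the patent does not specify the kernel, so this is an assumption): isolated white pixels and thin white noise disappear after one pass, while large white regions only shrink by one pixel at the border.

```python
def erode(img):
    """3x3 binary erosion: a pixel stays 1 only if it and all 8 of its
    neighbours are 1; border pixels are set to 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out
```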
Step 24, after a clearer black-and-white binary image is obtained, the contour of the blind road is extracted and the positions of the blind-road center points in the image are determined: calculate the gradient of each pixel of the eroded image; the positions where the gradient is larger than 0 are the pixels where the blind-road contour is located. In the image coordinate system, the y axis is divided at equal intervals, and the central pixel point of the contour pixels in each interval is calculated; these are the blind-road center points. The center-point analysis result is shown in fig. 3.
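The band-averaging step can be sketched as follows, assuming the contour pixels are already available as (x, y) coordinates; the function name and the choice of equal-height bands along the y axis follow the description above, but the exact center definition is an assumption (here, the mean position of the contour pixels in each band).

```python
def blind_road_centers(contour_pixels, img_height, n_bands):
    """Divide the y axis into n_bands equal intervals and return, for each
    band containing contour pixels, the mean (x, y) of those pixels --
    the blind-road center points."""
    band_h = img_height / n_bands
    buckets = {}
    for (x, y) in contour_pixels:
        b = min(int(y // band_h), n_bands - 1)   # clamp bottom row into last band
        buckets.setdefault(b, []).append((x, y))
    centers = []
    for b in sorted(buckets):
        pts = buckets[b]
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        centers.append((cx, cy))
    return centers
```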
In the embodiment, in step 3, the blind-guiding motion control module based on the proportional-integral-derivative controller obtains the position deviation of the robot dog from the blind-guiding center according to the pixel difference vector, combines it with the time interval constant $\Delta t$ to obtain the steering speed of the robot dog at the next moment, and issues steering-speed instructions of the robot dog along the blind road in real time.
The formula by which the module obtains the position deviation according to the pixel difference vector is:

$s_t = s_{t-1} + e_t$

$u_t = K_p e_t + K_i s_t + K_d (e_t - e_{t-1})$

wherein, $u_t$ is the position deviation of the robot dog from the blind-guiding center calculated by the controller; $s_t$ is an intermediate parameter; $t$ is the current moment; $e_t$ is the pixel difference vector formed by the blind-road center point with the largest pixel coordinate, obtained from the result of step 2, and the image center point; $K_p$, $K_i$ and $K_d$ are the three adaptation parameters of the controller, namely the proportional gain, the integral gain and the derivative gain, obtained through repeated tuning on the blind-guiding site.
To ensure that the robot dog travels along the blind road, the position deviation vector $u_t$ should be kept as close to 0 as possible; the user sets the time interval constant $\Delta t$, and the steering speed $v_{t+1}$ of the robot dog at the next moment is calculated from the position deviation vector $u_t$ of the robot dog from the center of the blind road.
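A minimal sketch of such a controller on the horizontal pixel error, assuming the steering speed is obtained by scaling the PID output by the time interval constant (the exact conversion from deviation to speed is not spelled out here); the class name, gains, and the u/dt step are all hypothetical.

```python
class BlindGuidePID:
    """Discrete PID on the pixel error between the nearest blind-road
    center point and the image center."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.s = 0.0          # accumulated error (the intermediate parameter)
        self.e_prev = 0.0     # error at the previous moment

    def step(self, e):
        """Consume the current error e_t and return the steering speed
        command for the next moment (assumed here to be u_t / dt)."""
        self.s += e
        u = self.kp * e + self.ki * self.s + self.kd * (e - self.e_prev)
        self.e_prev = e
        return u / self.dt
```

A positive error (blind-road center right of the image center) yields a positive steering command, and the derivative term damps the left-right swinging that pure proportional control would produce.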
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution; this manner of description is merely for clarity, and the embodiments may be combined as appropriate to form other implementations understandable to those skilled in the art.

Claims (10)

1. The machine dog blind guiding movement control method based on blind road identification is characterized by comprising the following steps of:
step 1: the method comprises the steps that a fisheye camera of a robot dog acquires a fisheye image of a blind road, and a fisheye image correction module corrects the fisheye image to obtain an image under a normal lens;
step 2: sequentially carrying out graying, binarization and corrosion treatment on the image under the normal lens, extracting the outline of the blind road in the image and determining the central point of the blind road in the image;
step 3: and calculating a pixel difference vector formed by the blind road center point and the image center point, and issuing a steering speed instruction of the machine dog along the blind road in real time by a machine dog blind guiding motion control module based on the proportional-integral-derivative controller according to the pixel difference vector.
2. The method for controlling blind guiding movement of a machine dog based on blind road recognition according to claim 1, wherein the fisheye image correction module in step 1 performs fisheye image correction according to the following pixel mapping relation:
$(u, v) = \Phi_{f,k}(u', v')$

in the formula, $(u', v')$ is any pixel position on the fisheye image; $(u, v)$ is the pixel coordinate in the normal image coordinate system corresponding to the pixel position $(u', v')$; $f$ is the focal length of the fisheye lens, and $k$ is the distortion coefficient of the fisheye lens.
3. The method for controlling the blind guiding movement of the machine dog based on the blind road recognition according to claim 1, wherein the step 2 of graying the image under the normal lens comprises the following steps:
the image is mapped according to the following formula
Figure QLYQS_8
Adding three componentsThe weight average obtains a gray image:
Figure QLYQS_9
wherein ,
Figure QLYQS_10
a gray image obtained by graying;
Figure QLYQS_11
respectively is image +.>
Figure QLYQS_12
Three components are +.>
Figure QLYQS_13
Is a pixel value of (a).
4. The method for controlling the blind guiding movement of the machine dog based on the blind road recognition according to claim 1, wherein the step 2 of binarizing the gray-scale image obtained by the gray-scale comprises the following steps:
(1) determining the threshold $T$ for binarization of the gray image by using the Otsu method (maximum inter-class variance method);
(2) dividing the gray image into a background part and a foreground part based on the threshold $T$, realizing binarization of the gray image.
5. The method for controlling blind guiding movement of a machine dog based on blind road recognition according to claim 4, wherein using the Otsu method to determine the threshold $T$ for binarization of the gray image comprises:
for each gray level $t$ of the gray image $I$, assume it is the threshold $T$; the proportion of the pixels belonging to the foreground in the whole image is recorded as $\omega_0$ and their average gray as $\mu_0$; the proportion of the pixels belonging to the background in the whole image is $\omega_1$ and their average gray is $\mu_1$; the total average gray of the image is recorded as $\mu$ and the inter-class variance as $g$; the size of the image is $M \times N$; the number of pixels whose gray value is smaller than the threshold $T$ is recorded as $N_1$, and the number of pixels whose gray value is larger than the threshold $T$ is recorded as $N_0$; then:

$\omega_0 = N_0 / (M \times N)$ (1)

$\omega_1 = N_1 / (M \times N)$ (2)

$N_0 + N_1 = M \times N$ (3)

$\omega_0 + \omega_1 = 1$ (4)

$\mu = \omega_0 \mu_0 + \omega_1 \mu_1$ (5)

$g = \omega_0 (\mu_0 - \mu)^2 + \omega_1 (\mu_1 - \mu)^2$ (6)

Substituting formula (5) into formula (6) yields the equivalent formula

$g = \omega_0 \omega_1 (\mu_0 - \mu_1)^2$ (7)

The gray level $t$ that maximizes the inter-class variance $g$ of formula (7) is the threshold $T$ for binarization of the gray image.
6. The method for controlling blind guiding movement of a machine dog based on blind road recognition according to claim 1, wherein the erosion treatment in step 2 is to perform morphological erosion on the binarized image to filter out white regions whose area is smaller than a set value.
7. The method for controlling the blind guiding movement of the machine dog based on the blind road recognition according to claim 1, wherein step 2 calculates the gradient of each pixel of the eroded image, and the positions where the gradient is larger than 0 are the pixels where the blind road contour is located; in the image coordinate system, the $y$ axis is divided at equal intervals, and the central pixel point of the contour pixels in each interval is calculated, namely the blind road center points.
8. The method for controlling the blind guiding movement of the machine dog based on the blind road recognition according to claim 1, wherein the blind guiding movement control module of the machine dog based on the proportional-integral-derivative controller in step 3 obtains the position deviation amount of the machine dog from the blind guiding center according to the pixel difference vector, and combines with a time interval constant
$\Delta t$ to obtain the steering speed of the machine dog at the next moment, and issues steering-speed instructions of the machine dog along the blind road in real time.
9. The method for controlling the blind guiding movement of the machine dog based on the blind road recognition according to claim 8, wherein the formula for obtaining the position deviation amount of the machine dog from the blind guiding center by the machine dog blind guiding movement control module based on the proportional-integral-derivative controller according to the pixel difference vector is as follows:
$s_t = s_{t-1} + e_t$

$u_t = K_p e_t + K_i s_t + K_d (e_t - e_{t-1})$

wherein, $u_t$ is the position deviation of the machine dog from the blind guiding center calculated by the controller; $s_t$ is an intermediate parameter; $t$ is the current moment; $e_t$ is the pixel difference vector formed by the blind road center point with the largest pixel coordinate at moment $t$ and the image center point; $K_p$, $K_i$ and $K_d$ are the three adaptation parameters of the controller, namely the proportional gain, the integral gain and the derivative gain.
10. The method for controlling the blind guiding movement of the machine dog based on the recognition of the blind sidewalk according to claim 1, wherein the step 3 sets a constant time interval
$\Delta t$, and calculates the steering speed $v_{t+1}$ of the machine dog at the next moment according to the position deviation vector $u_t$ of the machine dog from the center of the blind road.
CN202310272896.8A 2023-03-21 2023-03-21 Machine dog blind guiding movement control method based on blind road recognition Pending CN115993829A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310272896.8A CN115993829A (en) 2023-03-21 2023-03-21 Machine dog blind guiding movement control method based on blind road recognition


Publications (1)

Publication Number Publication Date
CN115993829A true CN115993829A (en) 2023-04-21

Family

ID=85992252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310272896.8A Pending CN115993829A (en) 2023-03-21 2023-03-21 Machine dog blind guiding movement control method based on blind road recognition

Country Status (1)

Country Link
CN (1) CN115993829A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509098A (en) * 2011-10-08 2012-06-20 天津大学 Fisheye image vehicle identification method
CN104173174A (en) * 2014-08-22 2014-12-03 河海大学常州校区 Portable walking aid blind guide device and method
CN104573655A (en) * 2015-01-09 2015-04-29 安徽清新互联信息科技有限公司 Blind sidewalk direction detection method based on video
US20180150944A1 (en) * 2016-01-18 2018-05-31 Shenzhen Arashi Vision Company Limited Method and Device For Rectifying Image Photographed by Fish-Eye Lens
CN109726681A (en) * 2018-12-29 2019-05-07 北京航空航天大学 It is a kind of that location algorithm is identified based on the blind way of machine learning identification and image segmentation
CN110262495A (en) * 2019-06-26 2019-09-20 山东大学 Mobile robot autonomous navigation and pinpoint control system and method can be achieved
CN110448436A (en) * 2019-07-29 2019-11-15 清华大学 A kind of intelligent guiding walking stick for blind person and blind-guiding method with blind way detection positioning function
CN110488823A (en) * 2019-08-15 2019-11-22 华南理工大学 A kind of novel intelligent line walking trolley control method
CN112274399A (en) * 2020-10-25 2021-01-29 贵州大学 Intelligent sensing machine blind guiding control method, storage medium, system and device
CN113075926A (en) * 2021-03-15 2021-07-06 南通大学 Blind guiding robot dog based on artificial intelligence
CN113813146A (en) * 2021-09-30 2021-12-21 紫清智行科技(北京)有限公司 Outdoor blind guiding method and system based on combination of navigation and blind track tracking
CN115416047A (en) * 2022-09-02 2022-12-02 北京化工大学 Blind assisting system and method based on multi-sensor quadruped robot


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Liu Xuefeng et al.: "Research on multi-angle correction methods for fisheye images", Modern Electronics Technique, vol. 45, no. 14, pages 89 - 94 *
Xu Yanli: "A real-time correction algorithm for fisheye lens distorted images", Information & Communications, no. 2016, pages 25 - 26 *
Cheng Zengmu: "Intelligent Connected Vehicle Technology: A Complete Introduction (Color Edition)", China Machine Press, pages 124 - 128 *

Similar Documents

Publication Publication Date Title
CA2950791C (en) Binocular vision navigation system and method for power robots
CN108205658A (en) Detection of obstacles early warning system based on the fusion of single binocular vision
CN109684921A (en) Road edge identification and tracking method based on three-dimensional laser radar
CN105139420B (en) Video target tracking method based on particle filter and perceptual hashing
CN105182983A (en) Face real-time tracking method and face real-time tracking system based on mobile robot
CN105243664A (en) Vision-based wheeled mobile robot fast target tracking method
CN110379168A (en) Vehicular traffic information acquisition method based on Mask R-CNN
CN110487286B (en) Robot pose judgment method based on point feature projection and laser point cloud fusion
CN106296743A (en) Adaptive moving-target tracking method and UAV tracking system
CN106503683B (en) Video salient target detection method based on dynamic focus of attention
CN104346621A (en) Method and device for creating eye template as well as method and device for detecting eye state
CN111612823A (en) Robot autonomous tracking method based on vision
CN104463080A (en) Detection method of human eye state
CN115619826A (en) Dynamic SLAM method based on reprojection error and depth estimation
CN113129449A (en) Vehicle pavement feature recognition and three-dimensional reconstruction method based on binocular vision
CN113593035A (en) Motion control decision generation method and device, electronic equipment and storage medium
CN112819864A (en) Driving state detection method and device and storage medium
EP4322020A1 (en) Terminal device positioning method and related device therefor
CN108827250A (en) Robot monocular vision ranging method
CN104463081A (en) Detection method of human eye state
CN115993829A (en) Machine dog blind guiding movement control method based on blind road recognition
CN104637038B (en) Improved CamShift tracking method based on weighted histogram model
CN114387576A (en) Lane line identification method, system, medium, device and information processing terminal
CN107437071B (en) Robot autonomous inspection method based on double yellow line detection
CN111353481A (en) Road obstacle identification method based on laser point cloud and video image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230421