CN112529903A - Stair height and width visual detection method and device and robot dog - Google Patents


Info

Publication number
CN112529903A
CN112529903A (application CN202110143884.6A; granted as CN112529903B)
Authority
CN
China
Prior art keywords
stair
gradient
matrix
height
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110143884.6A
Other languages
Chinese (zh)
Other versions
CN112529903B (en)
Inventor
李学生
李晨
徐奇伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Delu Power Technology Chengdu Co Ltd
Original Assignee
Delu Power Technology Chengdu Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Delu Power Technology Chengdu Co Ltd filed Critical Delu Power Technology Chengdu Co Ltd
Priority to CN202110143884.6A priority Critical patent/CN112529903B/en
Publication of CN112529903A publication Critical patent/CN112529903A/en
Application granted granted Critical
Publication of CN112529903B publication Critical patent/CN112529903B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components

Abstract

The invention relates to a visual detection method and device for the height and width of stairs, and to a robot dog. The method comprises the following steps: S1, collecting stair image information to obtain an RGB image and a depth map; S2, performing semantic segmentation on the RGB image to obtain a semantically segmented image; S3, calculating the transverse and longitudinal gradient matrices of the semantically segmented image; S4, extracting the pixel coordinates of the corner points using the gradient information; and S5, mapping the pixel coordinates of the corner points onto the depth map and calculating the height and width of the stairs. According to the invention, the parallax is calculated from the depth information; the stair boundary is obtained through semantic segmentation, the corner coordinates locating the length and width are calculated through the gradient, and the height and width of the stairs are then calculated. This provides visual assistance for the robot dog to climb stairs, prevents it from stepping on a stair edge or the junction between two steps, and helps ensure the stability of the robot dog during operation.

Description

Stair height and width visual detection method and device and robot dog
Technical Field
The invention relates to the technical field of robots, in particular to a stair height and width visual detection method and device and a robot dog.
Background
Lacking visual assistance, existing quadruped robots can only "walk blind" when going up or down stairs or slopes: a dedicated "stair-climbing mode" is developed in the motion-control algorithm. In this mode, the gait of the robot dog can only be controlled at a preset fixed height, and the step size is determined by the operator.
The pose of a quadruped robot walking up stairs in such a fixed motion mode is very stiff; at the same time, without visual assistance the quadruped robot cannot estimate the relative relationship between its landing point and the stairs. This means the quadruped robot may step on the edge of a stair or the junction between two steps, which presents a significant challenge to the control system and also makes the operation of the quadruped robot very unstable.
Disclosure of Invention
To solve the above technical problems, the invention provides a stair height and width visual detection method, a stair height and width visual detection device, and a robot dog.
The invention is realized by the following technical scheme:
a visual detection method for the height and width of a staircase comprises the following steps:
s1, collecting stair image information to obtain an RGB image and a depth map;
s2, performing semantic segmentation on the RGB image to obtain a semantic segmented image;
s3, calculating a horizontal gradient matrix and a vertical gradient matrix of the semantic segmentation image;
s4, extracting the pixel coordinates of the corner points by utilizing the gradient information;
and S5, corresponding the pixel coordinates of the corner points to the depth map, and calculating the height and the width of the staircase.
Further, a binary image is generated by carrying out semantic segmentation on the RGB image.
Furthermore, before stair image information is collected, target detection and identification are carried out, and the pose of the camera is adjusted, so that the stair edge fills the whole lens.
Further, in S3, the transverse gradient matrix is calculated by the transverse gradient matrix calculation formula (1):

Gx(x, y) = f(x + 1, y) - f(x, y)    (1)

and the longitudinal gradient matrix by the longitudinal gradient matrix calculation formula (2):

Gy(x, y) = f(x, y + 1) - f(x, y)    (2)

where (x, y) are the abscissa and ordinate of a point on the pixel matrix, and f(x, y) is the pixel value of the image at that point.
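Under formulas (1) and (2), the two gradient matrices can be sketched in a few lines of NumPy (an illustrative sketch; the function name and array handling are assumptions, not taken from the patent):

```python
import numpy as np

def gradient_matrices(img):
    """Compute the transverse (x) and longitudinal (y) gradient matrices
    of a pixel matrix, per formulas (1) and (2):
    Gx(x, y) = f(x+1, y) - f(x, y),  Gy(x, y) = f(x, y+1) - f(x, y)."""
    f = img.astype(np.int32)            # avoid uint8 wrap-around on subtraction
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, :-1] = f[:, 1:] - f[:, :-1]   # difference along columns (x direction)
    gy[:-1, :] = f[1:, :] - f[:-1, :]   # difference along rows (y direction)
    return gx, gy
```

On a binary segmentation image the nonzero entries of these matrices mark exactly the stair boundaries used in the later steps.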
Let A be a gradient matrix, m the number of rows of the matrix and n the number of columns of the matrix; set a constant Δ representing the step size. Then A has one and only one partial sequence P = (a(1, j1), a(2, j2), ..., a(m, jm)), composed of the element in row 1, column j1, the element in row 2, column j2, ..., and the element in row m, column jm of the matrix A, where a(a, b) denotes the element in row a, column b of A; each such element is called a component of P. Let the set S = {kΔ | k ∈ N+}, where N+ denotes the set of positive integers; then P has a subsequence Q = (q1, q2, ...) whose components are indexed by S. Each component of Q corresponds to a point on A; the segment joining any two adjacent components qi, qi+1 of Q is defined as a line segment, where qi, qi+1 ∈ Q.
the S4 specifically includes: on the gradient matrix each
Figure DEST_PATH_IMAGE023
The accumulated transverse gradient is calculated once per longitudinal pixel, or every time on the gradient matrix
Figure 380794DEST_PATH_IMAGE024
Calculating the accumulated longitudinal gradient once by each transverse pixel;
the cumulative transverse gradient is calculated using equation (3):
Figure DEST_PATH_IMAGE025
(3)
the cumulative longitudinal gradient is calculated using equation (4):
Figure 959543DEST_PATH_IMAGE026
(4)
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE027
refers to a longitudinal gradient matrix corresponding to the pixel matrix,
Figure 659515DEST_PATH_IMAGE028
a transverse gradient matrix corresponding to the pixel matrix is indicated;
then, comparing the accumulated transverse gradient or the accumulated longitudinal gradient with a decision constant, and if the accumulated transverse gradient or the accumulated longitudinal gradient is larger than the decision constant, considering that the line segment belongs to a part of the step height; if the step width is smaller than the judgment constant, the line segment is considered to belong to one part of the step width;
and repeating the calculation of the accumulated transverse gradient or the accumulated longitudinal gradient to obtain relatively accurate coordinates of the starting point of the high and wide sides of the stairs, and reversely deducing the corresponding pixel points according to the index of the matrix.
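The S4 accumulate-and-decide step can be sketched as follows (a reconstruction under the description above: the longitudinal gradient is accumulated column-wise over a step of Δx pixels and the resulting slope is compared with the decision constant; the function name, normalisation and labelling scheme are illustrative assumptions):

```python
import numpy as np

def classify_segments(gy, step=4, decision_const=1.0):
    """For windows of `step` transverse pixels, accumulate the change of the
    row coordinate from the longitudinal gradient matrix `gy` and compare the
    resulting slope with the decision constant: slope > const -> the segment
    is taken as part of the step height, otherwise part of the step width."""
    labels = []
    n_cols = gy.shape[1]
    for x0 in range(0, n_cols - step, step):
        # accumulated variation of the row coordinate over the column step,
        # counted in edge crossings (gradient magnitude 255 per crossing)
        dy = np.abs(gy[:, x0:x0 + step]).sum() / 255.0
        slope = dy / step
        labels.append('height' if slope > decision_const else 'width')
    return labels
```

Shrinking `step` on subsequent passes, as the text describes, localises the start coordinates of the high and wide edges more precisely.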
Preferably, the decision constant takes 1.
A stair height visual detection device comprises an image acquisition module, a semantic segmentation module, a depth measurement module, a gradient calculation module and a stair height and width calculation module:
an image acquisition module, used for acquiring stair image information;
a semantic segmentation module, used for performing semantic segmentation on the image to obtain the boundary of the stairs;
a depth measurement module, used for obtaining the depth information of the image to produce a depth map;
a gradient calculation module, used for calculating gradient information from the semantically segmented image and extracting corner-point pixel coordinates from it;
a stair height and width calculation module, used for mapping the corner-point pixel coordinates onto the depth map and calculating the height and width of the stairs.
Further, the stair height visual detection device also comprises a target detection module, used for detecting the pose of the camera from the acquired stair image information.
Further, the stair height visual detection device comprises a depth camera.
A robot dog comprises the above stair height visual detection device.
Further, the robot dog includes a binocular depth camera module.
Compared with the prior art, the invention has the following beneficial effects:
the method comprises the steps of 1, performing edge detection and segmentation through semantic segmentation to obtain a stable binary image with small interference, extracting edge points by utilizing gradient information, and finally obtaining height and width information of the staircase through a binocular vision algorithm;
2, the height information of the stairs can be obtained through the stair height visual detection device, visual assistance is provided for the robot dog to go upstairs, the robot dog is prevented from stepping on the edge of the stairs or the junction of two levels of stairs, and the stability of the robot dog in working is guaranteed.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention.
FIG. 1 is a schematic illustration of a machine dog work environment;
FIG. 2 is a schematic view of a robot dog in operation;
FIG. 3 is a flow chart of the first embodiment;
FIG. 4 is a binary image obtained after semantic segmentation of an original stair image;
FIG. 5 is a schematic diagram of a binary image;
FIG. 6 is a schematic diagram of a binary matrix obtained by semantic segmentation;
FIG. 7 is a schematic diagram of a calculated longitudinal gradient matrix;
FIG. 8 is a schematic diagram of a calculated transverse gradient matrix;
FIG. 9 is a schematic view of two line segments;
FIG. 10 is a schematic illustration of changing point coordinates;
FIG. 11 (a) is a schematic of a longitudinal gradient matrix;
FIG. 11 (b) is a schematic of a gradient profile;
FIG. 12 is a schematic diagram of height measurement based on the Pythagorean theorem;
FIG. 13 is a schematic representation of the relative positions of the broadside slopes and the dog and stairs when the dog is relatively left;
FIG. 14 is a schematic diagram showing the relative positions of the slope of the broadside with respect to the dog and stairs, when the dog is relatively far to the right;
FIG. 15 is a schematic diagram of relative positions of a dog and a staircase when measuring a slope decision constant;
fig. 16 is a photographic view when a slope determination constant is measured;
FIG. 17 is a schematic view of the radial movement;
FIG. 18 is a schematic view of a rotational movement perpendicular to a radial direction;
FIG. 19 is a schematic view of pinhole imaging;
fig. 20 is an extreme position override map.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.
The invention discloses a visual detection method for the height and width of a staircase, which comprises the following steps:
s1, collecting stair image information to obtain an RGB image and a depth map;
s2, performing semantic segmentation on the RGB image to obtain a semantic segmented image;
s3, calculating a horizontal gradient matrix and a vertical gradient matrix of the semantic segmentation image;
s4, extracting the pixel coordinates of the corner points by utilizing the gradient information;
and S5, corresponding the pixel coordinates of the corner points to the depth map, and calculating the height and the width of the staircase.
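Steps S1–S5 can be sketched as a single pipeline (a minimal sketch: `segment_fn` stands in for whatever semantic-segmentation model is used, the corner extraction is simplified to collecting edge pixels from the two gradient matrices, and the depth map is assumed pre-registered to the RGB image):

```python
import numpy as np

def detect_stair_dimensions(rgb, depth, segment_fn):
    """S1-S5 pipeline sketch: segment_fn is any semantic-segmentation callable
    returning a boolean stair mask for the RGB image."""
    mask = segment_fn(rgb).astype(np.int32) * 255          # S2: binary image
    gx = np.zeros_like(mask); gy = np.zeros_like(mask)
    gx[:, :-1] = mask[:, 1:] - mask[:, :-1]                # S3: transverse gradient
    gy[:-1, :] = mask[1:, :] - mask[:-1, :]                # S3: longitudinal gradient
    ys, xs = np.nonzero(np.abs(gx) + np.abs(gy))           # S4: candidate edge pixels
    corners = list(zip(xs.tolist(), ys.tolist()))
    # S5: look the corner pixels up in the (pre-registered) depth map
    depths = [depth[y, x] for x, y in corners]
    return corners, depths
```

A real implementation would reduce the candidate edge pixels to the corner points via the accumulated-gradient decision described below before touching the depth map.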
Based on the method, the invention discloses a visual detection device for the height of the stairs, which comprises an image acquisition module, a target detection module, a semantic segmentation module, a depth measurement module, a gradient calculation module and a stair height and width calculation module.
An image acquisition module: the stair image acquisition system is used for acquiring stair image information;
a target detection module: the system is used for detecting the pose of the camera according to the acquired stair image information;
a semantic segmentation module: the method is used for performing semantic segmentation on the image to obtain the boundary of the staircase;
a depth measurement module: the depth information of the image is obtained to obtain a depth map;
a gradient calculation module: calculating gradient information according to the semantic segmentation image, and extracting corner point pixel coordinates according to the gradient information;
the stair height and width calculation module: and mapping the pixel coordinates of the corner points to the depth map, and calculating the height and the width of the staircase.
In another embodiment, the visual detection device for the height of the stairs comprises a depth camera, wherein the depth camera integrates an image acquisition module and a depth measurement module, and an RGB-D camera can be selected.
Based on the above stair height visual detection device, the invention discloses a robot dog which comprises the stair height visual detection device. Through the device, the height information of the stairs can be obtained, providing visual assistance for the robot dog to go upstairs, preventing it from stepping on a stair edge or the junction between two steps, and ensuring the stability of the robot dog.
Based on the above robot dog, the invention discloses an embodiment.
Example one
In this embodiment, the height and width of the stairs are defined as shown in fig. 1.
The distance between the robot dog and the first step of the stairs is within 30-50 cm, i.e. the shaded area in fig. 1. Under this condition, the colour image used by the robot dog for semantic segmentation and image recognition is compressed and down-sampled to 640 × 480. When the robot is 30-50 cm from the first step, the main objects in the field of view are walls, stairs and skirting lines, as shown in fig. 2. This avoids the problem that the corner point under a step cannot be seen.
This embodiment requires an FCN network trained separately for stair-semantics extraction. The semantic data about staircases are partly collected manually and partly taken from staircase-like images in existing open-source datasets (e.g., the Barcelona Dataset).
The robot dog has a binocular depth camera; in this embodiment an Intel RealSense D435 depth camera is selected.
As shown in fig. 3, in operation, the robot dog detects the height of the stairs as follows:
and S1, acquiring image information through the depth camera. There are two streams of image information: one stream obtains RGB images for semantic segmentation and object detection and one stream for generating depth information. In the D435 camera, pixel-by-pixel matching of the RGB map and the depth map has been integrated.
Before stair image information is collected, the stairs can be detected through the target detection module and the pose of the robot dog then adjusted so that the edge of the stairs fills the whole lens. The target detection module is not a necessary module; detection and identification can be performed with a simple YOLOv3 algorithm.
S2, semantic segmentation: a binary image containing stair and non-stair objects is generated as shown in fig. 4.
And S3, calculating a gradient matrix of the horizontal direction and the vertical direction of the binary image.
In this embodiment, taking the binary image shown in fig. 5 as an example, the obtained binary matrix is shown in fig. 6, the calculated longitudinal gradient matrix is shown in fig. 7, and the calculated transverse gradient matrix is shown in fig. 8.
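The figures themselves are not reproduced here, but the same computation on a small made-up binary matrix behaves as figs. 6-8 describe (the values below are illustrative, not the patent's):

```python
import numpy as np

# a toy 4x4 binary "stair" mask: the top-left block is stair (255), the rest 0
B = np.array([[255, 255,   0,   0],
              [255, 255,   0,   0],
              [  0,   0,   0,   0],
              [  0,   0,   0,   0]], dtype=np.int32)

gx = np.zeros_like(B); gx[:, :-1] = B[:, 1:] - B[:, :-1]   # transverse gradient
gy = np.zeros_like(B); gy[:-1, :] = B[1:, :] - B[:-1, :]   # longitudinal gradient

# the transverse matrix marks the vertical boundary (between columns 1 and 2),
# the longitudinal matrix marks the horizontal boundary (between rows 1 and 2)
```

As the fig. 7/fig. 8 comparison notes, each matrix is nonzero only where the image changes in its own direction.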
The transverse gradient matrix is calculated by the transverse gradient matrix calculation formula (1):

Gx(x, y) = f(x + 1, y) - f(x, y)    (1)

and the longitudinal gradient matrix by the longitudinal gradient matrix calculation formula (2):

Gy(x, y) = f(x, y + 1) - f(x, y)    (2)

where (x, y) are the abscissa and ordinate of a point on the pixel matrix, and f(x, y) is the pixel value of the image at that point. In this embodiment the image data are in an 8-bit storage format, so each pixel value lies in the range 0 ≤ f(x, y) ≤ 255, and f(x, y) is an integer.
As can be seen from fig. 7 and 8, the longitudinal gradient matrix may reflect the intensity of the change in the horizontal direction, and the transverse gradient matrix may reflect the intensity of the change in the vertical direction.
Then, on the gradient matrix, an accumulated transverse gradient is calculated once every Δy longitudinal pixels, or an accumulated longitudinal gradient is calculated once every Δx transverse pixels.

The accumulated transverse gradient is calculated by formula (3):

k' = Δx / Δy    (3)

and the accumulated longitudinal gradient by formula (4):

k = Δy / Δx    (4)

where Δy refers to the change taken from the longitudinal gradient matrix corresponding to the pixel matrix, and Δx to the change taken from the transverse gradient matrix corresponding to the pixel matrix.

The present embodiment chooses to calculate the accumulated longitudinal gradient: as shown in figs. 9-11, every Δx pixels across the gradient matrix, the variation Δy of the y direction corresponding to each column is calculated once.
Let A be a gradient matrix, m the number of rows of the matrix and n the number of columns of the matrix; set a constant Δ representing the step size. Then A has one and only one partial sequence P = (a(1, j1), a(2, j2), ..., a(m, jm)), composed of the element in row 1, column j1, the element in row 2, column j2, ..., and the element in row m, column jm of the matrix A, where a(a, b) denotes the element in row a, column b of A; each such element is called a component of P. Let the set S = {kΔ | k ∈ N+}, where N+ denotes the set of positive integers; then P has a subsequence Q = (q1, q2, ...) whose components are indexed by S. Each component of Q corresponds to a point on A; the segment joining any two adjacent components qi, qi+1 of Q is defined as a line segment, where qi, qi+1 ∈ Q.

The slope of each line segment is calculated by formula (4); this slope is the accumulated longitudinal gradient.
Subsequently, the accumulated longitudinal gradient k is compared with a decision constant so as to judge whether a segment belongs to the wide edge or the high edge; this comparison is traversed over the matrix, and the end points of the wide edge and the high edge are found.

If k is larger than the decision constant, the line segment is considered to belong to a part of the step height; if it is not larger, the line segment is considered to belong to a part of the step width.

For example, in the present embodiment Δx takes the value 4 and the decision constant takes the value 1; a segment whose slope exceeds the decision constant is then considered to belong to a part of the step height. In a similar manner, the decision constant may be used to decide on the wide edges. Repeating the slope calculation over the gradient matrix while gradually reducing Δx yields relatively accurate coordinates of the starting points of the high and wide edges of the staircase, from which the corresponding pixel points are deduced back through the indices of the matrix.
S4, the pixel coordinates are mapped onto the depth map, and the length of the corresponding edge is obtained from the depth camera; the calculation principle, shown in fig. 12, is a conventional technique in the art and is not described again here.
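The fig. 12 computation amounts to back-projecting the two corner pixels through the camera intrinsics and taking the distance between the resulting 3-D points; a generic pinhole-model sketch (the intrinsics and function names are placeholders, not RealSense calibration values):

```python
import math

def deproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z through a pinhole camera model."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

def edge_length(p1_px, p2_px, d1, d2, intrinsics):
    """Length of a stair edge given its two corner pixels and their depths."""
    a = deproject(*p1_px, d1, *intrinsics)
    b = deproject(*p2_px, d2, *intrinsics)
    return math.dist(a, b)  # Euclidean distance, per the Pythagorean theorem
```

With a RealSense camera the equivalent back-projection is provided by the SDK itself, so this sketch only illustrates the geometry.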
In the present invention there are two gradient matrices, longitudinal and transverse, and two ways to calculate the cumulative gradient: the accumulated variation of the row coordinate with the column coordinate as independent variable, or the accumulated variation of the column coordinate with the row coordinate as independent variable. Combining the two choices gives four modes. This embodiment selects the longitudinal gradient matrix and calculates the accumulated variation of the row coordinate with the column coordinate as independent variable.
When the column coordinate is taken as the independent variable, the decision constant c is related to the robot dog itself and to the relative positions of the dog and the stairs. As shown in fig. 13, the dog is relatively to the left. The value of c is obtained as follows: the robot dog is placed close to the left side of the stairs, with the left side of the dog body and the left side of the stairs in the same plane. The dog is moved back and forth until the horizontal distance between the lower edge of the field of view and the transverse edge of the first step is 50 cm, meeting the requirements of the working scene. At this point the slope of the stair wide edge in the camera is 1. This slope corresponds to the extreme position in which the dog can still ascend the stairs when it is left of the stair centerline, as shown in figs. 15 and 16. In the current test of the quadruped robot, the value measured for an 18 cm high step is c = 1.
The principle of the decision-constant measurement is as follows: assume that point A is the measurement point of this embodiment and that the height of the robot dog's camera does not change; relative to point A there are two motion modes, a radial movement as shown in fig. 17 and a rotational movement perpendicular to the radial direction as shown in fig. 18.
For the radial movement shown in fig. 17, it can be seen from the pinhole-imaging diagram of fig. 19 that angle 1 > angle 2. Similarly, for the rotational movement shown in fig. 18, when the dog rotates from point A to point C, the angle between the wide edge of the stairs in the picture and the horizontal direction becomes smaller. As also shown in fig. 14, where the dog is relatively to the right compared with fig. 13, the slope of the wide edge of the staircase in the frame gradually decreases as the robot dog translates to the right. The parameter measured by this method is therefore a discrimination parameter for the limit state.
The judgment with the value c has two functions. First, if the absolute values of the slopes of both line segments found in the gradient matrix are larger than c, the relative position of the dog and the stairs is not suitable for going upstairs, and the dog must move to the right to adjust its position until the absolute value of the slope of one edge is smaller than or equal to c, as shown in fig. 20. Second, once the absolute value of the slope of one edge is smaller than or equal to c, the value can be used to judge which edge is the wide edge and which is the high edge.
When the dog is relatively close to the right, the decision constant only needs to take the symmetric value, i.e. -c, which is also the reason the absolute value is taken in the foregoing description. The relative position of the stairs and the dog is available from the stair target-detection module and is common knowledge in the field.
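The two uses of c described above can be sketched as one decision routine (hypothetical names; the left/right direction of the adjustment is omitted for brevity):

```python
def check_position(slope1, slope2, c):
    """Two uses of the decision constant c (sketch): if both segment slopes
    exceed c in absolute value, the dog's position is unsuitable for climbing
    and it should translate sideways; otherwise the edge whose |slope| <= c
    is the wide edge and the other the high edge."""
    s1, s2 = abs(slope1), abs(slope2)
    if s1 > c and s2 > c:
        return 'adjust position'
    wide = 'edge1' if s1 <= c else 'edge2'
    high = 'edge2' if wide == 'edge1' else 'edge1'
    return {'wide': wide, 'high': high}
```

Using the absolute value covers both the left-of-centerline case (constant c) and the symmetric right-of-centerline case (constant -c).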
In another embodiment, the accumulated transverse gradient may be calculated instead, i.e. with the row coordinate as the independent variable. In this case the decision constant takes the value 1/c. The specific judgment is as follows: every Δy pixels down the gradient matrix, the variation Δx of the x direction corresponding to each row is calculated once, and the slope of the line segment is calculated by formula (3); this slope is the accumulated transverse gradient.

Subsequently, the accumulated transverse gradient k' is compared with 1/c to judge whether a segment belongs to the wide edge or the high edge; the matrix is then traversed and the end points of the wide edge and the high edge are found. If k' ≤ 1/c, the line segment is considered to belong to a part of the step height; if k' > 1/c, the line segment is considered to belong to a part of the step width.
According to the invention, depth information can be obtained through the depth-vision module of the depth camera and the parallax calculated; the stair boundary is obtained through semantic segmentation, the corner coordinates locating the length and width are calculated through the gradient, and the height and width of the stairs are then calculated. This provides visual assistance for the robot dog to climb stairs, prevents the dog from stepping on a stair edge or the junction between two steps, and helps ensure the stability of the robot dog during operation.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A visual detection method for the height and width of stairs, characterized by comprising the following steps:
s1, collecting stair image information to obtain an RGB image and a depth map;
s2, performing semantic segmentation on the RGB image to obtain a semantic segmented image;
s3, calculating a horizontal gradient matrix and a vertical gradient matrix of the semantic segmentation image;
s4, extracting the pixel coordinates of the corner points by utilizing the gradient information;
and S5, corresponding the pixel coordinates of the corner points to the depth map, and calculating the height and the width of the staircase.
2. The visual inspection method of the height and width of the staircase according to claim 1, wherein: a binary image is generated by performing semantic segmentation on the RGB image.
3. The visual inspection method of the height and width of the staircase according to claim 1, wherein: before the stair image information is collected, target detection and identification are carried out and the pose of the camera is adjusted, so that the stair edge fills the entire field of view.
4. The visual inspection method of the height and width of the staircase according to claim 1, wherein in S3 the transverse gradient matrix is calculated using formula (1):

G_x(x, y) = f(x + 1, y) − f(x, y)   (1)

and the longitudinal gradient matrix is calculated using formula (2):

G_y(x, y) = f(x, y + 1) − f(x, y)   (2)

wherein (x, y) are the abscissa and ordinate of a point on the pixel matrix, and f(x, y) is the pixel value of the image at that point.
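Reading the two gradient formulas as the usual forward differences (a reconstruction, since the patent's formula images are not reproduced in this text), the gradient matrices of a segmented image could be computed as follows; the zero-padding of the last row/column is an implementation choice, not part of the claim:

```python
import numpy as np

def gradients(img):
    """Forward-difference gradient matrices of a (binary) segmentation image.

    Gx(x, y) = f(x+1, y) - f(x, y)  -- transverse (horizontal) gradient
    Gy(x, y) = f(x, y+1) - f(x, y)  -- longitudinal (vertical) gradient
    The last column / last row is zero-padded so both matrices keep the
    image shape.
    """
    f = img.astype(np.int32)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, :-1] = f[:, 1:] - f[:, :-1]   # difference along columns (x direction)
    gy[:-1, :] = f[1:, :] - f[:-1, :]   # difference along rows (y direction)
    return gx, gy
```

On a binary segmentation mask these differences are non-zero exactly at stair boundaries, which is what the later corner-extraction step relies on.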
5. The visual inspection method of the stair height and width according to claim 4, wherein: let G be a gradient matrix, m be the number of rows of the matrix and n be the number of columns of the matrix; let a constant c represent the step size; then G has one and only one partial sequence

A = { G(1, j1), G(2, j2), …, G(m, jm) },

wherein G(1, j1) represents the element of matrix G in row 1 and column j1, G(2, j2) the element in row 2 and column j2, …, and G(m, jm) the element in row m and column jm; G(a, b) represents the element of matrix G in row a and column b; G(1, j1), …, G(a, b) are each referred to as one component of A;

let the set S = { kc : k ∈ Z+ }, wherein Z+ represents the set of positive integers; then A has a subsequence

B = { G(kc, j_kc) : kc ≤ m, k ∈ Z+ },

each component of B corresponds to a point in G, and the segment between any two adjacent components G(kc, j_kc) and G((k+1)c, j_(k+1)c) of B is defined as a line segment, wherein kc ≤ m and (k+1)c ≤ m;

the S4 specifically comprises: calculating the accumulated transverse gradient once every c longitudinal pixels on the gradient matrix, or calculating the accumulated longitudinal gradient once every c transverse pixels on the gradient matrix;

the accumulated transverse gradient is calculated using formula (3):

S_x = Σ_{y = kc}^{(k+1)c} G_x(x, y)   (3)

the accumulated longitudinal gradient is calculated using formula (4):

S_y = Σ_{x = kc}^{(k+1)c} G_y(x, y)   (4)

wherein G_y refers to the longitudinal gradient matrix corresponding to the pixel matrix, and G_x refers to the transverse gradient matrix corresponding to the pixel matrix;
then the accumulated transverse gradient or the accumulated longitudinal gradient is compared with a decision constant; if it is greater than the decision constant, the line segment is considered to belong to part of the step height; if it is smaller than the decision constant, the line segment is considered to belong to part of the step width;
the calculation of the accumulated transverse gradient or the accumulated longitudinal gradient is repeated to obtain relatively accurate coordinates of the starting points of the high and wide edges of the stairs, and the corresponding pixel points are derived back from the matrix indices.
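A minimal sketch of the segment classification in claim 5 follows. Because the patent's formula images are unavailable, the exact accumulation windows and comparison are an interpretation: here each run of c boundary pixels accumulates transverse and longitudinal gradients, and the ratio of the two is compared against the decision constant (1 per claim 6); the function name and boundary representation are assumptions:

```python
import numpy as np

def classify_segments(gx, gy, boundary, c=5, decision_const=1.0):
    """Label each c-pixel boundary run as step 'height' or 'width'.

    boundary: list of (row, col) pixel coordinates along the segmented
    stair edge. For every run of c consecutive points the transverse (gx)
    and longitudinal (gy) gradients are accumulated; a ratio above the
    decision constant marks the run as part of a riser (height),
    otherwise as part of a tread (width).
    """
    labels = []
    for start in range(0, len(boundary) - c + 1, c):
        run = boundary[start:start + c]
        acc_x = sum(abs(int(gx[r, col])) for r, col in run)
        acc_y = sum(abs(int(gy[r, col])) for r, col in run)
        ratio = acc_x / max(acc_y, 1e-9)   # guard against division by zero
        labels.append('height' if ratio > decision_const else 'width')
    return labels
```

Sampling only every c-th pixel (the "step size" of claim 5) keeps the traversal cheap while still locating the start points of the high and wide edges to within c pixels.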
6. The visual inspection method of the stair height and width according to claim 5, wherein: the decision constant is taken to be 1.
7. A stair height visual detection device, characterized in that: the device comprises an image acquisition module, a semantic segmentation module, a depth measurement module, a gradient calculation module and a stair height and width calculation module;
the image acquisition module is used for acquiring stair image information;
the semantic segmentation module is used for performing semantic segmentation on the image to obtain the boundary of the staircase;
the depth measurement module is used for acquiring depth information of the image to obtain a depth map;
the gradient calculation module is used for calculating gradient information from the semantically segmented image and extracting corner pixel coordinates from the gradient information;
and the stair height and width calculation module is used for mapping the corner pixel coordinates to the depth map and calculating the height and the width of the staircase.
8. The stair height visual detection device of claim 7, wherein: the device further comprises a target detection module, which carries out camera pose detection according to the acquired stair image information.
9. A robot dog, characterized in that: the robot dog comprises the stair height visual detection device according to claim 7 or 8.
10. The robot dog of claim 9, wherein: the robot dog comprises a binocular depth camera module.
CN202110143884.6A 2021-02-03 2021-02-03 Stair height and width visual detection method and device and robot dog Active CN112529903B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110143884.6A CN112529903B (en) 2021-02-03 2021-02-03 Stair height and width visual detection method and device and robot dog


Publications (2)

Publication Number Publication Date
CN112529903A 2021-03-19
CN112529903B CN112529903B (en) 2022-01-28

Family

ID=74975460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110143884.6A Active CN112529903B (en) 2021-02-03 2021-02-03 Stair height and width visual detection method and device and robot dog

Country Status (1)

Country Link
CN (1) CN112529903B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103927751A (en) * 2014-04-18 2014-07-16 哈尔滨工程大学 Water surface optical visual image target area detection method based on gradient information fusion
CN108876798A (en) * 2018-06-12 2018-11-23 杭州视氪科技有限公司 A kind of stair detection system and method
US20190347803A1 (en) * 2018-05-09 2019-11-14 Microsoft Technology Licensing, Llc Skeleton-based supplementation for foreground image segmentation
CN110919653A (en) * 2019-11-29 2020-03-27 深圳市优必选科技股份有限公司 Stair climbing control method and device for robot, storage medium and robot
CN111179344A (en) * 2019-12-26 2020-05-19 广东工业大学 Efficient mobile robot SLAM system for repairing semantic information
CN111368749A (en) * 2020-03-06 2020-07-03 创新奇智(广州)科技有限公司 Automatic identification method and system for stair area
CN112102347A (en) * 2020-11-19 2020-12-18 之江实验室 Step detection and single-stage step height estimation method based on binocular vision


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU DONG: "Research and development of a self-balancing, obstacle-crossing, stair-climbing robot based on multiple wheel-legs", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113689498A (en) * 2021-08-16 2021-11-23 江苏仁和医疗器械有限公司 Artificial intelligence-based electric stair climbing vehicle auxiliary control method and system
CN113689498B (en) * 2021-08-16 2022-06-07 江苏仁和医疗器械有限公司 Artificial intelligence-based electric stair climbing vehicle auxiliary control method and system
CN113867333A (en) * 2021-09-03 2021-12-31 南方科技大学 Stair climbing planning method for quadruped robot based on visual perception and application of stair climbing planning method
CN113867333B (en) * 2021-09-03 2023-11-17 南方科技大学 Four-foot robot stair climbing planning method based on visual perception and application thereof
CN114683290A (en) * 2022-05-31 2022-07-01 深圳鹏行智能研究有限公司 Method and device for optimizing pose of foot robot and storage medium
CN114683290B (en) * 2022-05-31 2022-09-16 深圳鹏行智能研究有限公司 Method and device for optimizing pose of foot robot and storage medium

Also Published As

Publication number Publication date
CN112529903B (en) 2022-01-28

Similar Documents

Publication Publication Date Title
CN112529903B (en) Stair height and width visual detection method and device and robot dog
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
US10129521B2 (en) Depth sensing method and system for autonomous vehicles
CN101960860B (en) System and method for depth map extraction using region-based filtering
US7376250B2 (en) Apparatus, method and program for moving object detection
US20200349366A1 (en) Onboard environment recognition device
KR101776620B1 (en) Apparatus for recognizing location mobile robot using search based correlative matching and method thereof
JP6112221B2 (en) Moving object position estimation apparatus and moving object position estimation method
CN101067557A (en) Environment sensing one-eye visual navigating method adapted to self-aid moving vehicle
CN112184792B (en) Road gradient calculation method and device based on vision
JP2003168104A (en) Recognition device of white line on road
CN107136649B (en) Three-dimensional foot shape measuring device based on automatic track seeking mode and implementation method
US8873855B2 (en) Apparatus and method for extracting foreground layer in image sequence
KR101090082B1 (en) System and method for automatic measuring of the stair dimensions using a single camera and a laser
JP3333721B2 (en) Area detection device
JP4235018B2 (en) Moving object detection apparatus, moving object detection method, and moving object detection program
JP3952460B2 (en) Moving object detection apparatus, moving object detection method, and moving object detection program
CN108256470A (en) A kind of lane shift judgment method and automobile
CN112116644B (en) Obstacle detection method and device based on vision and obstacle distance calculation method and device
CN112597857B (en) Indoor robot stair climbing pose rapid estimation method based on kinect
JP2005196359A (en) Moving object detection apparatus, moving object detection method and moving object detection program
US11132530B2 (en) Method for three-dimensional graphic reconstruction of a vehicle
CN112767481A (en) High-precision positioning and mapping method based on visual edge features
Dargazany Stereo-based terrain traversability analysis using normal-based segmentation and superpixel surface analysis
KR101042171B1 (en) Method and apparatus for controlling vergence of intersting objects in the steroscopic camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant