CN112529903A - Stair height and width visual detection method and device and robot dog - Google Patents
- Publication number
- CN112529903A (application CN202110143884.6A)
- Authority
- CN
- China
- Prior art keywords
- stair
- gradient
- matrix
- height
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Abstract
The invention relates to a visual detection method and device for the height and width of stairs, and a robot dog. The method comprises the following steps: S1, collecting stair image information to obtain an RGB image and a depth map; S2, performing semantic segmentation on the RGB image to obtain a semantically segmented image; S3, calculating the transverse and longitudinal gradient matrices of the semantically segmented image; S4, extracting the pixel coordinates of the corner points using the gradient information; and S5, mapping the pixel coordinates of the corner points onto the depth map and calculating the height and width of the stairs. The invention computes parallax from the depth information, obtains the stair boundary through semantic segmentation, locates the corner coordinates of the height and width edges through the gradients, and then calculates the height and width of the stairs. This provides visual assistance for the robot dog when climbing stairs, prevents it from stepping on a stair edge or the junction between two steps, and helps ensure its stability during operation.
Description
Technical Field
The invention relates to the technical field of robots, in particular to a stair height and width visual detection method and device and a robot dog.
Background
Lacking visual assistance, existing quadruped robots can only "walk blind" when going up and down stairs or slopes: a dedicated "stair-climbing mode" is built into the motion control algorithm. In this mode, the robot's gait is restricted to a preset fixed step height, and the step length is determined by the operator.
A quadruped robot walking up stairs in such a fixed motion mode moves very stiffly; moreover, without visual assistance it cannot estimate the relative position of its foothold with respect to the stairs. The robot may therefore step on a stair edge or the junction between two steps, which poses a significant challenge to the control system and makes operation very unstable.
Disclosure of Invention
To solve the above technical problems, the invention provides a visual detection method and device for stair height and width, and a robot dog.
The invention is realized by the following technical scheme:
a visual detection method for the height and width of a staircase comprises the following steps:
s1, collecting stair image information to obtain an RGB image and a depth map;
s2, performing semantic segmentation on the RGB image to obtain a semantic segmented image;
s3, calculating a horizontal gradient matrix and a vertical gradient matrix of the semantic segmentation image;
s4, extracting the pixel coordinates of the corner points by utilizing the gradient information;
and S5, mapping the pixel coordinates of the corner points onto the depth map and calculating the height and width of the stairs.
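As an illustrative sketch, the five steps above can be mocked up on a toy image; all function names and the toy data below are hypothetical, not taken from the patent, and a trivial threshold stands in for the semantic segmentation network.

```python
import numpy as np

# Illustrative sketch of steps S1-S5 on a toy 6x6 image.

def semantic_segmentation(rgb):
    """S2: stand-in for the segmentation network -- stair pixels are the dark ones."""
    return (rgb < 128).astype(np.uint8)

def gradient_matrices(binary):
    """S3: transverse (along x) and longitudinal (along y) difference matrices."""
    gh = np.diff(binary.astype(int), axis=1)
    gv = np.diff(binary.astype(int), axis=0)
    return gh, gv

def corner_pixels(gv):
    """S4: pixels where the longitudinal gradient fires mark the stair boundary."""
    rows, cols = np.nonzero(gv)
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

# S1: toy grayscale image and a placeholder depth map
rgb = np.full((6, 6), 255, dtype=np.uint8)
rgb[3:, :] = 0                    # lower half is "stair"
depth = np.full((6, 6), 0.4)      # 0.4 m everywhere (placeholder)

mask = semantic_segmentation(rgb)
gh, gv = gradient_matrices(mask)
corners = corner_pixels(gv)
# S5 would look up each corner in the depth map: depth[row, col]
print(corners[0])                 # -> (2, 0)
```

The boundary between the bright and dark halves is recovered as the row where the longitudinal gradient is non-zero, which is exactly the corner information step S5 maps into the depth map.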
Further, a binary image is generated by performing semantic segmentation on the RGB image.
Furthermore, before the stair image information is collected, target detection and recognition are carried out, and the pose of the camera is adjusted so that the stair edges fill the entire frame.
Further, in S3, the transverse gradient matrix is calculated using formula (1):

Gh(x, y) = f(x+1, y) − f(x, y)  (1)

and the longitudinal gradient matrix is calculated using formula (2):

Gv(x, y) = f(x, y+1) − f(x, y)  (2)

where (x, y) are the abscissa and ordinate of a point on the pixel matrix and f(x, y) is the pixel value of the image at that point.

Let A be a gradient matrix with m rows and n columns, and let the constant c represent the step size; then A has one and only one subsequence {a(1, j1), a(2, j2), …, a(m, jm)}, where a(i, ji) denotes the element in row i and column ji of matrix A, a(a, b) denotes the element in row a and column b of A, and each a(i, ji) is called a component of the subsequence.

Each component corresponds to a point in A; the connection between any two adjacent components a(i, ji) and a(i+1, j(i+1)) is defined as a line segment, where 1 ≤ i ≤ m − 1.

S4 specifically comprises: calculating the accumulated transverse gradient once every c longitudinal pixels of the gradient matrix, or calculating the accumulated longitudinal gradient once every c transverse pixels.

The accumulated transverse gradient is calculated using formula (3):

Kh = (1/c) · Σ_{k=y}^{y+c−1} Gh(x, k)  (3)

The accumulated longitudinal gradient is calculated using formula (4):

Kv = (1/c) · Σ_{k=x}^{x+c−1} Gv(k, y)  (4)

where Gv is the longitudinal gradient matrix corresponding to the pixel matrix and Gh is the transverse gradient matrix corresponding to the pixel matrix.

Then, the accumulated transverse gradient or the accumulated longitudinal gradient is compared with a decision constant: if it is greater than the decision constant, the line segment is considered part of the step height; if it is smaller than the decision constant, the line segment is considered part of the step width.

The calculation of the accumulated transverse or longitudinal gradient is repeated to obtain relatively accurate coordinates of the starting points of the high and wide edges of the stairs, and the corresponding pixel points are deduced back from the matrix indices.
Preferably, the decision constant takes 1.
A visual stair height detection device comprises an image acquisition module, a semantic segmentation module, a depth measurement module, a gradient calculation module and a stair height and width calculation module.
Image acquisition module: used for collecting stair image information.
Semantic segmentation module: used for performing semantic segmentation on the image to obtain the stair boundary.
Depth measurement module: used for obtaining the depth information of the image to produce a depth map.
Gradient calculation module: used for calculating gradient information from the semantically segmented image and extracting corner-point pixel coordinates from the gradient information.
Stair height and width calculation module: used for mapping the corner-point pixel coordinates onto the depth map and calculating the height and width of the stairs.
Further, the visual stair height detection device also comprises a target detection module, used for detecting the camera pose from the collected stair image information.
Further, the visual stair height detection device comprises a depth camera.
A robot dog comprises the above visual stair height detection device.
Further, the robot dog includes a binocular depth camera module.
Compared with the prior art, the invention has the following beneficial effects:
the method comprises the steps of 1, performing edge detection and segmentation through semantic segmentation to obtain a stable binary image with small interference, extracting edge points by utilizing gradient information, and finally obtaining height and width information of the staircase through a binocular vision algorithm;
2, the height information of the stairs can be obtained through the stair height visual detection device, visual assistance is provided for the robot dog to go upstairs, the robot dog is prevented from stepping on the edge of the stairs or the junction of two levels of stairs, and the stability of the robot dog in working is guaranteed.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention.
FIG. 1 is a schematic illustration of a machine dog work environment;
FIG. 2 is a schematic view of a robot dog in operation;
FIG. 3 is a flow chart of the first embodiment;
FIG. 4 is a binary image obtained after semantic segmentation of an original stair image;
FIG. 5 is a schematic diagram of a binary image;
FIG. 6 is a schematic diagram of a binary matrix obtained by semantic segmentation;
FIG. 7 is a schematic diagram of a calculated longitudinal gradient matrix;
FIG. 8 is a schematic diagram of a calculated transverse gradient matrix;
FIG. 9 is a schematic view of two line segments;
FIG. 10 is a schematic illustration of changing point coordinates;
FIG. 11 (a) is a schematic of a longitudinal gradient matrix;
FIG. 11 (b) is a schematic of a gradient profile;
FIG. 12 is a schematic diagram of height measurement based on the Pythagorean theorem;
FIG. 13 is a schematic of the wide-edge slope and the relative positions of the dog and stairs when the dog is relatively to the left;
FIG. 14 is a schematic of the wide-edge slope and the relative positions of the dog and stairs when the dog is relatively to the right;
FIG. 15 is a schematic of the relative positions of the dog and the stairs when measuring the slope decision constant;
FIG. 16 is a photograph taken when measuring the slope decision constant;
FIG. 17 is a schematic view of the radial movement;
FIG. 18 is a schematic view of a rotational movement perpendicular to a radial direction;
FIG. 19 is a schematic view of pinhole imaging;
fig. 20 is a schematic of the extreme-position adjustment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.
The invention discloses a visual detection method for the height and width of a staircase, which comprises the following steps:
s1, collecting stair image information to obtain an RGB image and a depth map;
s2, performing semantic segmentation on the RGB image to obtain a semantic segmented image;
s3, calculating a horizontal gradient matrix and a vertical gradient matrix of the semantic segmentation image;
s4, extracting the pixel coordinates of the corner points by utilizing the gradient information;
and S5, mapping the pixel coordinates of the corner points onto the depth map and calculating the height and width of the stairs.
Based on the method, the invention discloses a visual stair height detection device, which comprises an image acquisition module, a target detection module, a semantic segmentation module, a depth measurement module, a gradient calculation module and a stair height and width calculation module.
Image acquisition module: used for collecting stair image information.
Target detection module: used for detecting the camera pose from the collected stair image information.
Semantic segmentation module: used for performing semantic segmentation on the image to obtain the stair boundary.
Depth measurement module: used for obtaining the depth information of the image to produce a depth map.
Gradient calculation module: used for calculating gradient information from the semantically segmented image and extracting corner-point pixel coordinates from the gradient information.
Stair height and width calculation module: used for mapping the corner-point pixel coordinates onto the depth map and calculating the height and width of the stairs.
In another embodiment, the visual stair height detection device comprises a depth camera that integrates the image acquisition module and the depth measurement module; an RGB-D camera can be selected.
Based on the above method and device, the invention discloses a robot dog comprising the visual stair height detection device. Through this device, the height information of the stairs can be obtained, providing visual assistance for the robot dog when climbing stairs, preventing it from stepping on a stair edge or the junction between two steps, and ensuring its stability.
Based on the above robot dog, the invention discloses an embodiment.
Example one
In this embodiment, the height and width of the stairs are defined as shown in fig. 1.
The distance between the robot dog and the first step of the stairs is within 30-50 cm, i.e. the shaded area in fig. 1. Under this condition, the color image used by the robot dog for semantic segmentation and image recognition is compressed and downsampled to 640 × 480. When the robot is 30-50 cm from the first step, the main objects in the field of view are walls, stairs and skirting lines, as shown in fig. 2; this avoids the problem of corner points below a step being out of view.
This embodiment requires separately training an FCN network for stair semantics extraction. The semantic data about stairs is partially collected manually and partially taken from staircase-like images in existing open-source datasets (e.g., the Barcelona Dataset).
The robot dog has a binocular depth camera; in this embodiment, an Intel RealSense D435 depth camera is selected.
As shown in fig. 3, in operation, the robot dog detects the height of the stairs as follows:
S1: image information is acquired through the depth camera. There are two streams of image information: one provides RGB images for semantic segmentation and target detection, and the other is used to generate depth information. In the D435 camera, pixel-by-pixel registration of the RGB image and the depth map is already integrated.
Before the stair image information is collected, the stairs can be detected through the target detection module and the pose of the robot dog adjusted so that the stair edges fill the entire frame. The target detection module is not a required module; detection and recognition can be performed with a simple YOLOv3 algorithm.
S2, semantic segmentation: a binary image distinguishing stair and non-stair objects is generated, as shown in fig. 4.
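As a minimal, hypothetical illustration of this step, per-class scores such as an FCN might produce can be collapsed into the binary stair mask by a per-pixel argmax; the score tensor below is synthetic, not network output.

```python
import numpy as np

# Turning per-class segmentation scores into the binary stair/non-stair
# image of fig. 4. Class 0 = background, class 1 = stair.
scores = np.zeros((2, 4, 4))        # (classes, H, W), synthetic values
scores[1, 2:, :] = 5.0              # "stair" wins in the bottom two rows
scores[0, :2, :] = 5.0              # "background" wins in the top two rows

binary = np.argmax(scores, axis=0).astype(np.uint8)   # 1 marks a stair pixel
print(binary[:, 0])                 # -> [0 0 1 1]
```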
S3: the transverse and longitudinal gradient matrices of the binary image are calculated.
In this embodiment, taking the binary image shown in fig. 5 as an example, the obtained binary matrix is shown in fig. 6, the calculated longitudinal gradient matrix is shown in fig. 7, and the calculated transverse gradient matrix is shown in fig. 8.
The transverse gradient matrix is calculated using formula (1):

Gh(x, y) = f(x+1, y) − f(x, y)  (1)

and the longitudinal gradient matrix is calculated using formula (2):

Gv(x, y) = f(x, y+1) − f(x, y)  (2)

where (x, y) are the abscissa and ordinate of a point on the pixel matrix and f(x, y) is the pixel value of the image at that point. In this embodiment the image data is stored in 8-bit format, so each pixel value is an integer in the range 0 ≤ f(x, y) ≤ 255.
As can be seen from fig. 7 and 8, the longitudinal gradient matrix reflects where the image changes from row to row, and the transverse gradient matrix reflects where it changes from column to column.
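The fig. 5-8 computation can be reproduced on a small synthetic binary matrix, taking formulas (1)-(2) as forward differences (one common discretization, assumed here since the patent's formula images are not quoted):

```python
import numpy as np

# A small "staircase" binary matrix, standing in for fig. 6.
B = np.array([[0, 0, 0, 0],
              [1, 1, 0, 0],
              [1, 1, 1, 1]], dtype=int)

G_h = B[:, 1:] - B[:, :-1]   # formula (1): transverse gradient, change along x
G_v = B[1:, :] - B[:-1, :]   # formula (2): longitudinal gradient, change along y

print(G_v)   # non-zero entries mark where the boundary crosses each column
```

The non-zero pattern of G_v moves one row down as the columns advance, which is exactly the boundary displacement the accumulated gradient of S4 measures.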
Then, the accumulated transverse gradient is calculated once every c longitudinal pixels of the gradient matrix, or the accumulated longitudinal gradient is calculated once every c transverse pixels.

The accumulated transverse gradient is calculated using formula (3):

Kh = (1/c) · Σ_{k=y}^{y+c−1} Gh(x, k)  (3)

The accumulated longitudinal gradient is calculated using formula (4):

Kv = (1/c) · Σ_{k=x}^{x+c−1} Gv(k, y)  (4)

where Gv is the longitudinal gradient matrix corresponding to the pixel matrix and Gh is the transverse gradient matrix corresponding to the pixel matrix.
This embodiment chooses to calculate the accumulated longitudinal gradient. As shown in figs. 9-11, the variation Δy in the y direction is calculated once for every c columns of the gradient matrix.

Let A be a gradient matrix with m rows and n columns, and let the constant c represent the step size; then A has one and only one subsequence {a(1, j1), a(2, j2), …, a(m, jm)}, where a(i, ji) denotes the element in row i and column ji of matrix A, a(a, b) denotes the element in row a and column b of A, and each a(i, ji) is called a component of the subsequence.

Each component corresponds to a point in A; the connection between any two adjacent components a(i, ji) and a(i+1, j(i+1)) is defined as a line segment, where 1 ≤ i ≤ m − 1.

The slope of such a line segment is calculated using formula (4); this slope is the accumulated longitudinal gradient.
Subsequently, the accumulated longitudinal gradient Kv is compared with a decision constant to judge whether a line segment belongs to the wide edge or the high edge; this comparison is traversed over the matrix to find the end points of the wide and high edges.

If Kv is greater than the decision constant, the line segment is considered part of the step height; if it is smaller, the line segment is considered part of the step width.

For example, in this embodiment c takes the value 4 and the decision constant takes the value 1; when Kv exceeds the decision constant, the line segment is considered part of the step height. In a similar manner, the decision constant is used to decide on the wide edges. Repeating the slope calculation over the gradient matrix while gradually reducing c yields relatively accurate coordinates of the starting points of the high and wide edges of the stairs, from which the corresponding pixel points are deduced back via the matrix indices.
S4: the pixel coordinates are mapped onto the depth map, and the lengths of the corresponding edges are obtained from the depth camera. The calculation principle, based on the Pythagorean theorem, is shown in fig. 12; it is conventional in the art and is not described further here.
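One way the fig. 12 principle can be sketched: back-project the two corner pixels through a pinhole model using their depths, then take the 3D Euclidean distance. The intrinsics (fx, fy, cx, cy) below are made-up placeholder values, not actual D435 calibration.

```python
import math

def deproject(u, v, depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Pinhole back-projection of pixel (u, v) with metric depth to a 3D point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def edge_length(p1, p2):
    """Pythagorean theorem in 3D, as in fig. 12."""
    return math.dist(p1, p2)

a = deproject(320, 240, 0.50)          # corner at the optical centre, 0.5 m away
b = deproject(320, 360, 0.50)          # corner 120 px lower, same depth
print(round(edge_length(a, b), 3))     # -> 0.1  (i.e. a 10 cm riser)
```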
In the present invention, there are two choices for the gradient matrix (longitudinal or transverse) and two choices for the accumulated gradient (the accumulated variation of the row coordinate with the column coordinate as the independent variable, or the accumulated variation of the column coordinate with the row coordinate as the independent variable). Combining them gives 4 modes. This embodiment selects the longitudinal gradient matrix and calculates the accumulated variation of the row coordinate with the column coordinate as the independent variable.
When the column coordinate is taken as the independent variable, the value of the decision constant c is related to the robot dog itself and to the relative position of the dog and the stairs. As shown in fig. 13, with the dog relatively to the left, the value of c is obtained as follows: the robot dog is brought close to the left side of the stairs so that the left side of its body and the left side of the stairs lie in the same plane. The dog is then moved back and forth until the horizontal distance between the lower edge of its field of view and the transverse edge of the first step is 50 cm, matching the working scenario. At this point the slope of the stair wide edge in the camera image is 1. This slope corresponds to the extreme position from which the dog can still climb the stairs when it is to the left of the staircase centerline, as shown in figs. 15 and 16. The current test of the quadruped robot on an 18 cm high step also yields a measured value of 1.
The principle of decision constant measurement is as follows:
assuming that the point a is the measurement point of this embodiment, the point a has two motion modes, with respect to the point a, assuming that the height of the camera of the robot dog is not changed: respectively, a radial movement as shown in fig. 17 and a rotational movement perpendicular to the radial direction as shown in fig. 18.
For the radial movement shown in fig. 17, the pinhole imaging diagram in fig. 19 shows that angle 1 > angle 2. Similarly, for the rotational movement shown in fig. 18, when the dog rotates from point A to point C, the angle between the wide edge of the stairs in the image and the horizontal direction becomes smaller. Likewise, in fig. 14, where the dog is relatively to the right compared with fig. 13, the slope of the wide edge of the stairs in the frame gradually decreases as the robot dog translates to the right. Therefore, the parameter measured by this method is the discrimination parameter in the limit state.
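The radial-movement argument can be checked numerically: the angle subtended at the camera by a fixed stair edge shrinks as the camera retreats. The edge width and distances below are illustrative values only.

```python
import math

def subtended_angle(edge_half_width, distance):
    """Angle (radians) subtended by an edge of given half-width at a pinhole camera."""
    return 2 * math.atan(edge_half_width / distance)

near = subtended_angle(0.5, 0.3)   # angle 1: camera 0.3 m from the edge
far = subtended_angle(0.5, 0.5)    # angle 2: camera 0.5 m from the edge
print(near > far)                  # -> True: retreating shrinks the angle
```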
The judgment of the value c serves the following two purposes. First, if the absolute values of the slopes of both line segments in the gradient matrix are found to be greater than c, the relative position of the dog and the stairs is unsuitable for climbing, and the dog must move to the right to adjust its position until the absolute value of the slope of one edge is less than or equal to c, as shown in fig. 20.
Second, if the absolute value of the slope of one edge is less than or equal to c, the value can be used to judge which edge is the wide edge and which is the high edge.
When the dog is relatively to the right, the decision constant only needs to take the symmetric value −c, which is why absolute values are used above. The relative position of the stairs and the dog is available from the stair target recognition module and is common knowledge in the field.
In another embodiment, the accumulated transverse gradient may be calculated instead, i.e. with the row coordinate as the independent variable. In this case the decision constant takes the value 1/c. The specific judgment is as follows:
the variation Δx in the x direction is calculated once for every c rows of the gradient matrix, and the slope of the line segment is calculated using formula (3); this slope is the accumulated transverse gradient.
Subsequently, the accumulated transverse gradient Kh is compared with 1/c to judge whether a line segment belongs to the wide edge or the high edge; the matrix is then traversed to find the end points of the wide and high edges.
If Kh is smaller than 1/c, the line segment is considered part of the step height; if it is greater, the line segment is considered part of the step width.
According to the invention, depth information is obtained through the depth vision module of the depth camera and the parallax is calculated; the stair boundary is obtained through semantic segmentation, the corner coordinates locating the height and width edges are calculated through the gradients, and the height and width of the stairs are then calculated. This provides visual assistance for the robot dog when climbing stairs, prevents it from stepping on a stair edge or the junction between two steps, and helps ensure its stability during operation.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (10)
1. A visual detection method for the height and width of stairs, characterized by comprising the following steps:
s1, collecting stair image information to obtain an RGB image and a depth map;
s2, performing semantic segmentation on the RGB image to obtain a semantic segmented image;
s3, calculating a horizontal gradient matrix and a vertical gradient matrix of the semantic segmentation image;
s4, extracting the pixel coordinates of the corner points by utilizing the gradient information;
and S5, mapping the pixel coordinates of the corner points onto the depth map and calculating the height and width of the stairs.
2. The visual detection method for stair height and width according to claim 1, characterized in that: a binary image is generated by performing semantic segmentation on the RGB image.
3. The visual detection method for stair height and width according to claim 1, characterized in that: before the stair image information is collected, target detection and recognition are carried out, and the pose of the camera is adjusted so that the stair edges fill the entire frame.
4. The visual detection method for stair height and width according to claim 1, characterized in that: in S3, the transverse gradient matrix is calculated using formula (1):

Gh(x, y) = f(x+1, y) − f(x, y)  (1)

and the longitudinal gradient matrix is calculated using formula (2):

Gv(x, y) = f(x, y+1) − f(x, y)  (2)

where (x, y) are the abscissa and ordinate of a point on the pixel matrix and f(x, y) is the pixel value of the image at that point.
5. The visual detection method for stair height and width according to claim 4, characterized in that: let A be a gradient matrix with m rows and n columns, and let the constant c represent the step size; then A has one and only one subsequence {a(1, j1), a(2, j2), …, a(m, jm)}, where a(i, ji) denotes the element in row i and column ji of matrix A, a(a, b) denotes the element in row a and column b of A, and each a(i, ji) is called a component of the subsequence;

each component corresponds to a point in A; the connection between any two adjacent components a(i, ji) and a(i+1, j(i+1)) is defined as a line segment, where 1 ≤ i ≤ m − 1;

S4 specifically comprises: calculating the accumulated transverse gradient once every c longitudinal pixels of the gradient matrix, or calculating the accumulated longitudinal gradient once every c transverse pixels;

the accumulated transverse gradient is calculated using formula (3):

Kh = (1/c) · Σ_{k=y}^{y+c−1} Gh(x, k)  (3)

the accumulated longitudinal gradient is calculated using formula (4):

Kv = (1/c) · Σ_{k=x}^{x+c−1} Gv(k, y)  (4)

where Gv is the longitudinal gradient matrix corresponding to the pixel matrix and Gh is the transverse gradient matrix corresponding to the pixel matrix;

then, the accumulated transverse gradient or the accumulated longitudinal gradient is compared with a decision constant: if it is greater than the decision constant, the line segment is considered part of the step height; if it is smaller than the decision constant, the line segment is considered part of the step width;

the calculation of the accumulated transverse or longitudinal gradient is repeated to obtain relatively accurate coordinates of the starting points of the high and wide edges of the stairs, and the corresponding pixel points are deduced back from the matrix indices.
6. The visual detection method for stair height and width according to claim 5, characterized in that: the decision constant is taken to be 1.
7. A visual stair height detection device, characterized by comprising an image acquisition module, a semantic segmentation module, a depth measurement module, a gradient calculation module and a stair height and width calculation module, wherein:
the image acquisition module is used for collecting stair image information;
the semantic segmentation module is used for performing semantic segmentation on the image to obtain the stair boundary;
the depth measurement module is used for obtaining the depth information of the image to produce a depth map;
the gradient calculation module is used for calculating gradient information from the semantically segmented image and extracting corner-point pixel coordinates from the gradient information;
the stair height and width calculation module is used for mapping the corner-point pixel coordinates onto the depth map and calculating the height and width of the stairs.
8. The visual stair height detection device according to claim 7, characterized by further comprising a target detection module, used for detecting the camera pose from the collected stair image information.
9. A robot dog, characterized by comprising the stair height and width visual detection device according to claim 7 or 8.
10. The robot dog according to claim 9, wherein: the robot dog comprises a binocular depth camera module.
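The binocular depth camera of claim 10 recovers the depth map from stereo disparity using the standard rectified-stereo relation Z = f·B/d. A minimal sketch, assuming the cameras are calibrated and rectified (the parameter values below are made up for illustration):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified-stereo depth: Z = f * B / d.

    disparity_px: pixel disparity between the left and right views.
    focal_px: focal length in pixels; baseline_m: camera baseline in metres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A 20 px disparity with a 400 px focal length and a 6 cm baseline:
print(depth_from_disparity(20.0, 400.0, 0.06))  # -> 1.2 (metres)
```

Since depth error grows quadratically with distance under this relation, a short-baseline module like this suits the near-range stair measurements the device performs.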
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110143884.6A CN112529903B (en) | 2021-02-03 | 2021-02-03 | Stair height and width visual detection method and device and robot dog |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110143884.6A CN112529903B (en) | 2021-02-03 | 2021-02-03 | Stair height and width visual detection method and device and robot dog |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112529903A true CN112529903A (en) | 2021-03-19 |
CN112529903B CN112529903B (en) | 2022-01-28 |
Family
ID=74975460
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110143884.6A Active CN112529903B (en) | 2021-02-03 | 2021-02-03 | Stair height and width visual detection method and device and robot dog |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112529903B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113689498A (en) * | 2021-08-16 | 2021-11-23 | 江苏仁和医疗器械有限公司 | Artificial intelligence-based electric stair climbing vehicle auxiliary control method and system |
CN113867333A (en) * | 2021-09-03 | 2021-12-31 | 南方科技大学 | Stair climbing planning method for quadruped robot based on visual perception and application of stair climbing planning method |
CN114683290A (en) * | 2022-05-31 | 2022-07-01 | 深圳鹏行智能研究有限公司 | Method and device for optimizing pose of foot robot and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103927751A (en) * | 2014-04-18 | 2014-07-16 | 哈尔滨工程大学 | Water surface optical visual image target area detection method based on gradient information fusion |
CN108876798A (en) * | 2018-06-12 | 2018-11-23 | 杭州视氪科技有限公司 | A kind of stair detection system and method |
US20190347803A1 (en) * | 2018-05-09 | 2019-11-14 | Microsoft Technology Licensing, Llc | Skeleton-based supplementation for foreground image segmentation |
CN110919653A (en) * | 2019-11-29 | 2020-03-27 | 深圳市优必选科技股份有限公司 | Stair climbing control method and device for robot, storage medium and robot |
CN111179344A (en) * | 2019-12-26 | 2020-05-19 | 广东工业大学 | Efficient mobile robot SLAM system for repairing semantic information |
CN111368749A (en) * | 2020-03-06 | 2020-07-03 | 创新奇智(广州)科技有限公司 | Automatic identification method and system for stair area |
CN112102347A (en) * | 2020-11-19 | 2020-12-18 | 之江实验室 | Step detection and single-stage step height estimation method based on binocular vision |
- 2021-02-03: CN application CN202110143884.6A granted as patent CN112529903B, status Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103927751A (en) * | 2014-04-18 | 2014-07-16 | 哈尔滨工程大学 | Water surface optical visual image target area detection method based on gradient information fusion |
US20190347803A1 (en) * | 2018-05-09 | 2019-11-14 | Microsoft Technology Licensing, Llc | Skeleton-based supplementation for foreground image segmentation |
CN108876798A (en) * | 2018-06-12 | 2018-11-23 | 杭州视氪科技有限公司 | A kind of stair detection system and method |
CN110919653A (en) * | 2019-11-29 | 2020-03-27 | 深圳市优必选科技股份有限公司 | Stair climbing control method and device for robot, storage medium and robot |
CN111179344A (en) * | 2019-12-26 | 2020-05-19 | 广东工业大学 | Efficient mobile robot SLAM system for repairing semantic information |
CN111368749A (en) * | 2020-03-06 | 2020-07-03 | 创新奇智(广州)科技有限公司 | Automatic identification method and system for stair area |
CN112102347A (en) * | 2020-11-19 | 2020-12-18 | 之江实验室 | Step detection and single-stage step height estimation method based on binocular vision |
Non-Patent Citations (1)
Title |
---|
刘东: "基于多轮足的自平衡越障爬楼梯机器人研发", 《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑》 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113689498A (en) * | 2021-08-16 | 2021-11-23 | 江苏仁和医疗器械有限公司 | Artificial intelligence-based electric stair climbing vehicle auxiliary control method and system |
CN113689498B (en) * | 2021-08-16 | 2022-06-07 | 江苏仁和医疗器械有限公司 | Artificial intelligence-based electric stair climbing vehicle auxiliary control method and system |
CN113867333A (en) * | 2021-09-03 | 2021-12-31 | 南方科技大学 | Stair climbing planning method for quadruped robot based on visual perception and application of stair climbing planning method |
CN113867333B (en) * | 2021-09-03 | 2023-11-17 | 南方科技大学 | Four-foot robot stair climbing planning method based on visual perception and application thereof |
CN114683290A (en) * | 2022-05-31 | 2022-07-01 | 深圳鹏行智能研究有限公司 | Method and device for optimizing pose of foot robot and storage medium |
CN114683290B (en) * | 2022-05-31 | 2022-09-16 | 深圳鹏行智能研究有限公司 | Method and device for optimizing pose of foot robot and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112529903B (en) | 2022-01-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112529903B (en) | Stair height and width visual detection method and device and robot dog | |
CN110569704B (en) | Multi-strategy self-adaptive lane line detection method based on stereoscopic vision | |
US10129521B2 (en) | Depth sensing method and system for autonomous vehicles | |
CN101960860B (en) | System and method for depth map extraction using region-based filtering | |
US7376250B2 (en) | Apparatus, method and program for moving object detection | |
US20200349366A1 (en) | Onboard environment recognition device | |
KR101776620B1 (en) | Apparatus for recognizing location mobile robot using search based correlative matching and method thereof | |
JP6112221B2 (en) | Moving object position estimation apparatus and moving object position estimation method | |
CN101067557A (en) | Environment sensing one-eye visual navigating method adapted to self-aid moving vehicle | |
CN112184792B (en) | Road gradient calculation method and device based on vision | |
JP2003168104A (en) | Recognition device of white line on road | |
CN107136649B (en) | Three-dimensional foot shape measuring device based on automatic track seeking mode and implementation method | |
US8873855B2 (en) | Apparatus and method for extracting foreground layer in image sequence | |
KR101090082B1 (en) | System and method for automatic measuring of the stair dimensions using a single camera and a laser | |
JP3333721B2 (en) | Area detection device | |
JP4235018B2 (en) | Moving object detection apparatus, moving object detection method, and moving object detection program | |
JP3952460B2 (en) | Moving object detection apparatus, moving object detection method, and moving object detection program | |
CN108256470A (en) | A kind of lane shift judgment method and automobile | |
CN112116644B (en) | Obstacle detection method and device based on vision and obstacle distance calculation method and device | |
CN112597857B (en) | Indoor robot stair climbing pose rapid estimation method based on kinect | |
JP2005196359A (en) | Moving object detection apparatus, moving object detection method and moving object detection program | |
US11132530B2 (en) | Method for three-dimensional graphic reconstruction of a vehicle | |
CN112767481A (en) | High-precision positioning and mapping method based on visual edge features | |
Dargazany | Stereo-based terrain traversability analysis using normal-based segmentation and superpixel surface analysis | |
KR101042171B1 (en) | Method and apparatus for controlling vergence of intersting objects in the steroscopic camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||