CN114882343A - Slope detection method, robot and storage medium


Info

Publication number
CN114882343A
CN114882343A (application CN202210600378.XA)
Authority
CN
China
Prior art keywords
pixel point
slope
pixel
value
point
Prior art date
Legal status
Pending
Application number
CN202210600378.XA
Other languages
Chinese (zh)
Inventor
宋西来
庞建新
Current Assignee
Shenzhen Ubtech Technology Co ltd
Original Assignee
Shenzhen Ubtech Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Ubtech Technology Co ltd filed Critical Shenzhen Ubtech Technology Co ltd
Priority to CN202210600378.XA
Publication of CN114882343A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • G06V 10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks


Abstract

The application belongs to the technical field of data processing and provides a slope detection method, a robot, and a storage medium. The method comprises: collecting a depth image with a depth camera, converting the depth image into point cloud data, obtaining a first grayscale image corresponding to the depth image from the point cloud data, calculating the gradient value of each first pixel point in the first grayscale image from its gray value, determining key pixel points from those gradient values, and performing flood filling from the key pixel points to determine the slope region of the ground. Because the application determines the slope region from a depth image collected by a depth camera, and the depth camera covers a large area at high resolution, it can determine the slope region of the ground more quickly and accurately than the prior art, which determines the slope region with a laser radar.

Description

Slope detection method, robot and storage medium
Technical Field
The application belongs to the technical field of data processing, and particularly relates to a slope detection method, a robot and a storage medium.
Background
With the development of robots, robots are now able to move and navigate autonomously. The key to autonomous navigation is ground detection: it identifies non-traversable regions of the ground, such as obstacles and cliffs, and the robot's movement route is planned around those regions.
At present, ground detection is lacking: slope regions of the ground are not detected, so the robot's perception of the external environment is insufficient.
Disclosure of Invention
The embodiments of the present application provide a slope detection method, a robot, and a storage medium, which can solve the problem that the robot's perception of the external environment is insufficient because slope regions of the ground cannot be detected.
In a first aspect, an embodiment of the present application provides a slope detection method, including:
acquiring a first gray image of a depth image acquired by a depth camera, wherein the depth image comprises the ground;
calculating gradient values of first pixel points in the first gray image based on gray values of the first pixel points, wherein there are a plurality of first pixel points;
determining key pixel points from the first pixel points based on the gradient values of the first pixel points;
determining a slope region in the first gray image based on the gray value of the key pixel point and the gray value of a second pixel point, wherein the second pixel point is a first pixel point in the first gray image except the key pixel point, and the slope region in the first gray image is used for determining the slope region of the ground.
In a second aspect, an embodiment of the present application provides a slope detection device, including:
the system comprises an image acquisition module, a display module and a control module, wherein the image acquisition module is used for acquiring a first gray image of a depth image acquired by a depth camera, and the depth image comprises the ground;
a gradient calculation module, configured to calculate the gradient value of a first pixel point based on the gray value of the first pixel point in the first gray image, wherein there are a plurality of first pixel points;
a key point determining module, configured to determine a key pixel point from the first pixel points based on the gradient value of the first pixel point;
and the slope detection module is used for determining a slope region in the first gray image based on the gray value of the key pixel point and the gray value of a second pixel point, wherein the second pixel point is a first pixel point in the first gray image except the key pixel point, and the slope region in the first gray image is used for determining the slope region of the ground.
In a third aspect, an embodiment of the present application provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the slope detection method of any of the first aspect described above when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, where a computer program is stored, and when executed by a processor, the computer program implements the slope detection method according to any one of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to execute the slope detection method according to any one of the above first aspects.
Compared with the prior art, the embodiments of the first aspect of the application have the following beneficial effects: the gradient values of the first pixel points are obtained from their gray values in the first grayscale image of the depth image collected by the depth camera, the key pixel points are obtained from those gradient values, the slope region in the first grayscale image is determined from the gray values of the key pixel points and of the other pixel points, and the slope region of the ground is then determined from it. Whereas the prior art cannot detect slope regions of the ground, this application determines the slope region from a depth image collected by a depth camera; the depth camera covers a large area at high resolution, so the ground slope region can be determined more quickly and accurately, improving the robot's perception of the external environment.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic view of an application scenario of a slope detection method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a slope detection method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for determining a first grayscale image according to an embodiment of the present application;
fig. 4 is a schematic flowchart illustrating a method for determining a key pixel according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a method for determining a slope angle of a pixel according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a slope detection device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise.
At present, ground detection can be performed using a laser radar or an RGB camera. The slope detection method of this application uses a depth image collected by a depth camera to determine the slope region of the ground. Because the depth image contains distance and spatial information, slope detection with a depth camera can determine not only the slope region on the ground but also the slope's angle, slope type, and the like. In addition, a depth camera is cheap relative to a laser radar, so performing road surface detection with a depth camera saves cost compared with performing it with a laser radar.
Fig. 1 is a schematic view of an application scenario of a slope detection method provided in an embodiment of the present application, where the slope detection method may be used for detecting a slope region of a ground. The depth camera 10 is used, among other things, to capture depth images of the ground. The data processing device 20 is configured to acquire a depth image acquired by the depth camera 10 and output a ramp region in the depth image according to the depth image.
Fig. 2 shows a schematic flow chart of the slope detection method provided in the present application, and referring to fig. 2, the method is described in detail as follows:
s101, a first gray image of a depth image collected by a depth camera is obtained, wherein the depth image comprises the ground.
In this embodiment, the depth camera is a camera capable of depth measurement. It may be a depth camera mounted on the robot, or a depth camera installed within a fixed area. Its installation angle is set so that the ground is captured in the collected image. For example, a depth camera on the robot may capture information about the ground in front of the robot, where the front is the robot's direction of movement.
A depth image, also known as a range image, is an image whose pixel values are the distances (depths) from the depth camera to points in the scene; it directly reflects the geometry of the scene's visible surfaces.
A gray image, also called a grayscale image, divides the range between white and black into a number of levels according to a logarithmic relationship; these levels are called gray levels.
In this embodiment, the gray value of the pixel point in the first gray image is determined based on the pixel value (depth) of the pixel point in the depth image.
S102, calculating gradient values of first pixel points based on the gray values of the first pixel points in the first gray image, wherein there are a plurality of first pixel points.
In this embodiment, each pixel point in the first grayscale image corresponds to a gray value. The first pixel points may include every pixel point in the first grayscale image; alternatively, only the pixel points in a region of the first grayscale image that contains the ground may be recorded as first pixel points. For example, if the first grayscale image includes a chair, the pixel points outside the chair region are recorded as first pixel points.
In this embodiment, the gradient values of the first pixel point may include a first gradient value and a second gradient value.
The first derivative of the first grayscale image is calculated to obtain the first gradient value of each first pixel point, and the derivative of the first gradient values is calculated to obtain the second gradient value of each first pixel point.
Specifically, a coordinate system where the first grayscale image is located is established, for example, the coordinate system is established with the lower left corner of the first grayscale image as an origin and two edges of the first grayscale image passing through the origin as an x-axis and a y-axis. And calculating the first-order gradient of the first pixel point in the x-axis direction and the first-order gradient of the first pixel point in the y-axis direction to obtain a first-order gradient value of the first pixel point, and further obtaining a first-order gradient image of the first gray level image. And calculating the gradient of the first-order gradient image to obtain a second-order gradient image of the first gray-scale image.
Specifically, first pixel points whose absolute value in the first-order gradient map is greater than or equal to the first threshold are set to a first value (for example, 255), and the other first pixel points are set to a second value (for example, 0), giving a first binary map. The first threshold may be set as desired.
First pixel points whose absolute value in the second-order gradient map is less than or equal to the second threshold are set to the first value, and the other first pixel points are set to the second value, giving a second binary map. The second threshold may be set as desired.
In the first grayscale image, the first pixel points that take the first value in both the first binary map and the second binary map are recorded as third pixel points; the third pixel points are set to the first value and the other first pixel points to the second value, giving a third binary map.
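As a concrete illustration of the gradient and binary-map construction above, the following is a minimal Python sketch. It assumes OpenCV and NumPy; the Sobel operators, the magnitude combination of the x and y gradients, and the threshold values t1 and t2 are illustrative choices, not operators prescribed by the patent.

```python
import cv2
import numpy as np

def third_binary_map(gray, t1=8.0, t2=2.0):
    """Build the third binary map of S102: pixels whose first gradient is
    large (slope-boundary candidates) and whose second gradient is small
    (locally planar). t1 and t2 are illustrative thresholds."""
    g = gray.astype(np.float32)
    # First-order gradients along x and y, combined into one magnitude map.
    gx = cv2.Sobel(g, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(g, cv2.CV_32F, 0, 1)
    grad1 = cv2.magnitude(gx, gy)
    # Second-order gradient: the gradient of the first-order gradient map.
    g2x = cv2.Sobel(grad1, cv2.CV_32F, 1, 0)
    g2y = cv2.Sobel(grad1, cv2.CV_32F, 0, 1)
    grad2 = cv2.magnitude(g2x, g2y)
    # First binary map: |first gradient| >= t1 -> first value (255).
    bin1 = np.where(np.abs(grad1) >= t1, 255, 0).astype(np.uint8)
    # Second binary map: |second gradient| <= t2 -> first value (255).
    bin2 = np.where(np.abs(grad2) <= t2, 255, 0).astype(np.uint8)
    # Third binary map: first value in both maps marks the third pixel points.
    return cv2.bitwise_and(bin1, bin2)
```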
S103, determining key pixel points from the first pixel points based on the gradient values of the first pixel points.
In this embodiment, an image labeled with the gradient values of the first pixel points is input into a key point extraction model to obtain the key pixel points among the first pixel points. The key point extraction model may be a convolutional neural network, a fully connected neural network, or the like.
The number of the key pixel points is one or more.
S104, determining a slope region in the first gray image based on the gray value of the key pixel point and the gray value of a second pixel point, wherein the second pixel point is the first pixel point except the key pixel point in the first gray image.
In this embodiment, the slope region in the first grayscale image is used to determine the slope region of the ground.
In this embodiment, each key pixel point is a seed point, and a slope region corresponding to each key pixel point is calculated by using a flood filling method.
In the embodiment of the application, the gradient values of the first pixel points are obtained from their gray values in the first grayscale image of the depth image collected by the depth camera, the key pixel points are obtained from those gradient values, the slope region in the first grayscale image is determined from the gray values of the key pixel points and of the other pixel points, and the slope region of the ground is then determined. The slope region is determined from a depth image collected by a depth camera; the depth camera covers a large area (for example, it can reach 480 lines) at high resolution, so compared with the prior art that determines the slope region with a laser radar (64 lines), the ground slope region can be determined more quickly and accurately. In addition, detecting the ground with a depth camera places low demands on hardware, for example low Central Processing Unit (CPU) and memory usage, so low-compute hardware can be chosen and resource consumption reduced. The depth camera has no strict constraints on installation angle, position, height, and the like, and is simple to install. Using a depth camera on the robot to determine the slope region of the ground improves the robot's perception of the environment and helps guarantee the safety of the robot's motion.
As shown in fig. 3, in a possible implementation manner, the implementation process of step S101 may include:
s1011, converting the depth image into first point cloud data in a camera coordinate system, wherein the camera coordinate system is a coordinate system determined based on the pose of the depth camera.
In this embodiment, the data in the depth image are converted to obtain first point cloud data in the camera coordinate system. The camera coordinate system comprises an x-axis, a y-axis and a z-axis, where the x-axis and y-axis represent the coordinates of the pixel points and the z-axis is the pixel value (depth) of the pixel points. Converting the depth image into point cloud data in the camera coordinate system prepares for its conversion into point cloud data in the standard coordinate system.
And S1012, converting the first point cloud data into second point cloud data under a standard coordinate system, wherein the standard coordinate system is a world coordinate system determined based on a horizontal plane.
In this embodiment, coordinate conversion is performed on the first point cloud data to obtain second point cloud data. The x axis and the y axis of the standard coordinate system are parallel to the horizontal plane and are vertical to each other; the z-axis in the standard coordinate system is perpendicular to the x-axis and the y-axis. The second point cloud data includes x-axis data, y-axis data, and z-axis data.
And S1013, performing normalization processing on the height value in the second point cloud data to obtain the height value after the normalization processing, wherein the height value is used for representing the degree that the ground is higher than the origin of the standard coordinate system.
In this embodiment, the height value (z-axis data) in the second point cloud data is normalized, for example, to the range of [ -0.42m, 0.22m ], so as to obtain the normalized height value, so that the data processing is simple and fast.
And S1014, obtaining a first gray image of the depth image based on the height value after the normalization processing.
In this embodiment, the normalized height values are converted into grayscale image data (0 to 255) to obtain a second grayscale image. Median filtering and dimension reduction are then applied to the second grayscale image to obtain the first grayscale image.
Optionally, the first grayscale image is inverse-normalized to obtain filtered z-axis data, and the filtered point cloud data consist of the x-axis data and y-axis data in the second point cloud data together with the filtered z-axis data.
In the embodiment of the application, the depth image is converted into the first grayscale image so that the slope region can be determined from it. Converting the point cloud data into an image makes processing fast, down to the millisecond level, so the slope region can be determined more easily and quickly from the first grayscale image.
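As a concrete illustration of steps S1011 to S1014, the following is a minimal Python sketch of the depth-to-grayscale conversion. It assumes pinhole intrinsics fx, fy, cx, cy and a 4x4 camera-to-standard-coordinate transform T_wc, none of which are given in the patent; the median-filter kernel size and the halving downsample stand in for the unspecified filtering and dimension reduction.

```python
import cv2
import numpy as np

def depth_to_first_gray(depth, fx, fy, cx, cy, T_wc, z_min=-0.42, z_max=0.22):
    """Convert a depth image (meters) to the first grayscale image.
    z_min/z_max bound the height normalization, matching the
    [-0.42 m, 0.22 m] range mentioned in the text."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    # Back-project pixels into the camera coordinate system (first point cloud).
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    # Transform into the standard (horizontal-plane) coordinate system.
    pts_world = (T_wc @ pts_cam.T).T.reshape(h, w, 4)
    height = pts_world[..., 2]
    # Normalize the height values and map them to 0..255 (second gray image).
    height = np.clip(height, z_min, z_max)
    gray = ((height - z_min) / (z_max - z_min) * 255).astype(np.uint8)
    # Median filtering and downsampling yield the first grayscale image.
    gray = cv2.medianBlur(gray, 5)
    gray = cv2.resize(gray, (w // 2, h // 2), interpolation=cv2.INTER_NEAREST)
    return gray
```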
As shown in fig. 4, in a possible implementation manner, the implementation process of step S103 may include:
S1031, determining third pixel points, among the first pixel points, that meet preset requirements, wherein the preset requirements include that the absolute value of the first gradient value is greater than or equal to a first threshold and the absolute value of the second gradient value is less than or equal to a second threshold.
In this embodiment, the first pixel point of the third binary image with the first value is the third pixel point.
S1032, determining a first candidate area in the first gray image based on the position of the third pixel point in the first gray image.
In this embodiment, a first candidate region is determined in the third binary image, and a region in the first grayscale image corresponding to the third binary image is the first candidate region.
Specifically, third pixel points that are adjacent in position are grouped into one region, giving a first candidate region. Adjacency may be along the x-axis direction or along the y-axis direction. For example, if third pixel point A is the pixel point in the ith row and jth column of the first grayscale image, third pixel point B is in the ith row and (j+1)th column, third pixel point C is in the (i+1)th row and jth column, and third pixel point D is in the (i+1)th row and (j+1)th column, then the third pixel points A, B, C and D form one first candidate region.
S1033, determining second candidate regions based on the areas of the first candidate regions, wherein a second candidate region is a first candidate region whose area is greater than or equal to a preset area.
In this embodiment, erosion and dilation are applied to the first candidate regions, the contour of each first candidate region is determined, and the area of each first candidate region is calculated from its contour.
S1034, determining the gravity center point of the second candidate region.
In this embodiment, the center of gravity point of the second candidate region may be within the second candidate region or outside the second candidate region. Each second candidate region corresponds to a center of gravity point.
S1035, based on the second candidate region and the position relationship between the gravity center points of the second candidate region, determining a key pixel point from the first pixel points included in the second candidate region.
In this embodiment, different strategies are selected to determine the key pixel points according to whether the center of gravity is in or out of the second candidate region. A center of gravity may determine a key pixel, and thus a second candidate region corresponds to a key pixel.
In the embodiment of the application, first candidate regions are determined from the gradient values of the first pixel points; they are then screened by area to obtain the second candidate regions, which delimit the slope more accurately than the first candidate regions; finally, the key pixel point of each second candidate region is determined from its center of gravity, so that the slope region subsequently determined from the key pixel points is more accurate.
In one possible implementation manner, the implementation procedure of step S1035 may include:
s10351, at a first barycentric point in the second candidate region, the key pixel points include a first pixel point corresponding to the first barycentric point.
In this embodiment, if the center of gravity point of the second candidate region is within the second candidate region, the center of gravity point is marked as the first center of gravity point. And the first pixel point corresponding to the first gravity center point is a key pixel point of the second candidate area. The key pixel point of the first gray image comprises a first gravity center point.
S10352, determining fourth pixel points based on the position of a second center of gravity point in the first grayscale image, wherein the second center of gravity point is a center of gravity point not located within its second candidate region.
In this embodiment, the fourth pixel points are the first pixel points that are in the same row as the second center of gravity point and inside the second candidate region corresponding to it, together with the first pixel points that are in the same column as the second center of gravity point and inside that second candidate region.
In this embodiment, if the center of gravity point of the second candidate region is not within the second candidate region, the center of gravity point is recorded as the second center of gravity point. And searching first pixel points which are in the same row and the same column as the second center of gravity point in the first gray level image, and recording the first pixel points which are in the second candidate region and in the same row and the same column as the second center of gravity point as fourth pixel points.
S10353, determining a fifth pixel point with the smallest distance from the second center of gravity among the fourth pixel points, where the key pixel point includes the fifth pixel point.
In this embodiment, the distance between each fourth pixel point and the second center of gravity is calculated to obtain each distance value. And recording a fourth pixel point corresponding to the minimum value in all the distance values as a fifth pixel point, wherein the key pixel point of the second candidate region is the fifth pixel point. The key pixel points of the first gray image comprise a fifth pixel point.
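Piecing steps S1031 to S1035 together, the following is a hedged Python sketch in which OpenCV contours stand in for the grouping of adjacent third pixel points; min_area, the 3x3 kernel, and the function name are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def key_pixel_points(bin3, min_area=50.0):
    """Determine key pixel points (S1031-S1035) from the third binary map.
    min_area is an illustrative preset area threshold."""
    # Erosion/dilation clean up the first candidate regions before contouring.
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.dilate(cv2.erode(bin3, kernel), kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    keys = []
    for c in contours:
        # Keep only second candidate regions: area >= preset area.
        if cv2.contourArea(c) < min_area:
            continue
        m = cv2.moments(c)
        if m["m00"] == 0:
            continue
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        if cv2.pointPolygonTest(c, (float(cx), float(cy)), False) >= 0:
            # First center of gravity point: it lies inside the region.
            keys.append((cx, cy))
        else:
            # Second center of gravity point: among region pixels in the same
            # row or column (fourth pixel points), take the closest one.
            region = np.zeros_like(mask)
            cv2.drawContours(region, [c], -1, 255, -1)
            ys, xs = np.nonzero(region)
            same = (xs == cx) | (ys == cy)
            if same.any():
                d = (xs[same] - cx) ** 2 + (ys[same] - cy) ** 2
                i = int(np.argmin(d))
                keys.append((int(xs[same][i]), int(ys[same][i])))
    return keys
```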
In one possible implementation manner, the implementation process of step S104 may include:
and calculating a first difference value between the gray value of the key pixel point and the gray value of the second pixel point.
If the first difference value is within a preset interval, obtaining a slope area corresponding to the key pixel point based on the position of the second pixel point corresponding to the first difference value within the preset interval in the first gray-scale image, wherein the slope area in the first gray-scale image comprises slope areas corresponding to all key pixel points.
In this embodiment, the slope region is determined from the key pixel points by the flood filling method.
Specifically, diffusion proceeds outward step by step from each key pixel point as the center. The gray difference between the key pixel point and each second pixel point is calculated in order of increasing distance from the key pixel point and recorded as the first difference. While the first difference stays within the preset interval, diffusion continues, and the second pixel points whose first differences are within the preset interval form a slope region together with the corresponding key pixel point. Diffusion stops once the first difference falls outside the preset interval, giving the final slope region.
The preset interval can be set as required. One key pixel point corresponds to one slope area, and different slope areas may be crossed. And the point cloud data of the first pixel point of the slope area is determined according to the second point cloud data.
If the first difference value is not in the preset interval, the second pixel point corresponding to the first difference value which is not in the preset interval and the corresponding key pixel point cannot form a slope area.
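The flood filling described above can be sketched as follows; the 4-connectivity and the preset interval [lo, hi] for the first difference are illustrative assumptions.

```python
from collections import deque
import numpy as np

def flood_slope_region(gray, seed, lo=-3, hi=3):
    """Flood fill one slope region (S104) from a key pixel point `seed`,
    given as (x, y). [lo, hi] is an illustrative preset interval for the
    first difference (second pixel gray value minus key pixel gray value)."""
    h, w = gray.shape
    sx, sy = seed
    base = int(gray[sy, sx])
    region = np.zeros((h, w), dtype=bool)
    region[sy, sx] = True
    queue = deque([(sx, sy)])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and not region[ny, nx]:
                # First difference between this second pixel and the key pixel.
                diff = int(gray[ny, nx]) - base
                if lo <= diff <= hi:  # inside the preset interval: keep diffusing
                    region[ny, nx] = True
                    queue.append((nx, ny))
    return region  # boolean mask of the slope region for this key pixel
```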
As shown in fig. 5, in a possible implementation manner, after step S104, the method may further include:
s201, performing plane fitting processing on the slope area to obtain a fitting plane of the slope area.
In this embodiment, a plane fitting process is performed on each slope region to obtain a fitting plane of each slope region.
S202, calculating a slope angle between the fitting plane and a horizontal plane, wherein the slope angle between the fitting plane and the horizontal plane is the slope angle of the slope area.
S203, for a sixth pixel point, which is a first pixel point that lies in one slope region, the slope angle of the slope region where the sixth pixel point is located is the slope angle of the sixth pixel point.
In this embodiment, if the first pixel point exists in a slope region, the first pixel point is marked as a sixth pixel point. The slope angle of the sixth pixel point is the slope angle of the slope region where the sixth pixel point is located.
S204, for a seventh pixel point, which is a first pixel point that lies in at least two slope regions, the maximum slope angle among the slope regions containing the seventh pixel point is taken as the slope angle of the seventh pixel point.
In this embodiment, if the first pixel point exists in at least two slope regions, the first pixel point is marked as a seventh pixel point. And determining the maximum value of the slope angles of the at least two slope areas, and taking the maximum value of the slope angles of the at least two slope areas as the slope angle of the seventh pixel point.
For example, if the seventh pixel point is in the slope area a and the slope area B, the slope angle of the slope area a is 20 degrees, the slope angle of the slope area B is 24 degrees, and the slope angle of the seventh pixel point is 24 degrees.
In this embodiment, the slope angle of each slope region is determined by plane fitting, and the slope angle of each first pixel point in the slope region is then determined from it; the slope angle of each first pixel point in the slope region can thus be output, giving richer slope data.
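The plane fitting in S201 to S202 can be sketched with a least-squares fit via SVD, one common choice; the patent does not prescribe a particular fitting method.

```python
import numpy as np

def slope_angle_deg(points):
    """Fit a plane to a slope region's point cloud and return its angle to
    the horizontal plane, in degrees. `points` is an (N, 3) array of x/y/z
    coordinates in the standard coordinate system (second point cloud data)."""
    centroid = points.mean(axis=0)
    # Least-squares plane via SVD: the normal is the right singular vector
    # with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    # The angle between the fitted plane and the horizontal plane equals the
    # angle between their normals; the z-axis is the horizontal plane's normal.
    cos_a = abs(normal[2]) / np.linalg.norm(normal)
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
```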
In a possible implementation manner, after step S103, the method may further include:
and calculating a second difference value obtained by subtracting the gray value of the first pixel point in the ith row and the jth column from the gray value of the first pixel point in the ith-1 row and the jth column in the first gray image, wherein the first pixel point in the ith row and the jth column is the first pixel point in the slope area.
And if the second difference is larger than a preset value, determining that the slope type of the first pixel point of the ith row and the jth column relative to the first pixel point of the ith-1 row and the jth column is a downhill. The slope type of the first pixel point of the jth line of the ith-1 line relative to the first pixel point of the jth line of the ith line is an ascending slope. The preset value may be 0.
And if the second difference is smaller than the preset value, determining that the slope type of the first pixel point of the ith row and the jth column relative to the first pixel point of the ith-1 row and the jth column is an ascending slope. The slope type of the first pixel point of the jth line of the ith-1 is a downhill slope relative to the first pixel point of the jth line of the ith line.
And if the second difference is equal to the preset value, determining that the slope type of the first pixel point of the ith row and the jth column is parallel to that of the first pixel point of the ith-1 row and the jth column.
In this embodiment, a third difference may also be calculated by subtracting the gray value of the first pixel point in the ith row and (j-1)th column from the gray value of the first pixel point in the ith row and jth column. If the third difference is greater than the preset value, the slope type of the first pixel point in the ith row and jth column relative to the first pixel point in the ith row and (j-1)th column is an uphill; if the third difference is smaller than the preset value, it is a downhill; if the third difference is equal to the preset value, the two are parallel.
In this embodiment, if the depth camera is mounted on the robot, the depth image is an image of the ground in front of the robot taken when the robot moves forward or needs to move forward. The first pixel point in the ith row is closer to the depth camera than the first pixel point in the (i-1)th row. If the first pixel point in the ith row and jth column is a point in the slope region, a fourth difference is calculated by subtracting the gray value of the first pixel point in the ith row and jth column from the gray value of the first pixel point in the (i-1)th row and jth column. If the fourth difference is greater than the preset value, the slope type at the first pixel point in the ith row and jth column is an uphill; that is, when the robot moves to this point, it must ascend to continue moving. If the fourth difference is smaller than the preset value, the slope type there is a downhill; that is, when the robot moves to this point, it must descend to continue moving.
In the embodiment of the application, the slope type of each first pixel point in the slope region can be determined according to the gray value of the pixel point, and then the slope type of the first pixel point in the slope region is output, so that richer slope data are obtained.
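The slope-type rule for the fourth difference can be summarized in a few lines, with preset = 0 as in the text.

```python
def slope_type(gray, i, j, preset=0):
    """Classify pixel (i, j) against the pixel one row farther from the
    camera, (i-1, j); the gray value encodes height."""
    # Fourth difference: gray(i-1, j) - gray(i, j).
    diff = int(gray[i - 1, j]) - int(gray[i, j])
    if diff > preset:
        return "uphill"    # the farther pixel is higher: the robot must ascend
    if diff < preset:
        return "downhill"  # the farther pixel is lower: the robot must descend
    return "parallel"
```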
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 6 shows a block diagram of a slope detection device provided in the embodiment of the present application, corresponding to the slope detection method described in the above embodiment, and only the relevant parts of the embodiment of the present application are shown for convenience of description.
Referring to fig. 6, the apparatus 300 may include: an image acquisition module 310, a gradient calculation module 320, a keypoint determination module 330, and a slope detection module 340.
The image obtaining module 310 is configured to obtain a first grayscale image of a depth image collected by a depth camera, where the depth image includes the ground;
the gradient calculation module 320 is configured to calculate the gradient value of a first pixel point based on the gray value of the first pixel point in the first gray image, wherein there are a plurality of first pixel points;
a key point determining module 330, configured to determine a key pixel point from the first pixel points based on the gradient value of the first pixel point;
the slope detection module 340 is configured to determine a slope region in the first grayscale image based on the grayscale value of the key pixel and the grayscale value of a second pixel, where the second pixel is a first pixel in the first grayscale image except the key pixel, and the slope region in the first grayscale image is used to determine the slope region of the ground.
In a possible implementation manner, the gradient value of the first pixel point includes a first gradient value and a second gradient value;
the keypoint determination module 330 may be specifically configured to:
determining third pixel points, among the first pixel points, that meet preset requirements, wherein the preset requirements include that the absolute value of the first gradient value is greater than or equal to a first threshold and the absolute value of the second gradient value is less than or equal to a second threshold;
determining a first candidate region in the first gray image based on the position of the third pixel point in the first gray image;
determining a second candidate region based on the area of the first candidate region, wherein the second candidate region is a first candidate region whose area is greater than or equal to a preset area;
determining a center of gravity point of the second candidate region;
and determining key pixel points from first pixel points included in the second candidate region based on the position relation between the second candidate region and the gravity center points of the second candidate region.
In a possible implementation manner, the key point determining module 330 may specifically be configured to:
for a first center of gravity point located within the second candidate region, the key pixel points comprise the first pixel point corresponding to the first center of gravity point;
determining a fourth pixel point based on the position of the second gravity center point in the first gray-scale image, wherein the fourth pixel point is a first pixel point which is in the same row with the second gravity center point and is in the second candidate area corresponding to the second gravity center point, and a first pixel point which is in the same column with the second gravity center point and is in the second candidate area corresponding to the second gravity center point;
and determining a fifth pixel point with the minimum distance from the second center of gravity point in the fourth pixel points, wherein the key pixel points comprise the fifth pixel point.
In a possible implementation manner, the slope detection module 340 may specifically be configured to:
calculating a first difference value between the gray value of the key pixel point and the gray value of the second pixel point;
if the first difference value is within a preset interval, obtaining a slope area corresponding to the key pixel point based on the position of the second pixel point corresponding to the first difference value within the preset interval in the first gray-scale image, wherein the slope area in the first gray-scale image comprises slope areas corresponding to all key pixel points.
In one possible implementation, the image obtaining module 310 may be specifically configured to:
converting the depth image into first point cloud data in a camera coordinate system, wherein the camera coordinate system is a coordinate system determined based on the pose of the depth camera;
converting the first point cloud data into second point cloud data under a standard coordinate system, wherein the standard coordinate system is a world coordinate system determined based on a horizontal plane;
normalizing the height value in the second point cloud data to obtain a normalized height value, wherein the height value is used for representing the degree of the ground higher than the origin of the standard coordinate system;
and obtaining a first gray image of the depth image based on the height value after the normalization processing.
In a possible implementation manner, the slope detection module 340 further includes:
the plane fitting module is used for carrying out plane fitting processing on the slope area to obtain a fitting plane of the slope area;
and the angle calculation module is used for calculating the slope angle between the fitting plane and the horizontal plane, and the slope angle between the fitting plane and the horizontal plane is the slope angle of the slope area.
In a possible implementation manner, connected to the angle calculation module, the apparatus further includes:
the first angle determining module is used for determining a sixth pixel point, wherein the sixth pixel point is a first pixel point existing in a slope region in the first pixel points, and the slope angle of the slope region where the sixth pixel point is located is the slope angle of the sixth pixel point;
and the second angle determining module is used for determining a seventh pixel point, wherein the seventh pixel point is a first pixel point existing in at least two slope areas in the first pixel point, and the maximum value of the slope angle in the slope area comprising the seventh pixel point is used as the slope angle of the seventh pixel point.
In a possible implementation manner, the slope detection module 340 further includes:
a difference value calculating module, configured to calculate a second difference between the gray value of the first pixel point in the (i-1)th row and jth column of the first gray image and the gray value of the first pixel point in the ith row and jth column, where the first pixel point in the ith row and jth column is a pixel point in the slope region;
a first judgment module, configured to determine, if the second difference is greater than a preset value, that the slope type of the first pixel point in the ith row and jth column relative to the first pixel point in the (i-1)th row and jth column is a downhill;
a second judgment module, configured to determine, if the second difference is smaller than the preset value, that the slope type of the first pixel point in the ith row and jth column relative to the first pixel point in the (i-1)th row and jth column is an uphill;
and a third judgment module, configured to determine, if the second difference is equal to the preset value, that the slope types of the first pixel point in the ith row and jth column and the first pixel point in the (i-1)th row and jth column relative to each other are parallel.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It should be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional units and modules is only used for illustration, and in practical applications, the above function distribution may be performed by different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the above described functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides a terminal device, and referring to fig. 7, the terminal device 400 may include: at least one processor 410, a memory 420, and a computer program stored in the memory 420 and executable on the at least one processor 410, wherein the processor 410 when executing the computer program implements the steps of any of the method embodiments described above, such as the steps S101 to S104 in the embodiment shown in fig. 2. Alternatively, the processor 410, when executing the computer program, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the modules 310 to 340 shown in fig. 6.
Illustratively, a computer program may be partitioned into one or more modules/units, which are stored in the memory 420 and executed by the processor 410 to accomplish the present application. The one or more modules/units may be a series of computer program segments capable of performing specific functions, which are used to describe the execution of the computer program in the terminal device 400.
Those skilled in the art will appreciate that fig. 7 is merely an example of a terminal device and does not constitute a limitation; the terminal device may include more or fewer components than shown, some components may be combined, or different components may be used, such as input/output devices, network access devices, buses, etc.
The Processor 410 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 420 may be an internal storage unit of the terminal device, or may be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. The memory 420 is used for storing the computer programs and other programs and data required by the terminal device. The memory 420 may also be used to temporarily store data that has been output or is to be output.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The slope detection method provided by the embodiment of the application can be applied to terminal equipment such as a computer, a tablet computer, a notebook computer, a netbook, a Personal Digital Assistant (PDA) and the like, and the embodiment of the application does not limit the specific type of the terminal equipment at all.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed terminal device, apparatus and method may be implemented in other ways. For example, the above-described terminal device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical function division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the method embodiments described above when the computer program is executed by one or more processors.
An embodiment of the present application also provides a computer program product which, when run on a terminal device, causes the terminal device to implement the steps of the above method embodiments.
The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in each jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method of slope detection, comprising:
acquiring a first gray image of a depth image acquired by a depth camera, wherein the depth image comprises the ground;
calculating gradient values of first pixel points in the first gray image based on gray values of the first pixel points, wherein there are a plurality of first pixel points;
determining key pixel points from the first pixel points based on the gradient values of the first pixel points;
determining a slope region in the first gray image based on the gray value of the key pixel point and the gray value of a second pixel point, wherein the second pixel point is a first pixel point in the first gray image except the key pixel point, and the slope region in the first gray image is used for determining the slope region of the ground.
2. The slope detection method according to claim 1, wherein the gradient values of the first pixel point comprise a first gradient value and a second gradient value;
determining a key pixel point from the first pixel points based on the gradient value of the first pixel point, including:
determining a third pixel point, among the first pixel points, that meets a preset requirement, wherein the preset requirement comprises that the absolute value of the first gradient value is greater than or equal to a first threshold value and the absolute value of the second gradient value is less than or equal to a second threshold value;
determining a first candidate region in the first gray image based on the position of the third pixel point in the first gray image;
determining a second candidate region based on the area of the first candidate region, wherein the second candidate region is a first candidate region, among the first candidate regions, whose area is greater than or equal to a preset area;
determining a center of gravity point of the second candidate region;
and determining key pixel points from the first pixel points included in the second candidate regions based on the positional relationship between each second candidate region and its center of gravity point.
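To make claim 2 concrete, here is a minimal sketch of one way to realize it: threshold the two gradient maps, group the surviving (third) pixels into connected regions, keep the regions reaching a preset area, and take their centers of gravity. The use of scipy's connected-component labelling, the default connectivity, and all parameter names are assumptions, not the patented implementation.

```python
import numpy as np
from scipy import ndimage

def candidate_regions(g1, g2, t1, t2, min_area):
    """Threshold gradients, label regions, filter by area, return centroids."""
    third = (np.abs(g1) >= t1) & (np.abs(g2) <= t2)    # preset requirement
    labels, n = ndimage.label(third)                   # first candidate regions
    keep, centroids = [], []
    for idx in range(1, n + 1):
        mask = labels == idx
        if mask.sum() >= min_area:                     # second candidate region
            keep.append(mask)
            centroids.append(ndimage.center_of_mass(mask))  # (row, col) floats
    return keep, centroids
```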
3. The slope detection method according to claim 2, wherein the determining key pixel points from the first pixel points included in the second candidate region, based on the positional relationship between the second candidate region and its center of gravity point, comprises:
for a first center of gravity point, being a center of gravity point located within its second candidate region, the key pixel points comprise the first pixel point corresponding to the first center of gravity point;
determining fourth pixel points based on the position in the first gray-scale image of a second center of gravity point, being a center of gravity point not located within its second candidate region, wherein the fourth pixel points are the first pixel points in the same row as the second center of gravity point and within the second candidate region corresponding to the second center of gravity point, and the first pixel points in the same column as the second center of gravity point and within that second candidate region;
and determining, among the fourth pixel points, a fifth pixel point having the minimum distance to the second center of gravity point, wherein the key pixel points comprise the fifth pixel point.
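An illustrative sketch of claim 3, under the assumption that the first/second center-of-gravity distinction is whether the centroid falls inside its region: if it does, it is the key pixel; otherwise pick, among region pixels sharing the centroid's row or column (the fourth pixels), the one closest to the centroid (the fifth pixel). The rounding of the centroid is also an assumption.

```python
import numpy as np

def key_pixel_for_region(mask: np.ndarray, centroid):
    """Return the key pixel (row, col) of one second candidate region."""
    r, c = int(round(centroid[0])), int(round(centroid[1]))
    if mask[r, c]:                      # first gravity point: inside its region
        return r, c
    rows, cols = np.nonzero(mask)
    same_line = (rows == r) | (cols == c)      # fourth pixel points
    if not same_line.any():
        return None                     # degenerate case, not covered by claim
    rr, cc = rows[same_line], cols[same_line]
    i = int(np.argmin((rr - r) ** 2 + (cc - c) ** 2))  # fifth pixel: min distance
    return int(rr[i]), int(cc[i])
```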
4. The slope detection method of claim 1, wherein the determining the slope region in the first grayscale image based on the grayscale value of the key pixel and the grayscale value of the second pixel comprises:
calculating a first difference value between the gray value of the key pixel point and the gray value of the second pixel point;
if the first difference value falls within a preset interval, obtaining the slope region corresponding to the key pixel point based on the positions, in the first gray-scale image, of the second pixel points whose first difference values fall within the preset interval, wherein the slope region in the first gray-scale image comprises the slope regions corresponding to all key pixel points.
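A minimal sketch of claim 4, assuming the first difference is the key pixel's gray value minus each second pixel's gray value (one possible reading of the claim) and that the preset interval [lo, hi] is closed:

```python
import numpy as np

def slope_region(gray: np.ndarray, key_rc: tuple, lo: float, hi: float):
    """Grow the slope region of one key pixel from gray-value differences."""
    diff = float(gray[key_rc]) - gray.astype(np.float64)  # first difference
    region = (diff >= lo) & (diff <= hi)                  # preset interval
    region[key_rc] = True        # the key pixel belongs to its own region
    return region
```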
5. The slope detection method according to any one of claims 1 to 4, wherein the acquiring a first grayscale image of the depth image acquired by the depth camera comprises:
converting the depth image into first point cloud data in a camera coordinate system, wherein the camera coordinate system is a coordinate system determined based on the pose of the depth camera;
converting the first point cloud data into second point cloud data under a standard coordinate system, wherein the standard coordinate system is a world coordinate system determined based on a horizontal plane;
normalizing the height values in the second point cloud data to obtain normalized height values, wherein a height value represents the extent to which the ground lies above the origin of the standard coordinate system;
and obtaining a first gray image of the depth image based on the height value after the normalization processing.
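A sketch of the pipeline in claim 5: back-project the depth image through camera intrinsics, move the cloud into a horizontal-plane world frame, normalize the height channel, and render it as a gray image. The pinhole intrinsics K, the 4x4 pose T_world_cam, and the min-max normalization are all assumptions; the claim fixes none of them.

```python
import numpy as np

def depth_to_gray(depth: np.ndarray, K: np.ndarray, T_world_cam: np.ndarray):
    """Depth image -> normalized-height gray image (illustrative only)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - K[0, 2]) * z / K[0, 0]            # first point cloud (camera frame)
    y = (v - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z, np.ones_like(z)], axis=-1) @ T_world_cam.T
    height = pts[..., 2]                       # second point cloud heights
    span = height.max() - height.min()
    norm = (height - height.min()) / span if span > 0 else np.zeros_like(height)
    return (norm * 255).astype(np.uint8)       # first gray image
```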
6. The slope detection method according to any one of claims 1 to 4, wherein after determining the slope region in the first grayscale image based on the grayscale value of the key pixel and the grayscale value of the second pixel, the method further comprises:
performing plane fitting processing on the slope area to obtain a fitting plane of the slope area;
and calculating the slope angle between the fitting plane and the horizontal plane, wherein the slope angle between the fitting plane and the horizontal plane is the slope angle of the slope area.
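One plausible reading of claim 6, sketched with a least-squares fit: fit the plane z = a*x + b*y + c through the region's 3-D points, then take the angle between that plane and the horizontal, i.e. between the plane normal (-a, -b, 1) and the vertical axis. The claim says only "plane fitting", so least squares, and the assumption that x, y, z are metric world coordinates, are illustrative choices.

```python
import numpy as np

def slope_angle_deg(x: np.ndarray, y: np.ndarray, z: np.ndarray) -> float:
    """Slope angle (degrees) of the plane least-squares fitted to the points."""
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, _), *_ = np.linalg.lstsq(A, z, rcond=None)
    cos_t = 1.0 / np.sqrt(a * a + b * b + 1.0)  # normal . vertical / |normal|
    return float(np.degrees(np.arccos(cos_t)))
```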
7. The slope detection method of claim 6, wherein after the calculating a slope angle between the fitted plane and a horizontal plane, the method further comprises:
for a sixth pixel point, the sixth pixel point being a first pixel point located in one slope region, taking the slope angle of the slope region in which the sixth pixel point is located as the slope angle of the sixth pixel point;
and for a seventh pixel point, the seventh pixel point being a first pixel point located in at least two slope regions, taking the maximum slope angle among the slope regions comprising the seventh pixel point as the slope angle of the seventh pixel point.
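Claim 7 reduces to a per-pixel maximum over the regions containing each pixel; a minimal sketch follows, where marking out-of-region pixels with NaN is an assumption:

```python
import numpy as np

def per_pixel_angles(masks, angles, shape):
    """Assign each pixel the max slope angle of the regions containing it."""
    out = np.full(shape, np.nan)
    for mask, ang in zip(masks, angles):
        cur = out[mask]                         # sixth pixels: first region wins
        out[mask] = np.where(np.isnan(cur), ang, np.maximum(cur, ang))
    return out                                  # seventh pixels hold the maximum
```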
8. The slope detection method according to any one of claims 1 to 4, wherein after determining the slope region in the first grayscale image based on the grayscale value of the key pixel and the grayscale value of the second pixel, the method further comprises:
calculating a second difference value by subtracting the gray value of the first pixel point in the i-th row and j-th column of the first gray-scale image from the gray value of the first pixel point in the (i-1)-th row and j-th column, wherein the first pixel point in the i-th row and j-th column is a first pixel point in the slope region;
if the second difference value is greater than a preset value, determining that the slope type of the first pixel point in the i-th row and j-th column relative to the first pixel point in the (i-1)-th row and j-th column is downhill;
if the second difference value is less than the preset value, determining that the slope type of the first pixel point in the i-th row and j-th column relative to the first pixel point in the (i-1)-th row and j-th column is uphill;
and if the second difference value is equal to the preset value, determining that the slope type of the first pixel point in the i-th row and j-th column relative to the first pixel point in the (i-1)-th row and j-th column is parallel.
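Claim 8 is a direct row-wise comparison; a minimal sketch, where preset=0 and the string labels are assumptions:

```python
import numpy as np

def slope_type(gray: np.ndarray, region: np.ndarray, preset: int = 0):
    """Classify each in-region pixel against the pixel one row above it."""
    g = gray.astype(np.int64)
    out = np.full(gray.shape, "", dtype=object)
    rows, cols = np.nonzero(region)
    for i, j in zip(rows, cols):
        if i == 0:
            continue                       # no row above to compare with
        d2 = g[i - 1, j] - g[i, j]         # second difference value
        out[i, j] = ("downhill" if d2 > preset
                     else "uphill" if d2 < preset
                     else "parallel")
    return out
```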
9. A robot comprising a depth camera, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the slope detection method according to any one of claims 1 to 8.
10. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, carries out the slope detection method according to any one of claims 1 to 8.
CN202210600378.XA 2022-05-30 2022-05-30 Slope detection method, robot and storage medium Pending CN114882343A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210600378.XA CN114882343A (en) 2022-05-30 2022-05-30 Slope detection method, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210600378.XA CN114882343A (en) 2022-05-30 2022-05-30 Slope detection method, robot and storage medium

Publications (1)

Publication Number Publication Date
CN114882343A true CN114882343A (en) 2022-08-09

Family

ID=82679544

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210600378.XA Pending CN114882343A (en) 2022-05-30 2022-05-30 Slope detection method, robot and storage medium

Country Status (1)

Country Link
CN (1) CN114882343A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160140769A1 (en) * 2014-11-17 2016-05-19 Qualcomm Incorporated Edge-aware volumetric depth map fusion
CN107220964A * 2017-05-03 2017-09-29 长安大学 Linear feature extraction method for geological body stability assessment
CN110136193A * 2019-05-08 2019-08-16 广东嘉腾机器人自动化有限公司 Cuboid cabinet three-dimensional dimension measurement method based on depth image, and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160140769A1 (en) * 2014-11-17 2016-05-19 Qualcomm Incorporated Edge-aware volumetric depth map fusion
CN107220964A * 2017-05-03 2017-09-29 长安大学 Linear feature extraction method for geological body stability assessment
CN110136193A * 2019-05-08 2019-08-16 广东嘉腾机器人自动化有限公司 Cuboid cabinet three-dimensional dimension measurement method based on depth image, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KUANQI CAI et al.: "A Vision-Based Road Surface Slope Estimation Algorithm for Mobile Service Robots in Indoor Environments", Proceedings of the IEEE International Conference on Information and Automation, 26 August 2019 (2019-08-26), pages 621-626 *
XIN Jing et al.: "Slope detection algorithm for mobile robot single-frame images based on transfer learning", CAAI Transactions on Intelligent Systems, vol. 16, no. 1, 3 February 2021 (2021-02-03), pages 81-91 *

Similar Documents

Publication Publication Date Title
CN112287860B (en) Training method and device of object recognition model, and object recognition method and system
CN110378297B (en) Remote sensing image target detection method and device based on deep learning and storage medium
CN110389341B (en) Charging pile identification method and device, robot and computer readable storage medium
CN113874927A (en) Parking detection method, system, processing device and storage medium
CN112733812A (en) Three-dimensional lane line detection method, device and storage medium
CN114581887B (en) Method, device, equipment and computer readable storage medium for detecting lane line
CN112927309B (en) Vehicle-mounted camera calibration method and device, vehicle-mounted camera and storage medium
CN112198878B (en) Instant map construction method and device, robot and storage medium
CN114445404A (en) Automatic structural vibration response identification method and system based on sub-pixel edge detection
CN112395962A (en) Data augmentation method and device, and object identification method and system
CN104915642A (en) Method and apparatus for measurement of distance to vehicle ahead
CN108197531B (en) Road curve detection method, device and terminal
CN114219770A (en) Ground detection method, ground detection device, electronic equipment and storage medium
CN114332802A (en) Road surface flatness semantic segmentation method and system based on binocular camera
WO2023284358A1 (en) Camera calibration method and apparatus, electronic device, and storage medium
CN110673607A (en) Feature point extraction method and device in dynamic scene and terminal equipment
CN116863124B (en) Vehicle attitude determination method, controller and storage medium
CN113741446A (en) Robot autonomous exploration method, terminal equipment and storage medium
CN112150522A (en) Remote sensing image registration method, device, equipment, storage medium and system
CN116358528A (en) Map updating method, map updating device, self-mobile device and storage medium
CN114882343A (en) Slope detection method, robot and storage medium
CN117095408A (en) Water level identification method and system in outdoor complex scene
CN116642490A (en) Visual positioning navigation method based on hybrid map, robot and storage medium
CN111223139B (en) Target positioning method and terminal equipment
CN115240150A (en) Lane departure warning method, system, device and medium based on monocular camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination