CN112036210B - Method and device for detecting obstacle, storage medium and mobile robot


Info

Publication number
CN112036210B
Authority
CN
China
Prior art keywords
area
region
shadow
obstacle
detection
Prior art date
Legal status
Active
Application number
CN201910476765.5A
Other languages
Chinese (zh)
Other versions
CN112036210A (en)
Inventor
李芃桦
何小嵩
Current Assignee
Hangzhou Hikrobot Co Ltd
Original Assignee
Hangzhou Hikrobot Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikrobot Co Ltd filed Critical Hangzhou Hikrobot Co Ltd
Priority to CN201910476765.5A
Priority to PCT/CN2020/092276 (WO2020244414A1)
Publication of CN112036210A
Application granted
Publication of CN112036210B
Legal status: Active


Classifications

    • G06V 10/24 Aligning, centring, orientation detection or correction of the image (image preprocessing)
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI] (image preprocessing)
    • G06V 20/10 Terrestrial scenes (scenes; scene-specific elements)
    • B25J 9/16 Programme controls (programme-controlled manipulators)
    • B25J 9/1676 Avoiding collision or forbidden zones (programme controls characterised by safety, monitoring, diagnostic)
    • B25J 9/1697 Vision controlled systems (perception control, sensor fusion)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method, an apparatus, a storage medium, and a mobile robot for detecting obstacles. The method comprises: acquiring a travel-area image captured by a downward-tilted camera mounted on an AGV; determining the lower part of the travel-area image as a reference region; determining the remaining part of the travel-area image above the reference region as a detection region; detecting an object region and a shadow region within the detection region; when both an object region and a shadow region are detected in the detection region, detecting whether the positional relationship between them matches the light projection direction; and, when it does, marking the object region as a suspected obstacle. By analyzing the acquired travel-area images, the method infers the position of three-dimensional obstacles efficiently, avoids computing over or training on large amounts of data, and achieves high computational efficiency.

Description

Method and device for detecting obstacle, storage medium and mobile robot
Technical Field
The present application relates to the field of robotics, and in particular, to a method and apparatus for detecting an obstacle, a storage medium, and a mobile robot.
Background
Autonomous navigation is a key marker of intelligence in mobile robots (Automated Guided Vehicles, AGVs), and obstacle avoidance is a central research topic within it. Obstacle avoidance generally means that, while traveling, a mobile robot uses its sensors to perceive static and dynamic objects that obstruct its passage and, based on the acquired obstacle state information, avoids them according to some method and finally reaches the target point. Mobile robots typically use sensors to obtain information about the surrounding environment, including the size, shape, and location of obstacles. Current mainstream obstacle-avoidance sensing schemes for mobile robots fall into three categories according to the sensors used. The first uses active sensors such as laser, radar, or ultrasound; the second uses passive sensors such as binocular or depth cameras; the third combines active and passive sensors, for example schemes that fuse cameras and radar. In addition, mobile robots can realize obstacle avoidance through artificial-intelligence methods such as genetic algorithms, neural networks, and fuzzy algorithms. By technique, the schemes divide mainly into those based on stereoscopic vision and those based on deep learning.
The above solutions have several drawbacks. First, cost is high: laser sensors are expensive, and binocular cameras generally require an expensive graphics processing unit (GPU) to meet real-time processing requirements. Second, the computational load is large: recovering three-dimensional information of the real world from images or point clouds entails a huge amount of computation. Finally, the required data volume is enormous: deep-learning methods, for example, need large amounts of manually labeled training data to train a neural network, which demands substantial manual effort.
Disclosure of Invention
An embodiment of the present application provides a method of detecting an obstacle that works by acquiring an image captured of the travel area and analyzing the relationship between shadow regions and object regions within it; the computation involved is simple and its amount small.
The method comprises the following steps:
acquiring a travel-area image captured by a downward-tilted camera mounted on the AGV;
determining the lower part of the ground area in the travel-area image as a reference region;
determining the remaining part of the travel-area image, located above the reference region, as a detection region;
detecting an object region and a shadow region in the detection region, where the object region is a region whose chromaticity difference from the reference region exceeds a first threshold, and the shadow region is a region whose chromaticity difference from the reference region is below the first threshold and whose luminance is lower than that of the reference region;
when both an object region and a shadow region are detected in the detection region, detecting whether the positional relationship between the object region and the shadow region matches the light projection direction;
when the positional relationship between the object region and the shadow region matches the light projection direction, marking the object region as a suspected obstacle.
Optionally, the object region marked as a suspected obstacle is filtered by a size condition, and the object region remaining after filtering is marked as an obstacle.
Optionally, the detection region is divided by columns into a plurality of sub-detection regions, where adjacent sub-detection regions share some pixels;
the reference region is divided by columns into a plurality of sub-reference regions, where adjacent sub-reference regions share some pixels;
each sub-detection region is compared in chromaticity with the corresponding sub-reference region in the same columns, and first partial regions whose chromaticity difference exceeds the first threshold are screened out of the sub-detection regions;
first partial regions with overlapping pixels are merged and the merged region is marked as one object region; and/or a first partial region with no pixel overlap is individually marked as one object region.
Optionally, second partial regions whose chromaticity difference is below the first threshold and whose luminance is below a second threshold are screened out of the sub-detection regions;
second partial regions with overlapping pixels are merged and the merged region is marked as one shadow region; and/or a second partial region with no pixel overlap is individually marked as one shadow region.
Optionally, first partial regions with overlapping pixels are merged, and a merged region whose area exceeds a third threshold is marked as one object region;
and/or a first partial region with no pixel overlap and an area exceeding the third threshold is individually marked as one object region.
Optionally, second partial regions with overlapping pixels are merged, and a merged region whose area exceeds a fourth threshold is marked as one shadow region;
and/or a second partial region with no pixel overlap and an area exceeding the fourth threshold is individually marked as one shadow region.
Optionally, the ratio of the area of the shadow region to the area of the object region of the suspected obstacle is calculated;
when the ratio exceeds a fifth threshold, the object region is marked as an obstacle.
Optionally, the coordinates of the left and right boundary projection points of the object region marked as an obstacle are determined in the reference region;
whether the AGV's travel track approaches the position range between the left and right boundary projection point coordinates is detected;
when the AGV's travel track is detected to pass through that range, an obstacle-avoidance plan is generated for the AGV that steers the travel track clear of the position range between the left and right boundary projection point coordinates.
Optionally, when the shadow region and the object region are not both present in the detection region, or only one of the two is found, it is determined that no obstacle exists in the detection region.
In another embodiment of the present invention, there is provided an apparatus for detecting an obstacle, wherein the apparatus includes:
an acquisition module, configured to acquire a travel-area image captured by a downward-tilted camera mounted on the AGV;
a first determining module, configured to determine the lower part of the travel-area image as a reference region;
a second determining module, configured to determine the remaining part of the travel-area image, located above the reference region, as a detection region;
a first detection module, configured to detect an object region and a shadow region in the detection region, where the object region is a region whose chromaticity difference from the reference region exceeds a first threshold, and the shadow region is a region whose chromaticity difference from the reference region is below the first threshold and whose luminance is lower than that of the reference region;
a second detection module, configured to detect, when both an object region and a shadow region are detected in the detection region, whether the positional relationship between the object region and the shadow region matches the light projection direction;
and a first marking module, configured to mark the object region as a suspected obstacle when the positional relationship between the object region and the shadow region matches the light projection direction.
In another embodiment of the present invention, there is provided a non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the steps of the above method of detecting an obstacle.
In another embodiment of the present invention, there is provided a mobile robot including a processor and a camera tilted downward at a preset angle at the front end of the mobile robot, the processor being configured to perform each step of the above method of detecting an obstacle.
Optionally, the preset angle is 8° to 12°.
As can be seen from the above, in this embodiment a travel-area image captured by a downward-tilted camera mounted on an AGV is first acquired; the lower part of the image is determined as the reference region, and the remaining part above it as the detection region. An object region and a shadow region are then detected within the detection region, the object region being a region whose chromaticity difference from the reference region exceeds a first threshold, and the shadow region a region whose chromaticity difference is below the first threshold and whose luminance is lower than that of the reference region. When both are present, it is checked whether the positional relationship between the object region and the shadow region matches the light projection direction; when it does, the object region is marked as a suspected obstacle. By identifying object and shadow regions in the acquired travel-area image and relating their positions to the light projection direction, the method locates suspected obstacles efficiently; the decision procedure is simple and computationally cheap.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be regarded as limiting its scope; other related drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 shows a flow chart of a method of detecting an obstacle provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of determining the reference region and the detection region according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a specific flow of a method of detecting an obstacle according to an embodiment of the present application;
FIG. 4 is a schematic diagram of monocular camera mounting and field of view provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of dividing sub-detection regions and sub-reference regions according to an embodiment of the present application;
FIG. 6a shows a travel-area image captured of a suspended object according to an embodiment of the present application;
FIG. 6b shows the positions of the AGV and the suspended object corresponding to FIG. 6a according to an embodiment of the present application;
FIG. 6c shows the merging of first partial regions in a travel-area image provided by an embodiment of the present application;
FIG. 7a shows another travel-area image captured of a suspended object according to an embodiment of the present application;
FIG. 7b shows the positions of the AGV and the suspended object corresponding to FIG. 7a according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an apparatus for detecting an obstacle according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions, and advantages of the present application more apparent, the application is described in further detail below with reference to the accompanying drawings and embodiments.
To address the problems in the prior art, the embodiment of the present application provides a method of detecting an obstacle that exploits a simple lighting fact: in an indoor environment with uniform illumination, an obstacle of a certain height casts a shadow of a certain area. A travel-area image of the mobile robot's travel area is acquired through the robot's camera, and a reference region and a detection region are selected from it. A shadow region and an object region are determined within the detection region by image analysis, and whether the object region is a suspected obstacle is judged from the positional relationship between the shadow region and the object region. Finally, the suspected obstacles so determined are screened by a size condition, and those that pass are confirmed as obstacles. The method detects obstacles at low cost and with high efficiency.
The application is aimed mainly at the robotics field, and its primary operating environments are uniformly illuminated indoor, factory, or warehouse settings. Referring to FIG. 1, the detailed steps are as follows:
s11, acquiring an image of a traveling area shot by a declining shooting device arranged on the AGV.
In this step, install on the mobile robot AGV and be equipped with camera device, camera of camera device decline towards ground direction. During the advancing of the mobile robot, the imaging device captures an image of the mobile robot in the advancing area. The traveling area generally refers to an area where the mobile robot is self-advancing, i.e., a possible area where the mobile robot is about to avoid an obstacle. The shooting range of the camera device is mainly the front travelling road surface in the process of moving the robot.
When the mobile robot advances in the traveling area, the image of the traveling area in the traveling area is acquired by a portable declining camera, such as a monocular camera, and the acquired image of the traveling area is stored. Wherein the travel area image is typically a single photograph.
S12, determining the lower part of the travel-area image as the reference region.
In this step, after the travel-area image is acquired, part of it is selected as a reference for the current ground; this selected part is called the reference region in this application. By default the captured travel-area image contains the ground, which generally occupies the lower part of the image, so in an optional implementation a region of set size at the bottom of the travel-area image can be selected as the reference region, as shown in FIG. 2.
S13, determining the remaining part of the travel-area image, located above the reference region, as the detection region.
In this step, once the reference region has been determined in step S12, the rest of the travel-area image can be taken as the detection region. As shown in FIG. 2, the detection region is the remaining part of the travel-area image located above the reference region.
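As an illustration only, this split can be written in a few lines. The sketch below assumes the travel-area image is an H×W×3 array and takes the bottom 20% of rows as the reference region; the fraction is an assumed tuning value, since the embodiment only requires a region of set size at the bottom of the image.

    import numpy as np

    def split_regions(img: np.ndarray, ref_fraction: float = 0.2):
        # The bottom strip serves as the current-ground reference region;
        # everything above it is the detection region. ref_fraction is an
        # assumed value, not prescribed by the embodiment.
        h = img.shape[0]
        ref_rows = max(1, int(h * ref_fraction))
        reference = img[h - ref_rows:, :]
        detection = img[:h - ref_rows, :]
        return reference, detection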
S14, detecting an object region and a shadow region in the detection region.
In this step, the detection region is searched for object regions and shadow regions that meet the conditions. An object region is a region whose chromaticity difference from the reference region exceeds a first threshold; a shadow region is a region whose chromaticity difference from the reference region is below the first threshold and whose luminance is lower than that of the reference region. Optionally, the detection region is examined for parts whose chromaticity differs strongly from the reference region: the color of the detection region is compared with that of the reference region, the chromaticity difference is computed from the color difference between the two, and the resulting value is compared with the first threshold. The specific value of the first threshold can be preset so as to distinguish the colors of the detection region from those of the reference region. After the comparison, parts of the detection region whose chromaticity difference exceeds the first threshold are marked as object regions, i.e., the parts that differ strongly in chromaticity from the reference region.
In addition, for the parts of the detection region whose chromaticity difference is below the first threshold, the luminance is compared with a second threshold, whose specific value can also be preset so as to separate the dark parts of the detection region. Parts whose luminance falls below the second threshold are marked as shadow regions, i.e., the parts whose color is close to that of the reference region but which are distinctly darker.
S15, when both an object region and a shadow region are detected in the detection region, detecting whether the positional relationship between them matches the light projection direction.
In this embodiment, since the AGV carries a camera tilted downward toward the ground, a travel-area image collected while the AGV advances may fall into one of the following cases: only one of a shadow region or an object region is captured; neither is captured; or both are captured.
When neither a shadow region nor an object region is captured in the travel-area image, it can generally be concluded that there is no obstacle within a certain range in front of the AGV; the image is judged obstacle-free, and the AGV can travel normally without avoidance.
When only a shadow region is captured, the corresponding situation may be that the object casting it is suspended, its height generally being greater than that of the AGV, or that the object is still some distance away and has not yet entered the camera's field of view. In this case the image is judged obstacle-free and the AGV can travel normally; alternatively, the AGV may temporarily reduce its speed, avoid in time if an obstacle is then detected within a set number of image frames, and resume normal speed if none is detected.
When only an object region is captured, the corresponding situation may be that the object is too low to cast a shadow region satisfying the obstacle condition, for example a planar object such as a sticker or a floor sign whose height poses no obstruction to the AGV. In this case the image is judged obstacle-free and the AGV can travel normally without avoidance.
When both an object region and a shadow region are detected in the travel-area image captured by the AGV, an obstacle may be present in the AGV's travel area; in this case step S16 can be performed to confirm the possibility by further examining the positional relationship between the captured object region and shadow region.
S16, when the positional relationship between the object region and the shadow region matches the light projection direction, marking the object region as a suspected obstacle.
The light projection direction generally refers to the positional relationship between an object and its shadow that satisfies natural law in a shadow phenomenon. Here, based on the shadow phenomenon in a uniformly lit indoor environment, the main check is whether the object region and the shadow region satisfy the corresponding positional relationship, i.e., whether a shadow region matching the light projection direction can be detected within a preset range around the object region, which amounts to judging whether the shadow region corresponds to the object region. The preset range is the range around the object region within which the shadow phenomenon satisfies natural law; for example, for a suspended object region, the corresponding shadow region should be detectable within a preset range below it.
In this step, when the positional relationship between the object region and the shadow region detected in the travel-area image matches the light projection direction, it can be concluded that the object region corresponds to the shadow region, and the object region is marked as a suspected obstacle.
In this embodiment, a travel-area image captured by the downward-tilted camera mounted on the mobile robot (AGV) is acquired, the reference region and the detection region are established in it, object and shadow regions are marked within the detection region through color differences, and a suspected obstacle is finally determined. By combining the real-world light-and-shadow phenomenon with the acquired travel-area image, the three-dimensional standing of an object is judged efficiently: the computation is reduced, efficiency is high, and cost is low. Because the method analyzes mainly the captured travel-area image itself, the computational load is small enough for real-time processing on platforms with weak computing power.
The method of detecting an obstacle in this embodiment analyzes the acquired picture mainly by exploiting the shadow an obstacle casts under illumination. FIG. 3 shows a schematic diagram of the specific flow of the method; the detailed process is as follows:
s301, a traveling area image is shot in a traveling area of the AGV through a declined image shooting device arranged on the AGV.
Here, the image pickup device mounted in front of the mobile robot may be a monocular camera. The camera of the monocular camera is tilted downward in the ground direction so that the photographed travel area image mainly includes the road surface in front. Optionally, as shown in fig. 4, a schematic diagram of installation and a field of view of the image capturing apparatus according to the embodiment of the present application is shown. The camera device can form an included angle of 10 degrees with the direction of the ground normal vector, the visual field range of the camera device can be 30 degrees, and a relatively good camera visual field is formed. Here, specific values of the angle of the monocular camera and the angle of the field of view range may be set manually, and may be adjusted to other specific values.
S302, determining the reference region and the detection region in the captured travel-area image, and dividing them into a number of sub-detection regions and sub-reference regions respectively.
Here, the lower part of the travel-area image is first determined as the reference region, and the remaining part above it as the detection region.
The detection region is then divided by columns into several sub-detection regions such that adjacent sub-detection regions share some pixels, and the reference region is likewise divided by columns into several sub-reference regions whose adjacent members share some pixels. Optionally, as shown in FIG. 5, the detection region can be divided column range by column range until the whole region is covered, yielding a set of sub-detection regions in which any two adjacent ones overlap. Because of this overlap, when each sub-detection region is later compared with the sub-reference region in the same columns, the overlapping parts of adjacent sub-detection regions are compared more than once, which makes the comparison result more reliable. The sub-reference regions are determined in the same way as the sub-detection regions.
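A minimal sketch of the column-wise split with overlap follows; the strip width and overlap are assumed values, since the embodiment only requires that adjacent sub-areas share some pixels. The same index list can be applied to the detection region and the reference region so that each sub-detection region is compared against the sub-reference region in the same columns.

    def column_strips(width: int, strip_w: int = 64, overlap: int = 16):
        # Return (start, end) column ranges; consecutive ranges share
        # `overlap` pixels. strip_w and overlap are illustrative values.
        step = strip_w - overlap
        starts = range(0, max(1, width - overlap), step)
        return [(s, min(s + strip_w, width)) for s in starts]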
S303, comparing the chromaticity of each sub-detection region with that of the corresponding sub-reference region in the same columns, screening out first partial regions whose chromaticity difference exceeds the first threshold, and screening out second partial regions whose chromaticity difference is below the first threshold and whose luminance is below the second threshold.
For ease of distinction and description, a region of a sub-detection region whose chromaticity difference exceeds the first threshold is called a first partial region in the embodiments of this application, and a region whose chromaticity difference is below the first threshold and whose luminance is below the second threshold is called a second partial region.
In this step, after the sub-detection and sub-reference regions have been obtained, the chromaticity difference between the color of each sub-detection region and that of the corresponding sub-reference region in the same columns is calculated. The colors may be represented in various formats, such as the HSV (hue, saturation, value) or YUV color spaces. In an optional embodiment, the chromaticity difference between a sub-detection region and the sub-reference region in the same columns can be obtained as a Euclidean distance: for example, the mean YUV value of the sub-reference region is computed first, then the Euclidean distance between the YUV value of each pixel of the sub-detection region and that mean is computed, and this distance is taken as the pixel's chromaticity difference from the sub-reference region.
After the chromaticity difference of a pixel has been computed, it is compared with the first threshold; if it is below the first threshold, the pixel's luminance is further compared with the second threshold. Based on the comparison, if the sub-detection region contains pixels whose chromaticity difference exceeds the first threshold, adjacent such pixels are connected to form a first partial region. Similarly, if it contains pixels whose chromaticity difference is below the first threshold and whose luminance is below the second threshold, adjacent such pixels are connected to form a second partial region.
The value of the second threshold is determined from the luminance of the corresponding sub-reference region, for example by subtracting a manually set value from it or multiplying it by a scaling factor smaller than 1.
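Under the YUV variant described above, the per-pixel screening of one sub-detection region might look like the sketch below. The first-threshold value, the scaling factor for the second threshold, and the float H×W×3 YUV array shapes are all assumptions.

    import numpy as np

    def classify_strip(det_yuv, ref_yuv, t_chroma=30.0, luma_scale=0.7):
        # YUV mean of the sub-reference region in the same columns.
        ref_mean = ref_yuv.reshape(-1, 3).mean(axis=0)
        # Euclidean distance of every sub-detection pixel to that mean.
        diff = np.linalg.norm(det_yuv - ref_mean, axis=-1)
        # Second threshold derived from the reference luminance (Y channel)
        # by a scaling factor smaller than 1, as described above.
        t_luma = luma_scale * ref_mean[0]
        first_partial = diff > t_chroma                  # object candidates
        second_partial = (diff <= t_chroma) & (det_yuv[..., 0] < t_luma)
        return first_partial, second_partial             # boolean masks

Connecting adjacent screened pixels into first and second partial regions then reduces to a connected-components pass, sketched after step S307 below.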
Combining the screening results of all sub-detection regions, four outcomes are possible: 1) no sub-detection region yields a first or a second partial region; 2) no sub-detection region yields a first partial region, but at least one yields a second partial region; 3) at least one sub-detection region yields a first partial region, but none yields a second partial region; 4) at least one sub-detection region yields a first partial region and at least one yields a second partial region.
S304, if no sub-detection region yields a first or a second partial region, determining that no obstacle exists, and ending the flow.
If none of the sub-detection regions of the travel-area image yields a first or a second partial region, the image contains neither an object region nor a shadow region; the corresponding real situation is likely that there is no obstacle within a certain range in front of the AGV. This corresponds to the obstacle-free cases listed under step S15, and the AGV can pass smoothly.
S305, if no sub-detection region yields a first partial region but at least one yields a second partial region, determining that no obstacle exists, and ending the flow.
If no sub-detection region yields a first partial region but at least one yields a second partial region, the travel-area image may contain a shadow region but no object region. The corresponding real situation may be that the object casting the shadow is suspended, or that it is still some distance from the AGV and has not yet entered the camera's field of view. This again corresponds to the obstacle-free cases listed under step S15, and the AGV can pass smoothly.
Regarding the case where only the shadow of a suspended object is detected: in an indoor or factory environment the illumination is sufficient and uniform, so a suspended or semi-suspended object in front of the robot casts a definite shadow below itself. When only a shadow region can be detected in the acquired travel-area image and no object region can be detected within the preset range of that shadow region, the suspended height of the object casting the shadow can be taken not to affect the AGV's normal travel. FIG. 6a shows such an image in which only a shadow region is detected; the actual positions of the AGV and the object may then be as in FIG. 6b. Optionally, as the robot keeps advancing, the shadow region occupies more and more of the camera's field of view; if the obstacle is suspended, the camera will never see the object casting the shadow before the AGV passes beneath it, so the object can be judged an invalid obstacle and the AGV can continue forward. Hence, when only a shadow region is detected in the detection region, it can be determined that no obstacle exists, and the AGV continues to travel forward.
S306, if at least one sub-detection region yields a first partial region but none yields a second partial region, determining that no obstacle exists, and ending the flow.
If no sub-detection region yields a second partial region but at least one yields a first partial region, the travel-area image may contain an object region but no shadow region. The corresponding real situation may be that the object is too low to cast a shadow region satisfying the obstacle condition, for example a planar object such as a sticker or a floor sign whose height poses no obstruction to the AGV. It can then be determined that no obstacle exists in the travel-area image, and the flow ends.
S307, if at least one sub-detection region yields a first partial region and at least one yields a second partial region, marking the screened first partial regions as object regions and the screened second partial regions as shadow regions.
In an optional implementation, object regions can be marked as follows. If only one sub-detection region yields first partial regions, each first partial region screened out of it can be marked as one object region. If several sub-detection regions yield first partial regions, those with overlapping pixels can be merged and each merged region marked as one object region, while a first partial region that shares no pixels with any other is individually marked as one object region.
For example, referring to FIG. 6c, first partial regions 1 and 2 are screened out of sub-detection region 1, first partial region 3 out of sub-detection region 2, and first partial region 4 out of sub-detection region 3. Since first partial regions 2 and 3 share pixels, as do regions 3 and 4, these three can be merged and marked as one object region, whereas first partial region 1, sharing no pixels with the others, is marked as an object region on its own.
Optionally, the area of each object region can be computed before or after marking, and regions of small area removed. A small object region in the travel-area image may correspond to a real object that is small and low and thus does not obstruct the AGV, or to one that is still far away and thus does not yet affect travel; removing such regions also reduces the subsequent processing load. One implementation is: merge the first partial regions with overlapping pixels and mark merged regions whose area exceeds a third threshold as object regions, and/or individually mark a first partial region with no pixel overlap and an area exceeding the third threshold as one object region. The third threshold thereby screens out the object regions that could actually obstruct the AGV's travel.
Correspondingly, shadow regions can be marked as follows: once the second partial regions (chromaticity difference below the first threshold and luminance below the second threshold) have been screened out of the sub-detection regions, those with overlapping pixels can be merged and each merged region marked as one shadow region, and/or a second partial region with no pixel overlap can be individually marked as one shadow region.
Optionally, areas can likewise be computed before or after marking to cull shadow regions that do not meet the condition: merge the second partial regions with overlapping pixels and mark merged regions whose area exceeds a fourth threshold as shadow regions, and/or individually mark a second partial region with no pixel overlap and an area exceeding the fourth threshold as one shadow region. For instance, when color detection flags a low object such as a sheet of paper, a floor marker, or a piece of cardboard as an object region, its very small height means its shadow region is tiny or absent; failing the fourth threshold filters it out, thereby discarding objects lower than the robot chassis that have no influence on the robot's travel.
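One way to realize both the pixel-overlap merging and the area filtering is to paint every strip's mask back onto a full-size canvas (overlapping pixels are simply OR-ed together) and then label connected components. The sketch below does this with scipy; the single min_area argument stands in for the third threshold (object regions) or the fourth threshold (shadow regions), and the threshold values and the use of bounding boxes are assumptions.

    import numpy as np
    from scipy import ndimage

    def merge_and_filter(strips, masks, height, width, min_area):
        # strips: list of (c0, c1) column ranges; masks: per-strip boolean
        # masks with the same height as the detection region.
        canvas = np.zeros((height, width), dtype=bool)
        for (c0, c1), m in zip(strips, masks):
            canvas[:, c0:c1] |= m            # overlapping partial regions merge here
        labels, n = ndimage.label(canvas)    # connect adjacent screened pixels
        regions = []
        for i in range(1, n + 1):
            ys, xs = np.nonzero(labels == i)
            if ys.size > min_area:           # third/fourth-threshold area filter
                regions.append((xs.min(), ys.min(), xs.max(), ys.max()))
        return regions                       # bounding boxes of marked regions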
S308, detecting whether the positional relationship between the marked object regions and shadow regions matches the light projection direction.
Here, for each object region, it is judged whether a corresponding shadow region exists, i.e., whether the positional relationship between the object region and a shadow region matches the light projection direction. In general, in a uniformly illuminated warehouse environment, the object region corresponding to a shadow region lies within a preset range around it; that is, their positional relationship obeys the natural light-and-shadow phenomenon. A real obstacle and its shadow may be connected in the travel-area image, or not connected but still present together within a space of the preset range. The specific size of the preset range is determined by factors such as the camera's shooting range and the robot's height, and can be taken as a range within the travel-area image captured on the AGV's travel surface by the downward-tilted camera.
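For the common case of roughly overhead, uniform indoor lighting, the positional test can be reduced to: the shadow region lies below the object region in image coordinates and overlaps it horizontally, within a preset range. The sketch below encodes that test on bounding boxes; the gap limit is an assumed preset range, and a different light projection direction would change the comparisons.

    def matches_light_direction(obj_box, shadow_box, max_gap=40):
        ox0, oy0, ox1, oy1 = obj_box
        sx0, sy0, sx1, sy1 = shadow_box
        # Shadow and object should share columns (horizontal overlap)...
        horizontal = ox0 <= sx1 and sx0 <= ox1
        # ...and the shadow's top edge should sit at, or within max_gap
        # pixels of, the object's bottom edge (y grows downward).
        vertical = -max_gap <= sy0 - oy1 <= max_gap
        return horizontal and vertical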
S309, when the positional relationship between an object region and a shadow region is detected to match the light projection direction, marking the object region as a suspected obstacle.
Here, when an object region matching the light projection direction is detected around a shadow region, that object region is marked as a suspected obstacle. If a suspended object could affect the AGV's normal travel, both the shadow region and the object region will be detectable in the captured travel-area image, as in FIG. 7a, where the camera detects the shadow region and its corresponding object region; the actual positions of the AGV and the obstacle may then be as in FIG. 7b. The object region shown in FIG. 7a is accordingly marked as a suspected obstacle.
S310, filtering the object regions marked as suspected obstacles by the size condition, and marking the object regions remaining after filtering as obstacles.
Here, the size-condition filtering of a suspected obstacle mainly computes the area ratio between the suspected obstacle and its corresponding shadow region to decide whether it meets the size condition under which it could obstruct the AGV. Exploiting the fact that an object of a certain height casts a shadow of a certain area, the ratio of the shadow region's area to the object region's area is computed, and when the ratio exceeds a fifth threshold, the object region corresponding to the suspected obstacle is marked as an obstacle. The fifth threshold is likewise set manually. This filtering exists because, after object regions are marked, planar objects such as stickers or floor markers must not be falsely detected as obstacles; objects below the robot chassis that do not affect travel have to be filtered out via height information about the object region. To avoid the heavy computation of full three-dimensional obstacle information, detecting the obstacle's shadow substitutes for computing that information, based on the phenomenon that objects with height carry shadows of a certain size. Consequently, a corresponding shadow region must be detectable for an object region to be marked as a suspected obstacle.
If an obstacle is semi-suspended, or still some distance from the robot, the camera will see the object region corresponding to the shadow region ahead as the AGV advances, but the ratio of the shadow region's area to the suspected obstacle's object-region area is still below the set fifth threshold, and the robot can continue forward for a while. For an obstacle that would actually impede the robot, this ratio in the camera's view exceeds the fifth threshold within a certain distance; at that point, an obstacle affecting passage is considered detected ahead, the corresponding object region is determined to be an obstacle, and the robot starts the corresponding avoidance strategy.
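As a sketch, the filter itself is one comparison; the 0.3 value standing in for the fifth threshold is purely an assumption, and the areas could be pixel counts or bounding-box areas.

    def is_obstacle(object_area: float, shadow_area: float,
                    t_ratio: float = 0.3) -> bool:
        # Confirm a suspected obstacle only when its shadow is large
        # enough relative to the object region (fifth threshold).
        return object_area > 0 and (shadow_area / object_area) > t_ratio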
S311, detecting whether the obstacle affects the AGV's travel, and performing obstacle avoidance for the AGV.
Here, after an actual obstacle in the mobile robot's travel area has been determined through the preceding steps, the coordinates of the left and right boundary projection points of the object region marked as an obstacle are determined in the reference region; whether the AGV's travel track passes through the position range between the left and right boundary projection point coordinates is detected; and when it is detected that the track does pass through that range, an obstacle-avoidance plan is generated for the AGV that steers the travel track clear of the position range between the left and right boundary projection point coordinates.
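Assuming the planned track is available as a sequence of x coordinates projected onto the reference (ground) row, the trigger for replanning can be sketched as below; the scalar projection and the helper name are assumptions for illustration.

    def must_avoid(track_xs, left_x: float, right_x: float) -> bool:
        # True when any track point falls between the obstacle's left and
        # right boundary projection points; the planner should then
        # regenerate a track that stays outside [left_x, right_x].
        return any(left_x <= x <= right_x for x in track_xs)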
Through the above steps, the real-world shadow phenomenon lets a single picture be used to judge an object's three-dimensional standing efficiently, without computing that information explicitly, thereby achieving obstacle avoidance for the mobile robot.
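Tying the sketches above together, a hypothetical end-to-end pass over one travel-area image could look as follows. Every threshold and helper name comes from the earlier sketches and is an assumption, not a value prescribed by the embodiment.

    def detect_obstacles(img_yuv):
        reference, detection = split_regions(img_yuv)          # S302
        strips = column_strips(detection.shape[1])
        obj_masks, sh_masks = [], []
        for c0, c1 in strips:                                   # S303
            om, sm = classify_strip(detection[:, c0:c1], reference[:, c0:c1])
            obj_masks.append(om)
            sh_masks.append(sm)
        h, w = detection.shape[:2]
        objects = merge_and_filter(strips, obj_masks, h, w, min_area=200)
        shadows = merge_and_filter(strips, sh_masks, h, w, min_area=100)
        obstacles = []
        for ob in objects:                                      # S308 to S310
            sh = next((s for s in shadows
                       if matches_light_direction(ob, s)), None)
            if sh is None:
                continue                    # no matching shadow: not an obstacle
            ob_area = (ob[2] - ob[0] + 1) * (ob[3] - ob[1] + 1)
            sh_area = (sh[2] - sh[0] + 1) * (sh[3] - sh[1] + 1)
            if is_obstacle(ob_area, sh_area):
                obstacles.append(ob)        # confirmed obstacle bounding box
        return obstacles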
Based on the same inventive concept, an embodiment of the present application further provides an apparatus for detecting an obstacle, where, as shown in FIG. 8, the apparatus includes:
an acquisition module 801, configured to acquire a travel-area image captured by a downward-tilted camera mounted on the AGV;
a first determining module 802 for determining a lower part in the traveling region image as a reference region;
a second determining module 803 for determining the remaining portion of the travel region image located above the reference region as a detection region;
a first detection module 804, configured to detect an object region and a shadow region in a detection region, where the object region is a region whose chromaticity difference from the reference region exceeds a first threshold, and the shadow region is a region whose chromaticity difference from the reference region is less than the first threshold and whose luminance is less than the reference region;
a second detection module 805 for detecting whether or not a positional relationship between the object region and the shadow region matches a light projection direction when it is detected that the object region and the shadow region exist simultaneously in the detection region;
and a first marking module 806, configured to mark the object region as a suspected obstacle when the positional relationship between the object region and the shadow region matches the light projection direction.
In this embodiment, specific functions and interaction manners of the acquiring module 801, the first determining module 802, the second determining module 803, the first detecting module 804, the second detecting module 805, and the first marking module 806 may be referred to the description of the corresponding embodiment of fig. 1, and are not repeated herein.
Optionally, the apparatus further comprises:
a second marking module 807 configured to perform size condition filtering on the object region marked as the suspected obstacle, and mark the object region remaining after filtering as an obstacle.
Optionally, the first detection module 804 includes:
the first dividing unit is used for dividing the detection area into a plurality of sub-detection areas according to columns, wherein partial pixels of adjacent sub-detection areas are overlapped;
the second dividing unit is used for dividing the reference area into a plurality of sub-reference areas according to columns, wherein partial pixels of adjacent sub-reference areas coincide;
a first comparing unit, configured to compare the chromaticity difference between each sub-detection area and the corresponding sub-reference area in the same column, and screen out a first partial area where the chromaticity difference in the sub-detection area is greater than the first threshold;
a first marking unit, configured to merge the first partial regions with overlapping pixels and mark the merged region as one object region; and/or to individually mark a first partial region with no pixel overlap as one object region.
Optionally, the first detection module 804 further includes:
the screening unit is used for screening out a second partial area with the chromaticity difference smaller than the first threshold value and the brightness smaller than the second threshold value in the sub-detection area;
a second marking unit, configured to combine the second partial areas where the pixels overlap, and mark the combined areas as a shadow area; and/or, the second partial region where no pixel overlap exists is individually marked as a shadow region.
Optionally, the first marking unit includes:
a first marking subunit, configured to combine the first partial areas where the pixels overlap, and mark the combined areas with areas greater than a third threshold as an object area;
and the second marking subunit is used for singly marking the first partial area which does not have pixel coincidence and has the area larger than the third threshold value as an object area.
Optionally, the second marking unit includes:
a third marking subunit, configured to merge the second partial regions with overlapping pixels and mark merged regions whose area exceeds the fourth threshold as shadow regions;
and a fourth marking subunit, configured to mark, as a shadow area, a second partial area where no pixel overlap exists and the area is greater than the fourth threshold value.
Optionally, the first marking module 806 includes:
a first calculation unit configured to calculate a ratio of an area of the shadow region to an area of the object region of the suspected obstacle;
and a third marking unit configured to mark the object area as the obstacle when the ratio is greater than a fifth threshold.
Optionally, the apparatus further comprises:
a third determining module 808 for determining left and right boundary projection point coordinates of the object region marked as the obstacle at the reference region;
a third detection module 809, configured to detect whether the travel track of the AGV approaches a position range between the left boundary projection point coordinate and the right boundary projection point coordinate;
and a planning module 810, configured to generate, when the AGV's travel track is detected to pass through the position range between the left and right boundary projection point coordinates, an obstacle-avoidance plan for the AGV that steers the travel track clear of that range.
Optionally, the apparatus further comprises:
a fourth determining module 811, configured to determine that there is no obstacle in the detection area when neither the shadow area nor the object area is found in the detection area, or when only one of the two is found.
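One step this excerpt references but does not detail is the test that the object/shadow positional relationship "matches the light projection direction" (claim 1). A plausible sketch, assuming a known 2-D light direction in image coordinates and an angular tolerance (both assumptions, not part of the patent), is:

    import math

    def matches_light_direction(obj_centroid, shadow_centroid,
                                light_dir=(1.0, 0.5), max_angle_deg=30.0):
        """Accept the object/shadow pair when the vector from the object
        centroid to the shadow centroid points roughly along the assumed
        light projection direction (all in (x, y) image coordinates)."""
        vx = shadow_centroid[0] - obj_centroid[0]
        vy = shadow_centroid[1] - obj_centroid[1]
        norm_v, norm_l = math.hypot(vx, vy), math.hypot(*light_dir)
        if norm_v == 0.0:
            return False
        cosang = (vx * light_dir[0] + vy * light_dir[1]) / (norm_v * norm_l)
        return math.degrees(math.acos(max(-1.0, min(1.0, cosang)))) <= max_angle_deg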
Specifically, the storage medium may be a general-purpose storage medium such as a removable disk, a hard disk, or flash memory. When the computer program stored on it is executed, the above method of detecting an obstacle is performed, so that three-dimensional information about obstacles can be judged efficiently from a captured image of the traveling area, which reduces the amount of computation.
Still another embodiment of the present application provides a mobile robot comprising a processor and an image pickup device. The image pickup device is mounted at the front end of the mobile robot and declined (tilted downward) at a preset angle, and the processor is configured to perform the steps of the method of detecting an obstacle. The preset declination angle of the image pickup device is 8 to 12 degrees.
Finally, it should be noted that the foregoing examples are merely specific embodiments of the present application, intended to illustrate rather than limit its technical solutions, and the present application is not limited to them. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that the technical solutions described therein may still be modified, or some of their technical features may be replaced by equivalents, within the technical scope disclosed herein; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and are intended to fall within its scope of protection. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A method of detecting an obstacle, comprising:
acquiring a traveling area image shot by a declined shooting device arranged on an AGV;
determining a lower portion of the traveling area image as a reference area;
determining the remaining portion of the traveling area image, located above the reference area, as a detection area;
detecting an object area and a shadow area in the detection area, wherein the object area is an area whose chromaticity difference from the reference area exceeds a first threshold, and the shadow area is an area whose chromaticity difference from the reference area is smaller than the first threshold and whose luminance is lower than that of the reference area;
detecting, when it is detected that the object area and the shadow area exist simultaneously in the detection area, whether a positional relationship between the object area and the shadow area matches a light projection direction;
marking the object area as a suspected obstacle when the positional relationship between the object area and the shadow area matches the light projection direction.
2. The method of claim 1, wherein after the step of marking the object area as a suspected obstacle, the method further comprises:
performing size condition filtering on the object area marked as a suspected obstacle, and marking the object area remaining after filtering as an obstacle.
3. The method according to claim 1, wherein the object area is marked by the following steps:
dividing the detection area column-wise into a plurality of sub-detection areas, wherein adjacent sub-detection areas partially overlap in pixels;
dividing the reference area column-wise into a plurality of sub-reference areas, wherein adjacent sub-reference areas partially overlap in pixels;
comparing the chromaticity difference between each sub-detection area and the corresponding sub-reference area in the same column, and screening out first partial areas of the sub-detection areas whose chromaticity difference is greater than the first threshold;
merging first partial areas whose pixels overlap and marking each merged area as one object area; and/or individually marking a first partial area with no pixel overlap as one object area.
4. The method according to claim 3, wherein the shadow area is marked by the following steps:
screening out second partial areas of the sub-detection areas whose chromaticity difference is smaller than the first threshold and whose luminance is smaller than a second threshold;
merging second partial areas whose pixels overlap and marking each merged area as one shadow area; and/or individually marking a second partial area with no pixel overlap as one shadow area.
5. The method according to claim 3, wherein the step of merging first partial areas whose pixels overlap and marking each merged area as one object area, and/or individually marking a first partial area with no pixel overlap as one object area, comprises:
merging first partial areas whose pixels overlap, and marking a merged area whose area is greater than a third threshold as one object area;
and/or individually marking a first partial area which has no pixel overlap and whose area is greater than the third threshold as one object area.
6. The method according to claim 4, wherein the step of merging second partial areas whose pixels overlap and marking each merged area as one shadow area, and/or individually marking a second partial area with no pixel overlap as one shadow area, comprises:
merging second partial areas whose pixels overlap, and marking a merged area whose area is greater than a fourth threshold as one shadow area;
and/or individually marking a second partial area which has no pixel overlap and whose area is greater than the fourth threshold as one shadow area.
7. The method of claim 2, wherein the step of performing size condition filtering on the object area marked as a suspected obstacle and marking the object area remaining after filtering as an obstacle comprises:
calculating the ratio of the area of the shadow area to the area of the object area of the suspected obstacle;
marking the object area as the obstacle when the ratio is greater than a fifth threshold.
8. The method of claim 2, wherein after the step of marking the object area remaining after filtering as an obstacle, the method further comprises:
determining left and right boundary projection point coordinates of the object area marked as the obstacle in the reference area;
detecting whether the traveling track of the AGV passes through the position range between the left boundary projection point coordinate and the right boundary projection point coordinate;
when it is detected that the traveling track of the AGV passes through the position range between the left boundary projection point coordinate and the right boundary projection point coordinate, generating a planned obstacle avoidance strategy for the AGV, the planned obstacle avoidance strategy causing the traveling track to avoid the position range between the left boundary projection point coordinate and the right boundary projection point coordinate.
9. The method according to claim 1, wherein between the step of detecting an object area and a shadow area in the detection area and the step of detecting whether the positional relationship between the object area and the shadow area matches a light projection direction, the method further comprises:
determining that there is no obstacle in the detection area when neither the shadow area nor the object area is found in the detection area, or when only one of the two is found.
10. An apparatus for detecting an obstacle, comprising:
an acquisition module, configured to acquire a traveling area image shot by a declined shooting device arranged on an AGV;
a first determining module, configured to determine a lower portion of the traveling area image as a reference area;
a second determining module, configured to determine the remaining portion of the traveling area image, located above the reference area, as a detection area;
a first detection module, configured to detect an object area and a shadow area in the detection area, wherein the object area is an area whose chromaticity difference from the reference area exceeds a first threshold, and the shadow area is an area whose chromaticity difference from the reference area is smaller than the first threshold and whose luminance is lower than that of the reference area;
a second detection module, configured to detect, when it is detected that the object area and the shadow area exist simultaneously in the detection area, whether a positional relationship between the object area and the shadow area matches a light projection direction;
a first marking module, configured to mark the object area as a suspected obstacle when the positional relationship between the object area and the shadow area matches the light projection direction.
11. A non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the steps of the method of detecting an obstacle according to any one of claims 1 to 9.
12. A mobile robot, comprising a processor and an image pickup device, wherein the image pickup device is declined at a predetermined angle at a front end of the mobile robot, and the processor is configured to perform the steps of the method of detecting an obstacle according to any one of claims 1 to 9.
13. The mobile robot of claim 12, wherein the predetermined angle is 8° to 12°.
CN201910476765.5A 2019-06-03 2019-06-03 Method and device for detecting obstacle, storage medium and mobile robot Active CN112036210B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910476765.5A CN112036210B (en) 2019-06-03 2019-06-03 Method and device for detecting obstacle, storage medium and mobile robot
PCT/CN2020/092276 WO2020244414A1 (en) 2019-06-03 2020-05-26 Obstacle detection method, device, storage medium, and mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910476765.5A CN112036210B (en) 2019-06-03 2019-06-03 Method and device for detecting obstacle, storage medium and mobile robot

Publications (2)

Publication Number Publication Date
CN112036210A CN112036210A (en) 2020-12-04
CN112036210B true CN112036210B (en) 2024-03-08

Family

ID=73576676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910476765.5A Active CN112036210B (en) 2019-06-03 2019-06-03 Method and device for detecting obstacle, storage medium and mobile robot

Country Status (2)

Country Link
CN (1) CN112036210B (en)
WO (1) WO2020244414A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114677318A (en) * 2020-12-24 2022-06-28 苏州科瓴精密机械科技有限公司 Obstacle identification method, device, equipment, medium and weeding robot
CN114751151B (en) * 2021-01-12 2024-03-26 贵州中烟工业有限责任公司 Calculation method of detection device installation area and storage medium
CN113624249B (en) * 2021-08-26 2024-04-12 北京京东乾石科技有限公司 Lock point operation execution method, device, electronic equipment and computer readable medium
WO2023113799A1 (en) * 2021-12-16 2023-06-22 Hewlett-Packard Development Company, L.P. Surface marking robots and obstacles
CN117148811B (en) * 2023-11-01 2024-01-16 宁波舜宇贝尔机器人有限公司 AGV trolley carrying control method and system, intelligent terminal and lifting mechanism
CN117496359B (en) * 2023-12-29 2024-03-22 浙江大学山东(临沂)现代农业研究院 Plant planting layout monitoring method and system based on three-dimensional point cloud

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413308A (en) * 2013-08-01 2013-11-27 东软集团股份有限公司 Obstacle detection method and device
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Power robot based binocular vision navigation system and method based on
CN106228110A (en) * 2016-07-07 2016-12-14 浙江零跑科技有限公司 A kind of barrier based on vehicle-mounted binocular camera and drivable region detection method
CN106650701A (en) * 2017-01-05 2017-05-10 华南理工大学 Binocular vision-based method and apparatus for detecting barrier in indoor shadow environment
CN106997721A (en) * 2017-04-17 2017-08-01 深圳奥比中光科技有限公司 Draw method, device and the storage device of 2D maps
CN108416306A (en) * 2018-03-12 2018-08-17 海信集团有限公司 Continuous type obstacle detection method, device, equipment and storage medium
CN108680157A (en) * 2018-03-12 2018-10-19 海信集团有限公司 A kind of planing method, device and the terminal in detection of obstacles region
CN109141364A (en) * 2018-08-01 2019-01-04 北京进化者机器人科技有限公司 Obstacle detection method, system and robot

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1782668A (en) * 2004-12-03 2006-06-07 曾俊元 Method and device for preventing collison by video obstacle sensing
CN104574365B (en) * 2014-12-18 2018-09-07 中国科学院计算技术研究所 Obstacle detector and method
US10499039B2 (en) * 2016-12-15 2019-12-03 Egismos Technology Corporation Path detection system and path detection method generating laser pattern by diffractive optical element
CN108596012B (en) * 2018-01-19 2022-07-15 海信集团有限公司 Barrier frame combining method, device and terminal

Also Published As

Publication number Publication date
CN112036210A (en) 2020-12-04
WO2020244414A1 (en) 2020-12-10

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310051 Room 304, B/F, Building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Hikvision Robot Co.,Ltd.

Address before: 310052 5/F, Building 1, Building 2, No. 700 Dongliu Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: HANGZHOU HIKROBOT TECHNOLOGY Co.,Ltd.

GR01 Patent grant