CN110941973A - Obstacle detection method and device, vehicle-mounted equipment and storage medium

Obstacle detection method and device, vehicle-mounted equipment and storage medium

Info

Publication number
CN110941973A
CN110941973A
Authority
CN
China
Prior art keywords
information
edge
obstacle
dimensional image
image information
Prior art date
Legal status
Granted
Application number
CN201811108785.9A
Other languages
Chinese (zh)
Other versions
CN110941973B (en)
Inventor
张宇
林伟
刘晓彤
王勃
孔凡君
冯威
雷坤宇
石磊
李国靖
刘静仁
张新平
尚坚强
刘淳
Current Assignee
Uisee Technologies Beijing Co Ltd
Original Assignee
Uisee Technologies Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Uisee Technologies Beijing Co Ltd filed Critical Uisee Technologies Beijing Co Ltd
Priority to CN201811108785.9A priority Critical patent/CN110941973B/en
Publication of CN110941973A publication Critical patent/CN110941973A/en
Application granted granted Critical
Publication of CN110941973B publication Critical patent/CN110941973B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/64 - Three-dimensional objects
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the invention relates to an obstacle detection method and device, a vehicle-mounted device, and a storage medium, wherein the method comprises the following steps: acquiring image information acquired by an image sensor; determining edge information of an obstacle in the image information; performing redundancy processing on the edge information to obtain redundant information; and determining the edge information and the redundant information as the obstacle information. Considering that the edge of an obstacle is consistent between the two-dimensional image and the three-dimensional image, the embodiment of the invention determines the edge information of the obstacle in the image information, performs redundancy processing on the edge information, and takes the edge information with the added redundant information as the obstacle information. The obstacle information can thus contain spatial three-dimensional information and provides a basis for planning an obstacle avoidance strategy, thereby improving the accuracy of the planned obstacle avoidance strategy.

Description

Obstacle detection method and device, vehicle-mounted equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of machine vision, in particular to a method and a device for detecting obstacles, vehicle-mounted equipment and a storage medium.
Background
With the rapid development of automatic driving technology, the requirements on sensing the environment surrounding the vehicle are increasingly high. The perception sensor mainly used in current automatic driving of vehicles is the camera. However, the image captured by a camera is a two-dimensional image and contains no spatial three-dimensional information.
Therefore, even when an obstacle is detected based on the image captured by the camera, the spatial three-dimensional information of the obstacle cannot be determined, and the accuracy of an obstacle avoidance strategy planned based on the detected obstacle needs to be improved. It is therefore desirable to provide an obstacle detection method that can determine the spatial three-dimensional information of an obstacle and improve the accuracy of the planned obstacle avoidance strategy.
Disclosure of Invention
In order to solve the problems in the prior art, at least one embodiment of the present invention provides an obstacle detection method, an obstacle detection apparatus, a vehicle-mounted device, and a storage medium.
In a first aspect, an embodiment of the present invention provides an obstacle detection method, where the method includes:
acquiring image information acquired by an image sensor;
determining edge information of an obstacle in the image information;
carrying out redundancy processing on the edge information to obtain redundant information;
and determining the edge information and the redundant information as obstacle information.
In some embodiments, the image information collected by the image sensor is two-dimensional image information;
accordingly, the determining edge information of the obstacle in the image information includes:
determining three-dimensional image information of the two-dimensional image information in a vehicle coordinate system;
and determining the edge information of the obstacle in the three-dimensional image information.
In some embodiments, the determining three-dimensional image information of the two-dimensional image information in a vehicle coordinate system includes:
detecting the two-dimensional image information based on a preset edge detection strategy to obtain edge detection information of an obstacle in the two-dimensional image;
and determining three-dimensional image information of the edge detection information in a vehicle coordinate system.
In some embodiments, the method further comprises:
acquiring view field information and pose information of the image sensor;
accordingly, the determining the edge information of the obstacle in the three-dimensional image information comprises:
determining an initial scanning position and a scanning angle of a scanning ray based on the view field information and the pose information;
and scanning the three-dimensional image information through the scanning ray based on the initial scanning position and the scanning angle to obtain the edge information of the obstacle in the three-dimensional image information.
In some embodiments, the performing redundancy processing on the edge information to obtain redundant information includes:
and performing pixel adding processing on the edge information to obtain redundant information.
In some embodiments, the performing pixel addition processing on the edge information to obtain redundant information includes:
determining pixels included in the edge based on the edge information;
adding pixels to the pixels included in the edge along the extension direction of the scanning ray;
adding pixels laterally at the pixels located at both ends of the edge;
and determining the pixels added along the extension direction and the pixels added laterally as the redundant information.
In a second aspect, an embodiment of the present invention further provides an obstacle detection apparatus, where the apparatus includes:
the first acquisition unit is used for acquiring image information acquired by the image sensor;
a first determination unit configured to determine edge information of an obstacle in the image information;
the processing unit is used for carrying out redundancy processing on the edge information to obtain redundant information;
a second determining unit configured to determine that the edge information and the redundant information are obstacle information.
In some embodiments, the image information collected by the image sensor is two-dimensional image information;
accordingly, the first determination unit includes:
the first subunit is used for determining three-dimensional image information of the two-dimensional image information in a vehicle coordinate system;
and the second subunit is used for determining the edge information of the obstacle in the three-dimensional image information.
In some embodiments, the first subunit is configured to:
detecting the two-dimensional image information based on a preset edge detection strategy to obtain edge detection information of an obstacle in the two-dimensional image;
and determining three-dimensional image information of the edge detection information in a vehicle coordinate system.
In some embodiments, the apparatus further comprises:
a third acquisition unit that acquires view field information and pose information of the image sensor;
accordingly, the second subunit is configured to:
determining an initial scanning position and a scanning angle of a scanning ray based on the view field information and the pose information;
and scanning the three-dimensional image information through the scanning ray based on the initial scanning position and the scanning angle to obtain the edge information of the obstacle in the three-dimensional image information.
In some embodiments, the processing unit is configured to perform pixel addition processing on the edge information to obtain redundant information.
In some embodiments, the processing unit is configured to:
determining pixels included in the edge based on the edge information;
adding pixels to the pixels included in the edge along the extension direction of the scanning ray;
adding pixels laterally at the pixels located at both ends of the edge;
and determining the pixels added along the extension direction and the pixels added laterally as the redundant information.
In a third aspect, an embodiment of the present invention further provides an on-board device, including:
a processor, memory, a network interface, and a user interface;
the processor, memory, network interface and user interface are coupled together by a bus system;
the processor is adapted to perform the steps of the method according to the first aspect by calling a program or instructions stored by the memory.
In a fourth aspect, an embodiment of the present invention also provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the steps of the method according to the first aspect.
Therefore, in at least one embodiment of the present invention, the edge information of an obstacle in the image information is determined, redundancy processing is performed on the edge information, and the edge information with the added redundant information is used as the obstacle information. The obstacle information can thus contain spatial three-dimensional information and provides a basis for planning an obstacle avoidance strategy, thereby improving the accuracy of the planned obstacle avoidance strategy.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
Fig. 1 is a schematic structural diagram of an on-board device according to an embodiment of the present invention;
fig. 2 is a flowchart of an obstacle detection method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an image acquired by an image sensor according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an embodiment of the present invention, in which an image captured by an image sensor is converted into a three-dimensional image in a vehicle coordinate system;
FIG. 5 is a schematic diagram of determining an intersection between an obstacle and the ground according to an embodiment of the present invention;
fig. 6 is a block diagram of an obstacle detection device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Fig. 1 is a schematic structural diagram of an in-vehicle device provided in an embodiment of the present invention.
The in-vehicle apparatus shown in fig. 1 includes: at least one processor 101, at least one memory 102, at least one network interface 104, and other user interfaces 103. The various components in the in-vehicle device are coupled together by a bus system 105. It is understood that the bus system 105 is used to enable communications among the components. The bus system 105 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 105 in FIG. 1.
The user interface 103 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, a trackball, or a touch pad).
It will be appreciated that the memory 102 in this embodiment may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which functions as an external cache. By way of example, but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate synchronous DRAM (DDR SDRAM), enhanced synchronous DRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory 102 described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 102 stores elements, executable units or data structures, or a subset thereof, or an expanded set thereof as follows: an operating system 1021 and application programs 1022.
The operating system 1021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application 1022 includes various applications, such as a media player (MediaPlayer), a Browser (Browser), and the like, for implementing various application services. Programs that implement methods in accordance with embodiments of the invention can be included in application 1022.
In the embodiment of the present invention, the processor 101 is configured to execute the method steps provided by each obstacle detection method embodiment by calling a program or an instruction stored in the memory 102, specifically, a program or an instruction stored in the application 1022, and for example, the method steps include:
acquiring image information acquired by an image sensor; determining edge information of an obstacle in the image information; carrying out redundancy processing on the edge information to obtain redundant information; and determining the edge information and the redundant information as the obstacle information.
The method disclosed by the above embodiment of the present invention can be applied to the processor 101, or implemented by the processor 101. The processor 101 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 101. The processor 101 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software elements in the decoding processor. The software elements may be located in RAM, flash memory, ROM, PROM, EPROM, or registers, among other storage media that are well known in the art. The storage medium is located in the memory 102, and the processor 101 reads the information in the memory 102 and completes the steps of the method in combination with the hardware thereof.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented by means of units performing the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the execution sequence of the steps of the method embodiments can be arbitrarily adjusted unless there is an explicit precedence sequence. The disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or make a contribution to the prior art, or may be implemented in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
Fig. 2 is a flowchart of an obstacle detection method according to an embodiment of the present invention. The execution subject of the method is the vehicle-mounted equipment.
As shown in fig. 2, the obstacle detection method disclosed in the present embodiment may include the following steps 201 to 204:
201. Acquiring image information acquired by the image sensor.
202. Determining edge information of an obstacle in the image information.
203. Performing redundancy processing on the edge information to obtain redundant information.
204. Determining the edge information and the redundant information as the obstacle information.
In this embodiment, the image sensor may be a camera, and the camera may be a fisheye camera. The image sensor is mounted on the vehicle and is in communication connection with the in-vehicle device. A plurality of image sensors may also be mounted on the vehicle.
In this embodiment, an existing obstacle detection technique may be used to determine the obstacle in the image information, which is not described again here. After the obstacle in the image information is determined, a mask (MASK) map corresponding to the image information can be obtained.
In this embodiment, the MASK map is a map with a limited number of colors, each color representing one type of object; for example, black represents an obstacle, gray represents a road, and white represents a parking space.
In this embodiment, after obtaining the MASK map corresponding to the image information, the edge information of the obstacle may be determined from the MASK map.
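As a purely illustrative sketch of this step, the following Python snippet shows how the obstacle pixels, and from them candidate edge pixels, could be extracted from such a MASK map. The numeric class codes and the use of NumPy/SciPy are assumptions made for the example; the embodiment does not prescribe them.

```python
import numpy as np
from scipy.ndimage import binary_erosion

# Hypothetical class codes for the MASK map; the embodiment only states that
# each color stands for one object type (e.g., obstacle, road, parking space).
OBSTACLE, ROAD, PARKING = 0, 1, 2

mask = np.array([[ROAD, ROAD,     ROAD,     ROAD],
                 [ROAD, OBSTACLE, OBSTACLE, ROAD],
                 [ROAD, OBSTACLE, OBSTACLE, ROAD]])

obstacle = mask == OBSTACLE                       # binary map of obstacle pixels
boundary = obstacle & ~binary_erosion(obstacle)   # obstacle pixels that touch another class
edge_rows, edge_cols = np.nonzero(boundary)       # candidate edge pixels of the obstacle
```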
Fig. 3 is a schematic diagram of an image acquired by an image sensor according to an embodiment of the present invention. In fig. 3, the obstacle 3 is located within the lane 4, and reference numeral 5 denotes a lane line of the lane 4. Obstacle detection is performed on fig. 3 using an existing obstacle detection technique, and the obstacle 3 in fig. 3 can be determined. After determining the obstacle 3 in fig. 3, a MASK map corresponding to the image captured by the image sensor can be obtained.
Fig. 4 is a three-dimensional image in the vehicle coordinate system obtained by coordinate transformation of the MASK map corresponding to the image shown in fig. 3, where the coordinate transformation method is, for example, a back-projection transformation method. The intersection between the obstacle 3' and the ground in fig. 4 is the same as the intersection between the obstacle 3 and the ground in fig. 3. Therefore, the intersection line between the obstacle and the ground can reflect the three-dimensional information of the obstacle in the vehicle coordinate system. In this embodiment, the edge information of the obstacle is the intersection line between the obstacle and the ground.
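To make this concrete, the flat-ground back-projection underlying such a transformation can be sketched as below: a pixel lying on the obstacle/ground intersection in the two-dimensional image is mapped to a three-dimensional point in the vehicle coordinate system. The sketch assumes a pinhole model (a fisheye image would be undistorted first), a flat ground plane at z = 0, and known intrinsics K and sensor pose; none of these specifics are fixed by this embodiment.

```python
import numpy as np

def backproject_to_ground(u, v, K, R_cam_to_veh, cam_origin_veh):
    """Back-project pixel (u, v) onto the z = 0 ground plane of the vehicle frame."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray in the camera frame
    ray_veh = R_cam_to_veh @ ray_cam                      # the same ray in the vehicle frame
    if ray_veh[2] >= 0:                                   # ray never reaches the ground plane
        return None
    lam = -cam_origin_veh[2] / ray_veh[2]                 # scale factor down to z = 0
    return cam_origin_veh + lam * ray_veh                 # 3-D point on the ground plane
```

Applying such a mapping to every pixel on the obstacle/ground contour yields the same intersection line in the vehicle coordinate system, which is why that line can carry the spatial three-dimensional information of the obstacle.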
In this embodiment, in order to ensure that the edge information of the obstacle is complete, redundant processing needs to be performed on the edge information, so as to obtain the edge information to which the redundant information is added.
In this embodiment, the edge information with the added redundant information is used as the obstacle information. It can contain spatial three-dimensional information and thus provides a basis for planning an obstacle avoidance strategy, thereby improving the accuracy of the planned obstacle avoidance strategy.
As can be seen, the obstacle detection method disclosed in this embodiment determines the edge information of an obstacle in the image information and performs redundancy processing on the edge information, so that the edge information and the redundant information can be used as the obstacle information. Since the edge of the obstacle is consistent between the two-dimensional image and the three-dimensional image, the obstacle information contains spatial three-dimensional information, which provides a basis for planning an obstacle avoidance strategy and improves the accuracy of the planned obstacle avoidance strategy.
In some embodiments, the image information acquired by the image sensor is two-dimensional image information. In this case, determining the edge information of the obstacle in the image information specifically includes the following two steps:
Step one: determining three-dimensional image information of the two-dimensional image information in a vehicle coordinate system.
Step two: determining the edge information of the obstacle in the three-dimensional image information.
In this embodiment, an existing obstacle detection technique may be used to determine the obstacle in the two-dimensional image information, which is not described again here. After the obstacle in the two-dimensional image information is determined, a MASK map corresponding to the two-dimensional image information can be obtained.
In this embodiment, coordinate transformation is performed on the MASK map corresponding to the two-dimensional image information, so as to obtain three-dimensional image information of the MASK map in the vehicle coordinate system, thereby determining the three-dimensional image information of the two-dimensional image information in the vehicle coordinate system.
In this embodiment, the coordinate transformation may follow an existing coordinate transformation method, such as a back-projection transformation method, which is not described again here.
In some embodiments, determining the three-dimensional image information of the two-dimensional image information in the vehicle coordinate system specifically includes the following two steps:
Step one: detecting the two-dimensional image information based on a preset edge detection strategy to obtain edge detection information of the obstacle in the two-dimensional image.
Step two: determining three-dimensional image information of the edge detection information in the vehicle coordinate system.
In this embodiment, after the edge detection information of the obstacle in the two-dimensional image information is obtained, the two-dimensional image information may be cropped, so that only the edge detection information of the obstacle remains in the cropped two-dimensional image information.
In this embodiment, after the two-dimensional image information is cropped, the MASK map corresponding to the cropped two-dimensional image information is determined. Coordinate transformation is performed on this MASK map to obtain its three-dimensional image information in the vehicle coordinate system, thereby determining the three-dimensional image information of the edge detection information in the vehicle coordinate system.
In this embodiment, by cropping the two-dimensional image information, extraneous obstacle information in the two-dimensional image information can be removed, which improves the accuracy of determining the edge information of the obstacle.
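A minimal sketch of this cropping step is given below. It assumes the preset edge detection strategy is a Canny detector applied to the obstacle region of the MASK map, which is only one possible choice and is not prescribed by this embodiment.

```python
import cv2
import numpy as np

def keep_only_obstacle_edges(mask, obstacle_value=0, background_value=255):
    """Crop the 2-D MASK image so that only detected edge pixels of the obstacle remain."""
    obstacle = (mask == obstacle_value).astype(np.uint8) * 255   # binary obstacle region
    edges = cv2.Canny(obstacle, 100, 200)                        # thin obstacle boundary (assumed detector)
    cropped = np.full(mask.shape, background_value, dtype=mask.dtype)
    cropped[edges > 0] = obstacle_value                          # keep only the edge pixels
    return cropped
```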
In some embodiments, the obstacle detection method further comprises the following step: acquiring the view field information and the pose information of the image sensor.
In this embodiment, the pose information of the image sensor in the vehicle coordinate system may be determined based on the installation position of the image sensor.
In this embodiment, determining the edge information of the obstacle in the three-dimensional image information specifically includes the following two steps:
Step one: determining an initial scanning position and a scanning angle of a scanning ray based on the view field information and the pose information.
Step two: scanning the three-dimensional image information with the scanning ray based on the initial scanning position and the scanning angle to obtain the edge information of the obstacle in the three-dimensional image information.
In this embodiment, fig. 4 is a schematic diagram of the image captured by the image sensor converted into a three-dimensional image in the vehicle coordinate system, and fig. 5 shows further processing based on fig. 4, the aim of which is to determine the intersection line 6 between the obstacle 3 and the ground.
As shown in fig. 5, reference numeral 2 denotes an edge of the field of view of the image sensor 1. In fig. 5, the intersection line 6 is determined as follows: a scanning ray with the position of the image sensor 1 as its end point sweeps from the left edge to the right edge of the field of view of the image sensor 1, and every position where the scanning ray is blocked belongs to the intersection line 6 between the obstacle 3 and the ground.
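This sweep can be sketched as a ray march over a top-down occupancy grid built from the three-dimensional image information. The grid resolution, number of rays, and maximum range below are placeholders; the embodiment only requires that the sweep start from the sensor position and cover the field of view from its left edge to its right edge.

```python
import numpy as np

def scan_intersection_line(occupancy, origin, fov_left, fov_right,
                           resolution=0.05, n_rays=180, max_range=20.0):
    """Return the first blocked point on every scanning ray.

    `occupancy` is a boolean top-down grid in the vehicle frame whose cell (i, j)
    covers the point (j * resolution, i * resolution); `origin` is the sensor
    position (x, y); `fov_left` and `fov_right` bound the field of view in radians.
    These grid conventions are assumptions made for this sketch.
    """
    hits = []
    h, w = occupancy.shape
    for angle in np.linspace(fov_left, fov_right, n_rays):
        direction = np.array([np.cos(angle), np.sin(angle)])
        for r in np.arange(resolution, max_range, resolution):
            x, y = np.asarray(origin) + r * direction
            i, j = int(y / resolution), int(x / resolution)   # metric point -> grid cell
            if not (0 <= i < h and 0 <= j < w):
                break                                          # the ray left the grid
            if occupancy[i, j]:
                hits.append((x, y))                            # blocked: point on intersection line 6
                break
    return np.array(hits)
```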
In some embodiments, the redundancy processing performed on the edge information to obtain the redundant information is specifically: performing pixel adding processing on the edge information to obtain the redundant information.
In this embodiment, the redundant information consists of the added pixels.
In some embodiments, performing pixel adding processing on the edge information to obtain the redundant information specifically includes the following steps one to four:
Step one: determining the pixels included in the edge based on the edge information.
Step two: adding pixels to the pixels included in the edge along the extension direction of the scanning ray.
Step three: adding pixels laterally at the pixels located at both ends of the edge.
Step four: determining the pixels added along the extension direction and the pixels added laterally as the redundant information.
In the present embodiment, the number of pixels added along the extension direction of the scanning ray is less than the number of pixels added laterally. Both numbers can be determined according to actual needs, and this embodiment does not limit their specific values.
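The four steps could be sketched as follows on an ordered intersection line given as metric points in the vehicle frame (working on metric points rather than grid pixels is a simplification of this sketch). The step size and the counts of added points are placeholders, chosen only so that fewer points are added along the ray than laterally, as required above.

```python
import numpy as np

def add_redundancy(edge_points, sensor_origin, n_along=2, n_lateral=5, step=0.05):
    """Add redundant points around the obstacle/ground intersection line.

    `edge_points` is an ordered (N, 2) array (N >= 2) of intersection-line points
    and `sensor_origin` the (x, y) sensor position, both in the vehicle frame.
    """
    redundant = []
    for p in edge_points:                                  # step one: pixels included in the edge
        ray_dir = p - sensor_origin
        ray_dir = ray_dir / np.linalg.norm(ray_dir)        # extension direction of the scanning ray
        for k in range(1, n_along + 1):                    # step two: add points along the ray
            redundant.append(p + k * step * ray_dir)
    ends = ((edge_points[0], edge_points[1]),              # step three: add points laterally
            (edge_points[-1], edge_points[-2]))            #             at both ends of the edge
    for end, neighbour in ends:
        lateral = end - neighbour
        lateral = lateral / np.linalg.norm(lateral)
        for k in range(1, n_lateral + 1):
            redundant.append(end + k * step * lateral)
    return np.asarray(redundant)                           # step four: the redundant information
```

The obstacle information handed to the planner would then be the union of the original intersection-line points and the returned redundant points, corresponding to determining the edge information and the redundant information together as the obstacle information.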
It should be noted that the obstacle detection methods disclosed in the above embodiments may be combined into new embodiments unless otherwise specified, and the steps in each embodiment may be performed in an adjusted order as long as no logical contradiction arises.
As shown in fig. 6, the present embodiment discloses an obstacle detection device, which may include the following units: a first acquisition unit 61, a first determination unit 62, a processing unit 63 and a second determination unit 64. The units are specifically described as follows:
a first acquiring unit 61, configured to acquire image information acquired by an image sensor;
a first determination unit 62 configured to determine edge information of an obstacle in the image information;
a processing unit 63, configured to perform redundancy processing on the edge information to obtain redundant information;
a second determining unit 64, configured to determine that the edge information and the redundant information are obstacle information.
In some embodiments, the image information collected by the image sensor is two-dimensional image information;
accordingly, the first determination unit 62 includes:
the first subunit is used for determining three-dimensional image information of the two-dimensional image information in a vehicle coordinate system;
and the second subunit is used for determining the edge information of the obstacle in the three-dimensional image information.
In some embodiments, the first subunit is configured to:
detecting the two-dimensional image information based on a preset edge detection strategy to obtain edge detection information of an obstacle in the two-dimensional image;
and determining three-dimensional image information of the edge detection information in a vehicle coordinate system.
In some embodiments, the apparatus further comprises:
a third acquisition unit that acquires view field information and pose information of the image sensor;
accordingly, the second subunit is configured to:
determining an initial scanning position and a scanning angle of a scanning ray based on the view field information and the pose information;
and scanning the three-dimensional image information through the scanning ray based on the initial scanning position and the scanning angle to obtain the edge information of the obstacle in the three-dimensional image information.
In some embodiments, the processing unit 63 is configured to perform pixel adding processing on the edge information to obtain redundant information.
In some embodiments, the processing unit 63 is configured to:
determining pixels included in the edge based on the edge information;
adding pixels to the pixels included in the edge along the extension direction of the scanning ray;
adding pixels laterally at the pixels located at both ends of the edge;
and determining the pixels added along the extension direction and the pixels added laterally as the redundant information.
The obstacle detection device disclosed in the above embodiments can implement the flows of the obstacle detection methods disclosed in the above method embodiments, and in order to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a non-transitory computer-readable storage medium storing computer instructions, where the computer instructions cause the computer to execute the method steps provided by each obstacle detection method embodiment, for example, including:
acquiring image information acquired by an image sensor;
determining edge information of an obstacle in the image information;
carrying out redundancy processing on the edge information to obtain redundant information;
and determining the edge information and the redundant information as obstacle information.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Those skilled in the art will appreciate that although some embodiments described herein include some features included in other embodiments instead of others, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (14)

1. An obstacle detection method, characterized in that the method comprises:
acquiring image information acquired by an image sensor;
determining edge information of an obstacle in the image information;
carrying out redundancy processing on the edge information to obtain redundant information;
and determining the edge information and the redundant information as obstacle information.
2. The method according to claim 1, wherein the image information acquired by the image sensor is two-dimensional image information;
accordingly, the determining edge information of the obstacle in the image information includes:
determining three-dimensional image information of the two-dimensional image information in a vehicle coordinate system;
and determining the edge information of the obstacle in the three-dimensional image information.
3. The method of claim 2, wherein determining three-dimensional image information of the two-dimensional image information in a vehicle coordinate system comprises:
detecting the two-dimensional image information based on a preset edge detection strategy to obtain edge detection information of an obstacle in the two-dimensional image;
and determining three-dimensional image information of the edge detection information in a vehicle coordinate system.
4. A method according to claim 2 or 3, characterized in that the method further comprises:
acquiring view field information and pose information of the image sensor;
accordingly, the determining the edge information of the obstacle in the three-dimensional image information comprises:
determining an initial scanning position and a scanning angle of a scanning ray based on the view field information and the pose information;
and scanning the three-dimensional image information through the scanning ray based on the initial scanning position and the scanning angle to obtain the edge information of the obstacle in the three-dimensional image information.
5. The method of claim 4, wherein the performing redundancy processing on the edge information to obtain redundant information comprises:
and performing pixel adding processing on the edge information to obtain redundant information.
6. The method of claim 5, wherein the performing pixel adding processing on the edge information to obtain redundant information comprises:
determining pixels included in the edge based on the edge information;
adding pixels to the pixels included in the edge along the extension direction of the scanning ray;
adding pixels laterally at the pixels located at both ends of the edge;
and determining the pixels added along the extension direction and the pixels added laterally as the redundant information.
7. An obstacle detection apparatus, characterized in that the apparatus comprises:
the first acquisition unit is used for acquiring image information acquired by the image sensor;
a first determination unit configured to determine edge information of an obstacle in the image information;
the processing unit is used for carrying out redundancy processing on the edge information to obtain redundant information;
a second determining unit configured to determine that the edge information and the redundant information are obstacle information.
8. The apparatus according to claim 7, wherein the image information collected by the image sensor is two-dimensional image information;
accordingly, the first determination unit includes:
the first subunit is used for determining three-dimensional image information of the two-dimensional image information in a vehicle coordinate system;
and the second subunit is used for determining the edge information of the obstacle in the three-dimensional image information.
9. The apparatus of claim 8, wherein the first subunit is configured to:
detecting the two-dimensional image information based on a preset edge detection strategy to obtain edge detection information of an obstacle in the two-dimensional image;
and determining three-dimensional image information of the edge detection information in a vehicle coordinate system.
10. The apparatus of claim 8 or 9, further comprising:
a third acquisition unit that acquires view field information and pose information of the image sensor;
accordingly, the second subunit is configured to:
determining an initial scanning position and a scanning angle of a scanning ray based on the view field information and the pose information;
and scanning the three-dimensional image information through the scanning ray based on the initial scanning position and the scanning angle to obtain the edge information of the obstacle in the three-dimensional image information.
11. The apparatus of claim 10, wherein the processing unit is configured to perform pixel adding processing on the edge information to obtain redundant information.
12. The apparatus of claim 11, wherein the processing unit is configured to:
determining pixels included in the edge based on the edge information;
adding pixels to the pixels included in the edge along the extension direction of the scanning ray;
adding pixels laterally at the pixels located at both ends of the edge;
and determining the pixels added along the extension direction and the pixels added laterally as the redundant information.
13. An in-vehicle apparatus, characterized by comprising:
a processor, memory, a network interface, and a user interface;
the processor, memory, network interface and user interface are coupled together by a bus system;
the processor is adapted to perform the steps of the method of any one of claims 1 to 6 by calling a program or instructions stored in the memory.
14. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the steps of the method according to any one of claims 1 to 6.
CN201811108785.9A 2018-09-21 2018-09-21 Obstacle detection method and device, vehicle-mounted equipment and storage medium Active CN110941973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811108785.9A CN110941973B (en) 2018-09-21 2018-09-21 Obstacle detection method and device, vehicle-mounted equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811108785.9A CN110941973B (en) 2018-09-21 2018-09-21 Obstacle detection method and device, vehicle-mounted equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110941973A true CN110941973A (en) 2020-03-31
CN110941973B CN110941973B (en) 2023-09-15

Family

ID=69905543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811108785.9A Active CN110941973B (en) 2018-09-21 2018-09-21 Obstacle detection method and device, vehicle-mounted equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110941973B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030076317A1 (en) * 2001-10-19 2003-04-24 Samsung Electronics Co., Ltd. Apparatus and method for detecting an edge of three-dimensional image data
CN1980322A (en) * 2005-12-07 2007-06-13 日产自动车株式会社 Object detecting system and object detecting method
WO2010138574A1 (en) * 2009-05-26 2010-12-02 Rapiscan Security Products, Inc. X-ray tomographic inspection systems for the identification of specific target items
JP2011209896A (en) * 2010-03-29 2011-10-20 Nec Corp Obstacle detecting apparatus, obstacle detecting method, and obstacle detecting program
CN102903102A (en) * 2012-09-11 2013-01-30 西安电子科技大学 Non-local-based triple Markov random field synthetic aperture radar (SAR) image segmentation method
CN105407804A (en) * 2013-07-31 2016-03-16 株式会社东芝 X-ray computed tomography (CT) device, image processing device, image processing method, and storage medium
US9098754B1 (en) * 2014-04-25 2015-08-04 Google Inc. Methods and systems for object detection using laser point clouds
US20180091221A1 (en) * 2016-09-23 2018-03-29 Qualcomm Incorporated Selective pixel activation for light-based communication processing
CN107480638A (en) * 2017-08-16 2017-12-15 北京京东尚科信息技术有限公司 Vehicle obstacle-avoidance method, controller, device and vehicle

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Jiang Xiaoyu: "Research on Infrared Array Detectors and Edge Detection", China Masters' Theses Full-text Database (Information Science and Technology), no. 10, pages 135-152 *
Zhang Suihua et al.: "Research on an Obstacle Detection Method Based on Three-Dimensional Lidar", Development & Innovation of Machinery & Electrical Products, no. 06, pages 23-26 *
Shi Lei: "Research on Environment Perception Technology for Autonomous Vehicles - Research on Road Environment Understanding Methods", China Doctoral Dissertations Full-text Database (Information Science and Technology), no. 08, pages 138-168 *
Ma Jianshe et al.: "Binocular Three-Dimensional Reconstruction Based on Contour Extraction and Depth Screening", Computer Engineering & Science, no. 04, 15 April 2018 (2018-04-15), pages 665-672 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112578795A (en) * 2020-12-15 2021-03-30 深圳市优必选科技股份有限公司 Robot obstacle avoidance method and device, robot and storage medium
CN113554882A (en) * 2021-07-20 2021-10-26 阿波罗智联(北京)科技有限公司 Method, apparatus, device and storage medium for outputting information

Also Published As

Publication number Publication date
CN110941973B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN110936893B (en) Blind area obstacle processing method and device, vehicle-mounted equipment and storage medium
US10794718B2 (en) Image processing apparatus, image processing method, computer program and computer readable recording medium
US20210190513A1 (en) Navigation map updating method and apparatus and robot using the same
US11763575B2 (en) Object detection for distorted images
CN109683170B (en) Image driving area marking method and device, vehicle-mounted equipment and storage medium
CN108629292B (en) Curved lane line detection method and device and terminal
US11657319B2 (en) Information processing apparatus, system, information processing method, and non-transitory computer-readable storage medium for obtaining position and/or orientation information
CN111178122B (en) Detection and planar representation of three-dimensional lanes in road scene
DE102017120709A1 (en) OBJECTIVITY ESTIMATION USING DATA FROM A SINGLE CAMERA
CN109255005B (en) Vehicle repositioning method and device, vehicle-mounted equipment, server and storage medium
JP6552448B2 (en) Vehicle position detection device, vehicle position detection method, and computer program for vehicle position detection
CN108169729A (en) The method of adjustment of the visual field of laser radar, medium, laser radar system
CN111169381A (en) Vehicle image display method and device, vehicle and storage medium
CN111160070A (en) Vehicle panoramic image blind area eliminating method and device, storage medium and terminal equipment
CN110941973B (en) Obstacle detection method and device, vehicle-mounted equipment and storage medium
EP3029602A1 (en) Method and apparatus for detecting a free driving space
CN112988922A (en) Perception map construction method and device, computer equipment and storage medium
CN111413701A (en) Method and device for determining distance between obstacles, vehicle-mounted equipment and storage medium
DE102018133030A1 (en) VEHICLE REMOTE CONTROL DEVICE AND VEHICLE REMOTE CONTROL METHOD
CN111832347B (en) Method and device for dynamically selecting region of interest
CN109740502B (en) Road quality detection method and device
CN108416305B (en) Pose estimation method and device for continuous road segmentation object and terminal
CN111383268A (en) Vehicle distance state acquisition method and device, computer equipment and storage medium
WO2016078742A1 (en) Method for operating a navigation system of a motor vehicle by means of a user gesture
JP2021051348A (en) Object distance estimation apparatus and object distance estimation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant