CN116704473B - Obstacle information detection method, obstacle information detection device, electronic device, and computer-readable medium - Google Patents
Obstacle information detection method, obstacle information detection device, electronic device, and computer-readable medium
- Publication number
- CN116704473B (application CN202310595751.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- edge detection
- detection
- obstacle
- generate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
Abstract
The embodiments of the disclosure disclose an obstacle information detection method, an obstacle information detection device, an electronic device, and a computer readable medium. One embodiment of the method comprises the following steps: acquiring a road image; performing edge detection on the road image to generate a first edge detection image, a second edge detection image, a third edge detection image, and a fourth edge detection image; and generating obstacle detection information based on the first edge detection image, the second edge detection image, the third edge detection image, the fourth edge detection image, and a preset image segmentation model. This embodiment can improve the accuracy of the generated obstacle detection information.
Description
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to a method, an apparatus, an electronic device, and a computer readable medium for detecting obstacle information.
Background
The obstacle information detection method is a technique for determining obstacle information in an image. Currently, when obstacle information is generated (for example, when the obstacle is another vehicle, the obstacle information may include the distance, speed, and position coordinates of that vehicle), the following approach is generally adopted: the image is classified or segmented by a preset convolutional neural network, and obstacle detection is then performed on the result to obtain obstacle detection information.
However, the inventors found that when the obstacle information detection is performed in the above manner, there are often the following technical problems:
First, when there are many obstacle vehicles, the vehicles are far away, their colors are similar, and the distances between them are close, the obstacle vehicles have few distinguishing features in the image and are easily recognized as the same obstacle; this reduces the accuracy of image segmentation and, in turn, the accuracy of the generated obstacle detection information;
Second, a convolutional neural network is highly dependent on the diversity of the training set and the quality of the training samples, and a large amount of sequence data is needed to improve recognition performance; when obstacle vehicles are numerous and far away, their colors are similar, and the distances between them are close, it is difficult to provide good training data for the convolutional neural network, which reduces the image segmentation capability of the trained convolutional neural network model.
The information disclosed in this background section is only intended to enhance understanding of the background of the inventive concept and, therefore, may contain information that does not constitute prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
This summary is provided to introduce concepts in a simplified form that are further described below in the detailed description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose obstacle information detection methods, apparatuses, electronic devices, and computer-readable media to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide an obstacle information detection method, the method including: acquiring a road image; performing edge detection on the road image to generate a first edge detection image, a second edge detection image, a third edge detection image and a fourth edge detection image; obstacle detection information is generated based on the first edge detection image, the second edge detection image, the third edge detection image, the fourth edge detection image, and a preset image segmentation model.
In a second aspect, some embodiments of the present disclosure provide an obstacle information detection apparatus, the apparatus including: an acquisition unit configured to acquire a road image; an edge detection unit configured to perform edge detection on the road image to generate a first edge detection image, a second edge detection image, a third edge detection image, and a fourth edge detection image; and a generation unit configured to generate obstacle detection information based on the first edge detection image, the second edge detection image, the third edge detection image, the fourth edge detection image, and a preset image division model.
In a third aspect, some embodiments of the present disclosure provide an electronic device comprising: one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect above.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following advantageous effects: the obstacle information detection method of some embodiments of the present disclosure can improve the accuracy of the generated obstacle detection information. Specifically, the accuracy of the generated obstacle detection information decreases because: when there are many obstacle vehicles, the vehicles are far away, their colors are similar, and the distances between them are close, the obstacle vehicles have few distinguishing features in the image and are easily recognized as the same obstacle, which reduces the accuracy of image segmentation and, in turn, the accuracy of the generated obstacle detection information. Based on this, the obstacle information detection method of some embodiments of the present disclosure first acquires a road image. Then, edge detection is performed on the road image to generate a first edge detection image, a second edge detection image, a third edge detection image, and a fourth edge detection image. The edge detection provides feature support for obstacle information detection, and because four edge detection images are generated, the characteristics of the obstacle vehicles are highlighted: even when the obstacle vehicles are numerous and far away, their colors are similar, and the distances between them are close, the features of different vehicles can still be well distinguished. Finally, obstacle detection information is generated based on the first edge detection image, the second edge detection image, the third edge detection image, the fourth edge detection image, and a preset image segmentation model. In this way, similar obstacles are far less likely to be recognized as the same obstacle, the segmentation capability of the image segmentation model for each obstacle in the road image is improved, and, further, the accuracy of the generated obstacle information is improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of some embodiments of an obstacle information detection method according to the present disclosure;
fig. 2 is a schematic structural view of some embodiments of an obstacle information detection device according to the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that the modifications of "a" and "a plurality of" mentioned in this disclosure are illustrative rather than restrictive, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of an obstacle information detection method according to the present disclosure. The obstacle information detection method comprises the following steps:
step 101, obtaining a road image.
In some embodiments, the execution subject of the obstacle information detection method may acquire the road image by a wired manner or a wireless manner. The road image may be an image photographed by an in-vehicle camera of the current vehicle.
It should be noted that the wireless connection may include, but is not limited to, 3G/4G connections, Wi-Fi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra-wideband) connections, and other now known or later developed wireless connection means.
In practice, the obstacle information detection may be performed on road images of a single frame, or may be performed on road images of consecutive frames.
Step 102, edge detection is performed on the road image to generate a first edge detection image, a second edge detection image, a third edge detection image, and a fourth edge detection image.
In some embodiments, the executing body may perform edge detection on the road image in various manners to generate a first edge detection image, a second edge detection image, a third edge detection image, and a fourth edge detection image.
In some optional implementations of some embodiments, the executing body performing edge detection on the road image to generate a first edge detection image, a second edge detection image, a third edge detection image, and a fourth edge detection image may include the following steps:
and firstly, graying the road image to obtain a gray image. The road image can be grayed through a preset gray algorithm, and a gray image is obtained.
And secondly, carrying out edge detection on the gray level image based on a preset first detection operator so as to generate a first edge detection image.
As an example, the first detection operator may include, but is not limited to, at least one of: a Laplacian operator, a Laplacian of Gaussian (LoG) operator, and the like.
And thirdly, carrying out edge detection on the gray level image based on a preset second detection operator so as to generate a second edge detection image.
As an example, the second detection operator may include, but is not limited to, at least one of: a Scharr operator, a Sobel operator, and the like.
And fourthly, performing edge detection on the gray level image based on a preset third detection operator to generate a third edge detection image.
As an example, the third detection operator may include, but is not limited to, at least one of: a Roberts operator, a non-differential edge detection operator, and the like.
And fifthly, performing edge detection on the gray level image based on a preset fourth detection operator to generate a fourth edge detection image.
As an example, the fourth detection operator may be a Canny operator. A sketch combining the graying step and the four operators follows.
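To make the four-branch edge detection concrete, the following is a minimal sketch, assuming OpenCV and one plausible operator combination (Laplacian, Sobel, Roberts, Canny); the function name, thresholds, and kernel sizes are illustrative assumptions, not values fixed by the disclosure.

```python
import cv2
import numpy as np

def four_way_edge_detection(road_image_bgr):
    """Sketch of the graying step plus four different edge detection operators
    applied to the same gray image (illustrative operator choices)."""
    gray = cv2.cvtColor(road_image_bgr, cv2.COLOR_BGR2GRAY)

    # First detection operator: Laplacian (second-derivative edges).
    edge1 = cv2.convertScaleAbs(cv2.Laplacian(gray, cv2.CV_64F, ksize=3))

    # Second detection operator: Sobel gradient magnitude.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    edge2 = cv2.convertScaleAbs(np.sqrt(gx ** 2 + gy ** 2))

    # Third detection operator: Roberts cross, via two 2x2 difference kernels.
    rx = cv2.filter2D(gray, cv2.CV_64F, np.array([[1, 0], [0, -1]], np.float64))
    ry = cv2.filter2D(gray, cv2.CV_64F, np.array([[0, 1], [-1, 0]], np.float64))
    edge3 = cv2.convertScaleAbs(np.abs(rx) + np.abs(ry))

    # Fourth detection operator: Canny (thresholds are illustrative).
    edge4 = cv2.Canny(gray, 50, 150)

    return gray, edge1, edge2, edge3, edge4
```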
Step 103, generating obstacle detection information based on the first edge detection image, the second edge detection image, the third edge detection image, the fourth edge detection image and a preset image segmentation model.
In some embodiments, the execution body may generate the obstacle detection information in various ways based on the first edge detection image, the second edge detection image, the third edge detection image, the fourth edge detection image, and a preset image segmentation model.
In some optional implementations of some embodiments, the executing body generating the obstacle detection information based on the first edge detection image, the second edge detection image, the third edge detection image, the fourth edge detection image, and a preset image segmentation model may include the following steps:
and a first step of performing stitching processing on the gray-scale image, the first edge detection image, the second edge detection image, the third edge detection image, and the fourth edge detection image to generate a stitched image. The spliced image is a 5-channel image. Next, the stitching process may be performed by overlapping the grayscale image, the first edge detection image, the second edge detection image, the third edge detection image, and the fourth edge detection image to obtain a 5-channel feature image.
And secondly, inputting the spliced image into a preset image segmentation model to generate a segmented road image. The image segmentation model can segment the image to obtain an obstacle edge coordinate set. Then, each obstacle edge coordinate in the obstacle edge coordinate set may be marked into a blank image having the same size as the gray image, thereby obtaining a segmented road image.
As an example, the image segmentation model may include, but is not limited to, at least one of: a residual network (ResNet) model, a VGG (Visual Geometry Group) network model, a GoogLeNet model, and the like.
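Since the disclosure only names example backbones, the fragment below is a hedged PyTorch sketch of one way such a model could accept the 5-channel stitched image: widening the first convolution of an off-the-shelf backbone (here torchvision's ResNet-18, used as an encoder) from 3 to 5 input channels. The segmentation head is omitted, and everything here is an illustrative assumption rather than the patented model.

```python
import torch
import torch.nn as nn
import torchvision

# Hypothetical adaptation: replace the stock 3-channel input convolution with a
# 5-channel one so the encoder can consume the stitched image directly.
backbone = torchvision.models.resnet18(weights=None)
backbone.conv1 = nn.Conv2d(5, 64, kernel_size=7, stride=2, padding=3, bias=False)

# A full image segmentation model would add a decoder / upsampling head on top
# of the encoder features; that part is not shown here.
stitched = torch.randn(1, 5, 480, 640)   # N x C x H x W, illustrative size
features = backbone.conv1(stitched)
print(features.shape)                     # torch.Size([1, 64, 240, 320])
```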
Third, obstacle detection is performed on the segmented road image to generate obstacle detection information. Here, the obstacle detection may be performed on the segmented road image by a preset detection algorithm to generate the obstacle detection information. In addition, the obstacle detection information may include information of at least one obstacle.
As an example, the detection algorithm may include, but is not limited to, at least one of: an MRF (Markov Random Field) model, an SPP (Spatial Pyramid Pooling) model, an FCN (Fully Convolutional Network) model, and the like.
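As a simplified illustration of this obstacle detection step (not the MRF/SPP/FCN models named above), the sketch below assumes the segmented road image is a binary mask whose non-zero pixels mark obstacle regions, and emits one image region detection entry per connected contour; the dictionary keys are illustrative.

```python
import cv2

def detect_obstacles(segmented_road_image):
    """Turn a binary segmented road image into a list of per-region obstacle
    detection entries (bounding box and pixel area)."""
    contours, _ = cv2.findContours(
        segmented_road_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    detections = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        detections.append({
            "bbox": (x, y, w, h),                      # image region
            "area": float(cv2.contourArea(contour)),   # obstacle detection data
        })
    return detections
```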
Alternatively, the image segmentation model may be trained by:
first, a training sample is obtained. Wherein, the training sample comprises: the sample detection image is an image formed by splicing four sample detection images generated after the original image is detected by the first detection operator, the second detection operator, the third detection operator and the fourth detection operator with a gray level image of the original image, and the sample detection image is a 5-channel image. And secondly, the segmentation result sample graph is a label image. Here, the segmentation result sample map may include a set of segmentation image coordinates. The respective segmented image coordinates in the segmented image coordinate set may be coordinates of an object edge in the segmentation result sample map.
Second, the sample detection image and the segmentation result sample map included in the training sample are input into an initial image segmentation model to obtain an output segmented image, where the output segmented image includes an output edge detection coordinate set.
Third, the image similarity between the output segmented image and the segmentation result sample map is determined as a first loss value by using a preset first loss function.
As an example, the first loss function may include, but is not limited to, at least one of: an SSIM (structural similarity) measurement algorithm, cosine similarity, an average hash algorithm, a difference hash algorithm, a perceptual hash algorithm, a pixel-by-pixel matching algorithm, and the like.
Fourth, the coordinate similarity between each output edge detection coordinate in the output edge detection coordinate set of the output segmented image and each segmented image coordinate in the segmented image coordinate set of the segmentation result sample map is determined as a second loss value by using a preset second loss function.
As an example, the second loss function may include, but is not limited to, at least one of: a Euclidean distance value, a Manhattan distance value, a Chebyshev distance value, a Minkowski distance value, an angle cosine value, a Pearson correlation coefficient value, and a Jaccard similarity coefficient value.
And fifthly, carrying out weighted summation on the first loss value and the second loss value according to a preset weight ratio to obtain a comprehensive loss value.
And a sixth step of determining that the training of the initial image segmentation model is completed and determining the initial image segmentation model as an image segmentation model in response to determining that the comprehensive loss value is smaller than a preset loss threshold.
In addition, if the comprehensive loss value is greater than or equal to the preset loss threshold value, parameters in the initial image segmentation model can be adjusted through a gradient descent algorithm, and training samples are acquired again for training the initial image segmentation model.
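The third to sixth steps can be summarized as a weighted composite loss compared against a preset threshold. The sketch below is only an assumption-laden illustration: it uses scikit-image's SSIM for the image-level term (converted to a dissimilarity, 1 - SSIM, so that smaller is better), pairs output and label coordinates by index for the coordinate-level term, and picks arbitrary weights and threshold; none of these concrete choices come from the disclosure.

```python
import numpy as np
from skimage.metrics import structural_similarity

LOSS_THRESHOLD = 0.05   # illustrative preset loss threshold

def comprehensive_loss(output_image, label_image, output_coords, label_coords,
                       w1=0.5, w2=0.5):
    """Weighted sum of an image-level loss and a coordinate-level loss."""
    # First loss value: image similarity, expressed here as a dissimilarity.
    loss1 = 1.0 - structural_similarity(output_image, label_image, data_range=255)

    # Second loss value: mean Euclidean distance between paired edge coordinates.
    out = np.asarray(output_coords, dtype=np.float64)
    lab = np.asarray(label_coords, dtype=np.float64)
    loss2 = float(np.mean(np.linalg.norm(out - lab, axis=1)))

    return w1 * loss1 + w2 * loss2

# Training is considered complete once the comprehensive loss drops below the
# preset threshold; otherwise parameters are adjusted and training continues.
```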
Step 103 above is an invention point of the embodiments of the present disclosure, and it solves the second technical problem mentioned in the background art, namely that a convolutional neural network is highly dependent on the diversity of the training set and the quality of the training samples, a large amount of sequence data is needed to improve recognition performance, and when obstacle vehicles are numerous and far away, their colors are similar, and the distances between them are close, it is difficult to provide good training data for the convolutional neural network, which reduces the image segmentation capability of the trained convolutional neural network model. The factors that lead to this reduction are as follows: the convolutional neural network is highly dependent on the diversity of the training set and the quality of the training samples, a large amount of sequence data is needed to improve recognition performance, and under the above conditions it is difficult to provide good training data. To address this, the image is first edge-detected from multiple angles by the preset first, second, third, and fourth detection operators, so that the advantages of the different detection operators can be combined and incorporated into the stitched image. Stitching the images into a single stitched image increases the number and diversity of image features, so that the image recognition and segmentation capability of the initial image segmentation model can be improved during model training. Then, by introducing the first loss function and the second loss function, the comprehensive loss value with respect to the label image can be determined both from the angle of the whole image and from the angle of the edge detection coordinates in the image, which further improves the accuracy of the initial image segmentation model during training. Thus, even when obstacle vehicles are numerous and far away, their colors are similar, and the distances between them are close, the trained image segmentation model can still segment the image well. Furthermore, the image segmentation capability of the trained convolutional neural network model can be improved.
Optionally, the executing body may further send the obstacle detection information to a display terminal for display.
The above embodiments of the present disclosure have the following advantageous effects: the obstacle information detection method of some embodiments of the present disclosure can improve the accuracy of the generated obstacle detection information. Specifically, the accuracy of the generated obstacle detection information decreases because: when there are many obstacle vehicles, the vehicles are far away, their colors are similar, and the distances between them are close, the obstacle vehicles have few distinguishing features in the image and are easily recognized as the same obstacle, which reduces the accuracy of image segmentation and, in turn, the accuracy of the generated obstacle detection information. Based on this, the obstacle information detection method of some embodiments of the present disclosure first acquires a road image. Then, edge detection is performed on the road image to generate a first edge detection image, a second edge detection image, a third edge detection image, and a fourth edge detection image. The edge detection provides feature support for obstacle information detection, and because four edge detection images are generated, the characteristics of the obstacle vehicles are highlighted: even when the obstacle vehicles are numerous and far away, their colors are similar, and the distances between them are close, the features of different vehicles can still be well distinguished. Finally, obstacle detection information is generated based on the first edge detection image, the second edge detection image, the third edge detection image, the fourth edge detection image, and a preset image segmentation model. In this way, similar obstacles are far less likely to be recognized as the same obstacle, the segmentation capability of the image segmentation model for each obstacle in the road image is improved, and, further, the accuracy of the generated obstacle information is improved.
With further reference to fig. 2, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of an obstacle information detection device, which correspond to those method embodiments shown in fig. 1, and which are particularly applicable in various electronic apparatuses.
As shown in fig. 2, the obstacle information detection apparatus 200 of some embodiments includes: an acquisition unit 201, an edge detection unit 202, and a generation unit 203. Wherein the acquisition unit 201 is configured to acquire a road image; an edge detection unit 202 configured to perform edge detection on the road image to generate a first edge detection image, a second edge detection image, a third edge detection image, and a fourth edge detection image; and a generation unit 203 configured to generate obstacle detection information based on the first edge detection image, the second edge detection image, the third edge detection image, the fourth edge detection image, and a preset image division model.
It will be appreciated that the elements described in the apparatus 200 correspond to the various steps in the method described with reference to fig. 1. Thus, the operations, features and resulting benefits described above for the method are equally applicable to the apparatus 200 and the units contained therein, and are not described in detail herein.
Referring now to fig. 3, a schematic diagram of an electronic device 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 3 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means 301 (e.g., a central processing unit, a graphics processor, etc.) that may perform various suitable actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data required for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
In general, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 308 including, for example, magnetic tape, hard disk, etc.; and communication means 309. The communication means 309 may allow the electronic device 300 to communicate with other devices wirelessly or by wire to exchange data. While fig. 3 shows an electronic device 300 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 3 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communications device 309, or from storage device 308, or from ROM 302. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing means 301.
It should be noted that, in some embodiments of the present disclosure, the computer readable medium may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any networks currently known or developed in the future.
The computer readable medium may be embodied in the apparatus; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a road image; performing edge detection on the road image to generate a first edge detection image, a second edge detection image, a third edge detection image and a fourth edge detection image; obstacle detection information is generated based on the first edge detection image, the second edge detection image, the third edge detection image, the fourth edge detection image, and a preset image segmentation model.
Computer program code for carrying out operations for some embodiments of the present disclosure may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor includes an acquisition unit, an edge detection unit, and a generation unit. The names of these units do not constitute limitations on the unit itself in some cases, and for example, the edge detection unit may also be described as "a unit that performs edge detection on a road image".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.
Claims (5)
1. An obstacle information detection method, comprising:
acquiring a road image;
performing edge detection on the road image to generate a first edge detection image, a second edge detection image, a third edge detection image and a fourth edge detection image;
generating obstacle detection information based on the first edge detection image, the second edge detection image, the third edge detection image, the fourth edge detection image and a preset image segmentation model;
wherein the edge detection of the road image to generate a first edge detection image, a second edge detection image, a third edge detection image, and a fourth edge detection image includes:
graying the road image to obtain a gray image;
performing edge detection on the gray level image based on a preset first detection operator to generate a first edge detection image;
performing edge detection on the gray level image based on a preset second detection operator to generate a second edge detection image;
performing edge detection on the gray level image based on a preset third detection operator to generate a third edge detection image;
performing edge detection on the gray image based on a preset fourth detection operator to generate a fourth edge detection image, wherein the first detection operator, the second detection operator, the third detection operator and the fourth detection operator are different detection operators;
wherein the generating obstacle detection information based on the first edge detection image, the second edge detection image, the third edge detection image, the fourth edge detection image, and a preset image segmentation model includes:
performing stitching processing on the gray level image, the first edge detection image, the second edge detection image, the third edge detection image and the fourth edge detection image to generate a stitched image, wherein the stitched image is a 5-channel image;
inputting the spliced images into a preset image segmentation model to generate segmented road images;
detecting the obstacle of the segmented road image to generate obstacle detection information;
wherein the performing obstacle detection on the segmented road image to generate obstacle detection information includes:
detecting the segmented road image to generate an image region detection information set, wherein each image region detection information in the image region detection information set comprises obstacle detection data;
and determining each image region detection information in the image region detection information set and the corresponding image region in the segmented road image as obstacle detection information.
2. The method of claim 1, wherein the method further comprises:
and sending the obstacle detection information to a display terminal for display.
3. An obstacle information detection device, comprising:
an acquisition unit configured to acquire a road image;
an edge detection unit configured to perform edge detection on the road image to generate a first edge detection image, a second edge detection image, a third edge detection image, and a fourth edge detection image;
a generation unit configured to generate obstacle detection information based on the first edge detection image, the second edge detection image, the third edge detection image, the fourth edge detection image, and a preset image division model;
wherein the edge detection of the road image to generate a first edge detection image, a second edge detection image, a third edge detection image, and a fourth edge detection image includes:
graying the road image to obtain a gray image;
performing edge detection on the gray level image based on a preset first detection operator to generate a first edge detection image;
performing edge detection on the gray level image based on a preset second detection operator to generate a second edge detection image;
performing edge detection on the gray level image based on a preset third detection operator to generate a third edge detection image;
performing edge detection on the gray image based on a preset fourth detection operator to generate a fourth edge detection image, wherein the first detection operator, the second detection operator, the third detection operator and the fourth detection operator are different detection operators;
wherein the generating obstacle detection information based on the first edge detection image, the second edge detection image, the third edge detection image, the fourth edge detection image, and a preset image segmentation model includes:
performing stitching processing on the gray level image, the first edge detection image, the second edge detection image, the third edge detection image and the fourth edge detection image to generate a stitched image, wherein the stitched image is a 5-channel image;
inputting the spliced images into a preset image segmentation model to generate segmented road images;
detecting the obstacle of the segmented road image to generate obstacle detection information;
wherein the performing obstacle detection on the segmented road image to generate obstacle detection information includes:
detecting the segmented road image to generate an image region detection information set, wherein each image region detection information in the image region detection information set comprises obstacle detection data;
and determining each image region detection information in the image region detection information set and the corresponding image region in the segmented road image as obstacle detection information.
4. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, causes the one or more processors to implement the method of any of claims 1-2.
5. A computer readable medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310595751.1A CN116704473B (en) | 2023-05-24 | 2023-05-24 | Obstacle information detection method, obstacle information detection device, electronic device, and computer-readable medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310595751.1A CN116704473B (en) | 2023-05-24 | 2023-05-24 | Obstacle information detection method, obstacle information detection device, electronic device, and computer-readable medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116704473A CN116704473A (en) | 2023-09-05 |
CN116704473B true CN116704473B (en) | 2024-03-08 |
Family
ID=87826887
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310595751.1A Active CN116704473B (en) | 2023-05-24 | 2023-05-24 | Obstacle information detection method, obstacle information detection device, electronic device, and computer-readable medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116704473B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118229171B (en) * | 2024-05-11 | 2024-07-30 | 北京国网信通埃森哲信息技术有限公司 | Power equipment storage area information display method and device and electronic equipment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9972096B2 (en) * | 2016-06-14 | 2018-05-15 | International Business Machines Corporation | Detection of obstructions |
- 2023-05-24: application CN202310595751.1A filed in China (CN); patent CN116704473B granted, status Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102007059735A1 (en) * | 2006-12-12 | 2008-07-24 | Cognex Corp., Natick | Obstacle and vehicle e.g. lorry, recognition system, has cameras aligned such that area on side and over end of own vehicle are recognized, and output device providing information about object or vehicle within range of own vehicle |
CN110502983A (en) * | 2019-07-11 | 2019-11-26 | 平安科技(深圳)有限公司 | The method, apparatus and computer equipment of barrier in a kind of detection highway |
WO2021093418A1 (en) * | 2019-11-12 | 2021-05-20 | 深圳创维数字技术有限公司 | Ground obstacle detection method and device, and computer-readable storage medium |
WO2022083402A1 (en) * | 2020-10-22 | 2022-04-28 | 腾讯科技(深圳)有限公司 | Obstacle detection method and apparatus, computer device, and storage medium |
CN112489029A (en) * | 2020-12-10 | 2021-03-12 | 深圳先进技术研究院 | Medical image segmentation method and device based on convolutional neural network |
CN112818932A (en) * | 2021-02-26 | 2021-05-18 | 北京车和家信息技术有限公司 | Image processing method, obstacle detection device, medium, and vehicle |
CN114419604A (en) * | 2022-03-28 | 2022-04-29 | 禾多科技(北京)有限公司 | Obstacle information generation method and device, electronic equipment and computer readable medium |
CN114693712A (en) * | 2022-04-08 | 2022-07-01 | 重庆邮电大学 | Dark vision and low-illumination image edge detection method based on deep learning |
CN115423865A (en) * | 2022-07-29 | 2022-12-02 | 松灵机器人(深圳)有限公司 | Obstacle detection method, obstacle detection device, mowing robot, and storage medium |
CN116071707A (en) * | 2023-02-27 | 2023-05-05 | 南京航空航天大学 | Airport special vehicle identification method and system |
Non-Patent Citations (2)
Title |
---|
Ji-Won Baek et al., "Pothole Classification Model Using Edge Detection in Road Image," Applied Sciences, vol. 10, no. 19, pp. 1-19 *
Liu Li, "Research on Detection Methods for Foreign Objects Intruding into Rail Clearance Based on Deep Learning," China Masters' Theses Full-text Database (Engineering Science and Technology I), no. 2, B026-325 *
Also Published As
Publication number | Publication date |
---|---|
CN116704473A (en) | 2023-09-05 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |