CN113610056A - Obstacle detection method, obstacle detection device, electronic device, and storage medium - Google Patents

Obstacle detection method, obstacle detection device, electronic device, and storage medium

Info

Publication number
CN113610056A
Authority
CN
China
Prior art keywords
obstacle
feature
characteristic
image
historical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111009218.XA
Other languages
Chinese (zh)
Inventor
汪全伍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dilu Technology Co Ltd
Original Assignee
Dilu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dilu Technology Co Ltd filed Critical Dilu Technology Co Ltd
Priority to CN202111009218.XA priority Critical patent/CN113610056A/en
Publication of CN113610056A publication Critical patent/CN113610056A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction

Abstract

The disclosure relates to an obstacle detection method, an obstacle detection device, an electronic device, and a storage medium. The method comprises the following steps: performing feature extraction on an acquired image to be processed to obtain matrix features of multiple dimensions, wherein the matrix features of each dimension correspond to different sizes; respectively taking the matrix feature of each dimension as a reference size feature, and performing feature scaling on the non-reference size features to obtain a scaling feature corresponding to each reference size feature; performing feature fusion on each reference size feature and the scaling features corresponding to that reference size feature to obtain a fusion feature corresponding to that reference size feature; and processing the image to be processed according to the fusion features corresponding to the reference size features to obtain an obstacle detection result in the image to be processed. By adopting the method, the detection capability for small target obstacles can be improved without significantly increasing the amount of computation.

Description

Obstacle detection method, obstacle detection device, electronic device, and storage medium
Technical Field
The present disclosure relates to the field of environmental awareness for unmanned vehicles, and in particular, to a method and an apparatus for detecting obstacles, an electronic device, and a storage medium.
Background
With the development of driving assistance technology, approaches that perform obstacle detection with a camera as the sensor have emerged. Such approaches mainly acquire an image of the environment around the vehicle with the camera, and detect and identify obstacles in the environment image by processing it with a deep learning algorithm.
At present, deep learning algorithms for obstacle detection are limited in detection precision: the detection effect on small target obstacles is poor, and missed detections occur easily. A common practice is to improve detection accuracy, and thereby the detection capability for small target obstacles, by increasing the depth of the algorithm model, but this leads to a significant increase in the amount of computation.
Disclosure of Invention
The present disclosure provides an obstacle detection method that improves detection precision and addresses the problem that small target obstacles are easily missed, without significantly increasing the amount of computation. The technical solution of the disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided a method of obstacle detection, the method comprising:
acquiring an image to be processed, performing feature extraction on the image to be processed, and obtaining matrix features of multiple dimensions, wherein the matrix features of each dimension correspond to different sizes;
respectively taking the matrix feature of each dimension as a reference size feature, and performing feature scaling on the non-reference size features to obtain a scaling feature corresponding to each reference size feature; the feature scaling scales the size corresponding to a non-reference size feature to be the same as the size corresponding to the reference size feature; the non-reference size features are the matrix features, among the matrix features of the multiple dimensions, other than the one currently serving as the reference size feature;
performing feature fusion on the reference size feature and the scaling features corresponding to the reference size feature to obtain a fusion feature corresponding to the reference size feature;
and processing the image to be processed according to the fusion features corresponding to the respective reference size features to obtain an obstacle detection result in the image to be processed.
According to the first aspect of the embodiment of the present disclosure, the performing feature fusion on the scaling feature corresponding to the reference size feature and the reference size feature to obtain a fusion feature corresponding to the reference size feature includes:
obtaining weight coefficients for the reference size feature and for the scaling features corresponding to the reference size feature through deep learning network training;
and performing weighted fusion on the scaling features corresponding to the reference size features and the reference size features by using the weight coefficients to obtain weighted-fused fusion features.
According to the first aspect of the embodiment of the present disclosure, the processing the image to be processed according to the fusion feature corresponding to each of the reference size features, and obtaining an obstacle detection result in the image to be processed includes:
predicting the fusion characteristics by using a deep learning network model to obtain obstacle information comprising the probability of whether an obstacle exists, the obstacle prediction category, the probability corresponding to the obstacle prediction category and the obstacle coordinates;
and multiplying the probability of whether the obstacle exists by the probability corresponding to the obstacle prediction category to obtain a multiplication value, outputting the obstacle prediction category of the obstacle as the obstacle category when the multiplication value satisfies a set obstacle identification threshold, and taking the obstacle coordinates corresponding to the obstacle as first obstacle coordinates.
According to the first aspect of the embodiment of the present disclosure, after the processing the image to be processed according to the fusion feature corresponding to each reference size feature to obtain the obstacle detection result in the image to be processed, the processing further includes:
solving the intersection ratio of the areas of the target obstacle in the current frame and the historical obstacles in the historical frame, and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle;
matching by adopting a maximum matching algorithm according to the obtained intersection ratio of the areas of the target obstacle in the current frame and the historical obstacle in the historical frame and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle;
when the matching meets a preset matching threshold, outputting the obstacle category of the current frame and the obstacle coordinates corresponding to the obstacles;
and performing optimal estimation on the output obstacle type and the obstacle coordinate corresponding to the obstacle, and the historical obstacle type and the obstacle coordinate corresponding to the obstacle in the historical frame by using a filtering algorithm to obtain the obstacle type detected by the current frame and the obstacle coordinate corresponding to the obstacle type as a second obstacle coordinate.
According to the first aspect of the embodiments of the present disclosure, in obtaining the intersection ratio between the target obstacle in the current frame and the area of the historical obstacle in the historical frame and the euclidean distance between the center point of the target obstacle and the center point of the historical obstacle, the method further includes:
weighting the intersection ratio of the areas of the target obstacle in the current frame and the historical obstacle in the historical frame and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle respectively to obtain a weighted intersection ratio and/or a weighted Euclidean distance;
the performing maximum matching according to the obtained intersection ratio of the areas of the target obstacle in the current frame and the historical obstacle in the historical frame and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle comprises: performing maximum matching according to the weighted intersection ratio of the areas of the target obstacle in the current frame and the historical obstacle in the historical frame and the weighted Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle.
According to the first aspect of the embodiments of the present disclosure, the image to be processed is acquired by:
acquiring image data acquired by a camera sensor in a shared memory;
and processing the image data into an image with a specified pixel size to obtain an image to be processed.
According to a second aspect of the embodiments of the present disclosure, there is provided an obstacle detection device including:
the feature extraction module is used for acquiring an image to be processed, performing feature extraction on the image to be processed, and obtaining matrix features of multiple dimensions, wherein the matrix features of each dimension correspond to different sizes;
the feature scaling module is used for respectively taking the matrix feature of each dimension as a reference size feature and performing feature scaling on the non-reference size features to obtain a scaling feature corresponding to each reference size feature; the feature scaling scales the size corresponding to a non-reference size feature to be the same as the size corresponding to the reference size feature; the non-reference size features are the matrix features, among the matrix features of the multiple dimensions, other than the one currently serving as the reference size feature;
the feature fusion module is used for performing feature fusion on the reference size feature and the scaling features corresponding to the reference size feature to obtain a fusion feature corresponding to the reference size feature;
and the obstacle detection module is used for processing the image to be processed according to the fusion features corresponding to the respective reference size features to obtain an obstacle detection result in the image to be processed.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising a processor and a memory for storing processor-executable instructions, the processor being configured to execute instructions to implement the method of obstacle detection as described in the first aspect above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by the processor, implements the method of obstacle detection as described in the above first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising instructions that, when executed, implement the method of obstacle detection as described in the first aspect above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
according to the embodiment scheme provided by the disclosure, the features with different sizes are associated by performing feature fusion on the features with different sizes, so that deep semantic information and shallow characterization information of small targets are considered and enhanced. Therefore, the ability of identifying and detecting small target obstacles is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is an application environment diagram illustrating a method of obstacle detection according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a method of obstacle detection according to an exemplary embodiment.
FIG. 3 is a flow diagram illustrating feature scaling and fusion of a method of obstacle detection according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating obstacle detection result output of an obstacle detection method according to an exemplary embodiment.
Fig. 5 is a flow chart illustrating obstacle tracking of a method of obstacle detection according to an exemplary embodiment.
Fig. 6 is a flowchart illustrating an obstacle detection method of acquiring an image to be processed according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating an obstacle detection device according to an exemplary embodiment.
FIG. 8 illustrates a block diagram of an electronic device in accordance with an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should also be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, etc.) referred to in the present disclosure are both information and data that are authorized by the user or sufficiently authorized by various parties.
The obstacle detection method provided by the present disclosure may be applied to an application environment as shown in fig. 1. Where vehicle 110 communicates with computer device 120 over a network. The vehicle 110 acquires an image of the road environment, and uploads the acquired image of the road environment to the computer device 120. In some embodiments, when the computer device 120 acquires the road environment image in which the target vehicle is located, the image may be subjected to obstacle detection processing to obtain a probability of whether an obstacle corresponding to each target object exists in the road environment image in which the target vehicle is located, an obstacle prediction type, a probability corresponding to the obstacle prediction type, and obstacle information of obstacle coordinates. The type of the obstacle may be finally determined according to the probability of whether the obstacle exists or not and the probability corresponding to the predicted type of the obstacle. In addition, the computer device 120 may also issue the detection result of the obstacle to the target vehicle, so that the target vehicle 110 may avoid the obstacle in time according to the obstacle detection result.
The target vehicle 110 may be, but is not limited to, an autonomous automobile or a motor vehicle with autonomous driving or driver assistance functions. Of course, the obstacle detection method and device provided by the disclosure can also be applied to non-motor vehicles, aircraft, rail transit vehicles, and the like. The computer device may be a terminal or a server; the terminal may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or a portable wearable device. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud storage, network services, cloud communication, big data, and artificial intelligence platforms. The terminal and the server may be connected directly or indirectly in a wired or wireless manner, and the disclosure is not limited thereto.
Generally, a small target occupies fewer pixels than a conventional target, so good features are difficult to extract, and as the number of layers of the deep neural network increases, the feature information and position information of the small target are easily lost and become hard for the network to detect; a small target therefore needs deep semantic information and shallow representation information at the same time. According to the embodiments provided by the disclosure, features of different sizes can be associated by performing feature fusion on them, so that the deep semantic information and shallow representation information of small targets are both taken into account and enhanced, improving the ability to identify and detect small target obstacles. A small target can be defined in various ways: by relative size, for example a target whose length and width are 0.1 (or another factor) of the original image size may be considered a small target in some embodiments of the present disclosure; or by absolute size, for example a target smaller than 32 × 32 pixels may be considered a small target in some embodiments of the present disclosure.
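For illustration only, a check combining the two conventions above might look like the following sketch; the 0.1 relative factor and the 32 × 32 pixel area are the example values mentioned above, not fixed requirements of the disclosure:

```python
def is_small_target(box_w: float, box_h: float, img_w: int, img_h: int) -> bool:
    """Returns True when a target counts as 'small' under either convention:
    relative (each side at most 0.1 of the image side) or absolute (< 32x32 px)."""
    relative = box_w <= 0.1 * img_w and box_h <= 0.1 * img_h
    absolute = box_w * box_h < 32 * 32
    return relative or absolute
```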
Fig. 2 is a flowchart illustrating an obstacle detection method according to an exemplary embodiment, as shown in fig. 2, for use in a computer device, including the following steps.
In step S210, an image to be processed is obtained, feature extraction is performed on the image to be processed, matrix features of multiple dimensions are obtained, and the matrix features of each dimension correspond to different sizes.
The image to be processed may be an image obtained by capturing road environment information around the vehicle with an image acquisition device (such as a camera or video camera). Conventional image processing algorithms may be employed to process the image into an image of a specified pixel size. The image may reflect the environment surrounding the vehicle, such as lanes and obstacles, where obstacles include pedestrians, vehicles, animals, street lights, street trees, and the like. The image to be processed may be in picture or video form. Feature extraction on the image to be processed may be performed by extracting feature layers of the image with a deep learning network. The deep learning network may include a convolutional neural network, SSD (Single Shot MultiBox Detector), YOLO (You Only Look Once), and the like, where SSD is a type of target detection network. The multiple dimensions generally include 2 or more specified dimensions, although this disclosure does not exclude implementations in which the multiple dimensions described in some embodiments comprise a single dimension. The number of dimensions is typically determined by the number of feature layers in the deep learning network: for example, YOLO generally uses 3 feature layers, and SSD generally uses 6 feature layers. Feature information of different layers can be extracted from the image by applying convolution and related operations to the feature layers of the deep learning network. The features of each dimension generally correspond to different sizes, meaning that the image size of the feature map of each dimension is different. Because targets of different scales differ greatly on the input image, using features of different sizes better adapts to the different scales of the targets.
In one embodiment, the computer device takes the image to be processed as the input of the open-source target detection algorithm YOLOv5 (You Only Look Once version 5), performs feature extraction on the image, and obtains features of 3 different layers, denoted feature 1, feature 2, and feature 3, where the size of feature 1 is greater than that of feature 2, and the size of feature 2 is greater than that of feature 3.
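The following PyTorch sketch illustrates this kind of multi-scale feature extraction. It is not the actual YOLOv5 backbone; the layer counts, channel widths, and strides are assumptions chosen only to produce three feature maps whose sizes decrease in the same way as feature 1, feature 2, and feature 3 above.

```python
import torch
import torch.nn as nn


class MultiScaleBackbone(nn.Module):
    """Toy backbone returning three feature maps of decreasing spatial size,
    analogous to feature 1 > feature 2 > feature 3 in the embodiment."""

    def __init__(self, in_channels: int = 3, base: int = 32):
        super().__init__()
        self.stage1 = nn.Sequential(  # overall stride 8 -> largest map (feature 1)
            nn.Conv2d(in_channels, base, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.stage2 = nn.Sequential(  # stride 16 -> medium map (feature 2)
            nn.Conv2d(base * 4, base * 8, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.stage3 = nn.Sequential(  # stride 32 -> smallest map (feature 3)
            nn.Conv2d(base * 8, base * 16, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor):
        f1 = self.stage1(x)   # e.g. 80 x 80 for a 640 x 640 input
        f2 = self.stage2(f1)  # e.g. 40 x 40
        f3 = self.stage3(f2)  # e.g. 20 x 20
        return f1, f2, f3
```

For a 640 × 640 input, such a backbone would return maps of roughly 80 × 80, 40 × 40, and 20 × 20, giving the three sizes the later steps operate on.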
In step S220, the matrix feature of each dimension is respectively taken as a reference size feature, and feature scaling is performed on the non-reference size features to obtain a scaling feature corresponding to each reference size feature; the feature scaling scales the size corresponding to a non-reference size feature to be the same as the size corresponding to the reference size feature; the non-reference size features are the matrix features, among the matrix features of the multiple dimensions, other than the one currently serving as the reference size feature.
Feature scaling may include two modes: feature reduction and feature enlargement. Feature reduction shrinks a larger feature so that its size matches the reference size feature. Feature enlargement expands a smaller feature so that its size matches the reference size feature. Any of the features may serve as the reference size feature: whichever feature is used as the size reference when scaling becomes the reference size feature.
In one embodiment, the computer device extracts features of 3 different layers, denoted feature 1, feature 2, and feature 3, by using the YOLOv5 (You Only Look Once version 5) algorithm, where the size of feature 1 is larger than that of feature 2, and the size of feature 2 is larger than that of feature 3. First, the features are scaled with feature 3 as the reference size feature; the specific operations comprise: reducing feature 1 to the same size as feature 3 to obtain reduced feature 1; and reducing feature 2 to the same size as feature 3 to obtain reduced feature 2. Second, the features are scaled with feature 2 as the reference size feature; the specific operations comprise: reducing feature 1 to the same size as feature 2 to obtain reduced feature 1; and enlarging feature 3 to the same size as feature 2 to obtain enlarged feature 3. Finally, the features are scaled with feature 1 as the reference size feature; the specific operations comprise: enlarging feature 2 to the same size as feature 1 to obtain enlarged feature 2; and enlarging feature 3 to the same size as feature 1 to obtain enlarged feature 3.
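A minimal sketch of the scaling operation itself, assuming bilinear interpolation (the disclosure does not specify the resampling method) and NCHW tensors:

```python
import torch.nn.functional as F


def scale_to_reference(feature, reference):
    """Resizes `feature` so its spatial size matches `reference`; this covers
    both the feature reduction and the feature enlargement described above."""
    return F.interpolate(feature, size=reference.shape[-2:],
                         mode="bilinear", align_corners=False)


# Taking feature 3 as the reference size feature, for example:
# reduced_f1 = scale_to_reference(f1, f3)
# reduced_f2 = scale_to_reference(f2, f3)
```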
In step S230, performing feature fusion on the reference size feature and the scaling feature corresponding to the reference size feature to obtain a fusion feature corresponding to the reference size feature;
Feature fusion generally associates different feature layers so as to enrich the information of each feature. The specific operation may be to fuse the information of the feature layers other than the reference size feature into the reference size feature to form the fusion feature.
In step S240, the to-be-processed image is processed according to the fusion feature corresponding to each of the reference size features, so as to obtain an obstacle detection result in the to-be-processed image.
The detection result of the obstacle may include obstacle type information of the target obstacle, position coordinate information of the obstacle, and the like.
Through the above steps, feature extraction is performed on the acquired image, matrix features of different sizes are fused, and the information of different features is associated, so that the fusion gives small target detection both higher resolution and a larger receptive field, improving the detection capability for small targets.
In an exemplary embodiment, as shown in fig. 3, in step S230, the scaling feature corresponding to the reference size feature and the reference size feature are feature fused to obtain a fused feature corresponding to the reference size feature, which may specifically be implemented by the following steps:
in step S310, weight coefficients for the reference size feature and for the scaling features corresponding to the reference size feature are obtained through deep learning network training;
wherein, the value of the weight coefficient is usually obtained in the deep learning network training process.
In step S320, the scaling features corresponding to the reference size feature and the reference size feature are weighted and fused by using the weighting coefficient, so as to obtain a weighted and fused fusion feature.
Taking 3 layers of feature layers as an example, the process of weighted fusion can be performed in the following manner:
first, scaling the feature with the feature 3 size as a reference size, and performing:
reducing the size of the feature 1 to be the same as that of the feature 3 to obtain a reduced feature 1;
feature 2 is reduced to the same size as feature 3, resulting in reduced feature 2.
Then, feature fusion is performed on the feature 3 according to a fusion formula 1, which is:
fused feature 3 = a3 × reduced feature 1 + b3 × reduced feature 2 + c3 × feature 3.
where the values of the coefficients a3, b3, and c3 are obtained during network training.
Then, scaling the feature with the feature 2 size as a reference size, and executing:
reducing the size of the feature 1 to be the same as that of the feature 2 to obtain a reduced feature 1;
feature 3 is enlarged to the same size as feature 2, resulting in enlarged feature 3.
Then, feature fusion is performed on the feature 2 according to a fusion formula 2, which is:
fused feature 2 = a2 × reduced feature 1 + b2 × feature 2 + c2 × enlarged feature 3.
where the values of the coefficients a2, b2, and c2 are obtained during network training.
Then, scaling the feature with the feature 1 size as a reference size, and executing:
enlarging the size of the feature 2 to be the same as that of the feature 1 to obtain an enlarged feature 2;
feature 3 is enlarged to the same size as feature 1, resulting in enlarged feature 3.
Then, feature fusion is performed on the feature 1 according to a fusion formula 3, which is:
fused feature 1 = a1 × feature 1 + b1 × enlarged feature 2 + c1 × enlarged feature 3.
where the values of the coefficients a1, b1, and c1 are obtained during network training.
Through this embodiment, the weight coefficients allow the feature information of both the reference size feature and the scaling features to be taken into account and balanced, yielding the final fusion feature.
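A hedged sketch of this weighted fusion as a learnable module follows: the scalar coefficients play the role of a, b, c and are learned with the rest of the network. The 1 × 1 convolutions that align channel counts and the softmax normalization of the coefficients are added assumptions, not details given in the disclosure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightedFusion(nn.Module):
    """Fuses the reference size feature with the rescaled non-reference features
    using learned scalar weights, mirroring fused = a*x1 + b*x2 + c*x3."""

    def __init__(self, channels, ref_index):
        super().__init__()
        self.ref_index = ref_index
        ref_channels = channels[ref_index]
        # 1x1 convolutions to align channel counts (assumption).
        self.align = nn.ModuleList([nn.Conv2d(c, ref_channels, 1) for c in channels])
        # One learnable coefficient per input feature (a, b, c).
        self.coeffs = nn.Parameter(torch.ones(len(channels)))

    def forward(self, features):
        ref = features[self.ref_index]
        w = torch.softmax(self.coeffs, dim=0)  # keep coefficients positive (assumption)
        fused = 0.0
        for coeff, proj, feat in zip(w, self.align, features):
            feat = proj(feat)
            if feat.shape[-2:] != ref.shape[-2:]:  # feature scaling to the reference size
                feat = F.interpolate(feat, size=ref.shape[-2:],
                                     mode="bilinear", align_corners=False)
            fused = fused + coeff * feat
        return fused


# Usage with the three feature maps of the earlier sketch (channel counts assumed):
# fuse_to_f3 = WeightedFusion(channels=[128, 256, 512], ref_index=2)
# fused_f3 = fuse_to_f3([f1, f2, f3])
```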
In an exemplary embodiment, as shown in fig. 4, in step S240, processing the image to be processed according to the fusion feature corresponding to each of the reference size features, and obtaining the obstacle detection result in the image to be processed may be implemented by:
in step S410, the fusion feature may be predicted by using a deep learning network model, and obstacle information including a probability of whether an obstacle exists, an obstacle prediction category, a probability corresponding to the obstacle prediction category, and an obstacle coordinate is obtained;
the probability of whether an obstacle is present or not can be represented by a decimal between 0 and 1, and the greater the numerical value, the closer to 1, the greater the probability that the object is an obstacle. The barrier category may be pedestrian, automobile, bicycle, street tree, animal, street lamp, sign, etc. The probability that an obstacle class corresponds generally refers to the likelihood that the identified obstacle is of the class. The coordinates of the obstacle, including the abscissa and the ordinate of the obstacle in the image picture, reflect the position of the obstacle, and the size of the obstacle can also be calculated according to the coordinates. The obstacle coordinates may include center position information of the obstacle. The obstacle information may also include the speed and orientation of the obstacle.
In step S420, the probability of whether the obstacle exists is multiplied by the probability corresponding to the predicted obstacle type to obtain a multiplication value, and when the multiplication value satisfies a set obstacle identification threshold, the predicted obstacle type of the obstacle is output as the obstacle type, and the obstacle coordinate corresponding to the obstacle is used as the first obstacle coordinate.
The determination of the obstacle usually requires two factors, namely, the probability of whether the obstacle exists and the probability corresponding to the obstacle prediction type, and when the product of the two factors is greater than a set threshold value, the obstacle can be determined as the obstacle of the corresponding type. This set threshold may be learned through the network.
By the embodiment, the obstacle in the image can be distinguished from irrelevant information in the background, so that the vehicle can accurately identify the obstacle and acquire the information of the obstacle.
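A hedged sketch of this decision rule is shown below; the list-of-dicts output and the 0.5 default are illustrative assumptions (the disclosure only says the recognition threshold is set and may be learned):

```python
def filter_detections(objectness, class_probs, boxes, threshold=0.5):
    """Keeps a candidate when (obstacle-present probability) x (best class
    probability) meets the recognition threshold, as described above."""
    results = []
    for obj_p, probs, box in zip(objectness, class_probs, boxes):
        best_cls = max(range(len(probs)), key=lambda i: probs[i])
        score = obj_p * probs[best_cls]
        if score >= threshold:
            results.append({"category": best_cls,   # output obstacle category
                            "score": score,
                            "box": box})            # first obstacle coordinates
    return results
```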
In an exemplary embodiment, as shown in fig. 5, after the to-be-processed image is processed according to the fusion feature in step S240 to obtain the obstacle detection result in the to-be-processed image, the following steps may be further performed on the output result:
in step S510, an intersection ratio between a target obstacle in a current frame and an area of a historical obstacle in a historical frame, and an euclidean distance between a center point of the target obstacle and a center point of the historical obstacle are solved;
the information of the target barrier in the current frame and the barrier in the historical frame can be extracted, and the intersection ratio of the target barrier in the current frame and the barrier in the historical frame is calculated and can reflect the overlapping degree of the two barriers. The Euclidean distance is calculated by using the barrier in the current frame and the barrier in the historical frame, and the Euclidean distance can reflect the distance information between two target barriers. When the intersection ratio and/or the euclidean distance between two obstacles are calculated, the obstacle in the current frame may not have a correspondence with the obstacle in the history frame. The calculated intersection ratio and/or euclidean distance may be used as a criterion as an input of the maximum matching algorithm in the next step S520.
In step S520, matching by using a maximum matching algorithm according to the obtained intersection ratio between the target obstacle in the current frame and the area of the historical obstacle in the historical frame and the euclidean distance between the center point of the target obstacle and the center point of the historical obstacle;
the maximum matching algorithm may be to perform association matching on the target obstacle in the current frame and the obstacle in the historical frame. The maximum matching algorithm may comprise the hungarian maximum matching algorithm. And (3) inputting the intersection ratio and/or Euclidean distance obtained by calculation in the last step as a matching standard into a maximum matching algorithm for calculation and judgment. When the maximum matching is carried out, only the cross-over ratio or one Euclidean distance item can be adopted, or the cross-over ratio or the Euclidean distance can be respectively used as the matching standard to carry out the matching twice, and the matching result meeting the matching process of the two times is obtained and is output as the result of the maximum matching.
In step S530, when the matching meets a preset matching threshold, outputting the obstacle category of the current frame and the obstacle coordinates corresponding to the obstacle;
the matching meeting the preset matching threshold generally means that a maximum matching algorithm calculates a target obstacle of a current frame and a target obstacle of a historical frame, and a matching result can be output according to a pre-calculated intersection ratio and/or Euclidean distance as a judgment standard.
In step S540, the outputted obstacle type and obstacle coordinates corresponding to the obstacle, and the historical obstacle type and obstacle coordinates corresponding to the obstacle in the historical frame are optimally estimated by using a filtering algorithm, so as to obtain the obstacle type detected in the current frame and the obstacle coordinates corresponding to the obstacle type as second obstacle coordinates.
The filtering algorithm is a prediction algorithm that estimates the true value from observed and estimated values; examples include the Kalman filter algorithm and the particle filter algorithm. Tracking of the target obstacle in the current frame can be realized through the filtering algorithm.
By this embodiment, detected obstacles can be tracked and predicted.
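The state model below is an illustrative assumption (constant-velocity motion of the obstacle center with hand-picked noise levels); the disclosure only states that a filtering algorithm such as a Kalman or particle filter fuses the current detection with the history to obtain the optimal estimate:

```python
import numpy as np


class CenterKalmanFilter:
    """Constant-velocity Kalman filter over an obstacle's center (x, y)."""

    def __init__(self, x, y, dt=1.0):
        self.state = np.array([x, y, 0.0, 0.0])                       # [x, y, vx, vy]
        self.P = np.eye(4)                                            # state covariance
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)  # motion model
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)  # observe position only
        self.Q = 0.01 * np.eye(4)                                     # process noise (assumed)
        self.R = 1.0 * np.eye(2)                                      # measurement noise (assumed)

    def predict(self):
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]

    def update(self, zx, zy):
        """Fuses the matched detection of the current frame with the history."""
        z = np.array([zx, zy], dtype=float)
        innovation = z - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)                      # Kalman gain
        self.state = self.state + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.state[:2]   # estimated center, i.e. the second obstacle coordinates
```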
In an exemplary embodiment, in step S520, the matching by using a maximum matching algorithm according to the obtained intersection ratio between the target obstacle in the current frame and the area of the historical obstacle in the historical frame and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle may further include the following steps:
in step S610, weighting the intersection ratio between the target obstacle and the area of the historical obstacle in the current frame and the euclidean distance between the center point of the target obstacle and the center point of the historical obstacle respectively to obtain a weighted intersection ratio and/or a weighted euclidean distance;
the intersection ratio and the euclidean distance may also be determined by correlation, and one of the correlation methods may be a weighted method.
In step S620, the performing maximum matching according to the obtained intersection ratio between the target obstacle in the current frame and the area of the historical obstacle in the historical frame and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle further includes: performing maximum matching according to the weighted intersection ratio between the target obstacle in the current frame and the area of the historical obstacle in the historical frame and the weighted Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle.
When the maximum matching is performed by using the target obstacle of the current frame and the target obstacle of the historical frame, the intersection ratio and the Euclidean distance can be correlated, and the correlation result is used as the judgment standard and input of the maximum matching. The association of the cross-over ratio and the euclidean distance may be performed in a weighted manner.
Through this embodiment, by weighting the Euclidean distance and the intersection ratio, the degree of matching between obstacles of the current frame and obstacles of the historical frame can be judged more accurately, and the resulting maximum matching is more accurate than using the intersection ratio alone or the Euclidean distance alone as the matching criterion. The matching efficiency is also higher than performing two separate matchings with the Euclidean distance and the intersection ratio.
In an exemplary embodiment, as shown in fig. 6, in step S210, the image to be processed is acquired in the following manner:
in step S710, image data acquired by a camera sensor is acquired in a shared memory;
the camera sensor is an image acquisition device, can be a camera, can be a video camera, and can also be other systems for acquiring video or image information. The camera sensor can be built in the vehicle, or can be an external image acquisition system which is associated with the target vehicle and is connected with the computer in a wired or wireless mode. The shared memory may be a memory built in the camera, or may be implemented by any type of volatile or nonvolatile memory device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read Only Memory (EEPROM), an Erasable Programmable Read Only Memory (EPROM), a Programmable Read Only Memory (PROM), a Read Only Memory (ROM), a magnetic memory, a flash memory, a magnetic disk, an optical disk, or a graphene memory, or may be a memory device in the cloud, and is connected to the camera sensor or the computer in a wired or wireless manner.
In step S720, the image data is processed into an image with a specified pixel size, and an image to be processed is obtained.
The image of the specified pixel size may be a 1080p image, a 720p image, a 4K image, and so on. A 1080p image specifically refers to an image of 1920 × 1080 pixels; 720p, 4K, and the like are understood analogously.
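A hedged sketch of this preprocessing step with OpenCV follows; the shared-memory buffer name and frame shape in the comments are hypothetical, and plain bilinear resizing is assumed since the disclosure only requires producing an image of the specified pixel size:

```python
import cv2
import numpy as np


def preprocess(frame: np.ndarray, size=(1920, 1080)) -> np.ndarray:
    """Resizes a raw camera frame to the specified pixel size (1080p here;
    720p or 4K work the same way)."""
    return cv2.resize(frame, size, interpolation=cv2.INTER_LINEAR)


# The raw frame could, for example, be read from a shared-memory buffer written
# by the camera process (name and shape below are hypothetical):
# from multiprocessing import shared_memory
# shm = shared_memory.SharedMemory(name="camera_frame")
# frame = np.ndarray((1080, 1920, 3), dtype=np.uint8, buffer=shm.buf)
# image_to_process = preprocess(frame)
```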
It should be understood that although the steps in the flowcharts of figs. 2-6 are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, there is no strict restriction on the execution order of these steps, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2-6 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
It is understood that the same/similar parts between the embodiments of the method described above in this specification can be referred to each other, and each embodiment focuses on the differences from the other embodiments, and it is sufficient that the relevant points are referred to the descriptions of the other method embodiments.
Based on the description of the obstacle detection method above, the present disclosure also provides an obstacle detection apparatus. The apparatus may include systems (including distributed systems), software (applications), modules, components, servers, clients, and the like that use the methods described in the embodiments of this specification, in conjunction with any necessary hardware. Based on the same innovative concept, the embodiments of the present disclosure provide an apparatus as described in the following embodiments. Since the implementation scheme by which the apparatus solves the problem is similar to that of the method, the specific implementation of the apparatus in the embodiments of this specification may refer to the implementation of the foregoing method, and repeated details are not described again. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 7 is a block diagram illustrating an apparatus 800 for obstacle detection according to an example embodiment. Referring to fig. 7, the apparatus includes a feature extraction module 810, a feature scaling module 820, a feature fusion module 830, and an obstacle detection module 840.
The feature extraction module 810 is configured to obtain an image to be processed, perform feature extraction on the image to be processed, and obtain matrix features of multiple dimensions, where the matrix features of each dimension correspond to different sizes;
the feature scaling module 820 is configured to scale the features of the non-reference size features by taking the matrix features of each dimension as the reference size features respectively to obtain scaling features corresponding to the reference size features; the feature scaling is to scale the size corresponding to the non-reference size feature to be the same as the size corresponding to the reference size feature; the non-reference dimension characteristic refers to a matrix characteristic of the matrix characteristics of the plurality of dimensions except for being used as a reference dimension characteristic;
the feature fusion module 830 is configured to perform feature fusion on the scaled features corresponding to the reference size feature and the reference size feature to obtain a fusion feature corresponding to the reference size feature;
the obstacle detection module 840 is configured to process the image to be processed according to the fusion features corresponding to the reference size features, so as to obtain an obstacle detection result in the image to be processed.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here. The various modules in the above-described apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
Fig. 8 is a block diagram illustrating an electronic device 900 for obstacle detection in accordance with an example embodiment. For example, the electronic device 900 may be a computer, a messaging device, a tablet device, and the like.
Referring to fig. 8, the electronic device may include one or more of the following components: processing component 910, memory 920, power component 930, multimedia component 940, audio component 950, input/output (I/O) interface 960, sensor component 970, communication component 980, and processor 990.
The processing component 910 generally controls the overall operation of the electronic device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing component 910 may include one or more processors 990 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 910 can include one or more modules that facilitate interaction between the processing component 910 and other components. For example, the processing component 910 may include a multimedia module to facilitate interaction between the multimedia component 940 and the processing component 910.
The memory 920 is configured to store various types of data to support operations at the electronic device 900. Examples of such data include instructions for any application or method operating on the electronic device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 920 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, optical disk, or graphene memory.
The power supply component 930 provides power to the various components of the electronic device 900. The power components 930 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 900.
The multimedia component 940 includes a screen providing an output interface between the electronic device 900 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 940 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 900 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 950 is configured to output and/or input audio signals. For example, audio component 950 includes a Microphone (MIC) configured to receive external audio signals when electronic device 900 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 920 or transmitted via the communication component 980. In some embodiments, audio component 950 also includes a speaker for outputting audio signals.
The I/O interface 960 provides an interface between the processing component 910 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 970 includes one or more sensors for providing various aspects of status assessment for the electronic device 900. For example, the sensor assembly 970 may detect an open/closed state of the electronic device 900, the relative positioning of components, such as a display and keypad of the electronic device 900, the sensor assembly 970 may also detect a change in the position of the electronic device 900 or components of the electronic device 900, the presence or absence of user contact with the electronic device 900, orientation or acceleration/deceleration of the electronic device 900, and a change in the temperature of the electronic device 900. The sensor assembly 970 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 970 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 970 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 980 is configured to facilitate wired or wireless communication between the electronic device 900 and other devices. The electronic device 900 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 980 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 980 also includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as the memory 920 comprising instructions, executable by the processor 990 of the electronic device 900 to perform the above-described method is also provided. For example, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, which includes instructions executable by the processor 990 of the electronic device 900 to perform the above-described method.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the hardware + program class embodiment, since it is substantially similar to the method embodiment, the description is simple, and the relevant points can be referred to the partial description of the method embodiment.
It should be noted that the descriptions of the above-mentioned apparatus, the electronic device, the computer-readable storage medium, the computer program product, and the like according to the method embodiments may also include other embodiments, and specific implementations may refer to the descriptions of the related method embodiments, which are not described in detail herein.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An obstacle detection method, characterized in that the method comprises:
acquiring an image to be processed, performing feature extraction on the image to be processed, and obtaining matrix features of multiple dimensions, wherein the matrix features of each dimension correspond to different sizes;
respectively taking the matrix feature of each dimension as a reference size feature, and performing feature scaling on the non-reference size features to obtain a scaling feature corresponding to each reference size feature; the feature scaling scales the size corresponding to a non-reference size feature to be the same as the size corresponding to the reference size feature; the non-reference size features are the matrix features, among the matrix features of the multiple dimensions, other than the one currently serving as the reference size feature;
performing feature fusion on the reference size feature and the scaling features corresponding to the reference size feature to obtain a fusion feature corresponding to the reference size feature;
and processing the image to be processed according to the fusion features corresponding to the respective reference size features to obtain an obstacle detection result in the image to be processed.
2. The method according to claim 1, wherein the feature fusing the scaled features corresponding to the reference size feature and the reference size feature to obtain a fused feature corresponding to the reference size feature comprises:
obtaining a weight coefficient of the reference size characteristic and a scaling characteristic corresponding to the reference size characteristic through deep learning network training;
and performing weighted fusion on the scaling features corresponding to the reference size features and the reference size features by using the weight coefficients to obtain weighted-fused fusion features.
3. The method according to claim 1, wherein processing the image to be processed according to the fusion feature corresponding to each reference size feature to obtain the obstacle detection result in the image to be processed comprises:
performing prediction on the fusion features by using a deep learning network model to obtain obstacle information comprising a probability that an obstacle exists, an obstacle prediction category, a probability corresponding to the obstacle prediction category, and obstacle coordinates;
and multiplying the probability that the obstacle exists by the probability corresponding to the obstacle prediction category to obtain a product; when the product meets a set obstacle recognition threshold, outputting the obstacle prediction category of the obstacle as the obstacle category, and taking the obstacle coordinates corresponding to the obstacle as first obstacle coordinates.
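The decision rule of claim 3 multiplies the objectness probability by the best class probability and thresholds the product. A small illustrative sketch (field names and the example threshold are assumptions, not part of the claims):

```python
# Sketch of the scoring rule in claim 3: objectness x class probability,
# kept only when the product passes the recognition threshold.
import numpy as np

def decode_detection(p_obj: float, class_probs: np.ndarray,
                     box: np.ndarray, threshold: float = 0.5):
    cls = int(np.argmax(class_probs))
    score = p_obj * class_probs[cls]      # product of the two probabilities
    if score >= threshold:
        return {"category": cls, "score": score, "first_coords": box}
    return None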
4. The method according to claim 1, wherein after processing the image to be processed according to the fusion feature corresponding to each reference size feature to obtain the obstacle detection result in the image to be processed, the method further comprises:
computing the intersection-over-union ratio of the areas of a target obstacle in the current frame and a historical obstacle in a historical frame, and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle;
performing matching by using a maximum matching algorithm according to the obtained intersection-over-union ratio of the areas of the target obstacle in the current frame and the historical obstacle in the historical frame, and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle;
when the matching meets a preset matching threshold, outputting the obstacle category of the current frame and the obstacle coordinates corresponding to the obstacle;
and performing optimal estimation, by using a filtering algorithm, on the output obstacle category and the obstacle coordinates corresponding to the obstacle together with the historical obstacle category and the corresponding obstacle coordinates in the historical frame, to obtain the obstacle category detected in the current frame and the corresponding obstacle coordinates as second obstacle coordinates.
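Claim 4 describes tracking by matching: an intersection-over-union term and a centre-distance term form a matching cost, a maximum matching is computed, and matched detections are smoothed against the history by a filtering algorithm. The sketch below is not part of the claims; it uses SciPy's Hungarian solver as a stand-in for the maximum matching algorithm, only indicates the filtering step (e.g. a Kalman filter) in a comment, and all helper names and coefficients are assumptions.

```python
# Illustrative matching step for claim 4.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def centre_dist(a: np.ndarray, b: np.ndarray) -> float:
    ca = np.array([(a[0] + a[2]) / 2, (a[1] + a[3]) / 2])
    cb = np.array([(b[0] + b[2]) / 2, (b[1] + b[3]) / 2])
    return float(np.linalg.norm(ca - cb))

def match(current: np.ndarray, history: np.ndarray,
          match_threshold: float = 0.3):
    """Return (current, history) index pairs whose match passes the threshold."""
    cost = np.zeros((len(current), len(history)))
    for i, c in enumerate(current):
        for j, h in enumerate(history):
            cost[i, j] = (1.0 - iou(c, h)) + 0.01 * centre_dist(c, h)
    rows, cols = linear_sum_assignment(cost)   # maximum matching stand-in
    pairs = [(i, j) for i, j in zip(rows, cols)
             if iou(current[i], history[j]) >= match_threshold]
    # Matched pairs would then be smoothed against the historical state,
    # e.g. by a Kalman filter, to produce the second obstacle coordinates.
    return pairs
```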
5. The method of claim 4, wherein, when obtaining the intersection-over-union ratio of the areas of the target obstacle in the current frame and the historical obstacle in the historical frame and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle, the method further comprises:
weighting, respectively, the intersection-over-union ratio of the areas of the target obstacle in the current frame and the historical obstacle and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle, to obtain a weighted intersection-over-union ratio and/or a weighted Euclidean distance;
and the performing of maximum matching according to the obtained intersection-over-union ratio of the areas of the target obstacle in the current frame and the historical obstacle in the historical frame and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle further comprises: performing maximum matching according to the weighted intersection-over-union ratio of the areas of the target obstacle in the current frame and the historical obstacle in the historical frame and the weighted Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle.
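Claim 5 only changes how the two cost terms are combined. Reusing the iou and centre_dist helpers from the previous sketch, one possible weighting (alpha and beta are hypothetical coefficients, not values from the claims) is:

```python
# Weighted matching cost for claim 5, reusing iou() and centre_dist()
# from the claim 4 sketch; alpha and beta are hypothetical weights.
def weighted_cost(current_box, history_box, alpha: float = 0.7, beta: float = 0.3) -> float:
    return alpha * (1.0 - iou(current_box, history_box)) \
        + beta * centre_dist(current_box, history_box)
```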
6. The method of claim 1, wherein the image to be processed is acquired by:
acquiring, from a shared memory, image data collected by a camera sensor;
and processing the image data into an image of a specified pixel size to obtain the image to be processed.
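Claim 6 covers fetching the camera frame from shared memory and resizing it to the pixel size the detector expects. A minimal sketch, assuming a named shared-memory segment, a fixed raw frame shape, and OpenCV's resize; none of these specifics come from the claims.

```python
# Preprocessing sketch for claim 6: read a frame from shared memory and
# rescale it to the specified pixel size. Names and shapes are assumptions.
import numpy as np
import cv2
from multiprocessing import shared_memory

def fetch_frame(shm_name: str, shape=(720, 1280, 3),
                target=(640, 640)) -> np.ndarray:
    shm = shared_memory.SharedMemory(name=shm_name)
    try:
        frame = np.ndarray(shape, dtype=np.uint8, buffer=shm.buf).copy()
    finally:
        shm.close()
    return cv2.resize(frame, target)  # image to be processed
```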
7. An obstacle detection apparatus, characterized in that the apparatus comprises:
a feature extraction module, configured to acquire an image to be processed, perform feature extraction on the image to be processed, and obtain matrix features of multiple dimensions, wherein the matrix feature of each dimension corresponds to a different size;
a feature scaling module, configured to take the matrix feature of each dimension in turn as a reference size feature and perform feature scaling on the non-reference size features to obtain a scaling feature corresponding to each reference size feature; the feature scaling means scaling the size corresponding to a non-reference size feature to be the same as the size corresponding to the reference size feature; a non-reference size feature refers to any matrix feature, among the matrix features of the multiple dimensions, other than the one serving as the reference size feature;
a feature fusion module, configured to perform feature fusion on the reference size feature and the scaling feature corresponding to the reference size feature to obtain a fusion feature corresponding to the reference size feature;
and an obstacle detection module, configured to process the image to be processed according to the fusion feature corresponding to each reference size feature to obtain an obstacle detection result in the image to be processed.
8. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the method of any one of claims 1 to 6.
9. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
10. A computer program product comprising instructions, characterized in that the instructions, when executed, implement the steps of the method of any one of claims 1 to 6.
CN202111009218.XA 2021-08-31 2021-08-31 Obstacle detection method, obstacle detection device, electronic device, and storage medium Pending CN113610056A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111009218.XA CN113610056A (en) 2021-08-31 2021-08-31 Obstacle detection method, obstacle detection device, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111009218.XA CN113610056A (en) 2021-08-31 2021-08-31 Obstacle detection method, obstacle detection device, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN113610056A true CN113610056A (en) 2021-11-05

Family

ID=78342313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111009218.XA Pending CN113610056A (en) 2021-08-31 2021-08-31 Obstacle detection method, obstacle detection device, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN113610056A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200320353A1 (en) * 2016-11-10 2020-10-08 Snap Inc. Dense captioning with joint interference and visual context
CN109948524A (en) * 2019-03-18 2019-06-28 北京航空航天大学 A kind of vehicular traffic density estimation method based on space base monitoring
CN109903339A (en) * 2019-03-26 2019-06-18 南京邮电大学 A kind of video group personage's position finding and detection method based on multidimensional fusion feature
CN111860072A (en) * 2019-04-30 2020-10-30 广州汽车集团股份有限公司 Parking control method and device, computer equipment and computer readable storage medium
WO2021085784A1 (en) * 2019-10-31 2021-05-06 재단법인대구경북과학기술원 Learning method of object detection model, and object detection device in which object detection model is executed
CN111027381A (en) * 2019-11-06 2020-04-17 杭州飞步科技有限公司 Method, device, equipment and storage medium for recognizing obstacle by monocular camera
CN111191600A (en) * 2019-12-30 2020-05-22 深圳元戎启行科技有限公司 Obstacle detection method, obstacle detection device, computer device, and storage medium
CN111324115A (en) * 2020-01-23 2020-06-23 北京百度网讯科技有限公司 Obstacle position detection fusion method and device, electronic equipment and storage medium
CN111898501A (en) * 2020-07-17 2020-11-06 东南大学 Unmanned aerial vehicle online aerial photography vehicle identification and statistics method for congested road sections
CN111898539A (en) * 2020-07-30 2020-11-06 国汽(北京)智能网联汽车研究院有限公司 Multi-target detection method, device, system, equipment and readable storage medium
CN112329552A (en) * 2020-10-16 2021-02-05 爱驰汽车(上海)有限公司 Obstacle detection method and device based on automobile

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
陆峰; 徐友春; 李永乐; 王德宇; 谢德胜: "基于信息融合的智能车障碍物检测方法" [Obstacle detection method for intelligent vehicles based on information fusion], 计算机应用 [Journal of Computer Applications], no. 2, 20 December 2017 (2017-12-20) *

Similar Documents

Publication Publication Date Title
CN106651955B (en) Method and device for positioning target object in picture
US11288531B2 (en) Image processing method and apparatus, electronic device, and storage medium
CN106778773B (en) Method and device for positioning target object in picture
CN111340766A (en) Target object detection method, device, equipment and storage medium
CN107784279B (en) Target tracking method and device
CN109145150B (en) Target matching method and device, electronic equipment and storage medium
CN111461182B (en) Image processing method, image processing apparatus, and storage medium
CN110751659B (en) Image segmentation method and device, terminal and storage medium
CN111340048B (en) Image processing method and device, electronic equipment and storage medium
CN111104920B (en) Video processing method and device, electronic equipment and storage medium
CN110543849B (en) Detector configuration method and device, electronic equipment and storage medium
CN111680646B (en) Action detection method and device, electronic equipment and storage medium
CN116824533A (en) Remote small target point cloud data characteristic enhancement method based on attention mechanism
CN112906484B (en) Video frame processing method and device, electronic equipment and storage medium
CN114267041A (en) Method and device for identifying object in scene
CN111178115B (en) Training method and system for object recognition network
CN113496237A (en) Domain-adaptive neural network training and traffic environment image processing method and device
CN111310595A (en) Method and apparatus for generating information
WO2023155350A1 (en) Crowd positioning method and apparatus, electronic device, and storage medium
CN115223143A (en) Image processing method, apparatus, device, and medium for automatically driving vehicle
CN111832338A (en) Object detection method and device, electronic equipment and storage medium
CN115825979A (en) Environment sensing method and device, electronic equipment, storage medium and vehicle
CN113052874B (en) Target tracking method and device, electronic equipment and storage medium
CN113610056A (en) Obstacle detection method, obstacle detection device, electronic device, and storage medium
CN113065392A (en) Robot tracking method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination