CN113610056B - Obstacle detection method, obstacle detection device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113610056B
Authority
CN
China
Prior art keywords
feature
obstacle
size
reference size
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111009218.XA
Other languages
Chinese (zh)
Other versions
CN113610056A (en)
Inventor
汪全伍 (Wang Quanwu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dilu Technology Co Ltd
Original Assignee
Dilu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dilu Technology Co Ltd
Priority to CN202111009218.XA
Publication of CN113610056A
Application granted
Publication of CN113610056B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to an obstacle detection method, an obstacle detection device, an electronic device and a storage medium. The method comprises the following steps: performing feature extraction on an acquired image to be processed to obtain matrix features of multiple scales, wherein the matrix features of each scale correspond to a different size; taking the matrix features of each scale in turn as the reference size feature, and performing feature scaling on the non-reference size features to obtain scaled features corresponding to the reference size feature; performing feature fusion on the reference size feature and the scaled features corresponding to the reference size feature to obtain a fused feature corresponding to the reference size feature; and processing the image to be processed according to the fused features corresponding to each reference size feature to obtain an obstacle detection result in the image to be processed. With this method, the detection capability for small target obstacles can be improved without significantly increasing the computational load.

Description

Obstacle detection method, obstacle detection device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of environment perception for unmanned vehicles, and in particular to an obstacle detection method and device, an electronic device, and a storage medium.
Background
With the development of driver assistance technology, methods have emerged that detect obstacles using a camera as the sensor. Such methods mainly acquire images of the environment around the vehicle through the camera and then detect and identify obstacles in those images with a deep learning algorithm.
At present, deep learning algorithms for obstacle detection are limited in detection precision; they perform poorly on small target obstacles, which are easily missed. A common practice is to improve detection accuracy, and thus the detection of small target obstacles, by increasing the depth of the model, but this significantly increases the computational load.
Disclosure of Invention
Embodiments of the present disclosure provide an obstacle detection method that can improve detection precision and reduce missed detections of small target obstacles without significantly increasing the computational load. The technical solution of the present disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided a method of obstacle detection, the method comprising:
acquiring an image to be processed, performing feature extraction on the image to be processed, and obtaining matrix features of multiple scales, wherein the matrix features of each scale correspond to a different size;
taking the matrix features of each scale in turn as the reference size feature, and performing feature scaling on the non-reference size features to obtain scaled features corresponding to the reference size feature; the feature scaling means scaling the size corresponding to a non-reference size feature to be the same as the size corresponding to the reference size feature; the non-reference size features are the matrix features, among the matrix features of the multiple scales, other than the reference size feature;
performing feature fusion on the reference size feature and the scaled features corresponding to the reference size feature to obtain a fused feature corresponding to the reference size feature;
and processing the image to be processed according to the fused features corresponding to each reference size feature to obtain an obstacle detection result in the image to be processed.
According to the first aspect of the embodiments of the present disclosure, performing feature fusion on the reference size feature and the scaled features corresponding to the reference size feature to obtain the fused feature corresponding to the reference size feature includes:
obtaining weight coefficients for the reference size feature and the scaled features corresponding to the reference size feature through deep learning network training;
and performing weighted fusion of the reference size feature and the scaled features corresponding to the reference size feature using the weight coefficients to obtain the fused feature.
According to the first aspect of the embodiments of the present disclosure, processing the image to be processed according to the fused features corresponding to each reference size feature to obtain the obstacle detection result in the image to be processed includes:
predicting on the fused features using a deep learning network model to obtain obstacle information comprising the probability that an obstacle exists, the predicted obstacle category, the probability corresponding to the predicted obstacle category, and the obstacle coordinates;
multiplying the probability that the obstacle exists by the probability corresponding to the predicted obstacle category to obtain a product; when the product meets a set obstacle recognition threshold, outputting the predicted category of the obstacle as the obstacle category and taking the corresponding obstacle coordinates as first obstacle coordinates.
According to the first aspect of the embodiments of the present disclosure, after processing the image to be processed according to the fused features corresponding to each reference size feature to obtain the obstacle detection result in the image to be processed, the method further includes:
calculating the intersection ratio (intersection-over-union, IoU) of the area of the target obstacle in the current frame and the area of the historical obstacle in the historical frame, and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle;
matching with a maximum matching algorithm according to the obtained intersection ratio of the areas of the target obstacle in the current frame and the historical obstacle in the historical frame and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle;
outputting the obstacle category of the current frame and the obstacle coordinates corresponding to the obstacle when the matching meets a preset matching threshold;
and performing optimal estimation, using a filtering algorithm, on the output obstacle category and corresponding obstacle coordinates together with the historical obstacle category in the historical frame and its corresponding obstacle coordinates, to obtain the obstacle category detected in the current frame, with the corresponding obstacle coordinates taken as second obstacle coordinates.
According to the first aspect of the embodiments of the present disclosure, after obtaining the intersection ratio of the areas of the target obstacle in the current frame and the historical obstacle in the historical frame and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle, the method further includes:
weighting the intersection ratio of the areas of the target obstacle and the historical obstacle in the current frame and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle, respectively, to obtain a weighted intersection ratio and/or a weighted Euclidean distance;
the maximum matching according to the obtained intersection ratio of the areas of the target obstacle in the current frame and the historical obstacle in the historical frame and the Euclidean distance between the center points then includes: performing maximum matching according to the weighted intersection ratio of the areas of the target obstacle in the current frame and the historical obstacle in the historical frame and the weighted Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle.
According to the first aspect of the embodiments of the present disclosure, the image to be processed is acquired by:
acquiring, from a shared memory, image data collected by a camera sensor;
and processing the image data into an image of a specified pixel size to obtain the image to be processed.
According to a second aspect of embodiments of the present disclosure, there is provided an obstacle detection device, the device comprising:
the feature extraction module is configured to acquire an image to be processed, perform feature extraction on the image to be processed, and obtain matrix features of multiple scales, wherein the matrix features of each scale correspond to a different size;
the feature scaling module is configured to take the matrix features of each scale in turn as the reference size feature and perform feature scaling on the non-reference size features to obtain scaled features corresponding to the reference size feature; the feature scaling means scaling the size corresponding to a non-reference size feature to be the same as the size corresponding to the reference size feature; the non-reference size features are the matrix features, among the matrix features of the multiple scales, other than the reference size feature;
the feature fusion module is configured to perform feature fusion on the reference size feature and the scaled features corresponding to the reference size feature to obtain a fused feature corresponding to the reference size feature;
and the obstacle detection module is configured to process the image to be processed according to the fused features corresponding to each reference size feature to obtain an obstacle detection result in the image to be processed.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising a processor and a memory for storing instructions executable by the processor, the processor being configured to execute the instructions to implement the obstacle detection method described in the first aspect above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the obstacle detection method described in the first aspect above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising instructions which when executed implement a method of obstacle detection as described in the first aspect above.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
According to the embodiments provided by the present disclosure, features of different sizes are associated with one another through feature fusion, so that the deep semantic information and the shallow characterization information of a small target are both taken into account and enhanced, thereby improving the recognition and detection capability for small target obstacles.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is an application environment diagram illustrating an obstacle detection method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a method of obstacle detection according to an exemplary embodiment.
Fig. 3 is a flow chart illustrating feature scaling and fusion of an obstacle detection method according to an exemplary embodiment.
Fig. 4 is a flowchart showing an obstacle detection result output of an obstacle detection method according to an exemplary embodiment.
Fig. 5 is a flowchart illustrating obstacle tracking of an obstacle detection method according to an exemplary embodiment.
Fig. 6 is a flowchart of acquiring an image to be processed of an obstacle detection method according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating an obstacle detection device according to an exemplary embodiment.
Fig. 8 illustrates a block diagram of an electronic device, according to an example embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
It should be further noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for presentation, analyzed data, etc.) related to the present disclosure are information and data authorized by the user or sufficiently authorized by each party.
The obstacle detection method provided by the disclosure can be applied to an application environment as shown in fig. 1. Wherein vehicle 110 communicates with computer device 120 over a network. The vehicle 110 acquires an image of the road environment where it is located, and uploads the acquired image of the road environment to the computer device 120. In some embodiments, when the computer device 120 obtains the road environment image where the target vehicle is located, the image may be subjected to an obstacle detection process to obtain the probability of whether the obstacle corresponding to each target object exists, the obstacle prediction type, the probability corresponding to the obstacle prediction type, and the obstacle information of the obstacle coordinates in the road environment image where the target vehicle is located. The category of the obstacle may be finally determined according to the probability of whether the obstacle exists or not, and the probability corresponding to the predicted category of the obstacle. In addition, the computer device 120 may also issue the detection result of the obstacle to the target vehicle, so that the target vehicle 110 may avoid the obstacle in time according to the detection result of the obstacle.
The vehicle 110 may be, but is not limited to, an autonomous car or a motor vehicle with autonomous or assisted driving capability. Of course, the obstacle detection method and device provided by the disclosure can also be applied to non-motor vehicles, aircraft, rail transit vehicles, and the like. The computer device may be a terminal or a server; the terminal may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer, or a portable wearable device. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud storage, network services, cloud communication, big data, and artificial intelligence platforms. The terminal and the server may be connected directly or indirectly, in a wired or wireless manner, which the present disclosure does not limit.
In general, compared with a conventional target, a small target occupies fewer pixels and good features are harder to extract; as the number of layers of a deep neural network increases, the feature information and position information of the small target are gradually lost and become difficult for the network to detect. These characteristics mean that a small target needs deep semantic information and shallow characterization information at the same time. According to the embodiments provided by the present disclosure, features of different sizes can be associated through feature fusion, so that the deep semantic information and the shallow characterization information of the small target are both taken into account and enhanced, improving the recognition and detection capability for small target obstacles. A small target may be defined in several ways: by relative size, e.g., a target whose width and height are about 0.1 of the original image's width and height (or some other factor) may be considered a small target in some embodiments of the present disclosure; or by absolute size, e.g., a target smaller than 32 x 32 pixels may be considered a small target in some embodiments of the present disclosure.
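As an illustration only, the following sketch encodes the two small-target criteria just described; the 0.1 relative factor and the 32 x 32 pixel threshold come from the text above, while the function name and the rule combining the two criteria are hypothetical.

```python
# Hypothetical helper illustrating the two small-target criteria above.
# The 0.1 relative factor and the 32x32 absolute threshold follow the text;
# how the two criteria are combined is an assumption of this sketch.
def is_small_target(box_w, box_h, img_w, img_h,
                    rel_factor=0.1, abs_size=32):
    relative = box_w < img_w * rel_factor and box_h < img_h * rel_factor
    absolute = box_w * box_h < abs_size * abs_size
    return relative or absolute

# Example: a 20x25-pixel box in a 1920x1080 image counts as a small target.
print(is_small_target(20, 25, 1920, 1080))  # True
```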
Fig. 2 is a flowchart illustrating an obstacle detection method according to an exemplary embodiment. As shown in fig. 2, the method is used in a computer device and includes the following steps.
In step S210, an image to be processed is acquired, feature extraction is performed on the image to be processed, and matrix features of multiple scales are obtained, the matrix features of each scale corresponding to a different size.
The image to be processed may be an image obtained by capturing road environment information around the vehicle with an image acquisition device (e.g., a camera or video camera). Conventional image processing algorithms may be employed to convert the image to a specific pixel size. The image may reflect the environment surrounding the vehicle, such as lanes and obstacles including pedestrians, vehicles, animals, street lamps, and roadside trees. The image to be processed may come from a picture or a video. Feature extraction on the image to be processed may be performed by extracting the feature layers of a deep learning network. The deep learning network may be a convolutional neural network, SSD (Single Shot MultiBox Detector), YOLO (You Only Look Once), and the like, where SSD is a kind of target detection network. The multiple scales generally comprise 2 or more specified scales; the present disclosure does not exclude implementations in which the multiple scales described in some embodiments comprise a single scale. The number of scales is typically determined by the number of feature layers in the deep learning network: for example, YOLO typically uses 3 feature layers, while SSD typically uses 6. The deep learning network extracts feature information at different layers from the image through convolution and related operations on the feature layers. The features at each scale typically correspond to different sizes, meaning that the spatial size of the feature map differs from scale to scale. Because different targets differ considerably in scale on the input image, features of different sizes can better adapt to targets of different scales.
In one embodiment, the computer device takes the image to be processed as the input of the open-source target detection algorithm YOLOv5 (You Only Look Once version 5) and performs feature extraction on the image to obtain features at 3 different layers, denoted feature 1, feature 2 and feature 3, where the size of feature 1 is greater than the size of feature 2 and the size of feature 2 is greater than the size of feature 3.
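For illustration, the following toy PyTorch backbone shows how three feature layers of decreasing spatial size can be obtained from one input image. It is a minimal sketch and not the patent's actual YOLOv5 network; the strides, channel counts, and layer structure are assumptions.

```python
import torch
import torch.nn as nn

# Toy three-scale backbone in the spirit of YOLOv5's three output levels.
# This is NOT the patent's network; it only shows that deeper stages yield
# smaller feature maps (feature 1 > feature 2 > feature 3 in size).
class ToyBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 64, 3, stride=8, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        f1 = self.stage1(x)   # largest feature map (shallow, high resolution)
        f2 = self.stage2(f1)  # medium
        f3 = self.stage3(f2)  # smallest (deep, strong semantics)
        return f1, f2, f3

img = torch.randn(1, 3, 640, 640)    # image to be processed
f1, f2, f3 = ToyBackbone()(img)
print(f1.shape, f2.shape, f3.shape)  # 80x80, 40x40, 20x20 spatial sizes
```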
In step S220, the matrix features of each scale are taken in turn as the reference size feature, and feature scaling is performed on the non-reference size features to obtain scaled features corresponding to the reference size feature. The feature scaling means scaling the size corresponding to a non-reference size feature to be the same as the size corresponding to the reference size feature; the non-reference size features are the matrix features, among the matrix features of the multiple scales, other than the reference size feature.
Feature scaling can include two modes: feature reduction and feature enlargement. Feature reduction shrinks a large-sized feature to match the reference feature size, and feature enlargement enlarges a small-sized feature to match the reference feature size. Any feature can serve as the reference size feature: it becomes the reference size feature when scaling is performed with its size as the reference.
In one embodiment, the computer device extracts features at 3 different layers through the YOLOv5 (You Only Look Once version 5) algorithm, denoted feature 1, feature 2 and feature 3, where the size of feature 1 is greater than that of feature 2 and the size of feature 2 is greater than that of feature 3. First, feature scaling is performed with feature 3 as the reference size feature, with specific operations including: reducing the size of feature 1 to the same size as feature 3 to obtain reduced feature 1; reducing the size of feature 2 to the same size as feature 3 to obtain reduced feature 2. Next, feature scaling is performed with feature 2 as the reference size feature, with specific operations including: reducing the size of feature 1 to the same size as feature 2 to obtain reduced feature 1; enlarging the size of feature 3 to the same size as feature 2 to obtain enlarged feature 3. Finally, feature scaling is performed with feature 1 as the reference size feature, with specific operations including: enlarging the size of feature 2 to the same size as feature 1 to obtain enlarged feature 2; enlarging the size of feature 3 to the same size as feature 1 to obtain enlarged feature 3.
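A minimal sketch of the feature-scaling operation follows. The patent does not specify the resampling operator; nearest-neighbour interpolation is assumed here purely for illustration, and the tensor shapes are examples.

```python
import torch
import torch.nn.functional as F

# Sketch of "feature scaling": resize a non-reference feature map to the
# spatial size of the reference feature. The resampling operator is an
# assumption; strided pooling or learned resampling would also work.
def scale_to(feature, reference):
    return F.interpolate(feature, size=reference.shape[-2:], mode="nearest")

f1 = torch.randn(1, 64, 80, 80)  # large, shallow feature
f3 = torch.randn(1, 64, 20, 20)  # small, deep feature (reference size here)
reduced_f1 = scale_to(f1, f3)    # feature 1 shrunk to feature 3's size
enlarged_f3 = scale_to(f3, f1)   # feature 3 enlarged to feature 1's size
print(reduced_f1.shape, enlarged_f3.shape)
```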
In step S230, feature fusion is performed on the reference size feature and the scaling feature corresponding to the reference size feature, so as to obtain a fusion feature corresponding to the reference size feature;
Feature fusion generally associates different feature layers so as to enrich the information of each feature. Concretely, information from the feature layers other than the reference size feature is fused into the reference size feature to form the fused feature.
In step S240, the image to be processed is processed according to the fusion features corresponding to the reference size features, so as to obtain an obstacle detection result in the image to be processed.
The obstacle detection result may include the category information of the target obstacle, the position coordinate information of the obstacle, and the like.
Through the above steps, feature extraction can be performed on the acquired image, matrix features of different sizes can be fused, and information across different features can be associated, so that the fusion simultaneously provides small-target detection with higher resolution and a larger receptive field, improving small-target detection capability.
In an exemplary embodiment, as shown in fig. 3, in step S230, feature fusion is performed on the reference size feature and the scaled feature corresponding to the reference size feature, so as to obtain a fused feature corresponding to the reference size feature, which may be specifically implemented by the following steps:
In step S310, weight coefficients for the reference size feature and the scaled features corresponding to the reference size feature are obtained through deep learning network training.
The weight coefficients are usually obtained during the deep learning network training process.
In step S320, weighted fusion is performed on the reference size feature and the scaled features corresponding to the reference size feature using the weight coefficients, to obtain the weighted-fused feature.
Taking 3 feature layers as an example, the weighted fusion may proceed as follows:
First, feature scaling is performed with the size of feature 3 as the reference size:
the size of feature 1 is reduced to the same size as feature 3 to obtain reduced feature 1;
the size of feature 2 is reduced to the same size as feature 3 to obtain reduced feature 2.
Then, feature 3 is fused according to fusion formula 1:
fused feature 3 = a3 × reduced feature 1 + b3 × reduced feature 2 + c3 × feature 3,
where the values of the coefficients a3, b3 and c3 are obtained during network training.
Next, feature scaling is performed with the size of feature 2 as the reference size:
the size of feature 1 is reduced to the same size as feature 2 to obtain reduced feature 1;
the size of feature 3 is enlarged to the same size as feature 2 to obtain enlarged feature 3.
Then, feature 2 is fused according to fusion formula 2:
fused feature 2 = a2 × reduced feature 1 + b2 × feature 2 + c2 × enlarged feature 3,
where the values of the coefficients a2, b2 and c2 are obtained during network training.
Next, feature scaling is performed with the size of feature 1 as the reference size:
the size of feature 2 is enlarged to the same size as feature 1 to obtain enlarged feature 2;
the size of feature 3 is enlarged to the same size as feature 1 to obtain enlarged feature 3.
Then, feature 1 is fused according to fusion formula 3:
fused feature 1 = a1 × feature 1 + b1 × enlarged feature 2 + c1 × enlarged feature 3,
where the values of the coefficients a1, b1 and c1 are obtained during network training.
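A minimal PyTorch sketch of these three fusion formulas is given below, with the coefficients (a_i, b_i, c_i) stored as learnable network parameters so that they are obtained during training, as the text describes. Equal channel counts across the three features are assumed in this sketch; in practice a 1 x 1 convolution could align channels first.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of the three weighted-fusion equations above. The scalars a_i, b_i,
# c_i are learnable parameters picked up during network training. Equal
# channel counts across scales are an assumption of this sketch.
class WeightedFusion(nn.Module):
    def __init__(self):
        super().__init__()
        # one (a, b, c) triple per reference scale, initialised uniformly
        self.w = nn.Parameter(torch.ones(3, 3) / 3.0)

    def forward(self, f1, f2, f3):
        size1, size2, size3 = f1.shape[-2:], f2.shape[-2:], f3.shape[-2:]
        r = lambda f, s: F.interpolate(f, size=s, mode="nearest")
        # fused feature 3 = a3 * reduced f1 + b3 * reduced f2 + c3 * f3
        fused3 = self.w[2, 0] * r(f1, size3) + self.w[2, 1] * r(f2, size3) + self.w[2, 2] * f3
        # fused feature 2 = a2 * reduced f1 + b2 * f2 + c2 * enlarged f3
        fused2 = self.w[1, 0] * r(f1, size2) + self.w[1, 1] * f2 + self.w[1, 2] * r(f3, size2)
        # fused feature 1 = a1 * f1 + b1 * enlarged f2 + c1 * enlarged f3
        fused1 = self.w[0, 0] * f1 + self.w[0, 1] * r(f2, size1) + self.w[0, 2] * r(f3, size1)
        return fused1, fused2, fused3
```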
Through this embodiment, the weight coefficients can be used to take into account the feature information of both the reference size feature and the scaled features, balancing the information of the two to obtain the final fused feature.
In an exemplary embodiment, as shown in fig. 4, in step S240, processing the image to be processed according to the fused features corresponding to each reference size feature to obtain the obstacle detection result in the image to be processed may be implemented by the following steps:
In step S410, prediction is performed on the fused features using a deep learning network model to obtain obstacle information comprising the probability that an obstacle exists, the predicted obstacle category, the probability corresponding to the predicted obstacle category, and the obstacle coordinates.
The probability that an obstacle exists may be represented by a decimal between 0 and 1; the larger the value (the closer to 1), the greater the probability that the object is an obstacle. The obstacle category may be pedestrian, automobile, bicycle, roadside tree, animal, street lamp, sign, and so on. The probability corresponding to the obstacle category generally refers to the likelihood that the identified obstacle belongs to that category. The obstacle coordinates, including the abscissa and ordinate of the obstacle in the image frame, reflect the position of the obstacle, and the size of the obstacle can be calculated from them. The obstacle coordinates may include the center position of the obstacle. The obstacle information may also include the speed and orientation of the obstacle.
In step S420, the probability that the obstacle exists is multiplied by the probability corresponding to the predicted obstacle category to obtain a product; when the product meets the set obstacle recognition threshold, the predicted category of the obstacle is output as the obstacle category, and the corresponding obstacle coordinates are taken as the first obstacle coordinates.
Determining an obstacle generally requires both factors, i.e., the probability that the obstacle exists and the probability of the predicted obstacle category: the object is determined to belong to the corresponding category when the product of the two is greater than a set threshold. This threshold may be obtained through network learning.
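A minimal sketch of this decision rule follows; the function name, the example categories, and the 0.5 threshold are illustrative assumptions, since the patent obtains the threshold through network learning.

```python
# Minimal sketch of the decision rule above: an objectness probability is
# multiplied by the best class probability and compared against a threshold.
# All names and the 0.5 threshold are illustrative, not from the patent.
def filter_detection(obj_prob, class_probs, box, threshold=0.5):
    best_class = max(class_probs, key=class_probs.get)
    score = obj_prob * class_probs[best_class]
    if score >= threshold:
        return best_class, box  # category and first obstacle coordinates
    return None

det = filter_detection(0.9, {"pedestrian": 0.8, "car": 0.1}, (120, 40, 180, 170))
print(det)  # ('pedestrian', (120, 40, 180, 170)) since 0.9 * 0.8 = 0.72 >= 0.5
```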
Through this embodiment, obstacles in the image can be distinguished from irrelevant background information, so that the vehicle can accurately identify obstacles and acquire their information.
In an exemplary embodiment, as shown in fig. 5, after the image to be processed is processed according to the fused features in step S240 and the obstacle detection result in the image to be processed is obtained, the following steps may further be performed on the output result:
In step S510, the intersection ratio of the area of the target obstacle in the current frame and the area of the historical obstacle in the historical frame, and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle, are calculated.
Information about the target obstacle in the current frame and the obstacle in the historical frame is extracted, and the intersection ratio between them is calculated; the intersection ratio reflects the degree of overlap between the two obstacles. The Euclidean distance is calculated between the obstacle in the current frame and the obstacle in the historical frame and reflects the distance between the two target obstacles. When calculating the intersection ratio and/or Euclidean distance between two obstacles, there may be no established correspondence yet between the obstacle in the current frame and the obstacle in the historical frame. The calculated intersection ratio and/or Euclidean distance serve as the matching criterion input to the maximum matching algorithm of the next step S520.
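For illustration, the two matching cues can be computed for axis-aligned boxes in (x1, y1, x2, y2) form as sketched below; the helper names are hypothetical.

```python
import math

# Sketch of the two matching cues: the intersection ratio (IoU) of two
# axis-aligned boxes (x1, y1, x2, y2) and the Euclidean distance between
# their center points.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def center_distance(a, b):
    cax, cay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    cbx, cby = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    return math.hypot(cax - cbx, cay - cby)
```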
In step S520, matching is performed with a maximum matching algorithm according to the obtained intersection ratio of the areas of the target obstacle in the current frame and the historical obstacle in the historical frame and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle.
The maximum matching algorithm performs association matching between the target obstacle in the current frame and the obstacle in the historical frame, and may include the Hungarian maximum matching algorithm. The intersection ratio and/or Euclidean distance calculated in the previous step are input to the maximum matching algorithm as matching criteria. The maximum matching may use the intersection ratio alone, the Euclidean distance alone, or both: in the last case the intersection ratio and the Euclidean distance each serve as a matching criterion in two separate matching passes, and a result that satisfies both passes is output as the maximum matching result.
In step S530, when the matching meets a preset matching threshold, the obstacle category of the current frame and the obstacle coordinates corresponding to the obstacle are output.
The matching meeting the preset matching threshold generally means that the maximum matching algorithm has evaluated the target obstacle of the current frame against the target obstacle of the historical frame and can output a matching result using the pre-computed intersection ratio and/or Euclidean distance as the criterion.
In step S540, the output obstacle category and corresponding obstacle coordinates, together with the historical obstacle category in the historical frame and its corresponding obstacle coordinates, are optimally estimated with a filtering algorithm to obtain the obstacle category detected in the current frame, with the corresponding obstacle coordinates taken as the second obstacle coordinates.
A filtering algorithm is a prediction algorithm that estimates the true value from observed and estimated values; examples include the Kalman filtering algorithm and the particle filtering algorithm. Tracking of the target obstacle in the current frame can be achieved through the filtering algorithm.
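The following is a minimal constant-velocity Kalman predict/update step for one tracked obstacle center, given as a sketch only: the patent names the Kalman and particle filtering algorithms but does not specify a state model or noise parameters, so those are assumptions here.

```python
import numpy as np

# Minimal constant-velocity Kalman step for one tracked obstacle center with
# state (x, y, vx, vy). Matrices and noise levels are illustrative assumptions.
F_mat = np.array([[1, 0, 1, 0],   # state transition: position += velocity
                  [0, 1, 0, 1],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],       # we observe the position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2              # process noise
R = np.eye(2) * 1.0               # measurement noise

def kalman_step(x, P, z):
    x, P = F_mat @ x, F_mat @ P @ F_mat.T + Q  # predict from the history
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - H @ x)                    # update with the matched detection z
    P = (np.eye(4) - K @ H) @ P
    return x, P
```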
With this embodiment, detected obstacles can be tracked and predicted.
In an exemplary embodiment, in step S520, matching with the maximum matching algorithm according to the obtained intersection ratio of the target obstacle in the current frame and the historical obstacle area in the historical frame and the euclidean distance between the center point of the target obstacle and the center point of the historical obstacle may include the following steps:
In step S610, the intersection ratio of the areas of the target obstacle and the historical obstacle in the current frame and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle are weighted respectively, to obtain a weighted intersection ratio and/or a weighted Euclidean distance.
The intersection ratio and the Euclidean distance may be combined through association, and one way of associating them is by weighting.
In step S620, performing maximum matching according to the obtained intersection ratio of the areas of the target obstacle in the current frame and the historical obstacle in the historical frame and the Euclidean distance between the center points further includes: performing maximum matching according to the weighted intersection ratio of the areas of the target obstacle in the current frame and the historical obstacle in the historical frame and the weighted Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle.
When maximum matching is performed between the target obstacle of the current frame and the target obstacle of the historical frame, the intersection ratio and the Euclidean distance can be associated, and the association result is used as the criterion and input of the maximum matching. The association may be performed in a weighted manner.
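A sketch of such weighted association combined with Hungarian matching is shown below, reusing the iou() and center_distance() helpers sketched after step S510. The weights, the distance normalization, and the matching threshold are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Sketch of weighted association: combine IoU and center distance into one
# cost per (current, historical) obstacle pair, then solve the matching with
# the Hungarian algorithm. Weights and thresholds are assumptions; iou() and
# center_distance() are the helpers sketched earlier.
def match(current_boxes, history_boxes, w_iou=0.7, w_dist=0.3, max_dist=100.0):
    cost = np.zeros((len(current_boxes), len(history_boxes)))
    for i, c in enumerate(current_boxes):
        for j, h in enumerate(history_boxes):
            d = min(center_distance(c, h), max_dist) / max_dist  # normalize to [0, 1]
            cost[i, j] = w_iou * (1.0 - iou(c, h)) + w_dist * d
    rows, cols = linear_sum_assignment(cost)  # Hungarian: minimize total cost
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 0.8]  # match threshold
```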
Through this embodiment, weighting the Euclidean distance and the intersection ratio makes it possible to judge the degree of matching between the obstacles of the current frame and the historical frame more accurately; the resulting maximum matching is more accurate than using the intersection ratio alone or the Euclidean distance alone as the matching criterion, and more efficient than matching twice separately with the Euclidean distance and the intersection ratio.
In an exemplary embodiment, as shown in fig. 6, in step S210, acquiring the image to be processed further includes:
In step S710, image data collected by the camera sensor is acquired from the shared memory.
The camera sensor is an image acquisition device; it may be a camera, a video camera, or another system that collects video or image information. It may be built into the vehicle or be an external image acquisition system associated with the target vehicle, connected to the computer in a wired or wireless manner. The shared memory may be memory built into the camera; it may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, optical disk, or graphene memory; it may also be a cloud storage device connected to the camera sensor or the computer in a wired or wireless manner.
In step S720, the image data is processed into an image of a specified pixel size to obtain the image to be processed.
The image of the specified pixel size may be, for example, a 1080p image, a 720p image, or a 4K image. A 1080p image specifically refers to an image of 1920 x 1080 pixels; 720p, 4K, and so on are defined analogously.
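As a sketch of the resizing step only (the shared-memory access itself is platform-specific and omitted here), assuming OpenCV and a 1080p target size:

```python
import cv2

# Hedged sketch of the preprocessing step: resize an acquired frame to the
# pixel size the detector expects. The 1920x1080 target mirrors the 1080p
# example above; the interpolation mode is an assumption.
def to_processed_image(frame, width=1920, height=1080):
    return cv2.resize(frame, (width, height), interpolation=cv2.INTER_LINEAR)
```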
It should be understood that, although the steps in the flowcharts of figs. 2-6 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in figs. 2-6 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and that need not be performed sequentially but may be performed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
It should be understood that the same/similar parts of the embodiments of the method described above in this specification may be referred to each other, and each embodiment focuses on differences from other embodiments, and references to descriptions of other method embodiments are only needed.
Based on the description of the obstacle detection method above, the present disclosure also provides an obstacle detection device. The device may comprise systems (including distributed systems), software (applications), modules, components, servers, clients, etc. that use the methods described in the embodiments of this specification, combined with the necessary hardware. Based on the same innovative concept, the devices provided in one or more embodiments of the present disclosure are described in the following examples. Because the scheme by which the device solves the problem is similar to that of the method, the implementation of the device in the embodiments of the present disclosure may refer to the implementation of the foregoing method; repeated details are not described again. As used below, the term "unit" or "module" may be a combination of software and/or hardware that implements the intended function. While the devices described in the following embodiments are preferably implemented in software, implementation in hardware or a combination of software and hardware is also possible and contemplated.
Fig. 7 is a block diagram illustrating an apparatus 800 for obstacle detection, according to an example embodiment. Referring to fig. 7, the apparatus includes a feature extraction module 810, a feature scaling module 820, a feature fusion module 830, and an obstacle detection module 840.
The feature extraction module 810 is configured to acquire an image to be processed, perform feature extraction on the image to be processed, and obtain matrix features of multiple scales, wherein the matrix features of each scale correspond to a different size.
The feature scaling module 820 is configured to take the matrix features of each scale in turn as the reference size feature and perform feature scaling on the non-reference size features to obtain scaled features corresponding to the reference size feature. The feature scaling means scaling the size corresponding to a non-reference size feature to be the same as the size corresponding to the reference size feature; the non-reference size features are the matrix features, among the matrix features of the multiple scales, other than the reference size feature.
The feature fusion module 830 is configured to perform feature fusion on the reference size feature and the scaled features corresponding to the reference size feature to obtain a fused feature corresponding to the reference size feature.
The obstacle detection module 840 is configured to process the image to be processed according to the fused features corresponding to each reference size feature to obtain an obstacle detection result in the image to be processed.
The specific manner in which each module performs its operations in the device of the above embodiment has been described in detail in the method embodiments and will not be detailed here. Each module in the above device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in or independent of a processor in the computer device in hardware form, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
Fig. 8 is a block diagram illustrating an electronic device 900 for obstacle detection, according to an example embodiment. For example, the electronic device 900 may be a computer, a messaging device, a tablet device, or the like.
Referring to fig. 8, an electronic device may include one or more of the following components: a processing component 910, a memory 920, a power component 930, a multimedia component 940, an audio component 950, an input/output (I/O) interface 960, a sensor component 970, a communication component 980, and a processor 990.
The processing component 910 generally controls overall operation of the electronic device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 910 may include one or more processors 990 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 910 may include one or more modules that facilitate interactions between the processing component 910 and other components. For example, the processing component 910 may include a multimedia module to facilitate interaction between the multimedia component 940 and the processing component 910.
The memory 920 is configured to store various types of data to support operations at the electronic device 900. Examples of such data include instructions for any application or method operating on the electronic device 900, contact data, phonebook data, messages, pictures, video, and so forth. The memory 920 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, optical disk, or graphene memory.
The power supply component 930 provides power to the various components of the electronic device 900. The power supply components 930 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 900.
The multimedia component 940 includes a screen between the electronic device 900 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 940 includes a front camera and/or a rear camera. When the electronic device 900 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 950 is configured to output and/or input audio signals. For example, the audio component 950 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 900 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 920 or transmitted via the communication component 980. In some embodiments, the audio component 950 further includes a speaker for outputting audio signals.
I/O interface 960 provides an interface between processing component 910 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 970 includes one or more sensors for providing status assessments of various aspects of the electronic device 900. For example, the sensor assembly 970 may detect the on/off state of the electronic device 900 and the relative positioning of components such as its display and keypad; it may also detect a change in position of the electronic device 900 or one of its components, the presence or absence of user contact with the electronic device 900, the orientation or acceleration/deceleration of the electronic device 900, and temperature changes of the electronic device 900. The sensor assembly 970 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for imaging applications. In some embodiments, the sensor assembly 970 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 980 is configured to facilitate wired or wireless communication between the electronic device 900 and other devices. The electronic device 900 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G,3G,4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 980 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 980 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as memory 920, including instructions executable by processor 990 of electronic device 900 to perform the above-described method. For example, the computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
In an exemplary embodiment, a computer program product is also provided, comprising instructions executable by the processor 990 of the electronic device 900 to perform the above-described method.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for a hardware+program class embodiment, the description is relatively simple, as it is substantially similar to the method embodiment, as relevant see the partial description of the method embodiment.
It should be noted that the descriptions of the foregoing apparatus, the electronic device, the computer readable storage medium, the computer program product, and the like according to the method embodiments may further include other implementations, and the specific implementation may refer to the descriptions of the related method embodiments and are not described herein in detail.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method of detecting an obstacle, the method comprising:
acquiring an image to be processed and performing feature extraction on the image to be processed to obtain matrix features of a plurality of sizes, wherein the matrix features of each size correspond to a different size;
taking the matrix features of each size in turn as a reference size feature, and performing feature scaling on the non-reference size features to obtain scaled features corresponding to the reference size feature, wherein feature scaling means scaling the size corresponding to a non-reference size feature to be the same as the size corresponding to the reference size feature, and a non-reference size feature is any matrix feature, among the matrix features of the plurality of sizes, other than the reference size feature;
performing feature fusion on the reference size feature and the scaled features corresponding to the reference size feature to obtain a fused feature corresponding to the reference size feature;
processing the image to be processed according to the fused features corresponding to the reference size features to obtain an obstacle detection result in the image to be processed;
computing the intersection-over-union of the area of a target obstacle in the current frame and the area of a historical obstacle in a historical frame, and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle;
weighting the intersection-over-union of the areas of the target obstacle and the historical obstacle and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle, respectively, to obtain a weighted intersection-over-union and/or a weighted Euclidean distance;
performing maximum matching according to the weighted intersection-over-union of the areas of the target obstacle in the current frame and the historical obstacle in the historical frame and the weighted Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle;
outputting the obstacle category of the current frame and the obstacle coordinates corresponding to the obstacle when the matching satisfies a preset matching threshold; and
performing optimal estimation, by using a filtering algorithm, on the output obstacle category and its corresponding obstacle coordinates together with the historical obstacle category in the historical frame and its corresponding obstacle coordinates, to obtain the obstacle category detected in the current frame, and taking the obstacle coordinates corresponding to the obstacle category as second obstacle coordinates.
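By way of a concrete reading of the matching steps of claim 1: the intersection-over-union measures box overlap, the Euclidean distance measures center displacement, and maximum matching assigns current detections to historical tracks. A minimal Python sketch under that reading, using scipy's Hungarian solver; the weighting scheme, the alpha coefficient, and the threshold value are illustrative assumptions, not values fixed by the claims:

import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def center(box):
    return np.array([(box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0])

def match_obstacles(current, history, alpha=0.7, match_threshold=0.3):
    # Build a weighted score correlating IoU with a center-distance
    # similarity, then solve the maximum matching over it.
    score = np.zeros((len(current), len(history)))
    for i, cb in enumerate(current):
        for j, hb in enumerate(history):
            dist = np.linalg.norm(center(cb) - center(hb))
            score[i, j] = alpha * iou(cb, hb) + (1.0 - alpha) / (1.0 + dist)
    rows, cols = linear_sum_assignment(-score)  # maximize by negating the score
    # Only pairs whose weighted score passes the preset threshold are output.
    return [(i, j) for i, j in zip(rows, cols) if score[i, j] >= match_threshold]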
2. The method of claim 1, wherein performing feature fusion on the reference size feature and the scaled features corresponding to the reference size feature to obtain the fused feature corresponding to the reference size feature comprises:
obtaining weight coefficients for the reference size feature and the scaled features corresponding to the reference size feature through deep learning network training; and
performing weighted fusion of the reference size feature and the scaled features corresponding to the reference size feature by using the weight coefficients to obtain the fused feature.
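One plausible realization of claim 2 is a fusion layer whose per-input coefficients are ordinary trainable parameters, so they are learned along with the rest of the network. A PyTorch sketch under that assumption; normalizing the coefficients with a softmax is an illustrative choice, not something the claim requires:

import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    # Fuses a reference size feature with its size-matched scaled features
    # using weight coefficients obtained through training.
    def __init__(self, num_inputs):
        super().__init__()
        self.coeffs = nn.Parameter(torch.ones(num_inputs))  # one weight per input

    def forward(self, features):
        # features: list of tensors already scaled to the reference size
        w = torch.softmax(self.coeffs, dim=0)  # keep weights on a common scale
        return sum(w[i] * f for i, f in enumerate(features))

One module instance per reference size (e.g. WeightedFusion(3) for three feature levels) would then yield one fused feature for each size.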
3. The method of claim 1, wherein processing the image to be processed according to the fused features corresponding to the reference size features to obtain the obstacle detection result in the image to be processed comprises:
predicting on the fused features by using a deep learning network model to obtain obstacle information comprising the probability that an obstacle exists, a predicted obstacle category, the probability corresponding to the predicted obstacle category, and obstacle coordinates; and
multiplying the probability that an obstacle exists by the probability corresponding to the predicted obstacle category to obtain a product, and, when the product satisfies a set obstacle recognition threshold, outputting the predicted obstacle category as the obstacle category and taking the obstacle coordinates corresponding to the obstacle as first obstacle coordinates.
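The decision rule of claim 3 reduces to multiplying two probabilities and comparing the product against a threshold. A short sketch; the field names and the threshold value are assumed for illustration:

def filter_detections(detections, recognition_threshold=0.5):
    # Each detection is assumed to carry the objectness probability 'p_obj',
    # the predicted category, its probability 'p_cat', and box coordinates.
    results = []
    for det in detections:
        confidence = det['p_obj'] * det['p_cat']
        if confidence >= recognition_threshold:
            # Output the category; its box becomes the first obstacle coordinates.
            results.append((det['category'], det['box'], confidence))
    return results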
4. The method of claim 1, wherein the image to be processed is acquired by:
acquiring, from a shared memory, image data collected by a camera sensor; and
processing the image data into an image of a specified pixel size to obtain the image to be processed.
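A minimal sketch of the acquisition step of claim 4 using Python's multiprocessing.shared_memory and OpenCV; the segment name, the raw frame shape, and the target pixel size are assumptions for illustration only:

import numpy as np
import cv2
from multiprocessing import shared_memory

def acquire_image(shm_name='camera_frame', frame_shape=(1080, 1920, 3),
                  target_size=(640, 640)):
    # Attach to the shared memory segment written by the camera process.
    shm = shared_memory.SharedMemory(name=shm_name)
    try:
        # Copy the frame out so the buffer can be released immediately.
        frame = np.ndarray(frame_shape, dtype=np.uint8, buffer=shm.buf).copy()
    finally:
        shm.close()  # detach; the producer owns the segment's lifetime
    # Resize to the specified pixel size to obtain the image to be processed.
    return cv2.resize(frame, target_size)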
5. The method of claim 1, wherein taking the matrix features of each size in turn as a reference size feature and performing feature scaling on the non-reference size features to obtain the scaled features corresponding to the reference size feature comprises:
extracting features of 3 different levels through a specified algorithm, denoted feature 1, feature 2, and feature 3, respectively, wherein the size of feature 1 is larger than the size of feature 2, and the size of feature 2 is larger than the size of feature 3;
first, performing feature scaling with feature 3 as the reference size feature, comprising: reducing the size of feature 1 to be the same as the size of feature 3 to obtain a reduced feature 1, and reducing the size of feature 2 to be the same as the size of feature 3 to obtain a reduced feature 2;
secondly, performing feature scaling with feature 2 as the reference size feature, comprising: reducing the size of feature 1 to be the same as the size of feature 2 to obtain a reduced feature 1, and enlarging the size of feature 3 to be the same as the size of feature 2 to obtain an enlarged feature 3; and
finally, performing feature scaling with feature 1 as the reference size feature, comprising: enlarging the size of feature 2 to be the same as the size of feature 1 to obtain an enlarged feature 2, and enlarging the size of feature 3 to be the same as the size of feature 1 to obtain an enlarged feature 3.
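The bookkeeping of claim 5 generalizes naturally: for each feature chosen as the reference, resample the other two to its spatial size. A PyTorch sketch; nearest-neighbor resampling is an illustrative choice, and strided convolution or pooling could equally serve for the reductions:

import torch.nn.functional as F

def scale_to_each_reference(features):
    # features: [feat1, feat2, feat3] as 4-D tensors (N, C, H, W),
    # with feat1 the largest and feat3 the smallest.
    scaled = {}
    for ref_idx, ref in enumerate(features):
        size = ref.shape[-2:]  # spatial size of the reference size feature
        scaled[ref_idx] = [
            F.interpolate(f, size=size, mode='nearest')  # reduce or enlarge
            for i, f in enumerate(features) if i != ref_idx
        ]
    return scaled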
6. The method of claim 1, wherein, when maximum matching is performed using the target obstacle of the current frame and the historical obstacle of the historical frame, the intersection-over-union and the Euclidean distance are correlated, and the result of the correlation is used as the criterion and input of the maximum matching.
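Claim 1 leaves the "filtering algorithm" for optimal estimation unspecified; a constant-velocity Kalman filter is one common choice for blending the matched current-frame coordinates with the historical estimate. A sketch under that assumption, with illustrative noise settings:

import numpy as np

class CenterKalman:
    # Constant-velocity Kalman filter over a box center (x, y);
    # state is [x, y, vx, vy]. One possible reading of the claim's
    # optimal-estimation step, not the filter mandated by the patent.
    def __init__(self, x, y, dt=1.0):
        self.state = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 10.0                 # state covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt          # constant-velocity motion model
        self.H = np.eye(2, 4)                     # we observe (x, y) only
        self.Q = np.eye(4) * 0.01                 # process noise (assumed)
        self.R = np.eye(2) * 1.0                  # measurement noise (assumed)

    def step(self, measured_xy):
        # Predict from the historical estimate, then correct with the
        # coordinates matched in the current frame.
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        innovation = np.asarray(measured_xy) - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.state = self.state + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.state[:2]  # smoothed center underlying the second obstacle coordinates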
7. An obstacle detection device, the device comprising:
a feature extraction module, configured to acquire an image to be processed and perform feature extraction on the image to be processed to obtain matrix features of a plurality of sizes, wherein the matrix features of each size correspond to a different size;
a feature scaling module, configured to take the matrix features of each size in turn as a reference size feature and perform feature scaling on the non-reference size features to obtain scaled features corresponding to the reference size feature, wherein feature scaling means scaling the size corresponding to a non-reference size feature to be the same as the size corresponding to the reference size feature, and a non-reference size feature is any matrix feature, among the matrix features of the plurality of sizes, other than the reference size feature;
a feature fusion module, configured to perform feature fusion on the reference size feature and the scaled features corresponding to the reference size feature to obtain a fused feature corresponding to the reference size feature;
an obstacle detection module, configured to process the image to be processed according to the fused features corresponding to the reference size features to obtain an obstacle detection result in the image to be processed; and
a coordinate detection module, configured to: compute the intersection-over-union of the area of a target obstacle in the current frame and the area of a historical obstacle in a historical frame, and the Euclidean distance between the center point of the target obstacle and the center point of the historical obstacle; weight the intersection-over-union and the Euclidean distance, respectively, to obtain a weighted intersection-over-union and/or a weighted Euclidean distance; perform maximum matching according to the weighted intersection-over-union of the areas of the target obstacle in the current frame and the historical obstacle in the historical frame and the weighted Euclidean distance between their center points; output the obstacle category of the current frame and the obstacle coordinates corresponding to the obstacle when the matching satisfies a preset matching threshold; and perform optimal estimation, by using a filtering algorithm, on the output obstacle category and its corresponding obstacle coordinates together with the historical obstacle category in the historical frame and its corresponding obstacle coordinates, to obtain the obstacle category detected in the current frame and take the obstacle coordinates corresponding to the obstacle category as second obstacle coordinates.
8. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the method of any one of claims 1 to 6.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
10. A computer program product comprising instructions which, when executed, implement the steps of the method of any one of claims 1 to 6.
CN202111009218.XA 2021-08-31 2021-08-31 Obstacle detection method, obstacle detection device, electronic equipment and storage medium Active CN113610056B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111009218.XA CN113610056B (en) 2021-08-31 2021-08-31 Obstacle detection method, obstacle detection device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111009218.XA CN113610056B (en) 2021-08-31 2021-08-31 Obstacle detection method, obstacle detection device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113610056A (en) 2021-11-05
CN113610056B (en) 2024-06-07

Family

ID=78342313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111009218.XA Active CN113610056B (en) 2021-08-31 2021-08-31 Obstacle detection method, obstacle detection device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113610056B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10198671B1 (en) * 2016-11-10 2019-02-05 Snap Inc. Dense captioning with joint interference and visual context

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948524A (en) * 2019-03-18 2019-06-28 北京航空航天大学 A kind of vehicular traffic density estimation method based on space base monitoring
CN109903339A (en) * 2019-03-26 2019-06-18 南京邮电大学 A kind of video group personage's position finding and detection method based on multidimensional fusion feature
CN111860072A (en) * 2019-04-30 2020-10-30 广州汽车集团股份有限公司 Parking control method and device, computer equipment and computer readable storage medium
WO2021085784A1 (en) * 2019-10-31 2021-05-06 재단법인대구경북과학기술원 Learning method of object detection model, and object detection device in which object detection model is executed
CN111027381A (en) * 2019-11-06 2020-04-17 杭州飞步科技有限公司 Method, device, equipment and storage medium for recognizing obstacle by monocular camera
CN111191600A (en) * 2019-12-30 2020-05-22 深圳元戎启行科技有限公司 Obstacle detection method, obstacle detection device, computer device, and storage medium
CN111324115A (en) * 2020-01-23 2020-06-23 北京百度网讯科技有限公司 Obstacle position detection fusion method and device, electronic equipment and storage medium
CN111898501A (en) * 2020-07-17 2020-11-06 东南大学 Unmanned aerial vehicle online aerial photography vehicle identification and statistics method for congested road sections
CN111898539A (en) * 2020-07-30 2020-11-06 国汽(北京)智能网联汽车研究院有限公司 Multi-target detection method, device, system, equipment and readable storage medium
CN112329552A (en) * 2020-10-16 2021-02-05 爱驰汽车(上海)有限公司 Obstacle detection method and device based on automobile

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Obstacle detection method for intelligent vehicles based on information fusion; Lu Feng, Xu Youchun, Li Yongle, Wang Deyu, Xie Desheng; Journal of Computer Applications, 2017 (Issue S2); full text *

Also Published As

Publication number Publication date
CN113610056A (en) 2021-11-05

Similar Documents

Publication Publication Date Title
CN110688951B (en) Image processing method and device, electronic equipment and storage medium
EP3163498B1 (en) Alarming method and device
US11288531B2 (en) Image processing method and apparatus, electronic device, and storage medium
CN106778773B (en) Method and device for positioning target object in picture
CN111340048B (en) Image processing method and device, electronic equipment and storage medium
CN113538519A (en) Target tracking method and device, electronic equipment and storage medium
CN109145150B (en) Target matching method and device, electronic equipment and storage medium
CN111104920B (en) Video processing method and device, electronic equipment and storage medium
CN111680646B (en) Action detection method and device, electronic equipment and storage medium
CN113052874B (en) Target tracking method and device, electronic equipment and storage medium
CN114581525A (en) Attitude determination method and apparatus, electronic device, and storage medium
CN116740158B (en) Image depth determining method, device and storage medium
CN113496237B (en) Domain adaptive neural network training and traffic environment image processing method and device
CN113610056B (en) Obstacle detection method, obstacle detection device, electronic equipment and storage medium
CN111178115B (en) Training method and system for object recognition network
CN106323316A (en) Device and method for achieving navigation prompts
CN111832338A (en) Object detection method and device, electronic equipment and storage medium
WO2023155350A1 (en) Crowd positioning method and apparatus, electronic device, and storage medium
CN115825979A (en) Environment sensing method and device, electronic equipment, storage medium and vehicle
CN115223143A (en) Image processing method, apparatus, device, and medium for automatically driving vehicle
CN114863392A (en) Lane line detection method, lane line detection device, vehicle, and storage medium
CN113724300A (en) Image registration method and device, electronic equipment and storage medium
CN115510336A (en) Information processing method, information processing device, electronic equipment and storage medium
CN113065392A (en) Robot tracking method and device
CN115082473B (en) Dirt detection method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant