CN115713745A - Obstacle detection method, electronic device, and storage medium
- Publication number: CN115713745A
- Application number: CN202110961863.5A
- Authority: CN (China)
- Prior art keywords: obstacle, vehicle, target image, image, dangerous area
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Traffic Control Systems (AREA)
Abstract
The embodiments of the present application provide an obstacle detection method, an electronic device, and a storage medium. The obstacle detection method includes: acquiring a target image, where the target image is an image outside a vehicle; in response to acquiring the target image, identifying whether an obstacle exists in the target image; and in response to identifying that an obstacle exists in the target image, obtaining position information of the obstacle in the target image and identifying, based on the position information, whether the obstacle is located in a dangerous area. In the embodiments of the present application, whether an obstacle exists outside the vehicle can be identified from an image of the vehicle's exterior, and it can further be identified whether the obstacle is located in a dangerous area. This makes it possible to determine whether a potential safety problem currently exists and to handle it accordingly, avoiding accidents and improving driving safety.
Description
Technical Field
The present disclosure relates to the field of vehicle control technologies, and in particular, to an obstacle detection method, an electronic device, and a storage medium.
Background
With the continuous improvement of living standards, automobiles have become increasingly popular and are now an important means of travel. While automobiles make travel convenient, they also bring potential safety problems. For example, obstacles may be present outside an automobile while it is being driven, and if an obstacle is located in a dangerous area, a traffic accident such as a collision may result.
Therefore, how to detect an obstacle outside the vehicle and identify whether it is located in a dangerous area is a technical problem that urgently needs to be solved.
Disclosure of Invention
In view of the above problems, embodiments of the present application provide an obstacle detection method, an electronic device, and a storage medium, which detect an obstacle outside a vehicle and identify whether the obstacle is located in a dangerous area, thereby helping a user notice road risks in time and ensuring driving safety.
According to an aspect of the embodiments of the present application, there is provided an obstacle detection method including: acquiring a target image, where the target image is an image outside a vehicle; in response to acquiring the target image, identifying whether an obstacle exists in the target image; and in response to identifying that an obstacle exists in the target image, obtaining position information of the obstacle in the target image and identifying, based on the position information, whether the obstacle is located in a dangerous area.
According to another aspect of embodiments of the present application, there is provided an electronic device including: one or more processors; and one or more computer-readable storage media having instructions stored thereon; the instructions, when executed by the one or more processors, cause the processors to perform the obstacle detection method of any one of the above.
According to a further aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform the obstacle detection method according to any one of the above.
In the embodiments of the present application, a target image is acquired, where the target image is an image outside the vehicle; in response to acquiring the target image, whether an obstacle exists in the target image is identified; and in response to identifying that an obstacle exists, position information of the obstacle in the target image is obtained and whether the obstacle is located in a dangerous area is identified based on that position information. Thus, whether an obstacle exists outside the vehicle can be identified from an image of the vehicle's exterior, and it can further be identified whether the obstacle is located in a dangerous area, so that a current potential safety problem can be detected and handled accordingly, avoiding accidents.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below are only some drawings of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of device interaction according to an embodiment of the present application.
Fig. 2 is a schematic diagram of another device interaction according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating steps of a method for detecting an obstacle according to an embodiment of the present application.
Fig. 4 is a flowchart illustrating steps of another obstacle detection method according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a comparison of obstacles according to an embodiment of the present application.
Fig. 6 is a schematic diagram of another obstacle comparison according to an embodiment of the present application.
Fig. 7 is a schematic diagram of a vehicle left side image and a vehicle right side image according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments derived by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present application.
In an alternative embodiment, the vehicle may be provided with a video capture device and a vehicle terminal. Referring to fig. 1, a schematic diagram of device interaction according to an embodiment of the present application is shown. As shown in fig. 1, the video capture device and the vehicle terminal can communicate bidirectionally. The video capture device may be any of various cameras; one or more video capture devices may be installed outside and/or inside the vehicle to capture video outside the vehicle in real time or periodically and transmit the captured video to the vehicle terminal. The vehicle terminal is an in-vehicle intelligent terminal installed on the vehicle, which can receive the exterior video transmitted by the video capture device and execute the obstacle detection method of the embodiments of the present application.
In another alternative embodiment, the vehicle may be provided with a video capture device and a vehicle terminal, and the vehicle terminal may further be paired with a cloud server. Referring to fig. 2, a schematic diagram of another device interaction according to an embodiment of the present application is shown. As shown in fig. 2, bidirectional communication is possible between the video capture device and the vehicle terminal, and between the vehicle terminal and the cloud server. One or more video capture devices may be installed outside and/or inside the vehicle to capture video outside the vehicle in real time or periodically and transmit it to the vehicle terminal. The vehicle terminal is an in-vehicle intelligent terminal installed on the vehicle, which can receive the exterior video transmitted by the video capture device and forward it to the cloud server. The cloud server can receive the video transmitted by the vehicle terminal and execute the obstacle detection method of the embodiments of the present application.
Next, the obstacle detection method will be described in detail by the following embodiments.
Referring to fig. 3, a flowchart illustrating the steps of an obstacle detection method according to an embodiment of the present application is shown. The obstacle detection method shown in fig. 3 may be executed by the vehicle terminal or the cloud server.
As shown in fig. 3, the obstacle detection method may include the steps of:
First, a target image is acquired; the target image is an image outside the vehicle.
The target image may be a frame extracted from the received video of the vehicle's exterior. Images may be extracted from the video as target images at a preset fixed time interval, or at a variable time interval related to the vehicle's driving speed; this embodiment does not limit the choice.
After a frame of target image is acquired, image recognition is performed on it to identify whether an obstacle exists in the image. The obstacles may include, but are not limited to: various vehicles, pedestrians, roadblocks, and other objects.
If an obstacle is identified in the target image, position information of the obstacle in the target image is obtained, and whether the obstacle is located in a dangerous area is identified based on that position information. A dangerous area is an area in which an obstacle poses a potential safety problem to the vehicle and requires the driver's special attention.
If no obstacle is identified in the target image, recognition continues with subsequently acquired target images.
In this embodiment, whether an obstacle exists outside the vehicle can be identified from an image of the vehicle's exterior, and it can further be identified whether the obstacle is located in a dangerous area, so that a current potential safety problem can be detected and handled accordingly, avoiding accidents.
Referring to fig. 4, a flowchart of the steps of another obstacle detection method according to an embodiment of the present application is shown. The obstacle detection method shown in fig. 4 may be executed by the vehicle terminal or the cloud server.
As shown in fig. 4, the obstacle detection method may include the steps of:
Step 401: acquire a target image. In this embodiment, the driving speed of the vehicle can be obtained in real time or periodically, and different extraction time intervals are selected according to the driving speed.
Therefore, the process of acquiring the target image may include the following steps A1 to A3:
Step A1: obtain the video outside the vehicle and the driving speed of the vehicle.
In implementation, a speed sensor may be installed on the vehicle and communicate bidirectionally with the vehicle terminal. The speed sensor collects the driving speed of the vehicle and transmits it to the vehicle terminal. If obstacle detection is performed by the cloud server, the vehicle terminal further transmits the driving speed to the cloud server.
Optionally, the speed sensor in this embodiment may include, but is not limited to: photoelectric, magnetoelectric, and Hall vehicle speed sensors, wheel speed sensors, engine speed sensors, and the like.
Step A2: determine an extraction time interval based on the driving speed.
The higher the driving speed of the vehicle, the greater the possibility of danger, so target images should be extracted more frequently to raise the obstacle detection frequency and determine more promptly whether an obstacle is in a dangerous area. Conversely, when the vehicle is driving slowly the possibility of danger is lower, so target images can be extracted less frequently, reducing the detection frequency and saving resources. This embodiment therefore sets the extraction time interval to be inversely related to the driving speed: the faster the vehicle travels, the shorter the extraction time interval; the slower it travels, the longer the interval.
Optionally, a correspondence between the driving speed of the vehicle and the extraction time interval may be set in advance, and the extraction time interval corresponding to the acquired driving speed may then be looked up from this correspondence.
Optionally, in the correspondence, the driving speed may be expressed as speed intervals, i.e., each speed interval corresponds to one extraction time interval. After the driving speed of the vehicle is obtained, the correspondence is queried to find the speed interval containing the driving speed, and the extraction time interval of that speed interval is used as the extraction time interval for the vehicle's current speed.
Any suitable values may be set for the correspondence according to practical experience; this embodiment does not limit them. For example, when the driving speed of the vehicle is 90 km/h-120 km/h (kilometers per hour), the corresponding extraction time interval is 1 second; at 60 km/h-89 km/h, it is 1.5 seconds; at 30 km/h-59 km/h, it is 2 seconds; and so on.
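As a concrete illustration, the lookup just described can be written as a small function. The following is a minimal sketch in Python, reusing the example speed bands above; the function name and the fallback behavior for speeds outside the listed bands are assumptions made for illustration, not part of the embodiment.

```python
def extraction_interval(speed_kmh: float) -> float:
    """Return the frame-extraction interval (seconds) for a driving speed."""
    # (lower bound km/h, upper bound km/h, interval in seconds) -- the
    # example values from this section.
    bands = [
        (90.0, 120.0, 1.0),
        (60.0, 89.0, 1.5),
        (30.0, 59.0, 2.0),
    ]
    for low, high, interval in bands:
        if low <= speed_kmh <= high:
            return interval
    # Behavior outside the configured bands is not specified by the
    # embodiment; clamping to the nearest band is an assumption.
    return 2.0 if speed_kmh < 30.0 else 1.0
```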
Step A3: extract images from the video as target images at the extraction time interval.
After the video outside the vehicle is acquired, images are extracted from it as target images at the extraction time interval determined from the driving speed of the vehicle.
For example, when the driving speed of the vehicle is 100 km/h, the corresponding extraction time interval is 1 second; one frame is therefore extracted from the exterior video every second and used as the target image.
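As a sketch of how frame extraction could be implemented, the snippet below samples one frame per extraction interval from a video file using OpenCV. The use of OpenCV and the function names are assumptions for illustration; the embodiment does not mandate a particular library.

```python
import cv2  # OpenCV; an assumed implementation choice

def extract_target_images(video_path: str, speed_kmh: float):
    """Sample one frame per extraction interval from the exterior video."""
    cap = cv2.VideoCapture(video_path)
    interval_s = extraction_interval(speed_kmh)  # helper sketched above
    t_ms, frames = 0.0, []
    while True:
        cap.set(cv2.CAP_PROP_POS_MSEC, t_ms)  # seek to the next sample time
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        frames.append(frame)  # each sampled frame is one target image
        t_ms += interval_s * 1000.0
    cap.release()
    return frames
```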
Step 402: perform target detection on the acquired target image using image recognition, to detect whether an obstacle exists in the target image and, if so, to obtain the position information of the obstacle in the target image.
Optionally, target detection on the target image may use, but is not limited to: HOG (Histogram of Oriented Gradients) features with an SVM (Support Vector Machine) classifier, DPM (Deformable Part Model), R-CNN (Region-based Convolutional Neural Network), SPPNet (Spatial Pyramid Pooling Network), Faster R-CNN, and the like.
For example, with the HOG + SVM method, the HOG feature vector of a frame of target image is first extracted and then fed into an SVM for classification, which identifies which image regions belong to the obstacle category and which do not, thereby detecting whether an obstacle exists in the frame and obtaining the position information of the obstacle in the target image.
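For the pedestrian class of obstacles, OpenCV ships a HOG descriptor with a pretrained pedestrian SVM that matches this pipeline, so a hedged sketch can be written without training anything; other obstacle classes would need their own trained SVMs, and this snippet is illustrative only, not the embodiment's detector.

```python
import cv2

# HOG descriptor with OpenCV's pretrained pedestrian SVM -- illustrative
# of the HOG + SVM pipeline for one obstacle class only.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(target_image):
    """Return (x, y, w, h) boxes: position information of detected obstacles."""
    boxes, _weights = hog.detectMultiScale(
        target_image, winStride=(8, 8), scale=1.05)
    return list(boxes)
```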
As another example, with the R-CNN method, a selective search first generates a number of candidate boxes (roughly 1000-2000) in the target image; the image patch in each candidate box is scaled to the same size and fed into a CNN (Convolutional Neural Network) for feature extraction; a classifier then judges whether the features extracted from each candidate box belong to the obstacle category; finally, a regressor refines the positions of the candidate boxes classified as obstacles, thereby detecting whether an obstacle exists in the frame and obtaining its position information in the target image.
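A modern variant of the same detect-and-localize step can be sketched with torchvision's pretrained Faster R-CNN, one of the detector families listed above. The model choice, weights, and score threshold here are assumptions for illustration, not prescribed by the embodiment.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_obstacles(image_rgb, score_threshold=0.5):
    """image_rgb: H x W x 3 uint8 array. Returns boxes and class labels."""
    with torch.no_grad():
        out = model([to_tensor(image_rgb)])[0]
    keep = out["scores"] >= score_threshold
    # Boxes are (x1, y1, x2, y2) pixel coordinates -- the obstacle
    # position information consumed by the later steps.
    return out["boxes"][keep], out["labels"][keep]
```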
The number of obstacles present in the target image may be one or more.
Step 403: if an obstacle is identified in the target image, obtain the position information of the obstacle detected during the target detection of step 402, and identify whether the obstacle is located in a dangerous area based on that position information; this is described in detail below. If no obstacle is identified, recognition continues with subsequently acquired target images.
Step 404: trigger danger alert information in response to identifying that the obstacle is located in a dangerous area.
If the obstacle is identified as being located in the dangerous area, danger alert information may be triggered to remind the user (e.g., the driver) of the current dangerous condition that requires attention. If the obstacle is identified as being located in a non-dangerous area, recognition continues with subsequently acquired target images.
Optionally, the danger alert may take forms including, but not limited to: a voice alert (e.g., playing an alert voice), a text alert (e.g., a pop-up window), an audio-visual alert, and the like.
Note that if the vehicle terminal executes the obstacle detection method of this embodiment, the vehicle terminal itself triggers the danger alert; if the cloud server executes the method, the cloud server triggers the danger alert and sends it to the vehicle terminal, which then presents the corresponding alert to the user.
In one application scenario, while a user (i.e., a driver) is driving, a traffic accident can easily occur if the user's vehicle is too close to the vehicle ahead and/or behind. In such cases, the method of this embodiment can detect obstacles to identify whether the front and/or rear vehicle is located in a dangerous area (i.e., whether it is too close), and, if so, trigger the danger alert to remind the user to keep a safe distance.
For this scenario, in an alternative embodiment, the video capture device may be mounted at the front and/or rear of the vehicle; accordingly, the target image may include an image in front of the vehicle and/or an image behind the vehicle.
The process of identifying whether the obstacle is located in a dangerous area based on the position information of the obstacle in the target image may include the following steps B1 to B3:
Step B1: calculate, based on the position information, a first ratio of the obstacle's width to the width of the target image and a second ratio of the obstacle's height to the height of the target image.
Generally, the farther an obstacle is from the lens, the smaller the proportion of the target image its width and/or height occupies; the closer it is, the larger the proportion. For example, in target images captured with the obstacle at 5, 10, 15, 20, 30, and 40 meters from the lens, the width/height proportions differ, and an obstacle captured at 5 meters occupies a larger proportion than one captured at 10 meters. The proportion of the obstacle's width and/or height in the target image therefore reflects the distance between the obstacle and the vehicle, and can be used to identify whether the obstacle is located in a dangerous area.
The position information of the obstacle in the target image obtained through image recognition may include, but is not limited to: the coordinates of each pixel corresponding to the obstacle, with the lower-left corner of the target image as the origin.
The width of the obstacle can be calculated from the leftmost pixel (the obstacle pixel with the smallest abscissa) and the rightmost pixel (the obstacle pixel with the largest abscissa): specifically, the difference between the abscissa of the rightmost pixel and the abscissa of the leftmost pixel is taken as the width of the obstacle. Likewise, the height of the obstacle can be calculated from the lowermost pixel (the obstacle pixel with the smallest ordinate) and the uppermost pixel (the obstacle pixel with the largest ordinate): the difference between the ordinate of the uppermost pixel and the ordinate of the lowermost pixel is taken as the height of the obstacle.
The width of the target image is then obtained, and the first ratio is computed as the obstacle's width divided by the target image's width; similarly, the height of the target image is obtained, and the second ratio is computed as the obstacle's height divided by the target image's height.
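A minimal sketch of step B1 follows, assuming the position information is a list of (x, y) pixel coordinates with the origin at the lower-left corner, as described above; the function name is an illustrative assumption.

```python
def occupation_ratios(obstacle_pixels, image_width, image_height):
    """Compute the first (width) and second (height) occupation ratios."""
    xs = [x for x, _y in obstacle_pixels]
    ys = [y for _x, y in obstacle_pixels]
    obstacle_width = max(xs) - min(xs)   # rightmost minus leftmost abscissa
    obstacle_height = max(ys) - min(ys)  # uppermost minus lowermost ordinate
    first_ratio = obstacle_width / image_width
    second_ratio = obstacle_height / image_height
    return first_ratio, second_ratio
```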
Step B2: identify the type of the obstacle, and obtain the preset dangerous area ratio range corresponding to that type.
At the same distance from the lens, obstacles of different types occupy different width and/or height proportions of the target image: the larger the obstacle, the larger the proportion; the smaller the obstacle, the smaller the proportion. A dangerous area ratio range can therefore be set in advance for each obstacle type.
Optionally, the obstacle types may include, but are not limited to: small obstacles (such as cars and electric bicycles), medium obstacles (such as light trucks and minibuses), large obstacles (such as heavy trucks and buses), and so on.
In implementation, the distance range between obstacle and vehicle that corresponds to the dangerous area may be set in advance according to practical experience, e.g., 5 meters to 25 meters. Then, for each obstacle type, images captured within that distance range are collected, and the range of width and height ratios that obstacles of the type occupy in those images is taken as the dangerous area ratio range for that type.
After an obstacle is identified in the target image, its type is identified, and the dangerous area ratio range corresponding to that type is retrieved from the preset per-type ranges.
Optionally, an aspect ratio range is set in advance for each obstacle type according to practical experience. Identifying the type of the obstacle may then include: matching the obstacle's aspect ratio against the preset aspect ratio range of each obstacle type; when the aspect ratio falls within the range of a certain type, the match succeeds, and that type is taken as the type of the obstacle. Any suitable values may be set for the aspect ratio ranges according to practical experience or experiments; this embodiment does not limit them.
Step B3: determine that the obstacle is located in the dangerous area in response to judging that the first ratio and/or the second ratio falls within the dangerous area ratio range.
That is, it is judged whether the first ratio (the obstacle's width over the target image's width) and/or the second ratio (the obstacle's height over the target image's height) falls within the dangerous area ratio range corresponding to the obstacle's type; if so, the obstacle is determined to be located in the dangerous area.
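Steps B2 and B3 can be sketched together as below. The aspect-ratio ranges and dangerous area ratio ranges are illustrative placeholders, not values from the embodiment, which leaves them to practical experience.

```python
OBSTACLE_TYPES = {
    # type: (aspect-ratio range (w/h), dangerous area ratio range)
    # All numbers are placeholders for experimentally determined values.
    "small":  ((1.2, 2.0), (0.15, 0.60)),
    "medium": ((1.0, 1.2), (0.20, 0.70)),
    "large":  ((0.6, 1.0), (0.25, 0.80)),
}

def in_dangerous_area(first_ratio, second_ratio, width, height):
    aspect = width / height
    for _name, ((lo, hi), (d_lo, d_hi)) in OBSTACLE_TYPES.items():
        if lo <= aspect <= hi:  # step B2: match the obstacle type
            # step B3: either ratio inside the range means danger
            return (d_lo <= first_ratio <= d_hi
                    or d_lo <= second_ratio <= d_hi)
    return False  # no type matched; treated as non-dangerous (an assumption)
```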
Fig. 5 is a schematic comparison of obstacles according to an embodiment of the present application. As shown in fig. 5, the width and height ratios of the obstacle in image a are smaller than those of the obstacle in image b, indicating that the obstacle in image a is farther from the user's vehicle than the obstacle in image b. The obstacle in image a is judged to be in a non-dangerous area, and the obstacle in image b in a dangerous area.
In another application scenario, when the user drives up to an intersection, a vehicle ahead that is too tall can block the user's line of sight, so that the user cannot see the traffic conditions or the traffic lights at the intersection, which easily leads to accidents or traffic violations. In such cases, the method of this embodiment can detect obstacles to identify whether the vehicle ahead is located in a dangerous area (i.e., whether it blocks the user's sight), and, if so, trigger the danger alert to remind the user that the view is blocked and that route safety and the traffic signals at the intersection require attention.
For such a scenario, in an alternative embodiment, the video capture device may be mounted in front of the vehicle, and accordingly, the target image may comprise an image in front of the vehicle.
The process of identifying whether the obstacle is located in a dangerous area based on the position information of the obstacle in the target image may include the following steps C1 to C3:
Step C1: determine whether the vehicle is currently located at an intersection.
In one alternative implementation, the vehicle terminal obtains the vehicle's navigation data in real time or periodically while the user is driving; the navigation data may include intersection information and the like. If the vehicle terminal executes the obstacle detection method, it judges from the navigation data whether the vehicle is currently at an intersection; if the cloud server executes the method, the vehicle terminal sends the navigation data to the cloud server, which makes the judgment.
In another alternative implementation, the vehicle terminal obtains the vehicle's positioning data and preset road network data in real time or periodically; the road network data may include intersection information and the like. Again, either the vehicle terminal judges from the positioning and road network data whether the vehicle is currently at an intersection, or it sends the data to the cloud server, which makes the judgment.
Step C2: in response to judging that the vehicle is currently located at an intersection, calculate the distance between the top of the obstacle and the top of the target image based on the position information.
In general, the taller the obstacle, the smaller the distance between the top of the obstacle and the top of the target image; the shorter the obstacle, the greater the distance. This distance therefore reflects the height of the obstacle, and thus whether the obstacle may block the view of the user in the vehicle behind it, so it can be used to identify whether the obstacle is located in a dangerous area.
The position information may again include the coordinates of each obstacle pixel, with the lower-left corner of the target image as the origin. The distance between the top of the obstacle and the top of the target image can be calculated as the difference between the ordinate of the pixels at the top of the target image and the ordinate of the uppermost obstacle pixel (the obstacle pixel with the largest ordinate).
Step C3: determine that the obstacle is located in the dangerous area in response to judging that the distance is smaller than a preset threshold.
That is, whether the distance between the top of the obstacle and the top of the target image is smaller than the preset threshold is judged; if so, the obstacle is determined to be located in the dangerous area.
The preset threshold can be obtained through extensive experiments under actual conditions: find the maximum distance between the top of an obstacle and the top of the captured target image at which the user's field of view is still blocked, and use that maximum as the threshold. This embodiment does not limit its specific value.
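A minimal sketch of steps C2 and C3, under the same lower-left-origin coordinate convention, follows; the threshold value below is a placeholder for the experimentally determined one.

```python
PRESET_THRESHOLD = 50  # pixels; placeholder, to be found experimentally

def blocks_view(obstacle_pixels, image_height, threshold=PRESET_THRESHOLD):
    """True if the obstacle is tall enough to count as view-blocking."""
    top_of_obstacle = max(y for _x, y in obstacle_pixels)  # largest ordinate
    distance_to_top = image_height - top_of_obstacle
    return distance_to_top < threshold  # smaller distance => taller obstacle
```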
Fig. 6 is another schematic comparison of obstacles according to an embodiment of the present application. As shown in fig. 6, the distance between the top of the obstacle and the top of the image is larger in image c than in image d, indicating that the obstacle in image c is less likely to block the user's view than the obstacle in image d. The obstacle in image c is judged to be in a non-dangerous area (the view is clear), and the obstacle in image d in a dangerous area (the view is blocked).
In another application scenario, a traffic accident can easily occur if the user's vehicle is too close to an obstacle on its left and/or right. For example, when the user makes a right turn, another vehicle or a pedestrian close to the right side of the vehicle may be hit during the turn; similarly, when the user opens a door, another vehicle or a pedestrian close to that side may be hit by the opening door. In such cases, the method of this embodiment can detect obstacles to identify whether a left and/or right obstacle is located in a dangerous area (i.e., too close to the vehicle), and, if so, trigger the danger alert to remind the user to be careful.
For such a scenario, in an alternative embodiment, the video capture device may be mounted on the left side of the vehicle (e.g., at the left side rearview mirror of the vehicle, the left side of the rear of the vehicle, etc.) and/or the right side of the vehicle (e.g., at the right side rearview mirror of the vehicle, the right side of the rear of the vehicle, etc.), and accordingly, the target image may include an image of the left side of the vehicle and/or an image of the right side of the vehicle.
The process of identifying whether the obstacle is located in a dangerous area based on the position information of the obstacle in the target image may include the following steps D1 to D2:
Step D1: obtain the type of the vehicle and the preset dangerous area range corresponding to that type.
The type of the vehicle may be its vehicle model, which the vehicle terminal can store in advance.
Different vehicle types may correspond to different dangerous area ranges, so extensive experiments (e.g., with different mounting angles and distances of the video capture device) can be carried out to determine the range for each type. The dangerous area range may be a partial region of the image captured by the video capture device and may be specified by the coordinates of the pixels at its edge. This embodiment does not limit its specific values.
Step D2: determine that the obstacle is located in the dangerous area in response to judging that the position information falls within the dangerous area range.
The position information may again include the coordinates of each obstacle pixel, with the lower-left corner of the target image as the origin. From these pixel coordinates it can be judged whether the obstacle's position falls within the dangerous area range corresponding to the vehicle's type; if so, the obstacle is determined to be located in the dangerous area.
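Steps D1 and D2 can be sketched as below. Modeling the dangerous area range as an axis-aligned pixel rectangle per vehicle type is an assumption for illustration; the embodiment only requires the coordinates of the pixels at the area's edge, and all values here are placeholders.

```python
DANGEROUS_AREA_RANGES = {
    # vehicle type: (x_min, y_min, x_max, y_max) in image pixels.
    # Placeholder values; real ranges come from per-model experiments.
    "sedan": (0, 0, 400, 300),
    "suv":   (0, 0, 450, 320),
}

def obstacle_in_dangerous_area(vehicle_type, obstacle_pixels):
    x_min, y_min, x_max, y_max = DANGEROUS_AREA_RANGES[vehicle_type]
    # The obstacle is in the dangerous area if any of its pixels fall
    # inside the range configured for this vehicle type.
    return any(x_min <= x <= x_max and y_min <= y <= y_max
               for x, y in obstacle_pixels)
```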
Fig. 7 is a schematic diagram of a vehicle left-side image and a vehicle right-side image according to an embodiment of the present application. As shown in fig. 7, image e is a left-side image partitioned into a dangerous area and a non-dangerous area, with the obstacle's position information falling within the dangerous area; image f is a right-side image partitioned the same way, with the obstacle likewise within the dangerous area.
In the embodiments of the present application, different recognition approaches can be used in different application scenarios to identify whether an obstacle is located in a dangerous area, so that the user can be alerted appropriately in each scenario, improving driving safety.
An embodiment of the present application also provides an electronic device. The electronic device may include one or more processors and one or more computer-readable storage media storing instructions, such as an application program. The instructions, when executed by the one or more processors, cause the processors to perform the obstacle detection method of any of the embodiments described above.
Fig. 8 shows a schematic structural diagram of an electronic device 800 according to an embodiment of the present application. As shown in fig. 8, the electronic device 800 includes a Central Processing Unit (CPU) 801 that can perform various appropriate actions and processing according to computer program instructions stored in a Read-Only Memory (ROM) 802 or loaded from a storage unit 808 into a Random Access Memory (RAM) 803. The RAM 803 can also store the programs and data required for the operation of the electronic device 800. The CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804, to which an Input/Output (I/O) interface 805 is also connected.
Several components of the electronic device 800 are connected to the I/O interface 805: an input unit 806, such as a keyboard, mouse, or microphone; an output unit 807, such as various displays and speakers; a storage unit 808, such as a magnetic or optical disk; and a communication unit 809, such as a network card, modem, or wireless communication transceiver. The communication unit 809 allows the electronic device 800 to exchange information/data with other devices over computer networks such as the Internet and/or various telecommunication networks.
The processes described above may be performed by the processing unit 801. For example, the obstacle detection method of any of the above embodiments may be implemented as a computer software program tangibly embodied in a computer-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the CPU 801, one or more steps of the obstacle detection method described above may be performed.
An embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor of an electronic device, the program causes the processor to perform the obstacle detection method of any of the above embodiments.
The above-mentioned processors may include, but are not limited to: a CPU, a Network Processor (NP), a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, and so on.
The above-mentioned computer-readable storage media may include, but are not limited to: ROM, RAM, Compact Disc Read-Only Memory (CD-ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), hard disks, floppy disks, flash memory, and the like.
The embodiments in this specification are described progressively: each embodiment focuses on its differences from the others, and for the parts they share, the embodiments may be referred to one another.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to it. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional like elements in the process, method, article, or terminal device that comprises the element.
From the description of the foregoing embodiments, it is clear to those skilled in the art that the methods of the embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though the former is often the better implementation. On this understanding, the technical solutions of the present application may be embodied as a software product stored in a storage medium (such as ROM, RAM, a magnetic disk, or an optical disk) and including instructions for enabling a terminal (such as a mobile phone, computer, server, air conditioner, or network device) to execute the methods of the embodiments of the present application.
While the embodiments have been described with reference to the accompanying drawings, they are illustrative rather than restrictive, and various changes and modifications may be made by those skilled in the art without departing from the scope of the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or as combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality differently for each particular application, but such implementation decisions should not be interpreted as departing from the scope of the present application.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited to them; any changes or substitutions that a person skilled in the art can readily conceive within the technical scope disclosed herein shall be covered by the protection scope of the present application. The description should therefore not be taken as limiting the application.
Claims (9)
1. An obstacle detection method, characterized in that the method comprises:
acquiring a target image, wherein the target image is an image outside a vehicle;
in response to the target image being acquired, identifying whether an obstacle exists in the target image;
in response to identifying that the obstacle exists in the target image, acquiring position information of the obstacle in the target image, and identifying whether the obstacle is located in a dangerous area based on the position information.
2. The method of claim 1, wherein the acquiring a target image comprises:
acquiring a video outside the vehicle and the running speed of the vehicle;
determining an extraction time interval based on the travel speed, wherein the extraction time interval is inversely related to the travel speed;
and extracting images from the video as the target images according to the extraction time interval.
3. The method of claim 1, wherein the target image is an image in front of the vehicle and/or an image behind the vehicle, and the identifying whether the obstacle is located in a dangerous area based on the position information comprises:
calculating a first ratio of a width of the obstacle in a width of the target image and a second ratio of a height of the obstacle in a height of the target image based on the position information;
identifying a type of the obstacle, and acquiring a preset dangerous area ratio range corresponding to the type of the obstacle;
determining that the obstacle is located in a dangerous area in response to determining that the first ratio and/or the second ratio is within the dangerous area ratio range.
4. The method of claim 3, wherein the identifying the type of the obstacle comprises:
matching an aspect ratio of the obstacle against preset aspect ratio ranges corresponding to respective obstacle types; and
in response to a successful match, taking the successfully matched type as the type of the obstacle.
5. The method of claim 1, wherein the target image is an image in front of the vehicle, and the identifying whether the obstacle is located in a dangerous area based on the position information comprises:
judging whether the vehicle is currently positioned at the intersection or not;
in response to determining that the vehicle is currently located at an intersection, calculating a distance between a top of the obstacle and a top of the target image based on the location information;
determining that the obstacle is located in a dangerous area in response to determining that the distance is smaller than a preset threshold.
6. The method of claim 1, wherein the target image is an image of the left side of the vehicle and/or an image of the right side of the vehicle, and the identifying whether the obstacle is located in a dangerous area based on the position information comprises:
acquiring the type of the vehicle and a preset dangerous area range corresponding to the type of the vehicle;
determining that the obstacle is located in a dangerous area in response to determining that the position information is within the dangerous area range.
7. The method of claim 1, further comprising:
triggering danger alert information in response to identifying that the obstacle is located in a dangerous area.
8. An electronic device, comprising:
one or more processors; and
one or more computer-readable storage media having instructions stored thereon;
the instructions, when executed by the one or more processors, cause the processors to perform the obstacle detection method of any of claims 1 to 7.
9. A computer-readable storage medium, characterized in that a computer program is stored thereon which, when executed by a processor, causes the processor to carry out the obstacle detection method according to any one of claims 1 to 7.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202110961863.5A | 2021-08-20 | 2021-08-20 | Obstacle detection method, electronic device, and storage medium
Publications (1)

Publication Number | Publication Date
---|---
CN115713745A | 2023-02-24
Family
ID=85230204
Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202110961863.5A (pending) | Obstacle detection method, electronic device, and storage medium | 2021-08-20 | 2021-08-20
Country Status (1)

Country | Link
---|---
CN | CN115713745A
- 2021-08-20: Application CN202110961863.5A filed in China; published as CN115713745A, status pending.
Cited By (5)

Publication number | Priority date | Publication date | Title
---|---|---|---
CN117048596A | 2023-08-04 | 2023-11-14 | Method, device, vehicle and storage medium for avoiding obstacle
CN117048596B | 2023-08-04 | 2024-05-10 | Method, device, vehicle and storage medium for avoiding obstacle
CN117274952A | 2023-09-26 | 2023-12-22 | Parking space detection method and device, computer equipment and storage medium
CN117274952B | 2023-09-26 | 2024-05-28 | Parking space detection method and device, computer equipment and storage medium
CN117994759A | 2024-01-31 | 2024-05-07 | Method and device for detecting position of obstacle
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination